IFMBE Proceedings Series Editor: R. Magjarevic
Volume 29
The International Federation for Medical and Biological Engineering, IFMBE, is a federation of national and transnational organizations representing internationally the interests of medical and biological engineering and sciences. The IFMBE is a non-profit organization fostering the creation, dissemination and application of medical and biological engineering knowledge and the management of technology for improved health and quality of life.

Its activities include participation in the formulation of public policy and the dissemination of information through publications and forums. Within the field of medical, clinical, and biological engineering, IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration. The objectives of the IFMBE are scientific, technological, literary, and educational.

The IFMBE is a WHO-accredited NGO covering the full range of biomedical and clinical engineering, healthcare, healthcare technology and management. Through its 58 member societies, it represents some 120,000 professionals involved in the various issues of improved health and health care delivery.

IFMBE Officers
President: Makoto Kikuchi
Vice-President: Herbert Voigt
Former-President: Joachim H. Nagel
Treasurer: Shankar M. Krishnan
Secretary-General: Ratko Magjarevic
http://www.ifmbe.org
Previous Editions:
IFMBE Proceedings MEDICON 2010, "XII Mediterranean Conference on Medical and Biological Engineering and Computing 2010", Vol. 29, 2010, Chalkidiki, Greece, CD
IFMBE Proceedings BIOMAG2010, "17th International Conference on Biomagnetism Advances in Biomagnetism – Biomag2010", Vol. 28, 2010, Dubrovnik, Croatia, CD
IFMBE Proceedings ICDBME 2010, "The Third International Conference on the Development of Biomedical Engineering in Vietnam", Vol. 27, 2010, Ho Chi Minh City, Vietnam, CD
IFMBE Proceedings MEDITECH 2009, "International Conference on Advancements of Medicine and Health Care through Technology", Vol. 26, 2009, Cluj-Napoca, Romania, CD
IFMBE Proceedings WC 2009, "World Congress on Medical Physics and Biomedical Engineering", Vol. 25, 2009, Munich, Germany, CD
IFMBE Proceedings SBEC 2009, "25th Southern Biomedical Engineering Conference 2009", Vol. 24, 2009, Miami, FL, USA, CD
IFMBE Proceedings ICBME 2008, "13th International Conference on Biomedical Engineering", Vol. 23, 2008, Singapore, CD
IFMBE Proceedings ECIFMBE 2008, "4th European Conference of the International Federation for Medical and Biological Engineering", Vol. 22, 2008, Antwerp, Belgium, CD
IFMBE Proceedings BIOMED 2008, "4th Kuala Lumpur International Conference on Biomedical Engineering", Vol. 21, 2008, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings NBC 2008, "14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics", Vol. 20, 2008, Riga, Latvia, CD
IFMBE Proceedings APCMBE 2008, "7th Asian-Pacific Conference on Medical and Biological Engineering", Vol. 19, 2008, Beijing, China, CD
IFMBE Proceedings CLAIB 2007, "IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solution for Latin America Health", Vol. 18, 2007, Margarita Island, Venezuela, CD
IFMBE Proceedings ICEBI 2007, "13th International Conference on Electrical Bioimpedance and the 8th Conference on Electrical Impedance Tomography", Vol. 17, 2007, Graz, Austria, CD
IFMBE Proceedings MEDICON 2007, "11th Mediterranean Conference on Medical and Biological Engineering and Computing 2007", Vol. 16, 2007, Ljubljana, Slovenia, CD
IFMBE Proceedings BIOMED 2006, "Kuala Lumpur International Conference on Biomedical Engineering", Vol. 15, 2006, Kuala Lumpur, Malaysia, CD
IFMBE Proceedings WC 2006, "World Congress on Medical Physics and Biomedical Engineering", Vol. 14, 2006, Seoul, Korea, DVD
IFMBE Proceedings BSN 2007, "4th International Workshop on Wearable and Implantable Body Sensor Networks", Vol. 13, 2007, Aachen, Germany
IFMBE Proceedings Vol. 29

Panagiotis D. Bamidis • Nicolas Pallikarakis (Eds.)

XII Mediterranean Conference on Medical and Biological Engineering and Computing 2010

May 27 – 30, 2010, Chalkidiki, Greece
Editors
Panagiotis D. Bamidis
Aristotle University of Thessaloniki, Lab of Medical Informatics, Medical School, 541 24 Thessaloniki, Greece
E-mail: [email protected]
Nicolas Pallikarakis
University of Patras, Biomedical Technology Unit, 265 04 Rio Patras, Greece
E-mail: [email protected]
ISSN 1680-0737
ISBN 978-3-642-13038-0
e-ISBN 978-3-642-13039-7
DOI 10.1007/978-3-642-13039-7

Library of Congress Control Number: 2010927933

© International Federation for Medical and Biological Engineering 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The IFMBE Proceedings is an Official Publication of the International Federation for Medical and Biological Engineering (IFMBE).

Typesetting: Scientific Publishing Services Pvt. Ltd., Chennai, India
Cover Design: deblik, Berlin

Printed on acid-free paper

springer.com
Preface
Over the past three decades, the rapidly growing number of new technologies and applications introduced into medical practice, often powered by advances in biosignal processing and biomedical imaging, has created an impressive range of new possibilities for diagnosis and therapy, but has also raised major questions of appropriateness and safety. The accelerated development in this field, together with the promotion of electronic health care solutions, often rests on an uncontrolled diffusion and use of medical technology. The number of medical devices in use has multiplied rapidly, and today more than one million different products are available on the world market. The rising cost of health care, partly resulting from these emerging technological applications, is among the most serious and urgent problems facing many governments today. Equally important, however, are patient safety and user protection, issues that should never be compromised or excluded from the Biomedical Engineering research agenda.

Consequently, the associated field of Biomedical Engineering education, together with its related branches, is undergoing a rapid evolution characterized by an increasing degree of specialization. This imposes new challenges, while the changing scene at the European level dictates the need for harmonization and standardization of education, with a focus on meeting the emerging need for appropriately trained young engineers within this new landscape. MEDICON 2010 is therefore dedicated to all those young Biomedical Engineers who have to face this very demanding environment and who are expected to play a key role in bringing the benefits of this technological evolution right to the patient.

The 12th Mediterranean Conference on Medical and Biological Engineering and Computing (MEDICON 2010) has been supported by a limited number of external sponsors, to whom we would like to express our appreciation. This conference, however, could not have been organized successfully without the valuable contribution of the many volunteers involved. In particular, we would like to express our deep thanks to the invited speakers for agreeing to come to Chalkidiki at their own expense and deliver their excellent presentations. It is also a great pleasure for us to acknowledge the dedicated and hard work of all the members of the International Scientific and Programme Committees, as well as the many volunteering reviewers, who performed as a real team and made the rigorous reviewing process a quality filter for a truly noteworthy proceedings publication. All of them created an enthusiastic and challenging atmosphere with all the necessary ingredients to make MEDICON 2010 an unforgettable event.

We hope you will appreciate this Proceedings volume as much as we are proud of it!
Nicolas Pallikarakis
Panagiotis D. Bamidis
MEDICON 2010 Co-chairs
Committees
Conference Chairs
Nicolas Pallikarakis, University of Patras, Greece
Panagiotis D. Bamidis, Aristotle University of Thessaloniki, Greece
Local Organising Committee Chair
Costas Pappas, Aristotle University of Thessaloniki, Greece
Local Organising Committee
Stavros Panas, Aristotle University of Thessaloniki, Greece
George Sergiadis, Aristotle University of Thessaloniki, Greece
Dimitris Koufogiannis, Aristotle University of Thessaloniki, Greece
Nicholas Dombros, Aristotle University of Thessaloniki, Greece
International Scientific Committee Chairs
Ratko Magjarevic, University of Zagreb, Croatia
Marcello Bracale, University "Federico II" of Naples, Italy
International Scientific Committee
James C.H. Goh, Singapore; Fernando A. Infantosi, Brazil; Akos Jobbagy, Hungary; Makoto Kikuchi, Japan; Shankar M. Krishnan, USA; Luis Kun, USA; Mario Medvedec, Croatia; Alan Murray, United Kingdom; Patrick Pentony, Ireland; Niilo Saranummi, Finland; Jos A.E. Spaan, The Netherlands; Herbert Voigt, USA
Program Committee Chairs
Dimitris Koutsouris, National Technical University of Athens, Greece
Nicos Maglaveras, Aristotle University of Thessaloniki, Greece
Program Committee
Pantelis Angelidis, Greece; Maria Teresa Arredondo, Spain; Theo Arvanitis, UK; Alexandros Astaras, Greece; Joe Barbenel, UK; Catherine Chronaki, Greece; Asuman Dogac, Turkey; Olaf Dossel, Germany; Barry Eaglestone, UK; David Elad, Israel; Simon G. Fabri, Malta; Dimitris Fotiadis, Greece; Frederique Frouin, France; Demetrius Georgiou, Greece; Leontios Hadjileontiadis, Greece; Maria Haritou, Greece; Jens Haueisen, Germany; Jiri Holcik, Czech Republic; Andreas Ioannides, Cyprus; Eleni Kaldoudi, Greece; Panagiotis Ketikidis, Greece; Stathis Konstantinidis, Greece; Vassilis Koutkias, Greece; Periklis Ktonas, Greece; Efthyvoulos Kyriakou, Cyprus; Igor Lackovic, Croatia; Olof Lindahl, Sweden; Dimitris Lymperopoulos, Greece; Ilias Maglogiannis, Greece; Nadia Magnenat-Thalmann, Switzerland; Fillia Makedon, USA; Andigoni Malousi, Greece; Vicky Manthou, Greece; Cristina Mazzoleni, Italy; Damijan Miklavcic, Slovenia; Joe Mizrahi, Israel; Zhivko Bliznakov, Greece; Kristina Bliznakova, Greece; Ivan Buliev, Bulgaria; Babis Bratsas, Greece; Ioanna Chouvarda, Greece; Per Moeller, Denmark; Robert Allen, UK; Roger Moore, UK; Konstantina Nikita, Greece; Marc Nyssen, Belgium; Christos Papadelis, Italy; Iraklis Paraskakis, Greece; Constantinos Pattichis, Cyprus; Sotiris Pavlopoulos, Greece; Thomas Penzel, Germany; Ioannis Pitas, Greece; Andriana Prentza, Greece; Gunter Rau, Germany; Georgios Sakas, Germany; Laura Roa, Spain; Abdul Roudsari, UK; Göran Salerud, Sweden; Theodoros Samaras, Greece; Mario Sansone, Italy; Andres Santos, Spain; Christos Schizas, Cyprus; Mario Forjaz Secca, Portugal; Maria Siebes, The Netherlands; Stella Spyrou, Greece; Rita Stagni, Italy; Gregory-Telemachos Stamkopoulos, Greece; Selma Supek, Croatia; Panayiotis Tsanakas, Greece; Jan Wojcicki, Poland; Michalis Zervakis, Greece

Conference Tracks and Chairs
1. Medical Devices & Instrumentation. Chairs: Alexandros Astaras, Andreas Lymberis
2. Education. Chairs: Eleni Kaldoudi, Daniela Giordano, Stathis Konstantinidis
3. Biomedical Imaging. Chairs: Kristina Bliznakova, Ilias Maglogiannis
4. Biosignal Processing. Chairs: Leontios Hadjileontiadis, Christos Papadelis
5. Clinical Engineering and Safety. Chairs: Yadin David, Saide Jorge Calil, Zhivko Bliznakov
6. E-health. Chairs: Constantinos Pattichis, Efthyvoulos Kyriakou
7. Workshop on BME and MP Education: Current Trends in Europe. Chairs: Slavik Tabakov, Nicolas Pallikarakis
List of Reviewers
Mansour Ahmadian Robert Allen Christos-Nikolaos Anagnostopoulos Pantelis Angelidis Antonis Antoniadis Theo Arvanitis Sara Assecondi Alexander Astaras Nizamettin Aydin Branko Babusiak Joe Barbenel Ofer Barnea Katarzyna Blinowska Zhivko Bliznakov Kristina Bliznakova Francesca Bovolo Marcello Bracale Charalampos Bratsas Christoph Braun Maide Bucolo Ivan Buliev Enrico Caiani Giovanni Calcagnini Martin Cerny Aristotelis Chatziioannou Giannis Chatzizisis
Michela Chiappalone Ki H. Chon Ioanna Chouvarda Christodoulos Christodoulou Catherine Chronaki Radu Ciupa Eleni Costaridou Hariton Costin Paul Cristea Eleni Dafli Andriani Daskalaki Kostas Delibasis Gianpaolo Demarchi Aris Dermitzakis Fabrizio De-Vico-Fallani Asuman Dogac Zlatica Dolna Olaf Dossel Charalampos Doukas Barry Eaglestone David Elad George Eleftherakis Miroslawa El-Fray Silvia Erla Simon Fabri Luca Faes
Silvia Fantozzi Jocelyne Fayn Mario Forjaz-Secca Dimitrios Fotiadis Christos Frantzidis Monique Frize Michal Gala Demetrius Georgiou Daniela Giordano James Goh Leontios Hadjileontiadis Karel Hana Maria Haritou Jens Haueisen Jan Havlik Martin Hoßbach Jiri Holcik Dimitris Iakovidis Adam Idzkowski Antonio-Fernando Infantosi Andreas Ioannides Robert Istepanian Sriram Iyengar Akos Jobbagy Erik Johannessen Vaggelis Kaimakamis Eleni Kaldoudi Anna Karahaliou Eleni Kargioti Kostas Karpouzis Spyros Kitsiou Manos Klados Athina Kokonozi Antonios Komnidis Evdokimos Konstantinidis Stathis Konstantinidis Dimitrios Kosmopoulos Sophia Kossida Kostas Kostopoulos Vassilis Koutkias Periklis Ktonas Dinesh Kumar Efthyvoulos Kyriacou Igor Lackovic Philip Langley Nikos Laskaris Olof Lindahl Angelika Lingnau Chrysa Lithari Andrej Luneski Andreas Lymberis Ilias Maglogiannis
Nadia Magnenat-Thalmann Fillia Makedon Andigoni Malousi Vicky Manthou Mattia Marconcini Ioannis Mariolis Michela Masè Georgios Matis Giulia Matrone Veronica Mazza Cristina Mazzoleni Mario Medvedec Damijan Miklavcic Joe Mizrahi Per Moeller Mihaela Morega Antonis Mpillis Brian-Edmond Murphy Alan Murray Tuncay Namli Jila Nazari Marios Neofytou Kleanthis Neokleous Konstantina Nikita Maria Nikolaidou Giandomenico Nollo Leonidas Orfanidis Cristina Oyarzun-Laura Christos Papadelis Kostas Papathanasiou Iraklis Paraskakis Ana Pascoal Constantinos Pattichis Leandro Pecchia Marek Penhaker Krzysztof Penkala Patrick Pentony Thomas Penzel Francesca Pizzorni-Ferrarese Vassilis Plagianakos Vahe Poghosyan Senan Postaci Eugene Postnikov Deborah Prè Michal Prauzek Andriana Prentza Efi Psarouli Chiara Rabotti Dan Rafiroiu Gunter Rau Ulrike Richter Jose-Joaquin Rieta
Laura Roa Georgios Sakas Goran Salerud Mario Sansone Andres Santos Niilo Saranummi Roberto Sassi Christos Schizas Maurizio Schmid Nicu Sebe George Sergiadis Maria Siebes Pavel Smrcka Delia Soimu Tomasz Soltysinski Jos Spaan Stergiani Spyrou Pascal Staccini Rita Stagni Telemachos Stamkopoulos Stavros Stavrinidis Sebastian Steger
Milan Stork Charalampos Styliadis Selma Supek Slavik Tabakov Luigi Tame Tong-Boon Tang Lucio Tommaso-De-Paolis Lubomir Traikov Riccardo Tranfaglia Ioannis Tsamardinos Styliani Tsigka Aristeidis Vaggelatos Emil Valchinov Teena Vellaramkalayil Vassilios Vescoukis Andreas Voss Wojtek Walendziuk Marta Wasilewska-Radwanska Jozef Wiora Jan Wojcicki Mustafa Yuksel Michalis Zervakis
Table of Contents
Biosignal Processing

Quantitative Analysis of Two-Dimensional Catch-Up Saccades Executed to the Target Jumps in the Time-Continuous Trajectory . . . . . . . . . . . . Vincas Laurutis, Raimondas Zemblys
1
Spike Sorting Based on Dominant-Sets Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.A. Adamos, N.A. Laskaris, E.K. Kosmidis, G. Theophilidis
5
Differentiation of Human Bone Marrow Stromal Cells onto Gelatin Cryogel Scaffolds . . . . . . . . . L. Fassina, E. Saino, L. Visai, M.A. Avanzini, M.G. Cusella De Angelis, F. Benazzo, S. Van Vlierberghe, P. Dubruel, G. Magenes
9
Simple Coherence vs. Multiple Coherence: A Somatosensory Evoked Response Detection Investigation . . . . . . . . . . . . D.B. Melges, A.M.F.L. Miranda de Sá, A.F.C. Infantosi
13
Measure of Similarity of ECG Cycles . . . . . . . . . . . . Á. Jobbágy, Á. Nagy
17
Wavelet Phase Synchronization between EHGs at Different Uterine Sites: Comparison of Pregnancy and Labor Contractions . . . . . . . . . . . . M. Hassan, Á. Alexandersson, J. Terrien, B. Karlsson, C. Marque
21
Dynamic Generation of Physiological Model Systems . . . . . . . . . . . . J. Kretschmer, A. Wahl, K. Moeller
25
Random Forest-Based Classification of Heart Rate Variability Signals by Using Combinations of Linear and Nonlinear Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alan Jovic, Nikola Bogunovic
29
Validation of MRS Metabolic Markers in the Classification of Brain Gliomas and Their Correlation to Energy Metabolism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.G. Kounelakis, M.E. Zervakis, G.J. Postma, L.M.C. Buydens, A. Heerschap, X. Kotsiakis
33
Event-Related Synchronization/Desynchronization for Evaluating Cortical Response Detection Induced by Dynamic Visual Stimuli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.J.G. Da-Silva, A.F.C. Infantosi, J. Nadal
37
Investigating the EEG Alpha Band during Kinesthetic and Visual Motor Imagery of the Spike Volleyball Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.V. Stecklow, M. Cagy, A.F.C. Infantosi
41
Principal Components Clustering through a Variance-Defined Metric . . . . . . . . . . . . . . . . . . . . . . . . . J.C.G.D. Costa, D.B. Melges, R.M.V.R. Almeida, A.F.C. Infantosi
45
A Kurtosis-Based Automatic System Using Naïve Bayesian Classifier to Identify ICA Components Contaminated by EOG or ECG Artifacts . . . . . . . . . . . . M.A. Klados, C. Bratsas, C. Frantzidis, C.L. Papadelis, P.D. Bamidis
49
Correlation between Fractal Behavior of HRV and Neurohormonal and Functional Indexes in Chronic Heart Failure . . . . . . . . . . . . G. D’Addio, M. Cesarelli, M. Romano, A. Accardo, G. Corbi, R. Maestri, M.T. La Rovere, Paolo Bifulco, N. Ferrara, F. Rengo
53
On the Selection of Time Interval and Frequency Range of EEG Signal Preprocessing for P300 Brain-Computer Interfacing . . . . . . . . . . . . N.V. Manyakov, N. Chumerin, A. Combaz, M.M. Van Hulle
57
Development of a Simple and Cheap Device for Movement Analysis . . . . . . . . . . . . Csanád G. Erdős, Gergő Farkas, Béla Pataki
61
Signal Peptide Prediction in Single Transmembrane Proteins Using the Continuous Wavelet Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I.A. Avramidou, I.K. Kitsas, L.J. Hadjileontiadis
65
Comparison of AM-FM Features with Standard Features for the Classification of Surface Electromyographic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.I. Christodoulou, P.A. Kaplanis, V. Murray, M.S. Pattichis, C.S. Pattichis
69
Studying Brain Visuo-Tactile Integration through Cross-Spectral Analysis of Human MEG Recordings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Erla, C. Papadelis, L. Faes, C. Braun, G. Nollo
73
Patient-Specific Seizure Prediction Using a Multi-feature and Multi-modal EEG-ECG Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Valderrama, S. Nikolopoulos, C. Adam, Vincent Navarro, M. Le Van Quyen
77
Horizontal Directionality Characteristics of the Bat Head-Related Transfer Function . . . . . . . . . . . . S.Y. Kim, D. Nikolić, A.C. Meruelo, R. Allen
81
Assessment of Human Performance during High-Speed Marine Craft Transit . . . . . . . . . . . . D. Nikolić, R. Collier, R. Allen
85
Effects of Electrochemotherapy on Microcirculatory Vasomotion in Tumors . . . . . . . . . . . . . . . . . . . T. Jarm, B. Cugmas, M. Cemazar
89
Non-linear Modeling of Cerebral Autoregulation Using Cascade Models . . . . . . . . . . . . . . . . . . . . . . N.C. Angarita-Jaimes, O.P. Dewhirst, D.M. Simpson
93
The Epsilon-Skew-Normal Dictionary for the Decomposition of Single- and Multichannel Biomedical Recordings Using Matching Pursuit Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D. Strohmeier, A. Halbleib, M. Gratkowski, J. Haueisen
97
On the Empirical Mode Decomposition Performance in White Gaussian Noise Biomedical Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Karagiannis, Ph. Constantinou
101
Simulation of Biomechanical Experiments in OpenSim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I. Symeonidis, G. Kavadarli, E. Schuller, S. Peldschus
107
Comparing Sensorimotor Cortex Activation during Actual and Imaginary Movement . . . . . . . . . A. Athanasiou, E. Chatzitheodorou, K. Kalogianni, C. Lithari, I. Moulos, P.D. Bamidis
111
Graph Analysis on Functional Connectivity Networks during an Emotional Paradigm . . . . . . . . C. Lithari, M.A. Klados, P.D. Bamidis
115
MORFEAS: A Non-Invasive System for Automated Sleep Apnea Detection Utilizing Snore Sound Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Charalampos Doukas, Theodoros Petsatodis, Ilias Maglogiannis
119
Improved Optical Method for Measuring Concentration of Uric Acid Removed during Dialysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Jerotskaja, F. Uhlin, M. Luman, K. Lauri, I. Fridolin
124
Correlations between Longitudinal Corneal Apex Displacement, Head Movements and Pulsatile Blood Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Danielewska, H. Kasprzak, M. Kowalska
128
The Analog Processing and Digital Recording of Electrophysiological Signals . . . . . . . . . . . . . . . . . F. Babarada, J. Arhip, C. Ravariu
132
Parameter Selection in Approximate and Sample Entropy-Complexity of Acute and Chronic Stress Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Loncar Turukalo, O. Sarenac, N. Japundzic-Zigon, D. Bajic
136
The Importance of Uterine Contractions Extraction in Evaluation of the Progress of Labour by Calculating the Values of Sample Entropy from Uterine Electromyogram . . . . . . . . . . . . . . . . . . J. Vrhovec, D. Rudel, A. Macek Lebar
140
Simultaneous Pneumo- and Photoplethysmographic Recording of Oscillometric Envelopes Applying a Local Pad-Type Cuff on the Radial Artery . . . . . . . . . . . . R. Raamat, K. Jagomägi, J. Talts, J. Kivastik
144
Estimation of Mean Radial Blood Pressure in Critically Ill Patients . . . . . . . . . . . . K. Jagomägi, J. Talts, P. Tähepõld, R. Raamat, J. Kivastik
148
Photoplethysmographic Assessment of the Pressure-Compliance Relationship for the Radial Artery . . . . . . . . . . . . J. Talts, R. Raamat, K. Jagomägi, J. Kivastik
152
High Frequency Acoustic Properties for Cutaneous Cell Carcinomas In Vitro . . . . . . . . . . . . L.I. Petrella, W.C.A. Pereira, P.R. Issa, H.A. Valle, C.J. Martins, J.C. Machado
156
Gender-Related Effects of Carbohydrate Ingestion and Hypoxia on Heart Rate Variability: Linear and Non-linear Analysis . . . . . . . . . . . . T. Princi, M. Klemenc, P. Golja, A. Accardo
160
On the Analysis of Dynamic Lung Mechanics Separately in Ins- and Expiration . . . . . . . . . . . . K. Möller, Z. Zhao, C.A. Stahl, J. Guttmann
164
Clinical Validation of an Algorithm for Automatic Detection of Atrial Fibrillation from Single Lead ECG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Triventi, G. Calcagnini, F. Censi, E. Mattei, F. Mele, P. Bartolini
168
Mental and Motor Task Classification by LDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. Gursel Ozmen, L. Gumusel
172
The Human Subthalamic Nucleus – Knowledge for the Understanding of Parkinson’s Disease T. Heida, E. Marani
176
Dissociated Neurons from an Extended Rat Subthalamic Area - Spontaneous Activity and Acetylcholine Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Heida, E. Marani
180
Nigro-Subthalamic and Nigro-Trigeminal Projections in the Rat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Marani, N.E. Lazarov, T. Heida, K.G. Usunoff
184
Statistical Estimate on Indices Associated to Atherosclerosis Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.M. Ipate, A. Machedon, M. Morega
188
Study of Some EEG Signal Processing Methods for Detection of Epileptic Activity . . . . . . . . . . . R. Matei, D. Matei
192
Continuous Wavelet Transformation of Pattern Electroretinogram (PERG) - A Tool Improving the Test Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Penkala
196
An Interactive Tool for Customizing Clinical Transacranial Magnetic Stimulation (TMS) Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Faro, D. Giordano, I. Kavasidis, C. Pino, C. Spampinato, M.G. Cantone, G. Lanza, M. Pennisi
200
Medical Imaging

Measurement Methodology for Temporomandibular Joint Displacement Based on Focus Mutual Information Alignment of CBCT Images . . . . . . . . . . . . W. Jacquet, E. Nyssen, B. Vande Vannet
204
Computer Aided Diagnosis of Diffuse Lung Disease in Multi-detector CT – Selecting 3D Texture Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I. Mariolis, P. Korfiatis, C. Kalogeropoulou, D. Daoussis, T. Petsas, L. Costaridou
208
Statistical Pre-processing Method for Peripheral Quantitative Computed Tomography Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Cervinka, H. Sievanen, M. Hannula, J. Hyttinen
212
Security and Reliability of Data Transmissions in Biotelemetric System . . . . . . . . . . . . M. Stankus, M. Penhaker, V. Srovnal, M. Cerny, V. Kasik
216
A Novel Approach for Implementation of Dual Energy Mapping Technique in CT-Based Attenuation Correction Using Single kVp Imaging: A Feasibility Study . . . . . . . . . . . . B. Teimourian, M.R. Ay, H. Ghadiri, M. Shamsaei Zafarghandi, H. Zaidi
220
Computational Visualization of Tumor Virotherapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X.F. Gao, M. Tangney, S. Tabirca
224
New Approaches for Continuous Non Invasive Blood Pressure Monitoring . . . . . . . . . . . . . . . . . . . . Petr Zurek, Martin Cerny, Michal Prauzek, Ondrej Krejcar, Marek Penhaker
228
Wireless Power and Data Transmission for Robotic Endoscopic Capsules . . . . . . . . . . . . R. Carta, J. Thoné, R. Puers
232
Ulcer Detection in Wireless Capsule Endoscopy Images Using Bidimensional Nonlinear Analysis . . . . . . . . . . . . Vasileios Charisis, Alexandra Tsiligiri, Leontios J. Hadjileontiadis, Christos N. Liatsos, Christos C. Mavrogiannis, George D. Sergiadis
236
Pre-clinical Physiological Data Acquisition and Testing of the IMAGE Sensing Device for Exercise Guidance and Real-Time Monitoring of Cardiovascular Disease Patients . . . . . . . . . . . . A. Astaras, A. Kokonozi, E. Michail, D. Filos, I. Chouvarda, O. Grossenbacher, J.-M. Koller, R. Leopoldo, J.-A. Porchet, M. Correvon, J. Luprano, A. Sipilä, N. Maglaveras
240
Thermal Images of Electrically Stimulated Breast: A Simulation Study . . . . . . . . . . . . H. Feza Carlak, Nevzat G. Gençer, Cengiz Beşikçi
244
Magnetic Resonance Current Density Imaging Using One Component of Magnetic Flux Density: An Experimental Study . . . . . . . . . . . . A. Ersöz, B.M. Eyüboğlu
248
Computer-Aided Detection of COPD Using Digital Chest Radiographs . . . . . . . . . . . . L. Nikházy, G. Horváth, Á. Horváth, V. Müller
252
Localisation, Registration and Visualisation of MRS Volumes of Interest on MR Images . . . . . . Yu Sun, Nigel P. Davies, Kal Natarajan, Theodoros N. Arvanitis, Andrew C. Peet
256
Magnetic Marker Monitoring: A Novel Approach for Magnetic Marker Design . . . . . . . . . . . . . . . S. Biller, D. Baumgarten, J. Haueisen
260
Corneal Nerves Segmentation and Morphometric Parameters Quantification for Early Detection of Diabetic Neuropathy . . . . . . . . . . . . Ana Ferreira, António Miguel Morgado, José Silvestre Silva
264
Novel Catheters for In Vivo Research and Pharmaceutical Trials Providing Direct Access to Extracellular Space of Target Tissues . . . . . . . . . . . . M. Bodenlenz, C. Hoefferer, F. Feichtner, C. Magnes, R. Schaller, J. Priedl, T. Birngruber, F. Sinner, L. Schaupp, S. Korsatko, T.R. Pieber
268
Statistical Texture Analysis of MRI Images to Classify Patients Affected by Multiple Sclerosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Faro, D. Giordano, C. Spampinato, M. Pennisi
272
WADEDA: A Wearable Affective Device with On-Chip Signal Processing Capabilities for Measuring ElectroDermal Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.I. Konstantinidis, C.A. Frantzidis, C. Papadelis, C. Pappas, P.D. Bamidis
276
A Modular Architecture of a Computer-Operated Olfactometer for Universal Use . . . . . . . . . . . . A. Komnidis, E. Konstantinidis, I. Stylianou, M.A. Klados, A. Kalfas, P.D. Bamidis
280
The Role of Geometry of the Human Carotid Bifurcation in the Formation and Development of Atherosclerotic Plaque . . . . . . . . . . . . P.G. Kalozoumis, A.I. Kalfas, A.D. Giannoukas
284
A Wearable Wireless ECG Sensor: A Design with a Minimal Number of Parts . . . . . . . . . . . . . . . E.S. Valchinov, N.E. Pallikarakis
288
Active Contours without Edges Applied to Breast Lesions on Ultrasound . . . . . . . . . . . . W. Gómez, A.F.C. Infantosi, L. Leija, W.C.A. Pereira
292
Automatic Identification of Trabecular Bone Fracture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S. Tassani, P.A. Asvestas, G.K. Matsopoulos, F. Baruffaldi
296
Visualization System to Improve Surgical Performance during a Laparoscopic Procedure . . . . . L.T. De Paolis, M. Pulimeno, G. Aloisio
300
The Blood Perfusion Mapping in the Human Skin by Photoplethysmography Imaging . . . . . . . . U. Rubins, R. Erts, V. Nikiforovs
304
Fingerprint Matching with Self Organizing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.N. Ouzounoglou, T.L. Economopoulos, P.A. Asvestas, G.K. Matsopoulos
307
A Novel Model for Monte Carlo Simulation of Performance Parameters of the Rodent Research PET (RRPET) Camera Based on NEMA NU-4 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . N. Zeraatkar, M.R. Ay, A.R. Kamali-Asl, H. Zaidi
311
Is the Average Gray-Level from Ultrasound B-Mode Images Able to Estimate Temperature Variations in Ex-Vivo Tissue? . . . . . . . . . . . . César A. Teixeira, A.V. Alvarenga, M.A. von Krüger, W.C.A. Pereira
315
CT2MCNP: An Integrated Package for Constructing Patient-Specific Voxel-Based Phantoms Dedicated for MCNP(X) Monte Carlo Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Mehranian, M.R. Ay, H. Zaidi
319
Noise Reduction in Fluoroscopic Image Sequences for Joint Kinematics Analysis . . . . . . . . . . . . . T. Cerciello, P. Bifulco, M. Cesarelli, L. Paura, M. Romano, G. Pasquariello, R. Allen
323
The Influence of Patient Miscentering on Patient Dose and Image Noise in Two Commercial CT Scanners . . . . . . . . . . . . M.A. Habibzadeh, M.R. Ay, A.R. Kamali asl, H. Ghadiri, H. Zaidi
327
A Study on Performance of a Digital Image Acquisition System in Mammography Diagnostic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D. Dimitric, G. Nisevic, Z. Boskovic, A. Vasic
331
An Efficient Video-Synopsis Technique for Optical Recordings with Application to the Analysis of Rat Barrel-Cortex Responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Tsitlakidis, N.A. Laskaris, G.C. Koudounis, E.K. Kosmidis
335
Preoperative Planning Software for Hip Replacement . . . . . . . . . . . . M. Michalíková, L. Bednarčíková, T. Tóth, J. Živčák
339
Entropy: A Way to Quantify Complexity in Calcium Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Fanelli, F. Esposti, J. Ion Titapiccolo, M.G. Signorini
343
A New Fluorescence Image-Processing Method to Visualize Ca2+ - Release and Uptake Endoplasmatic Reticulum Microdomains in Cultured Glia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Ion Titapiccolo, F. Esposti, A. Fanelli, M.G. Signorini
347
Experimental Measurement of Modulation Transfer Function (MTF) in Five Commercial CT Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.M. Akbari, M.R. Ay, A.R. Kamali asl, H. Ghadiri, H. Zaidi
351
Microcalcifications Segmentation Procedure Based on Morphological Operators and Histogram Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M.A. Duarte, A.V. Alvarenga, C.M. Azevedo, A.F.C. Infantosi, W.C.A. Pereira
355
Segmentation of Anatomical Structures on Chest Radiographs . . . . . . . . . . . . S. Juhász, Á. Horváth, L. Nikházy, G. Horváth, Á. Horváth
359
Lung Nodule Detection on Rib Eliminated Radiographs . . . . . . . . . . . . G. Orbán, Á. Horváth, G. Horváth
363
An Improved Algorithm for Out-of-Plane Artifacts Removal in Digital Tomosynthesis Reconstructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Bliznakova, Z. Bliznakov, N. Pallikarakis
367
Magnetic Resonance Imaging of Irreversible Electroporation in Tubers . . . . . . . . . . . . . . . . . . . . . . . Mohammad Hjouj and Boris Rubinsky
371
Superposition of Activations of SWI and fMRI Acquisitions of the Motor Cortex . . . . . . . . . . . . . M. Matos, M. Forjaz Secca, M. Noseworthy
376
Medical Devices and Instrumentation

A New Optical Method for Measuring Creatinine Concentration during Dialysis . . . . . . . . . . . . I. Fridolin, J. Jerotskaja, K. Lauri, F. Uhlin, M. Luman
379
The Role of Viscous Damping on Quality of Haptic Interaction in Upper Limb Rehabilitation Robot: A Simulation Study . . . . . . . . . . . . J. Oblak, I. Cikajlo, T. Keller, J.C. Perry, J. Veneman, Z. Matjačić
383
A New Fibre Optic Pulse Oximeter Probe for Monitoring Splanchnic Organ Arterial Blood Oxygen Saturation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Hickey, N. Samuels, N. Randive, R. Langford, P.A. Kyriacou
387
Electrical Properties of Teeth Regarding the Electric Vitality Testing . . . . . . . . . . . . T. Marjanović, Z. Stare, M. Ranilović
391
Stiffness of a Small Tissue Phantom Measured by a Tactile Resonance Sensor . . . . . . . . . . . . . . . . V. Jalkanen, B.M. Andersson, O.A. Lindahl
395
Vectorial Magnetoencephalographic Measurements for the Estimation of Radial Dipolar Activity in the Human Somatosensory System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Haueisen, K. Fleissig, D. Strohmeier, R. Huonker, M. Liehr, O.W. Witte
399
Registration of Chest X-Rays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Csorba, B. Kormanyos, B. Pataki
402
Arterial Pulse Transit Time Dependence on Applied Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Pilt, K. Meigas, M. Viigimaa, J. Kaik, R. Kattai, D. Karai
406
Influence of an Artificial Valve Type on the Flow in the Ventricular Assist Device . . . . . . . . . . . . D. Obidowski, P. Klosinski, P. Reorowicz, K. Jozwik
410
A New Stimulation Technique for Electrophysiological Color Vision Testing . . . . . . . . . . . . . . . . . . M. Zaleski, K. Penkala
414
Novel TiN-Based Dry EEG Electrodes: Influence of Electrode Shape and Number on Contact Impedance and Signal Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Fiedler, S. Brodkorb, C. Fonseca, F. Vaz, F. Zanow, J. Haueisen
418
A Finite Element Method Study of the Current Density Distribution in a Capacitive Intrabody Communication System . . . . . . . . . . . . Ž. Lučev, A. Koričan, M. Cifrek
422
Voice Controlled Neuroprosthesis System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.C. Irimia, M.S. Poboroniuc, M.C. Stefan, Gh. Livint
426
Preoperative Planning Program Tool in Treatment of Articular Fractures: Process of Segmentation Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Tomazevic, D. Kreuh, A. Kristan, V. Puketa, M. Cimerman
430
Neuroimaging of Emotional Activation: Issues on Experimental Methodology, Analysis and Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Styliadis, C. Papadelis, P.D. Bamidis
434
Using Grid Infrastructure for the Promotion of Biomedical Knowledge Mining . . . . . . . . . . . . . . . A. Chatziioannou, I. Kanaris, C. Doukas, Ilias Maglogiannis A Laboratory Scale Facility for the Parametric Characterization of the Intraocular Pressure of the Human Eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.V. Michailidou, P. Chatzi, P.G. Kalozoumis, A.I. Kalfas, M. Pappa, I. Tsiafis, E.I. Konstantinidis, P.D. Bamidis
438
442
AM-FM Texture Image Analysis in Multiple Sclerosis Brain White Matter Lesions . . . . . . . . . . . C.P. Loizou, V. Murray, M.S. Pattichis, M. Pantziaris, I. Seimenis, C.S. Pattichis
446
Reliable Hysteroscopy Color Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I. Constantinou, V. Tanos, M. Neofytou, C. Pattichis
450
Comparison of Methods of Measurement of Head Position in Neurological Practice . . . . . . . . . . . P. Kutilek, J. Charfreitag, J. Hozman
455
The Nanoporous Al2O3 Material Used for the Enzyme Entrapping in a Glucose Biosensor . . . . . . . . . . . . C. Ravariu, A. Popescu, C. Podaru, E. Manea, F. Babarada
459
Hand-Held Resonance Sensor Instrument for Soft Tissue Stiffness Measurements – A First Study on Biological Tissue In Vitro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Jalkanen, O.A. Lindahl
463
Head Position Monitoring System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Cech, J. Dlouhy, M. Cizek, J. Rozman, I. Vicha
467
Short Range Wireless Link for Data Acquisition in Medical Equipment . . . . . . . . . . . . . . . . . . . . . . . N.M. Roman, S. Gergely, R.V. Ciupa, M.V. Pusca
471
Corneal Quantitative Fluorometry – A Slit-Lamp Based Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.P. Domingues, Isa Branco, A.M. Morgado
475
Automatic Detection of Patients’ Spontaneous Activity during Pressure Support Ventilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Matrone, F. Mojoli, A. Orlando, A. Braschi, G. Magenes
479
Determination of In Vivo Three-Dimensional Lower Limb Kinematics for Simulation of High-Flexion Squats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.D. Wong, B. Callewaert, K. Desloovere, L. Labey, B. Innocenti
483
Evaluation of Chronic Diabetic Wounds with the Near Infrared Wound Monitor . . . . . . . . . . . . . . Michael Neidrauer, Leonid Zubkov, Michael S. Weingarten, Kambiz Pourrezaei, Elisabeth S. Papazoglou
487
Non-contact UWB Radar Technology to Assess Tremor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Blumrosen, M. Uziel, B. Rubinsky, D. Porrat
490
High Frequency Mechanical Vibrations Stimulate the Bone Matrix Formation in hBMSCs (Human Bone Marrow Stromal Cells) . . . . . . . . . . . . D. Prè, G. Ceccarelli, M.G. Cusella De Angelis, G. Magenes
494
Mobispiro: A Novel Spirometer . . . . . . . . . . . . Eleni J. Sakka, Pantelis Aggelidis, Markela Psimarnou
498
A Computer Program for the Functional Assessment of the Rotational Vestibulo-Ocular Reflex (VOR) . . . . . . . . . . . . A. Böhler, M. Mandalà, S. Ramat
502
New Application for Automatic Hemifield Damage Identification in Humphrey Field Analyzer (HFA) Visual Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Salonikiou, V. Kilintzis, A. Antoniadis, F. Topouzis
506
The Effect of Mechano– and Magnetochemically Synthesized Magnetosensitive Nanocomplex and Electromagnetic Irradiation on Animal Tumor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V.E. Orel, A.V. Romanov, I.I. Dzyatkovska, M.O. Nikolov, Yu.G. Mel’nik, N.M. Dzyatkovska, I.B. Shchepotin Verification of Measuring System for Automation Intra – Abdominal Pressure Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ˇ c´ T. T´ oth, M. Michal´ıkov´ a, L. Bednarˇc´ıkov´ a, M. Petr´ık, J. Zivˇ ak
510
513
Evolution in Bladder Pressure Measuring Implants Developed at K.U. Leuven . . . . . . . . . . . . P. Jourand, J. Coosemans, R. Puers
517
Including the Effect of the Thermal Wave in Theoretical Modeling for Radiofrequency Ablation . . . . . . . . . . . . J.A. López Molina, M.J. Rivera, M. Trujillo, V. Romero-García, E.J. Berjano
521
Textile Integrated Monitoring System for Breathing Rhythm of Infants . . . . . . . . . . . . H. De Clercq, P. Jourand, R. Puers
525
Comparison between VHDL-AMS and PSPICE Modeling of Ultrasound Measurement System for Biological Medium . . . . . . . . . . . . N. Aouzale, A. Chitnalah, H. Jakjoud, D. Kourtiche, M. Nadi
529
Stimulation Parameter Testing and Verification during Pacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Martin Augustynek, Marek Penhaker, Pavel Sazel, David Korpas
533
Biosignal Monitoring and Processing for Management of Hypertension . . . . . . . . . . . . . . . . . . . . . . . A. Stan, R. Lupu, M. Ciorap, R. Ciorap
537
Design and Development of an Electrophysiological Signal Acquisition System: A Technological Aid for Research, Teaching and Clinical Application . . . . . . . . . . . . E. Villavicencio, D. García, L. Navarro, M. Torres, R. Huamaní, L.F. Yabar
541
Numerical Models of an Artery with Different Stent Types . . . . . . . . . . . . M. Brand, M. Ryvkin, S. Einav, I. Avrahami, J. Rosen, M. Teodorescu
545
549 554
Biomedical Measurements and Modeling

Assessment of a Patient-Specific Silicon Model of the Human Arterial Forearm . . . . . . . . . . . . K. Van Canneyt, F. Giudici, P. Segers, P. Verdonck
558
Numerical Investigations of the Strain-Adaptive Bone Remodeling in the Prosthetic Pelvis . . . A. Bouguecha, I. Elgaly, C. Stukenborg-Colsman, M. Lerch, I. Nolte, P. Wefstaedt, T. Matthias, B.-A. Behrens
562
Development of a System Dynamics Model for Cost Estimation for the Implantation and Revision of Hip Joint Endoprosthesis . . . . . . . . . . . . J. Schröttner, A. Herzog
566
The Volume Regulation and Accumulation of Synovial Fluid between Articular Plateaus of Knee Joints . . . . . . . . . . . . M. Petrtyl, J. Danesova, J. Lísal
570
Anthropometric Measurements and Model Evaluation of Mass-Inertial Parameters of the Human Upper and Lower Extremities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G.S. Nikolova
574
Validation of a Person Specific 1-D Model of the Systemic Arterial Tree . . . . . . . . . . . . . . . . . . . . . . P. Reymond, Y. Bohraus, F. Perren, F. Lazeyras, N. Stergiopulos
578
First Trimester Diagnosis of Trisomy-21 Using Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . C.N. Neocleous, K. Nikolaides, K. Neokleous, C.N. Schizas
580
Numerical Analysis of a Novel Method for Temperature Gradient Measurement in the Vicinity of Warm Inflamed Atherosclerotic Plaques . . . . . . . . . . . . Z. Aronis, E. Massarwa, L. Rosen, O. Rotman, R. Eliasy, R. Haj-Ali, S. Einav
584
A Multilevel and Multiscale Approach for the Prediction of Oral Cancer Reoccurrence . . . . . . . . . . . . Konstantinos P. Exarchos, G. Rigas, Yorgos Goletsis, Dimitrios I. Fotiadis
588
592
Electrospinning Poly(o-methoxyaniline) Nanofibers for Tissue Engineering Applications . . . . . . . . . . . . Wen-Tyng Li, Mu-Feng Shie, Chung-Feng Dai, Jui-Ming Yeh
596
Diagnosis of Asthma Severity Using Artificial Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Chatzimichail, A. Rigas, E. Paraskakis, A. Chatzimichail
600
Enhanced Stem Cells Characteristic of Fibroblastic Mesenchymal Cells from HHT Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Silvani, L. Benedetti, N. Crosetto, C. Olivieri, D. Galli, B. Magnani, G. Magenes, M.G. Cusella De Angelis High Frequency Vibration (HFV) Induces Muscle Hypertrophy in Newborn Mice and Enhances Primary Myoblasts Fusion in Satellite Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Ceccarelli, L. Benedetti, D. Pr`e, D. Galli, L. Vercesi, G. Magenes, M.G. Cusella De Angelis Surface Characterization of Collagen Films by Atomic Force Microscopy . . . . . . . . . . . . . . . . . . . . . A. Stylianou, S.B. Kontomaris, M. Kyriazi, D. Yova Changes in Electrocardiogram during Intra-Abdominal Electrochemotherapy: A Preliminary Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B. Mali, T. Jarm, E. Gadˇzijev, G. Serˇsa, D. Miklavˇciˇc
604
608
612
616
Studying Postural Sway Using Wearable Sensors: Fall Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Turcato, S. Ramat
620
From Biomedical Research to Spin-Off Companies for the Health Care Market . . . . . . . . . . . . O.A. Lindahl, B. Andersson, R. Lundström, K. Ramser
624
A Continuous-Time Dynamical Model for the Vestibular Nucleus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Korodi, V. Ceregan, T.L. Dragomir, A. Codrean
627
Fast Optical Signal in the Prefrontal Cortex Correlates with EEG . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.V. Medvedev, J.M. Kainerstorfer, S.V. Borisov, J. VanMeter
631
Using Social Semantic Web Technologies in Public Health: A Prototype Epidemiological Semantic Wiki . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Bratsas, A. Tzalavra, V. Vescoukis, P. Bamidis
635
Patellofemoral Contact during Simulated Weight Bearing Squat Movement: A Cadaveric Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Van Haver, J. Quintelier, M. De Beule, P. Verdonk, F. Almqvist, P. De Baets
639
Rapid Prototype Development for Studying Human Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Fevgas, P. Tsompanopoulou, S. Lalis Rheological and Electrical Properties of RBC Suspensions in Dextran 70. Changes in RBC Morphology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. Antonova, I. Ivanov, Y. Gluhcheva, E. Zvetkova
643
647
Numerical Simulation In Magnetic Drug Targeting. Magnetic Field Source Optimization . . . . . A. Dobre, A.M. Morega
651
Ontology for Modeling Interaction in Ambient Assisted Living Environments . . . . . . . . . . . . J.B. Mocholí, P. Sala, C. Fernández-Llatas, J.C. Naranjo
655
Protein Surface Atom Neighborhood Functional Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.D. Cristea, R. Tuduce, O. Arsene
659
Performance Evaluation of a Grid-Based Heart Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R.S. Campos, M.P. Xavier, M. Lobosco, R.W. dos Santos
663
A Web-Based Tool for the Automatic Segmentation of Cardiac MRI . . . . . . . . . . . . . . . . . . . . . . . . . T.H. de Paula, M. Lobosco, R.W. dos Santos
667
Improved Modeling of Lane Intensity Profiles on Gel Electrophoresis Images . . . . . . . . . . . . . . . . . C.F. Maramis, A.N. Delopoulos
671
Affective Learning: Empathetic Embodied Conversational Agents to Modulate Brain Oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.N. Moridis, M.A. Klados, V. Terzis, A.A. Economides, V.E. Karabatakis, A. Karlovasitou, P.D. Bamidis The Role of Electrically Stimulated Endocytosis in Gene Electrotransfer . . . . . . . . . . . . . . . . . . . . . . M. Pavlin, M. Kanduˇser, G. Pucihar, D. Miklavˇciˇc
675
679
A Frequency Synchronization Study on the Temporal and Spatial Evolution of Emotional Visual Processing Using Wavelet Entropy and IAPS Picture Collection . . . . . . . . . . . . . . . . . . . . . . . C.A. Frantzidis, C. Pappas, P.D. Bamidis
683
Frontal EEG Asymmetry and Affective States: A Multidimensional Directed Information Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.C. Petrantonakis, L.J. Hadjileontiadis
687
A Game-Like Interface for Training Seniors’ Dynamic Balance and Coordination . . . . . . . . . . . . . A.S. Billis, E.I. Konstantinidis, C. Mouzakidis, M.N. Tsolaki, C. Pappas, P.D. Bamidis
691
Incorporating Electroporation-Related Conductivity Changes into Models for the Calculation of the Electric Field Distribution in Tissue . . . . . . . . . . . . . . . I. Lacković, R. Magjarević, D. Miklavčič
695
Automated Estimation of 3D Camera Extrinsic Parameters for the Monitoring of Physical Activity of Elderly Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Deklerck, B. Jansen, X.L. Yao, J. Cornelis
699
Robotic System for Training of Grasping and Reaching . . . . . . . . . . . . . . . J. Podobnik, M. Munih
703
Recognition and Identification of Red Blood Cell Size Using Angular Radial Transform and Neural Networks . . . . . . . . . . . . . . . G. Apostolopoulos, S. Tsinopoulos, E. Dermatas
707
Collagen Gel as Cell Extracellular Environment to Study Gene Electrotransfer . . . . . . . . . . . . . . . S. Haberl, D. Miklavčič, M. Pavlin
711
The Influence of Seat Pan and Trunk Inclination on Muscles Activity during Sitting on Forward Inclined Seats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Mastalerz, I. Palczewska
715
Radiation Exposure in Routine Practice with PET/CT and Automatic Infusion System – Practical Experience Report . . . . . . . . . . . . . . . P. Tomše, A. Biček
719
A Pilot Study for Development of Shoulder Proprioception Training System Using Virtual Reality for Patients with Stroke: The Effect of Manipulated Visual Feedback . . . . . . . . . . . . . . . . . S.W. Cho, J.H. Ku, Y.J. Kang, K.H. Lee, J.Y. Song, H.J. Kim, I.Y. Kim, S.I. Kim
722
Computer Modeling to Study the Dynamic Response of the Temperature Control Loop in RF Cardiac Ablation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Alba, M. Trujillo, R. Blasco, E.J. Berjano
725
Mechanical Properties of Long Bone Shaft in Bending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.M. Rajaai, K. PourAkbar Saffar, N. JamilPour
729
Active Behavior of Peripheral Nerves during Magnetic Stimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Cretu, R. Ciupa, L. Darabant
733
Preparations’ Methodology for the Introduction of Information Systems in Hospitals . . . . . . . . . J. Sarivougioukas, A. Vagelatos, Ch. Kalamara
737
Supervised and Unsupervised Finger Vein Segmentation in Infrared Images Using KNN and NNCA Clustering Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Vlachos, E. Dermatas
741
An Echocardiographic Study for Assessment the Indices of Arterial and Ventricular Stiffness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C.M. Stanescu, K. Branidou, A. Dan, I. Daha, C. Baicus, V. Manoliu, C. Adam, A.Gh. Dan
745
Estimation of Linear Parametric Distortions and Motions in the Frequency Domain . . . . . . . . . . D.S. Alexiadis, G.D. Sergiadis
749
Ex Vivo and In Vivo Regulation of Arginase in Response to Wall Shear Stress . . . . . . . . . . . . . . . R.F. da Silva, V.C. Olivon, D. Segers, R. de Crom, R. Krams, N. Stergiopulos
753
Process Choreography for Interaction Simulation in Ambient Assisted Living Environments . . . . . . . . . . . . . . . C. Fernández-Llatas, J.B. Mocholí, C. Sánchez, P. Sala, J.C. Naranjo
757
Measuring Device for Determination of Forearm Force . . . . . . . . . . . . . . . P. Hlavoň, J. Krejsa, M. Zezula
761
Subclavian Steal Syndrome – A Computer Model Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C. Manopoulos, S. Tsangaris
764
Mullins Effect in Human Aorta Described with Limiting Extensibility Evolution . . . . . . . . . . . . . . L. Horny, E. Gultova, H. Chlup, R. Sedlacek, J. Kronek, J. Vesely, R. Zitny
768
A Distribution of Collagen Fiber Orientations in Aortic Histological Section . . . . . . . . . . . . . . . . . . L. Horny, J. Kronek, H. Chlup, R. Zitny, M. Hulan
772
An Innovative Approach for Right Ventricular Volume Calculation during Right Catheterization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Toumpaniaris, I. Skalkidis, S. Markatis, D. Koutsouris
776
Service Composition to Support Ambient Assisted Living Solutions for the Elderly . . . . . . . . . . . V. Moumtzi, C. Wills, A. Koumpis
780
Oscillations in Subthalamic Nucleus Measured by Multi Electrode Arrays . . . . . . . . . . . . . . . . . . . . J. Stegenga, T. Heida
784
Suitable Polymer Pipe to Modelling the Coronary Veins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Romola Laczko, Tibor Balazs, Eszter Bognar
788
Satisfaction Survey of Greek Inpatients with Brain Cancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G.K. Matis, O.I. Chrysou, N. Lyratzopoulos, K. Kontogiannidis, T.A. Birbilis
792
End Stage Renal Disease Patients’ Projections Using Markov Chain Monte Carlo Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Rodina, K. Bliznakova, N. Pallikarakis
796
Influence of Bioimplant Surface Electrical Potential on Osteoblast Behavior and Bone Tissue Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yu. Dekhtyar, I. Khlusov, N. Polyaka, R. Sammons, F. Tyulkin
800
Nano-Sized Drug Carrier for Cancer Therapy: Dose-Toxicity Relationship of PEG-PCL-PEG Polymeric Micelle on ICR Mice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J.L. Jiang, N.V. Cuong, S.C. Jwo, M.F. Hsieh
804
“Internet of Things”, an RFID – IPv6 Scenario in a Healthcare Environment . . . . . . . . . . . . . . . . H. Tsirbas, K. Giokas, D. Koutsouris
808
Development of Software Tool for Quantitative Gait Assessment in Parkinsonian Patients with and without Mild Cognitive Impairment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L. Iuppariello, R. Tranfaglia, M. Amboni, L. Lista, M. Sansone
812
Tensile Stress Analysis of the Ceramic Head Endoprosthesis with different Micro Shape Deviations of the Contact Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Fuis, M. Koukal
815
Modelling of Cancer Dynamics and Comparison of Methods for Survival Time Estimation . . . . Tomas Zdrazil, Jiri Holcik
819
Implications of Data Quality Problems within Hospital Administrative Databases . . . . . . . . . . . . J.A. Freitas, T. Silva-Costa, B. Marques, A. Costa-Pereira
823
Can the EEG Indicate the FiO2 Flow of a Mechanical Ventilator in ICU Patients with Respiratory Failure? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.G. Peranonti, M.A. Klados, C.L. Papadelis, D.G. Kontotasiou, C. Kourtidou-Papadeli, P.D. Bamidis
827
A European Biomedical Engineering Postgraduate Program – From Evaluation to Continuous Improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Griva, N. Pallikarakis
831
Web Based Medical Applications and Telemedicine
SOAP/WSDL-Based Web Services for Biomedicine: Demonstrating the Technique with the CancerResource . . . . . . . . . . . . . . . T. Meinel, M.S. Mueller, J. Ahmed, R. Yildiriman, M. Dunkel, R. Herwig, R. Preissner
835
A Web-Based Application for the Evaluation of the Healthcare Management . . . . . . . . . . . . . . . . . M. Bava, D. Zotti, R. Zangrando, M. Delendi
839
A System for Acquiring, Transmitting and Distributed EEG Data Processing . . . . . . . . . . . . . . . . . D. Kastaniotis, G. Maragos, N. Fragoulis, A. Ifantis
843
The Umbrella Database on Fever and Neutropenia in Children – Prototype for Internet-Based Medical Data Management . . . . . . . . . . . . . . . Matthias Faix, Daniela Augst, Hans Jürgen Laws, Arne Simon, Fritz Haverkamp, J. Rentzsch
847
A Research Information System (RIS) for Breast Cancer Genetics . . . . . . . . . . . . . . . B.L. Leskošek, J. Dimec, K. Geršak, P. Ferk
851
WeCare: Wireless Enhanced Healthcare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hande Ozgur Alemdar, Cem Ersoy
855
The Functionality Control of Horizontal Agitators for Blood Bags . . . . . . . . . . . . . . . . . . . . . . . . . . . . Z. Vasickova, M. Penhaker, M. Darebnikova
859
Experimental Hardware Solutions of Biotelemetric System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dalibor Janckulik, Leona Motalova, Karel Musil, Ondrej Krejcar
863
Modern Tools for Design and Implementation of Mobile Biomedical System for Home Care Agencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dalibor Janckulik, Leona Motalova, Ondrej Krejcar
867
Application of Embedded System for Sightless with Diabetes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L. Martinak, M. Penhaker
871
TELEMON – An Embedded Wireless Monitoring and Alert System for Homecare . . . . . . . . . . . C. Rotariu, H. Costin, R. Ciobotariu, F. Adochiei, I. Amariutei, Gladiola Andruseac
875
Graphical Development System Design for Creating the FPGA-Based Applications in Biomedicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Kasik, M. Stankus
879
Low Cost Data Acquisition System for Biomedical Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Stankus, M. Penhaker, M. Cerny
883
Embedded Programmable Invasive Blood Pressure Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Kijonka, M. Penhaker
886
e-Health
Generating and Transmitting Ambulatory Electronic Medical Prescriptions . . . . . . . . . . . . . . . M. Nyssen, K. Thomeer, R. Buyl
890
Estimating Pre-term Birth Using a Hybrid Pattern Classification System . . . . . . . . . . . . . . . . . . . . . M. Frize, N. Yu
893
Design and Implementation of a Radio Frequency IDentification (RFID) System for Healthcare Applications . . . . . . . . . . . . . . . A.C. Polycarpou, G. Gregoriou, A. Dimitriou, A. Bletsas, I.N. Sahalos, L. Papaloizou, P. Polycarpou
897
Reliability Issues in Regional Health Networks . . . . . . . . . . . . . . . S. Spyrou, P. Bamidis, N. Maglaveras
901
Prevention and Management of Risk Conditions of Elderly People through the Home Environment Monitoring . . . . . . . . . . . . . . . L. Pastor-Sanz, M.M. Fernández-Rodríguez, M.F. Cabrera-Umpiérrez, M.T. Arredondo, E. Bekiaris
905
Multilevel Access Control in Hospital Information Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V. Baldas, K. Giokas, D. Koutsouris
909
Managing Urinary Incontinence through Hand-Held Real-Time Decision Support Aid . . . . . . . . Constantinos Koutsojannis, Chrysa Lithari, Eman Alkholy Nabil, Giorgos Bakogiannis, Ioannis Hatzilygeroudis
913
Renal Telemedicine and Telehealth – Where Do We Stand? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Kaldoudi, V. Vargemezis
920
A System for Monitoring Children with Suspected Cardiac Arrhythmias – Technical Optimizations and Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E. Kyriacou, C. Pattichis, D. Hoplaros, A. Kounoudes, M. Milis, A. Jossif
924
Use of Guidelines and Decision Support Systems within EHR Applications in Family Practice – Croatian Experience . . . . . . . . . . . . . . . D. Kralj, S. Tonković, M. Končar
928
A New Concept of the Integrated Care Service for Unstable Diabetic Patients . . . . . . . . . . . . . . . P. Ładyżyński, P. Foltyński, J.M. Wójcicki, K. Migalska-Musial, M. Molik, J. Krzymień, G. Rosiński, G. Opolski, K. Czajkowski, M. Tracz, W. Karnafel
932
SHARE: A Meeting Point for the Promotion of Interoperability and Best Practices in eHealth Sector . . . . . . . . . . . . . . . M. Ortega-Portillo, M.M. Fernandez-Rodriguez, M.F. Cabrera-Umpierrez, M.T. Arredondo, G. Carrozza
935
Long Term Evolution (LTE) Technology in e-Health – A Sample Application . . . . . . . . . . . . . . . R. Jagusz, J. Borkowski, K. Penkala
939
BME Education
EMITEL e-Encyclopaedia of Medical Physics – Project Development and Future . . . . . . . . . . . . . . . S. Tabakov, P. Smith, F. Milano, S.-E. Strand, C. Lewis, M. Stoeva
943
BME Education Program Following the Expectations from the Industry, Health Care and Science . . . . . . . . . . . . . . . P. Augustyniak, R. Tadeusiewicz, M. Wasilewska-Radwańska
945
Quality Assurance in Biomedical Engineering COOP-Educational Training Program: Planning, Implementation and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Alhamwi, Manal A. Farrag, T. Elsarnagawy
949
Accreditation of Medical Physics and Medical Engineering Programmes in the UK . . . . . . . . . . . . . . . S. Tabakov, D. Parker, F. Schlindwein, A. Nisbett
953
Tools Based eLearning Platform to Support the Development and Repurposing of Educational Material . . . . . . . . . . . . . . . T. Stefanut, M. Marginean, D. Gorgan
955
A Feasible Teaching Tool for Physiological Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Stojanovic, D. Karadaglic, B. Asanin, O. Chizhova
959
Repurposing Serious Games in Health Care Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A. Protopsaltis, D. Panzoli, I. Dunwell, S. de Freitas
963
Geotagged Repurposed Educational Content through mEducator Social Network Enhances Biomedical Engineering Education . . . . . . . . . . . . . . . S.Th. Konstantinidis, N. Dovrolis, Eleni Kaldoudi, P.D. Bamidis
967
MORMED: Towards a Multilingual Social Networking Platform Facilitating Medicine 2.0 . . . . . . . . . . . . . . . Eleni Kargioti, Dimitrios Kourtesis, Dimitris Bibikas, Iraklis Paraskakis, Ulrich Boes
971
Design and Development of a Pilot on Line Electronic OSCE Station for Use in Medical Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E.L. Dafli, P.D. Bamidis, C. Pappas, N. Dombros
975
Review of the Biomedical Engineering Education Programs in Europe within the Framework of TEMPUS IV, CRH-BME Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Z. Bliznakov, N. Pallikarakis
979
Virtual Experiments: May VCV Impede Circulation More than PCV in (Virtual) Patients in the Lateral Position? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . T. Golczewski, M. Darowski
983
Clinical Engineering
Human Factors Engineering Applied to Risk Management in the Use of Medical Equipment . . . . . . . . . . . . . . . A.P.S. Silva, R.M.A. Almeida, J.A. Ferreira, A. Gibertoni
987
Electromagnetic Interferences (EMI) from Active RFId on Critical Care Equipment . . . . . . . . . . Ernesto Iadanza, Fabrizio Dori, Roberto Miniati, Edvige Corrado
991
The Clinical Data Recorder: What Shall Be Monitored? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . L.N. Nascimento, S.J. Calil
995
Clinical Engineering and Patient Safety: A Forty Year Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Frize, S. Weyand, K. Greenwood
999
Extracorporeal Membrane Oxygenation in the Treatment of Novel Influenza Virus Infection: A Multicentric Hospital-Based Health Technology Assessment in Lombardy Region . . . . . . . . . . . . . . . P. Lago, I. Vallone, G. Zarola
1003
MRI-Induced Heating on Patients with Implantable Cardioverter-Defibrillators and Pacemaker: Role of Lead Structure . . . . . . . . . . . . . . . E. Mattei, G. Calcagnini, M. Triventi, F. Censi, P. Bartolini
1007
Adoption and Sophistication of Clinical Information Systems in Greek Public Hospitals: Results from a National Web-Based Survey . . . . . . . . . . . . . . . S. Kitsiou, V. Manthou, M. Vlachopoulou, A. Markos
1011
Risk Management Process and CE Marking of Software as MD . . . . . . . . . . . . . . . Fabrizio Dori, Ernesto Iadanza, Roberto Miniati, Samuele Mattei
1017
From Laparoscopic Surgery to 3-D Double Console Robot-Assisted Surgery . . . . . . . . . . . . . . . P. Lago, C. Lombardi, B. Dell’Anna
1021
Clinical Engineering and Clinical Dosimetry in Patients with Differentiated Thyroid Cancer Undergoing Thyroid Remnant Ablation with Radioiodine-131 . . . . . . . . . . . . . . . M. Medvedec, D. Dodig
1025
Author Index . . . . . . . . . . . . . . . 1029
Keyword Index . . . . . . . . . . . . . . . 1035
Quantitative Analysis of Two-Dimensional Catch-Up Saccades Executed to the Target Jumps in the Time-Continuous Trajectory
Vincas Laurutis and Raimondas Zemblys
Biomedical Engineering Centre, Siauliai University, Vilniaus st. 141, LT-76353 Šiauliai, Lithuania
Abstract— The purpose of this research was to investigate quantitatively the catch-up saccades occurring during smooth pursuit. In the first experiment, to evoke catch-up saccades, we used high-velocity, predictable, two-dimensional time-continuous target trajectories. In the second experiment, catch-up saccades were evoked using a target-jump paradigm during sustained two-dimensional pursuit. Target jumps in different directions were presented at unexpected moments and positions of the interrupted time-continuous target trajectory. From the experimental results we compared the main sequences (the relationship between peak velocity and amplitude) of the catch-up and refixation saccades and found that they differ. We also conclude that the peak velocity of catch-up saccades is strongly correlated with the velocity of the smooth pursuit target component. We found that both position error and retinal slip are taken into account in catch-up saccade programming to predict the future trajectory of the moving target.
Keywords— Eye movements, Smooth pursuit, Saccadic eye movements, Catch-up saccades.
I. INTRODUCTION
Eye movements exist to aid vision by directing gaze towards new objects of interest and, if those objects move, by tracking them. This serves to bring pertinent retinal images onto the fovea – the region of most acute vision. An interesting feature of the brain’s control of eye movements is its modular organization, with different subsystems mediating special functions. The neural subsystem generating the rapid, saccadic eye movements used to capture new objects is quite distinct from the one performing pursuit (tracking) movements – the second subsystem. The third subsystem – the vestibulo-ocular reflex (VOR) – is entirely concerned with generating eye movements that compensate for rotations of the head and so tends to stabilize the eyes with respect to the environment. To achieve single vision, the two eyes must be aligned on the target; this has resulted in the evolution of a fourth subsystem that generates vergence eye movements. The nervous system controls all of these eye movements with considerable precision and with the ability to adapt its performance through motor learning processes [1].
By now all four eye movement subsystems have been studied thoroughly, and many of their characteristics and parameters are well known. The more interesting topics of investigation now are how these subsystems collaborate, or take over from one another in sequence [2]. This research is dedicated to revealing the interaction between saccadic and smooth pursuit eye movements when target tracking is interrupted by catch-up saccades. Saccades are fast, dart-like, conjugate eye movements used to position the fovea of the eyes in a time-optimal manner. They can be categorized into refixation saccades and microsaccades, the latter seen only during fixation. Normometric saccadic refixations can be either single-step or multi-step movements. Multi-step (usually double-step) saccades are executed in two steps: a primary and a corrective saccade. The primary saccade can be too small (hypometric) or too large (hypermetric) with respect to the intended target position. Only 30% of saccades are single-step and reach the new target precisely. Of the remaining multi-step saccades, only 30% are hypermetric. Normometric saccades demonstrate fairly common trajectories. Saccadic latency, or reaction time, typically refers to the time from the onset of a non-predictable step of target movement to the onset of the saccadic eye movement initiated to foveate the displaced target. It is approximately 180 to 200 msec, with a standard deviation of 30 msec. The relationship between peak velocity and amplitude of the saccade, called the main sequence, typically separates these movements from other limb or head movements. The main sequence shows a peak velocity of about 410 deg/sec for a saccade amplitude of 10 deg, 500 deg/sec for 15 deg, and 650 deg/sec for 20 deg. Catch-up saccades are seen in the eye-tracking trajectory of a smoothly moving target when the target velocity becomes too large for smooth pursuit eye movements. Catch-up saccades can also be executed to target jumps in a time-continuous trajectory. In this case, at some moments smooth pursuit eye movements are interrupted by quick eye jumps – catch-up saccades – which help maintain the target on the fovea. The parameters of catch-up saccades differ from those of refixation saccades, and they have so far been investigated only in one dimension [3].
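As a side illustration of the main-sequence figures quoted above, the following short Python sketch (ours, not the authors'; the linear interpolation between the three quoted points is purely illustrative, not a model from the text) tabulates the amplitude/peak-velocity pairs and interpolates between them:

import numpy as np

# Main-sequence values quoted above: (amplitude in deg, peak velocity in deg/sec)
amplitude = np.array([10.0, 15.0, 20.0])
peak_velocity = np.array([410.0, 500.0, 650.0])

def expected_peak_velocity(a):
    """Linearly interpolate the peak velocity for a saccade of amplitude a (deg)."""
    return np.interp(a, amplitude, peak_velocity)

print(expected_peak_velocity(12.0))  # ~446 deg/sec, between the quoted points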
II. AIM AND TASKS OF THE WORK

In this research we investigated quantitative parameters of catch-up saccades: the main sequence (the relationship between peak velocity Vp and amplitude A of catch-up saccades), the saccadic latency Td, the precision of the extrapolation of the target trajectory by smooth pursuit eye movements during the time interval Te between the target jump and the catch-up saccade onset, and the time interval Tr after which smooth pursuit behavior is restored following the catch-up saccade. Further investigation focused on finding the relationship between the quantitative parameters of the catch-up saccades and the velocity of the time-continuous target trajectory. For refixation saccades to stationary targets, the sensory signal is the position error between the target projection in the periphery of the retina and the fovea. When the target is moving and the eye and target velocities differ, retinal slip takes place. To overcome this slip and the processing delay, the oculomotor system uses prediction of future target motion to program catch-up saccades to the moving target. Previous studies did not clearly distinguish the influence of target velocity, position error, retinal slip and prediction in catch-up saccade programming [4].

III. METHOD

Movements of both eyes were recorded with the EyeGaze System eye tracker produced by LC Technologies Ltd. Healthy subjects without any known oculomotor abnormalities were recruited after informed consent. Among the five subjects, two authors participated in the experiments. Two-dimensional target trajectories were presented on a computer screen. In the first experiment, subjects were asked to track a target (a white spot) moving in the clockwise direction. Square-shape and circle-shape target trajectories with angular velocities in the range of 10-50 deg/sec were used. Two-dimensional positions, amplitudes and velocities of the catch-up saccades were recorded. In the second experiment, subjects were asked to track a target moving along a non-predictable time-continuous trajectory. At randomly chosen times, the time-continuous trajectory was interrupted by target jumps whose amplitudes and directions were also random. The trajectories of catch-up saccades executed as reactions to the target jumps were analyzed. Quantitative parameters of the catch-up saccades, such as reaction time (saccadic latency Td), amplitudes A and peak velocities Vp, were measured. The extrapolation ability during the time interval between the target jump and the catch-up saccade onset was also evaluated.

IV. EXPERIMENTAL RESULTS

Four typical examples of the onset positions of catch-up saccades, obtained during tracking of targets moving in the clockwise direction along square-shape and circle-shape trajectories, are shown in Fig. 1. For ten repeated trials, catch-up saccades were concentrated mostly at the corners of the squares.

Fig. 1 Two-dimensional positions of catch-up saccades obtained during tracking of the square-shape (A, B) and circle-shape (C, D) target trajectories. Velocities of the target trajectories were 20 deg/sec for (A, C) and 50 deg/sec for (B, D)

At these points the target trajectories change movement direction from horizontal to vertical. Therefore, the control system of the smooth pursuit eye movements is also forced to change tracking direction. The pair of horizontal eye globe muscles has to stop the horizontal movement while the vertical muscles start to act. This elicits tracking errors, which are eliminated by catch-up saccades. The two-dimensional positions of the catch-up saccades in Fig. 1 (C, D), obtained during tracking of circle-shape target trajectories, demonstrate even more clearly that, even when the eyesight moves along a circle, catch-up saccades occur when the target changes movement direction from vertical to horizontal and from horizontal to vertical. In Fig. 1 C, D, when the target velocities were increased to 50 deg/sec, the shape of the eyesight trajectories differs from the target trajectories: they have an angular shift in the clockwise direction, which demonstrates anticipation (focused on the future target position) of the predictable target trajectory. Investigation of the relationship between the peak velocities of catch-up saccades and the target velocities during pursuit of the predictable target trajectories reveals a close correlation between them (Table 1).
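To make the measurements described in the Method section concrete, here is a minimal Python sketch (our own illustration, not the authors' analysis code; the velocity threshold and the single-saccade-window assumption are ours) that estimates the latency Td, amplitude A and peak velocity Vp of a catch-up saccade following a target jump:

import numpy as np

def catch_up_saccade_parameters(t, eye_x, eye_y, jump_time, vel_threshold=80.0):
    """Estimate latency Td (s), amplitude A (deg) and peak velocity Vp (deg/s)
    of the catch-up saccade following a target jump at jump_time.
    Illustrative only: the threshold must exceed the ongoing pursuit velocity."""
    vx, vy = np.gradient(eye_x, t), np.gradient(eye_y, t)
    speed = np.hypot(vx, vy)                       # 2-D eye speed, deg/s
    in_saccade = (t > jump_time) & (speed > vel_threshold)
    idx = np.flatnonzero(in_saccade)
    if idx.size == 0:
        return None
    onset, offset = idx[0], idx[-1]                # crude single-saccade window
    td = t[onset] - jump_time
    amp = np.hypot(eye_x[offset] - eye_x[onset], eye_y[offset] - eye_y[onset])
    vp = speed[onset:offset + 1].max()
    return td, amp, vp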
Table 1 Relationship between target velocity Vt and catch-up saccade peak velocity Vp in deg/sec
Vt   10    20    30     40    50
Vp   43    66    95.5   128   164.5
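A quick arithmetic check of Table 1 (our own computation on the tabulated values, not an analysis from the paper) confirms the near-linear relation:

import numpy as np

vt = np.array([10, 20, 30, 40, 50])          # target velocity, deg/sec
vp = np.array([43, 66, 95.5, 128, 164.5])    # catch-up saccade peak velocity

slope, intercept = np.polyfit(vt, vp, 1)     # least-squares line
r = np.corrcoef(vt, vp)[0, 1]                # correlation coefficient
print(f"Vp = {slope:.2f}*Vt + {intercept:.1f}, r = {r:.3f}")
# -> Vp = 3.05*Vt + 7.9, r = 0.997, consistent with the close correlation claimed above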
In Fig. 2, the trajectories of catch-up saccades in the horizontal x and vertical y directions, executed as reactions to a target jump during time-continuous movement, are shown. The oculomotor system demonstrates common (reflexive) behavior across ten trials elicited by the same target jump in the non-predictable target trajectory.
Fig. 2 Ten trajectories of catch-up saccades plotted together in the horizontal (A) and vertical (B) directions

Fig. 3 Target trajectory (points 1, 2, 3, 6, 4) and eyesight trajectory (points 1, 2, 5, 6, 7, 8) obtained during two-dimensional tracking and plotted together

Fig. 4 The same target and eyesight trajectories as in Fig. 3, plotted in the horizontal (A) and vertical (B) directions

The target and eyesight trajectories in Figs. 3 and 4 can be divided into a few stages. From point 1 to point 2 we see precise pursuit of the initial target trajectory. From point 2 to point 3 the target trajectory makes a jump. The eyesight, in this situation, continues from point 2 to point 5 to extrapolate the initial target trajectory, which is seen more clearly in Fig. 4. From point 5 to point 6 the eyesight jumps to the new position on the continuing time-continuous target trajectory. At points 7 and 8 the eyesight makes corrective saccades. The most interesting finding of this research is that during the catch-up saccade the eyesight does not respond to the target jump itself, but catches the future position on the time-continuous target trajectory (point 6). This means that vision is active during the saccadic latency (points 2, 5).

Fig. 5 illustrates that the main sequences of the refixation and catch-up saccades differ, which suggests that refixation and catch-up saccades are controlled by different neural circuits.
Fig. 5 Main sequences of the refixation and catch-up saccades

The parameters of the catch-up saccades are listed in Table 2. The saccadic latency T is stable across all five subjects. The extrapolation of the former target trajectory, E, was evaluated as the angular displacement between the real eyesight trajectory and the extrapolated target trajectory at the end of the saccadic latency (point 5 in Fig. 4).

Table 2 Parameters of catch-up saccades

Subject   RZ     VL     GD     NR     AP
T, sec    0.25   0.28   0.24   0.22   0.26
E, deg    2.16   2.58   2.27   2.18   2.32
V. CONCLUSIONS
1. Catch-up saccades aid the oculomotor system in reducing tracking error. The main parameters that induce catch-up saccades are the position error between the target projection on the retina and the fovea, and the retinal slip.
2. During two-dimensional tracking, when the target changes movement direction and tracking errors therefore increase, catch-up saccades appear more often.
3. The saccadic latency for catch-up saccades (Td = 240 msec) is longer than for refixation saccades (T = 200 msec) and does not depend on the target velocity.
4. The peak velocity of catch-up saccades is strongly correlated with the velocity of the smooth pursuit target component.
5. The relationship between peak velocity and amplitude (the main sequence) for catch-up saccades differs from the main sequence for refixation saccades.
6. The landing points of catch-up saccades do not depend on the amplitudes of the target jumps but are related to the new target position at the offset of the catch-up saccade. The prediction of the future trajectory of the moving target requires more computation, which explains why the saccadic latency for catch-up saccades is longer than for refixation saccades.
REFERENCES
1. Ciuffreda K.J., Tannen B. Eye Movement Basics for the Clinician. St. Louis: Mosby, 1994. 266 p.
2. Laurutis V., Robinson D.A. Are fixational and pursuit eye movements created by two different neural circuits? // Mechanika. Kaunas: Technologija, 1996. No. 2(4). P. 43-47.
3. Laurutis V., Daunys G. Prediction features of the two-dimensional smooth pursuit eye movements // Medical & Biological Engineering and Computing, Vol. 34. The 10th Nordic-Baltic Conference on Biomedical Engineering. 1996. Tampere, Finland. P. 335-336.
4. Bennett S.J., Barnes G.R. Combined smooth and saccadic ocular pursuit during transient occlusion of a moving visual object // Exp. Brain Res. 2006. No. 168. P. 313-321.
5. Skavenski A.A., Steinman R.M. Control of eye position in the dark // Vision Res. 1970. No. 10(193). P. 319-326.
Spike Sorting Based on Dominant-Sets Clustering
D.A. Adamos1, N.A. Laskaris2, E.K. Kosmidis3 and G. Theophilidis1
1 Laboratory of Animal Physiology, School of Biology, Aristotle University of Thessaloniki (AUTh), 54 124, Greece
2 Laboratory of Artificial Intelligence & Information Analysis, Department of Informatics, AUTh, 54 124, Greece
3 Laboratory of Physiology, School of Medicine, AUTh, 54 124, Greece
Abstract— Spike sorting algorithms aim at decomposing complex extracellularly recorded electrical signals into independent events from single neurons in the vicinity of the electrode. The decision about the actual number of active neurons in a neural recording is still an open issue, with sparsely firing neurons and background activity being the most influential factors. We introduce a graph-theoretical algorithmic procedure that successfully resolves this issue. Dimensionality reduction coupled with a modern, efficient and progressively-executable clustering routine proved to achieve higher performance standards than popular spike sorting methods. Our method is validated extensively using simulated data for different levels of SNR.
Keywords— Spike sorting, dominant sets, graph-theoretic clustering, ISOMAP, manifold learning

I. INTRODUCTION
The basis of every spike sorting algorithm is the assumption that all the action potential traces of a particular neuron have nearly the same amplitude and shape. In extracellular recordings, the shapes of the recorded spike waveforms mainly depend on the neuron's geometry as well as its distance to the recording electrode. The goal of a spike sorting routine is to process and analyze the usually composite recorded signals in order to identify the number of active neurons and extract detailed time courses of their spiking activity. Related algorithms constitute the core methodological component in various situations, ranging from traditional neurophysiological experiments and clinical/neuroscience studies to cortex-machine interfaces. The battery of available spike sorting routines mainly includes automated techniques that analyze the recorded signals by means of their waveforms. At the initial stage, various linear techniques like Principal Component Analysis (PCA) and wavelets are used to reduce the dimensionality of the input data and enhance the signal content in the attempted representation. PCA-based projection is often restrained within the subspace spanned by the first two or three principal components, although the employment of more components has recently been reported to carry useful complementary information [1]. Alternatively, the wavelet transform is used for the decomposition of spike waveforms [2], featuring improved discrimination of localized shape divergences. In both of the above approaches, the representation scheme serves as preprocessing for a clustering framework that takes over the detection of distinct signal sources (i.e. active neurons) and the isolation of the corresponding spiking contributions. Regarding clustering, Bayesian [3] and Expectation-Maximization [4] methods have been proposed for spike sorting. Assuming a stationary Gaussian profile for the background noise, both methods consider Gaussian properties for the potential clusters residing in the PCA representation subspace. However, in general, background noise in neural recordings is non-stationary, non-Gaussian and carries a complex correlated profile with higher power at low frequencies. Synaptic coupling among neurons, superimposed field potentials and bursting neurons are some of the reasons that make a non-Gaussian cluster profile more plausible. To avoid Gaussian assumptions, clustering approaches featuring hierarchical [5] and nearest-neighbor [2] algorithms have also appeared. The latter employs a stochastic algorithm, known as super-paramagnetic clustering (SPC), which makes no prior assumptions about the statistical properties of the data. There are two important parameters when one evaluates a spike sorting classification process: the number of clusters (i.e. active neurons) decided by the process and the number of spikes assigned to each cluster. Both are well captured by Type I/II errors [4] in the spike sorting domain. Type I (false positive, FP) and Type II (false negative, FN) errors derive from traditional classification schemes and quantify misclassification. For example, the identification of fewer neurons than expected (under-clustering) leads to high false positive errors, while the opposite case (over-clustering) results in a large number of false negatives. Although a correct estimation of the number of clusters would limit both errors in cluster delineation, the identification of the actual number of active neurons is still an open issue. Even popular methods (like SPC) do not incorporate a sufficient treatment of this issue, leading to inappropriate results [6]. It is worth noting that, in laboratory practice, over-clustering is most often addressed in a less time-consuming way than under-clustering. In a previous work [1] we stressed the importance of this fact and proposed a new cluster error definition that favors over-clustering over under-clustering errors.
There are two main reasons why the identification of the number of neurons is still an open issue. The first relates to the adopted clustering techniques, which require the a priori definition of the number of groups. The second relates to the low SNR of the signals and the consequently poor representation (in the original or reduced space) of the waveforms. In this work, we propose a sequential, subtractive clustering algorithm based on graph-theoretic ideas and, in particular, the notion of the dominant set [7]. The algorithm works in an iterative fashion by operating on a neighborhood graph. It identifies the core of the graph using a replicator-dynamics formulation, removes it from the graph, and feeds the remaining graph back. The procedure terminates when all the data have been assigned to distinct groups or no further compact group can be formed. The suggested algorithm operates within a representation space derived via a fully compatible dimensionality reduction technique, namely Isometric feature mapping (ISOMAP) [8]. ISOMAP is known to reveal the intrinsic data variation and is therefore expected to be insensitive to random variations due to noise. Hence, the resulting low-dimensional parameterization of the waveform variation is expected to enhance the clustering performance. Section II describes the proposed methodology. Section III presents the comparative evaluation of our spike sorting technique against a popular alternative [2], using simulated data. Section IV concludes the paper.

II. METHOD
A. Low-dimensional Representation

The segments, extracted from the time series of extracellular recordings via a root-mean-square threshold detector, are considered the first raw representation of spiking activity. The ensemble of these loosely aligned spike waveforms can be thought of as a point-swarm residing in a multidimensional feature space with axes corresponding to signal amplitudes at particular latencies. Following the standard convention, the ith spike waveform is denoted xi(t), t = 1,2,…,T, i = 1,2,…,N (with t denoting discrete time or latency) and represented via the row-vector xi = [xi(1), xi(2),…, xi(t),…, xi(T)] ∈ R^T. Similarly, the whole ensemble is represented in data-matrix format as X[N x T] = [x1 x2 … xi … xN]. Most often an enhanced representation is sought by employing a dimensionality reduction technique (a denoising step). Here, we employ ISOMAP embedding in order to achieve a parsimonious representation in which the true degrees of freedom can be easily recognized and directly associated with the involved neurons.
The algorithmic details of the ISOMAP technique can be found elsewhere [9]. It starts by building a neighborhood graph over the data points in the original feature space. This graph is then used to compute all the geodesic inter-point distances. Multidimensional scaling is finally employed to derive a reduced coordinate space where these distances are preserved and therefore the intrinsic geometry of the data is faithfully represented. In our case, the ISOMAP routine provides a geometrical picture, within an r-D space, of the spike-waveform variation:

Y[N x r] = [y1 y2 … yi … yN] = ISOMAP(X, r)    (1),

where yi = [yi(1), yi(2),…, yi(r)] ∈ R^r. The derived point-swarm (for a 2-D example, see Figure 1b) is accompanied by the residual variance, a performance index ranging from 0% to 100% that indicates the reliability of the mapping (for an example, see Figure 1c). The optimal dimensionality ro can be sought (as a compromise between accuracy and compression) by computing multiple maps with increasing r (r ∈ [1,10]), drawing the diagram of residual variance as a function of r and applying the elbow rule.
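As an illustration of this step, the sketch below (our own, built on scikit-learn's Isomap rather than the authors' implementation; the neighborhood size and the residual-variance formula 1 - rho^2 between geodesic and embedded distances are assumptions consistent with the text) scans the embedding dimensionality for the elbow rule:

import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import Isomap

def residual_variance(X, r, n_neighbors=10):
    """Residual variance of an r-D ISOMAP embedding of the waveform matrix X
    (N x T): 1 - rho^2 between geodesic distances in the input space and
    Euclidean distances in the embedding."""
    iso = Isomap(n_neighbors=n_neighbors, n_components=r).fit(X)
    geodesic = iso.dist_matrix_[np.triu_indices(X.shape[0], k=1)]
    embedded = pdist(iso.embedding_)
    rho = np.corrcoef(geodesic, embedded)[0, 1]
    return 1.0 - rho ** 2

# Elbow rule: scan r = 1..10 and pick r_o where the curve flattens, e.g.
# variances = [residual_variance(X, r) for r in range(1, 11)]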
(2),
where is a real positive number estimated as 3 times the average of the mean distance of all dij. Consequently, we
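A direct transcription of Eq. (2) and the stated choice of σ (a sketch under those assumptions; function and variable names are ours):

import numpy as np
from scipy.spatial.distance import pdist, squareform

def similarity_matrix(Y):
    """Build the edge-weight matrix A of Eq. (2) from the ISOMAP coordinates Y.
    sigma = 3 * mean pairwise distance, as stated in the text; zero diagonal
    because the graph has no self-loops."""
    D = squareform(pdist(Y))          # Euclidean distances d_ij
    sigma = 3.0 * D[D > 0].mean()     # scale parameter
    A = np.exp(-D / sigma)
    np.fill_diagonal(A, 0.0)          # no self-loops
    return A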
Consequently, we represent the N-node graph G = (V, E, w) with the corresponding similarity matrix A = (aij), with i representing the i-th data point and the weight of edge (i, j) being set to w(i, j): aij = w(i, j) if (i, j) ∈ E and aij = 0 otherwise. As pointed out in [7], the cohesiveness of a cluster is measured by the overall similarity of a dominant set; that is, a good cluster contains elements that have large values connecting one another in the similarity matrix. Hence, the problem of finding a compact cluster is formulated as the problem of finding a vector x that maximizes the following objective function:

f(x) = x^T A x    (3),

subject to x ∈ Δ, where Δ = {x ∈ R^n : x ≥ 0 and e^T x = 1}. Thus, a maximally cohesive cluster denotes the most dominant solution set, which is iteratively subtracted from the N-node graph G. At the end of the iterations, when all the data have been classified, the overall cohesiveness f(x) of each step is exploited for the decision about the actual number of active neurons (for an example, see Figure 1d).
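A compact sketch of this peel-off procedure (ours, following the replicator-dynamics formulation of dominant sets in [7]; the tolerances, support cutoff and stopping rule are illustrative assumptions, not the authors' exact settings):

import numpy as np

def dominant_set(A, tol=1e-6, max_iter=2000, cutoff=1e-4):
    """Extract one dominant set via discrete replicator dynamics,
    x <- x * (A x) / (x^T A x), which maximizes f(x) = x^T A x over the simplex.
    Returns the member indices and the cohesiveness f(x)."""
    n = len(A)
    x = np.full(n, 1.0 / n)                  # start at the simplex barycenter
    for _ in range(max_iter):
        x_new = x * (A @ x)
        x_new /= x_new.sum()                 # the sum equals x^T A x
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    members = np.flatnonzero(x > cutoff)     # support of x = the dominant set
    return members, float(x @ A @ x)

def peel_clusters(A, min_size=2):
    """Iteratively extract and remove dominant sets, as described in Section II.B."""
    remaining = np.arange(len(A))
    clusters = []
    while remaining.size >= min_size:
        members, cohesiveness = dominant_set(A[np.ix_(remaining, remaining)])
        if members.size < min_size:
            break
        clusters.append((remaining[members], cohesiveness))
        remaining = np.delete(remaining, members)
    return clusters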
III. RESULTS

For the detailed evaluation of our method, we generated
spike waveforms representing neural activity from three separate neurons. Aiming at realistic simulations, we utilized real action potentials from respiratory motoneurons which had been recorded in vitro with a single “hook” electrode from the peripheral nervous system of the beetle Tenebrio molitor [10]. Three such action potential waveforms served as the initial templates. These waveforms were recorded extracellularly with a sampling frequency of 30 kHz and a time duration of 4 ms (120 samples). The templates were replicated multiple times and added to segments of background noise extracted from the same recording (randomly extracted from latencies during which the spike detector was silent). In order to pursue evaluation results under different SNR levels, each extract of real background noise was modulated by a variable, positive amplitude factor. The SNR of the resulting waveform (template plus noise segment) was then defined as follows:
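The defining equation itself did not survive in this copy of the text. Purely as a placeholder assumption (a common convention for such simulations, not necessarily the authors' exact definition), one may relate the template's peak amplitude to the standard deviation of the scaled noise:

SNR = max_t |s(t)| / (α · σ_noise)

where s(t) denotes the spike template, σ_noise the standard deviation of the raw noise segment, and α (our notation) the positive amplitude factor mentioned above.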
Three hundred (300) waveforms per template were generated, yielding 900 single spikes corresponding to the three neural classes of our data set. In addition, paired combinations among the three templates were realized by first inducing variable delays and then adding noise segments using amplitude factors chosen so as to achieve a given SNR level. In this way 150 double-overlap waveforms were generated, 50 for each template pair. Finally, 50 more waveforms were added, corresponding to triple overlaps, with their SNR level adjusted accordingly. The complete data set of 1100 waveforms is shown in Figure 1a.

Figure 1: a) The overall simulated data set; the SNR is 5 for all waveforms. b) 2-D ISOMAP representation of the data set. c) Residual variance diagram of ISOMAP. The optimal dimensionality (ro = 4) is selected using the elbow rule. d) Group-cohesiveness values, iteratively computed for each subtracted cluster. e) Classification results visualized within the 2-D ISOMAP space. Colors indicate classes, while unclassified data are left black. f) Classification results presented in the original data domain using the corresponding colors.

The 2-D ISOMAP representation of this data set is shown in Figure 1b. Considering the high-density areas of this point-diagram, there are apparently three clusters (pointing at the existence of three active neurons). The residual variance graph, which is a supplementary output of the ISOMAP routine, is included in Fig. 1c and clearly designates the first four dimensions as necessary for a faithful low-dimensional representation of the data. Hence, the first four ISOMAP coordinates were used to represent each spike waveform, and dominant-set clustering was applied to the new data matrix Y[1100 x 4]. The overall cohesiveness for every cluster subtracted by the iterative process is shown in Figure 1d. The high ranking of the first three groups denotes the presence of three active neurons in the data. The straight line in the figure depicts the cluster quality for the whole graph taken as a single component. For the selected groups, the classification algorithm assigns their waveforms to the corresponding classes, while the rest are left unclassified. This data-sieving step is visualized in ISOMAP space (Figure 1e) and in the original data domain (Figure 1f) using a three-level color-code. Type I/II errors were used for the performance evaluation. The total number of FP and FN is referred to the identified classes. The adopted error rate accounts for the total number of false positive and false negative spikes:
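The formula itself was also lost here; from the surrounding description (total false positives and false negatives over the identified classes), it plausibly reads as follows — our reconstruction, with N denoting the total number of spikes:

error rate = Σ_i (FP_i + FN_i) / N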
with i running over the number of single-spike classes (i.e. the number of identified neurons). For comparison purposes, the measurements corresponding to Waveclus [2] are also included. Both Waveclus and our proposed methodology classified the spikes of the data set into three main classes and an additional noise class. The results of this classification are shown in Figure 2, using 10 different realizations of the data set. Averaged values for the error rates and the FP-FN errors are shown as a function of SNR level. It can be seen that our proposed algorithm achieves the lowest error rate, which is lower than 1% when the SNR is higher than 4.
Figure 2: Average error rates for our method and Waveclus.

IV. CONCLUSIONS
The present work introduces a graph-theoretical approach to spike sorting; it iteratively pursues a dominant set in the graph (the most dominant in each iteration) and then removes it, until all the data have been clustered. The efficiency of the proposed clustering method has been combined with the robustness of the ISOMAP representation. The hybrid scheme has been extensively evaluated. The results indicate high robustness to noise and a measured performance that goes beyond contemporary standards.
REFERENCES
1. Adamos DA, Kosmidis EK, Theophilidis G (2008) Performance evaluation of PCA-based spike sorting algorithms. Computer Methods and Programs in Biomedicine 91(3):232-244
2. Quian Quiroga R, Nadasdy Z, Ben-Shaul Y (2004) Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comp 16:1661-1687
3. Lewicki M (1998) A review of methods for spike sorting: the detection and classification of neural action potentials. Network: Computation in Neural Systems 9:R53-R78
4. Harris KD, Henze DA, Csicsvari J, Hirase H, Buzsaki G (2000) Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J Neurophysiol 84:401-414
5. Fee MS, Mitra PP, Kleinfeld D (1996) Automatic sorting of multiple unit neuronal signals in the presence of anisotropic and non-Gaussian variability. Journal of Neuroscience Methods 69:175-188
6. Herbst JA, Gammeter S, Ferrero D, Hahnloser RH (2008) Spike sorting with hidden Markov models. J Neurosci Methods 174(1):126-134
7. Pavan M, Pelillo M (2007) Dominant sets and pairwise clustering. IEEE Trans. Pattern Anal. Mach. Intell. 29(1):167-172
8. Tenenbaum JB, de Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319-2323
9. Laskaris NA, Ioannides AA (2002) Semantic geodesic maps: a unifying geometrical approach for studying the structure and dynamics of single trial evoked responses. Clinical Neurophysiology 113(8):1209-1226
10. Zafeiridou G, Theophilidis G (2004) The action of the insecticide imidacloprid on the respiratory rhythm of an insect: the beetle Tenebrio molitor. Neuroscience Letters 365(3):205-209
Author: Dimitrios A. Adamos
Institute: School of Biology, Aristotle University
Street: Aristotle University Campus, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
Differentiation of human bone marrow stromal cells onto gelatin cryogel scaffolds
L. Fassina1,7, E. Saino2,7, L. Visai2,7, M.A. Avanzini3, M.G. Cusella De Angelis4,7, F. Benazzo5,7, S. Van Vlierberghe6, P. Dubruel6, G. Magenes1,7
1 Dipartimento di Informatica e Sistemistica, University of Pavia, Pavia, Italy
2 Dipartimento di Biochimica, University of Pavia, Pavia, Italy
3 Oncoematologia Pediatrica, IRCCS San Matteo, University of Pavia, Pavia, Italy
4 Dipartimento di Medicina Sperimentale, University of Pavia, Pavia, Italy
5 Dipartimento SMEC, IRCCS San Matteo, University of Pavia, Pavia, Italy
6 Polymer Chemistry and Biomaterials Research Group, University of Ghent, Ghent, Belgium
7 Centre for Tissue Engineering (C.I.T., http://cit.unipv.it/cit), University of Pavia, Pavia, Italy

Abstract— Biomaterials have been widely used in reconstructive bone surgery to heal critical-size long bone defects due to trauma, tumor resection, and tissue degeneration. In particular, gelatin cryogel scaffolds are promising new biomaterials owing to their biocompatibility; in addition, the in vitro modification of biomaterials with osteogenic signals enhances tissue regeneration in vivo, suggesting that biomaterial modification could play an important role in tissue engineering. In this study we followed a biomimetic strategy in which differentiated human bone marrow stromal cells built their extracellular matrix onto gelatin cryogel scaffolds. In comparison with control conditions without differentiating medium, the use of a differentiating medium increased, in vitro, the coating of the gelatin cryogel with bone proteins (decorin, osteocalcin, osteopontin, type-I collagen, and type-III collagen). The differentiating medium aimed at obtaining a better in vitro modification of the gelatin cryogel in terms of cell colonization and coating with osteogenic signals, such as bone matrix proteins. The modified biomaterial could be used, in clinical applications, as an implant for bone repair.
Keywords— Gelatin cryogel, bone marrow stromal cell, cell proliferation, bone extracellular matrix, surface modification, biomimetics.
I. INTRODUCTION

One of the key challenges in reconstructive bone surgery is to provide living constructs that possess the ability to integrate into the surrounding tissue. Bone graft substitutes, such as autografts, allografts, xenografts, and biomaterials, have been widely used to heal critical-size long bone defects and maxillofacial skeleton defects due to trauma, tumor resection, congenital deformity, and tissue degeneration. The biomaterials used to build 3D scaffolds for bone tissue engineering include, for instance, hydroxyapatite [1], partially demineralized bone [2], and biodegradable porous polymer-ceramic matrices [3]. These osteoinductive and osteoconductive biomaterials are well suited to the typical tissue engineering approach, which involves the seeding and in vitro culturing of cells within a porous scaffold before implantation. Gorna and Gogolewski [4, 5] have drawn attention to the ideal features of a bone graft substitute: it should be porous, with interconnected pores of adequate size allowing for the ingrowth of capillaries and perivascular tissues; it should attract mesenchymal stem cells from the surrounding bone and promote their differentiation into osteoblasts; it should avoid shear forces at the interface between bone and bone graft substitute; and it should be biodegradable.
In this study, following the preceding “golden rules” of Gorna and Gogolewski, we selected gelatin cryogel [6-8] as the bone graft substitute and, applying a differentiating medium to bone marrow stromal cells, attempted to populate it with extracellular matrix and differentiated osteoblasts. Gelatin cryogel [6-8] is a promising new biomaterial owing to its biocompatibility. The in vitro modification of gelatin cryogel with osteogenic signals of the transforming growth factor-β superfamily and with bone morphogenetic proteins enhances tissue regeneration in vivo [9], suggesting that the modification of gelatin cryogel could play an important role in tissue engineering. As a consequence, aiming in future work at accelerated and enhanced bone regeneration in vivo, in the present tissue engineering study we show a particular “biomimetic strategy” that consists in the in vitro modification of gelatin cryogel with differentiated bone marrow stromal cells and their extracellular matrix produced in situ. In other words, using a differentiating medium, our aim was to enhance a bone marrow cell culture onto a gelatin cryogel, that is, to coat the gelatin cryogel with physiological and biocompatible cell-matrix layers. Using this approach, the in vitro cultured material could theoretically be used, in clinical applications, as an osteointegrable implant.

II. MATERIALS AND METHODS

Gelatin cryogel disks: Bovine gelatin cryogel disks (diameter, 10 mm; height, 2 mm) were kindly provided by the Polymer Chemistry and Biomaterials Research Group, University of Ghent (Ghent, Belgium) [6-8] (Fig. 1).
Fig. 1 Unseeded gelatin cryogel [6-8]
Cells from bone marrow aspirates: Mononuclear cells were isolated from bone marrow aspirates (30 ml) by density gradient centrifugation in Ficoll (density, 1.077 g/ml) (Lymphoprep, Nycomed Pharma) and plated in non-coated 75–175 cm² polystyrene culture flasks (Corning Costar, Celbio) at a density of 160,000 cells/cm² [10]. The culture condition was based on the basal medium Mesencult (Stem Cell Technologies) supplemented with 2 mM L-glutamine, 50 µg/ml gentamycin, and 10% fetal calf serum. Cultures were maintained at 37°C in a humidified atmosphere containing 5% CO2. After 48 h, non-adherent cells were discarded and the culture medium was replaced twice a week. After reaching 80% confluence as a minimum, the cells were harvested and re-plated for expansion at a density of 4,000 cells/cm² until the 5th passage. The colony-forming unit-fibroblast (CFU-F) assay was performed as described previously [11]. CFU-F formation was examined after incubation for 12 days in a humidified atmosphere (37°C, 5% CO2); the clonogenic efficiency was calculated as the number of colonies per 10⁶ bone marrow mononuclear cells seeded. According to the International Society for Cellular Therapy nomenclature of mesenchymal progenitors, the cells cultured for this study were defined as multipotent stromal cells [12].

Cell culture: To study the osteogenic differentiation potential, the obtained bone marrow stromal cells were then cultured in DMEM (Invitrogen) supplemented with 10% fetal bovine serum, 50 µg/ml penicillin-streptomycin, and 1% L-glutamine. After reaching 80% confluence as a minimum, the cells were harvested and re-plated for expansion at a density of 2.5×10⁴ cells/cm². The cells were cultured at 37°C with 5% CO2; three fifths of the medium were renewed every 3 days, and then the cells were routinely trypsinized, counted, and seeded onto the gelatin cryogel disks.

Cell seeding: To anchor the gelatin cryogel disks to standard well-plates, a 3% (w/v) agarose solution was prepared and sterilized in an autoclave; during cooling, at 45°C, 100 µl of agarose solution were poured inside the wells to hold the placed gelatin disks and to fix them after complete cooling. The well-plates with the biomaterial disks were sterilized by ethylene oxide at 38°C for 8 h at 65% relative humidity. After 24 h of aeration in order to remove the residual ethylene oxide, the disks were ready. A suspension of 5×10⁵ bone marrow stromal cells in 400 µl was added onto the top of each disk and, after 0.5 h, 600 µl of culture medium were added to cover the disks. We utilized two types of culture medium. For the control well-plate, we used the "proliferative" medium, i.e. DMEM supplemented with 10% fetal bovine serum, 50 µg/ml penicillin-streptomycin, and 1% L-glutamine. For the "differentiating" well-plate, we utilized the proliferative medium for the first two weeks only, and then the "differentiative" one, i.e. the same as above to which ascorbic acid (50 µg/ml), dexamethasone (10⁻⁷ M), and β-glycerophosphate (5 mM, from day 21) were added. The duration of the control and differentiating cultures was 6 weeks and the culture media were changed every 3 days.

Scanning electron microscopy (SEM) analysis: Gelatin cryogel disks were fixed with 2.5% (v/v) glutaraldehyde solution in 0.1 M Na-cacodylate buffer (pH 7.2) for 1 h at 4°C, washed with Na-cacodylate buffer, and then dehydrated at room temperature in a graded ethanol series up to 100%. The samples were kept in 100% ethanol for 15 min, and then critical point-dried with CO2.
The specimens were sputter-coated with gold and observed at 500× magnification with a Leica Cambridge Stereoscan 440 microscope at 8 kV.

DNA content: At the end of the culture period, the cells were lysed by a freeze-thaw method in sterile deionized distilled water and the released DNA content was evaluated with a fluorometric method (Molecular Probes). A DNA standard curve [13], obtained from a known amount of cells, was used to express the results as cell number per disk.

Set of rabbit polyclonal antisera: L.W. Fisher (National Institutes of Health, Bethesda, MD) generously provided us with the following rabbit polyclonal antibody immunoglobulins G: anti-osteocalcin, anti-type-I collagen, anti-type-III collagen, anti-decorin, and anti-osteopontin (antisera LF-32, LF-67, LF-71, LF-136, and LF-166, respectively) [14].

Set of purified proteins: Decorin [15], osteocalcin (immunoenzymatic assay kit, BT-480, Biomedical Technologies), osteopontin (immunoenzymatic assay kit, 900-27, Assay Designs), type-I collagen [16], and type-III collagen (Sigma-Aldrich).

Confocal microscopy: At the end of the culture period, the disks were fixed with 4% (w/v) paraformaldehyde solution in 0.1 M phosphate buffer (pH 7.4) for 8 h at room temperature and washed with PBS (137 mM NaCl, 2.7 mM KCl, 4.3 mM Na2HPO4, 1.4 mM KH2PO4, pH 7.4) three times for 15 min. The disks were then blocked by incubation with PAT (PBS containing 1% [w/v] bovine serum albumin and 0.02% [v/v] Tween 20) for 2 h at room temperature and washed. L.W. Fisher's antisera were used as primary antibodies at a dilution of 1:1000 in PAT. The incubation with the primary antibodies was performed overnight at 4°C, whereas the negative controls were based upon incubation, overnight at 4°C, with PAT instead of the primary antibodies. The disks and the negative controls were washed and incubated with Alexa Fluor 488 goat anti-rabbit IgG (H+L) (Molecular Probes) at a dilution of 1:500 in PAT for 1 h at room temperature. At the end of the incubation, the disks were washed in PBS, counterstained with Hoechst solution (2 µg/ml) to stain the cellular nuclei, and then washed. The images were taken under blue excitation with a TCS SPII confocal microscope (Leica Microsystems) equipped with a digital image capture system at 100× magnification.

Extraction of the extracellular matrix proteins from the cultured disks and ELISA assay: At the end of the culture period, in order to evaluate the amount of extracellular matrix constituents over the gelatin surface, the disks were washed extensively with sterile PBS (137 mM NaCl, 2.7 mM KCl, 4.3 mM Na2HPO4, 1.4 mM KH2PO4, pH 7.4) in order to remove the culture medium, and then incubated for 24 h at 37°C with 1 ml of sterile sample buffer (1.5 M Tris-HCl, 60% [w/v] sucrose, 0.8% [w/v] sodium dodecyl sulphate, pH 8.0). At the end of the incubation period, the sample buffer aliquots were removed and the total protein concentration in the two culture systems was evaluated by the BCA Protein Assay Kit (Pierce Biotechnology). The total protein concentration was 198 ± 35 µg/ml in the control culture and 332 ± 51 µg/ml in the differentiating culture (p<0.05). The calibration curves to measure decorin, osteocalcin, osteopontin, type-I collagen, and type-III collagen were obtained by ELISA with L.W. Fisher's antisera. The amount of extracellular matrix constituents on the disks is expressed as fg/(cell·disk).
Statistics: The number of disks was 24 in each repeated experiment (12 disks in the control culture and 12 disks in the differentiating culture). The experiment was repeated 4 times. Results are expressed as mean ± standard deviation. In order to compare the results between the two culture media, one-way analysis of variance (ANOVA) with a post hoc Bonferroni test was applied, electing a significance level of 0.05.

III. RESULTS

The human bone marrow stromal cells were seeded onto gelatin cryogel disks, and then cultured without or with a differentiating stimulus for 6 weeks. These culture methods permitted the study of the cells as they modified the biomaterial through proliferation and coating with extracellular matrix. The cell-matrix distribution was compared between the two culture media.

Microscope analysis: In comparison to the control condition, SEM images revealed that, due to the differentiative medium, the bone marrow stromal cells differentiated and built their extracellular matrix over the available gelatin surface (Fig. 2). At the end of the culture period, the control culture showed few cells essentially not surrounded by extracellular matrix; therefore, wide biomaterial regions remained devoid of cell-matrix complexes (Fig. 2A). In contrast, the differentiative medium caused a wide-ranging coating of the biomaterial surface: several stromal cells differentiated and the biomaterial tended to be hidden by cell-matrix layers (Fig. 2B).
Fig. 4 Immunolocalization of decorin in the control (A) and differentiating (B) cultures, 100× magnification
The differentiative properties of the medium were confirmed by the measurement of the DNA content at the end of the culture period: in the control culture the cell number per disk grew to 5.72×10⁵ ± 4.3×10⁴ and in the differentiating culture to 5.17×10⁵ ± 3.7×10⁴, with p>0.05.

Extracellular matrix extraction: In order to evaluate the amount of bone extracellular matrix on the gelatin cryogel disks, an ELISA of the extracted matrix was performed: at the end of the culture period, in comparison with the control culture, the differentiative stimulation significantly increased the surface coating with decorin, osteocalcin, osteopontin, type-I collagen, and type-III collagen (p<0.05) (Table 1).
Table 1 Bone matrix constituents onto gelatin cryogel [fg/(cell·disk)]
Fig. 2 SEM images of the control (A) and differentiating (B) cultures; bar = 10 µm, 500× magnification
Constituent          Control culture (C)   Differentiating culture (D)   D/C
Decorin              0.45 ± 0.11           3.01 ± 0.12                   6.68
Osteocalcin          0.10 ± 0.34           5.20 ± 0.17                   52.00
Osteopontin          0.38 ± 0.23           9.30 ± 0.33                   24.47
Type-I collagen      7.26 ± 0.14           21.60 ± 0.21                  2.97
Type-III collagen    1.32 ± 0.27           4.17 ± 0.25                   3.15
Table note: p<0.05 in all “Control” vs. “Differentiating” comparisons
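A minimal MATLAB sketch of the group comparison described in the Statistics paragraph (the vectors ctrlVals and diffVals, holding the 12 + 12 per-disk values of one constituent, are hypothetical; requires the Statistics Toolbox):

    % Compare the two culture media by one-way ANOVA with Bonferroni post hoc
    vals = [ctrlVals(:); diffVals(:)];          % 12 control + 12 differentiating disks
    grp  = [repmat({'Control'}, numel(ctrlVals), 1); ...
            repmat({'Differentiating'}, numel(diffVals), 1)];
    [p, ~, stats] = anova1(vals, grp, 'off');   % one-way ANOVA, no display
    c = multcompare(stats, 'ctype', 'bonferroni', 'display', 'off');
    significant = p < 0.05;                     % significance level of 0.05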
The immunolocalization of type-I collagen and decorin showed the differentiating effects in terms of a more intense build-up of the extracellular matrix (Figs. 3 and 4). The immunolocalization of osteocalcin, osteopontin, and type-III collagen revealed similar results (data not shown).
Fig. 3 Immunolocalization of type-I collagen in the control (A) and differentiating (B) cultures, 100× magnification
IV. DISCUSSION

The aim of this study was the in vitro modification of a gelatin cryogel with extracellular matrix and osteoblasts differentiated from bone marrow stromal cells, in order to make the biomaterial more biocompatible for bone repair in vivo. A discussion of the concept of "biocompatibility" is necessary. When a biomaterial is implanted in a biological environment, a non-physiologic layer of adsorbed proteins mediates the interaction of the surrounding host cells with the material surface. The body interprets this protein layer as a foreign invader that must be walled off in an avascular and tough collagen sac. Therefore, biomedical surfaces must be developed so that the host tissue can recognize them as "self". Castner and Ratner conceive the "biocompatible surfaces" of "biomaterials that heal" as surfaces with the character of a "clean, fresh wound" [17]: these "self-surfaces" could elicit a physiological inflammatory reaction leading to normal healing. In this study we have followed a biomimetic
strategy where the seeded bone marrow stromal cells built a biocompatible surface made of bone matrix [18]. To enhance the coating of the biomaterial surface, a differentiative stimulus was applied to the seeded biomaterial. The differentiating medium did not increase cell proliferation, but it significantly enhanced the synthesis of type-I collagen, decorin, osteopontin, osteocalcin, and type-III collagen, which are fundamental constituents of the physiological bone matrix. In particular, type-I collagen is the most important and abundant structural protein of the bone matrix; decorin is a proteoglycan considered a key regulator of the assembly and function of many extracellular matrix proteins, with a major role in the lateral growth of the collagen fibrils, delaying the lateral assembly on the surface of the fibrils [19]; osteopontin is an extracellular glycosylated bone phosphoprotein secreted at the early stages of osteogenesis, before the onset of mineralization; it binds calcium, it is likely to be involved in the regulation of crystal growth, and, through specific interaction with the vitronectin receptor, it promotes the attachment of cells to the matrix [20]; osteocalcin is secreted after the onset of mineralization and binds to bone minerals [21]. In this study the differentiating method achieved the biomimetic modification of the material, whose surface was coated by differentiated osteoblasts and by a layer of bone matrix. The use of autologous bone marrow stromal cells showed the potential of the differentiative medium and the worth of the new gelatin cryogel scaffold, ensuring total immunocompatibility and complete biocompatibility with the patient, respectively. In conclusion, we theorize that the cultured "self-surface" could be used fresh, that is, rich in autologous cells and matrix, or after sterilization with ethylene oxide, that is, rich only in autologous matrix. In future work, we intend to use our constructs, which are rich in autologous matrix, as a simple, storable tissue-engineering product for bone repair.
ACKNOWLEDGMENT

This work was supported by Fondazione Cariplo Grants (2004.1424/10.8485 and 2006.0581/10.8485) to F.B. and by FIRB Grant (RBIP06FH7J) from the Italian Ministry of Education, University and Research to M.G.C. De A.
REFERENCES

1. Nishikawa M, Myoui A, Ohgushi H et al. (2004) Bone tissue engineering using novel interconnected porous hydroxyapatite ceramics combined with marrow mesenchymal cells: quantitative and three-dimensional image analysis. Cell Transplant 13:367–376
2. Mauney JR, Blumberg J, Pirun M et al. (2004) Osteogenic differentiation of human bone marrow stromal cells on partially demineralized bone scaffolds in vitro. Tissue Eng 10:81–92
3. Devin JE, Attawia MA, Laurencin CT (1996) Three-dimensional degradable porous polymer-ceramic matrices for use in bone repair. J Biomater Sci Polym Ed 7:661–669
4. Gorna K, Gogolewski S (2002) Biodegradable polyurethanes for implants. II. In vitro degradation and calcification of materials from poly(ε-caprolactone)-poly(ethylene oxide) diols and various chain extenders. J Biomed Mater Res 60:592–606
5. Gorna K, Gogolewski S (2003) Preparation, degradation, and calcification of biodegradable polyurethane foams for bone graft substitutes. J Biomed Mater Res 67:813–827
6. Van Vlierberghe S, Cnudde V, Dubruel P et al. (2007) Porous gelatin hydrogels: 1. Cryogenic formation and structure analysis. Biomacromolecules 8:331–337
7. Dubruel P, Unger R, Van Vlierberghe S et al. (2007) Porous gelatin hydrogels: 2. In vitro cell interaction study. Biomacromolecules 8:338–344
8. Van Vlierberghe S, Dubruel P, Lippens E et al. (2009) Correlation between cryogenic parameters and physico-chemical properties of porous gelatin cryogels. J Biomater Sci Polym Ed 20:1417–1438
9. Ripamonti U, Ferretti C, Heliotis M (2006) Soluble and insoluble signals and the induction of bone formation: molecular therapeutics recapitulating development. J Anat 209:447–468
10. Bernardo ME, Avanzini MA, Perotti C et al. (2007) Optimization of in vitro expansion of human multipotent mesenchymal stromal cells for cell-therapy approaches: further insights in the search for a fetal calf serum substitute. J Cell Physiol 211:121–130
11. Pozzi S, Lisini D, Podestà M et al. (2006) Donor multipotent mesenchymal stromal cells may engraft in pediatric patients given either cord blood or bone marrow transplantation. Exp Hematol 34:934–942
12. Horwitz EM, Le Blanc K, Dominici M et al. (2005) Clarification of the nomenclature for MSC: The International Society for Cellular Therapy position statement. Cytotherapy 7:393–395
13. Fassina L, Visai L, Benazzo F et al. (2006) Effects of electromagnetic stimulation on calcified matrix production by SAOS-2 cells over a polyurethane porous scaffold. Tissue Eng 12:1985–1999
14. Fisher LW, Stubbs JT III, Young MF (1995) Antisera and cDNA probes to human and certain animal model bone matrix noncollagenous proteins. Acta Orthop Scand Suppl 266:61–65
15. Vogel KG, Evanko SP (1987) Proteoglycans of fetal bovine tendon. J Biol Chem 262:13607–13613
16. Rossi A, Zuccarello LV, Zanaboni G et al. (1996) Type I collagen CNBr peptides: species and behavior in solution. Biochemistry 35:6048–6057
17. Castner DG, Ratner BD (2002) Biomedical surface science: Foundations to frontiers. Surf Sci 500:28–60
18. Fassina L, Visai L, Cusella De Angelis MG et al. (2007) Surface modification of a porous polyurethane through a culture of human osteoblasts and an electromagnetic bioreactor. Technol Health Care 15:33–45
19. Sini P, Denti A, Tira ME et al. (1997) Role of decorin on in vitro fibrillogenesis of type I collagen. Glycoconj J 14:871–874
20. Kasugai S, Nagata T, Sodek J (1992) Temporal studies on the tissue compartmentalization of bone sialoprotein (BSP), osteopontin (OPN), and SPARC protein during bone formation in vitro. J Cell Physiol 152:467–477
21. Hauschka PV, Lian JB, Cole DE et al. (1989) Osteocalcin and matrix Gla protein: vitamin K-dependent proteins in bone. Physiol Rev 69:990–1047

Author: Lorenzo Fassina, Ph.D.
Institute: Dipartimento di Informatica e Sistemistica
Street: Via Ferrata 1
City: 27100 Pavia
Country: Italy
E-mail: [email protected]
Simple Coherence vs. Multiple Coherence: A Somatosensory Evoked Response Detection Investigation

D.B. Melges, A.M.F.L. Miranda de Sá, and A.F.C. Infantosi

Federal University of Rio de Janeiro/Biomedical Engineering Program, COPPE, Rio de Janeiro, Brazil

Abstract— In this work the performance of two techniques useful for detecting the somatosensory evoked potential (SEP), namely the Magnitude-Squared Coherence (MSC or Simple Coherence) and its multivariate version, the Multiple Coherence (MC), was compared. Electroencephalographic (EEG) signals during somatosensory stimulation were collected from forty adult volunteers without history of neurological pathology using the 10-20 International System. All leads were referenced to the earlobe average. The stimulation was carried out by means of current pulses (200 µs width) applied to the right posterior tibial nerve (motor threshold intensity level) at the rate of 5 Hz. The response detection was based on rejecting the null hypothesis of response absence (M = 100 epochs and significance level α = 0.05). The MSC was applied to the derivations [Cz], [Fz], [C3] and [C4], usually employed in SEP recordings when bipolar derivations are used. The MC was applied to the pairs [Cz][Fz] and [C3][C4]. The results indicated that if two derivations are available, it should be better to use the MC applied to both leads than the MSC applied to each one.

Keywords— Somatosensory evoked potential, Magnitude-Squared Coherence, Multiple Coherence, Objective Response Detection.
I. INTRODUCTION

The somatosensory evoked potential (SEP) is useful for neurological assessment, for both clinical and intra-operative monitoring purposes. The application of statistical methods known as Objective Response Detection (ORD) techniques has been widely investigated to overcome the subjective component of the morphological (visual) analysis made by the specialist. These techniques allow inferring the presence (or absence) of a stimulus response with a maximum false-positive rate defined a priori, which is the significance level of the statistical test applied. However, for EEG with a fixed signal-to-noise ratio, an increase in the probability of detecting a stimulus response using ORD techniques can only be achieved by augmenting the recording time [1] and, hence, the exam duration. Nevertheless, this should be avoided, especially in intra-operative monitoring, when the speed of detection is critical. In order to overcome this limitation, many works [1-4] studied the possibility of augmenting the probability of
detection by using information from more than one EEG derivation. Therefore, in an ORD approach, multivariate extensions of the ORD techniques (MORD) should be employed. In this work, we compare the performance of Simple and Multiple Coherence applied to EEG during somatosensory stimulation.
II. MATERIAL AND METHODS

A. Multiple Coherence (MC)

The MC between a periodic signal and a set of N random ones (y_j[k], j = 1..N) is given by [2]:

$$\hat{\kappa}_N^2(f) = \frac{\mathbf{V}^H(f)\,\hat{\mathbf{S}}_{yy}^{-1}(f)\,\mathbf{V}(f)}{M} \qquad (1)$$

where $\mathbf{V}(f) = \left[\sum_{i=1}^{M} Y_{1i}^*(f)\;\; \sum_{i=1}^{M} Y_{2i}^*(f)\;\cdots\;\sum_{i=1}^{M} Y_{Ni}^*(f)\right]^T$; the superscripts H and T denote, respectively, the Hermitian and the matrix transpose; and the pth-row, qth-column element of $\hat{\mathbf{S}}_{yy}(f)$ is $\hat{S}_{y_p y_q}(f) = \sum_{i=1}^{M} Y_{pi}^*(f)\,Y_{qi}(f)$.
The critical value for a significance level α, M epochs and N signals can be expressed as [2]:

$$\hat{\kappa}_{N,crit}^2 = \frac{F_{crit\;\alpha,\,2N,\,2(M-N)}}{F_{crit\;\alpha,\,2N,\,2(M-N)} + \frac{M-N}{N}} \qquad (2)$$
The detection is identified based on the rejection of the null hypothesis (H0) of response absence, which is achieved when the estimate values exceed the critical value ($\hat{\kappa}_N^2(f) > \hat{\kappa}_{N,crit}^2$).

B. Magnitude-Squared Coherence (MSC) or Simple Coherence

The MSC represents the parcel of the squared-mean value of the measured EEG that can be explained by the stimulation. The MSC for a discrete-time, finite-duration and windowed signal can be estimated as described by [5].
For the case of a periodic stimulus, the MSC estimate depends only on the measured EEG and can be expressed as:

$$\hat{\kappa}^2(f) = \frac{\left|\sum_{i=1}^{M} Y_i(f)\right|^2}{M\sum_{i=1}^{M} \left|Y_i(f)\right|^2} \qquad (3)$$
where the "^" superscript denotes estimation, $Y_i(f)$ is the Fourier transform of the ith window of EEG and M is the number of epochs used for the estimate calculation. (See [6] for further details about the MSC interpretation.) The analytic critical values for the coherence estimate can be calculated from the distribution obtained in [7], as described in [8]:

$$\hat{\kappa}^2_{crit} = 1 - \alpha^{\frac{1}{M-1}} \qquad (4)$$
The detection is based on rejecting the null hypothesis (H0) of response absence, which is reached when the estimate values exceed the critical value ($\hat{\kappa}^2(f) > \hat{\kappa}^2_{crit}$).

C. EEG Acquisition

The electroencephalogram (EEG) during somatosensory stimulation was collected from forty adult volunteers aged from 21 to 41 years (mean ± standard deviation: 28.6 ± 4.6 years) and without history of neurological pathology. The signals were collected using the EEG BNT-36 (EMSA, Brazil, www.emsamed.com.br) according to the 10-20 International System, and all leads were referenced to the earlobe average. The volunteers lay in the supine position with eyes closed. The stimuli were applied by means of current pulses (200 µs width) to the right posterior tibial nerve using the Atlantis Four (EMSA). The stimulus was applied at the motor threshold intensity level and at the rate of 4.80 Hz (nominal frequency: 5 Hz). The ground electrode was positioned on the popliteal fossa. Surface silver and gold electrodes were used, respectively, for recording and stimulation. The local ethics committee (CEP HUCFF/UFRJ) approved this research and all volunteers gave written informed consent to participate.

D. Pre-processing

First, the signals were band-filtered within 0.5–100 Hz and digitized with the BNT-36 (16-bit resolution) at a sampling rate of 600 Hz. The EEG signals were segmented into epochs of 207 ms, synchronized with the stimulation (i.e. windows of one inter-stimulus duration were used), resulting in a spectral resolution of 4.8 Hz. The first 5 ms after each stimulus were set to zero in order to avoid the stimulus artifact, which produces distortion in the frequency domain.
Additionally, the final 5 ms were zero-padded to ensure window symmetry. Furthermore, a Tukey window with 7 ms rising (falling) time was applied to each epoch to ensure that the late components of the artifact were also attenuated. Noisy epochs were then discarded through a semi-automatic artifact rejection algorithm. (See [6] for further details about the windowing and the artifact rejection.) κ̂N²(f) and κ̂N,crit² were calculated for the acquired signals using expressions (1) and (2) with N = 2, α = 5% and M = 100. These values were also used to calculate κ̂²(f) and κ̂²crit (expressions (3) and (4)). The MSC was applied to the derivations [Cz], [Fz], [C3] and [C4], usually employed in SEP recordings when bipolar derivations are used. The MC was applied to the pairs [Cz][Fz] and [C3][C4]. In order to evaluate the overall result for response detection, the percentage of volunteers for whom it was possible to detect the stimulus response with each technique was calculated for each frequency. Then, the performance of MSC and MC was compared based on the detection percentages by means of the proportion test [9].
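As a minimal MATLAB sketch (assumed variable names, not the authors' code), the MSC detection rule of Eqs. (3)–(4) and the MC critical value of Eq. (2) could be computed from the epoch spectra as follows:

    % Y: L-by-M matrix; column i holds the FFT of the i-th artifact-free
    % EEG epoch of one derivation, synchronized with the stimulation.
    M     = size(Y, 2);                 % M = 100 epochs here
    alpha = 0.05;

    % Eq. (3): MSC estimate at every frequency bin
    msc = abs(sum(Y, 2)).^2 ./ (M * sum(abs(Y).^2, 2));

    % Eq. (4): analytic critical value under H0 (response absence)
    msc_crit = 1 - alpha^(1/(M - 1));
    detected = msc > msc_crit;          % frequencies where H0 is rejected

    % Eq. (2): critical value of the Multiple Coherence for N = 2 leads
    N  = 2;
    Fc = finv(1 - alpha, 2*N, 2*(M - N));   % F quantile (Statistics Toolbox)
    mc_crit = Fc / (Fc + (M - N)/N);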
III. RESULTS

The detection rates for κ̂²[C3], κ̂²[C4] and κ̂₂²[C3][C4] are drawn in Figure 1, for the estimates calculated with M = 100 epochs. Considering only the frequencies within the band from 20 to 60 Hz (referred to from now on as the maximum response band, or optimum band of the SEP, as suggested by [6]), κ̂²[C3], κ̂²[C4] and κ̂₂²[C3][C4] presented detection rates varying, respectively, from 10 to 32.5%, 10 to 62.5% and 27.5 to 75%. Without considering the edge frequencies (20 and 60 Hz), the rates for κ̂²[C4] vary from 30 to 62.5%. Clearly, κ̂²[C3] presents the worst result among the three estimates. The frequencies for which a significant difference was found between the detection rates of the MC and the MSC are shown in Table 1. As can be seen, the MC surpassed the MSC applied to C3 in the whole maximum response band. On the other hand, the MC exceeded the MSC applied to C4 in only four of the nine frequencies of this band. Figure 2 shows the percentages of detection for κ̂²[Cz], κ̂²[Fz] and κ̂₂²[Cz][Fz], which varied, in the optimum frequency band, from 2.6 to 84.6%, 5 to 45% and 17.9 to 89.7%. Considering only the frequencies from 25 to 55 Hz, these percentages vary from 33.3 to 84.6%, 10 to 45% and 48.7 to 89.7%, respectively; that is, the minimum rates are higher for this frequency range.
Fig. 1 Detection rates for κ̂²[C3], κ̂²[C4] and κ̂₂²[C3][C4]
Fig. 2 Detection rates for κ̂²[Cz], κ̂²[Fz] and κ̂₂²[Cz][Fz]. For derivation Cz, it was only possible to obtain 100 artifact-free epochs for 39 of the 40 volunteers; hence, the percentages of detection for κ̂²[Cz] and κ̂₂²[Cz][Fz] were calculated over 39
Moreover, κ̂²[Fz] shows the lowest rates. It can also be noted that the rates found for these derivations are higher than those found for [C3] and [C4]. The performance of κ̂₂²[Cz][Fz] exceeds κ̂²[Fz] and κ̂²[Cz]. However, the detection trace profile of κ̂²[Cz] is more similar to that obtained for the MC. This result can be confirmed in Table 2, which indicates a significant difference between κ̂₂²[Cz][Fz] and κ̂²[Fz] for the whole maximum response band, and between κ̂₂²[Cz][Fz] and κ̂²[Cz] for about half the frequencies within this band.

Table 1 Frequencies (Hz) for which a significant difference was observed between the performance of MC and MSC

Comparison                      Frequencies (Hz)
κ̂²[C3] vs κ̂₂²[C3][C4]          20-60
κ̂²[C4] vs κ̂₂²[C3][C4]          20, 50-60
Table 2 Idem Table 1, for derivations [Cz] and [Fz]

Comparison                      Frequencies (Hz)
κ̂²[Cz] vs κ̂₂²[Cz][Fz]          20, 25, 50-60
κ̂²[Fz] vs κ̂₂²[Cz][Fz]          20-60
IV. DISCUSSION AND CONCLUSIONS

The MC presented higher detection rates than the MSC, both for derivations [C3] and [C4] and for [Cz] and [Fz], for at least three frequencies of the maximum response band. Each pair of EEG channels has one lead with lower signal-to-noise ratio, namely [C3] and [Fz]. Even in this case, the use of the additional information from them resulted in better percentages of detection with the Multiple Coherence. This result agrees with those found by [1], who indicated, using simulation, the possibility of improving the performance of the MC even when the second EEG derivation presents a signal-to-noise ratio lower than that of the first available derivation. In a previous work [3], we also reported better performance of the MC when compared with the MSC, but applied to the bipolar derivations commonly used in tibial nerve SEP recording, [C3'-C4'] and [Cz'-Fpz']. These results are similar to those found by [1] and [2], who employed these techniques for detecting the visual evoked response recorded in the leads O1 and O2. In these studies, the authors pointed out higher detection rates for the MC.
Based on these results, if two derivations are available, it should be better to use the Multiple Coherence than the Magnitude-Squared Coherence applied to each lead.
ACKNOWLEDGMENT To the Brazilian research and education agencies, the Rio de Janeiro State Research Council (FAPERJ), the National Council for Scientific and Technological Development (CNPq - Ministry of Science and Technology) and CAPES (Ministry of Education) for the financial support. We also acknowledge the Military Police Central Hospital of Rio de Janeiro for providing infrastructure support.
REFERENCES

1. Miranda de Sá AMFL, Felix LB (2002) Improving the detection of evoked responses to periodic stimulation by using multiple coherence – application during photic stimulation. Med Eng Phys 24(4):245-252
2. Miranda de Sá AMFL, Felix LB, Infantosi AFC (2004) A matrix-based algorithm for estimating multiple coherence of a periodic signal and its application to the multichannel EEG during sensory stimulation. IEEE Trans Biomed Eng 51(7):1140-1146
3. Infantosi AFC, Melges DB, Miranda de Sá AMFL, Cagy M (2005) Uni- and multi-variate coherence-based detection applied to EEG during somatosensory stimulation. IFMBE Proc. vol. 11, 3rd European Medical & Biological Engineering Conference, Czech Republic, 2005, pp. 1-4
4. Felix LB, Miranda de Sá AMFL, Infantosi AFC, Yehia HC (2007) Multivariate objective response detectors (MORD): statistical tools for multichannel EEG analysis. Ann Biomed Eng 35(3):443-452
5. Miranda de Sá AMFL, Infantosi AFC, Simpson DM (2002) Coherence between one random and one periodic signal for measuring the strength of responses in the EEG during sensory stimulation. Med Biol Eng Comput 40:99-104
6. Infantosi AFC, Melges DB, Tierra-Criollo CJ (2006) Use of magnitude-squared coherence to identify the maximum driving response band of the somatosensory evoked potential. Braz J Med Biol Res 39:1593-1603
7. Miranda de Sá AMFL (2004) A note on the sampling distribution of coherence estimate for the detection of periodic signals. IEEE Signal Process Lett 11(3):323-325
8. Miranda de Sá AMFL, Infantosi AFC (2007) Measuring the synchronism in the ongoing electroencephalogram during intermittent stimulation – a partial coherence-based approach. Med Biol Eng Comput 47(7):443-452
9. Moore D (2005) A Estatística Básica e sua Prática. LTC Editora, Rio de Janeiro

Author: Danilo Barbosa Melges
Institute: Biomedical Engineering Program (COPPE/UFRJ)
Street: Av. Horácio Macedo 2030, Centro de Tecnologia, Bloco H, Sala 327, Cidade Universitária; CEP 21941-914
City: Rio de Janeiro
Country: Brazil
Email:
[email protected]
Measure of Similarity of ECG Cycles

Á. Jobbágy and Á. Nagy
Budapest University of Technology and Economics/Dept. Measurement and Information Systems, Budapest, Hungary
Abstract— According to cardiologists, only approximately half of the existing (theoretically available) diagnostic information is presently extracted from the ECG. This paper introduces a new parameter: the Periodicity of the ECG Signal, PES. A method based on SVD analysis is suggested to quantify the similarity of ECG cycles. The raw data are filtered and the cycles are normalized preceding the SVD analysis. The ratio of the dominant basis vector to all other basis vectors is the suggested measure of similarity. Recordings from patients after open chest surgery and from control subjects are analyzed.

Keywords— ECG analysis, measure of similarity, PES, quasiperiodicity.

I. INTRODUCTION
The electrical conduction system of the heart has parts with varying conduction velocity. When the heart rate changes, the shape of the ECG cycle also changes. Consequently, longer and shorter cycles cannot be aligned by a simple linear transformation. This is the main reason for the quasiperiodic nature of the ECG. The parallel conductive paths can also cause differences between cycles. This is a well-known phenomenon. Up to now, the measure of similarity of ECG cycles over a given period has not been used as diagnostic information. This paper suggests an algorithm to assign a number between 0 and 1 to the measure of quasiperiodicity of the ECG, where 0 stands for white noise and 1 for a periodic signal. Patients who had undergone open chest surgery measured their blood pressure daily with a home health monitoring device (HHM) for several months [1]. A single measurement does not give enough information to qualify the blood pressure of a person, usually not even to discriminate a normal value from a pathologic one. Blood pressure varies during the day; 20...30 mmHg differences are not uncommon even for healthy subjects. To increase the accuracy of blood pressure measurement, the HHM also recorded the Einthoven I lead ECG and the photoplethysmographic signal from both index fingers. The recordings, together with the corresponding basic processing Matlab files, are available for download from http://home.mit.bme.hu/~csordas/indexm.html These recordings were analyzed and compared to recordings taken from healthy control subjects.
II. MATERIALS AND METHODS
A. The SVD analysis

Biosignals are often quasiperiodic. Consider testing movement coordination, when patients are asked to perform movement patterns repeatedly. The finger tapping movement is a typical test [2]. Singular Value Decomposition (SVD) analysis [3] is an appropriate tool to give a quantitative measure of how close a quasiperiodic signal is to periodic. The method can be used to extract dominant patterns from 3D kinematic data [4]. SVD analysis is able to break down a quasiperiodic signal into basis vectors of any shape. Thus it is much better suited to characterizing quasiperiodic signals than Fourier analysis, which uses only sine waves. As an example, a triangle wave can only be composed of an infinite number of sine waves, while SVD analysis would result in a single dominant vector. The recorded ECG signal (with ℓ samples) can be regarded as a vector, y:
y = [y(1) y(2) ... y(k) ... y(ℓ)]

Let y(p_i) mark the beginning of the ith period. The samples belonging to an ECG cycle are considered to be row vectors r(i):

r(1) = [y(p1) y(p1+1) ... y(p2−1)]
r(2) = [y(p2) y(p2+1) ... y(p3−1)]
...
r(m) = [y(pm) y(pm+1) ... y(pm+c)]

The SVD method requires that each period contain exactly the same number of samples, i.e. the length of the r(i) vectors must be the same. The length of a heart cycle is not necessarily constant. As a first step the time function must be segmented into periods. A heart cycle is between two adjacent R peaks. The ith row vector r(i) belongs to the ith cycle. The duration of a cycle varies over the test, and so varies the length of the corresponding row vector. The equal length of all r(i) row vectors is assured by resampling the data in each row. The median (denoted by n)
of the lengths of the row vectors will be the length of each resampled vector:

$$\mathrm{length}[r(i)] = \begin{cases} p_{i+1} - p_i, & i < m \\ c + 1, & i = m \end{cases} \qquad n = \mathrm{median}\{\mathrm{length}[r(i)]\}$$

Resampling is accomplished by linear interpolation. The first and last elements of the resampled row vectors are the same as in the original row vectors, yr(i,1) = y(p_i), yr(i,n) = y(p_{i+1}−1), except for the last row vector, where yr(m,n) = y(p_m+c). The elements of the resampled row vectors yr(i,j) are interpolated between the original y(p_i+k−1) and y(p_i+k) points, k = entier(n/length(r(i))·j), 2 ≤ j ≤ n. The matrix so created is:

$$X = \begin{bmatrix} yr(1,1) & yr(1,2) & \cdots & yr(1,n) \\ yr(2,1) & yr(2,2) & \cdots & yr(2,n) \\ \vdots & \vdots & & \vdots \\ yr(m,1) & yr(m,2) & \cdots & yr(m,n) \end{bmatrix}$$

When the matrix is composed, the svd function of MATLAB® (The MathWorks Inc.) is used. This determines the matrices S, V and Σ so that X = SΣV^T. Σ is a diagonal matrix. Its σ_i elements can be regarded as weighting factors of the basis functions that are needed to describe the ECG cycles in the sampled data y and represented by the row vectors of X. The columns of V can be regarded as basis functions. Adding up the v_j basis functions weighted by u_{ij}σ_j, we get the ith cycle of the signal, i.e. the ith row of X. The periodicity of the ECG signal (PES) is characterized by the ratio of the dominant basis function to all other basis functions necessary to describe the complete record, i.e. all cycles. In the case of movement patterns the periodicity is usually characterized by the ratio of squared basis functions, e.g. [2]; this would not be appropriate for ECG cycles, which are much more uniform:

$$PES = \frac{\sigma_1}{\sum_{i=2}^{m}\sigma_i}$$

During the ECG analysis, PES was determined in a window spanning 30 cycles. The window was moved along the whole recording with a step of 5 cycles, resulting in PES(t).

B. Filtering the raw data

The first ten ECG cycles were cut out from the recordings to eliminate the initial transient. (Tested subjects have to press the start button and can touch the ECG electrode only afterwards when using the HHM [1].) The ECG is corrupted with power-line noise even while recording at home. Fig. 1 shows the PES(t) function, the time function with rough time resolution, and the cycles behind one another normalized in time. A motion artefact distorts the ECG signal, causing a reduced PES while at least one corrupted cycle is within the window of processing.

Fig. 1 ECG signal after low-pass filtering. Time function (bottom left), normalized cycles (bottom right) and PES(t) (top).
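A minimal MATLAB sketch of the windowed PES computation described in subsection A (variable names are hypothetical; y holds the filtered ECG of one 30-cycle window and p the detected R-peak indices):

    m = numel(p) - 1;                       % number of cycles in the window
    n = round(median(diff(p)));             % common resampled cycle length

    X = zeros(m, n);                        % one resampled cycle per row
    for i = 1:m
        cyc = y(p(i):p(i+1)-1);             % i-th heart cycle (R peak to R peak)
        X(i, :) = interp1(1:numel(cyc), cyc, linspace(1, numel(cyc), n));
    end

    s   = svd(X);                           % singular values, descending order
    PES = s(1) / sum(s(2:end));             % dominant vs. all other basis functions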
Fig. 2 Frequency spectrum (left) and PES(t) (right) for the original ECG (top), after bandstop filtering (50 Hz and 100 Hz, middle) and lowpass filtering (35 Hz, bottom).
The effect of band-stop and low-pass filtering was analyzed. The harmonic content of the power-line noise can be significant, thus band-stop filtering was applied both around 50 Hz and 100 Hz. The low-pass filter had a 35 Hz corner frequency. All applied filters were Butterworth type, 3rd order.
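A minimal MATLAB sketch of the described filtering (the sampling frequency fs, the variable ecg, and the exact stop-band edges are assumptions):

    fs = 1000;                                       % assumed sampling frequency [Hz]
    [bLP,  aLP ] = butter(3, 35/(fs/2));             % 35 Hz low-pass, 3rd-order design
    [b50,  a50 ] = butter(3, [48 52]/(fs/2), 'stop');    % band-stop around 50 Hz
    [b100, a100] = butter(3, [98 102]/(fs/2), 'stop');   % band-stop around 100 Hz
    ecgLP = filtfilt(bLP, aLP, ecg);                 % zero-phase low-pass filtering
    ecgBS = filtfilt(b100, a100, filtfilt(b50, a50, ecg));  % both band-stops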
III. DISCUSSION
Healthy subjects were tested immediately following a short physical stress (one minute of running up and down stairs, or 30 squats). Five young males aged 22–24 volunteered for the test. Figure 3 shows a typical result. At the beginning of the test – as a result of the short physical stress – PES is quite high, about the same as for senior subjects at rest. As the subject calms down, the PES significantly decreases. After 100 heart beats this value drops below 0.9. The ECG recording lasted for 97.7 s. During this time there were 205 ECG cycles. (In Figure 3 PES is given for 165 cycles; 10 cycles were cut out at the beginning and the length of the window is 30 cycles.) This means an average pulse rate of 126 beats/min. PES(t) values were very similar for all other young healthy subjects.
Fig. 3 Recording after short physical stress, young healthy subject. Time function (bottom left), normalized cycles (bottom right) and PES(t) (top).

Fig. 4 Recordings from two patients (a, b) who underwent open-chest surgery. Time function (bottom left), normalized cycles (bottom right) and PES(t) (top).
Figure 4 shows the result of the analysis made for two patients who underwent open chest surgery. Eight patients (5 women aged 57–65, and 3 men aged 63–70) participated in the test. The ECG recordings were made in parallel with oscillometric blood pressure measurement. Recording of the Einthoven I lead signal started immediately at initiation of the measurement by pressing a button. Slow inflation (approximately 6 mmHg/s) of the cuff started 24 seconds later. Inflation was stopped – 23…27 seconds later – at a cuff pressure depending on the actual systolic pressure. Slow deflation until 40 mmHg cuff pressure lasted for about 25 seconds. At this cuff pressure a valve was opened, causing the cuff pressure to drop to near zero. ECG recording continued for a further 24 seconds.
The PES values of this patient group – calculated for the ECG during blood pressure measurement – vary in a very narrow range: 0.924…0.932 for one patient (Fig. 4a) and 0.935…0.945 for another patient (Fig. 4b). Such high values together with small variation were typical for the PES of the tested patient group. The average pulse rate during the test was 67 (Fig. 4a) and 63 (Fig. 4b).
IV. CONCLUSION
The suggested new parameter PES gives the possibility to simply assess the stress level of persons, provided a reference value determined in a stress-free state is available for them. Thus PES can be part of a personalized blood pressure measurement, preventing the start of cuff inflation when the person is not relaxed enough. PES also characterizes the variation in the heart conduction system. This parameter was found to be significantly higher for elderly (age above 50) persons than for young (age between 22 and 25) subjects. To make use of this new parameter during diagnosis and treatment, further tests involving patients with cardiovascular diseases as well as age-matched healthy control subjects are necessary.

REFERENCES

1. Jobbágy Á, Csordás P, Mersich A, Magjarevic R, Lackovic I, Mihel J: Home Health Monitoring. DOI 10.1007/978-3-540-69367-3_122, IFMBE Proc. Vol. 20, pp. 454-457, Proc. NBC 08, Riga, Latvia.
2. Jobbágy Á, Harcos P, Karoly R, Fazekas G: Analysis of the Finger Tapping Test. Journal of Neuroscience Methods, January 30, 2005, Vol. 141/1, pp. 29-39.
3. Kanjilal PP, Palit S, Saha G: Fetal ECG Extraction from Single-Channel Maternal ECG Using Singular Value Decomposition. IEEE Tr. on BME, Vol. 44, No. 1, 1997, pp. 51-59.
4. Stokes V, Lanshammer H, Thorstensson A: Dominant Pattern Extraction from 3D Kinematic Data. IEEE Tr. on BME, January 1999, pp. 100-106.

Author: Ákos Jobbágy
Institute: Budapest University of Technology and Economics
Street: Magyar Tudósok krt. 2.
City: Budapest
Country: Hungary
Email:
[email protected]
Wavelet phase synchronization between EHGs at different uterine sites: comparison of pregnancy and labor contractions

M. Hassan 1,2, Á. Alexandersson 3, J. Terrien 2, B. Karlsson 2,4 and C. Marque 1

1 Université de Technologie de Compiègne, CNRS UMR 6600, Biomechanics and Biomedical Engineering, Compiègne, France
2 School of Science and Engineering, Reykjavik University, Reykjavik, Iceland
3 Faculty of Medicine, University of Iceland, Reykjavik, Iceland
4 Department of Physiology, University of Iceland, Reykjavik, Iceland
Abstract— This study investigates phase synchronization in the time-frequency domain between uterine signals recorded at different sites during the same contraction from women during pregnancy and women in labor. We used the complex Morlet wavelet transform to estimate the phase synchronization between the uterine signals. The method was applied on a set of uterine bursts during pregnancy and in labor. The results indicated that the uterine bursts are more synchronized in phase during pregnancy than during labor. This phase desynchronization during labor may be a tool to differentiate between contractions during pregnancy and labor and could therefore be used in the prediction of preterm labor.

Keywords— Uterine contraction; Wavelet phase synchronization; Preterm labor; EHG
INTRODUCTION
Electrohysterograms (EHG), or uterine electromyograms recorded externally on women, have been proven to be representative of uterine contractility. The analysis of these signals may allow the prediction of a preterm labor threat as early as 28 weeks of gestation (WG) [1]. The study of signal dependencies has become an important tool in understanding the function of biological systems. Numerous methods have been proposed to study signal interdependencies. They basically belong to two sets: linear methods (based on intercorrelation or coherence functions) and non-linear methods (based on non-linear regression, mutual information or comparison of phase trajectories in a state space built from the signals). However, there is increasing interest in the use of wavelet-based techniques for processing non-stationary signals like EEG signals: investigating oscillatory behavior [2], spike detection [3] and filtering [4]. In this study we examine the phase relationship between uterine electrical activities recorded at two different locations during the same contractions, in order to compare phase synchronization between signals during pregnancy and in labor.
Phase synchronization is a relationship between the phases of two signals while their amplitudes may be uncorrelated. These facts distinguish phase synchronization analysis from coherence analysis, because coherence, based on Fourier analysis, is highly dependent on the stationarity of the analyzed signals and, as a measure of spectral covariance, does not separate the effects of phase and amplitude [5]. In a previous study, we used "wavelet coherence" to detect the interaction between uterine electrical activities [6], but wavelet coherence does not separate the effects of amplitude and phase in the dependency between two signals. Since we have already studied the amplitude correlation between uterine bursts by using the nonlinear correlation coefficient [7], in this study we investigate the phase relationship between uterine bursts. The aim of this paper is to investigate the difference in phase synchronization between uterine contractions recorded from women during pregnancy and in labor. The uterine signals are recorded during the same contractions at different locations.

MATERIALS AND METHODS
A. Data

Real EHG signals used in this study were obtained from 3 women during pregnancy (30–32 weeks of gestation) and 3 women during labor. The measurements were obtained by using a 16-channel multi-purpose physiological signal recorder, most commonly used for investigating sleep disorders (Embla A10). We used reusable Ag/AgCl electrodes. The measurements were performed at the Landspitali University Hospital in Iceland, following a protocol approved by the relevant ethical committee (VSN 02-0006-V2). The signals used for this study were the bipolar signals Vb7 and Vb8, corresponding to two channels on the median vertical axis of the uterus (see [8] for more details). The signal sampling rate was 200 Hz. The recording device has
an anti-aliasing filter with a low-pass cut-off frequency of 100 Hz. The concurrent tocodynamometer (Toco) paper trace was digitized in order to ease the identification of contractions. The EHG signals were segmented manually to extract segments containing uterine activity bursts (Figure 1).

Fig. 1 Two uterine bursts recorded during the same contractions: pregnancy (left) and labor (right). Both represent Vb7 (top) and Vb8 (bottom) bipolar signals.
B. Wavelet transform

The continuous wavelet transform (CWT) can decompose a signal into a set of finite basis functions. Wavelet coefficients $W_x(a,\tau)$ are produced through the convolution of a mother wavelet function $\psi(t)$ with the analyzed signal x(t):

$$W_x(a,\tau) = \frac{1}{\sqrt{a}}\int x(t)\,\psi^*\!\left(\frac{t-\tau}{a}\right)dt$$

where a and τ denote the scale and translation parameters, and * denotes complex conjugation. By adjusting the scale a, a series of different frequency components in the signal can be extracted. The factor $1/\sqrt{a}$ is for energy normalization across the different scales. Through wavelet transforms, the information of the time series x(t) is projected onto a two-dimensional space (scale a and translation τ). In this study, the Morlet wavelet was used; it is given by:

$$\psi_0(t) = \pi^{-1/4}\,e^{i\omega_0 t}\,e^{-t^2/2}$$

where ω0 is the wavelet central pulsation. In this paper we used ω0 = 1. The Morlet wavelet is a Gaussian-windowed complex sinusoid; the Gaussian's second-order exponential decay of the Morlet function gives a good localization in the time domain [9]. We chose the complex Morlet wavelet transform (MWT) as it provides the signal amplitude and phase simultaneously. This property allows us to use the MWT to investigate the coherence/synchronization between two signals recorded at two different sites.

C. Phase synchronization

The parameter used for measuring phase synchronization is the relative phase angle between two oscillatory systems. The Morlet wavelet transform acts as a band-pass filter and, at the same time, yields separate values for the instantaneous amplitude A(t) and the phase φ(t) of a time-series signal at a specific frequency. Thus, the wavelet phases of two signals X and Y can be used to determine their instantaneous phase difference in a given frequency band, and to establish a synchronization measure (Wavelet Phase Synchronization: WPS) which quantifies the coupling of phases independently of amplitude effects. The principle of phase synchronization corresponds to a phase locking between two systems defined as:

$$\varphi_{n,m}(t) = n\,\varphi_X(t) - m\,\varphi_Y(t) = C$$

where $\varphi_X(t)$ and $\varphi_Y(t)$ are the unwrapped phases of the signals of the two systems and C is a constant. For real noisy data the cyclic relative phase, $\varphi_{n,m}(t) \bmod 2\pi$, is preferentially used. Note that, according to the above equation, the phase difference has to be calculated from the univariate phase angle of each signal. Phase locking is observed if the phase difference remains approximately constant over some time period. In order to evidence the variation of the strength of phase synchronization between two uterine bursts, we used the intensity of the first Fourier mode of the distribution, given by:

$$\Gamma_{n,m} = \sqrt{\langle\cos(\varphi_{n,m}(t))\rangle^2 + \langle\sin(\varphi_{n,m}(t))\rangle^2}$$

where $\langle\cdot\rangle$ denotes the average over time. The measure of synchronization strength varies from 0 (no phase synchronization) to 1 (perfect phase synchronization). It is also called the synchronization index. In this paper we use m = n = 1.
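A minimal MATLAB sketch of this synchronization measure for two burst segments x and y at a single analysis frequency f0 (variable names are hypothetical; ω0 = 1 and fs = 200 Hz as above):

    fs = 200; w0 = 1; f0 = 0.1;                 % example analysis frequency [Hz]
    a  = w0 / (2*pi*f0);                        % scale corresponding to f0
    t  = -5*a : 1/fs : 5*a;                     % wavelet support
    psi = pi^(-1/4) * exp(1i*w0*t/a) .* exp(-(t/a).^2/2) / sqrt(a);

    Wx = conv(x, conj(psi(end:-1:1)), 'same');  % Morlet coefficients of x at scale a
    Wy = conv(y, conj(psi(end:-1:1)), 'same');

    dphi  = angle(Wx .* conj(Wy));              % cyclic relative phase (n = m = 1)
    gamma = sqrt(mean(cos(dphi))^2 + mean(sin(dphi))^2);   % WPS index in [0, 1]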
Wavelet Phase Synchronization between EHGs at different Uterine Sites: Comparison of Pregnancy and Labor Contractions 0.02
23
0.5 Vb7 Vb8
0.015
Vb7 Vb8
0.4 0.3
0.01 0.2 Amplitude
Amplitude
0.005 0 -0.005
0.1 0 -0.1 -0.2
-0.01 -0.3 -0.015 -0.02
-0.4 0
20
40
60
80
100
120
-0.5
140
0
20
40
Time (s)
Wavelet Phase Synhronization
100
120
1
1
0.9
0.9
0.9
0.9
0.8
0.8
0.8
0.8
0.7
0.7
0.7
0.7
0.6
0.6
0.6
0.6
0.5
0.5
0.5
0.5
0.4
0.4
Frequency(Hz)
Frequency(Hz)
80
Wavelet Phase Synhronization
1
1
60 Time (s)
0.4
0.4
0.3
0.3
0.3
0.3
0.2
0.2
0.2
0.2
0.1
0.1
0.1
0.1
0
0
20
40
60 80 Time(s)
100
0
0
120
0.85
0
20
40
60 Time(s)
80
0
100
1
0.8
0.9
0.75 0.8
0.65
Mean WPS
Mean WPS
0.7
0.6 0.55 0.5
0.7 0.6 0.5
0.45 0.4
0.4 0.35
0
0.1
0.2
0.3
0.4 0.5 0.6 Frequency(Hz)
0.7
0.8
0.9
1
0
0.1
0.2
0.3
0.4 0.5 0.6 Frequency(Hz)
0.7
0.8
0.9
1
Fig. 2 (Top): Two uterine bursts (bipolar channels Vb7 and Vb8). (Middle): The phase synchronization between the two uterine bursts in the time-frequency plan. (Bottom): The mean of wavelet phase synchronization over frequency bands: pregnancy (left) and labor (right).
D. Results

The results (Figure 2) indicate that higher phase synchronization is located at the lower frequencies for both pregnancy and labor, and that phase synchronization during pregnancy is higher than during labor. In order to find out whether this method can be a pertinent tool to classify pregnancy vs. labor bursts, we computed the mean of the WPS in different frequency bands.
The results in Table 1 correspond to 10 contractions (CTs) from 3 women during pregnancy (30–32 WG) and 10 contractions from 3 women during labor (delivery at 39, 40 and 42 WG). Table 1 indicates a kind of phase desynchronization from pregnancy to labor (note that the values correspond to frequencies between 0 and 0.25 Hz).
Table 1 The mean of WPS between 10 contractions from 3 women during pregnancy and 10 contractions from 3 women in labor

CT #    Pregnancy       Labor
1       0.64 ± 0.01     0.49 ± 0.01
2       0.63 ± 0.08     0.52 ± 0.01
3       0.61 ± 0.02     0.52 ± 0.02
4       0.58 ± 0.03     0.49 ± 0.02
5       0.60 ± 0.08     0.55 ± 0.01
6       0.59 ± 0.05     0.48 ± 0.03
7       0.58 ± 0.02     0.45 ± 0.01
8       0.54 ± 0.05     0.53 ± 0.02
9       0.57 ± 0.01     0.53 ± 0.03
10      0.61 ± 0.02     0.58 ± 0.04
Mean    0.595 ± 0.02    0.514 ± 0.03
DISCUSSION

In this paper we have presented the first application of wavelet phase synchronization to EHG signals in order to detect relationships between uterine electrical activities recorded at different sites during the same contraction. This method can be used to detect the phase synchronization between uterine bursts as it respects the non-stationarity of EHG signals and the non-linearity of the propagation expected for uterine EHGs. Since the uterus is supposed to work as a synchronized organ during labor, and amplitude correlation has already been proven to be more important during labor than during pregnancy [7], we would have expected to obtain the same evolution for phase synchronization. The results presented here indicate the opposite. The same observations (increase in amplitude correlation and decrease in phase correlation) have also been made on EEG signals during the transition from the preictal (desynchronized) to the ictal (synchronized) stage [10]. This phase desynchronization still needs confirmation and interpretation on a bigger database. We also plan to use signals recorded during pregnancy and labor for the same woman. By studying the time-frequency phase synchronization longitudinally along the weeks of gestation, we expect to be able to define the parameters related to the propagation of EHG signals during pregnancy and labor. If so, these methods could be used to predict preterm labor.

CONCLUSION

This paper presents the use of wavelet analysis in the study of phase synchronization between uterine electrical activities. We observed more phase synchronization during pregnancy than labor. These findings can possibly provide a method for differentiating between pregnancy and labor contractions. Although yet to be tested, they could also aid in distinguishing between contractions in labor at term and preterm. Ultimately, our goal is to detect preterm labor and so we find these results encouraging.

ACKNOWLEDGMENT

This project is financed by the Icelandic Centre for Research RANNÍS and the French National Centre for Universities and Schools (CNOUS).

REFERENCES

[1] H. Leman, C. Marque, and J. Gondry, "Use of the electrohysterogram signal for characterization of contractions during pregnancy," IEEE Trans Biomed Eng, vol. 46, pp. 1222-9, Oct 1999.
[2] A. Klein, T. Sauer, A. Jedynak, and W. Skrandies, "Conventional and wavelet coherence applied to sensory-evoked electrical brain activity," IEEE Trans Biomed Eng, vol. 53, pp. 266-72, Feb 2006.
[3] L. Senhadji and F. Wendling, "Epileptic transient detection: wavelets and time-frequency approaches," Neurophysiol Clin, vol. 32, pp. 175-92, Jun 2002.
[4] E. L. Glassman, "A wavelet-like filter based on neuron action potentials for analysis of human scalp electroencephalographs," IEEE Trans Biomed Eng, vol. 52, pp. 1851-62, Nov 2005.
[5] M. Le Van Quyen, J. Foucher, J. Lachaux, E. Rodriguez, A. Lutz, J. Martinerie, and F. J. Varela, "Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony," J Neurosci Methods, vol. 111, pp. 83-98, Oct 30 2001.
[6] M. Hassan, J. Terrien, B. Karlsson, and C. Marque, "Application of wavelet coherence to the detection of uterine electrical activity synchronization in labor," IRBM: Ingénierie et Recherche Biomédicale / BioMedical Engineering and Research, vol. (to be published), 2010.
[7] M. Hassan, J. Terrien, B. Karlsson, and C. Marque, "Spatial analysis of uterine EMG signals: evidence of increased synchronization with term," Conf Proc IEEE Eng Med Biol Soc, vol. 1, pp. 6296-9, 2009.
[8] B. Karlsson, J. Terrien, V. Guðmundsson, T. Steingrímsdóttir, and C. Marque, "Abdominal EHG on a 4 by 4 grid: mapping and presenting the propagation of uterine contractions," 11th Mediterranean Conference on Medical and Biological Engineering and Computing, Ljubljana, Slovenia, 2007, pp. 139-143.
[9] C. Torrence and G. P. Compo, "A Practical Guide to Wavelet Analysis," Bull. Amer. Meteor. Soc., vol. 79, pp. 61-78, 1998.
[10] X. Li, X. Yao, J. Fox, and J. G. Jefferys, "Interaction dynamics of neuronal oscillations analysed using wavelet transforms," J Neurosci Methods, vol. 160, pp. 178-85, Feb 15 2007.

Author: Mahmoud Hassan
Institute: School of Science and Engineering, Reykjavik University
Street: Menntavegi 1
City: Reykjavik
Country: Iceland
Email:
[email protected]
Dynamic Generation of Physiological Model Systems
J. Kretschmer, A. Wahl, and K. Moeller
Furtwangen University/Department of Biomedical Engineering, Villingen-Schwenningen, Germany
Abstract— A versatile software tool for the dynamic generation of physiological model systems is proposed. Via a graphical user interface the user can choose and combine models of varying abstraction level and complexity from the following three model families: respiratory mechanics, gas exchange and cardiovascular dynamics. Tests of different simulation runs showed results and model delay times consistent with human physiology.
Keywords— Physiological simulation, MATLAB, cardiovascular dynamics, respiratory mechanics, gas exchange.
I. INTRODUCTION

Mathematical models are widely used to simulate physiological processes in the human body and can be exploited for diagnostic purposes or the automation of therapeutic measures [1]. The standard models described in the literature usually assume the organs to be isolated mechanisms and therefore lack any interaction with other physiological processes in the human body. But in model based diagnosis or therapy, the interaction of different physiological systems is mandatory. These interactions include e.g. the cardiovascular response to intrathoracic pressure or the reaction of body gas exchange to variations in cardiac output. Complex models with interaction between different physiological processes usually do not consist of interchangeable submodels, so that any adaption of a submodel's abstraction level or any extension of the model requires time-consuming redesign. We therefore designed a versatile software tool based on MATLAB with dynamically exchangeable subsystems within the three model families of respiratory mechanics, gas exchange and cardiovascular dynamics. This model system is part of the Autopilot-BT system devoted to the partial automation of respiratory therapy [1]. In model based diagnosis and therapy, the model system has to be individualized to patient-specific physiological behaviour. Therefore parameter identification is required to fit all model parameters such that the patient characteristics are reproduced by the model system. This parameter identification will also profit from the modular approach.
II. METHODS

A. Model Families and Submodels

To allow interchangeability and interaction between the submodels, important parameters were extracted from the corresponding literature. Common interfaces were defined for each model family based on these parameters to ensure interchangeability within the same model family. For the simulation of human body gas exchange we used a 2-compartment model described by Chiari et al. [2] with the carbon dioxide dissociation curve as found by Sharan et al. [3]. Their gas exchange model assumes laminar, continuous blood and gas flow. To integrate their model and at the same time ensure interaction with the respiratory mechanics, the tidal breathing model introduced by Benallal et al. [4] was added to the alveoli model equation. The model family of cardiovascular dynamics consists of a 3-compartment model by Parlikar et al. [5], a serially connected 14-compartment model by Danielsen and Ottesen [6], and a parallel connected 19-compartment model with CNS controller by Leaning et al. [7]. Moreover, both the cardiovascular models by Danielsen and Ottesen and by Leaning et al. were extended to respond to intrathoracic pressure. The respiratory mechanics were based on a 1st order RC-model and a 2nd order RC-model. All described models have been coded in MATLAB following the model descriptions in the literature and have been tested separately to assure operability within physiological limits. In Fig. 1 the model interactions are shown. Starting with the lungs, the first interface is located between the respiratory mechanics and the ventilator settings. Both models are connected by the following parameters: the applied airway pressure (Paw), the positive end-expiratory pressure (PEEP) and the ventilation frequency (fR). The second interface, between respirator and human physiology, concerns the gas exchange processes. It includes as parameters the inhaled gas fractions of oxygen (FiO2) and carbon dioxide (FiCO2), which directly influence the end-capillary partial pressures of these gases. As mentioned above, a separate tidal breathing model had to be added, which in the MATLAB
representation is separated from the body gas exchange. This block, called "lung gas exchange", is moreover influenced by the alveolar volume (VA), the air flow (V̇A) and the inspiration/expiration ratio (I/E) given by the respiratory mechanics. Both lung and body gas exchange are also influenced by the cardiac output (CO), which is determined by the cardiovascular dynamics. The last interface is located between cardiovascular dynamics and respiratory mechanics, where the intrathoracic pressure (Pth) influences cardiac output.

Fig. 1 Model interfaces setup (blocks: respiratory settings, respiratory mechanics, lung gas exchange, body gas exchange, cardiovascular dynamics; exchanged parameters include Paw, PEEP, fR, FiO2, FiCO2, VA, V̇A, I/E, PAO2, PACO2, CaO2, CaCO2, CeO2, CeCO2, CvO2, CvCO2, CO and Pth)

B. MATLAB Source Code Setup

Since the MATLAB differential equation solver needs the model system to provide all state signals at the same time step, solving the separate submodels consecutively is not possible. One approach to this problem would be to incorporate all chosen submodels into one file, allowing MATLAB to call all ODEs at the same time. This, on the other hand, would compromise simple interchanging of submodels. The solution to this dilemma is a dedicated caller algorithm. It invokes all chosen submodels at the same time step and creates the vector containing all state signal derivatives. Afterwards this vector is handed over to the ODE solver, which calculates the corresponding state signal vector. The caller program can combine an arbitrary number of submodels, achieving its flexibility through the powerful "eval" command in MATLAB. To allow communication between the different submodels, all exchangeable parameters are part of a global parameter struct that is updated when the corresponding submodel is invoked. To allow parameter alteration during the simulation, stepwise solving of the differential equations has been introduced. Via the graphical user interface described below, the user can change parameters while the simulation is running. Upon the next pause-and-continue phase of the simulation, it is checked whether the actual breath or heartbeat is finished (this depends on the chosen parameter). If so, the new parameter value is saved and the simulation is continued using the last calculated result as the new starting conditions. Otherwise the remaining time of the actual breath/heartbeat is simulated before the parameter value is changed (Fig. 2). A minimal code sketch of this caller-and-stepping scheme is given below.

Fig. 2 Algorithm for parameter alteration (flowchart: start of simulation; simulate in 1 s steps up to the simulation time, e.g. 100 s; at each step, get changed parameters and check whether the end of the actual breath/heartbeat has been reached; if not, get the remaining time tm of the breath/heartbeat and simulate tm before changing the parameter; starting conditions are the last values of the previous simulation step; repeat until the end of the simulation time is reached)
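To make the caller idea concrete, the following is a minimal sketch under stated assumptions: the toy submodels, their state counts and all names (caller_sketch, derivs, toyRespiration, toyCirculation) are ours and not part of the Autopilot-BT source, and the published implementation additionally uses a global parameter struct and "eval" to assemble arbitrary submodel combinations.

```matlab
% Minimal sketch of the caller idea: invoke every chosen submodel at the
% same time step, stack their state derivatives into one vector for the
% ODE solver, and solve stepwise so that parameters could be changed
% between steps. Toy dynamics and all names are illustrative only.
function caller_sketch
    submodels = {@toyRespiration, @toyCirculation};  % hypothetical submodels
    nStates   = [1, 2];                  % number of states per submodel
    x         = zeros(sum(nStates), 1);  % combined initial state

    for step = 1:10                      % stepwise solving, 1 s per step
        [~, X] = ode45(@(t, y) derivs(t, y, submodels, nStates), [0 1], x);
        x = X(end, :)';                  % last values -> new starting conditions
    end
    disp(x');
end

function dx = derivs(t, x, submodels, nStates)
    % Concatenate the derivative vectors of all submodels at time t
    dx  = zeros(size(x));
    idx = 0;
    for i = 1:numel(submodels)
        range = idx + (1:nStates(i));
        dx(range) = submodels{i}(t, x(range));
        idx = idx + nStates(i);
    end
end

function dx = toyRespiration(t, x)
    dx = -0.5 * x + sin(2 * pi * 0.25 * t);   % 1st order toy "lung"
end

function dx = toyCirculation(~, x)
    dx = [x(2); -4 * x(1) - 0.8 * x(2)];      % 2nd order toy "circulation"
end
```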
C. Graphical User Interface

To allow user-friendly handling of the various submodels, a graphical user interface (GUI) has been created (see Fig. 3). The GUI was coded using the MATLAB GUIDE software, which allows easy arrangement of user interface elements such as buttons, graphs and pop-up menus in a graphical programming environment. The GUI is basically divided into two parts: simulation settings and plotting options. In the simulation settings the user can select one submodel from each model family to be combined into the model system. Via the plotting options the user can plot graphs of the simulation data directly from the GUI. The options allow plotting of alveolar gas partial pressures, tidal volume, aortic and venous pressures as well as arterial and venous gas concentrations. Upon pressing the "Simulate" button, it is disabled to avoid starting parallel simulation runs, which would massively slow down the simulation. Plotting options are likewise disabled before the first simulation and during simulation runs.
Fig. 4 Comparison of results for alveolar oxygen partial pressure (PAO2 [mmHg] over time [s]) from three different model combinations. a) 3-compartment gas exchange with continuous air flow and cardiac output. b) 2-compartment body gas exchange with tidal breathing 2-compartment lung gas exchange. c) 2-compartment body gas exchange with tidal breathing 2-compartment lung gas exchange and 14-compartment cardiovascular dynamics
Fig. 3 Graphical user interface

III. RESULTS

Simulations showed distinct differences in simulation output depending on the chosen submodels (Fig. 4). The combination of respiratory mechanics, tidal breathing, gas exchange and 14-compartment cardiovascular dynamics showed a direct influence of intrathoracic pressure (Pth) on cardiac output and thereby on arterial and venous gas concentrations. Simulation results also showed a pronounced influence of Pth on aortic and venous pressures, as superimposed oscillations can be seen in the output. When applying mechanical ventilation, aortic pressure clearly drops because the chest is no longer expanding actively and the positive intrathoracic pressure presses on the pulmonary vessels and cardiac muscle (Fig. 5). This phenomenon also leads to a decrease in blood oxygen concentration (from 0.1932 l(STPD)/l to 0.1927 l(STPD)/l). Although the application of PEEP causes an even larger decrease in aortic pressure, the change in oxygen concentration was buffered by an increase in tidal volume. Venous pressure showed only a slight reaction to PEEP. The unphysiological reaction of cardiac output to intrathoracic pressure is due to the lack of a cardiovascular controller in the 14-compartment model. The 19-compartment model by Leaning et al., with its implemented controller, was less sensitive to intrathoracic pressure.

Fig. 5 Comparison of aortic pressure with spontaneous breathing and mechanical ventilation (ZEEP); y axis: aortic pressure [mmHg], x axis: time [s]

In order to test the model delay time with respect to changes in simulation parameters, alterations in ventilation frequency were applied to a model combination consisting of no respiratory mechanics (sinusoidal flow assumption), tidal breathing in the lungs, 2-compartment gas exchange and continuous cardiac output. The simulation results were compared to the data collected by Jensen et al. [8]. In this experiment, changes in end-tidal CO2 (etCO2) following alterations in ventilation frequency were measured on patients undergoing general anaesthesia with endotracheal intubation. Ventilation was started with 12 or 14 breaths per minute and was altered following the protocol shown in Fig. 6 when etCO2 had
reached a new equilibrium. Simulation was performed using a predefined sequence of ventilation frequencies in sinusoidal form following the protocol specified in the publication. Thus ventilation frequency changed in the following order: 12/min – 14/min – 10/min – 16/min – 8/min. Results (Fig. 6) showed that the model delay time is mostly consistent with the data collected by Jensen et al. [8].

Fig. 6 Alveolar partial pressure of CO2 following alteration in ventilation frequency compared with etCO2 data taken by Jensen et al. [8] (top panel: original patient data, etCO2 [mmHg]; bottom panel: fitted simulation data, PACO2 [mmHg]; x axes: time [min])
IV. CONCLUSIONS

As demonstrated in the results section, the chosen way of combining the separate submodels works as expected. All submodels are invoked at the same time step, so chronological inaccuracies are avoided. Furthermore, the model combinations show simulation output that is consistent with physiological data and delay times. The interface arrangement can always be extended easily by more detailed submodels and parameters, or even new model families. Via the graphical user interface the user is able to select the desired submodels and specify basic parameters easily. Despite the promising results concerning the modelling of complex physiological systems, one must also keep in mind that parameter identification becomes even more difficult. Due to interactions, each submodel, as an integral part of the complete model system, may respond differently to the same input stimulus than when run separately. Nevertheless, the fitting process may profit from fitting the submodels separately: the parameters derived from the separate fitting can be valuable for choosing the initial values of the final fitting process.

Another disadvantage is the temporal complexity of simulation runs, which rises rapidly with increasing model detail. This may be a handicap for its use as a prediction tool for finding the optimal ventilation strategy. To improve simulation speed, the source code setup has to be re-evaluated and optimized. One aspect for optimization could be the applied ODE solver. As all submodels are invoked at the same time step, one submodel with stiff behaviour can slow down the simulation of the overall model system. If the simulation step size could be adapted individually for every submodel, simulation speed would increase significantly.
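As a hedged illustration of this point (toy dynamics, not the actual Autopilot-BT submodels), the snippet below times a non-stiff and a stiff MATLAB solver on a combined system containing one artificially stiff component:

```matlab
% Toy demonstration of the stiffness issue: one stiff component forces a
% non-stiff solver (ode45) to take tiny steps for the whole system, while
% a stiff solver (ode15s) handles it quickly. Illustrative only.
lambda   = -1e4;                                  % stiff toy submodel
combined = @(t, x) [-0.5 * x(1);                  % slow submodel
                    lambda * (x(2) - sin(t))];    % stiff submodel
tic; [~, ~] = ode45(combined,  [0 1], [1; 0]); tOde45  = toc;
tic; [~, ~] = ode15s(combined, [0 1], [1; 0]); tOde15s = toc;
fprintf('ode45: %.3f s, ode15s: %.3f s\n', tOde45, tOde15s);
```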
REFERENCES

[1] S. Lozano, K. Möller, A. Brendle et al., "AUTOPILOT-BT: A system for knowledge and model based mechanical ventilation," Technol Health Care, vol. 16, no. 1, pp. 1-11, 2008.
[2] L. Chiari, G. Avanzolini, and M. Ursino, "A comprehensive simulator of the human respiratory system: validation with experimental and simulated data," Ann Biomed Eng, vol. 25, no. 6, pp. 985-99, Nov-Dec 1997.
[3] M. Sharan and S. Selvakumar, "A mathematical model for the simultaneous transport of gases to compute blood carboxyhaemoglobin build-up due to CO exposures: application to the end-expired breath technique," Environ Pollut, vol. 105, no. 2, pp. 231-242, 1998.
[4] H. Benallal, C. Denis, F. Prieur et al., "Modeling of end-tidal and arterial PCO2 gradient: comparison with experimental data," Med Sci Sports Exerc, vol. 34, no. 4, pp. 622-9, Apr 2002.
[5] T. Parlikar and G. Verghese, "A simple cycle-averaged model for cardiovascular dynamics," Conf Proc IEEE Eng Med Biol Soc, vol. 5, pp. 5490-4, 2005.
[6] M. Danielsen and J. T. Ottesen, "A Cardiovascular Model," in Applied Mathematical Models in Human Physiology (SIAM Monographs on Mathematical Modeling and Computation), J. T. Ottesen, M. S. Olufsen and J. K. Larsen, eds., Philadelphia: Society for Industrial and Applied Mathematics, 2004.
[7] M. S. Leaning, H. E. Pullen, E. R. Carson et al., "Modelling a complex biological system: the human cardiovascular system - 1. Methodology and model description," Transactions of the Institute of Measurement and Control, vol. 5, pp. 71-86, 1983.
[8] M. C. Jensen, S. Lozano, D. Gottlieb et al., "An evaluation of end-tidal CO2 change following alterations in ventilation frequency," in MBEC, Antwerp, 2008.
Author: Jörn Kretschmer
Institute: Furtwangen University
Street: Jakob-Kienzle-Straße 17
City: 78054 Villingen-Schwenningen
Country: Germany
Email:
[email protected]
Random Forest-Based Classification of Heart Rate Variability Signals by Using Combinations of Linear and Nonlinear Features
Alan Jovic1, Nikola Bogunovic1
1 Faculty of Electrical Engineering and Computing, University of Zagreb/Department of Electronics, Microelectronics, Computer and Intelligent Systems, Zagreb, Croatia
Abstract— The goal of this paper is to assess various combinations of heart rate variability (HRV) features for the successful classification of four different cardiac rhythms. The rhythms include: normal, congestive heart failure, supraventricular arrhythmia, and any arrhythmia. We approach the problem of automatic cardiac rhythm classification from HRV by employing several feature schemes. The schemes are evaluated using the random forest classifier. We extracted a total of 78 linear and nonlinear features. The highest results were achieved for normal/supraventricular arrhythmia classification (93%). A feature scheme consisting of time domain (SDNN, RMSSD, pNN20, pNN50, HTI), frequency domain (Total PSD, VLF, LF, HF, LF/HF), SD1/SD2 ratio, Fano factor, and Allan factor features demonstrated very high classification accuracy, comparable to the results for all extracted features. The results show that nonlinear features have only a minor influence on overall classification accuracy.
Keywords— heart rate variability, ECG, linear features, nonlinear features, random forest
I. INTRODUCTION

Heart rate variability (HRV) analysis examines fluctuations in the sequence of cardiac interbeat (RR) intervals, usually obtained from electrocardiogram (ECG) recordings. It allows us to assess how these fluctuations can be employed in detecting the presence of cardiovascular diseases [1]. A decrease in HRV has been associated with old age as a result of progressive autonomic system dysfunction. Cardiac dysfunction is often manifested by systematic changes in the variability of the RR interval sequence relative to that of a normal rhythm [2]. HRV is analyzed by using both linear and nonlinear features. Linear features are mostly oriented toward time and frequency characterization of the RR interval series [3]. The field of nonlinear analysis of biological rhythms is a relatively new area of scientific exploration. A pioneering work by [4] introduced the concept of nonlinear dynamics into the field of cardiology. He showed that healthy physiological systems have fractal complexity, whereas unhealthy biological systems lack the nonlinear properties and are marked by periodic dynamics and loss of the ability to adapt. Several authors later demonstrated the existence of nonlinear components in HRV [5, 6]. The author of [6] pointed out that linear analysis using time and frequency features is inadequate for obtaining complete information about HRV. Regarding the nature of the HRV series, the author of [2] showed that the HRV series is nonlinear and stochastic. Nevertheless, authors continue to successfully utilize features stemming from nonlinear dynamics that rely on the assumption of underlying determinism. Nonlinear features are mostly used in combination with linear features [7]. For short-term analysis of cardiac rhythms, the wavelet transform, a specific type of time-frequency localization, gives satisfying results [2, 8]. It is the purpose of this work to demonstrate the efficacy of several different feature schemes in a difficult classification setting. We want to examine how much predictive potential the linear and nonlinear features possess when classification of a large number of different patients' rhythms is required. Our purpose is to determine the classification potential of these combinations of features.
II. METHODS AND MATERIAL
A. Cardiac records

We collected several hundred patient records from PhysioBank, a web database collection of biological signals [9]. In Table 1, we list the analyzed records. We decided to extract features for the following cardiac rhythms: normal, any arrhythmia, supraventricular arrhythmia (SVA), and congestive heart failure (CHF). The primary reason why these cardiac rhythms were analyzed, and not others, is the sufficient number of available records needed to establish valid conclusions. We analyzed 500 RR intervals at a time, which corresponds to about five minutes of recording. An overlapping window was used that covers half of the RR intervals of the preceding window, i.e. intervals 1-500, 251-750, 501-1000, ... were analyzed. In this way, we extracted a large number of feature vectors. There were a few nonexistent or invalid records within some of the databases listed in Table 1 that were omitted
from the analysis. A total of 2216 feature vectors were extracted from the patients' annotated records.

Table 1. Patient records
Heart rhythm (total no. of feature vectors) | PhysioBank database | ECG annotation records | RR intervals analyzed
Normal heart rhythm (665) | MIT-BIH Normal Sinus Rhythm Database, Normal Sinus Rhythm RR Interval Database | MIT-BIH: 16265-19830; NSR: nsr001-nsr054 | 1-500, 251-750, 501-1000, 751-1250, 1001-1500, 1251-1750, 1501-2000, 1751-2250, 2001-2500
Any arrhythmia (492) | MIT-BIH Arrhythmia Database, CAST RR Interval Sub-Study Database | MIT-BIH: 100-234; CAST: e001a-e130a, f001a-f130a | 1-500, 501-1000
Supraventricular arrhythmia (312) | MIT-BIH Supraventricular Arrhythmia Database | 800-894 | 1-500, 251-750, 501-1000, 751-1250
Congestive heart failure (747) | BIDMC Congestive Heart Failure Database, Congestive Heart Failure RR Interval Database | BIDMC: chf01-chf15; CHF RR: chf201-chf219 | 1-500, 251-750, 501-1000, ..., 3751-4250, 4001-4500

B. Features

We implemented many linear and nonlinear features for HRV described in the literature. The full list is comprehensive (78 features) and is given in Table 2, together with references to implementation details and the partition into schemes. Advanced sequential trend analysis (ASTA) is not covered in the literature. It extends the idea of describing RR interval prolongation and shortening [18] with a more detailed specification of the degree of pace change. In ASTA, two out of the four possible quadrants are analyzed in detail: prolongation/prolongation (+/+) and shortening/shortening (-/-). The features include: no change in RR interval length, small change, medium change, large change
and very large change (nine features in total). Additionally, the total number of points in each of the two quadrants is taken (two additional features). Each ASTA feature is the number of RR interval changes falling in one of these subsections divided by the number of RR interval changes in all four quadrants. Carnap 1D entropy has not been previously applied to HRV or ECG analysis. We implemented the algorithm proposed by [15] for time series analysis and allowed for Carnap entropy extraction on multiple scales, similar to sample entropy [14]. Nonlinear chaos attractor features possess a parameter named interval T (lag) that determines which pairs of RR intervals are used in the calculation (e.g. if T=2, one RR interval between two RR intervals is skipped). Authors [17] showed that if multiple intervals are taken into consideration, the classification accuracy improves. A minimal sketch of the ASTA counting idea is given below.
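In the sketch, the magnitude bin edges and the split of the nine binned features between the two quadrants are our assumptions for illustration, since the text does not specify them.

```matlab
% Sketch of ASTA feature counting: pairs of consecutive RR changes
% (dRR(i), dRR(i+1)) falling in the +/+ or -/- quadrant are binned by the
% magnitude of the pace change and normalized by the number of pairs in
% all four quadrants. Bin edges (in seconds) are assumed, not from [18].
function feats = asta_sketch(rr)
    d  = diff(rr(:));                     % RR interval changes
    p1 = d(1:end-1);  p2 = d(2:end);      % consecutive change pairs
    nAll  = numel(p1);                    % pairs in all four quadrants
    edges = [0 0.005 0.02 0.05 0.1 inf];  % assumed magnitude bins

    pp = p1 > 0 & p2 > 0;                 % prolongation/prolongation (+/+)
    ss = p1 < 0 & p2 < 0;                 % shortening/shortening (-/-)

    hPP = histcounts(abs(p2(pp)), edges) / nAll;
    hSS = histcounts(abs(p2(ss)), edges) / nAll;

    % 9 binned features (assumed split 5 + 4) plus 2 quadrant totals = 11
    feats = [hPP(1:5), hSS(2:5), sum(pp) / nAll, sum(ss) / nAll];
end
```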
Table 2. Feature schemes
Scheme | Features in scheme | No. of features | Description | Comment
1 | SDNN [3], pNN20 [3, 10], pNN50 [3, 10], RMSSD [3], HTI [3] | 5 | Linear, time domain |
2 | PSD, VLF, LF, HF, LF/HF [3] | 5 | Linear, frequency domain |
3 | Linear (time domain), linear (frequency domain) | 10 | Linear |
4 | Linear, SD1/SD2 ratio [11], Fano factor [2], Allan factor [2] | 13 | Linear + nonlinear |
5 | Spatial filling index (SFI), correlation dimension (D2), central tendency measure (CTM) [12] | 3 | Nonlinear chaos attractor features | Time interval (lag) T = {1, 2, 5, 10, 20}, reconstruction dimension d = 2
6 | Approximate entropy (ApEn1-ApEn4) [12], maximum approximate entropy (MaxApEn) [13], r for MaxApEn, multiscale sample entropy (SampEn1-SampEn20) [14], multiscale Carnap 1D entropy (Carnap1-Carnap20) [15] | 46 | Entropies | Dimension m = 2 for ApEn and SampEn
7 | Advanced sequential trend analysis (ASTA): ASTA1-ASTA11 | 11 | ASTA |
8 | Detrended fluctuation analysis (DFA): DFA 5, DFA 7, DFA 10, DFA 15, DFA 20 [16] | 5 | DFA |
9 | SFI, D2, CTM, ApEn1-ApEn4, SDNN, pNN20, RMSSD, HTI [17] | 11 | Features combination | T = 1, d = 2, m = 2
10 | All features | 78 | Advanced linear + nonlinear chaos attractor features (T = 1) + entropies + ASTA + DFA |
Therefore, for the analysis of scheme number 5 from Table 2, we extracted five times that amount of feature vectors, one set for each T = {1, 2, 5, 10, 20}: 11,080 feature vectors in total. Most of the feature extraction algorithms were implemented in a Java-based platform, ECG Chaos Extractor [12]. The only exceptions are the spectral (frequency) features, which were extracted from the RR interval series in Matlab using an autoregressive (AR) model of order 12.

C. Classification procedure

In order to classify the feature vectors with high accuracy, we used a state-of-the-art classifier named random forest (RF) [19]. A random forest is composed of a large number of decision trees that choose their splitting features from a random subset of k features at each internal node. The best split according to the Gini index is taken among these randomly chosen features, and the trees are built without pruning. Feature vectors are sampled using the bootstrap procedure. RF ensures at the same time the smallest obtainable bias and very low data variance, which often gives excellent classification results. The random forest was constructed with 40 trees for each feature scheme. A 10x10-fold stratified cross-validation testing procedure was used in order to obtain representative classification accuracy. The analysis was performed in the Weka system, version 3.6.1 [20]. Four distinct classification tasks were pursued: simultaneous classification of all four examined cardiac rhythms; classification between normal rhythm and any arrhythmia; classification between normal rhythm and supraventricular arrhythmia; and classification between normal
rhythm and congestive heart failure.
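The experiments themselves were run in Weka; purely as an illustration of the classifier configuration, a minimal analogous setup in MATLAB (Statistics and Machine Learning Toolbox) could look as follows, with placeholder data standing in for the extracted feature vectors:

```matlab
% Illustrative MATLAB analogue of the classification setup (the actual
% experiments used Weka's random forest): 40 unpruned trees, k features
% sampled at each split, bootstrap-sampled training vectors. X and y are
% placeholders for the extracted feature matrix and rhythm labels.
rng(1);                                      % reproducibility
X = randn(200, 13);                          % placeholder feature vectors
y = categorical(randi(2, 200, 1));           % placeholder rhythm labels

k = max(1, floor(sqrt(size(X, 2))));         % features sampled per split
forest = TreeBagger(40, X, y, ...
    'Method', 'classification', ...
    'NumPredictorsToSample', k, ...
    'OOBPrediction', 'on');

% Out-of-bag error as a quick stand-in for 10x10-fold cross-validation
oobErr = oobError(forest);
fprintf('OOB accuracy: %.1f %%\n', 100 * (1 - oobErr(end)));
```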
III. RESULTS
Classification results are presented in Fig. 1. Schemes 4, 9, and 10 give the best results. The linear + nonlinear features of scheme 4 show almost as good classification accuracy as all features taken collectively (scheme 10). Also, the nonlinear features in scheme 4 have only a minor influence on classification accuracy (when compared to scheme 3). The combination of linear and nonlinear features recently proposed by authors [17] (scheme 9) can, for all practical purposes, be replaced by a combination of linear features, i.e. scheme 3. Even the simple scheme 1, which contains only time-domain linear measures, is comparable to scheme 9. The spectral features of scheme 2 also demonstrate high classification capacity, comparable to the time-domain features. The nonlinear chaos attractor and entropy features failed to achieve high classification rates, probably due to inspection of only a single dimension (d=2 and m=2). DFA shows the worst results for all classification tasks and is not suitable for classification of the examined rhythms. The results for ASTA are fair considering that it was the only method in scheme 7. Although scheme 10 provides the most accurate solution, the combination of 78 features is highly impractical regarding the description of the underlying
rhythm. The results show that the most accurate classification is achieved for discerning normal rhythms from supraventricular arrhythmia (around 93% for scheme 10), which indicates that normal heart rhythm and supraventricular ectopic beats differ significantly.

Fig. 1. Classification of HRV records using feature schemes
IV. DISCUSSION
One of the major problems in the classification of HRV signals is the small number of abnormal heart beats present in most records. This fact severely limits the application of many analytical methods, because an abnormal rhythm seldom differs significantly from a normal one. In this work, we analyzed the records disregarding the actual number of abnormal beats in each record. Further work should concentrate on finding the minimal number of abnormal beats in a record needed to successfully apply the classification schemes. The results of ASTA should be investigated further. We plan to extend the trend change algorithm with a procedure that takes into account not only two consecutive changes in RR interval duration, but three. In this way, more information about abnormal beats might prove useful for automatic classification tasks. We suppose that the nonlinear chaos attractor features and entropy measures do not demonstrate high classification accuracy due to the calculation of only a single, low dimension (m=2 and d=2). Research performed by other authors has almost always included feature calculations over a range of dimensions.
V. CONCLUSION
We have assessed the classification capabilities of several combinations of HRV features on a large sample of cardiac records for four different cardiac rhythms. The results show that the combination of time and frequency domain linear features and several nonlinear features (SD1/SD2, Fano factor, and Allan factor) gives high classification accuracy. The other examined nonlinear features have very little influence on classification accuracy. The overall results suggest that linear features carry the most weight in all four classification tasks, with only a minor improvement obtained by adding some of the nonlinear features to the feature set. Further work has to determine which nonlinear features should be used together with standard time and frequency domain linear features in HRV analysis in order to obtain the best results.
REFERENCES

1. Kitney R, Linkens D, Selman A et al. (1982) The interaction between heart rate and respiration: part II – nonlinear analysis based on computer modeling. Automedica 4:141–153
2. Teich MC, Lowen SB, Jost BM et al. (2001) Heart Rate Variability: Measures and Models. Nonlinear Biomed Sig Proc Vol. II, Dynamic Analysis and Modeling, IEEE Press, New York, 159–213
3. Task Force of The European Society of Cardiology and The North American Society of Pacing and Electrophysiology (1996) Heart rate variability guidelines: Standards of measurement, physiological interpretation, and clinical use. European Heart Journal 17:354–381
4. Goldberger AL (1996) Non-linear dynamics for clinicians: chaos theory, fractals, and complexity at the bedside. Lancet 347:1312–1314
5. Iyengar N, Peng CK, Goldberger AL et al. (1996) Age-related alterations in the fractal scaling of cardiac interbeat interval dynamics. Am J Physiol 271:1078–1084
6. Braun C, Kowallik P, Freking A et al. (1998) Demonstration of Nonlinear Components in Heart Rate Variability of Healthy Persons. Am J Physiol Heart Circ Physiol 275(5):1577–1584
7. Asl BM, Setarehdan SK, Mohebbi M (2008) Support vector machine-based arrhythmia classification using reduced features of heart rate variability signal. Artif Intell Med 44(1):51–64
8. Chen SW (2002) A wavelet-based heart rate variability analysis for the study of nonsustained ventricular tachycardia. IEEE Trans Biomed Eng 49(7):736–742
9. PhysioBank, at http://www.physionet.org
10. Hutchinson TP (2003) Statistics and graphs for heart rate variability: pNN50 or pNN20? Physiol Meas 24(3):N9–N14
11. Kitlas A, Oczeretko E, Kowalewski M et al. (2005) Nonlinear dynamics methods in the analysis of the heart rate variability. Roczniki Akademii Medycznej w Białymstoku, Annales Academiae Medicae Bialostocensis 50(Suppl. 2)
12. Jovic A, Bogunovic N (2007) Feature Extraction for ECG Time-Series Mining Based on Chaos Theory. Proc. 29th Int. Conf. Inf. Tech. Interfaces, ITI 2007, Cavtat, Croatia, 2007, pp. 63–68
13. Chon KH, Scully CG, Lu S (2009) Approximate Entropy for All Signals. IEEE Eng Med & Biol Mag 28(6):18–23
14. Costa M, Goldberger AL, Peng CK (2005) Multiscale entropy analysis of biological signals. Phys Rev E 71:021906
15. Jovic F, Krmpotic D, Jovic A et al. (2008) Information Content of Process Signals in Quality Control. IPSI BgD Transactions on Internet Research 4(2):10–16
16. Acharya RU, Kannathal N, Krishnan SM (2004) Comprehensive analysis of cardiac health using heart rate signals. Physiol Meas 25(5):1139–51
17. Jović A, Bogunović N (2009) Feature Set Extension for Heart Rate Variability Analysis by Using Non-linear, Statistical and Geometric Measures. Proc. 31st Int. Conf. Inf. Tech. Interfaces, ITI 2009, Cavtat, Croatia, 2009, pp. 35–40
18. de Carvalho JLA, Rocha AF, Nascimento FA et al. (2002) Development of a Matlab software for analysis of heart rate variability. Proc. 6th Int. Conf. on Signal Processing, Beijing, China, 2002, vol. 2, pp. 1488–91
19. Breiman L (2001) Random forests. Mach Learn 45:5–32
20. Hall M, Frank E, Holmes G, Pfahringer B et al. (2009) The WEKA Data Mining Software: An Update. SIGKDD Explor 11(1):10–18

Author: Alan Jovic
Institute: Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb, HR-10000
Country: Croatia
Email:
[email protected]
Validation of MRS metabolic markers in the classification of brain gliomas and their correlation to energy metabolism
M.G. Kounelakis1, M.E. Zervakis1, G.J. Postma2, L.M.C. Buydens2, A. Heerschap3 and X. Kotsiakis4
1 Dept. of Electronic & Computer Engineering, Technical University of Crete, Chania, Greece
2 Dept. of Analytical Chemistry, Radboud University, Nijmegen, the Netherlands
3 Dept. of Radiology, University Medical Center Nijmegen, Nijmegen, the Netherlands
4 Dept. of Neurosurgery, General Hospital of Chania, Chania, Greece
Abstract- The aim of this study is to validate the significance of recently identified MRS (Magnetic Resonance Spectroscopy) ratio-type metabolic markers used in brain glioma classification, through the energy metabolism profile of these complex tumors. It is an attempt to integrate the metabolic knowledge extracted from the MRS analysis of the patient gliomas provided with proteomic knowledge derived from the metabolic enzymes that participate in the energy production process called glycolysis. The results indicate that the increased levels of lactate, alanine and fatty acids measured from MRS spectra in gliomas are justified by the behavior of the metabolic enzymes, thus confirming that the ratio-type metabolic markers are highly significant for the discrimination of such brain tumors.
Keywords- Energy metabolism, glycolysis, brain gliomas

I. INTRODUCTION

Malignant, rapidly-growing tumor cells, including brain gliomas, typically have very high glycolytic rates compared to their counterparts in normal tissue. There are two common explanations for this fact. The classical explanation is that poor blood supply to tumors causes local depletion of oxygen. The other explanation stems from the well-known hypothesis of Otto Warburg, who claimed that most cancer cells predominantly produce energy by glycolysis followed by lactic acid fermentation in the cytosol, rather than by oxidation of pyruvate in mitochondria like most normal cells [1]. This occurs even if oxygen is plentiful. Warburg postulated that this change in metabolism is the fundamental cause of cancer [2], a claim now known as the Warburg effect. This effect may simply be a consequence of damage to the mitochondria in cancer, or an adaptation to low-oxygen environments within tumors, or a result of cancer genes shutting down the mitochondria because they are involved in the cell's apoptosis program, which would otherwise kill cancerous cells. The Warburg effect may also be associated with cell proliferation. Since glycolysis provides most of the building blocks required for cell proliferation, it has been proposed that cancer cells (and normal proliferating cells) may need to activate glycolysis despite the presence of oxygen in order to proliferate [3].
Fig. 1. Glycolysis and lipogenesis processes

Glycolysis (a sugar-splitting process), as shown in Fig. 1, involves a series of biochemical reactions in which glucose is broken down to pyruvate with the release of usable energy in the form of ATP (adenosine triphosphate) molecules. Under aerobic conditions, the dominant product in most tissues is pyruvate and the pathway is known as aerobic glycolysis. When oxygen is depleted, as for instance in the hypoxic-necrotic tumorous tissues of gliomas, the dominant glycolytic product in many tissues is lactate and the process is known as anaerobic glycolysis. Thus, the lactate metabolite is a sensitive indicator of anaerobic glycolysis and reduced cellular oxygenation in living tissues. Along similar lines, alanine, in conjunction with lactate, increases
in tissues during hypoxia; it is made by transamination of pyruvate to prevent further increases in lactate [4]. Finally, fatty acids (lipids) are an important source of energy too: excess glucose can be stored efficiently as fat, and all cell membranes are built up of phospholipids, each of which contains fatty acids [5]. Therefore, reliable estimates of the levels of the lactate, alanine and lipid resonances, as indicators of glycolysis, are of special interest for the clinical management of brain glioma patients. The metabolic pathway of glycolysis is a series of chemical reactions catalyzed by specific metabolic enzymes. These glycolytic enzymes have a direct relation to the metabolites mentioned above. The main goal of this study is to correlate the knowledge derived from these glycolytic enzymes with the information derived from the statistical analysis of the above-mentioned metabolites, thus validating the diagnostic importance of recently identified ratio-type MRS markers [6,7], in which these metabolites dominantly participate.

II. MATERIALS AND METHODS
Among the glycolytic enzymes that participate in the energy production process, necessary for both healthy and cancerous cells, HK (hexokinase, EC 2.7.1.1), PK (pyruvate kinase, EC 2.7.1.40) and LDH (lactate dehydrogenase, EC 1.1.1.27) are the most significant, as can be observed from the glycolysis metabolic pathway presented in the KEGG (Kyoto Encyclopedia of Genes and Genomes) and BRENDA (The Comprehensive Enzyme Information System) enzyme databases. The EC number corresponds to the Enzyme Commission classification. HK catalyzes the first reaction of the glycolytic pathway that converts glucose to pyruvate, even when blood glucose levels are relatively low. PK catalyzes the last step of glycolysis, in which pyruvate and ATP are formed. Finally, LDH converts pyruvate to lactate when oxygen is absent or in short supply (anaerobic process). The strategy followed in order to reveal the interrelation of these enzymes with the metabolites mentioned in the introduction consists of two steps:
1. Identify (through a literature search) the bioenergetic activity of each of the glycolytic enzymes in gliomas, and
2. Relate this activity to the metabolic behaviour of pyruvate, lactate, alanine and fatty acids (lipids) by measuring their peak-area levels in a given dataset of short echo magnetic resonance spectroscopy imaging (MRSI) data from 21 glioma patients.
The dataset provided consists of short echo magnetic resonance spectroscopy imaging (MRSI) data from 21 glioma patients, as presented in Table 1. The two-dimensional MRSI data was collected by the Radboud University and contains 303 pre-processed 1H-MRSI volume elements (voxels) corresponding to 303 spectra. Each patient case had passed strict quality control and validation procedures, including consensus histopathologic determination.

Table 1 Analysis of the MRSI dataset (GR2: glioma grade 2, GR3: glioma grade 3, GR4: glioma grade 4)
Tissue type | No of subjects | No of voxels
GR2 | 10 | 176
GR3 | 4 | 57
GR4 | 7 | 70
Total | 21 | 303
The peak areas (Fig. 2) were obtained by peak integration of the pyruvate (at 2.37 ppm), alanine (at 1.48 ppm), lactate (at 1.33 ppm) and lipid resonances (at 0.90 and 1.30 ppm) [7].
Fig. 2. Spectrum obtained from a voxel. Y axis: peak heights (proportional to metabolites concentration). X axis: frequency (position) in parts per million. Pyr (Pyruvate), Ala (Alanine), Lac (Lactate), Lipids (mobile lipids) are the metabolites observed. The shaded area corresponds to the areas under the peaks
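As a toy illustration of this peak-integration step (not the authors' actual processing pipeline), the sketch below integrates a spectrum over fixed ppm windows around each resonance; the window half-width and all names are assumptions:

```matlab
% Sketch of peak-area estimation by numerical integration over fixed ppm
% windows around each metabolite resonance. The +/-0.05 ppm half-width is
% an assumed value; the lipid resonance at 1.30 ppm overlaps lactate and
% would need dedicated fitting, so only 0.90 ppm is used for lipids here.
function areas = peak_areas_sketch(ppm, spec)
    centers = [2.37, 1.48, 1.33, 0.90];   % Pyr, Ala, Lac, lipids [ppm]
    halfw   = 0.05;                       % assumed half-width [ppm]
    areas   = zeros(size(centers));
    for i = 1:numel(centers)
        win = ppm >= centers(i) - halfw & ppm <= centers(i) + halfw;
        % abs() because the ppm axis is conventionally plotted descending
        areas(i) = abs(trapz(ppm(win), spec(win)));
    end
end
```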
III. RESULTS
Following the above-mentioned strategy, an investigation of recent and past studies involving the glycolytic activities of these 3 enzymes has been performed in order to record their metabolic behaviour in brain gliomas. The bioenergetic activities of these enzymes are presented in Table 2. It can be observed that the activity of
the enzymes increases as the tumor becomes more malignant [8-10].
Table 2 Bioenergetic activities of HK, PK and LDH in gliomas (the activity of all three enzymes rises with tumor grade, from GR2 (low grade) through GR3 (intermediate) to GR4 (high grade): HK - Rise; PK - Rise; LDH - Rise)

The following step is to measure, in the 3 classes of Table 1, the mean values of the areas under the peaks of pyruvate, lactate, alanine and lipids from the given MRSI dataset. The comparison of the mean values is shown in Fig. 3. The most important observation in this figure is the rapid increase of both lactate and lipids in GR4 compared to their levels in GR3 and GR2. This fact underlines the lack of oxygen in this tumorous tissue. Alanine and pyruvate are also elevated as tumor grade increases, but their levels are considerably lower than those of lactate and lipids.

Fig. 3. The mean peak-area values of the 4 metabolites measured in the 3 classes provided

Furthermore, we estimated the statistical significance of the differences between these means. These observations are presented in Table 3. Relating the enzyme activities of HK, PK and LDH to the statistical findings on the MRS metabolite levels in Table 3, we can deduce important interactions, which are explained in the Discussion section.

Table 3 Statistical significance of the differences in the mean peak-area values of the 4 metabolites (HS: p value < 0.01; S: p value < 0.05)
Pyruvate | GR2 vs GR3: S | GR2 vs GR4: S
Lactate | GR2 vs GR3: HS | GR2 vs GR4: HS | GR3 vs GR4: HS
Alanine | GR2 vs GR3: S | GR2 vs GR4: HS | GR3 vs GR4: HS
Lipids | GR2 vs GR3: HS | GR2 vs GR4: HS | GR3 vs GR4: HS
IV. DISCUSSION
As mentioned in the introduction, brain tumors, and gliomas in particular, develop high glycolytic rates. This phenomenon has an intracellular impact on the glycolytic enzymes' activities that control the ATP energy molecules, in other words the energy flux in the cell [11]. Due to this, HK enzyme activity is increased, as shown in Table 2. This behavior is justified by the fact that HK, particularly the HK-II isoform, plays a critical role in initiating and maintaining the high glucose catabolic rates of rapidly growing tumors [12]. PK, as the last enzyme of the glycolytic chain, also increases with tumor grade [9], as shown in Table 2. This activity forces the levels of pyruvate, as an end glycolytic product, to rise accordingly, as shown in Fig. 3. Furthermore, depending on the energy needs, the pyruvate metabolite is converted to either lactate or lipids. Lactate is an energy-rich molecule which, given some oxygen, can be converted back to pyruvate and so enter the mitochondria to generate a large amount of ATP, but also lipids, as shown in Fig. 1. In highly hypoxic-necrotic areas of the tumor, as in GR4, the brain neurons do not use glucose at all: glucose is converted to lactate by the astrocytes, and it is lactate which feeds directly into the neuronal mitochondria via pyruvate. So lactate with oxygen is a potent combination for ATP generation. In the aerobic bulk of the tumour, glucose can be burned via pyruvate in the mitochondria and there is no need for lactate production. Lactate dehydrogenase (LDH) is also a key metabolic enzyme, catalyzing pyruvate into lactate; it is excessively expressed by tumor cells [10, 13], causing an increase in the lactate levels, as also shown in Fig. 3. Furthermore, alanine is also produced during hypoxia, by transamination, which transfers an amino group from another amino acid to pyruvate [4]. Pyruvate levels, when compared with those of lactate and lipids in the three types of tumors, vary significantly, as shown in Fig. 3. This is expected since pyruvate is
immediately converted either to lactate after the glycolytic process (Warburg effect), through the fermentation process, or to lipids through the lipogenesis process. Lipid and lactate levels then increase rapidly as the malignancy increases. The highly significant statistical differences in their mean values also prove this tendency. Furthermore, alanine's mean values present a highly significant difference between GR2 and GR4, but also between GR3 and GR4, due to the hypoxia observed in high grade tumors. Based on these observations, and the fact that the lactate, alanine and lipid metabolites are the main regulators of the energy flux within the cancerous cells, we can conclude that they should also be significant in the discrimination of glioma types and grades. Since glycolysis is of vital importance for brain cancer cell survival and proliferation, understanding the metabolic activities of the enzymes mentioned and their related metabolites in different grades of gliomas can help us identify reliable markers for diagnostic purposes. Previous research accomplished by our team has proved the significance of these metabolites in glioma discrimination [6, 7]. In these studies we have shown that peak-area ratio-type markers that involve lactate, alanine and lipids play a crucial role in grade and type classification of such complex tumors. More specifically, the ratio markers Lac/Cre, Ala/Cre, Ala/S, Lips/Cho and Lips/Cre have been found to provide a classification accuracy of 84% in GR2 vs GR3 gliomas. Furthermore, the S variable, used as a denominator in this binary classification but also in GR3 vs GR4 and GR2 vs GR4, includes the peak-area levels of these metabolites.
V. CONCLUSIONS
The study of bioenergetics in gliomas is a very promising field for the clinical and biological management of complex brain tumors. Since the Warburg hypothesis, a lot of research has been directed towards identifying how the metabolic enzymes' activities, and their relation to the expression of certain metabolites related to glycolysis and mitochondrial respiration, affect tumor cell survival and proliferation. Adopting this hypothesis, in this study we attempted to identify the way in which important metabolic enzymes such as HK, PK and LDH are related to the metabolic behavior and functionality of lactate, alanine and lipids in 3 different types of gliomas in a dataset of 21 patients. The significant influence of these metabolites on glioma classification was also confirmed by recent studies of our team, where it is clear that diagnostic markers that contain these metabolites provide high classification accuracies.
REFERENCES

1. Kim J.W, Dang C.V. (2006) Cancer's molecular sweet tooth and the Warburg effect, Cancer Res., 66:8927–8930.
2. Warburg O (1956) On the origin of cancer cells, Science 123:309–314.
3. Lopez-Lazaro M (2008) The Warburg effect: why and how do cancer cells activate glycolysis in the presence of oxygen?, Anticancer Agents Med Chem., 8:305–312.
4. Ben-Yoseph O, Badar-Goffer R.S, Morris P.G and Bachelard H.S (1993) Glycerol 3-phosphate and lactate as indicators of the cerebral cytoplasmic redox state in severe and mild hypoxia respectively: a 13C- and 31P-n.m.r. study, Biochem J., 291:915–919.
5. Ledwozyw A, Lutnicki K (1992) Phospholipids and fatty acids in human brain tumors, Acta Physiol Hung., 79:381–387.
6. Kounelakis M.G, Zervakis M.E, Blazadonakis M.E, Postma G.J, Buydens L.M.C, Heerschap A and Kotsiakis X (2008) Feature Selection for Brain Tumour Classification using Ratios of Metabolites' Peak Areas from MRSI Data, 6th European Symposium on Biomedical Engineering ESBME, Chania, Greece, 2008, pp. 1-6.
7. Kounelakis M.G, Zervakis M.E, Postma G.J, Buydens L.M.C, Heerschap A and Kotsiakis X (2009) Revealing the metabolic profile of brain tumours for diagnosis purposes, 31st Annual International IEEE EMBS Conference, Minnesota, 2009, pp. 3538.
8. Dominguez J.E, Graham J.F, Cummins C.J, Loreck D.J, Galarraga J, Van der Feen J, DeLaPaz R and Smith B.H (1987) Enzymes of glucose metabolism in cultured human gliomas: Neoplasia is accompanied by altered hexokinase, phosphofructokinase, and glucose-6-phosphate dehydrogenase levels, Metab Brain Dis., 2:17–30.
9. Javalkar V.K, Vinod K.Y, Sharada S, Chandramouli B.A, Subhash M.N, Kolluri V.R (2009) Study of pyruvate kinase activity in human astrocytomas - Alanine-inhibition test revisited, Neurol India, 57:140–142.
10. Baumann F, Leukel P, Doerfelt A, Beier C.P, Dettmer K, Oefner P.J, Kastenberger M, Kreutz M, Nickl-Jockschat T, Bogdahn U, Bosserhoff A.K and Hau P (2009) Lactate promotes glioma migration by TGF-β2-dependent regulation of matrix metalloproteinase-2, Neuro Oncol, 11:368–380.
11. Meixensberger J, Herting B, Roggendorf W, Reichmann H (1995) Metabolic patterns in malignant gliomas, J NeuroOncol., 24:153–161.
12. Mathupala S.P, Rempel A and Pedersen P.L (2004) Aberrant Glycolytic Metabolism of Cancer Cells: A Remarkable Coordination of Genetic, Transcriptional, Post-translational, and Mutational Events That Lead to a Critical Role for Type II Hexokinase, J Bioenerg Biomembr., 29:339–343.
13. Subhash M.N, Rao B.S.S and Shankar S.K (1993) Changes in lactate dehydrogenase isoenzyme pattern in patients with tumors of the central nervous system, Neurochem Int., 22:121–124.
Author: Michail G. Kounelakis
Institute: Technical University of Crete
Street: Kounoupidiana, Chania
City: Chania, Crete
Country: Greece
Email:
[email protected]
Event-Related Synchronization/Desynchronization for Evaluating Cortical Response Detection Induced by Dynamic Visual Stimuli
P.J.G. Da-Silva, A.F.C. Infantosi, and J. Nadal
Biomedical Engineering Program/COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Abstract— In the present work, the Event-Related Synchronization and Desynchronization (ERD/ERS) index was used for evaluating the cortical response to dynamic visual stimulation in a postural control protocol. EEG signals (O1, P3, P4 and O2 derivations) of 33 healthy subjects were acquired during a stabilometric test. The trials were conducted with the subject observing a white wall (WW) and a virtual scene in static (SS) and dynamic (DS) conditions. The power spectrum estimates were compared using the Spectral F-Test (SFT, α = 0.05 with Bonferroni correction) and the ERD/ERS index. The SFT results indicate no difference between the EEG power contributions of WW and SS, and decreased power within the alpha band during DS for all derivations. The ERD/ERS index allowed successful detection of above 83% of the desynchronization within the alpha band during dynamic stimulation, which also promotes postural instability. This finding indicates the potential of the ERD/ERS technique in studies of postural control assessed by dynamic visual stimulation.
Keywords— Dynamic Visual Stimulation, EEG, Event-Related Synchronization/Desynchronization, Spectral F-Test, Virtual Environment.

I. INTRODUCTION

Dynamic visual stimulation has been used in the study of human orthostatic postural control in order to drive the postural instability response [1, 2]. However, not all visual environmental information can be perceived and used to maintain or change the body's spatial position [3]. Although these works have used virtual dynamic visual stimulation, the evoked cortical response has not been assessed. Usually, visual stimulation elicits an evoked response, which is phase-locked to the stimuli and has a smaller amplitude than the background electroencephalogram (EEG) [4]. In addition to this evoked response, paced events also result in time-locked changes of the ongoing EEG [5], i.e. occurring during the stimulation period. Such non-phase-locked activity is often referred to as event-related synchronization (ERS) or desynchronization (ERD), depending on whether there is an increase or decrease in the EEG power spectrum. According to [6], phase-locked and non-phase-locked activities cannot be separated when they are within the same frequency band. As is well known, the Spectral F-Test (SFT) can be used to test whether there is a cortical response to visual stimuli. Recently, Infantosi and Miranda de Sá [6] proposed ERD/ERS indexes based on the SFT and applied them to intermittent photo-stimulation. In the present work, dynamic visual stimulation in a postural control protocol is investigated for assessing the cortical response. The parietal and occipital EEG signals without stimulation are compared to those acquired during static and dynamic virtual stimulation. This is carried out by statistically comparing the spectral estimates of these conditions using both the SFT and the ERD/ERS index.

II. MATERIALS AND METHODS

A. Casuistry

A casuistry of 33 healthy subjects (23 male and 10 female), age ranging from 21 to 45 years, height of 172.7 ± 9.4 cm and mass of 73.3 ± 17.3 kg (mean ± standard deviation), was used in this study. None of the subjects had a history of neurological pathologies, bone, muscle or joint diseases, or equilibrium disorders. An anamnesis was carried out to obtain information about headache, illness, vertigo, eyestrain and the use of corrective lenses or glasses. Nevertheless, subjects using lenses or glasses were included. Moreover, the subjects previously gave their informed consent.
B. Experimental Protocols

The EEG and stabilometric signals were acquired simultaneously under the same environmental conditions for each subject, who stood upright on a force platform. The first trial was conducted without stimulation, with the subject observing a white wall (condition denoted WW) 1 m from the force platform for five minutes. After three minutes of rest in a comfortable chair, a stimulation trial was performed with the subject observing a virtual scene projected at 1.52 x 1.06 m. This scene (Fig. 1), consisting of a room containing a gridded floor (similar to the reverse pattern) and a table with a chair placed in the center, was developed using the IDE Delphi and OpenGL.
Fig. 1 Virtual scene

For carrying out the dynamic visual stimulation (DS), the virtual scene was randomly magnified or reduced at a velocity of 200 cm/s during 250 ms, interspersed with 10 s of the static scene (SS). A set of 100 DS (50 of each condition) was performed, with an SS preceding each DS. The SS and DS scenes were codified by a pulse width (synchronized with the start of exhibiting the scene). The sequence of pulses generates a trigger signal to be used during the signal processing. Both visual (occipital) and associative cortex (parietal) monopolar EEG derivations (according to the International 10/20 System, with bilateral references at the corresponding earlobes) were acquired using the BrainNet - BNT 36 (EMSA, Brazil) at a sampling frequency of 400 Hz. The EEG signals were then filtered by a 4th order low-pass Butterworth with cutoff frequency at 100 Hz (anti-aliasing) and a 2nd order high-pass Butterworth at 0.1 Hz. In the present study, only the EEG signals of the alpha band (8 to 13 Hz) were investigated.

C. EEG Signal Processing

During visual stimulation, the EEG signals of SS and DS were identified and separated using the trigger signal, resulting in 100 segments for each stimulation condition. Taking the final 1 s of EEG signal of each SS (just preceding a DS), the power spectra were estimated using M = 100 epochs for each EEG derivation. For calculating the DS power spectra with the same frequency resolution, since the DS segment duration was 250 ms, a zero-padding procedure was adopted to complete 1 s. For the EEG signals in the WW condition, only the first 100 s were processed. This segment was sectioned into M = 100 epochs of equal duration (1 s) and the WW power spectra were also estimated. The power spectra were estimated by the Discrete Fourier Transform, using the Bartlett periodogram method and Hanning windows ($\tilde{P}_{xx,m}(f)$, frequency resolution of 1 Hz). Now, denoting the SS EEG as y[k] and the DS EEG as x[k], the Spectral F-Test was estimated as [7]:

$$\mathrm{SFT}(f) = \frac{\hat{P}_{yy}(f)}{\hat{P}_{xx}(f)} = \frac{\dfrac{1}{M_y}\sum_{m=1}^{M_y}\left|\tilde{Y}_m(f)\right|^2}{\dfrac{1}{M_x}\sum_{m=1}^{M_x}\left|\tilde{X}_m(f)\right|^2} \qquad (1)$$
where f is the frequency index, Mx and My are the numbers of epochs, and $\tilde{X}_m(f)$ and $\tilde{Y}_m(f)$ are, respectively, the Fourier Transforms of the m-th epoch of x[k] and y[k]. Knowing that SFT(f) has a central Fisher distribution F with (2Mx, 2My) degrees of freedom under the null hypothesis (H0) of equal theoretical power contribution at frequency f, the critical value can be readily obtained:
$$SFT_{crit} = F_{2M_x,\,2M_y,\,\alpha} \qquad (2)$$
where α is the level of significance of the test. Furthermore, since there is no guarantee that the power of x[k] is always higher than that of y[k], a two-tailed test should be used. Thus, taking α = 0.05 and 2Mx = 2My = 200 results in SFTcrit values of 0.75 and 1.32 for the lower and upper limits, respectively, at any spectral frequency f. Therefore, H0 can be rejected if SFT(f) ≤ 0.75 or SFT(f) ≥ 1.32; otherwise H0 is accepted. For testing the power contribution within ±1 Hz around the alpha peak, the Bonferroni correction (α/n, where n = 3 is the number of harmonics) was applied. In this case, H0 within this band can be rejected if SFT(f) ≤ 0.71 or SFT(f) ≥ 1.40. Taking the usual ERD/ERS index definition as the percent power difference between the DS EEG and the SS EEG (reference signal), the ERD/ERS can be re-written as [6]:

$$ERD/ERS(f) = 100 \times \left[\frac{\sum_{i=1}^{M}\left|Y_i(f)\right|^2}{\sum_{i=1}^{M}\left|X_i(f)\right|^2} - 1\right] = 100 \times \left[SFT(f) - 1\right] \qquad (3)$$
A negative value in (3) indicates a power decrease during the dynamic stimulation and hence a desynchronization (ERD); otherwise, in the case of synchronization, (3) is positive (ERS). Based on the SFTcrit values, the lower and upper critical values of (3) are −25 and 32, respectively. Therefore, for detecting the dynamic stimulus at a frequency f, the ERD should be lower than −25. Considering y[k] as the background EEG (WW) and x[k] as one of the EEG signals during visual stimulation, the Spectral F-Test was also estimated by (1). In both cases, the ERD/ERS was calculated using (3).
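The spectral comparison above can be summarized in a few lines of code. The following is a minimal sketch, not the authors' implementation: it assumes two arrays of 1-s epochs (one epoch per row) for the reference and test conditions, estimates the mean Hanning-windowed periodograms, and computes SFT(f), the ERD/ERS index and the two-tailed critical limits; all variable names are hypothetical.

```python
import numpy as np
from scipy.stats import f as fisher

def sft_erd_ers(y_epochs, x_epochs, alpha=0.05):
    """SFT(f) (eq. 1) and ERD/ERS(f) (eq. 3) between two sets of epochs."""
    win = np.hanning(y_epochs.shape[1])
    Pyy = np.mean(np.abs(np.fft.rfft(y_epochs * win, axis=1)) ** 2, axis=0)
    Pxx = np.mean(np.abs(np.fft.rfft(x_epochs * win, axis=1)) ** 2, axis=0)
    sft = Pyy / Pxx                                 # eq. (1)
    erd_ers = 100.0 * (sft - 1.0)                   # eq. (3)
    my, mx = len(y_epochs), len(x_epochs)
    lo = fisher.ppf(alpha / 2, 2 * my, 2 * mx)      # ~0.75 for M = 100
    hi = fisher.ppf(1 - alpha / 2, 2 * my, 2 * mx)  # ~1.32 for M = 100
    return sft, erd_ers, (lo, hi)
```

With M = 100 epochs per condition, the two-tailed F limits reproduce the values quoted in the text (0.75 and 1.32), and 100 × (limit − 1) gives the −25 and 32 bounds used for the ERD/ERS index.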
III. RESULTS

Figure 2 depicts the power spectra before and during stimulation, the application of both the SFT and the ERD/ERS index, and the rate of ERD detection for subject #7 in the O1, P3, P4 and O2 derivations. Firstly, for all these derivations, it can be noted that the EEG spectra in the WW and SS conditions behave similarly, showing an increased magnitude at the alpha peak at 11 Hz (Fig. 2a). The SFT results (Fig. 2b) indicate that there is no statistical difference (0.71 < SFT(Δf) < 1.40, α = 0.05) between the WW and SS power spectra in either the occipital or the parietal derivations (therefore, the null hypothesis was not rejected). In this case, eq. (3) tends to zero, and hence the ERD/ERS(f) values are not shown. On the other hand, a decreased power contribution in DS is observed within the alpha band of all
derivations (SFT < 0.71), implying the rejection of H0 when DS is compared to both the WW and SS conditions. The negative mean index (ERD/ERS < −25, Fig. 2c, black circles) achieved for all derivations indicates desynchronization. However, the 95% confidence limits of the ERD/ERS index indicate that some dynamic visual stimulations do not evoke a cortical response. For this subject (#7), the rate of ERD detection was higher than 85% within the alpha band (Fig. 2d), and above 95% at the peak (11 Hz). The total power contributions around the alpha peak (grey area in Fig. 2a) in the WW, SS and DS conditions for all 33 subjects show significant differences (Wilcoxon signed rank test, α = 0.05), except between WW and SS (Table 1). Furthermore, the ERD detection rate during dynamic stimulation varies from 83 to 88% for the occipital and parietal EEG derivations (Fig. 3).
Fig. 2 Power spectra before and during stimulation (a), the application of the SFT (b) and of the ERD/ERS (c), and the rate of ERD detection (d) obtained in the O1, P3, P4 and O2 derivations for subject #7. The gray areas in (a) and (b) indicate the total power contribution of the ±1 Hz frequencies around the alpha peak. In (a), the dashed, dash-dot and solid lines represent the power spectra of WW, SS and DS, respectively. In (b), the squares, circles and stars represent the SFT between WW and SS, between WW and DS, and between SS and DS, respectively. In (c), the mean ERD/ERS indices and the 95% confidence limits of the ERD/ERS distribution are indicated by circles and triangles, respectively. In (b) and (c), the SFT and ERD/ERS critical values are shown as horizontal dotted lines. The horizontal solid line in (b) represents the SFT critical values (two-tailed, α = 0.05) after the Bonferroni correction
Fig. 3 The ERD detection rate after dynamic stimuli for the 33 subjects, in the alpha frequency band (derivations O1, P3, P4 and O2)

Table 1 p-values of the Wilcoxon signed rank test (α = 0.05) applied between the distributions of the total power contribution around the alpha peak in the WW, SS and DS conditions

Derivation   WW × SS   WW × DS      SS × DS
O1           0.4052    << 0.0001    0.0004
P3           0.7138    << 0.0001    0.0003
P4           0.7243    << 0.0001    0.0005
O2           0.4132    << 0.0001    0.0004
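A paired comparison like those in Table 1 can be run directly with SciPy. The sketch below is illustrative only; the arrays are placeholders standing in for one total alpha-peak power value per subject and condition.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
power_ss = rng.gamma(2.0, size=33)           # placeholder per-subject values
power_ds = 0.7 * rng.gamma(2.0, size=33)     # placeholder per-subject values
stat, p_value = wilcoxon(power_ss, power_ds)  # paired, non-parametric test
```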
IV. DISCUSSION AND CONCLUSION

The dynamic stimulation caused a reduction in the power spectral estimates around the alpha peak (±1 Hz) when compared to both the background EEG (WW) and the static scene stimulation (SS). This change in the amplitude spectrum is expected to occur, since evoked responses are phase-locked to the stimulation [4]. The similarity observed between the EEG of the WW and SS conditions suggests that the static virtual scene can be used as a spatial reference for body sway control in promoting postural stability. Furthermore, applying the SFT allows detecting significant spectral changes due to DS at all frequencies of the alpha band. The ERD/ERS index allowed successfully detecting more than 83% of the desynchronizations within the alpha band during dynamic stimulation. This finding is in accordance with Infantosi and Miranda de Sá [6]. In our study, moving the visual surround was intended to induce the perception that the body is also in movement but, as pointed out by [1, 2], in the opposite direction of the stimulus. Hence, there is a need to compensate for it; that is, a compensatory adjustment in the same direction of the surround motion is required. In summary, by using the ERD/ERS index it was possible to distinguish the evoked cortical responses of stimulation with a static and with a dynamic scene. The dynamic stimulation causes desynchronization within the EEG alpha band and also promotes postural instability. Therefore, the ERD/ERS technique can be useful in studies of postural control assessed by dynamic visual stimulation.

ACKNOWLEDGMENT

This work received financial support from the Brazilian Research Council (CNPq).

REFERENCES

1. Streepey J W, Kenyon R V, Keshner E A (2007) Field of view and base of support width influence postural responses to visual stimuli during quiet stance. Gait & Posture 25:49–55 DOI 10.1016/j.gaitpost.2005.12.013
2. Dokka K, Kenyon R V, Keshner E A (2009) Influence of visual scene velocity on segmental kinematics during stance. Gait & Posture 30:211–216 DOI 10.1016/j.gaitpost.2009.05.001
3. Patla A E (1997) Understanding the roles of vision in the control of human locomotion. Gait & Posture 5:54–69 DOI 10.1016/S0966-6362(96)01109-5
4. Chiappa K H (1997) Evoked Potentials in Clinical Medicine, 2nd edn. Raven Press, New York
5. Pfurtscheller G and Lopes da Silva F H (1999) Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol 110:1842-1857 DOI 10.1016/S1388-2457(99)00141-8
6. Infantosi A F C and Miranda de Sá A M F L (2007) A statistical test for evaluating the event-related synchronization/desynchronization and its potential use in brain-computer-interfaces. IFMBE Proc. vol 18, IV Latin American Congress on Biomed Eng, Isla de Margarita, Venezuela, 2007, pp 1122–1136
7. Shiavi R (1999) Introduction to Applied Statistical Signal Analysis. Academic Press, London

Author: Paulo José Guimarães da Silva
Institute: Biomedical Engineering Program (COPPE/UFRJ)
Street: P.O. Box 68.510
City: 21941-972 - Rio de Janeiro - RJ
Country: Brazil
Email: [email protected]
Investigating the EEG Alpha Band during Kinesthetic and Visual Motor Imagery of the Spike Volleyball Movement

M.V. Stecklow1, M. Cagy2, and A.F.C. Infantosi1

1 Federal University of Rio de Janeiro / Biomedical Engineering Program, Rio de Janeiro, Brazil
2 Fluminense Federal University / Department of Epidemiology and Biostatistics, Niterói, Brazil
Abstract— Motor Imagery (MI) is the mental simulation of motor tasks, and its execution may retrieve a motor planning in the central nervous system. The purpose of this study was to analyze the changes along trials of Motor Imagery (MI) in the Kinesthetic (KMI) and Visual (VMI) modalities. The casuistry initially consisted of 15 right-handed male volleyball athletes (AG) and 18 non-athletes (NAG). By previously applying the MIQ-R, 3 subjects from NAG were dismissed from the study due to their low questionnaire scores. Both groups performed 30 trials of KMI and VMI of the spike volleyball movement, intermixed with a mental countdown (REF). The t-test (α = 0.05) resulted in similar MIQ-R mean scores for both groups, although AG demonstrated greater facility in imagining the volleyball task than NAG. Each artifact-free EEG signal was subdivided into three sequences (S1, S2 and S3) of 8 epochs each. Wilcoxon paired tests (α = 0.05) of the EEG power spectrum in the vicinity of the alpha peak (ABP) suggest habituation during the initial trials (S1), mainly for athletes. The comparisons between REF and KMI using the spectral F-test (SFT, α = 0.05) indicate the left occipital (athletes) and parietal (non-athletes) sites as those in which SFT > SFTcrit. Based on such results, one can conclude that the KMI modality promotes more cortical changes than VMI, particularly for non-athletes. These findings suggest a different learning process of MI execution of repetitive sequences, related to previous knowledge of the real task.

Keywords— Motor Imagery, Kinesthetic and Visual, EEG, spectral F-test, spike volleyball movement.
I. INTRODUCTION

Motor imagery (MI) is the technique of mentally simulating a motor action without the real execution of the movement [1]. It is already well established that mental training using MI can improve muscular force [2] and motor performance [3,4], constituting an important method in the rehabilitation of neuromuscular disabilities [5]. Many techniques have been used to assess physiological changes during MI. One of them, the electroencephalogram (EEG), extensively applied in research on Brain-Computer Interfaces [7], has the advantage of being a non-invasive technique offering good temporal resolution [6]. Usually, the investigation of cerebral activity with MI paradigms in the frequency domain is based on the mean power in EEG frequency bands [8-10], mainly in the alpha band (8–13 Hz). The alpha activity may differ inter-individually according to inheritance [11], age [12], gender [13] and cognitive state [14]. This variation is more pronounced at the frequency where the maximum power contribution occurs, the so-called alpha band peak (ABP). There are basically two modalities of MI: Visual Motor Imagery (VMI), which corresponds to imagining oneself executing a motor task as a spectator, and Kinesthetic Motor Imagery (KMI), which generates sensations about body position and joint movements [8]. Moreover, some studies have concluded that there is an association between cortical responses and the different MI modalities [10,14]. Since works concerning the MI paradigm do not consider inter-individual differences and previous knowledge of the motor action, we hypothesize that athletes with previous knowledge of a complex motor action will execute the corresponding MI more easily than non-athletes. Such a difference in MI performance would be reflected as changes in the frequency domain, more specifically in the ABP of parietal and occipital areas, when a sequence of a complex MI task is executed several times. Hence, this work aims at investigating the existence of differences in alpha band power, focusing on the frequencies around the alpha peak, during sequences of MI of the spike volleyball movement in athletes and non-athletes.
II. MATERIAL AND METHODS

A. Casuistry

The casuistry consisted of 15 indoor-volleyball athletes (age 22.7 ± 2.3) (AG) and 18 subjects (age 27.3 ± 4.1) without experience in this sport (NAG). All volunteers signed a consent form containing the description of the purpose of the study and indicating the anonymity of the participants, answered a personal information questionnaire about athletic history, and indicated no neurological disturbances nor the use of drugs that could affect cognitive performance. All subjects of both groups were right-handed males. This study was submitted to and approved by the ethics committee, and the tests were carried out in the Laboratory of
Cerebral Mapping and Motor Sensory Integration (Psychiatry Institute) and the Laboratory of Image and Signal Processing (Biomedical Engineering Program), both from the Federal University of Rio de Janeiro.

B. Data Acquisition

Before carrying out the protocol, the revised version of the Movement Imagery Questionnaire [15] (MIQ-R), concerning body movements and their respective motor imagery in the kinesthetic and visual modalities, was applied (version in Portuguese) to ensure that all subjects could see or feel mental images with a minimum of clarity. As an exclusion criterion, a sum of scores below 15 in either imagery subscale led to the exclusion of 3 subjects from NAG. Before the acquisition, a 5-min video was played on a monitor for all subjects, showing an expert volleyball athlete executing several trials of the spike volleyball movement from different points of view, in order to demonstrate the attack movement. The EEG signals were acquired according to the 10-20 international system [16] with a BNT-36 electroencephalograph (EMSA – Rio de Janeiro), filtered with an anti-aliasing filter at 100 Hz and a high-pass at 0.1 Hz, and sampled at 240 Hz (using a digital 60 Hz notch filter), in three different conditions: reference condition (RC), kinesthetic motor imagery (KMI) and visual motor imagery (VMI). RC consisted of a free mental countdown starting at the number 1000 during 90 s and was registered preceding the MI conditions and at the end of the experiment. The KMI and VMI conditions comprised 30 target MI trials of the volleyball spike randomly intermixed with 20 trials of hand clapping, in kinesthetic and visual motor imagery respectively. The start of each trial was triggered by a beep with one of two tonalities to indicate the task to be imagined (volleyball spike or hand clapping), preceded by a preparation beep presented 2 s before. This procedure was applied to prevent the subjects from anticipating the MI executions before the trigger sounds. After the end of the KMI and VMI blocks, all subjects rated the imagery clearness of each condition block using the respective MIQ-R subscales. The acquired signals were segmented into 5-s epochs synchronized with the start trigger of KMI and VMI.

Table 1 p-values of the Wilcoxon tests comparing the ABP between sequences during the kinesthetic (KMI) and visual (VMI) motor imagery conditions for all electrodes. [*] indicates comparisons where p < α

KMI
       Athletes                     Non-athletes
       S1×S2    S1×S3    S2×S3     S1×S2    S1×S3    S2×S3
O1     0.0479*  0.2293   0.4573    0.3591   0.4212   0.3894
O2     0.0730   0.0833   0.1070    0.1688   0.5614   0.1876
P3     0.0215*  0.0256*  0.1688    0.8469   0.9341   0.1354
P4     0.0946   0.0479*  0.1688    0.0054*  0.8040   0.1070

VMI
       Athletes                     Non-athletes
       S1×S2    S1×S3    S2×S3     S1×S2    S1×S3    S2×S3
O1     0.3591   0.0256*  0.2293    0.0833   0.1876   0.8469
O2     0.1070   0.1514   0.4543    0.1070   0.5245   0.8040
P3     0.4543   0.0637   0.1876    0.5995   0.7197   0.3028
P4     0.0413*  0.0637   0.4543    0.5245   0.2524   0.5245
The artifacts were removed by means of an adapted algorithm [17], resulting in 24 artifact-free epochs. In order to assess changes in ABP along epochs, the set of 24 epochs was sequentially divided into three segments (S1, S2 and S3) of 8 epochs each, and the periodogram technique was implemented with each epoch equally subdivided into five windows of 1 s (spectral resolution of 1 Hz). Aiming at minimizing the inter-individual alpha peak variability (the frequency of maximal contribution of band power), the power was calculated within a narrow band of 2 Hz (ABP) centered on the alpha peak. The results of multiple Wilcoxon (paired, non-parametric) tests (α = 0.05) are shown in Table 1 and indicate changes in ABP during the initial epochs. Hence, the 8 first epochs of each experimental block for all electrodes were eliminated from subsequent signal analysis. To check for differences between MI modalities and groups, the spectral F-test (SFT) was implemented. The SFT(f) is a statistical test that compares the PSDs of two distinct signals, specifically their power contribution at each frequency. Considering two Gaussian and independent signals x[t] and y[t], their respective PSDs Pxx(f) and Pyy(f), and Mx and My the numbers of epochs, the SFT(f) is expressed as [18]:

$$SFT(f) = \frac{\hat{P}_{xx}(f)}{\hat{P}_{yy}(f)} = \frac{\frac{1}{M_x}\sum_{m=1}^{M_x}\left|\tilde{X}_m(f)\right|^2}{\frac{1}{M_y}\sum_{m=1}^{M_y}\left|\tilde{Y}_m(f)\right|^2} \qquad (1)$$
where $\tilde{X}_m(f)$ and $\tilde{Y}_m(f)$ correspond to the spectra of the m-th epoch. From equation (1), the SFT(f) has (2Mx, 2My) degrees of freedom. Hence, the null hypothesis (H0) of equality of the power at some frequency can be assessed with the critical value of SFT(f), described in another study [17] and defined as:
$$SFT_{crit} = F_{2M,\,2M,\,\alpha} \qquad (2)$$
where M = Mx = My and α is the level of significance. Applying this to the 16 epochs (5 windows each) of EEG, M = 80, and considering α = 0.05, SFTcrit = 1.3045 (SFTcrit = 1.4018 after the Bonferroni correction). MIQ-R scores and vividness during the experiment were analyzed with unpaired (inter-group) and paired (intra-group) Student t-tests (α = 0.05). The differences in ABP between S1, S2 and S3 were assessed using the Wilcoxon (paired, non-parametric) test (α = 0.05).
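As a quick check on the numbers above, the critical values and the ABP estimate can be reproduced with standard tools. The sketch below is illustrative only, with hypothetical names; it assumes 16 five-window epochs per condition and 1-s windows sampled at 240 Hz.

```python
import numpy as np
from scipy.stats import f as fisher

M, alpha = 80, 0.05                                       # 16 epochs x 5 windows
sft_crit = fisher.ppf(1 - alpha, 2 * M, 2 * M)            # ~1.3045
sft_crit_bonf = fisher.ppf(1 - alpha / 3, 2 * M, 2 * M)   # ~1.4018

def abp(windows, fs=240):
    """Alpha band peak power: mean periodogram over 1-s windows (rows),
    then total power within +/-1 Hz around the alpha-band maximum."""
    P = np.mean(np.abs(np.fft.rfft(windows, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(windows.shape[1], d=1.0 / fs)
    in_alpha = (freqs >= 8) & (freqs <= 13)
    peak = freqs[in_alpha][np.argmax(P[in_alpha])]        # individual alpha peak
    return P[(freqs >= peak - 1) & (freqs <= peak + 1)].sum()
```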
III. RESULTS

While the t-test indicated no statistical differences in MIQ-R mean scores between groups or MI modalities, the comparison of volleyball MI clearness indicated that athletes imagine themselves more clearly than non-athletes during the MI of the volleyball attack (p groups), as shown in Table 2. The intra-group comparison (p modalities) indicated no significant differences in clearness between MI modalities for either group.

Table 2 Mean scores and statistical analysis inter-groups and inter-modalities of the MIQ-R and of the MI clearness of the spike volleyball movement

                MIQ-R                          MI Clearness
          KMI     VMI    p modalities    KMI     VMI    p modalities
AG        21.73   23.67  0.22            6.00    5.60   0.25
NAG       20.79   21.14  0.84            4.20    4.47   0.63
p groups  0.53    0.10                   <0.01   0.04

Observing Figure 1, one can notice the difference in the frequency of the alpha peak between the power spectrum of non-athlete #9 (11 Hz) and that of athlete #1 (10 Hz). Changes in ABP occurred between RC, KMI and VMI. These alterations are presented in the gray areas of (a) and (c), whilst statistically significant differences (α = 0.05) based on the results of the respective SFT(f) are shown in (b) and (d).

Fig. 1 Power Spectral Density (in arbitrary power scale) and SFT(f) from non-athlete #9 [a,b] and athlete #1 [c,d], both at the O1 site. The thin horizontal line is the SFTcrit and the thick one is the SFTcrit with Bonferroni correction for the ABP (grey areas)

The percentage of subjects for whom SFT > SFTcrit in the comparisons between the three experimental conditions for AG and NAG is summarized in Table 3. It is possible to observe a great number of subjects with significant differences at O1 for athletes. At the parietal sites of non-athletes, the comparisons between RC and KMI and between KMI and VMI also show a high number of subjects with power differences.

Table 3 Percentage of subjects for whom SFT > SFTcrit in the ABP, comparing the three experimental conditions for both groups

        Athletes                       Non-athletes
      REF×KMI  REF×VMI  KMI×VMI     REF×KMI  REF×VMI  KMI×VMI
O1    80.0     66.7     46.7        73.3     66.7     66.7
O2    73.3     60.0     60.0        66.7     53.3     73.3
P3    73.3     66.7     66.7        86.7     73.3     86.7
P4    73.3     53.3     60.0        80.0     53.3     80.0
IV. DISCUSSION

The MIQ-R is composed of real execution and mental simulation, in kinesthetic and visual MI, of motor tasks such as bending and extending the knee in the standing position, and of complex movements such as jumping with both hands up, which is similar to part of the volleyball spike movement. Considering that the MIQ-R contains movements unfamiliar to both groups, there was no significant difference in MIQ-R scores between them. Nevertheless, the AG scores were slightly higher than those of NAG. Since a high level of motor skill is demanded from athletes when executing the volleyball movements during real training sessions and matches, as compared with inexperienced subjects (who declared performing no complex motor task in their daily life), an increase in the number of subjects might statistically confirm that athletes tend to be better imaginers than non-athletes. During the experimental MI blocks, significant differences were found in the clearness scores of the MI modalities of the spike volleyball movement, statistically stronger in KMI, with athletes imagining more easily than non-athletes. This response occurs presumably because volleyball athletes execute this specific movement exhaustively, which implies a solid planning of the motor action. Regarding the time evolution of the study, no significant differences were found between segments S2 and S3 in any derivation or in any group. Although there were
cases (derivations or groups, in both MI modalities) where no difference in ABP could be inferred throughout the whole experiment, whenever a difference was achieved it occurred between S1 and S2 or between S1 and S3. This finding, although not confirmatory, at least suggests some level of habituation at the beginning of the experiment, more pronounced in athletes. It could be interpreted as a learning process along the sequences of MI, which become easier to execute, specifically for subjects with long experience in the real practice of the imagined task. The spectral F-test (α = 0.05) of subsequent trials indicated a great number of subjects showing SFT > SFTcrit at the left occipital site for athletes and at the left parietal sites for non-athletes in the comparisons between REF and KMI. This modality of MI promotes more cortical changes than VMI, specifically for non-athletes, as indicated in a similar study [8], since the non-athletes have not yet retrieved the planning of the volleyball spike movement.
V. CONCLUSIONS

According to these results, the sequential execution of motor imagery of complex tasks, such as the volleyball spike movement, promotes changes in occipital and parietal areas differentially in the kinesthetic and visual modalities, exhibiting a temporal adaptation of the responses, which may indicate a habituation caused by the learning process of motor imagery. Such a process occurs more quickly in athletes than in non-athletes and is reflected in the mean alpha power (within the vicinity of the alpha peak), which is associated with the knowledge of real task execution and with the modality of motor imagery executed. Further studies using different statistical techniques and protocols to analyze the behavior of alpha power during motor imagery must be conducted in order to support these results.
ACKNOWLEDGMENT To CNPq and FAPERJ for the financial support.
REFERENCES

1. Gentili R, Papaxanthis C, Pozzo T (2006) Improvement and generalization of arm motor performance through motor imagery practice. Neuroscience 137:761-772
2. Ranganathan VK, Siemionow V, Liu JZ et al. (2004) From mental power to muscle power – gaining strength by using the mind. Neuropsychologia 42:944-956
3. Hall JC (2002) Imagery practice and the development of surgical skills. Am J Surg 184:465-470
4. Roure R, Collet C, Deschaumes-Molinaro C et al. (1999) Imagery quality estimated by autonomic response is correlated to sporting performance enhancement. Physiol Behav 66:63-72
5. Dickstein R, Dunsky A, Marcovitz E (2004) Gait performance may be enhanced by use of motor imagery in mental practice of motor tasks in people with hemiparesis following a stroke. Phys Ther 84:1167-1177
6. Cochin S, Barthelemy C, Roux S et al (1999) Observation and execution of movement: similarities demonstrated by quantified electroencephalography. Eur J Neurosci 11:1839-1842
7. Neuper C, Scherer R, Reiner M et al (2005) Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single trial EEG. Cogn Brain Res 25:668-677
8. Cremades JG (2002) The effects of imagery perspective as a function of skill level on alpha activity. Int J Psychophysiol 43:261-271
9. Marks DF, Isaac AR (1995) Topographical distribution of EEG activity accompanying visual and motor imagery in vivid and non-vivid imagers. Brit J Psychol 86:271-282
10. Stecklow MV, Infantosi AFC, Cagy M (2007) Alterações da banda alfa do eletrencefalograma durante imagética motora visual e cinestésica [EEG alpha band changes during visual and kinesthetic motor imagery]. Arq Neuropsiquiatr 65:1084-1088
11. Smit CM, Wright MJ, Hansell NK et al (2006) Genetic variation of individual alpha frequency (IAF) and alpha power in a large adolescent twin sample. Int J Psychophysiol 61:235-243
12. Böttger D, Herrmann CS, Cramon DY (2002) Amplitude differences of evoked alpha and gamma oscillations in two different age groups. Int J Psychophysiol 45:245-251
13. Clark CR, Veltmeyer MD, Hamilton RJ et al (2004) Spontaneous alpha peak frequency predicts working memory performance across the age span. Int J Psychophysiol 53:1-9
14. Sirigu A, Duhamel JR (2001) Motor and visual imagery as two complementary and neurally dissociable mental processes. J Cognitive Neurosci 13:910-919
15. Hall CR, Martin KA (1997) Measuring movement imagery abilities: A revision of the movement imagery questionnaire. J Ment Imagery 21:143-154
16. Jasper HH (1958) The ten twenty electrode system of the International Federation. Electroen Clin Neuro 10:371-375
17. Tierra-Criollo CJ, Infantosi AFC (2006) Low-frequency oscillations in human tibial somatosensory evoked potentials. Arq Neuropsiquiatr 64:402-406
18. Tierra-Criollo CJ, Simpson DM, Infantosi AFC (1998) Detección objetiva de las respuestas evocadas en el EEG con la prueba espectral F ponderada [Objective detection of evoked responses in the EEG with the weighted spectral F-test]. Memorias del 1er Congreso Latinoamericano de Ingeniería Biomédica, Mazatlán, Mexico, 1998, pp 151-154
The address of the corresponding author:

Author: Marcus Vinicius Stecklow
Institute: Biomedical Engineering Program/COPPE/UFRJ
Street: Av. Brigadeiro Trompowsky, Bloco H, Centro de Tecnologia
City: Rio de Janeiro
Country: Brazil
Email:
[email protected]
Principal Components Clustering through a Variance-Defined Metric

J.C.G.D. Costa, D.B. Melges, R.M.V.R. Almeida, and A.F.C. Infantosi

COPPE-Federal University of Rio de Janeiro / Biomedical Engineering Program, Rio de Janeiro, Brazil

Abstract— This work aims at proposing a clustering procedure based on a new metric, a weighted Euclidean distance in which the weights are the ratios of the corresponding eigenvalues to the largest eigenvalue found after a Principal Components Analysis. In order to illustrate the method, the procedure was carried out on twenty-one newborn EEG segments, classified as TA (Tracé Alternant) or HVS (High Voltage Slow) patterns. The observed clustering structure was assessed by the cophenetic and agglomerative coefficients. Results showed that, despite its unlikely existence, a clustering structure was suggested by the traditional approach. This structure, however, was not confirmed by the proposed method.

Keywords— Cluster Analysis, EEG, Principal Components Analysis.
I. INTRODUCTION

Cluster Analysis (CA) and Principal Components Analysis (PCA) are multivariate methods commonly used in studies of biomedical signals. Despite their limitations, they provide an exploratory, informative insight into data structure. Biomedical signals, on the other hand, are high-dimensional data that are difficult to interpret due to the presence of artifacts and noise. Nevertheless, CA and PCA are frequently carried out in order to extract features of interest, especially for diagnosis purposes [1, p.449]. CA through the Euclidean distance is usually performed on a raw data matrix M (for example, a matrix with n individuals and p variables), but when PCA is applied to M, all n individuals displayed in the p-dimensional space can be re-positioned in a new coordinate system, so that data variability is considered in the analysis. With this simple procedure, the information on variance is added to the analysis without the need to discard PCA dimensions. Furthermore, the distance between two points projected onto the axis with the lowest variance has the same value as the same distance projected onto the axis with the highest variance, because only an axis translation/rotation was carried out. The aim of this study was to introduce a clustering procedure through a new metric, a weighted Euclidean distance, in which the weights are the ratios of the estimated eigenvalues to the largest eigenvalue found after PCA. In order to illustrate the method, the procedure was carried out on twenty-one newborn EEG segments, classified as TA (Tracé Alternant) or HVS (High Voltage Slow) patterns.
II. BACKGROUND

A. Clustering Algorithm

One of the most common types of clustering algorithm is the Single Linkage Hierarchical Algorithm (SLHA) [2], which, starting from a dissimilarity measure, merges individuals (or signals) to their nearest-neighbor points. Individuals are characterized by points in the Euclidean space, and each one is grouped subsequently to the others, obeying some clustering rule (for example, decreasing variance within clusters and increasing variance between clusters). The most used dissimilarity measure is the Euclidean distance. A clustering strategy frequently begins with the raw matrix M, with n individuals in rows and p variables in columns, from which a dissimilarity matrix is built. Since SLHA is a monotone admissible strategy, and since any monotone transformation of the dissimilarity matrix does not alter the clustering results [2], this represents an interesting property for the study of dissimilarity measures.

B. Principal Components Analysis

One of the most used methods to display individuals as points in the Euclidean space is the aforementioned Principal Components Analysis. PCA is based on a singular value decomposition (SVD) algorithm of the covariance (or correlation) matrix [3]. Used as an exploratory tool, PCA can reveal clusters graphically, in which case the Gaussianity assumption for the data distribution is not mandatory [3, p. 49]. For graphical cluster representation, the relationship between individuals can be highlighted by using the principal components (PCs) as axes for plotting the individual points (IPs). Thus, after suitable scaling of M, one has:
$$\mathrm{svd}(S) = U \cdot D \cdot V^{T} \qquad (1)$$
where U and V are orthogonal matrices and the diagonal matrix D has eigenvalues in descending order. Since the covariance matrix S is a square matrix, the coordinates of individuals can be found as Z=MV. This approach yields eigenvector-eigenvalue pairs with higher retention of variance, and the PCs become the new uncorrelated variables.
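A minimal numerical sketch of this step (not taken from the paper; the matrix sizes follow the EEG example of 21 segments and 7 features, with random placeholder data) is:

```python
import numpy as np

# Column-scale the raw matrix M, decompose its covariance S as in eq. (1),
# and obtain the individual coordinates Z = MV in the PC space.
M = np.random.randn(21, 7)                     # hypothetical 21 x 7 data matrix
Mc = (M - M.mean(axis=0)) / M.std(axis=0, ddof=1)
S = np.cov(Mc, rowvar=False)                   # 7 x 7 covariance matrix
U, D, Vt = np.linalg.svd(S)                    # svd(S) = U diag(D) V^T
Z = Mc @ Vt.T                                  # PC scores of the individuals
```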
C. The New Metric D*

Hierarchical cluster algorithms are frequently employed to identify associations between IPs. However, one of the drawbacks of these algorithms is that their results always generate clusters, even if these are actually unstable and non-meaningful, thus demanding extra validation strategies for result assessment [2]. To deal with this drawback in defining actual “clusters” in PCA plots, a new metric (D*) is proposed, in line with the tolerance distance statistic suggested in a previous work [4]. This metric takes into account the explained variance pertinent to each PCA axis through weighted Euclidean distances. The idea is that if an axis has lower variability, then the distance between two clusters projected onto that axis should have less "importance" than the same distance on a higher-variance axis. Thus, D* has the ordinary Euclidean distances weighted by the ratios between the eigenvalues and the eigenvalue of the first axis (which is the highest-variance axis):

$$D^{*}(x, y) = \sqrt{\frac{(x_1 - y_1)^2}{\tau_1} + \frac{(x_2 - y_2)^2}{\tau_2} + \cdots} \qquad (2)$$

where x and y are the IP coordinates and τi is the ratio between the eigenvalue corresponding to the i-th axis and that of the first axis. Equation (2) can be re-written as:

$$D^{*}(x, y) = \sqrt{\left(\frac{x_1}{\sqrt{\tau_1}} - \frac{y_1}{\sqrt{\tau_1}}\right)^2 + \left(\frac{x_2}{\sqrt{\tau_2}} - \frac{y_2}{\sqrt{\tau_2}}\right)^2 + \cdots} \qquad (3)$$
which is an ordinary Euclidean distance in a new coordinate system. However, since τ1 = 1, only the axes of lower explained variance are re-scaled and, hence, D* decreases the IPs' similarity along the dimensions of lower variance. SLHA can then be carried out in the new Euclidean space.
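Continuing the sketch above (Z and D as computed there), the D* rescaling and the subsequent single-linkage clustering could look as follows; again, this is illustrative, not the authors' code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

tau = D / D[0]                    # tau_i = lambda_i / lambda_1, so tau_1 = 1
Z_star = Z / np.sqrt(tau)         # eq. (3): stretch the lower-variance axes
slha_dstar = linkage(pdist(Z_star), method='single')  # SLHA on the D* space
slha_plain = linkage(pdist(Z), method='single')       # ordinary Euclidean SLHA
```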
III. MATERIALS AND METHODS

A. EEG Acquisition and Pre-processing

EEG signals were collected from the F4-P4 derivation of seventeen full-term newborns (gestational age of 37-42 weeks and APGAR ≥ 8 in the first and fifth minutes post-delivery) during physiological sleep at the Instituto Fernandes Figueira (FIOCRUZ, Rio de Janeiro, Brazil). The signals were band-filtered (0.5-70 Hz) and digitized at a sampling rate of 200 Hz (for further details refer to [5]). Firstly, the EEG recordings corresponding to the Quiet Sleep stage were identified and classified into the sleep patterns High Voltage Slow (HVS) or Tracé Alternant (TA) by a clinical expert. Twenty-one artifact-free EEG segments
of thirty seconds duration were selected, fourteen of the TA pattern and seven of HVS. Four newborns had two segments selected for analysis (S.2-S.3, S.4-S.5, S.9-S.10, and S.13-S.14, as listed in Table 1). Finally, the power spectral density was calculated using Bartlett's periodogram with M = 10 subsegments. Thereby, the spectral resolution was set to 0.333 Hz and, in order to minimize spectral leakage, a Hamming window was applied to each subsegment. Seven real-valued parameters were selected for characterizing the EEG signals: the maximum power spectral density (μV²/Hz) for the bands slow delta (0.25-2 Hz), fast delta (2-4 Hz), theta (4-8 Hz), alpha (8-13 Hz) and beta (13-30 Hz), and also the standard deviation (SD) of the samples in the segment and the difference between the maximum positive and minimum negative values (Mn) of the segment (both in μV).

B. Comparison between Methods

A raw data matrix with segments in rows and the features in columns was column-scaled to zero mean and unity variance, and a distance matrix was calculated. After the correlation matrix was defined, all subjects were displayed in the multidimensional space spanned by the PCs, and D* was calculated from the segments' coordinates obtained by PCA. Then, after re-scaling the coordinates, the SLHA was applied to both distance matrices. For assessing the clustering performance in the results obtained by applying the ordinary Euclidean distance and by applying D*, the cophenetic correlation (CC) and the agglomerative coefficient (AC) were used. CC is the correlation coefficient between the set of elements of the cophenetic distance matrix and the set composed of the corresponding elements of the dissimilarity matrix, where each element of the cophenetic matrix is the distance in the dendrogram at which the respective pair of segments is merged [2]. AC is an index that measures the quality of the structure found by the clustering algorithm [6] and is defined as:
$$AC = \frac{1}{n}\sum_{i=1}^{n}\left(1 - m(i)\right) \qquad (4)$$
where m(i) is the ratio between the dissimilarity (distance) at which segment i is merged in the first step and the distance achieved in the last step (when all n segments are joined together). Therefore, both indices are dimensionless and vary in the range 0-1. If CC and AC are close to unity, a strong structure may be accepted as existing [6].
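Both indices are easy to obtain from a linkage result; the following sketch (a hypothetical helper, not part of the paper) computes CC with SciPy and AC directly from eq. (4):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

def cc_and_ac(points):
    d = pdist(points)                        # condensed dissimilarity matrix
    link = linkage(d, method='single')       # SLHA dendrogram
    cc, _ = cophenet(link, d)                # cophenetic correlation (CC)
    n = len(points)
    first = np.full(n, np.nan)               # height of each point's 1st merge
    for a, b, h, _ in link:                  # each row merges clusters a and b
        for idx in (int(a), int(b)):
            if idx < n and np.isnan(first[idx]):
                first[idx] = h
    ac = np.mean(1.0 - first / link[-1, 2])  # eq. (4): mean of 1 - m(i)
    return cc, ac
```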
C. Simulated Data

To determine whether the proposed method can identify an actual clustering structure in the data, three well-defined clusters were generated and the two procedures outlined above were carried out on them, as sketched below. Forty-five points were grouped into three clusters of fifteen individuals each, described by two Gaussian variables, one with unity variance (V1) and the other with variance equal to 4.0 (V2). The Pearson correlation coefficient between the variables was set to 0.016 (Fig. 1).
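A data set with the same structure can be generated as below; the cluster centers are hypothetical, since the paper does not report them:

```python
import numpy as np

# Three clusters of 15 points each; V1 has unity variance and V2 has
# variance 4.0 (std 2.0), with near-zero correlation between variables.
rng = np.random.default_rng(1)
centers = [(0.0, 0.0), (8.0, 0.0), (4.0, 12.0)]   # assumed, not from the paper
data = np.vstack([rng.normal(loc=c, scale=(1.0, 2.0), size=(15, 2))
                  for c in centers])
```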
Table 1 Data summary of the 21 EEG subsegments

Seg.  EEG  S. Delta  F. Delta  Theta  Alpha  Beta  SD    Mn
S.1   TA   1801      176       73     19     17    7.9   97
S.2   TA   17171     627       125    49     54    22.2  140
S.3   TA   12157     1491      361    119    48    23.4  216
S.4   HVS  34190     1776      638    83     112   29.0  184
S.5   TA   34473     1040      615    78     153   28.1  174
S.6   HVS  44793     2567      1479   175    48    32.8  236
S.7   TA   93349     2158      391    154    39    41.0  316
S.8   TA   24198     1195      589    130    58    25.7  179
S.9   TA   13528     839       187    55     13    21.3  168
S.10  TA   15289     1026      296    74     13    18.9  156
S.11  TA   11762     655       179    43     39    17.5  147
S.12  TA   22518     2395      585    103    26    23.0  168
S.13  TA   16190     1777      675    115    31    21.1  219
S.14  TA   10740     1019      403    67     11    20.3  126
S.15  HVS  10524     869       248    84     49    19.3  119
S.16  HVS  9544      1199      276    124    40    17.6  150
S.17  TA   10123     1586      321    63     30    17.1  135
S.18  TA   14210     377       169    72     17    24.2  154
S.19  HVS  13806     1596      358    58     41    19.9  154
S.20  HVS  4939      523       123    39     47    17.7  116
S.21  HVS  8420      1520      411    105    96    17.7  107
Fig. 1 Simulated data
All computations were performed with the R software (version 2.8.1), freely available on the Internet at www.r-project.org. Filtered signals were obtained from a MATLAB™ environment through the R.matlab package and post-processed with R's signal package.
IV. RESULTS

The PCA plot for all 21 subsegments in the first two dimensions is shown in Figure 2, where it can be seen that no cluster structure between the TA and HVS patterns is suggested. The explained variances were 43.6% and 18.8%, respectively. The dendrogram for the raw matrix M is shown in Fig. 3, and the one for the proposed approach is shown in Fig. 4.
Fig. 2 Principal Components plot

CC and AC for the traditional approach were 0.91 and 0.62, respectively, suggesting a reasonable structure, while for the proposed approach the same indices were 0.69 and 0.49, suggesting a weak structure, more in accordance with Figs. 2, 3 and 4. For the simulated data, both approaches showed the same values for CC and AC, 0.90 and 0.85, respectively, suggesting a genuine clustering structure, as expected.
Fig. 3 SLHA for the raw matrix

Fig. 4 SLHA for D*

V. DISCUSSION

The objective of this work was to propose a new metric, based on a weighted Euclidean distance that considers data variance, instead of the ordinary Euclidean distance frequently used. This metric embodies the concept of "variance means information", implying that less variance should have less "weight" in the analysis. This is not the same as discarding the axes of lower explained variance, since some important features can be extracted from these axes [3]. Thus, the proposed approach "stretches" the projected distances on the axes of lower explained variance relative to the original Euclidean space, since the IP coordinates are rescaled by a factor larger than unity for all axes, given that τi ≤ 1. In addition, although CA is often used after dimensionality reduction by PCA [7], some important data characteristics can be lost if few dimensions are retained for analysis. Thus, the use of additional knowledge about the data is recommended, and the data variance pertaining to each PCA axis was the statistic chosen to this end. The results suggested that the new metric is more robust than the ordinary Euclidean distance, given that a clustering structure was suggested by the latter in an unlikely grouping representation (EEG), as opposed to the former. Further studies should include higher-dimensional data in order to better explain the specific reasons for this discrepancy.

ACKNOWLEDGMENTS

The National Council of Research of Brazil (CNPq) partially financed this research.

REFERENCES
1. Rangayyan, R M (2002) Biomedical Signal Analysis: A Case-Study Approach. IEEE Press, Piscataway, NJ, USA.
2. Gordon, A D (1987) A review of hierarchical classification. J Royal Stat Soc 150:119-137.
3. Jolliffe, I T (2004) Principal Component Analysis. Springer, New York, USA.
4. Costa, J C G D, Almeida, R M V R, Infantosi, A F C et al. (2008) A heuristic index for selecting similar categories in multiple correspondence analysis applied to living donor kidney transplantation. Comput Methods Programs Biomed 90:217-219.
5. Melges, D B, Infantosi, A F C, Ferreira, F R, Rosas, D B (2006) Using the discrete Hilbert transform for the comparison between Tracé Alternant and High Voltage Slow patterns extracted from full-term neonatal EEG, IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., Seoul, 2006, pp 1003-1006.
6. Struyf, A, Hubert, M, Rousseeuw, P J (1996) Clustering in an object-oriented environment. J Stat Soft 1.
7. Mauldin-Jr, F W, Levy, J H, Behler, R H, Nichols, T C, Marron, J S, Gallippi, C M (2006) Blind source separation and k-means clustering for vascular ARFI image segmentation, in vivo and ex vivo. IEEE Ultrasonics Symposium 1:1666-1671.

Author: A.F.C. Infantosi
Institute: Biomedical Engineering Program / COPPE–Federal University of Rio de Janeiro
Street: Av. Horacio Macedo, 2030, Bl. H, Zip code 21941-972
City: Rio de Janeiro
Country: Brazil
Email:
[email protected]
A Kurtosis-Based Automatic System Using Naïve Bayesian Classifier to Identify ICA Components Contaminated by EOG or ECG Artifacts

M.A. Klados1, C. Bratsas1, C. Frantzidis1, C.L. Papadelis2, and P.D. Bamidis1

1 Aristotle University of Thessaloniki, School of Medicine, Laboratory of Medical Informatics, P.O. Box 323, 54124 Thessaloniki, Greece
2 Center for Brain/Mind Sciences (CIMEC), University of Trento, Mattarello, Trentino, Italy
Abstract— Electrical signals detected along the scalp by an electroencephalogram (EEG) but originating from non-cerebral sources are called artifacts; when these artifacts are produced by the human body itself, they are called biological artifacts. The most common biological artifacts are the electrical signals produced by ocular and heart activity, and EEG data are almost always contaminated by them. Over the last decade, Independent Component Analysis (ICA) has played a crucial role in neuroscience and has attracted great attention for artifact rejection purposes. According to the ICA methodology, EEG signals are decomposed into statistically Independent Components (ICs), and an EEG specialist is then called to recognize the artifactual ICs. Some major limitations of this approach are that the selection is subjective, demands a highly skilled EEG operator, is time-consuming and cannot be applied in online processing. Our study employs machine learning techniques in order to recognize the ICs contaminated by ocular or heart artifacts. More specifically, 19-channel EEG datasets from 86 normal subjects were decomposed using ICA (19×86 = 1634 ICs in total). Three independent observers then marked an IC as artifactual if it included ocular or heart artifacts; otherwise it was marked as normal. Kurtosis was computed in short segments of fixed length (1250 sample points) without overlap for each IC. The mean kurtosis value was computed for each IC, and the Naïve Bayes Classifier (NBC) was adopted in order to classify the ICs as artifactual or normal. The results show that the NBC correctly classified 1611/1634 ICs (98.5924 %), so it can be suggested that kurtosis is convenient for the classification of ICs contaminated by ocular or heart artifacts.
Keywords— ICA, Naïve Bayes Classifier, EOG, ECG, Artifacts.
I. INTRODUCTION

Nowadays, electroencephalography (EEG) is commonly used for understanding cerebral functions as well as for evaluating neuronal abnormalities, brain injuries and disorders. Electric potentials originating from brain tissue are low-voltage signals, and thus they are vulnerable not only to different kinds of external noise but also to physiological artifacts derived from internal sources of the body. The absence of artifacts is crucial for the accurate evaluation of EEG signals, so artifact rejection (AR) is a key step at the preprocessing level in both real-time applications [1] and offline analyses. The most frequently seen biological artifacts are due to ocular, heart or muscular activity, while the most troublesome of them are the electrooculographic (EOG) and the electrocardiographic (ECG) artifacts [2]. This paper focuses on physiological artifacts derived from ocular and heart activity. Ocular artifacts are high-voltage patterns in the cerebral signals caused by eye blinking, or low-frequency patterns produced by eye movements [3]. It is known that the cornea is positively charged with reference to the retina. This retinocorneal potential difference generates a dipole within the eyeball; thus, ocular artifacts are due to the reorientation of the aforementioned dipole [4,5]. Ocular activity introduces substantial artifacts into EEG signals (backward propagation), and they are most prominent at anterior sites [6]. On the other hand, ECG artifacts are related to the field of the heart potentials over the surface of the scalp. Generally, people with short and wide necks have the largest ECG artifacts in their EEGs. ECG artifacts appear as sharp waves which are strongly correlated with the QRS complex, and they can be easily recognized in the EEG by their rhythmicity (Fig. 1).
Fig. 1 EOG and ECG artifactual components. The first IC is related to EOG activity, where two eye blinks are clearly observable; the second IC is contaminated by ECG artifacts, where the QRS complex is obvious
It is widely accepted that artifactual signals are independent of the ongoing cerebral activity, so they should be extractable by the Independent Component Analysis (ICA) method. Over the last decade, ICA has played a crucial role in neuroscience research and has attracted great attention for artifact rejection purposes [7, 8]. In order to build an automatic artifact rejection system, there is an urgent need for features capable of expressing the probability of an IC being artifactual or not. Many studies have proposed the joint use of kurtosis with different entropies [9, 10]; as can be noticed, kurtosis is a common feature in both approaches. According to the literature [11], kurtosis is positive for "spiky" activity distributions, a typical feature of artifactual components containing ECG and EOG activity; in contrast, kurtosis is negative for "flat" activity distributions [12]. Kurtosis should therefore be a proper criterion for the recognition of ICs contaminated by EOG and ECG activity. It is understandable that in feature-based automatic systems it is preferable to use as few features as possible without harming performance. The current study proposes an automatic system for the recognition of ICs contaminated by EOG and ECG artifacts. The remainder of this paper is structured as follows: in section II the methodological background is provided alongside a detailed description of the herein proposed approach; section III consists of tables and illustrations of the results; the latter are finally discussed in the last two sections of the paper.
II. MATERIALS AND METHODS

Independent Component Analysis

Independent Component Analysis (ICA) tries to recover independent source signals $s = \{s_1(t), s_2(t), \ldots, s_n(t)\}$ once they have been linearly mixed by an unknown matrix A, without a priori knowledge about the sources or the mixing process; only the n recorded mixtures of the above sources, $x = \{x_1(t), x_2(t), \ldots, x_n(t)\}$, are known. The major problem is to find a square matrix W which will recover a version u = Wx very close to the original sources. Bell and Sejnowski [13] proposed a simple neural network algorithm for the BSS of n recorded signals x into n independent sources s, using the information maximization principle (INFOMAX). They proved that, by maximizing the joint entropy H(y) of the output of a neural processor, the mutual information among the output components is minimized. Ext-ICA extends the ability of the INFOMAX algorithm to perform BSS on n recorded signals x having either sub-Gaussian or super-Gaussian distributions [14].

Naïve Bayesian Classifier

A Naïve Bayesian Classifier (NBC) is a simple classifier based on Bayes' theorem with strong independence assumptions. In other words, a NBC assumes that the occurrence of a particular event is uncorrelated with the occurrence of any other event. Abstractly, the probability model for the NBC is the conditional model

$$p(C \mid F_1, \ldots, F_n) = \frac{1}{Z}\, p(C) \prod_{i=1}^{n} p(F_i \mid C)$$

where Z is a scaling factor dependent on the feature variables $\{F_i\}_{i=1,\ldots,n}$, and C is the class variable dependent on $\{F_i\}_{i=1,\ldots,n}$. The NBC combines the aforementioned model with a decision rule. The most common rule is to choose the most probable hypothesis, which is known as the maximum a posteriori decision rule. The NBC is then described by the following function:

$$cl(f_1, \ldots, f_n) = \arg\max_{c}\; p(C = c) \prod_{i=1}^{n} p(F_i = f_i \mid C = c)$$

Kurtosis

Kurtosis was used in order to detect "abnormal" peaked distributions, since kurtosis is a measure of peakedness. Higher kurtosis values are therefore expected for artifactual ICs than for normal ones (Fig. 2).

Fig. 2 Kurtosis values for normal and artifactual ICs. As can be seen, the kurtosis values for artifactual ICs are much higher than the kurtosis values for normal ICs

Kurtosis was computed by the following formula:
$$K = m_4 - 3m_2^2$$
where $m_n$ is the $n$-th central moment: $m_n = E\left\{(x - \tilde{x})^n\right\}$.
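As an illustration of this feature pipeline (a sketch under assumed array shapes, not the authors' code), the per-segment kurtosis from the formula above and a Gaussian Naïve Bayes classification could be written as:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def mean_kurtosis(ic, seg_len=1250):
    """K = m4 - 3*m2^2 computed over non-overlapping fixed-length segments
    of one IC, then averaged across segments."""
    n_seg = len(ic) // seg_len
    segs = ic[:n_seg * seg_len].reshape(n_seg, seg_len)
    c = segs - segs.mean(axis=1, keepdims=True)    # center each segment
    m2 = (c ** 2).mean(axis=1)                      # 2nd central moment
    m4 = (c ** 4).mean(axis=1)                      # 4th central moment
    return float(np.mean(m4 - 3.0 * m2 ** 2))

# Hypothetical usage: X holds one mean-kurtosis value per IC (column vector),
# y the observers' labels (1 = artifactual, 0 = normal).
# clf = GaussianNB().fit(X, y); labels = clf.predict(X)
```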
Real EEG Data

Real EEG data were obtained from twenty-seven healthy subjects [14 males (mean age: 28.2±7.5) and 13 females (mean age: 27.1±5.2)] during a visual evoked potential (VEP) experiment. During this experiment, subjects were exposed to four different groups of emotional pictures (each group containing 40 trials), selected from the International Affective Picture System (IAPS) [15] and presented on a PC monitor; thus 108 datasets were obtained in total. From these 108 datasets, only 86 were chosen; the rest were excluded from this analysis because they contained many artifacts introduced by external sources, such as electrode movements or poor contacts. Each dataset lasts almost two minutes and was recorded with nineteen scalp electrodes placed according to the International 10-20 system. More specifically, sensors were placed at the Fp1, Fp2, F3, F4, F7, F8, Fz, C3, C4, Cz, T3, T4, T5, T6, P3, P4, Pz, O1 and O2 sites.

Data Pre-processing

The earlobe montage was used for our analysis [16]. According to this montage, electrodes with odd indices were referenced to the left mastoid, while electrodes with even indices were referenced to the right mastoid. Central electrodes (Fz, Cz, Pz) were referenced to half the sum of the left and right mastoids. The signals were digitized at a rate of 500 Hz and further filtered (band-pass filter at 0.5-40 Hz and notch filter at 50 Hz). ICA was applied to the filtered datasets, resulting in nineteen ICs per dataset. Each IC was separated into 40 trials (according to the presented pictures) and kurtosis was computed for each trial. The mean kurtosis value was computed for each IC, and the Naïve Bayes Classifier (NBC) was adopted in order to classify the ICs as artifactual or normal. Three independent observers then manually marked the ICs contaminated by EOG and ECG activity as artifactual, while the rest were marked as normal, in order to quantify the NBC's classification efficiency.

Automatic Artifact Rejection System

According to the herein proposed model, EEG signals have to be decomposed into statistical ICs using ICA. The kurtosis value is then computed for each IC separately, and a NBC automatically recognizes the artifactual components. Finally, the system isolates the components recognized as artifactual and reconstructs the signals free of ocular and heart artifacts (Fig. 3).
Fig. 3 Automatic artifact rejection system. According to this approach, contaminated EEG signals are decomposed using ICA, and kurtosis is then computed for each IC separately. The NBC recognizes, and the system removes, the artifactual components (both EOG and ECG). The remaining normal ICs are re-projected back, reconstructing in that way the cleaned EEG signals
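The final re-projection step can be sketched as follows (illustrative only; W stands for the unmixing matrix estimated by ICA and artifact_idx for the NBC-flagged components, both hypothetical names):

```python
import numpy as np

def reconstruct_clean_eeg(eeg, W, artifact_idx):
    """Unmix the channels x samples EEG with W, zero the ICs flagged as
    artifactual, and re-project back to channel space via A = W^-1."""
    u = W @ eeg                       # independent components (one per row)
    u[artifact_idx, :] = 0.0          # remove EOG/ECG components
    return np.linalg.inv(W) @ u       # cleaned EEG signals
```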
III. RESULTS

The classification using the NBC was performed by means of a 10-fold cross-validation. The accuracy rates, as well as other statistical measurements of the classifier's performance, are reported in Table 1.

Table 1 Summary statistics of the NBC classification performance for the artifactual component identification task (stratified cross-validation)

Total Number of Instances           1634
Correctly Classified Instances      1611/1634 (98.5924 %)
Incorrectly Classified Instances    23/1634 (1.4076 %)
Kappa statistic                     0.8912
Mean absolute error                 0.0192
Root mean squared error             0.1142
More specifically, the first and second lines show the number and the percentage of the cases that were correctly and incorrectly classified, respectively. The third line gives the kappa statistic, which measures the agreement of the predictions with the artifactual and normal instances. Finally, the last two lines report the Mean Absolute Error and the Root Mean Squared Error, which measure the difference between the predicted values and the real ones. The detailed accuracy for each class is presented in Table 2. Specifically, Table 2 shows the percentage of correctly classified items (TP rate), as well as the percentage of instances that were wrongly classified as items of the class under consideration (FP rate). Moreover, precision is derived by dividing the number of correctly classified elements by the total number of instances classified into the class under consideration, whereas recall is
the number of correctly classified elements divided by the total number of real elements of the class under consideration.

Table 2 Detailed accuracy for each class

Class           TP Rate  FP Rate  Precision  Recall  F-Measure  ROC Area
Artifactual IC  0.962    0.012    0.843      0.962   0.899      0.997
Normal IC       0.988    0.038    0.997      0.988   0.992      0.998
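The figures in Tables 1 and 2 follow directly from the cross-validated predictions; the sketch below shows how they could be reproduced, using synthetic labels and a kurtosis-like feature drawn from the class statistics reported in the next section (everything here is a placeholder, not the study data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
y = (rng.random(1634) < 0.065).astype(int)            # 1 = artifactual IC
X = np.where(y == 1, rng.normal(5.153, 3.670, 1634),  # artifactual kurtosis
                     rng.normal(0.232, 0.334, 1634)).reshape(-1, 1)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(GaussianNB(), X, y, cv=cv)
accuracy = (pred == y).mean()                         # cf. Table 1
kappa = cohen_kappa_score(y, pred)                    # cf. Table 1
prec, rec, f1, _ = precision_recall_fscore_support(y, pred)  # cf. Table 2
```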
IV. CONCLUSIONS AND DISCUSSION

As is obvious (Fig. 2), normal ICs far outnumber artifactual ones. In this sense, the probability of an IC being normal is 0.935, while the probability of an IC being artifactual is 0.065. The mean±SD kurtosis value is 0.232±0.334 for normal components and 5.153±3.670 for artifactual ones. It also has to be mentioned that the herein proposed system correctly recognized 102/106 artifactual components, which means that only 4 artifactual ICs were not recognized properly (3.8%). On the other hand, our approach successfully identified 1509/1528 normal components, and only 19 normal components were identified as artifactual and rejected (1.2%). One limitation of the current approach is that kurtosis is not able to recognize whether an artifactual component is due to heart or to ocular activity. Despite this, the results suggest that kurtosis is more than enough for the separation of artifactual components from normal ones.
ACKNOWLEDGMENT This work is partially funded by the ICT Policy Support Programme (ICT PSP) as part of the Competitiveness and Innovation Framework Programme by the European Community (project LLM).
Author: Manousos A. Klados
Institute: Laboratory of Medical Informatics, School of Medicine, Aristotle University of Thessaloniki
Street: P.O. Box 323, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
Correlation between Fractal Behavior of HRV and Neurohormonal and Functional Indexes in Chronic Heart Failure

G. D'Addio1, M. Cesarelli2, M. Romano2, A. Accardo3, G. Corbi4, R. Maestri5, M.T. La Rovere5, P. Bifulco2, N. Ferrara6, and F. Rengo6

1 Department of Biomedical Engineering, S. Maugeri Foundation, IRCSS, Telese Terme, Italy
2 Department of Biomedical, Electronic and Telecommunication Engineering, University "Federico II", Naples, Italy
3 DEEI, University of Trieste, Trieste, Italy
4 Department of Health Sciences, University of Molise, Campobasso, Italy
5 S. Maugeri Foundation, IRCSS, Montescano, Italy
6 S. Maugeri Foundation, IRCSS, Telese Terme, Italy
Abstract— Higher neurohormonal activation levels are known markers of severity and adverse prognosis in heart failure (HF) patients. Classical linear indexes of heart rate variability (HRV) have been shown to be associated with neurohormonal activation. Whether and to what extent non-linear properties of HRV, as expressed by the fractal dimension (FD) index, also reflect neurohormonal activation is not known. The aim of the study was to assess the association between FD, plasma norepinephrine (NPE) levels and functional parameters, as compared to classical linear indexes of HRV, in HF patients. Ninety-nine stable mild-to-moderate HF patients in sinus rhythm (age 51±8 years, New York Heart Association class II-III, left ventricular ejection fraction [EF] 24±6%, maximal oxygen consumption [VO2max] during exercise tests 14±4 mL·kg⁻¹·min⁻¹) were studied. Each patient underwent a 24-hour Holter recording and a plasma NPE assay within one week, besides standard clinical and laboratory examinations. The standard deviation of normal-to-normal beats (SDNN) and the power in the low frequency band (LFP, 0.04-0.15 Hz) were computed on consecutive 5-min RR sequences. The FD was estimated by the Higuchi method. The association between HRV and neurohormonal and functional indexes was assessed by the Spearman correlation coefficient. NPE, LFP, SDNN and FD were, respectively (mean ± SD): 363±210 pg/l, 162±171 ms², 36±15 ms and 1.6±0.1. Both SDNN and LFP showed a moderate but significant negative correlation with NPE levels (r = -0.37 and -0.44 respectively, p<0.0001 for both); FD exhibited a weaker association (r = 0.29, p<0.005). Linear indexes were significantly associated with VO2max (r = 0.31 and 0.36 respectively, p<0.001), while FD showed a negative correlation of similar magnitude (r = -0.34, p<0.001). Similar relationships were found with EF (r = 0.34, 0.35 and -0.42 for SDNN, LFP and FD, respectively, p<0.001 for all). These findings suggest that although the associations of linear and fractal-dimension HRV indexes with functional parameters are similar, the former, particularly the power in the low frequency band, appear to reflect more closely the level of adrenergic activation of HF patients.

Keywords— HRV, fractal dimension, chronic heart failure.
I. INTRODUCTION

Cardiovascular diseases are the first cause of morbidity and mortality in western and industrialized countries. Heart failure (HF) is a disabling and deadly condition which usually worsens over time, involving about 10% of the elderly population and accounting for 1-2% of health-care costs [1]. HF is associated with prominent alterations in the autonomic control of the cardiovascular system, and higher neurohormonal activation levels are known markers of severity and adverse prognosis in these patients [2]. Heart rate variability (HRV) is a well-known noninvasive assessment technique of the heart's autonomic control. Linear indexes of HRV have been shown to be associated with neurohormonal activation [3]. More recently, it has been speculated that nonlinear HRV indexes might provide more valuable information for the assessment of autonomic control impairments in cardiac patients. Fractal analysis is an emerging nonlinear technique. Among the several methods proposed so far to measure the fractal behavior of the HRV signal, the one based on the spectral power-law relationship [4,5,6] and the one based on iterative direct algorithms applied to RR time series [7, 8] have gained wide interest in recent years. The first approach has traditionally followed chaos theory, aiming to model the attractor extracted from HRV sequences [6] and estimating the fractal dimension (FD) from the slope of the 1/f-like relationship [9]. Alternatively, an FD value can be directly estimated from HRV sequences by means of the Higuchi algorithm [8]. This method, whose good reproducibility has already been studied in CHF [10], allows a better fractal estimation, eliminating the errors due to the indirect estimation of FD from the spectral power. Since whether and to what extent non-linear properties of HRV, as expressed by the FD index, also reflect neurohormonal activation is not known, the aim of this study was to assess the association between FD, plasma norepinephrine (NPE)
levels and functional parameters as compared to classical linear indexes of HRV in HF patients.
II. MATERIALS AND METHODS

A. Study Group

Ninety-nine stable mild-to-moderate HF patients in sinus rhythm (age 51±8 years, New York Heart Association class II-III) admitted to the Heart Failure Unit of the Scientific Institute of Montescano were studied. Inclusion criteria were: sinus rhythm, stable clinical condition during the last 2 weeks, absence of pulmonary or neurological disease or any other disease limiting survival, and no recent (within the previous 6 months) myocardial infarction or cardiac surgery. All patients underwent a 24-hour Holter recording and standard clinical and laboratory examinations, including 2D echocardiography for left ventricular ejection fraction (EF) evaluation, an ECG stress test for maximal oxygen consumption (VO2max) estimation, and a blood sample for the plasma norepinephrine assay, collected within one week from the Holter recording and assessed by a single-isotope radioenzymatic method.

B. Holter Recordings

Holter recordings were performed using a two-channel recorder and processed using a Synetec System (ElaMedical S.p.A., Segrate-Milano, Italy) with a sampling rate of 200 Hz. After automatic scanning, an expert analyst carefully edited all the recordings. In order to be considered eligible for the study, each recording had to have at least 12 hours of analyzable RR intervals in sinus rhythm. Moreover, this period had to include at least half of the nighttime (from 00:00 through 5:00 AM) and half of the daytime (from 7:30 AM through 11:30 AM) [11]. Before analysis, the identified RR time series were preprocessed according to the following criteria: 1) RR intervals associated with single or multiple ectopic beats or artefacts were automatically replaced by means of an interpolating algorithm; 2) RR values differing from the preceding one by more than a prefixed threshold were replaced in the same way as artefacts. The RR time series were finally interpolated by piecewise cubic splines and resampled at 2 Hz.

C. Fractal Dimension Analysis

The fractal dimension was calculated using Higuchi's algorithm [8]. From a given time series X(1), X(2), ..., X(N), the algorithm constructs k new time series; each of them, X_m^k, is defined as

X_m^k: X(m), X(m+k), X(m+2k), ..., X(m + int((N-m)/k)·k),

where m = 1, 2, ..., k and k are integers indicating the initial time and the interval time, respectively. Then the length, L_m(k), of each curve X_m^k is calculated, and the length of the original curve for the time interval k, L(k), is estimated as the mean of the k values L_m(k) for m = 1, 2, ..., k. If the L(k) value is proportional to k^(-D), the curve is fractal-like with dimension D. Then, if L(k) is plotted against k, for k ranging from 1 to kmax, on a double logarithmic scale, the data should fall on a straight line with a slope equal to -D. Thus, by means of a least-squares linear best-fitting procedure applied to the series of pairs (k, L(k)), obtained by increasing the k value, the angular coefficient of the linear regression of the graph ln(L(k)) vs. ln(1/k), which constitutes the estimate of D, is calculated.
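The construction above translates almost line by line into code. The following MATLAB function is a minimal sketch of the Higuchi estimator, assuming x is the preprocessed RR series as a numeric vector; the choice of kmax is left to the user and is not specified by the paper:

    function D = higuchiFD(x, kmax)
    % Higuchi fractal dimension of the series x(1..N), per the text above.
    N = numel(x);
    L = zeros(kmax, 1);
    for k = 1:kmax
        Lm = zeros(k, 1);
        for m = 1:k
            idx  = m:k:N;                 % X(m), X(m+k), X(m+2k), ...
            nSeg = numel(idx) - 1;        % int((N-m)/k)
            % curve length with Higuchi's normalisation factor
            Lm(m) = sum(abs(diff(x(idx)))) * (N - 1) / (nSeg * k^2);
        end
        L(k) = mean(Lm);                  % mean over the k curves
    end
    % slope of ln L(k) vs. ln(1/k) is the estimate of D
    p = polyfit(log(1 ./ (1:kmax)'), log(L), 1);
    D = p(1);
    end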
Fig. 1 Example of a sequence determination on a curve for the length calculation

D. Linear Analysis

The most important time- and frequency-domain parameters (the standard deviation of normal-to-normal beats, SDNN, and the power in the low frequency band, LFP, 0.04-0.15 Hz) were computed on consecutive 5-min RR sequences, as defined in accordance with the ACC/AHA/ESC consensus [12].
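As an illustration of the linear indexes, the sketch below computes SDNN and LFP on one 5-min window of the 2-Hz resampled RR series (variable rr, in ms). A plain periodogram is used here for simplicity; the exact spectral estimator used by the authors is not stated:

    fs   = 2;                              % resampling rate, Hz
    seg  = rr(1:5*60*fs);                  % one 5-min RR window (ms)
    sdnn = std(seg);                       % SDNN, ms

    segc = seg(:) - mean(seg);             % remove the DC component
    N    = numel(segc);
    psd  = abs(fft(segc)).^2 / (N*fs);     % periodogram, ms^2/Hz
    f    = (0:N-1)' * fs / N;
    band = f >= 0.04 & f <= 0.15;          % LF band
    lfp  = 2 * sum(psd(band)) * fs / N;    % one-sided LF power, ms^2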
E. Statistical Analysis

The Kolmogorov-Smirnov (KS) test was used to assess the normality of the distributions of all variables studied and, due to the marked skewness in the distribution of some variables, the associations between HRV, neurohormonal and functional indexes were assessed by the Spearman correlation coefficient.
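For completeness, Spearman's coefficient is simply the Pearson correlation of tie-averaged ranks; the toolbox-free MATLAB sketch below is equivalent to corr(x, y, 'type', 'Spearman') from the Statistics Toolbox:

    function rho = spearmanRho(x, y)
    % Spearman rank correlation of two equally long vectors.
    rx = tiedRank(x);  ry = tiedRank(y);
    rx = rx - mean(rx);  ry = ry - mean(ry);
    rho = sum(rx .* ry) / sqrt(sum(rx.^2) * sum(ry.^2));
    end

    function r = tiedRank(v)
    % Ranks of v, with tied values receiving their average rank.
    v = v(:);
    [~, idx] = sort(v);
    r(idx, 1) = (1:numel(v))';
    for u = unique(v).'
        m = (v == u);
        r(m) = mean(r(m));
    end
    end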
III. RESULTS

In Table 1, the neurohormonal levels and functional indexes are reported. The mean and standard deviation of the HRV indexes obtained by linear and fractal analysis are shown in Table 2. Table 3 lists the Spearman correlation analysis between neurohormonal levels and functional indexes, on the one hand, and HRV indexes, on the other. The results showed a moderate but significant negative correlation between NPE levels and both SDNN and LFP; FD exhibited a weaker, positive association (see also Fig. 2). Linear indexes showed a moderate but significant positive association with VO2max; FD showed a negative correlation of similar magnitude (see also Fig. 3). FD showed a moderate but significant negative correlation with EF values; SDNN and LFP exhibited weaker positive correlations (see also Fig. 4).

Table 1 Mean and standard deviation of neurohormonal levels and functional indexes

NPE [pg/L]                 363 ± 210
EF [%]                     24 ± 6
VO2max [mL·kg⁻¹·min⁻¹]     14 ± 4

Table 2 Mean and standard deviation of HRV indexes

FD            1.6 ± 0.1
LFP [ms²]     162 ± 171
SDNN [ms]     36 ± 15

Table 3 Spearman correlation analysis between neurohormonal levels and functional indexes with HRV indexes: r and p values (in brackets)

        NPE             VO2max          EF
FD      0.29 (.0035)    -0.34 (.0008)   -0.42 (.0001)
LFP     -0.44 (.0001)   0.36 (.0003)    0.35 (.0004)
SDNN    -0.37 (.0002)   0.31 (.0023)    0.34 (.0009)

Fig. 2 Correlation between FD and LFP with NPE

Fig. 3 Correlation between FD and LFP with VO2max

Fig. 4 Correlation between FD and LFP with EF

IV. CONCLUSIONS

The assessment of the autonomic control of cardiovascular function is crucial to understanding the pathophysiology of heart failure. For this purpose, several techniques have been proposed so far, yet this still represents a challenging task. The measurement of plasma catecholamine levels provides a practical way to assess sympathetic activity and has been widely used, despite its limitation of being a "systemic" rather than organ-specific measurement of sympathetic activation.
The use of more specific measurements, such as cardiac norepinephrine spillover, is limited to small studies due to the invasiveness and complexity of these techniques [13]. Since heart rate variability is under the control of the autonomic nervous system, many efforts have been devoted to the development of methods based on the analysis of spontaneous fluctuations in heart rate to assess both the sympathetic and parasympathetic branches of the autonomic nervous system. Some nonlinear methodologies have also recently been proposed to estimate sympathetic and parasympathetic cardiac modulation [14]. The results obtained in this work suggest that, although the associations of linear and fractal-dimension HRV indexes with functional parameters are similar, the former, particularly the power in the low frequency band, appear to reflect more closely the level of adrenergic activation in HF patients.
REFERENCES

1. Bundkirchen A, Schwinger RH (2004). Epidemiology and economic burden of chronic heart failure. Eur Heart J Supplements; 6 (Suppl D): 57-60.
2. Cohn JN, Levine TB, Olivari MT, Garberg V, Lura D, Francis GS, Simon AB, Rector T (1984). Plasma norepinephrine as a guide to prognosis in patients with chronic congestive heart failure. N Engl J Med; 311: 819-823.
3. Kienzle MG, Ferguson DW, Birkett CL, Myers GA, Berg WJ, Mariano DJ (1992). Clinical, hemodynamic and sympathetic neural correlates of heart rate variability in congestive heart failure. Am J Cardiol; 69 (8): 761-767.
4. Bigger T, Steinman R, Rolnitzky L, Fleiss J, Albrecht P, Cohen R (1996). Power law behavior of RR-interval variability in healthy middle-aged persons, patients with recent acute myocardial infarction and patients with heart transplants. Circulation; 93: 2142-51.
5. Makikallio TH, Huikuri H, Hintze U, Videbaek J, Mitrani RD, Castellanos A, Myerburg R, Moller M (2001). Fractal analysis and time- and frequency-domain measures of heart rate variability as predictors of mortality in patients with heart failure. Am J Cardiol; 87(2): 178-82.
6. Cerutti S, Carrault G, Cluitmans PJ, Kinie A, Lipping T, Nikolaidis N, Pitas I, Signorini MG (1996). Non-linear algorithms for processing biological signals. Comp Met Prog Biomed; 1: 51-73.
7. Goldberger AL (1992). Fractal mechanisms in the electrophysiology of the heart. IEEE Eng Med Biol; 11: 47-52.
8. Higuchi T (1988). Approach to an irregular time series on the basis of the fractal theory. Physica D; 31: 277-83.
9. Butler GC, Yamamoto Y, Xing HC, Northey DR, Hughson RL (1993). Heart rate variability and fractal dimension during orthostatic challenges. J Appl Physiol; 75(6): 2602-12.
10. D'Addio G, Accardo A, Maestri R, Picone C, Furgi G, Rengo F (2003). Reproducibility of nonlinear indexes of HRV in chronic heart failure. Pace, February 2003, Volume 26, No. 2, Part II, p. S171.
11. Bigger T, Fleiss J, Rolnitzky L, Steinman R (1992). Stability over time of heart period variability in patients with previous myocardial infarction and ventricular arrhythmias. Am J Cardiol; 69: 718-23.
12. Task Force of the European Society of Cardiology (1996). Heart rate variability: standards of measurement, physiological interpretation and clinical use. Circulation; 93: 1043-65.
13. Esler M, Kaye D (2000). Measurement of sympathetic nervous system activity in heart failure: the role of norepinephrine kinetics. Heart Failure Reviews; 5: 17-25.
14. Guzzetti S, Borroni E, Garbelli PE, Ceriani E, Della Bella P, Montano N, Cogliati C, Somers VK, Malliani A, Porta A (2005). Symbolic dynamics of heart rate variability: a probe to investigate cardiac autonomic modulation. Circulation; 112 (4): 465-470.
Author: Gianni D'Addio
Institute: S. Maugeri Foundation - Bioengineering Dpt
Street: Via Bagni Vecchi
City: 82037 Telese Terme (BN)
Country: Italy
Email: [email protected]
On the Selection of Time Interval and Frequency Range of EEG Signal Preprocessing for P300 Brain-Computer Interfacing

N.V. Manyakov1, N. Chumerin1, A. Combaz1 and M.M. Van Hulle1

1 Laboratory for Neuro- and Psychophysiology, K.U.Leuven, Herestraat 49, PO Box 1021, 3000 Leuven, Belgium
Abstract— We consider an EEG-based, wireless brain-computer interface (BCI) with which subjects can "mind-type" text on a computer screen. The application is based on the detection of the P300 event-related potential (ERP). The frequency range for preprocessing the EEG recordings, and the location and length of the time interval after stimulus onset, are selected with respect to the classification accuracy obtained for different subjects, and for different numbers of trials used for averaging the P300 ERP.

Keywords— Brain-computer interface, mind-typer, P300, frequency range selection, time interval selection.
I. INTRODUCTION

Research on brain-computer interfaces (BCIs) has witnessed a tremendous development in recent years (see, for example, the editorial in Nature [1]), and is now widely considered one of the most successful applications of the neurosciences. BCIs can significantly improve the quality of life of patients suffering from amyotrophic lateral sclerosis, brain stroke, brain/spinal cord injury, cerebral palsy, muscular dystrophy, etc. Brain-computer interfaces are either invasive (intra-cranial) or noninvasive. The former have electrodes implanted into the premotor or motor frontal areas or into the parietal cortex (see the review in [2]), whereas the noninvasive ones mostly employ electroencephalograms (EEGs) recorded from the subject's scalp. In this study we focus on noninvasive BCIs that rely on the event-related potential (ERP), i.e., a stereotyped electrophysiological response to an internal or external stimulus [12]. One of the best known and most explored ERPs is the P300. It can be detected while the subject is classifying two types of events, with one of the events occurring much less frequently than the other ("rare event"). The rare event elicits an ERP consisting of an enhanced positive-going signal component with a latency of about 300 ms [13]. In order to detect the ERP, one trial is usually not enough, and averaging over several trials is required. The averaging is necessary because the recorded signal is a superposition of all ongoing brain activities. By averaging the recordings, those that
are time-locked to a known event (e.g., the attended stimulus) are extracted as ERPs, whereas those that are not related to the stimulus presentation are averaged out. The stronger the ERP signal, the fewer trials are needed, and vice versa. Ideally, one would like to be able to robustly detect ERPs from single trials; unfortunately, this is still beyond reach. For detecting the P300 wave, usually a relatively long interval after stimulus onset (up to 600-1000 ms) is considered, the selection of which is not optimized. Feature selection [14] or feature extraction [15] is then performed, given this interval, to best separate the P300 from the non-P300 waves. Approaches have been considered that do not perform feature selection or extraction, but that directly rely on more powerful classifiers, such as the kernel support-vector machine [16] or other types of non-linear classifiers able to deal with high-dimensional data. However, such strategies can be prone to overfitting, especially in cases where only a small amount of training samples is available. The question is whether the time interval as well as the frequency range are apt for the task. Traditionally, in mind typing, the focus is on the detection of the P300 potential (around 300 ms after stimulus onset) [17]. Recently, the N200 wave was also shown to improve the classification performance [18]. Hence, the first part of our question: which interval after stimulus onset yields the best classification accuracy? In this paper, we try to find an answer to this question for a particular type of classifier and feature extraction strategy. The second part of the question concerns the choice of the appropriate frequency range when preprocessing the EEG signals. P300-based mind-typers perform prefiltering in a frequency range [f1, f2] [16]. There is no consistency in the selection of this frequency range: f1 varies from 0.1 to 0.5 Hz and f2 from 10 to 25 Hz. In this paper, we also investigate the influence of the frequency range selection on the classification accuracy.
II. METHODS

A. EEG data acquisition

The EEG recordings were performed using a prototype of an ultra-low-power 8-channel wireless EEG system.
Fig. 1: Typing matrix of the mind-typer. The intensifications of the third column (left panel) and of the second row (right panel) are shown.
The system was developed by IMEC (Interuniversity Microelectronics Centre, http://www.imec.be) and is built around their ultra-low-power 8-channel EEG amplifier chip [19]. Recordings were made with eight electrodes located over the parietal and occipital areas, namely at positions Cz, CPz, P1, Pz, P2, PO3, POz and PO4, according to the international 10–20 system. The reference electrode and the ground were placed on the left and right mastoids.

B. Data-stimuli synchronization

In order to synchronize the data and the stimuli, we saved the exact time stamps of the start and end moments of the recording session, as well as the time stamps of the stimulus onsets and offsets. Assuming that the reconstructed EEG signal has a constant sampling rate, it is possible to find very precise correspondences between time stamps and data samples. We used this correspondence mapping for partitioning the EEG signal into signal tracks for further processing.

C. Experiment design

Seven healthy subjects (5 male and 2 female, aged 22–36 with an average age of 27, six right-handed and one left-handed) participated in the experiments. Each experiment consisted of one training stage and several testing stages. We used the same visual stimulus paradigm as in the first P300-based mind-typer, introduced by Farwell and Donchin in [17]: a matrix of 6 × 6 symbols. The only (minor) difference was in the symbol set, which in our case was a set of 26 Latin characters, eight digits and two special symbols (' ' used instead of space and '¶' used as an end-of-input indicator). During the training and testing stages, columns and rows of the matrix were intensified in a random manner (see Fig. 1). The intensification duration was 100 ms, followed by 100 ms of no intensification. Each column and row flashed only once during one trial, so each trial consisted of 12 stimulus presentations (6 rows and 6 columns).
As was mentioned in the Introduction, one trial is not enough for robust ERP detection; hence, we adopted the common practice of averaging the recordings over several trials before performing the classification of the (averaged) recordings. During the training stage, all 36 symbols of the typing matrix were presented to the subject. Each symbol had 10 trials of intensification for each row/column (10-fold averaging). The subject was asked to count the number of intensifications of the attended symbol; the counting was used only for keeping the subject's attention on the symbol. The recorded data were filtered in the [f1, f2] frequency band with a fourth-order zero-phase digital Butterworth filter and cut into signal tracks. Each of these tracks consisted of 1000 ms of recording, starting from stimulus onset. Note that subsequent tracks overlapped in time, since the time between two consecutive stimulus onsets was 200 ms. Then, each of these tracks was downsampled to 2[f2]+1 samples (in accordance with the Nyquist-Shannon sampling theorem) and assigned to one of two possible groups, target or nontarget, according to the stimuli to which they were locked. For classification purposes, only points within the interval [t1, t2] were considered.

D. Feature Extraction

In order to classify the averaged and subsampled EEG recordings into the target and nontarget classes, we used the one-dimensional version of the linear feature extraction (FE) approach proposed by Leiva-Murillo and Artés-Rodríguez in [20]. As is the case with all linear FE methods, this method considers as features the projections of the centered input vectors X = {x_i : x_i ∈ R^D} onto an appropriate d-dimensional subspace (d < D), and the task is to find that subspace. The method searches for the "optimal" subspace maximizing (an estimate of) the mutual information between the set of projections Y = {W^T x_i} and the set of corresponding labels C = {c_i}.

E. Classification

During the testing stage, for each trial, we had 12 tracks (for 6 rows and 6 columns) of 1000 ms of EEG data recorded from each electrode. The averaged (along trials) EEG response for each electrode was determined for each row and column. Then, all 12 averaged tracks were sequentially fed into the feature extractor, which extracted a scalar feature y_i for each track i. The classifier selected the best "row candidate" among the extracted features as arg max{y_1, ..., y_6} and the best "column candidate" as arg max{y_7, ..., y_12}. The symbol at the intersection of this row and column in the matrix was then considered the result of the classification.
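The processing chain of Sections C-E can be summarised in a short MATLAB sketch. It assumes the Signal Processing Toolbox for butter/filtfilt, and all variable names (eeg, onsets, avgTracks, the feature extractor featX, typingMatrix) are ours, introduced only for illustration:

    % --- prefiltering (Section C); butter's band-pass design doubles the
    % prototype order, so an order-2 prototype yields the 4th-order filter
    [b, a] = butter(2, [f1 f2] / (fs/2));      % band-pass between f1 and f2
    filt   = filtfilt(b, a, eeg);              % zero-phase filtering

    % --- cutting and downsampling the 1000-ms post-stimulus tracks
    winLen = round(1.0 * fs);                  % 1000 ms of samples
    nPts   = 2*floor(f2) + 1;                  % 2[f2]+1 points kept per track
    keep   = round(linspace(1, winLen, nPts));
    nStim  = numel(onsets);
    tracks = zeros(nPts, size(eeg, 2), nStim);
    for s = 1:nStim
        seg = filt(onsets(s) : onsets(s) + winLen - 1, :);
        tracks(:, :, s) = seg(keep, :);
    end

    % --- decoding one averaged trial (Section E): y(1..6) are the row
    % features, y(7..12) the column features returned by the extractor
    for i = 1:12
        y(i) = featX(avgTracks(:, :, i));      % scalar feature per track
    end
    [~, bestRow] = max(y(1:6));
    [~, bestCol] = max(y(7:12));
    symbol = typingMatrix(bestRow, bestCol);   % 6x6 speller matrix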
Fig. 2: (A-C) Classification accuracy of the mind-typing system as a function of prefiltering in the interval [f1, f2]: for subject VP after 5-fold averaging (A), and for subject MNV after 2- (B) and 5-fold averaging (C); (D-F) classification accuracy of the mind-typing system as a function of the time interval [t1, t2] considered for classification: for subject AP after 10-fold averaging (D), and for subject MNV after 2- (E) and 5-fold averaging (F).
F. Filtering frequency range and time interval selection

We performed an off-line classification of the P300 data from all subjects. The data were prefiltered in the band [f1, f2] before applying the classifier (discussed above), for all recordings of length 1000 ms after stimulus onset. We estimated the classification accuracy for all possible frequency intervals, where 0.1 ≤ f1 ≤ 20 and 2 ≤ f2 ≤ 30 were taken with a step size of 0.1 Hz (see Fig. 2A-C). For the selection of the time interval [t1, t2], we first determined, for each subject, the best filtering range, and then performed a search for the time interval on the filtered and subsampled recordings (see Fig. 2D-F).
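The exhaustive band search just described amounts to two nested loops. In the MATLAB sketch below, evalAccuracy is a placeholder for the full filter-epoch-average-classify pipeline and is not a function from the paper:

    f1grid = 0.1:0.1:20;
    f2grid = 2:0.1:30;
    acc = nan(numel(f1grid), numel(f2grid));
    for i = 1:numel(f1grid)
        for j = 1:numel(f2grid)
            if f2grid(j) > f1grid(i)               % only valid bands
                acc(i, j) = evalAccuracy(f1grid(i), f2grid(j));
            end
        end
    end
    [bestAcc, k]   = max(acc(:));                  % max ignores the NaN entries
    [iBest, jBest] = ind2sub(size(acc), k);
    bestBand = [f1grid(iBest), f2grid(jBest)];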
III. RESULTS

As expected, the obtained accuracy is subject dependent (compare Figs. 2A and C, where the accuracy is plotted for two different subjects, given a classifier trained on 5-fold averaging). Looking at the performance across all subjects, we can conclude that f1 should be small (0.1 Hz), since low-frequency oscillations have a huge impact on the classification accuracy. Concerning the upper frequency f2, we have to mind the following. First of all, we should remember that,
for a small f2, we will have a smaller number of points for classification, because of the downsampling according to the Nyquist-Shannon theorem. An upper limit for f2 can also be detected for some subjects (see, for example, Fig. 2A), where it could not exceed 15 Hz to obtain a good performance. On the other hand, it should not be smaller than 3-4 Hz. Based on this, and on the analysis of the recordings from all subjects, we suggest prefiltering the data in the band 0.1-10 Hz before considering them for classification. Note that this interval was found to be appropriate for all numbers of trials used for averaging. We should add, though, that the more trials considered for averaging, the broader the interval that can be used (and the higher f1 can be taken). This can be verified by comparing Figs. 2B and C, where the accuracy is plotted for the same subject but for different numbers of trials for averaging (2-fold averaging in panel B and 5-fold averaging in panel C). When comparing the results across subjects, we observe a consistency in the choice of the time intervals that lead to a good classification accuracy (compare, for example, panels D and E-F in Fig. 2, where the results for two different subjects are shown). It can also be seen that, contrary to the selection of the frequency interval, the number of trials used for averaging does not have a large impact on the choice of
the time interval (compare panels E and F in Fig. 2, where the results for 2- and 5-fold averaging are shown for the same subject). Based on the results from all the subjects, we can conclude that t1 should not be larger than 330 ms, and t2 should not be smaller than 200 ms. Thus, this interval should contain the P300 component (on which our class of mind-typers is based) and the N200 component (the importance of which was pointed out in the recent work of [18]). Interestingly, the positive-going P600 component also contributes to the decoding for some subjects. As can be seen from Fig. 2D (in the center of the figure), an isolated interval around 600 ms yields a classification performance of around 50% (chance level is 2.8%). This suggests the possibility of including the P600 component in the decoding process.
IV. CONCLUSION

We have argued for a more careful choice of the frequency range for prefiltering EEG recordings, and of the time interval after stimulus onset, for maximizing the classification accuracy in a mind-typing brain-computer interface. We have motivated our choice based on a study performed on several subjects, using different numbers of trials for averaging. The proposed filtering range is 0.1-10 Hz. We have also shown that the chosen time interval should contain the N200, P300, and P600 components of the event-related potential, because they jointly contribute to the classification accuracy.
ACKNOWLEDGEMENTS

NVM and AC are supported by the European Commission (IST-2004-027017), NC is supported by the European Commission (STREP-2002-016276), MMVH is supported by research grants received from the Excellence Financing program (EF 2005) and the CREA Financing program (CREA/07/027) of the K.U.Leuven, the Belgian Fund for Scientific Research – Flanders (G.0234.04 and G.0588.09), the Interuniversity Attraction Poles Programme – Belgian Science Policy (IUAP P5/04), the Flemish Regional Ministry of Education (Belgium) (GOA 2000/11), and the European Commission (STREP-2002-016276, IST-2004-027017, and IST-2007-217077). The authors wish to thank Refet Firat Yazicioglu, Tom Torfs and Herc Neves from the Interuniversity Microelectronics Centre (IMEC) in Leuven for providing us with the wireless EEG system.
REFERENCES

1. Editorial Comment: Is this the Bionic Man? Nature. 2006;442:109.
2. Pesaran B., Musallam S., Andersen R.A. Cognitive neural prosthetics. Current Biology. 2006;16:77–80.
3. Vidal J.J. Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering. 1973;2:157–180.
4. Sutter E.E. The brain response interface: communication through visually-induced electrical brain responses. Journal of Microcomputer Applications. 1992;15:31–45.
5. Middendorf M., McMillan G., Calhoun G., Jones K.S. Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Transactions on Rehabilitation Engineering. 2000;8:211–214.
6. Kübler A., Kotchoubey B., Kaiser J., Wolpaw J.R., Birbaumer N. Brain-computer communication: unlocking the locked in. Psychological Bulletin. 2001;127:358–375.
7. Birbaumer N., Kübler A., Ghanayim N., et al. The thought translation device (TTD) for completely paralyzed patients. IEEE Transactions on Rehabilitation Engineering. 2000;8:190–193.
8. Wolpaw J.R., McFarland D.J., Vaughan T.M. Brain-computer interface research at the Wadsworth Center. IEEE Transactions on Rehabilitation Engineering. 2000;8:222–226.
9. Pfurtscheller G., Guger C., Müller G., Krausz G., Neuper C. Brain oscillations control hand orthosis in a tetraplegic. Neuroscience Letters. 2000;292:211–214.
10. Blankertz B., Dornhege G., Krauledat M., Müller K.R., Curio G. The non-invasive Berlin brain–computer interface: fast acquisition of effective performance in untrained subjects. NeuroImage. 2007;37:539–550.
11. Millán J. del R., Renkens F., Mouriño J., Gerstner W. Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering. 2004;51:1026–1033.
12. Luck S.J. An Introduction to the Event-Related Potential Technique. MIT Press, Cambridge, MA: 2005.
13. Pritchard W.S. Psychophysiology of P300. Psychological Bulletin. 1981;89:506.
14. Ahi S.T., Kambara H., Koike Y. A comparison of dimensionality reduction techniques for the P300 response. In: Proceedings of the 3rd International Convention on Rehabilitation Engineering & Assistive Technology: 28. ACM, 2009.
15. Chumerin N., Manyakov N.V., Combaz A., et al. P300 detection based on feature extraction in on-line brain-computer interface. Lecture Notes in Computer Science. 2009;5803:339–346.
16. Thulasidas M., Guan C., Wu J. Robust classification of EEG signal for brain-computer interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2006;14:24–29.
17. Farwell L.A., Donchin E. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology. 1988;70:510–523.
18. Hong B., Guo F., Liu T., Gao X., Gao S. N200-speller using motion-onset visual response. Clinical Neurophysiology. 2009;120:1658–1666.
19. Yazicioglu R.F., Merken P., Puers R., Van Hoof C. Low-power low-noise 8-channel EEG front-end ASIC for ambulatory acquisition systems. In: Proceedings of the 32nd European Solid-State Circuits Conference: 247–250, 2006.
20. Leiva-Murillo J.M., Artés-Rodríguez A. Maximization of mutual information for supervised linear feature extraction. IEEE Transactions on Neural Networks. 2007;18:1433–1441.
Corresponding author: Marc M. Van Hulle
Institute: Medical School, Katholieke Universiteit Leuven
Street: Herestraat 49, bus 1021
City: Leuven
Country: Belgium
Email: [email protected]
Development of a Simple and Cheap Device for Movement Analysis

Csanád G. Erdős, Gergő Farkas, and Béla Pataki

Budapest University of Technology and Economics (BUTE), Department of Measurement and Information Systems, Budapest, Hungary

Abstract— The way someone sits down on or stands up from a chair tells us a lot about his or her motion, since when sitting down or standing up the muscles, joints, knees and hip are working hard. Improper functioning of these organs and abilities can be detected by observing and analysing these motion processes. For example, falling back onto the chair and an asymmetric load on the legs are signs of such disorders. The processes of sitting down and standing up can be analysed easily and simply with the tool developed by us, which includes the hardware, software and signal processing algorithms. Moreover, no special prerequisites are needed for the use of this tool. The appliance was tested on a small-to-medium-sized population (a few dozen people).

Keywords— sit-to-stand (STS), motion analysis, motion disorder, home health monitoring.
I. INTRODUCTION

In developed countries, with the increasing rate and number of elderly people, the examination and monitoring of movement disorders with simple instruments, perhaps at home, has become more important. Home accidents, of which a large number are due to motion disorders, are very often the reason for deteriorating living standards, or many times even the cause of death. The subject of our study is the development and realization of a chair prepared for motion monitoring. The idea is that the way someone sits down on or stands up from a chair tells us a lot about his or her motion [1], as when sitting down or standing up the muscles, joints, knees and hip are working hard. For instance, the person may start over again and again, may plop back onto the chair, or may load one of his legs more heavily, etc. In the literature there are several similar projects. B. Najafi et al. [2] deal in their study with the monitoring of a patient who is outside a hospital but receives medical aid from it. Their system is able to detect body postures (sitting, standing and lying) and periods of walking in elderly persons using a kinematic sensor attached to the patient's chest. T. Liu et al. [3] developed a wearable sensory system for human lower-extremity motion analysis. In our development, the aim was to be able to examine a person's sit-to-stand task without sensors placed on the body.
A sit-to-stand assistance system equipped with a moving handrail to help elderly people and Parkinson's disease patients stand up from a chair was developed by K. Tomuro et al. [4]. The construction provides great help for people with Parkinson's disease in their everyday life, and the system, with its multi-faceted measurement opportunities, can also be well used for examination purposes. The device worked perfectly, but it is very expensive, so it is not a solution for home monitoring, as most people cannot afford it. The goal of the research of M. Goffredo et al. [5] is the monitoring of the human body and its movements based on visual observation. They performed two-dimensional analysis on the recorded videos. S. Allin and A. Mihailidis worked on similar research [6]. Their technical equipment, which is based on three-dimensional analysis of video recordings, is able to evaluate the movement of standing up from a chair. Methods based on video monitoring are not suitable for our purposes, since continuous home monitoring raises legal and ethical issues; moreover, very complex problems related to image processing and shape recognition have to be solved. Our goal was to create a cheap and simple instrument which makes it possible to monitor elderly people at home in the long run. Currently the prototype has been developed and the basic measurement methods have been tested. During our work we equipped a chair with cheap strain gauges (of the kind used in simple, cheap commercial scales). We showed the ready-made prototype to physiotherapist experts, who made suggestions for some alterations and, at the end of the work, helped us in testing the instrument. According to their suggestion, we placed a foothold in front of the chair, which gives even more possibilities for measurement. During the work, the hardware, which performs the filtering, amplification, sampling and digitization of the analogue signals provided by the strain-gauge bridges, as well as the communication with the PC, was developed and tested. The operating software has been written too, which provides the possibility of expert-controlled measurement as well as event-triggered automatic measurement. We have carried out tests on a small population (some dozens of people), among them healthy and slightly disabled people as well. The signal processing software is also ready; it was written and tested in the MATLAB environment.
II. SYSTEM DESCRIPTION

A. Hardware

The prototype chair is a cheap kitchen stool. The flat plate in front of the chair was made from wooden components. The positions of the footholds are variable, so asymmetric layouts can be set as well (Fig. 1a). The instrument consists of a chair without a back-rest and two footrests on a plane in front of the chair, onto which the patient puts his feet in the course of the measurement. The load is measured by strain gauges: one is placed on each leg of the chair and four at each foothold (Fig. 1b).
Fig. 1 The device and the positions of the sensors

A Wheatstone bridge is used at each sensor in order to get a voltage signal which is proportional to the load. After amplification and analogue filtering, the signal goes to the analogue-to-digital converter of the microcontroller. A signal conditioning circuit supports the hardware minimization of the offset voltage. Data collection is performed by two AVR ATmega16 microcontrollers which work in a master-slave scheme. Data can be transferred to the PC via wired or wireless communication by the master. Measurement can also be executed without PC control; in this case the data are saved onto a Micro SD card. Although our goal was the analysis of standing up and sitting down, a simple stabilograph-like system emerged as a side product of our research: the sense of balance and stability can be measured by using the footholds only.

B. Software

There are two basic types of measurements which can be performed with the developed tool. The first measurement type is the examination of the process of sitting down and standing up. In this case all of the 12 sensors are used. In the routines executed so far, the patients had to sit down and stand up three times.
The other measurement type is the examination of balance and stability. During this process, the patient stands on the footholds and stays there for 30 seconds. It is executed first in a natural body position; in another measurement the patient is asked to load his legs equally, and he can check the balance on the monitor, which provides biofeedback. Each of the measurement types is controlled by its own MATLAB GUI (Graphical User Interface). One of the two GUIs plots load-time figures for the examination of the sitting-down/standing-up process. The second one, used during the examination of the sense of balance, calculates the load distribution between the two legs and among the parts of the foot, and gives the biofeedback mentioned. The sum and the difference of the outputs of some sensor groups can be plotted as well. For instance, the sum and the difference of the left- and right-side sensors placed on the chair can be calculated, so the asymmetry between the left and right sides can be examined easily. The data processing can be split into two groups. The first group contains the analysis of the load-time figures recorded during the examination. The second one is based on the projection of the centre of mass of the patient onto a horizontal plane. During the analysis of the load-time figures (Fig. 2), the rise time, fall time and overshoot were calculated after the determination of the level of 100% load. Then these parameters were examined and their standard deviations were obtained. This method was used mainly in the sit-down/stand-up process analysis.
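As an illustration of the load-time analysis, the sketch below extracts the rise time and overshoot from one stand-up curve w (kg, sampled at fs Hz). The 10%/90% rise-time convention and the settling window are our assumptions; the paper does not state its exact definitions:

    level = mean(w(end - 2*fs : end));            % settled 100% load level (assumed window)
    i10 = find(w >= 0.10 * level, 1, 'first');
    i90 = find(w >= 0.90 * level, 1, 'first');
    riseTime  = (i90 - i10) / fs;                 % seconds
    overshoot = 100 * (max(w) - level) / level;   % percent above body weight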
Fig. 2 Load-time figures

The accurate coordinates of the sensors were determined. The weighted mean of the sensor coordinates, where the weights are the loads measured by the sensors, gives the coordinates of the centre of mass (1):
X_{COG} = \frac{\sum_{i=1}^{12} x_i m_i}{\sum_{i=1}^{12} m_i}, \qquad Y_{COG} = \frac{\sum_{i=1}^{12} y_i m_i}{\sum_{i=1}^{12} m_i} \qquad (1)
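Eq. (1) evaluates to one point per sample, so the whole trajectory can be computed at once. In the MATLAB sketch below, x and y hold the 12 fixed sensor coordinates and m is a 12 x T matrix of measured loads; the names are illustrative:

    % Centre-of-mass projection for every sample, per Eq. (1)
    xcog = (x(:)' * m) ./ sum(m, 1);   % 1 x T trajectory, mm
    ycog = (y(:)' * m) ./ sum(m, 1);
    plot(xcog, ycog);                  % projection onto the horizontal plane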
The calculation of the coordinates of the centre of mass can be executed for each sample, so the movement of the centre of mass can be plotted. The movement of the centre of mass was examined for both measurement processes (sitting down/standing up and stability). For the stability measurement, the ratio of the parts of the graph near to and far from the stable point was calculated as well (near and far were defined using a reference circle). The value of this ratio is inversely proportional to the patient's sense of balance; therefore it gives a tool for the examination of stability. The reference area was determined as follows: the smallest circle which contains all of the measured points of the centre-of-mass projection was calculated for every measurement; then the average of these circles was taken, and the reference circle became a quarter of this average one.

III. RESULTS

A. The Results of the Examination of the Sit-to-Stand Task

The corresponding diagnostic software was tested on the data of 32 patients. The averages and the standard deviations of the rise times, fall times and overshoots were calculated. We looked for parameters on which the patients with motion disorders show a significant difference from healthy people.

Fig. 3 Examination of the parameters (standard deviation of the overshoot, in %, vs. standard deviation of the rise time, in sec)

An example plot is shown in Figure 3. The two groups are distinguished by their colours (red dots are the patients with some disease, and green dots correspond to the healthy people). The standard deviation of the overshoots is given as a function of the standard deviation of the rise times. It seems that the green points, which represent the healthy people, are closely located to the centre of the figure, while the red dots can be found in three different groups. These groups lie in different directions from the location of the green points. Supposedly these three groups can be characterized by the disease of the patients.

Fig. 4 Examination of the movement of the centre of mass (sit-to-stand and stand-to-sit trajectories)

Further conclusions can be drawn from the graphs which show the projection of the movement of the centre of mass while the patient is sitting down and standing up. An example is shown in Figure 4. It can be noticed in this figure that the three measured stand-to-sit and sit-to-stand curves are highly similar to each other. This indicates that everybody has his or her own specific sitting-down and standing-up movement. This depends on the movement coordination and the work of the motion system. Therefore these curves characterize the movement of the person, so doctors may use them to diagnose the patient.

Fig. 5 A curve of the movement of the centre of mass of a patient with a motion disease
Graphs which are highly different from the typical ones can also be informative. Figure 5 shows the curves of a patient with a lower-limb disease. It can be seen that the patient's centre of mass suddenly moves to the right at about two thirds of the process.

B. Results of the Examination of the Sense of Balance

The diagnostic software was run on the same, previously tested 32 patients. The tool calculated the percentage of the measured points which lie in the reference circle. If the patient keeps his balance well, even 100% of the samples can be in the circle (Fig. 6).
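The in-circle percentage reported for Figs. 6-7 follows directly from the reference-circle construction described in Section II. The MATLAB sketch below approximates the smallest enclosing circle by the mean point and the largest distance from it, a simplification of the exact construction; Rall is assumed to hold the per-measurement radii:

    c = [mean(xcog), mean(ycog)];                 % centre of the point cloud
    d = hypot(xcog - c(1), ycog - c(2));          % distance of every sample
    Rthis = max(d);                               % enclosing-circle radius (approx.)
    Rref  = mean(Rall) / 4;                       % quarter of the average radius
    inCircle = 100 * mean(d <= Rref);             % e.g. 100% or 85.32% (Figs. 6-7)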
Fig. 6 Centre-of-mass samples of a patient with a good sense of balance (100% of the points inside the reference circle)

Fig. 7 Centre-of-mass samples with 85.32% of the points inside the reference circle

The calculations were performed for two measurements. In the first case the patient did not see the indicator on the screen, while in the second case he did; in this latter case, biofeedback about the distribution of the load was thus realized. Two facts were discovered by comparing these two percentage values. First, in most of the cases (about 80%) the biofeedback improved the parameters of the sense of balance. However, in special cases (mostly with old patients), concentrating on the frequently and swiftly changing indicator made the examined persons disconcerted. The reason for this effect can be that the person overcompensates: the patient sees the indicator exceed the limits, then changes his or her body position, but the change is too large, so the indicator exceeds the limits in the other direction. Secondly, by our investigations the patients can be divided into three groups. The members of the first group passed the described process with a result of 85-100% (Figs. 6-7), patients in the second group earned 35-80%, while the people in the third group got a result of 5-25%.

IV. DISCUSSION

A configurable mechanical construction and measurement system was developed, using strain-gauge sensors. The conditioning and digitization of the analogue sensor signals, as well as the data collection and signal processing, were developed as well. Multi-function diagnostic software was written. This tool helps the doctor who executes the examination to check the condition of the patient easily. The tool meets our preliminary goals because it is simple, cheap (a total cost of 45-55 €), appropriate for home usage in the long run, and it was shown that several examination methods can be performed with it.

ACKNOWLEDGMENT

The authors would like to express their gratitude to György Baur (Budapest University of Technology and Economics) for his precious help in forming the mechanics. They would also like to thank Dr. Mónika Horváth, Head of the Department of Physiotherapy of Semmelweis University, for her useful advice.

REFERENCES
1. R. Aissaoui, J. Dansereau, "Biomechanical analysis and modelling of the sit-to-stand task: a literature review", IEEE SMC '99 Conference Proceedings, IEEE International Conference on Systems, Man, and Cybernetics, 1999, Volume 1, pp. 141-146.
2. B. Najafi, K. Aminian, A. Paraschiv-Ionescu, F. Loew, C.J. Büla, P. Robert, "Ambulatory system for human motion analysis using a kinematic sensor: monitoring of daily physical activity in the elderly", IEEE Transactions on Biomedical Engineering, vol. 50, no. 6, pp. 711-723, June 2003.
3. T. Liu, Y. Inoue, K. Shibata, H. Morioka, "Development of wearable sensor combinations for human lower extremity motion analysis", Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, Florida, May 2006.
4. K. Tomuro, O. Nitta, Y. Takahashi, T. Komeda, "Development of a sit-to-stand assistance system", J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 2157-2160, 2008.
5. M. Goffredo, M. Schmid, S. Conforto, M. Carli, A. Neri, T. D'Alessio, "Markerless human motion analysis in Gauss-Laguerre transform domain: an application to sit-to-stand in young and elderly people", IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 2, pp. 207-216, March 2009.
6. S. Allin, A. Mihailidis, "Low-cost, automated assessment of sit-to-stand movement in 'natural' environments", J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.): ECIFMBE 2008, IFMBE Proceedings 22, pp. 76-79, 2008.
Signal Peptide Prediction in Single Transmembrane Proteins Using the Continuous Wavelet Transform

I.A. Avramidou1, I.K. Kitsas1 and L.J. Hadjileontiadis1

1 Aristotle University of Thessaloniki, Dept. of Electrical & Computer Engineering, GR-54124 Thessaloniki, Greece
Abstract—The signal peptide (SP) is directly associated with a protein's translocation across biological membranes and, consequently, with the expression of its functional role. A scheme for predicting the exact position of the SP within a protein is proposed in this work, by applying the continuous wavelet transform (CWT) to the hydrophobicity sequence of the protein. The scheme was developed with regard to proteins of known structure extracted from publicly available databases. The results have verified the effectiveness of the method, which is comparable to existing methods, thus revealing a novel and fast approach to the prediction of SPs in single transmembrane proteins, with the prospect of generalized application.

Keywords—Signal peptide, prediction, transmembrane, proteins, continuous wavelet transform.

I. INTRODUCTION
In both prokaryotic and eukaryotic cells, proteins are allowed entry into the secretory pathway only if they are endowed with a specific targeting signal: a signal peptide (SP). The SP is in most cases a transient extension to the amino terminus of the protein and is removed once its targeting function has been carried out [1]. A number of computational methods aiming at the prediction of the exact position of the SP in the primary amino acid sequence have been developed. The majority of them are based on neural networks [2], [3], trained and tested on a set of experimentally derived SPs from eukaryotes and prokaryotes, or on Hidden Markov Models (HMMs) [4]-[6], which model the different sequence regions of a SP as a series of states interconnected by transition probabilities. Likewise, other schemes have been proposed, either implementing a position weight matrix approach [7], [8], or based on sequence alignment techniques [9], or using support vector machines [10], [11]. However, most of the methods mentioned above incorporate some kind of data dependency or complexity, thus leaving room for further research on the prediction of SPs by applying novel detection schemes. Neural network methods [2], [3], for example, have to pay the computational cost of training in exchange for better accuracy. Similarly, weight matrices become more precise as the amount of data on which they are based increases. Moreover, the general architecture of learning systems, such as neural
networks and HMMs, makes it difficult to trace the cause of false predictions. Finally, sequence alignment techniques call for the maintenance of large sets of reference data. In this paper, we have developed a method, namely the Continuous Wavelet Transform - SP Detector (CWT-SPD), for the prediction of SPs, which is devoid of the drawbacks described above. Specifically, the method focuses on the identification of the last amino acid of the primary amino acid sequence that belongs to a SP, which in most cases is referred to as the cleavage site, on the grounds that the SP is usually a transient extension to the amino terminus of the protein, as mentioned before. The proposed algorithm is based on the CWT and is applied to the arithmetic sequence produced by converting the amino acid sequence by means of the Kyte and Doolittle [12] hydrophobicity scale. The isolation of the area of CWT coefficients that represents the SP precedes the prediction, which is made according to the sum of coefficients across all scales. The method has been applied to a dataset of human proteins with a single transmembrane segment that are endowed with a reported SP. The results have indicated that CWT-SPD is a fast and effective approach to the problem of SP prediction.

II. MATERIALS AND METHODS
A. Dataset Characteristics
The dataset used in this work consists of documented transmembrane proteins extracted from SWISS-PROT Release 46.0 [13]. From the initial set of 12108 human protein sequences, automatically selected based on the presence of the 'TRANSMEM' keyword in the feature table, a subset of 1390 sequences was extracted, containing all the proteins with a single transmembrane segment. This subset was further refined by excluding similar sequences, so that every pair of sequences had less than 30% sequence identity, resulting in 499 single transmembrane proteins. These proteins were further divided into two groups with respect to the existence of a SP, based on the presence of the 'SIGNAL' keyword in the feature table. The process described above resulted in the following subsets: one with 327 single transmembrane proteins with a SP and another with 172 single
transmembrane proteins without a SP in the feature table. The method was tested on data derived from the first of these subsets. The SPs taken as reference had an average length of 23 amino acids. B. Continuous Wavelet Transform (CWT) The continuous wavelet transform of a series of hydrophobic values, x(n), is defined as [14] W x (a, b) =
where
α
1 a
n
∫ x ( n )ψ 0
*
⎛n−b⎞ , ⎜ ⎟ dn ⎝ a ⎠
(1)
is a scaling parameter and b a dilation parameter;
(a, b ∈ ℜ, a > 0) . n is the amino acid sequence length of
the protein containing the SP, while ψ(n) is the analyzing mother wavelet scaled by the factor a , and dilated by a factor b . The Mexican Hat Wavelet [15] was chosen for the realization of the CWT. This symmetrical wavelet, which is defined as the second derivative of the Gaussian probability density function, was selected in order to ensure common reference with other SP prediction methods. C. Prediction of the Signal Peptide The CWT coefficients are first thresholded thus keeping only positive values. Next the area of coefficients that represents the SP is identified and isolated, taking into account three parameters: a) the distance from the amino terminus of the vertical axis used to indicate it, b) the location of the coefficient with the greatest value, and c) the location and amplitude of the peaks resulting from the sum of the coefficients across all scales. A range of effective scales is dynamically selected in the following step, comprising at least 70% of the energy of the signal resulting by summation of the isolated coefficients across all residues. Finally, the prediction is made according to the zero-crossing point of the signal resulting from the sum of the coefficients across the range of effective scales. A characteristic example of the steps of SP prediction along with the modifications that take place at the CWT domain is depicted in Fig.1. In particular, Figs. 1(a) and 1(b) correspond to the CWT coefficients (before and after isolating the SP area), whereas Fig. 1(c) depicts the energy of the signal resulting by summation of the isolated coefficients across all residues. The cleavage site of the SP is determined by the zerocrossing, as illustrated in Fig. 1(d). Fig. 1 (a) The magnitude of the CWT of hydrophobicity sequence derived from the transmembrane protein Q9Y5G1 (SWISS-PROT Release 46.0), (b) the isolated section of (a) corresponding to the SP, (c) the energy of the signal resulting by summation of the isolated coefficients across all residues, (d) prediction (red vertical line) of the cleavage site (circle) of the SP by summation of CWT coefficients across all scales.
D. Evaluation and Performance Indices
The method proposed in this paper is evaluated according to the methodology described by Cuthbertson [16], which introduces the index Csc, defined as the absolute deviation between the actual cleavage site and the predicted value. The performance of the method is measured by the index QP, defined by Tusnády and Simon [17] as

$$Q_P = 100 \cdot \frac{N_C}{N_O} \cdot \frac{N_C}{N_P} \qquad (2)$$

where NC, NP and NO are the numbers of SPs that have been correctly predicted, that have been located, and that actually exist, respectively. Moreover, an index D, corresponding to the maximum acceptable value of Csc for a SP prediction to be considered correct, was introduced and taken into account during the evaluation process and the comparison of CWT-SPD with previous works.
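A minimal, hypothetical implementation of these indices (our variable names; NaN marks proteins for which no site was located) could look as follows.

```python
import numpy as np

def qp_index(true_sites, pred_sites, n_observed, D=5):
    """true_sites/pred_sites: aligned arrays of cleavage positions
    (NaN where no site was located); n_observed: number of SPs that
    actually exist (NO). Implements Eq. (2) with Csc <= D as 'correct'."""
    pred_mask = ~np.isnan(pred_sites)                # NP: predictions made
    csc = np.abs(pred_sites[pred_mask] - true_sites[pred_mask])
    n_correct = np.sum(csc <= D)                     # NC: within D residues
    n_predicted = pred_mask.sum()
    return 100.0 * (n_correct / n_observed) * (n_correct / n_predicted)
```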
Fig. 2 The distribution of Csc values for the CWT-SPD method applied to the entire dataset.

Fig. 3 Comparative analysis of the prediction power QP for the CWT-SPD, Signal-BLAST, Phobius, SignalP-HMM and SignalP-NN methods as a function of the D parameter.

III. RESULTS
The application of CWT-SPD to the transmembrane protein Q9Y5G1 (SWISS-PROT Release 46.0), as already described, is illustrated in Fig. 1. As shown in Fig. 1(d), the zero-crossing of the signal resulting from the summation of the CWT coefficients across all scales coincides with the true cleavage site (30th residue). The CWT-SPD was further applied to the entire set of single transmembrane proteins with a SP and initially evaluated by estimating the index Csc, as shown in Fig. 2, which illustrates the distribution of the Csc score across the proteins included in the set. From this figure, it is apparent that for most (90.8%) of the sequences of the dataset the prediction of the cleavage site was within a distance of five residues from the true position, thus limiting the estimation error to acceptable ranges. It is also clear that the distribution exhibits a peak (21.7%) corresponding to a deviation of just a single residue between the prediction and the actual cleavage site, whereas a percentage of 13.1% corresponds to an exact match between the true and predicted positions of the cleavage site. The performance of the CWT-SPD on the examined dataset was also measured by means of the index QP, as shown in Fig. 3. This figure depicts the resulting QP values for the CWT-SPD method (line with circle markers), for the parameter D ranging from 0 to 24. In the same figure, the resulting QP values for the methods Signal-BLAST [18], Phobius [19], SignalP-NN and SignalP-HMM [20] are also shown, with the parameter D taking values up to 37 (Signal-BLAST).

IV. DISCUSSION
The results of the proposed CWT-SPD method were compared to those derived from four efficient SP prediction methods reported in the literature. The four methods were applied to the aforementioned set of proteins using their web interfaces. An effort was made to include algorithms with a variety of methodologies, such as sequence alignment techniques (Signal-BLAST), HMMs (Phobius, SignalP-HMM), and neural networks (SignalP-NN). As illustrated in Fig. 3, the CWT-SPD method surpasses the Signal-BLAST and Phobius methods for D taking values greater than three and four, respectively. Furthermore, a comparable performance is observed between the CWT-SPD method and the SignalP-HMM and SignalP-NN methods for more flexible values of the D parameter. Moreover, as shown in Fig. 3, it is clear that assigning values greater than eight to the parameter D has little effect on the ranking of the methods. A comparative exhibition of the index QP for all the aforementioned methods, for indicative values of D, is presented in Table 1. From this table it is apparent that Signal-BLAST exhibits a rather poor performance compared to the rest of the methods, with QP values at least 3% lower than those of the three best-performing methods in all cases. As far as the Phobius method is concerned, its prediction power of 86.13% when D equals four is clearly comparable to the 84.10% of the CWT-SPD; however, the superiority of the latter is apparent for values of D greater than four. The SignalP-NN and SignalP-HMM methods exhibit a similar behavior, better than the rest of the methods (QP > 91%); however, CWT-SPD outperforms SignalP-HMM for values of D greater than seven and compares favorably with SignalP-NN (QP > 98.17%). This is even more significant considering the independence of the CWT-SPD method from training procedures and datasets.
Table 1 Comparison of performance by the QP index

Method         QP (D=4)   QP (D=5)   QP (D=8)
CWT-SPD          84.10      90.83      98.17
Signal-BLAST     79.57      84.49      93.70
Phobius          86.13      88.34      93.73
SignalP-NN       91.44      92.66      98.47
SignalP-HMM      91.53      93.69      98.00

D: maximum acceptable deviation between the actual cleavage site and the predicted value; QP: prediction power index over the entire dataset of single transmembrane proteins with a signal peptide, for three indicative values of D.
V. CONCLUSIONS
An effective scheme for predicting the position of the SP in the primary amino acid sequence of single transmembrane proteins has been proposed. The effectiveness of the CWT in detecting special features of a numerical sequence justifies its selection for the detection of SPs in an amino acid sequence. Ongoing work is under way towards the expansion of the method to different datasets, the improvement of the overall performance with respect to the distribution of the CWT coefficients across high and low scales, and the characterization of a protein as to whether it contains a SP or not.

REFERENCES
1. Von Heijne G (1990) The signal peptide. J Membr Biol 115:195-201
2. Nielsen H, Engelbrecht J, Brunak S, Von Heijne G (1997) Identification of prokaryotic and eukaryotic signal peptides and prediction of their cleavage sites. Protein Eng 10:1-6
3. Fariselli P, Finocchiaro G, Casadio R (2003) SPEPlip: the detection of signal peptide and lipoprotein cleavage sites. Bioinformatics 19:2498-2499 DOI 10.1093/bioinformatics/btg360
4. Nielsen H, Krogh A (1998) Prediction of signal peptides and signal anchors by a hidden Markov model. ISMB Proc., Int. Conf. Intell. Syst. Mol. Biol., Montreal, Canada, 1998, pp 122-130
5. Zhang Z, Wood W (2003) A profile hidden Markov model for signal peptides generated by HMMER. Bioinformatics 19:307-308
6. Käll L, Krogh A, Sonnhammer L (2004) A combined transmembrane topology and signal peptide prediction method. J Mol Biol 338:1027-1036 DOI 10.1016/j.jmb.2004.03.016
7. Von Heijne G (1986) A new method for predicting signal sequence cleavage sites. Nucleic Acids Res 14:4683-4690
8. Hiller K, Grote A, Scheer M, Münch R, Jahn D (2004) PrediSi: prediction of signal peptides and their cleavage positions. Nucleic Acids Res 32:375-379 DOI 10.1093/nar/gkh378
9. Frank K, Sippl M (2008) High-performance signal peptide prediction based on sequence alignment techniques. Bioinformatics 24:2172-2176 DOI 10.1093/bioinformatics/btn422
10. Chou KC (2001) Prediction of protein signal sequences and their cleavage sites. Proteins 42:136-139
11. Vert JP (2002) Support vector machine prediction of signal peptide cleavage site using a new class of kernels for strings. PSB Proc., Pac. Symp. Biocomput., Lihue, Hawaii, USA, 2002, pp 649-660
12. Kyte J, Doolittle R (1982) A simple method for displaying the hydropathic character of a protein. J Mol Biol 157:105-132
13. Boeckmann B, Bairoch A, Apweiler R, Blatter M, Estreicher A, Gasteiger E, Martin M, Michoud K, O'Donovan C, Phan I, Pilbout S, Schneider M (2003) The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003. Nucleic Acids Res 31:365-370
14. Addison P (2002) The illustrated wavelet transform handbook: Introductory theory and applications in science, engineering, medicine and finance. Institute of Physics (IOP) Publishing, Bristol
15. Daubechies I (1992) Ten lectures on wavelets. SIAM
16. Cuthbertson A, Doyle D, Sansom M (2005) Transmembrane helix prediction: a comparative evaluation and analysis. Protein Eng Des Sel 18:295-308
17. Tusnády G, Simon I (1998) Principles governing amino acid composition of integral membrane proteins: application to topology prediction. J Mol Biol 283:489-506
18. Signal-BLAST at http://sigpep.services.came.sbg.ac.at/signalblast.html
19. Phobius at http://phobius.sbc.su.se/
20. SignalP 3.0 Server at http://www.cbs.dtu.dk/services/SignalP
Author: I.A. Avramidou, I.K. Kitsas and L.J. Hadjileontiadis
Institute: Dept. of Electrical & Computer Engineering, Aristotle University of Thessaloniki
Street: University Campus, GR-54124
City: Thessaloniki
Country: Greece
Email:
[email protected], {ikitsas, leontios}@auth.gr
Comparison of AM-FM Features with Standard Features for the Classification of Surface Electromyographic Signals
C.I. Christodoulou1, P.A. Kaplanis1, V. Murray2, M.S. Pattichis2, C.S. Pattichis1
1 Department of Computer Science, University of Cyprus, Nicosia, Cyprus
2 Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, USA
Abstract— In this work, AM-FM features extracted from surface electromyographic (SEMG) signals were compared with standard time- and frequency-domain features for the classification of neuromuscular disorders at different force levels. SEMG signals were recorded from a total of 40 subjects, 20 normal and 20 abnormal cases, at 10%, 30%, 50%, 70% and 100% of maximum voluntary contraction (MVC), from the biceps brachii muscle. For the classification, three classifiers were used: (i) the statistical K-nearest neighbour (KNN), (ii) the neural self-organizing map (SOM) and (iii) the support vector machine (SVM). For all classifiers, the leave-one-out methodology was implemented for the classification of the SEMG signals as normal or pathogenic. The test results reached a classification success rate of 77% for the AM-FM features, whereas the standard features failed to provide any meaningful results on the given dataset. Keywords— SEMG, AM-FM, classification
I. INTRODUCTION
The electromyographic (EMG) examination provides important information for the assessment of neuromuscular disorders and is generally carried out using needle electrodes. Surface electrodes and the acquisition of surface EMG signals provide a non-invasive alternative to needle EMG for the detection of neuromuscular disorders. At present, a surface-detected signal is preferred only for obtaining "global" information about the time and/or intensity of superficial muscle activation [1]. Time and frequency features have been extensively used in EMG signal classification [2]-[6]. Using needle EMG, Abel et al. [3] used turns analysis and small signal segments to obtain a 75% classification rate for 45 cases (12 normal, 18 myopathy, and 15 neuropathy patients). However, the authors concluded that the classification methods used did not offer better results than interference pattern analysis and could not match the diagnostic success of an experienced clinician. Christodoulou et al. [4] developed a modular neural network system where multiple features extracted from needle EMG signals were fed into multiple classifiers, yielding an 87.5% classification rate in 38 cases (12 normal, 13 myopathy, and 13 motor neuron disease cases).
Abou-Chadi et al. [5] compared three neural network systems for surface EMG classification. Unsupervised techniques gave 80% correct classification when tested on 10 cases (5 myopathy and 5 normal) selected from a pool of 28 cases; recordings were performed for 5 seconds at 50% MVC. Abou-Chadi et al. [5] reached the conclusion that, when SEMG is properly processed, it may provide the physician with a diagnostic assisting tool. Also for surface EMG, Kaplanis [6] reached a correct classification score of 82.9% on 111 cases (91 normal and 20 abnormal cases). One may comment that this result may be misleading due to the higher number of control subjects compared with pathogenic cases; however, normalisation based on the number of subjects in each group was performed during the classification process. For the classification of surface EMG signals, we presented an earlier study on the use of Amplitude-Modulation Frequency-Modulation (AM-FM) features in [7]. In the current paper, we compare the performance of the AM-FM features against time and frequency features. We provide a description of the data acquisition process in Section II. In Section III we describe the extraction algorithms for the standard time and frequency features and the AM-FM features. In Section IV, we describe the three different classifiers used: (i) the statistical K-nearest neighbour (KNN) classifier, (ii) the neural self-organizing map (SOM) and (iii) the support vector machine (SVM). We give the results in Section V and provide concluding remarks in Section VI.

II. MATERIAL AND DATA ACQUISITION
Surface EMG recordings were acquired from 20 control subjects (NOR) and 20 neuromuscular cases (11 myopathy and 9 neuropathy cases). Referred patients were first examined and diagnosed by their physician and were divided according to the general type of neuromuscular disorder (myopathy or neuropathy). The data were collected at a special Electromyography / Electroencephalography / Evoked Potential (EMG/EEG/EP) lab at the Department of Clinical Neurophysiology at the Cyprus Institute of Neurology and Genetics, Nicosia, Cyprus [6]. The Nicolet Viking IV electromyography unit, with a two-channel amplifier, was
fully electrically isolated to IEC 601-1 and BSS 5724, Part 1 Type BF. The input impedance of the system, Zin, was stated to be > 1000 MΩ. The low- and high-frequency limits for recording were set at 20 Hz and 500 Hz, respectively. A calibrated force measurement system, with a total weight of 40 kg, was placed at the foot end of a couch on which the subjects lay down. The weights were lifted via a strap placed at the subject's wrist and connected to the system through a force transducer, which was connected directly to a calibration circuit. The subject was required to pull at maximum voluntary contraction (MVC) three times, with an interval of two minutes in between. A note of the MVC was made on the oscilloscope with red tape. Recordings were made at five different force levels, i.e. at 10%, 30%, 50%, 70% and 100% of MVC, from the biceps brachii muscle under isometric voluntary contraction (IVC).

III. FEATURE EXTRACTION
A. Standard Features
For each surface EMG (SEMG) epoch, at each force level, the following parameters were measured in the time domain:
1. Turns per second (t/s): number of slope reversals separated from the previous and the following turn by an amplitude difference greater than 20 µV.
2. Zero crossings per second (z/c): number of sign reversals exceeding a threshold of 20 µV.
For each 512 ms epoch, the average power spectrum (PS) curve was computed by taking the FFT of 512 points, with 25% overlapping segments. The following parameters were computed from the power spectrum curve:
3. Median frequency: frequency dividing the area under the PS curve into two equal parts.
4. Mean frequency.
5. Maximum frequency.
6. Total power: calculated as the total area under the PS curve, with values reported in nV²/Hz. Logarithmic units were taken due to the large spread of recorded values.
7. Maximum power.
The above seven features were normalized before use by subtracting their mean value and dividing by their standard deviation.
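Two of these features are sketched below for illustration, assuming a Welch-style averaged periodogram with a 512-point FFT and 25% overlap; the function and parameter names are ours, not from the paper.

```python
import numpy as np
from scipy.signal import welch

def zero_crossings_per_s(x, fs, thresh_uV=20.0):
    """Sign reversals whose amplitude change exceeds the 20 uV threshold."""
    idx = np.where(np.diff(np.sign(x)) != 0)[0]
    valid = np.abs(x[idx + 1] - x[idx]) > thresh_uV
    return valid.sum() * fs / len(x)

def spectral_features(x, fs):
    """Median/mean/max frequency and power descriptors from the PS curve."""
    f, pxx = welch(x, fs=fs, nperseg=512, noverlap=128)  # 25% overlap
    total = np.trapz(pxx, f)
    cum = np.cumsum(pxx) * (f[1] - f[0])         # approximate cumulative area
    return {
        'median_f': f[np.searchsorted(cum, total / 2)],
        'mean_f': np.trapz(f * pxx, f) / total,
        'max_f': f[np.argmax(pxx)],              # frequency of the PS peak
        'total_power': np.log10(total),          # logarithmic units
        'max_power': pxx.max(),
    }
```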
B. AM-FM Features
Amplitude-modulation frequency-modulation (AM-FM) models can be used to characterize non-stationary signal behavior [8], [9]. Using a multi-scale filter bank, for any given signal f(k), we compute a one-dimensional, single-scale analytic signal, as given by [8]:

$$f_{AS}(k) = f(k) + jH\{f(k)\} \qquad (1)$$

where $H\{\cdot\}$ denotes the Hilbert transform operator. We estimate the instantaneous amplitude (IA), the instantaneous phase (IP) and the instantaneous frequency (IF) of the signal using

$$a(k) = \left| f_{AS}(k) \right| \qquad (2)$$

$$\varphi(k) = \arctan\frac{\operatorname{imag}(f_{AS}(k))}{\operatorname{real}(f_{AS}(k))} \qquad (3)$$

$$\varphi_k(k) = \frac{\partial\varphi(k)}{\partial k} \cong \frac{1}{n}\arccos\!\left(\frac{f_{AS}(k+n) + f_{AS}(k-n)}{2\, f_{AS}(k)}\right) \qquad (4)$$

where in (4) n is a variable displacement from 1 to 4, chosen as the value that provides the minimum condition number for the arccos function. From the generated estimates, histograms with 32 equal-width bins were computed and used as input feature sets for classification. The histograms were further normalized by dividing by the number of SEMG signal points, in order to alleviate any bias due to different signal lengths. Figure 1 shows a sample SEMG signal from a normal subject and its corresponding AM-FM histograms (only 1000 sample points are shown for visibility).
Fig. 1. Sample of an SEMG signal from a normal subject at the 100% force level and its corresponding AM-FM histograms. The first 32 samples show the histogram of the instantaneous amplitude, followed by the histograms of the instantaneous phase and the instantaneous frequency.
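A hedged sketch of the AM-FM feature extraction of Eqs. (1)-(4) is given below; it uses SciPy's Hilbert transform, fixes the displacement n = 1 instead of the adaptive choice described above, and omits the multi-scale filter bank of [8].

```python
import numpy as np
from scipy.signal import hilbert

def am_fm_features(x, n_bins=32):
    f_as = hilbert(x)                  # analytic signal, Eq. (1)
    ia = np.abs(f_as)                  # instantaneous amplitude, Eq. (2)
    ip = np.angle(f_as)                # instantaneous phase, Eq. (3)
    # Instantaneous frequency via the discrete approximation of Eq. (4),
    # with a fixed displacement n = 1 (simplification on our part).
    ratio = np.real((f_as[2:] + f_as[:-2]) / (2.0 * f_as[1:-1]))
    inst_f = np.arccos(np.clip(ratio, -1.0, 1.0))
    feats = []
    for s in (ia, ip, inst_f):
        hist, _ = np.histogram(s, bins=n_bins)
        feats.append(hist / len(x))    # normalize by signal length
    return np.concatenate(feats)       # 3 x 32 = 96-bin feature vector
```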
IV. CLASSIFICATION
The seven standard features and the three AM-FM histograms (i.e., 96 bins in total) were used as two different feature vectors and were input to three classifiers. The leave-one-out methodology was applied: for each input pattern to be classified, all the remaining patterns were used as the training set. The average of all classification scores was the final score. This made the classification procedure independent of bootstrap sets and the results more robust and reliable.
A. The KNN Classifier
The statistical k-nearest neighbor (KNN) classifier [10] was used for the classification of the SEMG signals using the leave-one-out methodology. In the KNN algorithm, in order to classify a new input pattern, its k nearest neighbors from the training set are identified, and the new pattern is classified to the most frequent class among its neighbors, based on a similarity measure that is usually the Euclidean distance. In this work, the KNN system was implemented for several values of k (k = 1, 3, 5, 7, 9, 11, 13 and 15) and was tested using the two different feature vectors at the different force levels as input.
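A compact illustration of this leave-one-out KNN evaluation, using scikit-learn (the paper does not state which implementation was used):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def knn_loo_score(X, y, k=11):
    """X: feature vectors, y: class labels; returns the leave-one-out
    classification accuracy for a k-nearest-neighbor classifier."""
    clf = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```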
B. The SVM Classifier
The support vector machine (SVM) was also used to develop classification models for the problem. The method first applies a nonlinear mapping φ(·) to the initial data set and then identifies a hyperplane able to separate the two categories of data. Details about the implementation of the SVM algorithm used can be found in [12]. The SVM network was implemented using Gaussian radial basis function (RBF) kernels; this was decided because the remaining kernel functions could not achieve satisfactory results.
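An equivalent sketch for the RBF-kernel SVM; scikit-learn's SVC is used here for illustration, whereas the paper relied on the implementation described in [12].

```python
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def svm_loo_score(X, y):
    """Leave-one-out accuracy of an SVM with a Gaussian RBF kernel;
    C and gamma here are defaults, not values reported in the paper."""
    clf = SVC(kernel='rbf', gamma='scale', C=1.0)
    return cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
```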
C. The SOM Classifier
The SOM was chosen because it is an unsupervised learning algorithm in which the input patterns are freely distributed over the output node matrix [11]. The weights are adapted without supervision in such a way that the density distribution of the input data is preserved and represented on the output nodes. This mapping of similar input patterns to output nodes that are close to each other represents a discretisation of the input space, allowing a visualization of the distribution of the input data. The output nodes are usually ordered in a two-dimensional grid and, at the end of the training phase, each output node is labeled with the class of the majority of the training input patterns assigned to it. In the evaluation phase, a new input pattern is assigned to the winning output node, i.e. the node whose weight vector is closest to the new input vector. In order to classify the new input pattern, the labels of the output nodes in an R×R neighborhood window centered at the winning node are considered. The number of input patterns in the neighborhood window for the two classes m = {1, 2} (1 = normal, 2 = pathogenic) was computed as:
$$SN_m = \sum_{i=1}^{L} W_i N_{mi} \qquad (5)$$
where L is the number of output nodes in the R×R neighborhood window, with L = R² (e.g., L = 9 for a 3×3 window), and Nmi is the number of training patterns of class m assigned to output node i. Wi = 1/(2di) is a weighting factor based on the distance di of output node i from the winning node (with W = 1 for the winning node itself); it gives the output nodes near the winning node greater weight than those farther away (e.g., in a 3×3 window, Wi = 0.5 for the four nodes perpendicular to the winning node and Wi = 0.3536 for the four diagonally located nodes). The evaluation input pattern was classified to the class m with the greatest SNm, i.e., as normal or pathogenic.
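The weighted neighborhood vote of Eq. (5) is compact enough to sketch directly. The following illustrative Python assumes the SOM has already been trained and that per-node training-pattern counts are available; all names are ours.

```python
import numpy as np

def som_vote(win_rc, node_counts, R=3):
    """win_rc: (row, col) of the winning node; node_counts: dict mapping
    (row, col) -> [n_normal, n_pathogenic] training patterns per node."""
    half = R // 2
    scores = np.zeros(2)
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            node = (win_rc[0] + dr, win_rc[1] + dc)
            if node not in node_counts:
                continue                      # window falls off the map edge
            d = np.hypot(dr, dc)
            w = 1.0 if d == 0 else 1.0 / (2.0 * d)  # W = 1, 0.5, 0.3536, ...
            scores += w * np.asarray(node_counts[node])
    return 'normal' if scores[0] >= scores[1] else 'pathogenic'
```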
D. Combining
From each subject, five feature vectors were calculated, one for each force level, and input to the classifiers. The five classification outputs per subject were further combined using majority voting, i.e. the subject was assigned to the class to which the majority of the five individual SEMG signals (one per force level) were assigned. This was done in order to obtain a final, more reliable estimate of the classification result since, as shown in [4], modular classification systems enhance the diagnostic performance of the individual classifiers, making the whole system more robust and reliable.

V. RESULTS
Surface EMG recordings from 20 control subjects (NOR) and 20 neuromuscular subjects (11 myopathy (MYO) and 9 neuropathy (NEURO)) were acquired at 10%, 30%, 50%, 70% and 100% of maximum voluntary contraction (MVC), from the biceps brachii muscle. For each SEMG recording, two different feature vectors were extracted as described above: (i) the seven standard features from the time and frequency domains and (ii) the AM-FM features, i.e. instantaneous amplitude (IA), instantaneous phase (IP) and instantaneous frequency (IF). The IA, IP and IF histograms were normalized by the signal length, in order to alleviate any bias due to different signal lengths, and were used as input to the three classifiers. Table 1 tabulates the AM-FM correct classification success rates for the three classifiers, KNN, SOM and SVM, and for the five force levels. In addition, the five force-level scores per subject were combined with majority voting, and these results are also given in Table 1. For the KNN classifier, the values provided in Table 1 are for k = 11, which gave the best results, and for the SOM for a 7×7 map matrix and a 3×3 evaluation neighborhood window, for the same reason.
Table 1 AM-FM features: correct classification rate (%) per classifier, per force level, and with the five force-level scores combined by majority voting.

Force Level   KNN    SOM    SVM    Average
10%           52.5   55.0   67.5   58.3
30%           50.0   60.0   75.0   61.7
50%           60.0   57.5   67.5   61.7
70%           52.5   55.0   65.0   57.5
100%          62.5   62.5   75.0   66.7
Average       55.5   58.0   70.0   61.2
Combined      57.5   60.0   77.5   65.0
Table 2 Standard features: correct classification rate (%) per classifier, per force level, and with the five force-level scores combined by majority voting.

Force Level   KNN    SOM    SVM    Average
10%           40.0   38.5   37.5   38.7
30%           40.0   47.5   42.5   43.3
50%           42.5   45.5   47.5   45.2
70%           32.5   46.5   20.0   33.0
100%          25.0   48.5   17.5   30.3
Average       36.0   45.3   33.0   38.1
Combined      25.0   35.0   22.5   27.5
The best classifier was by far the SVM, with an average success rate of 70.0%, compared to 58.0% for the SOM and 55.5% for the KNN classifier. The best force level was 100% MVC, with an average success rate of 66.7% over the three classifiers. The best individual result was 75%, obtained with the SVM classifier at 100% and 30% MVC. These results need further investigation and interpretation. Combining the five force-level scores per subject with majority voting improved the average success rate, raising it in the case of the SVM classifier from 70.0% to 77.5%. Combining all the outputs from all the classifiers gave 62.5%. Of the three AM-FM features, the best was the IF feature, with an average success rate of 68.0%, followed by the IA with 55.0% and the IP with 53.5%. The same experiment was repeated using the seven time- and frequency-domain features described in Section III. The standard features failed to provide any meaningful results on the given dataset, reaching an average correct classification rate of 38.1%, as tabulated in Table 2. These results are well below the 50% threshold for a two-class problem, which shows the need for careful extraction and selection of features for this kind of problem and the superiority of the AM-FM algorithm. Feature selection among the standard features did not significantly change the above trend.
VI. CONCLUDING REMARKS
In this work it was shown that AM-FM approaches provide new feature sets which can be used successfully for the classification of SEMG signals with a high success rate, comparable to results obtained using needle EMG data [3], [4]. The AM-FM features significantly outperformed the standard time- and frequency-domain features, showing that these new features can provide the tool for successful SEMG classification. The results also show that SEMG can be used as a non-invasive alternative to needle EMG for the assessment of neuromuscular disorders. Additionally, in future work, cumulative distribution functions (CDF) and probability density functions (PDF) can be extracted from the AM-FM representations and used for classification instead of the histograms, for improved classification performance. These, along with efficient classifiers such as the SVM, can provide the tools for designing a successful SEMG diagnostic system.

REFERENCES
[1] Merletti R, De Luca CJ, "New techniques in surface electromyography," in Computer aided electromyography and expert systems, edited by Desmedt JE, Elsevier, vol. 2, Amsterdam-New York-Oxford, Chapter 9, section 3, pp. 115-124, 1989.
[2] Farina D, Fosci M, Merletti R, "Motor unit recruitment strategies investigated by surface EMG variables," Journal of Applied Physiology, 92, pp. 235-247, 2002.
[3] Abel EW, Zacharia PC, Forster A, Farrow TL, "Neural network analysis of the EMG interference pattern," Medical Engineering and Physics, 1(1): pp. 12-17, 1995.
[4] Christodoulou CI, Pattichis CS, Fincham WF, "A Modular Neural Network Decision Support System in EMG Diagnosis," Journal of Intelligent Systems, Special issue on Computational Intelligent Diagnostic Systems in Medicine, ed. by C.N. Schizas, Volume 8, Nos. 1-2, pp. 99-143, 1998.
[5] Abou-Chadi FEZ, Nashar A, Saad M, "Automatic analysis and classification of surface electromyography," Frontiers in Medical Biological Engineering, 1(1): pp. 13-19, 2001.
[6] Kaplanis PA, "Surface Electromyography for the Assessment of Neuromuscular Disorders," Ph.D. Thesis, Kings College, University of London, 2004.
[7] Christodoulou CI, Kaplanis PA, Murray V, Pattichis MS, Pattichis CS, "Classification of Surface Electromyographic Signals using AM-FM Features," CD-ROM Proceedings of the 9th International Conference on Information Technology and Applications in Biomedicine, ITAB 2009, Larnaca, Cyprus, Nov. 5-7, 2009.
[8] Murray V, Rodriguez P, Pattichis MS, "Robust Multiscale AM-FM Demodulation of Digital Images," IEEE International Conference on Image Processing, vol. 1, pp. 465-468, Oct. 2007.
[9] Pattichis MS, Bovik AC, "Analyzing image structure by multidimensional frequency modulation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 5, pp. 753-766, 2000.
[10] Tou JT, Gonzalez RC, "Pattern Recognition Principles," Addison-Wesley Publishing Company, Inc., 1974.
[11] Kohonen T, "The Self-Organizing Map," Proceedings of the IEEE, Vol. 78, No. 9, pp. 1464-1480, Sept. 1990.
[12] Joachims T, "Making large-scale SVM learning practical," In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods - support vector learning, MIT, Cambridge, Chapter 11, 1999.
Studying Brain Visuo-Tactile Integration through Cross-Spectral Analysis of Human MEG Recordings
S. Erla1,2, C. Papadelis1, L. Faes2, C. Braun1, and G. Nollo2
1 Laboratory of Functional Neuroimaging, Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
2 Biophysics and Biosignals Lab, Department of Physics (BIOtech), University of Trento, Italy
Abstract— An important aim in cognitive neuroscience is to identify the networks connecting different brain areas and their role in executing complex tasks. In this study, visuo-tactile tasks were employed to assess the functional correlation underlying the cooperation between visual and tactile regions. MEG data were recorded from eight healthy subjects while performing a visual, a tactile, and a visuo-tactile task. To define regions of interest (ROIs), event-related fields (ERFs) were estimated from MEG data related to visual and tactile areas. The ten channels with the highest increase in ERF variance, moving from rest to task, were selected. Cross-spectral analysis was then performed to assess potential changes in the activity of the involved regions and to quantify the coupling between visual and tactile ROIs. A significant decrease (p<0.01) in the power spectrum was observed while performing the visuo-tactile task compared to rest, in both the alpha and beta bands, reflecting the activation of both visual and tactile areas during the execution of the corresponding tasks. Compared to rest, the coherence between visual and tactile ROIs increased during the visuo-tactile task. These observations seem to support the binding theory, which assumes that the integration of spatially distributed information into a coherent percept is based on transiently formed synchronized functional networks. Keywords— MEG, ERFs, Power, Coherence, Visuo-Tactile.
I. INTRODUCTION
In everyday life, it is common to perceive visual and tactile stimuli at the same time, or to perform tasks requiring the integration of visual and haptic information. One of the main goals of neuroscience is to identify the brain networks connecting different brain areas and their role in executing complex tasks. Recent advances have enhanced our understanding of the multisensory integration process, which has been shown to take place in the midbrain, thalamus and cortex [1]. The combination of information from different sensory cues in these areas often seems to enhance the neural activity in comparison to the activity caused by the respective pure stimuli. This is also expected for the visuo-tactile integration process. Many studies have already investigated this topic, focusing mostly on tactile texture perception [2], since it is crucial for both sighted and blind subjects [3]. However, a clear neurophysiological explanation of the interaction between the visual and tactile sensory systems is still lacking. The present study was undertaken in order to explore the large-scale integration of visual and tactile processes in the cortex by analyzing the coupling relations of cortical activity recorded simultaneously from visual and somatosensory areas.
II. MATERIALS AND METHODS
A. Experimental Protocol and Data Acquisition
Neurophysiological brain signals were recorded from eight healthy subjects while they performed three different tasks. During the first task, namely the visual block (V), participants were asked to fixate on the center of a white screen showing a geometric pattern consisting of many black dots (Fig. 1a). The visual stimuli resembled letters of the Braille code developed for helping blind people. Subjects had to identify whether the orthogonal segment was turned to the left or to the right side. During the tactile session (T), subjects had to touch a tablet carrying the same geometric pattern as the one that had appeared on the screen during the previous session. No visual stimuli were provided at this time. The subjects' task was again to identify the direction of the pattern's orthogonal segment (Fig. 1a). During the visuo-tactile block (VT), both visual and tactile stimuli were provided, and subjects had to identify whether the two patterns were identical or not (Fig. 1a). MEG signals were recorded (VSM system, 275 gradiometers) in two one-second time intervals (w0 and w1), starting one second before the stimulus, as described in Fig. 1b. Each block was repeated 40 times (trials).
B. Preprocessing and ROI Selection
The acquired signals were FFT band-pass filtered (2-45 Hz), downsampled from 586 to 293 Hz, and normalized (zero mean). In order to focus on the relevant information, a reduced number of scalp recordings was considered for further analysis. Two wide regions (Fig. 2c), corresponding to the
somatosensory and the visual cortex, were identified. Then, event-related fields (ERFs) were estimated from the MEG data related to these two areas. For each subject, the rest ERF variance was calculated on the whole w0 window. In contrast, the task ERF variance was computed over the typical time intervals of visual and somatosensory ERFs (70-150 ms and 25-70 ms after the stimulus, respectively). The ten channels with the highest increase in ERF variance, moving from rest to task, were selected for both the somatosensory and visual regions of interest (ROIs).

Fig. 1 (a) Schematic diagram of the experimental protocol; three block sessions were performed in total. (b) Time course of data acquisition.
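A short sketch of this ROI-selection step (array shapes and the time-window indexing are assumptions on our part):

```python
import numpy as np

def select_roi(rest_epochs, task_epochs, task_win, n_keep=10):
    """rest_epochs/task_epochs: trials x channels x samples arrays;
    task_win: sample indices of the typical ERF interval (e.g. 70-150 ms
    for visual areas). Returns indices of the n_keep ROI channels."""
    erf_rest = rest_epochs.mean(axis=0)          # ERF = average over trials
    erf_task = task_epochs.mean(axis=0)
    var_rest = erf_rest.var(axis=1)              # variance over the w0 window
    var_task = erf_task[:, task_win].var(axis=1)
    increase = var_task - var_rest               # rest-to-task variance change
    return np.argsort(increase)[-n_keep:]
```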
Fig. 2 Event-related fields (ERFs) after the stimulus, for visual (a) and somatosensory (b) brain areas in one representative subject; (c) maps of the considered visual (blue) and somatosensory (red) wide regions and of the ten channels with the highest increase in signal variance moving from rest to task (ROIs) for this participant.

C. Cross-Spectral Analysis
For each subject and experimental condition, the preprocessed signals belonging to the two selected ROIs were collected into the M×N data matrix Y = {ym(n)}, n = 1,...,N, m = 1,...,M (M = 20 signals, N = 293 samples) and then described by a multivariate autoregressive (MVAR) model:

$$\mathbf{Y}(n) = \sum_{k=1}^{p} \mathbf{A}(k)\,\mathbf{Y}(n-k) + \mathbf{U}(n) \qquad (1)$$

where p is the model order, A(k) are M×M matrices describing the linear interaction at lag k from yj(n-k) to yi(n), and U(n) = [u1(n),...,uM(n)]^T is a vector of zero-mean uncorrelated white noises with diagonal covariance matrix Σ. Estimation of the model coefficients, with fixed order p = 9, was performed through standard vector least-squares identification. Multivariate spectral analysis was performed by transforming (1) into the frequency domain to yield Y(f) = H(f)U(f), where Y(f) and U(f) are the Fourier transforms of Y(n) and U(n), and the transfer matrix H(f) was obtained as the inverse of the frequency-domain coefficient matrix $A(f) = I - \sum_{k=1}^{p} A(k)\, e^{-i 2\pi f k T}$. Then, the spectral matrix was obtained as $S(f) = H(f)\,\Sigma\,H^{H}(f)$. The diagonal elements of S(f), Sii(f), are the power spectral densities of the modeled signals yi. Multivariate spectral decomposition [4] was applied to each spectrum Sii to find the partial spectra related to the poles of the process with frequency inside the alpha (8-13 Hz) and beta (13-30 Hz) bands; the area underlying each partial spectrum was then taken as a measure of the power within the two bands, Pα and Pβ. The off-diagonal elements of S(f) were used to measure, in the frequency domain, the linear coupling between each pair of signals yi and yj through the squared coherence function:

$$C_{ij}^{2}(f) = \frac{\left| S_{ij}(f) \right|^{2}}{S_{ii}(f)\, S_{jj}(f)} \qquad (2)$$
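The cross-spectral pipeline of Eqs. (1)-(2) can be sketched as follows; the plain least-squares MVAR identification is a simplified stand-in for the vector identification used in the paper, T is taken as 1/fs, and the spectral decomposition step is omitted.

```python
import numpy as np

def mvar_fit(Y, p):
    """Y: M x N data matrix; returns A (p x M x M) and the residual covariance."""
    M, N = Y.shape
    X = np.vstack([Y[:, p - k:N - k] for k in range(1, p + 1)])  # lagged data
    target = Y[:, p:]
    coef, *_ = np.linalg.lstsq(X.T, target.T, rcond=None)        # (M*p) x M
    A = coef.T.reshape(M, p, M).transpose(1, 0, 2)               # A(k)[i, j]
    U = target - coef.T @ X                                      # residuals
    return A, np.cov(U)   # full covariance; the model assumes it is diagonal

def coherence(Y, p=9, nf=128, fs=293.0):
    """Squared coherence C_ij^2(f) of Eq. (2) from the MVAR spectral matrix."""
    M, _ = Y.shape
    A, sigma = mvar_fit(Y, p)
    freqs = np.linspace(0.0, fs / 2.0, nf)
    C2 = np.empty((nf, M, M))
    for i, f in enumerate(freqs):
        Af = np.eye(M, dtype=complex)
        for k in range(1, p + 1):
            Af -= A[k - 1] * np.exp(-2j * np.pi * f * k / fs)
        H = np.linalg.inv(Af)                    # transfer matrix H(f)
        S = H @ sigma @ H.conj().T               # spectral matrix S(f)
        denom = np.real(np.outer(np.diag(S), np.diag(S)))
        C2[i] = np.abs(S) ** 2 / denom
    return freqs, C2
```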
D. Statistical Analysis
Forty values (corresponding to the 40 trials) of alpha and beta power were obtained for each sensor of the two selected ROIs during rest, V, T and VT tasks in each subject. Statistical analysis was performed separately for each subject according to the following scheme. Two-way ANOVA was performed to assess the significance of differences due to scalp position (somatosensory or visual cortex) and to stimulus type (no stimulus, V, T, VT). When the ANOVA test yielded a significant p value (p<0.05) for both considered factors, post-hoc multiple paired Wilcoxon tests, with Holm correction for multiple comparisons [5], were performed to assess differences between pairs of power values. The number of significant changes across subjects was considered to analyze population tendencies. In a second step, for each subject, 40 coherence values (one for each trial) were calculated by averaging the estimated coherence within the alpha and beta frequency bands (8-13 Hz and 13-30 Hz, respectively). For each participant, trial and frequency band, the average coherence value between each visual channel and all the somatosensory
channels was considered as representative of the coupling relation between the visual channel and the whole somatosensory ROI. Repeating this computation for all ten visual channels, ten measures of coherence between the two areas were obtained for each trial. In each subject, multiple Wilcoxon tests were performed to assess the significance of differences between the calculated coherence values in the rest, V, T and VT conditions. Again, the number of significant changes across subjects was considered as an index of population tendencies.
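For illustration, the Holm-corrected paired Wilcoxon testing could be sketched as below, assuming SciPy is available; this is not the authors' code.

```python
import numpy as np
from scipy.stats import wilcoxon

def holm_wilcoxon(pairs, alpha=0.05):
    """pairs: list of (x, y) paired samples; returns Holm-adjusted
    rejection decisions and the raw p-values."""
    pvals = np.array([wilcoxon(x, y).pvalue for x, y in pairs])
    order = np.argsort(pvals)                    # test smallest p first
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] * (len(pvals) - rank) <= alpha:
            reject[idx] = True
        else:
            break                                # Holm is sequentially rejective
    return reject, pvals
```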
III. RESULTS
In Fig. 3, the results of the cross-spectral analysis performed for one representative subject are shown. In the first column, power spectral density plots are presented for one channel located above the visual cortex during the execution of the V, T, and VT tasks. In the second column, the same is shown for one channel representing the somatosensory cortex. With respect to rest, during the V task the power content was lower in both the alpha and beta frequency bands in the occipital area, but it did not show evident differences in the somatosensory area. During the T task, the rhythmic oscillations in the alpha and beta bands showed an amplitude decrease for the somatosensory channels. Finally, during the VT task, alpha and beta power decreased in both the occipital and somatosensory areas, thus suggesting the activation of both brain areas while executing the combined task. Coherence spectra, estimated between the two channels belonging to the occipital and somatosensory areas, are shown in the last column of Fig. 3. Coherence was lower during the V task in both frequency bands, whereas it was unchanged in the alpha band and increased in the beta band during the T task. Finally, coherence was higher in both frequency bands during the combined task. These trends suggest that only when the two tasks were performed simultaneously could an increase in coupling between the two areas be observed.

Fig. 3 Cross-spectral analysis for a representative subject, for one channel located above the visual cortex (occ, first column) and one channel representing the somatosensory cortex (ss, second column), together with the coherence spectra between the two channels (coh, third column). All these results are shown for the V (first row), T (second row) and VT (third row) tasks, during rest (black) and while performing the task (red).

In Fig. 4, values averaged over trials for the same subject are shown. The power content in the alpha (Fig. 4a) and beta (Fig. 4b) bands was significantly lower during task execution in comparison with rest, indicating brain activation during V in the occipital cortex, during T in the somatosensory area and during VT in both areas. In the beta band, a significant power decrease was also noted in the occipital area during T. In Figs. 4c and 4d, the coherence results are presented. The coherence increased significantly in both frequency bands during VT, and only in the beta band during T. Figs. 5a and 5b show the percentage of subjects in which a significant change in the power spectral values was revealed in the alpha and beta frequency bands, respectively. In the alpha band, a significant power decrease was found in more than 50% of the subjects during V in the occipital cortex, during T in the somatosensory area and during VT in both considered brain regions. A significant decrease was present in a larger number of subjects, in the same brain locations and stimulation conditions, in the beta band. Fig. 5c evidences a larger percentage of subjects in which the coherence increased during VT, but not during V and T, in the alpha band. The same result was obtained in the beta band. Moreover, in this frequency band, more than 50% of the subjects showed increased coherence also during T.
IV. DISCUSSION
The accordance in time and space of different sensory inputs is nowadays considered to promote multisensory integration [6]. In this study, this process was evoked by a task in which visual and tactile stimuli were simultaneously provided to healthy subjects. Two control conditions were also considered (a pure-visual and a pure-tactile task) in order to distinguish the processes which are involved in the multisensory integration from those which are not. Cross-spectral analysis was performed on the two sensory ROIs in order to (i) evidence changes in the rhythmic oscillations of the involved regions due to activation and (ii) quantify the coupling between visual and tactile ROIs.
Fig. 4 Mean (± SE) alpha power (a), beta power (b), alpha coherence (c) and beta coherence (d) in one representative subject, for the visual cortex (occ) and somatosensory cortex (ss), for the V, T and VT tasks, during rest (black) and task (grey). * significant (multiple testing).

Fig. 5 Population significance. Percentage of subjects for which a decrease (black) or increase (grey) in alpha (a) or beta (b) power was significant, as a function of the performed task (V, T, VT) and brain area (occ, ss); (c) percentage of subjects for which an increase in coherence was significant, as a function of the performed task (V, T, VT).

While executing the pure-visual task, subjects showed decreased power in both the alpha and beta frequency bands in the occipital area, but not in the somatosensory area. On the contrary, the pure-tactile task induced activation (power decrease) only in the somatosensory region. Finally, the execution of the combined VT task suggested activation of both brain areas. These results indicated the activation of neuronal networks engaging different sensory regions during task performance and confirmed the ability of the considered spectral estimator to reveal it.

Coherence results showed no significant changes during the pure-visual and pure-tactile tasks. On the contrary, coherence increased significantly in most of the subjects during the combined task, in both frequency bands. This result suggests a tendency, at least in our small population sample, toward enhanced coupling between the visual and somatosensory cortex during tasks involving both areas at the same time.

V. CONCLUSIONS
The proposed visuo-tactile task seems able to elicit multisensory integration processes that are quantifiable by cross-spectral analysis of MEG recordings in healthy volunteers. These findings bring evidence supporting the binding theory, which describes the integration of spatially distributed information into a coherent percept as due to transiently synchronized functional networks [7,8,9].

ACKNOWLEDGMENT
The authors would like to kindly acknowledge Carola Arfeller and Svenja Borchers, researchers of the MEG Center, Eberhard-Karls-University of Tübingen, Germany, for their helpful contribution to data collection.

REFERENCES
1. Stein BE, Meredith M (1993) The Merging of the Senses. MIT Press, Cambridge, MA
2. Lederman SJ, Thome G, Jones B (1986) Perception of texture by vision and touch: multidimensionality and intersensory integration. J Exp Psychol: Hum Percept Performance 12: 169-180
3. Heller MA (1989) Texture perception in sighted and blind observers. Percept Psychophys 45: 49-54
4. Baselli G, Porta A, Rimoldi O et al (1997) Spectral Decomposition in Multichannel Recordings Based on Multivariate Parametric Identification. IEEE Trans Biomed Eng 44(11): 1092-1101
5. Holm S (1979) A Simple Sequentially Rejective Multiple Test Procedure. Scand J Statistics 6(2): 65-70
6. Driver J, Spence C (2000) Multisensory perception: Beyond modularity and convergence. Curr Biol 10: R731-735
7. Senkowski D, Schneider TR, Foxe JJ, Engel AK (2008) Crossmodal binding by neural coherence: Implications for multisensory processing. Trends in Neurosciences 31: 401-409
8. Bauer M (2008) Multisensory integration: a functional role for inter-area synchronization? Curr Biol 18(16): R709-710
9. Miltner WH, Braun C, Arnold M, Witte H, Taub E (1999) Coherence of gamma-band EEG activity as a basis for associative learning. Nature 397(6718): 434-6

Author: Erla Silvia
Institute: CIMeC and BIOtech, University of Trento
Street: Via delle Regole 101, Mattarello
City: Trento
Country: Italy
Email:
[email protected]
Patient-specific seizure prediction using a multi-feature and multi-modal EEG-ECG classification
M. Valderrama1, S. Nikolopoulos1, C. Adam1,2, V. Navarro1,2 and M. Le Van Quyen1
1 Centre de Recherche de l'Institut du Cerveau et de la Moelle épinière (CRICM), INSERM UMRS 975 - CNRS UMR 7225, Hôpital de la Pitié-Salpêtrière, Paris, France
2 Epilepsy Unit, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique - Hôpitaux de Paris, France

Abstract— Epilepsy, a neurological disorder in which patients suffer from recurring seizures, affects approximately 1% of the world population. In spite of available drug and surgical treatment options, more than 25% of individuals with epilepsy have seizures that are uncontrollable. For these patients with intractable epilepsy, the unpredictability of seizure occurrence underlies an enhanced risk of sudden unexpected death or morbidity. Therefore, a device that could predict a seizure and notify the patient of the impending event, or trigger an antiepileptic device, would dramatically increase the quality of life for those patients. Here, a patient-specific classification algorithm is proposed to distinguish between preictal and interictal features extracted from EEG-ECG recordings. It demonstrates that a classifier based on a support vector machine (SVM) can distinguish preictal from interictal states with a high degree of sensitivity and specificity. The proposed algorithm was applied to long-term recordings of 4 patients with partial epilepsy, totaling 29 seizures and more than 1333 hours of interictal recordings, and it produced average sensitivity and specificity values of 90.6% and 85.6%, respectively, using a 10-minute-long window of preictal recording. Keywords— Multiple channels; Multiple features; Feature extraction; Seizure prediction; Classification.

I. INTRODUCTION
Epileptic seizures are not randomly occurring events; they are instead the product of highly nonlinear dynamics in brain circuits evolving over time, and are expected to be detectable with some antecedence. Indeed, it has long been observed that the transition from the interictal state (far from seizures) to the ictal state (seizure) is not sudden and may be preceded, from minutes to hours, by preictal clinical, metabolic or electrical changes [1]. In recent years, new answers to this question have begun to emerge from quantitative analyses of the electroencephalogram (EEG). In particular, our group and others have demonstrated that epileptic seizures are preceded by preictal changes in the brain several minutes before their electro-clinical onset in patients with partial epilepsy [2-3]. Most current seizure prediction approaches use quantities that draw inferences about the level of EEG complexity,
such as an effective correlation dimension, correlation density, entropy-related measures, or Lyapunov exponents [4-5]. More recently introduced bivariate measures that estimate dynamical interactions between the time series of two EEG channels, such as phase synchronization or other measures of generalized synchronization, have proved especially promising for seizure prediction [6-7]. In addition to the EEG, it has been shown that epileptic seizures are often associated with several cardiovascular and respiratory modifications. In particular, electrocardiogram (ECG) events, such as heart rate variations, are known to be valuable clinical signs with respect to early manifestations of an epileptic discharge [8]. These studies suggested that multiple features of the EEG or ECG may be useful for the detection of preictal changes. However, when applied to signals recorded over long periods of time, capturing all the changes in the above features as a patient moves in and out of different physiological states over the day, those algorithms produced a significant number of false positives. Besides that, the best predictive features may vary from patient to patient. The aim of this study is to improve generalization capabilities in seizure prediction through the use of a larger set of features than has previously been employed, with multiple algorithms contributing simultaneously to the construction of a high-dimensional feature space in which pre-seizure regions can be identified. The higher the dimension, the more probable it is to avoid false alarms, since more discriminating dimensions are available. We used simultaneously acquired multi-channel EEG and ECG and evaluated 28 distinct features on long-term recordings of epileptic patients, obtained as part of standard clinical evaluation for surgery. A machine learning methodology was then used to identify the optimal combination of feature sets able to distinguish between preictal (immediately preceding a seizure) and interictal (ordinary, between seizures) states. Of the available classifiers, we chose the support vector machine (SVM) [9]. Due to its robustness in estimating predictive models from noisy, sparse and high-dimensional data, SVM methodology has been successful in many applications ranging from genomics to financial data analysis and signal processing. To the best of our knowledge, this is the first attempt that combines multiple, multi-modal EEG-ECG features with SVM classification for seizure prediction.
II. MATERIALS AND METHODS
We examined a group of 4 epileptic patients suffering from medically intractable partial epilepsy, recorded in the Epilepsy Unit of the Pitié-Salpêtrière Hospital, Paris. In order to capture their habitual seizures, EEG-ECG signals were recorded at a sampling rate of 400 Hz with a Nicolet acquisition system, using at least 29 channels for at least 7 consecutive days (13.89 ± 6.81 days). In one patient, in order to better localize the epileptogenic regions for possible resection, intracranial EEG was recorded from stereotactically implanted depth electrodes (n=50) in addition to scalp ones. All seizures were completely annotated by clinicians of the epilepsy unit. Applied to each simultaneously acquired EEG and ECG channel, 28 algorithmic features were developed (Table 1). These include a very large number of single-channel EEG features, such as power estimates in pre-defined frequency bands or broadband complexity measures, as well as several measures of heart rate variability described in the time and frequency domains of the RR time series extracted from the ECG. EEG features were calculated from non-overlapping, consecutive 5-second periods, and ECG features from consecutive 300-second windows with a sliding step of 5 seconds, in order to obtain values at the same time points as the EEG ones. For classification, the SVM algorithm of the LIBSVM package [10] was applied with the radial basis function (RBF) kernel. The SVM parameters C and γ were optimized for each patient using a double 3-fold cross-validation and grid-search process, as proposed in [10]. After linearly scaling all computed features to the range [0,1], data segments were labeled as interictal, preictal, ictal or postictal according to the following criteria: if the segment was between 10 minutes before the seizure onset and the seizure onset, it was labeled as preictal; if it was between the seizure onset and the seizure end, it was labeled as ictal; if it was between the seizure end and 5 minutes after the seizure end, it was labeled as postictal; all other segments were labeled as interictal. For training purposes, 50% of all available preictal, ictal and postictal segments were randomly selected, and an equal number of segments was randomly chosen from the interictal ones, in order to avoid large imbalances between the number of segments of each type. All non-selected segments (more than 99% of all available data) were used for testing purposes (test set).
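The labeling rule can be sketched directly; the representation of seizure annotations as (onset, end) pairs in seconds is our assumption, not the authors' code.

```python
def label_segment(t, seizures, pre=600.0, post=300.0):
    """Label the segment at time t (seconds) given annotated seizures,
    using the 10-minute preictal and 5-minute postictal windows above."""
    for onset, end in seizures:
        if onset <= t <= end:
            return 'ictal'
        if onset - pre <= t < onset:
            return 'preictal'
        if end < t <= end + post:
            return 'postictal'
    return 'interictal'
```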
Once the parameters of the classifier were fully optimized, the performance of the classification was assessed on the test-set. Sensitivity, Specificity and False Detection Rate (FDR) values were determined in a similar way as described in [11]. Sensitivity corresponds to the percentage of preictal epochs correctly identified as preictal by the classifier, Specificity to the percentage of labeled non-preictal epochs correctly classified as non-preictal, and FDR to the percentage of non-preictal epochs incorrectly identified as preictal.
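Under these definitions the three measures reduce to simple epoch counts; a small sketch with hypothetical label arrays (note that, defined this way, FDR is the complement of Specificity):

    import numpy as np

    def prediction_metrics(y_true, y_pred):
        # Boolean masks for the preictal class on both sides.
        preictal = (y_true == "preictal")
        flagged = (y_pred == "preictal")
        sensitivity = 100.0 * np.mean(flagged[preictal])    # preictal epochs found
        specificity = 100.0 * np.mean(~flagged[~preictal])  # non-preictal kept
        fdr = 100.0 * np.mean(flagged[~preictal])           # false detections
        return sensitivity, specificity, fdr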
III. RESULTS

Figure 1 presents the classification rates obtained for one patient (Pat1). The rates were computed when testing the classifier with all features (from the test-set) for all electrodes at the same time, and also when only the features (from the test-set) corresponding to each single electrode were used separately as the input set for the classification process. On average, the best results were obtained for preictal states (72.11 ± 7.78%), followed by interictal (64.72 ± 8.54%), postictal (61.17 ± 15.23%) and ictal (45.70 ± 16.50%) states. These results showed that, excluding the ECG electrodes, classification rates were higher when all available information was used (all features for all electrodes) than when only the information provided by single electrodes was considered. One reason explaining the higher rates for the ECG electrodes may be that the features chosen for ECG analysis were more stable along the recordings than the EEG ones.

Figure 1. Classification rates along different combinations for Pat1: all electrodes (n=29), EEG electrodes (n=27) and ECG electrodes (n=2), for the interictal, preictal, ictal and postictal states
Figure 2 presents the classification rates for the four patients considered in the current study. Color bars indicate the computed rates for each of the four pre-defined epileptic states. Results were obtained after performing the classification process using all available information for each patient (i.e. all features for all electrodes). On average, the highest rates were obtained for preictal states (90.59 ± 6.74%), followed by postictal (89.06 ± 11.60%), interictal (85.64 ± 7.78%) and ictal (65.62 ± 16.84%) states. Interestingly, the highest rates for all four pre-defined states were obtained for Pat4, the only patient with intracranial electrodes besides scalp ones. This observation suggests that intracranial electrodes can contain more specific information about epileptic activity than scalp ones, thus making them more suitable for prediction purposes.
Figure 2. Classification rates for the four patients

Figure 3 shows the evolution of the computed features for four intracranial electrodes around a chosen seizure for the implanted patient (Pat4). Some global differences between the four pre-defined states can be appreciated by simple visual inspection. In particular, changes in the evolution of the features can be seen some minutes before the seizure onset, during the preictal state (segment enclosed by the ellipse), differentiating it from the preceding interictal one. This figure also presents the classification output (color bars at the bottom), with each color representing a classified state according to the same convention as in Figure 2. Gray bars correspond to the segments used for the training process, which were therefore not considered in the performance evaluation. As can be observed, segments classified as preictal are also present in the previous interictal part, just as segments classified as postictal are also present in the posterior interictal one. This observation suggests that specific information related to the preictal and postictal states can appear before (or after) the times chosen for the definition of both states (10 minutes before the seizure onset and 5 minutes after the seizure end, respectively), as can be clearly appreciated, for example, from the similar evolution of the features after the vertical line marking the end of the postictal period.

Figure 3. Evolution of features and classification output for one seizure (interictal, preictal, ictal and postictal states; time scale bar: 3 min)

Table 2 presents the performance results for the four patients according to the measures defined in Section II. In general, the results show classification performance comparable to, or in some cases better than, the results reported by other prediction studies [12].

Table 2. Performance results for the classification process

                  Pat1    Pat2    Pat3    Pat4    Mean ± Std
Sensitivity (%)  90.28   82.56   90.48   99.05   90.60 ± 6.74
Specificity (%)  85.17   75.28   88.22   93.87   85.63 ± 7.79
FDR (%)          12.46   17.73    8.40    5.44   11.00 ± 5.32
IV. DISCUSSION
Most current seizure prediction techniques have been hypothesis-based: a few EEG features were selected and evaluated using simple binary thresholds [1]. At the current state of research, no EEG feature has been found to be unique to pre-seizure activity, so these approaches have produced many false positives on long, continuous and uninterrupted recordings. In particular, algorithms that use a short-time prediction horizon of 10 minutes usually produce low sensitivity (less than 60%) and high false positive rates (around 1 per hour) [12]. The alternative approach we proposed here is based on a machine learning methodology. In particular, a multi-modal combination of multiple EEG-ECG features was used, and we report a sensitivity and specificity that outperform
previous seizure prediction methods on a 10-minute prediction horizon. From a clinical viewpoint, our results further support the view that the brain-heart system can be considered a coupled dynamical control system in which bioenergetic processes in the brain have an autonomic influence on the heart. In the future, different time values for the definition of the preictal and postictal periods must be tested in order to integrate the specific early and late changes related to each of these two states and thus improve the classification performance.

V. CONCLUSIONS
A patient-specific algorithm for seizure prediction based on EEG-ECG recordings is proposed. The algorithm extracts 28 algorithmic features from the EEG-ECG recordings and classifies preictal (the 10 minutes before seizures) versus interictal segments using SVM classification. Applied to long-term recordings via 3-fold cross-validation, the proposed algorithm resulted in an average sensitivity and specificity of approximately 90% and 86%, respectively. The signal framework introduced here raises the possibility of a multi-channel, multi-signal intelligent seizure prediction system that takes account of, and combines, the several physiological parameters available in a clinical epilepsy monitoring environment.
Table 1. EEG and ECG algorithmic features

EEG Features
Time domain: first statistical moment of EEG amplitudes (mean); second statistical moment (variance); third statistical moment (skewness); fourth statistical moment (kurtosis); long-term energy; mean-squared error of estimated AR models
Frequency domain: relative power of the spectral bands delta (0.1-4 Hz), theta (4-8 Hz), alpha (8-15 Hz), beta (15-30 Hz) and gamma (30-200 Hz); spectral edge frequency; decorrelation time; Hjorth mobility; Hjorth complexity
Time and frequency: energy of the wavelet coefficients (Daubechies-4 with 6 decomposition levels)

ECG Features
Time domain: mean NN interval [ms]; variance of NN intervals [ms]; maximum NN interval [ms]; minimum NN interval [ms]; mean heart rate [bpm]; variance of the heart rate [bpm]; maximum heart rate [bpm]; minimum heart rate [bpm]; approximate entropy (ApEn) describing the complexity and irregularity of the RR intervals
Frequency domain: very low frequency (VLF): <0.04 Hz; low frequency (LF): 0.04-0.15 Hz; high frequency (HF): 0.15-0.4 Hz

ACKNOWLEDGMENT

This work is funded by the European FP7 project "Evolving Platform for Improving Living Expectation of Patients Suffering from IctAl Events" (EPILEPSIAE).

REFERENCES

1. Lehnertz K, Le Van Quyen M, Litt B (2007) Seizure prediction. In: Engel J, Pedley TA (eds) Epilepsy: A comprehensive textbook, 2nd ed. Lippincott Williams & Wilkins, Philadelphia, pp 1011-1024
2. Martinerie J, Adam C, Le Van Quyen M, Baulac M, Clémenceau S, Renault B, Varela F (1998) Epileptic seizures can be anticipated by non-linear analysis. Nature Medicine 4:1173-1176
3. Le Van Quyen M, Martinerie J, Navarro V, Boon P, D'Havé M, Adam C, Renault B, Varela F, Baulac M (2001) Anticipation of epileptic seizures from standard EEG recording. The Lancet 357:183-188
4. Sackellares JC, Iasemidis LD, Shiau DS, Gilmore RL, Roper SN (1999) Epilepsy - when chaos fails. In: Lehnertz K, Elger CE (eds) Chaos in the brain? World Scientific, Singapore
5. Lehnertz K, Elger CE (1998) Can epileptic seizures be predicted? Evidence from nonlinear time series analysis of brain electrical activity. Phys Rev Lett 80:5019-5022
6. Mormann F, Andrzejak RG, Kreuz T, Rieke C, David P, Elger CE, Lehnertz K (2003) Automated detection of a pre-seizure state based on a decrease in synchronization in intracranial EEG recordings from epilepsy patients. Phys Rev E 67:21912
7. Le Van Quyen M, Soss J, Navarro V, Robertson R, Chavez M, Baulac M, Martinerie J (2005) Preictal state identification by synchronization changes in long-term intracranial EEG recordings. Clinical Neurophysiology 116:559-568
8. Leutmezer F, Schernthaner C, Lurger S, Pötzelberger K, Baumgartner C (2003) Electrocardiographic changes at the onset of epileptic seizures. Epilepsia 44:348-354
9. Vapnik VN (1995) The nature of statistical learning theory. Springer, New York
10. LIBSVM software, available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/
11. Greene B, Boylan G, Reilly R, de Chazal P, Connolly S (2007) Combination of EEG and ECG for improved automatic neonatal seizure detection. Clinical Neurophysiology 118:1348-1359
12. Schelter B, Winterhalder M, Feldwisch H, Wohlmuth J, Nawrath J, Brandt A, Schulze-Bonhage A, Timmer J (2007) Seizure prediction: the impact of long prediction horizons. Epilepsy Research 73:213-217

Corresponding author: Michel Le Van Quyen, Centre de Recherche de l'Institut du Cerveau et de la Moelle épinière (CRICM), Hôpital de la Pitié-Salpêtrière, 47, boulevard de l'hôpital, Paris, France. Email: [email protected]
Horizontal Directionality Characteristics of the Bat Head-Related Transfer Function

S. Y. Kim1, D. Nikolić1, A. C. Meruelo1 and R. Allen1

1 Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
Abstract— In this paper, measurements of the head-related transfer function (HRTF) of bat-head models are presented and analyzed in order to evaluate the acoustic spectral characteristics of the sound received by the bat's ears during the echolocation process. For that purpose, three different artificial models of the bat head are considered – a sphere model, a sphere model with pinna and a bat-head cast – with the aim of identifying the spectral localization cues in the frontal part of the horizontal plane defined by the shape and size of the bat's head and pinna.

Keywords— Bio-inspired systems, Echolocation, Head-related transfer function, Spectral cues, Biosignals.
I. INTRODUCTION

There have been many studies to reveal the acoustical cues which might be used in echolocation by bats. It has been generally accepted that the interaural disparity between the two ears provides the cues for target localization in the horizontal plane, whereas the pinna is associated with distinguishing the elevation of the target. This study considers three different types of artificial bat heads with the aim of investigating the effect of the head and pinna on the acoustical characteristics of the signal received at the ear by measuring the head-related transfer function (HRTF). The HRTF carries both monaural and binaural information, which provide the basis for the localization study. It describes the acoustical influence of the body, head and external ears. The effects of the external ears can be considered as a linear time-invariant system in which information about the system is encoded in an impulse response (in the time domain) or a transfer function (in the frequency domain). The transfer function at each ear, measured at various angles and distances from the sound source, contains the information available to the subject for determining the position of the source. There have been a few studies on HRTF measurements in bats [1]-[4]; however, the HRTF of an artificial bat head has rarely been investigated. Hence, this study intends to help understand the characteristics of a sonar receiver modeled after a bat. The objective of this study is to investigate the horizontal directionality characteristics of the HRTFs of bat-head models. For that purpose, three different artificial models of the bat head – a plain sphere model of the head, a sphere model with the pinna attached and a solid bat-head cast of the Egyptian fruit bat (Rousettus aegyptiacus) – were considered here. The HRTFs were measured in the frontal part of the horizontal plane and further analyzed with special emphasis on the spectral characteristics, i.e. the angle-dependent changes in the center notch frequencies present in the measured magnitude responses, and the acoustic gain of the measured HRTFs in octave frequency bands. The rest of the paper is organized as follows. In Section II, the methodology used to measure the HRTFs of the bat-head models is explained in detail. The general spectral characteristics of the bat-head models are analyzed and discussed in Section III.
II. METHODOLOGY
A. Measurement set-up

All measurements were conducted in the small anechoic chamber at the Institute of Sound and Vibration Research (ISVR), Southampton. The walls, ceiling and floor of the anechoic chamber are covered with sound-absorbing wedges to reduce sound reflections within the room. The experimental setup used for the HRTF measurements is shown in Fig. 1. The HRTF measurements were made separately for each of the three bat-head models. Fig. 2(a) shows the measurement coordinates used. The bat-head model was fixed at the origin facing the positive y-direction, and the location of the sound source, θ, was varied. A complete set of measurements covered the full range of azimuth angles θ (from 0° to 360°) with an angular resolution of 5°, i.e. 72 positions in total in the horizontal x-y plane. The microphone, bat-head model and rotary stage assembly was fixed to the measurement rig shown in Fig. 1. The measurement rig was placed on top of a metal frame mounted on the floor of the chamber. The height between the floor and the bottom level of the rotary stage was approximately 70 cm, and the bat-head model and the speaker were set 60 cm above the baseline of the rig. The microphone was inserted inside the bat-head model with its tip positioned at the entrance, horizontally facing the ear opening. Fig. 2(b) shows the bat-head models with the microphone fixed
at the position of the ear. The microphone was fixed onto the centre of the rotary stage specifically designed for this experiment. The reference position of the bat-head cast was the middle point between the two ears at the level of the microphone holes, defined as the centre of the interaural axis for the current measurements. The speaker was fixed on the vertical metal beam with its centre positioned 60 cm above the baseline of the rig and placed at a 1 m distance from the interaural centre point of the bat-head model. The positioning of the system was controlled using two laser beams to line up the bat-head cast and the speaker and to ensure that the height of the centre point of the speaker was kept the same as that of the microphone in each case.
Fig. 1 Experimental setup in the anechoic chamber used for HRTF measurements of the bat-head models.

Fig. 2 (a) Scheme of the measurement setup, (b) Bat-head models used in the experiment – sphere (styrofoam ball) with pinna attached at the position of the ear (top); and the rigid bat-head cast of the Egyptian fruit bat Rousettus aegyptiacus with microphone inserted into the left ear channel (bottom).

B. Materials

The simplified sphere model of the bat head used in this experiment is a ball made of styrofoam, cut in half and glued together with the microphone inserted inside (Fig. 2(b)). The diameter of the sphere is 4 cm, the same as the distance between the two ears of the bat-head cast model. The experiment also used a pinna-shaped protuberance on the sphere model to compare the results between a plain sphere and a sphere with a pinna attached. The artificial pinna, approximately 1.5 cm in height, was made in a triangular shape using clay and attached to the sphere at the position of the eardrum as shown in Fig. 2(b). The rigid bat-head cast of the Egyptian fruit bat (Rousettus aegyptiacus) used in this study is shown in Fig. 2(b). The Egyptian fruit bat is the only species among the Megachiroptera (megabats) that uses echolocation. The distance between the two ears is approximately 3-4 cm. The length of the front part of the head is approximately 2 cm and the distance between front and back is approximately 5.5 cm. The pinnae of both ears are about 1.5 cm long, but the angle of the two pinnae is not exactly symmetrical due to natural variation. Inside the cast, a hole is made at the entrance to each ear so that the right-angled microphone can be positioned inside. Because of the small dimensions of the head, the positioning of the microphone's shaft is critical, as inserting the microphone perfectly inside the head is not an easy task.

C. Measurement procedure
In order to measure the impulse response, a computer-generated pink noise signal used as the input to the system was amplified (S55A amplifier, 18 kHz-300 kHz, ±3 dB) and sent through the speaker (Ultra Sound Advice S56, 10 kHz-200 kHz). The signal received by the microphone (B&K Type 4939) inserted into the head model at the ear position was then amplified (B&K Type 2670 preamplifier, 4 Hz-100 kHz, and B&K 2690 conditioning amplifier with a 140 kHz upper limit) and transmitted to the A/D converter with a sampling rate of 500 kHz. Measurements were controlled by a computer located in a room adjacent to the chamber. The impulse responses were calculated by means of deconvolution of the received signal with the generated signal looped back to the data acquisition card. Post-processing was then applied to the measured impulse responses in order to eliminate the effect of the microphone and speaker magnitude responses by deconvolving the impulse responses
with the corresponding impulse responses measured in free-field conditions (without the bat-head model present). The equalized impulse responses were further band-pass filtered with a second-order Butterworth filter from 4 kHz to 120 kHz in order to smooth the responses and then transformed to the frequency domain using the Fast Fourier Transform (FFT) to determine their magnitude responses.
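A sketch of this equalization step, assuming the model and free-field impulse responses are available as NumPy arrays sampled at 500 kHz; the plain spectral division below is a simplified stand-in for the deconvolution described above, and the variable names are hypothetical:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 500_000  # sampling rate of the acquisition system [Hz]

    def equalized_magnitude(h_model, h_freefield, fs=FS):
        # Deconvolve the free-field response by spectral division
        # (a small constant guards against division by near-zero bins).
        n = max(len(h_model), len(h_freefield))
        H = np.fft.rfft(h_model, n) / (np.fft.rfft(h_freefield, n) + 1e-12)
        h_eq = np.fft.irfft(H, n)
        # Second-order Butterworth band-pass from 4 kHz to 120 kHz, zero-phase.
        sos = butter(2, [4e3, 120e3], btype="bandpass", fs=fs, output="sos")
        h_eq = sosfiltfilt(sos, h_eq)
        # Magnitude response in dB on the corresponding frequency grid.
        mag_db = 20 * np.log10(np.abs(np.fft.rfft(h_eq, n)) + 1e-12)
        return np.fft.rfftfreq(n, d=1 / fs), mag_db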
III. RESULTS AND DISCUSSION

A. Spectral analysis

It is known that a bat during its approach to the target usually orients its head to keep the target in the same region of space. For this reason, the azimuth region of interest in this study is restricted to the range from –30° to +30°. In Fig. 3, the transfer functions measured for the three different models of the bat head are plotted stacked with a vertical offset of 25 dB in the frequency region from 10 kHz to 110 kHz and azimuth angle θ ranging from –30° to +30°. This frequency region covers all frequencies in the bat's sonar sound. Plots (a) and (b) in Fig. 3 depict the measured magnitude responses of the plain sphere and the sphere with pinna. As can be seen from these two plots, the major center notch frequencies lie in the frequency region from 65 to 75 kHz in the ipsilateral region. The effect of the pinna attached to the sphere bat-head model can be seen in Fig. 3(b). In the ipsilateral region, i.e. azimuth angles θ from –30° to 0° (midline position), the presence of notch frequencies around 30 kHz is apparent compared to the results of the plain sphere shown in plot (a). In the contralateral region, the shift of the central notch frequencies from 65 kHz to 75 kHz as the azimuth angle increases from 0° to +30° is more evident, and the magnitudes of the central notch frequencies are more prominent at larger azimuth angles, i.e. for source positions above 20°. Plot (c) in Fig. 3 illustrates the magnitude responses measured at the left ear of the bat-head cast. In this case, the asymmetrical shape of the bat head introduces another distinct region with center notch frequencies varying from 78 kHz to 83 kHz. The directional effects of the bat pinna are most clearly seen in the frequency region around 30 kHz, where more prominent notch frequencies are present in the ipsilateral region. Moreover, the center notch frequencies shift linearly from 25 kHz to 38 kHz as the azimuth angle increases from 0° to +30°, i.e. as the sound source moves further ipsilaterally. This linear shift of the center notch frequencies is considered essential in resolving target position.
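The center notch frequencies discussed here can be extracted, for example, as prominent local minima of the magnitude response; a minimal sketch (the prominence threshold is an assumed value, not one reported in the paper):

    import numpy as np
    from scipy.signal import find_peaks

    def notch_frequencies(freqs, mag_db, band=(10e3, 110e3), prominence=6.0):
        # A notch is a sufficiently prominent peak of the negated magnitude.
        sel = (freqs >= band[0]) & (freqs <= band[1])
        idx, _ = find_peaks(-mag_db[sel], prominence=prominence)
        return freqs[sel][idx]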
Fig. 3 Measured HRTFs plotted in the azimuth range from –30° to +30° with a 5° step and the frequency range from 10 to 110 kHz for the three bat-head models: (a) a plain sphere, (b) a sphere with pinna attached and (c) the left ear of the bat-head cast (relative magnitude in dB versus frequency in kHz for each azimuth).
B. Analysis of gain in octave frequency bands

The gain of the HRTF for each model was calculated using 1/3-octave band frequency analysis in the same azimuth range (θ varied from –30° to +30°) as used in the previous section. The centre frequencies are equally spaced on a logarithmic scale, and the bandwidth of each band was determined from the 1/3-octave upper and lower band-edge frequencies; the bandwidth therefore increases as the frequency increases. For this analysis, six frequency bands with centre frequencies at 10, 19.953, 25.119, 31.623, 63.096 and 79.433 kHz were chosen to plot the relative gain of the HRTFs for the three different types of bat-heads, as shown in Fig. 4.
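A sketch of this band-gain computation, assuming the equalized magnitude spectrum is available on a linear frequency grid; the base-2 band edges f_c·2^(±1/6) follow one common 1/3-octave definition, and averaging band power in the linear domain is an assumption of this sketch:

    import numpy as np

    def third_octave_gain(freqs, mag_db, centers_hz):
        # Average HRTF gain in the 1/3-octave band around each centre frequency.
        gains = []
        for fc in centers_hz:
            lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)   # band edges
            sel = (freqs >= lo) & (freqs < hi)
            band_power = np.mean(10 ** (mag_db[sel] / 10))   # mean linear power
            gains.append(10 * np.log10(band_power))
        return np.array(gains)

    centers = [10e3, 19.953e3, 25.119e3, 31.623e3, 63.096e3, 79.433e3]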
Fig. 4 Acoustic gain obtained from the HRTFs of the three different types of bat-head models. The results are plotted over the azimuth from the ipsilateral (–30°) to the contralateral (+30°) position.

In general, the acoustic gain decreases as the source moves from the ipsilateral to the contralateral position in the given range. The gain obtained from the bat-head cast is larger than those from the sphere models at the frequencies of 10, 19.953 and 63.096 kHz by approximately 5 dB. However, the gain of the bat-head cast at the frequencies of 25.119, 31.623 and 79.433 kHz appears to be similar or even smaller compared to those of the sphere models. This effect is considered to be due to the notch frequencies appearing in the frequency spectra of the HRTFs of the bat-head cast shown in Fig. 3(c). On the other hand, the gain was generally larger when the pinna was attached to the sphere, compared to the plain sphere without a pinna. It is noted that this gain is reduced in the frequency regions where the frequency notches appear, for example, in the region around 63.096 kHz.

IV. CONCLUSION

The results of the measurements presented in this study demonstrate how different types of sonar heads affect the acoustic signals received at the ear due to the shape of the head and the characteristics of the pinna. It can be concluded that the notch frequencies present in the HRTF are mainly determined by the pinna, while their positions vary depending on the details of the head and pinna, although the average head size is similar between the different models. The octave band analysis shows that the change of HRTF acoustic gain in the azimuth range from –30° to +30° is similar between the different types of bat-head models. However, the relative gain is significantly affected by the notch frequencies.

ACKNOWLEDGMENT

The authors are very grateful to RCUK for support through the BIAS Basic Technology Programme.

REFERENCES

1. Aytekin M, Grassi E, Sahota M, Moss C F (2004) The bat head-related transfer function reveals binaural cues for sound localization in azimuth and elevation. Journal of the Acoustical Society of America 116(6):3594-3605
2. Fuzessery Z M (1996) Monaural and binaural spectral cues created by the external ears of the pallid bat. Hearing Research 95(1-2):1-17
3. Firzlaff U, Schuller G (2004) Directionality of hearing in two CF/FM bats, Pteronotus parnellii and Rhinolophus rouxi. Hearing Research 197(1-2):74-86
4. Wotton J M, Haresign T, Simmons J A (1995) Spatially dependent acoustic cues generated by the external ear of the big brown bat, Eptesicus fuscus. Journal of the Acoustical Society of America 98(3):1423-1445
Author: Su Yeon Kim
Institute: Institute of Sound and Vibration Research
Street: University Road
City: Southampton
Country: United Kingdom
Email: [email protected]
Assessment of Human Performance during High-Speed Marine Craft Transit

D. Nikolić1, R. Collier2, R. Allen1

1 Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
2 School of Health Sciences, University of Southampton, Southampton, United Kingdom
Abstract— The aim of this study is to investigate human factors specific to high-speed craft operation during transits at sea. For that purpose, a pilot methodology to simultaneously measure and synchronize boat and human physiological data during a transit was designed and conducted. Some measures of interest in the study were seat motions and vibration coupled with head motions, heart rate and the activity of certain spinal muscles. The surface electromyography (EMG) signals were used in order to investigate whether the fatiguing characteristics of the lumbar spine muscles of a RIB crew change over time. Additionally, the electrocardiogram (ECG) signals were used to analyze the effect of body vibrations on heart rate variability.

Keywords— Human performance, Surface EMG analysis, Muscle fatigue, Whole body vibration, ECG analysis.

I. INTRODUCTION
There is an increasing demand for the use of high-performance marine craft. The performance of these craft depends closely on the performance of their crew, which in turn is critically influenced by the manner in which the vessel responds to variable sea conditions. Subjective evidence from high-speed boat crews operating in poor sea conditions usually results in comments describing the ride as rough, tiring, painful and sometimes leading to injury. Quantitative evidence demonstrates that the occupation of high-speed boat crew has a high incidence of hospitalization [1]. As well as injuries, high-speed boat crews are subject to high levels of fatigue during and after transits. As high-speed craft are often used as transit vehicles to deliver personnel undertaking activities such as search and rescue, it is important that they arrive at the destination physically and cognitively able to undertake the task, as lives are dependent on their performance. Thus, it is important to examine and reduce the causes of fatigue and, at the same time, reduce the potential risk of injury. It is generally assumed that the cause of fatigue is the human body's reaction to the motion of the high-speed boat. The motion of a high-speed boat can be split into three domains: (i) high-frequency vibration mainly caused by the engine and small irregularities in the water surface, (ii) low-frequency motion resulting from the boat's movement over bigger swells in the sea, and (iii) shock, the impact due to
the boat landing on the water after taking off from the surface. Of these three motion domains it is the shocks that are most likely to cause injuries, whilst all three have the potential to cause motion-induced fatigue, but to what extent is currently unknown. The aim of this study is to investigate human factors specific to high-speed craft operation during high-speed transits at sea. For that purpose, the proposed methodology was designed and implemented to simultaneously measure and synchronize boat and human physiological data during a transit. The measures of interest in the study were seat motions and vibration coupled with head motions, heart rate and the activity of certain spinal muscles. The surface electromyography (EMG) signals were used to analyze and describe the muscle fatigue characteristics and also to investigate whether the fatiguing characteristics of the lumbar spine muscles of a lifeboat crew change over time, looking for an association between exposure to shock and random vibration. Furthermore, the electrocardiogram (ECG) signals were used to analyze the effect of body vibrations on heart rate variability. The paper is organized as follows. The method used to measure, process and analyze the human body vibration and physiological data is explained in detail in Section II. The results of the experiment are presented and discussed in Section III, followed by the conclusions given in Section IV.

II. METHOD DESCRIPTION
A. Experimental procedure

The experiment was undertaken during a sea trial with a Rigid-hull Inflatable Boat (RIB-X Expert XT650). A male subject participated in this study. The experiment took place off the south coast of England (The National Oceanography Centre, Southampton). The sea conditions were moderate (sea state 3). The trial was approximately 32 minutes long with an average boat speed of 13 m/s. The data measured during the sea trial were: (i) human whole-body vibration (WBV) data – head and seat acceleration – and (ii) human physiological data – the electrocardiogram (ECG) and surface electromyography (sEMG) from the upper fibers of the Trapezius muscle and the Multifidus muscle
in the lumbosacral region. A schematic representation of the experimental setup is shown in Fig. 1.
Fig. 1 Experimental setup used for collection of data during the sea trial (H – head accelerometer, S – seat accelerometer)

B. Data acquisition and instrumentation

The locations of the tri-axial accelerometers (Crossbow CXL100HF3, range ±100 g) and the coordinate system used to measure and assess human vibrations and motions are illustrated in Fig. 1. The accelerometer positions, denoted H and S, correspond respectively to the passenger's head (the accelerometer attached at the back of the helmet) and the seat (the accelerometer positioned at the front of the seat). Measurements of vibration were conducted according to the recommended method [2]. Thus, the axes of the seat and head acceleration signals were referenced to the human body such that the x-axis measured motion in the sagittal plane, the y-axis measured motion in the frontal plane, and the z-axis measured motion along the longitudinal axis. The acceleration signals were acquired at a sampling frequency of 2.5 kHz using a 16-channel logger (IOTECH Logbook 300). To examine the responses of the spinal muscles to body vibration, surface EMG signals were recorded during the trial. The four EMG signals selected for this experiment were from: Left/Right Upper Trapezius and Left/Right Multifidus at the level of the junction of the 4th and 5th lumbar vertebrae. A differential pair of pre-gelled self-adhesive electrodes (3 cm in diameter) was positioned approximately 2.5 cm apart over each muscle of interest, oriented parallel to the muscle fibers. A reference electrode with a built-in 1k-gain preamplifier was placed at an equal distance from the electrode pair. Before attaching the electrodes, the skin where the electrodes were to be placed was prepared according to SENIAM guidelines to reduce skin impedance. Pre-amp cables were fixed to the skin with adhesive tape to reduce potential artifacts caused by cable movements. A similar approach was used for the electrodes placed on the sternum to record the ECG signal.

Raw signals were amplified, band-pass filtered (3 dB bandwidth: 6-6000 Hz) and digitized with a sampling frequency of 1 kHz and 12-bit resolution using an 8-channel portable data logger (MIE Medical Research Ltd). To synchronize the data measured with the two instruments, a tri-axial accelerometer transducer (range ±25 g) was mounted on the back of the participant's helmet close to the other head accelerometer (range ±100 g) with the corresponding axes of detection aligned. The vibration signal acquired by this transducer was used in the processing stage to establish an exact match between the time scales of all recorded signals. Finally, all measured data were stored on a memory card and later converted into MATLAB format for processing.

C. Data processing and analysis
Whole body vibration data

The frequency-weighted seat and head accelerations were calculated using the weighting filters, Wd for the horizontal x- and y-axes and Wb for the vertical z-axis, according to [3]. These filters have high-pass and low-pass band-limiting filters at 0.4 Hz and 100 Hz respectively and are suitable for the vibration frequency range 0.5-80 Hz. Applying frequency-weighting filters to human vibration signals is required to correlate physical vibration measurements with human response. The root-mean-square (rms) of the frequency-weighted acceleration was calculated for each axis over the total period during which vibration occurred, to represent the average acceleration over a period of time. In addition, the rms magnitude of the weighted acceleration signals in the x, y and z directions was calculated and used as a measure of overall vibration. There are no strict limiting acceptable values for magnitudes of frequency-weighted rms vibration regarding human discomfort, although some values are given as indications of potential reaction [3]-[4]. Values below 0.315 m·s–2 are considered not uncomfortable, and values above 2 m·s–2 extremely uncomfortable [3].

Physiological data

By analyzing the ECG signal, it is possible to identify potential irregularities of heart activity due to vibration exposure. Determination of the precise timing of the R peaks in the ECG QRS complex was achieved by using the modified Pan-Tompkins QRS detection algorithm [5], [6]. For this purpose, the raw signal was initially filtered in the range of 1 to 35 Hz using a band-pass filter composed of cascaded high-pass and low-pass fourth-order Butterworth filters to reduce noise artifacts and improve detection accuracy, and then downsampled to 250 Hz. Subsequent processes were double-differentiation, squaring and smoothing of the signal by a moving-window integration filter of length 150 ms. In the decision stage, two techniques to reduce false peak detection were undertaken. A new peak is detected only when the height of the peak reaches a certain threshold and when the time interval between two consecutive peaks, i.e. the RR interval, is within the established range of 50% to 150% of the average RR interval based on the eight most recent intervals. After successful detection of the R peaks in the ECG waveform, heart rate fluctuations were assessed by calculating the instantaneous value of the heart rate (expressed in beats per minute). The calculated instantaneous heart rate was further compared to the rms values of the frequency-weighted seat acceleration signals calculated from the 1 s time windows prior to each heart beat.

The EMG variables of interest in this study were the root-mean-square (rms) value and the power spectrum density. The running rms values were calculated from consecutive 1 s time windows of the EMG signals and compared with the rms values of the measured acceleration signals. The power spectrum density of the EMG signal has been shown to be a valuable technique for investigating muscle fatigue properties and motor unit behavior. Typically, power spectrum density has been used to investigate the properties of a muscle during sustained maximum voluntary contraction, although the techniques used in this study are appropriate for muscle contraction at lower levels during active movement. In this study, the power spectrum densities of the measured EMG signals were estimated and averaged using Welch's method with an FFT window size of 512 samples and 50% overlap.
III. RESULTS AND DISCUSSION

A. Human vibration analysis

The main vibration parameters (peak and rms values) of the frequency-weighted seat and head acceleration signals were calculated for each axis direction and are reported in Table 1. Applying the frequency-weighting filters to the seat vibration resulted in the highest impact magnitude value of 1.54 g, encountered in the lateral (y-axis) direction. The highest rms acceleration magnitude of 1.005 m·s–2 also occurred in the lateral (y-axis) direction at the seat base. The overall frequency-weighted rms acceleration amplitude of 1.440 m·s–2 calculated for the head vibration was found to be considerably larger than the seat rms acceleration amplitude of 1.079 m·s–2 and is considered to be very uncomfortable [3]. It should be noted that the most severe vibration during this trial occurred in the lateral direction.

Table 1 Vibration parameters of the frequency-weighted head and seat accelerations calculated for each axis direction

Parameter              x-axis    y-axis    z-axis    Total
Seat acceleration
  Peak value [g]         0.23      1.54      1.36        –
  Peak value [m·s–2]     2.27     15.09     13.37        –
  rms value [g]         0.018     0.102     0.035    0.110
  rms value [m·s–2]     0.181     1.005     0.347    1.079
Head acceleration
  Peak value [g]         0.58      2.29      0.49        –
  Peak value [m·s–2]     5.68     22.42      4.80        –
  rms value [g]         0.058     0.130     0.035    0.147
  rms value [m·s–2]     0.567     1.278     0.345    1.440

B. ECG analysis

The calculated instantaneous heart rate plotted against the rms values of the frequency-weighted seat acceleration magnitude is shown in Fig. 2. The estimated delay between the instantaneous heart rate and the seat acceleration was approximately 2 s.

Fig. 2 The frequency-weighted seat rms acceleration magnitudes (top) and instantaneous heart rate (bottom) during the trial

The results demonstrate a statistically significant correlation (p<0.0001) between the seat vibration and heart rate variations. It is also shown that the largest influence on the heart rate is caused by seat vibration in the longitudinal (x-axis) direction, with a correlation coefficient of 0.4770 (Table 2).

Table 2 Correlation coefficients between the instantaneous heart rate and the unweighted and frequency-weighted rms seat acceleration amplitudes

Seat acc.     x-axis (longitudinal)   y-axis (lateral)   z-axis (vertical)   Magnitude
unweighted                   0.3275             0.4419              0.3673      0.4287
weighted                     0.4770             0.4398              0.3853      0.4499
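The reported ~2 s delay can be estimated, for example, by correlating the heart-rate series against the windowed rms seat acceleration at a range of lags; a sketch under the assumption that both series have already been resampled to a common 1 Hz grid (names hypothetical):

    import numpy as np

    def lagged_correlation(hr, rms_acc, fs=1.0, max_lag_s=5.0):
        # Pearson correlation of heart rate vs. rms acceleration at each lag.
        lags = np.arange(0, int(max_lag_s * fs) + 1)
        r = []
        for k in lags:
            a = hr[k:]                                  # HR shifted by k samples
            b = rms_acc[:len(rms_acc) - k] if k > 0 else rms_acc
            n = min(len(a), len(b))
            r.append(np.corrcoef(a[:n], b[:n])[0, 1])
        return lags / fs, np.array(r)

    # The lag maximizing the correlation estimates the HR response delay.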
C. EMG analysis
The power spectrum densities of the EMG signals calculated for the successive 10-minute time segments are shown in Figs. 3 and 4. The general shape of these spectral densities is in agreement with the spectral analysis of averaged EMG signals in the frequency domain during voluntary contraction [7], [8]. The plots of the power spectrum densities from the Left and Right Multifidus muscles are typical of those seen as a muscle fatigues during an activity – the spectral content shifts to lower frequencies – and show a cumulative effect consistent with muscle fatigue [7]. Similar changes are not seen in the upper fibers of the Trapezius, which do not show the same fatiguing characteristics as the muscles that support the lumbar spine and lumbo-sacral region.

Fig. 3 Power spectrum densities of the surface EMG signals (0-10 min, 10-20 min and 20-30 min segments for the Left/Right Upper Trapezius and Left/Right Multifidus)

The calculated correlation coefficients between the rms values of the EMGs and the rms seat acceleration amplitudes for each axis direction are illustrated in Fig. 4. It is shown that the highest impact on the Multifidus muscles is caused by seat acceleration in the vertical (z-axis) direction.

Fig. 4 Correlation coefficients between the rms values of the surface EMG signals and the seat acceleration signals

IV. CONCLUSIONS

The methodology presented in this paper shows the feasibility of human performance measurement during high-speed transits at sea. The preliminary results also demonstrate a very high influence of body vibration on heart rate variability and back muscle fatigue during vibration exposure. A statistically significant correlation between the rms seat acceleration and heart rate variation, as well as the rms amplitudes of the surface EMG signals, is also reported. This preliminary analysis of the physiological measures also revealed a delay of approximately 2 s in heart rate changes in response to seat vibration. The spectral EMG analysis demonstrates spectral shifts from the higher to the lower frequency range that might be evidence of muscle fatigue during the sea transit.

ACKNOWLEDGMENT

The authors acknowledge the support of colleagues from the School of Engineering Sciences, Ship Science, University of Southampton, for their help in conducting this experiment.

REFERENCES
1. Ensign W, Hodgdon J A, Prusaczyk W K, Ahlers S, Shapiro D, Lipton M (2000) A survey of self-reported injuries among boat operators. Naval Health Research Centre. Tech Report 00-48
2. ISO 2631-1: Mechanical vibration and shock – evaluation of human exposure to whole-body vibration – Part 1: general requirements. (1997) International Organization for Standardization. pp 1-31
3. BS 6841: Measurement and evaluation of human exposure to whole-body mechanical vibration and repeated shock. (1987) British Standards Institute
4. Griffin M J (2004) Minimum health and safety requirements for workers exposed to hand-transmitted vibration and whole-body vibration in the European Union; a review. Occupational and Environmental Medicine 61:387-397
5. Pan J, Tompkins W J (1985) A real-time QRS detection algorithm. IEEE Transactions on Biomedical Engineering 32:230-236
6. Allen D P, Taunton D J, Allen R (2008) A study of shock impacts and vibration dose values onboard high-speed marine craft. International Journal of Maritime Engineering 150:1-10
7. Merletti R, Parker P (2004) Electromyography – Physiology, Engineering and Noninvasive Applications. 1st ed. John Wiley & Sons, Inc., Hoboken, New Jersey
8. Cifrek M, Medved V, Tonković S, Ostojić S (2009) Surface EMG based muscle fatigue evaluation in biomechanics. Clinical Biomechanics 24(4):327-340

Author: Dragana Nikolić
Institute: Institute of Sound and Vibration Research
Street: University Road
City: Southampton
Country: United Kingdom
Email: [email protected]
Effects of Electrochemotherapy on Microcirculatory Vasomotion in Tumors

T. Jarm1, B. Cugmas1, and M. Cemazar2

1 University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Biocybernetics, Ljubljana, Slovenia
2 Institute of Oncology Ljubljana, Department of Experimental Oncology, Ljubljana, Slovenia
Abstract— Electrochemotherapy (ECT) is a potent antitumor therapy with excellent results demonstrated in experimental and clinical studies. ECT consists of an injection of a chemotherapeutic drug (bleomycin or cisplatin) followed by an application of a series of short high-voltage electric pulses locally to the tumor mass. The antitumor effectiveness of ECT is attributed largely to the increased cytotoxicity of the drug due to transient electroporation (EP) of tumor cells' membranes, which facilitates an increased uptake of the drug by tumor cells. However, two additional mechanisms have been recognized as critical for complete eradication of treated tumors: the role of the host's immune system and the vascular-disrupting effect of ECT. Application of electric pulses to tumors is followed by a rapid and profound reduction of tumor blood flow. Some minor reperfusion takes place within an hour after the treatment. By using laser Doppler flowmetry for continuous monitoring of tumor blood flow, we provide evidence that, in addition to the baseline blood flow changes, application of electric pulses and/or the chemotherapeutic drug bleomycin also induces significant changes in vasomotional activity in the tumor microcirculation within the first hour after treatment. Vasomotion is a collective term describing low-frequency (e.g. <0.5 Hz) fluctuations in local microcirculation as a result of cycles of contraction and relaxation of smooth microvascular musculature brought about by different physiological sources. The observed changes in vasomotion in tumors after EP and ECT support the hypotheses about the mechanisms of the blood flow-modifying effects of electric pulses suggested in other studies.

Keywords— experimental tumor, blood flow, laser Doppler, electroporation, continuous wavelet transform.
I. INTRODUCTION

Reversible electroporation (EP) is a physical phenomenon in which high-voltage electric pulses of short duration (e.g. 100 μs) are used to transiently increase the permeability of the cellular membrane [1]. Electrochemotherapy (ECT) of tumors is the combined use of EP and chemotherapeutic drugs. An injection, systemic or intratumoral, of a drug at a very low dose is followed by the application of EP locally to the tumor. The uptake of otherwise poorly permeant drug molecules by the tumor cells is thus vastly increased and the cytotoxicity of the drug enhanced. For two chemotherapeutic drugs, bleomycin and cisplatin, very high effectiveness of ECT has been demonstrated in experimental and clinical studies on tumors of various etiologies, and ECT with these
two drugs is now used in a steadily increasing number of clinics and veterinary hospitals [2]. Additional mechanisms involved in the overall antitumor effect of ECT have been identified: the role of the host's immune system and the vascular-disrupting effects of ECT. The crucial role of the immune system was confirmed in studies on immunodeficient mice, in which ECT was far less effective than in immunocompetent mice [3]. Evidence of profound effects of EP and ECT on tumor blood flow and blood vessels has been provided by several in vivo studies. Furthermore, studies in vitro showed that EP temporarily induces massive cytoskeletal changes in endothelial cells and compromises endothelial barrier function [4]. Endothelial cells were also shown to be highly sensitive to ECT. All this evidence led to the conclusion that the vascular-disrupting effect of ECT plays an important role in the eradication of treated tumors, especially in the light of the known inferiority, both structural and functional, of tumor blood flow and vascularization in comparison to normal tissues. Even though the characteristics of tumor blood flow can represent a serious obstacle for some antitumor treatments, they can also be exploited in new treatment strategies such as vascular-disrupting therapies and other treatments designed specifically to take advantage of the weaknesses of the tumor vasculature. In the present study, laser Doppler flowmetry (LDF) was used to monitor changes in tumor blood flow after the application of electric pulses (EP), the chemotherapeutic drug bleomycin (BLM), and the combined application of both (ECT). The main objective was to investigate the effects of these treatments on the vasomotional activity present in LDF signals during the first hour after the treatment.
II. MATERIALS AND METHODS

The blood flow signals reported on and used in the present study were acquired in a previous study, so only a brief outline of the experimental protocol is provided here [5].

A. Animals, Tumors and Treatment Protocols

Solid Sa-1 fibrosarcoma tumors in A/J mice were used. Subcutaneous tumors were grown dorsolaterally on the right flank of the mice. Experiments were performed when tumors reached a volume of ~50 mm3. Mice were anesthetized
using isoflurane delivered via a face-mask in a mixture of O2 and N2O. The body temperature was monitored and maintained within the physiological range with a heating pad. Mice were assigned randomly to one of four experimental groups: control, electroporation (EP), bleomycin (BLM) and electrochemotherapy (ECT). At least 13 animals per group were used. BLM was injected intravenously at a dose of 1 mg kg-1 three minutes before application of EP. For EP, eight square electric pulses (amplitude 1040 V, duration 100 μs, repetition frequency 1 Hz) were delivered via parallel stainless-steel electrodes (inter-electrode distance 8 mm) placed percutaneously at two opposite sides of the tumor. ECT-treated mice received both treatments, while EP- and BLM-treated mice received only the corresponding single treatment, with the other one replaced by a sham treatment. Relative blood flow (perfusion) was monitored by means of laser Doppler flowmetry (LDF). A two-channel LDF instrument OxyFlo2000 (Oxford Optronix Ltd., Oxford, UK) was used to measure blood flow simultaneously at two different locations within a tumor. Thin bare-fiber probes (diameter 0.2 mm) were inserted through small superficial incisions into the tumor. LDF monitoring was started ~30 minutes before tumors were subjected to therapy and was maintained for at least one hour after the treatment. LDF signals were acquired at a 100 Hz sampling rate.

B. Signal Processing

Signals were processed using Matlab software (Mathworks, Natick, MA, USA).

a) Baseline Tumor Blood Flow

To obtain the gross (baseline) tumor blood flow signal, raw LDF signals were filtered by a special digital filter to remove movement artifacts caused by respiration and downsampled to 1 Hz. For presentation of the baseline blood flow changes, this signal was averaged over 20-second epochs centered at predefined intervals, and all data were normalized with respect to the pretreatment value.

b) Continuous Wavelet Analysis

The continuous wavelet transform (CWT) was used for analysis of the low-frequency content of the LDF signals. The theory of the CWT can be found in the literature, e.g. [6]. The CWT provides information about the signal simultaneously in the time and frequency spaces. This is achieved by using special oscillatory, time-localized analyzing functions called wavelets, which play a role in the CWT analogous to that of the complex exponential in the Fourier transform. Many different wavelets exist, but all wavelets of the same family are derived from the mother wavelet ψ(t) by two transformations: scaling by a factor a (compressing or dilating) and translation by a time-shift b (moving along the
time-axis) – see Eq. (1). Division by \sqrt{a} ensures that wavelets at all scales have the same energy:

    \psi_{a,b}(t) = \frac{1}{\sqrt{a}} \, \psi\!\left( \frac{t-b}{a} \right)    (1)

The CWT of a continuous signal x(t) at scale a and time location b is defined as:

    X(a,b) = \int_{-\infty}^{\infty} x(t) \, \psi_{a,b}^{*}(t) \, dt    (2)

where \psi_{a,b}^{*} is the complex conjugate of \psi_{a,b}. The scale parameter a is inversely proportional to a characteristic frequency f of \psi_{a,b} (a \propto 1/f), so the results of the CWT can be presented in the time-frequency space. The complex Morlet wavelet was used in the present study [6]. In Matlab it is defined by its mother wavelet as:

    \psi(t) = \frac{1}{\sqrt{\pi f_b}} \exp(j 2\pi f_c t) \exp(-t^2 / f_b)    (3)

This wavelet is a complex sinusoid (the first exponential term) enclosed within a Gaussian envelope (the second exponential term). The number of "effective" oscillations within the envelope and the width of the envelope can be controlled by changing f_c and f_b respectively. We used f_b = 1 and f_c = 1.5 (dimensionless) in our work. The physical frequency f (in Hz) corresponding to scale a is:

    f = \frac{f_c f_s}{a}    (4)

where f_s is the sampling frequency used (in Hz).
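For concreteness, a direct NumPy implementation of Eqs. (1)-(4) with the Morlet parameters above might look as follows; the discretization choices (scale sampling via the target frequencies, ±4 envelope widths of wavelet support) are assumptions of this sketch:

    import numpy as np

    FB, FC = 1.0, 1.5   # Morlet parameters f_b and f_c used in the paper

    def morlet(t, fb=FB, fc=FC):
        # Complex Morlet mother wavelet, Eq. (3).
        return (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * fc * t) * np.exp(-t ** 2 / fb)

    def cwt_morlet(x, freqs, fs):
        # CWT of x at the requested physical frequencies, via Eqs. (1), (2), (4).
        scales = FC * fs / np.asarray(freqs)       # a = f_c * f_s / f, Eq. (4)
        out = np.empty((len(scales), len(x)), dtype=complex)
        for i, a in enumerate(scales):
            # Wavelet support of +-4 envelope widths, truncated to the signal.
            half = min(int(4 * a), (len(x) - 1) // 2)
            k = np.arange(-half, half + 1)
            psi = morlet(k / a) / np.sqrt(a)       # psi_{a,b}, Eq. (1)
            # np.correlate conjugates its second argument, matching Eq. (2);
            # the 1/fs factor approximates dt in the integral.
            out[i] = np.correlate(x, psi, mode="same") / fs
        return out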
c) Analysis of Vasomotion in Tumor Blood Flow

Each LDF signal was treated individually. The raw signal was filtered using an anti-aliasing low-pass filter and downsampled to 10 Hz. Signal values were normalized with respect to the baseline blood flow determined 5 min before the treatment. Detrending was performed by subtracting the output of a moving-average filter (length 60 s) from the signal. Finally, the signal was filtered by a low-pass filter (cut-off frequency 0.35 Hz) to remove the influence of the respiratory component. This was necessary because all signals were heavily contaminated by respiratory movement artifacts due to the anatomical location of the tumors. Vasomotional activity was assessed 5 minutes before (-5 min), 5 minutes after (+5 min) and 60 minutes after therapy (+60 min). Segments of length 500 seconds centered at moments -5, +5 and +60 min were extracted from the signals. The CWT was calculated over the scales corresponding to
the range of frequencies between 0.01 and 0.5 Hz. Boundary regions of length 100 s were discarded from both ends of the resulting CWT to minimize the boundary effect. The results were presented as plots of the CWT modulus above the time-frequency plane. For statistical evaluation of the results, the average energy density within predefined frequency bands was calculated from the CWT according to the procedure described in [7]. The average energy density within a frequency band (f_1, f_2) is defined as:

    E(f_1, f_2) = \frac{1}{T} \int_{0}^{T} \int_{a_2}^{a_1} \frac{1}{a^2} \left| X(a,b) \right|^2 \, da \, db    (5)
where a_i denotes the scale corresponding to the physical frequency f_i according to Eq. (4). The frequency range was divided into two frequency bands based on the experimental indication of two regions with prominent peaks in the CWT: band I (0.01-0.07 Hz) and band II (0.07-0.5 Hz), resulting in average energy densities E_I and E_II respectively. Repeated measures ANOVA on ranks and the Dunn test were used for statistical evaluation of the results. Differences were considered statistically significant for P ≤ 0.05.
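Given the CWT coefficients, Eq. (5) reduces to a 1/a²-weighted integral over the scales inside the band followed by a time average; a sketch consistent with the definitions above (the gradient-based scale spacing is an implementation assumption, and at least two scales must fall inside the band):

    import numpy as np

    def band_energy(coeffs, scales, f_lo, f_hi, fs, fc=1.5):
        # Average energy density E(f1, f2) of Eq. (5) from CWT coefficients.
        freqs = fc * fs / scales                   # physical frequency per scale
        sel = (freqs >= f_lo) & (freqs <= f_hi)    # scales between a2 and a1
        a = scales[sel]
        weighted = np.abs(coeffs[sel]) ** 2 / a[:, None] ** 2
        da = np.abs(np.gradient(a))                # spacing of non-uniform scales
        energy_per_time = np.sum(weighted * da[:, None], axis=0)
        return energy_per_time.mean()              # the 1/T time average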
Fig. 1 Blood flow changes after different treatments assessed by means of LDF. Mean ±SE values are shown (N ≥ 13 for all groups). Reproduced from [5] in accordance with the Licence to publish of Br.J.Cancer Table
1 Changes in vasomotional activity during the observation period. Values of averaged energy densities 5 min before the treatment were compared to those 60 min after the treatment separately for frequency bands I and II (EI and EII) and for the entire frequency range (E). The direction of change and statistical significance of the change are shown EI (0.01-0.07 Hz)
EII (0.07-0.5 Hz)
E (0.01-0.5 Hz)
change
signif.
change
signif.
change
Control
decrease
NO
increase
YES
increase
NO
BLM
increase
YES
increase
YES
increase
YES
EP
decrease
YES
decrease
YES
decrease
YES
ECT
decrease
YES
decrease
YES
decrease
YES
Group
III. RESULTS Changes in baseline blood flow following different treatments are presented in Fig. 1 [5]. It can be observed that both EP and ECT induced rapid and profound (highly significant) decrease in blood flow, followed by a minor reperfusion which stopped ~10 minutes after the treatment with no significant further improvement up to 1 h afterwards. Changes in blood flow of control tumors were insignificant, while in the BLM-treated tumors there was a tendency of a slight blood flow increase with border-line statistical significance ~45 minutes after the treatment. Changes in vasomotional activity after EP and ECT were dramatic. Fig. 2 presents examples of low-frequency fluctuations in a tumor treated by EP (top) and BLM (bottom) over 5-minute intervals centered at three characteristic moments before and after the treatment. In the left-most panels (before treatment) a vivid vasomotional activity can be seen with a distinct peak of activity between 0.1 and 0.2 Hz. In this EP-treated tumor there was also a strong activity centered around 0.03 Hz. 5 min after the treatment with EP the vasomotion was almost non-existent and only partial recovery was reached 1 h after the treatment. In the tumor treated with BLM, there was a strong evidence of increasing vasomotion at all frequencies, in particularly around 0.1 and 0.03 Hz. In control tumors the changes were far less dramatic, even though there was an indication of slight increase in vasomotion towards the end of observation (not shown).
In ECT-treated tumors (not shown) the changes were very similar to those seen in EP-treated tumors. There were large differences between tumors of the same group, but in all tumors a strong activity in the 0.1-0.2 Hz range was evident before the treatment. However, the location and existence of distinct peaks at lower frequencies were not as consistent. In all EP- or ECT-treated tumors the vasomotion was abolished 5 min after the treatment over the entire frequency range and had only partially recovered by the end of the observation. In BLM-treated tumors the activity was statistically significantly enhanced at the end of observation in both frequency bands. The significance of vasomotional activity changes is summarized in Table 1.
IV. DISCUSSION WITH CONCLUSIONS

Our results show that, in addition to the well-known effects of EP and ECT with BLM on baseline tumor blood flow [8], these treatments also significantly alter the vasomotional activity in blood flow signals. In humans it was shown that
the frequency range of vasomotion can be divided into at least three regions belonging to the endothelial-mediated, neurogenic and myogenic components of local blood flow regulation [7]. It is not possible to translate the frequency regions known for humans directly to mice. However, it is known that the immediate but short-lasting (~1-5 min) first phase of decrease in blood flow following EP (or ECT) is a result of sympathetically-mediated vascular spasm of afferent arterioles [9]. The second, slower and much longer-lasting phase (~hours) is largely due to electroporation of endothelial cells, which causes profound structural and functional changes in these cells [4]. Based on the observed dynamics of changes of vasomotional activity immediately after and one hour after EP, we suggest that band II (0.07-0.5 Hz) reflects mostly the myogenic component and that band I (0.01-0.07 Hz) contains largely the other two components of vasomotion.

Fig. 2 Low-frequency content of blood flow fluctuations expressed as the modulus of CWT of the processed LDF signals. Lighter shades of gray correspond to more intense vasomotional activity. Examples of an EP-treated (top) and a BLM-treated tumor (bottom) are shown for 5-minute time intervals centered at moments 5 min before (left), 5 min after (middle) and 60 min after the treatment (right)
ACKNOWLEDGMENT The authors acknowledge financial support by the Slovenian Research Agency (projects P3-0003 and P2-0249).
REFERENCES

1. Orlowski S, Belehradek J, Paoletti C et al. (1988) Transient electropermeabilization of cells in culture - increase of the cytotoxicity of anticancer drugs. Biochem. Pharmacol. 37(24):4727-4733
2. Sersa G, Cemazar M, Miklavcic D, Rudolf Z (2006) Electrochemotherapy of tumours. Radiol. Oncol. 40(3):163-174
3. Sersa G, Miklavcic D, Cemazar M et al. (1997) Electrochemotherapy with CDDP on LPB sarcoma: comparison of the anti-tumor effectiveness in immunocompetent and immunodeficient mice. Bioelectrochem. Bioenerg. 43:279-283
4. Kanthou C, Kranjc S, Sersa G et al. (2006) The endothelial cytoskeleton as a target of electroporation-based therapies. Mol. Cancer Ther. 5(12):3145-3152
5. Sersa G, Jarm T, Kotnik T et al. (2008) Vascular disrupting action of electroporation and electrochemotherapy with bleomycin in murine sarcoma. Br. J. Cancer 98(2):388-398
6. Addison PS (2002) The illustrated wavelet transform handbook. IOP Publishing, Bristol
7. Stefanovska A, Bracic M, Kvernmo HD (1999) Wavelet analysis of oscillations in the peripheral blood circulation measured by laser Doppler technique. IEEE T. Biomed. Eng. 46(10):1230-1239
8. Jarm T, Cemazar M, Sersa G (2010) Tumor blood flow-modifying effects of electroporation and electrochemotherapy - experimental evidence and implications for the therapy. In: Pakhomov AG, Miklavcic D, Markov MS (Eds.) Advanced Electroporation Techniques in Biology and Medicine. CRC Press, Boca Raton (in press)
9. Gehl J, Skovsgaard T, Mir LM (2003) Vascular reactions to in vivo electroporation: characterization and consequences for drug and gene delivery. Biochim. Biophys. Acta - General Subjects 1569(1-3):51-58
Author: Tomaz Jarm
Institute: University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Biocybernetics
Street: Trzaska 25
City: Ljubljana, SI-1000
Country: Slovenia
Email: [email protected]
Non-linear modeling of cerebral autoregulation using cascade models

N.C. Angarita-Jaimes1, O.P. Dewhirst1 and D.M. Simpson1

1 ISVR, University of Southampton, Southampton, SO17 1BJ, UK
Abstract— Autoregulation mechanisms maintain blood flow approximately stable despite changes in arterial blood pressure. A model that characterizes this system is of great use not only in understanding cerebral hemodynamics but also for the quantitative assessment of function/impairment of autoregulation. Using arterial blood pressure (ABP) as input and cerebral blood flow velocity (CBFV) as output, the autoregulatory mechanism was modeled using only spontaneous variability in both signals, in accordance with previous work. In this study a non-linear approach based on cascade models, also known as block-structured models, is presented, whose parameters are estimated by Differential Evolution. The results were compared with other linear and non-linear approaches previously used to model cerebral autoregulation. The performance of each model was assessed by the model's predicted CBFV in terms of the normalized mean square error (NMSE) and the correlation coefficient. The results show that for relatively short signals (150 s) containing only spontaneous fluctuations, cascade models performed better than a frequency-domain method but were not significantly different from the linear time-domain techniques tested. These results also show that slightly better performance can be obtained with the cascade models compared with more complicated non-linear models, with the advantage of having more easily interpretable parameters and a simpler structure that facilitates their use in diagnostic methods.

Keywords— Cerebral Autoregulation, Non-linear system identification, blood flow, blood pressure, physiological modeling, cascade models, Differential Evolution

I. INTRODUCTION
The active control of the diameter of small blood vessels, also known as autoregulation, protects the brain against injury due to insufficient or excessive blood flow resulting from a temporary drop or surge in arterial blood pressure (ABP). Autoregulation is of great clinical interest, as it can be impaired or lost in a number of conditions, such as stroke and subarachnoid haemorrhage [1]. In the last decade, most of the research in this field has concentrated on the analysis of the dynamic behaviour of cerebral blood flow velocity (CBFV) - as obtained from non-invasive Doppler ultrasound measurements - in response to transient changes in ABP [1,2], known as dynamic cerebral autoregulation (dCAR). In order to provoke larger changes in ABP, the sudden deflation of a thigh cuff [2], large sinusoidal variations in lower-body negative pressure, periodic breathing or squatting, and the Valsalva maneuver have been used [1]. However, it has been shown that dCAR can also be identified from the small, constantly occurring spontaneous fluctuations in ABP and CBFV [3,4], with the main advantage of providing recordings with minimum discomfort for the patients. Nonetheless, the limited variability of the signals in some subjects and the considerable variability of results have limited the applicability of this technique. The frequency and impulse responses have been used to characterize the dynamic relationship between ABP and CBFV based on spontaneous fluctuations [1]. In the frequency domain, via Transfer Function Analysis (TFA), the phase shift and gain between spontaneous oscillations of ABP and CBFV have shown the high-pass behavior of blood flow control [4]. In the time domain, linear and non-linear models have been proposed. Among the linear models, ARMA (autoregressive moving average) structures [5], linear filters (FIR) [6] and an empirical set of predefined second-order differential equations that quantifies the efficiency of dynamic autoregulation [2] by yielding an autoregulatory index (ARI), ranging from 0 (absence of autoregulation) to 9 (excellent autoregulation), have been used. These models assume a linear relationship between ABP and CBFV, and in spite of them producing good results, there is evidence of nonlinearities present in the autoregulatory system [7,8] which are not included in the linear models. Nonlinear models based on Volterra series and neural networks have been proposed [7,8,9]. Although these techniques have been shown to provide different advantages in terms of modeling, the expressions often include a large number of parameters, which in turn requires relatively long data segments for parameter estimation that can be difficult to obtain in practice. The present work proposes the modeling of the ABP-CBFV relationship through the use of cascade models. These models, also known as block-structured models, are a restricted subset of the Volterra series and have been shown to provide a very efficient description for a more limited class of nonlinear systems, which are easier to interpret and estimate compared to the functional expansions obtained with the Volterra series [10]. The results obtained with the cascade models are also compared with results obtained
with linear and nonlinear models previously used in studying cerebral autoregulation.

II. METHODS

A. Data Collection and Pre-processing

The study was performed on 15 healthy volunteer subjects (age 32 ± 8.8 years) and was approved by the Leicestershire Research Ethics Committee. All recordings were made with subjects in the supine position with the head elevated. Middle cerebral artery velocity was measured using a Transcranial Doppler Ultrasound system (Scimed QVL-120) in conjunction with a 2 MHz transducer held in position by an elastic headband. Simultaneously, arterial blood pressure (ABP) was non-invasively monitored using a finger cuff device (Ohmeda 2300 Finapres BP monitor). The signals were pre-processed off-line. The maximum velocity envelope from the spectra of the Doppler signal was extracted using a microcomputer-based analyzer that performs a fast Fourier transform (FFT) every 5 ms. The ABP signals were digitised at 200 Hz. Short periods of evident artefact as well as any spikes on the signals were removed by linear interpolation, and the signals (ABP, CBFV) were low-pass filtered with an eighth-order Butterworth zero-phase digital filter with a cut-off frequency of 20 Hz. The start of each heart cycle was automatically identified from the ABP signal with visual correction, after which the average ABP and CBFVs from the right and left MCA were calculated for each heartbeat. This time series was then interpolated with a third-order polynomial and sampled at a constant rate of 5 Hz.

B. Data Analysis

For each subject, data segments 300 s long were available. In order to reduce the serial correlation, the signals were decimated to a new sampling rate of 1 Hz, following anti-alias filtering with a cutoff frequency at 0.5 Hz. The recordings were normalized by their mean values, and the mean values of the resultant signals were then removed. This process enabled the relative change in each signal to be obtained. The preprocessed ABP and CBFV signals are denoted as P(t) and V(t) respectively.

Cascade Models: Cascade models consist of several interconnections of alternating linear dynamic (L, with memory) and zero-memory nonlinear (N) elements [10]. The most widely studied cascade systems to date are the LN (also known as the Wiener model) and NL (or Hammerstein model), or sandwich LNL systems. If the nonlinearity is represented by a power series, the constitutive equations for the cascade models shown in Fig. 1 are:

x(t) = \sum_{\tau=1}^{T} h(\tau) P(t-\tau), \quad w(t) = \sum_{q=0}^{Q} c(q) x^{q}(t), \quad V_c(t) = \sum_{\tau=1}^{T} g(\tau) w(t-\tau)    (1)

where T is the length of the Finite Impulse Response (FIR) filter and Q is the polynomial order. The noise-free output of the LN and NL models can be written as:

V_{LN}(t) = \sum_{q=0}^{Q} c(q) \left( \sum_{\tau=1}^{T} h(\tau) P(t-\tau) \right)^{q}, \quad V_{NL}(t) = \sum_{\tau=0}^{T-1} g(\tau) \left\{ \sum_{q=0}^{Q} c(q) P(t-\tau)^{q} \right\}    (2)

The relationships of these expressions to the Volterra series can be found in [10]. As the cascade models are nonlinear in their parameters and have a differentiable cost function, their parameters can be estimated using a nonlinear local optimization method. One of the most widely used is the Korenberg and Hunter (KH) method [11], an iterative identification approach used in the modeling of biological systems. The success of this method and other similar iterative, local optimization approaches depends greatly on the initialization of the parameter vector: a poor initial guess may result in the algorithm converging to a local minimum rather than the global minimum, or requiring many iterations to achieve convergence [12]. Alternatively, evolutionary algorithms, which are stochastic search techniques, use mechanisms inspired by biological evolution to find or approximate solutions to optimization problems. Among these algorithms, Differential Evolution (DE) has previously been shown to give relatively fast, robust convergence and good local search performance [12]; it has the additional advantages of requiring few control parameters and being easy to implement, and it has been successfully applied to estimate parameters of the LN, NL and LNL models [10]. It is, therefore, the method of choice for the estimation of the parameters of the cascade models in the present contribution. The details of the implementation can be found in [10,13].
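To make the estimation scheme concrete, the following is a minimal sketch (Python with NumPy/SciPy, not the authors' code) of fitting an LN (Wiener) cascade by minimizing the NMSE with SciPy's differential evolution optimizer. The lag count and polynomial order follow the values reported in the Results; the surrogate signals, parameter bounds and DE settings are illustrative assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution

    T_LAGS = 6    # FIR filter length, as selected in the Results
    Q_ORDER = 3   # order of the static polynomial nonlinearity

    def ln_predict(params, P):
        # LN (Wiener) cascade, first expression of Eq. (2):
        # FIR filter h followed by the static polynomial c.
        h, c = params[:T_LAGS], params[T_LAGS:]
        # x(t) = sum_{tau=1..T} h(tau) P(t - tau): delay the kernel by one
        x = np.convolve(P, np.concatenate(([0.0], h)))[:len(P)]
        return sum(c[q] * x**q for q in range(Q_ORDER + 1))

    def nmse(params, P, V):
        # Normalized mean square error between predicted and measured CBFV
        err = V - ln_predict(params, P)
        return np.mean(err**2) / np.mean(V**2)

    # Surrogate 1 Hz, zero-mean "relative change" signals (illustration only)
    rng = np.random.default_rng(0)
    P = rng.standard_normal(150)                               # ABP, 150 s
    V = 0.8 * np.roll(P, 1) + 0.1 * rng.standard_normal(150)   # CBFV stand-in

    bounds = [(-2.0, 2.0)] * (T_LAGS + Q_ORDER + 1)            # assumed bounds
    fit = differential_evolution(nmse, bounds, args=(P, V), seed=0, maxiter=200)
    print("training NMSE: %.1f%%" % (100 * fit.fun))

In practice the measured, preprocessed P(t) and V(t) of each subject would replace the surrogate signals, and the fitted model would then be evaluated on the held-out validation segment.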
Fig. 1 The LN and NL block structures.
C. Selection of Parameters and Statistical Analysis
In addition to comparing the results obtained with the different cascade models, the data were also evaluated using three linear models and a second-order non-linear model that have been previously described in the literature. The details of the implementations of these approaches are beyond the scope of the present contribution, thus only a brief overview of each method is given here. First, the frequency-domain transforms of P(t) and V(t) were computed with an FFT algorithm, and the complex transfer function was approximated using the power spectrum of P(t) and the cross-spectrum [1,4]. The impulse response was then calculated with the inverse FFT [9,10]. As only the causal component (i.e. the change in CBFV provoked by changes in ABP) is of interest, the impulse response for negative time was neglected. Second, the set of models proposed by Tiecks et al. [2] was evaluated using the parameter values given by the authors, and the data were fitted to a specific model (combination of parameters) by selecting the ARI leading to the highest correlation coefficient between the model-generated velocity and the measured V(t). Third, a first-order FIR filter was generated following the usual least-mean-squares approach [5]. Finally, the Volterra-Wiener approach was used to estimate a nonlinear representation of ABP and CBFV as proposed by [8,9,10]. To obtain a non-linear, quadratic model, both the first- and second-order terms were considered using the Wiener-Laguerre estimation procedure [10]. For each model, the predicted velocity response was compared to the measured data, and performance was evaluated using the normalized mean square error (NMSE) and the correlation coefficient. The former is defined as the mean-square value of the errors (difference between the predicted and measured velocity) normalized by the mean-square of the measured velocity. Furthermore, the method of cross-validation was adopted, whereby the signals for each subject were divided into two segments, training and validation, each of 150 s duration. Models were first generated using the training set and evaluated with the test set, and vice versa.
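For illustration, the TFA step can be sketched as follows (Python/SciPy; not the authors' implementation, and the spectral estimator settings are assumptions). The transfer function is estimated as the cross-spectrum divided by the input power spectrum, and the non-causal part of the impulse response is discarded.

    import numpy as np
    from scipy.signal import csd, welch

    def tfa_impulse_response(P, V, fs=1.0, nperseg=64):
        # Estimate H(f) = S_PV(f) / S_PP(f) and return the causal part of
        # the corresponding impulse response.
        f, S_pv = csd(P, V, fs=fs, nperseg=nperseg)   # cross-spectrum ABP-CBFV
        _, S_pp = welch(P, fs=fs, nperseg=nperseg)    # power spectrum of ABP
        h = np.fft.irfft(S_pv / S_pp)                 # impulse response (inverse FFT)
        h[len(h) // 2:] = 0.0   # zero the wrapped negative-time (non-causal) half
        return h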
Means and standard deviations quantify the errors across the set of recordings. Finally, to compare different model results, Wilcoxon's non-parametric sign test was used: two results were considered significantly different when p < 0.05.

III. RESULTS

The different models were tested on 15 subjects, and the structure that generated the best average results in the validation set for all subjects was selected. For the cascade models the number of lags of the linear filters was 6 (the same value was used in the linear FIR filter model) and the nonlinearities were of third order. The number of lags for the first- and second-order kernels of the Wiener-Laguerre model was 12. Fig. 2 shows a typical continuous recording of ABP and CBFV. The performance of the different approaches for training and validation is given in Table 1. For all 6 methods, better performance was obtained using the training data, as was expected from theory [10], with the cascade models giving the best performance in terms of R and NMSE. Wilcoxon paired tests showed that the main differences were between the TFA and the other five modeling approaches. From the results it was also observed (although not shown) that the highest variability in the NMSE was between subjects rather than between methods. In general, the results show that comparable results can be obtained with linear and non-linear models when relatively short-duration signals and spontaneous fluctuations are considered. Further explanation of this can be obtained by considering the parameters of the estimated cascade models (Fig. 3). The non-linear element of both the LN and NL models (Fig. 3 A and B) has a mildly sigmoid shape, and therefore an approximately linear response is seen over the central range, which is of most interest. The histogram of sample amplitudes (Fig. 3 C and D) shows the range of amplitudes found. It should also be noted that, even though higher-order polynomials can be used to represent the system's nonlinear response, aberrant results were often found when validating such models with unseen data.

Table 1 Mean ± SD correlation and NMSE for the population studied (n=15)
Model              Training                       Validation
                   Correlation    NMSE %          Correlation    NMSE %
Aaslid             0.73 ± 0.12    46.2 ± 17.1     0.72 ± 0.14    48.2 ± 19.5
TFA                0.59 ± 0.15    69.8 ± 25.2     0.57 ± 0.17    72.8 ± 32.2
FIR                0.77 ± 0.12    37.2 ± 17.0     0.74 ± 0.14    43.7 ± 21.4
Wiener-Laguerre    0.75 ± .012    42.0 ± 19.7     0.73 ± .014    45.8 ± 22.1
NL Model           0.79 ± .009    35.5 ± 15.6     0.72 ± .016    44.5 ± 21.3
LN Model           0.80 ± 0.12    34.8 ± 14.4     0.74 ± 0.14    44.6 ± 22.4
Fig. 2 Representative recording of mean values of ABP and CBFV. The signals have been normalized, thus they represent relative changes. The phase lead of CBFV, which has been shown to be an indicator of a good cerebral autoregulatory response, can be observed in the signals.
Fig. 3 Parameters of the A) LN Model and B) NL Model, estimated for all recordings. The bold line represents the average of the model estimates for all subjects. C, D) Histogram of signal amplitudes that are input to the nonlinearity for the C) LN and D) NL models. 3% of the data fell outside the plotted range
IV. DISCUSSION AND CONCLUSIONS

Different approaches to modeling the ABP-CBFV dynamic relationship have shown good capacity to predict the blood velocity when spontaneous fluctuations are considered. A new approach, based on cascade models, showed superior predictive performance when compared to TFA, but was not significantly better than other time-domain approaches for linear and non-linear models. Furthermore, the results suggest that in order to highlight the nonlinear behaviour of cerebral hemodynamics, bigger transients of the ABP or longer recordings may be required. In [7], long segments of data (2 h) allowed the contribution of slower frequency components to be investigated, which contributed to the better performance of the nonlinear model on spontaneous fluctuations; the relatively large change in ABP that is obtained with the thigh cuff was used in [9], and even though their results suggest the existence of a nonlinear behaviour, involving not only an amplitude factor but possibly a directional effect, it was found that their neural network did not perform significantly better than the other time-domain linear methods, in agreement with the results in this contribution. The relatively poor performance of the TFA model is probably due to its non-causal component having been neglected; while its physiological significance for autoregulation is not clear, its contribution to model fitting is evident. The results reinforce previous observations that the nonlinearity is only a minor contributor to the relationship between ABP and CBFV during small spontaneous variations. The current work used different model structures, but led to similar conclusions. This work, however, also illustrates the potential of the LN and NL models (and the DE algorithm used in their estimation) to investigate physiological systems, with a major benefit in the ease with which the non-linear models can be interpreted.

ACKNOWLEDGMENT

We would like to thank Stephanie Foster and David Evans for providing the anonymized data used in this study, Lingke Fan for the Doppler analyser component, and Innovation China UK for funding support.

REFERENCES

1. Panerai R (1998) Assessment of cerebral pressure autoregulation in humans - a review of measurement methods. Physiol. Meas. 19
2. Aaslid R, Lindegaard K et al. (1989) Cerebral autoregulation dynamics in humans. Stroke 20:45-52
3. Panerai R et al. (1998) Grading of cerebral autoregulation from spontaneous fluctuations in ABP. Stroke 29:2341-2346
4. Zhang R et al. (1998) Transfer function analysis of dynamic cerebral autoregulation in humans. Am. J. Physiol. 274:233-241
5. Liu Y et al. (2003) Dynamic cerebral autoregulation assessed using an ARX model: comparative study. Med. Eng. Phys. 25
6. Liu J et al. (2005) High spontaneous fluctuation in arterial blood pressure improves the assessment of CA. Physiol. Meas. 26:25-41
7. Mitsis G et al. (2004) Nonlinear modeling of the dynamic effects of arterial pressure and blood gas variations on cerebral blood flow in healthy humans. Adv Exp Med Biol 551(11):259-65
8. Panerai R et al. (1999) Linear and nonlinear analysis of human dynamic cerebral autoregulation. Am J Heart Circ Physiol 277
9. Panerai R et al. (2004) Neural network modeling of dynamic cerebral autoregulation. Med. Eng. Phys. 26(1):43-52
10. Westwick D and Kearney R (2006) Identification of Nonlinear Physiological Systems. IEEE Press, first edition
11. Korenberg M and Hunter I (1986) The identification of nonlinear biological systems. Biol. Cyb. 50(2):125-134
12. Dewhirst O et al. (2009) Wiener-Hammerstein parameter estimation using differential evolution. Proc. of Biosignals 2010
13. Storn R and Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization. J. of Global Opt. 11
The Epsilon-Skew-Normal dictionary for the decomposition of single- and multichannel biomedical recordings using Matching Pursuit algorithms

D. Strohmeier1, A. Halbleib1, M. Gratkowski1 and J. Haueisen1,2

1 Institute of Biomedical Engineering and Informatics, Ilmenau University of Technology, Germany
2 Biomagnetic Center, Department of Neurology, Friedrich Schiller University Jena, Germany
Abstract— Matching Pursuit based algorithms are a well-established method for decomposing single- and multichannel biomedical recordings. Due to their time-frequency characteristics, Gabor dictionaries are commonly used. However, symmetric Gabor atoms fail at approximating asymmetric oscillatory components. We present the Epsilon-Skew-Normal dictionary, which is built from symmetric as well as asymmetric components. The new dictionary can be considered as an extension of the Gabor dictionary. We compared both dictionaries based on the decomposition of simulated as well as real EEG data and found that the Epsilon-Skew-Normal dictionary causes smaller decomposition errors compared to the Gabor dictionary. We conclude that the Epsilon-Skew-Normal dictionary can be used for decomposing single- and multichannel datasets.

Keywords— Matching Pursuit, Epsilon-Skew-Normal atom, electroencephalography, somatosensory evoked potentials

I. INTRODUCTION

In the last decade, different extensions to the Matching Pursuit (MP) algorithm [1] have been presented which are used to decompose multichannel signals into spatio-temporal components taken from a highly redundant dictionary. According to the phase of the spatio-temporal atoms, the multichannel MP techniques can be divided into fix-phase methods, e.g. Multichannel Matching Pursuit (MMP) [2] and Spatial Matching Pursuit (SMP) [3], and variable-phase methods, e.g. Topographic Matching Pursuit (TMP) [4]. Both types have been used for the analysis of multichannel biomedical recordings such as electroencephalography (EEG) and magnetoencephalography (MEG) [3, 4, 5]. Typical analysis schemes involve denoising, artifact reduction as well as component and mapping analysis in the time or time-frequency domain. For this purpose, in particular Gabor dictionaries are applied due to their optimal time-frequency characteristics [4]. Moreover, new applications in the field of biosignal source localization have been published applying Gabor dictionaries [5, 6] or application-specific dictionaries [7]. Compared to other multichannel decomposition methods, the MP based techniques attain better localization results [6]. In this paper, we present the Epsilon-Skew-Normal (ESN) dictionary for MP based algorithms, built from symmetric as well as asymmetric oscillatory components, which can be used to decompose single- and multichannel signals. In addition, we compare the decomposition of simulated as well as real EEG data based on the ESN and Gabor dictionaries.

II. MATERIALS AND METHODS

A. Gabor dictionary - state of the art

Multichannel MP algorithms provide an adaptive and iterative scheme for decomposing spatio-temporally distributed multichannel data, such as EEG or MEG recordings, into components (atoms):

D = \sum_{i=1}^{M} c_i^T \cdot A_i + R^M    (1)

with D - data set, M - number of iterations, c_i - coefficients, A_i - atoms and R^M - Mth-order residual. State of the art in multichannel MP applications is the application of Gabor dictionaries. Gabor atoms are scaled, translated and modulated Gauss functions:

G_{(u_n, s_n, \xi_n)}(t) = \frac{1}{\sqrt{s_n}} \cdot g\left( \frac{t - u_n}{s_n} \right) \cdot e^{i \xi_n t}    (2)

with s_n - scale, u_n - translation, \xi_n - modulation and g(t) = 2^{1/4} \cdot e^{-\pi t^2} - Gauss function. As the uncertainty theorem of Heisenberg is an equality for Gabor atoms, their time-frequency representations (TFR) provide optimal Heisenberg boxes. Thus, the Gabor dictionary is appropriate for multichannel MP applications using the TFR of the multichannel data for further analysis. Due to their symmetric envelope, Gabor atoms adversely affect the decomposition of datasets containing asymmetric components. Hence, the signal as well as the component structure is reconstructed incorrectly (cf. Figure 1).
Fig. 1: A simulated asymmetric oscillation (dotted line) and the first (light solid line) and second (dark solid line) component of its MP decomposition. The nonconformity of the symmetric Gabor atoms becomes apparent at the beginning (0 to 200 ms) and end (600 to 800 ms) of the simulated signal. The 1st Gabor atom shows a proper match of the frequency but fails to approximate the shape due to its symmetry. The residual is only partly corrected by the 2nd atom having a different frequency.
B. Epsilon-Skew-Normal dictionary

In order to improve the approximation of asymmetric oscillatory components, we present the ESN dictionary. Its atoms are defined according to the epsilon-skew-normal distribution [8] as

G_{(u_n, s_n, \xi_n, \varepsilon_n)}(t) = \frac{1}{\sqrt{s_n}} \cdot g_\varepsilon\left( \frac{t - u_n}{s_n}, \varepsilon_n \right) \cdot e^{i \xi_n t},    (3)

where \varepsilon_n \in [-1, 1] - skewness factor, and

g_{\varepsilon_n}(t, \varepsilon_n) = 2^{1/4} \cdot e^{-\pi \frac{t^2}{(1 + \mathrm{sign}(t)\, \varepsilon_n)^2}}    (4)

with sign(t) - signum function. A positive skewness factor ε creates an atom with a positively skewed envelope and vice versa. As g_ε(t, ε) = g(t) holds for ε = 0, the ESN dictionary can be considered as an extension of the Gabor dictionary. Figure 2 shows three ESN atoms in the time and time-frequency domain. Obviously, the steeper slope (ε > 0) or decay (ε < 0) of ESN atoms causes characteristic distortions of the optimal TFR of Gabor atoms (ε = 0). The distortion increases with the absolute value of the skewness parameter. Therefore, the analysis of the resulting TFR is difficult if there are ESN atoms with high skewness factors closely located in the time-frequency domain. However, TFRs can be estimated from ESN atoms, but the distortion has to be considered.

Fig. 2: (a) Absolute value and (b) pseudo Wigner-Ville distribution for three ESN atoms with identical scale, translation and modulation. The skewness factor ε is specified in each figure. The envelope (absolute value of the Hilbert transformed signal) highlights the skewness. Attention should be paid to the maxima of the plots, which are isochronous for the absolute values but shifted for the TFR due to the skewness.
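A direct transcription of Eqs. (3)-(4) is straightforward. The following minimal sketch (Python/NumPy, not the authors' code) generates a complex ESN atom on an assumed sampling grid; ε = 0 recovers the ordinary Gabor atom.

    import numpy as np

    def esn_atom(t, u, s, xi, eps):
        # Complex ESN atom, Eqs. (3)-(4): a scaled, translated and modulated
        # epsilon-skew-normal envelope; eps = 0 yields the Gabor atom.
        tau = (t - u) / s
        envelope = 2**0.25 * np.exp(-np.pi * tau**2 / (1 + np.sign(tau) * eps)**2)
        return envelope / np.sqrt(s) * np.exp(1j * xi * t)

    # Hypothetical grid: 800 ms sampled at 1 kHz, 20 Hz modulation
    t = np.arange(0.0, 0.8, 1e-3)
    gabor = esn_atom(t, u=0.4, s=0.2, xi=2 * np.pi * 20, eps=0.0)  # symmetric
    skew  = esn_atom(t, u=0.4, s=0.2, xi=2 * np.pi * 20, eps=0.7)  # positive skew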
C. Multichannel and Topographic Matching Pursuit with Epsilon-Skew-Normal atoms

The MMP and TMP algorithms using the ESN dictionary are similar to the standard MMP and TMP methods [2, 4]. As the ESN dictionary can be infinite, a sampled dictionary is applied in order to calculate a rough estimate of the optimal parameter set. Scale, translation and modulation are sampled applying a dyadic scheme [4], whereas the skewness factor ε is sampled in steps of 0.1. The optimal atom is chosen iteratively from the redundant dictionary based on a similarity measure, i.e. the joint energy of the multichannel data, weighted and subtracted from the dataset:

R_i^{(M+1)} = R_i^{(M)} - \left\langle R_i^{(M)}, G_{(s_n^*, u_n^*, \xi_n^*, \varepsilon_n^*)} \right\rangle G_{(s_n^*, u_n^*, \xi_n^*, \varepsilon_n^*)}    (5)

where R_i^{(M)} - Mth-order residual of channel i and G_{(s_n^*, u_n^*, \xi_n^*, \varepsilon_n^*)} - best matching ESN atom according to

(s_n^*, u_n^*, \xi_n^*, \varepsilon_n^*) = \arg \max_{(s_n, u_n, \xi_n, \varepsilon_n)} \sum_{i=1}^{N} \left| \left\langle R_i^{(M)}, G_{(u_n, s_n, \xi_n, \varepsilon_n)} \right\rangle \right|^2    (6)

In order to refine the parameter set, the Nelder-Mead Simplex Method is used. The decomposition is terminated by reaching a predefined stop criterion, which can be based on signal characteristics such as energy or variation measures.
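One MMP-style iteration of Eqs. (5)-(6) over a sampled dictionary can be sketched as follows (Python/NumPy; a simplified illustration rather than the authors' implementation). It reuses the esn_atom helper from the previous sketch and assumes unit-norm atoms, complex (analytic) residuals and a precomputed parameter grid; the grid maximizer would then be refined with a local optimizer such as SciPy's Nelder-Mead.

    import numpy as np

    def mmp_iteration(R, t, grid):
        # Single multichannel MP step: select the atom maximizing the joint
        # energy of its projections over all channels (Eq. 6), then subtract
        # the weighted atom from every channel's residual (Eq. 5).
        # R: residuals, shape (channels, samples); grid: iterable of (u, s, xi, eps)
        best_params, best_energy = None, -np.inf
        for (u, s, xi, eps) in grid:
            g = esn_atom(t, u, s, xi, eps)
            g = g / np.linalg.norm(g)               # normalize the atom
            coeffs = R @ np.conj(g)                 # <R_i, G> for each channel i
            energy = np.sum(np.abs(coeffs) ** 2)    # joint energy, Eq. (6)
            if energy > best_energy:
                best_params, best_energy = (u, s, xi, eps), energy
        g = esn_atom(t, *best_params)
        g = g / np.linalg.norm(g)
        coeffs = R @ np.conj(g)
        return R - np.outer(coeffs, g), best_params  # Eq. (5), all channels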
D. Application

Example 1: We used simulated single-channel oscillations (sampling rate 1 kHz) which consisted of a real Gabor and a real, positively skewed ESN atom (cf. Table 1) with varying translations, and additive white Gaussian noise.

Table 1: Parameter sets for the simulation of single-channel oscillations

                 Gabor atom    ESN atom
scale [ms]       200           200
frequency [Hz]   30            20
phase            0             0
skewness         0             0.7

We applied the single-channel MP algorithm using both dictionaries separately. The comparison was based on the decomposition error, which we defined as:

E_{decomp} = \sum_{i=1}^{N} \sum_{j=1}^{T} \left( G_{i,j}^{sim} - G_{i,j}^{calc} \right)^2    (7)

with N - number of iterations, T - number of samples, G_{i,j}^{sim} - jth sample of the ith simulated atom, G_{i,j}^{calc} - jth sample of the ith calculated atom. Furthermore, we defined the overlap of the atoms as the number of samples where both atoms (Gabor and ESN) had a value higher than 10% of their respective maximum. We performed two different trials: decomposition error against overlap (SNR = 3 dB) and decomposition error against SNR (25 samples overlap).

Example 2: For the verification of suitability, we analyzed data from a study similar to [9] containing 60-channel electroencephalographic recordings (international 10-10 system, common average reference, sampling rate 5 kHz) during electrical peripheral stimulation of the median nerve (6000 trials with a stimulation frequency of 2 Hz). According to the literature, electrical stimulation of peripheral somatosensory nerves evokes two different early brain activities, a low-frequency response (up to 250 Hz) and high frequency oscillations (HFOs) (mean frequency around 600 Hz). The decomposition of these HFOs was the basis for comparing the dictionaries. Hence, the preprocessing consisted of artifact rejection, filtering (4th order Butterworth, 450 - 750 Hz), baseline correction (-20 to -2 ms), and averaging. We performed multichannel signal decomposition using MMP and TMP with both the Gabor and the ESN dictionary. The decomposition was terminated as soon as the standard deviation of the estimated component was lower than twice the standard deviation of a noise estimate derived from a prestimulus interval (-25 to -5 ms). The analysis was limited to the interval from 10 to 30 ms, as three high frequency oscillatory components are reported to be evoked in this period: a thalamic component (P15), a tangential component (N20) localized in Brodmann area 3b and a radial activation (P25) located in Brodmann area 1 [9].

III. RESULTS

Example 1: The decomposition error as a function of overlap is presented in Figure 3a for both methods. For low overlap, the decomposition error is approximately constant and lower for the ESN dictionary compared to the Gabor dictionary. With increasing overlap, the error rises; for high overlap, both error plots converge. As a function of the SNR, the decomposition error is lower for the ESN dictionary, as can be seen in Figure 3b. The decomposition error of both methods decreases with increasing SNR, converging to zero for the ESN and to approximately four for the Gabor dictionary. In both trials, the symmetric oscillatory component was approximated identically with both methods. Hence, the decomposition error is induced by the approximation of the asymmetric oscillatory component.

Fig. 3: Decomposition error as a function of (a) overlap and (b) SNR.

Example 2: The EEG recording is presented in Figure 4a, which was decomposed using MMP and TMP until the stop criterion was met. Both dictionaries, Gabor and ESN, were applied. The results of the TMP decomposition can be seen in Figure 4b and Figure 4c, respectively. Similar results were achieved with the MMP algorithm. Both decompositions provide three components which are consistent with the literature. Scale, translation and frequency of the estimated components are listed in Table 2. The residual energy was 18% higher for the Gabor dictionary.
Fig. 4: TMP decomposition of a real EEG recording (a) using the ESN (b) and Gabor dictionary (c). Each decomposition consists of three oscillatory components with a frequency close to 600 Hz.

Table 2: Parameter sets of the estimated high frequency oscillations

                   atom 1            atom 2            atom 3
                   Gabor    ESN      Gabor    ESN      Gabor    ESN
scale [ms]         3.91     4.03     3.18     3.45     5.04     4.99
translation [ms]   14.58    14.65    18.70    18.83    23.48    22.97
frequency [Hz]     570.41   565.04   607.76   631.96   626.41   608.00
ε                  -        0        -        0.04     -        0.32

IV. DISCUSSION

Example 1: The lower decomposition error achieved with the ESN dictionary is to be attributed to the enhanced adaptation to asymmetric oscillations. As the symmetric oscillation was approximated identically with both dictionaries, the ESN dictionary is applicable to symmetric components without adverse effects compared to the Gabor dictionary. High overlap values have an impact on the estimation of the skewness parameter, causing the skewness parameter to approach zero. Therefore, the decomposition errors of the ESN and Gabor based decompositions converge for high overlap.

Example 2: The results achieved using both methods are consistent with the literature. The HFOs are decomposed into three overlapping components. Due to the overlap, the difference between the ESN and Gabor decompositions is small (cf. Example 1). However, the residual is lower for the ESN based multichannel MP decomposition. In some of the recordings, a fourth component was detected at the end of the latest component, which had a clearly lower amplitude and slightly lower frequency compared to the estimated P25 oscillation. We hypothesize that this activation is caused by a chirp-like activation due to an attenuation of the last HFO. As both dictionaries are not able to create chirp-like activations using one component, two oscillations were detected. This hypothesis will be the subject of further studies.

V. CONCLUSION

To our knowledge this is the first study which presents and compares single- and multichannel MP algorithms using Gabor and ESN atoms. We conclude that ESN atoms provide a proper adaptation to asymmetric oscillatory signals. The single- and multichannel signal decompositions can be used to estimate time-frequency representations as long as attention is paid to the characteristic distortions. Due to the enhanced adaptation to asymmetric signals, ESN atoms might improve MP based source localizations.

ACKNOWLEDGMENTS

This work was supported by the Deutsche Forschungsgemeinschaft (Wi 1166/9-1). The authors would like to thank Theresa Götz (Biomagnetic Center, Department of Neurology, Friedrich Schiller University Jena, Germany) for providing the EEG data.

REFERENCES

1. Mallat S, Zhang Z (1993) Matching pursuit with time-frequency dictionaries. IEEE Trans Sig Proc 41:3391-3415
2. Gribonval R (2003) Piecewise linear source separation. Proc. SPIE 5207 (San Diego, CA, USA):297-310
3. Gratkowski M, Haueisen J, Arendt-Nielsen L, Chen ACN, Zanow F (2008) Decomposition of Biomedical Signals in Spatial and Time-Frequency Modes. Meth Inf Med 47:26-37
4. Gratkowski M, Haueisen J, Arendt-Nielsen L, Zanow F (2007) Topographic Matching Pursuit of spatio-temporal bioelectromagnetic data. Przeglad Elektrotechniczny 83:138-141
5. Durka PJ, Matysiak A, Martínez Montes E, Valdés Sosa P, Blinowska KJ (2005) Multichannel matching pursuit and EEG inverse solutions. J Neurosci Meth 148:49-59
6. Lelic D, Gratkowski M, Valeriani M, Arendt-Nielsen L, Drewes AM (2009) Inverse Modeling on Decomposed Electroencephalographic Data: A Way Forward? J Clin Neurophysiol 26:227-235
7. Geva AB, Pratt H, Zeevi YY (1995) Spatio-temporal multiple source localization by wavelet-type decomposition of evoked potentials. Electroen Clin Neuro 96:278-286
8. Mudholkar GS, Hutson AD (2000) The epsilon-skew-normal distribution for analyzing near-normal data. J Stat Plan Infer 83:291-309
9. Jaros U, Hilgenfeld B, Lau S, Curio G, Haueisen J (2008) Nonlinear interactions of high-frequency oscillations in the human somatosensory system. Clin Neurophysiol 119:2647-2657
Author: Daniel Strohmeier
Institute: Institute of Biomedical Engineering and Informatics, Ilmenau University of Technology
Street: Gustav-Kirchhoff Str. 2
City: Ilmenau
Country: Germany
Email: [email protected]
On the Empirical Mode Decomposition Performance in White Gaussian Noise Biomedical Signals

A. Karagiannis and Ph. Constantinou

National Technical University of Athens, Electrical and Computer Engineering Department, Athens, Greece

Abstract— Empirical Mode Decomposition (EMD) is widely used in the biomedical field for electrocardiogram (ECG) processing. Removal of high-frequency noise, which is one of the main artifacts that corrupt the ECG, is carried out by proper selection of the Intrinsic Mode Functions (IMF) produced by EMD and partial signal reconstruction. In this paper a study of the influence of White Gaussian Noise in synthetic electrocardiograms at various Signal to Noise Ratios (SNR), in terms of the total number of Intrinsic Mode Functions, is presented. Simulations reveal that a pre-processing stage before the application of EMD optimizes processing time by reducing the total number of IMFs without significant loss of information content.

Keywords— Empirical Mode Decomposition, White Gaussian Noise, Biomedical signal processing.

I. INTRODUCTION

Biosignals arising from physical processes are often considered linear, and time series data sampled from these biosignals are processed under the assumption of stationarity. The validity of these two assumptions, even in portions of signals, motivates the application of signal processing techniques and information extraction methods for feature identification and signal complex detection. Most of the signals related to dynamic biological systems that are ruled by nonlinear equations are considered to be nonlinear and nonstationary. Dealing with nonlinearity and nonstationarity requires a processing method of an adaptive nature. Unlike Fourier-based methods, the Empirical Mode Decomposition (EMD) [1] decomposes a signal into its components adaptively, without using an a priori basis. The decomposition is based on the local time scale of the data. The adaptive nature of the process successfully decomposes time series from nonlinear processes and nonstationary signals in the time domain. Characteristic time scales in the data are the basis for the identification of intrinsic oscillatory modes and the decomposition. Each component extracted from the original signal through an iterative sifting process is required to satisfy certain conditions in order to be characterized as an intrinsic mode function (IMF). Application of EMD to time series data results in the production of a set of IMFs and a residue signal. The notion behind this
procedure is that a subset of the IMFs is directly related to the underlying physical process. The variety of literature references reveals the extensive range of applications of EMD in different areas of the biomedical engineering field. In particular, there are publications concerning the application of EMD to the study of heart rate variability (HRV) [2], analysis of respiratory mechanomyographic signals [3], ECG enhancement, artifact and baseline wander correction [4], R-peak detection [5], crackle sound analysis in lung sounds [6] and enhancement of cardiotocograph signals [7]. The acceptance of the method as a processing tool is stressed by the large number of publications in diverse areas of signal processing, including financial applications [8], fluid dynamics, ocean engineering [9] and electromagnetic field time series analysis [10]. Interest is focused on the interpolation techniques employed by EMD, proposing the replacement of the original cubic spline fitting by other kinds of interpolation or by a parabolic partial differential equation. Amplitude peaks and frequency components have been detected by various techniques [11]. Analysis of the characteristics of EMD and its behavior in stochastic signals with broadband noise reported that EMD acts essentially as a dyadic filter bank resembling wavelet decomposition [12]. The contributions of this work lie in two aspects. First, a numerical experimental study of the EMD performance, in terms of the number of extracted IMFs as a function of SNR and signal length, is analyzed. Second, a mixed scheme of processing is proposed in order to remove a main source of artifacts in ECG signals. Quantitative experiments are carried out for synthetic noise cases in order to establish a solid base for EMD performance when applied to ECG signals, in conjunction with the conclusions of previous work [13]. The outline of the paper is as follows. In Section 2 a brief review of the Empirical Mode Decomposition is presented. The methodological procedure and the synthetic biosignals, as well as the white Gaussian noise characteristics, are explained in Section 3. Section 4 presents the experimental results and demonstrates the performance of EMD in terms of the number of extracted IMFs after the application of the preprocessing stage, compared with the non-application case. Finally, conclusions are drawn in Section 5.
II. EMPIRICAL MODE DECOMPOSITION

The empirical mode decomposition does not require any known basis function and is considered a fully data driven mechanism suited for nonlinear processes and nonstationary signals. Each component extracted (IMF) is defined as a function with:

• an equal number of extrema and zero crossings (or at most differing by one);
• envelopes (defined by all the local maxima and minima) that are symmetric with respect to zero. This implies that the mean value of each IMF is zero.

Given a signal x(t), the algorithm of the EMD can be summarized as follows:

1. Locate the local maxima and minima of d0(t) = x(t).
2. Interpolate between the maxima and connect them by a cubic spline curve. The same applies for the minima, in order to obtain the upper and lower envelopes eu(t) and el(t), respectively.
3. Compute the mean of the envelopes:

m(t) = \frac{e_u(t) + e_l(t)}{2}    (1)

4. Extract the detail d1(t) = d0(t) - m(t) (sifting process).
5. Iterate steps 1-4 on the residual until the detail signal dk(t) can be considered an IMF (i.e., it satisfies the two conditions): c1(t) = dk(t).
6. Iterate steps 1-5 on the residual rn(t) = x(t) - cn(t) in order to obtain all the IMFs c1(t), ..., cN(t) of the signal.

The result of the EMD process is N IMFs (c1(t), c2(t), ..., cN(t)) and a residue signal (rN(t)):

x(t) = \sum_{n=1}^{N} c_n(t) + r_N(t)    (2)

The first IMFs extracted are the lower order IMFs, which capture the fast oscillation modes, while the last IMFs produced are the higher order IMFs, which represent the slow oscillation modes. The residue reveals the general trend of the time series. In step 5, in order to terminate the sifting process, a commonly used criterion is the sum of differences:

SD = \sum_{t=0}^{T} \frac{\left| d_{k-1}(t) - d_k(t) \right|^2}{d_{k-1}^2(t)}    (3)

When the SD is smaller than a threshold, the first IMF is obtained, and this procedure iterates until all the IMFs are obtained. In this case, the residue is either a constant, a monotonic slope or a function with only one extremum.

III. METHODOLOGY

In this paper synthetic electrocardiogram signals (Fig. 1) are produced by MATLAB code. The sampling frequency of the synthetic signals is 1000 Hz and the amplitude is expressed in normalized values, taking into account the time and magnitude scales of the various complexes of the real biosignal. For comparison purposes, MIT-BIH ECG signals [14] are used as reference signals.

Fig. 1 Synthetic electrocardiogram with White Gaussian Noise added

White Gaussian Noise (WGN) is added to the synthetic electrocardiogram at various energy magnitude scales in order to acquire synthetic biosignals corrupted by WGN at a wide range of SNRs. Only SNRs higher than 0 dB are considered, because a significant noise level distorts the ECG to such a degree that the low magnitude complexes of the ECG are not identifiable. The lack of an analytical expression for the EMD and the study of the characteristics of White Gaussian Noise in ECG signals direct the methodology towards the conduction of numerical experiments. For the experiment, the white Gaussian noise ECG signals are decomposed into IMFs by the EMD method. A significant number of iterations of the experiment are carried out to acquire a statistically safe sample of data. The first set of corrupted ECG signals at various SNR levels is processed without the application of the preprocessing stage, and the second set after applying the preprocessing stage. In this work, the preprocessing stage is implemented as high pass and low pass filters and the Savitzky-Golay method (Table 1). The number of IMFs is computed over the statistically safe sample mentioned.
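The sifting algorithm of Section II maps directly onto a few lines of code. The following is a minimal single-channel sketch (Python with NumPy/SciPy, not the authors' MATLAB implementation); the SD threshold, the iteration caps and the simplistic boundary handling are assumptions for illustration.

    import numpy as np
    from scipy.signal import argrelextrema
    from scipy.interpolate import CubicSpline

    def sift(x, sd_threshold=0.3, max_sifts=50):
        # Extract one IMF from x via sifting (steps 1-5, Eqs. 1 and 3)
        d = x.copy()
        for _ in range(max_sifts):
            t = np.arange(len(d))
            maxima = argrelextrema(d, np.greater)[0]
            minima = argrelextrema(d, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                break                              # too few extrema for envelopes
            e_u = CubicSpline(maxima, d[maxima])(t)  # upper envelope
            e_l = CubicSpline(minima, d[minima])(t)  # lower envelope
            m = (e_u + e_l) / 2                      # Eq. (1)
            d_new = d - m                            # sifting step
            sd = np.sum((d - d_new)**2 / (d**2 + 1e-12))  # Eq. (3)
            d = d_new
            if sd < sd_threshold:
                break
        return d

    def emd(x, n_imfs=10):
        # Decompose x into IMFs plus residue (Eq. 2)
        imfs, residual = [], x.copy()
        for _ in range(n_imfs):
            imf = sift(residual)
            imfs.append(imf)
            residual = residual - imf
            # stop once the residue has no more oscillatory modes
            if len(argrelextrema(residual, np.greater)[0]) < 2:
                break
        return imfs, residual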
Table 1 Preprocessing stage characteristics

Preprocessing stage    Characteristics
High pass filter       Fcutoff = 3 Hz
Low pass filter - 1    Fcutoff = 40 Hz
Low pass filter - 2    Fcutoff = 49 Hz
Savitzky-Golay         Default values of MATLAB implementation
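The simulation pipeline described above (noise corruption at a target SNR, an optional preprocessing stage per Table 1, then counting IMFs) can be sketched as follows (Python/SciPy). The Butterworth filter order and the Savitzky-Golay window and polynomial order are assumptions, since the paper relies on MATLAB defaults; emd refers to the sketch given after Section II.

    import numpy as np
    from scipy.signal import butter, filtfilt, savgol_filter

    def add_wgn(ecg, snr_db, rng):
        # Corrupt a synthetic ECG with white Gaussian noise at a target SNR (dB)
        noise_power = np.mean(ecg**2) / 10**(snr_db / 10)
        return ecg + rng.normal(0.0, np.sqrt(noise_power), size=ecg.shape)

    def preprocess(x, fs=1000.0, stage="lowpass1"):
        # Preprocessing stages of Table 1 (order 4 is an assumed value)
        if stage == "highpass":
            b, a = butter(4, 3.0 / (fs / 2), btype="high")
        elif stage == "lowpass1":
            b, a = butter(4, 40.0 / (fs / 2), btype="low")
        elif stage == "lowpass2":
            b, a = butter(4, 49.0 / (fs / 2), btype="low")
        else:  # Savitzky-Golay; window and order are assumed values
            return savgol_filter(x, window_length=31, polyorder=3)
        return filtfilt(b, a, x)

    # e.g., for a synthetic ECG array `ecg` (1 kHz) and an SNR of 25 dB:
    # noisy = add_wgn(ecg, 25, np.random.default_rng(0))
    # imfs, _ = emd(preprocess(noisy))
    # print(len(imfs))   # number of extracted IMFs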
IV. EXPERIMENTAL RESULTS

Simulations for various SNR levels and ECG signal lengths are carried out, and the number of extracted IMFs is employed as the performance metric. The simulation experiments are performed on synthetic ECG signals corrupted by White Gaussian Noise.
The preprocessing stage significantly affects the performance of EMD in terms of the number of IMFs. Empirical Mode Decomposition is sensitive to extrema detection in the signal, and the iterative nature of the method adapts to the peaky nature of the original signal and of the residuals as the method proceeds. High pass filtering (Table 1) of the synthetic ECG signals, produced at various SNR levels and corrupted with WGN, affects the performance of EMD to a minimal degree. This kind of filtering is usually employed in Baseline Wander (BW) removal from ECG signals and tends to preserve peaks at high frequencies. Low pass filtering with different cutoff frequencies tends to affect the high frequency content of the original signal, and the extracted IMFs exhibit a lower energy magnitude at higher frequencies, and thus a degraded peaky nature of the outcome of the procedure.

Fig. 2 Number of IMFs before the application (red line) and after the application of the filter (blue line), for simulated noisy ECG signals of lengths 1000 to 2000 samples. Top figure depicts the highpass filter; bottom figure depicts the lowpass1 filter

Fig. 3 Number of IMFs before the application (red line) and after the application of the filter (blue line). Top figure depicts the lowpass2 filter; bottom figure depicts the Savitzky-Golay method applied on simulated noisy ECG
Fig. 4 3D plots of the number of IMFs as a function of the SNR and the length of a simulated White Gaussian Noise ECG before the application of the highpass filter (left figure) and after the application of the highpass filter (right figure)
Fig. 5 3D plots of the number of IMFs as a function of the SNR and the length of a simulated White Gaussian Noise ECG before the application of the lowpass1 filter (left figure) and after the application of the lowpass1 filter (right figure)
Fig. 6 3D plots of the number of IMFs as a function of the SNR and the length of a simulated White Gaussian Noise ECG before the application of the lowpass2 filter (left figure) and after the application of the lowpass2 filter (right figure)
Fig. 7 3D plots of the number of IMFs as a function of the SNR and the length of a simulated White Gaussian Noise ECG before the application of the Savitzky-Golay method (left figure) and after the application of the Savitzky-Golay method (right figure)
The minimization of the number of peaks in the preprocessed signals results in a significant reduction of the number of IMFs produced after the application of EMD, due to the smaller number of iterations and extrema in the synthetic noise-corrupted ECG. A smaller set of IMFs is directly related to a smaller number of iterations, and thus to an optimization of the processing time for the application of the method. The Savitzky-Golay method is considered mainly for its good reported performance in ECG processing, and especially for its ability to preserve the peaks in ECG signals with minimum distortion. As expected, its minimal effect on the peaky nature of the corrupted ECGs results in negligible effects on the number of IMFs extracted by the application of EMD. The curves of the number of IMFs with and without the application of preprocessing (Fig. 3, bottom) appear identical at various SNR levels and for different signal lengths, indicating the good performance of the Savitzky-Golay method in preserving peaks in ECG signals and the sensitivity of EMD to the total number of extrema in the signal.
EMD does not distort significantly the information content of the biosignal, the output of the method (reduced set of IMFs) carries essentially the same amount of information on the physical process compared to the set of IMFs produced without any preprocessing stage. The main difference lies on the fact that the same information content is split into a smaller number of IMFs thus preprocessing could be an indirect way of excluding IMFs without physical meaning. The physical meaning of the IMFs is still an open issue and no formulation or procedure has been proposed in order to identify which IMFs in an IMF set may contribute to the signal and are closely related to the underlying physical process and the production of a signal. Statistical significance tests are carried out in IMFs in order to identify a suitable procedure for proper IMF selection and initiation of the partial signal reconstruction. Information content of IMFs may be properly enhanced by a preprocessing stage and results in this work and previous works suggest that a mixed scheme of preprocessing and EMD along with statistical significance tests may be suitable for determining the IMFs with physical meaning.
V. CONCLUSIONS
REFERENCES
Empirical Mode Decomposition procedure lacks analytical expression, highlighting the need for simulations and statistical approaches as useful tool in order to assess the performance of the technique. In this paper a mixed scheme of preprocessing of ECG biosignal and application of EMD is proposed and performance of EMD is studied. Synthetic ECG signals are employed in various signal lengths and corrupted by White Gaussian Noise in multiple SNR levels. A high number of iterations is carried out in order to establish a safe statistical sample for drawing conclusions. Various filters are implemented for the preprocessing stage with spectral characteristics commonly used in literature and practice for the filtering of ECG signals. These filters are used as first stage in processing chain and the outcome of this procedure is monitored in terms of number of extracted IMFs. The effectiveness of EMD is related to the sensitivity of the method and the main characteristic that determines the ability of the method to extract IMFs is the peaky nature of the signal and especially the number of peaks in the signal. Preprocessing stage causes changes in the spectral characteristic of the input signal to the EMD and this reflects to the output of the method. Depending on the characteristics of the preprocessing stage, a reduced set of IMFs is obtained with an effect in minimization of computation time. Assuming that preprocessing stage and
[1] Huang, N.E., Shen, Z., Long, S.R., Wu, M.C., Shih, H.H., Zheng, Q., Yen, N.-C., Tung, C.C. and Liu, H.H., The empirical mode decomposition and Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. London, V454, 903-995, 1998.
[2] Echeverría, J.C., Crowe, J.A., Woolfson, M.S. and Hayes-Gill, B.R., Application of empirical mode decomposition to heart rate variability analysis. Med. Biol. Eng. Comput., V39, i4, 471-479.
[3] Torres, A., Fiz, J.A., Jané, R., Galdiz, J.B., Gea, J., Morera, J., Application of the Empirical Mode Decomposition method to the Analysis of Respiratory Mechanomyographic Signals. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France.
[4] Blanco-Velasco, M., Weng, B., Barner, K.E., ECG signal denoising and baseline wander correction based on the empirical mode decomposition. Comput. Biol. Med. 2008 Jan; 38(1):1-13.
[5] Nimunkar, A.J., Tompkins, W.J., R-peak detection and signal averaging for simulated stress ECG using EMD. Conf Proc IEEE Eng Med Biol Soc. 2007; 2007:1261-4.
[6] Charleston-Villalobos, S., Gonzalez-Camarena, R., Chi-Lem, G., Aljama-Corrales, T., Crackle Sounds Analysis by Empirical Mode Decomposition. IEEE Engineering in Medicine and Biology Magazine, Vol. 26, Issue 1, Jan.-Feb. 2007, pp. 40-47.
[7] Krupa, B.N., Mohd Ali, M.A., Zahedi, E., The application of empirical mode decomposition for the enhancement of cardiotocograph signals. Physiol. Meas. 30 (2009) 729-743.
[8] Huang, N., Wu, M., Qu, W., Long, S., Shen, S., Applications of Hilbert-Huang transform to non-stationary financial time series analysis. Appl Stochastic Models Business Industry 2003; 19:245-68.
[9] Rao, R., Hsu, E-C., Hilbert-Huang transform analysis of hydrological and environmental time series. Water Sci Technol Libr 2008; 60.
[10] Karagiannis, A., Constantinou, Ph., Electromagnetic Radiation Monitoring Time Series Analysis Based on Empirical Mode Decomposition. BIOEM 2009, Davos, Switzerland.
[11] Feldman, M., Analytical basics of the EMD: Two harmonics decomposition. Mechanical Systems and Signal Processing, Volume 23, Issue 7, October 2009, Pages 2059-2071.
[12] Flandrin, P., Rilling, G., Goncalves, P., Empirical mode decomposition as a filter bank. IEEE Signal Process Lett 2004; XI:112-4.
[13] Karagiannis, A., Constantinou, Ph., Noise component identification in biomedical signals based on Empirical Mode Decomposition. Proceedings of the 9th International Conference on Information Technology and Applications in Biomedicine, ITAB 2009, Larnaca, Cyprus.
[14] http://www.physionet.org/physiobank/database/mitdb
Simulation of Biomechanical Experiments in OpenSim

I. Symeonidis, G. Kavadarli, E. Schuller, and S. Peldschus

Institution of Legal Medicine, Munich University, Germany

Abstract— Biomechanical experiments produce large amounts of data that are complicated to visualize, analyze and interpret. Simulation of an experiment can provide a framework to integrate all the different parameters measured on the subject and can help to understand their influence in the model. The OpenSim software is used for this purpose. OpenSim is free open-source software, developed at Stanford University, which can simulate active musculoskeletal models and provide information about muscle activity during a motion. In this paper the analysis of an experiment concerning a volunteer's head-neck reaction during motorcycle braking is presented.

Keywords— muscle, simulation, biomechanics.
I. INTRODUCTION

During a biomechanical experiment several parameters are measured, such as kinematics, electromyography and ground reaction force. These measurements produce a large amount of data that is then analyzed to provide indicators about a diagnosis or to support a hypothesis about the studied subject. Since the focus of the study is usually a human, the complicated musculoskeletal structure, the anthropometric differences and the variance in behavior during a measurement add further variability to the recorded data, making the results difficult to compare with normative data or to draw conclusions from. The OpenSim software provides a platform to study each experiment individually and to understand the cause and effect relationships in musculoskeletal systems [1]. An application to the analysis of an experiment is presented as an example of this method.
II. EXPERIMENT

The experiment was performed to study the kinematic behavior of volunteers during motorcycle braking [2]. The focus of the study is head-neck biomechanics and, in more detail, the relationship between the muscle activity in the neck and the kinematics of the head during motorcycle braking. A device was built that reproduces the geometry of a motorcycle (MGD). The MGD included only the motorcycle-rider interface. The MGD was mounted on a sled; a construction with a falling weight was used to accelerate the sled with a constant acceleration of 0.4 g. The deceleration that emerges from motorcycle braking at a certain traveling speed was simulated with backward motion of the MGD under constant acceleration. A helmet with a weight of 1 kg was also used. The acceleration of the sled was measured with a uniaxial accelerometer. An optoelectronic motion capture system was used to capture the motion of the volunteer. Reflective markers were placed on several anatomical landmarks of the volunteer's body. For the head and neck in particular, markers were placed on the prominent process of the seventh cervical vertebra (C7), on the occipital protuberance, on the sternum, and on the tragus of each ear. The motion of the markers was recorded with eight high-speed cameras with red-light strobes, with the frame rate set to 1000 Hz. A surface EMG device with eight channels was used to measure the activity of three neck muscles (splenius capitis, sternocleidomastoid and posterior cervical muscles) and one arm muscle (lateral head of triceps brachii), with the reference electrode placed on the mastoid process. The volunteer's instrumentation is presented in figure 1.

Fig. 1 Subject setup for the experiment

The EMG signal was rectified and onsets were calculated; it was then filtered with a band-pass filter to minimize the measurement noise, and finally a low-pass filter was applied to create an envelope of the signal. The motion capture data was treated as presented in [3]. The EMG and the motion capture data were synchronized so that the muscle activation could be studied in relation to the neck kinematics; results are shown in figure 2.
Table 1 Inertial properties of the head-neck bones [8]

Segment   mass (kg)   Ixx (kg·mm²)   Iyy (kg·mm²)   Izz (kg·mm²)
head      4.690       18100          23600          17300
c1        0.22        220            220            420
c2        0.25        250            250            480
c3        0.24        240            240            465
c4        0.23        230            230            440
c5        0.23        230            230            450
c6        0.24        240            240            470
c7        0.22        220            220            430
Fig. 2 The EMG activity of the sternocleidomastoid, splenius capitis, posterior cervical and triceps muscles for the left and right side, synchronized with the C7 and occiput rotation from the initial posture [9]

With the current version of OpenSim (2.01), loads can be defined in musculoskeletal models. In this way the sled deceleration was simulated: a constant force producing the same average acceleration (figure 3) was applied to the center of mass of the torso, based on Newton's second law.
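In equation form (the symbols are ours for illustration, not the paper's notation):

\[
F_{\text{applied}} = m_{\text{torso}}\,\bar{a},
\qquad
\bar{a} = \frac{1}{T}\int_{0}^{T} a_{\text{sled}}(t)\,dt ,
\]

where the average acceleration \(\bar{a}\) is taken over the measured sled acceleration trace.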
III. SIMULATION

The OpenSim software was used to simulate the experiment presented previously. OpenSim is a free open-source code that unifies a multibody dynamics engine with a detailed muscle simulation code and an optimization algorithm. The multibody system has rigid skeletal bones paired with muscles that are used as constraints and actuators, and it is able to synthesize the equations of motion for this system. The muscle simulation part includes a detailed Hill muscle model [4,5,6] and, additionally, parameters that describe the pennation angle, maximum contraction velocity, damping, optimal fiber length, tendon slack length, tendon force-length curve and wrapping surfaces. The actuator redundancy (more muscles than available degrees of freedom) creates an indeterminate problem that cannot be resolved directly with an inverse dynamic analysis. For this reason static optimization is performed. For the study of head and neck biomechanics, the model proposed in [7] was used, with some modifications to the maximum muscle forces of individual muscles and the addition of the inertial properties of the skeletal structures of interest. Markers were placed at the same anatomical points as in the experiment. Additional weight was placed on the head to represent the helmet.
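A generic Hill-type formulation of the kind cited above expresses the muscle force as (a textbook sketch in generic notation, not the exact formulation of [4-6]):

\[
F^{M} = F_{0}^{M}\left[\, a\, f_{L}(\tilde{\ell})\, f_{V}(\tilde{v}) + f_{P}(\tilde{\ell})\,\right]\cos\alpha ,
\]

where \(a\) is the activation, \(f_L\) and \(f_V\) are the active force-length and force-velocity curves, \(f_P\) is the passive force-length curve, \(\tilde{\ell}\) and \(\tilde{v}\) are the fiber length and velocity normalized by the optimal fiber length and maximum contraction velocity, and \(\alpha\) is the pennation angle.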
Fig. 3 Sled acceleration (blue) and average value (green)

The simulation model is presented in figure 4.
Fig. 4 Model setup for the simulation: skeleton (white), muscles (red), markers (yellow), force (green)
Finally, the simulation procedure presented in Table 2 was followed.

Table 2 Simulation procedure

Step  Action
1     Scaling of the musculoskeletal model to the subject's anthropometry, using a static posture from the kinematics acquired with the motion capture system.
2     Inverse kinematics (joint angles and translations of the model's segments) are calculated from the kinematics of the subject.
3     Inverse dynamics (joint moments) are calculated from the kinematics of the model.
4     The residual reduction algorithm is applied to make the kinematic data and the forces measured during the test more dynamically consistent.
5     Computed muscle control uses a static optimization criterion to distribute the joint torques to several muscles.
6     A forward dynamics simulation uses the calculated muscle activations to drive the model through the performance of the subject during the experiment.

Scaling is performed based on a combination of:
• measured distances between the markers on the subject and the markers on the model, taken from a pose of the subject right before the initiation of the experiment, and
• manually specified scale factors based on anthropometric measurements.

The inverse kinematics are calculated from a weighted least squares problem that finds the pose minimizing the distance between the subject's and the model's markers. The inverse dynamics calculate the forces and moments at each joint responsible for a given movement: from the kinematics describing the movement of the model and the kinetics (the external loads applied to the model), the equations of motion of the system are synthesized and solved in the inverse dynamics sense. An example of the results is shown in figure 5 for the pitch moments at the cervical vertebrae joints.

Fig. 5 Pitch joint moments on the cervical vertebrae joints

Static optimization is used as an extension to inverse dynamics that further resolves the joint moments into individual muscle forces at each instant in time, according to the muscle model. Constraint functions are used, such as the selection of ideal force-generator muscles based on their moment arms or their Hill muscle properties (velocity - length - force surface), while the optimization process minimizes an objective function of muscle activation. After the muscle activations have been calculated from the static optimization, the forward dynamics module can perform a simulation of the motion based on the muscle activations and the external loads. In this way a complete description of the motion can be established from the ground up.
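A common form of this static optimization problem is (a sketch in generic textbook notation; the exponent \(p\) and the symbols are not taken from the paper):

\[
\min_{a_1,\dots,a_M} \sum_{m=1}^{M} a_m^{\,p}
\quad \text{subject to} \quad
\sum_{m=1}^{M} a_m F_m^{0}\, r_{m,j} = \tau_j, \qquad 0 \le a_m \le 1,
\]

where \(F_m^0\) is the maximum isometric force of muscle \(m\), \(r_{m,j}\) its moment arm about joint \(j\), and \(\tau_j\) the joint moment obtained from inverse dynamics.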
IV. CONCLUSIONS

The study of the relationship between muscle activity and the produced motion in complicated dynamic systems like the neck presents several challenges:

• The anatomical structure of the neck, with several joints of complicated geometry, produces a large number of degrees of freedom.
• The even larger number of actuators (muscles) produces a problem of redundancy.
• The muscle activation and co-activation mechanics.
These issues can be studied in more detail, yielding better biomechanical insight, by simulating each experiment individually, as proposed with OpenSim, instead of directly analyzing the captured signals for all the experiments. In this way the nervous system's coordination of muscle activation can be analyzed, and some insight into muscle recruitment by the nervous system can be gained. However, a significant gap concerning the validity of the model's individual components has to be closed before reliable results can be produced.
ACKNOWLEDGMENT

This research was performed with funding from the EU Marie Curie Project MYMOSA.
REFERENCES

1. Delp SL, Anderson FC, Arnold AS, Loan P, Habib A, John CT, Guendelman E, Thelen DG (2007) OpenSim: Open-source Software to Create and Analyze Dynamic Simulations of Movement. IEEE Transactions on Biomedical Engineering, vol. 54, p. 1940-1950.
2. Symeonidis I., Kavadarli G., Peldschus S., Schuller E., Fraga F., van Roij L., Laboratory set-up for the analysis of motorcyclists' behaviour during deceleration. The 6th Int. Forum of Automotive Traffic Safety (INFATS), Xiamen, China, 2008
3. Symeonidis I., Kavadarli G., Schuller E., Peldschus S., Capturing human motion inside a moving vehicle that obstructs the camera field of view. CMBBE conference, Valencia, Spain, 2010
4. Hill, A. V. The heat of shortening and the dynamic constants of muscle. Proc. R. Soc. Lond. B Biol. Sci. 126: 136-195, 1938
5. Huxley, A. F. Muscle structure and theories of contraction. Prog. Biophys. Biophys. Chem. 7: 255-318, 1957
6. Zajac, F. E. Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control. In: CRC Critical Reviews in Biomedical Engineering, edited by J. R. Bourne. Boca Raton, FL: CRC, 1989, vol. 17, p. 359-411
7. Vasavada A, Li S, Delp SL, Influence of Muscle Morphometry and Moment Arms on the Moment-Generating Capacity of Human Neck Muscles. Spine, 23:4:412-422, 1998
8. de Jager, M. K. J. (1996). Mathematical head-neck models for acceleration impacts. Ph.D. thesis, University of Eindhoven, Netherlands
9. Symeonidis I., Kavadarli G., Brenna C., Zhao Z., Fraga F., van Roij L., Schuller E., Peldschus S., Developing a method to simulate injury mechanisms in motorcycle crashes. J. of Biological Physics and Chemistry [in press]
10. OpenSim 2.01 User's Guide at https://simtk.org/home/opensim

Author: Symeonidis Ioannis
Institute: Institution of Legal Medicine, University of Munich
Street: Nussbaumstr 26
City: Munich
Country: Germany
Email:
[email protected]
Comparing Sensorimotor Cortex Activation during Actual and Imaginary Movement

A. Athanasiou, E. Chatzitheodorou, K. Kalogianni, C. Lithari, I. Moulos, and P.D. Bamidis

Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki (AUTH), Greece

Abstract— Mobility and movement restoration is one of the main goals of brain computer interface (BCI) research. Motor imagery is comprehensively studied for use as a BCI modality. Non-invasive EEG-based BCIs are most commonly applied, and many EEG features (such as ERD/ERS of SMR) are used for movement classification and device control. As BCIs need to provide more real-time response and more natural, fluid control, it is imperative to identify and study appropriate modalities. To that end, we focused on sensorimotor cortex activation during hand (biceps) and foot (quadriceps) movement in healthy subjects, both actual and imaginary. Those movements are distinctly represented in the cortex, and their source can be identified with appropriate signal analysis methods. In this work, we present the preliminary results of our study, confirming that, in general, the sensorimotor cortex is activated in motor imagery similarly to real movement, by studying the changes in the EEG mu-rhythm (synchronization/desynchronization).

Keywords— brain computer interface, cortex activation, motor imagery, mu rhythm, sensorimotor.
I. INTRODUCTION

Motor imagery, imagining a motor task or movement without actually executing it, is nowadays widely applied in the realization of brain computer interfaces that aim at mobility and movement restoration. The concept behind these applications lies in exploiting a person's will to move, regardless of whether he/she is able to execute this movement, due to reasons like spinal cord injury [1] (or stroke or amyotrophic lateral sclerosis). One main issue in designing such systems has been the recognition and classification of movement volition. Cortical activation during motor imagery has been widely studied using a variety of available methods, including regional cerebral blood flow (rCBF) and functional magnetic resonance imaging (fMRI). These methods revealed activation in the supplementary motor area (SMA) and premotor cortex but not in the primary sensorimotor cortex [2]. A commonly used EEG feature corresponding to movement volition has been identified in the mu-rhythm oscillations [2]. This rhythm is associated with the inhibition of movement, typically decreases in amplitude when the corresponding motor areas are activated, and is best recorded over the area of the primary sensorimotor cortex [3]. There is not yet firm agreement among researchers on the exact range of the mu-rhythm, since some researchers record it at 8-12 Hz [4], or wider [1], generally overlapping with the alpha-rhythm and the lower part of the beta-rhythm band [2]. This rhythm (frequently called the sensorimotor rhythm, SMR) is commonly studied using event-related desynchronization/synchronization (ERD/ERS) [4], as it is associated with the inhibition of motion. ERD usually denotes the activation of cortical areas, while ERS denotes a decrease in excitability and information processing [5]. As brain computer interfaces need to become more user friendly and efficient, certain drawbacks have to be addressed, such as the large classification period that fatigues the users and the system unresponsiveness due to waiting periods [6]. Ultimately, BCI systems have to accurately and rapidly classify different movements, aiming to successfully interpret the person's will to move [7]. In our work we attempt to make a step towards addressing such issues, using a classic ERD/ERS SMR motor imagery paradigm. We study motor imagery activation patterns regarding the feasibility of classifying motor volition relatively fast and relatively accurately, by examining neural signals by signal source and time of response.
II. METHODOLOGY

Methods: In this experiment 7 healthy right-handed subjects participated, 4 male and 3 female, with a mean age of 28.1 years (range 23-37). Each subject was asked to execute 4 tasks: a) real hand movements (RH task), b) imagery hand movements (ImH task), c) real foot movements (RF task) and d) imagery foot movements (ImF task). Each task consisted of 95 trials, divided into sets of 19 trials. There was a 1 minute rest between sets, and a 5 minute rest between tasks. The subjects were presented visual feedback during the whole procedure, in the form of the word "move" appearing on a computer screen for 1 sec (in green letters on a black background), followed by a black screen (resting period). During this period the subjects had to perform the respective motor task. Simultaneously with the "move" cue, a small 10x10 pixel white square appeared at the lower left corner of the feedback screen, which was covered with an optic fiber. This way the "move" period was recorded as an additional trigger channel. The whole visual feedback program was realized in Flash Player.
The procedure was accurately explained beforehand, and the participants had little, if any, training in the concept of imagining the movement. Materials and features: We used a 64-channel Nihon-Kohden EEG and an active electrode cap (actiCAP by Brain Products). Measurements were taken with 17 EEG electrodes (cp3, cp1, cpz, cp2, cp4, c5, c3, c1, cz, c2, c4, c6, fc3, fc1, fc2, fc4), referenced to the A1 and A2 mastoid electrodes. A ground electrode was applied and the impedance threshold was set below 20 kOhm. The mentioned electrodes covered the skull area above the sensorimotor cortex, loosely corresponding to the primary motor area, primary somatosensory area and premotor cortex. The EEG feature that we opted to focus on was the mu-rhythm ERS/ERD. We chose to focus on a slightly larger range than usual (8-15 Hz), possibly including a small fraction of the lower beta band [2]. Signal analysis: We used the latest versions of Matlab (7.01) and the EEGLAB toolbox (6.0). Artifacts, such as eye blinks, were removed with ICA (independent component analysis). The optic fiber signal was used as an event channel, and epochs were set from 0.8 sec pre-stimulus to 2.2 sec post-stimulus. ERD/ERS was tested between 300 milliseconds pre-stimulus and three separate time intervals for each subject (100-400 ms, 400-700 ms and 700-1000 ms post-stimulus). The 95 epochs of each task were averaged in sets of 19 for each subject. For each participant, task (4) and set of trials (5), ERD/ERS was calculated, resulting in 5x4 values per participant. Real movement tasks were compared to imagery ones (RH-ImH, RF-ImF) with Student's T-tests, to examine the difference between cortical activation for real and imagery movements. RH was also compared to RF, and ImH to ImF, to examine differences in cortical source discrimination between hand and foot.
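As an illustration of the band-power ERD/ERS computation described above (a sketch; the filter order and epoch layout are assumptions, while the 8-15 Hz band, the 300 ms reference window and the 100-400 ms interval follow the text):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_ers_percent(epochs, fs, band=(8.0, 15.0),
                    ref_win=(-0.3, 0.0), act_win=(0.1, 0.4), t0=0.8):
    """Classic band-power ERD/ERS: negative values = desynchronization.

    epochs: array (n_trials, n_samples), each epoch starting t0 seconds
    before the stimulus.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    power = filtered ** 2  # instantaneous band power

    def mean_power(win):
        i0 = int((t0 + win[0]) * fs)
        i1 = int((t0 + win[1]) * fs)
        return power[:, i0:i1].mean()

    r = mean_power(ref_win)    # reference-interval power
    act = mean_power(act_win)  # activation-interval power
    return (act - r) / r * 100.0
```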
III. RESULTS

Motor imagery (Figure 1): Four out of seven subjects (57.14%) performed equally well in both trials (subjects 1, 3, 4 and 7), activating their cortex in motor imagery with the same patterns as in real movement in all three time intervals (except for the 700-1000 ms period for subject 3). For example, for subject 4 on the Cz electrode during the first interval (100-400 ms), the T-test for RH-ImH gave p=0.546 and for RF-ImF p=0.65, meaning that the subject managed to produce the same pattern of activation during real and imaginary tasks for both hand and foot. Two out of seven subjects (28.57%, subjects 2 and 5) performed well only in the hand movement trial, failing to produce similar patterns of activation during the RF and ImF tasks. Subject 6 performed well only in the foot tasks. Figure 1 shows failure to produce similar activation patterns by electrode and time interval for each subject.

Fig. 1 ERD/ERS significant difference (p<0.05) between real movements and their imaginary counterparts. This depicts the subjects that were not able to activate the cortex with the same pattern in both real and imaginary tasks
Source detection (Figure 2): In two out of seven subjects (28.57%, subjects 1 and 5) there was no possible discrimination between hand and foot movement at all, regarding the ERD/ERS of the mu-rhythm. In another two subjects (28.57%, subjects 3 and 7) source discrimination was really poor, both spatially and temporally. Two subjects (28.57%, subjects 4 and 7) produced distinctive patterns of real hand and real foot movement across most electrodes and time intervals. Only subject 2 could produce distinctive patterns for imaginary hand and foot movements, but not satisfactorily for the real movements (best discrimination at the cpz electrode). For example, for subject 2 on Cz during the second interval (400-700 ms) the T-test for ImH-ImF revealed a difference of high statistical significance (p=0.015), denoting the ability to discriminate between imagining the movement of the hand and the foot. Figure 2 shows significant discrimination between foot and hand tasks for each subject by electrode and time interval.
Fig. 2 ERD/ERS significant difference (p<0.05) between hand and foot movements (both real and imaginary). This depicts significant signal source classification based on ERD/ERS for two distinctively represented areas in the motor cortex

IV. DISCUSSION

During motor imagery tasks the sensorimotor cortex produces similar patterns of activation as in actual movement. Those patterns fluctuate to a greater degree than when the motor tasks are actually performed, but tend to stabilize with training. While the cortex activation is less intense during imaginary movement, it is possible for subjects to perform better when they train in the concept of visualizing their movements. SMR amplitude and signal source separation appear to be a viable option to classify the subjects' volition regarding motor tasks. Our results seem to indicate that when it comes to motor restoration systems design, a self-paced approach would be preferable [8], since in our experiment each subject tended to have distinct activation patterns as regards both different sets of electrodes and time intervals. It is worth mentioning the limited significant discrimination between ImH and ImF in our healthy participants. This may indicate the need for continuous and longer training of the participants in the imagination of the movements. Meanwhile, there are some limitations to be addressed in our study. First of all, the number of participants is low. However, it seems that the individuality of activation patterns would not be eliminated even with a large number of participants, denoting the need for a self-paced approach and treatment in terms of BCI. Moreover, the absence of a simultaneous electromyographic signal does not provide us the exact response time to the visual stimulus for motion. As a result, we consider that the three test intervals detect ERD/ERS starting from 100 milliseconds pre-stimulus.

V. FUTURE WORK

In our future work we intend to carry out related experiments both with a greater number of healthy subjects and with different groups of patients. The key point of motor imagery research is to ensure that the skill of cortex activation in motor imagery remains unaffected by the various clinical conditions (spinal cord injury, amyotrophic lateral sclerosis and others) and through the course of time and disorder progression. In the future directions of our research we intend to examine functional cortical networks during motor execution and imagery, as well as the disorders' effect on brain connectivity and the plasticity of these networks. In addition, neural networks can be used to examine the discrimination between ImH and ImF, resulting in the recognition of the subject's will as accurately as possible. Overall, the creation of real-time movement substitution systems controlled by the subject's sensorimotor activity appears nowadays to be a more feasible goal than ever [9].

REFERENCES

1. Pfurtscheller G, Linortner P, Winkler R, Korisek G, Muller-Putz G (2009) Discrimination of Motor Imagery-Induced EEG Patterns in Patients with Complete Spinal Cord Injury. Computational Intelligence and Neuroscience. DOI 10.1155/2009/104180
2. Pfurtscheller G, Neuper C (1997) Motor imagery activates primary sensorimotor area in humans. Neuroscience Letters 239, 65-68
3. Arroyo S, Lesser RP, Gordon B, Uematsu S, Jackson D, Webber R (1993) Functional significance of the mu rhythm of human cortex: an electrophysiologic study with subdural electrodes. Electroencephalography and Clinical Neurophysiology 87(3), 76-87
4. Neuper C, Scherer R, Wriessnegger S, Pfurtscheller G (2008) Motor imagery and action observation: Modulation of sensorimotor brain rhythms during mental control of a brain-computer interface. Clin Neurophysiol, DOI: 10.1016/j.clinph.2008.11.015
5. Friedrich EVC et al (2008) A scanning protocol for a sensorimotor rhythm-based brain-computer interface. Biol Psychol, DOI: 10.1016/j.biopsycho.2008.08.004
6. Morash V et al (2008) Classifying EEG signals preceding right hand, left hand, tongue, and right foot movements and motor imageries. Clinical Neurophysiology 119, 2570-2578
7. Zhou J et al (2009) EEG-based classification for elbow versus shoulder torque intentions involving stroke subjects. Computers in Biology and Medicine 39, 443-452
8. Scherer R, Schloegl A, Lee F, Bischof H, Jansa J, Pfurtscheller G (2007) Computational Intelligence and Neuroscience, DOI: 10.1155/2007/79826
9. Editorial, Brain-computer-interface research: Coming of age (2006) Clinical Neurophysiology 117, 479-483
Author: Alkinoos Athanasiou
Institute: Lab of Medical Informatics, Medical School, AUTH
Street: Ag. Dimitriou
City: 54124 Thessaloniki
Country: Greece
Email:
[email protected]
Graph Analysis on Functional Connectivity Networks during an Emotional Paradigm

C. Lithari1, M.A. Klados2, and P.D. Bamidis1

1 Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
Abstract— Electroencephalographic (EEG) signals were recorded from 28 participants as they passively viewed emotional stimuli from the International Affective Picture System (IAPS), categorized into 4 groups ranging in pleasure and arousal. The aim of the study was to examine whether the functional connectivity networks estimated during different emotional stimuli differ in their characteristics. Functional connectivity networks were estimated for the four categories of emotional stimuli using the coherence between each pair of electrodes in the alpha-rhythm frequency band. Graph metrics were calculated for each network and statistically analyzed. Pleasure was found to modify the local efficiency of the networks, with unpleasant stimuli appearing to form clustered networks more readily than pleasant stimuli. Arousal also affected the global efficiency of the functional networks, with high arousing stimuli appearing to form networks with less efficient communication among nodes.

Keywords— emotions, EEG, graphs, connectivity networks.
I. INTRODUCTION

Over the last few years there has been an interest in emotional perception and processing by the human brain. To this direction, researchers have used a variety of stimuli to induce emotional experiences, such as emotional films, pictures, words, and sounds. The International Affective Picture System (IAPS) was developed by Lang et al. [1], containing standardized images rated for emotional pleasure and arousal. Pleasure (valence) ranges from attraction to aversion, whereas arousal is a more general property of the stimulus referring to the level of activation regardless of the direction [2]. Pleasure and arousal of the stimuli appear to modify the Event Related Potentials (ERP), recorded as the electroencephalographic (EEG) response of the brain to the emotional stimuli. The late ERP (P300 component) is found to be modified by arousal, with high arousing visual stimuli eliciting greater amplitudes than low arousing ones [3]. Moreover, the valence of the stimuli affects P100, which was found to be greater for unpleasant than pleasant stimuli [4]. Apart from the classical approach to the amplitude and latency of ERPs, functional connectivity networks can also be extracted from EEG. So far, either linear or non-linear methods have been applied on EEG signals in order to estimate functional connectivity in the sense of direct flow of information between scalp electrodes [5] [6]. Since a graph is a mathematical representation of a network reduced to nodes and connections between nodes, functional connectivity networks can be analyzed with metrics already established for the treatment of graphs [7]. Such metrics are the path length, the cluster index and the density of the nodes, as well as the global and local efficiency, which are similar to the path length and cluster index respectively, but are more representative for undirected networks [8]. To our knowledge, there are only a few studies examining functional connectivity networks estimated from EEG, especially as regards emotions. It was found that alexithymics have reduced coherence between the right frontal lobe and the left hemisphere, independent of film (either emotional or neutral) [9]. Theta long-distance connectivity between the prefrontal and posterior association cortex was enhanced during emotionally positive experience [10]. However, most studies on emotions and coherence do not consider brain processing as a functional network and do not apply graph analysis to it. In particular, this work aims to investigate whether the efficiency index is significantly different among functional connectivity networks derived from the passive viewing of different kinds of emotional stimuli.
II. METHODS

A. Experiment

Participants: 28 healthy persons (14 females), free of any neurological disorder and with normal or corrected-to-normal vision (10/10), participated in the experimental procedure.

Stimuli: Visual stimuli were pictures from the IAPS collection; these were selected according to their emotional content, defined in terms of their pleasure and arousal ratings. The selected stimuli divided the pleasure-arousal 2D space by naturally forming four groups of pictures: pleasant and high arousing (PHA), pleasant and low arousing (PLA), unpleasant and high arousing (UHA) and unpleasant and low arousing (ULA) (Figure 1). The pictures were presented in blocks and the sequence of the blocks was counterbalanced among participants. Each epoch has a 500 msec pre-stimulus period and a 2000 msec post-stimulus period.
B. Analysis

EEG Recordings: All EEG/ERP recordings were initially band-pass filtered (low-pass IIR filter with cut-off frequency 40 Hz, high-pass IIR filter with cut-off frequency 0.5 Hz). The INFOMAX Independent Component Analysis (ICA) algorithm was applied to the filtered EEG data to remove biological artifacts [11]. For all EEG electrodes (19) and each participant, the averaged ERP waveforms were estimated for each of the four picture categories. The coherence for each pair of electrodes was calculated and used as the criterion to form a symmetric matrix for each participant and each picture category (28x4 matrices). The Magnitude Squared Coherence (MSC) of two different EEG signals was computed in the alpha frequency band using the following formula:

\[
C_{xy}(f) = \frac{|P_{xy}(f)|^2}{P_{xx}(f)\,P_{yy}(f)} \qquad (1)
\]

As can be seen from the above formula, the MSC is a function of the Power Spectral Density (PSD) of x and y (Pxx and Pyy) and the cross-PSD (Pxy) of x and y, restricted to the alpha frequency band. The PSD was estimated using the Welch method [12]. According to this method the signals were divided into 400-sample segments with 1-sample overlap.

Graph Analysis: Different coherence thresholds were used to decide whether two electrodes (nodes) were connected with each other. The 28x4 corresponding symmetric adjacency matrices (AM) were calculated for each threshold. Element [i,j] of an AM is equal to 1 if the coherence between electrodes i and j is above the threshold, and 0 otherwise, constituting an unweighted graph. As the coherence between signals x and y is equal to the coherence between y and x, the graphs formed by the AM are undirected as well. Metrics from Graph Theory [12] were calculated for each of the 28x4 graphs. Density (K) was used to quantify the amount of links (L) between the graph nodes (N):

\[
K = \frac{2L}{N(N-1)} \qquad (2)
\]

The Global Efficiency (Eg) of a graph is the arithmetic mean of the inverse of the distance (d) between each pair of nodes [13]. Global efficiency represents how efficient the communication within a network is. The global efficiencies of a fully connected and of an empty graph are equal to 1 and 0 respectively:

\[
E_g = \frac{2}{N(N-1)} \sum_{i \neq j} \frac{1}{d(i,j)} \qquad (3)
\]

The Local Efficiency (El) of a graph is the mean of the global efficiencies of each subgraph S. Each subgraph Si is created by removing the node i and considering the remaining graph consisting of the nodes that were connected to the removed one. Local efficiency reflects the tendency of a graph to form clusters. The local efficiencies of a fully connected and of an empty graph are equal to 1 and 0 respectively:

\[
E_l = \frac{1}{N} \sum_{i=1}^{N} E_g(S_i) \qquad (4)
\]

Since graph metrics were calculated for all 28x4 graphs, repeated measures ANOVA was used to reveal any statistically significant differences between picture categories. Stimulus pleasure (pleasant or unpleasant) and arousal (high or low) were considered as the within-subjects factors.
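As a rough sketch of this pipeline, one could build the thresholded graph and compute the metrics as follows, assuming the SciPy and NetworkX packages. Only the 0.6 threshold and the metric definitions follow the text; the sampling rate, alpha-band limits and window settings are illustrative assumptions.

```python
import numpy as np
import networkx as nx
from scipy.signal import coherence

def alpha_band_graph(eeg, fs, threshold=0.6, band=(8.0, 13.0)):
    """Build an unweighted, undirected graph from pairwise MSC.

    eeg: array (n_channels, n_samples). Band limits and nperseg are
    illustrative, not the paper's settings.
    """
    n = eeg.shape[0]
    G = nx.empty_graph(n)
    for i in range(n):
        for j in range(i + 1, n):
            f, Cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=400)
            alpha = (f >= band[0]) & (f <= band[1])
            if Cxy[alpha].mean() > threshold:  # adjacency rule from Eq. (1)
                G.add_edge(i, j)
    return G

# Example with 19 channels of synthetic data.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 4000))
G = alpha_band_graph(eeg, fs=500.0)
K = nx.density(G)               # Eq. (2)
Eg = nx.global_efficiency(G)    # Eq. (3)
El = nx.local_efficiency(G)     # Eq. (4)
```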
Fig. 1 IAPS stimuli used in our experiment plotted on arousal and pleasure dimensions
III. RESULTS

The coherence threshold to create the functional connectivity brain networks was first set to 0.6. The statistical analysis revealed a significant valence effect on El (p=0.016, F(1,27)=6.66). The El of the functional network created when the participants viewed pleasant pictures (0.531±0.18) was lower than the El when viewing unpleasant pictures (0.572±0.17) (Figure 2). As for the arousal of the emotional stimuli, it affected the Eg of the functional network, reflected in an almost significant effect (p=0.053, F(1,27)=4.095). The Eg of the network created when participants viewed high arousing images (0.739±0.1) was lower than the Eg of the network created during low arousing stimuli (0.755±0.1). The density (K) of the graphs was not affected by the arousal or pleasure of the pictures.
Fig. 2 Graphs created from average coherence between pairs of electrodes and threshold set to 0.6 (panels: pleasant/unpleasant by high/low arousing). Unpleasant stimuli appear to form a network with more efficient communication among nodes than pleasant ones
The coherence threshold was then set to 0.7, resulting in a 'stricter' functional network, where connections between electrodes were present only if the coherence between them was greater than 0.7. The statistical analysis now showed a clearly significant arousal effect on the Eg of the networks (p=0.009, F(1,27)=7.984) (Figure 3). High arousing pictures induced functional networks with lower Eg (0.614±0.14) than the low arousing ones (0.641±0.14). Arousal and pleasure did not modify the density (K) of the graphs.
Fig. 3 Graphs created from average coherence between pairs of electrodes and threshold set to 0.7. High arousing stimuli appear to form a network with less local efficiency than low arousing ones
IV. DISCUSSION

Summarizing our results, we could say that in frequencies adjusted to the alpha rhythm, unpleasant stimuli tend to form functional connectivity networks with a stronger tendency to create clusters than pleasant ones. Moreover, high arousing stimuli appear to form functional connectivity networks with less global efficiency, reflecting less efficient communication in comparison to the low arousing ones. As for the first observation, regarding the valence of the stimuli, it appears that unpleasant stimuli may be more complex for the human brain to perceive, and they are processed by more distinct brain regions, forming clusters, in terms of coherence, much more easily than pleasant ones do. The forming of clusters reflects the complexity of the process as
well as the independence between brain regions (network hubs) and, in terms of networks, more clusters mean more robustness to random external attacks. In other words, unpleasant pictures seem to require more 'co-operation' between nearby nodes-electrodes: they tend to make the brain form clusters rather than spread connectivity to all electrodes, resulting in a higher local efficiency. As for the topology of the connections, they are evident in the frontal lobe, as well as central, parietal and occipital sites, for both pleasant and unpleasant stimuli. On the other hand, arousal is found to modify the global efficiency; high arousing stimuli, mostly related to dangerous, threatening or erotic content, evoke functional connectivity networks with lower global efficiency. Lower global efficiency reflects lower efficiency in the communication among electrodes in terms of coherence. High arousing stimuli, which are usually connected to life-threatening situations or primary instincts, do not require much efficiency in the communication between electrodes. In an evolutionary framework, it can be said that high arousing stimuli demand a quick reaction and, as a result, the functional connectivity between brain regions needs to be at essential levels only. This difference was almost significant when the threshold was set to 0.6, whereas it became clearly significant when the threshold was set to 0.7 and the connections among graph nodes were present only when the coherence was very strong. Regarding the topology of the connections, it is observed that the frontal electrodes appear to be connected in terms of coherence, which may be attributed to muscle movements, as well as the centroparietal and occipital sites, where the visual stimuli are first processed. Regarding the limitations of our study, we have to mention the low number of electrodes (19). The number of electrodes is crucial to this kind of network analysis, since it determines the number of nodes and the size of the graph. Moreover, the metric we used to calculate the connectivity for each pair of electrodes was their coherence, which allows no direction in a connection between two electrodes. This results in an undirected graph, which provides less information than a directed one. In addition, the threshold used to estimate the network has to be chosen carefully, as it leads to an unweighted graph and it can modulate the significance of the statistical differences. However, the arousal effect, which was almost significant with the lower threshold, became clearly significant with the higher threshold, in the same direction. In conclusion, the study of EEG data in the sense of functional connectivity networks, adjusted to more brain rhythms, may add to research in the field of Affective Computing [14,15] by providing more classification features.
REFERENCES

1. Lang PJ, Bradley MM, Cuthbert BN (1997) Motivated attention: affect, activation, and action. In: Lang PJ, Simons RF, Balaban M (eds) Attention and orienting: sensory and motivational processes. Erlbaum Associates, Hillsdale, NJ.
2. Barrett LF, Russell JA (1999) The structure of current affect: controversies and emerging consensus. Am Psychol Soc Bull 8:10-14.
3. Lithari C, Frantzidis CA, Papadelis C, Vivas AB, Klados MA, Kourtidou-Papadeli C, Pappas C, Ioannides AA, Bamidis PD (2010) Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions. Brain Topogr 23:27-40.
4. Oloffson JK, Nordin S, Sequeira H, Polich J (2008) Affective picture processing: an integrative review of ERP findings. Biol Psychol 77:247-265.
5. Stam CJ, van Dijk BW (2002) Synchronization likelihood: An unbiased measure of generalized synchronization in multivariate data sets. Phys D 163:236-251.
6. Tononi G, Sporns O, Edelman GM (1994) A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc Natl Acad Sci USA 91:5033-5037.
7. Stam CJ (2004) Functional connectivity patterns of human magnetoencephalographic recordings: A 'small-world' network? Neurosci Lett 355:25-28.
8. De Vico Fallani F, Astolfi L, Cincotti F, Matia D, Grazia Marciani M, Sallinari S, Kurths J, Gao S, Cichocki A, Colosimo A, Babiloni F (2007) Cortical Functional Connectivity Networks in Normal and Spinal Cord Injured Patients: Evaluation by Graph Analysis. Hum Brain Map 28:1334-1346.
9. Houtveen JH, Bermond B, Elton MR (1997) Alexithymia: A disruption in a cortical network? An EEG power and coherence analysis. Journal of Psychophysiol 11:147-157.
10. Aftanas LI, Golocheikine SA (2001) Human anterior and frontal midline theta and lower alpha reflect emotionally positive state and internalized attention: high resolution EEG investigation of meditation. Neurosci Lett 310:57-60.
11. Jung TP, Makeig S, Humphries C, Lee TW, Mckeown MJ, Iraqui V, Sejnowski TJ (2000) Removing electroencephalographic artifacts by blind source separation. Psychophysiology 37:163-178.
12. Welch PD (1967) The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms. IEEE Trans. Audio Electroacoustics, Vol. AU-15, pp. 70-73.
13. Latora V, Marchiori M (2001) Efficient behaviour of small-world networks. Phys Rev Lett 87:198701.
14. Bamidis PD, Papadelis C, Kourtidou-Papadeli C, Pappas C, Vivas A (2004) Affective Computing In The Era Of Contemporary Neurophysiology And Health Informatics. Interacting With Computers 16(4):715-721.
15. Frantzidis CA, Bratsas C, Klados M, Konstantinidis E, Lithari CD, Vivas AB, Papadelis C, Pappas C, Bamidis PD (2009) On the classification of emotional biosignals evoked by affective pictures: an integrated data mining based approach. IEEE Trans Inf Technol Biomed, In Press.

Author: Chrysa Lithari
Institute: Aristotle University of Thessaloniki
City: Thessaloniki
Country: Greece
Email:
[email protected]
MORFEAS: A Non-Invasive System for Automated Sleep Apnea Detection Utilizing Snore Sound Analysis

Charalampos Doukas1, Theodoros Petsatodis2 and Ilias Maglogiannis3

1 University of the Aegean, Samos, Greece
2 University of Aalborg, Aalborg, Denmark
3 University of Central Greece, Lamia, Greece
Abstract— Apnea is considered one of the major sleep disorders, with high prevalence in the population and a significant impact on patients' health. Symptoms include disruption of oxygenation, snoring, choking sensations, apneic episodes, poor concentration, memory loss, and daytime somnolence. Diagnosis of apnea involves monitoring the patient's biosignals and breath during sleep in specialized clinics, requiring expensive equipment and technical personnel. This paper discusses the design and technical details of a platform capable of preliminary detection of sleep apnea at the patient's home utilizing snore analysis.

Keywords— Sleep Apnea detection, mobile sound processing, snore signals, snore detection.
I. INTRODUCTION
Sleep is a basic human need in which there is a transient state of altered consciousness with perceptual disengagement from one's environment. Obstructive Sleep Apnea (OSA) is a sleep disorder characterized by pauses in breathing during sleep. It can occur due to complete or partial obstruction of the airway during sleep. Sleep Apnea is also known to cause loud snoring, oxyhemoglobin desaturations and frequent arousals. Each apnea episode lasts long enough so that one or more breaths are missed, and such episodes occur repeatedly throughout sleep. The standard definition of an apneic event includes a minimum 10-second interval between breaths, with either a neurological arousal, a blood oxygen desaturation of 3-4% or greater, or both arousal and desaturation. Clinically significant levels of sleep apnea are defined as five or more episodes per hour of any type of apnea. There are three distinct forms of sleep apnea: central, obstructive, and complex (i.e., a combination of central and obstructive), constituting 0.4%, 84% and 15% of cases respectively. In central sleep apnea, breathing is interrupted by the lack of respiratory effort. Regardless of type, the individual with sleep apnea is rarely aware of having difficulty breathing, even upon awakening. Symptoms may be present for years (or even decades) without identification, during which time the sufferer may become conditioned to the daytime sleepiness and fatigue associated with significant levels of sleep disturbance. As a
result, affected persons have unrestful sleep and excessive daytime sleepiness ([1], [2]). The disorder is also associated with hypertension, impotence and emotional problems ([2]). Because obstructive sleep apnea often occurs in obese persons with comorbid conditions, its individual contribution to health problems is difficult to discern. The disorder has, however, been linked to angina, nocturnal cardiac arrhythmias, myocardial infarction, stroke and even motor vehicle crashes ([3] - [7]). It is estimated that 20 million Americans are affected by sleep apnea ([8], [9]). That would represent more than 6.5%, or nearly 1 in 15 Americans, making sleep apnea as prevalent as asthma or diabetes. It is also estimated that 85-90 percent of the individuals affected are undiagnosed and untreated. The Wisconsin Sleep Cohort Study found that, among the middle-aged, nine percent of women and 24 percent of men had sleep apnea. In Greece, an average of 2,500 patients per year is examined at sleep disorder centers and almost 80% of them are diagnosed with obstructive sleep apnea ([10]). The costs of untreated sleep apnea reach further than just health issues. It is estimated that the average untreated sleep apnea patient's health care costs $1,336 more than those of an individual without sleep apnea. If these approximations are correct, 17 million untreated individuals account for $22,712 million, or almost 23 billion dollars, in health care costs ([11]). All the above facts prove the significance of sleep apnea as a medical problem and justify the research done in this field. Polysomnography (PSG, see Fig. 1) is the most common method for diagnosing obstructive sleep apnea. In this technique, multiple physiologic parameters are measured while the patient sleeps in a laboratory. Typical parameters in a sleep study include eye movement observations (to detect rapid-eye-movement sleep), an electroencephalogram (to determine arousals from sleep), chest wall monitors (to document respiratory movements), nasal and oral air-flow measurements, an electrocardiogram, an electromyogram (to look for limb movements that cause arousals) and oximetry (to measure oxygen saturation). Apneic events can then be documented based on chest wall movement with no air-flow and oxyhemoglobin desaturation. PSG requires special, high-cost equipment to be installed and
specialized personnel to be present, while it offers limited resources for patient assessment (e.g., sleeping beds). In addition, elderly or sick patients often find the PSG equipment too cumbersome, and may be reluctant to spend the night in the sleep laboratory ([12]). Recent studies have shown the potential advantages of using acoustic snore signal properties as a reliable and non-invasive alternative to conventional PSG. This paper presents the concept of MORFEAS, a mobile platform for remotely and automatically diagnosing sleep apnea based on snore analysis of sleep sounds collected at the user's site. The rest of the paper is structured as follows: Section 2 presents related work in the context of snore analysis and background information. Section 3 describes the proposed architecture for capturing and analyzing the patient's sounds. Sound analysis details are presented in Section 4, while Section 5 discusses the hardware specifications of the mobile device utilized for sound capturing. Finally, Section 6 concludes the article.
Fig. 1. Patients being assessed for Obstructive Sleep Apnea (OSA) using Polysomnography equipment.

II. RELATED WORK & BACKGROUND INFORMATION

Additional methods to Polysomnography have been proposed in the literature for Sleep Apnea assessment. Authors in [13] present a method for screening OSA based on single ECG signals. Signal processing is used for the detection of RR intervals and QRS complexes, and the latter are then classified using neural networks. The accuracy of the method in identifying patients with OSA is up to 88% according to the authors. This method, however, requires the patient to wear specific equipment and therefore cannot be characterized as totally non-invasive. Furthermore, the method relies on the existence of a training set of healthy patients and patients diagnosed with OSA. A body-fixed accelerometer sensor is used in [14] for acquiring vibration sounds during patients' sleep. The latter technique is less invasive than PSG but can still cause discomfort to the patient, and results can easily be biased by the placement of the sensor. In [21] the authors present a pneumatic bio-measurement method installed on the patient's bed for monitoring heartbeat, respiration, snoring and body movements. The latter achieves maximum patient comfort but still requires specialized hardware and a lot of data preprocessing and training, and can only be used in Sleep Clinics.

Fig. 2. Microphones established at “Euagelismos Sleep Disorder Clinic” for capturing and analyzing snore sounds

Fig. 3. Illustration of the magnitude of the snoring signal
Less invasive methods that have been used more extensively utilize sound processing of breath and snore sounds generated by patients during sleep. The feasibility of sleep apnea characterization through specific snore signal features has been proved in previous studies ([15] - [18]). Sound data acquisition is performed through microphones that are installed near patient’s beds at Sleep Clinic (see Fig. 2). Sound capturing is followed by proper processing for noise removal, and feature extraction for further characterization of the snore as apneic or benign. Noise removal can be performed by applying adaptive cancellation filters ([17]), Linear Predictive Coding for speech removal ([18]), Kalman filtering ([19]) and Wavelet transformation ([20]).The extracted features can include the magnitude of the signal (see Fig. 3) and signal pitch frequencies analysis ([18], [19]). All the aforementioned works that utilize snore signal processing for OSA characterization are based on microphone installations at Sleep Clinics. The proposed system is based on a mobile device that can be installed at patient’s home and can transmit snore sound data to the Sleeping Clinics remotely. Maximum patient comfort during sleep is achieved and a greater number of patients can be examined, resulting in better and faster prognosis of the disorder. The following sections present details regarding the proposed
IFMBE Proceedings Vol. 29
MORFEAS: A Non-Invasive System for Automated Sleep Apnea Detection Utilizing Snore Sound Analysis
system architecture, hardware specifications and snore sound analysis. III. PROPOSED SYSTEM ARCHITECTURE
121
the quantification of breath intervals and sound features extraction that could help the identification of OSA. The following sections present the mobile device specifications in more details and an overview of snore sound analysis techniques that could be utilized for snore detection, feature extraction and apnea characterization.
In this section we discuss the major components of the MORFEAS system as illustrated in Fig. 4. IV. MOBILE HARDWARE SPECIFICATIONS
The following hardware modules are proposed for creating the mobile device that can capture, perform initial processing, code and transmit/store snore signal data: x
Fig. 4. Proposed architecture of the MORFEAS platform illustrating major components and processing steps.
The core of the system is the mobile acquisition device, which is placed next to patient’s bed and records all sounds generated during sleep. The hardware consists of a small LCD display for interaction with user, microphones for capturing sounds, appropriate networking modules (with 3G and/or WLAN interfaces), a memory module for storing the acquired sounds and finally the main Digital Signal Processing (DSP) board. The latter hosts appropriate firmware that interconnects all the aforementioned components and is also responsible for performing a number of sound processing steps before the sound data is stored or transmitted to the monitoring unit at the Sleep Laboratory. These steps can include initial filtering of the sound (e.g., in order to remove background noise or start transmission only when snoring sounds are detected, etc.), appropriate coding of the sound (e.g., compression with MP3 encoder for optimizing storage and transmission) and encryption of the data for privacy protection (e.g., using a symmetric encryption algorithm). The DSP board stores the captured data in the storage media and transmits the data to the monitoring units using any available network interface. When no transmission is possible, data can be delivered manually to the medical experts using portable storage media (e.g., SD memory cards). At the monitoring unit (i.e. a Sleep Disorder Clinic), appropriate software is installed that decodes accordingly the transmitted sound data (i.e., decrypts and decompresses data) for further processing. Further processing could include the identification and extraction of snoring sounds,
• DSP board: for the main "heart" of the system, the TMS320C6713 DSP board by Texas Instruments is proposed. It features a 225 MHz processor, embedded JTAG support via USB, a high-quality 24-bit stereo codec for audio capturing and processing, four 3.5 mm audio jacks for microphone, line in, speaker and line out, 512K words of Flash and 16 MB SDRAM, and expansion port connectors for additional plug-in modules. This board, with the available software development kit (SDK), is capable of performing the appropriate sound pre-processing and coding for transmission and storage.
• User interface: a 16x2 LCD module can be used as a sample interface for displaying basic status information to the user (e.g., device is on and capturing, snores are detected, data transmission is initiated, etc.). The module can be connected to the DSP board through the analog interface.
• Microphone modules: a variety of microphone devices can be used. Ideally, two or three microphones would be used in order to suppress background noise more efficiently.
• Networking module: a 3G modem in conjunction with a WLAN interface can be used as the network interfaces for transmitting the captured sound data.
• Storage module: an SD card module connected to the digital I/O interface of the DSP board can be utilized for storing the acquired snore signal. Storing data on SD cards facilitates the data delivery process in case no high-speed wired/wireless network is available.

V. SNORE DETECTION & APNEA INDICATION
According to the clinical protocol, an apnea incident occurs when the patient's breathing is interrupted between snores for more than 6 seconds [2]. Thus, in order to detect apnea during the patient's sleep from the acquired snore signal, snore events have to be identified and quantified.
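As a minimal illustration of this rule (not the authors' code), the sketch below flags inter-snore silences longer than 6 s, given the detected snore event times; all names are illustrative:

```python
# Sketch of the apnea-indication rule described above: a silent gap longer
# than 6 s between consecutive snore events is flagged as a possible apneic
# incident (illustrative only, not the authors' firmware).

def apneic_gaps(snore_times_s, max_silence_s=6.0):
    """snore_times_s: sorted times (in seconds) of detected snore events."""
    gaps = []
    for prev, curr in zip(snore_times_s, snore_times_s[1:]):
        silence = curr - prev
        if silence > max_silence_s:
            gaps.append((prev, curr, silence))  # start, end, duration
    return gaps

# Example: snores at 0, 3, 6 and 14 s -> the 8 s silent period is flagged.
print(apneic_gaps([0.0, 3.0, 6.0, 14.0]))
```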
Based on the conducted experiments, when analyzing the captured snore sound signal with a short-term (i.e., frame lengths below 100 ms) Discrete Fourier Transform (DFT), the distribution of the real and imaginary parts of the snoring coefficients can be modeled by a Laplacian distribution, as illustrated in Fig. 5. The same properties apply to voice modeling and detection ([24]). Thus, we have applied a modified version of the voice activity detection algorithm proposed in [23] in order to isolate and identify the snoring sounds.
Fig. 5. Normalized histograms of frequency components of snore sound signal using short-term DFT.
The frequencies of the background noise are assumed to be Gaussian distributed. The result of modeling the signal through Laplacian and Gaussian distributions is a set of probabilities per sound sample for snore events (see Fig. 6, lower part). Preliminary results based on snore signals collected at the "Euagelismos Sleep Clinic", Medical School, University of Athens, have shown that the system can achieve a detection performance of 90-93% with a very low rate of false detections.
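As a hedged illustration of this Laplacian/Gaussian modeling (not the authors' implementation; cf. the soft VAD of [23]), the sketch below scores one signal frame by the log-likelihood ratio of its short-term DFT coefficients under the two distributions; the scale parameters b_snore and sigma_noise are assumptions that would be estimated from training data:

```python
import numpy as np

def snore_likelihood(frame, b_snore=1.0, sigma_noise=1.0):
    """Log-likelihood ratio of short-term DFT coefficients under a
    Laplacian (snore) vs. a Gaussian (background noise) model."""
    c = np.fft.rfft(frame * np.hanning(len(frame)))
    parts = np.concatenate([c.real, c.imag])   # model real & imaginary parts
    ll_lap = -np.abs(parts) / b_snore - np.log(2 * b_snore)
    ll_gau = -parts**2 / (2 * sigma_noise**2) \
             - 0.5 * np.log(2 * np.pi * sigma_noise**2)
    return np.sum(ll_lap - ll_gau)             # > 0 favours "snore"

# Frames shorter than 100 ms (e.g., 50 ms at 8 kHz -> 400 samples) would be
# scored one by one and thresholded to mark snoring activity.
```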
Fig. 6. Up: snoring sample; detected periods of snoring are marked with a line. Down: likelihood of snore presence in the given samples, based on the Laplacian and Gaussian distributions of the snore signal.

The whole process results in the automated annotation of snore events. In this way, the silent periods between two sequential snores (i.e., the time during which the patient does not breathe or exhales) are quantified and, depending on their duration, an apneic event can be detected. The major advantage of this method is that it can be fully implemented on the DSP board and executed on the mobile device. In this way, the recording of snore sounds can be initiated only when snores are detected, and a preliminary assessment can be provided to the experts.

VI. DISCUSSION
Despite the fact that Obstructive Sleep Apnea is not widely known, it is a very common disorder with potentially serious implications for the patient's health. The most common assessment method involves overnight monitoring of the patient's physiological signs in Sleep Clinics, and requires specific equipment and specialized personnel. This paper presents MORFEAS, a non-invasive system for automated Sleep Apnea detection utilizing snore sound analysis. Snore signals are recorded on the device, and both snore events and apneic events can be detected by the mobile device itself. The major benefit of the system is the ability to monitor patients at home, thereby improving the prognosis and treatment procedure while offering maximum comfort to the patients. Future work includes the installation of the proposed system at patients' homes in order to evaluate its efficiency, accuracy and usability under real conditions. In addition, the sound properties of the acquired snore signal repository will be examined to assess their potential association with apneic events.
ACKNOWLEDGMENT

The authors would like to thank Dr. Vayakis and Dr. Koutsourelakis from the "Euagelismos Sleep Clinic", Medical School, University of Athens, for the collaboration and the provision of snore sound samples.
REFERENCES

1. Shepard JW Jr. Cardiopulmonary consequences of obstructive sleep apnea. Mayo Clin Proc 1990;65:1250-9.
2. Kales A, Caldwell AB, Cadieux RJ, Vela-Bueno A, Ruch LG, Mayes SD. Severe obstructive sleep apnea--II: associated psychopathology and psychosocial consequences. J Chronic Dis 1985;38:427-34.
3. Wei K, Bradley TD. Association of obstructive sleep apnea and nocturnal angina [Abstract]. Am Rev Respir Dis 1992;145(4 pt 2):A443.
4. Guilleminault C, Connolly S, Winkle RA. Cardiac arrhythmia and conduction disturbances during sleep in 400 patients with sleep apnea syndrome. Am J Cardiol 1983;52:490-4.
5. Hung J, Whitford EG, Parsons RW, Hillman DR. Association of sleep apnea with myocardial infarction in men. Lancet 1990;336:261-4.
6. Partinen M, Guilleminault C. Daytime sleepiness and vascular morbidity at seven-year follow-up in obstructive sleep apnea patients. Chest 1990;97:27-32.
7. Aldrich MS. Automobile accidents in patients with sleep disorders. Sleep 1989;12:487-94.
8. Young T, et al. Epidemiology of obstructive sleep apnea: a population health perspective. Am J Respir Crit Care Med 165 (2002): 1217-1239.
9. Young T, et al. The Occurrence of Sleep-Disordered Breathing among Middle-Aged Adults. The New England Journal of Medicine 328, no. 17 (1993): 1230-1235.
10. Source: University of Athens, Medical School, «Euagelismos Hospital», Sleep Disorder Clinic.
11. Kapur V, et al. The Medical Costs of Undiagnosed Sleep Apnea. Sleep 22, no. 6 (1999): 749-755.
12. Pang KP, Terris DJ. Screening for obstructive sleep apnea: an evidence-based analysis. Am J Otolaryngol. 2006 Mar-Apr;27(2):112-8.
13. Mendez MO, Bianchi AM, Matteucci M, Cerutti S, Penzel T. Sleep apnea screening by autoregressive models from a single ECG lead. IEEE Trans Biomed Eng. 2009 Dec;56(12):2838-50. Epub 2009 Aug 25.
14. Sanchez Morillo D, Rojas Ojeda JL, Foix C, Leon A. Accelerometer-Based Device for Sleep Apnea Screening. IEEE Transactions on Information Technology in Biomedicine, accepted for future publication, first published 2009-07-28.
15. Brunt D, Lichstein KL, Noe SL, Aguillard RN, Lester KW (1997) Intensity pattern of snoring sounds as a predictor for sleep-disordered breathing. Sleep 20:1151-1156.
16. Fiz JA, Abad J, Jane R, Riera M, Mananas MA, Caminal P, Rodenstein D, Morera J (1996) Acoustic analysis of snoring sound in patients with simple snoring and obstructive sleep apnoea. Eur Respir J 9:2365-2370.
17. Ng AK, Koh TS, Baey E, Puvanendran K. Diagnosis of Obstructive Sleep Apnea using Formant Features of Snore Signals. In Proc. of World Congress on Medical Physics and Biomedical Engineering 2006, pp. 967-970, 2007.
18. Ng AK, Koh TS, Baey E, Lee TH, Abeyratne UR, Puvanendran K. Could formant frequencies of snore signals be an alternative means for the diagnosis of obstructive sleep apnea? Sleep Med. 2008 Dec;9(8):894-8. Epub 2007 Sep 6.
19. Zhu Liang Yu, Wee Ser. Kalman Smoother and Its Application in Analysis of Snoring Sounds for the Diagnosis of Obstructive Sleep Apnea. In Proc. of World Congress on Medical Physics and Biomedical Engineering 2006, pp. 1041-1044, 2007.
20. Ng AK, San Koh T, Puvanendran K, Ranjith Abeyratne U. Snore signal enhancement and activity detection via translation-invariant wavelet transform. IEEE Trans Biomed Eng. 2008 Oct;55(10):2332-42.
21. Watanabe K, Watanabe T, Watanabe H, Ando H, Ishikawa T, Kobayashi K. Noninvasive measurement of heartbeat, respiration, snoring and body movements of a subject in bed via a pneumatic method. IEEE Trans Biomed Eng. 2005 Dec;52(12):2100-7.
22. Davis A, Togneri R. Statistical voice activity detection using low-variance spectrum estimation and an adaptive threshold. IEEE Transactions on Audio, Speech and Language Processing, 14:412-424, March 2006.
23. Gazor S, Zhang W. A soft voice activity detector based on a Laplacian-Gaussian model. IEEE Trans. on Speech and Audio Proc., vol. 11, no. 5, pp. 498-505, 2003.
24. Chang J-H, Kim NS, Mitra SK. Voice activity detection based on multiple statistical models. IEEE Transactions on Signal Processing, vol. 54, no. 6, pp. 1965-1976, June 2006.
Author: Ilias Maglogiannis
Institute: University of Central Greece
Country: Greece
Email:
[email protected]
Improved Optical Method for Measuring Concentration of Uric Acid Removed during Dialysis

J. Jerotskaja1, F. Uhlin1,2, M. Luman1, K. Lauri1, and I. Fridolin1

1 Department of Biomedical Engineering, Tallinn University of Technology, EST-19086 Tallinn, Estonia
2 Department of Nephrology, University Hospital, Linköping, S-581 85 Linköping, Sweden
Abstract— The aim of this study was to compare concentration measurements of uric acid (UA) removed during dialysis, using algorithms based on the ultraviolet (UV) absorbance and on the 1st derivative of the UV absorbance, at a single wavelength or at multiple wavelengths. Ten uremic patients from Tallinn and ten from Linköping, during 30 + 40 haemodialysis treatments, were followed at the Departments of Dialysis and Nephrology at the North-Estonian Medical Centre and at Linköping University Hospital. The dialysate samples were taken and analyzed for UA concentration at the chemical laboratory and with a double-beam spectrophotometer. The UV-absorbance and the derivative-of-UV-absorbance values at a single wavelength or at multiple wavelengths were transformed into the UA concentration in the spent dialysate using regression models over the total material, denoted as the UV-absorbance (UV_A_single and UV_A_multi) and the 1st-derivative UV-absorbance (UV_D_single and UV_D_multi) methods. The concentrations of UA from the different methods were finally compared regarding mean values and SD. Mean concentrations of UA were 52,40 ± 23,1 micromol/l measured at the chemical laboratory (UA_Lab), 52,39 ± 21,8 micromol/l determined by UV_A_single, 52,42 ± 22,4 micromol/l determined by UV_A_multi, 52,4 ± 22,2 micromol/l determined by UV_D_single and 52,4 ± 22,9 micromol/l determined by UV_D_multi. The mean concentrations were not significantly different (p ≥ 0,95). The systematic errors were -0,7 to -2,6% and the random errors 8-16% for the different methods. The systematic and random errors were significantly different (p < 0.05) between the algorithms, indicating that the algorithm using multiple wavelengths of the derivative spectra enables more accurate UA estimation. Our study indicates that the removed UA can be reliably and more accurately estimated by the UV_D_multi technique.

Keywords— uric acid, dialysis, ultraviolet, absorbance spectra, derivative spectra.
I. INTRODUCTION

The fact that the number of dialysis patients in the world is increasing while the quality requirements of the treatment are high raises the need for a simple, inexpensive, compact, mobile and reliable method for measuring the concentration of uremic toxins in the spent dialysate. Monitoring of the
removal of different uremic toxins during dialysis can prevent serious pathological conditions and decrease the mortality of patients. Uric acid (UA), a final product of purine metabolism (MW = 168,1), is mostly excreted from the human body through the kidneys in the form of urine. The concentration of UA in blood increases when the production of UA increases or the kidneys malfunction. A high level of serum UA, hyperuricaemia, has been suggested to be an independent risk factor for cardiovascular and renal disease, especially in patients with heart failure, hypertension and/or diabetes [1]-[5]. Hyperuricaemia is also a novel risk factor for type 2 diabetes mellitus [6] and has been shown in a rat model to cause renal disease [7]. UA is removed from plasma in a similar manner to urea during dialysis treatment [8] but, unlike urea, it has so far not been investigated with respect to patient outcome. UA is mostly associated with gout, but studies have found that UA affects biological systems [9], [10] and could also cause higher mortality in dialysis patients [10]. According to the European Society of Cardiology guidelines 2008 for the diagnosis and treatment of heart failure (HF), an elevated UA level is associated with a poor prognosis in HF, and UA is one of the biomarkers in HF [11]. A good correlation between the ultraviolet (UV) absorbance and the concentration of several solutes in the spent dialysate of dialysis patients has been shown in earlier studies, indicating that the technique can be used to estimate the removal of retained substances [12]. The possibility to estimate the dialysis dose (urea-Kt/V) [13] and the total removed urea [14] by UV absorbance has been presented. It has also been shown that the removed UA can be reliably estimated in different dialysis centres by the UV technique [15-16]. The aim of this study was to compare concentration measurements of UA removed during dialysis by different algorithms based on the value of the UV absorbance and on the 1st derivative of the UV absorbance, using a single- or multi-wavelength approach, with data from two different dialysis centres.
II. MATERIALS AND METHODS
This study was performed after approval of the protocol by the Ethics Committee at the University Hospital of Linköping, Sweden, and by the Tallinn Medical Research Ethics Committee at the National Institute for Health Development, Estonia. Ten uremic patients, three females and seven males, mean age 62.6 ± 18.6 years, were included in the study at the Department of Dialysis and Nephrology, North-Estonian Medical Centre, Estonia, and ten uremic patients, four females and six males, mean age 62.8 ± 20.9 years, were included at the Department of Nephrology, University Hospital of Linköping, Sweden, using the clinical set-up of the experiments described earlier [15]. All patients were on chronic thrice-weekly haemodialysis. The patients were monitored during three dialysis treatments in Tallinn and four dialysis treatments in Linköping, with durations from 240 to 270 minutes (70 haemodialysis sessions in total). The studied treatments were consecutive in Tallinn and not consecutive in Linköping, but were performed within three weeks for each patient. In Linköping an althane dialyser was used, with an effective membrane area of 1.8 m2 (AF180, Ahltin Medical, Ronneby, Sweden). The dialysate flow was 500 mL/min and the blood flow was 300 mL/min, except in one session (250 mL/min) due to temporary access (needle) problems. Two types of machines were used: AK 200 (Gambro Lundia AB, Sweden) and Fresenius 4008H (Fresenius Medical Care, Germany). In Tallinn, F8 HPS (N=14), F10 (N=3) and FX80 (N=13) dialysers (Fresenius Medical Care, Germany), with effective membrane areas of 1.8 m2, 2.2 m2 and 1.8 m2, respectively, were used. The dialysate flow was 500 mL/min and the blood flow varied between 245 and 350 mL/min. The dialysis machine used was a Fresenius 4008H (Fresenius Medical Care, Germany). Dialysate samples were taken at 5 (only in Linköping, OIL), 10 (only in Tallinn), 15 (OIL), 30 (OIL), 60, 90 (OIL), 120 and 180 minutes after the start of the dialysis session, and at the end of the session (210, 240 or 270 minutes). In Tallinn, a sample from the total dialysate collection, marked as "tank", was also included in the analysis. Pure dialysate, collected before the start of a dialysis session, was used as the reference solution. The concentration of UA was determined at the Clinical Chemistry Laboratories at the North-Estonian Medical Centre and at Linköping University Hospital using standardized methods. Double-beam spectrophotometers, a UVIKON 943 (Kontron, Italy) in Linköping and a SHIMADZU UV-2401 PC (Japan) in Tallinn, were used for the determination of the UV absorbance. Spectrophotometric analysis over a wavelength
range of 190-380 nm was performed in an optical cuvette with an optical path length of 1 cm (Fig. 1a). The obtained UV spectra were processed with a signal processing tool using the Savitzky-Golay algorithm for smoothing and for the first-derivative calculation (Fig. 1b).
Fig. 1 An example of the absorbance spectrum (a) and the 1st derivative of the absorbance spectrum (b) obtained over a wavelength range of 190-380 nm on spent dialysate samples at different times during a dialysis session
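A minimal sketch of this processing step using SciPy's Savitzky-Golay filter; the window length, polynomial order and the toy spectrum are assumptions, not the study's actual settings:

```python
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.arange(190.0, 380.5, 0.5)                # nm, as in Fig. 1
absorbance = np.exp(-((wavelengths - 294.0) / 20.0)**2)   # toy spectrum only

# Smoothing, then the 1st derivative dA/dlambda (delta = 0.5 nm spacing)
smoothed = savgol_filter(absorbance, window_length=11, polyorder=3)
derivative = savgol_filter(absorbance, window_length=11, polyorder=3,
                           deriv=1, delta=0.5)
```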
The data acquisition module consisted of a PC incorporated in the spectrophotometer using UV-PC software (UVPC personal spectrophotometer software, version 3.9 for Windows). The obtained UV-absorbance values were processed with the Panorama Fluorescence software, the regression analysis was performed with STATISTICA (Statistica 9.0) and the final data processing was performed in EXCEL (Microsoft Office Excel 2003). Data from 20 patients were used to generate the models for estimating the concentration of UA. For that purpose, regression lines between the collected dialysate samples and the corresponding UV-absorbance values at the wavelengths 298 nm and 298+264 nm, and the derivative-spectra values at the wavelengths 302 nm and 272+302 nm, were assessed in order to transform the UV absorbance into the UA concentration. The obtained relationships were used for estimating the UA concentration. Student's t-test (two-tailed) was used to compare the means and the accuracy values of the different methods.
III. RESULTS

The values of the absorbance and of the 1st derivative of the absorbance over the wavelength range 190-380 nm, together with the concentrations of UA measured in the clinical laboratory, were subjected to regression analysis. The optimal wavelengths for creating the models were determined. It was found that better results can be achieved when more than one wavelength is used. For the absorbance spectra, the analysis was performed with 298 nm and 298+264 nm (Fig. 2); for the derivative spectra, values at 302 nm and 272 nm were used (Fig. 3). The relationships used for generating the concentration calculation algorithms to estimate the UA concentration were:

y = 60,12*A298 - 2,51   (1)

for the original UV-absorbance spectra at 298 nm;

y = 74,97*A298 - 8,12*A264 - 2,32   (2)

for the original UV-absorbance spectra at 298 nm + 264 nm;

y = -942,47*D302 - 3,18   (3)

for the derivative spectra at 302 nm;

y = -1025,73*D302 + 229,36*D272 - 3,51   (4)

for the derivative spectra at 302 nm + 272 nm.
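A minimal sketch of applying models (1)-(4); the coefficients are copied verbatim (decimal commas rendered as dots), while the input values in the example are illustrative, not measured data:

```python
# Given absorbance values A at 298/264 nm and derivative values D at
# 302/272 nm, each model returns an estimated UA concentration [micromol/l].
def ua_models(a298, a264, d302, d272):
    return {
        "UV_A_single": 60.12 * a298 - 2.51,                     # eq. (1)
        "UV_A_multi":  74.97 * a298 - 8.12 * a264 - 2.32,       # eq. (2)
        "UV_D_single": -942.47 * d302 - 3.18,                   # eq. (3)
        "UV_D_multi":  -1025.73 * d302 + 229.36 * d272 - 3.51,  # eq. (4)
    }

# Example with illustrative inputs:
print(ua_models(a298=0.9, a264=1.2, d302=-0.06, d272=0.01))
```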
Fig. 2 Measured vs. predicted UA concentrations, achieved using a single (UA_pred_298 nm: R2 = 0,89, r = 0,94) or two wavelengths (UA_pred_298+264 nm: R2 = 0,94, r = 0,97) from the original absorbance spectra (N = 446)

Fig. 3 Measured vs. predicted UA concentrations, achieved using a single (UA_pred_302 nm: R2 = 0,92, r = 0,96) or two wavelengths (UA_pred_302+272 nm: R2 = 0,98, r = 0,99) from the derivative spectra (N = 446)

Table 1 shows the mean values of the concentrations and the errors of the UA measured at the laboratory and of the UA calculated from the different models.

Table 1 Summary of results for the different methods to measure the concentration of uric acid

Method            N    Concentration of UA ± SD [micromol/l]   Syst. Error [%]   Random Error [%]   R2
Laboratory        446  52,40 ± 23,1                            -                 -                  -
UV_A_298 nm       446  52,39 ± 21,8                            -2,6              16,3               0,89
UV_A_298+264 nm   446  52,42 ± 22,4                            -1,6              11,6               0,94
UV_D_302 nm       446  51,40 ± 22,2                            -1,9              14                 0,92
UV_D_302+272 nm   446  51,40 ± 22,9                            -0,7              8                  0,98

These results demonstrate that using two wavelengths instead of one noticeably improves the estimation of UA. More reliable results are achieved when the derivative spectra are used instead of the original absorbance spectra. The systematic and random errors were significantly different (p ≤ 0.05) between the methods, indicating that the 1st-derivative, multi-wavelength algorithm enables more accurate UA estimation. These results show that, by utilizing several wavelengths and derivative spectra, the concentration of UA can be predicted more reliably and accurately.

IV. DISCUSSION
The presented results show the possibility to estimate UA concentrations with different models based on the UV absorbance and on the 1st derivative of the UV absorbance. A good possibility to estimate UA concentrations using the UV technique has been shown in earlier studies [15-16], but the use of signal processing tools can essentially improve the accuracy and reliability of the results.
The coefficient of determination, R2, between the laboratory and the calculated values of UA is higher in the case of UV_D (single/multi) compared to UV_A (single/multi) (0,92/0,98 vs. 0,89/0,94). This indicates that using several wavelengths instead of a single one is beneficial, and that this effect is greater when processed spectra are used instead of the original absorbance spectra. Table 1 shows that the systematic and random errors decrease when several wavelengths and/or derivative spectra are used. Considering the improvement in the accuracy of the model and in the systematic and random errors, signal processing and information from several wavelengths should certainly be used in the future. In this study the best result was achieved with the model that uses derivative-spectra values at two wavelengths. The high correlation between UV absorbance and UA can be explained by the characteristic absorbance of UA around 294 nm, in combination with the relatively high millimolar extinction coefficient of UA in this wavelength region compared to the other chromophores (uremic retention solutes) eliminated from blood into the spent dialysate during dialysis [17]. This makes it possible to determine the UA concentration even though the technique does not measure solely UA. The clinical aim in the future is to develop an on-line monitoring system that could offer an estimation of the removal of several clinically important solutes during haemodialysis. Using a quite simple signal processing tool might be very helpful for achieving more accurate and reliable results.
V. CONCLUSIONS

This study investigated the effect of using several wavelengths and signal processing to estimate the concentration of UA by UV absorbance. It was found that using several wavelengths, smoothing and the first derivative of the absorbance spectra leads to more accurate results. As UA has been shown to be an independent risk marker of cardiovascular and renal disease, as well as a novel risk factor for type 2 diabetes mellitus [1]-[6], it is advantageous to develop reliable and rapid methods for uric acid concentration measurements.
ACKNOWLEDGMENT

The authors wish to thank all dialysis patients who participated in the experiments, Per Sveider, Jan Hedblom and Rain Kattai for skilful technical assistance and Galina Velikodneva for assistance during clinical experiments. The work is supported in part by the Estonian Science Foundation Grant No 6936, by the Estonian targeted financing
project SF0140027s07, and by the European Union through the European Regional Development Fund.
REFERENCES

1. Alderman M, Aiyer KJ. (2004) Uric acid: role in cardiovascular disease and effects of losartan. Curr Med Res Opin. 20(3):369-79
2. Heinig M, Johnson RJ. (2006) Role of uric acid in hypertension, renal disease, and metabolic syndrome. Cleve Clin J Med. 73(12):1059-1064
3. Viazzi F, Leoncini G, Ratto E, Pontremoli R. (2006) Serum uric acid as a risk factor for cardiovascular and renal disease: an old controversy revived. J Clin Hypertens. 8(7):510-518
4. Feig DI, Kang DH, Johnson RJ. (2008) Uric acid and cardiovascular risk. N Engl J Med. Oct 23;359(17):1811-21. Review.
5. Høieggen A, Alderman MH, Kjeldsen SE et al. (2004) LIFE Study Group. The impact of serum uric acid on cardiovascular outcomes in the LIFE study. Kidney Int. Mar;65(3):1041-9
6. Dehghan A, van Hoek M, Sijbrands EJG, Hofman A, Witteman JCM. (2008) High serum uric acid as a novel risk factor for type 2 diabetes mellitus. Diabetes Care. Feb;31(2):361-2
7. Nakagawa T, Mazzali M, Kang D-H, Sánchez-Lozada LG, Herrera-Acosta J, Johnson RJ. (2006) Uric acid - a uremic toxin? Blood Purif. 24:67-70
8. Vanholder RC, De Smet RV, Ringoir SM. (1992) Assessment of urea and other uremic markers for quantification of dialysis efficacy. Clin Chem. 38:1429-1436
9. De Smet R, Glorieux G, Hsu C, Vanholder R. (1997) P-cresol and uric acid: two old uremic toxins revisited. Kidney Int. Nov;62:S8-11
10. Perlstein TS, Gumieniak O, Hopkins P, Murphey L, Brown N, Williams G, et al. (2004) Uric acid and the state of intrarenal renin-angiotensin system in humans. Kidney Int. 66:1465-1470
11. ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure 2008. European Heart Journal (2008) 29, 2388-2442
12. Fridolin I, Magnusson M, Lindberg L-G. (2002) On-line monitoring of solutes in dialysate using absorption of ultraviolet radiation: technique description. Int J Artif Organs Aug;25(8):748-61
13. Uhlin F, Fridolin I, Lindberg L-G, Magnusson M. (2003) Estimation of delivered dialysis dose by on-line monitoring of the UV-absorbance in the spent dialysate. Am J Kidney Dis. May;41(5):1026-36
14. Uhlin F, Fridolin I, Lindberg L-G, Magnusson M. (2005) Estimating total urea removal and protein catabolic rate by monitoring UV absorbance in spent dialysate. Nephrol Dial Transplant. Nov;20(11):2458-64
15. Jerotskaja J, Uhlin F, Fridolin I. (2008) A Multicenter Study of Removed Uric Acid Estimated by Ultra Violet Absorbance in the Spent Dialysate. IFMBE Proc., vol. 20, pp. 252-256
16. Jerotskaja J, Uhlin F, Fridolin I, Lauri K, Luman M, Fernström A. (2010) Optical online monitoring of uric acid removal during dialysis. Blood Purif 2010;29:69-74
17. Fridolin I, Lindberg L-G. (2003) On-line monitoring of solutes in dialysate using absorption of ultraviolet radiation - wavelength dependence. Med Biol Eng Comput. May;41(3):263-70

Author: Jana Jerotskaja
Institute: Department of Biomedical Engineering, Technomedicum, Tallinn University of Technology
Street: Ehitajate tee 5
City: 19086 Tallinn
Country: Estonia
Email:
[email protected]
Correlations between Longitudinal Corneal Apex Displacement, Head Movements and Pulsatile Blood Flow

M. Danielewska, H. Kasprzak, and M. Kowalska

Visual Optics Group, Institute of Physics, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw, Poland

Abstract— The main goal of our investigation was to determine the phase dependencies between the longitudinal corneal apex displacement (LCAD), the anterio-posterior (AP) head displacements and the pulsatile blood flow, using the cross-correlation function. We propose a noninvasive method to measure the LCAD and head-movement signals using two ultrasonic distance sensors. Synchronously, the blood pulsation was registered with a pulse oximeter. We calculated the phase relationship between particular signals for the first four harmonics associated with the heart rate. In this paper we present results obtained for the 3rd harmonic, which had the highest value of the coherence function. We applied a time window in the calculation of the correlation function because of the nonstationary nature of the analyzed signals; the length of this window was selected for each harmonic. This allowed observing more details in the phase shift variations over time. The results show that a time shift exists between LCAD and pulse, as well as between head movements and pulse, with a clearer relationship for the pair of signals AP head displacements and blood pulsation. The obtained data are not sufficient to explain the influence of the blood pulsation on the ocular pulse, but the proposed method might be helpful in acquiring information about this phenomenon.

Keywords— ocular pulse, cross-correlation and coherence function, phase shift, blood pulsation.
I. INTRODUCTION

Deformation of the eye globe and expansion of the corneal surface are mainly caused by variations in the intraocular pressure (IOP) [1,2]. They are also closely related to the pulsatile ocular blood flow (POBF) [3], which depends on the heart rate [4,5]. Previous studies described the high correlation existing between longitudinal corneal apex displacements and the electrical activity of the heart [6]. Moreover, there are other factors, such as age [7] and axial length [8], which influence the ocular pulse amplitude. When investigating corneal surface displacements, significant head movements cannot be neglected. The magnitude of the head movements is an important component in estimating the proper amplitude of the eye displacements, and it affects the accuracy of the measurements. Therefore, it is very important to minimize head movements during measurements of the ocular pulse. So far, the relationship between longitudinal corneal
apex displacements, fine head movements and the ECG signal has been examined using the Fourier transform and the coherence function, but without any information about the phase dependencies between particular signals [9]. Understanding the relationships between the ocular pulse, head movements and the cardiopulmonary system can help to explain the effect of the heart activity on the eye globe pulsation and on the IOP changes.
II. MATERIALS AND METHOD

To measure the longitudinal corneal apex displacement (LCAD) and the anterio-posterior (AP) head displacements, we propose a non-invasive method using ultrasonic distance sensors. A custom headrest with a belt and a bite bar was used to minimize head movements significantly. Two ultrasonic distance sensors working at a frequency of 0.8 MHz were used to register the LCAD and AP head displacement signals at the same time. One ultrasonic sensor was placed in front of the left eye at a distance of around 15 mm. Because blood vessels lie close to the skin surface of the face, it was difficult to find a face area where the measurement of the head displacements would be reliably representative. Therefore, to register the head displacements, we asked the subjects to wear protective goggles during the measurements. The second ultrasonic transducer was placed in front of the goggle surface over the eye, as presented in Fig. 1. Synchronously, the pulsatile blood flow was registered using a pulse oximeter placed on the right earlobe.
Fig. 1 The measurement setup of the longitudinal corneal apex and anterio-posterior head displacements and blood pulsation
Five subjects aged from 25 to 60, without any eye or heart pathologies, were examined in this study. The measurement time was 12.15 seconds, and during this time the patient was asked to abstain from blinking. All signals were sampled at a frequency of 247 Hz. The obtained data were numerically processed using a custom-written program in Matlab 7.0. First, the signals were linearly detrended. Then a band-pass filter (0.6-20 Hz) was applied to remove frequencies that are not related to the pulse, for example the breathing rate, which is around 0.2-0.3 Hz. Spectral characteristics were calculated using the Fourier transform of the derivatives of all signals. Because of the non-stationarity of the analyzed signals, the time-frequency characteristics were computed using the Short Time Fourier Transform (STFT) [10], which shows the variations of the signals' spectra in time. To obtain information about the phase agreement between corresponding frequency components contained in two signals, the coherence function was calculated between them; the value of this function is independent of the amplitude of the frequency components. The main goal of our investigation was to determine the dependencies between LCAD, AP head displacements and blood pulsation, and the phase shifts that exist between them, using the cross-correlation function. This function gives information about the similarity of the analyzed signals' shapes. We examined the phase relationship between the registered signals for the first four harmonics related to the heart rate. Due to the nonstationary nature of the considered signals, the length of the time window was chosen separately for each harmonic. The time window length was obtained as 3 averaged periods of the particular harmonic, based on the Fourier spectra. The time window of one pair of signals for a selected harmonic was then shifted with a step of 0.01 s.
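A minimal sketch of the preprocessing described above (the filter type and order are assumptions; the paper only specifies the 0.6-20 Hz band and linear detrending):

```python
import numpy as np
from scipy.signal import butter, filtfilt, detrend

def preprocess(sig, fs=247.0, band=(0.6, 20.0), order=4):
    """Linear detrending followed by a zero-phase band-pass filter."""
    sig = detrend(np.asarray(sig, dtype=float), type='linear')
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)],
                  btype='band')
    return filtfilt(b, a, sig)
```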
The cross-correlation function and its extrema were calculated in the window area, as follows:

R_xy(τ_n) = Σ_{k=0}^{N-1-τ_n} [x(k) · y(k + τ_n)],   (1)

where R_xy(τ_n) defines the value of the cross-correlation function for the time shift τ_n between signals x and y, k is the sample index and τ_n is the number of 10 ms time shifts between signals x and y. In the presented measurements, signal x describes a fragment of the longitudinal corneal apex displacement (LCAD) of the left eye or of the anterio-posterior head displacements, and y describes a fragment of the blood pulsation. Inside the moving window, signals x and y are indexed from 0 to N-1, where N is the number of samples taken from the proper time window of both signals [10]. The position of the correlation maximum allows determining the time shift between the two analyzed signals. The presented procedure helps to observe the time shift variations over time for particular frequency components of both signals.
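An illustrative implementation of equation (1) with a moving window (not the authors' Matlab code): for each window position the correlation is evaluated over a range of 10 ms lags, and the lag of its maximum gives the local time shift. Only non-negative lags are scanned here, and the lag range is an assumption:

```python
import numpy as np

def local_time_shift(x, y, fs=247.0, win_s=0.81, step_s=0.01, max_lag_s=0.15):
    """Track the lag of the cross-correlation maximum, window by window."""
    n_win, n_step = int(win_s * fs), int(step_s * fs)
    max_lag = int(max_lag_s * fs)
    shifts = []
    for start in range(0, len(x) - n_win - max_lag, n_step):
        xw = x[start:start + n_win]
        corr = [np.sum(xw * y[start + lag:start + lag + n_win])
                for lag in range(max_lag + 1)]   # R_xy(tau_n), eq. (1)
        shifts.append(np.argmax(corr) / fs)      # lag of the maximum, in s
    return np.array(shifts)
```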
III. RESULTS

Measurements of the LCAD of the left eye, the anterio-posterior (AP) head displacements and the blood pulsation were carried out five times for each subject. In this paper we present results obtained for subject MD (a 25-year-old woman, emmetrope). The time representations of the three synchronously registered signals are shown in Fig. 2.
Fig. 2 Time representations of three synchronously registered signals: LCAD of the left eye (red line), anterio-posterior (AP) head displacements (blue line) and blood pulsation (black line)

Frequency characteristics of the analyzed signals are presented in Fig. 3.
Fig. 3 Periodograms of the longitudinal corneal apex velocity of the left eye (red line), the AP head velocity (blue line) and the blood pulsation's derivative (black line)
In all Fourier spectra we can notice the highest peak (around 1.2 Hz), associated with the heart activity, and its successive harmonics. The amplitude values of the harmonics in the LCAD spectrum are higher in comparison to the AP head displacements spectrum. Time-frequency characteristics of the AP head velocity and the blood pulsation are displayed in Fig. 4. We can observe that the corresponding harmonics in both signals vary in time in a very similar way. Information about their phase dependencies is given by the coherence function (Fig. 5). For the 3rd harmonic we obtained a coherence value of around 0.9, which indicates some phase stability between the same harmonics of the two signals. Therefore, we decided to present in this paper the results obtained for this harmonic.
A fragment of the time characteristics of the 3rd harmonic of the LCAD of the left eye and of the AP head displacements (presented in Fig. 2), up to 5 seconds and with the 0.81-second time window marked, is shown in Fig. 6.
Fig. 6 Time characteristics of the 3rd harmonic of signals: LCAD of the left eye and AP head displacements with marked time window of 0.81 seconds length
The cross-correlation function computed for the 3rd harmonic (by use of formula (1)), applying the appropriate time window (0.81 s), exhibits local maxima (red bands) and minima (blue bands) (Fig. 7). The position of the correlation maximum, calculated for selected harmonics of both signals, describes the time shift existing between them. Due to the nonstationary nature of the analyzed signals, the value of this time shift varies in time.
Fig. 4 Time-frequency characteristics of signals: AP head velocity (top) and blood pulsation (bottom)

Fig. 7 Contour plot of correlation function values computed between signals LCAD of the left eye and pulsatile blood flow for the 3rd harmonic
Fig. 5 Coherence function between particular pairs of signals
Fig. 8 presents a comparison of the variations in time of the 3rd frequency components from the spectra of the signals: LCAD of the left eye (red line) and blood pulsation (black line) (top plot), and AP head displacements (blue line) and blood pulsation (black line) (bottom plot). These characteristics were obtained in the following way: first, the STFT of the time representations of the 3rd harmonics of all signals was calculated, using a time window of 3 averaged periods in length (0.81 s). Then, for each window shift, the maximum of each analyzed signal's spectrum was selected. Additionally, the
green dashed line describes the variations of the time shift between 3rd harmonics of both signals, computed using the correlation function, with the same time window (right vertical scale).
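A sketch of this tracking procedure with SciPy's STFT; the window length follows the text, the rest is illustrative:

```python
import numpy as np
from scipy.signal import stft

def track_dominant_frequency(sig, fs=247.0, win_s=0.81):
    """Frequency of the spectral maximum for every STFT window position."""
    nperseg = int(win_s * fs)
    f, t, Z = stft(sig, fs=fs, nperseg=nperseg)
    return t, f[np.argmax(np.abs(Z), axis=0)]

# Applied to the band-pass-filtered 3rd-harmonic component of LCAD, head
# displacement and pulse, this yields frequency-vs-time curves as in Fig. 8.
```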
Fig. 8 Variations in time of the 3rd harmonic components from the spectra of the signals LCAD (red line), head movements (blue line) and pulse (black line), as well as the variation in time of the selected maximum of the correlation function (green dashed line), between LCAD of the left eye and blood pulsation (top) and between head movements and pulse (bottom), using the time window of 0.81 seconds length

The frequencies of the 3rd harmonics of the analyzed signals vary in time in a range from around 3.3 to 4.1 Hz. We can see that clearer local time shifts exist between the variations in time of the 3rd harmonics of the AP head displacements (blue line) and the pulse (black line) (bottom plots), in comparison to the local time shifts existing between the LCAD (red line) and the pulse (black line) (top plots). The values of the time shift calculated between the two considered pairs of signals vary in a range from around 0.07 to 0.12 s, as the green dashed plots confirm. This can be connected with the frequency value.

IV. CONCLUSIONS

We found some interesting relationships between the axial corneal displacements, the anterio-posterior head displacements and the pulsatile blood flow using the cross-correlation function. We have noticed that the coherence values for the analyzed signals' harmonics associated with the heart rate are higher for the pair of signals AP head displacements and pulse than for LCAD of the left eye and pulse. We can also observe a greater similarity between the frequency variations in time for the 3rd harmonics of the AP head displacements and the blood pulsation. The reason can be related to the fact that the ocular pulse phenomenon is influenced by many factors. Parameters such as the mechanical properties of the cornea, the propagation of the IOP or the rotational eye movements disturb a direct relation between LCAD and pulse. Applying an appropriate length of the time window in the calculation of the correlation function for the 3rd harmonics of the two considered signals helped to obtain more details of the time shift variations. We can see that the time shift value depends on the frequency, but this relationship is not clear yet. Therefore, we need to examine more subjects and find an unambiguous parameter describing this phenomenon. The obtained results might be useful for analyzing the influence of the blood pulsation on the eye deformation and the IOP variations. Further measurements can lead us to a noninvasive method for diagnosing ocular and cardiovascular diseases.

ACKNOWLEDGMENT
REFERENCES 1. Best M, Kelly T, Galin M (1970) The ocular pulse-technical features. Acta Ophthalmol 48(3):357–367 2. Hørven I, Nornes H (1971) Crest time evaluation of corneal indentation pulse. Acta Ophthalmol 86(1):5–11 3. Langham M.E, Farell R.A, O'Brien V et al. (1989) Blood flow in the human eye. Acta Ophthalmol Suppl 191:9–13 4. Suzuki I (1962) Corneal pulsation and corneal pulse waves. Jpn J Ophthalmol 6:190-194 5. Trew D.R, James C.B, Simon H.L et al. (1991) Factors influencing the ocular pulse – the heart rate. Graefes Arch Clin Exp Ophthalmol 229: 553–556 6. Kasprzak H, Iskander D.R (2007) Spectral characteristics of longitudinal corneal apex velocities and their relation to the cardiopulmonary system. Eye 21(9):1212–1219 7. Ravalico G, Toffoli G, Pastori G et al. (1996) Age-related ocular blood flow changes. Invest Ophthalmol Vis Sci 37:2645-2650 8. James C.B, Trew D.R, Clark D et al. (1991) Factors influencing the ocular pulse-axial length. Graefes Arch Clin Exp Ophthalmol 229:341–344 9. Kasprzak H, Iskander D.R (2010) Ultrasonic measurement of fine head movements in a standard ophthalmic headrest, IEEE Trans Instr Meas 59(1):164–170 10. Porat B (1996) A Course in Digital Signal Processing. John Wiley & Sons Inc, New York Author: Monika Danielewska Institute: Visual Optics Group, Institute of Physics, Wroclaw University of Technology Street: Wybrzeze Wyspianskiego 27 City: 50-370 Wroclaw Country: Poland Email:
[email protected]
IFMBE Proceedings Vol. 29
THE ANALOG PROCESSING AND DIGITAL RECORDING OF ELECTROPHYSIOLOGICAL SIGNALS

F. Babarada1, J. Arhip2 and C. Ravariu1

1 University Politechnica of Bucharest, Faculty Electronics Telecommunications and Information Technology, DCAE, ERG, Bucharest, Romania
2 S.C. Seletron Software si Automatizari SRL, Bucharest, Romania
Abstract— The paper presents the design and implementation of a chain for electrophysiological signal recording: collection, analog processing and digital data recording. The analog processing is based on a dynamic range compressor circuit composed of an automatic gain control with a recovery delay to minimize distortions, thermal behavior compensation and an adaptation stage for a level meter. At the compressor output a clipper was added, which must catch all the transitions that escape the dynamic range compressor, and at the clipper output a lowpass filter that cuts the high frequencies abruptly. The data vector recording is performed by a microcontroller with strong internal resources, including a ten-bit A/D conversion port.

Keywords— bioelectronics, analog processing, digital recording.
I. THE ELECTROPHYSIOLOGICAL SIGNALS ACQUIRING
The paper presents an analog processing and digital recording system for electrophysiological signals, usable in medical applications such as ECG, EEG, EMG, etc. [1]. The electrodes are the electric conductors, together with the contact electrolyte, used for collecting the electrophysiological signals. For vector data storage, the design of the source-electrodes-amplifier ensemble is very important. For electrodes with a large contact area, exceeding 100 µm², it is most important to minimize the amplifier noise, while for electrodes with a small contact area the noise introduced by the electrodes begins to be the most significant [2]. As a result of modeling the source-electrode ensemble, it is recommended that the amplifier be implemented as a low-noise, low-distortion transimpedance amplifier stage followed by a lowpass filter. For very low electrophysiological signals a differential amplifier is necessary, because of its high common-mode rejection of parasitic signals [3].

II. THE ELECTROPHYSIOLOGICAL SIGNALS PROCESSING
As in the case of many concepts found in engineering, automatic gain control was also discovered by natural selection. For example, in human vision, calcium dynamics in the retinal photoreceptors adjust gain to suit light levels [4].

A. The automatic gain control

Automatic gain control (AGC) is an adaptive system found in many electronic devices. The average output signal level is fed back to adjust the gain to an appropriate level for a range of input signal levels. For example, without AGC the sound emitted from an AM radio receiver would vary to an extreme extent from a weak to a strong signal; the AGC effectively reduces the volume if the signal is strong and raises it when it is weaker. AGC algorithms often use a proportional-integral-differential controller. Other applications include radar, audio/video amplifiers [5] and biological signal processing. The basic components of the compressor are the U9 and U10 integrated circuits, which bias the D1, D2 diodes (fig. 1) at the knee of their I-V curve. The input voltage is in the range of 10 to 300 mVp and the output voltage is in the range of 5 to 10 mVp. The voltage command of the AGC is in the range of 300 to 600 mVdc. The resistor R3 allows the circuit to be balanced and the output voltage to be adjusted so that no distortion is produced at the output when gain reduction is active. In order to provide the voltage command of the AGC (Vcaa) we chose a feedback configuration. This design contains the amplifier, composed of U11, R4 and R5, with a gain of around 101; the full-wave rectifier, composed of D3, D4, R6, R7 and U12, which brings the signal to its absolute value; and the positive peak detector realized with D6 and C12, connected through the half-voltage divider R8, R9. The voltage over the capacitor C12 is exactly the voltage command of the automatic gain control, Vcaa. The capacitor C12 is discharged through the diode D7. This diode is reverse polarized by a voltage greater than Vcaa, namely the voltage produced by diode D5, which is not halved and charges the capacitor C13 to the absolute value through resistances R10 and R11. When the input signal amplitude decreases, the voltage Vcaa remains constant until C13 is discharged through R11 and R10. Thus, in a transient simulation at 1 kHz, the amplification remains constant for 5 ms and then increases over 15 ms (fig. 2).
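For intuition, a conceptual digital counterpart of this analog AGC behaviour (illustrative only; the paper's implementation is the analog circuit of fig. 1): a rectifier-fed peak detector with fast attack and slow release controls the gain toward a target level.

```python
import numpy as np

def agc(x, fs=48000.0, target=0.1, attack_s=0.005, release_s=0.015):
    """Digital AGC sketch with ~5 ms hold-like attack and ~15 ms recovery."""
    a_att = np.exp(-1.0 / (attack_s * fs))    # faster envelope rise
    a_rel = np.exp(-1.0 / (release_s * fs))   # slower envelope fall (delay)
    env, out = target, np.empty_like(x)       # start at target to avoid blow-up
    for i, s in enumerate(x):
        r = abs(s)                            # full-wave rectification
        a = a_att if r > env else a_rel
        env = a * env + (1.0 - a) * r         # peak detector with recovery
        out[i] = s * (target / env)           # gain reduction toward target
    return out
```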
To reduce the temperature dependence of the automatic gain control we used the IC stage realized with U13, the diode D8 and the resistance R12, resulting in the circuit of fig. 1.
Fig. 2 The 1KHz time simulation response for short time reduction of input signal from 30dB over the threshold to –20 dB under the threshold
The voltage command Vcaa can be adjusted with the resistive divider composed of resistances R13 and R14. In order to display the output level of the dynamic range compressor, we added a stage built around the operational amplifier U22 that adapts the connection and allows adjusting the zero and full-scale points for a linear LED bargraph display.

B. The clipper

Because of transients passing through the automatic gain control circuit, signals with high amplitude can get through it; to guarantee a bounded output signal we added a clipper circuit, which must catch all the transitions that escape the dynamic range compressor. This is done with two diodes, D10 and D11, connected in parallel (fig. 3). Each diode is reverse biased so that it does not conduct until the signal reaches a certain voltage. This voltage is set with a resistive divider consisting of R20 and R21 to a value of approximately 1 V and may be adjusted to set the clipper threshold. It is applied to the non-inverting input of the operational amplifier U18 and, from the output of U18, to the inverting input of the second operational amplifier U19. Between the AGC system and the clipper we install an adapting gain block. If this block operates at unity gain, it is practically transparent and the clipper operates at threshold. When the resistor R16 in the schematic of fig. 3 is set to zero ohms, this circuit is simply a unity-gain block. As R16 is increased, the upper audio frequencies are boosted, resulting in the family of curves in fig. 4. The resistance R22 and diode D9 are introduced for temperature compensation of the clipper output signal. The simulation results at three different temperatures are significantly improved using these components (fig. 5).
Fig. 5 The temperature compensated clipper output signal simulated at 0°C, 20°C and 50°C
C. The lowpass filter

Since the clipper circuit can create higher harmonics at its output, we add a filter to cut frequencies above 3 kHz. The filter must be a lowpass with a flat response that rolls off abruptly above the maximum passband frequency. Therefore we chose a fifth-order Chebyshev filter with 0,2 dB ripple in the passband (fig. 6). The lowpass filter circuit was optimized for the best transient response. The transient behavior of the whole signal processing chain was simulated with an input signal amplitude close to the threshold, 40 mV. Note the AGC entering into operation at a positive voltage command Vcaa (fig. 7).
Fig. 7 Transient simulation of the whole chain of signal processing
Fig. 4 The clipper driver frequency response
The frequency response of the entire chain corresponds to the block-level simulations and achieves the designed behavior. Thus, the compressor frequency response is smooth up to over 500 kHz, the driver stage of the clipper emphasizes high frequencies, and the low-pass filter cuts off abruptly the frequencies above 3 kHz (fig. 8). The analog processing module, including the AGC, clipper and filter, is shown in fig. 9.
Fig. 8 Frequency response of the whole chain of signal processing
III. THE ELECTROPHYSIOLOGICAL SIGNALS RECORDING

The electrophysiological signal acquisition chain begins at the source of the bioelectric signals coupled to the electrodes, followed by amplification, processing, analog-to-digital conversion and data storage in some file format. For the study of tissues, cell families or single cells, data must be stored for a sufficiently long period of time (that is, as a data vector), because of the periodicity of these functions. Acquisition and data storage are performed by an 8-bit microcontroller of the Atmel AVR series, namely the ATmega32, on a development board that has its own power source, a real-time clock circuit, an EEPROM memory, a LED display and a serial interface adapter (RS232 or RS485), fig. 10. This microcontroller has strong internal resources, allowing data acquisition and conversion through a 10-bit ADC with an eight-input multiplexer and its own high-accuracy voltage reference [6]. The voltages of interest are routed to the internal ADC through microcontroller port A, ADC0 to ADC7, fig. 10. The conversion of analog data to a digital vector is synchronized by an internal clock, which allows choosing different sampling rates. A conversion cycle starts by clearing the memory locations for the measured values. After that, every input is converted into a 10-bit word and temporarily stored in the internal RAM memory; then the next input is converted, and so on. At this moment we have eight 10-bit words representing one sample of the analog input signals. This sample is completed with the conversion time, extracted from the external Real Time Clock (RTC), the U10 chip, fig. 10. The obtained vector looks like: 5AYYMMDDHHmmsshhV0V1.....V7, where 5A (in hex) represents the "start record" label; YYMMDD is the current date (in ASCII): year, month, day; HHmmsshh is the current time (in ASCII): hour, minute, second, hundredths; and V0...V7 is the current vector in binary format (2 bytes for each entry). The whole record is now 24 bytes long and is stored in the external flash memory U2, fig. 10. This memory has 65536 bytes, allowing for over 2700 records. At a rate of 10 samples/second, that means there is enough space for more than 4 minutes of records. The acquired data can be extracted over a serial link, the chip U3 providing the RS232 interface, including hardware handshake through the RTS-CTS pair. The communication parameters have been chosen to meet the MODBUS specification: 1 START bit, 8 data bits, 9600 Baud, 1 even parity bit, 1 STOP bit. During the recording process, time information and recorded values are displayed cyclically. The recording circuit is presented in fig. 10-left, the pc-board in fig. 10-middle and the module in fig. 10-right.
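A hedged sketch of assembling one record of this format: the field widths below follow the text literally (label 0x5A, ASCII date/time, eight 2-byte values), which yields a longer record than the quoted 24 bytes, so the firmware's actual packing is evidently more compact; treat this only as an illustration of the layout.

```python
import struct
import datetime

def build_record(values):
    """values: eight 10-bit ADC words. Layout per the text: 5A label,
    ASCII YYMMDDHHmmsshh timestamp, then 2 bytes per value (assumed
    big-endian; the paper does not specify byte order)."""
    now = datetime.datetime.now()
    stamp = now.strftime("%y%m%d%H%M%S") + f"{now.microsecond // 10000:02d}"
    return b"\x5a" + stamp.encode("ascii") + struct.pack(">8H", *values)

rec = build_record([512, 100, 0, 1023, 77, 300, 256, 5])
print(len(rec), rec.hex())
```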
IV. CONCLUSIONS

The presented vector data collection, processing and recording system can use many input channels, which makes it possible to test simultaneously several versions of the source-electrodes-amplifier ensemble and of the processing channels in order to identify the optimal version. Later, this capability can be used for multipoint measurements or to increase the resolution. Using the specific data vector recording has the advantage of allowing offline study and development of new methods for processing and extracting the useful electrical signal. The paper focuses on signal processing because electrophysiological signals can have a high dynamic range and can easily be covered by artifact noise.
ACKNOWLEDGMENT

This work was supported by the ELECTROCELL 62-063 (2008-2011) project, financed by the Romanian National Authority for Scientific Research.
REFERENCES

1. Al. Rusu, N. Golescu, C. Ravariu, (2008) Manufacturing and tests of mobile ECG platform, IEEE Conf., Sinaia, Romania, 2008, pp 433-436
2. Florin Babarada, Janel Arhip, (2009) Electrophysiology Signal Data Vector Acquiring, Congress of Romanian Medical Association, Bucharest, 2009, pp 82
3. B. Firtat, R. Iosub, D. Necula, F. Babarada, E. Franti, C. Moldovan, (2008) Simulation, design and microfabrication of multichannel microprobe for bioelectrical signals recording, IEEE Int. Conf., Sinaia, Romania, 2008, pp 177-180
4. Rustem Popa, (2006) Medical Electronics. Matrix House, Bucharest
5. Florin Babarada, (2003) Audiofrequency Electronics Circuits Design, Publisher Printech, Bucharest, ISBN 973-652-884-7, 149p.
6. ATMega32 data sheet, http://www.atmel.com/dyn/resources/prod_documents/doc2503.pdf

Author: Florin Babarada
Institute: University Politehnica of Bucharest
Street: Bd. Iuliu Maniu 1-3
City: Bucharest
Country: Romania
Email:
[email protected]
Fig. 1 The automatic gain control

Fig. 3 The clipper

Fig. 6 The lowpass filter

Fig. 9 The analog processing module

Fig. 10 The digital recording module using the microcontroller with integrated Port-A analog-digital converter (circuit, pc-board, module)
Parameter Selection in Approximate and Sample Entropy-Complexity of Acute and Chronic Stress Response
T. Loncar Turukalo1, O. Sarenac2, N. Japundzic-Zigon2, and D. Bajic1
1 Faculty of Technical Sciences, Trg Dositeja Obradovica 6, Novi Sad, Serbia
2 School of Medicine, Dr Subotica 8, Belgrade, Serbia
Abstract— The paper discusses the parameters of ApEn and SampEn analysis in laboratory rats with different likelihoods of developing cardiovascular disease under acute and chronic exposure to stress. Pattern length, normalized threshold, time delay and pulse interval series length, as well as signal stationarity, were considered. Keywords— approximate entropy, sample entropy, heart rate variability, random process.
I. INTRODUCTION The quantification of signal complexity plays an important role in uncovering the mechanisms governing system dynamics. Several regularity measures, algorithmically similar to the Eckmann-Ruelle entropy, have been proposed, providing suitable quantification of regularity vs. randomness in data. The approximate entropy (ApEn) approach was the first to appear [1-2], providing regularity statistics applicable to short and noisy experimental series. The dependence on record length and the lack of relative consistency inherent to the ApEn approach led to the derivation of a new family of statistics, sample entropy (SampEn), to account for these relevant properties [3]. Beneath the relatively simple form of these statistics resides a fundamental problem of parameter selection. Three parameters must be specified for both statistics: m, r and τ, referred to as the pattern length (embedding dimension), the normalized threshold (tolerance, filter) and the time delay, respectively. The majority of applications follow the preliminary conclusions from [1], implementing the parameters m=2, r=0.1-0.2 times the standard deviation estimated from the signal, and τ=1 (adjacent samples). Recently, several studies criticized this black-box approach, questioning the choice of m and r and the influence of the data length N and of the sampling frequency [4-8]. Lu et al [6] derived analytical formulae for the threshold value at which ApEn reaches its maximal value when m ranges from 2 to 7 [7]. The choice of the parameter m, the embedding dimension, is closely related to N and r. Reliable reconstruction of the dynamics in an m-dimensional embedding space requires a large number of input points N. Yet a homogeneous subject state resulting in a stationary signal cannot be ensured for long
enough time to provide for a large N (at least not for heart rate analysis). Once the constraint on N is set, increasing m results in a too sparse embedding space and unstable entropy estimates for small values of r. As a reasonable choice of m, the studies [5-7] propose the False Nearest Neighbors (FNN) approach [9]. The issue of the parameter τ arises when the analyzed signals have long-range linear correlation [10-11]. Kaffashi et al [11] suggested that τ should be chosen as the first minimum of the autocorrelation function. The main purpose of this study is to select a stable working point that offers the most consistent tool for assessing physiologically meaningful results in an experiment that quantifies the complexity of the response to acute and to chronic stress applied to normal rats (NRM) and to borderline hypertensive rats (BHR). It is shown that the most unbiased results are achieved for threshold values that exceed the ones generally presumed. The other degrees of freedom (m and τ) are shown to have a minor impact.
II. MATERIALS AND METHODS A. Experimental Protocol Animals: outbred male Wistar rats and borderline hypertensive rats (BHR, F1 offspring of Wistar dams and SHR, spontaneously hypertensive, sires) weighing 330 ± 20 g were used. Rats were housed individually in Plexiglas cages with food and water ad libitum, under controlled laboratory conditions. The number of animals per experimental group was n=6. Surgery: ten days before the stress experiments, rats were submitted to surgery in which radiotelemetric probes (TA11PA-C40, DSI, Transoma Medical) were implanted in the abdominal aorta under combined ketamine and xylazine anesthesia, along with gentamicin and followed by metamizol injections for pain relief. Fig. 1 illustrates the timeline of exposure to shaker and restraint stress, which was the same for both types of rats (BHR and NRM), yielding 4 different experimental conditions: Shaker-NRM (SN), Shaker-BHR (SB), Restraint-NRM (RN) and Restraint-BHR (RB).
Fig. 1 Timeline of exposure to shaker and restraint stress with measured signals: BASE, FS, PFS, PLS. BP was recorded during stress and post-stress every day.
B. Approximate and Sample Entropy Pulse interval (PI) series were derived from the arterial BP as the interval between maxima in the pulse wave signal. After careful manual visual examination of the BP waveforms, artifacts were removed. The very slow signal component was removed using the approach proposed in [12]. Wide-sense stationarity of the time series was tested using the approach proposed in [13-14]. ApEn and SampEn are comprehensively described in [13]. In brief, given a time series [x(j)], j = 1, …, N, where N is the time series length, the vectors of length m, $X_m(1)$ to $X_m(N-(m-1)\tau)$, are defined as:

$X_m(i) = [x(i), x(i+\tau), \ldots, x(i+(m-1)\tau)]$ for $i = 1, \ldots, N-(m-1)\tau$   (1)
The distance between any two vectors i and j, $d_m(X_m(i), X_m(j))$, is defined as the maximum of the absolute differences between their respective scalar components:

$d_m(X_m(i), X_m(j)) = \max_{k=0,\ldots,m-1} |x(i+k\tau) - x(j+k\tau)|$   (2)
For each of the "templates" $X_m(i)$, i = 1, …, N-(m-1)·τ, the number $B_m^r(i)$ is found as the number of vectors $X_m(j)$ for which the distance $d_m(X_m(i), X_m(j))$ is below a predefined threshold value r. The estimate of the probability $C_i^m(r)$ that any vector $X_m(j)$ is within the distance r from the template $X_m(i)$ is expressed as:

$C_i^m(r) = B_m^r(i) \, / \, (N-(m-1)\tau)$   (3)
Another function,

$\Phi^m(r) = \frac{1}{N-(m-1)\tau} \sum_{i=1}^{N-(m-1)\tau} \ln C_i^m(r)$   (4)

is the average of the natural logarithms of the previous functions. ApEn is defined as:

$\mathrm{ApEn}(m, r, N, \tau) = \Phi^m(r) - \Phi^{m+1}(r)$   (5)
ApEn allows self-matches (i = j in Eq. (3)) to avoid the logarithm of zero, thus inducing a bias in the estimates. The SampEn approach eliminates the bias by making the following alterations: a) self-matches are excluded from $B_m^r(i)$; b) the numbers of sliding-window comparisons for template vectors of length m and m+1 are equalized; c) the summation and the logarithm in Eq. (4) exchange places:

$\mathrm{SampEn}(m,r,N,\tau) = \ln\!\left(\sum_{i=1}^{N-m\tau}(B_m^r(i)-1)\right) - \ln\!\left(\sum_{i=1}^{N-m\tau}(B_{m+1}^r(i)-1)\right)$   (6)
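To make Eqs. (1)-(6) concrete, a minimal NumPy sketch of both statistics follows (this is not the authors' code). Here r is an absolute threshold, while the paper expresses it as a fraction of the estimated standard deviation, and the quadratic-time template search favours clarity over speed.

import numpy as np

def _match_counts(x, m, r, tau=1):
    # B_m^r(i): number of templates X_m(j) within the distance of Eq. (2)
    # from X_m(i); self-matches are included and handled by the callers.
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    X = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])
    counts = np.empty(n, dtype=np.int64)
    for i in range(n):
        d = np.abs(X - X[i]).max(axis=1)   # Chebyshev distance, Eq. (2)
        counts[i] = np.count_nonzero(d <= r)
    return counts

def apen(x, m, r, tau=1):
    # ApEn(m, r, N, tau) = Phi^m(r) - Phi^{m+1}(r), Eqs. (3)-(5).
    def phi(mm):
        c = _match_counts(x, mm, r, tau)
        return np.mean(np.log(c / len(c)))  # Eq. (4); self-matches avoid log(0)
    return phi(m) - phi(m + 1)

def sampen(x, m, r, tau=1):
    # Eq. (6): self-matches excluded, equal numbers of length-m and -(m+1) templates.
    nt = len(x) - m * tau
    b = _match_counts(x, m, r, tau)[:nt] - 1
    a = _match_counts(x, m + 1, r, tau) - 1
    return np.log(b.sum()) - np.log(a.sum())

With the working point proposed below, a call would look like sampen(pi_series, m=3, r=0.5*np.std(pi_series), tau=1), since the threshold is specified as a fraction of the standard deviation.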
Stationarity is implicitly required for the reliable estimation of both statistics: the time-average estimates of statistical moments, such as sd, are meaningful only for stationary signals [15], and the threshold r is specified as a fixed percentage of the standard deviation (sd).

C. Parameter Selection and Stability

The values of the parameter m were set to the typical one, m=2, and to the value obtained using the FNN approach, m=3 [9]. The time delay parameter for which the autocorrelation function of the PI series reaches its first minimum is τ=2. The significance of the time delay parameter for PI analysis was explored in [16], resulting in insignificantly higher ApEn values and showing that the statistical dependence of adjacent PI samples is of little relevance. For these reasons the standard time delay τ=1 was applied. The initial preference for the threshold value was $r_{MAX}$, for which ApEn reaches its maximum, and $r_{THEOR}$, an estimate derived in [5,7] for which ApEn is in the vicinity of its maximal value:

$m = 2:\; r_{THEOR} = (-0.02 + 0.23\, sd_1/sd_2) \,/\, \sqrt[4]{N/1000}$   (7)

$m = 3:\; r_{THEOR} = (-0.06 + 0.43\, sd_1/sd_2) \,/\, \sqrt[4]{N/1000}$   (8)
The term $sd_1$ is the standard deviation of the differential series x(i) − x(i−τ), while $sd_2$ is the standard deviation of the bounded PI time series [5]. The formulae (7,8) were derived for human time series [5,7], but the accordance between the theoretical and empirical values of the threshold r for which ApEn reaches its maximum is excellent for signals taken from rats as well [16]. An equal length N of the analyzed time series is required, as both statistics are, to a different extent, sensitive to the record length. To illustrate this dependence, the signal length was gradually shortened from N=6000 to N=1000, remaining within the specified limits (10^m to 20^m [1]). A change in the absolute value of ApEn due to the change in N can lead to misleading conclusions about complexity change during the course of the experiment.
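Eqs. (7)-(8) translate directly into code once sd1 and sd2 are computed from the series. A small sketch, assuming sd2 is simply the standard deviation of the analyzed (bounded) PI series:

import numpy as np

def r_theor(x, m, tau=1):
    # Threshold estimates of Eqs. (7)-(8); returns a normalized (sd-relative) value.
    x = np.asarray(x, dtype=float)
    sd1 = np.std(x[tau:] - x[:-tau])   # sd of the differential series
    sd2 = np.std(x)                    # sd of the (bounded) series itself
    root4 = (len(x) / 1000.0) ** 0.25  # fourth root of N/1000
    a, b = {2: (-0.02, 0.23), 3: (-0.06, 0.43)}[m]
    return (a + b * sd1 / sd2) / root4

The heuristic adopted in the Conclusions then multiplies the mean of r_theor over all series by 1.9.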
Fig. 2 shows that if N=3000, ApEn($r_{THEOR}$) for FS exceeds the maximum in BASE conditions; if N=6000, the result is just the opposite for both values of m. Yet, if the threshold r exceeds $r_{MAX}$ and $r_{THEOR}$, the experimental results are consistent for all record lengths (Fig. 2). The lack of relative consistency presented in Fig. 2 is usually attributed to the bias induced by self-matching, which is a dominant but not the only factor, since the same effect is observed in the SampEn analysis of the same signals.

Fig. 2 Mean ApEn ± SE (standard error) for BHR rats in SHAKER stress: a1) N=3000, b1) N=6000

III. RESULTS

SBP and PI in shaker stress did not change significantly (Table 1), but the animal increased its vigilance, expecting more inputs and apt to use more control mechanisms; consequently, the system became more complex. In the case of restraint stress, the animal struggled in a defense that dominates the system inputs: SBP increased and the pulse interval shortened, while the system became less complex.

Table 1 PI [ms] and SBP [mmHg]

            BASE          FS            PFS           PLS
PI [ms]
SN          167.81±5.27   157.97±4.15   175.76±9.55   167.80±3.92
SB          186.56±9.25   175.82±7.10   189.00±7.69   181.95±9.67
RN          179.29±5.58   131.46±2.33   169.48±4.77   141.91±2.26
RB          183.45±3.93   153.21±5.95   150.54±2.62   143.93±5.81
SBP [mmHg]
SN          116.28±2.58   125.55±3.36   112.94±2.75   116.61±3.72
SB          134.87±3.98   149.55±2.85   138.26±2.80   139.45±3.19
RN          102.08±3.20   114.38±10.2   102.76±6.49   114.57±4.50
RB          134.45±4.82   150.25±5.54   140.68±7.54   145.45±4.42

Results are given as mean ± SE; statistical significance was assessed using a repeated-measures ANOVA test at levels p<0.05, p<0.01 and p<0.005, indicated in the original table by shades of gray (the stronger the significance, the darker the color).

Another suspicion is that a slight change in the number of template matches $B_m^{r_{THEOR}}(i)$, e.g. induced by noise, would alter the entropy estimate. To verify this assumption, a set of experiments was made: each number $B_m^r(i)$ in Eq. (3) is randomly altered by adding a uniform equiprobable random variable z taking values from {−1, 0, 1}, with the constraint that $B_m^r(i)+z$ can be neither zero nor negative. Then a new, experimental $\mathrm{ApEn}_{EXPER}$ is evaluated for gradually increasing r, and its relative difference with respect to the initial ApEn is expressed as:

$\mathrm{DIFF}_{EXPER} = \frac{\mathrm{ApEn}_{EXPER} - \mathrm{ApEn}}{\mathrm{ApEn}} \cdot 100 \ [\%]$   (9)

This difference, averaged over all animals, is presented in Fig. 3, showing that ApEn estimates become insensitive to minor changes of $B_m^r(i)$ for threshold values that exceed $r_{MAX}$ and $r_{THEOR}$. The results were similar for all the other experimental outcomes, both for ApEn and for SampEn.

Fig. 3 Relative difference of experimental $\mathrm{ApEn}_{EXPER}$ with respect to the original ApEn. Gray segments show the theoretical threshold range

Theoretical threshold values increase in stress, distinguishing different types of stressors: in shaker stress the FS $r_{THEOR}$ reaches the value of $r_{THEOR}$ for isodistributional surrogates and Gaussian data, while in restraint stress the threshold increases but remains below the values of $r_{THEOR}$ for randomized data. Fig. 4 presents the mean values calculated for BASE and FS for normotensive rats.

Fig. 4 Theoretical threshold $r_{THEOR}$ for different experimental settings and for original data, surrogate data and artificial Gaussian series (panels: NRM SHAKER and NRM RESTRAINT; x-axis: series length N)

The plots in Fig. 5 indicate that ApEn values increase in shaker stress (more in BHR rats) and decrease in restraint stress (more in NRM rats). During the PFS period the difference diminishes, but chronic exposure to the stressor leads to a distorted ApEn value in PLS, which deviates from BASE with a relative difference of 10%. However, statistically significant changes are observed only in the SB and RN protocols, during FS. SampEn differentiated acute stress, FS, significantly in all experimental protocols. Moreover, it showed a statistically significant difference in the PLS period for BHR rats, confirming their impaired adaptation ability.

Fig. 5 Entropy change (ApEn and SampEn panels); parameters: m=3, τ=1, r=0.5, N=4000
IV. CONCLUSIONS The ApEn(m,r) statistic offers a meaningful regularity comparison for the same set of parameter values and the same signal length N. The consequence of an inappropriate choice of N and r is a misinterpretation of experimentally induced regularity changes, caused by an inadequate number of pattern matches for small threshold values. The example given indicates that choosing a single value of r within the recommended limits (0.1sd-0.2sd) may lead to misleading results, especially for the typically small value of N=1000. The key to reliable threshold selection lies in Fig. 3, which imposes threshold values for which the relative distortion $\mathrm{DIFF}_{EXPER}$ becomes less than 1% and r belongs to the stable region of the plots. Yet, to avoid tedious ApEn calculation, a heuristic estimate that yields almost the same result is to calculate $r_{THEOR}$ for each time series, find the mean value and multiply it by 1.9. This method yields threshold values r=0.3 for m=2 and r=0.5 for m=3, regardless of τ. Despite the consideration that large values of r may be too coarse to enable process distinction, the above analysis indicates a stable and consistent entropy difference. The bias induced by self-matching is enhanced by the lack of pattern matches, for which the reasons are twofold: a sparse embedding space (small N) and small values of r. The flip-flop effect of the BASE and FS plots in Fig. 2 does not diminish with increasing N; rather, the inversion moves toward smaller values of r (Fig. 2b) and a stable difference between the plots is reached earlier. An inadequate selection of parameters resulting in poor probability estimates (Eq. 4) yields unstable ApEn behavior, which can be overcome by choosing the proposed threshold value.
ACKNOWLEDGMENT This research is funded by grants TR11022 and OI145062B.
REFERENCES
1. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA, Vol. 88, 2297-2301
2. Goldberger AL and Pincus SM (1994) Physiological time-series analysis: What does regularity quantify? Am J Physiol (Heart Circ Physiol), Vol. 266, H1643-H1656
3. Richman JS and Moorman JR (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol, Vol. 278(6), H2039-H2049
4. Lake DE and Richman JS (2002) Sample entropy analysis of neonatal heart rate variability. Am J Physiol Regul Integr Comp Physiol, Vol. 283, R789-R797
5. Chen X, Solomon IC and Chon KH (2005) Comparison of the use of approximate entropy and sample entropy: application to neural respiratory signal. Proc. of the 27th IEEE EMBS Ann. Conf., 4212-4216
6. Lu S, Chen X, Kanters JK et al. (2008) Automatic selection of the threshold value r for approximate entropy. IEEE Trans. Biomed. Eng., Vol. 55, No. 8, 1966-1972
7. Chon KH, Scully CG and Lu S (2009) Approximate entropy for all signals. IEEE Engineering in Medicine and Biology, Vol. 28, No. 6, pp. 18-23
8. Castiglioni P and Di Rienzo M (2008) How the threshold "r" influences approximate entropy analysis of heart-rate variability. Computers in Cardiology, Vol. 35, pp. 561-564
9. Kennel MB and Brown R (1992) Determining embedding dimension for phase-space reconstruction using a geometrical construction. Physical Review A, Vol. 45, No. 6, 3403-3411
10. Govindan RB, Wilson JD, Eswaran H et al. (2007) Revisiting sample entropy analysis. Physica A, Vol. 376, 158-164
11. Kaffashi F, Foglyano R, Wilson CG et al. (2008) The effect of time delay on approximate and sample entropy calculations. Physica D, Vol. 237, 3069-3074
12. Tarvainen MP, Ranta-aho PO and Karjalainen PA (2002) An advanced detrending approach with application to HRV analysis. IEEE Trans. Biomed. Eng., Vol. 49, No. 2, 172-174
13. Karvajal R et al. (2002) Dimensional analysis of HRV in hypertrophic cardiomyopathy patients. IEEE Engineering in Medicine and Biology, 21(4), pp. 71-78
14. Bendat JS, Piersol AG (1986) Random Data: Analysis and Measurement Procedures. Wiley Series in Probability and Statistics, New York
15. Papoulis A (1984) Probability, Random Variables and Stochastic Processes. McGraw-Hill International Edition
16. Loncar Turukalo T, Bajic D, Sarenac O et al. (2009) Environmental stress: approximate entropy approach revisited. Proc. of the 31st IEEE EMBS Ann. Conf., 2009, pp. 1804-1807
Author: Dragana Bajic
Institute: Faculty of Technical Sciences
Street: Trg Dositeja Obradovica 6
City: Novi Sad
Country: Serbia
Email: [email protected]
The Importance of Uterine Contractions Extraction in Evaluation of the Progress of Labour by Calculating the Values of Sample Entropy from Uterine Electromyogram
J. Vrhovec1,2, D. Rudel1, and A. Macek Lebar2
1 MKS Electronic Systems, Rozna dolina C. XVII/22b, 1000 Ljubljana, Slovenia
2 University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, 1000 Ljubljana, Slovenia
Abstract— We evaluated the complexity of uterine electromyographic activity during labour with a delay in progress by calculating values of sample entropy. We compared monitoring of the labour by values of SampEn calculated from successive nonoverlapping segments of the uterine EMG signal and by values of SampEn calculated from the EMG signal during successive uterine contractions. It is possible to detect a delay in the progress of labour by both principles. However, because extraction of the uterine contractions from the uterine EMG signal is time consuming, and because of the higher variability of the values of SampEn calculated from the EMG signal during successive uterine contractions, we recommend the first principle. Moreover, the method could be implemented on-line and could thus be used as a diagnostic tool in a delivery room. Keywords— uterine electromyogram, contractions, successive nonoverlapping intervals, sample entropy.
I. INTRODUCTION Uterine electromyography (EMG) is a monitoring technique developed for evaluating and recording the electrical activity produced by uterine muscles. It has been shown that this electrical activity is highly correlated with uterine contractions; therefore, it is speculated that the technique has important clinical potential [1]. The majority of uterine EMG activity lies between 0.1 Hz and 3 Hz [2]. Over the years, different research groups have used different processing of uterine EMG signals. Because the activity of uterine muscles is much greater during uterine contractions than between them, most studies have focused on the EMG signal recorded during uterine contractions. The amplitude distribution of the EMG signal during uterine contractions, its power spectrum and its mean, median and/or peak frequency have been studied [3,4,5]. The frequency content of electromyographic changes during contractions in pregnancy was evaluated with wavelet decomposition [6]. The possible nonlinear nature of the EMG signal recorded during uterine contractions was tested using different methods [7]. A comparison of uterine EMG activity and tocographic recordings of uterine contractions showed that both methods agree in the number of contractions [2]. Because uterine
contractions are variable during pregnancy as well as during labour, automatic extraction of successive uterine contractions from the uterine EMG signal is still subject to errors [2]. The extraction is usually done manually and is therefore time consuming and impossible to use in clinical practice; consequently, uterine EMG signals recorded during uterine contractions are mostly analyzed afterwards [3,4,7,8,9,10]. In our previous studies we demonstrated that analysis of EMG activity recorded during labour can give valuable information about the progress of the labour [11,12]; for instance, calculation of sample entropy (SampEn) from successive nonoverlapping segments of the uterine EMG signal allows detection of a delay in the progress of labour. In this study we compared monitoring of the labour by values of SampEn calculated from successive nonoverlapping segments of the uterine EMG signal and by values of SampEn calculated from the EMG signal during successive uterine contractions. We focused on the importance of uterine contraction extraction in evaluating the progress of labour on the basis of the uterine EMG signal.
II. METHODS

A. Sample Entropy

SampEn is the negative natural logarithm of the probability that two sequences similar for m points remain similar at the next point, where self-matches are not included in calculating the probability [13,14,15,16,17,18,19]. Formally, given N data points from a time series {x(n)} = {x(1), x(2), …, x(N)}, to define SampEn one should follow these steps:

1. Form N−m+1 vectors X(1), …, X(N−m+1) defined by X(i) = [x(i), x(i+1), …, x(i+m−1)], for 1 ≤ i ≤ N−m+1. Those vectors represent m consecutive values of the signal, commencing with the i-th point. Calculate the distance between X(i) and X(j), d[X(i), X(j)], as the maximum absolute difference between their respective scalar components:

$d[X(i), X(j)] = \max_{k=0,\ldots,m-1} |x(i+k) - x(j+k)|$   (1)

2. For a given X(i), count the number of j (1 ≤ j ≤ N−m, j ≠ i) such that the distance between X(i) and X(j) is less than or equal to r·SD:

$B_r^m(i) = \frac{1}{N-m-1} \sum_{j=1, j\neq i}^{N-m} \Theta(r \cdot SD - d[X(i), X(j)])$   (2)

3. Calculate $B_r^m$ as:

$B_r^m = \frac{1}{N-m} \sum_{i=1}^{N-m} B_r^m(i)$   (3)

4. Increase the dimension to m+1 and calculate $A_r^m(i)$ analogously, with the vectors now of length m+1:

$A_r^m(i) = \frac{1}{N-m-1} \sum_{j=1, j\neq i}^{N-m} \Theta(r \cdot SD - d[X(i), X(j)])$   (4)

5. Calculate $A_r^m$ as:

$A_r^m = \frac{1}{N-m} \sum_{i=1}^{N-m} A_r^m(i)$   (5)

6. Finally, the sample entropy is defined as:

$\mathrm{SampEn}(m, r, N) = -\ln\!\left(\frac{A_r^m}{B_r^m}\right)$   (6)

B. Uterine EMG Measuring and Processing

The investigation protocol is described in detail in the study by Pajntar et al [9]; therefore only its basic features are given here. It was approved by the National Medical Ethics Committee. Informed consent was obtained from all 32 patients enrolled in the study. Patients were undergoing their first labour, at ages from 19 to 29 years. The values of the cervical dilatation and the fetal head station were recorded in the partogram, which was carefully drawn over the whole course of the labour (Figure 1). Intrauterine pressure was measured to follow the frequency and intensity of uterine contractions. The EMG signals used in this study were recorded on the cervix. They were sampled at 18.2 Hz, low-pass filtered at 5 Hz and saved. For the purpose of this study the EMG signals were first detrended and band-pass filtered (0.1–3 Hz) using a second-order Butterworth digital filter. To calculate the values of SampEn on an appropriate number of points, we decreased the sampling rate by keeping every second sample, starting with the first sample. We used two types of intervals for calculating values of SampEn. In the first type, we extracted the uterine contractions from the EMG signal according to the uterine contractions detected by intrauterine pressure measurements, and calculated one value of SampEn from each uterine contraction. In the second, values of SampEn were calculated from successive nonoverlapping segments of 4500 data points, which means that they were available every 8.2 minutes. All data processing was done in Matlab.

Fig. 1 A partogram, a simple, inexpensive tool providing a continuous pictorial overview of the labour. Cervical dilation is marked with squares and head station with circles. A labour with a delay in progress is shown, as there is no progress in head descent for approximately two hours
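The preprocessing chain of section II.B (detrend, 0.1–3 Hz second-order Butterworth band-pass, then keeping every second sample of the 18.2 Hz recording) is a few lines with SciPy; zero-phase application is an assumption, since the paper does not state how the filter was run.

import numpy as np
from scipy import signal

FS = 18.2  # sampling rate of the cervical EMG recordings, Hz

def preprocess_emg(x):
    # Detrend, band-pass 0.1-3 Hz with a 2nd-order Butterworth filter,
    # then halve the sampling rate by keeping every second sample.
    x = signal.detrend(np.asarray(x, dtype=float))
    b, a = signal.butter(2, [0.1, 3.0], btype="bandpass", fs=FS)
    y = signal.filtfilt(b, a, x)  # zero-phase application (an assumption)
    return y[::2]                 # 9.1 Hz after decimation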
III. RESULTS AND DISCUSSION For presentation in this report we randomly chose a labour with a delay in the active phase from our database [12]. The delay in the progress of the labour was diagnosed according to the partogram (Figure 1), but it is also visible in the values of SampEn (Figures 2 and 3) calculated from the uterine EMG signal. In the case of normal labour, the values of SampEn start to decrease at the beginning of the active phase of the labour, which is considered to begin at 4 cm cervical dilatation, and keep a decreasing trend while approaching the delivery [11]. During a delay in progress in the active phase of the labour, the decreasing trend of the values of SampEn stops as the delay begins. The values of SampEn rise considerably and fall again as the delay in the progress of the labour ends. Figures 2 and 3 show the values of SampEn calculated from the uterine EMG signal measured during the labour presented by the partogram in Figure 1. In Figure 2 the values of SampEn calculated from segments of the uterine EMG signal measured during each uterine contraction are shown. Each point in Figure 2 corresponds to one uterine contraction. The border between the latent and the active phase of the labour was determined by the obstetrician as 3–4 centimeters cervical dilatation; it is marked with the grey zone in the middle of Figure 2. Values of SampEn in the latent phase of the labour ranged from 0.5664 to 1.2420 (median 0.9352, variance 0.0286). As the latent phase proceeds they start to decrease. The decreasing trend is still
present at the beginning of the active phase of the labour, when the values of SampEn fall to the range from 0.0453 to 0.7403 (median 0.2878, variance 0.0275). During the delay in the active phase of the labour the values of SampEn rise to the range from 0.6870 to 0.9389 (median 0.7697, variance 0.0064). As the end of the delay approaches, the values of SampEn drop to values similar to those in the active phase of the labour.
Fig. 2 Values of SampEn calculated from segments of the uterine EMG signal measured during each uterine contraction. The grey zone in the middle represents the border between the latent (left of the zone) and active (right of the zone) phases of the labour
The values of SampEn indicate the complexity of the uterine EMG signal measured during uterine contractions. From the widely spread values of SampEn in the latent phase of the labour we conclude that the complexity of the EMG signal differs from contraction to contraction. Approaching the active phase of the labour, the complexity of the EMG signal measured during the contractions declines, as indicated by the reduction of the values of SampEn. At the beginning of the active phase of the labour the uterine EMG activity measured during uterine contractions seems to be quite regular, because the values of SampEn are low. The complexity of uterine EMG activity measured during uterine contractions increases significantly during the delay in the active phase of the labour and decreases again as childbirth approaches. In Figure 3 the values of SampEn calculated from successive nonoverlapping segments of the uterine EMG signal measured during the labour with a delay in the active phase are shown. The border between the latent and the active phase of the labour was determined by the obstetrician as 3–4 centimeters cervical dilatation; it is marked with the grey zone in the middle of Figure 3. Values of SampEn in the latent phase of the labour ranged from 0.6659 to 1.2126 (median 0.8420, variance 0.0200). As the latent phase proceeds they show a decreasing trend. At the end of the latent phase the values of SampEn ranged from 0.0439 to 0.4102 (median 0.2188, variance 0.0123). During the delay in the active phase of the labour the
values of SampEn rise to the range from 0.7189 to 0.8232 (median 0.7432, variance 0.0024). As the end of the delay approaches, the values of SampEn drop to values similar to those in the active phase of the labour.

Fig. 3 Values of SampEn calculated from successive nonoverlapping segments of the uterine EMG signal measured during a labour with a delay in the active phase. The beginning of the active phase was considered at 4 cm cervical dilatation, determined by the obstetrician. The grey zone in the middle represents the border between the latent (left of the zone) and active (right of the zone) phases of the labour

In this case the values of SampEn indicate the complexity of successive nonoverlapping segments of the uterine EMG signal. Approaching the active phase of the labour, this complexity declines, as indicated by the reduction of the values of SampEn. The complexity increases significantly during the delay in the active phase of the labour and decreases again as childbirth approaches. The values of SampEn calculated from successive nonoverlapping segments of the uterine EMG signal and those calculated from the EMG signal during successive uterine contractions in different labour stages are comparable. Both principles of SampEn calculation give the same course of the complexity measure, as shown by the median values of SampEn in the different stages of the labour. The delay in the active phase of the labour is also clearly noticed by both principles. The difference that can be observed between the two principles is the higher variability of the values of SampEn calculated from the EMG signal during successive uterine contractions. We used the variance as a measure of the amount of variation within the values of SampEn in different stages of labour. The variance is higher when SampEn values were calculated from the EMG signal during successive uterine contractions. It seems that each uterine contraction has its own complexity, even in the same labour stage.
IV. CONCLUSIONS Sample entropy, as a measure of the complexity of uterine EMG signals, indicates the course of labour. Lower values of sample entropy correspond to reduced complexity. The complexity of uterine EMG activity during normally progressing labour shows a decreasing trend approaching the delivery, which means that increasingly regular uterine activity over time assures normal delivery. If the complexity of the uterine EMG signal increases again during the active phase of the labour, a delay in the progress of labour may be expected. To detect such a delay, values of SampEn have to be calculated either from successive nonoverlapping segments of the uterine EMG signal or from EMG signals taken only during successive uterine contractions. Both principles of calculation are appropriate. However, because the variability of the values of SampEn calculated from the EMG signal during successive uterine contractions is greater, and because extraction of the uterine contractions from the uterine EMG signal is time consuming, we recommend the first principle. An additional reason is that the method could be implemented on-line and could thus be used as a diagnostic tool in a delivery room.
ACKNOWLEDGMENT The study was supported by the Slovenian Research Agency and the Ministry of Higher Education, Science and Technology.
REFERENCES
1. Rihana S et al. (2009) Mathematical modeling of electrical activity of uterine muscle cells. Med Biol Eng Comput, Vol. 47, pp 665-675
2. Jezewski J et al. (2005) Quantitative analysis of contraction patterns in electrical activity signal of pregnant uterus as an alternative to mechanical approach. Physiol Meas, Vol. 26, pp 753-767
3. Iams JD et al. (2002) Frequency of uterine contractions and the risk of spontaneous preterm delivery. N Engl J Med, Vol. 346, pp 250-255
4. Doret M et al. (2005) Uterine electromyography characteristics for early diagnosis of mifepristone-induced preterm labor. Am J Obstet Gynecol, Vol. 105, pp 822-830
5. Garfield RE et al. (2002) Uterine electromyography and light-induced fluorescence in the management of term and preterm labor. J Soc Gynecol Investig, Vol. 9, pp 265-275
6. Diab MO et al. (2009) An unsupervised classification method of uterine electromyography signals: classification for detection of preterm deliveries. J Obstet Gynaecol Res, Vol. 35, pp 9-19
7. Radhakrishnan N et al. (2000) Testing for nonlinearity of the contraction segments in uterine electromyography. Int J Bifurcat Chaos, Vol. 10, pp 2785-2790
8. Luria O et al. (2009) Effects of the individual uterine contraction on fetal head descent and cervical dilatation during the active stage of labor. European Journal of Obstetrics & Gynecology and Reproductive Biology, Vol. 144, pp S101-S107
9. Pajntar M et al. (1987) Electromyographic observations on the human cervix during labor. Am J Obstet Gynecol, Vol. 156, pp 691-697
10. Leman H et al. (1999) Use of the electrohysterogram signal for characterization of contractions during pregnancy. IEEE Trans on BME, Vol. 46, pp 1222-1229
11. Vrhovec J et al. (accepted) A uterine electromyographic activity as a measure of labour progression. Slovenian Medical Journal
12. Vrhovec J (2009) Evaluating the progress of the labour with sample entropy calculated from the uterine EMG activity. Electrotechnical Review, Vol. 79, pp 165-170
13. Lake DE et al. (2002) Sample entropy analysis of neonatal heart rate variability. Am J Physiol, Vol. 283, pp R789-R797
14. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA, Vol. 88, pp 2297-2301
15. Rezek IA et al. (1998) Stochastic complexity measures for physiological signal analysis. IEEE Trans on BME, Vol. 45(9), pp 1186-1191
16. Richman JS et al. (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol, Vol. 278, pp H2039-H2049
17. Abasolo D et al. (2006) Entropy analysis of the EEG background activity in Alzheimer's disease patients. Physiol Meas, Vol. 27, pp 241-253
18. Govindan RG et al. (2006) Revisiting sample entropy analysis at www.elsevier.com/locate/physa
19. Rezek IA et al. (1998) Stochastic complexity measures for physiological signal analysis. IEEE Trans on BME, Vol. 45
Author: Jerneja Vrhovec
Institute: Faculty of Electrical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
Simultaneous Pneumo- and Photoplethysmographic Recording of Oscillometric Envelopes Applying a Local Pad-Type Cuff on the Radial Artery
R. Raamat, K. Jagomägi, J. Talts and J. Kivastik
Institute of Physiology, University of Tartu, Tartu, Estonia

Abstract— Oscillometric envelopes obtained simultaneously from a pneumatic capsule and a photoelectric sensor adjusted over the radial artery were studied in 17 healthy subjects. The results showed an advantage of photoplethysmographically over pneumoplethysmographically recorded data when height-based oscillometric estimation was employed. The group-averaged difference 'oscillometric estimate minus auscultatory reference' and its standard deviation were considerably smaller for systolic as well as for diastolic pressure in the case of photo recording, being –0.6 (5.4) and 1.2 (2.7) mmHg, respectively. In the case of pneumo recording, these parameters were 12.1 (11.0) and –6.2 (8.4) mmHg, respectively. A study of the pad-type cuff mechanics demonstrated that its compliance changes in response to pressure were several times smaller than the corresponding changes for conventional arm cuffs. Keywords— oscillometric envelope, radial noninvasive blood pressure, cuff transfer function, photoplethysmographic.
I. INTRODUCTION

Oscillometric blood pressures are typically determined from the envelope of successive oscillometric pulse amplitudes obtained from the occlusive cuff during its inflation or deflation. The highest point of the envelope curve is generally regarded as the mean arterial pressure (MAP) [1, 2]. Several types of criteria have been used to estimate systolic (SBP) and diastolic (DBP) blood pressures from the oscillometric envelope [3]. Among them, the height-based criterion has been used the most. In the height-based approach, the systolic and diastolic pressures are determined using specific fractions of the maximum oscillation amplitude [4]. These fractions are known as characteristic ratios (systolic and diastolic, respectively). Although manufacturers of oscillometric devices keep their detailed algorithms for determining SBP and DBP secret, the above-mentioned characteristics are basic to accurate blood pressure estimation. Specific studies have demonstrated an effect of the shapes of oscillometric pulse amplitude envelopes on the differences between auscultatory and oscillometric blood pressure measurements [5]. Consequently, recording oscillometric envelopes of appropriate shape and stability remains a topical issue for the oscillometric method.

Oscillometric wrist monitors have shown lower reliability compared to upper-arm devices. On the other hand, oscillometric wrist devices have small dimensions and are highly user-friendly. It is worth mentioning that, thanks to collateral circulation in the human wrist area, it is possible to compress only a single (radial) artery while the other (ulnar) artery continues to supply blood to the hand. This means that repeated or long-term measurements do not cause venous congestion or ischemic pain distal to the cuff. In the present study we analyze the pneumo- and photoplethysmographically recorded oscillometric envelopes obtained from the radial artery using a local pad-type cuff. By applying height-based criteria we assess the accuracy of blood pressure estimation from oscillometric patterns obtained by the aforementioned methods of recording. A dependence of the transfer function of the pad-type cuff on the applied pressure is also observed.
II. METHODS AND MATERIALS
Subjects: A group of 17 healthy volunteers, 10 females and 7 males, aged from 18 to 35, was studied. The study was approved by the Ethics Committee of the University of Tartu. Experimental design and protocol: The subject rested in a supine position on a couch. Systolic and diastolic blood pressures were measured at the subject's left upper arm employing the auscultatory method (Precisa N sphygmomanometer, Rudolf Riester GmbH Co). SBP and DBP were taken twice before and twice after the recording of oscillometric envelopes from the radial artery. An experimental pad-type cuff (capsule), containing an elastic membrane (Fig. 1), was attached to the left wrist by a flexible Velcro strap and a U-shaped aluminum clip. The latter served to locally counterbalance the force exerted by the inflated capsule on the strap, thereby preventing pressurization of the ulnar artery. Before adjusting the capsule (diameter 40 mm, thickness of the elastic vinyl membrane 0.1 mm), a photoplethysmographic sensor was attached over the radial artery by means of two adhesive strips. The correct place for the sensor was found by palpation. The photosensor was positioned in a
way that the artery remained between the LED and photodiode, thus not affecting the correct pressure transfer from the elastic membrane to the artery.
Fig. 1 Pad-type wrist cuff: 1 – photoplethysmographic sensor, 2 – pneumatic capsule with an elastic membrane, 3 – Velcro strap, 4 – metallic clip.

After an initial equilibrium period of 5 minutes, a 2-minute calibration of the wrist capsule as a volume sensor was performed. The pressure in the capsule was rapidly raised to 160 mmHg and then lowered stepwise. Three pressure levels were tested; each step lasted for 30 s, during which a repeated external volume modulation was performed by the calibration syringe, similarly to [6]. This calibration procedure allowed us to quantitatively assess the dependence of the cuff transfer function on the cuff pressure. After this, three cuff deflations were carried out with a 30 s interval between them. The capsule pressure was rapidly raised to 160 mmHg and then linearly lowered at a rate of 2 mmHg/s. The total time for the measurement session in each subject was approximately 10 min.
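The calibration above turns the capsule into a calibrated volume sensor: compliance follows directly as injected volume over the resulting pressure deflection. A minimal Python illustration; the 1.2 mmHg deflection is an invented value within the 1–3 mmHg range quoted below:

def capsule_compliance(delta_v_mm3, delta_p_mmhg):
    # Compliance = injected volume / resulting pressure deflection (mm3/mmHg);
    # its inverse is the cuff transfer factor discussed in the text.
    return delta_v_mm3 / delta_p_mmhg

# e.g. the 50 mm3 calibration volume producing a 1.2 mmHg fluctuation
# (the 1.2 mmHg figure is illustrative):
print(capsule_compliance(50, 1.2))  # ~41.7 mm3/mmHg, the range seen in Table 1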
Signal processing and data analysis: The cuff pressure was measured from the wrist capsule by a pressure transducer (0–200 mmHg, 0–100 Hz), and the optical transmittance signal by a photoplethysmograph containing a LED TSMG2700 (Vishay Corp) and a photodiode BPW34FS (Siemens Corp). The amplified and conditioned analog signals were digitized by an ADC (16-bit, 100 Hz) and transferred to the computer as in [7, 8]. Assuming that the Lambert-Beer law holds true in the tissues where the photoplethysmogram is measured [9, 10], a logarithmic photometer should be used to provide a linear relationship between the recorded light transmittance changes and the real blood volume changes. Since the applied photoplethysmograph did not contain a logarithmic amplifier, this operation was executed by the computer software before extracting the oscillometric pulses. In each of the 17 persons, the measurements during three cuff deflations were included in the analysis. For further evaluation, the envelopes were smoothed by a moving window with a width of 9 cardiac cycles (approximately 18–20 mmHg) and the points of maximum oscillation were found. To assess the quality of the obtained envelopes for oscillometric estimation, considering the shape while suppressing variation in the location of the points of maximum oscillation, all the envelopes related to one subject were shifted along the pressure axis and aligned at the point equal to the reference MAP for this subject. This operation was also expected to remove the brachial-to-radial pressure drop. The reference MAP was computed from the auscultatory SBP and DBP by the traditional formula [11] MAP = DBP + (SBP − DBP)/3. When executing the oscillometric estimation, a systolic characteristic ratio of 0.5 and a diastolic characteristic ratio of 0.8 were applied. To eliminate distortions in the oscillometric envelopes caused by possible changes in the cuff transfer factor over the full range of cuff pressure, the cuff transfer factor at different pressure levels was measured experimentally. Three pressure levels were tested: 160, 100 and 40 mmHg. A graded volume modulation was performed by an external syringe of 50 mm3 connected to the cuff. This modulation resulted in a small cuff pressure fluctuation (approximately 1–3 mmHg). Subsequently, the capsule transfer factor (or its inverse value, the compliance) at several pressure levels for every measurement session was estimated in true physical units (mmHg/mm3 or mm3/mmHg, respectively). Statistics: To test for the presence of significant differences, Student's paired t-test was used. For all hypotheses, a significance level of 0.05 was applied. Data are expressed as mean and standard deviation (in parentheses).
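The height-based readout just described reduces to a short routine: smooth the beat-by-beat amplitudes, take the envelope maximum as MAP, and read SBP and DBP where the envelope crosses 0.5 and 0.8 of that maximum on the high- and low-pressure sides. A sketch under those stated ratios; the linear interpolation is an implementation choice, not taken from the paper:

import numpy as np

def smooth(amp, w=9):
    # Moving-average smoothing over 9 cardiac cycles, as in the text.
    return np.convolve(amp, np.ones(w) / w, mode="same")

def height_based_bp(cuff_p, osc_amp, sys_ratio=0.5, dia_ratio=0.8):
    # cuff_p: per-beat cuff pressure during deflation (decreasing);
    # osc_amp: smoothed oscillation-amplitude envelope.
    k = int(np.argmax(osc_amp))
    map_est, a_max = cuff_p[k], osc_amp[k]
    # systolic side: envelope rises toward its maximum as pressure falls
    sbp = np.interp(sys_ratio * a_max, osc_amp[:k + 1], cuff_p[:k + 1])
    # diastolic side: envelope decays below MAP; reversed to be increasing
    dbp = np.interp(dia_ratio * a_max, osc_amp[k:][::-1], cuff_p[k:][::-1])
    return sbp, map_est, dbp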
III. RESULTS

Results of the measurement of cuff compliance changes related to the applied pressure are presented in Table 1. We preferred the variable 'compliance' to its inverse measure, the 'transfer function', as the former is widely used in the medical literature to characterize cuff viscoelastic properties.

Table 1 Individual and group-averaged changes in the compliance of the pad-type cuff as responses to the cuff pressure (n=17)

Subject    Compliance, mm3/mmHg                              Change, %
           P=40 mmHg  P=100 mmHg  P=160 mmHg  Average       P=40→100 mmHg  P=40→160 mmHg
1          39.2       37.9        36.0        37.7          -3.5           -8.3
2          42.9       43.7        41.3        42.6          1.8            -3.7
3          50.2       47.2        48.6        48.6          -5.9           -3.1
4          43.3       40.0        40.4        41.2          -7.6           -6.8
5          46.2       46.2        44.5        45.6          0.0            -3.7
6          41.8       44.4        41.1        42.4          6.1            -1.8
7          43.3       41.8        39.3        41.5          -3.5           -9.1
8          49.6       47.6        48.1        48.4          -4.0           -3.1
9          45.8       44.1        41.1        43.6          -3.7           -10.3
10         51.8       50.1        41.8        47.9          -3.2           -19.3
11         48.6       46.3        42.2        45.7          -4.8           -13.2
12         47.2       41.1        38.4        42.2          -13.0          -18.6
13         42.5       41.1        41.1        41.5          -3.4           -3.4
14         37.5       36.9        36.3        36.9          -1.7           -3.1
15         44.5       40.0        37.8        40.8          -10.0          -15.0
16         38.4       35.0        33.5        35.7          -8.8           -12.7
17         46.1       39.9        37.4        41.1          -13.5          -18.7
Mean (SD)  44.6 (4.1) 42.5 (4.1)  40.5 (4.0)  42.6 (3.8)    -4.6 (5.0)     -9.1 (6.2)

Results of oscillometric estimation are shown in Table 2. Its left part presents data averaged over the four auscultatory measurements on the brachial artery of each subject (as reference). The right part contains the pressure differences 'oscillometric minus auscultatory' for pneumoplethysmographic as well as photoplethysmographic recording. The exposed data are the differences for every subject averaged over three cuff deflations. Fig. 2 demonstrates a typical pair of oscillometric envelopes, obtained by the two types of recording in Subject 5.

Table 2 Individual and group-averaged results of oscillometric estimation compared to auscultatory reference (n=17)

           Auscultatory (reference)              Oscillometric – auscultatory
Subject    SBP,     DBP,    MAP,                 SBP, mmHg                DBP, mmHg
           mmHg     mmHg    mmHg                 photo      pneumo        photo    pneumo
1          126.0    80.0    95.3                 -4.2       2.6           0.8      -2.5
2          114.0    66.0    82.0                 6.1        21.5          0.9      -3.6
3          102.5    68.3    79.7                 0.2        15.7          -1.6     -5.1
4          118.8    74.0    88.9                 2.4        6.9           0.9      1.5
5          120.5    74.5    89.8                 -0.5       20.6          1.7      -15.5
6          124.8    67.0    86.3                 -9.4       19.9          2.1      -0.8
7          115.0    68.0    83.7                 -5.7       17.2          0.9      -9.8
8          123.5    69.3    87.3                 3.6        27.7          2.7      -17.8
9          104.0    62.5    76.3                 -2.6       0.2           3.2      0.4
10         110.8    64.3    79.8                 -5.0       0.2           3.4      -4.0
11         111.5    64.8    80.3                 -8.2       -7.4          4.2      -0.3
12         127.5    70.8    89.7                 11.7       14.8          2.9      -30.4
13         108.3    70.0    82.8                 0.5        23.2          -4.3     -7.6
14         123.3    64.8    84.3                 0.6        11.7          5.6      -2.6
15         90.0     58.0    68.7                 -1.1       4.5           -2.4     -1.0
16         100.0    58.0    72.0                 -3.9       -2.5          2.6      1.5
17         112.0    64.8    80.1                 5.3        28.5          -2.9     -7.3
Mean (SD)  113.7 (10.4) 67.3 (5.6) 82.8 (6.7)    -0.6 (5.4)a 12.1 (11.0)c 1.2 (2.7)b -6.2 (8.4)c

a Not significant (p = 0.65), calculated by Student's paired test
b Not significant (p = 0.08), calculated by Student's paired test
c Significant (p < 0.01), calculated by Student's paired test

Fig. 2 Normalized oscillometric envelopes for Subject 5. Maximum points of the photoplethysmographically (solid line) and pneumoplethysmographically (dashed line) recorded envelopes are shifted along the pressure axis and aligned at the pressure equal to the reference MAP for the subject.

IV. DISCUSSION

Cuff transfer factor: Measurements conducted in [12] demonstrated that the transfer factor for a normal-size arm cuff increased by 26% and by 42% when the cuff pressure was raised from 40 to 100 and from 40 to 160 mmHg, respectively. Our measurements revealed that for similar pressure changes, the corresponding compliance changes of the pad-type wrist cuff were –4.6% and –9.1%, respectively (Table 1). The different sign of the changes is caused by the inverse relation between the cuff transfer factor and the compliance. The smaller absolute values of the changes can be explained by the more rigid construction of the wrist capsule used. For comparison, an analogous change in the compliance of a wrap-type finger cuff was –14% during a pressure rise from 40 to 100 mmHg [6].
Pneumo- and photoplethysmographically recorded oscillometric envelopes: The study revealed that photoelectrically recorded oscillometric envelopes fit better for oscillometric estimation than pneumatically registered envelopes. The group-averaged difference 'oscillometric estimate minus auscultatory reference' and its standard deviation (Table 2) were considerably smaller for SBP as well as for DBP in the case of photo recording, being –0.6 (5.4) and 1.2 (2.7), respectively. In the case of pneumo recording, these parameters equaled 12.1 (11.0) and –6.2 (8.4), respectively. The former differences were statistically not significant while the latter were significant. The worse agreement in the case of pneumo recording is explained by Fig. 2, where a pair of normalized oscillometric envelopes is shown for Subject 5. The envelope labeled 'pneumo' is considerably wider than that labeled 'photo'. This difference causes an overestimation of SBP and an underestimation of DBP for Subject 5. An issue which needs addressing in oscillometric height-based estimation is the choice of appropriate systolic and diastolic characteristic ratios. According to Geddes [4], SBP corresponds to the point of 50% of maximum amplitude; for DBP the ratio is 80%. A mathematical model of oscillometry for the upper arm, introduced in [2], suggested systolic and diastolic ratios of 0.593 and 0.717, respectively. A recent study by Amoore et al. [5] showed that for a database of 243 oscillometric envelopes from 124 patients, recorded with simultaneous auscultatory blood pressure measurement, the mean values and standard deviations of the systolic and diastolic characteristic ratios were 0.49 (0.11) and 0.72 (0.12), respectively. The values of 0.5 for the systolic and 0.8 for the diastolic ratio were applied in determining the transfer function of cuffs in [10]. The latter characteristic ratios were also applied by us. Our study, like several others [13, 14], has demonstrated the advantages of photoplethysmographic recording, especially the possibility of more accurate oscillometric blood pressure estimation on the radial artery using a pad-type cuff. V. CONCLUSIONS
A local pad-type wrist cuff suffers less than conventional cuffs from the unfavorable dependence of the cuff transfer function on the cuff pressure. In most cases, the capsule used can be regarded as a linear volume sensor for recording volume oscillations. Photoplethysmographically recorded data fitted the height-based oscillometric estimation better than pneumoplethysmographically recorded data. The group-mean differences and variance 'oscillometric estimate minus auscultatory reference' were considerably smaller for systolic as well as for diastolic pressure in the case of photo recording.

ACKNOWLEDGMENT This work was supported by Grant 7723 from the Estonian Science Foundation.

REFERENCES
1. Mauck G, Smith C, Geddes L et al (1980) The meaning of the point of maximum oscillations in cuff pressure in the indirect measurement of blood pressure – Part II. J Biomech Eng 102:28–33
2. Drzewiecki G, Hood R, Apple H (1994) Theory of the oscillometric maximum and the systolic and diastolic detection ratios. Ann Biomed Eng 22:88–96
3. Ng K, Small C (1994) Survey of automated noninvasive blood pressure monitors. J Clin Eng 19:452–475
4. Geddes L, Voelz M, Combs C et al (1982) Characterization of the oscillometric method for measuring indirect blood pressure. Ann Biomed Eng 10:271–280
5. Amoore J, Vacher E, Murray I et al (2007) Effect of the shapes of the oscillometric pulse amplitude envelopes and their characteristic ratios on the differences between auscultatory and oscillometric blood pressure measurements. Blood Press Monit 12:297–305
6. Raamat R, Jagomägi K, Talts J (2007) Calibrated photoplethysmographic estimation of digital pulse volume and arterial compliance. Clin Physiol Funct Imaging 27:354–362
7. Jagomägi K, Raamat R, Talts J et al (2005) Recording of dynamic arterial compliance changes during hand elevation. Clin Physiol Funct Imaging 25:350–356
8. Raamat R, Talts J, Jagomägi K et al (2006) Simultaneous application of differential servo-oscillometry and volume-clamp plethysmography for continuous non-invasive recording of the finger blood pressure response to a hand postural change. J Med Eng Technol 30:139–144
9. Lopez-Beltran E, Blackshear P, Finkelstein S et al (1998) Non-invasive studies of peripheral vascular compliance using a non-occluding photoplethysmographic method. Med Biol Eng Comput 36:748–753
10. Tanaka G, Sawada Y, Matsumura K et al (2002) Finger arterial compliance as determined by transmission of light during mental stress and reactive hyperaemia. Eur J Appl Physiol 87:562–567
11. Berne R, Levy M (1996) Principles of Physiology. Mosby-Year Book, Inc., St. Louis, Missouri
12. Mersich A, Jobbágy A (2009) Identification of the cuff transfer function increases indirect blood pressure measurement accuracy. Physiol Meas 30:323–333
13. Lu W, Tsukada A, Shiraishi T et al (2001) Indirect arterial blood pressure measurement at the wrist using a pad-type square cuff and volume-oscillometric method. Front Med Biol Eng 11:207–219
14. Laurent C, Jönsson B, Vegfors M et al (2005) Non-invasive measurement of systolic blood pressure on the arm utilising photoplethysmography: development of the methodology. Med Biol Eng Comput 43:131–135
Author: Rein Raamat
Institute: Institute of Physiology, University of Tartu
Street: 19 Ravila St
City: Tartu
Country: Estonia
Email: [email protected]
Estimation of Mean Radial Blood Pressure in Critically Ill Patients
K. Jagomägi1, J. Talts1, P. Tähepõld2, R. Raamat1 and J. Kivastik1
1 Department of Physiology, University of Tartu, Tartu, Estonia
2 Department of Anesthesiology and Intensive Care, Tartu University Hospital, Tartu, Estonia
Abstract— If not estimated by a measuring device, the mean arterial pressure (MAP) can be approximated from the measured systolic (SBP) and diastolic blood pressure (DBP) by applying the traditional formula MAP = DBP + k·(SBP − DBP), where the pulse pressure coefficient k is equal to 0.33 in most common implementations. Our aim was to study whether this value can validly be applied to calculate the MAP of critically ill patients. A total of 19 patients after cardiac surgery with an intra-radial cannula were involved. SBP, DBP and MAP were obtained from the recorded arterial pressure waveforms. The value for k was found to be 0.31±0.04. Thus, MAP values obtained by applying the traditional one-third approximation were overestimated by 1.7±2.3 mmHg (p<0.001). Keywords— Blood pressure measurement, mean arterial pressure, pulse pressure.
I. INTRODUCTION
Most physicians currently use the systolic (SBP) and diastolic (DBP) arterial pressure to assess cardiovascular status because these two pressures are easily measurable using a sphygmomanometer. Recent studies have increased clinical interest in also analyzing other pressures, especially pulse pressure (PP=SBP–DBP) and mean arterial pressure (MAP). Some studies suggest that MAP may be more accurate in predicting cardiovascular prognosis than other blood pressure indices [1]. MAP is often used as an index of overall blood pressure when caring for critically ill patients, since it represents the perfusion pressure and it is a factor utilized in the calculation of the total peripheral resistance to get a better idea about the balance between vasoconstriction and vasodilatation [2,3]. Devices that measure BP using the oscillometric technique are becoming more and more popular in clinical practice. The mean arterial blood pressure is the only value that is directly measured by the oscillometric device, as the minimum cuff baseline pressure allowing the maximum amplitude of arterial pressure oscillations is identical to the mean arterial blood pressure [2]. Because the configuration of the arterial pressure-time waveform is complex, MAP calculated from blood pressure cuff measurements uses an empirical formula MAP=DBP+k·PP, where the pulse pressure coefficient k is equal to 0.33 in most applications. This rule of thumb appears in all physiological textbooks and is currently used in clinical and epidemiological studies [4,5]. This ‘standard’ equation assumes that diastole persists for 2/3 and systole for 1/3 of each cardiac cycle regardless of heart rate. Recent studies have shown that using a single value for k in calculating MAP in various parts of the vascular bed is inaccurate, and several new approximations have been suggested: MAP=DBP+0.4·PP [6]; MAP=DBP+0.412·PP [7]; MAP=DBP+0.475·PP [7] (young persons); MAP=DBP+0.33·PP+5 mmHg [8,9]; MAP=DBP+[0.33+(heart rate·0.0012)]·PP [10]. Most of these studies have been done at aortic or brachial levels by using high-fidelity catheter-tip manometers. Unlike fluid-filled catheters, micromanometer-tipped catheters provide high-fidelity pressure values without the errors stemming from differences in the reference zero level, as observed with external pressure transducers [11]. However, in the clinical setting, fluid-filled catheter-manometer systems are used, not micromanometer-tipped catheters. The radial artery is the most common site for arterial cannulation. As the pressure wave becomes more peaked towards the periphery, the percentage of pulse pressure that has to be added to diastolic pressure to calculate MAP decreases towards the periphery of the arterial tree. For the radial artery k has been found to be 0.31 [12], and approximately the same value holds for finger arteries [13-15]. The aim of the present study was to estimate the pulse pressure coefficient k for the radial artery in patients after cardiac surgery.

II. METHODS
Subjects: The study was approved by the Ethics Committee of the University of Tartu. A total of 19 patients (9 males and 10 females) with invasive arterial blood pressure monitoring were included in this study. Thirteen patients were studied after coronary artery bypass grafting (CABG) and 6 patients after heart valve replacement or repair procedures. Hypertension was diagnosed in 12 patients. Five patients were mechanically ventilated; seven patients were receiving vasoactive drugs (dopamine, dobutamine or norepinephrine). Mean age of the patients was 63±10 years, average weight was 80±17 kg, average height was 169±9 cm,
average mean arterial pressure (MAP) was 78±12 mmHg, and mean heart rate was 77±13 beats per minute (bpm). Mean EuroSCORE (European System for Cardiac Operative Risk Evaluation) was 4.4±2.1 points.
Invasive blood pressure: A 20-gauge cannula (BD Arterial Cannula, BD Critical Care Systems Pte Ltd) was inserted into the right or left radial artery with the tip pointing towards the blood flow. The arterial catheter was connected to the transducer (Pressure Monitoring Set, Edwards Lifesciences LLC). The whole tubing system was flushed with sterile normal saline to eliminate air bubbles and tested for system loss (for instance, any kind of fluid leak from the circuit). The monitoring device was connected to a permanent pressurized washing system. The pressure transducer was fixed at right atrial level and adjusted to atmospheric zero, which was regularly checked and corrected. The arterial blood pressure signals and ECG were recorded and displayed on a bedside monitor (Siemens SC 7000, Germany). All equipment was checked by biomedical technicians before use.
Signal processing: Analog signals from the bedside monitor were digitized by an ADC (16-bit accuracy, sampling rate 200 Hz) and transferred to a laptop computer for offline analyses. The total time for the measurement session in each subject was approximately 8-10 min. In each of the 19 patients, more than 600 heart cycles were included in the analysis. Beat-to-beat data of systolic and diastolic pressure and heart interval were calculated by using custom-made software. MAP was obtained in two ways. We first measured the true MAP as the area under the pressure curve divided by the cardiac cycle length. These true MAP values were used as reference values. Next we calculated MAP from invasive radial pressure measurements by adding one-third of the pulse pressure to the diastolic pressure. HR was automatically measured from the ECG signal. Episodes with artifacts (e.g. arterial blood sampling) and ectopic heartbeats were rejected in the 10-minute recordings. Data were averaged over a one-minute period.
Statistics: To test for the presence of significant differences, Student’s paired t-test was used. For all hypothesis tests, a level of significance of 0.05 was applied. Data are expressed as mean and standard deviation.
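For illustration, the beat-wise computation described above can be sketched as follows. This is a minimal Python/NumPy example, not the authors' custom software; it assumes that the digitized pressure waveform and the sample indices of the beat onsets (e.g. detected from the ECG R-waves) are already available, and all names in it are ours.

```python
import numpy as np

def beat_wise_map(pressure, beat_onsets):
    """For each cardiac cycle return (true MAP, one-third MAP, k).

    pressure    -- radial pressure waveform in mmHg (here sampled at 200 Hz)
    beat_onsets -- sample indices of consecutive beat onsets
    """
    true_map, calc_map, k = [], [], []
    for i0, i1 in zip(beat_onsets[:-1], beat_onsets[1:]):
        beat = np.asarray(pressure[i0:i1], dtype=float)
        sbp, dbp = beat.max(), beat.min()
        m = beat.mean()                            # area under the curve / cycle length
        true_map.append(m)
        calc_map.append(dbp + (sbp - dbp) / 3.0)   # traditional one-third rule
        k.append((m - dbp) / (sbp - dbp))          # pulse pressure coefficient
    return np.array(true_map), np.array(calc_map), np.array(k)

# After one-minute averaging, the two MAP estimates can be compared with a
# paired t-test, e.g. scipy.stats.ttest_rel(calc_map_avg, true_map_avg).
```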
Table 1 Results of comparison between the measured and calculated MAP. The calculated MAP was approximated by using the one-third rule; the measured (true) MAP was estimated as the area under the pressure curve divided by the cardiac cycle length. All pressures are in mmHg, heart rate (HR) in beats per min (bpm).

Patient   SBP   DBP   PP   MAP measured   MAP calculated   k      HR, bpm
1         144    61   83        89              89         0.33      77
2         158    88   70       105             111         0.25      85
3         112    52   60        69              72         0.28      64
4         107    52   56        71              70         0.33      69
5         108    49   59        69              69         0.36      66
6         126    65   62        88              85         0.38     105
7         140    58   82        86              85         0.34      58
8         113    52   61        71              72         0.31      89
9         132    57   75        81              82         0.33      70
10        142    68   74        89              93         0.27      76
11        142    57   84        82              85         0.29      62
12         99    43   56        60              62         0.31      63
13        120    48   72        67              72         0.26      79
14        126    81   45        94              96         0.28     104
15         99    53   46        69              68         0.35      89
16        103    50   53        64              68         0.27      74
17        114    55   59        72              75         0.29      86
18        109    51   59        68              70         0.30      73
19        141    71   71        94              94         0.32      79
Mean      123    58   64        78              80         0.31      77
SD         18    12   12        12              13         0.04      13
III. RESULTS
The average SBP in the radial artery was 123±18 mmHg, the average DBP 58±11 mmHg and the average MAP 78±12 mmHg (Table 1). When MAP was calculated using the traditional formula (MAP=DBP+0.33·PP), we obtained a value of 80±13 mmHg, resulting in an overestimation of the true MAP of 1.7±2.3 mmHg (p<0.001). Whereas the traditional formula assumes that k=0.33, in our measurements it was actually below 0.33 in most subjects, with an average of 0.31±0.04. This k value did not correlate with the subjects’ MAP level, pulse pressure or heart rate (Figure 1 a, b and c).
Figure 1. The pulse pressure coefficient k as the fraction of PP that has to be added to DBP to obtain the true MAP in individual subjects. This coefficient was not related to MAP level (a), PP (b) or HR (c).

IV. DISCUSSION
For ethical reasons, this kind of study may almost exclusively be performed in patients with cardiovascular disease; none of the patients received an intra-arterial catheter purely for the purpose of the study. Despite being regarded as the gold standard for blood pressure monitoring, there are various factors that affect the
precision of invasive measurement. However, as we used the same transducer and the same type of cannula in all patients, these influences should be small and do not explain the large differences that we observed between patients. Blood pressure monitoring systems are normally underdamped, with the damping factor ranging from 0.2 to 0.3. The use of over-damped systems might result in an overestimation of the fraction of pulse pressure at which MAP should be computed, and under-damped systems might result in an underestimation of k [16]. As all equipment was checked by biomedical technicians, we believe that we did not introduce major errors in our analysis. This study has shown that the intra-arterially measured MAP corresponded to DBP plus 0.31±0.04 of the difference between SBP and DBP. This fraction is lower than the classically assumed value of 0.33. The relationship between MAP and SBP and DBP was complex, with a large between-subject variability of 13% in our patients. This is in accordance with [4,17]. As the patients in our group were elderly hypertensives, and the range of mean blood pressure and heart rate was quite narrow, we did not observe any effect of mean blood pressure level, pulse pressure or heart rate on the k value. This finding is in accordance with [6]. Although MAP is essentially similar in the aorta and large peripheral arteries, the pressure waveform obtained at the aortic root differs significantly from that recorded at the peripheral level. Acute changes in blood volume, inotropic state, heart rate, vascular tone and arterial stiffness may lead to discrepancies between the actual MAP value and the MAP empirically estimated at the peripheral level. The factors which may influence MAP estimation include the recording site (central vs peripheral), the characteristics of the recording system (high-fidelity vs conventional) and the nature of the hemodynamic conditions (stable vs unstable) [2]. Peripherally measured blood pressures are also affected by vasoconstriction or vasodilation due both to local factors such as temperature and to systemic factors such as sympathetic tone. Vasodilation causes a narrowing (shortening) of the systolic portion of the pressure wave, and might thus be expected to lower the relative level of the MAP. As some of our patients were studied shortly after general anesthesia and some were receiving vasoactive drugs, the vascular tone in the studied patients was quite different, which might cause differences in k.

V. CONCLUSIONS
In patients after cardiac surgery, the relationship between MAP and SBP as well as DBP was complex, with a large between-subject variability. The invasively
measured radial MAP was found to correspond to the level of 0.31±0.04 of the pulse pressure above the diastolic pressure. MAP values obtained by applying the traditional one-third rule overestimated the true MAP by 1.7±2.3 mmHg (p<0.001).
ACKNOWLEDGMENT

This work was supported by Grant 6947 from the Estonian Science Foundation.

REFERENCES

1. Miura K, Nakagawa H, Ohashi Y et al (2009) Four blood pressure indexes and the risk of stroke and myocardial infarction in Japanese men and women: a meta-analysis of 16 cohort studies. Circulation 119:1892–1898
2. Bur A, Herkner H, Vlcek M et al (2003) Factors influencing the accuracy of oscillometric blood pressure measurement in critically ill patients. Crit Care Med 31:793–799
3. Lamia B, Chemla D, Richard C et al (2005) Clinical review: interpretation of arterial pressure wave in shock states. Crit Care 9:601–606
4. Michard F, Teboul JL, Richard C et al (2003) Arterial pressure monitoring in septic shock. Intensive Care Med 29:659
5. Kiers HD, Hofstra JM, Wetzels JF (2008) Oscillometric blood pressure measurements: differences between measured and calculated mean arterial pressure. Neth J Med 66:474–479
6. Bos WJ, Verrij E, Vincent HH et al (2007) How to assess mean blood pressure properly at the brachial artery level. J Hypertens 25:751–755
7. Meaney E, Alva F, Moguel R et al (2000) Formula and nomogram for the sphygmomanometric calculation of the mean arterial pressure. Heart 84:64
8. Chemla D, Hébert JL, Aptecar E et al (2002) Empirical estimates of mean aortic pressure: advantages, drawbacks and implications for pressure redundancy. Clin Sci (Lond) 103:7–13
9. Chemla D, Brahimi M, Nitenberg A (2007) Thumb-rule for the proper assessment of mean blood pressure at the brachial artery level: what should be changed? J Hypertens 25:1740–1741
10. Razminia M, Trivedi A, Molnar J et al (2004) Validation of a new formula for mean arterial pressure calculation: the new formula is superior to the standard formula. Catheter Cardiovasc Interv 63:419–425
11. Gould KL, Trenholme S, Kennedy JW (1973) In vivo comparison of catheter manometer systems with catheter-tip micromanometer. J Appl Physiol 34:263–267
12. Pauca AL, Wallenhaupt SL, Kon ND et al (1992) Does radial artery pressure accurately reflect aortic pressure? Chest 102:1193–1198
13. Bos WJ, van Goudoever J, van Montfrans GA et al (1996) Reconstruction of brachial artery pressure from noninvasive finger pressure measurements. Circulation 15:1870–1875
14. O'Callaghan CJ, Straznicky NE, Komersova K et al (1998) Systematic errors in estimating mean blood pressure from finger blood pressure measurements. Blood Press 7:277–281
15. Raamat R, Jagomägi K, Talts J et al (2003) Beat-to-beat measurement of the finger arterial pressure pulse shape index at rest and during exercise. Clin Physiol Funct Imaging 23:87–91
16. Gardner RM (1981) Direct blood pressure measurement – dynamic response requirements. Anesthesiology 54:227–236
17. Zheng D, Murray A (2008) Estimation of mean blood pressure from oscillometric and manual methods. Computers in Cardiology 35:941–944
Address of the corresponding author:
Author: Jagomägi K.
Institute: University of Tartu
Street: 19 Ravila str
City: Tartu
Country: Estonia
Email: [email protected]
Photoplethysmographic Assessment of the Pressure-Compliance Relationship for the Radial Artery
J. Talts, R. Raamat, K. Jagomägi and J. Kivastik
Institute of Physiology, University of Tartu, Tartu, Estonia

Abstract— The compliance of the radial artery has been shown to be useful as an early predictor of vascular diseases. 12 subjects were studied to estimate the pressure-area (P-A) and pressure-compliance (P-C) relationships of the radial artery over a wide pressure range. The P-A relationship was modeled using an asymmetrical arctangent function, and the P-C relation was derived as its derivative. Model parameters were found by nonlinear fitting using arterial pressure changes from the Millar tonometer and volume (cross-section area) changes from the photoplethysmograph (University of Tartu), both adjusted on the radial artery. On average over the experiments, the maximum compliance value was 0.15 mm2/mm Hg, the mean location of the maximum compliance point was -1.0 mm Hg, and the left and right half-maximum widths of the model were 13.6 and 25.5 mm Hg, respectively. The results show a relatively large variance in the bell-shaped curves of the radial P-C relationship.
Keywords— pressure-compliance relationship, photoplethysmography, radial artery.
I. INTRODUCTION
Noninvasive blood pressure measurement is often based on the nonlinear behavior of the arteries. The character of this nonlinearity is a basic reason why the oscillometric pulse envelope has a particular shape. The arterial properties also have an independent prognostic value in the diagnosis of cardiovascular diseases [1]. Arterial compliance has a strong dependence on the transmural pressure. Normally the transmural pressure is positive, but in experiments it can be reduced to zero or negative values by applying an external pressure to the blood vessel. Properties of different proximal and distal arteries have been studied. The pressure-area (P-A) or pressure-compliance (P-C) curves over a wide transmural pressure range, including negative pressures, have been found for the aorta [2], brachial artery [3-5] and fingers [6]. The P-A relationship is described as a symmetric [2, 7, 8] or asymmetric [4, 6, 9] function. For the radial artery, a difference in distensibility between patients and a control group was shown in the physiological pressure range [8].
The analysis and experiments on the radial artery in [10] revealed that the external pressure transmission ratio was almost 100% when the appropriate location and size of the compression pad were chosen. Similarly, the importance of proper pressurization and sensing aspects was emphasized in [11]. Both aforementioned studies were based on an investigation of the oscillometric pulse envelopes. The aim of the present study was to characterize the radial artery compliance over a wide transmural pressure range by a nonlinear model using a pressure waveform from the radial tonometer and a volume (cross-section area) waveform from the photosensor under the pad-type cuff.
II. METHODS
A. Procedure:
Tests were performed on 12 healthy subjects (19-49 y.o.) in the lying position. A photoplethysmographic (PPG) sensor was attached over the radial artery of the left hand by means of two adhesive strips. The distance between the infrared source (wavelength 830 nm) and the receptor was 15 mm, and they were positioned symmetrically to the artery. The local compression pad was fixed by a flexible Velcro strap and a U-shaped aluminum clip. The pad and photosensor worked similarly to the finger photoplethysmographic cuff, where the pressure chamber and opto-pair are built into a single sensor. Using this pad, it is possible to exert pressure on the radial artery and record arterial volume oscillations through the light transmission changes. In addition, the arterial volume oscillations can also be picked up as small changes in the pad pressure. For better assessment of the cardiac-induced oscillations in the pad pressure signal, a filtered (high-pass filter, 0.3 Hz) signal was also created. The pencil-shaped Millar tonometer was held manually on the radial artery of the right hand. The tonometer signal was displayed on a separate screen to help the operator achieve optimal pressure on the artery. Pad pressure, PPG and tonometric signals were recorded at 100 Hz by a 16-bit AD converter.
After a 5-minute equilibration period, the pad pressure was raised to 100 mm Hg and, by means of the calibration syringe, volume changes of 50 µL were introduced into the pneumatic system. Due to these graded volume changes, a small pressure modulation was induced in the pad pressure signal, which was later used for assessment of the sensitivity of the measuring system. During every measurement session, the pad pressure was rapidly increased to the level of 170 mm Hg and then decreased to 30 mm Hg at a rate of 2 mm Hg/s. This enabled us to record arterial pulsations over a wide transmural pressure range. For calibration of the tonometer signal, the brachial blood pressure (BAP) was measured by a Microlife BP A100 device on the right hand at the end of each measurement session.

B. Calibrations:
Assuming that Lambert-Beer's law holds true in the tissues where the photoplethysmogram is measured, a logarithmic photometer should be used to provide a linear relationship between the recorded light transmittance changes and the real volume changes. The operations of calculating the logarithmic function were executed by computer software. The obtained signals were used for estimation of the arterial volume and cross-section area changes. The volume signal calibration in each subject was performed in the following way: first, the sensitivity of the AC component of the pad pressure was calibrated against the known syringe volume. Next, the calibration factor for the PPG signal was calculated as the ratio between the maximum cardiac pulse amplitudes of the pad pressure AC component and the PPG signal. An effective length of the arterial segment beneath the pad was assumed to be 30 mm, which was used for conversion of the volume changes into the corresponding cross-section area changes. To calibrate the radial pressure waveforms, the recorded tonometric signal in each subject was scaled to the measured BAP in this subject.

C. Modeling:
The modeled P-A relationships were calculated offline by the use of the nonlinear fitting algorithm described in [6]. Briefly, the transmural pressure was found as the difference between arterial and cuff pressures. Next, the expected arterial area was calculated by the asymmetric arctangent formula:

$$A(P_{tr}) = A_n + \frac{2A_n}{\pi}\arctan\!\left(\frac{\pi C_{max}(P_{tr}-P_0)}{2A_n}\right) \quad \text{for } P_{tr}-P_0 \le 0$$

$$A(P_{tr}) = A_n + \frac{2A_p}{\pi}\arctan\!\left(\frac{\pi C_{max}(P_{tr}-P_0)}{2A_p}\right) \quad \text{for } P_{tr}-P_0 > 0 \qquad (1)$$
where An represents the volume difference from a fully collapsed state to the inflection point, Ap the volume difference from the inflection point to a fully expanded size, Cmax the slope of the P-A curve at the inflection point, and P0 the position of the inflection point on the pressure axis. The pressure-area hysteresis was accounted for by a unity-gain first-order lag with time constant τ. As a result, the time sequence of the predicted area was created, and it was used to fit the model parameters by the Levenberg-Marquardt method by comparing filtered waveforms of the measured and modeled signals. From the entire pressure scan we used the time interval which had oscillations of at least 50% of the maximum amplitude of the particular oscillation curve. The derivative of function (1), the P-C relationship, can be written in the form

$$C(P_{tr}) = \frac{dA}{dP_{tr}} = \frac{C_{max}}{1+\left(\frac{P_{tr}-P_0}{P_{1n}}\right)^{2}} \;\; \text{for } P_{tr}-P_0 \le 0, \qquad \frac{dA}{dP_{tr}} = \frac{C_{max}}{1+\left(\frac{P_{tr}-P_0}{P_{1p}}\right)^{2}} \;\; \text{for } P_{tr}-P_0 > 0 \qquad (2)$$

where P1n=2An/(πCmax) and P1p=2Ap/(πCmax) are the steepnesses of the arctangent and also the half-maximum widths of the compliance curve on the left and right side, respectively.
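A minimal NumPy sketch of the model of Eqs. (1) and (2) is given below. The hysteresis lag and the Levenberg-Marquardt fitting step are omitted, the function names are ours, and the illustrative parameter values are taken from the mean column of Table 1.

```python
import numpy as np

def arctan_area(ptr, a_n, a_p, c_max, p0):
    """Asymmetric arctangent P-A model, Eq. (1): cross-section area vs
    transmural pressure; A_n/A_p set the left/right saturation levels."""
    ptr = np.asarray(ptr, dtype=float)
    half = np.where(ptr <= p0, a_n, a_p)
    return a_n + (2.0 * half / np.pi) * np.arctan(
        np.pi * c_max * (ptr - p0) / (2.0 * half))

def arctan_compliance(ptr, a_n, a_p, c_max, p0):
    """Derivative of Eq. (1): the bell-shaped P-C curve of Eq. (2)."""
    ptr = np.asarray(ptr, dtype=float)
    p1 = np.where(ptr <= p0, 2.0 * a_n, 2.0 * a_p) / (np.pi * c_max)
    return c_max / (1.0 + ((ptr - p0) / p1) ** 2)

# Illustrative parameters close to the mean values in Table 1:
c_max, p0, p1n, p1p = 0.153, -1.0, 13.6, 25.5   # mm^2/mmHg, mmHg, mmHg, mmHg
a_n = p1n * np.pi * c_max / 2.0                 # invert P1n = 2*A_n/(pi*Cmax)
a_p = p1p * np.pi * c_max / 2.0
ptr = np.linspace(-60.0, 100.0, 321)
compliance = arctan_compliance(ptr, a_n, a_p, c_max, p0)
```

Under these assumptions the compliance curve peaks at P0 with height Cmax and falls to Cmax/2 at P0-P1n and P0+P1p, matching the definitions above.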
III. RESULTS
An example of the recordings in one subject is illustrated in Fig. 1. The tonometric arterial pressure waveform was scaled to BAP. The PPG signal was calibrated into area units according to the explanation in the Methods section. The estimated model parameters are shown in Table 1. The mean transmural pressure for the maximum compliance point was -1.0 (SD 9.9) mm Hg. The goodness of fit was evaluated by R2, showing the fraction of explained variance; on average it equaled 0.969 (SD 0.014). The modeled compliance (dA/dP) curves are shown in Fig. 2. When inspecting the peak position (P0) and maximum value (Cmax) of the compliance, as well as the decrease rate of the curve on the left (P1n) and right (P1p) sides, a large variance in the parameters can be noticed.
Fig. 1 Example of photoplethysmographic (top), filtered pneumatic (middle), tonometric (thin) and pad (bold) pressure (bottom) signals in one subject.

Fig. 2 Modeled compliance vs transmural pressure for the radial artery in all subjects.

Table 1 Model parameters

Subject   Cmax (mm2/mm Hg)   P1p (mm Hg)   P1n (mm Hg)   P0 (mm Hg)   τ (s)
1              0.13              41.0           7.3          -8.1      0.068
2              0.153             35.2          17.9           4.4      0.055
3              0.133             16.1           3.8          -2.2      0.102
4              0.188             24.4           8.6         -10.5      0.049
5              0.088             11.4          23.2          10.7      0.057
6              0.22              11.9          15.5          -4.6      0.046
7              0.189             27.8           9.8         -11.5      0.071
8              0.162             24.9          10.7          14.9      0.021
9              0.132             35.3          26.7           9.5      0.081
10             0.251             29.0          11.0           3.0      0.096
11             0.089             25.7          13.9         -17.1      0.079
12             0.096             23.8          14.9          -0.6      0.047
Mean           0.153             25.5          13.6          -1.0      0.064
SD             0.052              9.2           6.6           9.9      0.023

IV. DISCUSSION

We analyzed the compliance of the radial artery by means of the technique used earlier for finger arteries [6]. When comparing those two sites, the radial artery is more proximal and less influenced by vasomotion, which gives some advantage in the measurement of arterial condition or blood pressure. Moreover, the radial artery is superficial and easy to occlude. However, when an external pressure is used to compress the artery, it may sometimes be unclear how effectively the pressure reaches the artery, as there are bones and tendons which can "shield" the compressing effect [10, 11].
It is our opinion that this is responsible for the larger variation of the P-A curves in the present work when compared to the curves previously found for fingers. This problem is not present when measuring without an external force as in [8], but in that case the pressure sweep is limited to the physiological range. Differently from [8], we used a tonometer instead of a Finapres for obtaining pressure waveforms. The tonometer registers pressure at the same site, on the radial artery, thus eliminating errors due to finger vasomotion or pressure pulse amplification towards the periphery. A disadvantage of this approach is that the other hand must also be involved, which can cause an error in persons with an asymmetry between hands. In addition, it is complicated to hold a pencil-type sensor optimally on the artery for a long time, and calibration using BAP is also needed. We used scaling to brachial systolic and diastolic pressures and did not account for possible brachial-to-radial pulse pressure amplification. The latter was found to equal 5.8 mm Hg in [12]. It could be seen in the PPG signal during the pressure scan that the creep for the radial artery was smaller than that noticed by us for fingers [6]. In some cases (3 of 12), the creep did not exist at all and there was no need to use signal filtering when creating an error signal for the model parameter estimator. This agrees with the general viewpoint that for larger arteries the viscoelastic effects are typically smaller [13]. The main character of the P-C curve is in accordance with those estimated by other methods. We could not find
publications on the P-C relationship in a wide pressure range for the radial artery, but there are a few for the brachial artery. Although a direct comparison of parameters is not possible, some approximate values can be derived from the data in [3, 4]. As expected, the maximum compliance of 0.15 mm2/mm Hg for the radial artery obtained in our study is smaller than that for the brachial artery, reported to be about 0.3-0.4 mm2/mm Hg in [3] and 0.55 mm2/mm Hg in [5]. Curves 5 and 9 show a very slow decrease rate of the compliance on the left side. This is probably due to the restricted compression effect on the wrist, as reported in [10]. We expect that this can be overcome by optimization of the compression pad construction. All standard deviations of the parameters with pressure dimension were larger than those we found for fingers [6].

V. CONCLUSIONS
The pressure-compliance curves for the radial artery were derived using a photoplethysmograph beneath the local compression pad. The location of the maximum compliance points on the pressure axis, as well as the compliance decrease rates on the curve sides, showed a relatively large variation.

ACKNOWLEDGMENT

This work was supported by Grants 6947 and 7723 from the Estonian Science Foundation.

REFERENCES

1. Laurent S, Cockcroft J, Van Bortel L et al (2006) Expert consensus document on arterial stiffness: methodological issues and clinical applications. Eur Heart J 27:2588-2605
2. Heerman JR, Segers P, Roosens CD et al (2005) Echocardiographic assessment of aortic elastic properties with automated border detection in an ICU: in vivo application of the arctangent Langewouters model. Am J Physiol Heart Circ Physiol 288:H2504-2511
3. Bank AJ, Kaiser DR, Rajala S et al (1999) In vivo human brachial artery elastic mechanics: effects of smooth muscle relaxation. Circulation 100:41-47
4. Drzewiecki G, Field S, Moubarak I et al (1997) Vessel growth and collapsible pressure-area relationship. Am J Physiol 273:H2030-2043
5. Foran TG, Sheahan NF (2004) Compression of the brachial artery in vivo. Physiol Meas 25:553-564
6. Talts J, Raamat R, Jagomägi K (2006) Asymmetric time-dependent model for the dynamic finger arterial pressure-volume relationship. Med Biol Eng Comput 44:829-834
7. Langewouters GJ, Wesseling KH, Goedhard WJ (1984) The static elastic properties of 45 human thoracic and 20 abdominal aortas in vitro and the parameters of a new model. J Biomech 17:425-435
8. Giannattasio C, Achilli F, Failla M et al (2002) Radial, carotid and aortic distensibility in congestive heart failure: effects of high-dose angiotensin-converting enzyme inhibitor or low-dose association with angiotensin type 1 receptor blockade. J Am Coll Cardiol 39:1275-1282
9. Guerrisi M, Vannucci I, Toschi N (2009) Differential response of peripheral arterial compliance-related indices to a vasoconstrictive stimulus. Physiol Meas 30:81-100
10. Lu W, Tsukada A, Shiraishi T et al (2001) Indirect arterial blood pressure measurement at the wrist using a pad-type square cuff and volume-oscillometric method. Front Med Biol Eng 11:207-219
11. Kim JP, Kim YH, Bae S et al (2009) Factors affecting the accuracy of volume-oscillometric blood pressure measurement during partial pressurization of the wrist. EMBC 2009, Annual International Conference of the IEEE:721-724
12. Verbeke F, Segers P, Heireman S et al (2005) Noninvasive assessment of local pulse pressure: importance of brachial-to-radial pressure amplification. Hypertension 46:244-248
13. Holzapfel GA, Gasser TC, Stadler M (2002) A structural model for the viscoelastic behavior of arterial walls: continuum formulation and finite element analysis. European Journal of Mechanics A: Solids 21:441-463

Author: Jaak Talts
Institute: Institute of Physiology, University of Tartu
Street: 19 Ravila St
City: Tartu
Country: Estonia
Email: [email protected]
High Frequency Acoustic Properties for Cutaneous Cell Carcinomas In Vitro
L.I. Petrella1, W.C.A. Pereira1, P.R. Issa2, H.A. Valle3, C.J. Martins2, and J.C. Machado1
1 Federal University of Rio de Janeiro/Biomedical Engineering Program, Rio de Janeiro, Brazil
2 Gaffrée and Guinle University Hospital/Dermatology Sector, Rio de Janeiro, Brazil
3 Gaffrée and Guinle University Hospital/Pathologic Anatomy Sector, Rio de Janeiro, Brazil
Abstract— The present work studies the acoustic properties of cutaneous cell carcinomas in ex vivo tissue samples. An ultrasound biomicroscope, working at a central frequency of 45 MHz, was used. The analyzed parameters are sound speed, attenuation coefficient and attenuation coefficient slope. Additionally, normal tissues were used for comparison. Higher values of sound speed were observed in healthy skin, although not statistically different when compared with the carcinoma groups. For the attenuation parameters, no significant differences were evidenced either. Presently, corrections in the methodology are being made for a more accurate characterization of carcinomatous skin.

Keywords— cutaneous carcinomas, ultrasound biomicroscopy, acoustic attenuation, sound speed.
I. INTRODUCTION

The standard method to diagnose dermatological lesions consists of tissue sample excision and subsequent histological preparation for light microscopy visualization. This procedure can be undesirable in many situations, because of the patient's health condition or aesthetic factors. Challenged by these limitations, several non-invasive imaging diagnostic techniques are being implemented in dermatology to improve the patient healthcare protocol, as well as the routine of health centers. Ultrasound biomicroscopy (UBM) uses high-frequency acoustic waves for high-resolution image generation. The common frequency range for clinical applications is 20-60 MHz, giving image resolutions of a few tens of micrometers and penetration depths of a few millimeters [1]. These conditions allow the differentiation of epidermal, dermal and subcutaneous layers, as well as the visualization of cutaneous annexes and anomalous structures. Cutaneous carcinomas are the most common malignant tumors, and their principal predisposing factor is sunlight exposure. They arise from malignant proliferation of epidermal and adnexal keratinocytes, and in the particular case of basal cell carcinomas (BCC), atypical basaloid cell nests are observed. These carcinomas are an important public health problem, despite their low mortality rate [2]. Several works have employed the UBM imaging technique to study cutaneous carcinomas. Some of them were intended to measure tumor sizes [3; 4], and in many of them the tumor sizes were overestimated because of unclear differentiation between tumor nests and the surrounding associated components. Other works were conducted for tumor regression evaluation after therapy, as well as their recurrence incidence [5; 6]; in these works, UBM showed a great potential to characterize the neoplasm evolution. Moreover, some works studied the tumor echogenicity pattern as an indicator of specific carcinoma types; in this sense, Uhara et al. [7] related bright spots in UBM images of nodular and superficial carcinomas with the presence of calcification foci, cornified cysts, cell clusters or necrosis; on the other hand, Desai et al. [4] observed more echoic characteristics in morphoeiform BCC cases, due to a dense fibrous stroma surrounding the tumor nests. The calculation of acoustic parameters has been applied in dermatology to identify spatial variations in normal skin and, to a lesser extent, to analyze some anomalous conditions; conversely, studies of cutaneous carcinomas by acoustic parameters were not found in the literature. The commonly studied parameters are: wave speed (c), attenuation coefficient (α), attenuation coefficient slope (ηα), integrated attenuation coefficient (IAC), backscattering coefficient (β), backscattering coefficient slope (ηβ) and integrated backscattering coefficient (IBC). Several studies were conducted to determine the spatial variability of acoustic parameters in healthy skin. Lebertre et al. [8] observed that IBC provides a good representation of skin structure variations as a function of dermis depth for tissues in vitro; additionally, they showed that intra-individual variations of ηα, IAC and IBC were lower than the inter-individual ones. Raju and Srinivasan [9] studied the differences between ηα and β obtained for dermal and hypodermal tissues in vivo, and observed higher β values in the dermis, but no significant differences for ηα. Few cases of anomalous skin conditions were analyzed by acoustic parameters. Miyasaka et al. [10] compared the c values for photo-damaged and photo-protected areas, revealing higher speed in the papillary dermis of photo-damaged areas, which was related to the presence of fibrosis. Huang et al. [11] measured ηα, IAC and IBC for in vivo dermis affected by radiation-induced fibrosis; in this case, smaller ηα values and larger IAC and IBC values were observed when compared to normal skin.
In the present work, the potential of acoustic parameters to characterize cutaneous carcinomas is evaluated using ex vivo tissues. The analyzed tissues include several types of nodular BCC cases, as well as actinic keratoses (AK) and healthy skin for comparison. A UBM experimental system, operating with a center frequency (fc) of 45 MHz and capable of working as an A- and B-mode scanner, was used. The measured acoustic parameters are c, α and ηα. Since there are considerable histological differences between the healthy dermal collagen network and the tumor environment, a significant variation in their values is expected in advance.
II. MATERIAL AND METHODS

A. Tissue Samples
The present work was conducted with ex vivo tissue samples from the Dermatology Sector of the Gaffrée & Guinle University Hospital (HUGG) - Rio de Janeiro. The volunteer patients, under cutaneous carcinoma suspicion, were submitted to biopsy for diagnostic purposes. They were informed of the procedures and objectives of the present work, and agreed to participate. Nineteen nodular BCC, four AK and four healthy skin tissue samples were studied. The latter were obtained from the tumor-free border present in some biopsies. The AK samples were excised under carcinoma suspicion, and the carcinoma diagnosis was excluded by light microscopy analysis (Fig. 1). The nodular BCC cases were subdivided in accordance with the distribution patterns of the tumor nests (Fig. 1). The N1 group comprises numerous and small tumor nests, distributed along the dermis depth (ten samples); the N2 group consists of samples with the tumor nests growing in a circumscribed region (five samples); finally, the N3 group corresponds to ulcerated BCC cases (four samples). The excised lesions were obtained from different body regions and were preserved in formalin solution after excision. Following UBM analysis, the biopsies were diagnosed by light microscopy in the Pathologic Anatomy Sector of the HUGG - Rio de Janeiro, and the results were also used for comparison with the ones obtained by UBM. All investigation procedures followed a protocol approved in accordance with the HUGG Ethical Committee and with the National Committee for Ethics in Research.

B. UBM System
An experimental system, working at 45 MHz, assembled at the Biomedical Engineering Program of the Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, was used for the image and RF signal acquisitions. Its principal characteristics are depicted in [12]. During acquisitions, the tissue sample was positioned over a sapphire disc (reflector material), inserted in a holder and covered by a polymer film; this assemblage was placed inside an acrylic container filled with saline solution, which acted as a coupling medium between the tissue sample and the transducer surface.
Fig. 1 BCC and AK images, obtained by UBM (left column) and light microscopy (right column). a-b) N1 group (see text); the tumor nests are not identified in UBM images, which present heterogeneous characteristics. c-d) N2 group (see text), tumor nests demarked by points. e-f) N3 group (see text); the tumor nests are visualized as hypoechoic structures (right arrow) and the bleeding shows more echogenic characteristics (left arrow). g-h) AK images, showing hypoechoic aspect (right arrow); the epidermis detached from the dermis; the left arrow indicates a glandular structure.

C. Acquisition Protocol
The region of interest (ROI) was first determined from B-mode images. It consists of a matrix of 8x8 points spaced by 50 μm, from which the RF signals were acquired. The acoustic parameters were computed using the double transmission method [13]. It requires the collection of six signal groups (each group consisting of 64 RF signals acquired from all ROI points), with the transducer focus positioned at different depths of the tissue sample.
D. Acoustic Parameters Computation
After the RF signal acquisitions, the acoustic parameters were calculated with a computer program developed in LabVIEW 7.0. The c parameter was computed as:

$$c_i = \frac{x_i}{\dfrac{x_i + x_m}{c_a} - \dfrac{x_m}{v_m} + \dfrac{(\Delta t)_i}{2}} \qquad (1)$$
where ci represents the wave speed for each ROI point; Δt is the difference in the arrival time of the echoes reflected on the sapphire surface, when the transmitted pulse travels through the tissue sample, and then through saline solution after it is removed; xi is the tissue thickness (previously calculated); xm and vm are the thickness and the sound speed of the polymer film covering the sample, respectively; and ca is the sound speed in water. The final value c that represents the tissue is computed by averaging the 64 values obtained from each ROI point. The α parameter is obtained according to:
$$\alpha_i(f) = -\frac{20}{2x_i}\,\log_{10}\!\left(\frac{|I_2(f)|_i}{|I_1(f)|_i}\right) \qquad (2)$$
where αi(f) represents the attenuation coefficient as a function of frequency (in a –6 dB bandwidth range) for each ROI point; |I1(f)| and |I2(f)| are the spectral amplitudes of the echoes reflected at the sapphire disc surface when the pulse propagates through saline solution and through the tissue sample, respectively. The average of the 64 αi values obtained from the ROI points is computed. Finally, a fitted curve relating α vs. frequency is computed, from which both the α value at fc (αf) and the ηα value are obtained.
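A minimal NumPy sketch of Eqs. (1) and (2) for a single ROI point is shown below; the variable and function names are ours, and the six-focus acquisition scheme and the per-point averaging are not reproduced.

```python
import numpy as np

def sound_speed(x_i, x_m, c_a, v_m, dt_i):
    """Eq. (1): wave speed in the tissue from the echo arrival-time shift
    dt_i between the tissue and saline paths (round trip, hence dt/2)."""
    return x_i / ((x_i + x_m) / c_a - x_m / v_m + dt_i / 2.0)

def attenuation(rf_saline, rf_tissue, fs, x_i, fc=45e6):
    """Eq. (2): attenuation alpha(f) in dB/mm over the -6 dB band of the
    reference echo, plus alpha at fc and the slope eta_alpha (dB/mm/MHz)."""
    f = np.fft.rfftfreq(len(rf_saline), d=1.0 / fs)
    i1 = np.abs(np.fft.rfft(rf_saline))      # |I1(f)|: saline-path echo
    i2 = np.abs(np.fft.rfft(rf_tissue))      # |I2(f)|: tissue-path echo
    band = i1 >= i1.max() / 2.0              # -6 dB bandwidth mask
    alpha = -20.0 / (2.0 * x_i) * np.log10(i2[band] / i1[band])
    eta, intercept = np.polyfit(f[band] / 1e6, alpha, 1)   # linear fit
    alpha_fc = eta * fc / 1e6 + intercept    # fitted alpha at fc (45 MHz)
    return alpha, alpha_fc, eta
```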
Fig. 2 Mean and standard deviation of sound speed, obtained for the N1, N2, N3, AK and healthy skin (NT: Non-Tumor) groups

Fig. 3 Mean and standard deviation of the attenuation coefficient measured at 45 MHz, for the N1, N2, N3, AK and healthy skin (NT: Non-Tumor) groups
III. RESULTS

The mean and standard deviation for the three studied parameters are presented in this section. For the c parameter, the highest mean value was obtained from healthy skin (Fig. 2); nevertheless, the results are not statistically different when evaluated by a t-test with p = 0.05. Regarding the attenuation parameters αf and ηα, no dependence on healthy versus altered skin was evidenced. The highest αf mean value was observed in the AK group (Fig. 3), while for the ηα parameter the highest mean value was observed for the N1 group (Fig. 4). On the other hand, the N2 group presents the lowest αf and ηα mean values.
Fig. 4 Mean and standard deviation of attenuation coefficient slope, obtained for N1, N2, N3, AK and healthy skin (NT: Non-Tumor) groups
IV. DISCUSSION

The parameters obtained for healthy skin agree reasonably with those published in other works. Miyasaka et al. [10] measured c values between 1500-1600 m.s-1 on tissues in vitro, at frequencies between 50-105 MHz, whereas the
results obtained here are slightly higher, approximately 1754±159 m.s-1, for measurements between 30 and 60 MHz. The α parameter was measured for healthy skin in vitro by Lebertre et al. [8], at the same frequency used in this work (45 MHz), using a multi-narrow-band method; they obtained results of 4.5±1.0 dB.mm-1, which do not match those obtained in the present work (7.4±4.6 dB.mm-1). On the other hand, Pan et al. [14] measured α using the double transmission method (as in this study) and obtained values close to 10 dB.mm-1 at 35 MHz. The great differences published between authors may be a consequence of the methodology employed and the sites from which the samples were obtained, among other factors. Measurements of the ηα parameter were reported by Guittet et al. [15] for healthy skin in vivo, at 40 MHz, obtaining values between 0.08-0.36 dB.mm-1.MHz-1. Raju and Srinivasan [9] also worked with healthy skin in vivo, at a frequency range of 14-50 MHz, and reported similar values (0.08-0.39 dB.mm-1.MHz-1). The results for healthy skin obtained in the present work are 0.19±0.02 dB.mm-1.MHz-1, which are compatible with those already mentioned. Conversely, works computing acoustic parameters for cutaneous carcinoma cases were not found in the reviewed literature, so the results obtained in these cases could not be compared. The c values measured for the N1, N2, N3 and AK groups are 1603±221, 1708±251, 1606±96 and 1497±227 m.s-1, respectively; the αf values are 10±3.5, 6.1±3.0, 7.5±2.0 and 11.5±9.0 dB.mm-1, respectively; and the ηα values are 0.22±0.13, 0.15±0.08, 0.18±0.05 and 0.19±0.07 dB.mm-1.MHz-1, respectively. The results obtained from the different anomalous groups did not show significant statistical differences between them and, additionally, are in ranges similar to those obtained for healthy skin, either in the present work or in those published by others. The most "favorable" case was observed for the c parameter, whose mean value obtained for healthy skin was higher than for all anomalous conditions, and was related to the more friable characteristics of the neoplasm when compared with the tense collagen fibers of normal skin. The difficulty observed in the classification of cutaneous carcinomas by acoustic parameters can be, in part, due to positioning errors during the acquisition protocol. Moreover, the characterization could be improved by working with tissues in vivo, avoiding structural changes from the excision process. The use of higher frequencies, in the 50-100 MHz range, could also improve the characterization. Concluding, the results obtained here are still preliminary for the acoustic analysis of cutaneous carcinomas. However, a tendency was already observed for the c parameter. We are presently working on some modifications in the signal processing to evaluate whether ultrasonic characterization of cutaneous carcinomas is viable.
ACKNOWLEDGMENT

The authors thank the support of CAPES/PROEX, CNPq and FAPERJ, and the collaboration of the HUGG staff.
REFERENCES

1. Foster F S, Pavlin C J, Harasiewicz K A et al. (2000) Advances in ultrasound biomicroscopy. Ultrasound Med Biol 26:1-27
2. Weedon D, Marks R, Kao G F et al. (2006) Classification of tumours - Pathology and genetics of skin tumours. IARC Press, Lyon
3. Mogensen M, Nürnberg B M, Forman J L et al. (2009) In vivo thickness measurement of basal cell carcinoma and actinic keratosis with optical coherence tomography and 20-MHz ultrasound. Br J Dermatol 160:1026-1033
4. Desai T J, Desai A D, Horowitz D C et al. (2007) The use of high-frequency ultrasound in the evaluation of superficial and nodular basal cell carcinomas. Dermatol Surg 33:1220-1227
5. Allan E, Pye D A, Levine E L et al. (2002) Non-invasive pulsed ultrasound quantification of the resolution of basal cell carcinomas after photodynamic therapy. Lasers Med Sci 17:230-237
6. Moore J V, Allan E (2003) Pulsed ultrasound measurements of depth and regression of basal cell carcinomas after photodynamic therapy: relationship to probability of 1-year local control. Br J Dermatol 149:1035-1040
7. Uhara H, Hayashi K, Koga H et al. (2007) Multiple hypersonographic spots in basal cell carcinoma. Dermatol Surg 33:1215-1219
8. Lebertre M, Ossant F, Vaillant L et al. (2002) Spatial variation of acoustic parameters in human skin: an in vitro study between 22 and 45 MHz. Ultrasound Med Biol 28:599-615
9. Raju B I, Srinivasan M A (2001) High-frequency ultrasonic attenuation and backscatter coefficients of in vivo normal human dermis and subcutaneous fat. Ultrasound Med Biol 27:1543-1556
10. Miyasaka M, Sakai S, Kusaka A (2005) Ultrasonic tissue characterization of photodamaged skin by scanning acoustic microscopy. Tokai J Exp Clin Med 30:217-225
11. Huang Y P, Zheng Y P, Leung S F et al. (2007) High frequency ultrasound assessment of skin fibrosis: clinical results. Ultrasound Med Biol 33:1191-1198
12. Soldan M, Schanaider A, Madi K (2009) In vivo ultrasound biomicroscopy imaging of colitis in rats. J Ultrasound Med 28:463-469
13. D'Astous F T, Foster F S (1986) Frequency dependence of ultrasound attenuation and backscatter in breast tissue. Ultrasound Med Biol 12:795-808
14. Pan L, Zan L, Foster F S (1998) Ultrasonic and viscoelastic properties of the skin under transverse mechanical stress in vitro. Ultrasound Med Biol 24:995-1007
15. Guittet C, Ossant F, Remenieras J-P et al. (1999) High-frequency estimation of the ultrasonic attenuation coefficient slope obtained in human skin: simulation and in vivo results. Ultrasound Med Biol 25:421-429
Author Address
Author: Lorena Itatí Petrella
Institute: Biomedical Eng. Program, Federal Univ. of Rio de Janeiro
Street: Horácio Macedo Ave. 2030, H-335, Center of Technology
City: Rio de Janeiro (Zip. 21941-914)
Country: Brazil
Email: [email protected]
Gender-related Effects of Carbohydrate Ingestion and Hypoxia on Heart Rate Variability: Linear and Non-linear Analysis
T. Princi1, M. Klemenc2, P. Golja3 and A. Accardo4
1 Department of Life Sciences, University of Trieste, Trieste, Italy
2 General Hospital Dr. Franc Derganc, Šempeter pri Gorici, Slovenia
3 Private researcher, Volce 31, Tolmin, Slovenia
4 DEEI, University of Trieste, Trieste, Italy
Abstract— The differences between genders in cardiac autonomic modulation in normoxia are well documented, while gender-related ANS responses following exposure to hypoxia have shown dissimilar results. Carbohydrate treatment provoked sympathetic excitation in young females and males, whereas it did not modify the sympathovagal balance in middle-aged women. The aim of the present study was to assess the gender-related effects of sucrose ingestion and normobaric hypoxia on cardiac ANS function in young healthy males (n = 6) and females (n = 8). All subjects were exposed to normoxia (40 min). After the first 15-min normoxic period the subjects ingested a 10% water solution of sucrose in the amount of 4 kcal per kg body mass (4 kcal ≈ 1 g sucrose). Then followed 30 min of acute normobaric hypoxia (FiO2 = 12.86%). During the experiment the ECG was continuously monitored. The cardiac ANS activity was evaluated by using heart rate variability (HRV) linear (autoregressive spectra) and non-linear (beta coefficient, fractal dimension) analysis. The HF spectral component showed a significant reduction of vagal activity (p < 0.05) in women but not in men, comparing normoxia to sucrose ingestion and hypoxia. Under the same experimental conditions the LF/HF ratio, as an expression of the sympathovagal balance, presented a significant increase only in females. Also the fractal dimension, as an expression of the complexity of the system, was significantly lower (p < 0.05) only in female subjects comparing normoxia to sucrose ingestion and hypoxia. These results indicate a different gender-related cardiac ANS modulation linked to carbohydrate ingestion and acute exposure to hypoxia, suggesting that females have a higher sensitivity to both of these factors, each of which is able, under particular conditions, to influence the sympathovagal balance by itself.

Keywords— Heart rate variability, gender, hypoxia, carbohydrates

I. INTRODUCTION
Several factors can modify cardiac autonomic nervous system (ANS) activity. During acute exposure to hypobaric hypoxia a depression of autonomic functions, as reflected in a decrease of heart rate variability (HRV), has been reported, and a shift in the sympatho-vagal balance towards relatively more sympathetic and less parasympathetic activity has been detected at higher hypoxic levels [1-3]. Carbohydrate treatment represents another factor able to influence cardiac ANS function. It has been demonstrated that carbohydrate ingestion, but not fat or protein ingestion, increases sympathetic nerve activity [4, 5]. A dominance of sympathetic over parasympathetic modulation, as expressed by a higher value of the LF/HF ratio, has been observed in healthy subjects after glucose administration [6]. The differences between genders in cardiac autonomic modulation in normoxia are well documented [7-8], while gender-related ANS responses following exposure to hypoxia have shown dissimilar results, because the pattern and duration of exposure to hypoxia were dissimilar [9] or a different technique was used to evaluate the ANS activity [10-11]. Carbohydrate treatment provoked sympathetic excitation in young females [12] and males [13], whereas it did not modify the sympathovagal balance in middle-aged women [14]. The present study was, therefore, designed to assess the gender-related effects of sucrose ingestion and hypoxia on cardiac ANS modulation in young healthy females and males by using heart rate variability (HRV) linear as well as non-linear analysis.

II. MATERIAL AND METHODS
A. Subjects
Fourteen young healthy subjects (8 females in the follicular phase of their menstrual cycle and 6 males) participated in this study. All subjects were students of average fitness, nonsmokers with no history of cardiovascular, metabolic or pulmonary disease. All participants obtained physicians' approval and provided their informed consent for voluntary participation in the study. The protocol of the study was approved by the National Ethics Committee of the Republic of Slovenia.

B. Protocol
To eliminate the effects of circadian rhythm, the experiments were performed in the morning at the same day-time
in the laboratory situated at 90 m a.s.l. The subjects did not perform any physical activity on the day prior to the experiment or on the day of the experiment, and they were instructed not to consume any food or drink (except water ad libitum) on the day of the experiment. Upon their arrival at the laboratory, the subjects' height and weight were determined. Continuous blood pressure measurements by applanation tonometry and electrocardiogram (Colin BP-508, Komaki City, Japan), as well as finger pulse oximetry (Nellcor Oximax N-550, Pleasanton, Ca, USA), were initiated. The subjects were also provided with a mouthpiece, which was connected to a pneumotach (Hans Rudolph, Wyandotte, MD, USA) on the inspiratory side to monitor breath-by-breath ventilation. The expiratory side of the mouthpiece was connected to a 3.5-l mixing box, from which a 150 ml/min sample of air was passed continuously to an O2/CO2 gas analyser (Servomex 1440, Crowborough, UK). Upon instrumentation, ambient data were noted and subjects rested supine on an examination table for the rest of the experiment. Each experiment was composed of a 15-min control normoxic period (first normoxia, NORM 1), after which the subjects ingested, in less than a minute, a 10% water solution of sucrose in the amount of 4 kcal per kg body mass (4 kcal ≈ 1 g sucrose). After the sucrose ingestion a rest period of 30 min started in order to allow enough time for carbohydrate absorption; no data were recorded during this time. Following the 30-min rest, a second normoxic interval of 10 min was recorded (second normoxia, NORM 2), after which the inspiratory side of the mouthpiece was connected to a meteorological balloon, which acted as a reservoir for a hypoxic gas mixture (FiO2 = 12.86%) and at the same time allowed for its humidification, as the gas mixture was passed through water. Inspiration of a hypoxic normobaric gas mixture is one of the standard procedures for simulation of high altitude, and in the present study it served to simulate an altitude of 3,500 m [15]. The subjects inspired the hypoxic gas mixture for 30 min. The first hypoxic interval was recorded between the 10th and the 20th min (first hypoxia, HYPO 1) and the second hypoxic interval between the 20th and the 30th min (second hypoxia, HYPO 2). After 30 min of hypoxia, the subjects were switched to air and the experiment ended. During the experiments, ECG and oxygen saturation (SaO2; %) were continuously monitored.

C. HRV analysis
ECG electrodes were attached to the skin so that three modified leads were obtained (CM1-modified V1, CM2-modified V2, and CM5-modified V5). The two leads having the most prominently expressed R-peaks were used for further analysis.
From the ECG, the series of consecutive R-R intervals (tachogram) as a function of beat number was extracted. All artifacts during the recording were removed by passing the R-R time series through a filter that eliminated premature beats (if they deviated from the previous qualified interval by more than 2*SD) and noise, and substituted them with an interpolated value computed from the neighboring 10 beats. In order to sample at regular time intervals, the series were linearly interpolated and resampled at 2 Hz for further processing. From the tachograms, autoregressive spectra were evaluated using the Hamming window on intervals of 1024 points. Low (LF: 0.040-0.150 Hz) and high (HF: 0.150-0.800 Hz) spectral bands (in ms2 and normalized units) were evaluated and the LF/HF ratio was derived. Linear regression analysis between log(Power) and log(Frequency) was performed on the power spectrum included between 0.004 Hz and 1 Hz, and the slope (β) was estimated. The fractal dimension (FD) was evaluated by means of Higuchi's algorithm [16], based on the measure of the mean length of the curve using a segment of k samples (k varying from 1 to 6) as a unit of measure. The FD values were calculated on tracts of 120 consecutive R-R interval samples (see the sketch after this subsection). The mean value of FD ± SD was considered in the analysis.

D. Statistical analysis
The hypoxia period was divided into thirds and the last two thirds were considered for statistical analysis. The differences among the four conditions (NORM 1, NORM 2, HYPO 1, HYPO 2) were considered for the statistical analysis, separately in males and in females. The Wilcoxon rank sum test for paired data was applied and a p-value of 0.05 was adopted as statistically significant.
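The sketch below illustrates the two non-linear indices in minimal NumPy code. It is only an approximation of the procedure described above, since the autoregressive spectrum is replaced by a plain FFT periodogram for brevity; rr denotes the artifact-free R-R series resampled at 2 Hz, and the names are ours.

```python
import numpy as np

def higuchi_fd(x, k_max=6):
    """Higuchi fractal dimension [16] of a 1-D series, k = 1..k_max."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean_len = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):                    # k sub-curves, offsets 0..k-1
            idx = np.arange(m, n, k)
            # curve length normalised to the full series and the scale k
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(np.abs(np.diff(x[idx])).sum() * norm / k)
        mean_len.append(np.mean(lk))
    # FD is the slope of log L(k) versus log(1/k)
    fd, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)),
                       np.log(mean_len), 1)
    return fd

def beta_slope(rr, fs=2.0):
    """Slope of log(Power) vs log(Frequency) in the 0.004-1 Hz band."""
    rr = np.asarray(rr, dtype=float)
    power = np.abs(np.fft.rfft(rr - rr.mean())) ** 2
    f = np.fft.rfftfreq(len(rr), d=1.0 / fs)
    sel = (f >= 0.004) & (f <= 1.0)
    beta, _ = np.polyfit(np.log(f[sel]), np.log(power[sel]), 1)
    return beta

# FD on tracts of 120 consecutive R-R samples, as in the analysis above:
# fds = [higuchi_fd(rr[i:i + 120]) for i in range(0, len(rr) - 119, 120)]
```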
III. RESULTS

Table 1 summarizes the anthropometric characteristics of the subjects. No significant age difference was observed between males and females.

In men, sucrose ingestion in normoxia provoked a statistically significant increase of FD comparing NORM 1 to NORM 2 (1.35 vs 1.44; p = 0.04), whereas no significant difference was detected in the R-R interval (mean value ± SD) or in the other HRV parameters. In women, in the same normoxic conditions (NORM 1 vs NORM 2), the carbohydrate ingestion did not cause any significant modification of the analyzed parameters. Comparing normoxia to hypoxia, the differences between males and females become more evident. The males presented significant differences only for R-R intervals (mean value ± SD), comparing fasting normoxia to both hypoxic periods after carbohydrate administration (R-R interval: NORM 1 vs HYPO 1 p = 0.04, NORM 1 vs HYPO 2 p = 0.004), and for the beta coefficient (NORM 1 vs HYPO 2 p = 0.01).

On the other hand, in females hypoxia without sucrose ingestion as well as hypoxia with sucrose ingestion provoked a statistically significant increase of heart rate, i.e. a significant decrease of the R-R interval (NORM 1 vs HYPO 1 p = 0.002; NORM 1 vs HYPO 2 p = 0.003; NORM 2 vs HYPO 1 p = 0.03; NORM 2 vs HYPO 2 p = 0.03) (Fig. 1a), a significant decrease of the HF spectral component (ms2) (NORM 1 vs HYPO 1 p = 0.005; NORM 1 vs HYPO 2 p = 0.01; NORM 2 vs HYPO 1 p = 0.01; NORM 2 vs HYPO 2 p = 0.02) (Fig. 1b), and a significant increase of the LF/HF ratio (NORM 1 vs HYPO 1 p = 0.005; NORM 1 vs HYPO 2 p = 0.003; NORM 2 vs HYPO 1 p = 0.003; NORM 2 vs HYPO 2 p = 0.001) (Fig. 1c). The FD value significantly decreased when comparing the second hypoxic period with both normoxic periods (NORM 1 vs HYPO 2 p = 0.04; NORM 2 vs HYPO 2 p = 0.01) (Fig. 1d). The beta coefficient significantly increased comparing fasting normoxia to the second hypoxia (NORM 1 vs HYPO 2 p = 0.05).
Table 1 Anthropometric characteristics of the subjects

              Female                                   Male
Age (yr)      25    22    25    20    26    25    30   22    24    23    24    23    22     22
Height (cm)   161   162   165   172   174   171   175  164   183   171   176   180   200    171
Weight (kg)   54.3  55.9  54.3  78.8  60.0  53.4  68.3 89.9  90.5  65.5  79.6  92.5  113.4  62.2
IV. DISCUSSION
It is well known that the exposure to hypobaric hypoxia provokes a depression of autonomic functions and a shift in the sympatho-vagal balance towards more sympathetic and less parasympathetic activity [1-3]. Carbohydrate ingestion, but not fat or protein ingestion, also increases sympathetic nerve activity [4-5]. Studies of the gender-related cardiac ANS responses following exposure to hypoxia have revealed dissimilar results. Moreover, to our knowledge, no other experimental reports have examined whether autonomic responses following carbohydrate ingestion are sex dependent. In fact, different authors considered the carbohydrate treatment only in women [12] or in men [13]. Thus, this study evaluated the effects of sucrose ingestion and normobaric hypoxia on cardiac ANS function in age-matched healthy males and females. The autonomic cardiovascular control was investigated using HRV linear and non-linear analysis. Our findings demonstrated differences in cardiac autonomic regulation comparing females to males after carbohydrate load and normobaric hypoxia.
Figure 1a-d. RR interval, HF spectral component, LF/HF ratio and FD values in males (o) and females (x) in the NORM 1, NORM 2, HYPO 1 and HYPO 2 conditions.
As power spectral analysis of HRV is a safe and reliable tool to evaluate cardiac ANS activity, and the LF/HF ratio is an indirect index of cardiac sympathovagal balance, reflecting autonomic input to the heart [17-18], this study demonstrated, in females but not in males, a significant depressive
effect of hypoxia and sucrose ingestion on the vagal function, as well as a shift towards cardiac sympathetic excitation in these experimental conditions (Fig. 1b-c). On the other hand, the linear regression analysis between log(Power) and log(Frequency), performed by selecting the 0.004-1 Hz band of the power spectrum, demonstrated a significantly different level of complexity of the system, as represented by the beta coefficient values, in males as well as in females, comparing normoxia with hypoxia after sucrose ingestion. Because the beta parameter is inversely related to complexity, a decrease of its value, as reported in our study, is indicative of less complex interactions of the autonomic nervous control mechanisms over the myocardium during acute exposure to hypoxia. However, the FD, which represents another index able to appraise the complexity of the signal rather than the magnitude of the variability [19-20], confirmed only in female subjects a reduction of complexity as an effect of hypoxia and carbohydrate load on cardiac ANS activity in comparison to normoxia. As the oxygen saturation in normoxia and the ventilatory response to hypoxia have been found to differ between males and females [4-5,7], these findings suggest that in women the central chemoreceptors could be involved in a reinforcement of the sympathetic response to hypoxia.

V. CONCLUSIONS
This study suggests a different cardiac ANS modulation after carbohydrate ingestion and hypoxia when comparing healthy young males to females. It is likely that women present a higher sensitivity to both these factors, each able by itself, in particular conditions, to influence the sympathovagal balance.
REFERENCES
1. Hughson RL, Yamamoto Y, McCullough RE, Sutton JR, Reeves JT (1994) Sympathetic and parasympathetic indicators of the heart rate control at altitude studied by spectral analysis. J Appl Physiol 77:2537-2542
2. Kanai M, Nishihara F, Shiga T, Shimada H, Saito S (2001) Alteration in autonomic nervous control of heart rate among tourists at 2700 and 3700 m above sea level. Wilderness Environ Med 12:8-12
3. Sevre K, Bendz B, Hanko E, Nakstad AR, Hauge A, Kasin JI, Lefrandt JD, Smit AJ, Eide I, Rostrup M (2001) Reduced autonomic activity during stepwise exposure to high altitude. Acta Physiol Scand 173(4):409-417
4. Young JB, Landsberg L (1977) Stimulation of the sympathetic nervous system during sucrose feeding. Nature 269:615-617
5. Welle S, Lilavivat U, Campbell RG (1981) Thermic effect of feeding in man: increased plasma norepinephrine levels following glucose but not protein or fat consumption. Metabolism 30:953-958
6. Paolisso G, Manzella D, Ferrara N, Gambardella A, Abete P, Tagliamonte MR et al (1997) Glucose ingestion affects cardiac ANS in healthy subjects with different amounts of body fat. Am J Physiol Endocrinol Metab 273:E471-E478
7. Sevre K, Lefrandt JD, Nordby G, Os I, Mulder M, Gans RO, Rostrup M, Smit AJ (2001) Autonomic function in hypertensive and normotensive subjects: the importance of gender. Hypertension 37(6):1351-1356
8. Barantke M, Krauss T, Ortak J, Lieb W, Reppel M, Burgdorf C, Pramstaller PP, Schunkert H, Bonnemeier H (2008) Effects of gender and aging on differential autonomic responses to orthostatic maneuvers. J Cardiovasc Electrophysiol 19(12):1296-1303
9. Wadhwa H, Gradinaru C, Gates GJ, Badr MS, Mateika JH (2008) Impact of intermittent hypoxia on long-term facilitation of minute ventilation and heart rate variability in men and women: do sex differences exist? J Appl Physiol 104(6):1625-1633
10. Jones PP, Davy KP, Seals DR (1999) Influence of gender on the sympathetic neural adjustments to alterations in systemic oxygen levels in humans. Clin Physiol 19(2):153-160
11. Leuenberger UA, Hogeman CS, Quraishi S, Linton-Frazier L, Gray KS (2007) Short-term intermittent hypoxia enhances sympathetic responses to continuous hypoxia in humans. J Appl Physiol 103(3):835-842
12. Tentolouris N, Tsigos C, Perrea D, Koukou E, Kyriaki D, Kitsou E, Daskas S, Daifotis Z, Makrilakis K, Raptis SA, Katsilambros N (2003) Differential effects of high-fat and high-carbohydrate isoenergetic meals on cardiac autonomic nervous system activity in lean and obese women. Metabolism 52(11):1426-1432
13. Millis RM, Austin RE, Bond V, Faruque M, Goring KL, Hickey BM, Blakely R, DeMeersman RE (2009) Effects of high-carbohydrate and high-fat dietary treatments on measures of heart rate variability and sympathovagal balance. Life Sciences 85:141-145
14. Kanaley JA, Baynard T, Franklin RM, Weinstock RS, Goulopoulou S, Carhart R Jr, Ploutz-Snyder R, Figueroa A, Fernhall B (2007) The effects of a glucose load and sympathetic challenge on autonomic function in obese women with and without type 2 diabetes mellitus. Metabolism 56(6):778-785
15. Pichotka J (1957) Der Gesamt-Organismus im Sauerstoffmangel. Handbuch der allgemeinen Pathologie. Springer, Berlin
16. Higuchi T (1988) Approach to an irregular time series on the basis of the fractal theory. Physica D 31:277-283
17. Akselrod S, Gordon D, Ubel FA, Shannon DC, Barger AC, Cohen RJ (1981) Power spectrum analysis of heart rate fluctuations: a quantitative probe of beat-to-beat cardiovascular control. Science 213:220-222
18. Camm AJ, Malik M, Bigger JT, Breithardt G, Cerutti S, Cohen RJ, Coumel P, Fallen EL, Kennedy HL, Kleiger RE, Lombardi F, Malliani A, Moss AJ, Rottman JN, Schmidt G, Schwartz PJ, Singer DH (1996) Heart rate variability. Standards of measurement, physiological interpretation and clinical use. Circulation 93:1043-1065
19. Nakamura Y, Yamamoto Y, Muraoka I (1993) Autonomic control of heart rate during physical exercise and fractal dimension of heart rate variability. J Appl Physiol 74(2):875-881
20. Goldberger AL (1996) Non-linear dynamics for clinicians: chaos theory, fractals, and complexity at the bedside. The Lancet 347:1312-1314
21. Ricart A, Pages T, Viscor G, Leal C, Ventura JL (2008) Sex-linked differences in pulse oximetry. Br J Sports Med 42(7):620-621
22. Soliz J, Soulage C, Borter E, van Patot MT, Gassmann M (2008) Ventilatory responses to acute and chronic hypoxia are altered in female but not male Paskin-deficient mice. Am J Physiol Regul Integr Comp Physiol 295(2):R649-R658

Author: Princi Tanja
Institute: Department of Life Sciences
Street: Via A. Valerio, 22
City: 34127 Trieste
Country: Italy
Email: [email protected]
On the analysis of dynamic lung mechanics separately in ins- and expiration
K. Möller1, Z. Zhao1, C.A. Stahl2, and J. Guttmann2
1 Furtwangen University, Biomedical Engineering, Villingen-Schwenningen, Germany
2 University Medical Center of Freiburg, Experimental Anaesthesiology, Freiburg, Germany
Abstract— Decision making in the ICU depends on knowledge about the pathophysiological state of the patient. In mechanical ventilation therapy the analysis of respiratory mechanics plays an important role in determining the appropriate ventilation support. Mainly global airway resistance and lung compliance are determined, either in a static or a dynamic setting. Dynamic analysis, though more promising, is difficult to obtain online at the bedside. Usually the established dynamic methods [1-6] assume that both parameters, i.e. airway resistance and compliance, are identical in inspiration and expiration, which is not true in general, as was shown e.g. by means of body plethysmography in healthy spontaneously breathing subjects [7]. Therefore some authors, e.g. [8], propose to apply a least-squares fit (LSF), based on the equation of motion, to the respiratory data of the expiration phase alone. But as the theory of the passive unloading of a capacitor (compliance) via a resistor (resistance) predicts, passive expiration leads to a quasi-linear relationship between volume and flow and thus to an underspecified problem. To decouple this linear volume-flow dependency and to enable a separate analysis of respiratory mechanics in inspiration and expiration online at the bedside, a new ventilation mode was introduced: Active Expiration Control (AEC) [9]. If performed appropriately, AEC even allows separate analysis of the intratidal nonlinearity of compliance and resistance, as will be demonstrated. Six healthy sheep, ventilated by an Evita4Lab system (Draeger Medical, Germany) that was reprogrammed to enable AEC, were enrolled in this study. The adaptive slice method (ASM) [10, 11] was used to obtain intratidal changes in respiratory parameters. Considerable differences could be observed between the resistance of inspiration and that of expiration. This may lead to a different interpretation of dynamic compliance obtained with current methods. AEC decouples the linear dependency between volume and flow present in passive expiration and provides the basis for separate analysis of lung mechanics in inspiration and expiration. The method is ready for implementation in ICU ventilators.

Keywords— mechanical ventilation, ARDS, parameter estimation, expiration control, adaptive slice method
I. INTRODUCTION

Decision making in the ICU depends on knowledge about the pathophysiological state of the patient. In mechanical ventilation therapy the analysis of respiratory mechanics plays an important role in determining the appropriate ventilation support. Respiratory resistance and compliance were found to be nonlinear and to differ between inspiration and expiration, as shown e.g. by means of body plethysmography in healthy spontaneously breathing subjects [7]. In their study Ulmer et al. have shown that airway resistance Raw during expiration is higher than during the inhalation phase. But this approach has severe limitations: a body plethysmograph is needed, which is definitely not a bedside tool for monitoring of critically ill patients. Lucangelo et al. thus proposed in [8] to apply an LSF method separately to inspiration and to expiration. Citation: "The LSF method does not require flow interruption or a peculiar inspiratory flow pattern, it can be applied during the whole breathing cycle or only in the inspiratory or expiratory phase." Theoretical reasoning and experimental tests showed that the proposal to analyse the expiration phase alone is error-prone and will lead to false results [9]. Special maneuvers are needed to allow separate analysis of the inspiration and the expiration phase, such as AEC [9]. In this article, we analyze intratidal respiratory mechanics using data obtained during AEC in an incremental/decremental PEEP maneuver.

II. MATERIALS AND METHODS

A. Theoretical Background

Modern ventilators employed in intensive care acquire breath-by-breath flow ($\dot V(t)$), volume ($V(t)$) and pressure ($P_{aw}(t)$) curves at the airway opening in real time. Please note: $\dot V = dV/dt$.
The relation between the pressure generated by the ventilator and the resulting flow and volume curves can be described by the equation of motion:
$$P_{aw}(t) = R \cdot \dot V(t) + \frac{1}{C} \cdot V(t) + p_0$$

$p_0$ denotes the alveolar end-expiratory pressure, which may be due to positive end-expiratory pressure (PEEP) applied by the respirator or to intrinsic PEEP. The term $R \cdot \dot V(t)$ corresponds to the resistive pressure $p_{res}(t)$ dissipated across the connecting tubing system, the endotracheal tube, the bronchial airways, the lung tissue and the rib cage to overcome the frictional forces generated with gas flow. The term $\frac{1}{C} \cdot V(t)$ reflects the elastic pressure $p_{el}(t)$ that must be applied to overcome the elastic forces of lung and thorax; it depends both on the tidal volume insufflated in excess of the residual volume (Vt) and on the respiratory compliance. With a least-squares fit (LSF) method the lung (model) parameters R and C can be determined online at the bedside [1,2,3].

Accepting Newton's equation of motion as a sufficiently accurate model of lung dynamics, inspiration can be described as an energy transfer from the ventilator to the patient's lung that loads a capacitor through a resistor. Expiration, as the opposite process, can be physically described as the unloading of a capacitor through a resistance, i.e. part of the energy provided by the ventilator during inspiration and stored in the form of elastic extension of lung and thorax is unloaded via the bronchial and tube resistance. At the end of inspiration ($t_{eoi}$) the alveolar pressure $P_{alv}(t_{eoi}) = P_{max}$ assumes its maximal value. Thus, at the beginning of expiration ($t_{ex} = t = 0$) flow can be represented by $\dot V_{max} \cdot R = P_{max}$. For any arbitrary $t > 0$ the pressure during expiration amounts to $P_{alv}(t) = P_{max} \cdot e^{-kt}$, with the expiratory flow represented by

$$\dot V(t) = \frac{P_{alv}(t)}{R} = \frac{P_{max}}{R} \cdot e^{-kt}$$

and the expired volume becoming

$$V_e(t) = -\frac{1}{k} \cdot \frac{P_{max}}{R} \cdot e^{-kt} + K = -\frac{1}{k} \cdot \dot V(t) + K$$

Obviously $\tau = \frac{1}{k} = R \cdot C$ governs the pressure-time curve during a passive expiration. Due to the linear dependence of the lung volume $V(t)$ and flow $\dot V(t)$ (Fig. 1),

$$V(t) = -\tau \cdot \dot V(t) + K,$$

fitting of the equation of motion is in theory not possible during passive expiration.

Fig. 1 Linear relation of flow and volume during passive expiration

Some simple mathematics [9] show that the analysis of respiratory data measured during passive expiration is insufficient to determine expiratory resistance and compliance. Under "ideal" conditions, i.e. an ideally exponentially decaying flow, the fitted values of R and C are arbitrary, because volume and flow data cancel each other. Under real conditions the flow signal is perturbed, especially at the onset of expiration, thus introducing a bias into the fitting of R and C. If the analysis is nevertheless performed as suggested in [8], the parameters are prone to considerable noise [9]. Thus the Active Expiration Control (AEC) mode, which manipulates the expiration phase in different ways, was introduced by our group. In Fig. 2 an example of an AEC-generated cycle is shown.

Fig. 2 left: An AEC cycle with flow, volume and pressure depicted. right: The changed relation of flow and volume during modified expiration

Instead of just using the LSF method, which assumes constant parameters throughout the cycle, the intratidal course of the nonlinearity is examined in this report using the adaptive slice method (ASM) [10]. ASM is based on a piecewise linear approximation of the nonlinearity (such as the SLICE method [1]) with a statistical measure to optimize the interval size. The confidence interval of the parameter fit guides the search for a good compromise between noise level (S/N ratio) and nonlinearity in the data.

B. Realization of AEC

The E4Lab system (Draeger Medical, Lübeck) was used to implement a pressure-controlled expiration phase on an Evita 4 ventilator. It is capable of creating an almost constant or linear flow during expiration. This active expiration control mode (AEC) was embedded into an incremental/decremental PEEP maneuver to enable the estimation of respiratory parameters over the whole range of vital capacity. The maneuver is specified by the pressure step between PEEP levels (e.g. 5 mbar) and the number of breathing cycles (n = 15) per level. The inflation limb is stopped as soon as the peak pressure at end-inspiration exceeds 45 mbar. Subsequently PEEP is reduced symmetrically in steps of e.g. 5 mbar to zero end-expiratory pressure (ZEEP).
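To make the least-squares step concrete, here is a minimal sketch in Python of fitting R, C and p0 from the equation of motion; it is an illustration under assumed signal names, not the Evita4Lab implementation.

import numpy as np

def fit_rc(paw, flow, volume):
    # Least-squares fit of Paw(t) = R*V'(t) + (1/C)*V(t) + p0.
    # paw, flow, volume: equally sampled waveforms of one phase
    # (e.g. inspiration, or an AEC-modified expiration).
    A = np.column_stack([flow, volume, np.ones_like(paw)])
    coef, *_ = np.linalg.lstsq(A, paw, rcond=None)
    r, elastance, p0 = coef            # elastance = 1/C
    return r, 1.0 / elastance, p0

Applied to inspiratory or AEC-modified expiratory data, the three regressor columns are independent; for a purely passive expiration, volume and flow are collinear (V = -tau*V' + K) and the fit degenerates, which is exactly the problem AEC removes.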
C. Animal experiments

Six healthy female sheep (weight 67 ± 11 kg) were enrolled in this study. The study protocol was approved by the Institution's Committee on Investigations involving animal subjects and the Veterinary Department of the local authorities (G-03/43). All animals were housed and the procedures were performed within the facilities of the University Hospital Freiburg.

D. Animal preparation

All animals were anesthetized, tracheally intubated and ventilated according to the same protocol. The animals were positioned prone in a physiologic manner, and special care was given to avoid abdominal compression. Initial mechanical ventilation (volume controlled mode with constant inspiratory flow, tidal volume 10 ml/kg, PEEP 3 cmH2O, breathing frequency 15/min, I:E 1:1, inspiratory hold 10%) was provided with a Servo 900 C ventilator (Siemens-Elema, Solna, Sweden). Monitoring consisted of online registration of ECG, oxygen saturation and tidal CO2 concentration. After a stabilization period of 30 minutes, animals were switched to an Evita4Lab ventilator and measurement system (Draeger Medical, Lübeck, Germany), performing the respiratory maneuvers and data acquisition [11].

III. RESULTS

Exemplary results of the sheep experiments are depicted in the following figures. First, the PEEP-wave maneuver is shown in Fig. 3; it demonstrates how the PEEP level is changed every 15 cycles. This maneuver allows approximating the nonlinearity in dependence of the pressure level, i.e. the mechanical stress applied to the system. In prior investigations it was shown how dynamic compliance depends on pressure [11] if inspiration and expiration (whole cycles) are pooled together and then evaluated. The applied SLICE method relies on the assumption that both the resistance R and the compliance C are equal in both phases of the cycle.

Fig. 3 A PEEP-wave maneuver with 5 incremental steps in PEEP of 5 mbar (in red), each followed by 5 steps down to ZEEP (in blue). On every PEEP level 15 cycles are generated. One of those cycles from the second PEEP level is shown on separate axes to depict the quasi-sinusoidal form.

Fig. 4 Fit of compliance on different PEEP levels with AEC and ASM. Upper (red): dynamic compliance fitted with pooled data of inspiration and expiration. Lower left: the depicted C is evaluated from inspiration alone on every PEEP level. Lower right: slope of dynamic compliance in expiration alone.
With the newly introduced methods ASM and AEC the same investigations can be done, but separately for the inspiration and the expiration phase. Thus, intratidal changes of the parameters R and C during inspiration and expiration could be calculated; they are depicted for compliance in Fig. 4. Please note the differences in the inspiration phase (Fig. 4, lower left) at low PEEP values: the blue line shows higher compliance than the red line. This may be due to recruitment processes during the PEEP maneuver.

IV. DISCUSSION

Analysis of a passive expiration alone is not possible from a theoretical point of view. Therefore a special mode, the Active Expiration Control (AEC), was developed that manipulates the expiration phase (which may affect the shape of the expiratory flow/volume curve) and thereby allows evaluating the inspiration and expiration phases of a respiratory cycle separately. In combination with the adaptive slice method (ASM), reliable estimates of dynamic compliance and resistance can be obtained. The main results of this first analysis on 6 healthy sheep are:
1. it is apparent that respiratory mechanics nonlinearly depend on PEEP,
2. mechanics in inspiration and expiration differ regarding the resistance and the compliance,
3. ASM has the capability to identify parameters with sufficient reliability in most cases,
4. it seems that the assumption of identity of resistance values in inspiration and expiration is not correct.
This last statement has great impact on the interpretation of results presented in former experiments. The former analysis of intratidal compliance may be faulty: e.g. the interpretation that a detected decrease in compliance is an indicator of overdistention, and that an increase in compliance may signal alveolar recruitment, has to be re-evaluated. Further investigations will be necessary to isolate the influence of the misleading resistance values in expiration on the overall dynamic compliance derived from the whole respiratory cycle.
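The theoretical impossibility discussed above can also be checked numerically: for an ideal exponential expiration the regressor matrix of the equation of motion loses rank, so R and C are not identifiable. A minimal Python sketch with assumed parameter values (not experimental data):

import numpy as np

R, C, V0 = 10.0, 0.05, 0.5            # assumed values for illustration
tau = R * C
t = np.linspace(0.0, 3 * tau, 300)
vol = V0 * np.exp(-t / tau)           # passive expiration volume
flow = -vol / tau                     # dV/dt, exactly proportional to vol

A = np.column_stack([flow, vol, np.ones_like(t)])
print(np.linalg.matrix_rank(A))       # prints 2 instead of 3: underspecified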
V. CONCLUSION

Active Expiration Control (AEC) enables a separate analysis of expiratory respiratory mechanics by "breaking" the theoretical restriction, namely the linear dependence between flow and volume during passive expiration. In combination, AEC and the Adaptive Slice Method (ASM) provide a reliable and robust method. From the independent and separate analysis of inspiratory and expiratory lung mechanics we expect new insights into the intratidal process of alveolar recruitment and derecruitment. In addition, the new AEC incorporates the possibility to actively influence the expiratory pattern therapeutically, e.g. in order to prevent expiratory alveolar derecruitment or to avoid flow limitation in COPD.

VI. ACKNOWLEDGMENTS

The authors would like to thank Prof. Dr. J. Haberstroh and Dr. U. Albus for their support during the animal experiments. This work was partially supported by Bundesministerium für Bildung und Forschung (Grant 1781X08 MOTiF-A), and Dräger Medical, Lübeck.
VII. REFERENCES
1. Guttmann J, et al. Technol Health Care 2: 175-191, 1994
2. Iotti GA, et al. Int Care Med 21: 406-413, 1995
3. Karason S, et al. Acta Anaesth Scand 44: 578-585, 2000
4. Nikischin W, et al. Am J Resp Crit Care Med 158: 1052-1060, 1998
5. Grasso S. Crit Care Med 32: 1018-1027, 2004
6. Stenqvist O, et al. Curr Opinion Crit Care 14: 87-93, 2008
7. Ulmer. J Phys Pharm 55, Suppl 3: 149-153, 2004
8. Lucangelo U, et al. Curr Opin Crit Care 13(1): 64-72, 2007
9. Möller K, et al. IFMBE Proceedings 22, Springer Verlag, pp. 2049-2052, 2008
10. Zhao Z, et al. in: ICBBE, IEEE, pp. 1686-1689, 2008. DOI: 10.1109/ICBBE.2008.750
11. Stahl CA, et al. Crit Care Med 34(8): 2090-2098, 2006
12. Jonson B, et al. Thorax 54, 1999
Corresponding author:
Prof. Dr. Knut Möller
Biomedical Engineering, Furtwangen University
Jakob Kienzle Str. 17
78054 VS-Schwenningen
Germany
Email: [email protected]
Clinical Validation of an Algorithm for Automatic Detection of Atrial Fibrillation from Single Lead ECG
M. Triventi1, G. Calcagnini1, F. Censi1, E. Mattei1, F. Mele2, and P. Bartolini1
1 ISS - Italian National Institute of Health, Rome, Italy
2 Department of Cardiology, San Filippo Neri Hospital, Rome, Italy
Abstract— Atrial fibrillation (AF) is the most common cardiac arrhythmia encountered in clinical practice in western countries. People with AF usually have a significantly increased risk of stroke. Clinically, AF is diagnosed by a surface electrocardiogram (ECG). AF is characterized by the absence of P-waves and by a rapid irregular ventricular rhythm. Algorithms for automatic detection of AF either rely on the absence of P-waves or are based on ventricular rhythm variability (RR variability). This work presents an automatic algorithm for AF real-time detection based on the analysis of the RR series (ventricular interbeat intervals) and of the difference between successive RR intervals (∆RR intervals). The coefficient of variation of the ∆RR series and the Shannon entropy of the RR series, computed over 5-minute segments, are used to discriminate AF from normal sinus rhythm. A classifier based on the Mahalanobis distance is then used. The proposed algorithm was clinically validated on 61 patients with a history or suspicion of intermittent AF. The results obtained show that the algorithm can precisely detect AF episodes from a 5-minute, single lead ECG, with a specificity of 97.9%, a sensitivity of 100%, and an accuracy of 98.4%. Its implementation on a microcontroller makes it suitable as a home-care device for the accurate detection of AF episodes.
Keywords— Atrial fibrillation, real time algorithm, AF detection, clinical validation.
I. INTRODUCTION

Atrial fibrillation (AF) is the most common cardiac arrhythmia in western countries. AF is an independent risk factor for death and a major cause of stroke [1,4]. In AF, the electrical impulses in the atria degenerate from their usual organized rhythm into a rapid chaotic pattern, causing irregular sequences of ventricular activations. Clinically, AF is typically diagnosed by the electrocardiogram (ECG). It is characterized by the absence of P-waves and by a rapid irregular ventricular rhythm [5]. Algorithms for automatic detection of AF either rely on the absence of P-waves or are based on ventricular rhythm variability (RR variability). Given the low signal-to-noise ratio characterizing the P-wave, many algorithms are based on the analysis of the irregularity of the RR series (ventricular interbeat intervals). This work presents an algorithm for AF detection based on indexes extracted from the RR interval and ∆RR interval series (difference between successive RR intervals), and its implementation on a microcontroller. The performance of the proposed algorithm has been clinically validated.
II. METHODS

A. The AF Detection Algorithm: R Wave Detection

The algorithm detects QRS complexes by using the method proposed by Pan and Tompkins (P&T) [8], which includes a bandpass filter, a differentiator, a squaring operation and a moving-window integrator. The bandpass filter, obtained by cascading a lowpass and a highpass filter, reduces the influence of muscle noise, 50 Hz interference, baseline wander and the T-wave. The desirable passband to maximize the QRS energy is approximately 5-15 Hz. Given a sampling frequency of 200 Hz, the filters are:

• Low pass (IIR filter with 3 dB frequency cutoff at 11 Hz):

$y(n) = 2y(n-1) - y(n-2) + x(n) - 2x(n-6) + x(n-12)$

• High pass (IIR filter with 3 dB frequency cutoff at 5 Hz):

$y(n) = 32x(n-16) - [y(n-1) + x(n) - x(n-32)]$

The derivative operator suppresses the low-frequency components of the P and T waves, and provides a large gain to the high-frequency components arising from the high slopes of the QRS complex.

• The derivative was implemented as:

$y(n) = \frac{1}{8}\,[2x(n) + x(n-1) - x(n-3) - 2x(n-4)]$

The moving-window integrator performs smoothing of the output of the derivative operator.

• Moving window integrator:

$y(n) = \frac{1}{N}\sum_{i=0}^{N-1} x(n-i)$
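For illustration, the filter cascade above can be written directly as difference equations; the following Python sketch (an assumption for readability, not the authors' Matlab or dsPIC code) implements the standard P&T stages at 200 Hz and returns the integrator output on which the adaptive threshold operates.

import numpy as np
from scipy.signal import lfilter

def pan_tompkins_stages(ecg, fs=200):
    # Low pass: y(n) = 2y(n-1) - y(n-2) + x(n) - 2x(n-6) + x(n-12)
    lp = lfilter([1, 0, 0, 0, 0, 0, -2, 0, 0, 0, 0, 0, 1], [1, -2, 1], ecg)
    # High pass: y(n) = 32x(n-16) - [y(n-1) + x(n) - x(n-32)]
    hp = lfilter(np.r_[-1, np.zeros(15), 32, np.zeros(15), 1], [1, 1], lp)
    # Five-point derivative with gain 1/8
    der = lfilter(np.array([2, 1, 0, -1, -2]) / 8.0, [1], hp)
    sq = der ** 2                      # squaring
    n = int(round(0.150 * fs))         # ~150 ms moving window (N = 30)
    return lfilter(np.ones(n) / n, [1], sq)

Adaptive thresholding and peak detection then run on this output, as described next.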
A QRS is considered to occur when the output of the integrator overcomes an adaptive threshold, updated according to the signal maximum. The flowchart of the P&T algorithm is depicted in Fig. 1, while the outputs of the filters are shown in Fig. 2.

Fig. 1 Flowchart of the QRS detection algorithm: low pass filter, high pass filter, discrete derivative, squaring, integration, adaptive thresholding, peak detection

Fig. 2 Output of the intermediate stages of the QRS detection algorithm

The final result of the peak detection algorithm is the extraction of the R wave occurrence times. From these occurrence times, the RR series (ventricular interbeat intervals) and the ∆RR series (the difference between successive RR intervals) are obtained.

B. The AF Detection Algorithm: Analysis of the RR and ∆RR Sequences

The detection of an atrial fibrillation episode is based on the extraction of quantitative indexes from the RR and ∆RR time series. The RR intervals during atrial fibrillation have a larger standard deviation and a shorter correlation length than those during normal sinus rhythm (NSR) [9,10] (Fig. 3).

Fig. 3 RR sequences during a sinus rhythm (panel a) and during an atrial fibrillation rhythm (panel b). On the x-axis are the beat numbers; on the y-axis are RR intervals in ms

To distinguish between NSR and AF we use the entropy (En) of variation of the RR interval series and the coefficient of variation (CV) of the ∆RR intervals. The entropy is estimated as follows:

$$En = -\sum_i p_i \cdot \log_2 p_i$$

where $p$ is the estimated probability density function of the RR series. Since the mean of the ∆RR sequence tends to zero, we calculated the CV by dividing the standard deviation of the ∆RR intervals by the mean of the RR sequence:

$$CV_{\Delta RR} = \frac{\sigma_{\Delta RR}}{\mu_{RR}}$$

To implement an automatic decision criterion, based on the CV and En, we used the Mahalanobis distance, which takes
into account the covariance among the variables in calculating distances.
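As an illustration of this decision rule, a minimal Python sketch of the two indexes and the Mahalanobis classifier follows; the histogram bin count and the class statistics are assumptions for the example, not the validated implementation.

import numpy as np

def af_indexes(rr):
    # Shannon entropy of the RR series and CV of the deltaRR series
    # (sigma_deltaRR / mu_RR), computed over a 5-minute segment.
    rr = np.asarray(rr, dtype=float)
    p, _ = np.histogram(rr, bins=16)   # bin count is an assumption
    p = p[p > 0] / p.sum()
    en = -(p * np.log2(p)).sum()
    cv = np.diff(rr).std() / rr.mean()
    return en, cv

def mahalanobis(x, mu, cov):
    d = np.asarray(x) - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# A segment is labeled AF if its (En, CV) point is closer, in the
# Mahalanobis sense, to the AF class statistics than to the NSR ones.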
Results are presented as Sensitivity (Se), Specificity (Sp) and Accuracy (Ac).
C. Clinical Validation Protocol
A. NSR Detector
The study was conducted at the Atrial Fibrillation Unit of S. Filippo Neri Hospital, in Rome. We studied 61 patients undergoing a standard 12-lead ECG exam for a history or suspicion of AF. Heart rhythm diagnosis was performed by an expert cardiologist. After standard 12-lead ECG recording using a commercial electrocardiograph, a 5-minute single-lead ECG was acquired using a custom-made battery-operated acquisition system with a sampling frequency of 1000 Hz (National Instruments NI USB-6218 DAQ card, 16-bit, 250 kS/s). Patients with pacemaker and/or defibrillator were excluded.
Figure 4 shows the results of the algorithm used as NSR detector. 38/43 NSR and 18/18 arrhythmias were correctly classified. In terms of Se and Sp, our algorithm shows Se = 88.4% and Sp = 100%. The accuracy was 91.8%.
D. The Algorithm on a Microcontroller The algorithm was first tested in Matlab. Then, the algorithm was translated in C language, to be implemented in a digital signal processing (DSP) microcontroller to perform real time detection. We used the Microchips DSP dsPIC33FJ256GP710, a 16-bit digital signal controller using C30 compiler, with • • • • • •
Core Frequency: 40MHz Core Supply Voltage: 2.75V Interface Type: ECAN, I2C, DCI, SPI, UART Memory Size, Flash: 256KB Supply Voltage Range: 3V to 3.6V RoHS Compliant
Fig. 4 Arrhythmias episodes distribution as a function of the ∆RR coefficient of variation and RR entropy. Results of the algorithm used as arrhythmias detection B. AF Detector Figure 5 shows the results of the algorithm used as AF detector. 14/14 AF rhythms and 42/43 non-AF rhythms were correctly classified. In terms of Se and Sp, our algorithm shows Sp = 97.9% and Se = 100%. The accuracy was 98.4%.
III. RESULTS Table 1 shows the heart rhythm classification according to the cardiologist and the characteristics of the patients. Table 1 Characteristics of patients’ population Rhythm
N
Age (mean+/- sd, range)
NSR
43
65.27 +/- 11.96, 21-87
Sex (M/F) 26/17
AF
14
78.14 +/- 8.29, 67-89
8/6
Other
4
67.75 +/- 10.51, 61-73
4/0
The algorithm has been used in two different modalities: as an NSR detector, to discriminate NSR from any kind of cardiac arrhythmia including AF; and as an AF detector, to discriminate AF from non-AF rhythms. Results are presented as Sensitivity (Se), Specificity (Sp) and Accuracy (Ac).
Fig. 5 AF episodes distribution as a function of the ∆RR coefficient of variation and RR entropy. Results of the algorithm used as AF detection
IV. CONCLUSIONS

In this work, a novel AF real-time detection algorithm is proposed. The algorithm has been implemented on a microcontroller and integrated in an ECG acquisition system. Its clinical validation demonstrated both high sensitivity
(100%) and high specificity (97.9%) in AF detection, so the algorithm can precisely detect AF episodes from a single lead ECG. The high sensitivity of the algorithm, the relatively short data required (5 minutes), and its implementation on a microcontroller allow the realization of a home-care device for the accurate detection of AF episodes.
REFERENCES 1. Kasliwal RR, Mukesh S, Manohar G, et al: Pharmacotherapy of atrial fibrillation. Asian Cardiovasc Thorac Ann 2003; 11:364–374. 2. Murgatroyd FD, Camm AJ: Atrial arrhythmias. Lancet 1993; 341:1317–1322. 3. Kannell WB, Abbott RD, Savage DD, et al: Epidemiological features of atrial fibrillation: The Framingham Study. N Engl J Med 1982; 306:1018–1022. 4. Jahangir A, Lee V, Friedman PA, Trusty JM, Hodge DO, Kopecky SL, Packer DL, Hammill SC, Shen WK, Gersh BJ (2007). "Longterm progression and outcomes with aging in patients with lone atrial fibrillation: a 30-year follow-up study". Circulation 115 (24): 3050–6. 5. Fuster V, Rydén LE, Cannom DS, et al. (2006). "ACC/AHA/ESC 2006 Guidelines for the Management of Patients with Atrial Fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the European Society of Cardiology Committee for Practice Guidelines (Writing Committee to Revise the 2001 Guidelines for the Management of Patients With Atrial Fibrillation): developed in collaboration with the European Heart Rhythm Association and the Heart Rhythm Society". Circulation 114 (7): e257–354. 6. Salerno SM, Alguire PC, Waxman HS. Competency in interpretation of 12-lead electrocardiograms: summary and appraisal of published evidence. Ann Intern Med 2003;138:751-60. 7. Mant J, Fitzmaurice DA, Hobbs FD, et al. (2007). "Accuracy of diagnosing atrial fibrillation on electrocardiogram by primary care practitioners and interpretative diagnostic software: analysis of data from screening for atrial fibrillation in the elderly (SAFE) trial". BMJ 335 (7616): 380.
8. Pan J, Tompkins WJ (1985) A real-time QRS detection algorithm. IEEE Trans Biomed Eng 32(3):230-236
9. Tateno K, Glass L (2001) Automatic detection of atrial fibrillation using the coefficient of variation and density histograms of RR and delta RR intervals. Med Biol Eng Comput, Nov 2001
10. Bootsma B, Hoolen A, Strackee J, Meijler F (1970) Analysis of R-R intervals in patients with atrial fibrillation at rest and during exercise. Circulation 41:783-794
11. Andresen D, Brüggemann T (1998) Heart rate variability preceding onset of atrial fibrillation. J Cardiovasc Electrophysiol Suppl 9:S26-S29
12. Murgatroyd F, Xie B, Copie X, Blankoff I, Camm A, Malik M (1995) Identification of atrial fibrillation episodes in ambulatory electrocardiographic recordings: validation of a method for obtaining labeled R-R interval files. Pacing Clin Electrophysiol 18:1315-1320
13. Pinciroli F, Castelli A (1986) Pre-clinical experimentation of a quantitative synthesis of the local variability in the original R-R interval sequence in the presence of arrhythmia. Automedica 6:295-317
14. Slocum J, Sahakian A, Swiryn S (1987) Computer detection of atrial fibrillation on the surface electrocardiogram. Comput Cardiol 13:253-254
Author: Michele Triventi
Institute: Italian National Institute of Health
Street: Viale Regina Elena, 299
City: Rome
Country: Italy
Email: [email protected]
Mental and Motor Task Classification by LDA
N. GURSEL OZMEN and L. GUMUSEL
Department of Mechanical Engineering, Karadeniz Technical University, Trabzon, Turkey
Abstract— The electroencephalogram (EEG) is the easiest and a painless method to reveal the electrical activity of the brain tissue, both for understanding the functioning of the brain and for clinical diagnostics. If mental differences can be identified from EEG signals, this could allow handicapped people to communicate with their surroundings. The work presented here aims to classify two different mental tasks and motor behaviors. The features are extracted by the power spectral density method, and a further step is developed to choose six different features from the power spectral densities. The generated feature vectors are transferred to the classifier. The classification is done with the classical Linear Discriminant Analysis method. Considerable discrimination values have been reached for the mental tasks and the hemispheric changes. Based on the frequency changes at each electrode location, a control methodology for people who lack movement control can be developed.

Keywords— EEG, mental task classification, LDA.

I. INTRODUCTION
The EEG (electroencephalogram) is a representative signal containing information about the condition of the brain; from the shape of the waves one can get useful information about the state of the brain. It has been shown that people can control their brain rhythms by performing a specific mental task. EEG-based mental task studies have been selected as an approach to understand brain activity. However, a human observer cannot directly monitor these subtle details, so many scientists are studying various signal processing and analysis techniques for extracting the hidden information in EEG wave shapes [1-2]. If many mental states can be distinguished from EEG patterns, this could allow handicapped people to communicate through helping devices such as a wheelchair or a computer. This is the main idea of brain computer interfaces (BCIs), which give their user communication and control channels that do not depend on the brain's normal output channels of peripheral nerves and muscles [3]. Like any communication and control system, a BCI has an input, an output, and a translation algorithm that converts the former to the latter. The input to a BCI system should be a particular feature of brain activity and the methodology used to measure that feature. Suitable features may be frequency-domain features (such as EEG rhythms occurring in specific areas of cortex)
[4-7], or time-domain features (such as slow cortical potentials, P300 potentials, or the action potentials of single cortical neurons) [8-11]. Estimating mental states from EEG signals is very cumbersome work; the data may contain noise and irrelevant information. The EEG signal is recorded and, after preprocessing and feature extraction, it is classified into a number of predefined classes of mental tasks. In feature extraction, one established method is to detect the P300 signal, which appears in the EEG approximately 300 ms after the occurrence of an event; such responses are called event-related potentials (ERPs) [12]. In this study, spatial and spectral analysis techniques are used for feature extraction. First of all, the power spectral density of the signals from different channels is calculated for each task, and from these PSD values six different features are extracted from the alpha and beta peaks. The pioneering work on estimating mental states from EEG signals was done by Keirn and Aunon [13]. They recorded EEG from seven subjects while they performed five different mental tasks: the baseline task, mental arithmetic task, geometric figure rotation task, mental letter composing task and visual counting task. Some earlier studies on the mental activity of the brain are reviewed in [14]. The work reported in this study is part of a project planned to classify different mental tasks and motor behaviors by using novel classification methods in order to build a brain computer interface for disabled people. The tasks are arranged as in [13] with some differences: two motor tasks are added, right hand imagination and left hand imagination, so as to display the hemispheric changes. In this study, according to the related electrode positions, the discrimination of two main task pairs is given for a subject performing the baseline task and the mental arithmetic task, or the baseline task and the right hand task.

II. MATERIALS AND METHOD
A. Subjects and EEG data

All data used in this study were recorded in the Mechanical Engineering Department of Karadeniz Technical University, using a 64-channel Biosemi ActiveTwo EEG system. A 50-year-old healthy male subject wearing an electrocap performed five different mental
tasks: the baseline task, the mental arithmetic task, the imagination of right hand and left hand tasks, and finally the imagination of any letter task. The subject's eyes were closed, and he was relaxed on a comfortable chair in a dimly lit, silent room during the recordings. In this study only three of these tasks are investigated.

Baseline task: The subject is asked to close his eyes and relax as much as possible without thinking of anything. This is considered the baseline session for alpha wave production and other asymmetries.

Mental arithmetic task: The subject is given a two-digit number multiplication problem to solve without vocalizing or making any movement while performing it in the mind (e.g. 24x76). The problems are non-repeating, and the subject verifies whether he reached the solution at the end of the trial. In this phase the alpha rhythm disappears, especially at the frontal lobes.

Right hand imagination task: The subject is told to imagine several movements of his right hand, like playing a piano or drawing circles continuously, with his eyes closed. The aim is to detect the changes at the central regions of the brain.

Recording of the EEG signals was done according to the international 10-20 electrode placement system. Every task was repeated one hundred times to estimate a general form. The data chosen from 10 channels (F3, F4, C3, C4, Cz, P3, P4, Pz, O1 and O2) were used. The data were sampled at 512 samples per second for a duration of 10 seconds. Each task started with an explanation of which type of information processing needed to be performed (relax, multiplication or right hand). When ready, the subject was told to start, and when the time limit was reached, he was again told to stop. A 3-5 minute break was given between sessions. All algorithmic steps were implemented in the Matlab (The Mathworks Inc.) environment.
B. Signal Processing and Classification

For the analyses, the 4-second segment (between 2 and 6 s) of the 10-second EEG data is used. Monopolar recording was performed without setting a reference point; the data were normalized after the recordings by subtracting the Cz electrode data from each electrode's data. In order to remove noise from the original signal, the EEG signal was filtered with a lowpass digital Butterworth filter with the cutoff frequency set to 0.19 Hz. Spectral analysis research has been conducted for various signal processing techniques with epileptic data or data taken during hypnosis [15-20]. In this study, the power spectral density (PSD) is used as the first feature extractor. The PSD of the signal was calculated using the Welch periodogram. A Hamming window of 400 points in length, a 90% overlap between adjacent windowed sections, and a 1024-point Fast Fourier Transform (FFT) were used as computational parameters. The alpha and beta waves are visible in the PSD of each task and differ at each electrode. The second feature extraction step starts with selecting six parameters from the alpha wave region (8-13 Hz) and the beta wave region (13-30 Hz). A simple rule was developed to choose these features: one peak point is selected in the alpha region and two peak points in the beta region along the y-coordinate (power). The other three features are calculated by subtracting the x-coordinate (frequency) values of these peaks from each other. Finally, for each task and for each electrode, all the feature vectors are ready to be sent to a classifier. The power spectral densities for the baseline, problem solving and right hand imagination tasks are given in Figure 2 and Figure 3. It is clear that the amplitude of the alpha wave of the baseline task is maximal and decreases during high concentration or motor action. However, sharp beta peaks are also formed during the baseline task.
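As an illustration of this feature-extraction rule, a minimal Python sketch with the stated Welch parameters (400-point Hamming window, 90% overlap, 1024-point FFT) follows; splitting the beta band in two halves to obtain the two beta peaks is a simplifying assumption, not necessarily the exact selection rule used.

import numpy as np
from scipy.signal import welch

def six_features(eeg, fs=512):
    f, p = welch(eeg, fs=fs, window='hamming',
                 nperseg=400, noverlap=360, nfft=1024)
    def peak(lo, hi):
        sel = np.where((f >= lo) & (f <= hi))[0]
        i = sel[np.argmax(p[sel])]
        return f[i], p[i]              # (frequency, power) of the peak
    fa, pa = peak(8, 13)               # alpha peak
    f1, p1 = peak(13, 21)              # first beta peak (assumed split)
    f2, p2 = peak(21, 30)              # second beta peak
    # three power features and three frequency-difference features
    return np.array([pa, p1, p2, f1 - fa, f2 - fa, f2 - f1])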
Fig 1. Int 10-20 electrode placement system and the chosen electrodes to be analysed in this study
Fig 2. PSD of Baseline, Problem solving and Right Hand imagination tasks for F3 channel
In Figure 3, the spectral changes at the C3 channel are shown. When compared with channel F3, the overall
amplitudes of the baseline task and the right hand task decreased, but a slight increase for problem solving around 11-12 Hz and 23 Hz is observed. The power spectral densities of the ten chosen electrodes were calculated and all the differences noted carefully. Only two of the chosen electrodes are shown in this study to illustrate the differences between the tasks. This supports the idea that different processes occur in different brain regions.
Fig 3. PSD of Baseline, Problem solving and Right Hand imagination tasks for C3 channel
For this study, a two-class Linear Discriminant Analysis is used. Since LDA is a very popular, classical classification method, it is simple to implement and often used as the baseline method for comparison of different classification methods. LDA projects the data onto a lower-dimensional vector space such that the ratio of the between-class distance to the within-class distance is maximized, thus achieving maximum discrimination. The optimal projection can be readily computed by applying the eigendecomposition to the scatter matrices. In classification problems there is always a training data group and a test data group; in this study the data were divided equally, which means that there are 50 training and 50 test trials. The classification is evaluated for two task pairs, Task 1-Task 2 and Task 1-Task 3 (a minimal sketch follows the task list below).
• Task 1: Baseline Task
• Task 2: Problem Solving
• Task 3: Right Hand Imagination
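A minimal sketch of the pairwise evaluation with the 50/50 split described above; scikit-learn's LDA stands in for the authors' Matlab implementation, which is an assumption for illustration.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def pairwise_accuracy(X_a, X_b):
    # X_a, X_b: (100, n_features) feature vectors for the two tasks
    # of one channel; first 50 trials train, last 50 test.
    X_train = np.vstack([X_a[:50], X_b[:50]])
    y_train = np.r_[np.zeros(50), np.ones(50)]
    X_test = np.vstack([X_a[50:], X_b[50:]])
    y_test = np.r_[np.zeros(50), np.ones(50)]
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    return lda.score(X_test, y_test)   # fraction correct, cf. Table 1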
The spectral characteristics of all electrodes are analyzed and given for each pair of tasks in Table 1. Analysis of the data obtained in the classification phase evidenced a clear differentiation in identifying the brain's mental states.

Table 1 LDA classification percentages for each task pair

Channel   Task1/Task2   Task1/Task3
F3        100%          88%
F4        100%          96%
C3        100%          100%
C4        86%           98%
P3        98%           100%
P4        88%           92%
Pz        100%          100%
O1        94%           96%
O2        100%          92%

III. CONCLUSIONS

The results show that the imagination of one movement in different conditions can have different consequences on brain activity. The high discrimination percentages indicate that the chosen data were well conditioned and cleaned of noise, and that the feature extraction scheme was well suited to the procedure. The 100% accuracy at some electrodes shows that, in building a brain computer interface, the electrodes with high classification rates, such as F3, F4 or C3, Pz, can be used for the related control tasks. Since this is an off-line application, further work is needed for on-line applications. A further point of discussion concerns multi-class classification, for which LDA may give poor results and more complex classification techniques would be required.
ACKNOWLEDGMENT
This work is part of a study supported by Karadeniz Technical University Scientific Research Projects, No. 2007.112.03.3, Trabzon.
REFERENCES
1. Bashashati A, Fatourechi M (2007) A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals. Journal of Neural Engineering 4:32-57
2. Anderson CW, Sijercic Z (1996) Classification of EEG signals from four subjects during five mental tasks. Solving Engineering Problems with Neural Networks: Proc. Int. Conf. on Engineering Applications of Neural Networks (EANN'96), 1996
3. Wolpaw JR, Birbaumer N, Heetderks WJ, et al. (2000) Brain-computer interface technology: a review of the first international meeting. IEEE Trans on Rehabilitation Eng, Vol. 8, No. 2, June 2000, pp. 164-173
4. Kostov A, Polak M (2000) Parallel man-machine training in development of EEG-based cursor control. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 203-205
5. Lauer RT, Peckham PH, Kilgore KL, Heetderks WJ (2000) Applications of cortical signals to neuroprosthetic control: a critical review. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 205-208
6. Pineda JA, Allison BZ, Vankov A (2000) The effects of self-movement, observation, and imagination on mu rhythms and readiness potentials (RPs): toward a brain-computer interface (BCI). IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 219-222
7. Babiloni F, Cincotti F, Lazzarini L, et al. (2000) Linear classification of low-resolution EEG patterns produced by imagined hand movements. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 186-188
8. Birbaumer N, Kubler A, Ghanayim A, et al. (2000) The thought translation device (TTD) for completely paralyzed patients. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 190-193
9. Donchin E, Spencer KM, Wijesinghe R (2000) The mental prosthesis: assessing the speed of a P300-based brain-computer interface. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 174-179
10. Middendorf M, McMillan G, Calhoun G, Jones KS (2000) Brain-computer interfaces based on steady-state visual-evoked response. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 211-214
11. Isaacs RE, Weber DJ, Schwartz AB (2000) Work toward real-time control of a cortical neural prosthesis. IEEE Trans. Rehab. Eng., vol. 8, June 2000, pp. 196-198
12. Anderson CW, Devulapalli SV, Stolze A (1995) Determining mental state from EEG signals using parallel implementations of neural networks. Scientific Programming, IOS Press, Vol. 4, No. 3, 1995, pp. 171-183
13. Keirn ZA, Aunon JI (1990) A new mode of communication between man and his surroundings. IEEE Transactions on Biomedical Engineering, Vol. 37, No. 12, Dec. 1990
14. Ehrlichman H, Wiener MS (1980) EEG asymmetry during covert mental activity. Psychophysiology, vol. 17:228-235
15. Subasi A, Erçelebi E, Alkan A, Koklukaya E (2006) Comparison of subspace-based methods with AR parametric methods in epileptic seizure detection. Computers in Biology and Medicine, Vol. 36, Issue 2, February 2006, pp. 195-208
16. Deivanayagi S, Manivannan M, Fernandez P (2007) Spectral analysis of EEG signals during hypnosis. International Journal of Systemics, Cybernetics and Informatics, ISSN 0973-4864, 2007, pp. 75-80
17. Dehaene S, Spelke E, Stanescu R, Tsivkin S (1999) Sources of mathematical thinking: behavioural and brain-imaging evidence. Science 284, 1999, pp. 970-974
18. Solhjoo S, Nasrabadi AM, Golpayegani MRH (2005) EEG-based mental task classification in hypnotized and normal subjects. Proc. IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005, pp. 2041-2043
19. Simon TJ (1999) The foundations of numerical thinking in a brain without numbers. Trends in Cognitive Sciences, Vol. 3, Issue 10, October 1999, pp. 363-365
20. O'Boyle MW, Cunnington R, Silk TJ, Vaughan D (2005) Mathematically gifted male adolescents activate a unique brain network during mental rotation. Cognitive Brain Research, vol. 25, Issue 2, Oct 2005, pp. 583-587
Author: Nurhan GURSEL OZMEN
Institute: Karadeniz Technical University, Department of Mechanical Engineering
City: Trabzon
Country: TURKEY
Email: [email protected]
The human subthalamic nucleus - knowledge for the understanding of Parkinson's disease
T. Heida and E. Marani
University of Twente/MIRA, Biomedical Signals & Systems, Enschede, The Netherlands

Abstract— The human subthalamic nucleus differs from those of experimental animals, especially the rat. In this overview cytological, developmental and connective discrepancies are enumerated. The main theme is the lack of neuroanatomical proof for the cortico-subthalamic connection in humans. Moreover, electrophysiological results show latencies that do not fit the assumed distances.

Keywords— subthalamic nucleus, cortico-subthalamic connection, Parkinson's disease
I. INTRODUCTION
Hemiballism or hemichorea is a rare neurological disorder, but the crucial involvement of the subthalamic nucleus (STN) in its pathophysiology has been appreciated for decades. On the other hand, idiopathic Parkinson's disease is a common neurodegenerative disorder, but the key role of the STN in the pathophysiological origin of the parkinsonian state became evident only recently. Surgery, primarily in the form of bilateral, high-frequency stimulation of the STN (Benabid et al. 2000), is highly effective in parkinsonian patients who are responsive to levodopa but experience marked motor fluctuation or other complications (Kleiner-Fisman et al. 2006 and references therein); following STN stimulation, the parkinsonian motor disability was improved by more than 60% and the levodopa equivalent daily dosage was reduced by 60.5%. The involvement of the STN in the limbic and associative circuitries has been studied. Cognitive disorders like altered verbal memory and fluency, altered executive functioning, changed attention behavior, disturbed working memory, and reduced mental speed and response inhibition are reported after deep brain stimulation. The same holds for the limbic involvement of the STN: changes in personality, depression, (hypo)mania, anxiety, and hallucinations are reported. Therefore the STN not only possesses a key role in motor behavior, but is also a "potent regulator" (Temel et al. 2005) in the limbic and associative circuits.
II. CYTOLOGY
Within this nucleus, with dimensions in man of 6-7.5 x 10-13 x 3-4 mm, Ramon y Cajal demonstrated with the Golgi technique that the neurons of the subthalamic nucleus are multipolar with pigment, spindle shaped or polygonal. The neurons bring forth long dendrites with spines that branch regularly. The initial axonal segment bows regularly towards bundles of descending fibres, by which the axons are difficult to follow. The nucleus is closed, which means that the dendritic branches are restricted to the nuclear area. Winkler (1928) discerned at least two types of human subthalamic neurons: parvocellular and magnocellular neurons (Fig. 1). The magnocellular spindle-shaped neurons are on average two to three times larger in perikaryon diameter and contain lipofuscin granula.
Fig. 1 From Winkler (1928): A-c large neuron of the STN, B-b small neuron of the STN; for comparison Winkler depicted a neuron of the rostral part of the nigra (B-a).

GABAergic interneurons have been detected in the human subthalamic nucleus. The smaller neurons that were GAD-positive had an ovoid cell body and a diameter of 12.2 μm. These interneurons produced two to three primary dendrites and are 70 μm long. “These dendrites were thin (1-2 μm), poorly branched, tortuous, and spread out in all
directions. These interneurons contained lipofuscin” (Levesque and Parent 2005).
III. COMPARATIVE ANATOMY
Recalculation of the number of cells per mm³ of STN from Hardman et al. (2002) shows that the rat contains nearly 30,000 cells per mm³, while in man this number is reduced to 2,300 (Fig. 2). Following Hardman’s ordering of animal species (rat, marmoset, macaque, baboon, human), a steady reduction in the number of cells is noted. An older comparative study on primates (Füssenich 1967) shows the same tendency for the row: tupaia, lemur, Macaca rhesus, pongo, pan, gorilla and humans.
Fig. 2 The number of STN neurons per mm³. The phylogenetic row shows a steady decrease in the number of STN neurons per mm³, the human STN containing the lowest number of neurons per mm³.
IV. DEVELOPMENT OF THE SUBTHALAMIC NUCLEUS
The main characteristics of subthalamic development can be summarized as follows (Marani et al., 2008):
1. The regio subthalamica is one of the first to differentiate in the ventricular surface.
2. The regio subthalamica gives rise to the subthalamic cell cord. The regio subthalamica contributes to a posterior hypothalamic area.
3. The regio subthalamica produces in its basal ventricular matrix the corpus Luysi and the suprapeduncular complex.
4. The matrix is localized just above the mammillary recess, also called the supramammillary recess.
5. The corpus subthalamicum Luysi takes a more lateral direction, or tangential migration, than the suprapeduncular complex does.
6. The border between the subthalamic area and the prerubral area is difficult to discern (see Fig. 3).
Fig. 3 Schematic drawing of both the subthalamic area (E16-18, striped) and the subthalamic cell cord (E13-14, blocked) in the upper part of the figure, as compared to the hypothalamic area at E18 in the lower part. The hypothalamic area is subdivided into a suprachiasmatic area (striped), an infundibular (gray) and a mammillary (circles) part. Overlap of the hypothalamic and subthalamic areas is present in the posterior hypothalamic region. V’s are, from left to right: recessus opticus and recessus mammilare.
V. TOPOGRAPHY OF THE SUBTHALAMIC NUCLEUS
Two sections (Fig. 4) that are oblique coronal and parallel to the course of the optic tract (left Nissl, right myelin stain) show the sagittal topography of the human STN. In these sections the relation with the substantia nigra cannot be discerned. The human STN is sectioned sagittally, parallel to the optic tract. The STN lies in a hollow of the cerebral peduncle and is covered by a thin layer of fibers, the field H2 of Forel, while being separated from the cerebral peduncle by a small fiber layer. At the medial side the fornix and the tractus mammillothalamicus can be discerned. The zona incerta covers the STN with its head and tail. The hypothalamus lies just in between the medial part of the peduncle, the ventricle and the optic tract. In both sections the comb system of Edinger can be discerned.
Fig. 4 Coronal sections through the human STN. Left Nissl stain, right side myelin staining.
VI. CONNECTIONS OF THE SUBTHALAMIC NUCLEUS
An overview of the connections found in experimental animals is given in Fig. 5. Nevertheless, several of the connections presented are hardly proven in humans; here we restrict ourselves to the cortical and pedunculopontine (PPN) connections. The cortico-subthalamic connections have not been proven in humans by neuroanatomical techniques. The motor cortico-subthalamic connection is a moderate one in experimental animals, as demonstrated by electron microscopy in the cat and in the rat. There is strong species variability in the cortico-subthalamic connection. The motor cortico-subthalamic somatotopy as described in man with electrophysiological methods is less sharp, since arm, leg and orofacial neurons show a serious overlap of nearly 25-30% in the latero-central-dorsal segment of the nucleus. The antidromically studied human cortico-subthalamic connections seemingly do not fulfill the calibers needed to explain the velocities and latencies found. Cortical bundles presumably can reach the STN via the cortico-mesencephalic bundle or the dorsal capsule of the substantia nigra, halfway giving off preterminal degeneration to the pedunculopontine nucleus. Gebbink’s thesis (1967) on the structure and connections of the basal ganglia in man contains a series with degeneration that confirms (presumably for the first time, and beyond doubt) the human pallido-subthalamic connection. The lesions of the globus pallidus internus and externus do not allow a good differentiation between both pallidal parts; nevertheless, “in globus pallidus lesions degeneration always is seen within the corpus Luysi in Häggquist- and Nauta-stained sections alike” (Gebbink 1967). The course of the entering fibers from the capsula interna is from rostroventral (entrance), with a spread towards caudomedial. The second stream of entering fibers consists of those from the ansa and the comb system medially, which course caudolaterally to enter the subthalamic nucleus. Preterminal degeneration was found in the subthalamic nucleus. In humans the pedunculopontine nucleus (PPN) is subdivided into a pars compacta and a pars dissipata. The human Ch5 group has been identified and cholinergic projections are present in the STN. However, none of the other connections between the STN and the PPN has been established with neuroanatomical or neurocytochemical methods in humans.
Fig. 5 Schematic drawing summarizing the overview of the STN afferent and efferent connections for mammalian species.
REFERENCES

For an overview see: Marani E, Heida T, Lakke EAJF, Usunoff KG (2008) The Subthalamic Nucleus, Part I: Development, Cytology, Topography and Connections. Advances in Anatomy, Embryology and Cell Biology 198, Springer, ISBN 978-3-540-79459-2

1. Benabid AL (2003) Deep brain stimulation for Parkinson’s disease. Curr. Opin. Neurobiol. 13:696-706
2. Cajal RS (1955) Histologie du Système Nerveux de l'Homme et des Vertébrés. Tome I. Généralités, Moelle, Ganglions Rachidiens, Bulbe et Protubérance. Maloine, Paris. Segunda reimpresion 1972, Consejo Superior de Investigaciones Cientificas, Instituto Ramon y Cajal, Madrid
3. Füssenich MSU (1967) Vergleichend anatomische Studien über den Nucleus subthalamicus (Corpus Luysi) bei Primaten. Diss., C&O Vogt Institut für Hirnforschung, Neustadt / Albert-Ludwigs-Universität Freiburg
4. Gebbink TB (1967) Structure and Connections of the Basal Ganglia in Man. Van Gorcum, Assen
5. Hardman GD, Henderson JM, Finkelstein DI, Horne MK, Paxinos G, Halliday GM (2002) Comparison of the basal ganglia in rats, marmosets, macaques, baboons, and humans: volume and neuronal number for the output, internal relay and striatal modulating nuclei. J. Comp. Neurol. 445:238-255
6. Kleiner-Fisman G, Herzog J, Fisman DN, Tamma F, Lyons KE et al. (2006) Subthalamic nucleus deep brain stimulation: summary and meta-analysis of outcomes. Mov. Disord. 21:S290-S304
7. Levesque JC, Parent A (2005) GABAergic interneurons in human subthalamic nucleus. Mov. Disord. 20:574-584
8. Temel Y, Blokland A, Steinbusch HWM, Visser-Vandewalle V (2005) The functional role of the subthalamic nucleus in cognitive and limbic circuits. Prog. Neurobiol. 76:393-413
9. Winkler C (1928) Handboek der Neurologie, Bouw van het zenuwstelsel I-V. Erven F. Bohn, Haarlem

Author: T. Heida
Institute: University of Twente
Street: Drienerlolaan 5
City: Enschede
Country: The Netherlands
Email: [email protected]
Dissociated neurons from an extended rat subthalamic area - spontaneous activity and acetylcholine addition
T. Heida and E. Marani
University of Twente/MIRA, Biomedical Signals & Systems, Enschede, The Netherlands

Abstract— The pedunculopontine tegmental nucleus connects to the subthalamic nucleus by, among other neurotransmitters, acetylcholine. Dissociated cultures of the subthalamic nucleus on micro-electrode arrays were studied by addition of acetylcholine and by stimulation. Addition of acetylcholine had no effect on bursting activity; it strongly reduced spike activity for a short period, after which activity increased again, but overall activity was reduced over time by 25%. After termination of the acetylcholine addition, activity returned to normal.

Keywords— dissociated neurons, subthalamic area, acetylcholine, Parkinson’s disease
I. INTRODUCTION
Fig. 1 STN (3 × 7 × 12 mm ≈ 250 mm³) with presumed somatotopic organization (Source: A. Nambu et al., Neurosci. Res. 43, 2002).
Parkinson’s disease is characterized by the progressive loss of dopamine neurons in the substantia nigra, which results in a reduction of activity in the thalamus, partly due to an increased bursting activity of STN (subthalamic nucleus) cells, correlated with tremor in Parkinson patients [1]. The STN is important in (voluntary) motor control, and pathological changes in the nucleus cause hemiballism. Manipulation of the activities of STN neurons by adding neurotransmitter agonists or antagonists strongly affects spiking behaviour [2], which indicates the importance of knowing how the activities of STN neurons are regulated. It is presently firmly established that the STN projection neurons are glutamatergic and excitatory [3], and that they heavily innervate the substantia nigra (SN) and the internal pallidal segment (GPi), followed by the external pallidal segment (GPe) and the pedunculopontine tegmental nucleus (PPN), by widely branching axons. Some of these connections are reciprocal. The STN contains a somatotopy (Fig. 1). Deep brain stimulation (DBS) in or near the STN results in an average reduction of akinesia (42%), rigidity (49%), tremor (27%) and of axial symptoms. DBS produces nonselective stimulation of an unknown group of neuronal elements over an unknown volume of tissue. Therefore the actions of DBS are difficult to understand.
In slice preparations, STN neurons show rhythmic single-spike activity at resting membrane potentials. Depolarizing current pulses increase STN neurons’ firing frequencies linearly with the magnitude of the injected current. Several studies have reported the generation of a plateau potential, a long-lasting depolarizing potential [4, 5]. A plateau potential can induce long-lasting high-frequency discharge in the absence of synaptic inputs. STN neurons can generate a plateau potential only when the cells are hyperpolarized in advance. It is assumed that dopamine depletion, as occurs in Parkinson’s disease, results in hyperpolarization of STN neurons, so that bursting activity is more likely to be induced than in the normal situation. In addition, the voltage dependency of the plateau potential may play an important role in the generation of the oscillatory bursting activity of STN neurons, characterized by bursts of long duration repeating at low frequency [6]. However, the mechanism of this voltage dependency in the generation of a plateau potential remains unknown. Opening of K+ channels by metabolic pathways is one possibility; high-frequency inhibitory input from, for example, the globus pallidus, or pedunculopontine nucleus (PPN) stimulation of the STN, are others. Is increased bursting activity due to network activity within the STN itself, or due to the influence of the GP or other structures that project to the STN? In order to answer this question, STN cell cultures may provide a useful instrument.
Therefore, dissociated STN-area cells of the rat are cultured on a micro-electrode array (MEA). The influence of acetylcholine (PPN input) and of high-frequency stimulation (as in deep brain stimulation) on the activity of dissociated STN neurons is investigated experimentally.
II. METHODS AND MATERIALS
A. Cell culturing

STN cells (rat) were dissociated using chemical (trypsin/EDTA) and mechanical dissociation techniques, and cultured on a micro-electrode array (MEA) consisting of 64 electrodes. The surface of the array was coated with polyethylenimine (PEI, 30 ng/ml) to support attachment and growth of the neurons. During recording periods the electrode array was placed in an incubator while the temperature was kept at 37 °C.

Measurement setup: an MC1060BC pre-amplifier and an FA60s filter amplifier (both MultiChannel Systems) were used to prepare the signals for AD conversion. Amplification is 1000 times in a range from 100 Hz to 6000 Hz. A 6024E data-acquisition card (National Instruments, Austin, TX) was used to record all 60 channels at 16 kHz. Custom-made LabVIEW (National Instruments, Austin, TX) programs are used to control the data acquisition (DAQ). These programs also apply a threshold detection scheme with the objective of data reduction. Actual detection of action potentials is performed offline. During the experiments, the temperature was controlled at 36.0 °C using a TC01 temperature controller (MultiChannel Systems). Recording starts after a minimum of 20 minutes, to prevent any transient effects. Noise levels were typically 3 to 5 μV RMS, somewhat depending on the MEA and electrode. We use commercially available MEAs from MultiChannel Systems with 60 titanium nitride electrodes in a square grid. The inter-electrode distance is 100 μm, and the diameter of the electrodes is 10 μm.
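As an illustration of the kind of threshold-detection step mentioned above, a minimal sketch follows; the robust noise estimate, the threshold factor and the refractory period are our illustrative assumptions, not the settings of the authors' LabVIEW programs.

```python
import numpy as np

def detect_spikes(signal, fs=16000, thr_factor=5.0, refractory_s=0.001):
    """Threshold-based spike detection on one MEA channel.

    Assumptions (not from the paper): threshold = thr_factor * sigma,
    with sigma estimated from the median absolute deviation, and a
    1 ms refractory period to avoid double-counting a single spike.
    """
    sigma = np.median(np.abs(signal)) / 0.6745      # robust noise estimate
    threshold = thr_factor * sigma
    candidates = np.flatnonzero(np.abs(signal) > threshold)

    spikes, last = [], -np.inf
    refractory = int(refractory_s * fs)
    for idx in candidates:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.asarray(spikes)

# Example: 10 s of simulated noise sampled at 16 kHz (~4 uV RMS)
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 4e-6, size=16000 * 10)
print(detect_spikes(trace).size)   # pure noise yields few/no detections
```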
B. Addition of Acetylcholine

Acetylcholine was applied in 5 steps of 10 μM with a step interval of 1000 s, using a small pipette positioned through the cover placed over the electrode array for sterility.

C. Electrical stimulation

One electrode out of 60 was chosen for stimulation. Stimulation through these electrodes occurred at 20 Hz and 80 Hz, with the following settings: 20 Hz, 500 block pulses, starting at 300 s (ending at 325 s); 80 Hz, 2000 block pulses, starting at 300 s (ending at 325 s). Stimulation artefacts are removed from the recorded data. Electrodes with spontaneous activity of at least 1 Hz prior to stimulation were used; this translates into a minimum total number of spikes within the period prior to stimulation.

III. RESULTS

A. Addition of Acetylcholine

Under normal culturing conditions, single-spike activity with an average frequency of 5.5 Hz was recorded from the dissociated STN neurons. Bursts, i.e. sequences of at least four spikes with an inter-spike interval less than or equal to 20 ms, were also recorded, but no synchrony was found. Acetylcholine was applied in 5 steps of 10 μM with a step interval of 1000 s (Fig. 2). After application, neuronal activity was significantly decreased for about 100 s, after which spiking activity was restored. The total measurement time was 2.25 h (including the preceding normal registration). Up to 1000 s after the last acetylcholine application, a total reduction of 25% of the spike activity was measured (p = 0.01). The occurrence of bursts did not significantly change during and after the application of acetylcholine. In conclusion, two spike phenomena in STN cultures could be discerned: an acute diminishing effect of acetylcholine and an overall reduction, or late acetylcholine effect.

Fig. 2 Average spike activity over 8000 seconds with five steps of ACh addition and a washing step after 8100 seconds, indicated by the broken lines. After the removal of ACh, network activity returned to the normal level of spontaneous activity.
B. Spontaneous activity

Spontaneous activity was observed using several MEAs, for several hours in total. Fig. 3 shows the result of one of the measurements in terms of the average spike rate as a function of time over a period of 5 minutes. Electrodes that showed a minimum firing activity of at least 1 Hz were selected for further analysis, which in this case resulted in the selection of 22 electrodes. The average firing rate in Fig. 3 is 2.7 Hz. However, the average firing rate was found to vary among MEAs, possibly resulting from different network architectures. A number of waveforms from the measurement are shown in Fig. 4.
Fig. 3 Spontaneous activity in an in vitro STN network, represented by the average number of spikes as a function of time. An average spiking frequency of 2.7 Hz was detected, selecting those electrodes with at least 1 Hz baseline activity.
Bursting activity was not convincingly detected in the networks. Only one of the MEAs showed bursting activity in accordance with the adopted burst definition, with an average burst length of 4.2 spikes. This bursting activity did not change significantly between the period before and the period of 1000 seconds after the addition of acetylcholine.
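The burst criterion used in these experiments (at least four spikes with inter-spike intervals of at most 20 ms) is straightforward to implement on detected spike times; a minimal sketch with a hypothetical spike train follows.

```python
import numpy as np

def detect_bursts(spike_times_s, max_isi=0.020, min_spikes=4):
    """Group spikes into bursts: runs of >= min_spikes spikes whose
    consecutive inter-spike intervals are all <= max_isi (the paper's
    definition: at least four spikes with ISI <= 20 ms)."""
    if len(spike_times_s) == 0:
        return []
    bursts, run = [], [spike_times_s[0]]
    for prev, cur in zip(spike_times_s, spike_times_s[1:]):
        if cur - prev <= max_isi:
            run.append(cur)
        else:
            if len(run) >= min_spikes:
                bursts.append(run)
            run = [cur]
    if len(run) >= min_spikes:
        bursts.append(run)
    return bursts

# Hypothetical spike train: a 5-spike burst followed by sparse spikes
times = np.array([0.100, 0.112, 0.125, 0.140, 0.155, 0.900, 1.500])
for b in detect_bursts(times):
    print(f"burst of {len(b)} spikes, duration {b[-1] - b[0]:.3f} s")
```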
IV. DISCUSSION
The connection that is mimicked by the addition of acetylcholine is part of the PPN-STN connection. This part of the connection is cholinergic, but other cell groups are also present (glutamatergic, GABAergic and dopaminergic). Destruction of the PPN results in hyperactivity of the STN [7]. PPN lesioning was shown to induce akinesia in primates [8, 9]. It is now well established that cholinergic agonists brought into the rat STN contribute to a higher excitation of the STN neurons [3]. However, muscarinic agonists in slices diminished the amplitude of both EPSPs and IPSPs in the STN [10, 11]. The reduction of IPSPs is larger, which leads to a net excitation of STN neurons [11, 12]. Contradictory results are found in the literature as to the effect of acetylcholine on subthalamic neurons. This could well be due to the still existing connections: taking away one connection by a lesion, or adding neurotransmitters or their agonists, does not show the pure effect of connections, neurotransmitters or receptors. Too many parameters are involved to understand the effect of these experiments. Culturing subthalamic neurons at least restricts the number of parameters, but adds others, and it is rather unexpected that addition of acetylcholine to such cultures shows both a short-term and a long-term effect. One should note that addition of 10 μM acetylcholine to rat cortex neurons increases their activity (unpublished results). If hyperactivity of the STN is induced by reducing the PPN neurotransmitters, among them acetylcholine, and motor hypoactivity is the consequence, then this MEA culturing experiment explains, through the long-term effect, how such hyperactivity can result from this type of neurotransmitter, neglecting all the effects of the other PPN neurotransmitters. The results show no effect on bursting activity, and therefore the long-term effect of acetylcholine on cultured subthalamic cells may be related to the synchrony or pacemaker effect, stressing the role of the PPN.

Fig. 4 Typical waveforms of recorded STN spontaneous spiking activity; this graph consists of 500 randomly selected waveforms from one of the electrodes.

REFERENCES

1. Levy R, Ashby P, Hutchison WD, Lang AE, Lozano MA, Dostrovsky JO (2002) Brain 125:1196-1209
2. Wilson CL, Puntis M, Lacey MG (2004) Neurosci 123:187-200
3. Feger J, Hassani OK, Mouroux M (1997) Adv. Neurol. 74:31-43
4. Otsuka T, Murakami F, Song W-J (2001) J. Neurophysiol. 86:1816-1825
5. Otsuka T, Abe T, Tsukagawa T, Song W-J (2004) J. Neurophysiol. 92:255-264
6. Beurrier C, Congar P, Bioulac B, Hammond C (1999) J. Neurosci. 19:599-609
7. Breit S, Lessmann L, Benazzouz A, Schulz JB (2005) Eur. J. Neurosci. 22:2283-2294
8. Matsumura M, Kojima J (2001) Stereotact. Funct. Neurosurg. 77:108-115
9. Matsumura M, Stereotact. Funct. Neurosurg. 77:91-97
10. Flores G, Hernandez S, Rosales MG, Sierra A, Martines-Fong D, Flores Hernandez J (1996) Neurosci. Lett. 203:203-206
11. Shen KZ, Johnson SW (2000) J. Physiol. 525:331-341
12. Rosales MG, Flores G, Hernandez S, Martinez-Aceves J (1994) Brain Res. 645:335-337

Author: T. Heida
Institute: University of Twente
Street: Drienerlolaan 5
City: Enschede
Country: The Netherlands
Email: [email protected]
Nigro-subthalamic and nigro-trigeminal projections in the rat
E. Marani1, N.E. Lazarov2, T. Heida1, K.G. Usunoff1,2†
1 University of Twente/MIRA, Biomedical Signals & Systems, Enschede, The Netherlands
2 Medical University of Sofia, Department of Anatomy and Histology, Sofia, Bulgaria
Abstract— The connections of the substantia nigra in the rat have been extensively studied. Nevertheless, the connections towards the subthalamic and sensory trigeminal nuclei have been investigated only incidentally. The present research showed that both the subthalamic and the trigeminal nucleus receive a bilateral projection from the substantia nigra, and a point-to-point relation could be discerned for both nuclei, indicating that a topical relation between the substantia nigra and these nuclei exists.

Keywords— nigro-subthalamic connection, nigro-trigeminal connection, substantia nigra, subthalamic nucleus
I. INTRODUCTION
The substantia nigra (SN) is a heterogeneous collection of neurochemically and functionally interrelated neurons in the midbrain that is strongly involved in the pathology of Parkinson’s disease and other neurodegenerative disorders. The first population consists of dopaminergic (DAergic) neurons, condensed mainly in the SN pars compacta (SNc, A9 group). The SN pars reticulata (SNr) also contains a few DAergic cells. The second population is composed of GABAergic neurons, very similar, even identical, to the pallidal neurons. These cells are located almost exclusively in the SNr. The DAergic neurons in the SNc project profusely to the neostriatum (caudate nucleus and putamen), and less extensively to the pallidum and to the subthalamic nucleus. The subthalamic nucleus (STN) projection neurons are glutamatergic and excitatory, and heavily innervate, by widely branching axons, the substantia nigra (SN) and the internal pallidal segment (GPi), followed by the external pallidal segment (GPe) and the pedunculopontine tegmental nucleus (PPN). For many years it has been known that the SN is involved in oral movements and in oro-facial dyskinesias. The trigeminal nuclei receive afferent inputs from an unexpectedly large number of brain nuclei. The SN in rats sends projection fibers to the reticular formation around the trigeminal motor nucleus (Mo5), but the axons of this non-DAergic pathway do not enter the territory of the Mo5.
Rather, the SN influences this nucleus via a multisynaptic pathway. Copray et al. (1990a) have provided evidence that afferents on mesencephalic trigeminal nucleus (Me5) neurons originate from cells in DAergic areas in the SN.
II. MATERIALS AND METHODS
Female Wistar Albino Glaxo rats weighing 200-240 g were anesthetized and injected unilaterally with biotinylated dextran amine (BDA) into the SN (survival time 8-13 days). The rats were then reanesthetized and perfused transcardially. Serial frozen sections were cut at a thickness of 40 μm. A commercial avidin-biotin-HRP complex (ABC) kit was used to visualize the BDA, and the sections were counterstained with cresyl violet. A representative overview of the injections into the several parts of the SN is given in Fig. 1. For descriptive reasons, Me5 is subdivided into a caudal and a rostral part. The caudal part of Me5 (Me5c) consists of large pseudounipolar neurons that are seen in clusters located in the triangle between the locus coeruleus (LC) and the medial parabrachial nucleus (MPB). The rostral part of Me5 (Me5r) is mostly composed of pseudounipolar neurons that are arranged in a thin shaft bordering the periaqueductal gray (PAG) laterally (see Lazarov 2000, 2002, for extensive descriptions).
III. RESULTS
A. Course and termination of nigrosubthalamic connections

The largest number of nigrosubthalamic axons was observed in case 5778. The injection site of the tracer involved the medio-lateral SNl, SNr and SNc (Fig. 1). The labeled axons radiate from the injection site. The axons destined for the brain stem, and some nigrothalamic axons, course dorsally towards the tegmentum, while the ascending axons to the forebrain initially take a medial course towards the prerubral area. Most of them run immediately dorsal to the SN, and
some axons traverse the SNc lateromedially. A few axons curve ventromedially and travel along the border between the SNr and the cerebral peduncle. Reaching the caudal pole of the STN, the labeled axons enter the nucleus through its lateral wedge, and from the medially running bundle, dorsal to the STN. Labeled axons also enter the STN through its ventral border, but their course is largely obscured by the numerous retrogradely labeled strionigral axons arranged in the bundles of Edinger’s comb system. Within the STN, especially in the lateral half of the nucleus, along with passing fibers oriented mediolaterally, there is a large amount of terminal labeling. In the medial part of the STN there are mainly discrete bursts of terminal labeling. The SN axons cross the midline at several places. The most substantial component of crossed axons runs in the
mesencephalic tegmentum ventral to the periaqueductal gray, and some fibers in the rostral mesencephalon apparently enter the STN through its dorsal border. Rostral to the SN, the efferent SN axons cross the midline in the adhesio interthalamica (crossed nigrothalamic axons), and the last component of crossing axons runs in the supraoptic decussation, immediately above the optic tract. Some of these axons take a dorsomedial course towards the contralateral STN. In the contralateral STN a lower number of labeled axons is seen. However, they form very distinct mediolaterally extended patches that can be followed in serial sections. Most of these discrete fields of terminal labeling are in the central and lateral portions of the STN, but medially some terminal “whorls” are also seen.

Fig. 1 Representative injections: a large injection involving the lateral SNr and SNc and the SNl (5778); one restricted to the medial SNc (5784); and selective injections in the SNr (5781) and in the SNl (5771).

The specific injections into the different parts of the SN showed the following.
Injections into the lateral SN showed that the ilSTN contains scant fibers entering the nucleus from dorsolateral, and terminations are noticed around the cells of the STN. Labeled fibers and terminations are absent in the contralateral STN. These fibers entered the nucleus from its laterodorsal side. Injections into the reticulate SN involve not only the SNr but also the SNc. Labeling was found in the ilSTN and clSTN. Injections into the compact SN showed heavy terminal labeling, and labeled fibers are found in the ilSTN. The clSTN contained sparse terminal labeling with a larger amount of labeled fibers. The positivity was restricted to the caudal and lower middle part of the clSTN. An overview of all the results is shown in Fig. 2.
Fig. 2 Overview of the ipsilateral and contralateral connections between SN and STN.
B. Nigro-trigeminal results

A broad field of anterogradely labeled fibers emerged from the injection sites and coursed dorsomedially and dorsally. In the midbrain reticular formation, heavy terminal labeling was seen in the deep mesencephalic nucleus, and especially in the pedunculopontine and laterodorsal tegmental nuclei. Moderate labeling was present in the PAG and in the dorsal raphe nucleus. Numerous labeled axons reached the deep and intermediate gray layers of the superior colliculus, and coursed toward the peripheral zones of the inferior colliculus. More caudally, heavy labeling was observed in the parabrachial nuclei and a moderate number of labeled
axons entered the locus coeruleus. A moderate number of axons crossed the midline around the decussation of the superior cerebellar peduncle and terminated in the contralateral pedunculopontine nucleus and the superior colliculus. Specifically, the injection involving the lateral SNc and parts of the adjacent SNr and SNl resulted in anterograde labeling throughout Me5 both ipsilaterally and contralaterally, with a strong ipsilateral predominance. Surrounding the injection site many intensely labeled neurons were present. Terminal labeling was observed among the perikarya of pseudounipolar neurons in the ipsilateral Me5c. At this sectional plane, virtually all pseudounipolar neurons were at least partially surrounded by varicose fibers, which appeared to contact their surface. At more rostral levels, the intensity of anterograde labeling in the Me5r decreased bilaterally. On the contralateral side, moderate terminal labeling was present around, but not on, pseudounipolar neuronal somata, both in the caudal and rostral Me5. Selective injections were centered in the medial SNc and the lateral SNc, and the labeled axons were followed exclusively to the ipsilateral Me5c, while the contralateral Me5c and the Me5r on both sides displayed few labeled fibers. An extensive network of terminal labeling was present in the ipsilateral Me5c, decreasing slightly from medial to lateral. Light but distinct anterograde labeling was also found in the parabrachial nuclei and within the superior cerebellar peduncle. Most of the terminal labeling surrounded the pseudounipolar mesencephalic trigeminal neurons, but again some perikarya clearly displayed terminal and passing boutons covering their surface. Throughout the neuropil of Me5c, a meshwork of fine labeled fibers with varicosities was also present after injection into the lateral SNc. A few pseudounipolar neurons with boutons en passant and boutons terminaux clearly visible on their surface were observed. The terminal labeling present in this section extended medially to the Me5c, to include the area of smaller cells in the locus coeruleus. In case C5771 the minute injection focus selectively infiltrated the SNl (see Fig. 1). In the Me5 area only a few varicose fibers and their terminals reached the ipsilateral Me5c, while the rostral portion of this nucleus showed a slightly larger number of labeled fibers. In this case, no anterograde terminal labeling was observed in the contralateral Me5.
Fig. 3 Three injections with their ipsilateral and contralateral projections on the schematically subdivided parts of the trigeminal mesencephalic nucleus.
IV. DISCUSSION
The present study provides data for the existence of a substantial nigrosubthalamic connection in the rat, which also emits a moderate component to the contralateral STN. Thus, two significant nuclei of the basal ganglia – the SN and the STN – are ipsilaterally strongly interconnected, and this STN-SN-STN loop is involved in the complicated basal ganglia circuitry, since both nuclei display a broad variety of afferent and efferent connections. It is generally accepted that the SN is involved in oral movements and oro-facial dyskinesias. Until now, it has been believed that the SN influences the trigeminal motoneurons via a multisynaptic pathway. The results of our study provide strong evidence that the SN also directly innervates the proprioceptive trigeminal neurons and thus both the motor and sensory neurons controlling the jaw muscles involved in mastication. Since pseudounipolar mesencephalic trigeminal nucleus neurons send axons to the pontine and spinal trigeminal nuclei, it appears that the entire trigeminal nuclear complex is profoundly influenced by the SN. Therefore, it can be inferred that inputs from the SN possibly modify, modulate or interact with outputs from all these nuclei to control masticatory behavior.

REFERENCES

1. Copray JCVM, Liem RSB, Ter-Horst GJ, van Willigen JD (1990a) Dopaminergic afferents to the mesencephalic trigeminal nucleus of the rat: a light and electron microscope immunocytochemistry study. Brain Res 514:343-348
2. Copray JCVM, Ter-Horst GJ, Liem RSB, van Willigen JD (1990b) Neurotransmitters and neuropeptides within the mesencephalic trigeminal nucleus of the rat: an immunohistochemical analysis. Neuroscience 37:399-411
3. Lazarov NE (2000) The mesencephalic trigeminal nucleus in the cat. Adv Anat Embryol Cell Biol 153:1-103
4. Lazarov NE (2002) Comparative analysis of the chemical neuroanatomy of the mammalian trigeminal ganglion and mesencephalic trigeminal nucleus. Prog Neurobiol 66:19-59

Author: E. Marani
Institute: University of Twente
Street: Drienerlolaan 5
City: Enschede
Country: The Netherlands
Email: [email protected]
Statistical Estimate on Indices Associated to Atherosclerosis Risk
C. M. Ipate, A. Machedon and M. Morega
University POLITEHNICA of Bucharest, Faculty of Electrical Engineering, Bucharest, Romania

Abstract— A statistical analysis based on the estimate of the linear correlation between two significant health indices is performed here, in support of a noninvasive method suited for circulatory disease prognosis and diagnosis. The pulse wave velocity, evaluated by signal acquisition and processing, and several personal data, easily collected from the subject, enter the statistical analysis. The study was performed on a sample of 52 randomly chosen persons and was extended to sub-groups identified by specific characteristics. The results confirm an already identified health trend: harmful habits and lifestyle, reflected by the increased incidence of obesity in young generations, will contribute in the near future to the expansion of circulatory disease.

Keywords— biomedical signals, statistical analysis, circulatory assessment, noninvasive investigation.
I. INTRODUCTION
Circulatory disease is an important concern in present-day preventive medicine because it is strongly promoted by harmful cultural and lifestyle habits. It has disastrous health and social consequences for individuals and their entourage, while forcing the health care system into significant financial effort. Civilized populations suffer more and more, even from early ages, from the Arteriosclerotic Vascular Disease (ASVD) syndrome, more frequently called atherosclerosis, caused by the accumulation and integration into the arterial wall of atheroma (cells and residue with high lipid content). These originate in fats received from food and start to accumulate from the first years of life, but the magnitude of the phenomenon varies widely from one person to another, depending on several particularities of lifestyle, which set a personal mark on quantifiable physiologic and somatic parameters. Noninvasive medical investigations (electrocardiographic and pulse signals), correlated with the relevance of several personal characteristics (age, sex, body size, health status) and habits (selection of food, smoking, exercising), could give a proper forecast of the development and perspective of circulatory disease for an individual. Our research work originated in a statistical study aimed at finding and sustaining quantitative relations among personal indices associated with health. The electrical activity of the heart is quantitatively represented by electrocardiographic (ECG) recordings; the wave of
electrical depolarization and repolarization of cardiac excitable cells propagates through the electroconductive heart tissue and triggers the mechanical activity of the myocardium. The ventricular systole (the R-S-T sequence in the ECG recording) corresponds to the contraction of the ventricular muscle, as Fig. 1 illustrates. Myocyte contraction pushes a volume of blood through the arteries; from the left ventricle, the blood spreads into the whole body, through the aorta and subsequent arteries, in the systemic circulation. The R peak is considered to represent the beginning of blood being pumped out of the heart. The pumping action determines a pressure wave on the arterial walls, which is propagated to the periphery of the circulatory system. The blood pressure increases during the systolic period and decreases during the diastolic period (the period of ventricular muscle relaxation). In this way every systole determines a propagating pulse of increased pressure on the arterial walls. The Pulse Transit Time (PTT) is the time it takes the pulse pressure waveform to propagate through a length of the arterial tree; it is directly conditioned by the traveling speed of the blood pulse. PTT is affected by various factors, such as heart contraction strength, blood pressure, elasticity and cross-sectional dimension of the arteries, pathological conditions, drug administration, etc. On the other hand, PTT is proportional to the length of the traveled pathway, and its value is considered a physical measure of the state of the circulatory system. It has the advantage of being identifiable during a relatively simple and noninvasive investigation, which is easily accepted by the patient and suitable for clinical purposes [2, 3].
Fig. 1. Correlation between symbolic ECG signal and mechanical activity of the heart [1]
Besides proper heart activity or a good arterial state, the length of the arterial circuit is a primary factor that affects the PTT value, when estimated on a certain anatomical branch of the circulatory tree. The arterial length is thus seen as a somatic feature, irrelevant as a quantitative index related to the health and functionality of the circulatory system. A better quantification and more realistic information may come from the Pulse Wave Velocity (PWV). PWV is computed as the ratio of the distance traveled by the pulse wave between two well-determined positions on the arterial tree to the time interval between the two registered biosignals marked by the pulse at each location. As a general law of physiology, the PWV value is inversely related to the arterial wall elasticity. It can be assumed that PWV increases with the cholesterol concentration in blood (atherosclerosis) and possibly with arterial stiffness (caused by medial calcification and loss of elasticity), usually related to the subject's age and senile arterial degeneration. This evolution of the aortic injury can be followed by hypertension and an increased risk of heart failure. Aortic PWV is frequently considered a marker of cardiovascular risk independent of blood pressure level, but in conjunction with heart rate. There are opinions expressed in the scientific literature that PWV generally increases with age [4]. This simplistic statement was invalidated by a previous study conducted by our team [5] and is further questioned here. The investigation algorithm followed by our study may give, at the same time, information on heart rhythm variability and average pulse.

II. MATERIAL AND METHODS
The study presented here follows an acquisition and analysis protocol applied to each of the 52 subjects who enrolled voluntarily in the program. The total number of participants exceeds the minimal size of a relevant sample, determined through statistical tests (single mean and single proportion) [6, 7]. The subjects (23 females and 29 males), with ages between 10 and 80, are considered to have a normal health status (in particular, free of any diagnosed circulatory disease). More than half of the participants were students, randomly chosen to be included in the study. Before the acquisition of the biosignals, during the accommodation period, participants were asked to provide some personal data (age, body weight and height, current medication if any, emotional state, smoking and exercise habits); their arterial blood pressure and pulse were also measured at the beginning of the test with a digital monitor. The measurements were performed in a laboratory, in a quiet environment, while the subjects had
a relaxed sitting position. Further measurements were performed only after pulse and blood pressure had stabilized at each subject's regular values. As previously described, the PWV estimate needs a PTT evaluation; ECG signal and peripheral pulse photoplethysmography (PPG) recordings are necessary. We used a portable MP30/35 BIOPAC acquisition system, set to simultaneously record ECG Lead I and PPG signals, as shown by the image in Fig. 2. The two recording locations are: (1) the heart – where the pulse is associated with the R peak on the ECG signal, and (2) an endpoint of the arterial circuit, i.e. one fingertip – where the pulse is effectively captured through a photoplethysmography transducer [1, 3]. The arterial circuit length (AL) is defined here as the distance from the heart to the endpoint of the arterial circuit. AL is an individual somatic index that was measured and recorded with the other previously mentioned personal data. During the RR period, one maximal value occurs on the pulse waveform. PTT is determined by measuring the time interval between the peak of the QRS complex and the following peak on the pulse waveform, as the signal recording in Fig. 3 illustrates. The acquisition is set to a high sampling rate, at least 2 kHz, for the best identification and localization of peaks. The recordings were made for intervals of approx. 100 s, but the analysis was performed on a 30 s interval of stationarity, usually chosen from the last part of the acquisition. The instantaneous heart rate (BPM) and its mean value were automatically computed from the ECG waveform. Several processing techniques were applied for the automatic detection of the instants when the peaks occur. The signals were processed in order to enhance the peaks relative to the other amplitudes of the signals, and the “find all peaks” command of the BSL Pro software [1] was applied to both the ECG and the PPG-derived signals.
Fig. 2. ECG and PPG portable recording system (BIOPAC MP30/35)
Fig. 3. ECG and PPG recordings (Biopac Student Lab Pro software window) [1]
The identified instant values were transferred into an arithmetic worksheet and the PTT time series was determined as a succession of delays measured between the pairs of peaks. PTT over the 30-second sequence was averaged, and PWV for each subject was computed and recorded in the database. A manual identification of peaks and measurement of delay intervals, combined with the morphologic analysis provided by BSL Pro, was also applied as a supplementary check to several samples randomly selected from the recordings. In that way, the automatic processing was satisfactorily validated. The PWV time series was easily computed afterwards, by dividing AL by each PTT value.
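A minimal sketch of this PTT/PWV computation is given below: each ECG R peak is paired with the next PPG pulse peak, and AL is divided by each transit time. The peak-detection settings are illustrative assumptions, not the BSL Pro parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def pwv_per_beat(ecg, ppg, fs, al_m):
    """PTT: delay from each ECG R peak to the next PPG pulse peak;
    PWV = AL / PTT for every beat (distance over transit time)."""
    r_peaks, _ = find_peaks(ecg, height=0.6 * np.max(ecg),
                            distance=int(0.4 * fs))   # assumed: >= 0.4 s between beats
    p_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))

    ptt = []
    for r in r_peaks:
        later = p_peaks[p_peaks > r]
        if later.size:
            ptt.append((later[0] - r) / fs)           # transit time in seconds
    return al_m / np.asarray(ptt)                     # m/s, one value per beat

# Hypothetical usage on a 30 s stationary segment sampled at 2 kHz:
# pwv = pwv_per_beat(ecg_segment, ppg_segment, fs=2000, al_m=0.95)
# print(pwv.mean(), pwv.std())
```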
III. RESULTS

An arithmetic worksheet is first filled with each subject's personal data: sex, age, body mass index (BMI = body weight / height²) and the determined value of the PWV as described earlier (see Table 1 with two randomly selected samples).
Table 1. Arithmetic worksheet with personal and health related data

sex    age    BMI [kg/m²]    PWV [m/s]
f      24     16.8           2.314
m      38     35.1           2.983
…      …      …              …

The groups that were individually analyzed are: (1) all 52 investigated cases, (2) the young people category (31 persons with ages between 20 and 25 years), (3) the 23 female subjects, and (4) the 29 male subjects. Data were first analyzed for compliance with the normal distribution; the Kolmogorov-Smirnov test shows indices higher than 0.9 [6] for each sample, which reveals very good adequacy of all defined groups to the Gaussian distribution. For each sample we computed statistical parameters such as the lowest / highest / median value, the arithmetic mean, standard deviation, relative standard deviation, and standard error of the mean; 95% confidence intervals for the mean and median were also estimated. Several statistical tests were afterwards performed for the assessment of the correlation of the PWV index with other physiologic characteristics. Our analysis was first oriented towards finding whether any correlation exists between PWV and the age of the subjects [7, 8]; the linear correlation coefficient r was computed for the first sample of subjects (52 persons, differing in age and sex) and the results show, paradoxically, that PWV and age are negatively correlated (r = −0.412) (Fig. 4). Our previous study [5] found no such correlation at all, on a smaller sample (37 subjects), but with ages more uniformly spread over the same range (10 to 80 years old). The linear correlation evaluation was then applied to another relationship between personal indices; PWV and BMI were chosen for this test, applied to the same group of 52 subjects. We observed that higher PWV values are related to higher BMIs; Fig. 5 displays the distribution of the data. The linear correlation coefficient, with the value r = 0.293, was this time found to indicate that PWV and BMI are positively correlated for the whole sample of 52 subjects. A more interesting result is revealed by the analysis of the second group, the young persons (31 subjects), with ages in the range 20-25 years. The positive linear correlation between PWV and BMI is stronger this time (r = 0.605), as Fig. 6 shows. On the other hand, a similar analysis performed on the two groups identified by sex does not reveal any significant linear correlation; the correlation coefficient is r = 0.016 for the female group (23 female subjects) and r = 0.267 for the group of 29 male subjects.
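The linear correlation coefficients reported above can be reproduced with a standard Pearson r computed over the worksheet columns; the sketch below uses invented placeholder values, not the study data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two indices."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Placeholder worksheet columns (invented values, not the study data)
age = [24, 38, 55, 61, 20, 47]
bmi = [16.8, 35.1, 27.4, 29.0, 21.5, 25.2]
pwv = [2.314, 2.983, 2.650, 2.710, 2.400, 2.590]

print(f"r(PWV, age) = {pearson_r(pwv, age):+.3f}")
print(f"r(PWV, BMI) = {pearson_r(pwv, bmi):+.3f}")
```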
Fig. 4. Correlation analysis between PWV and AGE, for the population under investigation (52 persons, with ages between 11 and 78 years)
Fig. 5. Correlation analysis between PWV and BMI, for the population under investigation (52 persons)
Fig. 6. Correlation analysis between PWV and BMI, for the young people (31 persons, with ages between 20 and 25 years)

IV. CONCLUSIONS

The study presented here investigates statistical correlations among health-related indices, as a quantitative evaluation of arterial wall integrity. The Pulse Wave Velocity through the arterial tree is the measured physiological quantity. PWV was determined by processing two simultaneously acquired physiologic signals: the ECG (Lead I of the Einthoven triangle) and the pulse captured with a photoplethysmography sensor at the left-hand ring finger. A statistical estimate was conducted on a sample randomly composed of 52 persons, both male and female, with ages in the range 10-80 years. The correlation between PWV and age was found irrelevant in our research, but the positive correlation between PWV and the Body Mass Index is confirmed and supported by our results on the whole group of persons under test, and especially on the sub-group of young people (students) with ages between 20 and 25 years.

Despite the conservative opinion, supported also by studies presented in the scientific literature, that the pulse wave velocity is directly related to age, we found a more complex relation. The PWV becomes higher as a natural mechanical effect of the rise in wall stiffness, generally known as atherosclerosis, which is not necessarily age-conditioned. Atherogenesis, the main cause of the PWV rise, involves accumulation of fatty substances in the arterial wall. Ageing is not the dominant cause of plaque accumulation in the arterial wall; this process depends more on the modern unhealthy lifestyle (bad habits in nutrition and sedentarism), which especially affects young generations. The further development of this research will consider the extension of the analysis methodology, both by enlarging the database and by exploring other adequate mutual relations among health indices.

ACKNOWLEDGMENT

Research was performed in the Lab. for Electrical Engineering in Medicine, Univ. POLITEHNICA of Bucharest, partially supported by research contract 41-061/2007 and the BIOINGTEH platform.

REFERENCES

1. Biopac Student Lab (BSL) Pro - Reference Manual Version 3.7.2 for Biopac Student Lab PRO Software and MP35/30 Hardware, © BIOPAC Systems, Inc., 2005
2. Payne R. A., Symeonides C. N., Webb D. J., Maxwell S. R. J. (2006) "Pulse transit time measured from the ECG: an unreliable marker of beat-to-beat blood pressure", Journal of Applied Physiology, 100(1):136-141
3. Tang C. H., Chan G. S., Middleton P. M., Savkin A. V., Lovell N. H. (2009) "Spectral analysis of heart period and pulse transit time derived from electrocardiogram and photoplethysmogram in sepsis patients", Conf Proc IEEE Eng Med Biol Soc, 1:1781-1784
4. Byeong C. C. et al. (2004) "Evaluation of Arterial Compliance on Pulse Transit Time using Photoplethysmography", The 30th Annual Conf. of the IEEE Ind. Electron. Soc., Busan, Korea
5. Machedon A., Ipate M. C., Morega M. (2009) "Noninvasive Evaluation of the Arterial Wall Elasticity", 2nd Intl. Conf. on eHealth and Bioengineering - EHB 2009, Iasi-Constanta, Romania
6. MedCalc V 10.4.8, © 1993-2009 MedCalc Software bvba
7. Wayne W. D. (1996) Biostatistics: A Foundation for Analysis in the Health Sciences, 6th Edition, John Wiley
8. Dragomirescu L. (1998) Biostatistica pentru incepatori [Biostatistics for Beginners], Ed. Constelatii, Bucuresti
Author: Corina Mihaela IPATE
Institute: University POLITEHNICA of Bucharest
Street: 313 Splaiul Independentei, zip code 060042
City: Bucharest
Country: Romania
Email: [email protected]
Study of Some EEG Signal Processing Methods for Detection of Epileptic Activity
R. Matei1, D. Matei2
1 Technical University of Iasi, Faculty of Electronics, Telecommunications and Information Technology, Iasi, Romania
2 University of Medicine and Pharmacy “Gr.T.Popa”, Faculty of Medical Bioengineering, Iasi, Romania
Abstract— In this paper we analyze the utility of some signal processing methods in detecting specific patterns present in the EEG signal, which may give information about the onset of brain disorders, in particular signs of epileptic activity. We approach the matched filter method to detect spike-and-wave patterns, and we also analyze the EEG signal using independent component analysis and the frequency spectrum.

Keywords— EEG signal processing, matched filter, independent component analysis, detection of epileptic activity

I. INTRODUCTION
The electro-encephalographic (EEG) signal obtained from scalp surface electrodes results as the sum of a large number of potentials originating from neurons located in various regions of the brain. The EEG has been intensely studied due to the valuable information it provides about the normal brain, and especially due to its utility in the diagnosis of some brain disorders. In analyzing the EEG signal, various signal processing methods can be applied, both in the time and frequency domains [1]-[3]. The EEG signal can be conveniently explored using spectral analysis. The classification of brain waves into four basic groups is now commonly acknowledged: Beta (13–30 Hz) is the brain wave associated with active thinking and attention. The Alpha rhythm (8–13 Hz) is induced by a relaxed state and lack of attention or focus. Theta waves (4–7 Hz) indicate emotional stress or deep meditation. Delta waves (0.5–4 Hz) appear mainly in deep sleep and are often mixed up with muscular artifact signals. Although the EEG signal is always a superposition of brain waves, different states of consciousness make one of these types of waves dominant, i.e. its frequency range is more pronounced in the spectrum. EEG waveforms are generally classified according to their frequency, amplitude, and shape. From a morphological point of view, there are various signal patterns which may appear either in the normal EEG or in various brain disorders. Thus we can identify waveforms including some typical event-type patterns like the K complex, V waves, lambda waves, positive occipital sharp transients of sleep (POSTS), spindles, the Mu rhythm, spikes, sharp waves, spike-and-wave rhythms, sleep spindles, etc. [1], [2].
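As a brief illustration of how the dominance of one rhythm appears in the spectrum, the sketch below estimates the relative power of the four classical bands from a Welch periodogram; the sampling rate and segment length are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 7),
         "alpha": (8, 13), "beta": (13, 30)}

def relative_band_powers(eeg, fs=256):
    """Relative power of the classical EEG bands via Welch's PSD.
    fs and the 2 s segment length are illustrative choices."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    sel = (freqs >= 0.5) & (freqs <= 30)
    total = np.trapz(psd[sel], freqs[sel])
    return {name: np.trapz(psd[(freqs >= lo) & (freqs <= hi)],
                           freqs[(freqs >= lo) & (freqs <= hi)]) / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic "relaxed state" signal: a 10 Hz (alpha) tone in noise
fs = 256
t = np.arange(0, 10, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(relative_band_powers(eeg, fs))   # the alpha fraction dominates
```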
EEG is a valuable tool for assessing disorders of cerebral function, in particular epileptic disorders. However, the clinical utility of surface EEG in epileptic disorders is rather limited, since it is unlikely for an epileptic seizure to occur during the course of a normal recording. Therefore clinicians look for pre-seizure signs and specific changes appearing in the EEG rhythms, which have characteristic shapes in the waveform. These are generally referred to as epileptiform activity and consist of transient spikes or sharp waves which are clearly distinguishable from the background electrical activity; they are also called interictal patterns. These are usually of negative polarity, as shown in Fig. 2(a), and are often followed by a slow wave. These abnormal discharges have been shown to have a strong connection with epileptic activity. In spite of its limitations, the EEG is in many cases useful in predicting the seizure type. A typical pattern is a sharp spike associated with a subsequent wave discharge (the so-called spike-and-wave complex), as shown in Fig. 1(a). Clinical studies have shown that spikes tend to occur more often after repeated seizures and are therefore an indication of the degree of brain damage due to epileptic seizures [4]. The EEG signal continuously changes its characteristics in relation to mental tasks, various external stimuli and physiological processes. The analysis of an EEG record involves the identification of several types of waves and rhythms [1], [2].

II. ANALYSIS OF SOME SIGNAL PROCESSING METHODS FOR DETECTING EPILEPTIC BRAIN ACTIVITY
A. Matched filters

An efficient method to detect specific patterns (events) in a signal is to design a so-called matched filter, i.e. a filter (usually a FIR filter) whose impulse response resembles, or matches, a typical version of the signal pattern or event. Once this filter is designed, if a signal in which approximately similar versions of the pattern occur is applied to the filter, the output response will contain peaks or maximum values at the time instants where the events occur. Since these filters basically perform a correlation between the input signal and the signal template, they are also known as correlation filters [2].
For instance we will apply this method to detect spikeand-wave complexes in an EEG signal. From a representative signal we extract a sequence x(t ) which is a typical version of the pattern of interest, in our case the spike-and-wave complex which will be the reference signal. Let X (ω ) be the Fourier transform of x(t ) . Let us now consider the linear time-invariant matched filter with impulse response h(t ) and transfer function H (ω ) = F [h(t )] . The filter response is given by: y (t ) = x(t ) ∗ h(t ) or Y (ω ) = X (ω ) H (ω ) . It can be proven that the output signal energy reaches a maximum when H (ω ) = kX ∗ (ω )exp(− jωt0 ) (1) where k is a scaling factor and t0 is a time instant or delay; this implies that the filter impulse response must have the form [2]: h(t ) = kx(t0 − t ) (2) so it is a reversed version of the reference signal, scaled and delayed. The above form of the impulse response shows that the matched filter performs indeed a correlation. In Fig.2 we present some simulation results on EEG signals with epileptic activity from several patients. The reference signal (a typical spike-and-wave complex) shown in Fig.1(a) was isolated from the EEG sequence displayed in Fig.2(a). The response of the matched filter to this signal is displayed in Fig.2(b). The shape of the spike-and-wave complex in Fig.1(a) is a general one (typical version), but for different patients and different conditions its shape (although always visually identifiable) tends to vary substantially. For instance, in Fig.2(c), (e) and (g) other EEG sequences exhibiting such patterns are given. We notice that the complex shape varies, especially the wave following the spike, which may have more or fewer oscillations, larger or smaller in amplitude. Nevertheless, as seen in Fig.2(d), (f) and (h), the matched filter response is quite neat, featuring maxima and minima; the peaks (although not very sharp), indicate quite exactly the location in time of the spike-andwave complexes. Therefore if the reference signal is well chosen, the matched filter method is sufficiently robust to variations in the shape of the pattern of interest. Since the matched FIR filter results of large order, it would be desirable to find an equivalent IIR filter with a similar impulse response. Using Prony’s method for the design of recursive filters in the time domain, we find the very efficient IIR filter of order 4, with the transfer function: H ( z) =
Fig. 1 (a) Reference spike-and-wave complex used as impulse response of the FIR matched filter; (b) normalized magnitude of the frequency response of the equivalent 4th-order IIR filter

H(z) = (0.16179 z⁴ − 0.20037 z³ + 0.10732 z² − 0.02088 z + 0.00479) / (z⁴ − 2.2746 z³ + 1.9599 z² − 0.7 z + 0.03473)   (3)

The magnitude of the IIR filter frequency response on the range ω ∈ [0, π] is plotted in Fig. 1(b) and corresponds to a relatively sharp low-pass filter. In fact, the smoothing effect of this low-pass filter can be noticed in the filter responses of Fig. 2(b), (d), (f) and (h).
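For illustration, the fourth-order recursive filter of Eq. (3) can be applied directly with SciPy; the coefficients below are copied from Eq. (3), while the input sequence is only a stand-in for a real EEG recording.

```python
import numpy as np
from scipy.signal import lfilter, freqz

b = [0.16179, -0.20037, 0.10732, -0.02088, 0.00479]  # numerator of Eq. (3)
a = [1.0, -2.2746, 1.9599, -0.7, 0.03473]            # denominator of Eq. (3)

# Normalized magnitude of the frequency response on [0, pi], as in Fig. 1(b)
w, H = freqz(b, a, worN=512)
H_norm = np.abs(H) / np.max(np.abs(H))

eeg = np.random.randn(1024)   # placeholder for a real EEG sequence
y = lfilter(b, a, eeg)        # response of the equivalent IIR matched filter
```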
B. Study Using Independent Component Analysis

Essentially, the independent component analysis (ICA) method applies to the problem of separating N statistically independent (uncorrelated) signal sources which have been mixed linearly into N output signals (channels). Generally no further knowledge about their statistical distribution and dynamics is available [5]-[10]. This general problem is also called blind separation [11]. The ICA method assumes that an observation of N linear mixtures x1, …, xN of N independent components is available:

xj = aj1 s1 + aj2 s2 + … + ajN sN,  j = 1, …, N   (4)
Fig. 2 (a), (c), (e), (g) – EEG signal sequences from three different patients featuring spike-and-wave complexes; (b), (d), (f), (h) – responses of the matched filter to the three EEG signals
Each of the above variables has its own variation in time. In the above equation we assume that both the independent variables and their linear combinations have zero mean. The system of linear equations (4) can be expressed in matrix form:

x = As   (5)

where we denote the vectors x = [x1 x2 … xN−1 xN], s = [s1 s2 … sN−1 sN] and the matrix A = [aij]. Equation (5) is the mathematical expression of the ICA method. The aim is to determine both the matrix A and the independent components s, knowing only the observed variables x. The method relies on the assumption that the components si are statistically independent and have a non-Gaussian distribution [10], [11]. The independent components of a signal can be calculated in Matlab using an algorithm for fast ICA computation, for instance with EEGLAB [12]. As an example, let us consider the EEG sequence (of length 1024 samples) from Fig. 3(a), which contains spike-and-wave patterns. We obtained the independent components plotted in Fig. 3(b)-(e).
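The paper performs this step in Matlab with EEGLAB; as a rough cross-check of the model x = As, the sketch below applies scikit-learn's FastICA to simulated non-Gaussian sources (all data here are synthetic placeholders, not the paper's recordings).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
s = rng.laplace(size=(1024, 4))      # independent, non-Gaussian sources
A = rng.normal(size=(4, 4))          # unknown mixing matrix
x = s @ A.T                          # observed mixtures: x = A s

ica = FastICA(n_components=4, random_state=0)
s_est = ica.fit_transform(x)         # estimated independent components
A_est = ica.mixing_                  # estimated mixing matrix
```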
Fig. 3 (a) EEG signal sequence; (b)-(e) four independent components
The coefficients of correlation between each component and the primary signal are 0.7754, −0.03758, −0.2151 and 0.5924; thus the first and fourth components are more correlated with the observed EEG signal. From Fig. 3(b)-(e) it can be seen that the spike-and-wave patterns also occur in the independent components, but are more or less distorted as compared to the observed signal. The correlation between any two independent components is zero, as can be checked immediately. This is also partially true for the spectra: calculating the spectrum of the observed signal and of each component, the spectra of the independent components are also almost uncorrelated. For instance, the largest correlation coefficient between the real parts of the spectra in this case is 0.001, whereas the imaginary parts are totally uncorrelated. In Fig. 4(a) an EEG signal presenting a Mu rhythm is shown; this rhythm has a specific arcade shape and the negative spikes are very sharp. For a clear Mu rhythm, as in the first part of the sequence, the fundamental component and the second harmonic are predominant, as shown by the spectrum in Fig. 4(b). In Fig. 4(c) and (d) two independent components are shown; the second one (d) closely resembles the original signal.
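The correlation figures quoted above can be reproduced, in principle, with a one-line use of np.corrcoef; the helper below is hypothetical, not code from the paper.

```python
import numpy as np

def component_correlations(eeg, comps):
    """Correlation coefficient between the observed signal `eeg` and each
    column of `comps` (the independent components)."""
    return np.array([np.corrcoef(eeg, comps[:, k])[0, 1]
                     for k in range(comps.shape[1])])
```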
Fig. 4 (a) EEG signal with Mu rhythm; (b) frequency spectrum; (c), (d) two independent components
III. CONCLUSIONS

We analyzed several EEG signals presenting signs of epileptic activity, such as spike-and-wave complexes, using the matched filter approach as well as independent component analysis and the frequency spectrum. The matched filter method gives good results and is robust to signal variation. The independent component analysis method may also have a role as a pre-processing step in a more complex algorithm for detecting epileptic events, with a possible application in predicting epileptic seizures.
REFERENCES
1. Sanei S, Chambers JA (2007) EEG Signal Processing. Wiley-Interscience
2. Rangayyan RM (2002) Biomedical Signal Analysis – A Case-Study Approach. Wiley-Interscience
3. Hjorth B (1970) EEG analysis based on time domain properties. Electroencephalography and Clinical Neurophysiology 29:306-310
4. Wendling F (2005) Neurocomputational models in the study of epileptic phenomena. Journal of Clinical Neurophysiology 22(5):285-287
5. Hyvarinen A (1999) Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans Neural Networks 10(3):626-634
6. Makeig S, Bell AJ, Jung TP, Sejnowski TJ (1996) Independent component analysis of electroencephalographic data. Advances in Neural Information Processing Systems 8:145-151
7. Makeig S, Enghoff S, Jung TP, Sejnowski TJ (2000) Independent components of event-related electroencephalographic data. Cognitive Neuroscience Society Abstracts, p 93
8. Hyvarinen A, Karhunen J, Oja E (2001) Independent Component Analysis. Wiley-Interscience
9. Roberts S, Everson R (2001) Independent Component Analysis: Principles and Practice. Cambridge University Press
10. Peterson DA, Anderson CW (1999) EEG-based cognitive task classification with ICA and neural networks. Lecture Notes in Computer Science, vol 1607. Springer, Berlin/Heidelberg
11. Chen Y, Zhang Q, Kinouki Y (2006) Blind source separation by ICA for EEG multiple sources localization. IFMBE Proceedings, Vol 14, World Congress on Medical Physics and Biomedical Engineering 2006. Springer, Berlin/Heidelberg
12. Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics. Journal of Neuroscience Methods 134:9-21
Continuous Wavelet Transformation of Pattern Electroretinogram (PERG) – A Tool Improving the Test Accuracy
K. Penkala
West Pomeranian University of Technology, Faculty of Electrical Engineering, Department of Systems, Signals and Electronics Engineering, Szczecin, Poland
Abstract— Purpose: To determine parameters of the Pattern Electroretinogram (PERG) waveforms in the Continuous Wavelet Transform (CWT) coefficients domain that are important for more precise clinical assessment of the recordings. Material and methods: 102 normal PERG recordings were studied in two age groups (≤50 years, >50 years). CWT analysis was performed using the MatLab software. Various wavelets were used in the experiments. Mother wavelet selection was optimized using a criterion based on minimal scatter of the results for normal PERG waveforms. Results: Comparison with traditional, time-domain based analysis showed that the PERG parameters were determined with better accuracy in the CWT domain. Normal values for the test showed much less scatter. It was shown in several clinical examples (glaucomatous patients) that the sensitivity of the test was improved. A specialized program module was developed for one of the commercially available electrophysiology systems, and a license for this software was sold to the equipment manufacturer (Roland Consult GmbH, Germany). Conclusions: The CWT-based method of PERG signal analysis is useful in clinical differentiation between normal and abnormal waveforms, particularly in objective early detection of glaucoma.
Keywords— Pattern Electroretinogram (PERG), signal analysis, Continuous Wavelet Transformation (CWT), glaucoma.
I. INTRODUCTION
In this paper the results of an analysis of the Pattern Electroretinogram (PERG) are presented. This signal is very important in electrophysiological diagnostics in ophthalmology [1,2,3]. The PERG is a bioelectrical response of the retina evoked by a specific optical stimulus, the "pattern": an alternating black-and-white checkerboard, commonly presented on a CRT monitor. The overall luminance of the stimulus remains constant, which results in the elimination of stray-light effects. From the technical point of view, this type of stimulation represents a local contrast phase modulation with a defined spatial and temporal frequency, expressed in cycles/deg and reversals per second (rev/s, rps), respectively. Both the spatial and temporal characteristics influence the shape of the PERG waveforms. Because of the features of this type of stimulus, different from a simple flash of light (which evokes the Flash Electroretinogram – FERG), the PERG reflects the activity of neural structures involved in image information processing in the retina. Figure 1 shows the origin of the most important bioelectrical signals, with their major components (particular "waves"), in the structures of the retina and visual pathway [4]. Abbreviations not explained earlier in the text are as follows: EOG – Electro-Oculogram, OPs – Oscillatory Potentials (extracted from the FERG), mfERG (multifocal ERG) and mfVEP (multifocal VEP).
Fig. 1 A simplified diagram of neural structures of the human retina showing the origin of the most important bioelectrical signals arising in the visual system; explanations in the text [4]

The Pattern Electroretinogram originates in the ganglion cells as well as neighboring inner retinal structures and is recorded from the human retina with a corneal contact electrode. Particular waves of the signal reflect the electrical activity of neural structures involved in visual information processing and are used in the assessment of their function. Three characteristic PERG waves are called N35, P50 and N95. The letters "P" and "N" stand for positive and negative components respectively, whereas the numbers
correspond to the approximate time (in milliseconds) when particular components appear – the peak implicit time. The PERG signal is useful in diagnostics of the functional state of the macular region of the retina and optic nerve, in detecting, confirming or excluding particular diseases. The PERG test, as several other electrophysiological examinations in ophthalmology, is standardized by the International Society for Clinical Electrophysiology of Vision (ISCEV) [1]. According to this standard, clinical evaluation of the PERG recordings is based on measurement of amplitudes as well as implicit times of particular waves and comparing them with the
Fig. 2 An example of the PERG signals, normal (Control) and abnormal in the von Hippel-Lindau disease (VHL) [4]
normal values of the electro-ophthalmology lab (normal values should be obtained individually in each lab). Unfortunately, the PERG is not an easy-to-run electrophysiological test, and even in experienced labs normal values usually show a large scatter of results. Signal variability cannot be neglected either. Standard measurement procedures for waveform parameters also lead to significant errors: in many cases it is difficult to localize the waveform peaks precisely. Figure 2 shows an example of an unclear N95 wave in a PERG recording, a typical situation in numerous patients [4]. Thus, the standard method of analyzing the PERG parameters in the time domain is inaccurate. This disadvantage affects the reliability of the PERG test in clinical practice, mainly in early, objective detection of glaucoma: the test is not sensitive enough [2]. In order to improve the diagnostic efficiency and value of the PERG test, the author aimed at applying different methods of signal analysis. The first goal was to demonstrate the possibility of finding features reliable for more precisely distinguishing between normal and abnormal PERG recordings in the Continuous Wavelet Transform (CWT) coefficients domain. The WT method (Continuous – CWT, as well as Discrete – DWT) has already shown its efficiency
in the analysis, compression and de-noising of some biosignals of complex morphology. As far as bioelectrical signals of the visual system are concerned, the wavelet methods have rarely been used up to the present [4,5,6,7].
II. MATERIALS AND METHODS
The recordings collected in the Department and Clinic of Ophthalmology of the Pomeranian Medical University, 102 normal PERG waveforms, were studied in two age groups: ≤50 years and >50 years. All these signals were recorded in accordance with the guidelines of ISCEV [1] in the Laboratory of Electrophysiology of the Retina and Visual Pathway and Static Perimetry. The recordings were obtained with the systems UTAS-E 2000 (LKC Inc., USA) and RetiPort/RetiScan (Roland Consult, Germany). The Continuous Wavelet Transformation (CWT) from the MatLab package was used for the time-frequency analysis of the PERG signal. Various wavelets were used in the experiments. Mother wavelet selection was optimized using a criterion based on minimal scatter of the results (assessed with their standard deviation – SD) for normal PERG waveforms. For preliminary evaluation of the developed analysis tool in clinical practice, 9 recordings were chosen of patients with clinically confirmed glaucoma but with normal PERG results obtained in traditional, time-domain measurements.
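As a sketch of the time-frequency step (the study itself used the MatLab Wavelet Toolbox with coif1, lem4 and Morlet mother wavelets), the snippet below computes a CWT of one PERG trace with PyWavelets. Only the Morlet wavelet is used here, since PyWavelets' CWT does not offer coif1 or lem4, and the scale range is an assumption.

```python
import numpy as np
import pywt

def perg_cwt(perg, fs, wavelet="morl", scales=np.arange(1, 64)):
    """CWT coefficients of a PERG trace sampled at `fs` Hz."""
    coeffs, freqs = pywt.cwt(perg, scales, wavelet, sampling_period=1.0 / fs)
    # e.g. locate the (scale, time) cell with maximum absolute coefficient
    peak = np.unravel_index(np.argmax(np.abs(coeffs)), coeffs.shape)
    return coeffs, freqs, peak
```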
As an effect of the performed CWT analysis of the normal PERG recordings and the wavelet optimization, the following mother wavelets were chosen: coif1 for the implicit times TN35 and TP50, lem4 for TN95, and Morlet for all amplitude parameters. This set of wavelets, as an analysis tool, was called COMPLEX. In both age groups the accuracy of determining the values of the parameters using this tool was improved: the scatter of results was smaller than in traditional measurements. The only exception was the implicit time of the P50 wave (TP50); in this case the SD obtained with the traditional method was slightly smaller. The P50 peak is rather sharp and clear, and its maximum is easy to localize. Results for the younger group (W1: ≤50 years) are shown in Figure 3. Preliminary evaluation of the developed analysis tool in clinical practice, performed on 9 waveforms of patients with clinically confirmed glaucoma but with normal PERG results obtained in traditional, time-domain measurements (i.e. recordings assessed as "normal" using the conventional method), showed increased sensitivity of the CWT-COMPLEX test. Sample results for a 63-year-old woman
suffering from glaucoma are shown in Figure 4. Two implicit times, TN35 and TN95, were longer than the normal values (exceeding the COMPLEX range of normal values), so the recording could be classified as "abnormal". The same recording was assessed as "normal" using the traditional technique. Similar improvement of the PERG test sensitivity was obtained in the other analyzed patients.
Peak time (ms)       TN35     TP50     TN95
lab-n1-"102"         28.34    51.35    102.98
COMPLEX              30.66    51.5     101.18

Amplitude (μV)       AP50     AN95     Ampl. ratio AN95/AP50
lab-n1-"102"         10.01    13.63    1.39
COMPLEX              11.93    12.83    1.07

Peak time SD (ms)    SDN35    SDP50    SDN95
lab-n1-"102"         2.3      2.49     5.69
COMPLEX              1.4      3.18     3.97

Amplitude SD (μV)    SDAP50   SDAN95   Ampl. ratio SD (SDAN95/AP50)
lab-n1-"102"         3.2      3.93     0.2
COMPLEX              1.8      1.9      0.04
Fig. 3 Results of CWT analysis (COMPLEX) compared with the traditional measurement of PERG parameters (lab-n1-“102”) in the “young” age group (W1: ≤50 years); explanations in the text
Fig. 4 Results of the CWT-COMPLEX analysis of a sample PERG signal (PERG waveform shown at the top) of a glaucoma patient: G.M., female, age: 63 years, left eye; explanations in the text
A specialized CWT-PERG software module was developed in C++ for one of the commercially available electrophysiology systems. The CWT-COMPLEX set of wavelets and the measurement technique for amplitude and time parameters were implemented in this program. The user interface allows PERG recordings to be analyzed in a simple way. A license for this software, dedicated to the RetiPort systems, was sold to the equipment manufacturer (Roland Consult GmbH, Germany).
IV. CONCLUSIONS
The CWT-based method of PERG signal analysis is efficient for more precise measurement of the parameters of particular waves, improving the test sensitivity, i.e. the separation between normal and abnormal waveforms. In this way, increased accuracy of clinical PERG assessment and better diagnosis may be achieved.
REFERENCES
1. Holder GE et al (2007) ISCEV standard for clinical Pattern Electroretinography – 2007 update. Doc Ophthalmol 114:111-116
2. Hood DC et al (2005) The Pattern Electroretinogram in glaucoma patients with confirmed visual field deficits. Investig Ophthalmol & Vis Sci 46(7):2411-2418
3. Palacz O, Lubiński W, Penkala K (2003) Elektrofizjologiczna diagnostyka kliniczna układu wzrokowego (in Polish). OFTAL, Warszawa
4. Penkala K (2005) Analysis of bioelectrical signals of the human retina (PERG) and visual cortex (PVEP) evoked by pattern stimuli. Bulletin of the Polish Academy of Science. Technical Sciences 53(3):223-229
5. Penkala K, Rogala T, Brykalski A (2003) Analysis of the Pattern Electroretinogram signal using the Wavelet Transform. Proc 48th Internat. Wissenschaftliches Kolloquium, Ilmenau, 145-146
6. Penkala K et al (2004) Wavelet approach to the PERG analysis and processing. Proc 2nd Annual Meeting of the British Society for Clinical Electrophysiology of Vision (BriSCEV), Liverpool, 103
7. Penkala K, Rogala T, Brykalski A, Lubiński W (2004) Wavelet Transform in analysis of the pattern responses of the human retina (Pattern Electroretinogram – PERG). Proc 4th European Symposium on Biomedical Engineering, University of Patras, Greece, at http://bme.med.upatras.gr/patras2004

Corresponding author: Krzysztof Penkala
Institute: Department of Systems, Signals and Electronics Engineering, Faculty of Electrical Engineering, West Pomeranian University of Technology
Street: Sikorskiego 37
City: 70-313 Szczecin
Country: Poland
Email: [email protected]
An Interactive Tool for Customizing Clinical Transcranial Magnetic Stimulation (TMS) Experiments
A. Faro1, D. Giordano1, I. Kavasidis1, C. Pino1, C. Spampinato1, M.G. Cantone2, G. Lanza2 and M. Pennisi2
1 Department of Informatics and Telecommunication Engineering, University of Catania, Viale Andrea Doria 6, 95125 Catania, Italy
2 Department of Neuroscience, University of Catania, Via S. Sofia 86, Policlinico Universitario, 95125 Catania, Italy

Abstract— Transcranial magnetic stimulation (TMS) is a very useful technique for neurophysiological and neuropsychological investigations. In this paper we propose a user-friendly and fully customizable system that allows experimental control and data recording for all the currently used TMS paradigms (single- and paired-pulse TMS). This system consists of two parts: 1) a user interface that allows medical doctors to customize the settings of their experiments and to include post-processing and statistical tools for analyzing the acquired patient data, and 2) a hardware interface that communicates with the existing TMS equipment. New algorithms for post-processing and new user settings can easily be added without interfering with the hardware communication. The proposed system was used to conduct a clinical experiment estimating patterns of cortical excitability in patients with geriatric depression and subcortical ischemic vascular disease, achieving very interesting results from the medical point of view.

Keywords— TMS, Motor Cortex, Geriatric Depression, Subcortical Ischemic Vascular Disease
I. INTRODUCTION

Transcranial magnetic stimulation (TMS) is a non-invasive diagnostic and therapeutic method without painful effects [1]. Nowadays, this non-invasive technique is an important tool in many research fields, from psychiatry to neurophysiology, from neuropsychology to neurosurgery, and it is especially used in the clinical investigation of the central motor pathways. TMS produces a modification of the neuronal activity of the primary motor cortex, stimulated by the variable magnetic field generated by a coil positioned on the scalp [2]. This stimulation produces motor evoked potentials (MEP) in the hand muscles, due to the magnetic field inducing an electrical flow in the brain that travels towards the limbs and determines involuntary movements of the legs and of the hands
[3]. During clinical experiments one impulse (single-pulse TMS) or two impulses (paired-pulse TMS) can be used in order to derive interesting measures of motor cortex excitability. Currently, single-pulse TMS is used to extract MEP amplitudes and motor thresholds [4] (defined as the minimum stimulus amplitude necessary to obtain a motor response greater than 50 μV). Motor cortical excitability is studied with paired-pulse TMS, which uses a subthreshold stimulus (whose amplitude is set lower than the patient's motor threshold, called the conditioning pulse) and a suprathreshold stimulus (called the test pulse). Since TMS is used extensively in different research fields, and several different factors are crucial for each use, a data acquisition and processing system is required in order to create more standardized conditions and to reduce the high intra- and inter-rater variability in the execution of clinical experiments (typically due to coil positioning and to the interval between pulse administrations), such as the one proposed in [5]. Although this system represents the first approach to automating the setting of TMS experiments, it was designed only as a data acquisition system for data post-processing, thus not allowing users any customization. In order to improve on the functionalities of this system, in this paper we propose a flexible TMS data acquisition and processing system that allows medical doctors an easy and customizable interaction with the TMS hardware, thus making the data recording and analysis phase faster, more efficient and more accurate. The remainder of the paper is organized as follows: in the next section Transcranial Magnetic Stimulation concepts are introduced. In Section III the proposed system is described. In the last two sections a test case, carried out on a set of patients with geriatric depression and subcortical ischemic vascular disease, conclusions and future work are reported.
II. TRANSCRANIAL MAGNETIC STIMULATION

TMS may be used to excite the movements of all muscles, even though in medical practice it is used to evaluate the responses coming from the limbs (i.e., feet, hands and legs). Usually, a double magnetic coil is used to elicit the muscle movements, because it permits stimulating a single spot of the motor cortex with high precision. According to the direction of the current in the coil, the left or the right limb will be excited. The muscle movements are involuntary and are caused by a magnetic field whose intensity depends on the subject under stimulation. Before starting any stimulation session it is important to compute the motor threshold (MT) of the patient, defined as the power level at which a response can be detected 50% of the time. The motor threshold is computed using single-pulse TMS, whereas studies of cortical excitability are performed with paired-pulse TMS, which is based on the administration of two pulses (a conditioning one and a test one) with a certain delay (ISI) in ms. Fig. 1 shows a typical muscular response during a paired-pulse TMS session with ISI = 1. In this signal we can identify:
• The latency, which is the time interval between the instant when the stimulation is administered to the subject and the instant when the muscle starts to move. Latency tends to increase with age and height.
• The amplitude of the muscular response, which is the peak-to-peak excursion, expressed in volts, of the instrument that measures the muscle response.
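A minimal sketch of extracting these two quantities from a single recorded MEP trace is given below; the 3-sigma onset rule is an illustrative assumption, since the paper does not state its exact latency criterion.

```python
import numpy as np

def mep_features(emg, fs, stim_sample, baseline_sd):
    """Latency (ms) and peak-to-peak amplitude of one MEP trace.
    emg: recorded samples; fs: sampling rate in Hz; stim_sample: index of
    stimulus delivery; baseline_sd: SD of the pre-stimulus baseline."""
    post = emg[stim_sample:]
    # latency: first post-stimulus sample exceeding 3x the baseline SD
    above = np.nonzero(np.abs(post) > 3.0 * baseline_sd)[0]
    latency_ms = 1000.0 * above[0] / fs if above.size else None
    amplitude = post.max() - post.min()   # peak-to-peak excursion
    return latency_ms, amplitude
```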
Fig. 1: Typical muscular response to an ISI = 1 paired-pulse stimulus

Finally, the cortical excitability is estimated by a graph that describes the obtained amplitudes of the muscular responses at varying ISIs with respect to the amplitude obtained at ISI = 0. An example of such a graph is shown in Fig. 2. Currently, in clinical practice, the paired pulses are generated using two single-pulse stimulators (MagStim 200), which are then combined by a programmable double-pulse
stimulator (BiStim).
Fig. 2: Graph that describes the cortical excitability
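A sketch of how such a curve can be computed from raw trials, following the protocol detailed in the next section (ten repetitions per ISI averaged, peak-to-peak amplitude normalized by the ISI = 0 value); the dictionary layout is an assumed data structure, not the system's actual format.

```python
import numpy as np

def excitability_curve(responses):
    """`responses` maps each ISI in ms to a (repetitions x samples) array;
    the entry for ISI 0 serves as the normalization reference."""
    amps = {}
    for isi, trials in responses.items():
        mean_trace = trials.mean(axis=0)                 # cumulative response
        amps[isi] = mean_trace.max() - mean_trace.min()  # peak-to-peak
    ref = amps.pop(0)                                    # amplitude at ISI = 0
    return {isi: amp / ref for isi, amp in amps.items()}
```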
III. THE PROPOSED SYSTEM

According to [6] and [7], the protocol for studying cortical excitability using conditioning–test pulses (paired-pulse TMS) can be summarized in the following points:
• Test and conditioning pulses at different ISIs are randomly intermixed and administered at random intervals of 4–55 s;
• Ten responses (repetitions) per ISI are collected and averaged in order to obtain a cumulative response for each ISI; the peak-to-peak amplitude of the cumulative signal is then measured;
• This peak-to-peak amplitude is divided by the one obtained with ISI = 0, and the resulting value is plotted as a point of the cortical excitability curve.
Without flexible software that interacts with the hardware instrumentation, the experiment parameters must be set by manually operating the BiStim, which allows medical doctors to set only one ISI at a time, whereas the repetitions have to be triggered by the operator by clicking a button on the coil as many times as the number of repetitions. The MagStim is in fact provided with a tool that allows automatic setting, but this must be done using a proprietary script language, which is a very difficult and tedious task for medical doctors. In order to overcome such limitations, we propose a customizable acquisition and processing system that lies between the medical doctors and the TMS hardware, whose architecture is shown in Fig. 3. It is designed to permit full customization of all currently used TMS paradigms (single-pulse and paired-pulse TMS) and consists of two main parts:
1. A hardware interface that communicates with the TMS equipment. In more detail, it produces an execution scheme according to the user settings (see Fig. 4) and sends it to a data acquisition unit optimized for real-time waveform processing (CED 1401). This unit is provided with 4 analog inputs featuring high-speed waveform capture at rates up to 500 kHz with 16 bits of resolution, 2 digital inputs and 2 digital outputs. It receives the user commands and synchronizes the two MagStim 200 stimulators, connected to its digital outputs, for the creation of the single pulses, which are further combined into a paired pulse by the MagStim BiStim and administered to the patient's cortex through an eight-shaped coil. After the pulse administration the muscular response is registered using single-use, low-noise, high-conductivity electrodes. During the experiment an EMG device (Medelec Synergy) monitors the patient's relaxation level in real time. According to the grabbed relaxation level, the system is able to automatically correct the motor responses. Finally, the motor responses are amplified by a small-signal amplifier (CED 1902), with a gain ranging from 100 to 1,000,000 V/V and a maximum voltage input range of ±10 V.
Fig. 4: Interface for the Experiments Settings and the Real-Time Muscular Response Display
Fig. 3: Architecture of the proposed system
2. A user interface that allows medical doctors to set the parameters of the experiments, such as the ISI values, how many times the paired pulses must be administered, whether sequentially or randomly, and the time between two consecutive paired pulses. Fig. 4 shows a screenshot of the user interface.
The user interface also aims at yielding real-time displays of clinically meaningful information by processing the motor responses. These data are then collected for further off-line post-processing in order to provide quantitative measurements of the cortical excitability. Moreover, the proposed system contains a set of frequency filters (e.g. Butterworth/Bessel/Chebyshev filters) to remove noise from the acquired signal, and it is also provided with a set of utilities for performing statistical analyses such as the t-test, ANOVA, the χ² distribution and box plots. Since there is a high variability in the signal amplitude values, depending on the patient's relaxation level and the correct position of the coil, the system provides the option to manually or automatically discard or correct (denoise) every single signal obtained for each ISI (see Fig. 5). The automatic correction is achieved by estimating whether the acquired value is in the range μ ± σ, with μ and σ being, respectively, the mean and the standard deviation of the previously acquired values. Our system was developed using LabVIEW 8.0.1 and the 1401.llb library (available at http://www.ced.co.uk/), which enables the interaction between LabVIEW and the data acquisition unit (CED 1401).
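The μ ± σ screening rule just described can be stated in a few lines; this is a hedged sketch of the idea, with a simple mean-substitution stand-in for the system's actual correction step.

```python
import numpy as np

def screen_response(amplitude, history):
    """Accept an acquired amplitude if it lies within mu +/- sigma of the
    previously acquired values; otherwise return a corrected value."""
    mu, sigma = np.mean(history), np.std(history)
    if mu - sigma <= amplitude <= mu + sigma:
        return amplitude, "accepted"
    return mu, "corrected"   # illustrative correction: replace by the mean
```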
IV. SYSTEM TESTING

The proposed system was used in a clinical experiment for estimating patterns of cortical excitability in patients with
geriatric depression and subcortical ischemic vascular disease. Previous studies using paired-pulse TMS, e.g. [8], have demonstrated a hyperexcitability of the motor cortex and a dysfunction of subcortical inhibitory circuits in patients affected by subcortical ischemic vascular dementia, whereas no previous studies have investigated cortical excitability in patients with vascular depression. For this experiment, the amplification gain of the amplifier was set to 1000 V/V. The magnetic field intensities were set to 70% of the MT for the conditioning pulse and to 120% of the MT for the test pulse. 14 patients were enrolled, divided as follows: seven subcortical ischemic vascular disease patients with depressive symptoms (VD) and seven subcortical ischemic vascular disease patients without depression (VND). The motor thresholds were recorded using single-pulse TMS, whereas the cortical excitability was investigated with paired-pulse TMS at interstimulus intervals (ISIs) of 1, 3, 5, 7 and 10 ms. These parameters were automatically set by the proposed system. During the data acquisition we acquired 817 muscular responses in total; 84 were automatically discarded, 23 were manually deleted by the operator and 62 were automatically corrected, in order to obtain the 700 muscular responses needed for the experiment (14 patients × 5 ISIs × 10 repetitions). Fig. 6 shows the different behavior of the two sets of patients. In detail, the figure shows that the pattern of cortical excitability in VD is lower than that of the VND patients, i.e. VD patients show a hypoexcitability of the motor cortex with respect to the VND patients.

Fig. 5: Post-processing interface

Fig. 6: Cortical excitability comparison between VD and VND patients
V. CONCLUSIONS AND FUTURE WORK

In this work we have developed a customizable system for helping medical doctors perform clinical experiments using Transcranial Magnetic Stimulation. The main feature of the proposed system is the possibility to customize
TMS experiments without the need to directly interact with the existing hardware or to use difficult and tedious script languages such as the one provided with the BiStim module. Future work will concern adding new functionalities to the system, such as the possibility to compare multiple sets of patients, and the improvement of the automatic signal correction and denoising according to the available ATLAS standard. Finally, tools such as the ones proposed in [9] and [10], where a fuzzy model and a neural network, respectively, have been proposed to classify patients affected by neurological disease, will be integrated to help medical doctors in the final diagnosis.
REFERENCES
1. Barker AT, Jalinous R, Freeston IL (1985) Non-invasive magnetic stimulation of human motor cortex. Lancet 1:1106–1107
2. Pepin JL, Bogacz D, Pasqua V, Delwaide PJ (1999) Motor cortex inhibition is not impaired in patients with Alzheimer's disease: evidence from paired transcranial magnetic stimulation. J Neurol Sci 170:119–123
3. Ruohonen J, Ollikainen M, Nikouline V, Virtanen J, Ilmoniemi RJ (2000) Coil design for real and sham transcranial magnetic stimulation. IEEE Trans Biomed Eng 47:145–148
4. Rossini PM, Barker AT, Berardelli A et al (1994) Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: basic principles and procedures for routine clinical application. Report of an IFCN committee. Electroencephalogr Clin Neurophysiol 91:79–92
5. Kaelin-Lang A, Cohen LG (2000) Enhancing the quality of studies using transcranial magnetic and electrical stimulation with a new computer-controlled system. J Neurosci Methods 102:81–89
6. Kujirai T, Caramia MD, Rothwell JC et al (1993) Corticocortical inhibition in human motor cortex. J Physiol (Lond) 471:501–519
7. Ziemann U, Rothwell JC, Ridding MC (1996) Interaction between intracortical inhibition and facilitation in human motor cortex. J Physiol (Lond) 496(Pt 3):873–881
8. Alagona G, Ferri R, Pennisi G et al (2004) Motor cortex excitability in Alzheimer's disease and in subcortical ischemic vascular dementia. Neurosci Lett 362:95–98
9. Faro A et al (2006) A fuzzy model and tool to analyze SIVD diseases using TMS. International Journal of Signal Processing 2(1)
10. Faro A, Giordano D, Pennisi M, Scarciofalo G, Spampinato C, Tramontana F (2005) Transcranial Magnetic Stimulation (TMS) to evaluate and classify mental diseases using neural networks. In: Artificial Intelligence in Medicine, Lecture Notes in Computer Science, vol 3581. Springer, Berlin/Heidelberg, pp 310–314
Measurement Methodology for Temporomandibular Joint Displacement Based on Focus Mutual Information Alignment of CBCT Images
W. Jacquet1,2, E. Nyssen3, and B. Vande Vannet4
1 University of Antwerp, Department of Physics, Vision Lab, Antwerpen, Belgium
2 Vrije Universiteit Brussel, Department of Mathematics, DWIS, Brussels, Belgium
3 Vrije Universiteit Brussel, Department of Electronics and Informatics, ETRO, Brussels, Belgium
4 Vrije Universiteit Brussel, Dental School, SOPA, Brussels, Belgium
Abstract— Maxillofacial surgery can lead to displacement of the mandibular joint, bone resorption and finally articulatory dysfunctions. Bimaxillary surgical correction of a high-angle absolute mandibular retrognathism case may provoke condylar resorption. An in vivo measurement method is developed to evaluate mandibular joint displacement, aiming at correlating it with possible long-term clinical symptoms. For 6 patients two consecutive Cone Beam Computed Tomography (CBCT) images were obtained, before and after functional craniofacial surgery. The images are superimposed using a fuzzy 3D alignment criterion, Focus Mutual Information (FMI), in which the operator only needs to indicate approximately the (anatomic) structure in the pre-image. Changes in the left mandibular joint are studied from the alignment of the images focusing on the left mandibular ramus. An analogous alignment procedure is applied to the right mandibular ramus. The average age of the patients was 30 years, ranging from 16 to 48 years, 1 male and 5 female. The subtraction images reveal clear change in geometry in 7 of the 12 joints. The rotation in the horizontal plane ranges from -11.2° to 8.8°, with μ = -0.9° and σ = 4.9°; the rotation in the frontal plane ranges from -6.8° to 9.5°, with μ = 0.0° and σ = 5.1°. The translational displacement of the caput with respect to the fossa ranges from 0.2 mm to 3.1 mm in the horizontal plane, with μ = 1.0 mm and σ = 0.9 mm. Minimal distances lower than 1 mm have been observed. The semi-automatic alignment and measurement method based on FMI allows for superimposition of 3D CBCT images and makes it possible to accurately measure distortion of the mandibular joint during or immediately after orthodontic surgery, allowing for instantaneous corrections.
Keywords— Orthognathic surgery, orthodontics, craniofacial anomalies, image registration, measurement.
I. INTRODUCTION

Orthognathic surgery, the surgical correction of abnormalities of the mandible, maxilla, or both, has possible adverse long-term effects on the functionality of the Temporo-Mandibular Joints (TMJ). Inaccurate performance of orthognathic surgery results in errors in all three orientations in space, in terms of pitch, roll and yaw, as introduced in Ackerman et al. [1],
and can be detected using 3D virtual control. Eggensperger et al. [2] gave two reasons for skeletal relapse and possible TMJ dysfunction after mandibular advancement by Bilateral Sagittal Split Osteotomy (BSSO): (1) insufficient intra-operative positioning causing an early relapse, and (2) late relapse due to Progressive Condylar Resorption (PCR). The occurrence of PCR was estimated at 7.7% after 2 to 4 years by Scheerlinck et al. [3] (n=103). Borstlap et al. [4] found an occlusal relapse in 16% of the cases after 2 years (n=222). Hoppenreijs et al. [5] found in their study that 13 out of 26 patients who developed PCR had unacceptable occlusal and/or esthetic results and underwent a second surgery. Geometry-related surgical PCR risk factors are mechanical stress, compression and articular disk–condyle mal-relationships, according to e.g. Arnett et al. [6]. Closeness of the condyle and the fossa mandibularis can be an indication of possible compression and elevated pressure, as is the rotation of the condyle. Pre- and post-operative CBCT images can reveal changes in closeness, displacement and rotation of the condyle with respect to the fossa. The changes can be quantified by comparing direct measurements of angles and distances in the pre- and post-operative images separately, yet more efficiently through direct measurement of the evolution of angles and displacements in aligned structures. Two dual approaches exist:
• Alignment of the cranial base, and measurement of the relative displacement of ramus and condyle.
• Alignment of ramus and condyle, and measurement of the relative displacement of the cranial base.
The first approach is studied by Cevidanes et al. [7]. In what follows, the second approach is explored. An advantage of aligning ramus and condyle is that the cranial base is a relatively large structure and distinct curves are displayed when the cranial base is viewed on coronal and horizontal slices. This makes the indication of an orientation of the cranial base, based on 2D slices, relatively easy. Together with the alignment, a measurement system is proposed, based on the least-squares fit of lines and ellipses, similar to what is done in echography analysis of the development of infants during pregnancy.
Fig. 1 Least squares line fitted through cranial base in a horizontal slice at the center of the caput of the pre-surgery image (left) and least squares line fitted through the cranial base for the same horizontal slice of the aligned post-surgery image, aligned with the pre-surgery image with respect to the ramus (right)
Fig. 2 Least squares fit of an ellipsoid to the fossa mandibularis in a horizontal slice of the pre-surgery image (left) and least squares fit of an ellipsoid to the fossa mandibularis for the same horizontal slice of the aligned post-surgery image, aligned with the pre-surgery image with respect to the ramus (right)
II. MATERIAL AND METHODS
A. Material
From the files of six patients, pre- and post-surgery CBCT images were obtained. The patients comprised a group of three aged between 16 and 18 years and a group of three aged between 35 and 48 years, one male and five female. The time interval between the first and the second image acquisition ranged from 6 to 420 days (image acquisition with a Galileos 3D Cone Beam Imaging System, Sirona Dental Systems GmbH, Germany). The images consist of 512 × 512 × 512 voxels with a gray-value scale of 256 gray values. The patients underwent mandibular advancement by BSSO.
B. Methods
Alignment: For each image pair the post-surgery CBCT image was aligned with the pre-surgery CBCT image, once with respect to the left ramus and once with respect to the right ramus, using FMI – see Jacquet et al. [8], [9]. The focus on the ramus was generated from 19 to 30 points roughly indicated on horizontal slices using an in-house developed software tool. To quantify displacement, a reference point is placed at the midpoint of the mandibular caput in the pre-image. The frontal and horizontal slices of the aligned images going through this midpoint are used to evaluate the displacement.
Rotational displacement: For all image pairs of horizontal and frontal slices an approximation of the angular displacement of the cranial base with respect to the ramus is produced. The orientation of the cranial base is indicated manually by means of eight points in the pre-surgery image – see Fig. 1 (left) – and eight points in the transformed test image – see Fig. 1 (right). A line is fitted through the eight points indicating the cranial base in the pre-surgery image, and a second line is fitted through the eight points indicating the cranial base in the aligned test image. A least-squares criterion is used to fit the lines. The angle between these two lines is calculated. The same procedure is followed to measure the angular displacement in the coronal plane.
Translational displacement: To quantify the displacement of the caput with respect to the fossa, the fossa was manually indicated through 8 points in the pre-operative horizontal image and in the post-operative horizontal image. The displacement reported in Table 1 is characterized by the differences between the centers of the ellipsoids fitted through the indicated points in the pre-operative image and the aligned post-operative image, respectively – see e.g. Fig. 2 (left) and Fig. 2 (right). The ellipsoids are fitted using a least-squares criterion – see e.g. Fitzgibbon et al. [10].
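The rotational measurement just described reduces to two least-squares line fits and an angle difference, as in this sketch (the point arrays are hypothetical 8 × 2 inputs, not the paper's data):

```python
import numpy as np

def angle_between_fitted_lines(pts_pre, pts_post):
    """Angle (degrees) between the least-squares lines fitted through the
    cranial-base points of the pre-surgery and aligned post-surgery slices."""
    slopes = []
    for pts in (pts_pre, pts_post):                  # pts: (8, 2) arrays
        m, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)   # least-squares slope
        slopes.append(m)
    return np.degrees(np.arctan(slopes[1]) - np.arctan(slopes[0]))
```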
III. RESULTS

Descriptive statistics are presented in Table 1. A considerable variability in rotational displacement can be observed in both planes, ranging in absolute value from 0.1° to 11.2° in the horizontal plane and from 0.6° to 9.5° in the coronal plane. Variation in translational displacement ranges from 0.2 mm to a substantial 3.1 mm.
Table 1 Horizontal rotation, coronal rotation and horizontal translational displacement (n = 12)

                              Horizontal rotation (°)   Coronal rotation (°)   Horizontal translational displacement (mm)
Minimum                       -11.2                     -6.8                   0.2
Maximum                       8.8                       9.5                    3.1
Mean                          -0.9                      0.0                    1.0
Standard deviation            4.9                       5.1                    0.9
Minimum of absolute value     0.1                       0.6                    –
Maximum of absolute value     11.2                      9.5                    –
Mean of absolute value        3.4                       4.2                    –
Number (> 5°, > 5°, > 1 mm)   3                         4                      5
IV. DISCUSSION AND CONCLUSIONS

FMI was successfully applied to the alignment of 3D CBCT images, acquired before and after Bilateral Sagittal Split Osteotomy (BSSO), with respect to the mandibular ramus. Alignment of the ramus allows the use of slices for the direct measurement of displacement and relative rotation. The observed rotations and displacements are comparable to those found by Cevidanes et al. [10]. If the rotation in one orientation – coronal or horizontal – is excessive, it becomes difficult to measure the rotation in the other orientation. A 3D approach involving feature points might solve the problem, but requires expert knowledge and training. The Focus Mutual Information (FMI) alignment and the measurement procedures are semi-automatic. The interaction of the practitioner is kept minimal and does not require expert knowledge. The rotation, magnitude and direction of displacements of the caput with respect to the condylar fossa due to the surgical procedure are immediately
visible from the aligned images, without the use of 3D models or 3D model superimpositions. The measurement methodology was developed to obtain more discriminating information from CBCT images. A more extensive study is needed that would incorporate inter- and intra-rater reliability evaluation, and a long-term follow-up study, to determine how the measurements can contribute to risk assessment. One could consider the use of surface alignment when 3D models have been constructed to align anatomical structures in consecutive CBCT images. In its elementary form, surface alignment consists of mapping a selected part of the image corresponding to a structure onto its 3D surface model. The transformation minimizing a nearness measure between the selected part and the 3D surface model is sought – see e.g. Besl and McKay [11]. When aligning two consecutive images with respect to a specific anatomic structure, the surface of this structure has to be reconstructed and a 3D surface model has to be created, based on the pre-surgery image. In the test image the pixels corresponding to the structure have to be selected through segmentation. Segmentation as well as 3D model reconstruction are computationally intensive and need manual verification and possibly correction and/or recalculation. The FMI alignment criterion needs neither reconstruction of surfaces nor segmentation, and leads to directly interpretable superimpositions. If it is imperative to visualize aligned surface reconstructions, this can be done by applying the transformation found through FMI to the surface reconstruction in the test image, by analogy to what has been done by Cevidanes et al. [12] using an ROI in combination with MI alignment.
ACKNOWLEDGMENT

Prof. Dr. Stan Politis of the Universiteit Hasselt is gratefully acknowledged for providing the CBCT images.
REFERENCES
1. Ackerman JL, Proffit WR, Sarver DM et al. (2007) Pitch, roll, and yaw: Describing the spatial orientation of dentofacial traits. Am. J. Orthod. Dentofacial Orthop. 131:305–310
2. Eggensperger N, Smolka K, Luder J et al. (2006) Short- and long-term skeletal relapse after mandibular advancement surgery. Int. J. Oral Maxillofac. Surg. 35:36–42
3. Scheerlinck JP, Stoelinga PJ, Blijdorp PA et al. (1994) Sagittal split advancement osteotomies stabilized with miniplates. A 2-5-year follow-up. Int. J. Oral Maxillofac. Surg. 23:127–131
4. Borstlap WA, Stoelinga PJW, Hoppenreijs TJM et al. (2004) Stabilisation of sagittal split advancement osteotomies with miniplates: a prospective, multicentre study with two-year follow-up. Part I: clinical parameters. Int. J. Oral Maxillofac. Surg. 33:433–441
5. Hoppenreijs TJM, Stoelinga PJW, Grace KL et al. (1999) Long-term evaluation of patients with progressive condylar resorption following orthognathic surgery. Int. J. Oral Maxillofac. Surg. 28:411–418
6. Arnett GW, Milam SB, Gottesman L (1996) Progressive mandibular retrusion-idiopathic condylar resorption. Part II. Am. J. Orthod. Dentofacial Orthop. 110:117–127
7. Cevidanes LH, Bailey LJ, Tucker GR et al. (2005) Superimposition of 3D cone-beam CT models of orthognathic surgery patients. Dentomaxillofac. Radiol. 34:369–375
8. Jacquet W, Nyssen E, Bottenberg P et al. (2009) Novel information theory based method for superimposition of lateral head radiographs and CBCT images. Dentomaxillofac. Radiol., accepted
9. Jacquet W, Nyssen E, Bottenberg P et al. (2009) 2D image registration for piecewise rigid objects using a variant of mutual information. Computers in Biology and Medicine 39:545–553
10. Fitzgibbon AW, Pilu M, Fisher RB et al. (1999) Direct least-squares fitting of ellipses. IEEE Trans. Patt. Anal. Mach. Intell. 21:476–480
11. Besl PJ, McKay ND (1992) A method for registration of 3-D shapes. IEEE Trans. Patt. Anal. Mach. Intell. 14:239–255
12. Cevidanes LH, Styner MA, Proffit WR (2006) Image analysis and superimposition of 3-dimensional cone-beam computed tomography models. Am. J. Orthod. Dentofacial Orthop. 129:611–618
Author: Dr. W. Jacquet
Institute: University of Antwerp, Vision Lab
Street: Universiteitsplein 1
City: B-2610 Wilrijk
Country: Belgium
Email: [email protected]
Computer Aided Diagnosis of Diffuse Lung Disease in Multi-detector CT – Selecting 3D Texture Features
I. Mariolis1, P. Korfiatis1, C. Kalogeropoulou2, D. Daoussis3, T. Petsas2, and L. Costaridou1
1 University of Patras, Faculty of Medicine, Department of Medical Physics, Rio, Patras 265 00, Greece
2 University Hospital of Patras, Department of Radiology, Rio, Patras 265 00, Greece
3 University Hospital of Patras, Department of Internal Medicine, Division of Rheumatology, Rio, Patras 265 00, Greece

Abstract— Computed Tomography (CT) is the modality of choice for the diagnosis of Diffuse Lung Disease (DLD) affecting lung parenchyma. The need for Computer Aided Diagnosis (CAD) schemes aimed at quantifying DLD patterns in lung CT originates from the large inter- and intra-observer variability characterizing DLD interpretation. The majority of the proposed CAD schemes aimed at DLD characterization exploit textural features combined with supervised classification algorithms. However, the exploitation of these features is suboptimal, since no feature reduction or evaluation is performed prior to the classification task. The aim of the current paper is to investigate 3D texture feature sets (histogram signatures, co-occurrence and run-length matrix statistics) with respect to their capability in characterizing DLD patterns (normal, ground glass, reticular and honeycombing). Earth Mover's Distance (EMD), k-Nearest Neighbor (k-NN) and Multinomial Logistic Regression (MLR) classifiers were used to assess the performance of the individual feature sets. In the analysis performed, the Histogram Signature (HS) feature set combined with the EMD classifier achieved the lowest overall accuracy (80.2%). The co-occurrence-based feature set presented the highest overall classification accuracy (99.3%) when combined with the k-NN classifier. However, both the run-length and co-occurrence based feature sets presented robustness against classifier choice and higher classification accuracy than the HS feature set.

Keywords— Diffuse lung disease, MDCT, 3D texture, Histogram signatures, Supervised classification.
I. INTRODUCTION

Diffuse Lung Disease (DLD) represents a large and heterogeneous group of disorders primarily affecting lung parenchyma [1]. Such disorders account for about 15% of respiratory practice and can potentially lead to respiratory failure if therapy fails [2]. Computed Tomography (CT) is the modality of choice for the diagnosis of DLD and for the prediction of response to treatment. The clinical diagnosis of DLD in CT is based on the assessment of lung parenchyma texture patterns and of their extent and distribution within the lung. High Resolution CT (HRCT) and emerging Multi-Detector CT (MDCT) scanning protocols have been exploited in the computer-aided characterization and quantification of the entire extent of DLD.
Quantification of DLD patterns by radiologists is characterized by high inter- and intra-observer variability, due to the lack of standardized criteria, and is further challenged by the volume of image data reviewed. Systems proposed up to now exploit supervised textural pattern classification, since DLD is manifested as texture alterations of lung parenchyma [3]. Xu et al. [4] used 3D co-occurrence, run-length, fractal and first-order statistics to differentiate between 5 lung tissue types in MDCT datasets, combined with two classifiers (support vector machine and Bayesian). Zavaletta et al. [5] proposed a scheme based on histogram signatures extracted from lung Volumes Of Interest (VOIs), using the Earth Mover's Distance (EMD) similarity metric. Although several 3D texture feature sets have been proposed, their exploitation is suboptimal, since the performance and robustness of individual feature sets is not evaluated prior to the classification task. The aim of the current paper is to investigate commonly used texture features regarding their capability in differentiating DLD patterns. Specifically, three feature sets corresponding to histogram signatures, co-occurrence and run length are examined. The extracted feature sets are compared by means of their ability to discriminate different types of lung parenchyma patterns (normal, ground glass, reticular, honeycombing), exploiting earth mover's distance, k-Nearest Neighbor and multinomial logistic regression. The performance of individual feature sets is evaluated by means of classification error.
II. MATERIALS AND METHODS

A. Dataset

A pilot clinical case sample was acquired, consisting of 30 MDCT scans corresponding to 5 normal patients and 25 patients diagnosed with IP secondary to connective tissue diseases, radiologically manifested with ground glass, reticular and honeycombing patterns (Fig. 1). The MDCT scans were obtained with a multislice (16×) CT scanner (LightSpeed, GE) in the Department of Radiology at the University Hospital of Patras, Greece. Acquisition parameters of tube voltage, tube current and slice thickness were 140 kVp, 300 mA and
1.25 mm, respectively. The image matrix size was 512×512 pixels, with an average pixel size of 0.89 mm.
Fig. 1 CT appearance of (a) normal lung parenchyma, and DLD patterns; (b) ground glass, (c) reticular, (d) honeycombing
The MDCT scans were used to extract VOIs for training the classifiers employed for IP pattern identification and characterization. These sets consisted of 1173 cubic VOIs (VS×VS×VS) defined by an expert radiologist exploiting a home-developed graphical user interface, representing patterns corresponding to reticular (458), ground glass opacities (195), honeycombing (249) and normal LP (271).

B. Methods

a) Texture Analysis

Histogram Signatures (HS) [5]: For each VOI a signature was constructed by adaptively binning its corresponding histogram into C bins of varying size. These adaptive bins are produced by clustering the histogram using an optimal k-means algorithm implemented by means of dynamic programming. This results in a histogram signature, which is defined by the centroids of the clusters and their weights (i.e. the number of voxels in each cluster). Subsequently, the canonical signature for a class is computed by combining the signatures of each of the training VOIs and re-clustering the distribution into C clusters.

3D run-length based features (RLE) [6]: Run-length statistics capture the coarseness of texture in a specified direction. A run is defined as a string of consecutive voxels which have the same gray-level intensity along a planar orientation. For a given VOI a run-length matrix is defined as follows: P(i,j) represents the number of runs with voxels of gray-level intensity equal to i and run length equal to j along a given direction (13 directions are considered).
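As an illustration of the histogram-signature idea described above, the sketch below bins a VOI's gray values with plain k-means (the paper uses an optimal k-means via dynamic programming) and compares two signatures with a 1-D earth mover's distance from SciPy; the two-function layout is an assumption for exposition.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import wasserstein_distance

def histogram_signature(voi, c=3):
    """Signature of a VOI: C adaptive-bin centroids and their weights."""
    g = voi.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=c, n_init=10, random_state=0).fit(g)
    centroids = km.cluster_centers_.ravel()
    weights = np.bincount(km.labels_, minlength=c).astype(float)
    return centroids, weights / weights.sum()

def signature_distance(sig_a, sig_b):
    """1-D earth mover's distance between two (centroids, weights) pairs."""
    (c1, w1), (c2, w2) = sig_a, sig_b
    return wasserstein_distance(c1, c2, u_weights=w1, v_weights=w2)
```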
Eleven features were calculated: short-run emphasis (SRE), long-run emphasis (LRE), low gray-level run emphasis (LGRE), high gray-level run emphasis (HGRE), short-run low gray-level emphasis (SRLGE), short-run high gray-level emphasis (SRHGE), long-run low gray-level emphasis (LRLGE), long-run high gray-level emphasis (LRHGE), gray-level non-uniformity (GLNU), run-length non-uniformity (RLNU) and run percentage (RPC). The mean and range of each feature over the 13 run-length matrices (corresponding to the 13 directions) were calculated, comprising a total of 22 run-length based features.

3D gray-level co-occurrence based features (GLCM) [7]: The Gray Level Co-occurrence Matrix is a well-established tool for characterizing the spatial distribution (second-order statistics) of gray levels in an image. An element at location (i,j) of the co-occurrence matrix signifies the joint probability density of the occurrence of gray levels i and j in a specified direction θ and at a specified distance d from each other. The 3D co-occurrence matrix stores the number of co-occurrences of pairs of gray levels i and j which are separated by a distance D (in this study D varied according to VOI size) in 13 directions of a VOI. In this work, for each distance D, 17 3D co-occurrence matrix features were calculated per VOI: angular second moment (ASM), contrast (CTR), correlation (COR), variance (VAR), inverse difference moment (IDM), sum average (SAV), sum variance (SVAR), sum entropy (SENTR), entropy (ENTR), difference average (DAV), difference variance (DVAR), difference entropy (DENTR), autocorrelation (ACOR), shade (SHD), prominence (PRM), information measure of correlation 1 (IMC1) and information measure of correlation 2 (IMC2). The mean and range of each feature over the 13 co-occurrence matrices (corresponding to the 13 directions) were calculated, comprising a total of 34 GLCM-based features for each distance D.

b) Feature Selection

A statistical approach, stepwise discriminant analysis (SDA) [8], is employed to reduce the dimensions of each feature vector.

c) Classification

Earth mover's distance (EMD) [9]: The earth mover's distance is a cross-bin similarity metric that computes the minimal cost to transform one signature into another. Given two signatures with separate bins, computing the EMD can be thought of as determining how much work it takes to transform one signature into the other. Thus, the EMD naturally extends the notion of a distance between single elements to that of a distance between sets, or distributions, of elements.

k-Nearest Neighbor classifier (k-NN) [10]: Nearest Neighbor classification is one of the simplest supervised
b) Feature Selection
A statistical approach, stepwise discriminant analysis (SDA) [8], is employed to reduce the dimensionality of each feature vector.
c) Classification
Earth mover's distance (EMD) [9]: The earth mover's distance is a cross-bin similarity metric that computes the minimal cost of transforming one signature into another. Given two signatures with separate bins, the EMD can be thought of as the amount of work it takes to transform one signature into the other. Thus, the EMD naturally extends the notion of a distance between single elements to that of a distance between sets, or distributions, of elements.
k-Nearest Neighbor classifier (k-NN) [10]: Nearest neighbor classification is one of the simplest supervised classification techniques in the field of statistical pattern recognition. In the current study, a k-NN classifier was used to assign to each LP voxel a label of normal, ground glass, reticular or honeycombing, using the set of selected texture features as inputs. The k-NN classifies an unknown pattern according to the majority vote of its k nearest neighbors. In this work the training set is normalized to zero mean and unit variance, while the Euclidean metric was used as the distance function.
Multinomial logistic regression (MLR) [11]: Multinomial logistic regression is a classic statistical method for multi-class pattern recognition problems. It is part of a greater family of statistical models, including linear and Poisson regression, unified under the term Generalised Linear Models [12]. The output of an MLR model can be interpreted as an a-posteriori estimate of the probability that a given pattern belongs to each of m disjoint classes. The regression coefficients used to produce that probability estimate are typically estimated by means of maximum likelihood or Bayesian techniques.
The HS features were tested using the EMD classifier, to reproduce a recently proposed DLD quantification scheme [5]. The RLE and GLCM features were tested using a k-NN classifier as well as MLR. Thus, five different CAD schemes (HS_EMD, RLE_k-NN, RLE_MLR, GLCM_k-NN, GLCM_MLR) have been examined in this study, employing either different feature types, different classifiers, or both.
d) Parameter Selection
For a given combination of feature type and classifier, the overall performance of the CAD system depends on a number of other parameters, such as the VOI size VS and the number of available gray levels NL. The selection of these parameters is based on the analysis of the overall accuracy of the system, using the leave-one-out validation method. Namely, a grid search is performed and the set of parameters yielding the highest overall accuracy is selected for each scheme. In the case of the HS features, apart from VS and NL, the number of binning clusters C is also determined by grid search; the same applies to the distance D used in the extraction of the GLCM features. In the case of the k-NN classifier, the number of neighbors k is also a parameter determined via grid search, and in order to avoid tie votes k is allowed to take only odd values.
Thus, the grid search determines three different types of parameters: VS and NL used in data selection, C and D used in feature extraction, and k used in classification. All parameters are simultaneously determined by a unified grid search, which results in five different sets of parameters, each one producing the highest overall accuracy for one of the five CAD schemes. The grid consisted of 12 different VOI sizes, VS ∈ (11,13,…,33), and 4 different numbers of gray levels, NL ∈ (32,64,128,256). In the case of the HS features 4 different cluster numbers were examined, C ∈ (3,4,5,10), while in the case of the GLCM features D varied according to VOI size, namely D ∈ (1,3,…,(VS+1)/2). Finally, 16 different k values were examined, k ∈ (1,3,…,31), in the case of the k-NN classifier.
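A minimal outline of that unified search for one k-NN scheme follows; extract_features(VS, NL, D) is a hypothetical helper standing in for the VOI extraction and feature computation described above, and scikit-learn is assumed.

```python
# Hedged sketch of the unified grid search with leave-one-out validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def grid_search_knn(extract_features, labels):
    best_acc, best_params = 0.0, None
    for VS in range(11, 34, 2):                    # 12 VOI sizes
        for NL in (32, 64, 128, 256):              # 4 gray-level settings
            for D in range(1, (VS + 1) // 2 + 1, 2):
                X = extract_features(VS, NL, D)    # hypothetical pipeline
                for k in range(1, 32, 2):          # odd k avoids tie votes
                    clf = make_pipeline(StandardScaler(),
                                        KNeighborsClassifier(n_neighbors=k))
                    acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
                    if acc > best_acc:
                        best_acc, best_params = acc, dict(VS=VS, NL=NL, D=D, k=k)
    return best_acc, best_params
```

The StandardScaler inside the pipeline re-normalizes each training fold to zero mean and unit variance, matching the normalization stated above.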
III. RESULTS
The parameters of the five CAD schemes determined by the aforementioned grid search procedure are presented in Table 1.

Table 1 Grid search parameters

                 Data Selection     Feat. Extraction    Classific.
CAD scheme       VS      NL         C       D           k
HS_EMD           33      256        3       -           -
RLE_k-NN         29      64         -       -           1
RLE_MLR          25      64         -       -           -
GLCM_k-NN        33      32         -       1           3
GLCM_MLR         33      32         -       9           -
Using these parameters, five different datasets are determined, each one producing a different set of features. In the case of HS, three canonical signatures were produced, while the four remaining feature sets are presented in Table 2. The corresponding classification results are presented in Table 3, where the overall accuracy for each CAD scheme is accompanied by a confidence interval estimated at the p=0.05 confidence level. In the subsequent columns the sensitivity and specificity are presented separately for each class.
Table 2 Selected feature sets

        RLE_k-NN                   RLE_MLR                           GLCM_k-NN                    GLCM_MLR
Mean    SRE LRE RPC LGRE HGRE      SRE LRE RPC LGRE HGRE SRLGE       COR VAR SAV ACOR IDM         COR VAR SAV ACOR IDM DENTR
        SRLGE LRLGE LRHGE RLNU     LRLGE LRHGE RLNU GLNU SRHGE       DENTR ENTR SVAR IMC1
Range   -                          SRLGE RLNU                        PRM IMC1 ENTR SVAR           CTR ACOR SAV DVAR
Total   9                          13                                13                           10
Table 3 CAD schemes' performance evaluation

                Overall accuracy %    Sensitivity %                                       Specificity %
CAD scheme      (p=0.05)              Normal  Ground glass  Reticular  Honeycombing       Normal  Ground glass  Reticular  Honeycombing
HS_EMD          80.2 (78.2-82.1)      99.6    74.4          69.9       82.7               78.0    66.8          86.3       86.2
RLE_k-NN        99.1 (98.4-99.5)      100     98.1          97.7       100                99.6    98.1          98.3       100
RLE_MLR         98.6 (97.8-99.1)      100     97.0          96.8       100                100     96.5          97.7       99.6
GLCM_k-NN       99.3 (98.7-99.6)      100     96.2          99.7       100                100     100           98.3       99.6
GLCM_MLR        98.4 (97.7-98.9)      100     95.1          98.0       100                100     95.6          97.7       100
IV. DISCUSSION
Several studies have tested texture features regarding their ability to differentiate DLD patterns in lung MDCT scans. The majority of these studies exploit 2D texture feature sets, and a feature selection step investigating the performance and robustness of individual feature sets is not considered. In this study, three commonly used 3D feature sets are evaluated. In the analysis performed, the HS feature set achieves the lowest overall accuracy (80.2 %), in accordance with the results reported in the literature [5]. This is mainly attributed to the fact that the HS features transform the information contained in the VOI into a 1D signal, so that no spatial information is retained. However, another factor influencing the reported performance is the nature of the HS features, which prohibits the use of other classifiers such as MLR, since only cross-bin similarity metrics can be applied. Although the feature selection technique used in this study complies better with the linear model employed by the MLR algorithm, no significant difference between the performance of the k-NN and MLR classification algorithms is demonstrated. The best accuracy in the analysis performed (99.3 %) was achieved by the GLCM feature set combined with the k-NN classifier. However, since the accuracy confidence intervals of all classification schemes overlap, no statistically significant difference is expected.
V. CONCLUSIONS
In this study, three commonly used 3D texture feature sets were evaluated, demonstrating the increased discriminative ability of the GLCM features when combined with the k-NN classifier, as well as the robustness of both the RLE and GLCM features against classifier choice.
ACKNOWLEDGMENT This work was supported in part by the Caratheodory Programme (C.591) of the University of Patras.
REFERENCES
1. Sluimer I C, Prokop M, Hartmann I, van Ginneken B (2006) Automated classification of hyperlucency, fibrosis, ground glass, solid, and focal lesions in high-resolution CT of the lung. Med Phys 33:2610-2620
2. Aziz Z A, et al (2004) HRCT diagnosis of diffuse parenchymal lung disease: inter-observer variation. Thorax 59:506-511
3. Sluimer I C, Schilham A, Prokop M, van Ginneken B (2006) Computer analysis of computed tomography scans of the lung: a survey. IEEE Trans Med Imaging 25:385-405
4. Xu Y, van Beek E J, Hwanjo Y, Guo J, McLennan G, Hoffman E A (2006) Computer-aided classification of interstitial lung diseases via MDCT: 3D adaptive multiple feature method (3DAMFM). Acad Radiol 13:969-978
5. Zavaletta V A, Bartholmai B J, Robb R A (2007) High resolution multidetector CT-aided tissue analysis and quantification of lung fibrosis. Acad Radiol 14:772-787
6. Galloway M (1975) Texture analysis using gray level run lengths. Comput Graph Imaging Process 4:172-179
7. Haralick R M (1979) Statistical and structural approaches to texture. Proc IEEE 67:786-804
8. Einslein K, Ralston A, Wilf H S (1977) Statistical methods for digital computers. John Wiley & Sons, New York
9. Rubner Y, Tomasi C, Guibas L J (2000) The earth mover's distance as a metric for image retrieval. Int J Comp Vision 40:99-121
10. Patrick E A, Fischer F P III (1970) A generalized k-nearest neighbor rule. Information and Control 16:128-152
11. McCullagh P, Nelder J A (1989) Generalized Linear Models, 2nd edn. Monographs on Statistics and Applied Probability, Chapman & Hall/CRC
12. Hosmer D W, Lemeshow S (2000) Applied logistic regression, 2nd edn. Wiley, New York
Statistical Pre-processing Method for Peripheral Quantitative Computed Tomography Images
T. Cervinka1, H. Sievanen2, M. Hannula1, and J. Hyttinen1
1 Tampere University of Technology, Department of Biomedical Engineering, Tampere, Finland
2 Bone Research Group, UKK Institute for Health Promotion Research, Tampere, Finland
Abstract— This study aimed to find a processing method that would reduce the noise level and enhance the image quality for structural bone analysis in peripheral quantitative computed tomography (pQCT) images. We propose a method based on down-sampling of the histogram of gray scale intensities followed by correction of the subsequent inaccuracies. It employs the wavelet transform and a Markov random field model. For comparison, two well-known image filtering techniques (median filtering and filtering based on the wavelet transform with soft-thresholding) were evaluated. The performance of the methods was tested on pQCT scans of artificial phantoms, a real pQCT scan of the distal tibia, and a numerical model of a pQCT scan. As to the preservation of coarse structural information in pQCT images, the new pre-processing method based on a statistical approach performed reasonably well and appears to be a promising method for enhancing the analysis of pQCT images.
Keywords— image processing, peripheral quantitative computed tomography, wavelet transform, Bayes approach, Markov random field.
I. INTRODUCTION
Osteoporosis and associated fragility fractures are a common and considerable health problem in developed countries. Since bone fragility is largely determined by the structural particulars of bone [1], the structural analysis of bones may be of use in improving the identification of individuals susceptible to fragility fractures [2]. Quantitative computed tomography (QCT) makes the true structural assessment of clinically relevant bone sites possible [3], but at the price of an X-ray radiation dose and consequent ethical issues, particularly among fertile-aged women or healthy persons in general. Low-dose peripheral QCT (pQCT) is not generally feasible for the clinically important axial skeleton but applies well to appendicular bones [4]. Appendicular bones can be considered a reasonable vehicle to develop image processing and analysis algorithms that may eventually turn out to be feasible for the structural characterization of bones in general, including the clinically relevant proximal femur and lumbar vertebrae.
This paper introduces an image pre-processing technique for enhancing the quality of pQCT images, which is affected by the noise of the pQCT scanner per se. The primary goal of the present study is to propose an optimal pre-processing method for noise suppression in pQCT images without affecting the relevant structural information on bone. The common processing method of pQCT images is simply based on computing average values from the bone cross-sections of interest. To test the present methods, raw and pre-processed pQCT images of phantoms were first used to determine the standard deviation of the noise and the gray level dynamic range. Ground truth testing was performed with artificial computer-generated pQCT images of bone, utilizing the information determined from the phantoms. Further, actual bone pQCT images were used to illustrate the performance of the pre-processing methods. The performance of the new method is compared with two well-known image filtering techniques: median filtering and filtering based on the wavelet transform with soft-thresholding [5].
II. MATERIALS AND METHODS
A. pQCT Images
Four phantoms, based on three known concentrations of K2HPO4 solution (50 mg/cc, 100 mg/cc, 250 mg/cc) and tap water [4], were used for calibrating the gray scales and evaluating the noise levels of pQCT images (XCT 3000, Stratec Medizintechnik GmbH, Pforzheim, Germany). The phantoms were scanned 12 times in a row with scan parameters identical to human scans [4]. Further, artificial computer-generated bone images were constructed. A numerical model of trabecular bone structure was constructed according to the structural information provided by Khosla et al [6]. In short, the procedure was started by placing seeds of trabecular bone into a 2D plane; these were then allowed to grow randomly in the orthogonal direction, creating a 3D structure. Fig. 1 illustrates the numerical phantom structure of trabecular bone, of which a virtual pQCT scan (slice thickness 2.5 mm, pixel size 0.5 mm x 0.5 mm) was performed. For a realistic approach, noise with characteristics similar to the pQCT images was added to the subsequent pQCT image. Next, one pQCT scan of the distal tibia was used as an example of real data.

Fig. 1 The model of trabecular bone structure constructed according to information given by Khosla et al (2005) [6]. See details in the text

Fig. 2 Example of the down-sampling conversion function. The x-axis denotes the number of gray levels in the original image, while the y-axis denotes the reduced number of gray levels after the down-sampling procedure

B. The Proposed Pre-processing Method
The novel pre-processing method was intended to enhance the image quality of the pQCT images so that structural features could be detected, as appropriate given the limited resolution of the pQCT images. The method was composed of two steps: 1. down-sampling of the image intensity histogram to reduce the number of gray scales, 2. correction based on a Bayes approach and a redundant wavelet transform to correct the inaccuracies caused by the down-sampling procedure.
A common piecewise linear conversion function was used in an iterative down-sampling procedure. The conversion curve is not linear because the main range of interest (the trabecular bone) lies in the middle part of the histogram of gray levels. Other regions were considered irrelevant and a coarse conversion factor was used. This can also be viewed as a non-equidistant re-quantization of the intensity scale. An example of the conversion function after the first iteration is depicted in Fig. 2.
The result of the down-sampling procedure needs to be corrected to suppress the consequent inaccuracy. This can be done by applying a correction factor based on the following approach. We used a redundant wavelet transform with bi-orthogonal filters to decompose the image after the down-sampling procedure. The wavelet transform provides a decomposition of a function on a particular basis of functions, called wavelets [5]. Thus, we obtain separated frequency bands of the processed image. The a priori knowledge takes into account that wavelet coefficients with similar values are usually locally concentrated in the processed images across all decomposition levels. A conditional probability, in turn, takes into account that a wavelet coefficient (obtained after decomposition of the down-sampled image) has a certain value; it is more probable that the wavelet coefficient has the same value, rather than a different one, at the same position before reconstruction. Using the modified Bayes' rule, the a posteriori probability can be derived:

P(A|B) = P(B|A) × P(A)    (1)
The first term in (1) is the conditional probability and the second term is the prior probability (matrices A and B describe the hypothesis and the data, respectively). Both probabilities are usually modeled as Gibbs probability functions. This has the advantage that all variables can be described directly with an image model called a Markov random field. The relation between Markov random fields and Gibbs probability functions is expressed in the Hammersley-Clifford theorem [7].
C. Standard Pre-processing Methods Utilized for Evaluation
To evaluate the performance of the proposed method, two well-known image filtering methods were also employed: median filtering and filtering based on the wavelet transform with soft-thresholding [5]. For the wavelet-based filtering, a redundant wavelet transform with bi-orthogonal filters was used to decompose the image. After that, a filtering procedure based on soft-thresholding of the wavelet coefficients was applied. The value of the threshold was set as
t = √(2·log n) · σ    (2)
where n is the number of wavelet coefficients and σ is the standard deviation of the noise in the processed frequency band. A median filter with a 3x3 window was used as a classical image filtering method for comparison with the previously mentioned methods.
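For illustration, a denoising pass applying the threshold of Eq. (2) can be sketched with PyWavelets as follows; the stationary (redundant) transform with a biorthogonal filter matches the description above, while the per-band noise estimate via the median absolute deviation is our assumption, since the text does not state how σ was obtained.

```python
# Sketch of redundant-wavelet soft-thresholding per Eq. (2).
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="bior2.2", level=2):
    coeffs = pywt.swt2(img.astype(float), wavelet, level=level)
    out = []
    for cA, (cH, cV, cD) in coeffs:
        bands = []
        for c in (cH, cV, cD):
            sigma = np.median(np.abs(c)) / 0.6745      # assumed noise estimator
            t = np.sqrt(2.0 * np.log(c.size)) * sigma  # threshold of Eq. (2)
            bands.append(pywt.threshold(c, t, mode="soft"))
        out.append((cA, tuple(bands)))
    return pywt.iswt2(out, wavelet)

# example on a synthetic noisy 128x128 slice (noise std comparable to Table 1)
img = np.zeros((128, 128)); img[40:90, 40:90] = 100.0
denoised = wavelet_soft_denoise(img + np.random.normal(0, 35.0, img.shape))
```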
D. Performance Test of Methods
Because the results of the used methods have different intensity value scales and are hard to compare in absolute values, we needed a comparison method that has an objective basis and gives us objective numbers expressing the ability to differentiate the closest intensities corresponding to the phantoms. For this purpose, a probability approach was used. We modelled the gray levels from the phantoms as standardized Z-distribution values, and we could then use a one-sided Z-test (right side) to compute the probability of correctly discriminating the gray levels of each phantom. The type I error α was set to 0.05. From the known average and standard deviation values of neighboring phantoms, the type II error (β) and the power of the test were computed:

β = P(X < x_k), power = 1 − β    (3)

The power value lies within the interval 0-1. Values close to 1 (i.e., a small type II error) mean that we can reliably discriminate the gray values corresponding to neighboring phantoms at 95% confidence (α = 0.05). In contrast, values close to 0 mean that we are not able to distinguish the gray values corresponding to neighboring phantoms (the type II error is high). More about the Z-distribution and Z-test can be found in [8].

Fig. 3 Phantom images. From left to right: original phantom image (fifth scan), image pre-processed by the proposed method, image filtered by the wavelet-based filter, image filtered by the median filter with a 3x3 window. The second line shows cut-outs of the images in the first line, made at the specified cut lines
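The power computation of Eq. (3) is straightforward to reproduce; the sketch below uses the phantom I/II statistics of the original images (see Table 1) and recovers approximately the tabulated critical value (~371.5) and power (~0.90).

```python
# One-sided Z-test power for discriminating two neighboring phantoms, Eq. (3).
from scipy.stats import norm

def discrimination_power(mu_hi, sd_hi, mu_lo, sd_lo, alpha=0.05):
    x_crit = norm.ppf(1.0 - alpha, loc=mu_lo, scale=sd_lo)  # critical value
    beta = norm.cdf(x_crit, loc=mu_hi, scale=sd_hi)         # type II error
    return 1.0 - beta

# phantoms I and II of the original images (Table 1)
print(discrimination_power(416.9, 35.1, 322.9, 29.7))       # ~0.90
```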
III. RESULTS
The first set of test images consists of pQCT scans of the phantoms (see Fig. 3). This set was used to obtain estimates of the noise and of the gray level resolution of the pQCT scanner. The average values of the standard deviation of the noise and the average gray levels corresponding to the particular phantoms are given in Table 1, in conjunction with the power values. The influence of the tested processing methods on the real pQCT scans is illustrated in Fig. 4. Fig. 5, in turn, shows the corresponding influence on the virtual pQCT scan of the model of known bone structure (see Fig. 1).
Table 1 Comparison of the used processing methods for values of phantom images obtained from averaging of 12 pQCT scans

Original images
                    I.      II.     III.    IV.
Intensity values    416.9   322.9   291.7   259.4
Std. deviation      35.1    29.7    28.6    27.7
Critical value          371.5   338.5   304.9
Power                   0.902   0.398   0.323

Images processed by the presented method
                    I.      II.     III.    IV.
Intensity values    63.8    52.9    45.1    37.0
Std. deviation      1.6     3.0     2.9     2.8
Critical value          57.8    49.8    41.6
Power                   0.999   0.849   0.885

Images processed by the wavelet-based filter
                    I.      II.     III.    IV.
Intensity values    415.3   321.5   290.2   258.2
Std. deviation      18.4    12.0    11.7    10.8
Critical value          341.1   309.4   275.9
Power                   0.999   0.844   0.887

Images processed by the median filter
                    I.      II.     III.    IV.
Intensity values    414.6   321.3   290.2   258.2
Std. deviation      18.0    11.2    10.7    10.3
Critical value          339.7   307.7   275.1
Power                   0.999   0.887   0.922

(Critical values and power refer to the discrimination between the neighboring phantom pairs I-II, II-III and III-IV.)
IV. DISCUSSION
A. Quantitative View
The phantom data provided a convenient way to determine the gray levels and the standard deviation of the noise of pQCT scanning. We knew a priori that the contours of the used phantoms were regular and that they comprised homogeneous, known concentrations of K2HPO4 solution; thus each should present one value of gray level intensity (Table 1). Because the intensity levels and standard deviations of the noise were different for each phantom and processing method, the power value obtained from the Z-test provided a reasonable means to compare the tested methods for processing pQCT images.
As can be seen from Table 1, the power values for the original pQCT images were generally low, and a reliable discrimination between intensity levels of neighboring phantoms could be made only between phantoms I and II. For the other phantoms, the power values were 0.4 and 0.32, indicating that the probabilities of correctly differentiating intensity levels between phantoms II and III and between III and IV were very low. In terms of bone structure, only high density trabecular structures can be reliably separated from lower density structures. This also indicates that the original pQCT data, without appropriate pre-processing, may not be sensitive enough to detect small changes in trabecular structure, and some improvement in image quality is definitely needed.
It is clear that after processing of the original images the obtained power values improved essentially, but the results obtained with wavelet filtering and median filtering were somewhat better than those obtained with the proposed method, especially for differences between the high density phantoms I and II. This result is understandable and follows from
the theory, because the used phantoms should have the same intensity level throughout their whole area, which gives an advantage to the purely filtering-based methods.

Fig. 4 Pre-processing and filtering of a pQCT image of the distal tibia. From left to right: original pQCT image, image pre-processed by the new method, image filtered by the wavelet-based filter, image filtered by the median filter with a 3x3 window. In the lower panel, the intensity profiles are given at specified cut lines

Fig. 5 Pre-processing and filtering of the numerical phantom of the virtual pQCT scan of bone tissue. From left to right: original computed model image, model image with noise added according to the values measured on the phantom images, image pre-processed by the new method, image filtered by the wavelet-based filter with soft-thresholding, image filtered by the median filter with a 3x3 window. In the lower panel, the intensity profiles are given at specified cut lines

B. Qualitative View
As shown by the phantom images, the original pQCT images are markedly corrupted by noise. As to the influence of image processing on the homogeneous phantoms, the image appearance after wavelet-based filtering and median filtering was smoother and the noise suppression higher than obtained with the proposed method (Fig. 3). Given this, one could argue that those conventional methods are better and that the proposed method brings nothing to improve the image quality of pQCT. However, it is stressed that real bones are not homogeneous but structures that can show considerable spatial variation in density profiles. The pQCT-measured density values basically reflect the apparent density of trabecular tissue and cortical bone, being thus lumped measures of the scanned bone volume without the ability to show any structural details, e.g., thinning and perforation of the trabecular structure. In this respect, the possibility to assess the true trabecular structure in detail by high-resolution 3D pQCT is interesting and promising [6]. At the moment, however, these 3D devices are not yet widely used, while pQCT is a common device in bone research. In order to demonstrate the possibility of preserving trabecular structures, a numerical model of bone structure was employed (see Fig. 1). As can be seen from Fig. 5, spatial variation in the density profile (as a reflection of the actual bone structure) remained distinct in the image processed with the new method, while the image was blurred after median filtering. Further, the wavelet-based filtering removed the noise virtually completely but, at the same time, also removed the actual density (structural) variation. Obviously (based on the obtained results), wavelet-based filtering is not suitable for processing pQCT images.
As to the preservation of structural information, the proposed pre-processing method for pQCT images, based on down-sampling of the intensity histogram with a correction factor based on a Markov random field, performed reasonably well and offers a promising tool for enhancing the analysis of pQCT images.
REFERENCES
1. Jarvinen T, Sievanen H, Jokihaara J et al. (2005) Revival of bone strength: the bottom line. J Bone Miner Res 20:717-720
2. Mayhew P, Kaptoge S, Loveridge N et al. (2004) Discrimination between cases of hip fracture and controls is improved by hip structural analysis compared to areal bone mineral density: An ex vivo study of the femoral neck. Bone 34:352-361
3. Riggs BL, Melton III LJ, Robb RA et al. (2004) Population-based study of age and sex differences in bone volumetric density, size, geometry, and structure at different skeletal sites. J Bone Miner Res 19:1945-1954
4. Sievänen H, Koskue V, Rauhio A et al. (1998) Peripheral quantitative computed tomography in human long bones: evaluation of in vitro and in vivo precision. J Bone Miner Res 13:871-882
5. Aldroubi A, Unser M (1996) Wavelets in Medicine and Biology. CRC Press LLC, Boca Raton
6. Khosla S, Riggs BL, Atkinson EJ et al. (2005) Effects of sex and age on bone microstructure at the ultradistal radius: A population based noninvasive in vivo study. J Bone Miner Res 21:124-131
7. Malfait M, Roose D (1995) Wavelet based image denoising II: Wavelet based image denoising using a Markov Random Field a priori model. Technical Report TW 228, Katholieke Universiteit Leuven, Belgium
8. Swoboda H (1977) Modern Statistics. Nakladatelství SVOBODA, Prague
Author: Tomas Cervinka
Institute: Department of Biomedical Engineering, Tampere University of Technology
Street: Biokatu 6, 4. floor
City: Tampere
Country: Finland
Email: [email protected]
Security and Reliability of Data Transmissions in Biotelemetric System
M. Stankus, M. Penhaker, V. Srovnal, M. Cerny, and V. Kasik
VSB - Technical University of Ostrava, Department of Measurement and Control, Faculty of Electrical Engineering and Computer Science, Ostrava, Czech Republic
Abstract— Secure and reliable data transmissions are of vital importance in today's biotelemetric systems. The particular nature of biotelemetric data puts special requirements on a real biotelemetric system regarding data confidentiality, transport channel reliability, transport latency and latency jitter. This article describes some of the conclusions reached in the development of a real biotelemetric system using off-the-shelf embedded hardware technology, namely ARM microcontrollers, FRAM memory and dedicated ZigBee chipsets, with emphasis on the reliability and security of data transfer. The described biotelemetric system is partitioned into logical parts that communicate using custom data protocols. Devices participating in the biotelemetric system use ZigBee and Ethernet networks as the underlying structure for secure and reliable data communication.
Keywords— Biotelemetry, measurement, security, reliability.
I. INTRODUCTION
This article describes the core of a biotelemetric system aimed at monitoring a patient's vital functions, among others ECG, heart rate, blood pressure and blood oxygen saturation (Fig. 1). There is also a need to track other values: body and ambient temperature, patient's posture, body weight and the patient's location in the monitored space. To accomplish these goals, various kinds of data need to be acquired, processed and forwarded. The transport of biotelemetric data is specific in some ways.
Fig. 1 Concept of biotelemetric system
II. CONCEPT OF BIOTELEMETRIC SYSTEM
The biotelemetric system can be partitioned into two basic parts: an inner part located in the patient's home and an outer part located in the monitoring centre. Both parts are subdivided into participating elements.
A. Inner Part of Model Biotelemetric System
The inner part of the model biotelemetric system is located in the space where the patient spends most of his time. The main purpose of this subsystem is to acquire biotelemetric data and to hand them over to the outer part of the model biotelemetric system [1]. Communication in the inner part of the model biotelemetric system is implemented using ZigBee technology. There are three main hardware elements in the inner part of the model biotelemetric system, all marked with symbolic names:
• CERBERUS - mobile data acquiring unit
• ORTHRUS - stationary data forwarding unit
• SENSORs - stationary data acquiring units
An optional part of the model biotelemetric system is a class of ZigBee routers. These devices are not endpoints of communication and as such are not considered an indispensable part of the model biotelemetric system. Their sole purpose is to provide signal coverage for the ZigBee network. The individual elements of the inner part of the model biotelemetric system and the scheme of data flows among them can be seen in Fig. 2.
Fig. 2 Inner part of model biotelemetric system
The main purpose of the CERBERUS element is to acquire the most important biometric data - the ECG. CERBERUS also acquires the values of body and ambient temperature and changes in the patient's posture. CERBERUS is a mobile unit and is always in close proximity to the patient.
The purpose of the ORTHRUS element is to collect the various measured data from the elements of the inner part of the model biotelemetric system and forward them to the outer part of the system. ORTHRUS acts as the midpoint of the inner part of the model biotelemetric system and is essential for its operation.
SENSORs are stationary dedicated units of various types that measure values not acquirable by CERBERUS. The measurement of these values is not always fully automatic and requires a certain cooperation from the patient. Examples are measurements of heart rate, blood pressure, blood oxygen saturation and other values depending on specialised sensor hardware.
B. Outer Part of Model Biotelemetric System
The outer part of the model biotelemetric system is located outside the space where the patient spends most of his time. A single instance of the outer part of the model biotelemetric system is common to multiple inner parts of systems monitoring individual patients. The inner and outer parts of the system communicate using the TCP and UDP protocols of the TCP/IP protocol suite over a FastEthernet infrastructure. There are four main elements in the outer part of the model biotelemetric system:
• CENTREs - endpoints aggregating measured data
• DHCP server - provides IP settings
• NTP server - provides accurate time
• SNMP NMS - management of the whole system
The individual elements of the outer part of the model biotelemetric system and the data flows among them can be seen in Fig. 3. The ORTHRUS element acts as the demarcation point between the inner and outer parts of the system. Measured values are stored and evaluated in facility CENTRE0 and/or backup facility CENTRE1. Configuration and management of the inner part of the biotelemetric system are fully automatic from the perspective of the patient. Inner system configuration and management are realized using the DHCP, NTP and SNMP protocols.

Fig. 3 Outer part of model biotelemetric system

III. KEY HARDWARE COMPONENTS OF CERBERUS ELEMENT
This paper describes only the key hardware components of the CERBERUS element, as a detailed description of the whole system is outside the scope of this paper. The central data processing element of the CERBERUS unit is a 32-bit RISC microcontroller (MCU) clocked at 48 MHz. The connection between the MCU and its peripherals is accomplished using an SPI bus. ZigBee connectivity is provided by a dedicated ZigBee chipset running the entire ZigBee stack [2]. In case of catastrophic network failure, measured values are cached in a fast FRAM memory.

Fig. 4 Architecture of CERBERUS element
The primary purpose of this cache is to provide temporary data storage in case of a short-term ZigBee and/or Ethernet network failure. Another peripheral connected to the SPI bus is an FPGA parsing the raw biometric data. The FPGA and ZigBee SPI peripherals are interrupt driven. The whole architecture of the CERBERUS element can be seen in Fig. 4.
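The cache logic can be sketched behaviorally as follows (in Python for readability; the real unit implements it on the MCU against the FRAM device). The capacity and record format are assumptions.

```python
# Behavioral sketch of the bounded measurement cache bridging outages.
from collections import deque

class MeasurementCache:
    def __init__(self, capacity=4096):
        self.buf = deque(maxlen=capacity)    # oldest records drop first

    def push(self, record):
        self.buf.append(record)              # cache while the link is down

    def drain(self, send):
        """Replay cached records once connectivity returns."""
        while self.buf:
            record = self.buf.popleft()
            if not send(record):             # transmission failed again
                self.buf.appendleft(record)  # keep the record and retry later
                break
```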
IV. DATA FLOWS IN BIOTELEMETRIC SYSTEM
Data flows in the biotelemetric system can be categorized into several groups. The first group of data contains measured values originating in the CERBERUS element. These data include three-channel ECG data, body temperature data and accelerometer data determining the patient's posture [3]. These data can be seen in Fig. 2, marked by the green arrow.
The second group of data contains measured values originating in the SENSORs elements. Dedicated SENSORs are simple devices injecting data into the model biotelemetric system, and thus their data processing performance is not crucial. These data include measurements of heart rate, blood pressure, blood oxygen saturation, body weight and the patient's location in the monitored space. All of these measurements are of an asynchronous nature and are initiated by the patient's action. Data from the second group are forwarded to the CERBERUS element by the ORTHRUS element. The purpose of this is to provide the CERBERUS element with the possibility to further process these data. This processing may include visualization of the data measured by the SENSORs on the screen of the CERBERUS element [4]. Data belonging to the second group can be seen in Fig. 4, marked by red arrows.
The third group of data exists for management and provisioning purposes. The purpose of this data flow is to provide the means for centralized administration of the inner part of the biotelemetric system. The third group of data allows setting the TCP/IP protocol related settings of the ORTHRUS element and managing the individual instances of the inner part of the biotelemetric system by the SNMP protocol. The third group of data is marked by black arrows in Fig. 3 and by blue arrows in Fig. 2.
V. STATIONARY DATA ACQUIRING UNIT ITEMS
The important parts of the ZigBee network consist of sensors for home safety and a movement monitoring system. The movement monitoring system is based on three different technologies: a PIR sensor network, the Location Engine by Texas Instruments and opto-electronic bars in the doorframes. The opto-electronic bars can detect the direction of movement through the door frames, which is made possible by using two infrared rays functioning as a bar. These three technologies help to determine the position of the monitored person in the home care flat. From this information we can establish the circadian rhythm of the monitored person and use it for accident diagnostics. All these technologies communicate by ZigBee. A key part of the system is the network of smoke detectors, which are useful primarily for user security. These detectors also communicate by ZigBee technology. All the devices work as ZigBee network routers for transferring data from distant sensors to ORTHRUS.
VI. SECURITY AND RELIABILITY OF BIOTELEMETRIC SYSTEM
It is necessary to prevent data loss, as the patient's life may depend on the reliability of the biotelemetric system. The main cause of data loss in a distributed measurement system is network failure. The communication infrastructure of the biotelemetric system is composed of a ZigBee network and a TCP/IP network over FastEthernet. Both of these communication infrastructures provide means for reliable data transport. Because of this, it is possible to determine whether either of these networks is not working. Short-term connectivity drops are compensated by the FRAM memory cache built into both the CERBERUS and ORTHRUS elements. As the size of any type of memory is inevitably limited, it is not possible to compensate lengthy connectivity drops.
As the transported biometric data are of a confidential nature, communication in the wireless environment of the ZigBee network has to be encrypted. The ZigBee network allows data encryption using the Advanced Encryption Standard (AES) 128-bit cipher, including automatic key distribution. The only device participating in the ZigBee network needing specification of the key is the ORTHRUS element (Fig. 5). This key is provided by the provisioning centre using the SNMP protocol.

Fig. 5 Architecture of ORTHRUS element
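For illustration, the sketch below shows AES-128 authenticated encryption of a measurement frame in the spirit of the ZigBee security services (ZigBee uses a CCM-based mode of AES); it relies on the pycryptodome package and deliberately simplifies key and nonce handling, which in the described system is automatic.

```python
# Illustrative AES-128 protection of a biotelemetric frame (pycryptodome).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)          # 128-bit network key (distributed automatically)

def encrypt_frame(payload: bytes):
    nonce = get_random_bytes(11)    # a fresh nonce is needed for every frame
    cipher = AES.new(key, AES.MODE_CCM, nonce=nonce)
    ciphertext, tag = cipher.encrypt_and_digest(payload)
    return nonce, ciphertext, tag   # the tag authenticates the frame

def decrypt_frame(nonce, ciphertext, tag) -> bytes:
    cipher = AES.new(key, AES.MODE_CCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)  # raises if tampered

n, ct, tag = encrypt_frame(b"ECG:512,514,509")
assert decrypt_frame(n, ct, tag) == b"ECG:512,514,509"
```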
VII. CONCLUSIONS
At this time, the model biotelemetric system is being tested and implemented into a working solution. The biotelemetric system is currently designed for indoor use only. Multiple nearby instances of the inner part of the model biotelemetric system managed by a single outer part of the system are possible, but there exists a one-to-one mapping between patient and ZigBee network. Future improvements may include support for outdoor operation with communication implemented using 3G mobile technology and patient tracking by the GPS system. With advancements in low-power high-density FPGA solutions, FPGA programmable system-on-chip technology seems to be promising for the purposes of this biotelemetric system.
ACKNOWLEDGMENT
The work and the contribution were supported in part by the Grant Agency of the Czech Republic, project GACR 102/08/1429, "Safety and security of networked embedded system applications", and by the Ministry of Education of the Czech Republic under Project 1M0567.
REFERENCES
1. Penhaker M, Cerny M, Martinak L, et al. (2006) HomeCare - Smart embedded biotelemetry system. IFMBE Proceedings, World Congress on Medical Physics and Biomedical Engineering, Aug 27-Sep 1, 2006, Seoul, South Korea, Vol. 14, pp 711-714, ISSN: 1680-0737, ISBN: 978-3-540-36839-7
2. Prauzek M, Penhaker M (2009) Methods of comparing ECG reconstruction. In: 2nd International Conference on Biomedical Engineering and Informatics, Tianjin University of Technology, Tianjin, pp 675-678, ISBN: 978-1-4244-4133-4, IEEE Catalog number: CFP0993D-PRT
3. Penhaker M, Cerny M, Rosulek M (2008) Sensitivity analysis and application of transducers. 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 1-3, 2008, Hong Kong, China, pp 85-88, ISBN: 978-1-4244-2252-4
4. Kasik V (2002) FPGA based security system with remote control functions. 5th IFAC Workshop on Programmable Devices and Systems, Nov 22-23, 2001, Gliwice, Poland, IFAC Workshop Series, pp 277-280, ISBN: 0-08-044081-9
5. Cerny M, Penhaker M (2008) Biotelemetry. In: 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Jun 16-20, 2008, Riga, Latvia, Vol. 20, pp 405-408, ISSN: 1680-0737, ISBN: 978-3-540-69366-6
6. Cerny M (2009) Movement monitoring in the HomeCare system. In: IFMBE Proceedings, Vol. 25, Springer, Berlin, ISBN 978-3-642-03897-6, ISSN 1680-0737
7. Vasickova Z, Augustynek M (2009) Using frequency analysis of vibration for detection of epileptic seizure. In: World Congress 2009, Sept 7-12, Munich, ISBN 978-3-642-03897-6, ISSN 1680-0737
8. Horak J, Unucka J, Stromsky J, Marsik V, Orlik A (2006) TRANSCAT DSS architecture and modelling services. Control and Cybernetics 35:47-71
9. Krejcar O, Janckulik D, Motalova L, Kufel J (2009) Mobile monitoring stations and web visualization of biotelemetric system Guardian II. In: EuropeComm 2009, LNICST Vol. 16, pp 284-291, R. Mehmood et al. (Eds), Springer, Heidelberg
10. Krejcar O, Janckulik D, Motalova L (2009) Complex biomedical system with mobile clients. In: World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 7-12, 2009, Munich, Germany, IFMBE Proceedings Vol. 25/5, O. Dössel, W. C. Schlegel (Eds), Springer, Heidelberg
11. Krejcar O, Janckulik D, Motalova L, Frischer R (2009) Architecture of mobile and desktop stations for noninvasive continuous blood pressure measurement. In: World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 7-12, 2009, Munich, Germany, IFMBE Proceedings Vol. 25/5, O. Dössel, W. C. Schlegel (Eds), Springer, Heidelberg
12. Idzkowski A, Walendziuk W (2009) Evaluation of the static posturograph platform accuracy. Journal of Vibroengineering 11(3):511-516, ISSN 1392-8716
13. Penhaker M, Cerny M (2008) The circadian cycle monitoring. 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 1-3, 2008, Hong Kong, China, pp 41-43, ISBN: 978-1-4244-2252-4
Author: Martin Stankus
Institute: VSB - Technical University of Ostrava
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
A Novel Approach for Implementation of Dual Energy Mapping Technique in CT-Based Attenuation Correction Using Single kVP Imaging: A Feasibility Study
B. Teimourian1,2, M.R. Ay2,3,4, H. Ghadiri2,5, M. Shamsaei Zafarghandi1, and H. Zaidi6,7
1 Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran, Iran
2 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran
3 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
4 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
5 Department of Medical Physics, Iran University of Medical Sciences, Tehran, Iran
6 Geneva University Hospital, Division of Nuclear Medicine, Geneva, Switzerland
7 Geneva University, Geneva Neuroscience Center, Geneva, Switzerland
Abstract— Among CT-based attenuation correction methods, the dual-energy technique (DECT) is the most accurate approach, but its use has been limited due to the increased patient dose. In this feasibility study, we introduce a new method that can implement the dual-energy technique with only a single-energy CT scan. In this method, given the CT image at one energy, we generate the CT image at a second energy (henceforth called the virtual dual-energy technique). The attenuation map at 511 keV was generated using the bilinear (the most commonly used method in commercially available PET/CT scanners), dual-energy and virtual dual-energy techniques on phantom and patient data. In the phantom study, the attenuation maps created using the mentioned methods are compared to the theoretical values calculated using the XCOM cross section library. In the patient study, the attenuation map generated using the dual-energy method is considered the gold standard. The results on the phantom data show 10.1 %, 4.2 % and 4.3 % errors for the bilinear, dual-energy and virtual dual-energy techniques, respectively. Also, the results on the patient data show that the virtual dual-energy technique has better agreement with the dual-energy method than the bilinear method, especially in bone tissue (1.5 % and 8.9 % respectively).
Keywords— PET/CT, DECT, attenuation correction, attenuation map, energy mapping.
I. INTRODUCTION
Hybrid positron emission tomography/computed tomography (PET/CT) units have been designed and commercially available since 2000 [1]. The additional morphological information provided by PET/CT scanners, in contrast to stand-alone PET scanners, can be of additional diagnostic value for physicians. Another benefit of PET/CT systems is the faster examination time, since the attenuation map for PET data correction is obtained from the CT scan and not from the much longer transmission scan [2]. Although the fast and precise CT-based attenuation correction (CTAC) method yields a noise-free attenuation map in comparison with a transmission scan, CT images provide the linear attenuation coefficients (LAC) of the tissues at effective CT energies (~60-80 keV) rather than at 511 keV, which is the energy of PET imaging, so it is necessary to convert the LACs at CT energies to those corresponding to 511 keV [3]. Several energy mapping strategies, including scaling [4], segmentation [4], hybrid (segmentation and scaling) [4], bilinear [5] and dual-energy (DECT) [6], have been proposed to convert the LACs of CT images to the LACs at 511 keV. It should be noted that most commercially available PET/CT scanners use the bilinear method. The most accurate of these methods is the dual-energy technique [7]. The drawbacks of this method that render it impractical for commercial PET/CT scanners are the additional dose to the patient, resulting from two CT scans at two different kVPs, as well as the increased scanning time and cost. In this feasibility study we introduce a new method that can implement the dual-energy technique with only a single-energy (kVP) CT acquisition. In this method, given the CT image at one energy, we generate the CT image at a second energy.
II. MATERIALS AND METHODS
A. kVP Conversion Curves
The Alderson RANDO phantom (Radiology Support Devices Company, USA) [8] was scanned on a LightSpeed VCT scanner (GE Healthcare, Milwaukee, USA) with four different tube voltages (80, 100, 120, and 140 kVP) and a tube current of 300 mA. The analysis of the acquired images was done with the AMIDE image viewer [9]. More than 400 different ROIs were selected in each image, and the mean CT number of each ROI at one kVP was plotted versus the same value at another kVP. Finally, the best curve was fitted for each plot to obtain the kVP conversion curves (which scale CT numbers at different tube voltages to each other) in three regions: lung tissue (HU < -100), soft tissue (-100 < HU < 200) and bone tissue (HU > 200). This classification improves the precision of the resulting kVP conversion curves.
The kVP conversion curves are reported here for the combination of 80 kVP/140 kVP. It should be noted that in the dual-energy method, the accuracy of estimating the attenuation map at 511 keV is directly related to the difference between the pair of energies used in each combination [8]. The calculated kVP conversion curves can be used for the virtual generation of a CT image at another kVP. Having the CT image of a patient at one energy and generating the second image at another energy, we are now able to implement the dual-energy technique; this is called the virtual dual-energy method.
B. Phantom Study
For this study, a polyethylene cylindrical phantom (250±0.5 mm diameter) was constructed. The phantom contains 16 cylindrical holes (20±0.5 mm diameter) and four holes in the middle (5±0.5 mm diameter) which were filled with air. One of the 16 holes was filled with water and the others were filled with different concentrations of K2HPO4 in water (for modeling soft tissue and bones of different densities). The concentration of these K2HPO4 solutions was varied from 60 mg/cc to 1800 mg/cc to simulate bones of different densities. This phantom was scanned on the LightSpeed VCT scanner at energy levels of 80 and 140 kVP and a tube current of 400 mA. Using the kVP conversion curves, the phantom image at 80 kVP was derived from the 140 kVP image; the 80 kVP image was derived from 140 kVP, rather than the reverse, because the noise of the CT image is lower at higher kVPs.
C. Patient Study
In sixteen patients, a CT scan was acquired at the energy levels of 80 and 140 kVP for just one slice (ethics license number 1432, Tehran University of Medical Sciences). The data were acquired on the LightSpeed VCT scanner. Using the kVP conversion curves, the patient image at 80 kVP was derived from the 140 kVP image, similarly to the phantom study.
D. Generation of µmap and Comparison Strategy
The reconstructed CT images (512×512 matrix size) were first down-sampled to 128×128 and then smoothed using a 5-mm Gaussian kernel to match the resolution of the PET images. Then the bilinear, dual-energy and virtual dual-energy methods were used to convert CT pixel values in Hounsfield units (HU) to an attenuation map (μmap) at 511 keV. The virtual dual-energy method was implemented using the CT image at 140 kVP and the generated CT image at 80 kVP derived from it. In the phantom study, the μmaps generated using each method were compared to the true values extracted from the XCOM cross section library, as gold standard, for the different concentrations of K2HPO4, while in the patient study the μmap derived from the dual-energy method was considered the gold standard. An ROI analysis was used for the calculation of the percentage relative differences between the values calculated from the attenuation maps generated using the different methods and the gold standard. On the 16 CT images, 120 ROIs were selected. These ROIs were divided into soft tissue, bone and lung groups. Finally, the average relative difference from the gold standard values was calculated for each group and each method.
E. kVP Conversion Curves
Figure 1 shows the kVP conversion curves obtained from the CT scans of the RANDO phantom at tube voltages of 80 and 140 kVP. The conversion equations for this combination are shown in Table 1.

Fig. 1 The kVP conversion curves obtained from CT scans of the RANDO phantom for the combination of 80 kVP/140 kVP for (a) lung tissue, (b) soft tissue and (c) bone tissue
Table 1 The conversion equations for 80 kVP/140 kVP at different tissues

Tissue Type                      Conversion Equation from 140 kVP to 80 kVP
Lung Tissue (HU < -100)          HU80kVp = (1.083 × HU140kVp) + 94.60
Soft Tissue (-100 < HU < 200)    HU80kVp = (1.529 × HU140kVp) + 9.911
Bone Tissue (HU > 200)           HU80kVp = (1.431 × HU140kVp) + 33.45
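Applied piecewise, the equations of Table 1 directly synthesize the virtual 80 kVP image from a measured 140 kVP image; the short sketch below simply restates the table in code.

```python
# Piecewise application of the Table 1 conversion equations.
import numpy as np

def virtual_80kvp(hu140):
    hu140 = np.asarray(hu140, dtype=float)
    lung = 1.083 * hu140 + 94.60   # HU < -100
    soft = 1.529 * hu140 + 9.911   # -100 <= HU <= 200
    bone = 1.431 * hu140 + 33.45   # HU > 200
    return np.where(hu140 < -100, lung, np.where(hu140 > 200, bone, soft))

print(virtual_80kvp(40))   # a 40 HU soft-tissue voxel maps to ~71 HU at 80 kVP
```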
F. Phantom Study
Figure 2 shows the original CT image of the polyethylene phantom at 140 kVP and the attenuation maps generated using the bilinear, dual-energy and virtual dual-energy methods. Table 2 summarizes the percentage relative difference between the calculated linear attenuation coefficients at 511 keV for the different regions of the phantom and the true values extracted from the XCOM cross section library. The average relative differences over all regions calculated using the bilinear, dual-energy and virtual dual-energy methods are 10.1%, 4.2% and 4.3%, respectively.
Table 2 Percentage relative difference between the calculated LACs for different concentrations of K2HPO4 and the reference values extracted from the XCOM cross section library

C† (mg/cc)   Bilinear   DECT   Virtual DECT
Water        0.0        1.0    0.0
120          4.9        2.9    2.0
180          6.6        0.9    1.9
240          11.0       3.7    5.5
300          12.5       4.5    4.5
360          12.9       4.3    4.3
480          13.8       4.1    4.9
540          14.3       3.2    4.0
600          14.2       3.1    3.9
660          15.0       2.3    4.5
720          15.4       2.2    3.7
840          14.0       0.0    2.8
900          12.9       1.4    0.7
1200         9.8        6.1    3.7
1500         3.9        12.1   8.3
1800         0.5        15.6   12.6
Average      10.1       4.2    4.3
† Concentration of K2HPO4 in solution.
Fig. 2 Original CT image (a) and generated attenuation maps using the bilinear (b), dual-energy (80 and 140 kVP) (c) and virtual dual-energy (d) methods

G. Patient Study
Figure 3 shows one chest slice of an original patient CT image at 140 kVP and the attenuation maps generated using the bilinear, dual-energy and virtual dual-energy methods for the same slice.

Fig. 3 Original patient CT image (a) and generated attenuation maps using the bilinear (b), dual-energy (80 and 140 kVP) (c) and virtual dual-energy (d) methods
Figure 4 shows the correlation plots between each energy mapping method and DECT.

Fig. 4 The correlation plots between (a) bilinear and DECT, (b) virtual DECT and DECT
Table 3 summarizes the percentage relative difference between the calculated LACs at 511 keV for the bilinear and virtual dual-energy methods relative to dual-energy (as the gold standard) in different tissues, including lung, soft tissue and bone.

Table 3 Percentage relative difference between the calculated LAC at 511 keV for bilinear and virtual dual-energy with dual-energy (as the gold standard) in different regions of patients

Tissue Type    Bilinear   Virtual DECT
Lung Tissue    16.4       8.0
Soft Tissue    1.6        2.2
Bone Tissue    8.9        1.5
III. CONCLUSION
Among the different CTAC methods for PET data, the bilinear method is the most commonly used method in commercial PET/CT scanners. This method has acceptable accuracy in lung and soft tissue, but overestimates in bone tissue. The dual-energy method, by contrast, gives a good estimate of the attenuation coefficients at 511 keV for all tissues, but its use is limited because of its high dose. In this feasibility study we have introduced a new method that can implement the dual-energy technique with only a single-energy (kVP) CT acquisition.
As shown in Table 2, the dual-energy and virtual dual-energy methods have the lowest errors in obtaining the LACs at 511 keV (4.2 % and 4.3 % respectively). In the patient study results (Figure 4 and Table 3), the virtual dual-energy method has better agreement with the dual-energy method than the bilinear method, especially in bone tissue (1.5 % and 8.9 % respectively).
In summary, the results of this feasibility study show that the virtual dual-energy approach has the same performance as dual-energy in all tissues, while the proposed method has the additional potential advantage of a lower patient dose. It should be noted that all presented results were obtained in the absence of contrast agents and metal implants. Further evaluation using a clinical PET/CT database is underway to evaluate the potential of the technique in a clinical setting.

REFERENCES
1. Townsend DW, Beyer T, Blodgett T (2003) PET/CT scanners: a hardware approach to image fusion. Semin Nucl Med 33:193-204
2. Rehfeld NS, Heismann BJ, Kupferschläger J et al. (2008) Single and dual energy attenuation correction in PET/CT in the presence of iodine based contrast agents. Med Phys 35(5):1959-1969
3. Shirmohammad M, Ay MR, Sarkar S et al. (2008) Comparative assessment of different energy mapping methods for generation of 511-keV attenuation map from CT images in PET/CT systems: a phantom study. IFMBE Proc 22:496-499, Antwerp, Belgium
4. Kinahan PE, Townsend DW, Beyer T et al. (1999) Attenuation correction for a combined 3D PET/CT scanner. Med Phys 25(10):2046-2053
5. Bai C, Shao L, Da Silva AJ et al. (2003) A generalized model for the conversion from CT numbers to linear attenuation coefficients. IEEE Trans Nucl Sci 50(5):1510-1515
6. Guy MJ, Castellano-Smith IA, Flower MA et al. (1998) DETECT - dual energy transmission estimation CT - for improved attenuation correction in SPECT and PET. IEEE Trans Nucl Sci 45:1261-1267
7. Shirmohammad M, Ay MR, Sarkar S et al. (2008) Comparative assessment of different energy mapping methods for generation of 511-keV attenuation map from CT images in PET/CT systems: a phantom study. 5th IEEE ISBI, Paris, France, 2008, pp 644-647
8. RANDO phantom website. URL: http://www.rsdphantoms.com/
9. AMIDE image viewer software. URL: http://amide.sourceforge.net/

† Corresponding Author: Mohammad Reza Ay
Institute: Tehran University of Medical Sciences
Street: Pour Sina
City: Tehran
Country: Iran
Email: [email protected]
Computational Visualization of Tumor Virotherapy
X.F. Gao1, M. Tangney2 and S. Tabirca1
1 Computing Resources for Life Sciences Research, University College Cork, Cork, Ireland
2 Cork Cancer Research Center, Cork, Ireland
Abstract— Recent research has indicated that replication-competent viruses are being tested as tumor therapy agents. The fundamental premise of this therapy is that the viruses infect tumor cells and replicate inside them. Spread of the virus in the tumor should ultimately lead to eradication of the cancer. The outcome of tumor virotherapy depends on the dynamics that arise from the interaction between the virus and tumor cell populations, both of which change in time. Motivated by this novel cancer treatment, we simulate the dynamical process of the interactions between tumor cells and viruses. We have developed a computational model, based on mathematical models, that captures the essential factors involved in cellular dynamics. By analyzing and adjusting the essential parameters, we reconstruct and visualize the process and outcomes of cancer virotherapy in a dynamical system. By comparing with in vivo experiments, we validate that our simulations closely reproduce the three typical types of experimental observations: tumor eradication, therapeutic failure and oscillations.
Keywords— Virotherapy simulation; Computational visualization; Cellular dynamics; Population dynamics
I. INTRODUCTION

Virotherapy is an experimental method of cancer treatment that uses biotechnology to convert viruses into cancer-fighting agents by reprogramming them to selectively lyse and destroy tumour cells, while healthy cells remain relatively undamaged [1]. Over the last few years, several viruses have been altered to selectively infect cancer cells. Viruses such as Newcastle disease virus (NDV), vesicular stomatitis virus (VSV), reovirus and measles virus (MV) seem to have a natural tropism for tumor cells due to a variety of mechanisms [2–6]. The recombinant MV considered in the present study has significant in vitro and in vivo oncolytic activity against various types of cancer. MV exerts its cytopathic effect (CPE) on infected tumor cells through the formation of multinucleated cell aggregates (syncytia) via cell–cell fusion. The giant cell syncytia ultimately die after a few days [7]. Infected cells that have been incorporated into syncytia stop replicating and do not contribute to further growth of the tumor population. Moreover, once infected cells die, they may release free virus particles that can infect surrounding cells. This cell–cell fusion is regarded as an important therapeutic advantage
of MV, as it provides a significant bystander effect that eliminates uninfected cells that are incorporated into syncytia. Fig. 1 shows the three types of tumor cells under MV-NIS injection.
Fig. 1: In vivo MV-NIS infection of prostate cancer xenografts. (a) Uninfected tumor cell; (b) Infected tumor cell; (c) Syncytia
An underlying premise of tumor virotherapy is that the infected tumor cells become factories that generate new virus particles, which proceed to infect additional tumor cells in a series of waves [6]. Such a system may have different outcomes, including the potential for chaotic behavior, which are highly dependent on the interactions between the tumor, virus and immune system cells, as well as on their populations. Hence, modeling these dynamic interactions is essential to understand therapeutic outcomes and optimize therapy. To visualize these experimental observations, we need to build a dynamical system based on mathematical models. Rigorous mathematics plays an important role in the computational modeling of cell biology [8]. In the development of the techniques and algorithms, mathematics provides the tools of numerical and statistical analysis. A process of estimation is the foundation of the computational solutions to mathematical problems, and the accuracy and efficiency of these methods of estimation are the subject of much study. In addition, mathematics helps to identify certain key parameters that play a central role in defining the overall behavior of the system, and thus leads to new predictions and informative experiments. Several mathematical models have been created to understand and characterize the dynamical system (Wu et al. 2004 [9]; Tao and Guo 2005 [10]; Dingli et al. 2006 [11]; Friedman et al. 2006 [12]). These mathematical models are useful to describe the populations of tumor cells and viruses, as well as how infected cells and syncytia
contribute to tumour growth under different combinations. They do not, however, account for active cell motility or for how the macroscopic behavior of tumor cells is affected by the presence of viruses at the microscopic level. In this article, we design a two-dimensional computational simulation, based on a mathematical model developed by David Dingli et al., to visualize the dynamic interactions between the viruses and tumor cells, as well as to identify the vital parameters and their promising ranges. Another important question concerns the role of immune responses in the outcome of therapy, for example an immune response that directly reduces the replication rate of the virus. Our model does not consider the potential interactions between the immune system and the viruses and/or the tumor cells, which could introduce additional complexity into the dynamics.
II. MODELING

There have already been some interesting modeling efforts and observations of recombinant viruses based on the Edmonston vaccine strain of MV, as these vectors have potent and selective oncolytic activity against a wide range of tumors [13–16]. To model the observations of virotherapy, we consider the cellular dynamics of the system, the kinetics of multiplication of tumor cells, the amplification of the virus, and the interactions between the different components (e.g., virus, cell and syncytium). To model the infection of tumour cells, one approach is to simulate the interactions between tumour cells and recombinant viruses during their movements. We have previously presented a 3D computational model that successfully captures many of the cell behaviors that play important roles in cell aggregation and cell sorting [17]. We extend this multicellular dynamic system by introducing new factors that were found to be important in virotherapy.

A cell passes through different stages during its life, namely quiescence, interphase, division and apoptosis. The cell cycle operates continuously during growth, with newly formed daughter cells immediately embarking on their own path to mitosis. In our model, each virtual cell is designed as a discrete unit with the ability to divide, age, die, be infected and fuse with other cells into syncytia. A cell's life cycle and behaviors are implemented as a set of actions which are performed during each simulation time step. After a cell is created by cell division, it enters the quiescent phase, where its biological machinery is not fully functional. We assume these cells cannot perform mitosis or be infected by viruses. Following this period, a cell in the interphase state becomes most active and is able to divide with a probability $P_{Div}$ calculated as

$$P_{Div} = e^{-n} \cdot P_{Div}^{0} \tag{1}$$

where $n$ is the number of neighboring cells (ranging from 0 to 8 in a 2D square lattice) and $P_{Div}^{0}$ is the base probability that a single cell will divide. During the interphase and division states, a normal tumor cell can be infected by free viruses or neighboring infected cells, or fuse into a syncytium. Cells may die when they receive a death signal (e.g., infection) or fail to receive a life-maintaining signal (e.g., failure to divide), namely apoptosis. In our model, if a cell's age exceeds $T_{Apoptosis}$ (18 h in our simulation), it enters a programmed cell death (PCD) phase, with its activity progressively decreasing, and it dies with a probability $P_{die}$:

$$P_{die} = \begin{cases} F(Age), & T_{Apoptosis} < Age \leq T_{Death} \\ 1.0, & Age > T_{Death} \end{cases} \tag{2}$$

$$F(T) = P_{die}^{0} + \frac{(T - T_{Apoptosis})^{2}}{(T_{Death} - T_{Apoptosis})^{2}}$$
where $P_{die}^{0}$ is the base probability of cell death. The probability of dying increases after the apoptosis state, up to 1.0 when the cell's age reaches $T_{Death}$ (24 h in our simulation). A dead cell is removed from the system and, if it was infected, the infectious viruses are released into the surrounding environment with probability $\alpha$, whereas with probability $(1 - \alpha)$ they die with their host cell. The virus in this model is designed as a smaller particle with the ability to float randomly, divide inside a cell and infect an uninfected cell. When referring to cells infected by infectious viruses, the multiplicity of infection (MOI) is the ratio of infectious agents (e.g., viruses) to infection targets (e.g., cells). The actual number of viruses that enter any given cell can be regarded as a statistical process: some cells may absorb more than one virus particle while others may not absorb any. The probability that a cell will absorb $n$ virus particles when inoculated with an MOI of $m$ can be calculated for a given population using a Poisson distribution:
$$P(n) = \alpha \cdot \frac{m^{n} e^{-m}}{n!} \tag{3}$$
where $m$ is the MOI, $n$ is the number of infectious agents that enter the infection target, $\alpha$ is a scaling factor and $P(n)$ is the probability that an infection target (a cell) will be infected by $n$ infectious agents. This relationship is affected by the infectivity of the virus in different situations, such as the type of cells. In our simulation, we test the range of MOI from 1 to 8 in order to generate different treatment results. In principle, an infected cell or syncytium may transfer the virus to a neighboring uninfected cell, which with a certain probability $\lambda$ becomes a single infected cell, whereas with probability $(1 - \lambda)$ it fuses with the infected cell to form a syncytium, or fuses to an already existing syncytium. We assume a syncytium cannot be broken apart, and the invading viruses inside it can duplicate and be released into the environment under certain conditions.
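To make the update rules concrete, the following minimal Python sketch implements the three probabilities of Eqs. (1)–(3); the constant values, the handling of cells younger than the apoptosis age, and the grid bookkeeping are illustrative assumptions, not the exact code behind our simulator.

    import math

    T_APOPTOSIS, T_DEATH = 18.0, 24.0   # cell ages in hours, as in the text
    P_DIV0, P_DIE0 = 0.005, 0.0003      # base probabilities (illustrative values)

    def p_divide(n_neighbors):
        # Eq. (1): division probability decays exponentially with local crowding
        return math.exp(-n_neighbors) * P_DIV0

    def p_die(age):
        # Eq. (2): death probability rises from the base value towards 1.0
        # between T_APOPTOSIS and T_DEATH; certain removal afterwards.
        # Cells younger than T_APOPTOSIS keep the base probability (assumption).
        if age <= T_APOPTOSIS:
            return P_DIE0
        if age <= T_DEATH:
            return P_DIE0 + ((age - T_APOPTOSIS) / (T_DEATH - T_APOPTOSIS)) ** 2
        return 1.0

    def p_absorb(n, moi, alpha=1.0):
        # Eq. (3): scaled Poisson probability of absorbing n virions at MOI m
        return alpha * moi ** n * math.exp(-moi) / math.factorial(n)

A Monte Carlo time step would then draw a uniform random number per cell and compare it against these probabilities to decide division, death or infection.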
Table 1: Parameter ranges tested and the values producing each therapy outcome

Parameter   Range tested     Not cured   Oscillation   Cured
P⁰div       0.001–0.01       0.0031      0.023         0.0026
P⁰die       0.0001–0.0005    0.01        0.0015        0.0011
α           0.01–0.1         0.052       0.03          0.038
λ           0.035–0.1        0.05        0.003         0.035
MOI         1–8              2           3             4
III. VISUALIZATION AND RESULTS
Fig. 2: Three representative examples of therapy with MV-NIS. (a) A tumor that initially responds and then regrows; (b) a tumor that is eradicated; (c) a treated tumor that exhibits oscillations in size as a function of time.
The simulation environment is a 500 × 500 × 500 μm³ cubic space containing randomly mixed tumor cells and viruses at a ratio of 1:10. Each cell has a radius of 6 μm and is initialized with a randomly assigned age between 0 and 24 h. Our mathematical model accounts for three populations: uninfected tumor cells, virus-infected tumor cells (single or incorporated into syncytia), and free viruses. A tumor is considered cured if the total population is reduced to less than one cell. Three types of virotherapy outcome were exhibited: tumor eradication, therapeutic failure, or oscillations in tumor size. Numerous groups of simulations were performed with our model in order to identify which of its components are most critical for this system and their best combinations for successfully simulating the different outcomes. During the parametric studies, we found a promising range for each parameter as well as the optimal value that produced simulation results best matching the three types of outcomes (see Table 1). During each simulation, the size distribution of the total tumor cells, uninfected cells, infected cells and cells incorporated in syncytia is saved, and we averaged several best simulations to outline the three typical outcomes, ranging from tumor eradication to oscillatory behavior (see Fig. 2). We found that tumor eradication occurred mostly with a small λ, which leads to a high efficiency of syncytium formation. In addition, another feature that accompanies tumor eradication is that the population of uninfected cells drops faster than that of the cells incorporated in syncytia. This is compatible with the findings of Dingli et al. We also saved the state of the simulation system at regular intervals to generate animations of the virotherapy process, from which we can clearly visualize the dynamic interactions between the tumor cells and viruses (see Fig. 3).
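The outcome labels can be read directly off the saved population curves; a possible sketch is given below, where the eradication threshold of one cell follows the text, but the oscillation test and its amplitude parameter are our own simplified assumptions.

    import numpy as np

    def classify_outcome(total_cells, osc_rel_amp=0.2):
        # total_cells: 1-D array with the total tumor population over time
        if total_cells.min() < 1.0:                  # fewer than one cell: cured
            return "cured"
        tail = total_cells[len(total_cells) // 2:]   # skip the initial transient
        if tail.max() - tail.min() > osc_rel_amp * tail.mean():
            return "oscillation"                     # sustained large swings in size
        return "not cured"                           # tumor persists or regrows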
Fig. 3: Three representative examples: (a) not cured; (b) cured; (c) oscillation. Red: uninfected cells; Blue: discrete infected cells; Yellow: syncytium
IV. CONCLUSION

Virotherapy using replication-competent viruses is an exciting approach for cancer treatment, since many viruses preferentially infect and destroy tumor cells. Visualization of the interactions between tumor cells and viruses at the cellular level using computational models can greatly facilitate understanding of the virotherapy process. In this paper, a 2D computational model was created to simulate the interactions between tumor cells and injected recombinant viruses. This model is driven by the population dynamics of tumour cells and viruses. We demonstrated the tumour growth behavior and the interactions between tumor cells and viruses as the dynamic simulation progresses, and also presented the outcomes quantitatively, fitting the experimental observations. Our simulations successfully captured therapy outcomes including tumor eradication, oscillatory behavior and therapy failure. During the parameter modifications, we validated this computational model by fitting it to experimental data. It is clear that tumor virotherapy is a highly non-linear process, very sensitive to many factors such as the tumor cell proliferation rate, which is controlled directly by the dividing probability in our model, and the infection probability. In the future, we aim to develop a multicellular engine that can be used to facilitate understanding experimental observations, exploring alternative therapeutic scenarios, predicting experimental results and optimizing therapy.

REFERENCES
1. Kirn D. H., McCormick F. Replicating viruses as selective cancer therapeutics. Mol. Med. Today. 1996;2:519–527.
2. Heise C. C., Williams A., Olesch J., Kirn D. H. Efficacy of a replication-competent adenovirus (ONYX-015) following intratumoral injection: intratumoral spread and distribution effects. Cancer Gene Ther. 1999;6:499–504.
3. Hirasawa K., Nishikawa S. G., Norman K. L., Alain T., Kossakowska A., Lee P. W. Oncolytic reovirus against ovarian and colon cancer. Cancer Res. 2002;62:1696–1701.
4. Lichty B. D., Stojdl D. F., Taylor R. A., et al. Vesicular stomatitis virus: a potential therapeutic virus for the treatment of hematologic malignancy. Hum. Gene Ther. 2004;15:821–831.
5. Peng K. W., TenEyck C. J., Galanis E., Kalli K. R., Hartmann L. C., Russell S. J. Intraperitoneal therapy of ovarian cancer using an engineered measles virus. Cancer Res. 2002;62:4656–4662.
6. Kirn D., Martuza R. L., Zwiebel J. Replication-selective virotherapy for cancer: biological principles, risk management and future directions. Nat. Med. 2001;7:781–787.
7. Peng K. W., Facteau S., Wegman T., O'Kane D., Russell S. J. Non-invasive in vivo monitoring of trackable viruses expressing soluble marker peptides. Nature Medicine. 2002;8:527–531.
8. Fall C. P., Marland E. S., Wagner J. M., Tyson J. J., eds. Computational Cell Biology. Berlin: Springer; 2002.
9. Wu J. T., Kirn D. H., Wein L. M. The potential of bioartificial tumors in oncology research and treatment. Bull. Math. Biol. 2004;66:605–625.
10. Tao Y., Guo Q. Tumor-related angiogenesis. J. Math. Biol. 2005;51:37–74.
11. Dingli D., Cascino M. D., Josic K., Russell S. J., Bajzer Z. Mathematical modeling of cancer radiovirotherapy. Math. Biosci. 2006;199:55–78.
12. Friedman A., Tian J. P., Fulci G., Chiocca E. A., Wang J. Glioma virotherapy: effects of innate immune suppression and increased viral replication capacity. Cancer Res. 2006;66:2314–2319.
13. Wein L. M., Wu J. T., Kirn D. H. Validation and analysis of a mathematical model of a replication-competent oncolytic virus for cancer treatment: implications for virus design and delivery. Cancer Res. 2003;63:1317–1324.
14. Wodarz D. Expression of endothelial cell-specific receptor tyrosine kinases and growth factors in human brain tumors. Hum. Gene Ther.;14.
15. Bajzer Z., Carr T., Josic K., Russell S. J., Dingli D. Modeling of cancer virotherapy with recombinant measles viruses. J. Theor. Biol. 2008;252:109–122.
16. Dingli D., Offord C., Myers R., et al. Dynamics of multiple myeloma tumor therapy with a recombinant measles virus. Cancer Gene Ther. 2009;16:873–882.
17. Gao X. F., Tangney M., Tabirca S. Computational 3D simulation of cellular dynamics. In: Eurographics Ireland Workshop Series (Trinity College Dublin, Ireland); 2009:27–33.
New Approaches for Continuous Non Invasive Blood Pressure Monitoring

Petr Zurek, Martin Cerny, Michal Prauzek, Ondrej Krejcar, and Marek Penhaker

VSB – Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Biomedical Engineering Laboratory, Ostrava, Czech Republic
Abstract— This article focuses on two new approaches to non-invasive blood pressure monitoring. Two possibilities for this measurement are discussed and tested. The first is a system with a NIR (Near Infra Red) CCD camera, which measures changes in blood vessel width; the second is a system which measures the pulse transit time (PTT). The results of laboratory tests are discussed in this article as well.
Keywords— Continuous Blood Pressure, Non-Invasive Measurement, NIR Camera, Pulse Transit Time.
I. INTRODUCTION

Thanks to the hemoglobin in blood, it is possible to take pictures or video sequences of the bloodstream in the hand, where the vessels appear dark and the rest of the scene is brighter. Such images can be obtained with a NIR CCD camera, a standard small-size color PAL camera whose sensing spectrum extends into the near infrared. First, the video sequence is transferred from the camera to a mobile device (PDA, MDA, XDA), a standard PC or a notebook PC. These stations are used to analyze the video sequence and to detect the heart rate and the blood pressure. The video signal from the camera to the client stations is transferred wirelessly using the WiFi standard (802.11b/g). Due to the limited performance of mobile devices, the blood pressure is not visualized; only the actual number is displayed to the user.

The second way to measure continuous, cuffless, noninvasive blood pressure is to derive this value from information about the pulse transit time (PTT). The PTT tells us how fast the volume change (pulse wave) propagates through the blood vessels (Fig. 1). This time changes with the blood pressure as well: theoretically, when the blood pressure grows, the PTT becomes correspondingly shorter. Of course, there are other variables which can affect the PTT (elasticity of the vessels, etc.).

A. NIR CCD System

The measurement scheme consists of one NIR CCD camera with two rings of low-power infrared LEDs, three special high-power infrared LEDs, two power converters, a USB digitizer with a 10-bit RGB A/D converter and a measurement device (PDA or desktop station) (Fig. 2). The camera is applied to the patient's hand, with the three high-power LEDs applied on the opposite side to illuminate the hand with infrared light. Both these elements are powered by specially developed high-power converters.
Fig. 1 Pulse transit time derived from ECG and PPG (photoplethysmography on the finger) signals
II. DEVELOPED SYSTEMS

The measurement chain continues from the camera wirelessly through a WiFi network to a special USB digitizer equipped with a high-speed 10-bit RGB A/D converter. Finally, a desktop station is placed at the end of the measurement chain, or a mobile device is used. The most powerful PDA, the HTC Touch HD, is currently used in the tests with sufficient results. During our project, the software to analyze and measure the vessel width was created in Microsoft Visual Studio 2005.
Fig. 2 Vessel selection for vessel width measurement
The software has two function modes. The first is Image Enhance, from the menu Mode. This mode is used to create an image adjustment sequence for measuring the vessel width. This solution was selected to offer maximal variability for the input image source. Our application embeds many different adjustment filters, offering the user large variability in creating a custom convolution filter, which can be stored in the program. In this way, our software can also be used as a simple image editor. The second mode, Vein Measure, allows the user to set up and run multiple image adjustments and vessel width measurements using the selected process. In this mode it is important to select the part of the image in which the vessel width is to be measured, because more than one vessel may exist in the image. The resulting width is displayed in the output window and stored in a database file for future use.

Experiments

Our first tests were executed on a group of five university students. The test results were expected to give the percentage of successful blood pressure diagnoses and the quality of the measurement in comparison with classical blood pressure devices. We started every test with a measurement on a classical blood pressure device to get a starting value of the blood pressure. This starting value is the input value for our noninvasive blood pressure software. After successful synchronization of the starting value, continuous blood pressure measurement was performed for a period of 10 minutes (this is the minimum period between two measurements on a classical blood pressure device). After the 10-minute period, a second, final comparative measurement was executed and the results were recorded. All these results are summarized in Table 1.
Table 1 Test with a classical blood pressure device and our measurement solution. Values are in [mm Hg]

Person no.   Start pressure   Continuous measurement time from start [min]   Final pressure
                              0      5      10
1            86               86     82     84                               83
2            75               75     74     73                               72
3            82               82     76     80                               79
4            78               78     82     80                               81
5            67               67     75     71                               72
Behind the Scenes – Code Implementation

Our software uses standard .NET Framework libraries such as System.Drawing.Imaging, System.IO and System.Text.RegularExpressions, and is developed in Microsoft Visual Studio. The most important part of the software is the image processing. Working with pixels is possible through the SetPixel and GetPixel methods. Unfortunately, these methods are very slow, and we suggest using direct memory access with the help of unsafe code. To access bitmap pixels it is necessary to lock them in memory with the LockBits method. This method expects as arguments a Rectangle class instance (the bitmap part to be locked) and a System.Drawing.Imaging.ImageLockMode enumeration value. The last argument is a System.Drawing.Imaging.PixelFormat enumeration value, which defines the color depth of the bitmap. The vessel width measurement is realized by a simple function which counts white pixels on a given y-axis. For the required width accuracy, a rotation of the image is needed: the vessel must be vertical with respect to the direction of measurement. This is done automatically by Get_angle(Bitmap b), which computes the angle of the slope against the y-axis. The next function, rotateImage(Bitmap b, float angle), performs the required rotation of the image.
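For illustration, the same measurement idea can be sketched in a few lines of Python (NumPy/SciPy stand in for the .NET imaging calls; the binarization threshold is an assumption):

    import numpy as np
    from scipy import ndimage

    def vessel_width(image, angle_deg, threshold=128):
        # Rotate so the vessel runs vertically, binarize, then count
        # foreground pixels per row (fixed y) and average the counts.
        rotated = ndimage.rotate(image, angle_deg, reshape=False, order=1)
        mask = rotated > threshold          # white pixels = vessel after enhancement
        widths = mask.sum(axis=1)           # pixel count along each row
        widths = widths[widths > 0]         # ignore rows without any vessel
        return widths.mean() if widths.size else 0.0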
A. PTT System

For research in this area we developed a special measurement system. This system is unique in that it can measure different biosignals synchronously at the same time. It consists of basic sensors for measuring biosignals, connected to a biosignal amplifier (g.tec BSAmp). The amplifier is connected to an A/D converter (NI DAQPad 6052), which is connected via FireWire to a PC. For our research we measure the main biosignals such as the ECG (g.tec ECGbox) and photoplethysmography (g.tec g.Pulse). Other measured biosignals are the capacity wave (an ADInstruments MTL1010 piezosensor and a special measuring system with the bhv5355) and the breathing curve (ADInstruments pneumotrace). Last but not least is the continuous invasive blood pressure: the system is prepared to be connected to the Nihon Kohden monitor for this measurement. This is a very important signal, which we can measure synchronously with the other biosignals; we can then compare exactly, over time, our estimated blood pressure (computed using the PTT) with the real blood pressure. The greatest emphasis was placed on testing and verifying the synchronization of all sensors and measurements.

The software for the synchronous measurement of the different biosignals, which we use for computing the pulse transit time and the other values needed for the blood pressure computation, was made in Matlab. Great emphasis was placed on the possibility of setting the signal processing parameters, such as the sample rate, and on the possibility of setting the filtration parameters for each channel separately. This is very important because each channel processes a different biosignal with different frequency characteristics; that is why each channel must be set up separately, based on the processed signal. Emphasis was also placed on the clarity and simplicity of the user interface. At the same time, an algorithm was designed for detecting the important parts (points) of the biosignals and for determining their mutual correlation in time. For the detection of the important points we used and tested the derivative principle and wavelet transformations. This whole measuring system is currently in the testing process. We are preparing it for high-quality measurement of biosignals; then we will be able to determine the exact blood pressure.
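As an illustration of the intended processing, the following Python sketch computes PTT values from synchronously sampled ECG and PPG; scipy's generic peak detector is assumed here in place of the derivative and wavelet detectors mentioned above.

    import numpy as np
    from scipy.signal import find_peaks

    def pulse_transit_times(ecg, ppg, fs):
        # PTT = delay from each ECG R-peak to the next PPG pulse peak, in seconds
        r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                                height=0.6 * np.max(ecg))   # crude R-wave detector
        ppg_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
        ptts = [(ppg_peaks[ppg_peaks > r][0] - r) / fs
                for r in r_peaks if np.any(ppg_peaks > r)]
        return np.array(ptts)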
III. DISCUSSION

Noninvasive, cuffless, continuous blood pressure measurement would be very useful, for example for monitoring the blood pressure of people who have problems with the heart or the arterial system (hypertension, people after a heart attack). At present these people use a standard blood pressure monitor: they have a cuff on their arm, which is filled with air at an adjustable interval. This can be very uncomfortable, especially during the night, and it is not a truly continuous measurement. If we replace it with another system based on passive sensors, it will be more comfortable for the patient and also for the doctors. It can also be used in homecare systems for elderly people, or for people with a handicap, where we need to monitor their vital functions. Indeed, we can employ the blood pressure monitoring system together with, for example, ECG and actimetry monitoring for people who work in dangerous environments (fire brigades, soldiers). The system could detect changes in the health of these people and prevent more serious problems.
Fig. 3 User interface of the system for measuring different biosignals
IV. CONCLUSION

The largest software part for the NIR CCD system is tuned and the program works well. Unfortunately, the visualization and measurement parts should be further extended according to practical needs and performance possibilities. Mobile devices like PDAs lack such possibilities, so these parts exist only in a limited version compared to the desktop client version. These limitations were not found to be a major problem, and our solution is usable. Our first test results are very good and give us positive feedback for future work.

The PTT system is in the testing and tuning process right now. In the near future we would like to measure the first official data, mainly data with synchronous invasive blood pressure. Then we can find the mutual coherence of the biosignals and finally define the algorithm relating the biosignal parameters (PTT) to the blood pressure. This solution can be very useful for investigations where we need not only the blood pressure but also the ECG or some other biosignal, because we do not have to measure the blood pressure separately: we can compute it from these other biosignals. Of course, after testing and tuning this system, we will develop a small mobile system which will be able to measure the biosignals, analyze them, compute the blood pressure and save this information, or send it to a wireless docking or supervisory station.

In the near future we plan to use our solution in a cryogenic room where the temperature goes down to −142 °C. In such
cases, continuous blood pressure monitoring is very important for physicians and other medical staff.
ACKNOWLEDGMENT

The work and the contribution were supported by the Grant Agency of the Czech Republic, project GAČR 102/08/1429 "Safety and security of networked embedded system applications", and by the Ministry of Education of the Czech Republic under Project 1M0567.
REFERENCES

1. Krejcar O., Fojcik P. (2008) Biotelemetric system architecture for patients and physicians – solutions not only for homecare. In: Portable 2008, 2nd IEEE International Interdisciplinary Conference on Portable Information Devices, Ga-Pa, Germany
2. Cerny M., Penhaker M. (2008) Biotelemetry. In: Proceedings of the 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Jun 16–20, 2008, Riga, Latvia, Vol. 20, pp 405–408. ISSN 1680-0737, ISBN 978-3-540-69366-6
3. Černý M. (2009) Movement monitoring in the HomeCare system. In: IFMBE Proceedings, Vol. 25, Springer, Berlin, 2009. ISBN 978-3-642-03897-6, ISSN 1680-0737
4. Krejcar O. (2006) PDPT framework – building information system with wireless connected mobile devices. In: ICINCO 2006, 3rd International Conference on Informatics in Control, Automation and Robotics, INSTICC Press, Setubal, Portugal, pp 162–167
5. Krejcar O., Janckulik D., Motalova L. (2009) Complex biomedical system with mobile clients. In: World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 7–12, 2009, Munich, Germany. IFMBE Proceedings, Vol. 25/5. O. Dössel, W. C. Schlegel (Eds.). Springer, Heidelberg
6. Penhaker M., Cerny M., Martinak L., Spisak J., Valkova A. (2006) HomeCare – smart embedded biotelemetry system. In: World Congress on Medical Physics and Biomedical Engineering, Vol. 14, Aug 27–Sep 1, Seoul, South Korea, pp 711–714
7. Penhaker M., Cerny M., Martinak L., et al. (2006) HomeCare – smart embedded biotelemetry system. In: IFMBE Proceedings, World Congress on Medical Physics and Biomedical Engineering, Aug 27–Sep 1, Seoul, South Korea, Vol. 14, pp 711–714, 2007. ISSN 1680-0737, ISBN 978-3-540-36839-7
8. Peterek T., Žůrek P., Augustynek M., Penhaker M. (2009) Global courseware for visualization and processing biosignals. In: World Congress 2009, Sept 7–12, Munich. ISBN 978-3-642-03897-6, ISSN 1680-0737

Author: Petr Zurek
Institute: VSB – Technical University of Ostrava
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
Wireless Power and Data Transmission for Robotic Endoscopic Capsules

R. Carta¹, J. Thoné¹ and R. Puers¹

¹ K.U.Leuven, ESAT-MICAS Department, Leuven, Belgium
Abstract— Robotic capsular endoscopy is nowadays a very hot topic. Scientists are fascinated by the idea of developing an integrated tool that travels through the human body while sending images, measuring biomedical parameters and performing therapeutic activity. Doctors actively support this futuristic solution aiming at non-invasive examination and therapy. Patients appreciate the idea of swallowing a capsule that performs medical examinations without any pain or discomfort. Although technology has obtained results unthinkable only a few years ago, the main issue is the dramatic lack of energy in the capsule. To date, commercial capsules are purely passive devices relying on batteries that provide a mere 25 mW for 6 to 8 hours. A promising approach to overcome the energy shortage is wireless powering. A condensed set of orthogonal coils inside the capsule can retrieve more than 300 mW from an external magnetic field, without any time limitation. This solution allows the integration in the capsule of highly consuming modules such as diagnostic tools, actuators, a better camera and a high data-rate transmitter. This work presents an overview of this powering solution and proposes a few examples of system integration. Keywords— Capsular endoscopy, wireless powering, inductive powering, data transmission.
I. INTRODUCTION

Capsule endoscopes are more and more accepted as a valid tool for early diagnosis of gastrointestinal (GI) conditions [1]. Since the presentation of the M2A capsule by Given Imaging in 2000 [2], many similar devices have been developed to cover the increasing demand for an alternative to more invasive examination techniques. Up to now, only three companies offer a commercial device used in medical practice: Given Imaging (Israel), Olympus (Japan) and IntroMedic (Korea). All three produce a capsule for the examination of the small bowel, which was historically the first target, as it is hardly reachable by traditional endoscopy. Given Imaging recently obtained FDA clearance for two more capsules, dedicated to the esophagus and the colon respectively [1, 3]. Table 1 summarizes the main characteristics of the commercially available capsules. All of them have dimensions comparable to those of a large vitamin pill in order to facilitate swallowing, even for young patients. With the exception of PillCam ESO2 and PillCam COLON2, a single camera is integrated on board, with a viewing angle of about 150°. Although a second camera is used at the back of both the esophagus and colon capsules to reduce the chance of missing relevant spots, the image resolution is still poor and not comparable with that achieved by traditional endoscopy. Moreover, the limited
frame rate does not allow real-time viewing of the received images, which are viewed off-line [1]. The capsule locomotion is purely passive and relies on peristalsis, which does not allow any kind of control over its orientation or speed. Finally, none of the commercial capsules includes active diagnostic tools or actuators. The main reason for these limited functionalities is the dramatic shortage of available energy. Two silver oxide watch batteries are typically used as a power source. These can only supply an average power of 20 mW for about 8 hours, which is barely enough to transmit a quarter-VGA image every 2 seconds. It is certainly not sufficient to sustain the integration of advanced diagnostic tools. The higher data rate claimed by Given Imaging for PillCam ESO2 and COLON2 is only possible as a consequence of their really short operating time (15 minutes to one hour). To overcome the energy shortage, wireless power transmission can be used, which consists of a 3D-coil inductive link [4]. Providing more than 300 mW without time limitation, this approach enables the integration of high-power-demanding modules [5, 6]. In recent years, a wide range of dedicated tools for robotic endoscopic capsules has been described in the literature. A vision module providing half-VGA images has been described in [7]. A transmitter supporting a data rate up to 2 Mbps has been reported in [8]. Various locomotion strategies have been developed based on magnetic actuation [9], legs [10] and radio-controlled propellers [11]. Numerous actuation systems which can perform biopsy or release medications have been developed and reported in the literature [12]. A spring-actuated biopsy tool has been presented in [13], whereas a rotational micro biopsy device has been described in [14]. These are only a few examples of the results produced by a lively scientific community working on the topic. Once the desirable capabilities of a robotic capsule have been defined, an accurate estimation of the power budget is a must. This budget should describe not only the power consumption of each relevant module but also its estimated operating time.
Table 1: Overview of the available endoscopic capsules [1, 3, 15, 16].

                             GivenImaging                                      Olympus        IntroMedic
                             PillCam SB2   PillCam ESO2   PillCam COLON2      EndoCapsule    MiroCam
Target GI district           Small bowel   Esophagus      Colon               Small bowel    Small bowel
Dimensions [mm]              26 × 11       26 × 11        31 × 11             26 × 11        24 × 11
Weight [g]                   3.7           3.7            -                   3.8            3.4
Field of view                156°          2 × 156°       2 × 156°            145°           150°
Number of LEDs               4             2 × 4          2 × 4               6              6
Type of camera               CMOS          2 × CMOS       2 × CMOS            CCD            CMOS
Image resolution [pixels]    256 × 256     256 × 256      -                   -              320 × 320
Frame rate [fps]             2             14             adjustable          2              3
Image acquisition            Real-time     Real-time      Real-time           Real-time      Off-line
Battery life-time [h]        9             -              -                   8–10           Over 11
Table 2 summarizes the power demand of the main modules of a robotic capsule. Although the overall consumption is out of reach even with inductive powering, it is important to note that a power-wise activation of the high-consuming modules would allow smooth capsule operation. While peristaltic waves may be sufficient for the analysis of the small bowel, active locomotion would allow accurate investigation of some critical parts of the GI tract, speeding up, slowing down or eventually stopping the capsule motion [5]. Although a single capsule addressing the full GI tract is highly desirable, this is still out of reach with current technologies due to the strict volume constraints. A way to overcome the issue consists of developing a capsule for each critical region of the digestive system [10]. Every capsule will have different characteristics and will embed specialized modules.

Table 2: Power consumption of the main modules of a robotic capsule.

Device                      Required power     Operating time
CMOS image sensor           40 mW              70–100%
LED illumination            4 × 10/20 mW       70–100%
Control electronics         40 mW              70–100%
Compression chip            7.5 mW             70–100%
Autofocus system            12 mW (@ 50 V)     50–80%
Active locomotion system    > 300 mW           3–5%
Actuators                   > 200 mW           1–2%
Transmitter                 5 mW               90–100%
Receiver                    5 mW               1–80%
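The duty-cycle column of Table 2 is what makes power-wise activation work: a quick Python check (mid-range duty cycles assumed where the table gives a range) shows that the time-averaged demand stays within the ~300 mW wireless power budget, even though the all-on peak does not.

    # (power in mW, assumed duty cycle) per module, values taken from Table 2
    modules = {
        "CMOS image sensor":   (40.0,     0.85),
        "LED illumination":    (4 * 15.0, 0.85),   # four LEDs at 10-20 mW each
        "control electronics": (40.0,     0.85),
        "compression chip":    (7.5,      0.85),
        "autofocus system":    (12.0,     0.65),
        "locomotion system":   (300.0,    0.04),
        "actuators":           (200.0,    0.015),
        "transmitter":         (5.0,      0.95),
        "receiver":            (5.0,      0.40),
    }
    average_mW = sum(p * d for p, d in modules.values())   # ~155 mW
    peak_mW = sum(p for p, _ in modules.values())          # ~670 mW
    print(f"average {average_mW:.0f} mW vs all-on peak {peak_mW:.0f} mW")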
II. SYSTEM DESIGN

Inductive powering has been widely used to bias medical implants over the past 20 years. A typical system is based on a pair of resonant coils, where the primary is driven by a power amplifier to induce an alternating magnetic field which is partially picked up by a secondary coil, converted into a voltage and used to feed the implanted electronics and actuators. These systems have been successfully applied to many medical devices, from orthopedics [17, 18] and cardiology [19] to hearing and eye implants [20]. Under favorable conditions these systems are characterized by high efficiency and are able to transfer tens of watts.
Unfortunately, the inherent constraints of capsular endoscopy (size, orientation, coupling) do not allow these levels of power to be reached. The continuous movement of the capsule through the twisty GI tract implies a random relative position of the primary and secondary coils. Whereas typical subcutaneous systems are calibrated for small coil separations (1–2 cm), here the primary and secondary coils have to cope with distances in the order of 20 cm and angular misalignments up to 90°. Moreover, the capsule dimensions drastically limit the size of the power receiver, reducing the coupling between the primary and secondary coil. Although the design of an inductive link for this application presents unique challenges, an approach based on a multiple-coil system is possible. Three orthogonal coils embedded in the capsule can catch the external magnetic field simultaneously, with contributions proportional to their coupling conditions with the primary coil. A preliminary design was described by Lenaerts and Puers in [4], based on an external coil wound around the patient's chest and a set of air coils enclosing the power conversion electronics. The operating frequency was set to 1 MHz to reduce the absorption by the human body. The overall volume of the power receiver is a little larger than a cubic centimeter, for a received power of 150 mW in the worst coupling case. The system has been significantly improved in size and materials by the authors, adding a ferrite core to the coils, which locally densifies the magnetic field lines, increasing the amount of collected power up to 330 mW in a mere 0.48 cm³ [6]. Fig. 1 depicts a schematic of the inductive powering module. The external unit includes a class E coil driver and supports the transmission of low data-rate communication by modulating the amplitude of the power carrier. The receiver is composed of three resonant LC tanks tuned at the carrier frequency. The three contributions are rectified and added for further voltage regulation. The implemented system provides the embedded circuitry with two DC lines. The use of a Helmholtz coil at the primary produces a more uniform field than a solenoid,
Fig. 1: Schematic of the wireless power and control data transmission. allowing a better field confinement to the trunk region reducing the undesired exposure of other part of the patient’s body [6]. A high-speed wireless transmitter is required to send real time camera images and other sensor data acquired by the endoscopic capsule through the body, to a portable receiver. A 2 Mbps frequency shift keying (FSK) transmitter was developed, based on earlier work [8], modulating a carrier at a frequency of 330 MHz. At 2 Mbps, it is possible to transmit 15 VGA frames per second (fps), compressed with a factor 20. The carrier frequency is a trade-off between body absorption and antenna efficiency : the higher the carrier frequency, the more transmitted power is absorbed by human tissue, but the more efficient an antenna becomes for a certain volume. As such a high-speed transmitter only makes sense when a higher resolution camera is available, this solution can only be successful in coexistence with a wireless powering module. The down link communication is implemented by on-off keying (OOK) of the 1 MHz inductive power carrier (Fig. 1). Next to providing energy, the link is used for low speed transmission of control commands or programming data to the wireless capsule, at a speed of 9.8 kbps. Demodulation is achieved at the inductive receiver module, by envelope detection of the rectified power carrier. Up link load modulation is not feasible in this configuration, because of the limited coupling between the primary and secondary coils.
III. EXPERIMENTAL RESULTS
A complete characterization of the ferrite coils has been presented by the authors in [6]. The performance of the ferrite coils was compared with that of the air coils developed by Lenaerts [4]. Measured results prove an increase of more than 150% in the received power within the same external field. Besides this important but artificial characterization, a more interesting testbench for the power module consists of its integration with other units designed to be embedded in a robotic capsule. Fig. 2 depicts three successful examples of integration. The integration of 14 mm ferrite coils in a propelled locomoted capsule for the investigation of the stomach was described by the authors in [21]. Although that module was too large to fit in a real-size capsule, it was the first step towards the realization of 9 mm ferrite coils, which were then assembled with several modules optimized for integration in a robotic capsule. The first integration was performed with an illumination module based on six high-brightness surface-mounted LEDs (HSMW-C191 from Avago Technologies). The module was implemented on a flexible substrate and connected to the power receiver as illustrated in Fig. 2. The LEDs are driven in parallel at 3.3 V, for an estimated power consumption of 300 mW. It was proven that the powering system can sustain a bright light in all possible capsule orientations with respect to the external coil. Another module was assembled to test the simultaneous operation of wireless powering and high data-rate transmission. The transmitter is a dedicated 330 MHz FSK device, taking random data from an optical link. In real time, the bit error rate (BER), a figure of merit for the link quality, is shown on the screen of a user application. Qualitative measurements have shown successful transmission of data at 2 Mbps, in a strong inductive field at 1 MHz, at a BER below 10⁻⁴, at a distance of 0.5 m. This BER is enough to ensure error-free transmission when an adequate forward error correction (FEC) scheme is used.
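The need for FEC follows directly from the frame length: at a BER of 10⁻⁴, virtually every uncoded compressed frame contains errors, as the short calculation below shows (frame size taken from the earlier data-rate estimate).

    BER = 1e-4
    FRAME_BITS = int(640 * 480 * 8 / 20)    # one compressed VGA frame, ~123 kbit
    p_frame_error = 1 - (1 - BER) ** FRAME_BITS
    print(f"uncoded frame error probability = {p_frame_error:.6f}")  # ~0.999995
    # Nearly every raw frame is hit by bit errors, which is why a forward
    # error correction scheme is needed to make the link effectively error-free.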
Fig. 2: A few examples of integration of the 3D coils with a locomotion module [21] (left), a 6-LED illumination module (center) and a 2 Mbps transmitter (right).
IV. CONCLUSION

Wireless power supply is a breakthrough in the development of robotic capsules. Providing more than 300 mW to the capsule with no time limitation, 3D inductive powering overcomes the energy shortage which drastically limits the capabilities of commercial capsules, allowing the integration of advanced diagnostic tools. The use of ferrite coils significantly increases the link performance, promoting a volume reduction of the receiving module. A few examples of integration have been presented which prove the high potential of this powering approach.
ACKNOWLEDGMENTS The work is supported by the European Community, within the 6th Framework Programme, through the Vector project (contract number 0339970). The authors wish to thank the project partners and the funding organization.
REFERENCES

1. Moglia A., Menciassi A., Dario P., Cuschieri A. Capsule endoscopy: progress update and challenges ahead. Nat. Rev. Gastroenterol. Hepatol. 2009;6:353–362.
2. Eliakim R. M2A capsule endoscopy – a painless voyage in the small bowel and beyond. Isr Med Assoc J. 2004;6:560–561.
3. GivenImaging at www.givenimaging.com
4. Lenaerts B., Puers R. Inductive powering of a freely moving system. Sensors and Actuators A: Physical. 2005;123-124:522–530.
5. Swain P. The future of wireless capsule endoscopy. World Journal of Gastroenterology. 2008;14:4142–4145.
6. Carta R., Thoné J., Puers R. A wireless power supply system for robotic capsular endoscopes. Sensors and Actuators A: Physical. 2010; in press, corrected proof.
7. Vatteroni M., Covi D., Cavallotti C., et al. Smart optical CMOS sensor for endoluminal applications. Procedia Chemistry. 2009;1:1271–1274. Proceedings of the Eurosensors XXIII conference.
8. Thoné J., Radiom S., Turgis D., Carta R., Gielen G., Puers R. Design of a 2 Mbps FSK near-field transmitter for wireless capsule endoscopy. Sensors and Actuators A: Physical. 2009;156:43–48.
9. Ciuti G., Valdastri P., Menciassi A., Dario P. Robotic magnetic steering and locomotion of microsystems for diagnostic and surgical endoluminal procedures. Robotica. 2009; doi:10.1017/S0263574709990361.
10. Quirini M., Scapellato S., Valdastri P., Menciassi A., Dario P. An approach to capsular endoscopy with active motion. In: Engineering in Medicine and Biology Society, 2007. EMBS 2007. 29th Annual International Conference of the IEEE: 2827–2830; 2007.
11. Tortora G., Valdastri P., Susilo E., et al. Propeller-based wireless device for active capsular endoscopy in the gastric district. Minimally Invasive Therapies and Allied Technologies (MITAT). 2009;18:280–290.
12. Mc Caffrey C., Chevalerias O., O'Mathuna C., Two K. Swallowable-capsule technology. Pervasive Computing. 2008;7:23–29.
13. Park Sunkil, Koo Kyo, Bang Seoung Min, Park Jeong Youp, Song Si Young, Cho Dongil 'Dan'. A novel microactuator for microbiopsy in capsular endoscopes. Journal of Micromechanics and Microengineering. 2008;18:025032 (9pp).
14. Kong Kyoung, Cha Jinhoon, Jeon Doyoung, Dan Cho Dong. A rotational micro biopsy device for the capsule endoscope. In: Intelligent Robots and Systems, 2005 (IROS 2005). 2005 IEEE/RSJ International Conference on: 1839–1843; 2005.
15. Olympus at www.olympus-europa.com/endoscopy/
16. IntroMedic at www.intromedic.com/en/product/productInfo.asp
17. Catrysse M., Hermans B., Puers R. An inductive power system with integrated bi-directional data-transmission. Sensors and Actuators A: Physical. 2004;115:221–229. The 17th European Conference on Solid-State Transducers.
18. Van Ham J., Reynders-Frederix P., Puers R. An autonomous implantable distraction nail controlled by an inductive power and data link. In: Solid-State Sensors, Actuators and Microsystems Conference, 2007. TRANSDUCERS 2007. International: 427–430; 2007.
19. Miura H., Arai S., Kakubari Y., Sato F., Matsuki H., Sato T. Improvement of the transcutaneous energy transmission system utilizing ferrite cored coils for artificial hearts. IEEE Transactions on Magnetics. 2006;42:3578–3580.
20. Schnakenberg U., Walter P., Bögel G., et al. Initial investigations on systems for measuring intraocular pressure. Sensors and Actuators A: Physical. 2000;85:287–291.
21. Carta R., Tortora G., Thoné J., et al. Wireless powering for a self-propelled and steerable endoscopic capsule for stomach inspection. Biosensors and Bioelectronics. 2009;25:845–851.
Author: Riccardo Carta
Institute: K.U.Leuven, ESAT-MICAS Department
Street: Kasteelpark Arenberg 10
City: Leuven
Country: Belgium
Email: [email protected]
Ulcer Detection in Wireless Capsule Endoscopy Images Using Bidimensional Nonlinear Analysis

Vasileios Charisis¹, Alexandra Tsiligiri¹, Leontios J. Hadjileontiadis¹, Christos N. Liatsos²,³, Christos C. Mavrogiannis³, and George D. Sergiadis¹

¹ Aristotle University of Thessaloniki/Department of Electrical & Computer Engineering, Thessaloniki, Greece
² 401 Army General Hospital of Athens/Department of Internal Medicine and Gastroenterology Unit, Athens, Greece
³ Athens University/Academic Department of Gastroenterology, Athens, Greece
Abstract—Wireless Capsule Endoscopy (WCE) constitutes a recent technological breakthrough that enables the observation of the gastrointestinal tract (GT), and especially the entire small bowel, in a non-invasive way compared to traditional imaging techniques. WCE allows a detailed inspection of the intestine and identification of its clinical lesions. However, the main drawback of this method is the time-consuming task of reviewing the vast amount of images produced. To address this, a novel technique for discriminating abnormal endoscopic images related to ulcer, the most common disease of the GT, is presented here. Towards this direction, the Bidimensional Ensemble Empirical Mode Decomposition (BEEMD) was applied to images of the small bowel acquired by a WCE system in order to extract their Intrinsic Mode Functions (IMFs). The IMFs reveal differences in structure from their finest to their coarsest scale, providing a new analysis domain. Additionally, lacunarity analysis was employed as a method to quantify and extract the texture patterns of the images and differentiate the ulcer regions from the healthy ones. Experimental results demonstrated promising classification accuracy (>90%), exhibiting a high potential for WCE analysis. Keywords—Capsule Endoscopy, Bidimensional Ensemble Empirical Mode Decomposition, Lacunarity, Ulcer Detection

I. INTRODUCTION
Gastroenterology is said to be one of the most difficult medical fields, due to the inaccessibility of the gastrointestinal tract (GT) and the complex nature of pathologic findings. One of the most common GT lesions is the peptic ulcer, which arises in the stomach and, more often, in the small intestine, especially in the duodenum. An ulcer is a healing wound that develops on the mucous membrane, and its early diagnosis is extremely important, since it is directly linked to other serious diseases such as Crohn's disease and ulcerative colitis. Traditional imaging techniques for ulcer recognition include push endoscopy, double balloon endoscopy, small bowel follow-through and enteroclysis. However, all these methods cause discomfort to patients. Wireless Capsule Endoscopy (WCE) is a novel technique that allows visualization of the whole GT in a comfortable, noninvasive and efficacious way [1]. The main disadvantage
of WCE is that the clinician needs to examine carefully a video of approximately 55,000 images, even frame by frame in some cases. This renders WCE time-consuming (2–3 hours are needed) and prevents its wide use; hence, automatic detection is an immediate need. Many efforts and computational approaches towards automatic inspection and analysis of WCE images have been reported in the literature. In particular, texture features are extracted from the texture spectrum and neural network techniques are used in order to detect abnormal patterns [2], [3]. In the same direction, L-G graphs and image registration [4], along with segmentation techniques [5], were also used. For polyp and tumor detection, co-occurrence matrices [6], [7] and local binary patterns [8] were employed. Moreover, blood detection was performed with the aid of chromaticity moments [9] and color spectrum transformation [10]. Nevertheless, there is a lack of research on ulcer recognition, in spite of the grave importance and prevalence of the disease. The techniques proposed include MPEG-7 descriptors [11], RGB pixel value evaluation [12], the curvelet transform and uniform local binary patterns [13], and chromaticity moments [14]. The classification accuracy achieved with these methodologies varies from 36% to 88%. The aim of this work is to process WCE data with effective image processing nonlinear analysis, namely the Bidimensional Ensemble Empirical Mode Decomposition (BEEMD) [15]. The analysis output is classified into normal and ulcerous, using the texture patterns extracted by lacunarity analysis [16]. This combination proves to be a promising scheme for WCE image analysis and characterization.

II. MATERIALS AND METHODS
A. Materials

The proposed analysis was realized in MATLAB R2009b (The MathWorks, Inc.). The images used in this work were captured by a PillCam SB capsule (Given Imaging), have 8-bit color resolution and are of 512 × 512 pixel size. They come from 6 patients with ulcerous diseases.
Fig. 1 Examples of the three categories of the analyzed WCE images, i.e., ulcer - easy case (left); ulcer - hard case (center); normal (right).
The data set consists of 80 normal and 62 pathologic images. After thorough examination, with the support of a gastroenterologist, the 62 ulcer images were categorized into two types. The first includes images with ulcer findings that are easily noticeable (38 cases), while the other includes harder cases (24 images). In Fig. 1, an example of each of the three image categories acquired from the endoscopic capsule is given. The left one depicts a big, clear and easily noticeable ulcer, while the center one represents a hard case of ulcer. The right image shows the normal mucous membrane.

B. BEEMD

The Bidimensional Empirical Mode Decomposition (BEMD) is the extension of the Empirical Mode Decomposition (EMD), introduced by Huang et al. [17], to two dimensions. EMD is an alternative way of decomposition: intuitive, adaptive and without a priori analysis basis functions. Thus, it can be applied to both nonlinear and non-stationary time series. EMD is designed to seek and reveal the simple intrinsic oscillatory modes, called Intrinsic Mode Functions (IMFs), inherent in any data. The IMFs are estimated so that any data, even highly complicated, can be represented by them. As far as the 2-Dimensional (2-D) case is concerned [15], there are two methods for the extension: either the 2-D data are treated as a collection of 1-D slices and each slice is decomposed using one-dimensional EMD (pseudo-BEMD), or the EMD algorithm is directly transplanted to 2-D data by replacing the fitting curves (envelopes) with fitting surfaces. The image is subjected to the sifting process until a stopping criterion is satisfied. The first 2-D IMF is produced, and the same process is applied to the difference between the initial image and the first 2-D IMF. This procedure is repeated until the desired number of IMFs is acquired. One of the major drawbacks of the original EMD is the appearance of the mode mixing phenomenon, which is defined as a single IMF either consisting of signals of widely disparate scales, or a signal of a similar scale residing in two or more IMF components. To overcome this problem, a new noise-assisted data analysis method has been proposed by Huang et al. [18], the so-called ensemble EMD (EEMD).
The EMD is applied to every member of an ensemble containing copies of the initial signal to which finite-amplitude white noise has been added. The final IMFs are obtained by averaging the corresponding IMFs of the decompositions of the ensemble. The added white noise forces the different scales of the signal to be projected onto the proper scales of reference established by the white noise, which populates the whole time-frequency space uniformly. The effect of the noise cancels out, since the ensemble mean is the result when the number of trials in the ensemble approaches infinity. In practice, a large number of trials is selected. The concept of EEMD is directly transferred to 2-D space, resulting in the Bidimensional EEMD (BEEMD).

C. Differential Lacunarity

Lacunarity (Lac), another fractal property, is a measure of how a fractal fills space and describes the texture of a fractal [19]. Introduced by Mandelbrot to express an object's deviation from translational invariance [16], it was originally developed to discriminate textures and natural surfaces that share the same fractal dimension. More recently, lacunarity analysis has been used as a more general technique able to describe spatial patterns and distributions. Lac can be used to assess the largeness of gaps or holes in both binary and quantitative data in 1-D (signals) or 2-D (images) sets. A set with low Lac is homogeneous, whereas one with high Lac is heterogeneous, having gaps distributed across a broad range of sizes. It should be noted that sets which are heterogeneous at small scales can be quite homogeneous when examined at larger scales, or vice versa. From this perspective, Lac can be considered a scale-dependent measure of heterogeneity or texture [16]. There are many algorithms to calculate Lac. The most popular seem to be based on the intuitively clear and computationally simple "gliding box algorithm" (GBA) of Allain and Cloitre [20], which refers to 1-D dyadic data. In the case of numerical data, the previous algorithm can be applied if the data are first converted to dyadic form by thresholding, as denoted in the work of Plotnick et al. [21]. Furthermore, Dong [22] proposed another version of lacunarity appropriate for texture analysis of 2-D numerical data (e.g., grayscale images) without the necessity of thresholding. This version is called "Differential Lacunarity" (Dif Lac). In this case, there is a gliding box of r × r and a gliding window of w × w (r < w); the box r is used for the estimation of the box mass of w at every position.
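As a rough illustration, the plain gliding-box lacunarity can be coded as below; the paper's differential variant additionally replaces the window mass with a differential box-counting mass computed over the r × r boxes, which is omitted here for brevity.

    import numpy as np

    def lacunarity(img, w):
        # Gliding-box lacunarity at window size w: Lambda = E[M^2] / E[M]^2,
        # with M the 'mass' (here simply the gray-level sum) of each window.
        H, W = img.shape
        masses = np.array([img[i:i + w, j:j + w].sum()
                           for i in range(H - w + 1)
                           for j in range(W - w + 1)], dtype=float)
        return masses.var() / masses.mean() ** 2 + 1.0   # = var/mean^2 + 1

    def lac_curve(img, sizes=range(4, 31)):
        # Normalized curve, as used in the paper: divide by the w = 4 value
        vals = np.array([lacunarity(img, w) for w in sizes])
        return vals / vals[0]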
D. The proposed method

In our approach, only the texture information is necessary, rendering the color domain useless; that is the reason why the images are converted to grayscale. Then, the images are cropped. The region of interest for the normal pictures varies from 110x110 to 220x220 pixels; for the pathologic images the crop area is not fixed but depends on the size and the position of the ulcer. The denoising stage comes next. Due to the conditions under which they are taken, WCE images are likely to exhibit high levels of noise that need to be eliminated in order to successfully extract the texture patterns of ulcers. BEEMD is applied and each image is decomposed into 8 IMFs. The fitting surface was created using the triangle-based cubic interpolation method. The number of ensemble members was set to 20 and the amplitude of the white noise to 0.1. These values were chosen empirically, after exhaustive experiments, as a trade-off between the degree of mode-mixing elimination and time consumption. The first two IMFs contain the high-frequency components of the original image, and they are subtracted from it; with this subtraction, the noise that the image may contain is suppressed. Experiments were made on removing the first IMF only, but the results were worse. The last step is the feature-pattern extraction procedure, which is achieved by calculating the Dif Lac for each image (for box size r=2 and window size w=4-30 pixels). Dif Lac was adopted to classify the WCE data since it is more precise and capable of revealing sharp changes between neighboring pixels [22], which is necessary in the case of ulcers. A lacunarity-window size curve is produced and the ability of Lac to classify the images is investigated. The classification of the images is achieved using a simple classifier, i.e., discriminant analysis-based classification [23].
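The denoising and feature-extraction steps just described can be summarized in a few lines; this sketch assumes imfs holds the eight BEEMD planes of the image and reuses the differential_lacunarity sketch above:

import numpy as np

def ulcer_feature_curve(image, imfs, w_range=range(4, 31), r=2):
    """Denoise by removing the two highest-frequency 2-D IMFs, then build
    the normalized lacunarity-vs-window-size curve used for classification."""
    denoised = image - imfs[0] - imfs[1]          # noise suppression step
    lac = np.array([differential_lacunarity(denoised, r=r, w=w)
                    for w in w_range])
    return lac / lac[0]                           # normalize to Lac at w=4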
III. RESULTS

The Dif Lac for all the window sizes (4 to 30 pixels) for the three pictures of Fig. 1 is shown in Fig. 2 as a separate curve for each case. It should be mentioned that the Lac curves are normalized to the Lac value corresponding to the smallest window size (w=4), to secure an identical reference level [24]. As can be observed from Fig. 2, there is a clear discrimination between the easy ulcer case and the normal case. On the contrary, the hard ulcer case tends to coincide with the normal one, nonetheless remaining distinguishable. The Lac curve of the normal image appears above the others, since its Lac values are the lowest (and we normalize), due to the approximately normal intensity distribution. The easy ulcer case shows a high deviation from the normal intensity distribution, and thus its Lac curve is the lowest. A middle situation is seen in the hard ulcer case, as the contour of the ulcer produces pixel-value variation, but not as high as that of the obvious ulcer.
Fig. 2 Lacunarity curves of the images shown in Fig. 1

Another point that must be stressed is that the behavior of the normalized Lac in all the above cases resembles that of a hyperbola; therefore, the model function

Λ(w) = a^(b/w) + c,   w ∈ [wmin, wmax]    (1)
was chosen, where a represents the convergence of Λ(w), b the concavity of the hyperbola-like curve, and c the translational term. For each image considered in this study, the best interpretation of the Lac curve by the model function Λ(w) is computed as the solution of a least squares problem in which the parameters a, b, c are the independent variables. The least squares curve fitting was based on the Levenberg-Marquardt algorithm [25], with a maximum of 400 iterations and RMSE < 0.005. The important point about modeling the Lac curve with Λ(w) is that the dimension of the feature space is reduced: the classification is based exclusively on the three model parameters a, b and c instead of the 27 values of Lac. In our approach, the first experiment examines the classification efficiency for the easy ulcer images; the data set consists of the 80 normal and 38 easy-ulcer images. The second experiment examines the hard case (the same 80 normal and 24 hard-ulcer images). Finally, the classification efficiency over all ulcer images is examined (the same 80 normal and 62 ulcer images). The success of classification is measured by accuracy, specificity and sensitivity. Of the total number of pictures in each experiment, 90% is used for training and the remaining 10% for testing. This procedure was repeated 10 times with random training and test sets, and the mean accuracy, specificity and sensitivity were calculated (10-fold cross validation). The classification results are shown in Table 1, where high classification accuracy (>90%) is achieved.
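The fitting step can be reproduced with a standard Levenberg-Marquardt routine; the sketch below uses SciPy's curve_fit with the model of equation (1) as reconstructed above, and the initial guess p0 is an illustrative assumption (LM is unconstrained, so a robust version would keep a positive):

import numpy as np
from scipy.optimize import curve_fit

def lac_model(w, a, b, c):
    """Hyperbola-like model of the normalized lacunarity curve, eq. (1)."""
    return a ** (b / w) + c

def fit_lac_curve(w, lac, max_iter=400):
    """Levenberg-Marquardt fit returning the 3-D feature vector (a, b, c)."""
    popt, _ = curve_fit(lac_model, w, lac, p0=(2.0, 1.0, 0.0),
                        method="lm", maxfev=max_iter)
    return popt

# usage: a, b, c = fit_lac_curve(np.arange(4, 31), normalized_lac_curve)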
Table 1 Classification results

Case         Accuracy         Specificity      Sensitivity
Easy ulcer   92.32% ± 0.21    97.50% ± 0.00    81.40% ± 0.66
Hard ulcer   90.47% ± 0.29    97.49% ± 0.10    67.06% ± 1.21
Total        91.75% ± 0.33    95.07% ± 0.28    87.46% ± 0.68
IV. DISCUSSION
The results for the hard-ulcer case tabulated in Table 1 were, as expected, worse than those for the easy-ulcer case. In particular, the sensitivity is very low, because the texture differences diminish owing to the similarity with the normal region. Improvement is expected by increasing the number of ulcer images and using a more advanced classifier (e.g. SVM). However, the results denote a quite satisfactory overall classification performance, since they are among the highest found in related studies, leading to the conclusion that BEEMD combined with Lac can reveal structural differences between ulcerous and normal conditions. Extended analysis using more cases and images is already under consideration in order to substantiate the promising results presented here.
V. CONCLUSIONS
In this work, a novel scheme was introduced for the analysis of endoscopic capsule images, based on BEEMD in combination with Lac. BEEMD was applied as an image transformation tool that denoised our data and revealed their intrinsic characteristics; in this way, Lac was enabled to achieve efficient discrimination performance. We experimented on normal and ulcer WCE images falling into three categories, and the accuracy reached 91.75%. The proposed approach identifies pattern differences between ulcerous and normal mucous membrane and could facilitate ulcer diagnosis in a future automatic diagnosis system.
REFERENCES

1. N. Kalantzis, A. Avgerinos (2006) Enteroskopisi me asirmati kapsoula [Enteroscopy with wireless capsule]. Vita, Athens
2. V.S. Kodogiannis, M. Boulougoura, J.N. Lygouras, and I. Petrounias (2007) A neuro-fuzzy-based system for detecting abnormal patterns in wireless-capsule endoscopic images. Elsevier, Neurocomputing, vol. 70, pp. 704-717
3. D.K. Iakovidis (2008) Unsupervised summarization of capsule endoscopy video. 4th Int. Conf. IEEE on Intelligent Systems
4. N. Bourbakis (2005) Detecting abnormal patterns in WCE images. 5th IEEE Symposium on BIBE
5. B.V. Dhandra, R. Hegadi, M. Hangarge, and V.S. Malelath (2006) Analysis of abnormality in endoscopic images using combined HSI color space and watershed segmentation. 18th Int. Conf. IEEE ICPR
6. S.A. Karkanis, D.K. Iakovidis, D.E. Maroulis, et al. (2003) Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Trans. Inf. Tech. in Biomed., vol. 7, no. 3
7. S. Ameling, S. Wirth, D. Paulus, G. Lacey, and F. Vilarino (2009) Texture-based polyp detection in colonoscopy. Bildverarbeitung fur die Medizin 2009, pp. 346-350
8. D.K. Iakovidis, D. Maroulis, and S.A. Karkanis (2006) An intelligent system for automatic detection of gastrointestinal adenomas in video endoscopy. Elsevier Comp. in Biol. and Med, vol. 26, pp. 1084-1103
9. Baopu Li and Max Q.-H. Meng (2008) Computer aided detection of bleeding in capsule endoscopy images. Canadian Conference on Electrical and Computer Engineering
10. Y.S. Jung, Y.H. Kim, D.H. Lee, and J.H. Kim (2008) Active blood detection in a high resolution capsule endoscopy using color spectrum transformation. 2008 Int. Conf. IEEE on Biomed. Eng. and Inf.
11. M. Coimbra and J.P. Silva Cunha (2006) MPEG-7 visual descriptors: contributions for automated feature extraction in capsule endoscopy. IEEE Trans. on Circuits and Systems for Video Technology, vol. 16
12. T. Gan, J.-C. Wu, Ni-Ni Rao, T. Chen and B. Liu (2008) A feasibility trial of computer-aided diagnosis for enteric lesions in capsule endoscopy. World Journal of Gastroenterology, 14(45), pp. 6929-6935
13. Baopu Li and Max Q.-H. Meng (2008) Ulcer recognition in capsule endoscopy images by texture features. Proc. of 7th IEEE World Congress on Intelligent Control and Automation (WCICA), pp. 234-239
14. Baopu Li and Max Q.-H. Meng (2009) Computer-based detection of bleeding and ulcer in wireless capsule endoscopic images by chromaticity moments. Elsevier Comp. in Biol. and Med, vol. 39, pp. 141-147
15. Zhaohua Wu, Norden E. Huang, and Xianyao Chen (2009) The multi-dimensional ensemble empirical mode decomposition method. Adv. in Adapt. Data Analysis, vol. 1, no. 3, pp. 339-372
16. B.B. Mandelbrot (1993) A fractal's lacunarity, and how it can be tuned and measured. In: Fractals in Biology and Medicine. Birkhauser, Basel, pp. 8-21
17. Norden E. Huang, Zhen Shen, et al. (1998) The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. A
18. Zhaohua Wu and Norden E. Huang (2005) Ensemble empirical mode decomposition: A noise-assisted data analysis method
19. B.B. Mandelbrot (1983) The Fractal Geometry of Nature. Freeman, New York
20. C. Allain and M. Cloitre (1991) Characterizing the lacunarity of random and deterministic fractal sets. Phys. Rev. A, vol. 44, no. 6, pp. 3552-3558
21. R.E. Plotnick, R.H. Gardner, W.W. Hargrove, K. Prestegaard, and M. Perlmutter (1996) Lacunarity analysis: A general technique for the analysis of spatial patterns. Phys. Rev. E, vol. 53, no. 5, pp. 5461-5468
22. P. Dong (2000) Test of a new lacunarity estimation method for image texture analysis. Int. J. Remote Sens., vol. 21, no. 7, pp. 3369-3373
23. W.J. Krzanowski (1988) Principles of Multivariate Analysis: A User's Perspective. Oxford University Press, New York
24. L.J. Hadjileontiadis (2009) A texture-based classification of crackles and squawks using lacunarity. IEEE Trans. Biomed. Eng., vol. 56, no. 3
25. D. Marquardt (1963) An algorithm for least squares estimation of nonlinear parameters. SIAM J. Appl. Math, vol. 11, pp. 431-441

Author: Leontios J. Hadjileontiadis
Institute: Aristotle University of Thessaloniki
Street: University Campus, GR-541 24
City: Thessaloniki
Country: Greece
Email: [email protected]
Pre-clinical physiological data acquisition and testing of the IMAGE sensing device for exercise guidance and real-time monitoring of cardiovascular disease patients

A. Astaras1, A. Kokonozi1, E. Michail1, D. Filos1, I. Chouvarda1, O. Grossenbacher2, J.-M. Koller2, R. Leopoldo2, J.-A. Porchet2, M. Correvon2, J. Luprano2, A. Sipilä3 and N. Maglaveras1
1 Laboratory of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Greece ([email protected])
2 Centre Suisse d'Electronique et de Microtechnique CSEM SA, Neuchâtel, Switzerland ([email protected])
3 Clothing Plus Oy, Kankaanpää, Finland ([email protected])
Abstract— Non-invasive monitoring of a patient's vital signs outside the medical centre is essential for the remote management of chronic cardiovascular diseases (CVD), such as Heart Failure (HF) and Coronary Artery Disease (CAD). In this work we present preliminary results from pre-clinical testing of the IMAGE sensing platform, a wearable device designed for wireless real-time data acquisition and monitoring of CVD patients' physiological responses, primarily while they are exercising. The device is capable of acquiring and processing on board 3-lead electrocardiogram (ECG) and bioimpedance measurements, obtaining multi-sensor oximetry data, as well as recording torso movement and inclination. Pilot testing has so far primarily focused on optimising the hardware and experimental protocol, using healthy volunteers. A planned clinical study involving CVD patients is expected to commence within the next few months and provide more detailed experimental results, as part of a research and development effort into real-time exercise guidance and early-warning alert generation for patients and clinicians. The IMAGE device has been developed by CSEM SA, a partner in the HeartCycle consortium, a biomedical engineering project co-funded by the EU 7th Framework Programme.
Keywords— Wireless, Real-Time, Sensor, Exercise, Cardiovascular.

I. INTRODUCTION

Multiple research efforts have confirmed the importance of regular exercise during the rehabilitation phase of medical treatment for CAD patients [1]. Analysis of 22 randomized trials involving more than 4000 patients showed a reduction of 20%-25% in both general and cardiovascular-related mortality among patients receiving exercise-based rehabilitation after a myocardial infarction, as compared with those not receiving rehabilitation treatment [2]. Advances in information technology have made it possible for monitored and guided physical exercise during the CAD rehabilitation period to take place at home, with increased safety, convenience and other added benefits for patients, clinicians and the healthcare system [3]. Several portable, wearable and even ingestible systems involving sensing and on-board real-time signal processing have been developed in the past three decades, in the framework of academic and industrial research projects [5, 6, 7]. The more recent versions of such systems are typically capable of acquiring physiological signals, storing the data, transparently extracting and evaluating vital parameters in near real time, and in some cases generating alerts aimed at the patients, their carers or both. Increasing hardware integration, miniaturization and power autonomy of such medical data acquisition devices enable their end users to incorporate them into their lifestyle, dramatically improving the amount and quality of acquired data. This approach is adopted by the EU FP7 co-funded project HeartCycle [8], which aims at the development of closed-loop, personalized, home care services for cardiac patients. The portable and wearable HeartCycle system enables physicians to telemetrically obtain readings from their patients while they are working, resting, exercising or sleeping in their regular surroundings, away from the medical centre.
Fig. 1: The HeartCycle IMAGE sensor platform comprising the data acquisition electronics and elastic underwear vest.

The integrated data acquisition hardware platform developed within HeartCycle aims to provide a variety of medical measurements, which are collected by several sensing devices. Activity and lifestyle measurements such as time spent walking, lying down or running, the amount of oxygen
contained in a subject's blood at any given time, fluid accumulation in the body, and the pulse and breathing rates are all measurements of particular interest to cardiologists following the progress and recuperation of their patients. One of the main research and development directions within the HeartCycle project is a guided exercise (GE) system, which will be capable of providing feedback information and real-time guidance to post-myocardial infarction (MI) patients while they are following their rehabilitation exercise program. Exercise in this respect helps preserve the patient's quality of life while augmenting their physical endurance and improving prognosis. GE helps patients adhere to the prescribed exercise regimen, maximise their cardiovascular fitness and integrate fitness maintenance into their daily routine. It furthermore enables healthcare professionals to monitor patients' progress and compliance, as well as to alert the patients themselves in a timely manner should medical necessity arise. While the patient is exercising, the acquired physiological signals are processed in real time and pertinent advisory messages are generated, ensuring that the exercise is carried out at an optimum balance between effectiveness and health safety. Initially, a healthcare professional selects an appropriate exercise plan, which is subsequently updated based on acquired physiological information, questionnaires completed by the patient and a medical expert's evaluation of the patient's overall health condition. The patient exercises while the system constantly monitors whether the workload, heart rate and breathing frequency are within the personalised safety and effectiveness thresholds, determined for each individual based on their health status and personal goals. During the post-exercise recovery phase, the rate of change in the user's vital signs is evaluated in order to assess fitness and cardiovascular risk, and the user receives summary feedback on the exercise. The acquired data is analyzed on the basis of adherence to the target, intensity, duration, and effectiveness. The user's fitness level is updated and the weekly exercise plan altered accordingly. The overall progress and physiological parameter trends are made available to both the user and authorised carers. The data acquisition system consists of the wearable IMAGE device, a custom-designed elastic exercise underwear vest, a wearable palmtop digital assistant (PDA) functioning as a short-range wireless interface between the user and the IMAGE device, and a Patient Home Station which manages patient questionnaires and reports, assesses overall health status and acts as a longer-term data repository and transmission station. The aim of this work is to present preliminary results from pre-clinical hardware optimisation testing of the IMAGE wearable sensing system carried out in the Laboratory
of Medical Informatics of the Aristotle University of Thessaloniki (AUTH) Medical School (Greece).

II. METHODOLOGY
A. The IMAGE multi-sensor data acquisition device

The IMAGE integrated sensing device, developed by CSEM SA, a Swiss partner participating in the HeartCycle consortium, is a platform designed to achieve the aforementioned targets. It incorporates a 2-lead electrocardiograph, bioimpedance measurements and 3D accelerometry into a versatile wearable device, worn on the chest using a specially designed elastic sleeveless shirt developed by another HeartCycle partner, the Finnish company Clothing+ (fig. 1). The IMAGE sensing device is capable of acquiring data for time intervals longer than 8 hours, of on-board data processing and storage, as well as of wireless transfer of the acquired data to a nearby palmtop digital assistant (PDA) in near-real time using the IEEE 802.15.4 transmission protocol. The wearer of the device, be they a physically exercising subject or a heart-failure patient under medical observation, is kept within the information loop via their portable PDA device, such as a smart mobile phone. User alerts are generated to inform or motivate the user and maintain a log of their daily physical activity, for instance regarding the estimated duration and quality of performed daily exercise, unusual trends in fluid accumulation in the chest or excessive stress suffered by the cardiovascular system. While a user wears the IMAGE device and shirt, the data acquisition system records multiple-channel raw ECG, bioimpedance, oximetry and gyroscopic acceleration signals. The raw signals are stored in internal memory, while also being processed in real time in order to extract additional parameters such as heart rate, breathing rate and activity intensity. Further processing attempts to automatically determine the type of activity being performed, to evaluate the consistency (and therefore quality) of the incoming signals, as well as to generate instructions and motivational messages for the user.

B. Experimental setup and protocol description

The IMAGE device is at an advanced prototyping stage and is currently being pre-clinically tested by CSEM SA, AUTH and the VTT Technical Research Centre (Finland). The AUTH team is contributing to the hardware troubleshooting and optimization and has established a pre-clinical trial protocol involving primarily healthy subjects. The protocol involves lying, sitting, standing, walking, running and static cycling activities and has been designed to be experimentally compatible with the standardized stress
tests used by clinicians as diagnostic tools for heart failure and other cardiopathy patients. The IMAGE device is currently undergoing evaluation and will subsequently be validated against regular cardiography equipment, on healthy subjects undergoing standard cardiac stress tests. This research is expected to be followed by further guided exercise clinical trials planned within HeartCycle. One of the main issues the pre-clinical evaluation team is facing with the IMAGE device has been the presence of noise in the raw ECG and bioimpedance signals. Such noise can occasionally reach amplitudes greater than the signal itself and overwhelm the signal processing algorithms, thus finding its way into the extracted physiological parameters, such as the heart rate, breathing rate and oximetry estimates. This is due to poor or unpredictable conductance of the dry electrodes, and several possible causative factors are currently being investigated. The fact that noise levels vary during different types of physical activity, involving the same subject and setup, points to electrode movement due to vibrations, inconsistent electrode pressure applied by the underwear vest, skin sweating causing signal saturation, as well as dirt, grease and body hair altering conductivity on different skin patches of the same subject. A recent firmware update has improved the embedded signal processing algorithms, while experimentation with the placement of the electrodes on the body, the tightness of the elastic underwear vest and improved offline signal processing are all being used in order to alleviate the problem. The latest November 2009 firmware update (v0.3 build 761) delivered noticeable improvements in the reliability of the ECG quality index and activity classification algorithms. The latter has since been correctly identifying the exercise activity most of the time, with the exception of cycling (under development).
Regarding the raw acquired ECG data, the quality index did not consistently correspond to the true quality of the recordings. All of these issues have been addressed in the recent v0.3 firmware update.
Fig. 2: A single-electrode raw ECG recording of a half-hour exercise session (top), an 8-heartbeat detail of the same signal (bottom) and an estimated ECG signal quality index (based on both ECG leads; on a scale of 0-255, 0=best).
III. RESULTS
A male subject (35 years old) participated in a mostly outdoor exercise session for approximately 30 minutes. The routine comprised 2 min lying, 2 min sitting, 4 min walking, 3 min brisk walking, 7 min running, 4.26 min walking, 2.04 min sitting and finally 2 min lying. The device was running an older version of firmware (v0.2, build 686). Graphs based on acquired data from this test are presented in fig. 2 and fig. 3. Most of the performed activities were identified correctly by the automated IMAGE classification algorithm, with the exception of brisk walking, which was classified as a mixed sequence of walking and running. Descending stairs was classified as a mix of standing/sitting and walking, while ascending stairs was classified as a sequence of running, walking and standing/sitting activities.
Among the IMAGE sensor outputs is the HR, calculated from the recordings of both ECG electrodes. Using the average of the same signals, the HR was also calculated offline with the PhysioNet algorithm, at a sampling interval of 7 s. The results from both methods can be seen in fig. 3 (bottom). During a second experimental session involving an updated version of the IMAGE device (firmware v0.3 build 761), the same male subject followed a slightly modified exercise routine (2 min lying, 2 min sitting, 10 min walking, 10 min running, 10 min walking, 2 min sitting, 2 min lying). Oxygen saturation measurements were obtained in this session, using the electrode placed below the throat area. A third exercise session involved a female subject (40 years old) static cycling for 14 minutes. Periodic noise artifacts were present in the ECG data obtained from one of the electrodes, a problem which was subsequently tracked down
to cabling problems. The raw ECG and extracted BR data from both IMAGE electrodes can be seen in fig. 3 below.
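For reference, an offline HR trace of the kind superimposed in fig. 3 can be sketched as follows. This is a naive threshold-based R-peak detector standing in for the PhysioNet sqrs/tach algorithms (which are not reproduced here); the threshold and the 7 s output grid are illustrative assumptions:

import numpy as np

def heart_rate_trace(ecg, fs, step_s=7.0):
    """Crude offline HR estimate: threshold R-peak detection, then the R-R
    intervals are converted to beats/min and resampled every step_s seconds."""
    x = ecg - np.median(ecg)
    above = x > 0.6 * np.max(np.abs(x))                # illustrative threshold
    idx = np.flatnonzero(~above[:-1] & above[1:]) + 1  # rising edges ~ R peaks
    peaks = idx / fs                                   # peak times in seconds
    rr = np.diff(peaks)                                # R-R intervals
    grid = np.arange(0.0, peaks[-1], step_s)           # one HR value per step
    return grid, np.interp(grid, peaks[1:], 60.0 / rr)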
Fig. 3: Activity classification (top), breathing rate (middle) and heart rate (bottom), as estimated by algorithms embedded in the IMAGE device. The bottom graph superimposes HR extracted by an offline algorithm running on a PC, based on single-electrode data.

IV. CONCLUSIONS & FURTHER WORK

The IMAGE system, partly comprising a wearable dry-electrode prototype device capable of acquiring multiple-channel physiological data from cardiovascular disease patients, is being evaluated using healthy volunteers. A pre-clinical evaluation experimental protocol was developed for this purpose, designed to identify noise, accuracy and reliability issues with the hardware and embedded firmware. The results show that the system is capable of acquiring, storing and processing multiple-channel ECG, bioimpedance and oximetry measurements, as well as of wirelessly transmitting them to a base station in real time. There are several impediments being addressed at this prototyping stage, mostly involving noisy raw signals due to movement artefacts, saturation of the dry electrodes from sweat and identifying an optimum and consistent level of pressure from the elastic underwear vest. Comparative signal processing between the firmware and offline algorithms indicates that there is room for improvement on the signal processing level as well, particularly for heart rate extraction. Further work will involve more consistent exercise workloads by focusing more on indoor treadmill and static cycling, in order to obtain more reproducible and comparable (albeit less realistic) experimental results. Furthermore, comparative clinical testing of the IMAGE device against state-of-the-art ECG equipment is expected to commence within the next months, involving both healthy subjects and CVD patients.

ACKNOWLEDGMENTS

The aforementioned work received funding from the European Community's 7th Framework Programme under grant agreement n° FP7-216695, the HeartCycle project. The authors are particularly grateful to Clothing+ (Finland) for providing the underwear vests for the IMAGE system.

REFERENCES

1. Frontera W, Slovik D and Dawson D (2006) Exercise in Rehabilitation Medicine, 2nd Edition
2. Miller T et al (1997) Exercise and its role in the prevention and rehabilitation of cardiovascular disease. Annals of Behavioral Medicine 3:220-229
3. Engelse W A H and Zeelenberg C (1979) A single scan algorithm for QRS-detection and feature extraction. Computers in Cardiology 6:37-42
4. Moody G B, PhysioToolkit: Open Source Software for Science and Engineering, http://www.physionet.org/physiotools/ [feb 2010], sqrs and tach algorithms
5. Oliveira J, Ribeiro F and Gomes H (2008) Effects of a home-based cardiac rehabilitation program on the physical activity levels of patients with coronary artery disease. J. Cardiopulm. Rehabil. Prev., vol. 28, no. 6, pp. 392-396
6. Astaras A, Ahmadian M, Aydin N et al (2002) A miniature integrated electronics sensor capsule for real-time monitoring of the gastrointestinal tract (IDEAS). Proc. of the IEEE ICBME conference, Singapore
7. Luprano J and Chételat O (2008) Sensors and Parameter Extraction by Wearable Systems: Present Situation and Future. pHealth 2008
8. The HeartCycle Project, FP7, http://www.heartcycle.eu/ [feb 2010]
Thermal Images of Electrically Stimulated Breast: A Simulation Study

H. Feza Carlak2,1, Nevzat G. Gençer1 and Cengiz Beşikçi1
1 Department of Electrical-Electronics Engineering, Middle East Technical University, Ankara, Turkey
2 Department of Electrical-Electronics Engineering, Akdeniz University, Antalya, Turkey
Abstract— The thermal and electrical properties of biological tissues differ from tissue to tissue. It is also known that the electrical conductivity and the metabolic heat source of tissues change depending on their state of health. These differing thermal and electrical properties cause differences in thermal emission, which makes thermal imaging an important diagnostic approach. Infrared imaging has a limited performance for breast cancer diagnosis due to patient movement and detector sensitivity. However, this performance can be improved by applying currents at different frequencies within medical safety limits: owing to the different electrical properties of tissues, the temperature differences between tissues can be increased with the help of current application, and malignant tissue can be distinguished from healthy tissue in the thermal image. In this study, a two-dimensional model of the breast with cancerous tissue is developed. To obtain the temperature distribution, the Pennes bio-heat equation is solved with the finite element method. Secondary heat sources are generated in the model by applying currents from the boundary and solving for the resulting electric field. Simulations are implemented for different tumor locations and at different frequencies for the same object. Different temperature distributions are obtained by changing the frequency of the stimulation current and the corresponding conductivity value. It is shown that the imaging performance can be increased with the applied currents, and tumors can be sensed 5 cm away from the surface with state-of-the-art thermal infrared imagers.
Keywords— thermal imaging, medical imaging, breast cancer simulation, bio-heat equation

I. INTRODUCTION

Infrared technology arose from military research and development. Thermal imaging was first used in clinical applications in the 1970s. In those days, digital infrared cameras had poor spatial resolution (128x128 pixels), limited thermal resolution (approximately ±0.1˚C) and slow response times due to single-detector scanning. In recent years, however, very fast focal plane array cameras operating in the >8 μm region (>100 frames/sec, 256x256 pixels or more per frame) and high-resolution fast scanning cameras (>10 frames/sec, 256x256 pixels) have been developed. Temperature differences at the level of 20 m˚C can be reliably sensed with state-of-the-art thermal infrared cameras [7]. Mammography is the most common method for this purpose. However, it has some diagnostic accuracy problems, especially for small tumors, and is not so comfortable for patients; tumors of about 1.66 cm can be sensed by mammography. Ultrasound also has some disadvantages: it cannot image areas deep inside the breast and cannot show microcalcifications [8]. Thermal infrared cameras are non-ionizing, risk-free and patient-friendly, and their price-to-performance ratio is higher compared to other imaging systems. This value can be improved by signal processing algorithms and techniques. In our study, the woman's breast is modeled two-dimensionally. Malignant tissue is located inside the healthy breast tissue, and the bio-heat equation is solved with the finite element method. Then, we obtain the temperature distribution by applying current. The simulation is also implemented at different frequencies for the same object. The inserted thermal and electrical conductivity parameters, metabolic heat values and blood perfusion rates are real values taken from the literature. Our objective in the simulations was to determine whether the temperature distribution of cancerous tissue at specific depths can be obtained with recent thermal infrared cameras, and how much the temperature difference between malignant and healthy tissue can be improved by applying various low-frequency currents within medical safety limits.
Fig. 1 Schematic of Electrically Stimulated Thermal Imaging
Table 1 Abbreviations of thermal images

T(x,y)     thermal image of healthy tissue
T0(x,y)    thermal image of malignant tissue
T'(x,y)    thermal image of healthy tissue when current is applied
T0'(x,y)   thermal image of malignant tissue when current is applied
The question is whether we can improve the contrast between healthy and cancerous tissue or not:

T0'(x,y) − T'(x,y) > T0(x,y) − T(x,y)    (1)
Performance improvement due to current application is defined as how much we can improve the temperature difference between cancerous tissue and healthy tissue.

II. NUMERICAL MODELING AND ANALYSIS
To obtain the temperature distribution of tissue, Pennes proposed a method which describes the effects of metabolic heat generation and blood perfusion on the energy balance. The Pennes bio-heat equation is the most common approximation used for heat problems in biological tissues, and it describes the thermal interaction between tissues and perfused blood in detail:

ρC ∂T/∂t = ∇·(k∇T) + Qmet + QB    (2)

QB = ρB CB WB (TB − T)    (3)

where ρ is density (kg/m³), C is specific heat (J/kg·K), T is temperature (K), k is thermal conductivity (W/m·K), Qmet is metabolic heat generation (W/m³), QB is the heat source due to blood perfusion (W/m³), ρB is blood mass density (kg/m³), CB is blood specific heat (J/kg·K), WB is the blood perfusion rate (1/s), and TB is the ambient blood temperature (K).

Table 2 Normal and malignant tissue electrical and thermal characteristics [1], [2], [3]

               Healthy tissue   Malignant tissue
ρ (kg/m³)      920              920
ρB (kg/m³)     1000             1000
CB (J/kg·K)    4200             4200
WB (1/s)       0.0018           0.009
TB (K)         310.15           310.15
QB (W/m³)      450              29000
k (W/m·K)      0.42             0.42
σ (S/m)        0.0283           0.1804

In our study, the current application introduces a new energy source. When there is no external source, the existing state can be considered a thermal equilibrium state; the applied current adds a heat source and ends this equilibrium. After the new steady state is approached, the contrast between malignant and healthy tissue increases, which allows cancerous tissue to be detected even when it is located deeper. Owing to the external source that we create, a new term is inserted into the bio-heat equation, and this modified version is solved to obtain different images with different temperature distributions and contrasts. Meanwhile, keeping the current value constant, different temperature distributions can be obtained by altering the frequency for the same object. We call the modified form of the equation the 'Electrically Stimulated Pennes Bio-Heat Equation':

ρC ∂T/∂t + ∇·(−k∇T) = ρB CB WB (TB − T) + Qmet + Qext    (4)

where Qext is the spatial heat source (W/m³). Real tissue values for malignant and healthy tissues are used in our simulations (Table 2). Natural convection on the skin surface was also taken into account as a convective boundary condition; the heat transfer coefficient was taken as 5 W/m²K.

A. Modeling

The designed model is intended to be close to the real case, i.e., a woman's breast with malignant tissue in it. The breast is modeled by a large square of 10 cm by 10 cm, and the cancerous tissue by a smaller square of 1 cm by 1 cm.
Fig. 2 Simulation model

Current is applied to the object from the left and right side boundaries. The modeling for our problem is carried out in COMSOL Multiphysics, where the In-Plane Electric Currents module and the Bio Heat Equation module are used simultaneously. First of all, the spatial heat source is computed using the In-Plane Electric
Currents module. Then, this value is inserted into the second module, the Bio Heat Equation module, and the tissue temperature distribution image is obtained. The normal and malignant tissue parameters used in the simulations are listed in Table 2.

B. Spatial Heat Source Calculation (due to current application)

Firstly, the model's boundary conditions are inserted. Using Maxwell's equations, the program calculates the electric field intensity for the stimulation currents. Then, the spatial heat source is computed for each pixel using formula (5), and the obtained values are inserted into the bio heat equation module:

Q = (1/σ)|J|² = σ|E|² = σ|∇V|²    (5)
Fig. 4 Temperature distribution when there is current stimulation, i=5 mA at 10 kHz, Tmax= 37.716˚C, ΔT= 0.716 ˚C
By carrying out this Joule heat term, the spatial heat source is obtained [6].

C. Obtaining Temperature Distribution Images

a) Altering the location of the cancerous tissue at constant frequency: First of all, the thermal boundary parameters are inserted into the program. The acquired spatial heat source is inserted into the Electrically Stimulated Pennes Bio-Heat Equation, and the spatial temperature distribution image of the tissue is obtained. Various experiments are implemented by changing the location of the malignant tissue (modeled as the small square). The simulations are performed in COMSOL Multiphysics using the finite element method.
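Outside COMSOL, the coupled steps of equations (4) and (5) can be illustrated with a simple finite-difference relaxation; this is a minimal sketch on a uniform grid, not the FEM model of the paper, and the grid spacing, iteration count and Dirichlet edge handling (standing in for the convective boundary condition) are placeholder assumptions:

import numpy as np

def steady_bioheat(sigma, V, k=0.42, rho_b=1000.0, c_b=4200.0, w_b=0.0018,
                   T_b=310.15, q_met=450.0, h=1e-3, n_iter=20000):
    """Relax the electrically stimulated Pennes equation (4) at steady state
    (dT/dt = 0). sigma: conductivity map (S/m); V: potential map (V);
    h: grid spacing (m); k is treated as locally uniform."""
    gy, gx = np.gradient(V, h)
    q_ext = sigma * (gx ** 2 + gy ** 2)       # eq. (5): Q = sigma*|grad V|^2
    perf = rho_b * c_b * w_b                  # blood perfusion coefficient
    T = np.full_like(V, T_b)
    for _ in range(n_iter):
        nbr = 0.25 * (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                      np.roll(T, 1, 1) + np.roll(T, -1, 1))
        # pointwise solution of k*lap(T) + perf*(T_b - T) + q_met + q_ext = 0
        T = (4 * k / h ** 2 * nbr + perf * T_b + q_met + q_ext) \
            / (4 * k / h ** 2 + perf)
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_b  # crude edge condition
    return T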
Fig. 5 Temperature distribution when there is current stimulation, i=5 mA at 10 kHz, Tmax= 38.144˚C, ΔT= 1.144 ˚C
b) Altering the frequency at a constant tumor location: Keeping the current value constant and without changing the location of the cancerous tissue, several images were taken at different frequencies, with the corresponding electrical conductivity values, for the same object.
Fig. 3 Temperature distribution when there is no stimulation, Tmax= 37.537˚C, ΔT= 0.537 ˚C
Fig. 6 Difference image of the healthy tissue, with and without a tumor; applied current i=5 mA at 800 MHz, ΔT= 0.57 ˚C
Fig. 7 Difference image of the healthy tissue, with and without a tumor; applied current i=5 mA at 10 kHz, ΔT= 0.52 ˚C
III. RESULTS
In figure 3, when the cancerous tissue is located at 5 cm depth, there is a 0.537 °C temperature rise due to metabolic heat generation and blood perfusion (with no current stimulation). Current application within medical safety limits causes a 0.716 °C temperature increase, which is 0.179 °C more than in the no-current case; this corresponds to a (0.716 − 0.537)/0.537 ≈ 33% performance improvement. Figure 5 shows that, when the tumor is imaged 1 cm away from the surface, a 1.144 °C temperature difference occurs on the skin surface due to the 5 mA stimulation current, which means a 113% performance improvement. The simulation results show that, by externally applying currents, performance improvements from 33% to 113% can be achieved depending on the tumor location; even in the worst case, with the malignant tissue at 5 cm depth, a 33% performance improvement was acquired. The temperature distribution of the healthy tissue was computed both with and without a tumor, and the difference of the two images was taken; the difference image belongs to the tumor tissue alone (figures 6, 7). Current artifacts due to the electrodes exist when the current is applied at 10 kHz (figure 7), but they are removed at 800 MHz (figure 6). The frequency analysis shows that the current artifacts disappear at higher frequency values and the tumor tissue becomes clearer in the image.
IV. CONCLUSIONS

In this simulation work, cancerous tissue is modeled inside a healthy woman's breast, and thermal images of the electrically stimulated breast are obtained. Whenever the malignant tissue approaches the surface, very high performance improvements can be achieved with the current application. Altering the frequency changes the electrical conductivity values, and different temperature distributions can be obtained at these different conductivities. Combining these different images of the same object provides more information for making a diagnosis. Current application leads to performance improvements; this increase, together with images taken at different frequencies, makes it possible to detect tumors that are otherwise hard to find, especially when the patient is moving.

ACKNOWLEDGMENT

Feza Carlak is continuing his Ph.D. education at the Electrical and Electronics Engineering Department of Middle East Technical University as personnel of Akdeniz University. He works as a research assistant and is financially supported by the Government Planning Organization through project BAP-08-11-DPT.2002K120510-BIL-7 on behalf of Akdeniz University. This study arose from Feza Carlak's thesis work with his advisor Nevzat G. Gençer and co-advisor Cengiz Beşikçi.

REFERENCES

[1] Zhong-Shan Deng, Jing Liu (2002) Analytical study on bioheat transfer problems with spatial or transient heating on skin surface or inside biological bodies. Transactions of the ASME, December 2002
[2] T R Gowrishankar, Donald A Stewart, Gregory T Martin and James C Weaver (2004) Transport lattice models of heat transport in skin with spatially heterogeneous, temperature-dependent perfusion. Biomedical Engineering Online, November 2004
[3] F. J. Gonzalez (2007) Thermal simulation of breast tumors. Instituto de Investigacion en Comunicacion Optica, Universidad Autonoma de San Luis Potosi, August 2007
[4] Deshan Yang, Mark C. Converse, David M. Mahvi and John G. Webster (2007) Expanding the bioheat equation to include tissue internal water evaporation during heating. IEEE Transactions on Biomedical Engineering, August 2007
[5] Ashish Gupta, Feasibility study of early breast cancer detection using infrared imaging. School of Mechanical Engineering, Purdue University
[6] COMSOL Multiphysics Modelling Guide, Version 3.4, pp. 348-354
[7] R. Joro, A. L. Laaperi, P. Dastidar, S. Soimakallio, T. Kuukasjarvi (2008) Imaging of breast cancer with mid- and long-wave infrared camera. Journal of Medical Engineering & Technology, May 2008
[8] Hairong Qi, Nicholas A. Diakides (2003) Thermal infrared imaging in early breast cancer detection: a survey of recent research. IEEE EMBS 25th Annual International Conference, 2003
Magnetic Resonance Current Density Imaging Using One Component of Magnetic Flux Density: An Experimental Study

A. Ersöz, B.M. Eyüboğlu

Middle East Technical University, Electrical and Electronics Engineering, Ankara, Turkey

Abstract— Magnetic Resonance Electrical Impedance Tomography (MREIT) algorithms that reconstruct the conductivity distribution using the current density distribution have been proposed in the literature. The current density distribution can be determined by measuring the distribution of the magnetic flux density, B, generated by externally applied currents. This technique is known as Magnetic Resonance Current Density Imaging (MRCDI). Calculation of the current density distribution from magnetic flux density measurements requires all three components of B via the Biot-Savart law. Since an MRI scanner can measure only the component of B parallel to the main magnetic field, object rotation is required to measure all three components of B for a given current injection pattern. In this study, a novel current density reconstruction algorithm using only the z-component of B, Bz, is proposed.

Keywords— Magnetic resonance, current density, electrical impedance, imaging, tomography.
I. INTRODUCTION

Magnetic Resonance Electrical Impedance Tomography (MREIT) is an emerging imaging modality used to image the conductivity distribution by measuring the distribution of the magnetic flux density, B, generated by currents externally applied to volume conductor regions containing Magnetic Resonance (MR) active nuclei. Reconstruction algorithms that reconstruct the conductivity distribution utilizing the distribution of J have been proposed in the literature [1, 2]. The current density, J, can be calculated from B [3]. However, only the component of B in the same direction as the static field of an MR system, Bz, can be measured by means of MRI techniques; the other components can be measured only by rotating the subject to align them with the main imaging field, and rotating is not trivial inside the magnet, even for small objects. In this study, a new method to determine the current density using only one component of the current-induced magnetic field is proposed. In this method, only the component of the magnetic flux density orthogonal to the imaging plane is required to reconstruct the current density in a two-dimensional field of view. In 2D applications, where the current flows in the x-y plane, the current density, J, has only two components, Jx and Jy, which create magnetic flux density only in the z direction, Bz. Utilizing Bz measured experimentally, it is known that the current density distribution can be obtained uniquely [4,5,6]. However, in 3D problems, one component of the magnetic flux density is not sufficient to determine the current density distribution, J(x,y,z), uniquely [4,6]. In this study, the proposed algorithm is tested on both simulated and measured data, and the results are presented.

II. METHOD

A. Derivation of the Algorithm

The relation between the current density, J, flowing in a two-dimensional subject, Ω, and the z-component of the magnetic flux density, Bz, is the core of this algorithm; it is given by the Biot-Savart integral equation as

Bz(x,y) = (μ0/4π) ∫Ω [(y − y′) Jx(x′,y′) − (x − x′) Jy(x′,y′)] / [(x − x′)² + (y − y′)²]^(3/2) dx′ dy′    (1)
in the z=0 plane. To formulate the problem more clearly, assume that Ω is divided into N finite elements. Also assume that, for each element, the current is localized at the center of the corresponding element and that the magnetic flux density is measured at those points. Hence, equation (1) can be discretized and written as a matrix equation
[Bz] = [Cy  −Cx] [Jx ; Jy]    (2)
where Bz, Jx and Jy are Nx1 vectors and Cx, Cy are NxN matrices. The matrices Cx, Cy depend only on the magnitudes and directions of the vectors between the field and source points; therefore, these matrices are calculated once for a subject with a specific geometry. Assume that current is applied through electrodes placed on each side of the subject. The divergence of the current density distribution is given as
∇·J = 0 in Ω
∇·J = g(x,y) on ∂Ω    (3)
in a two-dimensional subject, Ω. Here, g(x,y) is the applied current and it satisfies
g(x,y) = 0 in Ω,   ∫∂Ω g(x,y) ds = 0    (4)

If the current density distribution inside the subject is assumed to be solenoidal, a relation between Jx and Jy can be obtained. This assumption is not correct on the boundaries, since current is injected into the region. To solve this problem, difference currents, Jxdiff and Jydiff, can be defined as [7]

Jxdiff = Jx − Jxuniform,   Jydiff = Jy − Jyuniform    (5)

Here, Jxuniform and Jyuniform are the currents for a uniform conductivity distribution. Since the divergences of the total current and the uniform current are the same, the difference current is solenoidal. Therefore, a relation between Jxdiff and Jydiff is obtained as

∂Jxdiff/∂x + ∂Jydiff/∂y = 0    (6)

In the proposed algorithm, the central difference method is used to approximate the derivatives, as it yields a more accurate approximation than forward and backward differences. Equation (6) states that, in each voxel, the change of Jxdiff in the x-direction is equal to the negative of the change of Jydiff in the y-direction. Thus, discretization of equation (6) defines a relation between the elements of Jxdiff and Jydiff; in other words, each element of the Jydiff vector can be written in terms of elements of the Jxdiff vector. Therefore, an NxN matrix can be formed as

[Jy1diff]   [a11 a12 ...  a1N] [Jx1diff]
[Jy2diff] = [a21  .  ...   . ] [Jx2diff]
[  ...  ]   [ .   .  ...   . ] [  ...  ]
[JyNdiff]   [aN1  .  ...  aNN] [JxNdiff]    (7)

The elements of this A matrix are either zero or one, depending on which elements of Jxdiff are related to which elements of Jydiff. For instance, if Jy6diff depends on Jx8diff, Jx5diff and Jx9diff, then all the elements of the 6th row except a68, a65 and a69 will be zero. Note that the coefficients of the A matrix depend on the geometry of the subject and the number of finite elements, N. Then, the first column of the Cx matrix is multiplied with the first row of the A matrix, the second column of Cx with the second row of A, and so on. At last, an NxN matrix is obtained, and it is added to the Cy matrix. If we call this new matrix Ct, the final form of equation (2) becomes

Bzdiff = Ct Jxdiff    (8)

Here, Bzdiff is the difference magnetic flux density, calculated by subtracting Bzuniform from the z-component of the measured magnetic flux density, Bz. The magnetic flux density for a uniform conductivity distribution, Bzuniform, is computed using the uniform currents and the Biot-Savart law. Jxdiff can easily be obtained from equation (8), and the solution for Jydiff is then trivial. Finally, Jx and Jy are found by adding the uniform currents to the difference currents.
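A compact numerical sketch of the chain (2)-(8) is given below. The element-area weighting, the zeroed self-term and the least-squares solve are implementation assumptions; building the A matrix from the central-difference stencil depends on the grid ordering and is taken as given:

import numpy as np

def biot_savart_matrices(xs, ys, area):
    """Cy, Cx of the discretized Biot-Savart law, eq. (2).
    xs, ys: element-centre coordinates (length N); area: element area."""
    mu0 = 4e-7 * np.pi
    dx = xs[:, None] - xs[None, :]            # x - x'
    dy = ys[:, None] - ys[None, :]            # y - y'
    r3 = (dx ** 2 + dy ** 2) ** 1.5
    np.fill_diagonal(r3, np.inf)              # drop the singular self-term
    Cy = mu0 / (4 * np.pi) * area * dy / r3   # multiplies Jx in eq. (2)
    Cx = mu0 / (4 * np.pi) * area * dx / r3   # multiplies Jy in eq. (2)
    return Cy, Cx

def recover_difference_current(Bz_diff, Cy, Cx, A):
    """Solve eq. (8), using the solenoidal coupling Jy_diff = A Jx_diff (7)."""
    Ct = Cy - Cx @ A                          # substitute (7) into eq. (2)
    Jx_diff, *_ = np.linalg.lstsq(Ct, Bz_diff, rcond=None)
    return Jx_diff, A @ Jx_diff

With Ct = Cy − CxA, this matches the construction described in the text once the minus sign of −Cx in equation (2) is carried through.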
Fig. 1 Experimental Phantom, a) Oblique view, b) Upper side removed [8]

B. Experimental Setup

Magnetic flux density measurements were performed in the 0.15 T METU MRI system. A standard spin-echo pulse sequence with the addition of a bipolar current pulse was used. Current was applied to the phantom with a current source synchronized with the RF pulse.

C. The Phantom

An experimental phantom was prepared in order to test the performance of the algorithm with measured data. The phantom is made of Plexiglas and its dimensions are 9x9x9 cm3 (Figure 1a). The 2D geometry is obtained with additional Plexiglas walls, which force the applied current to flow in a volume of 9x9x2 cm3, as seen in Figure 1b. Electrodes are placed in the middle of each side and their dimensions are 2x2 cm2.
Phantom elements are prepared by using solidifying materials, Agar-Agar, TX-150, TX-151. Also, NaCl is used to adjust conductivity values of elements and CuSO4 is used to fix T1 relaxation time. In this experiment, conductivity values of background and circle element are 0.2 and 2 S/m.
Square element is prepared as a pure insulator. The phantom elements and their MR magnitude image are given in Figure 2.
Fig. 2 a) Phantom elements, b) MR magnitude image of the phantom

Fig. 3 The geometry of the phantom and its elements

III. RESULTS

A simulation model with the same geometry as the experimental phantom was prepared. The proposed algorithm is tested with both simulated and measured data. In the simulation model, the region is divided into 1936 (44x44) finite elements, and a 20 mA DC current is applied through the electrodes. The current density distribution for uniform conductivity, Juniform, and the difference magnetic flux density, Bzdiff, are calculated using the Finite Element Method (FEM). A random Gaussian noise model is added to Bzdiff to evaluate the performance of the algorithm on noisy data [9, 10]. This noise model depends only on the signal-to-noise ratio (SNR) of the imaging system with which the magnetic flux density is measured. SNR 30 and SNR 13 levels are used, since SNR 30 corresponds to an MRI system with a 2 T magnet [9] and the SNR value of the 0.15 T METU MRI system is around 13 [10]. The error of the reconstructed current density distributions is calculated using the following formula:

εj = ||Jr − Jc|| / ||Jr||    (9)

Here, Jr and Jc represent the real and calculated current density distributions, respectively. Calculated error values for different SNR levels are given in Table 1. In vertical current injection, current is injected from the upper electrode and removed from the lower electrode; in horizontal current injection, current is applied from the left-side electrode and removed from the right-side electrode.

Table 1 Calculated error values

                                    Noise free   SNR 30   SNR 13
Horizontal current injection   Jx   5.84%        6.03%    6.42%
                               Jy   6.17%        7.05%    9.08%
Vertical current injection     Jx   4.64%        6.68%    9.66%
                               Jy   3.21%        3.75%    6.48%

For the experimental study, a 20 mA DC current is applied to the phantom. All three components of the magnetic flux density are measured, and the current density distribution is calculated using Maxwell's law, as seen in Figure 4. The current density distribution is also calculated using the proposed algorithm, as seen in Figure 5. The difference magnetic flux density, Bzdiff, required for the proposed algorithm is calculated by subtracting Bzuniform from the z-component of the measured magnetic flux density, Bz. To calculate Bzuniform, the current density distribution for uniform conductivity, Juniform, is obtained from the simulation model and Bzuniform is calculated using the Biot-Savart law.

Fig. 4 Reconstructed current density distribution using 3 components of magnetic flux density for vertical current injection
Fig. 5 Reconstructed current density distribution using the proposed algorithm for vertical current injection

IV. DISCUSSION AND CONCLUSION
Several MRCDI algorithms have been proposed in the literature. In Fourier-transform-based MRCDI algorithms, a well-defined Fourier transform can be obtained when the measured magnetic flux density approaches zero at the boundaries of the phantom [7]. Since current flows at the boundaries of the phantom, the magnetic flux density does not approach zero there; therefore, the magnetic flux density would have to be measured outside of the phantom to obtain a well-defined Fourier transform. However, measuring magnetic flux density outside the subject is not possible in an MRI scanner. The most significant contribution of the proposed method is that measuring only one component of the magnetic field inside the subject is sufficient to reconstruct the current density distribution. In the proposed algorithm, the magnetic flux density for a uniform conductivity distribution, Bzuniform, is calculated from simulation. Since the subtraction of Bzuniform from Bzmeasured is required, misaligned electrode locations may introduce undesirable deformations in the reconstructed image in experimental studies. In the results section, these deformations are masked and reduced by filtering; this is a disadvantage of the algorithm. By using low-pass filters, the deformations in the reconstructed current density distribution are suppressed at the expense of reduced resolution. Both the matrix A in equation (7) and the Cx, Cy matrices are independent of the conductivity distribution of the subject and of the applied current. Once the geometry of the subject and the number of finite elements are known, these parameters can be calculated and stored. Performing the matrix inversion in equation (8) then makes the reconstruction of the current density distribution possible; the elapsed time for this operation is just a few seconds.
REFERENCES [1] Eyüboğlu B M (2006) Magnetic Resonance - Electrical Impedance Tomography Wiley Encylopedia of Biomedical Engineering (Metin Akay ed) 4 pp. 2154-2162 [2] Woo E J, Seo J K (2008) Magnetic resonance electrical impedance tomography (MREIT) for high-resolution conductivity imaging. Physiological Measurement 29:R1-R26 [3] Scott G C, Joy M L G, Armstrong R L, and Henkelman R M (1991) Measurement of Nonuniform Current Density by Magnetic Resonance. IEEE Trans. Med. Imag. 10:362-374 [4] Roth B J, Sepulveda N G, Wikswo J P (1989) Using a magnetometer to image a two-dimensional current distribution. J. Applied Physics 65:361-372 [5] Park C, Lee B I and Kwon O I (2007) Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI. Phys. Med. Biol. 52:3001-3013 [6] Pyo H C, Kwon O, Seo J K, Woo E J (2005) Identification of current density distribution in electrically conducting subject with anisotropic conductivity distribution Phys. Med. Biol. 50:3183-3196 [7] Ider Y Z (2006) Bz-substitution MR-EIT and Fourier Transform MRCDI: Two new algorithms World Congress on Medical Physics and Biomedical Engineering, Seoul, Korea, 2006, pp. 3803-3806 [8] Boyacıoğlu R Performance evaluation of current density based MREIT Reconstruction algorithms, M.Sc. Thesis Department of Electrical and Electronics Eng. Middle East Technical University 2009 [9] Scott G C, Joy M L G, Armstrong R L, and Henkelman R M (1992) Sensitivity of Magnetic Resonance Current Density Imaging Journal of Magnetic Resonance 97:235-254 [10] Birgül Ö, Eyüboğlu B M, İder Y Z (2003) Current constrained voltage scaled reconstruction (CCVSR) algorithm for MREIT and its performance with different probing current patterns Phys. Med. Biol. 48:653-671 Author: Institute: City: Country: Email:
Author: Ali Ersöz, B. Murat Eyüboğlu
Institute: Middle East Technical University
City: Ankara
Country: Turkey
Email: [email protected], [email protected]
Computer-Aided Detection of COPD Using Digital Chest Radiographs L. Nikházy1, G. Horváth1, Á. Horváth2, and V. Müller3 1
Budapest University of Technology and Economics/Department of Measurement and Information Systems, Budapest, Hungary 2 Innomed Medical Co. Budapest, Hungary 3 Semmelweis University, Department of Pulmonology, Budapest, Hungary
Abstract— COPD is an increasingly prevalent respiratory disease responsible for a growing number of deaths. The aim of this study was to develop a method that enables the computerized recognition of COPD in the course of pulmonary screening. At pulmonary screening only a PA radiograph is recorded, but some additional data can be measured automatically. Parameters indicating COPD have been derived from the chest radiograph using complex image processing algorithms. Neural-network-based classifiers were applied to select the patients suspected of COPD. Results show that although diagnosis is not possible, the majority of COPD cases can be sorted out.

Keywords— COPD, Radiography, CAD, Neural networks, Pulmonary screening.
I. INTRODUCTION

Although chest X-ray radiograph-based screening is traditionally used for the detection of lung tuberculosis, it can also be used for early detection of other lung diseases. Among them, perhaps the most important is detecting lung cancer in its early stages, as lung cancer is one of the most common causes of death throughout the world. Although there are controversial statements about the usefulness of X-ray chest radiography in lung cancer detection, it possibly represents the most cost-effective screening method. Besides lung nodule detection, chest X-ray screening can help in the diagnosis of other illnesses such as pneumonia, pleural effusion, pneumothorax, lung fibrosis, heart failure and several other diseases. Recently, intensive research and development work has started throughout the world on Computer Aided Detection or Diagnostic (CAD) systems to help pulmonologists/radiologists in the evaluation of chest radiographs, where CAD systems usually serve as second readers. Approximately two years ago a Hungarian consortium also started research and development work to extend a recently developed PACS with CAD functionality. The general goals and the first results of the system were presented last year at the World Congress of Medical Physics and Biomedical Engineering, Munich [1]. Computer-aided evaluation of X-ray radiographs needs complex image processing/pattern recognition algorithms
where first the images are preprocessed, and the detection of abnormalities is then done on the preprocessed images. Preprocessing means that the contours of the lung fields and the heart, as well as the contours of the bones – the clavicles and the rib cage – are determined. Finding these contours serves two goals: (1) the shape of these anatomical parts, especially the shape of the lung fields, may have diagnostic meaning; (2) having determined the contours of the bones and the heart, there is a chance to suppress the shadows of these parts, "cleaning" the whole area of the lung fields and making it possible to "look behind" them. The suppression of the shadows of "disturbing anatomical parts" may significantly improve the performance of nodule detection, and may help in reducing false positive hits. The goal of this paper is to show that besides nodule detection, X-ray chest radiograph-based CAD systems can also be used to detect hyperinflation of the lung, a major peculiarity of obstructive lung diseases including asthma bronchiale and chronic obstructive lung disease (COPD). COPD is a mixture of co-existing chronic bronchitis, bronchiolitis and emphysema. As a result of these histological changes the airways become narrowed. In emphysema, the walls of the alveoli are damaged, while in chronic bronchitis the lining of the airways is constantly irritated and inflamed [2]. These conditions lead to airflow limitation and subsequent air trapping, which results in hyperinflation of the lung. In contrast to asthma, the airflow limitation in COPD is mainly not reversible. It is a slowly evolving illness, which can remain unnoticed for a long time. COPD is the 5th leading cause of death worldwide, and is predicted to rise to 4th position by 2030 [3]. However, if detected in time, it is possible to keep the illness at a stable stage. The main cause of COPD is smoking, but genetic susceptibility is also involved. COPD is diagnosed and classified through spirometry. A post-bronchodilator forced expiratory volume in 1 second (FEV1) / forced vital capacity (FVC) ratio <70% and the FEV1 value are needed for the classification of the disease; FEV1 indicates the severity of airway obstruction. As COPD causes air trapping and hyperinflation, the residual volume (RV) increases and therefore the total lung capacity (TLC) is frequently greater than reference values.
As COPD remains asymptomatic for a long time, screening of risk groups might provide a good tool for early detection and treatment of these patients. Current guidelines suggest screening by a short questionnaire and subsequent spirometry. Imaging techniques may provide evidence for COPD and help exclude other diseases. A computed tomography (CT) scan is not ordered routinely, because the additional information it provides compared to X-ray is rarely needed, while it exposes the patient to more radiation. The question is whether chest X-ray screening, where only a posteroanterior (PA) radiograph is taken, could help to sort out hyperinflated, especially COPD-suspicious patients. The chance for COPD detection is that, due to the enlargement of the lungs, the lung fields are oversized and the diaphragm is flattened and depressed. These are the main signs of hyperinflation on the PA radiograph according to Simon et al. [4]. Fig. 1 shows the X-ray images of a healthy patient and of one suffering from COPD-induced hyperinflation. The aim of this work was to develop a method for sorting out possible COPD at pulmonary X-ray screening. It is important to declare that the goal is not to diagnose COPD; the computerized analysis of radiographs is only to point out the suspicion of the disease. Based on this suspicion, a spirometry test may be ordered, by which a diagnosis can be established. It would be a great achievement if the majority of COPD cases could be discovered at pulmonary screening. This is not a hopeless task, as a previous study has shown that there are measurable quantities whose distributions differ from normal in the case of COPD [5].
Fig. 1 Chest radiograph of a normal subject (left) and a subject with COPD (right)

II. MATERIALS AND METHODS

A. Available Information
At pulmonary screening only one X-ray image is captured, in the posteroanterior (PA) view, at full inspiration. Some additional parameters can easily be measured automatically at the moment the radiograph is recorded, which carry information related to COPD. These parameters are the gender, height and weight of the patient, the front-to-back distance of the chest and the radiation dose emitted at the exposure. The height and weight of the patient provide information about the body size, compared to which the size of the lung fields can be considered. The front-to-back distance of the chest indicates hyperinflation, but this information alone is ambiguous, since the thickness of the chest may originate from muscle or fat. However, the radiolucency of these tissues differs significantly from that of the lungs: people with hyperinflation have an increased radiolucency compared to normal subjects with the same front-to-back chest distance. This way, the radiation energy emitted during the exposure and the thickness of the chest together provide information about the front-to-back extent of the lungs, which is otherwise lost through the projection. From the analysis of the chest radiographs some numerical features are computed which may indicate COPD: the area and height of the lung fields and the bending and rise of the diaphragm.

B. Principle of the Solution
Radiographs show static pictures of the chest at full inspiration. Therefore it is not possible to learn anything about the dynamic behavior of the lungs. This means that COPD suspicion can only be concluded from the static properties of the lungs, which are derived from the X-ray image, and from the measured quantities, which provide information about the body. This is a difficult classification problem. We applied neural networks to solve this task. Neural networks are capable of learning complex relationships between inputs and outputs from a sample set of data, which makes it possible to solve difficult classification problems.

C. Acquisition of Data
a) The Sample Set
For the training of the neural network, a set of sample data is needed. For this purpose we acquired the chest radiographs and the corresponding pulmonary function test results of 140 patients from the Department of Pulmonology; 69 of these subjects were suffering from COPD. It is important to note that this sample set does not represent the general population well, since the ratio of ill patients is higher. The patients were categorized according to the FEV1 value, as officially described [6]. This way the subjects were divided into four groups: healthy, mild COPD, moderate COPD and severe COPD.

b) Parameters from the Chest Radiograph
Digital radiographs were used, with a resolution of 2400×2600 pixels (0.16 mm per pixel) and a dynamic range
of 12 bits. The contours of the lung fields and the ribs on the X-ray image were determined automatically during the preprocessing steps. The area of the lung fields (AL1, AL2) can be calculated easily from the lung contour. We define the height of the lung (hL1, hL2) as the vertical distance between the apex of the lung and the inner end of the diaphragm. The area beneath the diaphragm (AD) is defined as the area enclosed by the straight line between the two ends of the diaphragm and the contour of the diaphragm itself. We get a good descriptor of the arch of the diaphragm (bD) if we divide this area by the straight distance between the two ends of the diaphragm (lD):

bD = AD / lD    (1)
This is the average height of the arch of the diaphragm over the straight line connecting its ends. The rise of the diaphragm (rD) can be described as the ratio of the vertical (hD) and horizontal (wD) distances between the two ends of the diaphragm:

rD = hD / wD    (2)
The above-mentioned areas and distances are shown in Fig. 2.
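To make the two descriptors concrete, here is a small Python sketch computing bD and rD from an ordered diaphragm contour. The function name and the contour representation are our own assumptions; the paper does not describe its implementation at this level of detail.

import numpy as np

def diaphragm_descriptors(contour):
    """bD and rD from an ordered diaphragm contour, given as an (N, 2) float
    array of (x, y) pixel coordinates running from one end of the diaphragm
    to the other (y grows downwards, as in image coordinates)."""
    p0, p1 = contour[0], contour[-1]
    l_d = np.hypot(*(p1 - p0))                  # chord length between the ends
    # Shoelace area of the polygon formed by the contour closed by the chord.
    poly = np.vstack([contour, contour[:1]])
    x, y = poly[:, 0], poly[:, 1]
    a_d = 0.5 * abs(np.dot(x[:-1], y[1:]) - np.dot(x[1:], y[:-1]))
    b_d = a_d / l_d                             # equation (1)
    r_d = abs(p1[1] - p0[1]) / abs(p1[0] - p0[0])   # equation (2): hD / wD
    return b_d, r_d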
Fig. 2 Distances and areas measured on the radiograph

c) Additional Data
The height of the patient and the front-to-back distance of the chest can be measured by simple ultrasonic sensors at the moment of capturing the X-ray image. Similarly, the weight of the subject can be measured by a digital scale while standing in front of the X-ray detector. Unfortunately, at the time of writing this article, these measuring instruments were not yet installed at the clinic we are working with. However, the weight and height of the patients are recorded on the results of the pulmonary function tests, so these data could be utilized. The digital X-ray machinery measures the voltage (U) and the current (I) in the X-ray tube used for generating the radiation. The number of photons reaching the detector is also measured; thus the machine automatically ends the exposure when the appropriate number of photons has been detected. Using the measured exposure time (t), the energy of the emitted radiation can be calculated, which relates to the radiolucency of the subject, as follows:

E = c·U²·I·Z·t    (3)

In (3), c is a constant and Z is the atomic number of the anode material.

D. Neural Network Construction and Training
Two different types of neural networks were applied for the classification: Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP). In both cases a 10-fold cross-validation technique was used. The data set was divided randomly into 10 groups, each containing 14 samples. At the construction of every network, one group was used for testing and the rest for training purposes. Thus 10 networks were constructed in each experiment; this way the whole data set could be utilized for both training and testing. There were 10 input parameters for the neural networks: the 6 parameters derived from the radiograph (AL1, AL2, hL1, hL2, bD, rD), the calculated energy of the emitted radiation (E) and the general properties of the subject (height, weight and sex). All data were normalized.

a) Support Vector Machine (SVM)
Support Vector Machines are well known for their classification ability. Training of an SVM classifier is analytic: the data are separated so that the safety margin of the partitioning is maximal. However, the proper choice of the hyperparameters is a difficult issue. Networks with Gaussian (width varying from 0.5 to 2.0), polynomial (degrees of 2, 3 and 4) and linear kernel functions were tested. In all cases, the C regularization parameter was set to four different values (10, 100, 1000, and infinity).
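The hyperparameter grid described above can be illustrated with a short scikit-learn sketch. The data here are random placeholders standing in for the 140×10 normalized feature matrix, and the width-to-gamma conversion is one common convention, not necessarily the authors' exact parameterisation.

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.standard_normal((140, 10))      # placeholder for the 10 input parameters
y = rng.integers(0, 2, 140)             # placeholder labels: 1 = COPD suspected

cv = KFold(n_splits=10, shuffle=True, random_state=0)   # 10 groups of 14
best = (0.0, None)
for width in (0.5, 1.0, 1.5, 2.0):          # Gaussian kernel widths
    for C in (10, 100, 1000, 1e6):          # 1e6 approximates C = infinity
        # A Gaussian width sigma is mapped here to gamma = 1/(2*sigma^2).
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", gamma=1.0 / (2 * width ** 2), C=C))
        acc = cross_val_score(clf, X, y, cv=cv).mean()
        best = max(best, (acc, (width, C)), key=lambda t: t[0])
print("best mean CV accuracy %.2f for (width, C) = %s" % best)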
b) Multi-Layer Perceptron (MLP)
First, the topology of the MLP needs to be planned, which is very important. We employed 1-of-2 encoding at the output of the net. A linear activation function was chosen in the output layer, and in all preceding layers the hyperbolic tangent sigmoid function was applied. Networks containing 1 and 2 hidden layers were tested, with the number of neurons ranging from 4 to 10 per layer. The training of the MLP is iterative. To prevent the net from overtraining, we used one group of data for validation. This way, five groups of the sample set were used for training, one for validating and one for testing. During the iterative training process, the mean square error of the output was minimized through backpropagation, particularly the
Levenberg-Marquardt version of it. To avoid getting trapped in a local minimum, the training of each net was restarted 10 times with randomly initialized weights, and the network providing the best classification results on the validation set was chosen.
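The restart-and-select-on-validation logic can be sketched as follows. scikit-learn's MLPClassifier offers no Levenberg-Marquardt solver, so lbfgs stands in here as an approximation, and the data are placeholders.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((140, 10))                 # placeholder data
y = rng.integers(0, 2, 140)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=14, random_state=0)

best_net, best_score = None, -np.inf
for seed in range(10):                             # 10 random re-initialisations
    net = MLPClassifier(hidden_layer_sizes=(8, 6), activation="tanh",
                        solver="lbfgs",            # no Levenberg-Marquardt in
                        max_iter=500,              # scikit-learn; lbfgs is the
                        random_state=seed)         # closest second-order option
    net.fit(X_tr, y_tr)
    score = net.score(X_val, y_val)                # select on validation data
    if score > best_score:
        best_net, best_score = net, score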
III. RESULTS

The results of the best classifying network of each type are presented below. The output of the classification is shown in a table; the values indicate the number of positively and negatively classified subjects, grouped by the stage of COPD. The positive class denotes the subjects claimed by the computer to suffer from COPD. Among the SVM classifiers, the best results were achieved using a Gaussian kernel function with a width parameter of 2.0 and C set to 10. The detailed results are shown in Table 1. It is remarkable that 83% of the COPD cases were recognized, with a positive predictive value of 75%. The overall accuracy of the classification is 78%.
Table 1 Results of SVM classification

Subject Group      Number of negatives   Number of positives
Normal                     52                    19
Mild COPD                   4                    10
Moderate COPD               7                    17
Severe COPD                 1                    30

As for MLPs, the best results were obtained with a network containing 2 hidden layers, consisting of 8 and 6 neurons, respectively. Details are shown in Table 2. In this case, the accuracy of the classification was 76%, slightly less than the corresponding value for the SVM.

Table 2 Results of MLP classification

Subject Group      Number of negatives   Number of positives
Normal                     51                    20
Mild COPD                   5                     9
Moderate COPD               7                    17
Severe COPD                 2                    29

IV. CONCLUSIONS

This preliminary study has shown that it is possible to automatically sort out a certain ratio of patients with hyperinflation and possible COPD at pulmonary X-ray screening, using the appropriate computational aid. The application of neural networks was successful in this complex classification problem. However, one should not draw far-reaching conclusions from the present results, since the available data set was relatively small and selected, not representing the general population. In addition, patient cooperation during the X-ray may also modify the outcome. The methods need to be verified on an adequate testing set. It is also important to note that this system has not reached its final form. As mentioned in the text, additional data will be measured (namely the front-to-back distance of the chest), which hold information related to COPD. This extension and the usage of more samples for neural network training provide potential for further development.

ACKNOWLEDGMENT

This work was partly supported by the National Development Agency under contract KMOP-1.1.1-07/1-2008-0035.

REFERENCES

1. G. Horváth, G. Orbán, Á. Horváth, G. Simkó, B. Pataki, P. Máday, S. Juhász and Á. Horváth (2009) A CAD System for Screening X-ray Chest Radiography. World Congress of Medical Physics and Biomedical Engineering, Munich, Vol. 25, pp. 210-213
2. US National Heart, Lung and Blood Institute - What is COPD? at http://www.nhlbi.nih.gov/health/dci/Diseases/Copd/Copd_WhatIs.html
3. Mathers CD, Loncar D (2006) Projections of Global Mortality and Burden of Disease from 2002 to 2030. PLoS Med 3(11): e442. doi:10.1371/journal.pmed.0030442
4. Simon G, Pride NB, Jones NL, Raimondi AC (1973) Relation between abnormalities in the chest radiograph and changes in pulmonary function in chronic bronchitis and emphysema. Thorax 28:15-23. doi:10.1136/thx.28.1.15
5. Coppini G et al. (2006) Computer-aided diagnosis of emphysema in COPD patients: Neural-network-based analysis of lung shape in digital chest radiographs. Medical Engineering & Physics 29:76-86. doi:10.1016/j.medengphy.2006.02.001
6. Global Initiative for Chronic Obstructive Lung Disease (2009) Global strategy for the diagnosis, management, and prevention of chronic obstructive pulmonary disease. Executive summary. www.goldcopd.com

Author: László Nikházy
Institute: Budapest University of Technology and Economics
Street: Magyar tudósok körútja 2.
City: Budapest
Country: Hungary
Email: [email protected]
Localisation, Registration and Visualisation of MRS Volumes of Interest on MR Images Yu Sun1,2, Nigel P. Davies1,2,3, Kal Natarajan1,2,3, Theodoros N. Arvanitis1,4, and Andrew C. Peet1,2 1
Birmingham Children's Hospital NHS Foundation Trust, Birmingham, UK 2 Cancer Sciences, University of Birmingham, Birmingham, UK 3 Medical Physics and Imaging, University Hospital Birmingham, UK 4 Biomedical Informatics, Signals and Systems Research Laboratory, School of Electronic, Electrical & Computer Engineering, University of Birmingham, Birmingham, UK

Abstract— Magnetic Resonance Imaging (MRI) produces high resolution images of anatomical features such as brain tumours, due to the contrast between different types of tissues. Magnetic Resonance Spectroscopy (MRS) can provide the metabolic characterisation of tumours that MRI cannot. The combination of anatomical features and metabolic information is most useful in clinical practice, particularly in the context of brain tumours, where biopsy carries additional difficulties because of the dangers inherent in intracranial surgery. The aim of this paper is to investigate how to combine MRI and MRS in the context of brain tumour research. The location of the MRS voxel can provide important diagnostic information about the tumour from both the imaging and spectroscopy sides. A developmental framework has been implemented for the localization, registration and visualisation of MRS voxels with MRI images. The proposed work can be used for further image analysis and spectroscopy representation.

Keywords— MRI, MRS, Voxel localisation, Registration, Visualisation.
I. INTRODUCTION

Brain tumours constitute one of the most detrimental diseases jeopardising human health [1]. MRI has been used in the assessment of brain tumours for many years; it is often the first imaging procedure performed when a brain tumour is suspected. It is non-invasive and may offer a means to determine a diagnosis pre-surgically. However, while MRI is very good at detecting the presence of tumours or lesions, it does not have a high diagnostic accuracy, with problems in the diagnosis of tumour type and grade. The most common use of MRI is to confirm or exclude tumours in the brain. An emerging radiology technology, magnetic resonance spectroscopy (MRS), provides an alternative to MRI with improved diagnostic accuracy. Unlike MRI, MR spectroscopy can reveal the biochemical characteristics of the sampled tissue, in particular its metabolic characteristics. Compared to MRI, MRS can provide more histological information, which can be used to make tissue classification and initial
diagnosis with high accuracy, and give a better prognosis and post-therapy monitoring. Since MRS is non-invasive, it is often called the 'virtual biopsy'. Combining MRI and MRS should provide a powerful technique for paediatric brain tumour clinical management and research. Identifying the MRS volume of interest on the MR images is the key to building the bridge between these two technologies. MRS can provide the metabolic characterisation of tumours that MRI cannot, but MRI provides detailed structural information not available from MRS. Furthermore, due to intra-tumour heterogeneity, the placement of the voxel within the tumour can affect the result of the MRS acquisition. When MRS scanning is performed, it is important to localize the MRS voxel in the most appropriate position in the tumour to maximize the accuracy of the diagnostic information. This paper presents a three-dimensional reconstruction of MRS voxels within brain tumours, and achieves location registration of the MRS voxel in the 2D image slices and the 3D brain. The work includes: 1. localisation and registration of the spatial location of the MRS voxel in 2D MR image slices; 2. reconstruction and visualization of the registration of the 3D voxel in 2D image slices and 3D space.
II. METHOD AND EXPERIMENT

A. Method
There are two main types of MR spectroscopy: single-voxel and multi-voxel, the latter also known as chemical shift imaging (CSI). Single-voxel spectroscopy (SVS) produces a single spectrum from a single voxel, usually of the order of 3.5 to 8 cm³ in volume. PRESS (point-resolved spectroscopy) and STEAM (stimulated echo acquisition mode) are the two main types of sequences used for single-voxel spectroscopy. CSI combines the advantages of imaging with those of a spectroscopy acquisition to obtain spectroscopic information from multiple adjacent volumes
over a large volume of interest. In 2D CSI, the defined volume of interest is normally a large slab. CSI-derived spectra are typically acquired from voxels of a similar volume to SVS, but smaller voxel sizes have been used. DICOM and MRI: in order to register the accurate spatial position of the MRS voxel in MR images, a 3D image space is constructed. The relevant image coordinate information is defined in the following DICOM tags. Patient Position (0018, 5100) specifies the position of the patient relative to the imaging equipment space. The direction of the axes is defined by the patient's orientation: the x-axis increases to the left-hand side of the patient, the y-axis increases to the posterior side of the patient, and the z-axis increases toward the head of the patient. The patient-based coordinate system is a right-handed system. Image Position (0020, 0032) specifies the x, y, and z coordinates of the upper left-hand corner of the image; it is the center of the first transmitted voxel. Image Orientation (0020, 0037) specifies the direction cosines of the first row and the first column with respect to the patient; these attributes are always provided as a pair. Each pixel location can be calculated as follows:
\[
\begin{bmatrix} P_x \\ P_y \\ P_z \\ 1 \end{bmatrix}
=
\begin{bmatrix}
X_x \Delta i & Y_x \Delta j & 0 & S_x \\
X_y \Delta i & Y_y \Delta j & 0 & S_y \\
X_z \Delta i & Y_z \Delta j & 0 & S_z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} i \\ j \\ 0 \\ 1 \end{bmatrix}
= M \begin{bmatrix} i \\ j \\ 0 \\ 1 \end{bmatrix}
\]
Where:
P_x, P_y, P_z: the coordinates of the voxel (i, j) in the frame's image plane, in units of mm.
S_x, S_y, S_z: the three values of the Image Position (Patient) (0020,0032) attribute, in mm.
X_x, X_y, X_z: the values of the row (X) direction cosine of the Image Orientation (Patient) (0020,0037) attribute.
Y_x, Y_y, Y_z: the values of the column (Y) direction cosine of the Image Orientation (Patient) (0020,0037) attribute.
i: column index into the image plane; the first column is index zero.
Δi: column pixel resolution of the Pixel Spacing (0028,0030) attribute, in units of mm.
j: row index into the image plane; the first row is index zero.
Δj: row pixel resolution of the Pixel Spacing (0028,0030) attribute, in units of mm.

MRS: the relative parameters of the MRS voxel can be extracted from the MRS raw data. For instance, in the data from Siemens scanners (Symphony, NUM4 ima files), three parameters define the coordinates of the voxel center:

sSpecPara.sVoI.sPosition.dSag = pdSag
sSpecPara.sVoI.sPosition.dCor = pdCor
sSpecPara.sVoI.sPosition.dTra = pdTra

The following three parameters specify a plane "α", whose normal is n = (a, b, c):

sSpecPara.sVoI.sNormal.dSag = ndSag = a
sSpecPara.sVoI.sNormal.dCor = ndCor = b
sSpecPara.sVoI.sNormal.dTra = ndTra = c

One of the voxel planes is parallel to plane "α". Another three parameters determine the size of the voxel (its length, width and height):

sSpecPara.sVoI.dThickness = XLength
sSpecPara.sVoI.dPhaseFOV = YLength
sSpecPara.sVoI.dReadoutFOV = ZLength

The next parameter specifies the angle (180·radian/π degrees) by which the voxel is rotated around the vector (ndSag, ndCor, ndTra):

sSpecPara.sVoI.dInPlaneRot = radian

With the above parameters from MRI and MRS, the spatial relationship of patient, image slice and voxel can be reconstructed and presented as in Figure 1.

Fig. 1 Reconstruction map
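The pixel-to-patient mapping above is standard DICOM geometry, so it can be sketched directly. The following Python sketch uses the pydicom library; the function name and the example file name are our own. Note that PixelSpacing stores the row spacing (Δj) first and the column spacing (Δi) second.

import numpy as np
import pydicom

def pixel_to_patient_matrix(ds):
    """Build the 4x4 matrix M mapping image indices (i, j, 0, 1) to patient
    coordinates (Px, Py, Pz, 1) from the DICOM attributes described above."""
    Sx, Sy, Sz = map(float, ds.ImagePositionPatient)                  # (0020,0032)
    Xx, Xy, Xz, Yx, Yy, Yz = map(float, ds.ImageOrientationPatient)   # (0020,0037)
    dj, di = map(float, ds.PixelSpacing)   # (0028,0030): row spacing, then column
    return np.array([[Xx * di, Yx * dj, 0.0, Sx],
                     [Xy * di, Yy * dj, 0.0, Sy],
                     [Xz * di, Yz * dj, 0.0, Sz],
                     [0.0,     0.0,     0.0, 1.0]])

# Example (the file name is hypothetical):
# M = pixel_to_patient_matrix(pydicom.dcmread("slice.dcm"))
# p = M @ np.array([i, j, 0, 1])   # patient-space position of pixel (i, j)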
Fig. 2 (a) Screen capture of the SVS voxel (left and middle) and CSI slab (right) from a GE scanner; (b) Reconstructed SVS voxel (left and middle) and CSI slab (right)

Fig. 4 3D view of the SVS voxel (green cube)
a) Patient Cases
After the investigation of the definition of the MRS voxel parameters, the next experiments were performed on a 1.5 T Siemens scanner. Short echo time (30 ms) single voxel spectroscopy was acquired from 3 patients, with an SVS voxel or CSI slab placed within the suspected brain tumour prior to treatment, as part of an ethically approved study and with patient/parent consent. Figure 3 illustrates the 2D view of the reconstructed SVS voxel from the sagittal, coronal and axial directions in the MR image slices. Figure 4 illustrates the 3D view of the reconstructed SVS voxel from different angles. Figure 5 presents a reconstructed CSI slab with 2D slice view and 3D view. Figure 6 presents the localization of the individual voxels within the CSI slab. There are 36 single voxels within the slab; each voxel can be accurately localized and registered in the image slice.
Fig. 5 (a) Screen capture of the CSI voxel in MR image slices (b) Reconstructed CSI slab (green slab) in 2D image slice (c) 3D view of the slab in image slices
Fig. 6 A single voxel (green cube) within the CSI slab

In the last experiment, volume rendering was implemented to visualize the CSI slab in the brain. The reconstructed brain is rendered half-transparent, so that the location of the slab can be 'seen' within the brain (see Figure 7).
Fig. 3 (a) Screen capture of the SVS voxel in MR image slices (b) Reconstructed SVS voxel (green cube) in MR image slices
Fig. 7 Volume rendering of the half-transparent brain with the CSI slab visible inside
III. CONCLUSIONS

In this paper, a scheme for localization and registration of MRS voxels and CSI slabs to MRI series has been implemented. The localisation method provides accurate spatial information of the SVS voxel or CSI slab, which can then be registered on 2D MR image slices and visualized in 2D and 3D. This method is a key step in combining information from DICOM MR images and MR spectroscopy. The combination of MRS with structural texture information from conventional MRI, cellularity from diffusion tensor imaging and blood perfusion from dynamic contrast enhanced MRI will make a powerful multimodal approach to the understanding and management of brain tumours.
ACKNOWLEDGMENT

We would like to thank the brain tumour research group of the Institute of Child Health at Birmingham Children's Hospital. We are grateful to the clinicians who have helped in this project and in particular the staff of the Radiology Department at Birmingham Children's Hospital.
REFERENCES

[1] Byrd, S. E. et al. (1996) Magnetic resonance spectroscopy (MRS) in the evaluation of pediatric brain tumors, part I: Introduction to MRS. J Natl Med Assoc 88(10):649-654
[2] Digital Imaging and Communications in Medicine (DICOM), Part 3: Information Object Definitions
[3] William J. Schroeder, The VTK User's Guide (V4.0)
Author: Yu Sun
Institute: Birmingham Children's Hospital NHS Foundation Trust
Street: Whittall Street
City: Birmingham
Country: UK
Email: [email protected]
Magnetic Marker Monitoring: A Novel Approach for Magnetic Marker Design S. Biller1,2, D. Baumgarten1,2 and J. Haueisen1,2 1
Institute for Biomedical Engineering and Informatics, Ilmenau University of Technology, Ilmenau, Germany 2 Biomagnetic Center / Department of Neurology, University Hospital Jena, Jena, Germany
Abstract— Magnetic Marker Monitoring (MMM) has high potential for determining the motility of the gastrointestinal tract as well as for observing the dissolution of pharmaceuticals. To establish a wider range of applications, stronger magnetic markers and increased flexibility of the measurement setups are required. In this study a novel marker design for MMM is investigated. The marker is composed of one permanent magnet and a compartment of iron powder. The iron powder is reallocated during dissolution, thus altering the external magnetic field. We present our first experimental results obtained for a simplified setup. The markers were produced and the magnetic induction generated by the marker was measured at different distances. Our investigations proved that the magnetic induction is in the range of a few µT, which is much higher than the pT-range fields of conventional markers. The magnetic inductions decreased by between 13% and 21% during dissolution. These results indicate that the novel marker design is well suited for application in MMM using less sensitive sensor technologies (e.g. magnetoresistive sensors).

Keywords— magnetic marker monitoring, marker design, magnetic field measurement, gastrointestinal motility, dissolution process
I. INTRODUCTION
The physiologic behavior of the gastrointestinal tract (GI-Tract) is of immense relevance for the diagnosis of functional disorders as well as for pharmaceutical research. Different studies showed that 10 - 15 % of the world’s population suffer from irritable bowel syndrome and up to 20 % from chronic constipation [1]. For detailed diagnostics of these functional disorders it is particularly essential to analyze the motility behavior of the GI-Tract and to determine the transit time of liquid and solid nutritional components [2]. The bioavailability of orally applied pharmaceuticals strongly depends on the processes of disintegration and dissolution [3]. These processes are affected by various conditions including gender, nutrient behavior and functional disorders of the patient as well as interactions with other pharmaceuticals [4]. Sufficient knowledge of these interactions is crucial for the efficient design of new pharmaceuticals. Therefore, extensive studies are necessary.
Current technologies for such investigations are very limited. Lactulose-H2 breathing tests or the Hinton test for determining the motility of the GI-Tract only allow examination of specific parts of the GI-Tract or cause considerable stress for the patient due to radiation exposure [5, 6]. The gold standard for the investigation of the dissolution process is a scintigraphic technique in which the pharmaceuticals are radioactively labeled [4]. Due to the exposure of the test person it is not possible to perform field studies using this method. The technique of Magnetic Marker Monitoring (MMM) has high potential for the determination of gastrointestinal motility and for in vivo observation of the dissolution behavior of pharmaceuticals [7-9]. After ingestion of a magnetically marked tablet, the magnetic field outside the patient is measured using highly sensitive non-invasive magnetic sensors. By solving the inverse problem, the position of the marker is calculated, allowing continual tracking of marker movements in the GI-Tract. Currently used markers mostly consist of pure magnetized Magnetite [8, 9]. Through the process of dissolution the Magnetite becomes mobile and disperses in the GI-Tract, which results in a decrease of the external magnetic induction. Although the current MMM technique has high potential for dissolution testing and motility analysis, its usability for extensive studies is limited. The current markers produce very weak magnetic fields, in the range of a few pT [8]. Due to this fact, only highly sensitive sensor technologies like Superconducting Quantum Interference Devices (SQUIDs) with active and passive shielding are applicable. SQUIDs are stationary systems which have to be cooled by cryostats. As a consequence, the measurement setup is very restrictive (e.g. patients have to stay in a supine position) and constrains the options concerning the procedure of intended studies. Due to these disadvantages, new mobile systems for MMM are required, using smaller, more cost-efficient but less sensitive sensor technologies. In order to establish a basis for such improved systems, new types of markers with stronger magnetic fields are required. In this study a novel marker design for MMM is investigated. These markers are based on the reallocation of magnetic parts during dissolution [10].
II. MATERIALS AND METHODS
A. The novel marker design The novel magnetic markers have to satisfy two major requirements: Primarily they should provide strong magnetic fields for application in combination with less sensitive sensor technologies. Additionally, these magnetic fields should be strongly influenced by the process of dissolution in order to enable investigation of this process. The novel marker consists of magnetic parts which reorder during tablet dissolution. In this study the markers are made up of one permanent magnet and a compartment of iron powder which was positioned at one side of the magnet inside the pharmaceutical. Figure 1 shows the principle of the prepared markers. During dissolution of the marker, the iron powder becomes moveable. Driven by the magnetic forces of the permanent magnet the iron powder reallocates around the magnet.
Fig. 1: Tablet with inserted permanent magnet and iron powder in cross-section. By dissolution the iron powder rearrange around the magnet and form a magnetic stable configuration.
261
Fig. 2: Prepared tablet with wooden fixation
Conventional tablets were used for investigation of this novel marker design. The tablets had a dimension of approx. 22 mm×9.1 mm×6.7 mm (length×width×height) and a weight of approx. 2 g. A hole with a diameter of 3 mm and a depth of 18 mm was drilled into the tablet along the longitudinal axis at the center of the profile. One neodymium permanent magnet (Nd2Fe14B) with a diameter of 3 mm and a height of 1.5 mm was inserted. This permanent magnet was magnetized along the axial direction with a residual flux density BR between 1.40 T and 1.46 T. Additionally approx. 250 mg of conventional iron powder was inserted and the tablet was closed with a fragment. For ensuring reproducible positioning of the tablet during the measurements, a second hole (diameter 1 mm) was drilled and the magnet was glued to a wooden stick. Figure 2 shows one of the prepared tablets.
B. Measurement setup

For the evaluation of the prepared markers, the magnetic induction of the tablets was measured before and after the dissolution process. The measurements were performed using a three-axis fluxgate magnetometer (Mag-03MSL, Bartington Instruments Ltd., UK) with a measurement range of ±100 µT and a maximum output DC voltage of ±10 V. The output voltage of the magnetometer was detected using a computer-controlled nanovoltmeter (2182A, Keithley Instruments Inc., USA). The voltage was measured synchronously to the power supply; the nanovoltmeter measured the average of the magnetometer output over one power line period. In order to reduce the effect of high-frequency interfering signals, the integrated analog low-pass filter was used, eliminating frequencies above 18 Hz with 20 dB per decade. Each measurement was repeated automatically 100 times within approx. 80 s. Before each measurement, the background magnetic induction (earth field and low-frequency interfering fields) was determined without the markers. Since the nanovoltmeter had only two channels, the measurement was restricted to two components of the magnetic induction: the direction of the magnetization axis of the magnet (z-direction) and the orthogonal direction in the horizontal plane (x-direction) were analyzed (see Fig. 3). During post-processing, the average of the 100 measurement points was calculated, resulting in one representative value for each investigated position and orientation. The background field was subtracted to obtain the fields of the magnetic markers.
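The averaging and background subtraction described above amount to a few lines of numpy. In the sketch below the field values are invented for illustration, not taken from the measured data.

import numpy as np

def marker_field(samples_xz, background_xz):
    """Average the 100 repeated (Bx, Bz) readings and subtract the averaged
    background field; both inputs are (100, 2) arrays in microtesla."""
    return samples_xz.mean(axis=0) - background_xz.mean(axis=0)

rng = np.random.default_rng(3)
background = rng.normal(48.0, 0.02, (100, 2))            # earth field + drift
with_tablet = rng.normal(48.0, 0.02, (100, 2)) + np.array([0.9, 4.6])
print(marker_field(with_tablet, background))             # ~ [0.9, 4.6] µT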
Fig. 3: Measurement setup for the test of the new markers at a distance of 5 cm before dissolution. The arrows on the fluxgate magnetometer indicate the used coordinate system. The arrow on the tablet denotes the magnetization axis of the permanent magnet.
III. RESULTS
The magnetic inductions of three tablets were measured with a distance of 5 cm and 10 cm between marker and sensor and with two different marker orientations (compartment of iron powder pointing towards and away from the sensor). The dissolution process of one prepared marker is illustrated in figure 4.
Fig. 4: Dissolution process of the tablet.
Fig. 5: Percentage reduction of the magnetic induction during tablet dissolution for distances of 5 cm and 10 cm and both tablet orientations.
The magnetic induction of the tablets before the dissolution process was in the range of 4.6 µT ± 0.5 µT at 5 cm and 1.1 µT ± 0.1 µT at 10 cm distance, respectively. These values were much larger than the fluctuations of the background field, which were in the range of ±0.02 µT. During the dissolution the iron powder changed its relative position and was reallocated around the permanent magnet. To determine the influence of this behavior, the magnetic inductions before and after the dissolution were compared, and the percentage difference was considered for every investigated measuring point. The magnitudes of the magnetic inductions were accumulated using

B_XZ = √(B_X² + B_Z²)    (1)

with B_X and B_Z as the measured components of the magnetic induction. The percentage differences (p) of B_XZ were calculated using

p = (B_XZ,1 - B_XZ,2) / B_XZ,1 · 100%    (2)

with 1 and 2 as indices for the values before and after the dissolution. Figure 5 shows the calculated differences for the three investigated tablets for both distances and orientations. Due to the rearrangement of the iron powder the magnetic induction was altered. The reduction was in the range of 13.3% to 21.3%, with a mean value of 17.3% and a standard deviation of 2.7%. Based on the acquired measurements, the percentage reduction is more reproducible when the compartment of iron powder points towards the sensor.
In this study, a novel marker design for Magnetic Marker Monitoring was presented. The markers were prepared and initial measurements were performed. The Neodymium permanent magnet in the tablets produced a magnetic induction in the range of 1 T at 10 cm and 5 T at 5 cm distance which is 106 times larger compared to conventional markers of magnetized Magnetite.. During the dissolution the iron powder reallocated around the permanent magnet. This process damped the external magnetic field generated by the marker. The magnetic induction was decreased about approx. 17 %. The analysis of the measured interfering background field showed only small fluctuation of ±0.02 T and therefore the influence can be neglected.
(2) with 1 and 2 as indices for the values before and after the dissolution. Figure 5 shows the calculated differences for the three investigated tablets for both distances and orientations. Due to the rearrangement of the iron powder the magnetic induction was altered. The reduction was in the range of 13.3 % and 21.3 % with a mean value of 17.3 % and a standard deviation of 2.7 %. Based on the acquired measurements the percental reduction is better reproducible in case of the compartment of iron powder pointing towards the sensor.
V. CONCLUSION
In this investigation we analyzed novel markers generating strong magnetic inductions which are highly affected by the process of dissolution. The strong magnetic fields and the large alteration caused by dissolution enable the use of less sensitive sensor technologies (e.g. magnetoresistive sensors) for measurement of the magnetic inductions. With the novel magnetic markers it is possible to develop a new and mobile system for magnetic marker monitoring in the human gastrointestinal tract. A system based on magnetoresistive sensor technologies will allow a wide range of applications in functional diagnostics and pharmaceutical research. It will also enable extended studies for dissolution tests with less stress for the patient. Further investigations are required to evaluate the novel markers under conditions comparable to application in
biological subjects. The dissolution process in this study was influenced by the fixation of the magnet. A phantom simulating the gastrointestinal tract will be designed for the purpose of analyzing the dissolution process in a floating medium. Additional measurements of the magnetic characteristics will be performed using a SQUID sensor system, and possible algorithms for marker localization will be tested.
ACKNOWLEDGMENT This study was partly funded by the German Research Council (DFG).
REFERENCES

1. Higgins PD, Johanson JF (2004) Epidemiology of constipation in North America: a systematic review. Am J Gastroenterol 99:750-759
2. Rao SS, Ozturk R, Laine L (2005) Clinical utility of diagnostic tests for constipation in adults: a systematic review. Am J Gastroenterol 100:1605-1615
3. Fagerholm U (2007) Prediction of human pharmacokinetics - gastrointestinal absorption. J Pharm Pharmacol 59:905-916
4. Davis J, Burton J, Connor AL et al (2009) Scintigraphic study to investigate the effect of food on a HPMC modified release formulation of UK-294,315. J Pharm Sci 98:1568-1576
5. Sanata M, Yamamoto T, Kuyama Y (2008) Retention, Fixation, and Loss of the [13C] Label: A Review for the Understanding of Gastric Emptying Breath Tests. Dig Dis Sci 53:1747-1756
6. Hinton JM, Lennard-Jones JE, Young AC (1969) A new method for studying gut transit times using radioopaque markers. Gut 10:842-847
7. Andrä W, Danan H, Eitner K et al (2005) A novel magnetic method for examination of bowel motility. Medical Physics 32:2942-2944
8. Weitschies W, Kosch O, Mönnikes H et al (2005) Magnetic Marker Monitoring: An application of biomagnetic measurement instrumentation and principles for the determination of the gastrointestinal behavior of magnetically marked solid dosage forms. Adv Drug Deliv Rev 57:1210-1222
9. Blume H, Anschütz M, Schmücker K et al (2006) Gastro-intestinal transit of solid oral dosage forms: Imaging studies using Magnetic Marker Monitoring technique. Abstractband, American College of Clinical Pharmacy 2006 Annual Meeting, October 26-29, 2006, St. Louis, Missouri
10. Haueisen J, Biller S, Hilgenfeld B et al (2008) Pharmakologische Darreichungsform und Vorrichtung und Verfahren zur Lokalisation und Verfolgung des Auflösungsvorgangs dieser. DE 10 2008 033 662.9, 11.07.08

Corresponding author:
Author: Sebastian Biller
Institute: Institute for Biomedical Engineering and Informatics, Ilmenau University of Technology
Street: Gustav-Kirchhoff-Straße 2
City: Ilmenau
Country: Germany
Email: [email protected]
Corneal nerves segmentation and morphometric parameters quantification for early detection of diabetic neuropathy Ana Ferreira1, António Miguel Morgado1,2 and José Silvestre Silva2,3 1
IBILI – Institute of Biomedical Research in Light and Image, Faculty of Medicine, University of Coimbra, Portugal 2 Department of Physics, Faculty of Sciences and Technology, University of Coimbra, Portugal 3 Instrumentation Center, Faculty of Sciences and Technology, University of Coimbra, Portugal
Abstract— Morphological parameters of the corneal sub-basal nerve plexus may be the basis of a simple and non-invasive method for detection and follow-up of diabetic neuropathy. These nerves can be analyzed from images obtained in vivo by corneal confocal microscopy. In this work we present and evaluate an automatic methodology capable of identifying corneal nerves and determining various morphometric parameters.

Keywords— diabetic neuropathy, corneal nerves, automatic segmentation, corneal confocal microscopy.

I. INTRODUCTION
The cornea is one of the most highly innervated tissues in the human body. It is possible to image the corneal layers and membranes in vivo using corneal confocal microscopy (CCM); in particular, it is possible to image the sub-basal nerve plexus and to document and quantify changes in corneal nerve morphology. There has been an increasing interest in using corneal nerves for early diagnosis and accurate assessment of peripheral neuropathy, a major cause of morbidity in diabetic patients [1, 2]; such diagnosis and assessment are important to identify higher-risk patients, decrease morbidity and assess new therapies [3]. Several published in vivo studies quantify nerve density [4], evaluate changes in the morphology of sub-basal nerves [5] or elucidate the overall distribution of these nerves [6]. Other studies demonstrated that CCM can accurately define the extent of corneal nerve damage and repair, proving that it can be used as a measure of peripheral neuropathy in diabetic patients [7]; that it allows the evaluation of corneal nerve tortuosity, a parameter related to neuropathy severity [8]; or compared basal epithelium cell density between patients with diabetic retinopathy and controls to determine whether corneal basal epithelium density is associated with alterations in corneal innervation [9]. Our group has done relevant research on diabetic corneas using CCM, showing that the number of fibers in the sub-basal nerve plexus of patients was significantly lower than in healthy humans, even for short diabetes duration, opening the possibility of using the assessment of corneal innerva-
tion by CCM for early diagnosis of peripheral neuropathy [5], a result later confirmed by other authors [10]. Currently, corneal nerve analysis is based on a tedious process of manual tracing of the nerves, using confocal microscope built-in software [11], commercial programs [11-14] or software specifically developed for the purpose [14]. The extraction of clinical information is subjective and prone to errors. Thus, an automatic tool capable of extracting and quantifying the sub-basal plexus morphometric parameters may be the ideal method to evaluate nerve pathologies in diabetic patients and may constitute a basis for diabetic neuropathy diagnosis [15, 16]. Scarpa et al. [17] proposed automatic methods for the recognition and tracing of the corneal nerve structures. The nerves were recognized by a tracing algorithm based on filtering and pixel classification methods, with post-processing to remove false recognitions and link sparse segments into continuous structures. Automatic and manual length estimations on the same image were well correlated. In the past, we proposed an automatic method capable of identifying straight nerves [18]; when nerves had a curved shape or sudden changes of direction, additional processing was necessary. For this reason, an entropy-based method was developed, with considerably better results [19]. However, the pre-processing step induced noise, resulting in false nerve branches. This prompted further improvements, leading to a new algorithm capable of reliably extracting the nerve structure and measuring morphometric parameters. The development of such a tool is reported in this work.
II. MATERIALS AND METHODS
A. Corneal nerves segmentation

1. Image acquisition
We used corneal nerve images acquired in vivo by researchers at the University of Padova, from diabetic and non-diabetic patients, using a CCM (ConfoScan4, Nidek Technologies, Padova, Italy) with a 460×350 µm field of
view using a 40X objective, and compressed in JPEG monochrome format, with a size of 768×576 pixels. These images are available online [17].
2. Image Pre-Processing
In a CCM image, the background is often characterized by a gradual intensity variation from the periphery to the center, with the central region being brighter. Nerves stand out from the background and normally appear as bright linear structures over a dark background. To correct this non-uniformity of contrast, it is necessary to apply a pre-processing method to the images before segmentation. We applied local equalization to the original images, based on the histogram of a region of size 8×8 pixels, to increase the contrast. In order to enhance the boundaries of structures in the image, a phase symmetry algorithm, based on local frequency information analysis, was used. This overcomes the need to segment the objects first, while not providing any absolute measure of the degree of symmetry at any point in the image [13]. Finally, we investigated the histogram of the image, applying the highest dynamic threshold such that at least 10% of the image pixels are above that threshold. Thus, some noise is removed and edges that correspond to nerves are identified, as the number of nerve pixels is usually less than 10% of the total pixels of the image.

3. Nerves reconstruction
The recognition of the nerves involves several steps, but is mainly based on the region-growing approach. It starts with two region-growth applications to the binary image: one from all the pixels that are 5% distant from the margin and another from pixels 35% away from the image border. A comparison between those regions that have grown (nerves) and those regions that have not grown (noise) is made, removing the noise. Then several morphological operations are applied: the morphological skeleton of the image is computed and branches with fewer than 10 pixels (spurious branches) are removed. After that, each disconnected region in the image is identified; those that are isolated and have a small area are discarded, as they are regions of consistent noise or small nerves which do not represent continuous structures. The resulting nerves are compared with the original image just after the threshold, and their endpoints are grown along the major axis to reconstruct the nerves to their original dimensions.
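A rough Python approximation of this pipeline is sketched below using scikit-image. CLAHE stands in for the 8×8 sliding-window equalization, and the phase symmetry and region-growing steps are omitted, so this is only a skeleton of the paper's method, not its implementation.

import numpy as np
from skimage import exposure, morphology

def preprocess(image):
    """Skeleton of the pre-processing chain for a grayscale CCM image:
    local contrast equalization, keep the brightest ~10% of pixels as
    nerve candidates, then clean and thin the binary map."""
    eq = exposure.equalize_adapthist(image, kernel_size=8)
    binary = eq > np.percentile(eq, 90)          # top 10% of pixel intensities
    # Discard small isolated regions, then thin the remainder to a skeleton;
    # the paper's pruning of branches under 10 pixels is not reproduced here.
    binary = morphology.remove_small_objects(binary, min_size=50)
    return morphology.skeletonize(binary)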
B. Morphometric parameters calculation

1. Length
The lengths (µm) of the nerve structures were calculated by simply computing the size of the nerve skeleton.

2. Tortuosity Coefficient
The Tortuosity Coefficient (TC) is a parameter that gives information on the frequency and magnitude of nerve curvature changes. To calculate it, we consider each nerve as a mathematical function on the image space and compute the function's first and second derivatives [8]. In order to treat each nerve as a mathematical function, we find its endpoints, draw a straight line between them and rotate the image, aligning the straight line with the x-axis. The TC is calculated by:
\[
TC = \sum_{i=1}^{N-1} \left[ \left( f'(x_i, y_i) \right)^2 + \left( f''(x_i, y_i) \right)^2 \right] \tag{1}
\]
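A direct numerical evaluation of equation (1) might look as follows; np.gradient is our choice of derivative estimator, and the input is assumed to be an already-rotated skeleton.

import numpy as np

def tortuosity_coefficient(x, y):
    """Equation (1) on a nerve skeleton already rotated so that the chord
    between its endpoints lies on the x-axis; x and y are ordered 1-D arrays."""
    f1 = np.gradient(y, x)                      # estimate of f'(x)
    f2 = np.gradient(f1, x)                     # estimate of f''(x)
    return np.sum(f1[:-1] ** 2 + f2[:-1] ** 2)  # sum over i = 1 .. N-1

x = np.linspace(0.0, 100.0, 200)
print(tortuosity_coefficient(x, np.zeros_like(x)))        # straight nerve: 0
print(tortuosity_coefficient(x, 5.0 * np.sin(x / 10.0)))  # curved nerve: > 0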
where N is the number of pixels of the nerve skeleton, and f'(x_i, y_i) and f''(x_i, y_i) are the first and second derivatives at the point (x_i, y_i), respectively.

C. Performance evaluation
The automatic algorithm was evaluated against manual segmentation of the corneal nerves by an experienced ophthalmologist. Pixel classification (nerve or non-nerve) was compared between automatically and manually segmented images, and the nerve length correctly recognized by the algorithm was compared with the nerve length traced manually. The manual nerve segmentation and length measurement were accomplished with the help of the Simple Neurite Tracer plug-in for Fiji [20].
III. RESULTS AND DISCUSSION
Fifteen (15) corneal nerve images were tested using the proposed methods. Fig. 1 shows a representative example of the results obtained with the corneal nerve segmentation algorithm. To evaluate the performance of the method, we compared the nerve length correctly recognized by the algorithm with the length of manually traced nerves on the same image.
Fig. 1 Representative example of the corneal nerves segmentation method: (a) original image, (b) normalized image, (c) after phase shift, (d) segmented image.
The average percentage of nerve length correctly segmented by the algorithm was 87.1% ± 8.1% (range: 73.5% - 96.8%). No image structures were falsely reported as nerves by the algorithm, although there are pixels falsely classified as nerves (when compared with manual segmentation) due to differences in the nerve widths. Fig. 2 shows a Bland and Altman plot [21] for the comparison between automatic and manual nerve length measurement. The average difference between nerve lengths was -38.0 ± 45.8 µm. This means that, in 95% of the cases, the difference between nerve lengths measured automatically and manually will lie between -127.7 and 51.7 µm. These limits, as well as the average difference, are shown in the plot. These results are similar to those reported in the literature, which also show underestimation by the automatic method [17].

Fig. 2 Comparison of nerve length measurement between automatic and manual segmentation

In the segmentation process every image pixel is classified either as nerve or non-nerve. By comparing the outcome of the automatic segmentation with the manual segmentation results, which are taken as the standard, it is possible to classify every image pixel according to four events: a true positive (TP) or a true negative (TN), when a pixel is classified in the same way by the automatic and manual segmentation processes; a false negative (FN), when a pixel classified as nerve by the manual process is segmented as non-nerve by the automatic algorithm; and a false positive (FP), when a non-nerve pixel is segmented as nerve by the automatic algorithm. From these events it is possible to calculate the sensitivity and specificity of the automatic segmentation algorithm. The sensitivity measures the proportion of true positives, while the specificity measures the proportion of true negatives:

sensitivity = TP / (TP + FN),    specificity = TN / (TN + FP)    (2)

There is a tradeoff between these two figures. Our option was to minimize the false positive rate, since we considered that it is more important to prevent the identification of false nerves than to identify correctly all the nerve length, as nerve morphometric parameters like the Tortuosity Coefficient can be successfully extracted from nerve segments. The false positive rate (FPR) is defined according to:
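The 95% limits quoted above follow from the usual mean ± 1.96·SD construction, as the following sketch (with a hypothetical helper name) verifies.

import numpy as np

def limits_of_agreement(auto_lengths, manual_lengths):
    """95% Bland-Altman limits of agreement (mean difference +/- 1.96 SD)
    for paired nerve length measurements in micrometres."""
    d = np.asarray(auto_lengths) - np.asarray(manual_lengths)
    mean, sd = d.mean(), d.std(ddof=1)
    return mean - 1.96 * sd, mean, mean + 1.96 * sd

# With the reported mean of -38.0 µm and SD of 45.8 µm, the limits evaluate
# to -38.0 - 1.96*45.8 = -127.8 and -38.0 + 1.96*45.8 = 51.8, matching the
# interval quoted in the text up to rounding.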
FPR = FP / (FP + TN) = 1 - specificity    (3)
The accuracy of the segmentation is defined by

accuracy = (TP + TN) / (P + N)   (4)
where P and N are the total number of positive and negative pixels in the segmentation process. For the set of 15 segmented images, the average sensitivity was 66.6% ± 10.4% (range: 48.3% - 77.7%). This value results not only from nerve segments that were not identified as such by the automatic segmentation but mainly from differences in nerve width between manual and automatic segmentation. The average specificity was 99.6% ± 0.2% (range: 99.3% - 99.9%), which is equivalent to an FPR of 0.4%. This shows that no corneal structures were falsely classified as nerves. The average accuracy of the automatic segmentation was 98.6% ± 0.5% (range: 97.9% - 99.2%). From the nerve representation obtained through automatic segmentation we extracted the TC morphometric parameter. The average value of the TC was 26.8 ± 10.5 (range: 15.3 - 33.3). This value agrees with those previously reported, using the same definition of tortuosity, for non-diabetic and mild-neuropathy diabetic individuals [8]. The proposed algorithm for nerve identification was fully automatic, requiring no user intervention. Running times were around 3 minutes on an Intel® Centrino® Core™2 Duo computer at 2.4 GHz. In conclusion, the developed algorithm produced good results, in terms of nerves detected and nerve length measurement, while providing excellent specificity. It yields Tortuosity Coefficients in agreement with those found in the literature. The issues related to non-uniform contrast and luminosity were successfully solved by pre-processing the images with local equalization and phase-shift-based methods. There is still room for improvement, particularly when dealing with images containing nerve branches. In our opinion, the need for a simple, non-invasive technique, capable of accurately documenting the extent of nerve damage and repair, for early diagnosis of peripheral diabetic neuropathy, can be addressed through the evaluation of corneal nerve morphology using CCM images. In this work we presented an automatic algorithm for the analysis of corneal sub-basal nerve plexus images. This work is part of a broader project that aims to develop a non-invasive technique for early diagnosis and monitoring of diabetic neuropathy.
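Eqs. (2)-(4) can be computed directly from the two binary segmentations. The following is a generic NumPy sketch, not the authors' implementation:

```python
# Pixel-wise metrics of Eqs. (2)-(4) for boolean masks (True = nerve pixel).
import numpy as np

def segmentation_metrics(auto_mask, manual_mask):
    TP = np.sum(auto_mask & manual_mask)
    TN = np.sum(~auto_mask & ~manual_mask)
    FP = np.sum(auto_mask & ~manual_mask)
    FN = np.sum(~auto_mask & manual_mask)
    return {
        "sensitivity": TP / (TP + FN),           # Eq. (2)
        "specificity": TN / (TN + FP),           # Eq. (2)
        "FPR": FP / (FP + TN),                   # Eq. (3), = 1 - specificity
        "accuracy": (TP + TN) / auto_mask.size,  # Eq. (4), P + N = all pixels
    }
```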
REFERENCES
1. Gooch C. et al. (2004) The Diabetic Neuropathies. The Neurologist, vol. 10, pp. 311-322
2. Hossain P., Sachdev A. et al. (2005) Early detection of diabetic peripheral neuropathy with corneal confocal microscopy. The Lancet, vol. 366, pp. 1340-1343
3. Park T.S., Park J.H. et al. (2004) Can diabetic neuropathy be prevented? Diabetes Research and Clinical Practice, vol. 66, pp. S53-S56
4. Grupcheva C.N., Wong T. et al. (2002) Assessing the sub-basal nerve plexus of the living healthy human cornea by in vivo confocal microscopy. Clinical and Experimental Ophthalmology, vol. 30, pp. 187-190
5. Popper M., Quadrado M.J., Morgado A.M., Murta J.N., Van Best J.A., Muller L.J. (2005) Subbasal Nerves and Highly Reflective Cells in Corneas of Diabetic Patients: In vivo Evaluation by Confocal Microscopy. Invest. Ophthalmol. Vis. Sci., vol. 46
6. Patel D.V., McGhee et al. (2005) Mapping of the Normal Human Corneal Sub-Basal Nerve Plexus by In Vivo Laser Scanning Confocal Microscopy. Invest. Ophthalmol. Vis. Sci., vol. 46, pp. 4485-4488
7. Malik R.A., Kallinikos P., Abbott C.A., van Schie C.H.M., Morgan P., Efron N., Boulton et al. (2003) Corneal confocal microscopy: a non-invasive surrogate of nerve fibre damage and repair in diabetic patients. Diabetologia, vol. 46, pp. 683-688
8. Kallinikos P., Berhanu M., O'Donnell C., Boulton A.J.M., Efron N. et al. (2004) Corneal Nerve Tortuosity in Diabetic Patients with Neuropathy. Invest. Ophthalmol. Vis. Sci., vol. 45, pp. 418-422
9. Chang P.Y., Carrel H., Huang J.S., Wang I.J., Hou Y.C., Chen W.L., Wang J.Y. et al. (2006) Decreased density of corneal basal epithelium and subbasal corneal nerve bundle changes in patients with diabetic retinopathy. American Journal of Ophthalmology, vol. 142, pp. 488-490
10. Midena E., Brugin E., Ghirlando A., Sommavilla M. et al. (2006) Corneal diabetic neuropathy: A confocal microscopy study. Journal of Refractive Surgery, vol. 22, pp. S1047-S1052
11. Frangi A.F., Niessen W.J., Vincken K.L., Viergever M.A. (1998) Multiscale vessel enhancement filtering. Medical Image Computing and Computer-Assisted Intervention - MICCAI'98, vol. 1496, pp. 130-137
12. McLaren J.W., Nau C.B., Kitzmann A.S., Bourne W.M. (2004) Keratocyte Density: Comparison of Two Confocal Microscopes. Invest. Ophthalmol. Vis. Sci., vol. 45
13. Kovesi P. (1997) Symmetry and Asymmetry from Local Phase. Tenth Australian Joint Conference on Artificial Intelligence
14. Bilgin C., Bullough P., Plopper G., Yener B. ECM-aware cell-graph mining for bone tissue modeling and classification. Data Mining and Knowledge Discovery
15. Stachs O., Zhivov A., Kraak R., Stave J., Guthoff R. (2007) In vivo three-dimensional confocal laser scanning microscopy of the epithelial nerve structure in the human cornea. Graefes Archive for Clinical and Experimental Ophthalmology, vol. 245, pp. 569-575
16. Mocan M.C., Durukan I., Irkec M. et al. (2006) Morphologic alterations of both the stromal and subbasal nerves in the corneas of patients with diabetes. Cornea, vol. 25, pp. 769-773
17. Scarpa F., Grisan E. et al. (2008) Automatic Recognition of Corneal Nerve Structures in Images from Confocal Microscopy. Investigative Ophthalmology & Visual Science, vol. 49, pp. 4801-4807
18. Silva J.S., Morgado A.M. (2008) Caracterização dos Nervos da Córnea para o Diagnóstico de Diabetes. RecPad (Conferência Portuguesa de Reconhecimento de Padrões)
19. Ferreira A., Morgado A.M., Silva J.S. (2009) Corneal nerves identification for earlier diagnosis and follow-up of diabetes. RecPad (Conferência Portuguesa de Reconhecimento de Padrões)
20. Simple Neurite Tracer plug-in for Fiji, http://homepages.inf.ed.ac.uk/s9808248/imagej/tracer/
21. Bland J.M., Altman D.G. (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, vol. 1, pp. 307-310
Novel Catheters for In Vivo Research and Pharmaceutical Trials Providing Direct Access to Extracellular Space of Target Tissues

M. Bodenlenz1, C. Hoefferer1, F. Feichtner1, C. Magnes1, R. Schaller1, J. Priedl1, T. Birngruber1, F. Sinner1, L. Schaupp2, S. Korsatko2 and T.R. Pieber1,2

1 Joanneum Research, Institute of Medical Technologies and Health Management, Graz, Austria
2 Dept. of Internal Medicine, Div. of Endocrinology and Nuclear Medicine, Medical University Graz, Austria
Abstract— Medical and pharmaceutical research frequently requires direct access to the site of action of drugs and the withdrawal of tissue samples rather than blood samples. As alternatives to invasive biopsy procedures, less invasive continuous sampling techniques such as Microdialysis (µD) and Open-Flow Microperfusion (OFM) have been developed since the 1970s. While µD-catheters recover substances through semi-permeable membranes, OFM-catheters are membrane-free and thus permeable for all substances of interest at tissue level. We aimed at utilizing the advantages of the OFM principle and at developing catheters suitable for intradermal insertion to facilitate research in dermatology. Moreover, we aimed at demonstrating the feasibility of sampling lipophilic drugs from the dermis of patients in a clinical trial. Following development, a trial was performed in 12 psoriatic patients. Lesional and non-lesional skin sites were treated for 8 days with a topical lipophilic immunoactive drug. Six catheters were implanted and OFM was performed pre-dose and 24 hrs post-dose on day 1 and day 8 for assessment of drug levels (by cap-LC-MS/MS) and corresponding cytokine levels (TNFα, by ELISA). As a reference for drug levels, standard skin biopsies were taken at 4 hrs post-dose. Dermal OFM was able to recover the lipophilic drug and TNFα and to provide 24 hrs profiles of drug kinetics and action. The calculated AUC0-24 showed significantly higher drug levels on day 8. In contrast, the skin biopsy procedure could not demonstrate any drug accumulation. Moreover, OFM profiles indicated a suppression of local TNFα release on day 8. We conclude that OFM can overcome limitations of state-of-the-art methods and thus represents an alternative for basic research, the assessment of skin barrier function and drug penetration from topically applied formulations. The successful trial also led to the development of advanced catheters at medical product quality, available by mid-2010.

Keywords— open-flow microperfusion, catheters, human, skin, lipophilic

I. INTRODUCTION
Many decisions in drug development and medical practice are based on measuring blood concentrations of endogenous and exogenous molecules. Yet most biochemical and pharmacological events take place in the tissues. Also, most drugs, with few notable exceptions, exert their effects not
within the bloodstream, but in defined target tissues into which drugs have to distribute from the central compartment [1]. Multiple skin biopsies are the state of the art but are poorly tolerated by patients because of their invasiveness. As alternatives to invasive biopsy procedures, less invasive continuous sampling techniques such as Microdialysis (µD) [2] and Open-Flow Microperfusion (OFM) [3-6] (Fig. 1) have been developed and successfully used in preclinical and clinical research since the 1970s.
Fig. 1 Schematic representation of an OFM sampling system. A catheter is inserted into tissue and perfused. Partial equilibration between perfusate and interstitial fluid occurs. Fluid is transported by a peristaltic pump through the tubing system to a collecting vial [4].

Both methods continuously deliver samples from the interstitial fluid (ISF) compartment of the tissues and can thus be considered 'data-rich'. While µD-catheters recover substances from the tissues through semi-permeable membranes, OFM-catheters are membrane-free and thus permeable for all substances of interest at tissue level. Microdialysis has also been applied in the dermis of the skin for dermatological research during the last decade. However, research has repeatedly been hampered by low recovery of large molecules and by low recovery of lipophilic solutes binding to membrane and
catheter material [1, 7-9]. These limitations have not been observed in studies with OFM for large molecules like albumin [4] and substances with increased affinity to proteins, like the peptide hormone insulin [6]. These positive experiences with OFM in metabolic research in adipose and skeletal muscle tissue suggested that the principle of OFM might also help to overcome the limitations in skin research. We thus aimed at developing an OFM-based approach with membrane-free sampling probes to overcome these limitations and enlarge the options in skin research. In a recent proof of concept (PoC) trial we investigated whether such a novel approach also enables the assessment of the pharmacokinetics (PK) and pharmacodynamics (PD) of a topically applied lipophilic drug.
II. MATERIALS AND METHODS
The principle of open-flow microperfusion (OFM) was utilized and adapted for skin. A novel linear catheter was designed which can be introduced by a small needle. A helical slit enables direct access to dermal ISF with all its constituents for subsequent assay (Figs. 2 and 3). The recent PoC trial for lipophilic sampling was performed with a topical lipophilic drug (LogP = 3, MW 410 Da) selectively inhibiting p38. The p38 pathway is suggested to be critically involved in both the innate and the adaptive immune system and in the production and signalling of the main pro-inflammatory cytokines, and thus to be crucially involved in psoriasis pathology. Twelve patients diagnosed with untreated psoriatic lesions were enrolled in a single-center, open-label exploratory trial. Four defined skin sites of 2.54 cm2 (2 lesional + 2 non-lesional) were treated once daily with the lipophilic drug in a 0.5% cream formulation from day 1 to day 8. On day 1 and day 8, six dermal OFM-probes were inserted into the dermis (3 lesional + 3 non-lesional) to continuously obtain dermal interstitial fluid (ISF) in fractions (15 µl/hr/probe) pre-dose and up to 24 hrs post-dose (Fig. 4). In addition, state-of-the-art skin punch biopsies (diam. 3 mm) were taken 4 hrs post-dose on days 1 and 8 following tape stripping (6 stripes) (Fig. 5). ISF samples were analyzed for the drug (Fig. 6) and the cytokine TNFα (Fig. 7). Skin biopsies were analyzed for total drug concentrations (Fig. 5). Highly sensitive, validated bioanalytical methods, tailored in house for minimal volumes and concentrations, were used (drug: cap-LC-MS/MS, LLOQ: 0.033 ng/ml; TNFα: ELISA, LLOQ: 10 pg/ml, 10 µl of sample volume).
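The AUC0-24 values reported for such fraction profiles can be obtained with the trapezoidal rule. A sketch under assumed sampling times and concentrations (the paper does not state its integration method, and the values below are illustrative, not trial data):

```python
# AUC0-24 of a dermal ISF concentration profile by the trapezoidal rule.
import numpy as np

t_hr = np.array([0.0, 3.0, 6.0, 12.0, 24.0])        # hours post-dose (hypothetical)
c_ng_ml = np.array([0.0, 0.04, 0.10, 0.18, 0.25])   # drug in ISF (hypothetical)

auc_0_24 = np.trapz(c_ng_ml, t_hr)                  # units: ng*h/ml
print(f"AUC0-24 = {auc_0_24:.2f} ng*h/ml")
```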
Fig. 2 Concept of continuous sampling of the entire dermal interstitial fluid matrix using a minimally-invasive linear catheter design
Fig. 3 Dermal OFM-probe (OD 0.4mm, PTFE) with helical slit as exchange area
Fig. 4 Dermal OFM applied to lesional (3x) and non-lesional skin (3x) on a psoriatic patient using a fully portable setup for sample collection
Fig. 5 Total dermal drug levels at 4 hrs post-dose on days 1 and 8 as assessed by (invasive) 3 mm skin punch biopsies in lesional and non-lesional sites following ~6-fold tape stripping. This state-of-the-art method cannot show the drug accumulation from day 1 to day 8.
III. RESULTS
In the recent PoC trial on intradermal lipophilic sampling, more than 500 dermal interstitial fluid samples (by OFM) and biopsies from lesional and non-lesional skin were obtained and analysed. On day 1 the drug was already detected (>LLOQ) in ISF samples around ~3 hours post-dose in many patients (Fig. 6). On day 8, pre-dose drug levels were above LLOQ and further increased up to a maximum of ~0.25 ng/ml after the last drug application. TNFα levels were low on day 1, peaking at ~12 hrs post-dose at approx. 20 pg/ml, and remained below LLOQ on day 8, with the decrease being significant in lesional skin (Fig. 7). Biopsy drug levels in lesional skin (~40 ng/ml) were significantly higher than in non-lesional skin (~8 ng/ml) on days 1 and 8 (Fig. 5). In general, biopsy drug levels as reference measurements showed high variability and thus gave no trend information.
IV. CONCLUSIONS AND OUTLOOK
The principle of OFM was successfully implemented and adapted for the specific requirements of the organ 'skin'. The recent trial proves that this membrane-free technique also provides access to lipophilic drugs within the skin for assessing PK-PD. OFM again proved to provide access to virtually all molecules of interest in the ISF at target tissue level, regardless of molecular size or electrochemical properties [10] (Fig. 8).
Fig. 6 24 hrs dermal interstitial drug kinetics as assessed from OFM samples. OFM shows an increase from day 1 to day 8.
Fig. 8 Range of applications for μ-Dialysis and the membrane-free method of Open-Flow Microperfusion (OFM)
Fig. 7 24 hrs dermal interstitial TNFα profiles (4 points: pre-dose, 6 hrs, 12 hrs, 24 hrs post-dose) as assessed from OFM samples
We conclude that dermal OFM can overcome limitations of state-of-the-art methods and thus represents an alternative for basic research, the assessment of skin barrier function in normal and pathophysiologic states, and drug penetration from topically applied formulations. This successful trial led to the development of advanced catheter material for professional users, with further improved exchange properties and handling. To accomplish
this, all guidelines for medical product design (e.g. Quality Management EN ISO 13485:2003, Risk Management EN ISO 14971:2007) were implemented at the Institute in 2009. The novel catheter design is based on reinforced coated tubing (OD 0.32 mm, polyimide-steel-PTFE) with a mesh-like exchange area featuring highly efficient recovery of interstitial fluid and solutes (Fig. 9). This novel dermal catheter will be provided to research partners as a CE-conform medical product starting mid-2010.
Fig. 9 Novel linear OFM-probe (OD 0.32mm) with a mesh-like exchange area available in CE-product quality.
The institute is experienced in collaborative research at the European level. We provide competence in the design of minimally-invasive body interfaces, highly sensitive bioanalytics, medical product design at industry/product quality, and preclinical and clinical testing of products.
ACKNOWLEDGMENT

The development of novel material for clinical application is performed within the project 'CASE', funded by the programme 'Research Studios Austria' of the Federal Ministry of Economy, Family and Youth of the Republic of Austria. We thank Monika Majerowicz for editing and laying out this paper at the last minute.
REFERENCES
1. Chaurasia CS, Müller M, Bashaw ED, Benfeldt E et al. (2007) AAPS-FDA workshop white paper: microdialysis principles, application and regulatory perspectives. Pharm Res. 24(5):1014-25
2. Ungerstedt U, Hallström A (1987) In vivo microdialysis - a new approach to the analysis of neurotransmitters in the brain. Life Sci. 41(7):861-4
3. Schaupp L, Ellmerer M, Brunner GA, Wutte A, Sendlhofer G, Trajanoski Z, Skrabal F, Pieber TR, Wach P (1999) Direct access to interstitial fluid in adipose tissue in humans by use of open-flow microperfusion. Am J Physiol. 276(2 Pt 1):E401-8
4. Ellmerer M, Schaupp L, Brunner GA, Sendlhofer G, Wutte A, Wach P, Pieber TR (2000) Measurement of interstitial albumin in human skeletal muscle and adipose tissue by open-flow microperfusion. Am J Physiol Endocrinol Metab. 278(2):E352-6
5. Trajanoski Z, Brunner GA, Schaupp L, Ellmerer M, Wach P, Pieber TR, Kotanko P, Skrabal F (1997) Open-flow microperfusion of subcutaneous adipose tissue for on-line continuous ex vivo measurement of glucose concentration. Diabetes Care 20(7):1114-21
6. Bodenlenz M, Schaupp LA, Druml T, Sommer R, Wutte A, Schaller HC, Sinner F, Wach P, Pieber TR (2005) Measurement of interstitial insulin in human adipose and muscle tissue under moderate hyperinsulinemia by means of direct interstitial access. Am J Physiol Endocrinol Metab. 289(2):E296-300
7. Benfeldt E, Hansen SH, Vølund A, Menné T, Shah VP (2007) Bioequivalence of topical formulations in humans: evaluation by dermal microdialysis sampling and the dermatopharmacokinetic method. J Invest Dermatol. 127(1):170-8
8. Garcia Ortiz P, Hansen SH, Shah VP, Menné T, Benfeldt E (2009) Impact of adult atopic dermatitis on topical drug penetration: assessment by cutaneous microdialysis and tape stripping. Acta Derm Venereol. 89(1):33-8
9. Benfeldt E, Groth L (1998) Feasibility of measuring lipophilic or protein-bound drugs in the dermis by in vivo microdialysis after topical or systemic drug administration. Acta Derm Venereol. 78(4):274-8
10. Bodenlenz M, Schaupp LA, Höfferer C, Schaller R, Feichtner F, Magnes C, Suppan M, Pickl K, Sinner F, Wutte A, Korsatko S, Köhler G, Legat FJ, Hijazi Y, Neddermann D, Jung T, Pieber TR (2009) A novel approach for investigations into skin barrier function and drug penetration. Poster presentation at the 3rd Symposium of Skin and Formulation & 10th Annual Meeting of the Skin Forum, Versailles, France
Author: Manfred Bodenlenz Institute: Joanneum Research, Institute of Medical Technologies and Health Management, Graz, Austria Street: Elisabethstrasse 11a, A-8010 Graz City: Graz Country: Austria Email: [email protected]
Statistical Texture Analysis of MRI Images to Classify Patients Affected by Multiple Sclerosis

A. Faro1, D. Giordano1, C. Spampinato1 and M. Pennisi2

1 Department of Informatics and Telecommunication Engineering, University of Catania, Viale Andrea Doria 6, 95125 Catania, Italy
2 Department of Neuroscience, University of Catania, Via S. Sofia 86 - Policlinico Universitario, 95125 Catania, Italy

Abstract— This paper proposes a vision-based system for helping neurologists in the diagnosis of multiple sclerosis (MS) by analyzing textures of the white matter extracted from T2 MRI (Magnetic Resonance Imaging) images. The proposed system consists of three connected subsystems: 1) pre-processing, 2) statistical feature extraction and 3) MRI slice classification. The lesions are extracted, in the first step, by using image processing techniques such as morphological and edge detection filters. The diagnosis of the considered MRI slice is based on the analysis of the texture features computed on the white matter highlighted by the first module. More in detail, the statistical moments of the image gray levels are evaluated and further used by the classification module, implemented with an MLP neural network. The proposed method was tested on a set of 250 simulated MRI images and on a set of 20 real patients, achieving very promising results both in terms of sensitivity and specificity.

Keywords— Multiple Sclerosis, MRI processing, Neural Network Classification, Statistical Texture Analysis
I. INTRODUCTION

Multiple Sclerosis (MS) is a serious disease of the central nervous system, i.e., the brain and spinal cord, and is characterized by a progressive degeneration and destruction of myelin, a white substance made up of fatty acids, which has an important role in neural transmission in the central nervous system. The destruction of myelin sheaths in the central nervous system causes the blocking or slowing of nerve impulse conduction. The areas affected by myelin destruction, called demyelination plaques, have an irregular, often elliptical shape, with the major axes centered on venules and a multifocal distribution located at the white matter level. Sometimes the plaques extend into the gray matter. Magnetic Resonance Imaging (MRI) is a special type of imaging technique that provides detailed images of the brain and spinal cord and allows neuro-radiologists to detect
lesions or plaques. Depending on the kind of lesions and plaques, medical doctors diagnose neurological diseases and prescribe a therapy. MRI can clearly show the size, quantity and distribution of demyelination plaques and provides information on multiple sclerosis activity and progression in clinical trials of new pharmaceuticals. Since the lesions or plaques are usually numerous and of very different sizes, the detection performed by neuro-radiologists is a complex and tedious task with a high intra- and inter-rater variability [1]. Indeed, the atypical clinical and radiographic features of large demyelinating plaques may often lead to an erroneous diagnosis of a brain tumor, an infection, or demyelination from other causes. To aid medical doctors in the diagnosis of abnormalities from a brain MR image, researchers have developed different image processing and image analysis techniques for brain tissue segmentation, e.g. [2], [3], and for white matter segmentation [4], [5]. Moreover, in the last years, automatic detection and segmentation of white matter lesions has been commonly performed using fluid attenuation inversion recovery (FLAIR) images rather than T2 images, since it has been proved that FLAIR images are very robust with respect to hyperintensity artifacts [6], [7]. However, such algorithms are of little use in normal clinical practice, where T2 images are collected instead of FLAIR sequences, since the latter require a longer exposure time. All the existing methods aim at detecting the lesions, thus considering only the local effects produced by the disease. Instead, diseases such as MS or brain tumors usually show widespread effects in the white matter; in fact, in these cases not only lesions but also multiple small spots can be detected. In image processing terms, different diseases produce different textures in MRI slices. This is why we propose a system that processes the white matter as a whole and automatically classifies whether a patient is affected by MS, by extracting texture features of the white matter obtained using T2 MRI. The remainder of the paper is as follows: in the next section, Magnetic Resonance Imaging concepts and the data acquisition
method are shown. In Section III the proposed system is described. In the last two sections, experimental results, concluding remarks and future work are reported.
II. MAGNETIC RESONANCE IMAGING AND DATA ACQUISITION METHOD
Magnetic Resonance Imaging (MRI) is a modern and powerful imaging technique, which offers several advantages in comparison to other techniques (radiography, ultrasound, CAT scan, etc.), including no exposure of the patient to ionizing radiation and high-resolution images at any plane (not only the transverse plane). An example of an MRI projection is shown in Fig. 1.
Fig. 2: MRI examples: (a) normal MRI; (b) MRI affected by multiple sclerosis
III. T HE P ROPOSED S YSTEM The proposed system consists of three main processing modules: Fig. 1: MRI projection There are several kinds of MRI images, depending on the relaxation times of the magnetic field. Two of the most common times are: 1) T 1 that is the longitudinal relaxation time and represents the time required to recover 63% of the total value of magnetism along the main direction of the field and 2) T 2, which, instead, represents the transverse relaxation time that is the time required for the annulment of 63% of the transverse magnetization in ideal conditions. The MRI images used in this work were generated by two simulators, the Simulated Brain Database (SBD) [8] for the normal and MS affected MRI images and Simulated Brain Tumor MRI Database [9] for the MRI Images affected by tumors. Fig. 2 shows two kinds of MRI images of human brains generated by SBD: normal (fig.2(a)) and affected by multiple sclerosis (fig.2(b)). The simulators were instrumental to build the neural network classifier, since, as is known, large datasets are required in order to achieve good performance and hospitals do not have such big datasets available. Therefore, we built our classifier by using a large set of simulated images and
• A pre-processing module that aims at enhancing the white matter and its lesions and at removing not interesting objects; • A features extraction module that aims at computing the white matter texture features to be passed to the classification step; • A classification module based on a feed-forward neural network to establish if a MRI image belongs to a patient affected by MS or not, depending on the previously extracted features. The proposed system works on a single MRI slice that is the slice where the demyelination plaques or lesions are more visible. For extracting this slice, we have built an histogram model for normal patients (called HN , computed using 100 MRI), and for each slice of the considered MRI we compute the value I = ∑255 i=1 Hcurrent−slice − HN . The selected slice is the one with the biggest I value. A. The pre-processing module The preprocessing step involves image enhancement techniques to highlight structures of interest. Because approxi-
A. The pre-processing module

The preprocessing step involves image enhancement techniques to highlight structures of interest. Because approximately 95% of all multiple sclerosis lesions occur in the white matter of the brain, this step involves the application of morphological processing and thresholding techniques in order to produce a mask image containing only white matter and Multiple Sclerosis (MS) lesions. The first step of the pre-processing module aims at removing noise. Hence, a Gaussian filter (with σ = 1.5) is applied to the MR image slice (Fig. 3(a)), thus obtaining an enhanced image (Fig. 3(b)). Afterwards, a Canny filter (with σ = 1) is applied (Fig. 3(c)) in order to detect the image edges, which are further processed by a closing operator (10 iterations) whose structuring element is a circle with a radius of 15 pixels, obtaining the image shown in Fig. 3(d). To remove inconsistent parts of the considered slice, the negative filter is applied to the image shown in Fig. 3(d), and all the border-connected structures are deleted (Fig. 3(e)). Then, a thresholding filter is applied to the original image and subsequently a logical AND between this image and the convex image of the object shown in Fig. 3(e) is computed. The resulting image is shown in Fig. 3(f). Finally, all the holes are filled (Fig. 3(g)) and a logical AND operator is computed between the binary image and the original image (Fig. 3(h)). As can be seen, the lesions are quite visible in the output image (a sketch of this pipeline is given after Fig. 3). The main problem is that tumors often have the same shape but different textures; hence, texture feature extraction is required, as shown in the following section.

B. Features extraction

This module aims at extracting the relevant features from the image obtained by the preprocessing step. More in detail, the extraction is carried out by analyzing the statistical moments of the gray-level histogram and the co-occurrence matrix. Let z be a random variable denoting the image gray levels and p(z_i), i = 0, 1, ..., L−1, be the corresponding histogram, where L represents the number of distinct gray levels; the histogram is normalized so that ∑_i p(z_i) = 1. The considered features, extracted from the histogram, are shown in Table 1. Moreover, gray level co-occurrence texture features are extracted by using the gray level co-occurrence matrix (GLCM), often used in MRI applications, e.g. in [11], which describes the frequencies at which two pixels (at a specified direction and distance) occur in the image. Once the GLCM has been created, we extract the following statistics: Energy, Correlation, Inertia, Entropy, Inverse Difference Moment, Sum Average, Sum Variance, Sum Entropy, Difference Average, Difference Variance, Difference Entropy, Information Measure of Correlation 1, Information Measure of Correlation 2, and Maximal Correlation Coefficient.
Fig. 3: Pre-processing step details
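A sketch of the Section III.A pipeline using scikit-image and SciPy; the parameter values come from the text, while the thresholding method and the specific library calls are our assumptions:

```python
# Pre-processing pipeline sketch (Fig. 3); not the authors' implementation.
import numpy as np
from scipy import ndimage
from skimage import feature, filters, morphology
from skimage.segmentation import clear_border

def preprocess_slice(slice_img):
    """Return the slice masked to white matter and candidate MS lesions."""
    smoothed = filters.gaussian(slice_img, sigma=1.5)       # Fig. 3(b): denoising
    edges = feature.canny(smoothed, sigma=1.0)              # Fig. 3(c): edge map
    disk = morphology.disk(15)                              # circular structuring element
    closed = edges
    for _ in range(10):                                     # Fig. 3(d): 10 closing iterations
        closed = morphology.binary_closing(closed, disk)
    inner = clear_border(~closed)                           # Fig. 3(e): negative, drop border objects
    thr = slice_img > filters.threshold_otsu(slice_img)     # thresholding (method assumed)
    mask = thr & morphology.convex_hull_image(inner)        # Fig. 3(f): AND with convex image
    mask = ndimage.binary_fill_holes(mask)                  # Fig. 3(g): fill holes
    return slice_img * mask                                 # Fig. 3(h): AND with original
```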
Defining a 2-by-2 array of offsets specifying the distance between the pixel of interest and its neighbor, we obtain 14 × 2 = 28 features. Therefore, for each pre-processed MRI we obtain a feature vector consisting of 34 features (28 extracted by using the GLCM and 6 by using the histogram statistics shown in Table 1). A simple feature reduction using Principal Component Analysis is applied, thus obtaining a feature vector of 8 elements. The reduced features are the inputs for the neural network, which classifies the images according to the statistical gray level and GLCM parameters (a sketch of the feature extraction is given after Table 1).

Table 1: Features Extracted from the Grey Level Histogram
Average: m = ∑_{i=0}^{L−1} z_i p(z_i)
2nd-order moment: μ2(z) = ∑_{i=0}^{L−1} (z_i − m)² p(z_i)
3rd-order moment: μ3(z) = ∑_{i=0}^{L−1} (z_i − m)³ p(z_i)
Uniformity: U = ∑_{i=0}^{L−1} p²(z_i)
Entropy: e = −∑_{i=0}^{L−1} p(z_i) log₂ p(z_i)
Intensity variation: R = 1 − 1/(1 + σ²(z))
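Both feature groups can be sketched as follows; note that scikit-image's graycoprops exposes only a subset of the 14 Haralick GLCM statistics listed above, so this is an illustration rather than the authors' exact feature set:

```python
# Feature extraction sketch for Section III.B (img: 2-D uint8 array).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def histogram_features(img, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                       # normalized histogram p(z_i)
    z = np.arange(levels)
    m = (z * p).sum()                           # average
    mu2 = ((z - m) ** 2 * p).sum()              # 2nd-order moment
    mu3 = ((z - m) ** 3 * p).sum()              # 3rd-order moment
    U = (p ** 2).sum()                          # uniformity
    e = -(p[p > 0] * np.log2(p[p > 0])).sum()   # entropy
    R = 1 - 1 / (1 + mu2)                       # intensity variation
    return [m, mu2, mu3, U, e, R]

def glcm_features(img):
    # Two offsets (distance 1 at 0 and 90 degrees) -- an assumed choice.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["energy", "correlation", "contrast", "homogeneity"]
    return [graycoprops(glcm, p)[0, a] for p in props for a in (0, 1)]
```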
The adopted neural network has the following structure:
• 8 neurons for the input layer, one for each feature value (linear activation function);
• 17 neurons for the hidden layer (sigmoid activation function);
• 1 binary output neuron that gives 1 if the image is classified as affected by MS and 0 if not.
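The described 8-17-1 network can be reproduced, for instance, with scikit-learn; the paper does not state which implementation or training algorithm was used, so this is a stand-in:

```python
# PCA reduction (34 -> 8) followed by the 8-17-1 MLP described above.
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    PCA(n_components=8),                      # feature reduction, as in the paper
    MLPClassifier(hidden_layer_sizes=(17,),   # one hidden layer of 17 neurons
                  activation="logistic",      # sigmoid hidden units
                  max_iter=5000),             # the paper trains for 5000 epochs
)
# X: (n_images, 34) feature matrix, y: 1 = MS, 0 = non-MS (hypothetical arrays)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```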
IV. EXPERIMENTAL RESULTS

The performance of a diagnostic system is usually expressed by using: 1) the sensitivity, i.e., the percentage of images of patients affected by MS correctly classified (hence, it provides information about false negatives); 2) the specificity, i.e., the percentage of images with no MS lesions correctly classified (information about false positives); and 3) the accuracy, which integrates both the above indices and indicates the number of images correctly classified. Experiments were carried out on a set of 500 simulated MRI images: 250 MRI images of patients not affected by MS (of which 115 are affected by tumors and the remaining ones are normal) and 250 MRI images of patients affected by MS. These images were divided into 250 learning patterns and 250 test patterns. The neural network was trained for 5000 epochs on the 250 learning MRI images, obtaining quite satisfactory results: the learning mean square error was 1.15e−05, whereas the test mean square error on the remaining 250 MRI images was 5.93e−04. The results of the quantitative evaluation of the system performance over 250 MRI images (150 affected by MS) are as follows:

Accuracy = 100 · (TP + TN) / (TP + TN + FP + FN) ≈ 95%
Sensitivity = 100 · TP / (TP + FN) ≈ 92%
Specificity = 100 · TN / (TN + FP) = 100%

where TN, TP, FN and FP represent, respectively, the true negatives, the true positives, the false negatives and the false positives. Another test was carried out on 20 MRI datasets from real patients (10 affected by MS, 5 affected by tumors and 5 normal). These test results are fairly in line with the simulation results, since we obtained an accuracy, a sensitivity and a specificity of 90%, 91.6% and 90%, respectively.
V. CONCLUDING REMARKS AND FUTURE WORK

We have developed an automated system for recognizing whether patients are affected by multiple sclerosis by computing
texture features of the white matter extracted by processing T2 MRI brain images. The experimental results are very promising, given the very low rate of false positives and false negatives, as demonstrated by the high values of specificity and sensitivity. Further work will focus on expanding the model in order to classify other kinds of neurological diseases that affect the white matter and can be detected using MRI images. This will also include a larger evaluation of the performance with different acquisition modalities (T1, PD (Proton Density), FLAIR). Furthermore, future work will include the development of a segmentation method for aiding neurologists in the analysis of data from follow-up exams, where the lesions evolve over time. In particular, we are working on developing an automatic system for 3D lesion reconstruction in order to provide quantitative measurements for evaluating lesion progression.
REFERENCES
1. Mantyla R., Erkinjuntti T., Salonen O., et al. (1997) Variable agreement between visual rating scales for white matter hyperintensities on MRI. Comparison of 13 rating scales in a poststroke cohort. Stroke 28:1614–1623
2. Weisenfeld N.I., Warfield S.K. (2009) Automatic segmentation of newborn brain MRI. Neuroimage 47:564–572
3. Chao W.H., Chen Y.Y., Lin S.H., Shih Y.Y., Tsang S. (2009) Automatic segmentation of magnetic resonance images using a decision tree with spatial information. Comput Med Imaging Graph 33:111–121
4. Akselrod-Ballin A., Galun M., Gomori J.M., et al. (2009) Automatic segmentation and classification of multiple sclerosis in multichannel MRI. IEEE Trans Biomed Eng 56:2461–2469
5. Boer R., Vrooman H.A., Lijn F., et al. (2009) White matter lesion extension to automatic brain tissue segmentation on MRI. Neuroimage 45:1151–1161
6. Khayati R., Vafadust M., Towhidkhah F., Nabavi S.M. (2008) A novel method for automatic determination of different stages of multiple sclerosis lesions in brain MR FLAIR images. Comput Med Imaging Graph 32:124–133
7. Maillard P., Delcroix N., Crivello F., et al. (2008) An automated procedure for the assessment of white matter hyperintensities by multispectral (T1, T2, PD) MRI and an evaluation of its between-centre reproducibility based on two large community databases. Neuroradiology 50:31–42
8. Cocosco C.A., Kollokian V., Kwan R.K.-S., Evans A.C. (1997) BrainWeb: Online Interface to a 3D MRI Simulated Brain Database. NeuroImage, Proceedings of the 3rd International Conference on Functional Mapping of the Human Brain
9. Prastawa M., Bullitt E., Gerig G. (2008) Simulation of Brain Tumors in MR Images for Evaluation of Segmentation Efficacy. Medical Image Analysis (MedIA), in press
10. Wolforth M., Ward G., Marrett S. (1995) EMMA: Extensible MATLAB Medical Analysis User Manual
11. Wu J., Poehlman S., Noseworthy M.D., Kamath M.V. (2008) Texture Feature based Automated Seeded Region Growing in Abdominal MRI Segmentation. BMEI '08: Proceedings of the 2008 International Conference on BioMedical Engineering and Informatics, pp. 263–267
WADEDA: A Wearable Affective Device with On-Chip Signal Processing Capabilities for Measuring ElectroDermal Activity

E.I. Konstantinidis1, C.A. Frantzidis1, C. Papadelis2, C. Pappas1, and P.D. Bamidis1

1 Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Greece
2 Center for Mind/Brain (CIMEC), University of Trento, Mattarello (TN), Italy
Abstract— In this paper a miniaturized and wearable skin conductance sensor equipped with a micro-processor is proposed. The device facilitates long-term monitoring of electrodermal activity under real-world situations. Its generic and flexible design permits data storage during daily activities, while the prototype, equipped with on-chip firmware, performs real-time signal filtering and feature extraction. The system's architecture, based on recent hardware advances, aims to enhance the robustness of previous skin conductance sensors. Emerging applications in non-laboratory experiments are introduced in order to highlight the applicability of the proposed device. The results obtained from preliminary experiments are described.

Keywords— Affective Computing, Emotional Processing, Microprocessor, Skin Conductance, Wearable Sensors.
I. INTRODUCTION

The skin is the largest organ of the human body. It is responsible for material exchange, thermoregulation, prevention of foreign matter entrance, etc. [1]. Its function is controlled through signals emitted by the central nervous system [2]. As a consequence of the signals' arrival, columns of sweat fill the ducts, resulting in increased conductivity in the corneum [1]. Thus, sweat alterations (sweating) cause measurable electrical changes at the skin surface. These changes in the electrodermal activity (EDA) may be attributed either to thermoregulatory processes or to emotional sweating, which is sensitive to mental changes. Emotional sweating consists of a low-frequency component, which affects the skin conductance level (SCL), and a rapid, stimulus-specific, phasic skin conductance response (SCR) appearing as a wave superimposed on the SCL [3]. The EDA is a reliable index of the limbic system's function, which is related to evolutionary processes promoting species survival [2]. SCRs appear as a response to novel and highly arousing events [4]. Arousal is an emotional dimension reflecting the activation level. Increases in the activation level are correlated with increases in attention and better memory performance [5]. Therefore, skin conductance has been widely used in psychological experiments. Such experiments have mainly involved short-term simultaneous recordings from both the central and the autonomic nervous
system and artificially elicited emotions under specific laboratory conditions [6]. Despite their great impact, the affective computing community has noted the need for a new form of wearable, computerized system able to gather and unobtrusively process physiological signals while the user performs his real-world activities. Such systems are expected to provide a deeper understanding of the user's personalized needs [7]. Towards the introduction of an affective wearable, there are certain design specifications that should be taken into consideration. More specifically, as revealed by its name, such a system should be equipped not only with sensing abilities but should also be capable of employing pattern recognition techniques in order to detect the user's affective state [7]. Requirements for using an affective wearable in real-life activities are small size and weight, permitting physical contact over long periods without obtrusiveness or user disturbance. Moreover, such devices should be able to gather large amounts of data in everyday settings, independently of other devices. Their power supply should therefore allow them to function for hours, while they should be equipped with sufficient memory to store large amounts of data and the computational efficiency to perform real-time feature extraction. Another important aspect, which is often neglected, is that wearable systems are closely linked to biomedical tele-monitoring systems, since both record neuro-physiological signals [6]. Therefore, special care should be given to proper design in order to eliminate noise interference. Previous attempts have not provided a solution able to fulfill the aforementioned specifications. One of the first attempts [8] resulted in a recording device which consisted of a Wheatstone bridge for the detection of conductivity changes and a low-pass filter for noise removal. The recordings took place inside a clinical magnetic scanner and the data were then transmitted to a computer located outside the room. A later study [9] used the same circuitry and added a fiber-optic skin conductance transducer in order to acquire artifact-free SCRs and fMRI data. These systems were proposed mainly for the acquisition of short-term data during experimental procedures. They are thus focused on the improvement of the SCR acquisition, while the issue of recognizing the user's affective state is completely outside
from their scope. Moreover, they had to be connected to a personal computer in order to store the SCR data. The MIT group introduced devices able to acquire long-term data and to model the affective profile of people while they participate in social activities. Their first prototype [5] was a glove sensing EDA changes, which were then mapped onto a bright LED display. Despite being a pioneering approach, there was still a need for connection with a host computer for data transmission, while no recognition techniques more sophisticated than a simple value mapping were adopted. So, only raw analysis of conductivity alterations based on the LED brightness was feasible, since the device was not designed for scientific use. A later work [10] contained a microcontroller for EDA acquisition with adjustable gain and data transmission to a computer by means of Bluetooth technology. This device was technologically innovative, since it could gather data within a range of 100 m from the computer, while it was reliable and caused minimum obtrusion. However, its great power consumption limited the available operation time, and it was not fully independent from the computer. Moreover, the device could not store data or process them in order to extract features that could be used for affective recognition. Aiming to enhance the arsenal of affective wearables, we introduce an extremely small and lightweight prototype for reliable acquisition of long-term data without the need to transmit them to a host computer. However, connection with other recording devices or host computers is feasible through a fiber optic. The device is also equipped with a modern microprocessor for keeping a constant sampling rate, adjusting the amplification gain and performing initial filtering. Furthermore, on-chip software was adopted in order to perform feature extraction. Raw data or SCR features [6], [11] (amplitude, latency, rise time) can be stored on the memory card inside the device. Therefore, independence is achieved, since the device can perform acquisition and real-time affective computations without the assistance of any other equipment. We thus aim to provide a device fulfilling the specifications outlined above, serving both as an affective wearable able to sense and understand the user's affective state and as a tool for neurophysiological experimental procedures.
II. MATERIALS AND METHODS
Fig. 1 Visualization of the Hardware and Software implemented blocks involving data acquisition, amplification and signal processing stages
A. Hardware Implementation

Single-supply operation has become an increasingly desirable characteristic of modern sensor amplifiers. Many of today's data acquisition systems are powered from a single low-voltage supply. Following this trend, the EDA system consists of the measurement circuit, the microcontroller, the memory card and the output circuit. The measurement circuit is based on the Wheatstone bridge principle. The unknown resistance Rx, which represents the resistance of the subject's finger, can be calculated from the values of the other three known resistors. The bridge is balanced by means of a potentiometer. Most previous approaches related to EDA used an analog potentiometer; the proposed measurement circuit employs a digital potentiometer, whose value can be set by the microcontroller. Moreover, the amplifier's gain, ranging from 28 to 1300, is digitally programmable by the microcontroller. Thus, the proposed system can adapt the gain of the acquired signal to limitations stemming from environmental conditions and personalized specifications. Moreover, bridge balancing takes place continuously during the experiment. The analog-to-digital converter (ADC) converts the amplified signal to a 16-bit word, which is then acquired by the microcontroller. The selected 16-bit DSP microcontroller possesses 256 Kbytes of total memory and 32 Kbytes of RAM, so it is able to perform low-pass filtering (LPF). This capability eliminates the external filter circuit present in previously introduced EDA systems. Apart from that, the high clock frequency leaves room for more demanding filtering algorithms when desired. As depicted in Fig. 1, the microcontroller acts as the master controller of the sampling function. It is responsible for data storage (raw signal or extracted features) to files on the memory card, for operating the LED indicators and/or for outputting the signal through a fiber optic connection (digital output). The code execution of the miniaturized device (4.9 cm × 2.6 cm) is explained in the next section.
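The Wheatstone measurement above can be summarized by the bridge balance condition; a minimal sketch with hypothetical component values (resistor labels are ours, not from the schematic):

```python
# Balance condition of an ideal Wheatstone bridge, illustrating how the finger
# resistance Rx follows once the digital potentiometer (here R3) balances the bridge.
def finger_resistance(r1_ohm, r2_ohm, r3_ohm):
    """At balance (zero differential bridge voltage): Rx / R3 = R2 / R1."""
    return r2_ohm * r3_ohm / r1_ohm

# Example: with R1 = R2 = 10 kOhm and balance at R3 = 150 kOhm, Rx = 150 kOhm
# (a skin conductance of about 6.7 uS).
print(finger_resistance(10e3, 10e3, 150e3))
```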
B. Code Execution

The system is designed to fulfill different experimental protocol requirements. The operating modes support LED indicators (color and intensity depend on the acquired
signal), memory card file storage and digital output. As depicted in Table 1, a set of parameters customizes the execution of these functionalities. These parameters are provided to the system by the user by modifying a "settings" file on the memory card. If the memory card is absent, the system loads the default values. The first step of the data acquisition procedure is the establishment of a constant, user-selected sampling frequency, generated by the microcontroller's internal timer interrupt. During the interrupt function a sample is acquired. Moreover, the interrupt routine implements and executes the two filters (LPF and custom filter) according to the parameter values. Finally, it stores the filtered data in a buffer array, which acts as the communication data layer between the sampling routine and the main program. Its length ensures the unobtrusive execution of time-consuming functionalities, like memory card storage and feature extraction.
Table 1 System Parameters

Parameter | Description
Operation Mode | LED indicator and/or memory card and/or digital output
Sampling Frequency | Sampling frequency in Hz
LPF | Cut-off frequency for the code-implemented low-pass filter
Custom Filter | Factors for a custom internal filter
Digital Output Mode | Pulse width modulation or digital words
Feature Extraction | Creating a file with features or raw data on the memory card
Algorithm for Feature Extraction | Selection of predefined internal algorithms for feature extraction
LED Indicators Mode | Arousal level depiction
Memory Card Organization | Type and name of files to be written
External Inputs Functionality | Two external synchronization inputs
The main program either executes a single operation mode (see Table 1) or performs several concurrently. During the LED function, the LED's color and intensity depend on the subject's arousal level. The digital output transmits the filtered data through a fiber optic. The output format can be either a pulse-width-modulated (PWM) signal or a sequence of digital words. Both can easily be integrated into an EEG recording system (PWM to voltage) or a PC (digital words). The aforementioned tasks are performed in real time. Finally, when the device is used as a long-term wearable, the memory card mode is selected; this mode also supports experiments in which an external device (e.g., an EEG recorder) is not advisable. Besides this, external inputs are available for synchronization purposes by annotating the data.
Fig. 2 Flow Chart Diagram of the code execution depicting the functional blocks followed according to the operating mode
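An algorithmic sketch of the on-chip processing described above, written in Python for readability (the actual firmware runs on the DSP microcontroller); the filter coefficient and SCR threshold are illustrative assumptions:

```python
import numpy as np

ALPHA = 0.1  # single-pole IIR low-pass coefficient (assumed)

def lowpass(samples):
    """y[n] = y[n-1] + ALPHA * (x[n] - y[n-1]), applied sample by sample."""
    out, y = [], samples[0]
    for x in samples:
        y = y + ALPHA * (x - y)
        out.append(y)
    return np.array(out)

def scr_features(buffer, fs_hz, threshold_us=0.05):
    """Amplitude, latency and rise time of the first SCR in a buffer of skin
    conductance values (microsiemens); a simplified stand-in for the firmware."""
    base = buffer[0]
    onset = int(np.argmax(buffer - base > threshold_us))  # first sample above threshold
    peak = onset + int(np.argmax(buffer[onset:]))         # maximum after onset
    return {"amplitude_us": buffer[peak] - base,
            "latency_s": onset / fs_hz,
            "rise_time_s": (peak - onset) / fs_hz}
```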
III. RESULTS

Towards evaluating the device's applicability, two approaches were followed. Initially, a known electronic resistor was measured in order to evaluate whether the device reliably measures a voltage proportional to a resistance (such as the skin resistance). The value of the known resistance calculated by the device had a negligible error in comparison to the real value. Then, skin conductance was recorded during the presentation of auditory stimuli. The stimuli were noxious sounds (white noise or intense tones) and unpleasant stimuli. The sound durations ranged from 1 to 5 seconds. The experimental procedure consisted of 5 successive sounds. The inter-stimulus interval was set at 10 seconds. The overall experimental procedure lasted about 1 minute. The device evaluation employed 15 healthy volunteers (13 male and 2 female subjects). The averaged electrodermal activity recorded as a response to the noxious stimuli was extracted. Figure 3
indicatively illustrates the normalized EDA changes due to two stimuli. The skin conductance signal is depicted with a red line, and the onset and sound duration with the blue pulse. The sound stimulation evokes phasic SCRs, while the SCL is also altered.
Fig. 3 EDA activity elicited by novel and noxious stimuli
IV. DISCUSSION

The proposed system was designed to facilitate a variety of applications. Among them, virtual gaming [12] and smart human-computer interfaces employ affective sensors for sensing the user's state over short temporal windows. Response delays may cause user irritation and doom the application. Therefore, the approach of on-chip processing and feature extraction is proposed in this study. Pre-processing is performed inside the micro-controller, which then employs pattern recognition techniques for sensing autonomic arousal alterations (both phasic and tonic). Several healthcare applications have recently been proposed for the detection of stress, anxiety and depression. Long-term recordings are required for obtaining reliable neuro-physiological markers. Unobtrusiveness is a key issue for such applications, since the user should perform his daily activities without feeling frustration. The affective wearable should therefore be miniaturized and light-weight. Our approach facilitates data acquisition over a long period by embedding a memory card. The storage of EDA features instead of raw data further extends the device's independence and limits power consumption. Biomedical devices have long been used during experimental procedures. Simultaneous recordings of various systems (respiratory, nervous, etc.) result in the adoption of several devices [13]. Their parallel function causes noise interference and necessitates artifact rejection. Data transmission to a host computer through cables increases signal contamination. The proposed device offers the possibility of noise-free transmission through a fiber-optic circuit performing data digitization and transfer through frequency modulation. This study proposed a miniaturized, light-weight skin conductance sensor able to store either raw EDA signals or SCR/SCL features without the need of a host computer. It also introduces the notion of on-chip data processing and feature extraction by means of algorithmic steps executed by the microcontroller as firmware. Empowering an EDA sensor with such capabilities may extend its applicability to a plethora of applications, ranging from emotion-aware computing to neurophysiological experiments and healthcare systems.

REFERENCES
1. Dawson M E, Schell A M and Filion D L (2001) The electrodermal system. In: Cacioppo J T, Tassinary L G, Berntson G G, editors. Handbook of psychophysiology. Cambridge. University Press: 53-84 2. Edelberg R (1972) Electrical activity of the skin. In Greenfield N S, Sternbach R A, editors. Handbook of psychophysiology:376-418 3. Boucsein W (1992) Electrodermal activity. New York. Plenum Press. 4. Lang P J, Bradley M M and Cuthbert B. N. (1998) Emotion, motivation, and anxiety: brain mechanisms and psychophysiology. Biological Psychiatry, Vol. 44, Issue 12:1248–1263 5. Picard R W and Scheirer J (2001) The Galvactivator: A Glove that Senses and Communicates Skin Conductivity. Proceedings 9th Int. Conf. on HCI, 2001, New Orleans, USA, 2001 6. Frantzidis C, Bratsas C, Klados M, Konstantinidis E, Lithari C, Vivas A, Papadelis C, Kaldoudi E, Pappas C and Bamidis P (2010) On the classification of emotional biosignals evoked while viewing affective pictures: an integrated data mining based approach for healthcare applications. IEEE transactions on Information Technology in Biomedicine DOI 10.1109/TITB.2009.2038481 7. Picard R W and Healey J (1997) Affective Wearables. Personal technologies, Vol 1(4):231-240 8. Shastri A, Lomarev M P, Nelson S J, George M S, Holzwarth M R and Bohning D E (2001) A low-cost system for monitoring skin conductance during functional MRI. Journal of Magnetic Resonance Imaging, Vol. 14:187-193 9. Lagopoulos J, Malhi G S and Shnier R C (2005) A fiber-optic system for recording skin conductance in the MRI scanner. Behavior Research Methods, Vol. 37, Issue 4:657-664 10. Strauss M, Reynolds C, Hughes S, Park K, McDarby G and Picard R W (2005) The handwave Bluetooth skin conductance sensor. In Tao J, Tan T and Picard R W, editors. ACII, Lecture Notes in Computer Science, Springer, volume 3784:699-706, 2005 11. Frantzidis C A, Konstantinidis E I, Pappas C and Bamidis P D (2009) An Automated System for Processing Electrodermal Activity. Studies in Health Technology and Informatics, Vol. 150, ISBN 978-1-60750044-5, 2009 12. Konstantinidis E I, Luneski A, Frantzidis C A, Pappas C and Bamidis P D (2009) A Proposed Framework of an Interactive Semi-Virtual Environment for Enhanced Education of Children with Autism Spectrum Disorders, The 22nd IEEE International Symposium on Computer-Based Medical Systems, CBMS 2009, 3-4 August, Albuquerque, New Mexico, USA 13. Konstantinidis E I, Bamidis P D and Koufogiannis D (2008) Development of a Generic and Flexible Human Body Wireless Sensor Network, in Proceedings of the 6th European Symposium on Biomedical Engineering (ESBME 2008) Author: Evdokimos Konstantinidis Institute: Lab of Medical Informatics, Aristotle University Street: Medical School, Aristotle University, PO Box 323, 54124 City: Thessaloniki Country: Greece E-mail:[email protected]
A Modular Architecture of a Computer-Operated Olfactometer for Universal Use
A. Komnidis1, E. Konstantinidis1, I. Stylianou2, M.A. Klados1, A. Kalfas2, and P.D. Bamidis1
1 School of Medicine, Laboratory of Medical Informatics, P.O. Box 323, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
2 Department of Mechanical Engineering, Laboratory of Fluid Mechanics and Turbomachinery, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
Abstract— Olfactometers are widely used in the study of the chemical senses from a neurophysiological point of view. Although there is a plethora of olfactometer designs, all of them lack flexibility for modification. More specifically, they are not able to dynamically increase the number of odors that can be provided simultaneously, or they are not capable of using forms of odorous material other than the one they have been designed for. In addition, the concentration of the stimulus is estimated indirectly through the ratio of the odorized and the odorless air that is delivered to the subject. Taking all this into account, there is a clear need for an effective olfactometer able to overcome the aforementioned drawbacks of the existing devices. In this scope, the current study introduces a new computer-operated olfactometer. Its novelty lies in the fact that it has a modular architecture with microcontroller units in every module, which considerably simplifies the system's modification. In addition, it can directly estimate the Volatile Organic Compounds (VOC), with one sensor for every odor and one sensor for the overall stimulus.
Keywords— Olfactometer, Volatile Organic Compounds, direct concentration measurement, odor module, modular design.
I. INTRODUCTION
An olfactometer is an instrument that delivers an olfactory stimulus with specific chemical compound properties, as well as with precise and controlled intensity. In other words, an olfactometer can deliver single or combined odors with a certain concentration and a fixed duration. This type of device is very useful for the evaluation of human olfaction, as well as for the investigation of cerebral responses induced by olfactory stimuli. Thus it is widely used in clinical practice [1] as well as in neuroscience [4] and psychophysiology research [9],[10].
To the best of our knowledge, many different olfactometer systems have been proposed over the years [2],[3],[4],[5],[6],[7]. Each of them was designed to serve certain purposes, so there are many different design options. For example, the stimulus can be delivered to the subject's nose through a low constant air flow via the inhalation procedure, or in a high constant air flow while the inhalation is made through the mouth. (The advantages and disadvantages of stimulus delivery through inhalation are shown in Tab. 1.) The constant air flow could also be warmed up and humidified, especially if a high constant air flow is used. (The advantages and disadvantages of warming and humidifying the air flow are shown in Tab. 2.) It should also be mentioned that the delivery apparatus may or may not be in direct contact with the subject's nose (nasal mask, nasal cannulas, etc.).

Table 1 The advantages and disadvantages of stimulus delivery through inhalation

Advantages                   Disadvantages
More natural                 Need of inhalation detection
Regular nose inhalation      Lesser control of the stimulus concentration
Less stressful               Lesser control of the duration
                             Dependence on the inhalation

The overwhelming majority of such systems focus on a single parameter, such as low cost or fMRI compatibility, while their inflexible design hinders modification. Beyond this, another major drawback of the existing olfactometers is that they cannot directly quantify the stimulus concentration, and they are forced to estimate it indirectly by computing the ratio between the odorized and the odorless air that reaches the subject's nose.
The design of a computer-controlled olfactometer that can be adapted to all cases is a complex task and is still unfeasible. The design of such a universal olfactometer is a multiparametric problem that the designer is called to solve during its implementation. Some crucial parameters that have to be taken into account are the number of different olfactory stimuli that can be provided simultaneously, as well as the initialization of each stimulus's intensity. In addition, the amount of total air flow that reaches the subject's nose, the quick response of the electromechanical parts, the compatibility with other
devices such as fMRI, EEG and MEG recorders, and the machine's cost also have to be taken into consideration.

Table 2 The advantages and disadvantages of warming and humidifying the air flow

Advantages                                                              Disadvantages
Closer to natural                                                       More sophisticated design
Ability of using higher air flow rates                                  Higher cost
The stimulus peak is quicker through the use of higher air flow rates   Bacterial development hazard

This study builds upon the existing olfactometers by introducing a novel olfactometer system for universal use. In contrast to the existing systems, the novelty of the olfactometer introduced here lies in the fact that it can be easily adapted to accomplish any experimental task, and that it can directly estimate the Volatile Organic Compounds (VOC). In brief, the main characteristics of the proposed system are: its flexible modular design, which allows the system to be easily adapted and makes it capable of generating multiple odor stimuli; the wide range of air volume and pressure supply, which is crucial for the regulation of the stimuli's intensity; the direct quantification of the stimulus concentration; its ability to use various odorant materials (solid, liquid or gas); and its small dimensions for easy portability.
The remainder of this paper is structured as follows. Section II reviews previous work. A detailed description of the proposed olfactometer system is presented in Section III. Section IV evaluates the introduced device with a short discussion and outlines future work. Finally, Sections V and VI contain the conclusions and acknowledgements.
II. PREVIOUS WORK
In 1980 Benignus & Prah [2] developed a computer-controlled vapor-dilution olfactometer. Although their work was pioneering, and the use of a microcomputer gave their device some flexibility, it was designed only for liquid odorant materials. It diffused a constant air flow to the subject through a nasal catheter while the inhalation took place orally. The calibration of the odorant level was performed by an external gas chromatograph before each experiment.
In 1988 Vigouroux et al. [8] designed a device with multistage dynamic flow dilution. This device was capable of
operating with liquid and solid odorant materials. It delivered pulses of odorized air with a variable duration through a nozzle near the subject's nose, but it could not deliver odorless air and hence was unable to provide a constant air flow to the subject. In this case the calibration of the concentration levels was performed by an external Flame Ionization Detector (FID) before the experiment.
In 2006 Lowen & Lukas [5] designed a low-cost, MR-compatible olfactometer. Their device was able to operate only with liquid or liquid-diluted odorant materials. In this device there was no concentration estimation at all, and its operation relied upon the assumption that a constant air flow produces a stimulus of constant concentration.
In 2007 Johnson & Sobel [3] developed a more integrated olfactometer by combining as many parameters as possible. However, the device became very complicated, bulky and expensive, with little flexibility for modifications.
The olfactometer proposed in this paper is based on the same principles as the aforementioned models, but the novelty of our system relies upon the idea of a modular design. This concept comprises a main module, which is the "heart" of the proposed olfactometer, and a series of peripheral modules which can be directly connected to the main module to satisfy the given experimental needs. Thus the kind and the number of the plugged-in modules depend only on the experimental protocol.
III. MATERIALS AND METHODS
The diagram of the olfactometer system proposed in the current study is shown in Fig. 1. For reasons of space, the figure depicts only one odor module, whose components are prefixed with the letter A, while the components of the main module are prefixed with the letter B. It has to be mentioned that one of the major advantages of the current olfactometer is that the whole odor module can be easily replicated in order to achieve the desired number of simultaneous olfactory stimuli.
According to the operating principle of the proposed system, the fan (A1) forces a low-volume, low-pressure air stream through the module. The air passes through the air filters (A2) for purification and through the one-way valve (A3) to prevent backflow. The air stream then flows into the dispenser, which contains the odorous material (A4), and carries away the existing vapors. An air quality sensor (A5) directly measures the concentration of the VOC inside the dispenser. If the concentration exceeds a desired threshold, the three-way electro-valve (A6) diverts the air stream to the exhaust until the desired concentration level is reached.
When the VOC concentration level is appropriate, the odorous air flows through a one-way valve (A7) and enters the dilution vessel. In the same way, the fan (B1) produces a low-volume, low-pressure air stream that passes through the air filters (B2) and the one-way valve (B3) and enters the dilution vessel. The odor concentration in the diluted air is then measured by an air quality sensor (B4) placed inside the dilution vessel. If the concentration level exceeds a pre-defined threshold, the normally closed on/off electro-valve (B5) opens and delivers the air stream to the exhaust until the desired concentration level is reached. Afterwards the normally closed on/off electro-valve (B6) opens, and the air stream flows via the one-way valve (B7) and is mixed with the odorless air of the CPAP machine. The air stream then flows via the one-way valve (B9) to the air flow meter (B10) and finally reaches the delivery apparatus (the nasal mask in the figure). After exposure to the subject's nose, the olfactory stimulus is driven to the exhaust via a one-way valve (B12) to prevent "odor pollution" of the experimental room, which could seriously affect the experimental conditions.
Unlike other designs, our system uses a main air stream provided by the CPAP machine, into which the low-pressure odorized air is injected. In this way it is easier to keep constant the volume of the air stream that reaches the subject when the stimulus is delivered. The CPAP provides a high-pressure, yet adjustable, air flow. This high-pressure air flow is convenient when the tubing is very long, as in cases where the olfactometer's operation would otherwise interfere with other devices such as EEG, MEG and fMRI recorders.
The whole operation of the system can be controlled by a PC via a USB interface. Each module has its own microcontroller unit (MCU) which controls the module's functions. The MCUs form a master-slave network using the I2C protocol. In particular, each slave MCU receives and processes the data from the module sensors and controls the corresponding electro-valves and fan speed, while it communicates with the master MCU. The master MCU, in turn, supervises the slave MCUs' operation and communicates with the PC. This technique increases the speed and the accuracy of the module functions and makes the module operation more independent. Electrical power for the whole system, except the CPAP machine, is provided by rechargeable 12 V / 7 Ah batteries, while the CPAP machine has its own battery supplied by the manufacturer.
The proposed architecture gives us the flexibility to plug or unplug modules easily according to the experimental needs, but its main advantage is that it enables us to develop just one module, instead of developing a whole new system, in case of special requirements imposed by a demanding experiment.
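To make the division of labor concrete, the sketch below emulates in Python the threshold logic that a slave MCU runs for one odor module, as described above. It is only an illustration: the names and values (VOC_SETPOINT, the polling period, the sensor and valve stubs) are assumptions, and the actual firmware running on the module MCUs is not listed in the paper.

import time

VOC_SETPOINT = 0.8       # hypothetical normalized VOC concentration target
POLL_PERIOD_S = 0.05     # sensor polling interval (assumed)

def read_voc_sensor():
    """Stand-in for reading the A5 air-quality sensor (returns 0..1)."""
    return 0.5           # dummy value for this sketch

def set_valve(valve_id, divert_to_exhaust):
    """Stand-in driver for the A6 three-way electro-valve."""
    route = "exhaust" if divert_to_exhaust else "dilution vessel"
    print(f"{valve_id} -> {route}")

def slave_control_loop(report_to_master, n_steps=10):
    """Threshold logic of one odor module: divert the air stream to the
    exhaust while the VOC concentration exceeds the desired level."""
    for _ in range(n_steps):
        voc = read_voc_sensor()
        set_valve("A6", divert_to_exhaust=voc > VOC_SETPOINT)
        report_to_master(voc)          # status update sent over I2C
        time.sleep(POLL_PERIOD_S)

slave_control_loop(report_to_master=lambda v: None)

In the real system the same loop would run per module, with the master MCU polling each slave over I2C and relaying the aggregated state to the PC over USB.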
Fig. 1 The diagram of the proposed modular olfactometer. The components prefixed with the letter A form an odor module for liquid materials and the components prefixed with the letter B form the main module of the system. Multiple odor modules can be connected to the dilution vessel of the main module to meet the experimental needs. This figure shows only one odor module for reasons of space

IV. DISCUSSION
The proposed modular architecture led to a single module that can be adjusted to the current requirements. This module can be directly connected to the main board, as opposed to the
development of a whole new system. The modification of an existing module is easier and can be done in a timely manner. This facility could lead to the development of a plug-in library, enhancing the usefulness of the proposed olfactometer, and could assist other researchers in overcoming current technical issues.
The operation of the current system can be fully controlled by a PC, which is also a fundamental aspect of our design. The PC can automatically register the accurate onset time of the olfactory stimuli, and it can also adapt the stimuli's intensity in real time, either by a pre-defined algorithm or by any kind of feedback. Both of the aforementioned capabilities are meaningful for neuroscience research as well as for clinical practice, because they provide the opportunity to measure cognitive tasks performed exactly after the olfactory stimulus onset.
Most studies concerning the neurophysiological aspects of the chemical senses combine machines that record ongoing cerebral activity, such as EEG and MEG, with an olfactometer system. Thus the olfactometer should not distort the recordings of the other devices. The present approach uses electromagnetic valves, which can potentially induce electromagnetic noise in other recording devices, so proper electromagnetic shielding is required. The solution adopted to date has been the use of long tubes, made possible by the relatively high air pressure produced by the CPAP machine, in order to keep the valves at a distance from the recording equipment. The CPAP machine provides a wide range of air supply in terms of volume and pressure characteristics.
Further work may address the construction of a full set of modules for every form of odorant material (liquids, solids, gases, etc.). Finally, the system's software could be enriched in order to facilitate integration with other devices.
V. CONCLUSION
The characteristics of the proposed system meet the requirements of a computer-controlled multi-functional olfactometer. Furthermore, the dimensions of the proposed olfactometer have been kept to a minimum in order to maintain portability.
The system's capability to directly estimate the stimuli's intensity using VOC sensors, together with the rapid-action electro-valves, enables easy and accurate adjustment of the stimuli's intensity. This is an essential feature of the proposed olfactometer, especially in neuroscience research,
as the stimuli’s intensity is one of the most crucial parameters for the assessment of induced cerebral responses.
ACKNOWLEDGMENT This work was supported by the postgraduate program in Medical Informatics (PRO.ME.S.I.P.) of the Medical School, Aristotle University of Thessaloniki, Greece. A. Komnidis has a scholarship from the Greek State Scholarship Foundation (I.K.Y.) for his postgraduate studies.
REFERENCES
[1] Lorig, Tyler S. (2000) The application of electroencephalographic techniques to the study of human olfaction: a review and tutorial. International Journal of Psychophysiology, 36, pp. 91-104
[2] Benignus, V. A., & Prah, J. D. (1980). A computer-controlled vapor-dilution olfactometer. Behavior Research Methods & Instrumentation, 12 (5), pp. 535-540.
[3] Johnson, B. N., & Sobel, N. (2007). Methods for building an olfactometer with known concentration outcomes. Journal of Neuroscience Methods, 160, pp. 231-245.
[4] Lorig, S. T., Elmes, G. D., Zald, H. Z., & Pardo, J. V. (1999). A computer-controlled olfactometer for fMRI and electrophysiological studies of olfaction. Behavior Research Methods, Instruments & Computers, 31 (2), pp. 370-375.
[5] Lowen, S. B., & Lukas, S. E. (2006, May). A low-cost, MR-compatible olfactometer. Behav Res Methods, 38 (2), pp. 307-313.
[6] Palmer, B. R., Stough, C., & Patterson, J. (1999). A delivery system for olfactory stimuli. Behavior Research Methods, Instruments, & Computers, 31 (4), pp. 674-679.
[7] Vigouroux, M., Bertrand, B., Farget, V., Plailly, J., & Royet, J. (2005). A stimulation method using odors suitable for PET and fMRI studies with recording of physiological and behavioral signals. Journal of Neuroscience Methods, 142, pp. 35-44.
[8] Vigouroux, M., Viret, P., & Duchamp, A. (1988). A wide concentration range olfactometer for delivery of short reproducible odor pulses. Journal of Neuroscience Methods, 24, pp. 57-63.
[9] Møller, P., & Dijksterhuis, G. (2003). Differential human electrodermal responses to odours. Neuroscience Letters, 346, pp. 129-132.
[10] Bamidis, P. D., & Moeller, P. (2006). Enabling physiological sensing of aroma enhanced human computer interaction. In: Proceedings of the International Symposium on Intelligent Environments: Improving the quality of life in a changing world, Microsoft Research, Cambridge, United Kingdom.

Author: Antonis Komnidis
Institute: Laboratory of Medical Informatics, School of Medicine, Aristotle University of Thessaloniki
Street: P.O. Box 323, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
The role of geometry of the human carotid bifurcation in the formation and development of atherosclerotic plaque
P.G. Kalozoumis1, A.I. Kalfas1, A.D. Giannoukas2
1 Department of Mechanical Engineering, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
2 Department of Vascular Surgery, University Hospital of Larissa, University of Thessaly Medical School, GR-41110 Larissa, Greece
Abstract— Atherosclerosis is a major cause of morbidity and mortality and is the result of complex interactions among blood properties, flow parameters and vessel geometry. Its apparent link to wall shear stress (WSS) has led to considerable interest in the in vivo estimation of WSS. An automated method is described for creating a computational fluid dynamics (CFD) mesh of a blood vessel lumen geometry based on in vivo measurements taken from magnetic resonance (MR) images, in order to examine the velocity and wall shear stress inside a volunteer-specific geometry. Numerical results arising from six carotid bifurcations of three volunteers have shown that the areas with low wall shear stress correlate best with the areas where, as established in the medical literature, atherosclerotic plaque develops.
Keywords— Computational fluid dynamics, carotid artery, wall shear stress, atherosclerosis, magnetic resonance angiography.
I. INTRODUCTION
Atherosclerosis is the major cause of death in the developed world and has long been related to lifestyle. As a result of cholesterol-laden plaque accumulating in arterial walls, atherosclerosis causes a narrowing, or stenosis, in specific areas of the arteries. This disease often leads to heart attack and stroke [1]. A number of studies on surgical approaches have been published in recent years [2, 3]. Atherosclerotic plaques have a tendency to start and develop near bifurcations and bends, where the hemodynamic forces, and especially WSS, play an important role. Therefore, for the last few decades, it has been suggested that some individuals might face an increased risk of developing atherosclerosis owing to the specific geometry of their arteries, irrespective of other factors (smoking, poor metabolism, eating habits, etc.).
The carotid bifurcation consists of the main branch, known as the common carotid artery (CCA), which divides asymmetrically into the internal (ICA) and the external (ECA) carotid artery. The ICA is often characterized by a widening known as the carotid sinus or bulb. The shape of the artery and the existence or not of a bulb constitute factors that undoubtedly affect the hemodynamic parameters, and in certain cases they induce undesirable flow phenomena. These phenomena lead to the existence of atherosclerotic lesion-prone sites that facilitate the accumulation of the low-density lipoproteins (LDL) that are
considered to be the initiators of atherosclerotic plaque formation [4].
Nowadays, owing to the rapid development of technology, techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Ultrasound (US) are used for the detailed depiction of vessels inside the human body. Some of these techniques are able to determine a number of hemodynamic parameters (pressure field, velocity, shear stresses, etc.) [5]. However, in most cases the measurements are not accurate enough. Computational Fluid Dynamics (CFD) techniques can provide an extremely detailed analysis of the flow field and wall stress (shear and tensile) with very high accuracy.
The objective of this study was to compare the geometric shape with the locations of low Wall Shear Stress (WSS) and complex flow structures. Moreover, an effort was made to correlate these locations with the usual locations where atherosclerotic plaque forms and develops [6].

II. METHODS
A. Volunteers for data acquisition
For the geometric reconstruction of the human carotid artery, the in vivo simulation of blood flow and the estimation of the hemodynamic parameters, data were acquired from three volunteers. As a result, the subjects of the study were six carotid bifurcations. All volunteers were healthy, non-smoking males without any history of cardiovascular disease, aged between 24 and 35. Their arterial geometry lacked any geometric changes due to atherosclerosis. All volunteers gave their written consent to participate in the study.
B. TOF MRA acquisition
The acquisition of the Magnetic Resonance Angiographies was performed at the diagnostic laboratories Tsoukalas - Parafestas - Tsetsilas in Larisa. Before the beginning of the procedure, a contrast agent was intravenously injected in order to enhance the contrast between the blood flow area and the remaining areas (bones, tissues, etc.). The volunteers were scanned while lying supine with their head held in a straight position. The MR images were obtained with a 1.5 Tesla Siemens Avanto scanner. The type of the MR
angiographies was Time-of-Flight (TOF). For each subject, about 200 slices were acquired around the carotid bifurcation. The thickness of each slice was 1 mm.
C. Image segmentation and 3D model reconstruction
The processing of the obtained images, as well as the lumen reconstruction, were carried out with the program ITK-SNAP. ITK-SNAP offers the capability of handling three-dimensional medical images, DICOM in our case, and provides automated segmentation using the "level set" method. Moreover, it gives the operator the ability to segment the images manually or to correct the automated segmentation. After the full determination of the shape of the lumen, each geometry underwent some smoothing and was finally exported into an STL (Stereolithography) file. The STL file was imported into the commercial mesh generation package ANSYS ICEM CFD 11 for further processing (determination of the inlet surface of the CCA and the outlet surfaces of the ICA and ECA). A tetrahedral numerical grid was generated for each carotid bifurcation. In order to obtain mesh-independent simulation results, the mesh of each geometry comprised approximately 450k elements on average.
D. Computational Fluid Dynamics (CFD)
A commercially available package, ANSYS CFX 11, was used for the simulation. For the present study we have assumed that blood behaves as a Newtonian fluid with density 1080 kg/m3, dynamic viscosity 0.004 Pa s and specific heat capacity 4000 J/kg/K, and that the walls are rigid; both assumptions are justified by the size and the location of the carotid bifurcation. Specifically, despite the fact that blood has a non-linear viscosity-shear rate relationship, it can be considered with minimal error as a Newtonian fluid when it flows through the major arteries.
E. Boundary conditions
As inlet boundary conditions at the CCA we used the Womersley [7] velocity profiles that were produced with the help of a MATLAB code (MathWorks™) based on the subject-specific velocity waveforms. These waveforms were obtained with triplex ultrasound measurements. Figure 1 shows the Womersley velocity profile of subject 1 at peak systole. A fixed pressure and a no-slip wall condition were used as outlet and wall boundary conditions, respectively.

Fig. 1 Womersley velocity profile at peak systole. Axial velocity in cm/s
Fig. 2 Front (F) and back (B) views showing low WSS distribution at peak systole on the left (L) and right (R) carotid bifurcation of subject 1
Fig. 3 Front (F) and back (B) views showing low WSS distribution at peak systole on the left (L) and right (R) carotid bifurcation of subject 2
Fig. 4 Front (F) and back (B) views showing low WSS distribution at peak systole on the left (L) and right (R) carotid bifurcation of subject 3
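Although the paper's MATLAB implementation is not listed, the classical Womersley solution it refers to is compact enough to sketch. The Python fragment below assumes the complex Fourier coefficients of the axial pressure gradient, grad_p, have already been fitted to the subject-specific waveform; the density and viscosity are the values stated in Section II-D, and the example numbers at the bottom are placeholders, not the subjects' fitted data.

import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def womersley_profile(r, R, t, omega, grad_p, rho=1080.0, mu=0.004):
    """Axial velocity u(r,t) of pulsatile flow in a rigid tube [7].

    r      : radial positions (m), 0 <= r <= R
    R      : vessel radius (m)
    t      : time instant (s)
    omega  : fundamental angular frequency of the cardiac cycle (rad/s)
    grad_p : complex Fourier coefficients of dp/dx; grad_p[0] is the
             steady part (Pa/m), assumed fitted to the measured waveform
    """
    nu = mu / rho                                    # kinematic viscosity
    u = -grad_p[0] / (4.0 * mu) * (R**2 - r**2)      # steady Poiseuille term
    for n, G in enumerate(grad_p[1:], start=1):
        alpha = R * np.sqrt(n * omega / nu)          # Womersley number
        lam = 1j**1.5 * alpha
        shape = 1.0 - jv(0, lam * r / R) / jv(0, lam)
        u = u + np.real(1j * G / (rho * n * omega) * shape
                        * np.exp(1j * n * omega * t))
    return u

# Example: radial profile at peak systole for a 3 mm radius CCA
# (placeholder pressure-gradient coefficients)
r = np.linspace(0.0, 3e-3, 50)
u = womersley_profile(r, R=3e-3, t=0.1, omega=2*np.pi,
                      grad_p=[-200.0, 80 + 40j])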
III. RESULTS
Figure 2 shows the WSS distribution contours of subject 1 at peak systole. At the left carotid artery bifurcation, low WSS was found at the CCA just before the bifurcation. Moreover, low WSS appeared at the outer wall of the ICA,
at the carotid sinus, as well as on the outer wall of the ECA just past the bifurcation. These locations involved flow separation, especially at the ECA, and recirculation at the carotid sinus characterized by low velocities (Figure 5). Separation and recirculation of the flow at the ICA probably occurred due to the steep angle between the centerline of the CCA and the centerline of the ICA. At the right carotid artery, low WSS appeared before the bifurcation, on the outer wall of the ECA and at the sinus just past the bifurcation. In particular, at the carotid sinus, despite the shallow angle between the centerlines, flow separation, recirculation and tortuosity were observed.
As can be seen in Figure 3, both carotid bifurcations of subject 2 showed low WSS distributions and flow phenomena similar to subject 1. Furthermore, intense tortuosity of the flow appeared at the sinus of the left carotid artery. This location showed low WSS due to the lower velocities. At the right carotid artery a large area of low WSS appeared at the outer wall of the ECA.
At the left carotid artery of subject 3 (Figure 4), low WSS appeared on the outer walls of the CCA before the bifurcation and on the outer walls of the ECA and of the sinus, but only on the front surface. Separation of the flow appeared at the origin of the sinus, with recirculation and low velocities. Identical results appeared at the right carotid bifurcation of subject 3.
In several bifurcations, low WSS appeared near the apex. A logical explanation is that there is a stagnation point, where the velocity takes a zero value, and moreover some fluid molecules reverse their direction. These molecules collide with the oncoming molecules, with the result being slow velocities, complex flow and low WSS.
Fig. 5 Flow conditions inside four of the carotid bifurcations (1 Left (L), 1 Right (R), 2L, 3L) represented by streamlines of velocity
IV. DISCUSSION
According to [8], atherosclerotic plaques usually occur at the common carotid artery just before the bifurcation. As for the internal carotid arteries, plaques develop on the outer wall of the proximal ICA as well as on the outer wall of the carotid sinus. Finally, a usual location of atherosclerotic plaque development is the outer wall of the ECA. These facts are in good agreement with the locations of low WSS and complex flow structures in our study.
As shown in [9], there is a large variety of carotid artery geometries. The only common element between carotid arteries is the bifurcation itself. Beyond that, amongst individuals there can be innumerable shape variations that change the characteristics of the flow. Attempts to categorize carotid arteries according to geometric parameters, such as mean diameters, branch angles, planarity, curvature, etc., have led many researchers to a dead end, with large discrepancies between their results [10]. Another important drawback in most cases is the existence of recirculation and separation of the flow, which is generally known to be unstable and, owing to the transient nature of the flow, continuously changes location and size. As a result, there can be no accurate and sufficient determination of the hemodynamic parameters from the geometric parameters alone.
Separation may occur at bifurcations, plaques, stenoses, bulbs and generally at locations where the vessel lumen expands. It usually happens at locations where there is a steep angle between the direction of the main flow and the arterial wall. The boundary layer detaches from the vessel wall, creating a recirculation zone of low velocity. Such zones help several blood components to intrude into the epithelial wall and start the formation of the plaque.
Changes in the velocity and pressure of the blood flow, arising from narrowing or widening of the walls, lead to wall shear stresses that can slow or reverse the layers of the flow next to the wall. Locations where wall shear stresses are low or change rapidly are at the highest risk for atherosclerosis. These mechanical forces appear to induce changes in the cells of the arterial walls. Specifically, a uniform field of wall shear stress tends to elongate and align the cells of the endothelium in the direction of the flow, while low levels of wall shear stress combined with an oscillatory hemodynamic environment cause irregular cell shape and lack of a specific orientation [11, 12]. Exposure to low WSS can increase the permeability of the cells and consequently increase the vulnerability of these locations to atherosclerotic plaque.
We should also take into account the fact that movements of the head may play an important role in changing the flow parameters. It has been shown that moving the head to various positions causes changes to the curvature of the cervical internal carotid artery, including deformation of the carotid sinus [13]. As a result,
the geometric parameters, and hence the flow parameters, change with every movement of the head. This leads to the inference that the flow parameters calculated in a straight position may differ from those that occur in other positions.
In our simulations we used a constant value of blood viscosity, as found in healthy people. Viscosity is a parameter depending on hematocrit, temperature and flow rate. It has been observed that the higher the amount of erythrocytes in the blood (the hematocrit), the higher the viscosity; characteristically, if the hematocrit is 50% above the normal value, the viscosity of the blood will increase by about 100%.
V. CONCLUSIONS
In this study, the potential to estimate in vivo hemodynamic parameters in subject-specific geometries and to correlate the results (low WSS and complex flow patterns) with the locations that are prone to develop atherosclerotic plaque was investigated. The results appeared to be in good agreement with results reported in the literature, which indicates that individuals may be exposed to a differential risk of atherosclerosis due to their local arterial geometry or hemodynamics. This study also introduced a method for 3D lumen reconstruction from MR angiographies which requires little time and is sufficiently accurate.
Future work will include a statistical analysis of a large number of carotid artery bifurcations from healthy individuals and patients. Moreover, it will utilize fluid-structure interaction in order to take into account arterial wall elasticity and its effect on the hemodynamic parameters, especially on WSS.

ACKNOWLEDGMENT
The authors would like to acknowledge the volunteers for participating in this study. Thanks are also due to the members of the Department of Vascular Surgery (University Hospital of Larissa), and in particular Dr Stylianos Koutsias, MD, for their invaluable suggestions and discussion. Furthermore, this study could not have been completed without the support of the diagnostic laboratories Tsoukalas - Parafestas - Tsetsilas of Larissa; their efforts and cooperation in acquiring the MRA data are gratefully acknowledged. Moreover, thanks are due to the team of Prof. Nikolaos Stergiopulos of the Swiss Federal Institute of Technology (EPFL) Lausanne, and especially Dr Demetrios Kontaxakis, for sharing their knowledge and experience on the subject. Finally, special thanks are offered to George Birpoutsoukis and Orestis Vardoulis, students of the Aristotle University of Thessaloniki, for helping in the programming of the MATLAB code and supporting the early steps of this study, respectively.

REFERENCES
1. Tan F P P et al. (2008) Advanced computational models for disturbed and turbulent flow in stenosed human carotid artery bifurcation. Biomed 2008, Proceedings 21: 390-394
2. Giannoukas A D et al. (2005) Management of the near total internal carotid artery occlusion. Eur J Vasc Endovasc Surg 29: 250-255
3. Giannoukas A D et al. (2000) Misdiagnosed post-traumatic occlusion of the internal carotid artery in a young man. Eur J Vasc Endovasc Surg 20: 478-481
4. Olgac U, Kurtcuoglu V, Saur S C, Poulikakos D (2008) Identification of atherosclerotic lesion-prone sites through patient-specific simulation of low-density lipoprotein accumulation. MICCAI 2008, LNCS 5242: 774-781
5. Glor F P et al. (2004) Image-based carotid flow reconstruction: a comparison between MRI and ultrasound. Physiological Measurement 25: 1495-1509
6. Zarins C K, Giddens D P, Bharadvaj B K, Sottiurai V S, Mabon R F, Glagov S (1983) Carotid bifurcation atherosclerosis. Quantitative correlation of plaque localization with flow velocity profiles and wall shear stress. Circ Res 53: 502-514
7. Womersley J R (1955) Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known. J Physiol 127: 553-563
8. Ku D N et al. (1985) Pulsatile flow and atherosclerosis in the human carotid bifurcation. Positive correlation between plaque location and low oscillating shear stress. Arteriosclerosis, Thrombosis, and Vascular Biology 5: 293-302
9. Goubergrits L et al. (2002) Geometry of the human common carotid artery. A vessel cast study of 86 specimens. Pathol Res Pract 198: 543-551
10. Lee S W et al. (2008) Geometry of the carotid bifurcation predicts its exposure to disturbed flow. Stroke 39: 2341-2347
11. Davies P F et al. (2001) Hemodynamics and the focal origin of atherosclerosis: a spatial approach to endothelial structure, gene expression, and function. Atherosclerosis VI. Annals of the New York Academy of Sciences 947: 7-17
12. Chatziprodromou I et al. (2007) On the influence of variation in haemodynamic conditions on the generation and growth of cerebral aneurysms and atherogenesis: a computational model. Journal of Biomechanics 40: 3626-3640
13. Callaghan F M et al. (2009) The role of the carotid sinus in the reduction of arterial wall stresses due to head movements – potential implications for cervical artery dissection. Journal of Biomechanics 42: 755-761

Author: Panagiotis Kalozoumis
Institute: Department of Mechanical Engineering, Aristotle University of Thessaloniki
Street: P.O. Box 323 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
A wearable wireless ECG sensor: a design with a minimal number of parts
E.S. Valchinov and N.E. Pallikarakis
Department of Medical Physics, University of Patras, Greece
Abstract— A wearable wireless sensor for ECG monitoring is presented. It features a split design in which the digital and the analog parts of the sensor are separate self-contained subparts. The radio link is implemented with a ZigBit module compatible with the robust IEEE 802.15.4/ZigBee stack for wireless personal area networks (WPANs). A two-way wireless data transmission link operating in the license-free 2.4 GHz frequency band is used for transferring the 10-bit measurement data to a receiver device connected to a PC. Tailored PC software is used for displaying the signals and controlling the measurement parameters. The ECG sensor is aimed at measurements in the patient's natural living environment during daily routines, and at continuous long-term measurement for people having, or recovering from, a cardiac disease. The ECG sensor can operate alone or it can be used as part of a wireless personal area network in health care facilities.
Keywords— wireless monitoring, biopotential amplifier, ECG, ZigBee, ZigBit.
I. INTRODUCTION
Disease management programs involving ECG home monitoring and integrated telemedicine solutions are expected to play an important role in ensuring the safety and cost-effectiveness of future health-care services. However, clinical experience has confirmed the need for improvements in the equipment used to record the ECG during physical activity. Currently, movement artifacts represent a major obstacle for this type of equipment. The rapid developments in wireless technologies seem to partially solve the problem by introducing a range of wearable wireless biomedical sensors.
The use of telecommunication and information technologies for remote diagnosis is growing rapidly and has resulted in different products and projects within mobile ECG recording using GSM/GPRS communication [1], WAP-based implementations [2], Internet solutions [3] and wireless local area networks (WLAN) [4]. Many continuously operating mobile and even implantable cardiac measurement devices have already come to the market [5-8]. Today the traditional 24-hour ECG Holters feature automatic data transmission via an integrated GSM module that can send information directly to the hospital [9]. Many research groups have described solutions based on
wireless sensors using an RF radio link [10-12], Bluetooth [13], IEEE 802.15.4 [14], ZigBee [15] or ANT communication protocols [16].
We present a prototype of a small, compact wearable wireless ECG sensor, suitable for continuous monitoring during daily activities while providing enhanced comfort. It features a split design with a minimal number of off-the-shelf components.
II. DESCRIPTION
A. Hardware
A general overview of the proposed ECG monitoring device is depicted in Figure 1. It consists of two units: a wearable on-body sensor that measures, samples and transmits the ECG signal, and a receiver unit (coordinator) which captures the transmitted signal and directs it to a PC, where further processing, storage and automatic data transmission to a hospital may take place. The ECG sensor features a split design consisting of two separate self-contained subparts (Pad-1, Pad-2), connected with a flexible wire. Pad-1 and Pad-2 contain respectively the digital/radio and the analog part of the sensor.
Fig. 1 Block diagram of the proposed wireless ECG sensor
This design aims to minimize the motion artifacts, the interference between the analog and digital parts of the circuit, and the size of each subpart of the sensor.
1. Biopotential Amplifier, Filters and Power Supply
The designed ad hoc two-electrode biopotential amplifier consists of two AC-coupled stages with a large fixed differential gain in the first stage. The first stage is implemented with a precision, laser-trimmed, low-power zero-drift instrumentation amplifier (IA) with a three op-amp design, in order to save board space and avoid additional trimming and resistor matching. The second gain stage and the low-pass filter are implemented with two micropower zero-drift operational amplifiers in SOT-23 packages. The AC coupling is implemented by two differential high-pass filters at the inputs of the IA. Both integrated circuits feature rail-to-rail operation for maximum dynamic range and fast recovery from overloads or large artifacts, and they offer a good compromise between the various demands for low supply voltage, low power, low noise, high CMRR, low input bias current and high input impedance. The amplifier bandwidth is limited to 150 Hz by a 2nd-order low-pass filter to prevent aliasing. A Bessel filter in a Sallen-Key topology is preferred for its excellent transient response and linear phase. No DSP filters are used, since all the required signal processing can be done at the receiver side, where no strict demands on size and consumption are imposed.
The sensor is powered by a rechargeable coin-cell lithium-ion polymer battery, which is the optimal choice in terms of dimensions and weight. However, Li-ion batteries require a precise charging procedure, so a dedicated battery charger is used to prevent the battery voltage from rising above 4.2 volts. Power supply regulation is implemented with two 3.3 V low-noise, low-dropout integrated linear regulators: one for the analog part (the biopotential amplifier) and another for the digital part (the microcontroller and the radio transceiver). This arrangement minimizes the disturbances in the analog circuitry caused by the higher current consumption of the radio transceiver. Biopotential amplifiers working with bipolar signals require dual power supplies for proper operation. In order to create a balanced power supply, we used a virtual reference common at 1.25 V, implemented with a precision, low-power, low-dropout integrated voltage reference manufactured by Texas Instruments.
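The frequency response of such a stage is easy to examine numerically. The short Python sketch below builds a digital equivalent of the 2nd-order 150 Hz Bessel low-pass with SciPy; it is an analysis aid only, not the analog Sallen-Key circuit itself, and the 500 Hz rate is the sensor's sampling frequency quoted later in the text.

from scipy import signal

fs = 500.0   # sensor sampling rate (Hz), from the paper
fc = 150.0   # anti-aliasing cut-off (Hz), from the paper

# 2nd-order Bessel low-pass; norm='mag' puts the -3 dB point at fc
b, a = signal.bessel(N=2, Wn=fc, btype='low', fs=fs, norm='mag')

# Bessel's selling point is its nearly constant group delay (linear phase)
w, gd = signal.group_delay((b, a), fs=fs)
print(f"group delay spread below fc: {gd[w < fc].ptp():.3f} samples")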
2. Microcontroller, Radio Link and Receiver Unit
The sensor control, data acquisition and radio are implemented with one ultra-compact (24 x 13.5 x 2.0 mm), low-power 2.4 GHz ZigBit module. It is an IEEE 802.15.4/ZigBee OEM module based on Atmel's mixed-signal hardware platform. The module is a combination of the popular ATmega1281V
microcontroller (128 kBytes Flash, 8 kBytes RAM, 4 kBytes EEPROM) and the Atmel AT86RF230 radio transceiver with an incorporated dual chip antenna. It offers a maximum data rate of 250 kbps and a best-in-class receiver sensitivity of -101 dBm. The data acquisition is performed with the embedded 10-bit analog-to-digital converter (ADC). The assembly of the proposed sensor is shown in Figure 2.
Fig. 2 Assembly of the proposed ECG sensor
The physical dimensions of the two subparts of the sensor without the battery are Ø 37 x 4 mm (Pad-1) and Ø 37 x 2 mm (Pad-2). The sensor weighs 25 grams with the electrodes and the battery, and 11 grams without them. The developed ECG sensor prototype without the battery is shown in Figure 3.
Fig. 3 Photo of the developed ECG sensor with the electrodes
The receiver unit is implemented with the ready-made AVR RZUSBSTICK from the ATMEL AVR RZRAVEN development kit and is powered from the PC through USB. The AVR RZUSBSTICK hardware is based on a USB microcontroller and a radio transceiver chip. The AT90USB1287 microcontroller handles the USB interface, the AT86RF230 radio transceiver and the RF protocol stacks. The antenna on the RZUSBSTICK is a folded dipole antenna with a net peak gain of 0 dB.
B. Radio Communication
The selected communication protocol is ZigBee, based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs). It is simpler than comparable protocols and targets RF applications that require a low data rate, long battery life and secure networking. Because ZigBee devices go from sleep to active mode in less than 15 ms, they can be very
responsive, particularly compared to Bluetooth wake-up delays, which are typically around three seconds. Therefore ZigBee sensors can sleep most of the time, resulting in low average power consumption and long battery life. The stack configuration used is BitCloud, a full-featured, professional-grade embedded ZigBee software stack from Atmel, compliant with the ZigBee PRO and ZigBee standards. The stack also provides an augmented set of C APIs, thus offering extended functionality.
When associating the ECG sensor with the coordinator (receiver unit), the communication channel is first selected from the 16 possible channels defined by the IEEE standard by scanning through all the channels and selecting the one with the least amount of noise. The network coordinator, which is powered from the host PC, is active all the time and replies to the ECG sensor when it asks whether the measurement should be started. During monitoring, the embedded microcontroller samples the amplified ECG signal every 2 ms (500 Hz sampling frequency), and the data is transmitted every 500 ms for about 15 ms. The radio is off between transmissions, and the microcontroller is in sleep mode with the ADC powered off between samples, which results in very low average power consumption. The data is transmitted in packets with a size of 181 bits. Received packets are acknowledged, and if a packet is lost, it is resent. The lower layers of the protocol stack are implemented as a partial implementation of ZigBee (IEEE 802.15.4). The upper protocol layers are also replaced by a simpler stack. A FIFO-style buffer is implemented in software using the embedded SRAM memory in order to prevent data loss during transient transmission errors.
The user interface runs in the Windows environment and is written in the LabView development environment. It visualizes the measured ECG signal and allows setting of the ECG sampling rate. Measured signals can also be stored in Matlab struct format if needed.
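The FIFO buffering can be pictured with a few lines of code. The Python sketch below is only an analogy for the firmware's SRAM buffer, which is written in C against the BitCloud APIs and is not listed in the paper; the capacity and batch size are invented for the example.

from collections import deque

class SampleFifo:
    """Fixed-capacity FIFO, mirroring the described behavior: samples
    queue up while the radio is off and are drained at each burst."""
    def __init__(self, capacity=512):
        self.buf = deque(maxlen=capacity)   # oldest samples dropped if full

    def push(self, sample):                 # called every 2 ms (500 Hz)
        self.buf.append(sample)

    def drain(self, max_items=250):         # called every ~500 ms burst
        out = []
        while self.buf and len(out) < max_items:
            out.append(self.buf.popleft())
        return out

fifo = SampleFifo()
for n in range(250):                        # 500 ms worth of 10-bit samples
    fifo.push(n % 1024)
packet_payload = fifo.drain()               # handed to the radio layer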
III. RESULTS
The rms noise of the developed sensor was measured in three ways with different transmit powers. The first set of measurements was done with the amplifier inputs shorted right at the snap connectors. The second set of measurements was done with a 22 kΩ resistor between the snap connectors. The third set of measurements was done with disposable pre-gelled electrodes stacked/shorted together. The measured rms voltage noise, referred to the amplifier input, in the frequency range 0.1-150 Hz with 500 Hz sampling frequency, is shown in Table 1.

Table 1 RMS noise as a function of the transmit power

Measurement setup           0 dB       -3 dB      -10 dB
Inputs shorted              3.23 μV    3.29 μV    3.32 μV
22 kΩ between the inputs    3.37 μV    3.42 μV    3.41 μV
Electrodes shorted          3.31 μV    3.32 μV    3.34 μV
As can be seen from the results, the noise does not depend much on the measurement configuration. The main contributors to the overall noise are the biopotential amplifier and the ADC, the former being dominant. It can be shown that for a successive approximation register type ADC the rms quantization noise is approximately 0.3 Vlsb, which is approximately the same as Gaussian noise with a 1.8 Vlsb peak-to-peak value. The rms noise for a 10-bit ADC with a 3.3 V reference voltage and an amplifier gain of 2000 is 0.48 μV referred to the amplifier input. The measured equivalent input amplifier noise was 3.2 μVrms in the frequency range 0.1-150 Hz and is mainly due to the large resistors used for the high-pass input filters and to the higher input voltage and current noise of the IA in the frequency range 0.1-10 Hz. There was no radio interference present in the measured signal, thanks to the RFI-filtered inputs of the IA. However, in practice the noise generated at the electrode-skin interface and that generated by the skin and the muscles is of the order of 20 μVpp, which is practically the dominating noise when a low-noise amplifier is used.
The validation of the proposed sensor, and of the system as a whole, requires human studies to assess its performance in real-world settings. We have performed only preliminary tests to estimate the performance of the sensor prototype. The sensor was attached to the subject's chest with disposable adhesive foam electrodes Ambu Blue Sensor [17]. A lead I non-filtered ECG waveform captured with the developed sensor placed on the subject's chest is shown in Figure 4.
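The quantization-noise figures quoted above are easy to verify; in the sketch below, the small discrepancy with the text comes from rounding 1/sqrt(12) to 0.3 Vlsb.

# Sanity check of the quantization-noise arithmetic quoted in the text
vref, bits, gain = 3.3, 10, 2000.0
vlsb = vref / 2**bits                    # one LSB in volts (~3.22 mV)
rms_q = vlsb / 12**0.5                   # exact rms quantization noise
print(f"rms quantization noise: {rms_q*1e6:.0f} uV at the ADC input")
print(f"referred to the input : {rms_q/gain*1e6:.2f} uV")  # ~0.47 vs 0.48 uV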
Fig. 4 ECG signal recorded with the developed wireless sensor (signal amplitude in mV versus time in sec)
The right sensing electrode was placed close to the breast bone, with a distance of eight centimeters between the electrodes. The quality of the recorded signals is sufficiently good, since the influence of the 50 Hz power-line interference is highly reduced due to the electrically floating configuration and the small size of the sensor. The noise present in the acquired signal is mainly composed of muscle artifacts and noise from the electrode-skin interface due to the subject's motion, the latter being dominant. The amount of this noise varies a lot between test subjects and electrode locations.
The maximum transfer distance, measured in an indoor corridor, was 36 meters at 0 dB transmit power, with the ceramic dual chip antenna at the ECG sensor and the folded dipole PCB antenna (net peak gain of 0 dB) at the coordinator (receiver unit). The transfer distance with -3 dB transmit power was almost the same, most probably due to reflections from the walls, floor and ceiling, which extend the distance compared to free space. The ceramic dual chip antenna does not provide transfer distances as good as the dipole PCB antenna, but offers an integrated small-footprint design and eliminates the need for costly and time-consuming RF development. The maximum transfer distance also depends a lot on the amount of 2.4 GHz radio interference present at the testing site.
The maximum and the average current consumption measured during transmission were 19 mA and 6 mA, respectively. The values measured with several other transmit powers were about the same. A current consumption of 0.5 mA was measured during waiting mode, with the biopotential amplifier and ADC powered off. The current consumption measured while the sensor was searching for the coordinator was 3 mA. The maximum operating time of the sensor with a 300 mAh rechargeable battery is 49 hours of continuous measurement.
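The quoted consumption figures allow a quick check of the power budget. The fragment below models only the radio duty cycle described in Section II-B, so it comes out well under the measured 6 mA average, which also includes the always-on analog front end and the MCU's sampling activity; the battery-life estimate therefore uses the measured value.

# Rough power-budget check from the figures given in the text
i_tx, i_idle = 19.0, 0.5          # mA: transmission burst / waiting mode
t_burst, t_period = 0.015, 0.5    # s: ~15 ms of radio activity every 500 ms
duty = t_burst / t_period                        # 3% radio duty cycle
i_radio_avg = duty * i_tx + (1 - duty) * i_idle  # ~1.1 mA, radio path only
print(f"radio-only average: {i_radio_avg:.2f} mA")
print(f"runtime at the measured 6 mA: {300/6:.0f} h")  # ~50 h vs 49 h quoted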
IV. CONCLUSIONS
The combination of a compact, low-profile split sensor design and wireless data transfer assured excellent interference and motion artifact reduction while providing comfortable wear. Additionally, this setup offers very low isolation capacitance (capacitance between the device and the environment) and thus high protection against electrical shock hazard. The leakage currents measured are well below the IEC 601-1 CF specifications. The preliminary results indicated that the proposed sensor is sufficient for continuous ECG monitoring for more than 48 hours and could be used to follow up patients who have survived cardiac arrest, ventricular tachycardia or cardiac syncope, but also for diagnostic purposes for patients with diffuse arrhythmia symptoms.

REFERENCES
1. Istepanian R, Woodward B, Gorilas E et al (1998) Design of mobile telemedicine systems using GSM and IS-54 cellular telephone standards. J Telemed Telecare 4:80-82
2. Hung K, Zhang Y (2003) Implementation of a WAP-Based Telemedicine System for Patient Monitoring. IEEE Trans Inf Technol Biomed 7:101-107
3. Oefinger M, Moody G, Krieger M et al. (2004) System for remote multichannel real-time monitoring of ECG via the Internet. Computers in Cardiology 31:753-756
4. Rollins D, Killingsworth C, Walcott G et al. (2000) A Telemetry System for the Study of Spontaneous Cardiac Arrhythmias. IEEE Trans on Biomed Eng 47:887-892
5. Vitaphone at http://www.vitaphone.de
6. Alive heart monitor at http://www.alivetec.com
7. Lifeshirt system at http://www.vivometrics.com
8. Implantable Cardiac Devices at http://wwwp.medtronic.com
9. ECG HOLTER Recorder at http://www.schiller.ch
10. Fensli R, Gunnarson E, Gundersen T (2005) A Wearable ECG-recording System for Continuous Arrhythmia Monitoring in a Wireless Tele-Home-Care Situation, IEEE Proc. 18th Int. Symposium on Computer-Based Medical Systems, Dublin, Ireland, 2005, pp 407-412
11. Fensli R, Gunnarson E, Hejlesen O (2004) A Wireless ECG System for Continuous Event Recording and Communication to a Clinical Alarm Station, IEEE Proc. vol. 3, Eng. Med. Biol. Soc., San Francisco, USA, 2004, pp 2208-2211
12. Vehkaoja A, Lekkala J (2004) Wearable wireless biopotential measurement device, IEEE Proc Eng. Med. Biol. Soc. 3:2177-2179
13. Bifulco P, Gargiulo G, Romano M et al. (2007) Bluetooth Portable Device for Continuous ECG and Patient Motion Monitoring During Daily Life, IFMBE Proc. vol. 16, Mediterranean Conf. Med. Biomed. Eng. Comp. MEDICON, Ljubljana, Slovenia, 2007, pp 369-372
14. Vehkaoja A, Verho J, Lekkala J (2006) Miniature Wireless Measurement Node for ECG Signal Transmission in Home Area Network, IEEE Proc. vol. 1, Int. Conf. Eng. Med. Biol. Soc., pp 2049-2052
15. Vehkaoja A, Verho J, Puurtinen M et al. (2005) Wireless Head Cap for EOG and Facial EMG Measurements, IEEE Proc. vol. 6, Eng. Med. Biol. Soc., pp 5865-5868
16. Vuorela T, Seppä V-P, Vanhala J et al. (2007) Wireless Measurement System for Bioimpedance and ECG, IFMBE Proc. vol. 17, Int. Conf. on Electrical Bioimpedance and Conf. on Electrical Impedance Tomography, Graz, Austria, 2007, pp 248-251
17. Ambu Blue Sensor at http://www.ambu.com/
Address of the corresponding author:
Author: E.S. Valchinov
Institute: Department of Medical Physics, University of Patras
City: Patras 26500
Country: Greece
Email: [email protected]
Active Contours without Edges Applied to Breast Lesions on Ultrasound
W. Gómez1,2, A. F. C. Infantosi2, L. Leija1, W. C. A. Pereira2
1 Department of Electrical Engineering, CINVESTAV-IPN, Mexico D.F., Mexico
2 Biomedical Engineering Program, COPPE/UFRJ, Rio de Janeiro, Brazil
Abstract— A computerized lesion segmentation technique for breast ultrasound images is proposed. It uses a deformable model based on active contours without edges. The performance of this technique was assessed by comparing the resulting contours of 50 ultrasonographies with those manually delineated by two radiologists. For the boundary, the error metrics used were the Hausdorff distance (HD) and the mean absolute distance (MD). The area error was assessed by using the percentage of false positive (FP), false negative (FN), and true positive (TP) pixels. The results were HD = 8.76±6.04, MD = 2.78±1.20, FP = 4.98±3.83, FN = 16.19±6.85, and TP = 83.86±6.75. These findings indicate that the proposed technique was capable of reaching highly detailed contours.
Keywords— Breast ultrasound, segmentation, active contours, anisotropic diffusion.
I. INTRODUCTION
Breast ultrasound (US) is the most important adjunct to mammography for patients with palpable masses and normal or inconclusive mammograms [1]. Moreover, breast sonography has the ability to depict hidden lesions in women with dense breast tissue [2], and is particularly useful in distinguishing cystic from solid lesions, with an accuracy approaching 100% [3]. Breast US is also used to differentiate between benign and malignant tumors [4]. However, due to the large overlap in the sonographic appearance of breast lesions, it has been difficult to diagnose them by visual inspection of the specialist alone [5]. To improve diagnosis, computer-aided diagnosis (CAD) schemes have emerged as a "second reader" [5]. A fundamental step in breast US CAD is image segmentation, whereby a lesion can be separated from the image background and other structures. The accurate segmentation of breast lesions in ultrasonography is a difficult task, due to the presence of speckle and shadowing, low or nonuniform contrast of certain structures, and the variability of the echogenicity of nodules [6].
This work aims at proposing an approach to segment breast lesions in US images. This technique uses anisotropic diffusion filtering guided by texture descriptors to remove speckle, followed by the morphological operation of minima imposition to enhance the lesion region from its
ground. Finally, the segmentation is performed by the technique of active contours without edges. II. METHODOLOGY
A. Image database

For this study, 50 ultrasonographies were acquired during routine breast screening at the National Cancer Institute of Rio de Janeiro, Brazil. These images were obtained with a 7.5 MHz linear array B-mode 40 mm ultrasound probe (Sonoline Sienna®, Siemens) with axial and lateral resolutions of 0.45 mm and 0.49 mm, respectively. For each image, two experienced radiologists cropped a rectangular region of interest (ROI), which includes the lesion and its adjacent tissue. In addition, they manually delineated all tumor contours with a mouse device, using software developed for that purpose.

B. Image enhancement and filtering

The requirements for medical image preprocessing are: (i) preserving lesion boundaries and structure details, (ii) enhancing edge information, and (iii) suppressing noise efficiently [7]. The preprocessing consists of first normalizing the original ROI, f(m,n), to the range of 0 to 255. Then, a contrast-limited adaptive histogram equalization (CLAHE) is applied to accentuate the lesion with respect to the surrounding regions, producing the contrast-enhanced image fC(m,n) [8]. The next step is to remove speckle from fC(m,n) by employing a texture-oriented anisotropic diffusion filter, defined as [6, 9]:

∂fC/∂t = div( c(‖∇R‖) ∇fC ) ,   (1)
where ∇ is the gradient operator, div is the divergence operator, ‖⋅‖ denotes the magnitude, and c(⋅) is the diffusion coefficient, which favors high-contrast edges over low-contrast ones and is expressed as:

c(s) = exp( −(s/κ)² ) ,   (2)
where κ is a constant that controls the diffusion extension and determines the contrast of the edges to be preserved. R(m,n) = {r1,…,rn}, in (1), is a texture vector defined by the responses of a set of Gabor filters, with spatial impulse response expressed as:

h(x,y) = exp( −(x̃² + ỹ²)/(2σ²) ) cos(2πkx̃) ,   (3)

where x̃ = x cos θ + y sin θ and ỹ = −x sin θ + y cos θ.
The Gabor function h(x,y) is a sinusoid centered at the frequency k > 0, modulated by a Gaussian envelope with variance σ² > 0 and orientation θ. The output of a single Gabor filter is calculated as the two-dimensional convolution of the contrast-enhanced image with h(x,y).

To enhance the lesion from its background, a morphological minima imposition operator is applied to fad(m,n) by using a binary marker point manually defined by the user at the lesion center. This point is also taken as the origin of the initial active contour in the segmentation procedure. The minima imposition enforces an image region to be the darkest one by using a single marker (or a set of markers). This operation is performed by computing the pointwise minimum between the input image, f, and the marker image, fm, as f ∧ fm. This procedure enforces the connected region masked by fm to be the global minimum of f. Then, a morphological reconstruction by erosion of f ∧ fm from the marker image fm, denoted Rε_{f∧fm}(fm), recovers all image information contained in f except those regions masked by fm. Fig. 1 illustrates the resulting images after applying the following preprocessing stages: (i) normalization, (ii) contrast enhancement, (iii) speckle filtering, and (iv) minima imposition.

C. Active contours without edges theory

Classical approaches of active contours use the image gradient to stop the evolving curve on the boundary of the desired object (i.e. with edges). However, when the image is very noisy or the object edges are poorly defined, the active contour can completely miss the object boundary. Hence, Chan and Vese [10] proposed an active contour model with the stopping term based on the Mumford-Shah segmentation technique [10]-[12], and not on the image gradient (i.e. without edges). The level-set formulation provides an effective implicit representation for evolving curves and surfaces. Let C = ∂ω be the boundary of an open set ω ⊂ Ω. Then, at the zero level set the contour C is represented by the Lipschitz function φ (called the level set function) as: φ(x,y) > 0 in ω, φ(x,y) < 0 in Ω\ω, and φ(x,y) = 0 on ∂ω (Fig. 2).
Fig. 1. (a) Original ROI. (b) Normalization. (c) CLAHE. (d) Texture-oriented anisotropic filtering. (e) Binary marker. (f) Minima imposition.
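The preprocessing chain of Section II-B can be condensed into a short script. What follows is a minimal Python sketch, not the authors' implementation: it assumes scikit-image for CLAHE and for the reconstruction-by-erosion step, it drives the diffusion coefficient of Eq. (2) with the plain image gradient instead of the Gabor texture vector R, and the parameter values (kappa, n_iter, dt) are illustrative.

import numpy as np
from skimage import exposure
from skimage.morphology import reconstruction

def preprocess(roi, marker_rc, kappa=30.0, n_iter=20, dt=0.15):
    # (i) normalization of the ROI to the range 0..255
    f = 255.0 * (roi - roi.min()) / (np.ptp(roi) + 1e-12)
    # (ii) CLAHE; equalize_adapthist expects and returns values in [0, 1]
    fc = 255.0 * exposure.equalize_adapthist(f / 255.0)
    # (iii) diffusion with c(s) = exp(-(s/kappa)^2), cf. Eqs. (1)-(2);
    # np.roll gives periodic borders, acceptable for a sketch
    fad = fc.copy()
    c = lambda g: np.exp(-(g / kappa) ** 2)
    for _ in range(n_iter):
        gn = np.roll(fad, -1, 0) - fad
        gs = np.roll(fad, 1, 0) - fad
        ge = np.roll(fad, -1, 1) - fad
        gw = np.roll(fad, 1, 1) - fad
        fad += dt * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)
    # (iv) minima imposition: reconstruction by erosion of f ∧ fm from fm
    fm = np.full_like(fad, 255.0)
    fm[marker_rc] = 0.0                      # user-defined marker point
    return reconstruction(fm, np.minimum(fad, fm), method='erosion')

The marker point marker_rc plays the role of the user click at the lesion center; in the paper the diffusion is steered by the Gabor responses of Eq. (3), which would replace the image gradient used here.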
Fig. 2. A curve given by the zero level set of the function φ.

The Chan-Vese model [11] minimizes the following energy for a mono-channel image (e.g. a gray-level image):

F(c1, c2, φ) = μ ∫Ω δε(φ(x,y)) ‖∇φ(x,y)‖ dx dy + ∫Ω |u0(x,y) − c1|² Hε(φ(x,y)) dx dy + ∫Ω |u0(x,y) − c2|² (1 − Hε(φ(x,y))) dx dy ,   (4)
where u0 is a given image (in this case, the imposed image), δε is the Delta function, and μ is a constant. When μ is large, only larger objects are detected, whereas for small μ, objects of smaller size are also detected. c1 and c2 are unknown constants, denoting Ω1 = ω and Ω2 = Ω\ω, respectively, which are calculated as:

c1(φ) = ∫Ω u0(x,y) Hε(φ(x,y)) dx dy / ∫Ω Hε(φ(x,y)) dx dy ,   (5)

c2(φ) = ∫Ω u0(x,y) (1 − Hε(φ(x,y))) dx dy / ∫Ω (1 − Hε(φ(x,y))) dx dy ,   (6)

where Hε is the Heaviside function, defined by:
Hε(z) = (1/2)·(1 + (2/π)·arctan(z/ε)) ,   (7)

where ε = 10⁻⁵, and δε(z) = dHε(z)/dz. The curvature term div(∇φ/‖∇φ‖) arising from the first term in (4) is the level-set function curvature and can be expressed as:

κ(φ) = (φxx·φy² − 2·φx·φy·φxy + φyy·φx²) / (φx² + φy²)^(3/2) ,   (8)

where φx, φy, φxy, φxx and φyy are the partial derivatives of φ, which are approximated by forward finite differences. Because the model does not make use of a stopping-edge function based on the gradient, it can detect edges both with and without gradient. Fig. 3 illustrates the Chan-Vese model applied to the segmentation of a breast lesion.

Fig. 3. (a) Evolution of the active curve using the Chan-Vese model applied on the imposed image. (b) Final contour of the lesion in Fig. 1(a). (c) Manual delineation used as reference.

D. Performance Assessment

For evaluating the accuracy of our segmentation technique (SC), the computerized contours were compared to the manual delineations performed by two radiologists (SM), considering the latter as references. Such comparison is made with the help of parameters derived from both the boundary and the enclosed area.

In the comparison between SC and SM, two boundary-based error metrics have been employed: the Hausdorff distance (HD) and the mean absolute distance (MD). The former measures the worst possible disagreement between the two outlines, whereas the latter estimates the disagreement averaged over the two contours. Let M = {m1, m2, …, mη} be the reference boundary and P = {℘1, ℘2, …, ℘σ} the computerized segmentation result, where each element of M or P is a point in the corresponding contour. Then, the distance of every point in P from all points in M is calculated; ∀℘j ∈ P the closest distance to the contour M is defined as [13]:

d(℘j, M) = min_i ‖℘j − mi‖ ,   (9)

where ‖⋅‖ is the 2D Euclidean distance between any two points. Then, HD is defined as the maximum of d(℘j, M) over all j, and MD is the average of d(℘j, M) over all j. Ideally, when M and P are coherent, HD and MD should be zero.

For calculating the area error between SC and SM, three metrics were used, as defined in [13]:

FP = |SC ∪ SM − SM| / |SM| × 100% ,   (10)

FN = |SC ∪ SM − SC| / |SM| × 100% ,   (11)

TP = |SC ∩ SM| / |SM| × 100% ,   (12)

where FP denotes the area falsely identified by SC compared to the gold standard SM, FN expresses the percentage of the area in SM that was missed by SC, and TP indicates the percentage of the total area of SM that was covered by SC. The areas were computed as numbers of pixels.
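The assessment metrics of Eqs. (9)-(12) are straightforward to compute. The following is a small Python sketch under the stated definitions; array names and shapes are assumptions, not the authors' code.

import numpy as np

def boundary_errors(P, M):
    # d(p_j, M) = min_i ||p_j - m_i|| (Eq. 9) for every point of P;
    # P and M are (n, 2) arrays of contour points
    d = np.sqrt(((P[:, None, :] - M[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.max(), d.mean()                  # HD and MD, in pixels

def area_errors(sc, sm):
    # sc, sm: boolean masks of the computed and reference segmentations
    ref = sm.sum()
    fp = 100.0 * (sc & ~sm).sum() / ref       # Eq. (10)
    fn = 100.0 * (~sc & sm).sum() / ref       # Eq. (11)
    tp = 100.0 * (sc & sm).sum() / ref        # Eq. (12)
    return fp, fn, tp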
III. RESULTS
Examples of the segmentation results for three irregular tumors are shown in Fig. 4. SC was capable of preserving fine details that the radiologists missed in their manual delineations, such as angular margins and spiculations, which are characteristics that suggest malignancy.
Fig. 4. Examples of three irregular tumors depicted by the radiologist (red) and SC (yellow). Although the shapes of both delineations are similar, it is noticeable that SC preserved fine details that the radiologist missed.
The resulting averages and standard deviations of HD and MD and of FP, FN and TP, for the 50 ultrasonographies, are shown in Tables I and II, respectively. For each of the errors, the comparison was carried out between SC and each radiologist's manual delineation, SM1 and SM2. These results
indicate the high agreement between the proposed segmentation procedure and both manual delineations.

Table I. Boundary errors – HD and MD related to both radiologists

  Cases        Radiologist   HD (pixels)   MD (pixels)
  50           SM1           8.98±6.34     2.75±1.16
               SM2           8.91±5.62     2.79±1.14
  Total mean                 8.94±5.96     2.77±1.15

Table II. Area errors – FP, FN and TP related to both radiologists

  Cases        Radiologist   FP (%)        FN (%)        TP (%)
  50           SM1           5.09±3.71     16.13±6.84    84.25±6.74
               SM2           4.87±3.95     16.90±6.91    83.47±6.82
  Total mean                 4.98±3.83     16.52±6.85    83.86±6.75
IV. DISCUSSIONS
Previous works that used active contours claim to be fully automatic [13] or semiautomatic [6]. In the first case, there is no human intervention to mark the lesion region within the image, but a priori knowledge of the lesion shape and of the image texture characteristics is needed. Madabhushi and Metaxas [13] evaluated their fully automatic method in terms of both boundary and area errors, using a set of 42 ultrasonographies. Manual delineations of a single radiologist were taken as reference. They implemented a deformable model based on active contours and considered a rough, automatically detected boundary as the initial contour. The active contour was then adjusted to the lesion borders by employing the directional gradient of the image. The results were: HD = 19.72 pixels, MD = 6.68 pixels, FP = 20.85%, FN = 24.95%, and TP = 75.04%. Compared to our findings (Tables I and II), these values indicate a lower performance of that segmentation procedure. Alemán-Flores et al. [6] proposed a semiautomatic segmentation technique, in which the user marks a point at the lesion center. Their active-contour segmentation method needs a lesion presegmentation to generate the initial curve; if the presegmentation is not properly carried out, the final contour will not depict the lesion accurately. Our segmentation method, in contrast, does not require a presegmentation stage: the initial contour is a 5-pixel-radius circle placed on the point marked by the user. Moreover, SC does not depend on the image gradient to stop the evolving curve, and hence it could delineate lesions with poorly defined margins and heterogeneous textures.
V. CONCLUSIONS
A semiautomatic segmentation method for breast ultrasonography has been proposed, using the active contours without edges technique to delineate highly detailed lesion contours. The initial results showed that SC is in good agreement with the manual delineations and was capable of depicting boundary details such as spiculations and angular margins, which constitute important information for establishing a diagnostic hypothesis and for the development of a CAD system.
ACKNOWLEDGMENT

To National Council of Science and Technology (CONACYT, Mexico) and the Brazilian Ministries of Science and Technology and Health, for the financial support.
REFERENCES

1. Skaane P (1999) Ultrasonography as adjunct to mammography in the evaluation of breast tumors. Acta Radiol Suppl 420:1–47
2. Crystal P, Strano S D, Shcharynski S, Koretz M J (2003) Using sonography to screen women with mammographically dense breasts. AJR Am J Roentgenol 181:177–182
3. Zonderland H M, Coerkamp E G, van de Vijver M J, van Voorthuisen A E (1999) Diagnosis of breast cancer: contribution of US as an adjunct to mammography. Radiology 213(2):413–422
4. Stavros A T, Thickman D, Rapp C L et al (1995) Solid breast nodules: use of sonography to distinguish between benign and malignant lesions. Radiology 196(1):123–134
5. Giger M L (2000) Computer-aided diagnosis of breast lesions in medical images. Comput Sci Eng 2(5):39–45
6. Alemán-Flores M, Álvarez L, Caselles V (2007) Texture-oriented anisotropic filtering and geodesic active contours in breast tumor ultrasound segmentation. J Math Imaging Vis 28:81–97
7. Gerig G, Kubler O, Kikinis R, Jolesz F A (1992) Nonlinear anisotropic filtering of MRI data. IEEE Trans Med Imaging 11(2):221–232
8. Heckbert P S (1994) Graphics Gems IV. Academic Press, Cambridge
9. Gómez W, Leija L, Alvarenga A V et al (2010) Computerized lesion segmentation of breast ultrasound based on marker-controlled watershed transformation. Med Phys 37(1):82–95
10. Chan T F, Vese L A (2001) Active contours without edges. IEEE Trans Image Process 10(2):266–277
11. Chan T F, Sandberg B Y, Vese L A (2000) Active contours without edges for vector-valued images. J Vis Commun Image Represent 11(2):130–141
12. Mumford D, Shah J (1989) Optimal approximation by piecewise smooth functions and associated variational problems. Comm Pure Appl Math 42:577–685
13. Madabhushi A, Metaxas D N (2003) Combining low-, high-level and empirical domain knowledge for automated segmentation of ultrasonic breast lesions. IEEE Trans Med Imaging 22(2):155–169

Address of the corresponding author:
Author: Wilfrido Gómez Flores
Institute: Biomedical Engineering Program, COPPE/UFRJ
Street: Horácio Macedo Ave. 2030, Cidade Universitária
City: Rio de Janeiro
Country: Brazil
Email: [email protected]
Automatic identification of trabecular bone fracture

S. Tassani¹,², P.A. Asvestas³, G.K. Matsopoulos¹, and F. Baruffaldi²

¹ Institute of Communication and Computer System, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
² Laboratorio di Tecnologia Medica, Istituti Ortopedici Rizzoli, Bologna, Italy
³ Department of Medical Instruments Technology, Faculty of Technological Applications, Technological Educational Institute of Athens, Athens, Greece

Abstract— The correct assessment of bone fracture risk is a mandatory issue for the application of an efficient therapeutic scheme. However, it is still not clear which parameter can best predict the probability of a fracture event. The correct identification of the fracture zone is a prerequisite for recognizing the parameter that best describes bone failure. The in-vitro analysis of trabecular bone by means of micro-tomographic devices (micro-CT) is a common technique for the study of bone structure. In this scenario, image-guided failure analysis is becoming a powerful tool for the characterization of the mechanical behavior of trabecular bone. In the present study, the application of image registration techniques is proposed for the rapid and precise identification of the broken region. Five trabecular bone specimens were acquired by means of a micro-CT before and after mechanical compression. The two datasets of every specimen were registered using a surface-based automatic method, and the broken region was identified as the only region not registered. The automatic procedure was finally validated by comparing its results with the visual identifications made independently by three operators. The procedure resulted in complete agreement in 7 cases out of 10. For the three remaining cases, the operators had to compare their findings against the ones suggested by the automatic registration procedure. In all three cases, the operators had to correct their findings, suggesting that the proposed automatic registration procedure was successfully performed and the identification of the fracture zone was correct. In conclusion, a novel procedure for the identification of the trabecular bone fracture zone is presented and validated throughout this study.
Keywords— Bone fracture risk, image registration, fracture identification, micro-CT

I. INTRODUCTION

Bone fracture risk assessment is a primary issue for the prevention of traumatic events and for the application of efficient therapeutic schemes. The standard analysis for the assessment of fracture risk is the evaluation of bone mineral density (BMD) measured by means of dual X-ray absorptiometry (DXA) [1-3]. Even though BMD has been proved to correlate with fracture risk, its measurement is not always reliable for identifying individuals who will suffer a fracture [4, 5], and it does not allow determination of structural changes within the cancellous bone microarchitecture, which was found to have an important effect on bone strength [1]. Correct identification of the fracture zone is a prerequisite in order to identify the parameter that best describes bone failure. In-vitro analysis by means of micro-tomographic (micro-CT) devices is widespread for the study of trabecular bone structure and its relation to mechanical behavior. In this scenario, image-guided failure analysis is becoming a powerful tool for the characterization of the mechanical behavior of trabecular bone. In particular, in [6], 17% of fracture zones were wrongly identified even in a situation of controlled involvement of the trabecular structure (i.e. reduced off-axis angle [7]) and with half of the specimen identified as broken. The correct identification of the trabecular broken region has a potential impact on the understanding of the structural mechanisms driving the mechanical fracture of bone, through the calculation of morphometric parameters in the actual broken region instead of the whole specimen.

In the present study, a new methodological approach for the automatic identification of the trabecular bone fracture zone is proposed, aiming for a fast and precise identification of the broken region of trabecular bone specimens, thus permitting larger numbers of specimens to be further analyzed.
II. MATERIALS AND METHODS
A. Bone Specimens

Five cylindrical specimens of trabecular bone, with a diameter of 10 mm and a height of 26 mm, were extracted from epiphysial slices (i.e. femoral condyles, tibial plateau and distal tibia) by means of a holed diamond-coated milling cutter.

B. Micro-CT scanning

Trabecular specimens were acquired using a previously published protocol [8]. The same acquisition procedure was performed before and after the mechanical test (see below). A global fixed threshold was used for the segmentation of the trabecular specimens, as previously reported in the literature [8].
C. Mechanical testing

All specimens underwent compressive testing [7]. Each specimen was cemented directly onto the testing machine (Mod. 8502, Instron Corp., Canton, MA, USA) to ensure the alignment between the testing direction and the specimen axis. The specimen free length was set to 20 mm. Before testing, the specimen was immersed in Ringer's solution for an additional hour. Each specimen was compressively loaded until failure, with a strain rate of 0.01 s⁻¹ [2, 9, 10].

D. Visual inspection for the identification of the broken region

A comparison was performed between the datasets acquired before and after the mechanical compression for the identification of the broken regions. These regions were identified visually, by comparing the stack of pre- and post-failure micro-CT cross-sections (slices) over the whole free height of the specimens, using the pre-failure cross-sections as reference. Depending on the presence or absence of trabecular fracture in the post-failure micro-CT slices, each corresponding pre-failure slice was labeled as a 'broken cross-section' or an 'unbroken cross-section'.

E. The automatic registration method
Automatic identification of the fracture zone was performed by applying a 3D automatic registration method to the acquired data sets. For each specimen, two data sets were acquired: the pre-failure data set and the post-failure data set, after mechanical compression. The purpose of the 3D registration is to automatically identify the zone of the pre-failure set that corresponds to the fracture zone of the post-failure data set. The presented method is a surface-based registration, which involves the determination and matching of the surfaces of the two sets and the minimization of a distance measure between these corresponding surfaces [11, 12]. The transformation model employed was the rigid transformation model [13], according to the following equation:

(x′, y′, z′)ᵀ = Rx(tx) · Ry(ty) · Rz(tz) · (x, y, z)ᵀ + (dx, dy, dz)ᵀ ,

where Rx(tx), Ry(ty) and Rz(tz) are the elementary rotation matrices about the x, y and z axes; tx, ty and tz represent the rotation angles and dx, dy and dz the translation displacements along the x, y and z axes, respectively. No scaling was used.

F. Identification of the fracture zone using the automatic registration procedure

For all the acquired data the automatic registration procedure was applied. Two subsets of the post-failure set were defined: the upper and the lower subsets, relative to the fracture zone. The upper subset was formed from slices of the post-failure set from the first upper slice of the set down to a randomly selected slice located above the fracture zone that clearly corresponds to an 'unbroken region', whereas the lower subset was formed from slices of the post-failure set from the lowest slice up to a randomly selected slice located below the fracture zone that also clearly corresponds to an 'unbroken region'. The proposed registration method was applied twice: the first registration involving the upper subset and the pre-failure set, and the second registration the lower subset and the pre-failure set (Fig. 1).

Fig. 1 The volume registration steps are shown. (a) The pre- and post-failure sets. The float set is registered on the reference set. The procedure is applied twice: one registration involving the upper subsets of both sets (b) and the other involving the lower subsets (c). The regions belonging to clearly unbroken regions coincide, while the regions belonging to the fracture zone (red) present misalignments.
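As an illustration of the transformation model above, here is a minimal Python sketch; the function name and the NumPy formulation are assumptions, not the authors' implementation.

import numpy as np

def rigid_transform(points, tx, ty, tz, d):
    """Apply p' = Rx(tx) @ Ry(ty) @ Rz(tz) @ p + d to an (n, 3) point array."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rx @ Ry @ Rz               # no scaling, as stated above
    return points @ R.T + np.asarray(d)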
G. Full 3D identification of the broken region

A full 3D definition of the broken region, not tied to the identification of broken slices, was performed starting from the 3D distribution of the misaligned ROIs. The 3D volume of interest (VOI) was identified by applying a dilation procedure around the misaligned ROIs. Every ROI was dilated by 0.5 mm in every direction, obtaining for every ROI an ellipsoidal VOI centered on the broken trabecula. When ROIs were close enough, the VOIs were fused, creating a single 3D VOI (a code sketch of this step follows the next paragraph).

H. Validation of the registration procedure

The procedure was applied on 5 specimens, which had previously been mechanically tested. The outcome of the 3D registration, the fracture zone, was compared to the broken regions as visually identified by the operators. When differences between the two procedures were greater than the threshold identified during the visual inspection, the
operators were asked to verify if the error was due to the 3D registration approach or the visual one.
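A minimal sketch of the Sec. II-G dilation-and-fusion step, assuming SciPy and an isotropic voxel size; the voxel size, the input mask and the function names are illustrative, not the authors' code.

import numpy as np
from scipy import ndimage

def broken_vois(misaligned_mask, voxel_mm=0.05, dilate_mm=0.5):
    # dilate every misaligned ROI voxel by 0.5 mm in all directions
    r = int(round(dilate_mm / voxel_mm))
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (xx ** 2 + yy ** 2 + zz ** 2) <= r ** 2
    grown = ndimage.binary_dilation(misaligned_mask, structure=ball)
    # VOIs that touch fuse into a single connected 3D region (one label each)
    labels, n = ndimage.label(grown)
    return labels, n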
III. RESULTS

A. Validation: identification of the fracture zone

Five trabecular bone specimens were mechanically tested in compression and acquired by means of a micro-CT device before and after the mechanical test. Fig. 2 shows the result of applying the automatic registration procedure to a specimen that underwent mechanical testing, in order to define the fracture zone. The pre- and post-failure sets were compared (Fig. 2a) and the broken region was identified both visually and automatically, in order to determine the starting and ending broken slices. A plot for the identification of the fracture zone was obtained by the automatic registration procedure (blue line), while the visual inspection of the observers gave as result the starting and ending broken slices (red straight lines) (Fig. 2b). Finally, the full 3D identification of the broken structure was performed for every specimen (Fig. 2c).
Fig. 2 The whole identification process is shown. (a) Micro-CT volumes of the bone specimen before and after the mechanical test are shown. (b) The visual procedure by the observers is compared to the automatic identification of the fracture zone. On the vertical axis of the plot, the number of slices of the pre-failure data set is displayed, whereas the percentage of overlapping of all ROIs for each slice, as obtained by the proposed registration procedure, is shown on the horizontal axis. (c) Finally, a full 3D broken region is identified.

Due to the operator-dependent nature of the visual identification, a small disagreement among operators in the identification of the broken region is unavoidable. The three operators reported a mean variation of 18 slices in the identification of the start and end of the broken region. This value was used as the threshold for the classification of disagreement between the automatic and the visual procedure. The comparisons are shown in Table 1.

Table 1 Identification of the starting and ending slices of the 'broken region' for all specimens as obtained by the visual approach and the proposed registration procedure. (*) difference between the visual and automatic procedures greater than the disagreement threshold.

  Specimen N°   Starting Slice               Ending Slice
                Visual      Automatic        Visual      Automatic
  1             181         191              589         665(*)
  2             26          42               610         611
  3             17          22               381         396
  4             13          26               406         416
  5             606         503(*)           951         988(*)

The visual approach and the automatic registration procedure were in agreement in 7 out of 10 cases. In fact, the difference between the procedures was smaller than the operators' variability in the identification. In the remaining three cases, the visual approach and the automatic registration procedure disagreed by 37, 76 and 103 slices. For these cases, the operators had to compare their findings against the ones suggested by the automatic registration procedure. In all three cases, the operators had to correct their findings, suggesting that the proposed automatic registration procedure was successfully performed and the identification of the fracture zone was correct.

B. Validation: full 3D identification
The full 3D broken region was also identified for every specimen (Fig. 2c). Through this kind of visualization it was possible to identify the 3D shape of every principal and secondary broken region. Moreover, it was possible to identify some single broken trabeculae. These trabeculae were not related to the principal broken regions and were not identified by the operators: they were three-dimensionally isolated (no other broken trabeculae above or below) and therefore not enough to label a single slice as broken by their presence. Nonetheless, they were identified by the automatic registration procedure. Once again the operators were asked to verify the presence of these broken trabeculae. The operators generally agreed with the automatic registration identification, but in some cases they disagreed among themselves.

IV. CONCLUSIONS
The objective of the present study was the presentation and validation of a novel technique for the automatic identification of the bone fracture zone in trabecular specimens. The technique was compared and validated against
the visual approach performed by three operators with large experience. Moreover, a full 3D automatic identification of the broken region was introduced. The presented study has shown comparable results between the visual approach and the automatic registration procedure in the identification of the principal "broken region". However, during the validation of the full 3D identification procedure, especially in relation to the identification of single trabeculae, some disagreement among the operators was reported. This can be related to one objective problem: the same object acquired twice in micro-CT is subject to different interpolations of the images and, therefore, different segmentation results. The problem is usually negligible, but not when trabeculae are thin, close to the pixel size, and more subject to the partial volume effect. The proposed identification procedure is based on the registration of segmented images; consequently, if the same object has two different segmentation results, the procedure will identify the object as broken. This result points out a limitation in the choice of global thresholding as the segmentation procedure. In fact, in cases of cancellous bone with thin trabeculae, i.e. osteoporosis, the use of a different segmentation procedure could be desirable. The application of a 3D image registration method to identify the bone fracture zone in in-vitro data as visualized by micro-CT is considered to be novel. To the authors' knowledge, image registration methods have been applied for the alignment of micro-CT data sets in only two previously published studies [14, 15], but with different aims. Furthermore, the proposed registration procedure provides a fast and accurate methodology to identify the fracture zone in cases where mechanical testing is applied to the specimens. The proposed procedure proved to be reproducible; thus it can be applied for a larger-scale analysis of the fracture region. The execution time for the application of the proposed registration procedure was 50 sec in total, for all data sets within this study, including the reconstruction of the post-failure set (10 sec), the segmentation of the two data sets (10 sec), the application of the registration method (twice, in 10 sec in total), and the identification of the fracture zone (10 sec). This results in a substantial reduction of the time for the identification of the fracture zone compared with the 20 to 30 minutes required for the visual approach by each observer. In conclusion, a novel procedure for the identification of the trabecular bone fracture zone was presented and validated throughout this study. The combined use of image-guided failure analysis of micro-CT datasets and mechanical tests can lead to a more comprehensive study of the fracture behaviour in the near future.
ACKNOWLEDGMENT

This work was supported by the EC (acronym: MOSAIC). The datasets were available from http://www.physiomespace.com, and produced by Laboratorio di Tecnologia Medica, with the financial support of the EU project LHDL (IST-2004-026932). We would like to thank Luigi Lena for the graphical support and Nikolaos Mouravliansky for the technical support.
REFERENCES

1. Gibson L J (2005) Biomechanics of cellular solids. J Biomech 38(3):377-99
2. Goulet R W et al (1994) The relationship between the structural and orthogonal compressive properties of trabecular bone. J Biomech 27(4):375-89
3. Helgason B et al (2008) Mathematical relationships between bone density and mechanical properties: a literature review. Clin Biomech (Bristol, Avon) 23(2):135-46
4. Marshall D, Johnell O, Wedel H (1996) Meta-analysis of how well measures of bone mineral density predict occurrence of osteoporotic fractures. BMJ 312(7041):1254-9
5. McCreadie B R, Goldstein S A (2000) Biomechanics of fracture: is bone mineral density sufficient to assess risk? J Bone Miner Res 15(12):2305-8
6. Perilli E et al (2008) Dependence of mechanical compressive strength on local variations in microarchitecture in cancellous bone of proximal human femur. J Biomech 41(2):438-46
7. Ohman C et al (2007) Mechanical testing of cancellous bone from the femoral head: experimental errors due to off-axis measurements. J Biomech 40(11):2426-33
8. Perilli E et al (2007) MicroCT examination of human bone specimens: effects of polymethylmethacrylate embedding on structural parameters. J Microsc 225(Pt 2):192-200
9. Ciarelli T E et al (2000) Variations in three-dimensional cancellous bone architecture of the proximal femur in female hip fractures and in controls. J Bone Miner Res 15(1):32-40
10. Linde F, Hvid I, Madsen F (1992) The effect of specimen geometry on the mechanical behaviour of trabecular bone specimens. J Biomech 25(4):359-68
11. Matsopoulos G K (2009) Medical Image Registration and Fusion Techniques: A Review. In: Stergiopoulos S (ed) Advanced Signal Processing Handbook – Theory and Implementation for Radar, Sonar, and Medical Imaging Real-Time Systems. CRC Press, Taylor & Francis, Florida, pp 148-221
12. Matsopoulos G K et al (2003) CT-MRI automatic surface-based registration schemes combining global and local optimization techniques. Technol Health Care 11(4):219-32
13. van den Elsen P A, Pol E J D, Viergever M A (1993) Medical image matching - a review with classification. IEEE Engineering in Medicine and Biology Magazine 12(1):26-39
14. Boyd S K et al (2006) Evaluation of three-dimensional image registration methodologies for in vivo micro-computed tomography. Ann Biomed Eng 34(10):1587-99
15. Hulme P A, Ferguson S J, Boyd S K (2008) Determination of vertebral endplate deformation under load using micro-computed tomography. J Biomech 41(1):78-85

Author: Simone Tassani
Institute: Laboratorio di Tecnologia Medica
Street: via di Barbiano 1/10
City: Bologna
Country: Italy
Email: [email protected]
Visualization System to Improve Surgical Performance during a Laparoscopic Procedure

L.T. De Paolis¹, M. Pulimeno² and G. Aloisio¹

¹ Department of Innovation Engineering, Salento University, Lecce, Italy
² Engineering Faculty, Salento University, Lecce, Italy
Abstract— Minimally invasive surgery offers advantages that make it the best choice for many diseases. Modern technologies give great support to this kind of surgical procedure through medical image processing and visualization, 3D organ reconstruction and intra-operative surgical guidance. In this paper an advanced visualization system is presented in which the surgeon can visualize both the traditional patient information, such as the CT image set, and a 3D model of the patient's anatomy built from it. Two different visualization modalities are available, in real time and dynamically. According to the surgeon's needs, it is possible to obtain the automatic reslicing of the orthogonal planes in order to have an accurate visualization of the 3D model and of the slices exactly next to the actual position of the surgical instrument tip. In addition, it is possible to activate the clipping modality, which allows cutting the 3D model at a chosen visualization plane. The system can be used as support for the diagnosis, for the surgical preoperative planning and also for image-guided surgery.

Keywords— Medical images, image-guided surgery, visualization modalities.

I. INTRODUCTION
One trend in surgery is the transition from open procedures to minimally invasive laparoscopic interventions, where visual feedback to the surgeon is available only through the laparoscope camera and direct palpation of organs is not possible. Minimally Invasive Surgery (MIS), such as laparoscopy or endoscopy, has become very important, and research in this field is ever more widely accepted because these techniques provide surgeons with less invasive means of reaching the patient's internal anatomy and allow entire procedures to be performed with only minimal trauma to the patient. The diseased area is reached by means of small incisions in the body, called ports; specific instruments and a camera are inserted through these ports, and during the operation a monitor shows what is happening inside the body. The surgeon does not have direct vision and is thus guided by the camera images; this is very different from what
happens in open surgery, where there is full visual and touch access to the organ. As a promising technique, the practice of MIS is becoming more and more widespread and is being adopted as an alternative to classical procedures. Shorter hospitalizations, faster return of bowel function, fewer wound-related complications and a more rapid return to normal activities have contributed to the acceptance of these surgical procedures by surgeons. The advantages of this surgical method are evident for the patients but, despite the improvement in outcomes, these techniques have their limitations and come at a cost to the surgeons. In particular, the imagery is in 2D and the surgeon can estimate the depth of anatomical structures only by moving the camera. In laparoscopic surgery, the lack of depth perception and the difficulty in estimating the distance to specific anatomical structures can impose limits on delicate dissection or suturing. As modern medical imaging provides accurate knowledge of the patient's anatomy and pathologies, medical image processing could lead to an improvement in patient care by guiding the gestures of the surgeon. Even though the interpretation of computed tomography (CT) or magnetic resonance (MRI) images remains a difficult task, computerized medical image processing allows detecting and identifying the anatomical and pathological structures and building 3D models of the patient's organs that can be used to guide surgical procedures. In addition, these models can be the basis for building a realistic virtual environment used in Virtual Reality and Augmented Reality applications. Given that a great deal of the difficulties involved in MIS are related to perceptual disadvantages, many research groups are now focusing on the development of surgical assistance systems, motivated by the benefits MIS can bring to patients [1]. Several research teams have dealt with the task of segmentation and have developed techniques that allow the automatic or interactive extraction of the patient's organ models from CT scans or MRI [2], [3], [4]. Other research groups have developed solutions to support the
preoperative surgical planning and the intra-operative surgical guidance [5], [6], [7]. The aim of this paper is to present an advanced visualization system, based on the 3D modelling of the patient's internal anatomy, which can be used as support for a more accurate diagnosis, in the surgical preoperative planning and also for image-guided surgery. Using the developed system, the surgeon can visualize both the traditional patient information, such as the CT image set, and a 3D model of the patient's anatomy built from it; in addition, the location of the medical instrument can be detected in order to allow the visualization of the corresponding CT slice. The axial, coronal or sagittal planes, or a combination of these, can be presented to the surgeon in order to provide further details of the patient's anatomy exactly where he is proceeding with the surgical instruments. A dynamic visualization of these planes, added to a partial visualization of the 3D model, is also provided.
II. THE USED TECHNOLOGY
In order to obtain the most realistic environment possible, and therefore provide information on the visualization and location of the organs, 3D models of the patient's anatomy were built from medical images (MRI or CT), and an efficient 3D reconstruction of the anatomy was carried out in order to improve the standard slice view. The grey levels in the medical images are replaced by colors associated with the different organs. Currently there are different software packages used in medicine for the visualization and analysis of scientific images and for the building of 3D models of human organs; among these tools, an important role is played by Mimics [8], 3D Slicer [9], ParaView [10] and OsiriX [11]. In our application we have used 3D Slicer [9], a multi-platform open-source software package for visualization and image analysis. The platform provides functionalities for segmentation, registration and three-dimensional visualization of multi-modal image data. Among the different tracking systems based on mechanical, optical or visual technologies, we have chosen an optical tracker (the Polaris Vicra of NDI Inc.) in order to avoid the problems typical of mechanical systems associated with the use of metal devices. The Polaris Vicra [12] tracks both active and passive markers and provides precise, real-time spatial measurements of the location and orientation of an object or tool within a defined coordinate system. The system consists of 2 IR cameras and uses a position sensor to detect infrared-emitting or retro-reflective markers
affixed to a tool or object; based on the information received from the markers, the sensor is able to determine the position and orientation of tools within a specific measurement volume. The system can calculate the current position of the tool in space with an accuracy of 0.2 mm and one tenth of a degree. Tracking technology has already entered operating rooms for medical navigation and provides the surgeon with important help to further enhance performance during real surgical procedures. For the visualization and image processing we have used the IGSTK library. IGSTK (Image-Guided Surgery Toolkit) [13] is a set of high-level components integrated with low-level open-source software libraries and application programming interfaces. IGSTK provides several functionalities, such as the ability to read and display medical images and the possibility to interface to common tracking hardware; for this reason, it has not been necessary to incorporate an external library to use the Polaris Vicra tracker. IGSTK includes ITK (Insight Segmentation and Registration Toolkit), an open-source software system that employs leading-edge segmentation and registration algorithms, and VTK (Visualization Toolkit), an open-source software system for 3D computer graphics, image processing, and visualization. We have built the graphical interface using the FLTK (Fast Light Toolkit) library.
III. 3D MODEL AND TRACKING
The developed visualization system is based on the idea of providing the visualization of the 3D models of the organs together with the medical image dataset; the surgeon is used to making decisions by means of medical image analysis and might distrust the obtained 3D models. The software interface provides some buttons and windows; on the left side, the buttons are arranged in strict order, taking into account the temporal sequence of steps necessary to obtain the different visualization modalities, starting from the loading of the medical images. The remaining part of the interface presents four windows used for the visualization of the CT slices in the axial, coronal and sagittal planes and of the 3D model of the organs built from these images. A slider bar, one for each visualization plane, allows sliding through the different views of the medical image set. The dataset used for the presented 3D model is composed of 114 CT images related to the abdominal area of a patient, acquired with an inter-slice distance of 2 mm.
When the 3D models of the human organs are loaded in the main window, the outlines of those organs are visualized superimposed on each slice in the other three windows, so that the surgeon can evaluate the quality and the accuracy of the segmentation and classification results. In addition, it is possible to add to the 3D model the visualization of each plane shown in the windows located in the lower side of the software interface. Regarding the 3D model of the organs, the surgeon has the possibility to add or remove the visualization of some organs in order to have a better view of the area of interest; some organs (for instance the muscles) are visualized in transparency in order to permit the vision of the organs located behind them.

IV. THE DEVELOPED APPLICATION

In the developed application an interaction between the virtual organs and the patient's body is possible; the surgeon has the possibility to visualize, dynamically and in real time, the medical image corresponding to the actual position of the surgical instrument. In other words, the exact localization of the instrument's tip is detected by means of the optical tracker, and this information is used to choose and visualize the patient's medical image corresponding to the specific point of the body where the tip is located. In order to have a perfect correspondence between virtual and real organs, it is necessary to carry out a correct and accurate registration phase that provides as result the overlapping of the virtual 3D model of the organs on the real patient [14]. The applied method is based on the placement of 3 fiducial points on the patient's body before the CT scanning and their subsequent detection in the 3D model built from the acquired medical images. Before the surgical procedure, three reflective spheres have to be placed on the patient's body at the same positions as the used fiducial points. By means of the optical tracker these spheres are detected, and the developed registration algorithm is able to overlap on these the same points detected in the 3D model. In medical applications it is very important to have a correct detection and overlapping of the fiducial points, because even a very small error can have very serious consequences for the patient. The registration phase is carried out just once at the beginning of the surgical procedure; however, in order to maintain the overlapping between the virtual and the real organs also in case of movements of the patient's body, another reference tool, detected by the optical tracker, is positioned on the patient. Only in this way it is possible to have a correct registration also during the surgical procedure. As surgical instrument we use a probe equipped with some reflective spheres arranged according to a specific geometry; we assume that the sensitive part of this instrument is the tip. In order to identify the position of the probe's tip, we use a procedure known as pivoting; this method returns the coordinates of the tip in the local reference system of the probe. A virtual probe is associated to the real surgical instrument and its position is detected by means of the optical tracker.
Fig. 1 The visualization of the automatic reslicing

To allow an accurate overlapping of the virtual scene (consisting of a 3D model of the organs of the abdominal area) on the real patient, an appropriate chain of rigid transformations has been implemented, and it is necessary to calculate the relations between the involved coordinate systems (a code sketch follows the list below). The entire system consists of 4 different reference systems:

• the reference system of the optical tracker;
• the reference system of the camera;
• the reference system associated with the tool located on the camera;
• the global reference system that identifies the position of the virtual object in the real scene.
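A minimal sketch of how such a chain can be composed with homogeneous 4×4 matrices; the matrix names are illustrative assumptions, not the actual IGSTK calls.

import numpy as np

def to_homogeneous(R, d):
    """Build a 4x4 transform from a 3x3 rotation R and a translation d."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = d
    return T

def compose(*T):
    """Matrix product of 4x4 transforms; the leftmost one is applied last."""
    out = np.eye(4)
    for t in T:
        out = out @ t
    return out

# p_world = T_world_tracker @ T_tracker_tool @ T_tool_camera @ p_camera,
# where T_tracker_tool is the pose streamed by the tracker and
# T_world_tracker results from the fiducial-based registration phase.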
The automatic reslicing of the orthogonal planes, which associates the tip of the surgical instrument with the intersection point of the coronal, sagittal and axial planes, is shown in Fig. 1. During a minimally invasive surgical procedure the surgeon can thus have an accurate visualization of the 3D model and of the CT slices exactly next to the actual position of the surgical instrument. In order to have a clearer visualization of the interest
area, it is possible to activate the clipping modality, in which the 3D model is cut at a chosen visualization plane pointed at by the surgical instrument. Fig. 2 shows a visualization using the clipping modality; in this case the cuts are applied to the sagittal and coronal planes and the axial plane is not visualized. The clipping is dynamic, as is the reslicing.
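A minimal sketch of the reslicing logic described above: the tracked tip position (in mm, in the CT frame) is converted into the three slice indices on which the orthogonal views are re-centered. The spacing values and names are illustrative assumptions (e.g. the 2 mm inter-slice distance of the dataset cited earlier).

import numpy as np

def tip_to_slices(tip_mm, origin_mm, spacing_mm=(0.7, 0.7, 2.0)):
    idx = np.round((np.asarray(tip_mm) - origin_mm) / spacing_mm).astype(int)
    i, j, k = idx
    return {'sagittal': i, 'coronal': j, 'axial': k}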
Fig. 2 The visualization of the clipping modality

V. CONCLUSIONS AND FUTURE WORK

In this paper an advanced visualization system based on the 3D modelling of the patient's internal anatomy has been presented. Using the developed system, the surgeon can visualize both the traditional patient information, such as the CT image dataset, and the 3D model built from it. Two different visualization modalities are available, in real time and dynamically. According to the surgeon's needs, it is possible to obtain the automatic reslicing of the orthogonal planes in order to have an accurate visualization of the 3D model and of the slices exactly next to the actual position of the surgical instrument tip. In addition, it is possible to activate the clipping modality, which allows cutting the 3D model at a chosen visualization plane. The system can be used as support for the diagnosis, for the surgical preoperative planning and also for image-guided surgery.

As future work, the building of a complete Augmented Reality system is in progress, with the acquisition in real time of the video of the real patient and the integration of the virtual organs; this information will be dynamically overlapped on the patient's body, taking into account the surgeon's point of view and the location of the medical instrument. An accurate AR visualization modality will be developed in order to provide a realistic depth sensation of the virtual organs in the real body. The registration phase could also be readjusted and carried out in an automatic way; accuracy and usability tests will be executed on the developed system.

ACKNOWLEDGMENT

This work is part of the ARPED Project (Augmented Reality Application in Paediatric Minimally Invasive Surgery) funded by the Fondazione Cassa di Risparmio di Puglia. The aim of the ARPED Project is the design and development of an Augmented Reality system that can support the surgeon through the visualization of anatomical structures of interest during a laparoscopic surgical procedure.

REFERENCES

1. Furtado H, Gersak B (2007) Minimally Invasive Surgery and Augmented Reality. New Technology Frontiers in Minimally Invasive Therapies, pp 195-201
2. Camara O, Colliot O, Bloch I (2004) Computational Modeling of Thoracic and Abdominal Anatomy Using Spatial Relationships for Image Segmentation. Real Time Imaging 10(4):263-273
3. Kitasaka T et al (2005) Automated extraction of abdominal organs from uncontrasted 3D abdominal X-Ray CT images based on anatomical knowledge. Journal of Computer Aided Diagnosis of Medical Images 9(1):1-14
4. Soler L et al (2001) Fully automatic anatomical, pathological, and functional segmentation from CT scans for hepatic surgery. Computer Aided Surgery 6(3):131-142
5. Papademetris X et al (2006) Development of a research interface for image guided intervention: initial application to epilepsy neurosurgery. ISBI 2006, pp 490-493
6. Sielhorst T, Obst T et al (2004) An Augmented Reality Delivery Simulator for Medical Training. Workshop on Augmented Environments for Medical Imaging (MICCAI 2004), pp 11-20
7. Troccaz J et al (2006) Medical Image Computing and Computer-Aided Medical Interventions Applied to Soft Tissues: Work in Progress in Urology. Proc IEEE 94(9):1665-1677
8. Mimics Medical Imaging Software, Materialise Group at http://www.materialise.com/materialise/view/en/92458-Mimics.html
9. 3D Slicer at http://www.slicer.org
10. ParaView at http://www.paraview.org
11. OsiriX Imaging Software at www.osirix-viewer.com
12. NDI Polaris Vicra at http://www.ndigital.com
13. Cleary K, Ibanez L, Ranjan S et al (2004) IGSTK: a software toolkit for image-guided surgery applications. Conf Computer Aided Radiology and Surgery (CARS 2004), Chicago, USA
14. Sauer F (2005) Image Registration: Enabling Technology for Image Guided Surgery and Therapy. 27th Annual Conf IEEE Engineering in Medicine and Biology, Shanghai, China, September 1-4, 2005

Corresponding author:
Author: Lucio Tommaso De Paolis
Institute: Dept. of Innovation Engineering – Salento University
Street: via Monteroni
City: Lecce
Country: Italy
Email: [email protected]
The blood perfusion mapping in the human skin by photoplethysmography imaging

U. Rubins, R. Erts and V. Nikiforovs

University of Latvia, Institute of Atomic Physics and Spectroscopy, Riga, Latvia

Abstract— A CMOS camera-based imaging photoplethysmographic (PPGI) system to detect blood pulsations in tissue is described. Attention is drawn to the potential applications of PPGI in visualizing blood perfusion. Intensity variations at three wavelengths (620 nm, 520 nm and 432 nm) were detected and analyzed in each pixel of the image. To obtain a two-dimensional mapping of the dermal perfusion, custom image-processing software has been developed. High-resolution PPGI images were derived from human fingers (transmission mode) and the face (reflection mode), evaluated at the three wavelengths. The newly developed system can be used for skin blood perfusion monitoring in clinical applications.

Keywords— photoplethysmography imaging, PPGI, non-contact photoplethysmography, blood perfusion mapping, PPG mapping.
I. INTRODUCTION

Photoplethysmography (PPG) is a non-invasive optical technique for detecting blood volume pulsations in tissues by back-scattered or transmitted optical radiation. PPG is nowadays widely used because of its simple design and relatively low cost. A conventional PPG device, consisting of a light source and a photodetector, is able to detect blood pulsations from human tissue at a single spot of the skin. PPG pulsations can also be registered in a non-contact way, using ambient light and a video camcorder [1-6]. As a result, PPG can be evaluated in each pixel of the registered image. The amplitude of this signal reflects the spatial distribution of blood perfusion in the skin surface and can be represented as a two-dimensional photoplethysmography imaging (PPGI) map. PPGI allows monitoring with a larger field of view, so as to improve the ability to probe biologic interactions dynamically and to study disease over time. Combining this technique with original image processing algorithms can improve the quality and performance of the evaluation of PPGI maps. In this paper, a non-contact PPGI system with original image processing software is presented that is capable of monitoring blood perfusion in the human skin in high-resolution images. The aim of the study is the testing of the new experimental technique for the detection of PPGI maps at multiple wavelengths.

II. METHODS

A. The concept of PPGI

Fig. 1 illustrates the basic concept of the technique of non-contact PPGI. The camcorder takes video of any part of the human body and stores it to a computer disk. After that, special imaging software splits the video content into separate frames and calculates the light intensity variations in a selected region of interest (RoI) (Fig. 2). The next part of the processing is visualizing these intensity variations as a PPG signal. Such a measurement scheme looks promising for fast detection and monitoring of PPG signal waveform changes, as the blood flows from the heart to every location of the body [6].
Fig. 1 The measurement technique of PPGI
Fig. 2 The concept of PPGI. The PPG signal is derived in time domain from any point of video frame
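The concept of Fig. 2 can be stated in a few lines of code. The following Python sketch (the paper's own software is in Matlab) computes the PPG signal as the frame-by-frame mean intensity of a selected RoI in one color channel; names are illustrative.

import numpy as np

def ppg_from_roi(frames, roi, channel=0):
    # frames: (N, H, W, 3) array of video frames; roi: (row_slice, col_slice)
    r, c = roi
    sig = frames[:, r, c, channel].mean(axis=(1, 2)).astype(float)
    return sig - sig.mean()        # remove the steady (non-pulsatile) level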
B. The measurements and video processing

A Sony HDR-SR1 AVCHD high-definition (HD) Handycam® camcorder was used in the experiments. As a light source, a 60 W light bulb was used. Videos were taken of human fingers in light transmission mode and of the face in light reflection mode, with a picture resolution of 1440×1080 pixels at 50 frames per second (fps) in interlaced mode. To minimize the influence of automatic settings, the Super SteadyShot (electronic image stabilization) system was switched off, and white balance and exposure were set to manual mode. Each measurement was performed for 10 seconds while the subject was not moving. Both the hands and the face were immersed in hot water for 10 minutes before the experiments.

Video processing: The video content was exported from the camcorder to a computer. After that, the AVCHD format video was converted to the more convenient AVI format, and the video resolution was downsampled to 640×360 pixels at 25 fps in progressive mode. A custom-developed Matlab® computer program was used for the video processing (Fig. 3). It consists of the following main parts (a code sketch follows the list):
The conversion of AVI format video to individual frames and loading into HxWxCxF matrix (where H is frame height, W is frame width, C – color in RGB space, N – number of frames)
•
The selection of image area by choosing RoI in the video frame image and selection of RGB channel (R – red, G – green, B – blue)
•
Evaluation of the regions of frames with too large intensity variations affected by motion. This procedure helps to avoid regions where skin surface moves
•
Evaluation of maximal intensity variation for each pixel of frame. These values assumed as amplitudes of PPG signal in each pixel and stored in 2D PPGI matrix.
•
Normalizing of PPGI matrix that the minimal value must be 0 and maximal value must be 255
•
Graphical representation of PPGI map
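The following is a minimal MATLAB sketch of the listed pipeline; it is not the authors' program, and the file name, RoI coordinates and motion-rejection threshold are hypothetical assumptions.

% Minimal sketch of the PPGI pipeline (hypothetical file name and RoI)
v = VideoReader('fingers.avi');          % down-sampled 640x360, 25 fps video
F = floor(v.Duration * v.FrameRate);     % number of frames
frames = zeros(v.Height, v.Width, 3, F, 'uint8');
for k = 1:F
    frames(:,:,:,k) = readFrame(v);      % H x W x C x F matrix
end
roi = double(squeeze(frames(100:160, 200:390, 1, :)));  % red-channel RoI (assumed)

% PPG amplitude per pixel: maximal intensity variation over time
ppgi = max(roi, [], 3) - min(roi, [], 3);
ppgi(ppgi > 5 * median(ppgi(:))) = 0;    % drop motion-affected pixels (assumed rule)

% normalize to [0, 255] and display the PPGI map
ppgi = 255 * (ppgi - min(ppgi(:))) / (max(ppgi(:)) - min(ppgi(:)));
imagesc(ppgi), colormap(gray), axis image

% RoI-averaged PPG signal in the time domain
ppg = squeeze(mean(mean(roi, 1), 2));
plot((0:F-1) / v.FrameRate, ppg - mean(ppg)), xlabel('Time (s)')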
The PPGI map represents the 2D distribution of the amplitude of blood pulsations in the skin, i.e. skin blood perfusion. Pixels affected by motion are excluded from the map (dark areas, Fig. 3a,b).

III. RESULTS
Fig. 4a shows the image of the left-arm fingers in penetrating light. Because red light penetrates the tissue to a depth of several centimeters, the red (620 nm) channel was selected from RGB space. Fig. 4b shows the PPGI map evaluated from the video frames. Fig. 5 shows the signal of the finger video evaluated from the averaged pixel values of the selected RoI. Both the arterial pulsation and the slowly changing respiration rhythm can be seen clearly in the time domain. In the frequency domain, the exact frequency of the heartbeat (about 1.1 Hz) with its higher-order harmonics, as well as the low frequency of the respiration rhythm, can also be determined.
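As a sketch of the frequency-domain step, assuming the RoI-averaged ppg vector and 25 fps rate from the previous listing:

% locate the heartbeat peak in the spectrum of the RoI-averaged PPG signal
fs = 25;                                  % frames per second
x = ppg - mean(ppg);
spec = abs(fft(x));
f = (0:numel(x)-1) * fs / numel(x);
half = 2:floor(numel(x)/2);               % skip DC, keep below Nyquist
[~, i] = max(spec(half));
heartRate = f(half(i));                   % expected near 1.1 Hz (cf. Fig. 5)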
Fig. 3 First frame of finger in penetrating red light (a) and its PPGI map (b)

Fig. 4 Video frame of fingers in penetrating red light (a) and evaluated PPGI map (b)

Fig. 5 PPG signal of the finger video evaluated from the averaged pixel values of the selected RoI: time domain (upper plot) and frequency domain (lower plot)
Fig. 6a-c shows the image of a human face in reflected light in the three colors of RGB space: red (620 nm), green (520 nm) and blue (432 nm). The PPGI maps (Fig. 7a-c) show that the blood perfusion estimates depend on the wavelength of the light, because optical radiation of different wavelengths penetrates to, and reaches the vascular bed at, different depths in the skin layers. Red light reaches deeper blood vessels, in contrast to blue light, which penetrates less than 1 mm deep. Therefore the amount of blood probed by blue light is much smaller, and the corresponding PPGI map is much more affected by noise (Fig. 7c). In both transmission and reflection modes, the PPGI maps are not affected by the non-pulsatile component of skin surface reflection or tissue absorption and show only the pulsatile component of the blood.
IV. CONCLUSIONS
We performed measurements of light intensity variations on the human skin surface and visualized skin blood perfusion in high-resolution PPG images using a camcorder. The technique showed sufficient sensitivity across the visible light spectrum, and it is non-invasive and easy to use; still, it has both advantages and disadvantages. Advantages: a consumer-level camcorder can be used for acquiring the PPGI maps, and an ordinary electrical light bulb can serve as the light source. Disadvantages: a high-power light source is needed for good-quality PPGI, the light bulb introduces some noise, and the volunteer has to remain still, since even the slightest movements generate artifacts. This feasibility study shows the potential of two-dimensional mapping of the PPG signal; however, further studies are required.
ACKNOWLEDGMENT

Financial support from the European Social Fund, project number 2009/0211/1DP/1.1.1.2.0/09/APIA/VIAA/077, is highly appreciated.
Fig. 6 Video frame of the human face in reflected light in the red (a), green (b) and blue (c) channels

Fig. 7 PPGI maps of the human face evaluated from video shot in reflected light in the red (a), green (b) and blue (c) channels

REFERENCES

1. Wu T, Blazek V, Schmitt H J (2000) Photoplethysmography imaging: a new noninvasive and noncontact method for mapping of the dermal perfusion changes. Proc. SPIE 4163: 62-70
2. Wu T (2003) PPGI: New Development in Noninvasive and Contactless Diagnosis of Dermal Perfusion Using Near Infrared Light. J. GCPD e.V. 7(1): 17-24
3. Humphreys K, Markham C, Ward T E (2005) A CMOS camera-based system for clinical photoplethysmographic applications. Proc. SPIE 5823: 88-95
4. Zheng J, Hu S (2007) The preliminary investigation of imaging photoplethysmographic system. Journal of Physics: Conf. Series 85: 012031
5. Verkruysse W, Svaasand L O, Nelson J S (2008) Remote plethysmographic imaging using ambient light. Opt. Express 16(26): 21434-21445
6. Erts R, Rubins U, Spigulis J (2009) Monitoring of blood pulsation using non-contact technique. WC 2009, IFMBE Proceedings 25/VII: 754-756

Author: Uldis Rubins
Institute: Institute of Atomic Physics and Spectroscopy
Street: Raina Bulv. 19
City: Riga
Country: Latvia
Email: [email protected]
Fingerprint Matching with Self Organizing Maps

A.N. Ouzounoglou1, T.L. Economopoulos1, P.A. Asvestas2 and G.K. Matsopoulos1

1 National Technical University of Athens, School of Electrical and Computer Engineering, 9 Iroon Polytechniou str, 157 80, Zografos, Greece
2 Department of Medical Instruments Technology, School of Technological Applications, Technological Educational Institute of Athens, Ag. Spyridonos str, 122 10, Egaleo, Greece

Abstract — In this paper, an automatic scheme for the identification of fingerprint images is presented. The scheme consists of two main processes: the extraction of distinctive points only from the template fingerprint image, and the detection of their corresponding points (if they exist) on the input fingerprint image using an implementation of Self Organizing Maps. The correspondence quality is evaluated using a proper metric, which determines the matching between the two images. The proposed scheme was tested on fingerprint image pairs subject to known and unknown transformations using the VeriFinger_Sample_Data_Base of NeuroTechnology. The overall performance for fingerprints originating from the same and from different fingers was 94.12% in terms of the Equal Error Rate.
Keywords— Fingerprint image correspondence, distinctive points extraction, self organizing maps, matching score

I. INTRODUCTION

A fingerprint, namely the reproduction of a fingertip epidermis produced when a finger is pressed against a smooth surface, is a human characteristic that has been systematically used for identification purposes. Identification involves the comparison, also known as fingerprint matching, of an input fingerprint image to a template image stored in a database, and either the calculation of a matching score or the extraction of a binary decision (mated/non-mated) (Fig. 1). Usually, a fingerprint identification algorithm does not operate directly on grayscale fingerprint images; it requires the derivation of an intermediate fingerprint representation by extracting distinct features of the fingerprint images, such as minutiae (ridge bifurcations or endings). A fingerprint recognition system based on minutiae matching was introduced in [1]. Chan et al. developed a fast verification method based on the matching of minutiae located in a region centered at a reference (core) point of the fingerprint [2]. Gu et al. proposed a fingerprint matching technique that combines both a model-based orientation field and minutiae [3]. Chen et al. addressed a fingerprint matching and verification method based on a normalized fuzzy similarity measure [4]. In the proposed study, fingerprint identification is performed in two stages. Initially, distinctive points that possess the highest amount of information compared to their immediate neighbours are extracted from the template image. Then, corresponding points in the input image are determined by means of the proposed Self Organizing Maps (SOMs) algorithm. For each pair of the obtained corresponding points, a correspondence score is computed. The average value of these correspondence scores is finally used as the matching score for the two fingerprint images under comparison.
Fig. 1 Typical diagram of the fingerprint enrollment and identification processes. During enrollment a fingerprint image serves as the template image and is stored in the database. During identification, an input image is compared with the template images of the database.
II. METHODOLOGY
In this paper, an automatic fingerprint identification scheme is presented. Without loss of generality, we hereafter denote the image of the fingerprint acquired during enrollment as the template (I_T) and the representation of the fingerprint to be matched, acquired during identification, as the input image (I_inp). The scheme then applies the following two processes:

• Distinctive point extraction, performed only on the template image I_T
• Automatic distinctive point correspondence

A. Distinctive Point Extraction

Distinctive points possess the highest amount of information when compared to their immediate neighbors. The extraction of these points was performed by the algorithm
described in [5]. During this process, 200 distinct points are initially extracted (Fig. 2).
Fig. 2 (a) Template image. (b) Extraction of the distinctive points

B. Distinctive Point Correspondence Based on the SOMs Algorithm

The automatic method applied for establishing distinct point correspondences between the template image and the input image is based on the theory of SOMs. The SOMs is a neural network training algorithm which uses a competitive learning technique to train itself in an unsupervised manner [6]. The proposed implementation of the SOMs algorithm in the context of point correspondence evolves as follows. Let P_A(I) = {I(x, y), (x, y) ∈ A ⊂ Z²} be the restriction of an image I to the region A, and let

T_w(x, y) = (r·cosθ·x − r·sinθ·y + dx, r·sinθ·x + r·cosθ·y + dy)

be the similarity transformation with parameters w = (r, dx, dy, θ). Each parameter is bounded according to the inequalities |dx| ≤ dx_max, |dy| ≤ dy_max, |θ| ≤ θ_max, |r| ≤ r_max, where dx_max, dy_max, r_max > 0 and 0 < θ_max < π.

Let P_i = (x_i, y_i), i = 1, 2, ..., N, be the distinctive points extracted from the template image; each point can then be considered as a neuron with an associated weight vector that holds the parameters of a local similarity transformation. Furthermore, let A_i = [x_i − R, x_i + R] × [y_i − R, y_i + R] be a square area centered at the position of each point, and let MoM(I1, I2) denote a measure of match between the two images in comparison. The corresponding points on the input image are obtained by performing the following steps:

Step 1 (Initialization). For each neuron, the components of the weight vector are initialized to the default values w_i(0) = (1, 0, 0, 0) and the quantities MoM_i(0) = MoM(P_{A_i}(I_T), P_{T_{w_i(0)}(A_i)}(I_inp)) are calculated; the variable MoM_best is set to a very large (in magnitude) negative value, and the iteration variable n is set to 1.

Step 2 (Training). While n is less than n_max:

• An input vector, s(n), is generated pseudo-randomly.
• For every neuron, the quantity MoM_i(n) = MoM(P_{A_i}(I_T), P_{T_{s(n)}(A_i)}(I_inp)) is calculated.
• The winning neuron, k_n, in the current iteration is defined as k_n = argmax_i MoM_i(n), under the condition MoM_{k_n}(n) > MoM_ave(n−1).
• The weights of the neurons are updated according to the following equation:

w_i(n) = w_i(n−1) + h(k_n, n, i)·[s(n) − w_i(n−1)]    (1)

where h(k_n, n, i), i = 1, 2, ..., N, is given by

h(k_n, n, i) = L^n if ||P_{k_n} − P_i|| ≤ d_0·a^n, and 0 otherwise    (2)

with L, a, d_0 parameters to be defined later and ||·|| denoting the Euclidean norm.
• If the average value of the MoM_i(n−1), MoM_ave(n−1) = (1/N) Σ_{i=1}^{N} MoM_i(n−1), is better than MoM_best, then MoM_best is set to MoM_ave(n−1) and the current weights are stored as w_i*, i = 1, 2, ..., N.
• The iteration variable is increased by one.

In order to cope with the differences in contrast and/or brightness between the template and the input image, the selected measure of match was the gradient correlation:

MoM(I1, I2) = ( E[(∂x I1 − μ_{∂x I1})·(∂x I2 − μ_{∂x I2})] / (σ_{∂x I1}·σ_{∂x I2}) )² + ( E[(∂y I1 − μ_{∂y I1})·(∂y I2 − μ_{∂y I2})] / (σ_{∂y I1}·σ_{∂y I2}) )²    (3)

where E is the expected-value operator, μ_{∂x(y) I1} and μ_{∂x(y) I2} are the mean values, and σ_{∂x(y) I1} and σ_{∂x(y) I2} are the standard deviations of the partial derivatives of the images I1 and I2 (∂x(y) I1 and ∂x(y) I2) with respect to x (y).

The input vector s(n) is generated pseudo-randomly according to s(n) = w_{k_n} + υ, where υ = (υ_1, υ_2, υ_3, υ_4) is a 4-dimensional normally distributed random variable with mean vector (0, 0, 0, 0) and covariance matrix diag(σ_1²(n), σ_2²(n), σ_3²(n), σ_4²(n)). The standard deviation σ_j(n) of the random variable υ_j varies with the iteration variable as σ_j(n) = (U_j − L_j)·e^(−pn), where U_j (L_j) denotes the maximum (minimum) allowed value for the j-th component of the input vector and p determines the rate of exponential change of σ_j(n). The generated random signals in general lie in the range [w_{k_n,j} − (U_j − L_j), w_{k_n,j} + (U_j − L_j)]. When a generated input vector is not in the allowed range [L_j, U_j], it is discarded and a new input vector is produced until s_j(n) ∈ [L_j, U_j].

The parameter d_0 provides the initial radius of a circular region around the winning neuron; only neurons inside this region are updated. Usually, d_0 is set to the maximum distance between neurons. As can be seen from (2), this distance is reduced at a geometric rate determined by the parameter a (0 < a ≤ 1). The parameter L acts like a gain constant for the magnitude of the update (0.99 ≤ L ≤ 1); this parameter also decreases geometrically as the iteration variable evolves. The best average value of all MoMs provides a correspondence score between the two images. Experiments have shown that 200 points are sufficient for fingerprint images in order to obtain accurate correspondence results. The proposed automatic fingerprint identification scheme was applied to all image pairs using the same values of the various parameters, listed in Table 1; the values were chosen after experimentation.

Table 1 Implementation parameters of the proposed automatic fingerprint identification scheme

Parameter description                             | Symbol | Value
Learning rate                                     | L      | 0.995
Rate of change of d_0                             | a      | 0.9
Rate of change of input vector range              | p      | 0.01
Half size of square region of each neuron         | R      | 10
Number of iterations                              | n_max  | 10,000
Maximum value of scaling                          | r_max  | 1.15
Maximum value of horizontal displacement (pixels) | dx_max | 120
Maximum value of vertical displacement (pixels)   | dy_max | 120
Maximum value of angle of rotation (degrees)      | θ_max  | 30
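To make the training loop of Step 2 concrete, the following is a minimal MATLAB sketch of the update of equations (1) and (2) under the Table 1 parameter values. It is not the authors' implementation: the measure of match is a random stand-in for the gradient correlation of equation (3) (which requires the actual image patches), the parameter bounds are assumptions consistent with Table 1, and the iteration count is shortened.

% Minimal sketch of the SOM point-correspondence update, Eqs. (1)-(2)
N = 200;                                   % neurons = distinctive points
P = 480 * rand(N, 2);                      % hypothetical point coordinates
w = repmat([1 0 0 0], N, 1);               % weights (r, dx, dy, theta), Step 1
L = 0.995; a = 0.9; p = 0.01; nmax = 1000; % Table 1 values (nmax shortened)
D = sqrt(sum((permute(P,[1 3 2]) - permute(P,[3 1 2])).^2, 3));
d0 = max(D(:));                            % max inter-neuron distance
Lo = [0.85 -120 -120 -deg2rad(30)];        % assumed lower bounds L_j
Up = [1.15  120  120  deg2rad(30)];        % assumed upper bounds U_j
MoMfun = @(s, i) rand();                   % stand-in for Eq. (3)
MoMprev = zeros(N, 1); k = 1;              % previous scores, last winner
for n = 1:nmax
    sig = (Up - Lo) * exp(-p*n);           % sigma_j(n), shrinking spread
    s = w(k,:) + sig .* randn(1, 4);       % input vector around winner
    while any(s < Lo | s > Up)             % redraw until within bounds
        s = w(k,:) + sig .* randn(1, 4);
    end
    MoMn = arrayfun(@(i) MoMfun(s, i), (1:N)');
    [best, kn] = max(MoMn);                % winning neuron k_n
    if best > mean(MoMprev)                % condition MoM_kn > MoM_ave
        near = sqrt(sum((P - P(kn,:)).^2, 2)) <= d0 * a^n;
        w(near,:) = w(near,:) + L^n * (s - w(near,:));   % Eq. (1)
        k = kn;
    end
    MoMprev = MoMn;
end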
III. RESULTS
In order to assess the performance of the proposed automatic fingerprint identification scheme, the VeriFinger_Sample_DB database of fingerprint images was used [7]. The database contains 408 images in total, from nine different persons, with six fingers per person and eight different captures of each finger. The size of each fingerprint image is 504×480 pixels.

Initially, eleven images from the database were selected as template images and transformed under a rigid transformation (5 pixels of vertical and horizontal displacement and rotation by 5°) in order to assess the accuracy of the obtained correspondences. In Fig. 3 (a) and (b), a typical result of the application of the scheme is shown for a pair of images subject to the aforementioned transformation. It can be seen that correct correspondence of the extracted points has been obtained by the SOMs correspondence algorithm.
Fig. 3 Performance of the proposed SOMs algorithm in defining correspondences. (a) Initial point positions and (b) obtained correspondence for a pair of fingerprint images subject to a known transformation (red dots indicate the actual corresponding points, while yellow dots indicate the points detected by the SOMs algorithm). (c) Template image subject to an unknown transformation along with the extracted points. (d) Input image along with the corresponding points obtained by the SOMs algorithm.
In Table 2, quantitative results on the performance of the proposed scheme are presented for data subject to known transformations, in terms of the Root Mean Square Error, RMSE = [ (1/N) Σ_{i=1}^{N} ||Q̂_i − Q_i||² ]^(1/2), where Q̂_i are the points detected by the SOMs algorithm and Q_i are the actual corresponding points, i = 1, 2, ..., N. As can be seen from these results, subpixel accuracy is obtained by the proposed correspondence algorithm. Furthermore, the proposed identification scheme was applied on a dataset of 357 image pairs of the same fingers and 384 image pairs of different fingers, as selected from the database [7]. Each pair consists of a template and an input image with an unknown transformation relating the two images. An example of the results obtained by applying the proposed algorithm on a pair from this dataset is shown
in Fig. 3 (c) and (d). In this case, only points with a MoM measurement over a threshold of 1.1 are displayed.

Table 2 Quantitative results obtained by the proposed automatic fingerprint identification scheme in terms of the RMSE (in pixels)

Fingerprint pair | RMSE
1          | 0.930
2          | 0.854
3          | 0.943
4          | 0.990
5          | 1.096
6          | 0.927
7          | 0.962
8          | 0.922
9          | 0.902
10         | 1.071
11         | 0.896
Mean ± std | 0.954 ± 0.073
The performance of the proposed scheme on this dataset was computed using a Matching Score, which measures the similarity between the stored template image and the input image. This similarity measurement corresponds to the best average value of all MoMs from all neurons. If the value of the Matching Score approaches 1 (if normalized in the range [0,1]), it is more likely that both fingerprints originate from the same finger. On the other hand, if the Matching Score is near 0, it is more probable that the examined fingerprints are from different fingers.
The decision of the system is determined by a threshold T. If the Matching Score is over the threshold, the fingerprints are regarded as a matching pair, whereas if it is below the threshold, they are regarded as a non-matching pair. The performance of a biometric system is assessed by means of the False Acceptance Rate (FAR), False Rejection Rate (FRR) and Equal Error Rate (EER) for various values of the threshold [8]. The FAR and FRR were calculated by thresholding the correspondence criterion obtained by the SOM-based algorithm. The corresponding curves are shown in Fig. 4; the intersection point of the two curves corresponds to the EER. The value of the EER is 0.0588 and is obtained for T* = 0.3918. Finally, the confusion matrix of the obtained results is shown in Table 3. As can be seen, the system classified fingerprints originating from the same finger with an accuracy of 94.11% (sensitivity) and those originating from different fingers with an accuracy of 94.01% (specificity).

Fig. 4 The FAR/FRR curves for various threshold values of the MoM. The intersection point corresponds to the EER

Table 3 Confusion matrix

                      Actual value
Prediction outcome    Positive    Negative    Total
Positive              336         23          359
Negative              21          361         382
Total                 357         384
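As a sketch of the FAR/FRR/EER evaluation described above: the score vectors below are hypothetical stand-ins for the matching scores of the 357 genuine (same-finger) and 384 impostor (different-finger) pairs, normalized to [0, 1].

% Sketch of the FAR/FRR/EER computation from matching scores
genuine  = 0.45 + 0.4 * rand(357, 1);          % same-finger scores (toy)
impostor = 0.40 * rand(384, 1);                % different-finger scores (toy)
T = linspace(0, 1, 1000);                      % candidate thresholds
FAR = arrayfun(@(t) mean(impostor >= t), T);   % impostors accepted
FRR = arrayfun(@(t) mean(genuine  <  t), T);   % genuines rejected
[~, i] = min(abs(FAR - FRR));                  % intersection of the curves
EER = (FAR(i) + FRR(i)) / 2; Tstar = T(i);
plot(T, FAR, T, FRR); xlabel('Threshold T'); legend('FAR', 'FRR');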
IV. CONCLUSIONS
In this paper, a SOMs-based algorithm for fingerprint identification was presented. The distinctive points of the template image are used as the neurons of a neural network, and the proposed algorithm detects the set of corresponding distinctive points in the input image in an iterative way. The method is error-tolerant in the estimation of the distinctive points of the template image. The overall performance of the proposed method was 94.12%.
REFERENCES

1. Jea T-Y, Govindaraju V (2005) A minutia-based partial fingerprint recognition system. Patt. Recog. 38(10): 1672-1684
2. Chan K C, Moon Y S, Cheng P S (2004) Fast fingerprint verification using subregions of fingerprint images. IEEE Trans. Circuits and Systems for Video Technology 14(1): 95-101
3. Gu J, Zhou J, Yang Ch (2006) Fingerprint recognition by combining global structure and local cues. IEEE Transactions on Image Processing 15(7): 1952-1964
4. Chen X, Tian J, Yang X (2006) A new algorithm for distorted fingerprints matching based on normalized fuzzy similarity measure. IEEE Trans. Im. Proc. 15(3): 767-776
5. Likar B, Pernus F (1999) Automatic extraction of corresponding points for the correspondence of medical images. Med Phys. 26(8): 1678-1686
6. Kohonen T (2000) Self-Organizing Maps, 3rd Edition. Springer-Verlag, Berlin, Germany
7. Website of NEUROtechnology, Biomedical and Artificial Technologies, http://www.neurotechnologija.com/download.html
8. Maltoni D, Maio D, Jain A, Prabhakar S (2009) Handbook of Fingerprint Recognition, Second Edition. Springer, New York

Author: Anastasia N. Ouzounoglou
Institute: National Technical University of Athens
Street: 9 Iroon Polytechniou str, 157 80
City: Athens
Country: Greece
Email: [email protected]
A Novel Model for Monte Carlo Simulation of Performance Parameters of the Rodent Research PET (RRPET) Camera Based on NEMA NU-4 Standards

N. Zeraatkar1,2, M.R. Ay2,3,4, A.R. Kamali-Asl1, and H. Zaidi5

1 Department of Radiation Medicine Engineering, Shahid Beheshti University, Tehran, Iran
2 Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran
3 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
4 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
5 Division of Nuclear Medicine, Geneva University Hospital, 1211 Geneva, Switzerland
Abstract— The Rodent Research PET (RRPET) is a newly designed small-animal PET integrating several novel methods to improve the performance parameters of the system. A novel model has been defined to calculate some of the performance parameters of the RRPET using Monte Carlo simulations with GATE. Simulations were done in two stages: an evaluation phase, and the calculation of the sensitivity and count rate parameters of the RRPET based on the NEMA NU 4-2008 Standards, which have recently been published for performance measurements of small-animal PET systems. The evaluation phase shows a sensitivity of 10.7% and a maximum NECR of 800 kcps @ 1.5 mCi for a mouse-like phantom. Using the NEMA NU-4 protocol, for the mouse phantom, the average total absolute system sensitivity is 2.7%, while the peak true count rate is 2,050 kcps @ 95 MBq and the peak Noise Equivalent Count Rate (NECR) is 1,520 kcps @ 82.5 MBq. The Scatter Fraction (SF) is computed as 4.7% for the mouse phantom based on NEMA NU-4. Given the demonstrated accuracy of our new model, it can be used for further calculations of other performance parameters of the RRPET.

Keywords— RRPET, small animal PET, NEMA NU 4–2008, Monte Carlo.
I. INTRODUCTION

In the last two decades, the use of small-animal models in biomedical research for studying disease has accelerated. After the first dedicated rodent PET built by Hammersmith Hospital (London, UK) in collaboration with CTI PET Systems, Inc. (Knoxville, Tennessee), dedicated small-animal PET systems have continued to play their role as a strong imaging modality for investigating cellular and molecular processes associated with disease in live animals [1, 2]. The Rodent Research PET (RRPET) is a newly designed small-animal scanner that has been commercialized as the world's first animal PET-CT (XPET) scanner. The main design goals of the RRPET were lower cost, higher sensitivity, higher image resolution, and a large axial field of view (AFOV). Applying Photomultiplier-Quadrant-Sharing (PQS) and the Slab-Sandwich-Slice (SSS) production technique for building the detector blocks more efficiently, in addition to using a high-yield pileup event recovery (HYPER) method, has fulfilled these goals [3-7]. The RRPET consists of 6 detector rings, and each ring is comprised of 30 pentagonal block detectors [8], as shown in Fig. 1. The system parameters of the RRPET and the properties of the detector blocks are summarized in Table 1.
Fig. 1 The detector cylinder and several PMTs, in addition to an enlarged pentagonal block in the bottom-left corner. The green cylinder shows a mouse-like phantom [13]
Monte Carlo techniques have become one of the most popular tools in different areas of medical physics in general, and medical imaging in particular, to overcome the difficulties of practical experiments or analytical solutions [9]. GATE (Geant4 Application for Tomographic Emission) [10] is a Monte Carlo simulation toolkit mainly developed for medical imaging, particularly PET and SPECT. GATE results are reliable owing to its use of the libraries and well-validated physics models of Geant4 [11]. Due to the rapid expansion and development of small-animal PET systems, a specific testing protocol is needed to make the comparison of different systems possible. The NEMA NU 4-2008 Standards [12], recently published, are dedicated to performance measurements of small-animal PETs.
While previous studies on the RRPET were performed using a simplified model and did not follow a specific protocol [13, 14], we used a novel, more accurate model to calculate the sensitivity and count rate performance of the RRPET based on NEMA NU-4.

Table 1 System parameters of the RRPET and its detector block properties [8, 13]

System parameters of the RRPET
Transverse field of view: 100 mm
Detector ring diameter: 165 mm
Axial field of view: 116 mm
Septa: no septa between rings
Data collection: 3D
Image planes: 95

Properties of the detector blocks
Scintillator: BGO
Crystal width (transaxial): 1.36 mm (edge), 2.32 mm
No. of crystals: 8 x 8 = 64
Averaged crystal depth: 9.4 mm

Fig. 2 The 30-sided polygon model used in our simulations
II. METHOD

A. Definition and Evaluation of the RRPET in GATE

Because of the complicated geometry of the RRPET, in particular its pentagonal blocks, the scanner cannot be described by the standard geometries available in GATE. The Monte Carlo simulations performed up to now have therefore used simplified models in GATE [13, 14], in which cubic block detectors were substituted for the pentagonal blocks. Furthermore, previous studies simulating time-dependent parameters employed a 10-sided polygon model as a replacement for the real 30-sided polygon geometry. In contrast, we used a more accurate model for the detector blocks, together with a 30-sided polygon model for all simulations, including those of time-dependent parameters. Fig. 2 depicts the 30-sided polygon model we defined. Employing these models in GATE should lead to more precise and reliable results. To evaluate the new model, we first repeated some experiments of [13], whose authors had access to the real RRPET system, following their methods but with our own model. In our simulations, the energy resolution of the BGO detector blocks, the energy window, the time window, and the dead-time of each sector were set to 25%, 340 to 750 keV, 16 ns, and 60 ns, respectively [13].
B. Sensitivity

We calculated the sensitivity in two ways: first at the center of the field of view (FOV) with an ideal 0.04 mCi point source, as done in [13], and second according to NEMA NU-4. For the second approach, following the NEMA NU-4 protocol, a spherical Na-22 source (0.3 mm in diameter) embedded in an acrylic cube (10 mm on each side) was employed. The activity should be low enough to guarantee that the count losses are less than 1% and that the random event rate is less than 5% of the true event rate; an activity of 200 kBq (A_cal) was chosen to satisfy these conditions. In the first step, the source was placed at the center of the scanner, both axially and transaxially, and 10,000 true counts were acquired; the corresponding time determines the acquisition time (T_acq) for the next steps. The source was then stepped axially toward both ends of the scanner in increments equal to the slice thickness, and in each step data were collected for the same acquisition time as in the first step. After each acquisition, the Single-Slice Rebinning (SSRB) algorithm [15] was applied to the data and the corresponding slice was represented by a 2D sinogram. In every sinogram, the pixel with the largest value in each row (angle) was located, and all pixels farther than 1 cm from this pixel were set to zero. No corrections for scatter, random counts, or decay were applied. Finally, all pixels were summed to calculate the total counts in that slice; dividing this value by T_acq gives the counting rate (R_i) for that slice in counts per second.

C. Count Rate Performance

We calculated the coincidence count rates using two methods. First, following [13], we used uniform distributions of different amounts of activity in a mouse-like phantom, a ϕ 30 mm x 70 mm cylinder. We then computed the count rate parameters based on NEMA NU-4, whose mouse phantom is a ϕ 25 mm x 70 mm cylinder. A cylindrical hole (3.2 mm diameter) is drilled parallel to the central axis at the radial
distance of 10 mm, whose central 60 mm is used for inserting activity uniformly distributed in water. F-18 was used as the radionuclide in the simulations, with a different initial activity for every acquisition. After applying SSRB and generating sinograms, all pixels outside a band 16 mm wider than the phantom are set to zero in each sinogram. Each row is then shifted so that the maximum pixel is aligned with the central pixel of the sinogram, and a sum projection is produced such that each pixel of the sum projection is the sum of the pixels in the angular projections having the same radial offset. Finally, considering a 14 mm band around the central pixel and interpolating, all counts above the line connecting the two borders of the band are assumed to be true counts, and the remaining counts below the line are considered to be the sum of random and scatter counts.
III. RESULTS
Table 2 Average system parameters of the RRPET according to NEMA NU-4

Parameter | Value
Average system sensitivity for mouse (cps/kBq) | 33
Average system sensitivity for rat (cps/kBq) | 25
Average absolute system sensitivity for mouse | 3.7%
Average absolute system sensitivity for rat | 2.7%
Average total system sensitivity (cps/kBq) | 25
Average total absolute system sensitivity | 2.7%
(True Count Rate) NECR =
2
A. Sensitivity
Total Count Rate
Using a 0.04 mCi ideal point source at the centre, we computed the sensitivity equal to 10.7% which shows a very small difference with the practical value 10.2% reported in [13]. Then, according to NEMA NU-4, the sensitivity (in counts per second per Bq), and the absolute sensitivity (by taking the branching ratio of Na-22 into account) were calculated respectively as follow: ⎛ R i − R B ,i S i = ⎜⎜ ⎝ Acal
⎞ Si ⎟⎟ , S A,i = × 100 0 . 906 ⎠
Sensitivity profile over axial position is sketched in Fig. 3. Other sensitivity parameters are reported in Table 2.
Fig. 3 Sensitivity profile over slices
NECR curve for the same mouse-like phantom is illustrated in Fig. 5 which shows that peak value of NECR is 800 kcps with a 1.5 mCi activity. In order to obtain count rate parameters according to NEMA NU-4, we simulated 33 acquisitions from initial activity of 500 MBq down to 10 kBq. Each acquisition lasted so that adequate counts were acquired. Our results showed that peak true count rate is 2,050 kcps (achieved at 95 MBq equivalent to 2.765 MBq/mL), peak NECR is 1,520 kcps (achieved at average activity 82.5 MBq equivalent to 2.401 MBq/mL), and finally, the Scatter Fraction (SF) is 4.7% for mouse phantom. System true event rate, random+scatter event rate, total event rate, and NECR are illustrated in Fig. 6 for different values of average effective activity concentrations.
Fig. 4 Count rate curves of the mouse-like uniform phantom IFMBE Proceedings Vol. 29
314
N. Zeraatkar et al.
impacts of different factors such as randoms, scatters, positron range, etc. on reconstructed images. We are currently performing mentioned possible studies that will be published in close future.
REFERENCES
Fig. 5 NECR curve of the uniform mouse-like phantom
Fig. 6 Count rate curves over average effective activity concentration
IV. CONCLUSION AND DISCUSSION In this study, we designed a new model of the RRPET for more accurate Monte Carlo simulations using GATE. The accuracy of our model was evaluated by comparing our results with the results of [13]. Then, sensitivity and count rate performance parameters were calculated according to NEMA NU-4. Due to more realistic geometry of our model, our results in the field of count rate parameters are more reliable than previous studies. In addition, although the model tolerates a smooth overestimation because of special definition of the blocks, it can be employed for calculating other performance parameters of the RRPET accurately based on NEMA NU-4, and also for assessment of the
1. Levin CS, Zaidi H (2007) Current trends in preclinical PET system design. PET Clin. 2: 125-160
2. Bloomfield PM, Rajeswaran S, Spinks TJ et al. (1995) The design and physical characteristics of a small animal positron emission tomograph. Phys Med Biol. 40: 1105-1126
3. Wong WH, Li H, Xie S et al. (2003) Design of an inexpensive high-sensitivity Rodent-Research PET camera (RRPET), Nuclear Science Symposium Conference Record. vol. 4, 2003, pp 2058-2062
4. Wong WH (1993) Positron camera detector design with cross-coupled scintillators and quadrant sharing photomultipliers. IEEE Trans Nucl Sci. 40(4): 962-966
5. Uribe J, Wong WH, Baghaei H et al. (2003) An efficient detector production method for position-sensitive scintillation detector arrays with 98% detector pack fraction. IEEE Trans Nucl Sci. 50(5): 1469-1476
6. Li H, Wong WH, Uribe J et al. (2002) A new pileup-prevention front-end electronic design for high-resolution PET and gamma cameras. IEEE Trans Nucl Sci. 49(5): 2051-2056
7. Xie S, Ramirez R, Liu Y et al. (2005) A pentagonal photomultiplier-quadrant-sharing BGO detector for a Rodent Research PET (RRPET). IEEE Trans Nucl Sci. 52(1): 210-216
8. PET Instrumentation Development Lab Group, The University of Texas, MD Anderson Cancer Center at http://www.mdanderson.org
9. Andreo P (1991) Monte Carlo techniques in medical radiation physics. Phys Med Biol. 36: 861-920
10. Jan S, Santin G, Strul D et al. (2004) GATE: a simulation toolkit for PET and SPECT. Phys Med Biol. 49: 4543-4561
11. Agostinelli S, Allison J, Amako K et al. (2003) GEANT4 – a simulation toolkit. Nucl Instrum Methods A. 506: 250-303
12. National Electrical Manufacturers Association (NEMA) (2008) Performance measurements for small animal positron emission tomographs (PETs). NEMA Standards Publication NU 4-2008. Rosslyn, VA: NEMA
13. Zhang Y, Wong WH, Baghaei H et al. (2005) Performance evaluation of the low-cost high-sensitivity Rodent Research PET (RRPET) camera using Monte Carlo simulations, Nuclear Science Symposium Conference Record. vol. 5, 2005, pp 2514-2518
14. Baghaei H, Zhang Y, Li H et al. (2007) GATE Monte Carlo simulation of a high-sensitivity and high-resolution LSO-based small animal PET camera. IEEE Trans Nucl Sci. 44(5): 1568-1573
15. Daube-Witherspoon ME, Muehllehner G (1987) Treatment of axial data in three-dimensional PET. J Nucl Med. 28(11): 1717-1724

Author: Mohammad Reza Ay
Institute: Department of Medical Physics, Tehran University of Medical Sciences, Tehran, Iran
Street: Pour Sina
City: Tehran
Country: Iran
Email: [email protected]
Is the Average Gray-Level from Ultrasound B-Mode Images Able to Estimate Temperature Variations in Ex-Vivo Tissue?

César A. Teixeira1, A.V. Alvarenga2, M.A. von Krüger3, and W.C.A. Pereira3

1 Centre for Informatics and Systems/University of Coimbra, Coimbra, Portugal
2 Laboratory of Ultrasound/National Institute of Metrology, Standardization and Industrial Quality (Inmetro), Duque de Caxias, Brazil
3 Biomedical Eng. Program/COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Abstract— This paper presents the first results of a method to estimate temperature change by monitoring the average gray-scale value of standard ultrasound (US) images. An experiment was carried out with a bovine muscle sample with four thermocouples along its depth, spaced 1 cm apart. The sample was immersed in a reservoir to which water at 50°C was added. Images were recorded with commercial ultrasound equipment during sample heating and cooling (2.5 h experiment), and temperature was acquired every 10 s. Each image was then divided into four horizontal regions of interest (ROIs), corresponding to the thermocouple positions. Average gray-scale values versus temperature change showed a clear linear pattern within the range 0.5 to 8°C for all ROIs. A linear model was proposed to fit the curves, with a maximum prediction error of approximately 0.6°C, which is close to the accepted error for hyperthermia treatment (0.5°C). The next step is to use ultrasound also as the heating source.

Keywords— Non-invasive temperature estimation, B-Mode image, average gray-level.
I. INTRODUCTION

In recent years, non-invasive temperature estimation (NITE) in biological tissues has drawn attention, especially due to the increasing development of hyperthermia methods for tumor ablation. Ultrasound has been one of the options for non-invasive temperature estimation in tissue [1-9]. More specifically, when dealing with backscattered ultrasound, four parameters have been identified as having potential for NITE: the medium attenuation coefficient [2], time-shifts (TS) [3], spectral component shifts [4], and backscattered energy [5]. Among these parameters, TS has received special attention, because it is known to be a monotonic function of temperature and is independent of the transducer characteristics. In addition, consistent theoretical and experimental results, based on linear relationships, have been obtained for temperature variations up to 10ºC [1,3,4]. More recently, a non-linear methodology, also based on TS, was proposed for NITE, presenting superior performance at both the error and the operational level [6]. A drawback of this TS methodology is that it estimates
temperature at discrete spatial points; it would be more appropriate to perform continuous-space estimates. It is envisaged that adding information from B-Mode images to the non-linear technique could result in improved continuous-space maps. Backscattered ultrasound signals are the basis of ultrasonography; therefore, changes in their characteristics are to some extent reflected in the observable B-Mode images. Temperature changes produce medium contraction/expansion, which can cause variations in the speed of sound and in the relative positions of scatterers. These phenomena alter the speckle pattern, which can be tracked and correlated with temperature. Abolhassani et al. [7] used a cross-correlation algorithm to study the behavior of speckle patterns related to temperature changes in digital sonographic images; the authors claim to have achieved average errors of 0.2°C for temperature changes between 25ºC and 45ºC. Changes in the gray-level content of B-Mode images have also been investigated for estimating temperature variations, as they are affected by modifications of the backscattered energy [8,9]. Xinying et al. [8] studied average gray values in B-mode ultrasonic images during heating of pig and bovine livers, over a range from the normal body temperature of the mammalian subjects up to 45ºC. The areas of interest (10 × 10 pixels) were filtered (Gaussian filter with a 3 × 3 kernel) to eliminate high-frequency information, and an average correlation coefficient of 0.8760 ± 0.0765 was reported. Li et al. [9] also studied average gray values in B-mode ultrasonic images of ex-vivo tissue (chicken heart, pig muscle and pig liver in vitro) and reported correlation coefficients between 0.179 (chicken heart with membrane) and 0.833 (pig liver), with temperatures ranging from 25 to 50ºC. This paper explores the feasibility of using the average gray level from B-Mode images to estimate temperature variations in ex-vivo tissue at different regions. In Section 2, the experimental setup is presented, including the materials, instrumentation and protocol for data acquisition. In Section 3, the pre-processing applied to the acquired data is explained. In Section 4, results are presented and discussed.
II. EXPERIMENTAL SETUP

A piece of fresh ex-vivo bovine muscle (dimensions ≈ 4 x 10 x 7 cm) was placed inside a container holding a standard saline solution (0.91% NaCl). In order to improve heating and minimize temperature losses, the container was placed inside a polystyrene box (dimensions ≈ 30 x 30 x 30 cm). The medium was heated by introducing hot water (≈50ºC) into this box until the bovine muscle was completely surrounded (Fig. 1). B-Mode images were acquired using a standard ultrasound scanner (GE Logiq P5, GE Healthcare, Finland) with a linear array transducer working at 7 MHz, placed on the upper surface of the sample. The images were transferred to a personal computer (PC) using a USB video acquisition board (DVD EZMaker USB Plus, Avermedia, Taiwan), which enables movie recording at 30 frames per second (fps). The medium temperature was acquired by four type-E thermocouples positioned along the sample depth and connected to a cold-junction-compensated multiplexer (Spider8, HBM, Darmstadt, Germany), which interfaces with the PC through the parallel port. Temperature was measured every second at four points spaced 1 cm apart along the axial direction of the ultrasound transducer.
Fig. 1 Experimental setup developed for B-Mode and temperature recording
The experimental procedure involved B-Mode and temperature recording for 2.5 hours. In the first 10 min, the baseline temperature (approximately 22.6ºC) was recorded, i.e. without hot water in the polystyrene box. After this initial period, hot water at approximately 50ºC was introduced in order to heat the medium.
III. METHODS

The collected images were processed using Matlab™ (R2008a). The function mmread was used for image retrieval; it returns two structures, one corresponding to the video and the other to the audio data [10]. As this work is based on image processing, only the video data were considered. Once the frames were accessible, a region of interest (ROI) could be defined; in this work, the ROI was taken as the entire image. Given that the temperature was measured at several points, a more localized analysis can be performed, so the selected ROI was divided equally into four sub-ROIs, according to the thermocouple positions. An example of the total image as well as the sub-images is presented in Fig. 2. The total ROI has approximately 240 x 190 pixels, while each sub-ROI has 60 x 190 pixels. After this division, the frames within each 10 s window were averaged and the average gray-level was computed for each sub-ROI. To verify the average-gray-level/temperature relation, the temperature values within each 10 s window were also averaged.
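A minimal sketch of this step, assuming mmread [10] returns a frames array with per-frame RGB data; the file name, frame rate and gray-level conversion are hypothetical, not the authors' code.

% Sketch of the per-sub-ROI average gray-level over 10 s windows
vid = mmread('experiment.avi');            % hypothetical recording [10]
fps = 30; win = 10 * fps;                  % 10 s averaging windows
nWin = floor(numel(vid.frames) / win);
meanGray = zeros(nWin, 4);                 % one column per sub-ROI
rows = {1:60, 61:120, 121:180, 181:240};   % four 60 x 190 horizontal bands
for wdx = 1:nWin
    acc = zeros(240, 190);
    for k = (wdx-1)*win + (1:win)          % average frames within 10 s
        rgb = double(vid.frames(k).cdata(1:240, 1:190, :));
        acc = acc + mean(rgb, 3) / win;    % gray = mean of RGB channels
    end
    for r = 1:4
        meanGray(wdx, r) = mean(mean(acc(rows{r}, :)));
    end
end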
Fig. 2 Example of one image. a) Total image and b) image divided into 4 ROIs according to the thermocouple positions
IV. RESULTS AND DISCUSSION

As previously stated, temperature changes were recorded at four different depths: at the deepest point (4 cm from the sample surface) a change of approximately 12ºC was reached, while at the closest point (1 cm from the sample surface) a change of 6ºC was observed. Analyzing the time evolution of the average gray-level over the total experiment time (2.5 h), an abrupt change in the average gray-level content is visible for all ROIs at approximately 10 min of recording (Fig. 3). As mentioned, baseline recordings were performed in the first 10 min; the hot water poured in afterwards originates this transitory effect on the average gray-level.
Fig. 3 Average gray-level variation along time for each ROI

A linear fit (red straight lines in Fig. 4) could be computed for the average gray-level variation rate at each sub-ROI, based on the data points corresponding to temperature changes between 0.5 and 8ºC, i.e. where a linear relation can be assumed. The angular coefficient of each line is presented in Table 1. For temperature variations above 8ºC the relation is no longer linear, and for the same temperature value two gray-scale values are observed, depending on whether the medium is heating or cooling (Fig. 3). Based on the computed slopes, linear models could be formulated and an error analysis performed. In this work the linear models applied were first-order polynomials:

ΔT̂ = Slope · ΔAGS + ΔT_initial

where ΔT̂ is the estimated temperature change, ΔAGS is the average gray-scale variation, and ΔT_initial is the temperature variation at time 0. The curves in Fig. 5 represent the measured temperature change (solid line) compared with that estimated by the linear model (dashed line) for the different ROIs. Analyzing the maximum absolute errors (MAE) obtained for each ROI (Table 2), one can say that for the ROIs where the temperature change exceeds 8ºC, a significant
MAE is obtained, which is associated with the "non-linear function" behavior reported above. When the temperature change does not exceed 8ºC, the MAE stays below 0.61ºC. This means that the simple linear models considered may furnish a temperature resolution close to the gold-standard error quoted for hyperthermia treatments, i.e. 0.5ºC; in fact, MAEs below 0.5ºC were observed in ROIs 3 and 4 (Table 2).
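A sketch of the linear fit and error analysis, assuming the meanGray matrix from the previous listing and a matching, hypothetical matrix temps of 10 s-averaged thermocouple readings (one column per sub-ROI):

% Sketch of the first-order model and MAE per sub-ROI in the linear range
dAGS = meanGray - repmat(meanGray(1,:), size(meanGray,1), 1);
dT   = temps    - repmat(temps(1,:),    size(temps,1),    1);
for r = 1:4
    lin = dT(:,r) >= 0.5 & dT(:,r) <= 8;          % linear range only
    c = polyfit(dAGS(lin,r), dT(lin,r), 1);       % dT = c(1)*dAGS + c(2)
    err = polyval(c, dAGS(lin,r)) - dT(lin,r);
    fprintf('ROI %d: MAE = %.2f degC\n', r, max(abs(err)));
end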
Fig. 4 Average gray-level/temperature-change relation. The red straight line represents the linear regression between 0.5 and 8ºC, i.e. where the relation is clearly linear
This non-linear behavior of the gray scale may be due to the underlying physical process: during heating, the heat flow propagates from the water towards the center of the sample, while the inverse process occurs during cooling. Thus, it is not surprising that the average gray-level takes the same value for different temperatures measured by the thermocouples (which are in the middle of the sample).

Table 1 Slopes of the lines presented in Fig. 4, describing the variation rate between the average gray-level (ΔAGS) and the temperature (ΔT)

ROI | Slope (ΔAGS/ΔT) | Correlation coefficient
1 | 0.53 | 0.998
2 | 0.75 | 0.998
3 | 0.87 | 0.999
4 | 0.78 | 0.999
Table 2 Maximum absolute error (MAE) obtained when considering a linear model. The error is computed for the full temperature range (column 2) and for temperature changes below 8ºC only (column 3)

ROI | MAE for all ΔT (ºC) | MAE for ΔT ≤ 8ºC (ºC)
1 | 0.55 | 0.55
2 | 0.61 | 0.61
3 | 2.16 | 0.47
4 | 2.44 | 0.24
V. CONCLUSIONS

This work has shown preliminary results of an investigation of the relationship between tissue heating and average gray-scale values. A linear behavior was observed for a temperature increment of approximately 8ºC at the four different depths investigated, which is a temperature range largely sufficient for HIFU and diathermia treatments. Within the linear range, a maximum absolute error of 0.61ºC was obtained with a first-order polynomial model. This error performance is close to the one required for hyperthermia/diathermia, encouraging us to proceed with more realistic and complete experiments. We are presently planning experiments where ultrasound is also the heating source, to verify whether the linear (or any predictable) behavior is still present.

Fig. 5 Measured (solid line) and estimated (dashed line) temperature changes by a linear model based on the computed slopes of the lines presented in Fig. 4

ACKNOWLEDGMENT

The authors acknowledge the Brazilian agencies CNPq and FAPERJ for their financial support. The authors also gratefully acknowledge M.Sc. Thaís Pionório Omena and Dr. Maria Julia Gregorio Calas for making the development of the experimental setup and the data acquisition possible.
REFERENCES

1. R.M. Arthur, W.L. Straube, J.W. Trobaugh and E.G. Moros, "Non-invasive estimation of hyperthermia temperatures with ultrasound", Int. J. Hyperther. 21 (2005), pp. 589-600
2. S. Ueno, M. Hashimoto, H. Fukukita, T. Yano, "Ultrasound thermometry in hyperthermia", in: Proc. IEEE Ultrasonics Symposium vol. 3, 1990, pp. 1645-1652
3. C. Simon, P. Van Baren and E.S. Ebbini, "Two-dimensional temperature estimation using diagnostic ultrasound", IEEE Trans. Ultrason. Ferroelect. Freq. Contr. 45 (1998), pp. 1088-1099
4. A.N. Amini, E.S. Ebbini and T.T. Georgiou, "Noninvasive estimation of tissue temperature via high resolution spectral analysis techniques", IEEE Trans. Biomed. Eng. 52 (2005), pp. 221-228
5. R.M. Arthur, W.L. Straube, J.D. Starman and E.G. Moros, "Noninvasive temperature estimation based on the energy of backscattered ultrasound", Med. Phys. 30 (2003), pp. 1021-1109
6. C.A. Teixeira, M.G. Ruano, A.E. Ruano and W.C.A. Pereira, "A soft-computing methodology for non-invasive time-spatial temperature estimation", IEEE Trans. Biomed. Eng. 22 (2008), pp. 572-580
7. M.D. Abolhassani, A. Norouzi, A. Takavar and H. Ghanaati, "Noninvasive temperature estimation using sonographic digital images", J. Ultrasound Med. 26 (2007), pp. 215-222
8. R. Xinying, W. Shuicai, Z. Yi, "Noninvasive monitoring for hyperthermia based on ultrasonic tissue characterization of B-Mode", in: 1st International Conference on Bioinformatics and Biomedical Engineering - ICBBE 2007, 2007, pp. 1173-1176
9. W. Li, T. Kan, X. Xiao, J. Niu, "Noninvasive temperature estimation using B-scan image for thermal therapy", in: IFMBE Proceedings (7th Asian-Pacific Conference on Medical and Biological Engineering) vol. 19, 2008, pp. 542-545
10. M. Richert, "mmread.m", Matlab Central, 2009. URL: http://www.mathworks.com/matlabcentral/fileexchange/8028-mmread

Author: César A. Teixeira
Institute: Centre for Informatics and Systems/University of Coimbra
Street: University of Coimbra, 3030-290
City: Coimbra
Country: Portugal
Email: [email protected]
CT2MCNP: An Integrated Package for Constructing Patient-Specific Voxel-Based Phantoms Dedicated for MCNP(X) Monte Carlo Code

A. Mehranian1, M.R. Ay1,2,3, and H. Zaidi4,5

1 Tehran University of Medical Sciences, Research Center for Science and Technology in Medicine, Tehran, Iran
2 Tehran University of Medical Sciences, Department of Medical Physics and Biomedical Engineering, Tehran, Iran
3 Tehran University of Medical Sciences, Research Institute for Nuclear Medicine, Tehran, Iran
4 Geneva University Hospital, Division of Nuclear Medicine, Geneva, Switzerland
5 Geneva University, Geneva Neuroscience Center, Geneva, Switzerland
Abstract— We introduce a fast and well-structured package for constructing voxel-based computational phantoms as MCNP(X) input files based on CT DICOM images. Our program, implemented under a graphical user interface, provides several basic image processing tools for manipulating images. The MCNP materials are interpreted from the CT numbers of the DICOM images. Two modes of phantom creation are provided, using individual cells and a lattice; in the former, the program uses a fast merging algorithm to reduce the number of cells, while in the latter an optimized approach is followed. This software has strong potential for application in radiological, dosimetric and therapeutic MCNP-based Monte Carlo studies.

Keywords— anthropomorphic phantoms, voxel-based models, Monte Carlo simulation, MCNP.
I. INTRODUCTION

Anthropomorphic computational phantoms have recently served an important role in improving the effectiveness of radiation treatments in radiotherapy and the correctness of image processing algorithms in diagnostic radiology [1, 2]. They can be defined by mathematical functions (stylized phantoms), by digital volume arrays (voxel-based phantoms) [3], or by a combination of both (hybrid phantoms) [4]. Stylized phantoms are approximate mathematical fits to the shape and volume of actual organs; they do not faithfully represent realistic organ anatomy as it exists in individual patients. Voxel-based phantoms, in contrast, constructed from segmented tomographic medical images such as magnetic resonance (MR) or computed tomography (CT) images, can describe the human anatomy much more realistically than stylized equation-based phantoms. With the increasing computing power of microprocessors and the expanding memory capacity of computers, voxel-based phantoms have attained such acceptance that ICRP Publication 110 was recently published, recommending the replacement of stylized phantoms by voxel-based computational phantoms [5]. Integrating the anatomical characteristics of specific patients into Monte Carlo radiation codes is now an integral part of modern radiotherapy treatment planning systems, where the patient's CT images are used for treatment planning, localization and dose assessment. A number of groups have developed programs for translating voxel-based anthropomorphic model geometry into Monte Carlo codes. We describe a new program called CT2MCNP (Computed Tomography to MCNP Monte Carlo radiation transport code) with enhanced features for creating voxel-based phantoms from DICOM CT images in the 3-dimensional Cartesian coordinate system of MCNP(X) geometry. This software, written in MATLAB (MathWorks Inc., Natick, MA, USA), reads a sequence of tomographic scans from DICOM files and prepares the geometry and material sections of an MCNP or MCNPX input file. Basically, the creation of a tomographic model involves four general steps: (a) acquire a set of medical images; (b) classify and segment the organs or tissues of interest for the application at hand (e.g., lungs, liver, skin) from the original images by assigning voxels unique identification numbers; (c) specify the tissue type (e.g., soft tissue, hard bone, air) and composition of the organs or tissues; and (d) implement the geometric data in a Monte Carlo code to calculate radiation transport and score the quantities of interest (e.g., the dose in each organ of interest) [6]. In the following sections we first introduce the MCNP code and its geometry definition scheme, and then describe the above basic steps as implemented in CT2MCNP, together with the supplementary features built into the software.
II. MCNP MONTE CARLO CODE

MCNP is a general-purpose and internationally recognized code for Monte Carlo (MC) radiation transport, developed and maintained at Los Alamos National Laboratory [7]. It uses a flexible scheme for geometry definition in which geometrical volumes, known as cells, are primarily defined by Boolean combinations of signed half-spaces
which are delimited by first-, second- and fourth-degree surfaces in a three-dimensional Cartesian coordinate system. In this geometry definition, surfaces are in turn designated by special mnemonics followed by the coefficients needed to satisfy the surface equation. As an example, a cube can be defined by the Boolean intersection of six planes (first-degree surfaces). Cells are the basic tool for the geometric construction of any problem in MCNP and, as said, comprise combinations of surfaces. Furthermore, MCNP employs a special feature called repeated structures, whose basic concept is that generic geometrical shapes can be built by the repetition of one cell or a group of cells; using this feature, even irregular shapes can be reproduced. In the repeated structures feature, each cell can be filled with a universe, which can represent a lattice or a collection of cells. Each universe is assigned an identification number (ID), so that every cell belonging to that universe is associated with this number. This feature has been used for constructing voxel-based phantoms such that each cell or, strictly speaking, each universe to which a cell belongs, represents one voxel of the human body [8]. The geometry and graphical capabilities of MCNPX do not fundamentally differ from those of the standard MCNP code and thus remain the same.
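As an illustration of the cell/surface scheme just described, the following minimal MATLAB fragment (in the spirit of CT2MCNP, which is written in MATLAB, though not the authors' code) writes the cube example above as cell- and surface-block lines; the file name is hypothetical, and a complete deck would additionally need a title card and a data block.

% Write a cube cell bounded by six planes as MCNP cell/surface fragments
fid = fopen('cube_cells.txt', 'w');       % hypothetical fragment file
% cell block: cell 1, material 1, density 1.0 g/cm3 (negative entry),
% region = Boolean intersection of six half-spaces, photon importance 1
fprintf(fid, '1 1 -1.0 -1 2 -3 4 -5 6 imp:p=1\n\n');
% surface block: six first-degree surfaces (planes) bounding a 1 cm cube
fprintf(fid, '1 px 0.5\n2 px -0.5\n3 py 0.5\n4 py -0.5\n5 pz 0.5\n6 pz -0.5\n');
fclose(fid);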
III. DICOM STANDARD
A. Windowing To improve display contrast visibility, window level (WL) and window width (WW) setting is provided for user along with Window/Level Presets that are a scrollable list of available presets that can be applied with a single click to the image display window. B. Image Cropping Image slices can be cropped by a rectangular crop region to remove unwanted portions from an image or for a set of images. Thus pixels outside the crop region are not used when generating MCNP geometry. Furthermore, it has planned to improve this feature by cropping with userdefined region of interest (ROI) to remove those portions. C. Image Reslicing Reslicing a series of DICOM image is often required to visualize the three orthogonal views: transverse, coronal, and sagital. CT2MCNP provides reslicing option by which user can observe images in these orthogonal views at different slices. D. Image Resizing
The DICOM (Digital Imaging and COmmunications in Medicine) standard is the fundamental standard in digital medical imaging and communicating [9]. As such, it provides all the necessary tools for the diagnostically accurate representation and processing of medical imaging data. In the first step of construction of a patient-specific phantom, CT2MCNP reads patient’s CT DICOM images which may consist of either a series of files each containing a single image or a single file that contains multiple image frames. When importing a series of files, it first verifies that each file or files has a Service-Object Pairs Class UID that corresponds to a CT image. It then checks several features of each file to verify that they belong to the same image study, by comparing the Series Instance UID, Study ID and the Series Number. Each file is then consistency checked by comparing various fields such as pixel rows, pixel columns, pixel spacing, etc. to make sure that the image presentation is identical for each image file.
IV. IMAGE MANIPULATION FEATURES Initially, some basic image processing tools have been predicted in our software and impeded into a user graphic interface (GUI). At this section we treat to some of the implemented tools.
The image matrix sizes may keep the default CT resolution (512x512) or it can be reduced to a 256 × 256 or 128 × 128 matrixes. Resizing images (down-sampling) may be accomplished by nearest- neighboring, bilinear and bicubic interpolation methods or pixel binning (summing the intensities together).
V. MCNP INPUT GENERATION Each pixel in medical images represents a tissue volume in a 2-D plane. The 3-D volume of the tissue is termed a voxel (volume element), and it is determined by multiplying the pixel size by the thickness of an image slice. Unlike stylized whole-body models, a tomographic model contains a huge number of tiny cubes grouped to represent each anatomical structure. In the next step of generation of a voxel-based phantom to each voxel of image dataset a unique tissue/organ ID is assigned which is then interpreted as an MCNP material. A. Material Mapping The MCNP materials are interpreted from Hounsfield Unit (HU) values or CT numbers in the DICOM dataset. The CT numbers are grouped into 6 subsets that are dosimetrically
IFMBE Proceedings Vol. 29
CT2MCNP: An Integrated Package for Constructing Patient-Specific Voxel-Based Phantoms
equivalent and then an identification number is assigned to each of them. Therefore each pixel’s CT number is mapped to a specific ID representing a material. The composition fractions and density of each material is then loaded from a material library derived from ICRU 44 [10] and imported to MCNP input file in the form of material card number followed by atomic number and atomic fractions. B. Conversion Modes The specification of phantom geometry, its composition and radiation source must all be put together in three main blocks on MCNP deck input file, namely, cell card, surface card and data card. When CT numbers in input images were mapped to ID numbers, CT2MCNP provides two schemes for defining the geometry of phantom into MCNP input file called XYZ intersection and Lattice method. •
XYZ intersection method
In this mode, individual rectilinear cells are defined from arrangements derived from the tissue ID-mapped DICOM dataset by Boolean intersection of six planes (first-degree surfaces). In this method, to reduce the total number of cells without scarifying the anatomical details and thus increasing the efficiency of the MCNP, a very fast cell merging algorithm is used to combine cells in to a larger cell. Cells are merged if they share a common boundary, are bounded by the same surfaces in the directions perpendicular to the common boundary, and contain the same material. The cell merging algorithm first searches for candidate cells with common boundaries in the X direction, then in the Y direction, and then Z direction (between slices). The cells along with their attributes (cell number, material, density and particles’ importance) are written on cell block of input file. The surfaces bounding cells along with their coefficients required to meet the equation of each surface are then written in surface card and finally material card is written in
Fig.
1 An MCNP transverse geometry plot from the anthropomorphic phantom converted by XYZ intersection module
321
data card. Figure 1 shows the operation of this conversion mode Zubal phantom [11] in which each color represent a unique material by which organs are delineated. This image has been depicted in MCNP4C code. Zubal phantom is a computational human body model derived and segmented from CT image. It consists of a 3-dimensional array of 128 × 128 × 246 (4,030,464) cubic voxels, 4 mm on each side. Multiple internal organs and structures were identified by Zubal et al. and related to an index number for each voxel. The XYZ intersection module defines this phantom into MCNP environment by 52,634 cubic voxels in a fairly short time (~15 seconds on a Pentium IV dual core PC with 2.6 GHz CPU and 2 GB RAM). •
Lattice method
The repeated structures representation is employed in this mode of conversion which eliminates the MCNP’s limit for the number of cells in the phantom model. The definition of the rectangular space lattice of voxels is the heart of the voxel phantom setup. The procedure is initiated by defining a lattice whose number of rows, columns and its third dimension is the same as those of the processed DICOM volumetric dataset. The dimensions of the phantom elementary voxel are determined based on patient dimensions, slice thickness and the amount of image size down-sampling. In the next step, each voxel (cell) is filled by a universe to which an ID or material number has been assigned. Based on the IDs on the slices of the image, the algorithm written for this mode construct an fill array comprising of a large number of IDs by which MCNP fills each voxel by the universe to which the ID belongs. When one row of the lattice was filled, the next rows are in turn filled. Fig. 2 shows a cut of an anthropomorphic Zubal phantom defined into MCNP using lattice method. The particle tracking in lattice-base geometry is slightly slower than in conventional surface-sensed geometry. To speed up the tracking and consequently decreasing MCNPs’ execution time, the filling universes were defined by a
Fig. 2 An MCNP transverse geometry plot from the anthropomorphic phantom converted by lattice module
IFMBE Proceedings Vol. 29
322
A. Mehranian, M.R. Ay, and H. Zaidi
single plane far away from phantom’s body. Traditionally a single spherical cell and a space out of it is used for defining universes which is take more memory and slows down the speed of execution. Phantoms defined by lattice mode are of importance for internal dosimetry where the distribution of dose to organs near an internal radiation source or plotting isodose curves is sought. Whereas phantoms defined by XYZ intersection mode may find applications in absorbed dose in each organ for external irradiation in a faster simulations. Fig. 3 shows CAD visualizations of the lung and heart of Zubal’s phantoms converted by XYZ intersection mode. As seen in cutaway views, the merging algorithm combines neighboring cells in an optimized way in three directions. The CAD visualization was translated from MCNP input file using MCAM software, an integrated interface program between CAD systems and Monte Carlo simulation codes (FDS Team, China [12]).
(b)
(a)
(d)
(c)
Fig. 3 CAD visualization of the performance of XYZ intersection conversion mode. (a) 3D views of lung and (b) its coronal cut-away view, (c) heart and (d) its transverse cut-away view
VI. CONCLUSIONS CT2MCNP provides a MATLAB-based graphical user interface to create voxel-based phantom from CT DICOM images into MCNP’s environment. It employs two conversion modes called XYZ intersection and Lattice methods which exploit optimized algorithms in geometry definition. The program provides some basic image processing tools. It is currently under the evaluation and development and soon will be enhanced with new features such as segmentation tools, measurement and displaying tools along with tallying capabilities.
REFERENCES 1. Williams G, Zankl M, Abmayr W, et al. (1986) The calculation of dose from external photon exposures using reference and realistic human phantoms and Monte Carlo methods. Phys Med Bio 31:449– 452 2. Peter J, Tornai M, Jaszczek R. (2000) Analytical versus voxelized phantom representation for Monte Carlo simulation in radiological imaging. IEEE Trans Med Imaging 19:556–564 3. Huh C, Bolch WE (2003) A review of US anthropometric reference data (1971–2000) with comparisons to both stylized and tomographic anatomic models. Phys Med Biol 48:3411–3429 4. Lee C, Lodwick D, Hasenauer D, et al. (2007) Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models. Phys Med Biol 52:3309–3333 5. ICRP Publication 110 (2009) Adult reference computational phantoms. 39(2):1–166 6. Zaidi H, Xu X G, (2007) Computational Anthropomorphic Models of the Human Anatomy: The Path to Realistic Monte Carlo Modeling in Radiological Sciences. Annu Rev Biomed Eng 9: 471-500 7. Briesmeister J F, (2000) MCNP–A general Monte Carlo N-particle transport code. Los Alamos National Laboratory LA-13709-M 8. Yoriyaz H, Santos A, Stabin M G, Cabezas R, (2000) Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code. Med Phys 27:1555–1562 9. Digital Imaging and Communications in Medicine (DICOM) at http://medical.nema.org. 10. ICRU Publication 44 (1989) Tissue Substitutes in Radiation Dosimetry and Measurement. 11. Zubal G, Harrell C, Smith E, et al, (1994) Computerized threedimensional segmented human anatomy. Math Phys 21:299-302 12. Wu Y, (2009) CAD-based interface programs for fusion neutron transport simulation. Fusion Eng Des 84:1987–1992 Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Mohammad Reza Ay Tehran University of Medical Sciences Pour Sina Tehran Iran [email protected]
Noise reduction in fluoroscopic image sequences for joint kinematics analysis T.Cerciello1, P. Bifulco1, M. Cesarelli1, L. Paura1, M. Romano1, G. Pasquariello1 and R. Allen2 1
Department of Biomedical, Electronic and Telecommunication Engineering, University of Naples “Federico II”, Naples, Italy 2 Institute of Sound and Vibration Research, University of Southampton, Southampton, UK
Abstract—Analysis of dynamic videofluoroscopic can provide spine kinematic data with an acceptable low X-ray dose. Estimation of the kinematics relies on accurate recognition of vertebrae positions and rotations on each radiological frame. In previous works we presented a procedure for automatic tracking of vertebra motion by smoothed gradient operators and template matching in fluoroscopic image sequences. A limitation to the accurate estimation of the kinematics by automatic tracking of vertebrae motion, independently by the specific methodology employed (e.g. manual marking, corner or edge automatic detection, etc.), is mainly due to noise: low-dose X-ray image sequences exhibit severe signal-dependent noise that should be reduced, while preserving anatomical edges and structures. Noise in low-dose X-ray images originates from various sources, however quantum noise is by far the more dominant noise in low-dose X-ray images and other sources can be neglected. Signal degraded by quantum noise is commonly modeled by a Poisson distribution, but it is possible to approximate it as additive zero-mean Gaussian noise with signal-dependent variance. In this work we propose a digital spatial filter for reducing noise in low-dose X-ray images. The proposed filter is based on averaging of only similar pixels (whose grey level is contained within ±3σ) instead of spatial averaging of all neighbouring pixels. The effectiveness of the filter performance was evaluated by fluoroscopic image sequence processing, comparing the results of the automatic vertebra tracking on filtered and unfiltered images. Keywords— Fluoroscopic image sequences, low dose X-ray noise, joint kinematics, spatial average filter. I. INTRODUCTION
The use of a fluoroscopic device can offer a continuous screening of a specific musculoskeletal system (e.g. hip, knee, cervical and lumbar spine, etc.) during the patient's spontaneous motion, with an acceptable, low X-ray dose. Very small radiation doses are applied for each image in order to minimize the overall exposure of patient, but this results in only a very small variable number of photons available for image formation at the detector site; the fluctuations of the number of photons detected can be modeled as a signal-dependent Poisson noise (quantum noise) [10-16]. The joint kinematics analysis by processing fluoroscopic images is based on the knowledge of bone position in each image of the sequence. For this purpose it is typical to define landmarks, specific points or other more complex fea-
tures belonging to the structure of interest. In order to accurately identify landmarks it is extremely important to preserve image edges that define the outline of the anatomical structures (i.e. bones). For example, in intervertebral kinematics studies, many authors [2-3,5] utilised manual identification of the anatomical landmarks throughout the sequence. This operation results in a subjective, tedious and often insufficiently accurate procedure [1]. Further approaches based on automatic template matching have been utilized: some were based on correlation as a measure of similarity of the image portion enclosing each vertebra [4,78], others tried to describe the vertebral body outline in different ways (e.g. utilizing splines or Hough generalized transform) [6]. In previous studies we proposed an automatic vertebra tracking, still based on cross-correlation template matching, but using gradient images [7-8]. A limitation to the accurate estimation of the kinematics by tracking of bone motion in low dose X-ray images, independently by the specific methodology employed (e.g. manual marking, corner or edge automatic detection, etc.), is mainly due to noise. Indeed, low-dose X-ray image sequences exhibit severe signal-dependent noise. Noise in low-dose X-ray images originates from various noise sources. X-ray beams sensed at a detector are subject to quantum fluctuations, so-called quantum noise. Further noise sources include scattered radiation and system noise. The latter integrates noise originating from the hardware, for example, thermal, shot, and quantization noise. However, in spite of other noise sources, quantum noise is by far the most dominant noise in low-dose X-ray images [10-16] and other sources can be neglected [9]. Signal degraded by quantum noise, and hence noise in low-dose X-ray images, is commonly modeled by a Poisson distribution: n
PN (n p ) =
N p −N e np!
(1)
with np being the detected number of X-ray quanta and N the noise-free photon count, that is N = E [np] = µ (np) = s/cg with uncorrupted intensity s, constant detector gain cg and offset c0 = 0 [9-16]. For a sufficiently large number of quanta contributing per pixel, the Poisson distribution can be approximated by an additive zero-mean Gaussian distribution with signal-dependent variance [14-16]. It can be
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 323–326, 2010. www.springerlink.com
324
T. Cerciello et al.
demonstrated that the equation (1) of a discrete random variable x can be well approximated by − 1 P( x; µ ,σ ) = e 2π σ 2
processed by a previous method [7-8] and in accordance with the new filter algorithm.
( x −µ ) 2 2σ 2
II.
(2)
with same mean and variance µ = σ2 = N [16]. For approx N>10 a Poisson distribution can be theoretically transformed into a sampled Gaussian distribution with numeric evaluations yielding maximum relative errors below 0.1% and maximum absolute cumulative errors below approximately 0.02 [16]. Therefore, although Poisson noise does not fit well into the general concepts of additive and multiplicative noise models, the low dose X-ray image noise can be modeled as an equivalent additive noise model with zero-mean signal-dependent noise. Each pixel grey-level can be expressed by: p = s + n(s ) (3) with s being the signal and n a noise source with a Gaussian distribution with its mean equal to its variance and both depending on signal intensity; thus, the grey level standard deviation in homogeneous areas of the image (where it is possible to assume signal to be constant) can be considered to be an estimator of local noise [15-16]. Various methods have been proposed for reducing noise in low-dose X-ray images. The simpler methods are based on a linear filter that is composed of a temporal and/or a spatial low-pass filter. Although linear low-pass filters can reduce noise, they also reduce the signal components (e.g. edge and structures), and, thus, they are not appropriate for object localization (based on edge detection, crosscorrelation, etc.). The resulting filtered images are generally blurred in the temporal and/or spatial directions, depending on the filters applied. For overcoming the limitations of linear filters, several improvements have been proposed, including a temporal filter combined with motion detection, an edge-preserving adaptive filter, a non-linear diffusion filter and a median filter or a multi-resolution filter [17-20]. The usefulness of an average filter based on the normalized difference with the relevant pixel for reducing the image noise and consequently the patient dose, in fluoroscopic imaging was demonstrated [21]. In the present work we propose a spatial average filter for noise reduction in video-fluoroscopic images that is free from the limitation of linear filters but still has an adequate capability for noise reduction and for anatomical edges and structure preserving. The filter is based on averaging of only similar pixels (whose grey level is contained within ±3σ) rather than all neighbouring pixels. The effectiveness of the filter was evaluated by computer simulation, where a fluoroscopic image sequence, that screened lumbar spine of a patient undergoing passive flexion-extension spinal motion, was
METHODS
A. Digital spatial filter In previous work we proposed an automatic vertebra tracking based on cross-correlation template matching using gradient images [7-8]. Gradient operators are particularly sensible to noise and thus it is very important to perform by image smoothing before applying them. Quantum noise is by far the dominant noise in low-dose X-ray images [1016]; images degraded by quantum noise are modeled by a Poisson distribution that, under common condition, can be approximated by an additive zero-mean Gaussian distribution with signal-dependent variance [14-16]. Therefore, the low dose X-ray image noise can be modeled as an equivalent additive noise model with zero-mean signal-dependent noise. As a consequence, the grey level standard deviation in homogeneous areas of the image (where it is possible to assume the signal to be constant) can be considered as an estimator of local noise [15-16]. Spatial low-pass filters (i.e. average filter) are commonly used for additive zero-mean Gaussian noise reduction in digital images [17]. Because they also reduce the signal components, they do not preserve anatomical edges and structures in medical images [21]. In this work we propose a spatial average filter for noise reduction in low-dose X-ray images that is able to preserve anatomical edges and structures. The noise reduction was achieved by averaging only similar pixels instead of spatial averaging of all neighbouring pixels. For each pixel, a threshold is determined in a noise-adaptive way by standard deviation analysis: the filter process estimates the standard deviation (σ) of the grey levels in the surrounding pixels and the output value of the pixel is given by the averaging of only the neighboring pixels whose grey level differs less than ±3σ from the previous value of the filtered pixel. The proposed filter works as a spatial average filter on homogeneous areas where grey levels are contained within ±3σ (assuming that the grey level variability is due to the noise). In correspondence of edges, such a filter does not take into account the external pixels whose grey level differs significantly from the edge pixel grey level which results in a high edge preservation. The size of the area containing the pixels to be averaged, and of the area for the standard deviation estimation, were chosen heuristically. The standard deviation can include not only the X-ray quantum noise, but also the other kinds of noise. The filtering was tested on a fluoroscopic image sequence of the lumbar spine, subsequently the gradient images and the Normalized Cross-Correlation (NCC) for vertebra detection were calculated [7-8].
IFMBE Proceedings Vol. 29
Noise Reduction in Fluoroscopic Image Sequences for Joint Kinematics Analysis
Fig. 1 A fluoroscopic lumbar image (left), after processed by a simple spatial averaging filter (middle) and by our filter (right).
B. Vertebra tracking procedure An opportunely smoothed gradient operator which combines isotropic noise suppression and partial derivatives was used for the estimation of the norm and the direction of the gradient images; subsequently the use of Normalized CrossCorrelation (calculated in the frequency domain for minimizing the computational time) provided an effective estimation of vertebrae position combined with a degree of independence from noise and contrast and brightness variations [4, 7-8]. The cross-correlation was recomputed around the estimated vertebrae centers, progressively rotating the template (with smaller and smaller angular increments) [78]. In addition, the cross-correlation function and the maxima cross-correlation series were interpolated by a cubic spline: this provided sub-pixel resolution and improved angle estimation accuracy [8]. Once obtained, the sequence of the vertebra displacements and rotations, the intervertebral kinematics, was also estimated. Results achieved processing real fluoroscopic sequences of the lumbar spine were compared with corresponding data obtained using a highly accurate manual selection procedure [5] and another automatic method for vertebra tracking based on the Hough transform[6]. The manual operations are limited to selection of 4 points per vertebra on a single image of the sequence for the template selection. III. RESULTS
Through computer simulations, we have quantitatively investigated the performance of filtering. The noise was evaluated in terms of grey level variance reduction, and was found to be reduced to about 1/2 the grey level variance in the almost-homogeneous areas of the images and to about 1/10 in surrounding edges. The fig. 2 shows that the grey level profile of the filtered image (bold line) was smoothed with respect to the grey level profile of the unfiltered image in correspondence to the vertebrae and the intervertebral disks (dashed line), but they are really similar in correspondence to the vertebrae edges (solid line).
325
Fig. 2 Grey level profile particular of a filtered and an unfiltered image.
Intensity gradient images were obtained by applying the smoothed gradient operator. The following picture shows the effect of such an operator (only the norm gradient image is presented) on the original image and on the filtered image; an acceptable tradeoff between smoothing and edge enhancement is noticeable. As shown in figure, the application of the digital spatial filter, before the use of the smooth gradient operator, allows the noise to be reduced in the almost-homogeneous areas of the image (e.g. inside the vertebra) and without modifying the vertebra edges with respect to the unfiltered image.
Fig. 3 The norm gradient image particular of a L2 vertebral body (left) and its correspondent obtained by using described filter (right).
Concerning the template matching procedure, it is worth emphasizing that the peaks related to the maximum cross correlation value always appeared sharper than the corresponding value peaks obtained by unfiltered images. This resulted in more accurate vertebra localization. On average, the normalized cross-correlation maxima ranged from 0.73 (worst) to 1 (best) with an overall mean of 0.86, with an increase of 12,65% in respect of the corresponding values obtained by unfiltered images. Results were compared with corresponding results obtained utilizing the same fluoroscopic sequences but using very accurate manual selection [5] and a tracking method based on the Hough transform [6]. As example, fig. 4 shows the estimated absolute angle of the L2 and L3 vertebrae for a patient, while fig. 5 shows the corresponding intervertebral angle, both plotted against time (i.e. frame number); positive angle corresponds to extension patient movement and negative to flexion. Current results are plotted as solid lines while manual-selection values are shown as a dashed line with crosses and Hough-tracking as a dashed line with circles.
IFMBE Proceedings Vol. 29
326
T. Cerciello et al.
corresponding values obtained by unfiltered images results in more accurate vertebra localization. Future studies will be devoted to improving the filtering and the smoothed gradient operators and to their testing on sets of fluoroscopic image sequences.
REFERENCES 1.
Fig. 4 Absolute L2 and L3 vertebra angles against time (frame #).
2.
3.
4.
5. 6. 7.
Fig. 5 Intervertebral angle (L2-L3 segment) against time (frame #).
The root mean square error (RMSE) was computed to quantitatively illustrate differences between our new method and the other methods. With respect to the manual selection the RMSE resulted in 1.39 degree for vertebral angles and with respect to the automatic tracking based on Hough transform the RMSE resulted in 1.13 degree.
8.
9. 10.
11.
IV. CONCLUSIONS
Analysis of dynamic videofluoroscopic images can provide spine kinematic data with an acceptable low X-ray dose. Noise in low-dose X-ray images is commonly modeled by a Poisson distribution, but it can be approximated by an additive zero-mean Gaussian distribution with signaldependent variance. As a consequence, the grey level standard deviation (σ) in homogeneous areas of the image (where it is possible to assume the signal to be constant) can be considered an estimator of local noise. In the present work we have proposed a digital spatial filter based on averaging of only similar pixels (whose grey level is contained within ±3σ), instead of spatial averaging of all neighbouring pixels. The results show that the proposed filter provides significant noise reduction, without reducing the signal components (e.g. edge and structures) necessary for bone detection. The results obtained by applying the filter on images overlap and track the results obtained using unfiltered images, with a significant increase of the similarity index (NCC). As explained in previous works [7-8], the intervertebral angle series obtained by using our method seems to be a more reasonable approximation to the real process with respect to the other methods [5-6]; moreover, the increase in the maximum cross correlation values with respect to the
12. 13.
14. 15.
16.
17. 18.
19.
20.
21.
Panjabi M, Chang D and Dvorak J (1992), An analysis of errors in kinematics parameters associated with in vivo functional radiographs. Spine 2: 200-205 Van Mameren H, Sanches H, Beurgens J et al (1992) Cervical spine motion in the sagittal plane (II) position of segmental averaged instantaneous centers of rotation: a cineradiographic study. Spine, 17: 467-474 Breen AC, Muggleton JM and Mellor FE (2006) An objective spinal motion imaging assessment (OSMIA): reliability, accuracy and exposure data. BMC Musculoskeletal Disorders 7:1 Bifulco P, Cesarelli M, Allen R et al (2001) Automatic Recognition of Vertebral Landmarks in Fluoroscopic Sequences for Analysis of Intervertebral Kinematics, J Med Biol Eng Comp, 39 (1):65-75 PhD thesis by Kondracki M (2001) Clinical applications of digitised videofluoroscopy in the lumbar spine. PhD Thesis, University of Southampton, UK Zheng Y, Nixon MS, Allen R (2004) Automated segmentation of lumbar vertebrae in digital videofluoroscopic images. IEEE Trans Med Imaging, 23(1):45-52 Bifulco P, Cesarelli M, Romano M et al (2009) Vertebrae tracking through fluoroscopic sequence: a novel approach, IFMBE Proc. vol. 25, World Cong. on Med. Phys. & Biomed. Eng., Munich, Germany, pp 619-620 Cerciello T, Bifulco P, Cesarelli M et al (2009) Automatic vertebra tracking through dynamic fluoroscopic sequence by smooth derivative template matching, IEEE Proc. Of ITAB2009, Larnaca, Cyprus Aach T, Kunz D, Florent R et al (1996) Noise Reduction and Image Enhancement Algorithms for Low-Dose X-Ray Fluoroscopy, Proc. BVM, pp. 95–100 Chan CL, Katsaggelos AK and Sahakian AV (1993) Image Sequence Filtering in Quantum-Limited Noise with Applications to Low-Dose Fluoroscopy. IEEE Trans. on Medical Imaging, 13: 610–621 Ferrari RJ and Winsor R (2005) Digital Radiographic Image Denoising via Wavelet-Based Hidden Markov Model Estimation.. Journal of Digital Imaging, 18:154–167 Kunz D, Eck K, Fillbrandt H et al (2003) A Nonlinear Multi-Resolution Gradient Adaptive Filter for Medical Images. SPIE Medical Imaging, 5032:732–742. Aufrichtig R and Wilson DL (1995), X-Ray Fluoroscopy Spatio-Temporal Filtering with Object Detection. IEEE Trans. on Medical Imaging 14: pp. 733– 746 Aach T, and Kunz D (1998) Bayesian motion estimation for tempo rally recursive noise reduction in x-ray fluoroscopy. Philips J Res, 51:213-251 Hensel M, Pralow T and Grigat RR (2006) Real-Time Denoising of Medical XRay Image Sequences: Three Entirely Different Approaches. Image Analysis and Recognition, vol. 4142 Hensel M, Pralow T and Grigat RR (2007), Modeling and Real-Time Estimation of Signal-Dependent Noise in Quantum-Limited Imaging, WSEAS Proc. of International Conference on Signal Processing, Robotics and Automation, Corfu Island, Greece Barrett HH and Swindell W (1991) Radiological imaging. New York, Academic, pp . 29–61 Jaffe CC, Orphanoudakis SC, Ablow RC (1982) The effect of a television digital noise reduction device on fluoroscopic image quality and dose rate. Radiology, 144:789–92 Saito N, Kudo K, Sasaki T et al (2008) Realization of reliable cerebral-bloodflow maps from lowdose CT perfusion images by statistical noise reduction using nonlinear diffusion filtering. Radiol Phys Technol, 1:62–74 Yamada S and Murase K (2005) Effectiveness of flexible noise control image processing for digital portal images using computed radiography. 
Br J Radiol, 78:519–27 Nishiki M, ShiraishiK, Sakaguchi T et al (2008) Method for reducing noise in Xray images by averaging pixels based on the normalized difference with the relevant pixel. Radiol Phys Technol, 1:188–195
Author: P. Bifulco Institute: University of Naples “Federico II” Street: via Claudio 21, 80125 City: Naples Country: Italy Email: [email protected]
IFMBE Proceedings Vol. 29
The Influence of Patient Miscentering on Patient Dose and Image Noise in Two Commercial ct Scanners M.A. Habibzadeh1,2, M.R. Ay2,3,4, A.R. Kamali asl1, H. Ghadiri4,5, and H. Zaidi6,7 1 Faculty of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran 3 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran 4 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran 5 Department of Medical Physics, Iran University of Medical Sciences, Tehran, Iran 6 Division of Nuclear Medicine, Geneva University Hospital, Geneva, Switzerland 7 Geneva Neuroscience Center, Geneva University, Geneva, Switzerland 2
Abstract— The clinical influences of patient miscentering on patient dose and image noise were investigated for two models of commercial CT scanners. Several phantoms were scanned on 4-slice GE Lightspeed and 64-slice GE Lightspeed VCT. Regression models of surface dose and image noise were generated as a function of phantom size and the value of miscentering. 64 scout images of patients from the first scanner and 113 from the second one were analyzed to assess the possible amount of increasing in dose and noise. For the first scanner the average amount of miscentering was 3 cm below the isocenter which leads to 25.8% increase in dose and 8.3% increase in noise. These values for the second scanner were 1.6 cm below the center, 19.8% and 6.2%, respectively. The results clearly demonstrate that patient miscentering may substantially increase dose and image noise. Therefore, technologists are strongly encouraged to pay greater attention to patient centering. Keywords— Bowtie Filter, Computed Tomography, Dose, Noise.
I. INTRODUCTION Bowtie beam shaping filter is an important element in the CT image formation chain that is used both for dose reduction and optimization of detector dynamic range. The role of the bowtie filter is to convey maximum radiation to the thickest part of the patient which attenuates the most x-rays and to reduce x-ray intensity where patient attenuation decreases [1]. The operation of the bowtie filter is based on the assumption that the object being scanned is properly centered in the scanner’s field-of-view (FOV). If the object is miscentered as it is shown in figure 1, it would be exposed to more surface dose in the region that goes toward the less attenuating part of the bowtie filter and the noise would increase in the region that moves into the more attenuating part of the bowtie filter [2]. In this study, we have investigated the effect of patient miscentering on image noise and patient dose in two different commercial CT scanners. In addition, in order to determine the role of
technologists, results of two imaging centers for each of implied scanners were compared.
II. MATHERIALS AND METHODS A. Quantifying the Bowtie Filter Effect Six cylindrical phantoms (Table 1) with various size and materials were scanned on the 64-slice GE Lightspeed VCT and the 4-slice GE Lightspeed (General Electric Healthcare Technologies, Waukesha, WI, USA) scanner (5 phantoms were used for the 64-slice GE VCT). The phantoms include four water phantoms, one polyethylene and one CTDI phantom with different size and density in order to emulate various patient sizes. Phantom centers were positioned at 0, 2, 4 and 6 cm below the center of rotation. Scout scans were also obtained from the phantoms at anterior-posterior view. The scanning parameters were 120 kVp, 4×5 mm axial slice collimation, 200 mA with 2 sec gantry rotation speed. Large body bowtie filter was chosen for each scan given that a large scanning FOV was used. Dose was measured using a standard 10 cm pencil chamber placed on the top surface of the phantoms. The Barracuda dosimetry system (RTI Electronics AB, Flöjelbergsgatan 8 C, SE-431 37 Mölndal Sweden) was used for dose measurements with its associated accessories (DCT10 chamber). Figure 2 shows the experimental setup used for dosimetry estimates. Standard deviation (SD) in selected regions of interest (ROI) was considered as an indicator of image noise. SD measurements were made for ROIs representing approximately 60% of the area of the lower half of the phantoms’ images. For each scan, SD measurements were performed on all images acquired in one rotation (four images) and were then averaged over the four axial images acquired (figure 3). Regression models of surface dose and noise of the lower half of the image were generated as a function of phantom size (the amount of sqrt PA [explained later]) and miscentering. Using these regression models, it was possible to
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 327–330, 2010. www.springerlink.com
328
M.A. Habibzadeh et al.
Fig. 1 The scanning object is positioned below the isocenter, so because of the shape of bowtie filter, higher half is exposed to more dose and the image noise of the lower half is increased [1]
predict the increase of the surface dose and lower half image noise for each patient from the scout images. Statistical analysis of the data to generate regression models was performed using SPSS software version 14. B. Object Size Estimation Since in this study, scout images of patients were used to determine the values of patient miscentering and on the other hand, the size of the patient is a parameter which affects the dose and image noise, it was needed to use a factor that could be calculated from the scout images and also represent the patient size. For this reason projection area was used. This parameter is already used in GE scanners for automatic tube current modulation [3]. Projection area is included the information of object’s density and size and is the summation of detector channel data values after corrections. This factor represents total attenuation of the object. Because the dimension of projection area is area, thus the square root of it (sqrt PA) is a parameter which was used for representing the size [2].
Fig. 3 Noise measurement through estimation of the standard deviation (SD) in each image slice of the Polyethylene phantom One of the methods for calculation of sqrt PA is to use scout images [4]. In this method sqrt PA is calculated from scout attenuation area. Figure 4 demonstrate these parameters in a scout image. The relationship between these two parameters is [5]:
SAA = W × ROI × 0.001
(1)
sqrtPA = SAA + 8.7
(2)
sqrtPA = SAA + 10.7
(3)
The anterior-posterior scout images were used to determine the square root of projection area (sqrt PA) for each phantom. First a rectangular ROI was selected on scout image, then ROI MEAN was calculated by Imagej software and at last sqrt PA was determined by use of implied equations. In the above equations, ROI is the summation of ROI MEAN and 1000 and W is the average lateral width. The relation (2) and (3) is true for 64-slice GE Lightspeed VCT and 4slice GE Lightspeed, respectively. The same process was performed on patients’ scout images to determine the sqrt PA as patient size. Table
1 Phantoms used in this study with the corresponding values of square root of projection area (sqrt PA)
Fig. 2 Experimental dosimetry system
setup for Dose measurement using the Barracuda
Phantom name
Material
Exact diameter (cm)
sqrt PA
W15 W17 W21 W23 CTDI 32 P26
Water Water Water Water PMMA Polyethylene
14.8 17 21 22.5 32 26.5
22.7 25 31.3 32.8 49.3 38.5
IFMBE Proceedings Vol. 29
The Influence of Patient Miscentering on Patient Dose and Image Noise in Two Commercial ct Scanners
Fig. 4 This figure shows the parameters which are used for scout attenuation area calculation. ROI mean is the average CT number of the pixels within the ROI and W is the average lateral width
C. Assessment Strategy To evaluate the influence of miscenterings, scout images of patients who were scanned with these models of scanners were used. First the sqrt PA of patients were calculated from the anterior-posterior scout and then the amounts of miscenterings were determined from the lateral scouts. The amount of miscentering was calculated by use of Imagej software. This software is capable to determine the geometrical center of patient body’s shape. By choosing a polygonal ROI of patient body in lateral scout image and determination of center of this ROI and subtracting the value of Y from the Y value of image’s center, the amount of miscentering was found out. Figure 5 shows that a selection of an ROI in lateral scout image by Imagej. From sqrt PA and miscentering calculated values and use of regression models of dose and image noise, the amounts of dose and image noise increase were assessed for each person due to miscentering. For each scanner, scout images were gathered from 2 imaging centers. The average centering error, dose increase percentage and image noise increase percentage was calculated for each center and finally the results of centers were compared.
Fig. 5
329
A ROI is chosen regards to the shape of the patient’s body
Figure 7 shows the percentage of image noise changes relative to the value of miscentering for the two implied scanners. For example, the image noise increases using the W23 phantom were 1.8%, 5.4% and 13.4% for miscentering of 2, 4 and 6 cm below the isocenter, respectively for 64-slice GE Lightspeed VCT. For the GE Light Speed 4-slice scanner, the corresponding increases of image noise were 3.3%, 4.4% and 15.1%, respectively. As it is already implied, for each scanner, the scout images of patients from two imaging sites were used. The names 1 and 2 were used for the sites with VCT scanner and the names 3 and 4 were used for the sites with 4 slice scanner. From site 1, 80 patient’s scouts were analyzed which the average miscentering was 2.1 cm below the isocenter that leads to the average 21.9% increase of dose and 6% increase of noise. The same analyzes were done for 33 patients from aite 2 which is included in table 2. From site 3, 41 patient’s scouts were analyzed that the average increase of dose was 17.6% and the average increase of noise was 5.3% while the average miscentering value was 1.5 cm below the isocenter. The same analyzes were done for site 4 which is included in table 3.
IV. DISCUSSION III. RESULTS Figure 6 demonstrate that how the dose changes with regards to the value of centering error for 64-slice GE Lightspeed VCT and 4-slice GE Lightspeed. For example, the increase of surface doses using the CTDI-32 phantom were 15.5%, 33.3% and 51.1% for miscentering of 2, 4 and 6 cm below the isocenter, respectively for 64-slice GE Lightspeed VCT. For the GE Light Speed 4-slice scanner, the corresponding increase of surface doses were 18.6%, 32.9%, 51.4%, respectively.
Technologists’ faults from site 2 leads to 1.6 cm more error in patients centering and thus the average 7.2% more dose and 4.9% more noise than site 1. These faults from site 4 also leads to 0.2 cm more error in patients centering and therefore the average 4.3% more dose and 1.4% more noise than site 3. Looking at the calculated values of miscentering for all the patients in each center, makes it clear that the high percentage (67-85 %) of patients was miscentered more than 1 cm. These amounts show that in the most of the cases tech
IFMBE Proceedings Vol. 29
330
M.A. Habibzadeh et al.
Table 2 The results of patients’ scout images investigation of site 1 and 2 Imaging site
Average miscentering (cm)
Average percentage of dose increase
Average percentage of noise increase
Site 1 Site 2
2.1 3.7
21.9 29.1
6 10.9
Table 3 The results of patients’ scout images investigation of site 3 and 4
(a)
Imaging site
Average miscentering (cm)
Average percentage of dose increase
Average percentage of noise increase
Site 3 Site 4
1.5 1.7
17.6 21.9
5.3 6.7
V. CONCLUSIONS
(b)
Fig. 6 (a) The relation between surface dose and centering error for different phantoms on the 4-slice Light Speed and (b) Light Speed 64-slice CT scanners
nologists make unignorable mistakes. Of course the numbers of patients differ from one center to another. This is a factor which does not relate to technologists but affects their job quality. Analyzing the results of large body and small body patients individually shows that small patients are miscentered by more probability than large patients.
Results show that surface doses are increased, this means that especially anterior organs take more doses. Also the image noise is increased but the effect of miscentering on dose is higher than noise. The comparison of two centers of each scanner shows the difference between the operations of technologists and the signification of their role in imaging process. So they should be strongly encouraged to pay greater care to patient centering.
REFERENCES 1. Tack D, Gevenois P A (2007) Radiation Dose from Adult and Pediatric Multidetector Computed Tomography. Springer, Germany 2. Toth T, Ge Z, Daly M P (2007) The influence of patient centering on CT dose and image noise. Med Phys 34:3093-3101 3. Kalra M K, Maher M M, Toth T L, Schmidt B, Westerman B L, Morgan H T, and Saini S (2004) Techniques and applications of automatic tube current modulation for CT. Radiology 233: 649–657 4. Schindera S T, Nelson R C, Toth T L, Nguyen G T, Toncheva G I, DeLong D M, Yoshizumi T T (2008) Effect of Patient Size on Radiation Dose for Abdominal MDCT with Automatic Tube Current Modulation: Phantom Study. AJR 19:100–105 5. Udayasankar UK, Kalra M, Li J et al. (2006) Multidetector scanning of the abdomen and pelvis: a study for evaluation of size compensated automatic tube current modulation technique in 100 subjects. RSNA 2006. Oak Brook, IL: Radiological Society of North America
(a)
(b)
Author: Mohammad Reza Ay Institute: Department of Medical Physics abd Biomedical Eng., Tehran University of Medical Sciences, Tehran, Iran Street: Pour Sina City: Tehran Country: Iran Email: [email protected]
Fig. 7 (a) The relation between image noise in the lower half of the image and centering error for different phantoms in the 4-slice Light Speed and (b) 64-slice Light Speed CT scanners
IFMBE Proceedings Vol. 29
A Study on Performance of a Digital Image Acquisition System in Mammography Diagnostic D. Dimitric1, G. Nisevic2, Z. Boskovic3, and A. Vasic4 1
MMA/Department of Radiology, Beograd, R. Serbia
Abstract— In last decade X-ray clinical diagnostic experienced a considerable technological advance especially in two key fields: image reception and automatic exposure control. These advances necessitated physicists and biomedical engineers to adjust themselves to the new technical and physical aspects of Quality Assurance program in order to meet requests of practicing ALARA1 principle. Keywords— Half Value Layer, dose optimization, pixel aspect ratio, quantum noise, data regression.
I. INTRODUCTION A. Digital and Analog Image Reception Modern units for diagnostic mammography employ digital imaging technology meaning that if the image receptor is exposed to the X-Ray beam then electron charges are formed in the plane of image reception as discrete elements of a virtual image. Two distinctive techniques are used for conversion of photons’ kinetic energy into the potential energy of electron charges: indirect and direct. In an indirect technique, X-ray photons passing through a scintillating layer deposit in it a part of their energy by virtue of the photo-electric effect thereby producing light photons which in turn find their way through to the light-sensitive elements of detective circuitry. On the other side, in a direct conversion technique the intermediate energy transfer (x-ray to light) is avoided, and incident x-ray photons passing through a photoconductive layer produce the electronhole pairs which directly contribute to an image element formation. Screen-film technology of an image reception is employed in classic mammography. Here, an image-forming process looks much like that of indirect digital conversion technique with an exception that a photo-film emulsion plays role of image reception layer inside which minute, light sensitive crystals are suspended. Since these crystals are randomly and continuously distributed within the structure of the film emulsion this technique of image recording is referred as analog. 1
As Low As it is Reasonable Achievable.
B. Automatic Exposure Control and Automatic Dose Optimization In units for classic diagnostic mammography the Automatic Exposure Control (AEC) system involves a dose measuring device (ion chamber) positioned in the beam field behind the image reception plane. An X-ray film is exposed with a beam whose parameters are beforehand manually set. The AEC system is designed to indicate the moment of exposure termination. With this purpose the dose absorption of the ion-chamber is measured continuously throughout the exposure. At the moment when a predefined threshold dose has been sampled the exposure termination is started. Employment of digital technology in modern image receptors brought on the thoroughly new concept in the AEC systems design. The ion chamber as dose measuring device is now superseded by a much smaller AEC sensor which can be driven along the center line of an image reception area to get to the most preferable position for dose measuring. This position referred as target sensor position is introduced as one component of Automatic Dose Optimization (ADO) approach with the purpose of accounting for diversity and non-homogeneity of the breast composition. The beam quality parameterization is completed automatically based on the breast thickness indication. When an exposure is activated from the operator’s console the ADO system makes preliminary exposure in order to search for sensor’s target position relying upon an analysis of image data obtained from pre-exposure. Thereafter sensor is moved into the target position and ADO system makes final exposure whose termination is based on threshold dose that was beforehand assigned to the sensor. In this concept of Automatic Dose Optimization (ADO), sensor’s target position and sensor’s threshold dose value are interactively set through an execution of software routines which call upon entries in the look-up tables of the system’s software configuration files.
II. TESTED SYSTEM AND MEASUREMENT EQUIPMENT Mammography unit Selenia, Hologic with Full Field Digital detector technology is tested. The equipment is
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 331–334, 2010. www.springerlink.com
332
D. Dimitric et al.
installed in July 2008 at The Radiotherapy Department in Medical Military Academy, Belgrade. When measurements involved the tube’s radiation output a Victoreen`s control unit Model 8000 NERO mAx and a Mammo-ion Chamber Model 6000-529 were used. In test procedures presented here the following two phantoms are imaged: • •
PH1 -the 4.5cm thick, half-circled QA imaging phantom whose composition features the radiation interaction propreties of 50% glandular and 50% adipose tissue. PH2 -“thickness” phantom composed of five rectangular PMMA plates each of 1cm thickness.
In following chapters, the two main issues of quality control programme in one X-ray imaging diagnostic are discussed. These are dose output and image acquisition.Measurements are carried out and evaluated according to the guidelines of European protocols [1] and [2].
III. QUALITY OF DOSE OUTPUT
B. Performance of the Automatic Dose Optimization System In this procedure which is described in [1] under 2b.2.1.3. the output dose is measured with clinical ADO system settings, i.e. using the beam parameters automatically adjusted to the phantom in place. Measured value of output dose equals to ESAK (Entrance Surface Air KERMA) provided that the measurement volume of the ionchamber is out of reach of scattered radiation. For this purpose the tube load (mAs) required by the ADO system for given object’s thickness and composition is firstly ascertained by imaging the phantom PH2. Then, with so defined beam parameters the ESAK is measured in subsequent exposure with no phantom in the field. Average Glandular Dose (AGD) is calculated by formula:
AGD = ESAK ⋅ g ⋅ c ⋅ s
(1)
where g, c, and s are tabulated weighting factors [1]. Results for three phantom thicknesses are given in Table 2. Table 2
Measured ESAK values, calculated AGDs as well as acceptable and achievable AGD limits according to the European protocol [1]
A. Beam Quality Half-value layer (HVL) is measured according to the respective guideline in [2]. In measurement setup the beam radiation was attenuated by an Al-sheet of 0.5 mm thickness, the smallest beam field opening was selected2 and a Mammoion chamber was placed in the beam field. The control unit was set to measure dose in a HVL mode whereby the dose values from successive exposures are collected and then statistically processed. This statistic routine uses the “least square” method to calculate a straight line that best fits the regression function of dose attenuation data against filter thickness data. Table 1 gives the summary of results.
dPH (cm) 3 4 5
kVp (kV) 26 27 28
mAs
ESAK
95 139 200
3.06 5.12 8.17
AGD (calc.) 1.137 1.517 2.059
AGD accept. <1.5 <2.0 <3.0
AGD achiev. <1.0 <1.6 <2.4
As it can be seen from Table 2 calculated AGDs are well below the acceptable limits, remaining mostly within the achievable limits except for lower phantom thicknesses (≤ 3cm).
IV. IMAGE QUALITY
Table 1 HVL values measured for 28 kVp / 100 mAs beam. Xo /XMEAN are output doses of non-attenuated beam and of the beam attenuated with a 0.5mm of Al, respectively. HVLCAL is calibration value Target/Filter
W/Rh W/Ag
X0 /XMEAN (mGy)
HVL (mm)
3.97/1.95 5.60/3.06
.48 .46
HVLCAL (mm)
0.54 0.55
HVLCAL values are obtained during the last system’s calibration test. Presumably, since that time the beam quality has changed. The falling trend of HVL value means the decreasing penetrability of the beam. The ADO system compensates for this loss of penetrability by producing a larger tube load (mAs) and thereby a larger Entrance Surface Dose (ESD). This holds within the entire energy range and not only at the energy (28keV) selected for this test. 2
A. Source to Image Distance Source to image distance (SID) is calculated according to the guideline 2b.2.1.1.2 [1]. A thin rectangular object was imaged once when laid on the breast support table to make for a contact object’s view, and subsequently when it was set 20cm above the breast support table to make for a magnified object’s view. An image analysis application belonging to the software packet of the Acquisition Workstation (AWS) is then run. On each of the two images, the object’s dimensions are found by applying a distance measuring tool. If number of pixels as distance measure is used then next formula yields SID value:
In keeping low the share od scattered photons. IFMBE Proceedings Vol. 29
f SID =
p⋅h −1 a ⋅ (n − nmgn ) −1 pix
(2)
A Study on Performance of a Digital Image Acquisition System in Mammography Diagnostic
Response function of the image receptor is assessed by imaging the phantom PH1 with different Entrance surface Doses (ESDs) at the fixed beam quality as it is described in 2b.2.2.1 [1]. In measurement setup for ESD the ionchamber is positioned laterally besides the phantom in the beam field. The beam parameters were defined as W/Rh (target/filter) and 28 kV (tube voltage). On each of the acquired images a 1cm2 Region Of Interest (ROI) is selected and its image content is statistically processed by an application tool, so that the mean pixel value (MPV) as well as the standard deviation (STD) of pixel value over the ROI are obtained. Results are summarized in Table 3. PV’s standard deviation, and ROI’s square signal-noise ratio for different ESDs. Beam parameters were 28 kV and W/Rh
3
600
y=1.22x+19
400
r2=.9999
200 0 0
200
ESD (mGy) .75 1.49 2.97 3.72 5.60 7.49
MPV 107 201 385 475 706 931
SNR2OUT 420.25 1095.61 2381.44 2948.49 4199.04 5505.64
STD 4.3 5.5 7.5 8.4 10.6 12.3
400
600
800
ESD[x10microGy]
Fig. 1 Diagram of regression of MPV on ESD Linear regression function of MPV data against ESD data is shown in Fig. 1. The coefficient of linearity is 1.22. When reciprocated it values the sensitivity of an image receptor that is defined as dose gradient by unit of pixel value. For this image receptor the sensitivity was calculated at 8.2µGy/PV. Second coefficient (19) in regression function, here marked nOFF gives the mean pixel value that would be measured if the image were acquired without exposing the image receptor. When image’s signal-to-noise ratio is calculated the pixel value offset (nOFF) is subtracted from MPV as follows:
SNRout = ( MPV − nOFF ) / STD
(3)
where subscript “OUT” in the symbol of SNR refers to the output image noise. Square SNROUT data are tested for linear regression against ESD data and diagram is shown in Fig. 2. The goodness of fit of a regression is given by r2. Non-unity value (0.9955) of r2 means that besides the quantum noise, digital and electronic are also present in the output image. 6000
y=7.48x+13.45
5000 4000
Table 3 MPV, mAs 20 40 80 100 150 200
800
SNR2
B. Image Receptor Response Function
1000 MPV
Fig. 2 Diagram of the regression of SNR²OUT on ESD (SNR² against ESD [×10 µGy]; fitted line y = 7.48x + 13.45, r² = 0.9955)
C. MTF and DQE

The modulation transfer function (MTF) gives the relationship between detail size and image contrast. For the purpose of measuring the MTF the phantom PH1 is used; embedded within the phantom are patterns with different detail sizes. The image of a single pattern is seen as a square region of 1 cm² area over which black and white line pairs are distributed either horizontally or vertically. The narrower a single line is, the greater the spatial frequency of the region. There are about 15 patterns, with spatial frequencies going from 0 lp/mm to 20 lp/mm. The phantom PH1 is imaged with clinical settings. Only the image regions with spatial frequencies of 0 lp/mm and 5 lp/mm were measurable, while on the remaining regions the stripes were unrecognizable. We therefore took the spatial frequency of 6 lp/mm as the system's contrast visibility threshold, for which the MTF value (normalized to the maximal MTF) should amount to 0.04 [3]. Based on the three normalized MTF values, the MTF is then interpolated as a broken-line function. The two MTFs for the two image directions, horizontal and vertical, are shown in Fig. 3. For each MTF, the spatial frequency of the line's breaking point as well as the MTF value corresponding to the spatial frequency of 5 lp/mm is shown.
Fig. 3 Normalized MTFs measured in the two image directions, horizontal and vertical (MTF against f [lp/mm], plotted for f = 0–7 lp/mm). At f = 5 lp/mm the MTF values are 0.669 (hor.) and 0.426 (vert.); the lines break at f = 3.51 and f = 4.47 lp/mm; the visibility threshold MTF = 0.04 is also marked.
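The broken-line interpolation can be sketched with a plain piecewise-linear interpolant through the three normalized points. The node placement below (0, 5 and 6 lp/mm) is an assumption for illustration; the measured curves in Fig. 3 break at 3.51 and 4.47 lp/mm, so the paper's exact node choice may differ.

```python
import numpy as np

def mtf_broken_line(f, mtf_at_5):
    """Piecewise-linear MTF through (0 lp/mm, 1.0), (5 lp/mm, measured value)
    and (6 lp/mm, 0.04), the assumed contrast visibility threshold."""
    nodes = np.array([0.0, 5.0, 6.0])
    values = np.array([1.0, mtf_at_5, 0.04])
    return np.interp(f, nodes, values)

f = np.linspace(0.0, 6.0, 61)
mtf_hor = mtf_broken_line(f, 0.669)   # horizontal direction (Fig. 3)
mtf_ver = mtf_broken_line(f, 0.426)   # vertical direction (Fig. 3)
```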
The detective quantum efficiency (DQE) estimates the imaging quality of a digital image receptor by comparing the square SNR at its input to that at its output (DQE = SNR²OUT / SNR²IN). SNR_OUT is given in Eq. (3). For the SNR_IN calculation the results of measuring the dose output reproducibility are used: the average ESAK value is considered as the signal while the standard deviation of the ESAK values is considered as the quantum noise:

SNR_IN = ESAK / STD(ESAK)

When the DQE calculation formula was applied to the SNR_IN and SNR_OUT data, the DQE value amounted to 0.43.
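A sketch of the DQE estimate; the ESAK reproducibility samples and the matching SNR_OUT value below are placeholders, not the paper's measurements:

```python
import numpy as np

esak = np.array([4.02, 3.98, 4.01, 3.97, 4.03, 3.99])  # [mGy], hypothetical samples
snr_in = esak.mean() / esak.std(ddof=1)                # signal over quantum-noise estimate

snr_out = 0.66 * snr_in        # illustrative SNR_OUT from Eq. (3) at the same exposure
dqe = snr_out**2 / snr_in**2   # ~0.43 with these placeholder numbers
```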
V. DISCUSSION AND CONCLUSION

In the beam quality test a considerable fall of the HVL value was measured for both filters, rhodium and silver. This fall has not yet reached the level at which the tube load goes outside the calibrated mAs range and the system starts to generate alarms. In the test of the ADO system performance the AGD results were inside the acceptable limits and, for most of the thickness range, inside the achievable limits. Based on the SID measurement the pixel aspect ratio proved to be 1.08:1; the application tool for distance measuring does not account for this fact and thereby introduces additional uncertainty. In the measurements of image contrast two MTFs are plotted, whereby a better contrast resolution is found in the vertical direction, as expected considering the shorter horizontal pixel dimension. The image receptor sensitivity is calculated as 8.2 µGy/PV. If this value is combined with the upper limit of the image receptor's dynamic range (specified as 8.73 mGy), then pixel saturation would occur at about pixel value 1083. This accounts for a maximum of 11 bits of useful image data per pixel. Considering the manufacturer's specification of 14 bits of image data per pixel, it turns out that 3 bits per pixel are engaged only to convey the quantum and electronic noise components of the image signal, which puts an unnecessary demand on the image receptor hardware and extends the image acquisition time. The DQE value of 0.43, as the final overall estimate of the imaging ability of the acquisition station, is rather low, and a recalibration procedure might be taken into consideration.
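Restating the discussion's own numbers as a quick arithmetic check (no new measurements): 8.73 mGy / 8.2 µGy/PV ≈ 1065 PV, and adding the offset n_OFF = 19 gives a saturation level of about 1083 PV; since 2¹⁰ = 1024 < 1083 ≤ 2048 = 2¹¹, 11 bits per pixel suffice, leaving 14 − 11 = 3 bits of the specified depth without useful signal content.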
REFERENCES
1. European protocol for the quality control of the physical and technical aspects of mammography screening – part 2b (Digital Mammography), 4th edition, September 2005
2. European Protocol on Dosimetry in Mammography, June 1996
3. Grundlagen der Röntgen-Bildverstärker-Fernsehkette. Siemens, Bereich Medizinische Technik, Erlangen, March 1974

Author: Dragan Dimitric
Institute: Department for Logistics of Medical Military Academy
Street: Crnotravska 17
City: Belgrade
Country: R. Serbia
Email: [email protected]
An efficient Video-Synopsis technique for optical recordings with application to the analysis of rat barrel-cortex responses

V. Tsitlakidis¹, N.A. Laskaris¹, G.C. Koudounis², E.K. Kosmidis³

¹ Laboratory of Artificial Intelligence & Information Analysis, Department of Informatics, AUTh, 54 124, Greece
² Cardiology dept., General Hospital of Kalamata, Greece
³ Laboratory of Physiology, School of Medicine, AUTh, 54 124, Greece
Abstract— Optical imaging techniques are nowadays highly popular in neuroscience research, due to their high spatial and temporal resolution. Optical recording data come in the form of sequential snapshots (i.e. videos) reflecting changes in neural activity. By adopting carefully designed stimulation paradigms, these signals constitute an invaluable source of information regarding the emerging spatiotemporal dynamics of the brain's response. However, the volume of collected data can obscure the understanding of the underlying mechanisms. In particular, the comparison between different recording conditions is a challenging task that is usually solved empirically. We introduce an algorithmic technique that identifies spatial domains of coherent evoked activity, produces a meaningful summary of the involved videos and facilitates the comparison of response dynamics. A self-organizing network (SON) lies at the core of the methodology and is responsible for the segmentation of the imaged areas into disjoint, functionally homogeneous regions. The obtained segments are then ordered according to the strength of response, and the ones below an adaptively defined threshold are suppressed. In this way, regions of interest (ROIs) are defined automatically for each response individually and can subsequently be compared across responses, revealing the spatial aspects of the neural code that usually give rise to functional maps. Our technique is demonstrated using averaged responses from rat S1 somatosensory cortex. For the first time, some evidence is provided that the deflection direction of a single whisker might be reflected in the location of the activation's first entry.

Keywords— Optical recordings, Neural Gas, visual summaries.
I. INTRODUCTION
Optical imaging is a relatively new recording technique that uses microscopes and cameras with high spatial and temporal resolution. This explains why only a few methodological papers have appeared so far concerned with the task of understanding the signals in the collected data. The analysis usually starts, in an exploratory mode, with the displayed data observed by an experienced user, and proceeds with the manual definition of ROIs and the extraction of the associated time-courses of activity. For a more thorough treatment, wavelet analysis [1] and multivariate decompositions (PCA, spatial ICA) [2] have been attempted on the raw data. More recently, a manifold learning approach has been introduced [3], with the advantage of producing a parsimonious visualization of the data. It is the scope of this work to introduce a novel methodology that fully automates the detection of ROIs, and hence can be applied repeatedly as a means of studying the putative time-dependent modulations of the complex, spatially-encoding scheme with which the brain differentiates external-world stimuli. Moreover, by treating data from a control condition as surrogates for the response-related video segmentation, we can define finely-tuned thresholds for the precise detection of significantly activated brain regions. While the original motive stemmed from recent advances in handling video databases [4], the realized algorithmic steps were borrowed from previous work on mining information from multisite encephalographic recordings [5]. In a nutshell, we first exploit the original Neural-Gas algorithm so as to identify spatial domains of coherently-evoked neural activity, then derive a temporal course for each group of pixels and associate a response strength with it, based on a conventional SNR estimator. Using the obtained SNR measurements, the segments are ordered and color-coded. The procedure is repeated for different response latency ranges and, by keeping a uniform scale for the color code, the evolution of the response dynamics is mapped in the most intelligible way, as a series of activation topographies. The distribution of SNR measurements from the spontaneous-activity data can be exploited in a thresholding scheme that reveals the segments of significant stimulus-evoked activations. To introduce and demonstrate our methodology, we utilize data from a study targeting rat somatosensation. The scope of that study and some relevant background elements are provided in Section 2. Section 3 is devoted to the presentation of the video synopsis technique, while Section 4
includes a brief report on some new experimental findings resulting from its application.
II. EXPERIMENTAL DATA
Rodents' whiskers are highly sensitive tactile detectors, similar to primate fingertips, that are actively moved through space to extract information about the environment. A somatosensory stimulus evokes a topographical response in cortex, and a physical map of the whisker pad can be found in stained sections of rodent cortex [6]. It is proven that single whisker deflections evoke a specific response that rapidly spreads across the barrels [7] and beyond [8], especially when the animal is anesthetized [9] [10] [11] [12]. The responding neural activity gives the impression that a particular cortical area responds exclusively to a specific stimulus [13]. However, the way in which all this neural activity encodes the stimulus information is still unknown. In the particular experimental study, performed on anesthetized rats, optical recording data were used to identify the characteristics of stimulus responses that code the direction of the stimulus. A Voltage Sensitive Dye (VSD) was first applied on the animal cortex; it transformed the intracellular voltage differences into an optical signal that was then recorded with special CCD cameras. The experiments were carried out at the School of Medicine of Yale University (D.J. Davis, R. Sachdev and V.A. Pieribone, in preparation) and focused on the neural activation in layer 2/3 of cortex following single whisker movements (stimulation sweeps) [14] [15] or no deflection (background activity sweeps). The background activity sweeps were used to subtract the cardiac and dye-bleaching artifacts from the stimulation sweeps [16] and also to produce sweeps of spontaneous cortical activity (control sweeps). Stimuli were whisker deflections in two different directions (caudal and rostral) and three different amplitudes. Here, we use only averaged videos corresponding to the maximum amplitude in both directions, and averaged videos corresponding to spontaneous activity (artifact-corrected data). Each video was one second in duration and had been sampled at a rate of 0.5 kHz. This means 500 frames for each video, with a frame size of 80 × 80 pixels. The stimulus onset time (when applied) was at 150 ms (75th frame). The preprocessing of all the averaged videos included: 1) a simple algebraic transformation that associates each pixel (n,m) with a signal expressing the relative increase in fluorescence DF/F due to stimulus onset [3]; 2) temporal band-pass filtering and spatial low-pass filtering; and 3) definition of the useful part of the Field Of View (FOV).
III. THE METHOD

A. Feature extraction

We first define the feature vectors that will be used in the subsequent clustering step. A conjunction of temporal- and spatial-domain features is adopted, so that clustering naturally results in connected segments consisting of pixels which reflect similar activation time-courses. Regarding the temporal domain, the signal values at consecutive latencies of interest (LOIs) form the first part of the feature vector associated with each pixel (the simplest way to select LOIs is based on the time interval around the peak(s) of the time-dependent curve of integrated activity from a response video; see Fig. 1a). The second part of the feature vector is formed by the corresponding pair (x,y) of pixel coordinates in the FOV. The two parts are concatenated after proper scaling (to counterbalance the range differences) and weighting (based on a factor 0 ≤ β ≤ 1) so as to control the relative importance of the two domains in the overall representation. With superscripts denoting the two different domains, the data matrix (containing all feature vectors from the 'useful' pixels) takes the form:
$$ X_{data}^{[N \times (w+2)]} = \beta\,\frac{1}{r_A}\, X^{A\,[N \times w]} \;\cup\; (1-\beta)\,\frac{1}{r_B}\, X^{B\,[N \times 2]} = \big[\, X_1^A X_1^B \mid \cdots \mid X_i^A X_i^B \mid \cdots \mid X_N^A X_N^B \,\big] \qquad (1) $$
where N is the number of pixels (of the actual FOV), w is the number of selected latencies, and r denotes the range of values in either domain.

B. Spatial Segmentation via Neural-Gas based clustering of pixels

After the feature extraction step, the N formed patterns conveying the spatiotemporal response dynamics are fed to a clustering routine which takes over their partition into homogeneous groups representing well-localized activities of similar temporal morphology. A Neural-Gas network is employed to accomplish this task, due to its efficiency [17]. The algorithm is executed using the combined (temporal + spatial) representation; however, the results are transferred to both domains and visualized separately. Strictly speaking, the "Neural-Gas" algorithm is applied to the data matrix Xdata = [X1 X2 … XN]. This algorithm is an artificial neural network model which converges efficiently to a small, user-defined number k of code vectors, even when little about the
data dimensionality is known [8], and this makes it the best candidate for our summarization purposes. The computed code vectors O_j ∈ R^{w+2}, j = 1, 2, …, k are used in a simple encoding scheme: the nearest code vector is assigned to each X_i in X_data. This procedure divides the response manifold V ⊂ R^{w+2} into k Voronoi regions

$$ V_j = \left\{\, X \in V : \|X - O_j\| \le \|X - O_i\|,\ \forall i,\ i = 1, 2, \ldots, k \,\right\} $$

From a more practical point of view, the bulk of the information contained in the data matrix is represented, in a parsimonious way, by an (N × k) partition matrix U, with elements u_ij such that

$$ u_{ij} = \begin{cases} 1 & \text{if } X_i \in V_j \\ 0 & \text{if } X_i \notin V_j \end{cases}, \qquad \sum_{j=1}^{k}\sum_{i=1}^{N} u_{ij} = \sum_{j=1}^{k} N_j = N \qquad (2) $$

Next, the computed k-partition is applied individually to both parts (temporal and spatial) of the patterns; hence the temporal signals and the pixel coordinates are grouped accordingly. By within-group averaging of the former, k response profiles are computed (segment-based averages). Each one serves as the indicative response profile for the spatial domain contained in the formed segment (i.e. the corresponding group of pixels).
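A compact sketch of the two steps above: building the Eq. (1) feature matrix and running a rank-based Neural-Gas quantizer. The decay schedules, β = 0.5, the toy video and all function names are illustrative choices, not the authors' exact settings (SciPy is used only for the final Voronoi encoding).

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

def build_features(video, lois, beta=0.5):
    """Eq. (1): per-pixel signal values at the latencies of interest (LOIs)
    concatenated with the (x, y) pixel coordinates, each part range-scaled."""
    T, H, W = video.shape
    temporal = video[list(lois)].reshape(len(lois), -1).T              # N x w
    ys, xs = np.mgrid[0:H, 0:W]
    spatial = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)  # N x 2
    r_a = temporal.max() - temporal.min()
    r_b = spatial.max() - spatial.min()
    return np.hstack([beta * temporal / r_a, (1 - beta) * spatial / r_b])

def neural_gas(X, k=100, n_iter=20000, eps=(0.5, 0.01), lam=(10.0, 0.5)):
    """Minimal Neural-Gas vector quantizer: soft, rank-weighted updates of k
    code vectors, with exponentially decaying step size and neighborhood range."""
    O = X[rng.choice(len(X), k, replace=False)].copy()
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        frac = t / n_iter
        eps_t = eps[0] * (eps[1] / eps[0]) ** frac
        lam_t = lam[0] * (lam[1] / lam[0]) ** frac
        ranks = np.argsort(np.argsort(((O - x) ** 2).sum(axis=1)))
        O += (eps_t * np.exp(-ranks / lam_t))[:, None] * (x - O)
    labels = cdist(X, O).argmin(axis=1)   # Voronoi encoding of every pixel
    return O, labels

video = rng.standard_normal((500, 80, 80))      # stand-in for a response video
X = build_features(video, lois=range(75, 95))   # LOIs around the response peak
O, labels = neural_gas(X, k=100, n_iter=5000)   # k disjoint pixel segments
```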
Figure 1: The main steps of the segmentation methodology. a) Feature extraction, selecting LOIs based on the time interval around the peak. b) Temporal-domain visualization of the ordered segments. c) Spatial-domain visualization of the ordered segments.

C. Tracking response dynamics via piecewise implementation

To study the evolution of response dynamics, and in particular the spatial aspects of the dynamical changes, we repeatedly perform the segmentation procedure described above, using different (optionally overlapping) time segments in the pixel-activity representation (Eq. 1). However, all the intermediate SNR values obtained in each segmentation step are treated together during the ordering scheme. In this way a common color map (Fig. 2b) is obtained that facilitates the comparative visualization of response topographies. Figure 2c serves as a demonstration of this procedure: it includes the successive segmentations (using the overlapping time windows indicated in Fig. 2a) of the 3 videos corresponding to the two somatosensory responses and the spontaneous activity.

D. Data-sieving

To simplify the whole picture, we exploit the outcomes of applying the piecewise segmentation to the spontaneous-activity data and form an empirical distribution of the SNR values. As can be seen in Fig. 2d, there are a few segments that present high SNR values even without stimulation. This experimental fact can be explained by the tendency of any clustering algorithm (not of Neural-Gas in particular) to identify similar activations, combined with the tendency of the SNR estimator to read waveform similarity as high signal content. Based on this empirical distribution, an SNR value can be defined as a threshold associated with a user-defined significance level. All the segments with response strength (i.e. SNR value) below the threshold can be considered part of spatial domains not responsive to the particular stimulus. By the same token, the derived threshold can be used in the detection of the stimulus's first entry, by locating the first segment (within the post-stimulus latency range) that exceeds it.
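A sketch of this ordering-and-sieving step, assuming k segment-averaged traces per video are already available. The SNR estimator here (post- over pre-stimulus variance) and the function names are placeholders for the paper's "conventional SNR estimator":

```python
import numpy as np

def segment_snr(traces, onset=75):
    """Placeholder SNR per segment-averaged trace: post-stimulus power
    over pre-stimulus (baseline) power. traces: k x T array."""
    return traces[:, onset:].var(axis=1) / traces[:, :onset].var(axis=1)

def sieve(traces_stim, traces_ctrl, p=0.001):
    """Order segments by SNR and keep those exceeding a threshold drawn
    from the spontaneous-activity (control) SNR distribution."""
    snr = segment_snr(traces_stim)
    thr = np.quantile(segment_snr(traces_ctrl), 1.0 - p)  # significance level p
    order = np.argsort(snr)[::-1]     # common SNR-rank ordering / color code
    keep = snr > thr                  # segments with significant activations
    return order, keep, thr
```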
IV. RESULTS
Using the described framework for video-synopsis, we contrasted the data corresponding to (averaged) responses from caudal and rostral deflection of the same whisker and also compared them against spontaneous activity. Rostral deflection resulted in larger-amplitude responses than caudal (Fig. 2a). Using k = 100, β = 0.5 and the latency ranges shown in Fig. 2a, we show that whisker stimulation induces a stimulus-specific pattern of dynamical changes (see Fig. 2c), with the strength and the velocity of the spatial spread being the most differentiating characteristics. To provide a fine description of the spatial aspects of stimulus encoding in S1 barrel cortex, we locate and compare the earliest entries of the response (based on the above data-sieving procedure). Figure 2e shows the response topographies after thresholding, and Figure 2f includes the detected top SNR-rank segments from both deflection directions. Figure 2f suggests that rostral and caudal deflections of the same whisker are also encoded topographically on S1, having distinct areas of maximum activation.
Figure 2: a) Overlapping time-windows used in the piecewise implementation of the main segmentation step. b) The ordered SNR-rank color code. c) Piecewise application of the segmentation step. d) Empirical distribution of the SNR-values from the spontaneous-activity data and the selected threshold c (corresponding to P-value 0.001). e) Response topographies after Data-Sieving. f) First-entry response locus (top SNR-rank segments) for both deflection directions.

V. CONCLUSIONS

We introduce a novel method that summarizes optical recording data by identifying spatial domains of coherent evoked activations and presents them in an orderly fashion. The obtained visualizations support comparisons across different recording conditions and facilitate the understanding of neural response dynamics at a single glance. Without limiting the applicability of the method, rat somatosensory responses were utilized for demonstration purposes. The preliminary results reveal some new aspects of somatosensory encoding on the cortex. We provide evidence that a spatiotemporal code exists at the cortical level for the directionality of whisker deflection.

REFERENCES

1. Bathellier et al. (2007) NeuroImage 34(3):1020-1035
2. Reidl et al. (2007) NeuroImage 34(1):94-108
3. Laskaris N.A., Kosmidis E.K. et al. (2008) IEEE Engineering in Medicine and Biology Magazine, March/April 2008
4. Pritch et al. (2008) IEEE Trans. PAMI 30(11):1971-1984
5. Laskaris N., Fotopoulos S., Ioannides A. (2004) IEEE Signal Processing Mag. 21(3):66-77
6. Woolsey T.A., Van der Loos H. (1970) Brain Res 17:205-242
7. Ferezou I., Bolea S. et al. (2006) Neuron 50:617-629
8. Ferezou I., Haiss F. et al. (2007) Neuron 56:907-923
9. Kleinfeld D., Delaney K.R. (1996) J Comp Neurol 375:89-108
10. Senseman D.M., Robbins K.A. (2002) Journal of Neurophysiology 87:1499-1514
11. Petersen C.C., Grinvald A., Sakmann B. (2003) J Neurosci 23:1298-1309
12. Civillico E.F., Contreras D. (2006) J Neurophysiol 96:336-351
13. Welker C. (1971) Brain Res 26:259-275
14. Simons D.J. (1978) J Neurophysiol 41:798-820
15. Sachdev R.N., Ebner F.F., Wilson C.J. (2004) J Neurophysiol 92:3511-3521
16. Orbach H.S., Cohen L.B., Grinvald A. (1985) J Neurosci 5:1886-1895
17. Laskaris N., Fotopoulos S. et al. (1997) Electroencephalogr Clin Neurophysiol 104:151-156
Author: Tsitlakidis Vasilios
Institute: Department of Informatics, AUTh
Street: Aristotle University Campus, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
Preoperative Planning Software for Hip Replacement

M. Michalíková, L. Bednarčíková, T. Tóth, and J. Živčák

Technical University of Košice, Faculty of Mechanical Engineering, Department of Biomedical Engineering, Automation and Measurement, Košice, Slovakia

Abstract— At the present time, in many countries of the world, preoperative planning of hip joint interventions is carried out with a caliper, a protractor, plastic templates and x-ray images. For these reasons the measurement is time-consuming and error-prone. Over the past few years, an increasing appreciation of the usefulness of digital technology has emerged in various fields of medicine. This paper presents current applications of computer technology in the field of surgery and pre-operative planning of total hip implantation. The newly developed CoXaM software offers a simple solution to these problems by using digital x-ray images together with handmade plastic templates. The developed software combines the digital x-ray images with digital templates for planning implantation and reimplantation interventions of hip joints.

Keywords— preoperative planning, x-ray, digitizing, software.
I. INTRODUCTION

Computer technology has many applications in different fields of industry, health care and medicine. This encompasses paper-based information processing as well as data processing machines (hospital information systems or clinical information systems) and the digitalization of images from a large variety of medical diagnostic equipment (e.g. computer images from X-ray, MR, CT). Many of these applications allow the visualization and classification, respectively the identification and the assessment, of the diagnosed objects. The aim of computer technology in medicine is to achieve the best possible support of patient care, pre-operative surgery planning and administration by electronic data processing. Kulkarni et al., 2008, devised a method whereby a planar disc placed on the radiographic cassette accounts for the expected magnification. Digital radiography is becoming widespread. Accurate pre-operative templating of digital images of the hip traditionally involves positioning a calibration object onto its centre. This can be difficult and cause embarrassment [1]. Digital pre-operative planning enables the surgeon to select from a library of templates and electronically overlay them over the image. Therefore, the surgeon can perform the necessary measurements critical to the templating and
pre-operative planning process in a digital environment. The pre-operative planning process is fast, precise, and cost-efficient, and it provides a permanent, archived record of the templating process [2]. William Murzic et al., 2005, presented a study aiming to evaluate the accuracy of a specific templating software (with emphasis on femoral component fit) and to compare it to the traditional technique using standard radiographs [3].

II. ANATOMY AND MORPHOLOGY OF THE HIP JOINT

The hip joint, scientifically referred to as the acetabulofemoral joint (art. coxae), is the joint between the femur and the acetabulum of the pelvis; its primary function is to support the weight of the body in both static (e.g. standing) and dynamic (e.g. walking or running) postures. The hip joint (Fig. 1) is a synovial joint formed by the articulation of the rounded head of the femur and the cup-like acetabulum of the pelvis. It is a special type of spheroid or ball-and-socket joint where the roughly spherical femoral head is largely contained within the acetabulum and has an average radius of curvature of 2.5 cm [4].
Fig. 1 Right hip joint – cross-section view [5]
Fig. 3 Femoral neck angle and acetabular inclination
Fig. 2 Three degrees of hip joint freedom

The hip muscles act on three mutually perpendicular main axes, all of which pass through the center of the femoral head, resulting in three degrees of freedom (Fig. 2) and three pairs of principal directions: flexion and extension around a transverse axis (left-right); lateral rotation and medial rotation around a longitudinal axis (along the thigh); abduction and adduction around a sagittal axis (forward-backward); and a combination of these movements (i.e. circumduction, a compound movement in which the leg describes the surface of an irregular cone) [6]. The most important morphological specifications (Fig. 3) which can be measured on an anteroposterior pelvic radiograph are:

• femoral neck angle (caput-collum-diaphyseal angle, CCD angle) – the angle between the longitudinal axes of the femoral neck and shaft; it normally measures approximately 126° in adults;
• acetabular inclination (transverse angle of the acetabular inlet plane) – the angle between a line passing from the superior to the inferior acetabular rim and the horizontal plane; it normally measures 40° in adults [4].
The total hip prosthesis must be anchored securely within the skeleton for good function; a loose-sitting total hip prosthesis is painful, and such a loose total hip is also stiff. There are two basic methods to secure the fixation of a total hip prosthesis to the skeleton [7]:
1. The cemented total hip – the surgeon uses bone cement to fix the prosthesis to the skeleton.
2. The cementless total hip – the surgeon impacts the total hip directly into the bed prepared in the skeleton.
The construction, the form, and the post-operative rehabilitation differ between these two types of prostheses. A third option combines them:
3. The hybrid total hip prosthesis – a cementless cup paired with a cemented shaft.
III. TOTAL HIP PROSTHESIS

Hip replacement (total hip replacement) is a surgical procedure in which the hip joint is replaced by a prosthetic implant. Replacing the hip joint consists of replacing both the acetabular and the femoral components (Fig. 4). Such joint replacement orthopaedic surgery is generally conducted to relieve arthritis pain or to fix severe physical joint damage as part of hip fracture treatment. Hip replacement is currently the most successful and reliable orthopaedic operation, with 97% of patients reporting an improved outcome.
Fig. 4 The modular structure of total hip prosthesis
Successful surgery requires precise placement of implants such that the function of the joint is optimized biomechanically and biologically. Pre-operative planning is helpful in achieving a successful result in total joint replacement. Pre-operative templating in total hip replacement helps familiarize the surgeon with the bony anatomy prior to surgery, reducing surgical time as well as complications. Typically, most reconstructive surgeons have used acetate overlays and radiographs to determine the appropriate implant size. Pre-operative planning is carried out with a caliper, a protractor, plastic templates and x-ray images; the measurement is time-consuming and error-prone. Digital images replace radiographs, which can no longer be lost or misplaced in a completely filmless system. X-ray images are viewed on a diagnostic-grade monitor, rendering prosthetic overlays useless [2], [8], [9].
IV. DIGITALIZATION OF THE PRE-OPERATIVE PLANNING BY COXAM SOFTWARE

The "CoXaM" software was developed in Visual Studio 2005 (Microsoft) in the Visual C++ programming language at the Department of Biomedical Engineering, Automation and Measurement of the Faculty of Mechanical Engineering, Technical University of Košice.

Fig. 5 Overview of main menu

The new sophisticated software "CoXaM" (Fig. 5) was designed for pre-operative planning and helps determine, on the X-ray image, length dimensions, a center of rotation and angle values. These parameters are assessed with respect to the parallelism of guidance lines. The software enables the digitalization of plastic templates from several producers, which will assess the suitability of the type of implant. By using digital templates, the surgeon can employ a stepwise method to determine which size of prosthesis to use and where to place the prosthesis within the bone to ensure optimum function of the joint following surgery. The incorporation of the various templates into the software in terms of the "magnification factor" is essential for accurate preoperative templating and planning.

A. Possibilities of CoXaM Software

Fig. 6 Example of using the "CoXaM" and control panel

Fig. 7 Control panel detail – a: calibration circle, b: center icon, c: measurement of dimension, d: circle, e: text, f: angle, g: examination of three-line parallelism, h: removing the planning, i: templates

• Calibration circle (Fig. 6 turquoise, Fig. 7a) – allows the exact conversion of marked dimensions given a calibration feature on the X-ray: the user plots a circle through three points and enters its real diameter in millimeters – in this case 28 mm.
• Center icon (Fig. 7b) – centers the x-ray image in the viewport.
• Measurement of dimension (Fig. 6 green, Fig. 7c) – calculates the distance between two points. If calibration has been performed the result is in millimeters, otherwise in pixels.
• Circle (Fig. 6 red, Fig. 7d) – from three points the software calculates the circle (center, diameter). If calibration has been performed the result is in millimeters, otherwise in pixels. The circles are used for finding the center of the hip joint and for defining the femoral head and acetabular component diameters. With the help of a circle we can determine the floating center of rotation before surgery and after.
• Text (Fig. 7e) – allows the user to enter text. The font used is Arial 12 pt.
• Angle (Fig. 6 blue, Fig. 7f) – the angle between two lines (created from four points); the two lines need not have an intersection point within the image.
• Examination of three-line parallelism (Fig. 6 pink, Fig. 7g) – the L. Spotorno and S. Romagnoli method calculates the parallelism between three lines (created from six points): the ischial tuberosities flowline (the base line), the superior acetabular rims flowline and the lesser trochanters flowline.
• Removing the planning (Fig. 7h) – removes all tasks and cleans the x-ray image.
• Templates (Fig. 6 yellow, Fig. 7i) – opens a digital template from the database of templates scanned from total hip prosthesis producers. It allows calibrating a template and inserting it into the x-ray image; the size of the template is matched to the size of the x-ray, and it is possible to rotate and move it.
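The geometry behind the calibration-circle, circle and distance tools reduces to fitting a circle through three points and converting pixels to millimeters. A minimal sketch; the coordinates and names below are illustrative, not taken from the CoXaM code:

```python
import math

def circle_from_points(p1, p2, p3):
    """Center and diameter of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = math.hypot(x1 - ux, y1 - uy)
    return (ux, uy), 2.0 * r

# Calibration: a circle plotted on the 28 mm calibration feature fixes the scale
_, dia_px = circle_from_points((120, 80), (150, 110), (120, 140))
mm_per_px = 28.0 / dia_px

def distance_mm(a, b):
    """'Measurement of dimension' tool after calibration."""
    return math.hypot(a[0] - b[0], a[1] - b[1]) * mm_per_px
```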
V. CONCLUSIONS

At the present time, computer and imaging technologies with electronic outputs are being introduced slowly but steadily in hospitals. The quality and user comfort of the software equipping hospital departments often add value to the diagnostic or surgery planning process. CoXaM offers a simple solution to the problem of combining digital x-ray images with handmade plastic templates: the problem is solved by digitizing the templates for use in the software. The developed software combines the digital x-ray images with the digital templates for planning implantation and reimplantation interventions of hip joints. The newly proposed methodology provides the opportunity for comfortable, user-friendly and dimensionally accurate computer-aided planning of the surgical operation. The technique is reliable, cost-effective and acceptable to patients and radiographers. It can easily be used in any radiography department after a few simple calculations and the manufacture of appropriately-sized discs. The CoXaM software provides several advantages for orthopaedic surgery. X-ray film is no longer necessary: there are no radiographs to store, lose, or misplace. Over time this results in cost savings, as film and developing supplies are no longer needed. Disadvantages include the initial cost of acquiring the technology. As digital technology improves and becomes more accessible to the health care industry, digital radiography will be used by an increasing number of hospitals and orthopaedic practices. More practices will become filmless, and software programs will be necessary for successful reconstructive planning and templating. Statistically significant clinical studies are planned to confirm both the qualitative value of the software and the quantitative precision of the output parameters.
ACKNOWLEDGMENT

This research has been supported by the research project 1/0829/08 VEGA - Correlation of Input Parameters Changes and Thermogram Results in Infrared Thermographic Diagnostic.
REFERENCES

1. Kulkarni A., Partington P., Kelly D., Muller S.: Disc calibration for digital templating in hip replacement. Journal of Bone and Joint Surgery – British Volume, Vol. 90-B, Issue 12, 1623-1626. doi: 10.1302/0301-620X.90B12.20238
2. Bono J.V.: Digital Templating in Total Hip Arthroplasty. The Journal of Bone and Joint Surgery (American), 2004;86:118-122. © 2004 The Journal of Bone and Joint Surgery, Inc.
3. Murzic W.J., Glozman Z., Lowe P.: The Accuracy of Digital (Filmless) Templating in Total Hip Replacement. 72nd Annual Meeting of the American Academy of Orthopaedic Surgeons, Washington, DC, February 23-27, 2005
4. Schuenke M., Schulte E., Schumacher U., Ross L.M., Lamperti E.D.: Thieme Atlas of Anatomy (2006), ISBN-10: 3131420812
5. http://www.rush.edu/rumc/images/ei_0244.gif
6. Platzer W.: Color Atlas of Human Anatomy, vol. 1: Locomotor System. 5th revised and enlarged English edition. Stuttgart, New York: Thieme; 2004. ISBN 3-13-533305-1
7. http://www.totaljoints.info/cemented_and_cementless_thr.htm#0
8. http://www.ortho-cad.com/b/Content/Technology_2_2.html
9. Michalíková M.: Riešenia tribologických vlastností totálnych náhrad bedrového kĺbu (Solutions of the tribological properties of total hip replacements). Dissertation. Košice: Technical University, Faculty of Mechanical Engineering, 2009
Author: Monika Michalíková
Institute: Technical University of Košice, Faculty of Mechanical Engineering, Department of Biomedical Engineering, Automation and Measurement
Street: Letná 9
City: Košice
Country: Slovakia
Email: [email protected]
Entropy: a way to quantify complexity in calcium dynamics

A. Fanelli¹, F. Esposti², J. Ion Titapiccolo¹, M.G. Signorini¹

¹ Politecnico di Milano, Dipartimento di Bioingegneria, Milano, Italy
² Medical Research Council Laboratory of Molecular Biology, Cambridge, United Kingdom
Abstract— Ca2+ is an intracellular signal that can regulate many cellular functions. It is at the basis of the communication of different cellular populations, such as astrocytes. Astrocytes are cells of the brain endowed with supportive functions towards neurons; they also regulate and control neuronal activity. In this paper, we employed Shannon entropy to quantify the complexity of calcium dynamics in astrocyte cultures. We exploited astrocyte fluorescence recordings, reporting calcium activity before and after Ionomycin stimulation. The use of an original algorithm for the construction of Entropy Maps allowed us to infer the non-linear characteristics of the calcium dynamics. The implemented method highlights a different level of complexity in the behavior of the nucleus compared to the surrounding compartments. Moreover, the wave spreading modifies the unpredictability of calcium activity in the culture.

Keywords— Astrocytes, Entropy, Calcium, Fluorescence.

I. INTRODUCTION
Ca2+ signaling is used throughout the life history of an organism. Life begins with a surge of Ca2+ at fertilization, and this versatile system is then used repeatedly to control many processes during development and in adult life. Calcium signaling is at the basis of communication in many cellular populations [1]. It is of great importance mainly in brain functioning. Our research activity concentrated on the study of calcium dynamics in astrocyte cultures. Astrocytes are the prominent cellular population in the brain: they outnumber neurons by ten to one and occupy about one-third of the volume of the cerebral cortex [2]. The traditional concept of astrocytes has been one of a phenotype intended to serve the neurons, to regulate and optimize the environment within which neurons function [3]. This viewpoint is gradually changing as a result of a steadily increasing interest in the study of the biology and pathology of astrocytes. Over the past 25 years it has become clear that astrocytes are responsible for a wide variety of complex and essential functions in the healthy CNS, including primary roles in synaptic transmission and information processing by neuronal circuits [4]. By employing classical electro-physiological methods, it was possible to demonstrate that astrocytes are electrically non-excitable and respond to current injection with only passive changes in membrane potential [5]. Nonetheless, several studies provided strong evidence that astrocytes are able to sense neurotransmitters released by the synaptic terminals of neurons; they can respond with calcium elevations to regulate the release of neuroactive molecules that can diffuse back to the synapse to bind neuronal synaptic receptors [6]. Thus, the propagation of information in astrocyte cultures is mediated by a localized increase in Ca2+ that is followed by a succession of similar events in a wave-like fashion ("calcium wave"). The study of calcium dynamics in astrocyte cultures is arousing great interest because those cellular bodies were found responsible for certain pathologies; for example, they could have a central role in the genesis of epilepsy [7], and they are a major contributor to age-related neurodegenerative pathology [8]. In this paper we concentrated on the quantitation of complexity in astrocyte calcium dynamics, by exploiting entropy indexes. In the literature there are many examples of the employment of entropy to quantify the information carried by neuron spike trains [9], as well as an instrument for neural network classification in place of the mean square error [10]. We analyzed fluorescence recordings reporting calcium activity before and after Ionomycin stimulation. The new facet introduced in this paper is the implementation of an algorithm applied to 3-dimensional recordings in order to obtain 2-dimensional maps (Entropy Maps). The algorithm allows us to highlight significant biological differences. In this way it is possible to define the amount of information and, thus, the order of complexity of the region of the culture under analysis.

II. MATERIALS AND METHODS
A. Cell culture

Nearly 60 calcium imaging movies from pure (or semi-pure) in vitro cultures of primary astrocytes from P2 rat hippocampus were recorded at the Università Vita e Salute San Raffaele of Milan. Primary cultures of P2 rat hippocampal astrocytes were prepared as reported by others [11], with minimal modifications. Cultures were incubated for 15 minutes at 37°C with 10 μM Fluo-4 AM dissolved in Tyrode solution (Tyrode: NaCl 119 mM, KCl 5 mM, Hepes 25 mM, CaCl2 2 mM, MgCl2 2 mM and Glucose 6 g/liter); 0.01% Pluronic acid F-127 was added to increase dye permeability. After incubation, cells were washed out for 5 minutes with Tyrode solution. During recordings cells were maintained in Tyrode in 100% O2 at 24°C. During stimulation, a single
astrocyte was excited by the administration of Ionomycin 1 μM. Ionomycin increases the permeability of the plasma membrane and other intracellular organelles, including the ER and mitochondria. Recordings were performed with a Zeiss LSM 510 confocal microscope, employing a Zeiss FITC narrow-band laser with central excitation band 488 nm (BP 505-530 nm), nominal current 3.1 A, used at 25% of its nominal power. Acquisitions were performed using a Zeiss objective, NA 0.8, in water. We acquired recordings with 3 different fields of view: (i) a square of 1302.7 μm side, scanned by the laser light (pinhole 280 μm) in 1.58 s, obtaining a 512 × 512 output matrix at 12 bit, total acquisition time of 360 s, for a grand total of 240 frames; (ii) a square of 325 μm side, scanned in 1.58 s (512 × 512 matrix), total acquisition time of 316 s (200 frames); (iii) a square of 37 μm side, with a temporal resolution of 0.5 s, total acquisition time of 120 s, for a grand total of 240 frames. Images were analyzed and processed with the ImageJ® and Matlab® software packages.

B. Method

The acquired confocal recordings were submitted to a first preprocessing stage. As a first step, the cell displacement caused by the Ionomycin stimulation was corrected using an automatic registration algorithm [12]. Subsequently we compensated the phenomenon of photobleaching through the multiplication of the average luminescence of the stack by a rising exponential with the same τ as the fluorescence decay. Finally, the luminescence level was normalized (computation of ΔF/F0) for each frame of the recordings.

Entropy was used to measure the predictability level of the calcium dynamics. Let X be a discrete random variable with alphabet 𝒳 and probability mass function p(x) = Pr{X = x}, x ∈ 𝒳. The Shannon entropy H(X) of a discrete random variable X is defined by [13]:

$$ H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x) $$

The algorithm we developed treats each movie as a 3-D matrix. The first two dimensions correspond to the width and height of each frame of the stack; the last dimension is time. The time evolution of each pixel's intensity value can thus be considered a temporal series. The Entropy Map is constructed by attributing to each pixel of the map the Shannon entropy of the corresponding temporal series. This method yields maps which describe the amount of information, and hence the level of complexity, of the different regions of the cell/culture recorded by the microscopy system. Moreover, the algorithm can be applied to recordings with different dimensions of the field of view.

III. RESULTS

Entropy Map computation was applied to 60 recordings reporting calcium activity before and after Ionomycin stimulation. Fig. 1 shows an example of the maps obtained when the algorithm is applied to small-field recordings (37 μm × 37 μm, 240 frames) reporting calcium activity in basal conditions (without stimulation). A jet colormap, ranging from 0 (blue) to 8 (red), was applied to improve visualization. The analysis of the Entropy Maps highlighted a consistent behavior in the calcium dynamics. Indeed, as Fig. 1 exemplifies, the analysis of the same recording shows that the entropy value of the nucleus is, on average, always smaller than the entropy of the cytosol (ΔEntropy = 2 ± 0.5). Nonetheless, there are great differences in the absolute entropy values when several recordings are compared. Fig. 2 shows the entropy maps obtained when the algorithm is applied to a large-field recording (1302.7 μm × 1302.7 μm) before (Fig. 2, left) and after (Fig. 2, right) stimulation. It is possible to notice that the average entropy value decreases after the stimulus. The same decrease in the average entropy value of the network was always observed in passing from a pre-stimulus to a post-stimulus condition.

Fig. 1 Comparison between a filtered projection of a fluorescence recording (field of view 37 μm × 37 μm, 240 frames, 512 × 512 pixels, 120 s) and the corresponding entropy map. The same morphological details are present in the two images. The entropy value in the nucleus is always smaller than the entropy in the cytoplasm; in the entropy map on the right a perinuclear region (yellow/green) with an entropy value clearly distinct from the surrounding regions is recognizable. A colormap ranging from 0 (blue) to 8 (red) was applied to improve visualization.

IV. DISCUSSION

Using the Entropy Maps we were able to define the complexity level for every pixel of the movie.
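A minimal sketch of the Entropy Map construction. The paper does not state the histogram binning or the logarithm base; 256 intensity bins and log base 2 are our assumptions (with ~240 frames they reproduce the 0-8 range of the color scale), and the function name is ours:

```python
import numpy as np

def entropy_map(stack, n_bins=256):
    """Shannon entropy of each pixel's intensity time series.
    stack: (T, H, W) array of preprocessed (dF/F0) fluorescence."""
    T, H, W = stack.shape
    emap = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            counts, _ = np.histogram(stack[:, i, j], bins=n_bins)
            p = counts[counts > 0] / T
            emap[i, j] = -(p * np.log2(p)).sum()
    return emap

stack = np.random.default_rng(1).random((240, 64, 64))  # stand-in recording
emap = entropy_map(stack)   # display with a jet colormap scaled 0 (blue) to 8 (red)
```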
Fig. 2 Entropy maps of the same culture (1302.7 μm × 1302.7 μm, 512 × 512 pixels, 240 frames, 360 s) before (left) and after (right) the application of Ionomycin stimulation. After the stimulus the average entropy value of the network decreases. A jet colormap ranging from 0 (blue) to 8 (red) was applied to improve visualization.
The first interesting result is the similarity in shape and morphological detail between the entropy map and the corresponding original fluorescence recording. In the algorithm, each pixel's temporal track is treated independently of any other. Thus, after the computation of a mathematical quantity such as entropy, it would be plausible to expect a resulting map completely lacking correlation with the cell morphology. Nonetheless, the entropy map shows the same cellular compartments one can recognize in the fluorescence recording. Sometimes the map provides further details that are not visible in the recorded movie, even after the application of custom procedures of signal-to-noise enhancement. This result is a demonstration of a tight connection between function and structure. The complexity of the Ca2+ dynamics seems to be associated with the cellular compartment which acts as the generator of that behavior. This concept is far from revolutionary: the relationship between function and structure has been found in many fields of biology, nature and science. More interesting is the possibility, given by the use of entropy, of localizing regions whose behavior is distinct from that of other cell compartments but which are not visible simply by looking at the fluorescence level. In particular, we were able to spot a perinuclear region (clearly visible in Fig. 1, right) which distinguishes itself mainly by its calcium dynamics rather than by its [Ca2+]. Other mathematical descriptors (such as cross-correlation) also allowed us to detect it. The properties of this region are currently under analysis by our research group. As Fig. 1 shows, the entropy maps are characterized by a consistent behavior in the calcium dynamics: in all the recordings we analyzed, the entropy is higher in the cytoplasm than in the nucleus. The very definition of entropy as a "measure of the uncertainty of a random variable" leads to a unique conclusion: the nuclear behavior is more predictable than the cytoplasmic behavior. The smaller complexity may be due to the presence, in the nucleus, of a smaller number of simultaneous components acting as regulators of the calcium homeostasis. Indeed, it is well known that calcium behavior in the cytoplasm is controlled by the presence of many pumps and receptors. The inositol-1,4,5-trisphosphate receptor (InsP3R) and the ryanodine receptor (RyR) [14] are responsible for feeding calcium into the cytoplasm, together with many other messengers which act on separate, as yet uncharacterized, channels. On the other side, the PMCA and SERCA pumps are responsible for returning Ca2+ to the external milieu or to the internal stores, respectively. Moreover, mitochondria sequester Ca2+ rapidly during the development of a Ca2+ signal and then release it back slowly during the recovery phase, shaping both its amplitude and its spatiotemporal patterns [15]. On the contrary, little is known about the mechanisms acting as controllers of calcium dynamics in the nucleus. Thanks to recent discoveries [16], it seems that the inner nuclear envelope membrane expresses both types of intracellular Ca2+ release channels (InsP3Rs and RyRs), but there are no clues about the presence of other systems of control. The result we have shown is a confirmation of the lower complexity of the nucleus's calcium dynamics. Maybe it is the perinuclear zone which filters the calcium waves reaching the nucleus, removing the noise and selecting the information which has to be processed by the nucleus itself. Finally, entropy was exploited to evaluate the change in the complexity level of the whole culture after the passage of a wave. To proceed with this analysis, we concentrated on the study of large-field recordings (1302.7 μm × 1302.7 μm), which allow a larger number of cells to be taken into consideration, instead of a unique cellular body. In a previous work [17], we found that the application of a Ionomycin stimulus causes an increase in the correlation level of the cells, acting as a biological trigger. Fig. 2 shows the entropy maps before and after the stimulation. As can be observed, the wave spreading causes a decrease in the average entropy level of the culture. Thus the passage of the calcium wave provokes not only a synchronization of the
cells in the culture, but it also makes the calcium dynamics more predictable. From an energetic point of view, the astrocytes pass from an energy-saving condition (it may be described as a "stand-by" condition) to a more expensive state. Indeed, they pass from a condition of disorder (lack of synchronization) to a more ordered state (increase in cross-correlation), which implies a decrease in entropy. Thus, by the second law of thermodynamics, the entropy decrease is the consequence of the energy consumption necessary to guarantee the synchronization among the cells. The culture is a thermodynamic system: the decrease of disorder is possible only at the price of an energy expenditure.
V. CONCLUSIONS

In the present paper we addressed the study of non-linearities in calcium dynamics, taking into consideration both large- and small-field-of-view recordings. In particular, we focused on the quantitation of complexity in calcium behavior, using entropy as a measure of complexity. For this purpose, we developed an algorithm to evaluate the entropy level in astrocyte fluorescence recordings. Entropy allowed us to localize regions of the cell that differ from the other compartments in the dynamics of their calcium activity, and not in the [Ca2+] itself. This approach led to the detection of a perinuclear zone with an atypical behavior; other mathematical methods (i.e. cross-correlation) confirmed its existence. We are currently carrying out a deeper investigation of the characteristics of this thin layer (3-6 μm), which surrounds the nucleus like a collar. The analysis of intracellular recordings highlighted a consistent behavior in calcium activity, with a smaller entropy value for the nucleus compared to the surrounding regions. This is consistent with the smaller number of control systems acting simultaneously in the regulation of the calcium activity of the nucleus. Finally, entropy was exploited to quantify the variation in complexity after the wave passage. As the results have shown, the average entropy level decreases. This phenomenon could have an energetic explanation: the culture passes from a cheap energetic condition to a more expensive condition, as the decrease of entropy suggests. In conclusion, entropy proved to be an efficient approach to infer important characteristics of calcium dynamics. It is applicable to different recordings, even under different experimental conditions (field of view, excitability level). Entropy maps were used to study the complexity and non-linearity of astrocyte behavior; they proved to be an innovative and simple approach to add new understanding of this fundamental cellular population of the brain.

ACKNOWLEDGMENT

We greatly thank A. Malgaroli, from the Università Vita e Salute San Raffaele, Milano, for the coordination of the experimental activity. Cultures and recordings were kindly performed by F. Esposti and M. Ripamonti.

BIBLIOGRAPHY

[1] K.P. Lu, et al., "Regulation of cell cycle by calcium and calmodulin," Endocrine Rev., vol. 14, pp. 40-58, 1993.
[2] D. Micheal, et al., "Astrocytes Responses to CNS Injury," Jour. of Neuropat. and Experim. Neurol., vol. 53, no. 3, pp. 213-220, 1994.
[3] M. Nedergaard, et al., "New Roles for Astrocytes: Redefining the Functional Architecture of the Brain," Trends in Neuroscience, vol. 26, no. 10, pp. 523-530, 2003.
[4] M.V. Sofroniew and H.V. Vinters, "Astrocytes: biology and pathology," Acta Neuropathol, vol. 119, pp. 7-35, 2010.
[5] J. Kang, "Astrocyte-Mediated Potentiation of Inhibitory Synaptic Transmission," Nat. Neurosci., vol. 1, pp. 683-692, 1998.
[6] A.H. Cornell-Bell, et al., "Glutamate induces calcium waves in cultured astrocytes," Science, vol. 247, pp. 470-473, 1990.
[7] G.F. Tian, et al., "An Astrocytic Basis of Epilepsy," Nature Medicine, vol. 11, no. 9, pp. 973-981, 2005.
[8] J.E. Simpson, et al., "Astrocyte Phenotype in Relation to Alzheimer-Type Pathology in the Ageing Brain," Neurobiology of Aging, 2008.
[9] I. Nemenman, et al., "Entropy and Information in Neural Spike Trains: Progress on the Sampling Problem," Phys. Rev. E, vol. 69, no. 5, pp. 1-7, 2004.
[10] L.M. Silva, J.M. de Sà, and L.A. Alexandre, "Neuronal Network Classification using Shannon's Entropy," 2005.
[11] K.D. McCarthy and L.M. Partlow, "Preparation of Pure Neuronal and non-Neuronal Cultures from Embryonic Chick Sympathetic Ganglia: a New Method Based on Both Differential Cell Adhesiveness and the Formation of Homotypic," vol. 114, pp. 391-414, 1976.
[12] P. Thèvenaz, et al., "A Pyramid Approach to Subpixel Registration Based on Intensity," IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 27-41, 1998.
[13] C.E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, no. 27, pp. 379-423, 1948.
[14] M.J. Berridge, "Inositol Trisphosphate and Calcium Signaling," Nature, no. 361, pp. 315-325, 1993.
[15] S.L. Budd and D.G. Nicholls, "A Reevaluation of the Role of Mitochondria in Neuronal Ca2+ Homeostasis," J. Neurochem., no. 66, pp. 403-411, 1996.
[16] P. Lipp, et al., "Nuclear calcium signalling by individual cytoplasmic calcium puffs," The EMBO Journal, vol. 16, no. 23, pp. 7166-7173, 1997.
[17] A. Fanelli, et al., "Temporal and spatial analysis of astrocyte calcium waves," Conf. Proc. IEEE Eng. Med. Biol. Soc., no. 1, pp. 6038-6041, 2009.
A new fluorescence image-processing method to visualize Ca2+-release and uptake Endoplasmic Reticulum microdomains in cultured glia

J. Ion Titapiccolo¹, F. Esposti², A. Fanelli¹, and M.G. Signorini¹

¹ Biomedical Engineering Department, Politecnico di Milano, Milan, Italy
² Medical Research Council Laboratory of Molecular Biology, Cambridge, United Kingdom
Abstract— The astrocyte perinuclear zone is an intracellular region of great interest for its role in the regulation of calcium intracellular dynamics. Indeed, IP3R and SERCA molecules, located on the membrane of the Endoplasmic Reticulum (ER) in the perinuclear region, show a clustering, self-organizing capacity that influences intracellular calcium signaling. IP3Rs and SERCA pumps have, respectively, the roles of releasing calcium ions into the cytoplasm and of segregating them into the ER. In this work, astrocyte fluorescence recordings were acquired with a confocal microscope and calcium activity was stimulated by the use of ionomycin. A fluorescence image-processing method to study the organization, localization and dimension of these molecular clusters was developed. The method has been called DMSD, and it leads to the visualization of a false-color map where the preferential behavior of each pixel time series is represented. In this way Ca2+-release and uptake ER membrane microdomains can be clearly identified.

Keywords— Astrocytes, fluorescence, calcium dynamics, IP3 receptors, SERCA pumps.

I. INTRODUCTION
Glial cells are the major cellular component of the human brain, and among them astrocytic cells are the most interesting from a numerical and functional point of view. As non-excitable cells, astrocytes possess the capability to signal via calcium diffusion over long distances, in a way similar to the propagation of action potentials in neurons [1]. In basal conditions, the free calcium concentration in the cytoplasm is very small (circa 10⁻⁷ M) because of the toxicity of free calcium for cells [2]. The calcium activity consists of a rapid release of calcium from internal calcium stores, e.g. the Endoplasmic Reticulum (ER), where the Ca2+ concentration is circa 10⁻³ M [3]. The calcium release usually occurs in sequential calcium spikes, in order to create a desired cytosolic concentration as the spatio-temporal average of subsequent calcium spikes [4]. A calcium wave is a localized increase in cytosolic Ca2+ that is followed by a succession of similar events in a wave-like fashion. Ca2+ waves can be restricted to one cell (intracellular) or transmitted to neighboring cells (intercellular) [5]. The basic steps that lead to intracellular Ca2+ waves in astrocytes usually involve the activation of G-protein-
coupled receptors followed by production of IP3. IP3 activates IP3 receptors (IP3Rs) located on the ER membrane. The activation of these receptors leads to Ca2+ release from the ER ([6],[7]). Intracellular Ca2+ signals are space-time complex events involving the recruitment of elementary Ca2+ release sites called Ca2+ puffs [8], which then propagate from the periphery to the soma throughout the cell by an amplification mechanism. Calcium release typically occurs via IP3Rs and Ca2+ resequestering into intracellular stores occurs via the Sarcoplasmic/Endoplasmic Reticulum Ca2+-ATPases (SERCAs) [9]. It has been observed that IP3Rs assembly into clusters (calcium sources) and form a porous-like structure on the ER membrane through which calcium ions diffuse into the cytoplasm when the receptors are activated [10]. In a similar way, SERCA pumps form aggregates on the ER membrane (calcium drains) [11]. The result of this molecular organization is an ER membrane dynamic structure which leads to the complexity and versatility of calcium intracellular phenomena. As IP3Rs and SERCAs move and differently aggregate in the ER membrane, calcium intracellular signal propagating towards the nucleus, are differently amplified or filtered [12]. Indeed calcium induced mechanisms of ER membrane self reorganizing have been greatly observed [13],[14]. The perinuclear zone, where the ER is located, results of a great interest to study these mechanisms. This paper focuses the attention on a new fluorescence image processing method to study the organization of IP 3R and SERCA aggregates on the ER membrane during the propagation of an evocated calcium activity in cultured astrocytes. II.
MATERIALS AND METHODS
A. Materials Calcium imaging movies from pure (or semi-pure) in vitro cultures of primary astrocytes from P2 rat hippocampus were recorded at the Università Vita e Salute San Raffaele of Milan, Neurophysiologic Lab. Primary cultures of P2 rat hippocampal astrocytes were prepared as reported by others [15], with minimal modifications. The imaging was exploited through calcium fluoroscopy, by employing the
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 347–350, 2010. www.springerlink.com
348
J.I. Titapiccolo et al.
Fluo-4 AM fluorescent dye. Fluo-4 presents excitation and emission peaks at 494nm and 516nm, respectively, where the fluorescence of Ca2+ bound Fluo-4 is almost 100 times that of the Ca2+-free form. Such characteristics make this dye particularly sensitive to changes in [Ca2+]. Cultures were incubated for 15 minutes at 37°C with 10μM Fluo-4 AM dissolved in Tyrode solution (Tyrode: NaCl 119mM, KCl 5mM, Epes 25 mM, CaCl2 2 mM, MgCl2 2mM and Glucose 6gr./liter), 0,0l% Pluronic acid f127 was added to increase dye permeability. After incubation cells were washed out for 5 minutes with Tyrode solution. During recordings cells were maintained in Tyrode in 100% O2, 24°C. Cells were stimulated administrating ionomycin to the culture. Ionomycin is a ionophore very used in research to raise the intracellular level of calcium [Ca2+] and as a research tool to understand Ca2+ transport across biological membranes. Ionomycin was administrated to a single cell of the culture and then the calcium intracellular activity was observed in a neighboring cell. The ionomycin stimulation was executed directly on the cell membrane through a micropipette. Recordings were performed with a Zeiss LSM 510 confocal microscope and by employing a Zeiss FITC Narrow Band Laser with central excitation band 488nm (BP 505530nm), nominal current 3.1A, used at the 25% of its nominal power. Acquisitions were performed by using Zeiss Objective, NA 0.8 in Water. The field of view of each recording was a square of 200μm side, scanned in 0.25 sec, 1024x1024 matrix, total acquisition time of 30 seconds, for a grand total of 120 frames. Images were analyzed and processed with ImageJ® and Matlab® softwares. B. Data Preprocessing The acquired fluorescence confocal recordings required a first stage of preprocessing. As first step, we corrected the cell displacement caused by ionomycin stimulation. By an algorithm of automatic registration computed with the use of the ImageJ® software. Moreover the phenomenon of photobleaching had to be compensated. This was done through the multiplication of the average luminescence of the stacks, by a rising exponential with the same τ of fluorescence decay. Finally the luminescence level was normalized (computation of ∆F/F0) for each frame of the recordings [16]. In the method we propose in this paper, each movie was considered as a 3-dimensional matrix. The first two dimensions correspond to the width and the height of each frame of the stack, the last dimension is time. So the time evolution of a pixel intensity can be considered as a time series.
C. Methods The “Dynamical evaluation of calcium variations and fluxes”, introduced by Laskey [17], allows to represent in a very handy way the variations in calcium concentrations in the spatial and temporal coordinates and to evaluate calcium fluxes that take place in the cell with an easy mathematical approach. This method allows to analyze the temporal evolution of calcium concentration levels in the cells points belonging to a rectilinear axis. As innovation, our study introduces the possibility to analyze the dynamical evolutions of calcium intracellular concentrations in two spatial dimensions instead of only one as in the Laskey method. To do that, first of all, the area of interest was identified in the analyzed recordings. In particular the perinuclear zone was selected obtaining a new video of reduced spatial dimensions. Then, a mean spatial filter is applied to this image sequence in order to reduce the noise that always affect fluorescence recordings. Then a new Matlab® algorithm for image processing was applied to the obtained video, in order to calculate a discriminant image in which the particular behavior of calcium sources and calcium drains temporal series is clearly visible. This image has been called Discriminant Matrix Source Drain (DMSD). To obtain this image a sigmoid-like transformation was applied to all the temporal series of the video. This means that the value of each pixel in the resulting image corresponds to the minimum or maximum value of its temporal series. The choice of the maximum or minimum value is done according to the corresponding value in the image of the average activity, as illustrated in Fig. 1. If this value shows an increase in the calcium concentration average activity comparing to the basal condition, then the maximum value of the temporal series is set in the final matrix. On the contrary if the average value shows a decrease in calcium concentration, the minimum value is set. The obtained DMSD image clearly shows the presence of points with the role of calcium drains and the presence of points with the role of calcium cytoplasmic sources in the perinuclear zone. Fig.1 shows the flux diagram of the Matlab algorithm implemented to obtain the DMSD image. The resulting image is a gray-levels false-color representation of the calcium concentration normalized F/F0 values. The utilized gray levels convention associates dark colors to the values indicating calcium concentration decrease (F/F0<1) and bright colors to those indicating an increase in calcium concentration compared to the basal condition (F/F0>1). So the maximum normalized concentration value is represented as a white pixel and the minimum value as a black pixel.
IFMBE Proceedings Vol. 29
A New Fluorescence Image-Processing Method to Visualize Ca2± Release and Uptake Endoplasmatic Reticulum Microdomains
349
Fig. 3 DMSD image of the fluorescence recording ROI squared in Fig 1. In the colorbar numerical F/F0 values are reported.
Fig. 1 Flux Diagram of the DMSD Matlab® algorithm. III.
RESULTS
The algorithm that computes the DMSD maps was applied to the perinuclear regions extracted from all the experimental fluorescence recordings. In Fig. 2 a white square is drawn to point out the region of interest to which the DMSD algorithm was applied: it includes the astrocyte perinuclear and nuclear zone. In the drawn square, ER, and cellular nucleus are clearly identifiable as respectively the brighter area and the darker zone in it.
It is important to underline that in Fig. 3 F/F0 values are reported as normalized calcium concentration values. In the ER zone an alternation of white and black spots is clearly identifiable. The DMSD images were used also to quantify the size of calcium sources and drains, applying a threshold image segmentation method. A higher intensity level threshold was fixed at 1.6 F/F0 to identify source size; the lower threshold was set at 0.75 F/F0 to identify drain size. In this way it was confirmed that IP3Rs channels and SERCA pumps form aggregates of about 2 μm and 1 μm diameter, respectively. These results confirm that IP3Rs and SERCA pumps work as microdomains on the ER membrane and these microdomains are built up of clusterized molecules. The dimensions here found are comparable with those presented in previous studies [10], [12], [18]. IV.
Fig. 2 Fluorescence image of an astrocyte: first frame of the selected recording. The white drawn square is the region of interest.
The result of the application of the DMSD algorithm to the squared area is reported in Fig. 3.
DISCUSSION
The DMSD method is a contrast improvement method that operates a sigmoid-like transformation of the temporal series values in fluorescence recordings. This means that the behavior of every single time series is discriminated on the basis of its mean value. The method permits to bring out calcium up-take and release microdomains on the ER membrane: calcium drains or sources. In Fig. 3 it can be observed the presence of points having the role of calcium drains; their location can be clearly identified as the black seeds in the perinuclear zone. Furthermore, ER calcium sources can be identified as points whose represented value is higher than a certain intensity level, thus they correspond to the white spots in the perinuclear zone. In the ER cellular compartment it can be clearly noted a thick alternation of calcium drains and sources activated as a response to the calcium cellular activity stimulation. On
IFMBE Proceedings Vol. 29
350
J.I. Titapiccolo et al.
the contrary in the cytoplasm it cannot be noted a preferential behavior of the pixel temporal series. Actually no molecular structures able to store and release calcium are present in this cellular compartment. At the same time the nucleus compartment shows a behavior similar to cytoplasm. So the particular behavior of the perinuclear zone is pointed out by the application of the DMSD method to fluorescent recordings of cultured astrocytes. This region is supposed to act as an amplifier and as a calcium signal repeater or filter through the activation and reorganization of calcium drains and sources structure. Moreover by applying the DMSD method to recordings of a singular cell in different stimulations it has been shown the dynamicity of the ER membrane and of the IP 3Rs clusters activation. Further custom studies on astrocyte cultures will help to deeply investigate the role and behavior of the ER membrane molecular structures and their effect on astrocytes cellular activity. We are now facing further analysis in order to increase knowledge on perinuclear region properties.
REFERENCES [1] T.A. Fiacco and K.D. McCarthy, "Astrocyte Calcium Elevations: Properties, Propagation, and Effects on Brain Signaling," Glia, vol. 54, p. 676–690, 2006. [2] E. Carafoli and C.B. Klee, Calcium as a Cellular Regulator. USA: Oxford University Press, 1999. [3] R. Rizzuto and T. Pozzan, "Microdomains of Intracellular Ca2+: Molecular Determinants and Functional Consequences," Physiol. Rev., vol. 86, pp. 369-408, 2006. [4] M.J. Berridge, "Calcium microdomains: Organization and function," Cell Calcium, vol. 40, pp. 405-412, 2006. [5] E. Scemes and C. Giaume, "Astrocyte Calcium Waves: What They Are and What They Do," Glia. 2006 November 15; 54(7): 716– 725, vol. 54(7), pp. 716-725, 2006. [6] V.A. Golovina and M.P. Blaustein, "Unloading and refilling of two classes of spatially resolved endoplasmic reticulum Ca2+ stores in astrocytes," Glia, vol. 31, pp. 15-28, 2000. [7] E. Scemes, "Components of Astrocytic Intracellular Calcium Signaling," Mol. Neurobiol., vol. 22, pp. 167-179, 2000. [8] I. Parker and Y. Yao, "Regenerative release of calcium from functionally discrete subcellular stores by inositol triphosphate," Proc. R. Soc. Lond., vol. 246, pp. 269-274, 1994. [9] M.J. Berridge, "Inositol trisphosphate and calcium signalling.," Nature (London), vol. 361, pp. 315-325, 1993.
V. CONCLUSIONS
In this paper a new fluorescence image processing method is presented. It is a method called DMSD that operates a temporal series sigmoid-like transformation based on the average value of the pixels temporal series of a video. In this way the preferential behavior of every temporal series is represented in the final image. The application of the method to astrocytes fluorescence recordings shows the presence of a particular zone in the perinuclear region. Indeed in this region the ER with all its calcium release (calcium sources) and buffering (calcium drains) molecular structures is located. The dimension of the identified molecular clusters are comparable to those presented in literature by other studies. The method proposes a simple way to study the location, dimensions and behavior of astrocytes calcium release as well as accumulation points in the perinuclear region. Further biological evaluation of the results obtained with the DMSD method will be assessed in future studies.
ACKNOWLEDGMENT We greatly thank A. Malgaroli, from the Università Vita e Salute San Raffaele, Milano, for the coordination of experimental activity and for the interesting discussions on this subject. Cultures and recordings were kindly achieved by M.Ripamonti and F.Esposti.
[10] C.W. Taylor, T.U. Rahman, and E. Pantazaka, "Targeting and clustering of IP3 receptors: Key determinants of spatially organized Ca2+ signals," Chaos, vol. 19, pp. 037102/1-10, 2009. [11] M. Zhao, I.V. Negrashov, R. Bennett, D.D. Thomas, B. Mueller, "SERCA Structural Dynamics Induced by ATP and Calcium ," Biochemistry, vol. 43, pp. 12846-12854, 2004. [12] L. Diambra and S. Marchant, "Localization and socialization: Experimental insights into the functional architecture of IP3 receptors," Chaos, vol. 19, pp. 037103/1-8, 2009. [13] J. G. Goetz et al., "Reversible interactions between smooth domains of the endoplasmic reticulum and mitochondria are regulated by physiological cytosolic Ca2+ levels," Jour. of Cell Sc., vol. 120, pp. 3553-3564, 2007. [14] K. Subramanian and T. Meyer, "Calcium-induced restructuring of nuclear envelope and endoplasmic reticulum calcium stores," Cell, vol. 89, pp. 963-971, 1997. [15] K.D. McCarthy and L.M. Partlow, "Preparation of Pure Neuronal and non-Neuronal Cultures from Embryonic Chick Sympathetic Ganglia: a New Method Based on Both Differential Cell Adheiveness and the Formation of Homotypic Neuronal Aggregates," Brain Res., vol. 114, pp. 391-414, 1976. [16] A. Takahashi, P. Camacho, J.D. Lechleiter, and B. Herman, "Measurment of intracellular calcium," Physiol. Review, vol. 79, pp. 1089-1125, 1999. [17] A.D. Laskey, B.J. Roth, P.B. Simpson, and J.T. Russel, "Images of Ca2+ Flux in Astrocytes: Evidence for Spatially Distinct Sites of Ca2+ Release and Uptake," Cell Calcium, vol. 23, pp. 423-432, 1998. [18] H. Platter et al., "Microdomain arrangement of the SERCA-type Ca2+ pump (Ca2+ ATPase) in subplasmalemmal calcium stores of paramecium cells," The Jour. of Histoch. & Cytoch., vol. 47(7), pp. 841-853, 1999.
IFMBE Proceedings Vol. 29
Experimental Measurement of Modulation Transfer Function (MTF) in Five Commercial CT Scanners S.M. Akbari1,2, M.R. Ay2,3,4, A.R. Kamali asl1, H. Ghadiri2,5, and H. Zaidi6,7 1
Faculty of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran, Iran 3 Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran 4 Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran 5 Department of Medical Physics, Iran University of Medical Sciences, Tehran, Iran 6 Division of Nuclear Medicine, Geneva University Hospital, Geneva, Switzerland 7 Geneva Neuroscience Center, Geneva University, Geneva, Switzerland 2
Abstract— The modulation transfer function (MTF) is the technical description of spatial resolution for most imaging systems. the MTF describes how well an imaging system processes signal [1,4]. There are several methods for calculation of MTF that can be categorized in theoretical and experimental methods. In this study, in order to compare the performance of five different commercial CT scanner, an experimental method was used to calculate MTF in all scanners. The MTF curves were calculated for both axial and helical scanning mode and also for different slice thickness. The method for experimental measurement of MTF which called SD method is based on calculation of the standard deviation (SD) of CT numbers in different regions of scanned phantom and determination of coefficient modulation. The results calculated in this study using simple experimental measurements were in good agreement with published technical specification by manufacturers [5]. Keywords— Computed Tomography, Spatial Frequency, MTF, SD method.
I. INTRODUCTION The modulation transfer function (MTF) is the technical description of spatial resolution for most imaging systems. Generally, the MTF describes how well an imaging system processes signal. There are several methods for calculation of MTF that can be categorized in theoretical and experimental methods. In theoretical methods the MTF of the imaging system is calculated using the Fourier transform of the line spread function (LSF) or point spread function (PSF), while in experimental methods various resolution phantoms can be used for calculation of MTF [2,3]. In this study, in order to compare the performance of five different commercial CT scanner, an experimental method was used to calculate MTF in all scanners. In all experimental measurements the GE performance phantom which is
water filled cylindrical phantom with Plexiglas envelope including slits with different spatial frequency was used, it should be noted that the resolution part of phantom was made with Plexiglas as well. The phantom includes slits with spatial frequencies of 0.75, 1, 1.5, 2, 2.5, 3, 3.5, 4.5, 6, 7, 9 lp/cm and each group includes 5 slits. The phantom was scanned in five different commercial CT scanner made by GE Healthcare Technologies company using a tube voltage of 120 kVp and 400 mAs with 5 mm and other slice thickness. The MTF curves were calculated for both axial and helical scanning mode and also for different slice thickness. The method for experimental measurement of MTF which called SD method is based on calculation of the standard deviation (SD) of CT numbers in different regions of scanned phantom and determination of coefficient modulation. The results calculated in this study using simple experimental measurements were in good agreement with published technical specification by manufacturers. The dependency of MTF to scanning mode and slice thickness were not high and we could not see considerable differences between them. The SD experimental method for calculation of MTF validated in this study, our group plan to use this method for performance comparison of wider range of commercial CT scanners design by different manufacturers [5].
II. MATERIAL AND METHODS In this study, in order to compare the performance of five different commercial CT scanner, an experimental method was used to calculate MTF in all scanners. In all experimental measurements a water filled cylindrical phantom with Plexiglas envelope including slits with different spatial
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 351–354, 2010. www.springerlink.com
352
S.M. Akbari et al.
frequency was used, it should be noted that the resolution part of phantom was made with Plexiglas as well, The phantom in this study called GE performance phantom. The phantom includes slits with spatial frequencies of 0.75, 1, 1.5, 2, 2.5, 3, 3.5, 4.5, 6, 7, 9 lp/cm and each group includes 5 slits (Figure 1.a). The phantom was scanned in five different commercial CT scanner made by GE Healthcare Technologies company listed in table 1 using a tube voltage of 120 kVp and 400 mAs with 5 mm and other slice thickness. The MTF curves were calculated for both axial and helical scanning mode and also for different slice thickness. The method for experimental measurement of MTF which called SD method is based on calculation of the standard deviation (SD) of CT numbers in different regions of scanned phantom and determination of coefficient modulation. Three region of interest (ROI) were selected for each specific spatial frequency. First ROI was selected as it includes all slits of that spatial frequency (ROIA), second ROI with the same dimension lied in a region which includes Plexiglas (ROIB) and third ROI was determined by the same dimension in background region (ROIC) (Figure 1.b). Then the contrast scale between ROI’s in Plexiglas regions and background material was calculated, which is difference between average CT numbers in two regions. In next step, the SD of CT numbers in ROIA and also the average SD for ROIB and ROIC which named ROIAve were calculated. By using these values, the coefficient modulation based on Eq. (1) was calculated and the then the MTF value for related spatial frequency can be calculated by replacing contrast scale and coefficient modulation values in Eq. (2). Thereafter, the MTF curve plotted by calculating the MTF value in different spatial frequencies using SD method.
(a)
(b)
Modulation =
SD A 2 - SD Ave 2
(1)
MTF = 2.2 × ( Modulation / Contrast Scale )
(2)
III. CONCLUSIONS Different commercial CT scanners (Table 1.) used in this study for calculate MTF curve by this method. MTF curve in different scan mode and slice thickness calculated. Table 1 Scanner Number
Scanner Model
Number of Slices
CT 1
HiSpeed LX/i
Single slice
CT 2 CT 3
BrightSpeed HiSpeed FX/i
Single slice
CT 4
CT/e Plus
Dual slice
CT 5
HiSpeed NX/i
Dual slice
(c) Fig. 1 (a) GE performance phantom. (b) The CT image of GE performance phantom containing 11 different spatial frequencies with three ROI regions A, B, C. (c) The ROIs for all spatial frequencies
IFMBE Proceedings Vol. 29
Four slice
Experimental Measurement of Modulation Transfer Function (MTF) in Five Commercial CT Scanners
1.2
CT 1
1
CT 2
0.8 MTF
353
CT 3
0.6
CT 4
0.4
CT 5
0.2 0 0
1
2 3 4 5 6 Spatial Frequency (lp/cm)
7
8
9 (a)
(a)
1.2
CT 1 CT 2 CT 3 CT 4 CT 5
MTF
1 0.8 0.6 0.4 0.2 0 0
1
2
3
4
5
6
7
8
9
Spatial Frequency (lp/cm) (b)
(b)
Fig. 2 (a) MTF curves in axial mode and 10 mm slice thickness for different CT scanners. (b) MTF curves in helical mode and 10 mm slice thickness for different CT scanners
Fig. 3 (a) MTF curves in axial mode and 5 mm slice thickness for different CT scanners. (b) MTF curves in helical mode and 5 mm slice thickness for different CT scanners
Figure 2 shows the calculated MTF curves for different CT scanners in both axial and helical scanning mode in 10 mm slice thickness and Figure 3 shows the calculated MTF curves for different CT scanners in both axial and helical scanning mode in 5 mm slice thickness. Figure 4 shows the calculated MTF for CT4 (CT/e Plus) for different slice thickness and also different scanning mode. The results shows better performance of GE BrightSpeed CT scanner (CT 2) in comparison to other scanner, as it was expected due to newer design of this scanner. The results calculated in this study using simple experimental measurements were in good agreement with published technical
specification by manufacturers. The dependency of MTF to scanning mode in slice thickness of 10 mm was not high for different CT scanners and we could not see considerable differences between them but in slice thickness of 5 mm the difference between the calculated MTF curves dominant. Figure.4 shows the dependency of MTF to scanning mode and slice thickness were not high and we could not see considerable differences between them. The SD experimental method for calculation of MTF validated in this study, our group plan to use this method for performance comparison of wider range of commercial CT scanners design by different manufacturers.
IFMBE Proceedings Vol. 29
354
S.M. Akbari et al.
REFERENCES 1. Droege Ronald T and L. Morin Richard (1982) A practical method to measure the MTF of CT scanners, Med. Phys. 9(5), pp 758–760. 2. Boone John M (2001) Determination of the presampled MTF in computed tomography, Med. Phys. 28(3), pp 356–360. 3. Rathee S, Fallone P. G and Robinson D (2006) An effective method to verify line and point spread functions measured in computed tomography, Med. Phys. 33(8), pp 2757–2764. 4. Bushberg T Jerrold (2002) The essential physics of medical imaging, second edition by Lippincott Williams and Wilkins. 5. GE medical system product data for HiSpeed, BrightSpeed and CTe/ Plus scanners.
(a)
MTF
1,2 1
Axial, 10 mm
0,8
Helical, 10 mm
0,6
Axial, 5 mm
0,4
Helical, 5 mm
Author:Mohammad Reza Ay Institute: Department of Medical Physics abd Biomedical Eng., Tehran University of Medical Sciences, Tehran, Iran Street: Pour Sina City: Tehran Country: Iran Email: [email protected]
0,2 0 0
1
2
3 4 5 6 7 Spatial Frequency (lp/cm)
8
9
(b)
Fig. 4 (a) MTF curves in axial mode at different slice thickness in CT4. (b) MTF curves in axial and helical mode at 5 and 10 mm slice thickness in CT4
IFMBE Proceedings Vol. 29
Microcalcifications Segmentation Procedure Based on Morphological Operators and Histogram Filtering M.A. Duarte1, A.V. Alvarenga2, C.M. Azevedo3, A.F.C. Infantosi4, and W.C.A. Pereira4 2
1 Electronic Engineering Department/Gama Filho University (UGF), Rio de Janeiro, Brazil Laboratory of Ultrasound/National Institute of Metrology, Standardization and Industrial Quality (Inmetro), Duque de Caxias, Brazil 3 Gaffrée & Guinle University Hospital - University of Rio de Janeiro (UNI-RIO), Rio de Janeiro, Brazil 4 Biomedical Eng. Program/COPPE, Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
Abstract— Breast cancer is considered as one of the main causes of death among women. According to statistical data, its incidence is growing in the developed and developing countries. Clinical examination and mammography are the best methods for early breast cancer detection. Computeraided diagnosis (CAD) systems are being developed to help physicians make more precise diagnostics. Usually, these systems are composed by three phases: segmentation, parameters extraction and lesions classification. The individual improvement of each one of them will allow increasing the accuracy of the whole system. The first step for constructing a CAD system is to segment the suspicious lesions. This paper presents a microcalcification segmentation method, based on Morphological operators and histogram filtering. The former is applied to remove the image background and emphasize the microcalcifications, while the histogram filtering removes irrelevant grey-levels. The method was assessed in 66 regions of interest, captured from 13 digitalized images (300 dpi, 8 bits). According to two experienced radiologists, the algorithm correctly segmented more than 92% of the cases. Given this preliminary result, we believe that the algorithm can be explored and perhaps be part of a future CAD system. Keywords— Segmentation, Mammography, Mathematical Morphology, Microcalcifications, Breast Cancer.
I. INTRODUCTION Breast cancer has the second higher prevalence among cancer types in the world, being responsible for 22% of new cases per year. According to statistical data, its incidence is growing in the developed and developing countries. In Brazil, breast cancer is one of the main causes of women death. The Brazilian National Cancer Institute (INCa) forecasts 49,400 new cases of breast cancer in 2010 in this country [1]. According to INCa [2], early detection and tumor removing in an initial phase are the more efficient strategies to reduce cancer death rates. Clinical examination and mammography are the best methods to find early signs of
breast cancer. The last one is intended to detect non palpable breast lesions [2]. Several factors may interfere in the accuracy of mammographies. Some of them are: medical interpretation - knowledge and experience of the physicians; technical factors – equipment quality and examination techniques used by radiologists; patient factors – adipose and glandular tissues types. Even if mammography is carried out with adequate equipment and by an experienced technician, the final exam quality is dependent on the breast tissue itself. The more adipose is the breast, the easier to analyze and make a diagnosis [3]. Literature [4] indicates a large number of errors and divergent results in mammography exams made in USA, mainly due to the inexperience of radiologists, besides the already mentioned difficulties. Microcalcifications are present in a great number of malign lesions, and thus, are considered as a significant sign related to malignancy. However, just 30% to 50% of microcalcifications in carcinomas are detected in mammographies [5]. Given this scenario, systems that could highlight infraclinic lesions, aiding the specialists to make diagnostics, have been studied. Their objective is to lower the false-positive and false-negative rates of breast cancer diagnostics. These Computer-aided diagnosis (CAD) systems were developed based on parameters extracted from microcalcifications [6, 7]. Usually, these systems are composed by three phases: segmentation, parameters extraction and lesions classification. The individual improvement of each of them will allow increasing the accuracy of the whole system [8]. Concerning the segmentation of microcalcifications, Mathematical Morphology is one of the most used techniques [5, 9, 10]. This work presents a microcalcification segmentation method based on image grey-levels reduction. Herein, morphological operators are applied to remove the image background and emphasize the microcalcifications, while histogram filtering removes irrelevant grey-levels, reducing the number of possible thresholds. At this point, the best segmentation result is chosen by experienced radiologists.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 355–358, 2010. www.springerlink.com
356
M.A. Duarte et al.
II. MATERIALS AND METHODS From a database with 13 mammographies of eight patients (300 dpi, 8 bits), 66 regions of interest (ROI) were selected by the double reading method, by two experienced radiologists (that is, the ROI was chosen by one of them and confirmed by the other, independently). ROI dimensions are up to 41 x 41 pixels, based on literature [11]. Example of a mammogram with a selected ROI is presented in Fig. 1.
the grey-level zero was removed (Fig. 3a), as it represents the black pixels of background. The histogram gradient was determined (Fig. 3b) and its module calculated (Fig. 3c) to identify highest grey-levels transitions, then it was filtered to remove irrelevant peaks, using a morphological erosion operator (line SE with 1 pixel) (Fig. 3d). The remaining peaks indicate possible grey-level thresholds to segment the microcalcifications correctly. Hence, a set of binary images is obtained from the “difference” image using as thresholds the peaks presented in Fig 3d. To each image of the set is then applied an inferior reconstruction (disc SE with a 3pixel diameter), using the segmented binary image as mark and the “difference” image as mask, followed by a morphological dilatation (cross SE with 3-pixel diameter) to reconstruct and fill any flaw due to the histogram filtering (Fig. 2e – grey level 36, as example).
(a)
(b)
(c)
(d)
(e)
Fig. 2 (a) Original ROI selected from Fig. 1 (white square). (b) Image “thor” obtained from the top-hat by opening by reconstruction application in (a). (c) Image “thcr” resulted from the top-hat by closing by reconstruction applied to (b). (d) “Difference” image obtained from the point-wise subtraction between “thor” and “thcr”. (e) Example of binarized image at grey-level 36
Fig. 1 Example of mammogram and ROI (white square selected by a radiologist) The segmentation procedure starts by applying to the original ROI (Fig. 2a) a top-hat by opening by reconstruction, using a disc-shaped structuring element (SE) with a 5pixel diameter to enhance the structures smaller than the SE that present the highest grey-levels. The resulting image (Fig. 2b), named “thor”, is then filtered by a top-hat by closing by reconstruction (disc SE with 51-pixel diameter) to remove the structures that present the highest grey-levels and enhance the background (Fig. 2c). The obtained image is called “thcr”. Performing a point-wise subtraction between “thor” and “thcr”, it is obtained a new image, called “difference”, where the white structures (possible microcalcifications) in “thor” are emphasized, while the background is almost totally removed (Fig. 2d). The histogram of the “difference” image was determined and the peak related to
(a)
(b)
(c)
(d)
Fig. 3 (a) Histogram of the “difference” image (Fig. 2d). (b) Gradient of the histogram presented in (a), and (c) its respective modulus. (d) Graphic presented in (c) filtered with a morphological erosion operator Finally, the canny edge detection was applied to the set of images to identify the possible microcalcifications
IFMBE Proceedings Vol. 29
Microcalcifications Segmentation Procedure Based on Morphological Operators and Histogram Filtering
contours for each ROI. These images are then plotted sideby-side (together with the original ROI) and presented to the radiologist, who visually chooses one as the best result (as in Fig. 4). The segmentation procedure was developed in MATLAB® (Mathworks Inc., Natick, MA) using the SDC Morphology Toolbox V1.6 (SDC Information Systems, Naperville, USA).
(a) Original
(b) 8
(c) 16
(d) 36
(e) 46
(f) 62
(g) 80
(h) 86
(i) 109
(j) 127
Fig. 4 (a) Original ROI and respective segmentation results. The segmentation thresholds are the grey-levels indicated from the peaks presented in Fig. 3d, as follows: (b) 8, (c) 16, (d) 36, (e) 46, (f) 62, (g) 80, (h) 86, (i) 109 and (j) 127
III. RESULTS From the 66 selected ROIs, the first specialist was able to select at least one correctly segmented result, among the possible ones presented, in 92.4% of them (61 ROIs) while, for the second specialist, this value was 95.5% (63 ROIs). Some examples of ROIs presenting different kinds of background and also nearby an artifact are presented in Fig. 5, as well as their respective segmentation results.
(a)
(b)
(c)
(d)
(e)
(f)
Fig. 5
Example of ROIs presenting microcalcifications over (a) dark and (c) bright backgrounds and (e) nearby an artifact. Their segmentations results are presented in (b), (d) and (f), respectively
IV. DISCUSSION Microcalcifications are important elements to breast cancer detection and CAD systems can be of help in interpreting them [12]. To do that, this kind of systems usually extracts morphological parameters from microcalcifications and feed them to a classification algorithm. A successful parameter extraction is strongly dependent on an efficient segmentation procedure [13]. This work is
357
focused on the first stage of a CAD system: segmentation of microcalcifications and their clustering. Herein, before segmentation begins, a preprocessing is performed to remove image background and enhance its contrast, thus facilitating microcalcification detection. The top-hat operator has proven to be an efficient filter in removing the unwanted background [14]. Besides, the choice of the Structuring Element shape and size is crucial to remove undesired characteristics while keeping untouched the important ones. This importance becomes evident by observing the example presented in Fig 5e, where the artifact (wire used to locate microcalcifications site in biopsy procedures) was not segmented, while microcalcifications were (Fig. 5f). Another important step that brings robustness to the proposed method is histogram filtering. One can observe in Fig 4 that for neighbor grey-levels (from Fig. 3d) of a ROI, results tend to be similar. This result encourages us to pursuit an improvement in segmentation procedure including an automatic grey-level threshold selection. The radiologists’ opinion were used in this work with the main purpose of verifying if the segmented microcalcifications would present accuracy enough to allow the professional elaborate a diagnosis hypothesis. In some cases, the proposed method was not capable of segmenting all individual microcalcifications presented in a ROI (e.g., in Fig. 5d only two microcalcifications, from the original ROI in Fig. 5c, were segmented). However, those cases were considered adequate when both radiologists agreed about their diagnostics based on the segmented microcalcifications. Despite of the limited number of ROIs (66), the proposed method seems to achieve satisfying segmentation results with rates superior to 92%, according to two different radiologists’ opinion. This preliminary result has motivated us to continue improving this segmentation method, with a more complete set of mammographic images.
V. CONCLUSION In this paper, it was proposed a method to segment microcalcifications, based on morphological operators and histogram filtering. For each ROI, a set of possible segmentation results given by the proposed method was presented to two experienced radiologists, who considered that more than 92% of the ROIs were correctly segmented. The preliminary results encourage us to work towards the automation of grey-level threshold selection, and increase the number of tested images.
IFMBE Proceedings Vol. 29
358
M.A. Duarte et al.
ACKNOWLEDGMENT The authors acknowledge the financial support of the Brazilian agencies CNPq and FAPERJ for the financial support. The authors gratefully acknowledge to Dr. Maria Julia Gregorio Calas for helping with image selection and segmentation analysis. Finally, the authors acknowledge the Gama Filho University for the technical and partial financial support.
REFERENCES 1. INCa (2010) Tipos de Câncer, Câncer de Mama, Ministério da Saúde at http://www2.inca.gov.br/wps/wcm/connect/ tiposdecan cer/site/home/mama. 2. INCa (2010) Câncer de Mama, Ministério da Saúde at http://www.inca.gov.br/conteudo_view.asp?id=336. 3. Azevedo, M C (1994) Manual de Radiologia da Mama. Rio de Janeiro. INCa / DuPont / Microservice. 4. Barlow, W E (2002) Performance of Diagnostic Mammography for Women with Signs or Symptoms of Breast Cancer. Journal of the National Cancer Institute, v. 94: 1151-1159. 5. Halkiots S, Botsis, T, Rangoussi, M (2007) Automatic Detection of Clustered Microcalcifications in Digital Mammograms Using Mathematical Morphology and Neural Networks. Signal Processing, v. 87: 1559-1568. 6. Veldkamp, W J, Karssemeijer, N, Otten, J D M, Hendriks, J H C L (2000) Automated Classification of Clusters Microcalcifications into Malignant and Benign Types. Medical Physics, v. 27, nº 11: 26002608.
7. De Santo, M, Molinara, M, Tortorella, F, Vento, M (2003) Automated Classification of Clustered Microcalcifications by a Multiple Expert System. Pattern Recognition, v. 36: 1467-1477. 8. Paquerault, S, Yarusso, L M, Papaioannou, J, Jiang, Y (2004) Radial gradient-based segmentation of mammographic microcalcifications: observer evaluation and effect on CAD performance. Medical Physics, v. 31, nº 9: 2648-2657. 9. Stojic, T, Reljin, I, Relgin B (2005) Local Contrast Enhancement in Digital Mammography by Using Mathematical Morphology. In: Internal Symposium on Signals, Circuits and Systems – ISSCS 2005, v. 2, Iasi, Romênia, 2005, pp 609-612. 10. Stojic, T, Reljin, I, Relgin B (2006) Adaptation of Multifractal Analysis to Segmentation of Microcalcifications in Digital Mammograms. Physica A, v. 367: 494-508. 11. Arikids N S, Skiadopoulos, S, Karajaliou, A (2008) B-spline active rays segmentation of microcalcifications in mammography. Medical Physics, v. 35, nº 11: 5161-5171. 12. Goumot, P A (1993) Le Sein. 1st edition, Paris Editions Vigot. 13. Timp, S, Karssemeijer, N (2004) A new 2D segmentation method based on dynamic programming applied to computer aided detection in mammography. Medical Physics, v. 31, nº 5: 958-971. 14. Soille, P (1999) “Morphological Image Analysis: Principles and Applications. 1st edition, Berlim, Springer-Verlang.
Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Marcelo de Almeida Duarte Gama Filho University Rua Manuel Vitorino, 533 Rio de Janeiro Brazil [email protected]
Segmentation of Anatomical Structures on Chest Radiographs S. Juhász1, Á. Horváth1, L. Nikházy1, G. Horváth1, and Á. Horváth2 1
Budapest University of Technology and Economics/Department of Measurement and Information Systems, Budapest, Hungary 2 Innomed Medical Co. Budapest, Hungary
Abstract— In this paper we present a solution for segmenting anatomical structures on chest radiographs. First we show an algorithm for the lung contour detection, then we describe a method for finding the ribs and clavicles. The results of these procedures are used as input for a bone shadow elimination algorithm. Different implementation results are also discussed, like C++ and a parallel solution on GPU. Keywords— radiograph, chest, segmentation, ASM, dynamic programming.
I. INTRODUCTION Recently intensive research and development work has been started throughout the world to develop Computer Aided Detection or Diagnostic (CAD) systems to help pulmonologists/radiologists in the evaluation of chest radiographs, where CAD systems usually serve as second readers in the evaluation of radiographs. Approximately two years ago a Hungarian consortium has also started a research/development work to extend a recently developed PACS with CAD functionality. The general goals and the first results of the system were presented last year in the World Congress of Medical Physics and Biomedical Engineering, Munich [1]. Computer aided evaluation of X-ray radiographs needs complex image processing/pattern recognition algorithms where first the images should be preprocessed, and the detection of abnormalities is done in the preprocessed images. Preprocessing means that the contours of the lung fields and the heart, as well as the contours of the bones – the clavicles and the rib cage – should be determined. The findings of these contours may serve two goals: (1) the shape of these anatomical parts, especially the shape of the lung fields may have diagnostic meaning, (2) having determined the contours of the bones and the heart, there is a chance to eliminate the shadows of these parts, “cleaning” the whole area of the lung fields from the “anatomical noise”, and making possible to “look behind” these parts. The suppression of the shadows of “disturbing anatomical parts” may significantly improve the performance of nodule detection, and may help in reducing false positive hits. The goal of this article is to present several preprocessing steps: lung contour detection, rib and clavicle contour
detection, bone shadow elimination. Then we briefly discuss the results of an efficient GPU-based implementation.
II. LUNG CONTOUR DETECTION A. Introduction A typical first step of chest X-ray image analysis is the detection of the lung contour. If we have the position of the lungs then the detection of other objects such as the heart, ribs and clavicles will be easier as we already know where to look for them. The size and shape of the lung contour has diagnostic value without further processing too, as it can show cardiomegaly and pneumothorax. We tried several techniques for the segmentation: AAM [1], ASM [2], kNN-classification [3], curve fitting, texture analysis and other ad-hoc methods. So far ASM gave the best and most robust results. Other researchers came to the same conclusion [4]. We define the lung fields as the set of pixels for which the X-rays go through the lung, but without the areas of the heart, mediastinum, aorta and the areas under the diaphragm. On Fig. 1 the top left image shows an example lung contour with this definition. B. Active Shape Mode (ASM) Active Shape Model is a parametric model of a curve or contour where the parameters are determined from the statistics of many sets of points obtained from different contours of similar images using principal component analysis (PCA) In ASM the boundary of the object is determined by n points. From these points a descriptor vector is created as
x = ( x1 , y1 , x 2 , y 2 ,..., x n , y n ) T
(1)
where xi and yi are the two coordinates of the i-th point of the curve. For the principal component analysis we compute the mean shape of the s training vectors
x= and the covariance matrix
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 359–362, 2010. www.springerlink.com
1 s ∑ xi s i =1
(2)
360
S. Juhász et al.
S=
1 s ∑ (xi − x)(xi − x)T s − 1 i =1
We choose the first t largest eigenvalues
λi
(3) of the covari-
ance matrix. As only the most significant eigenvalues are taken into consideration the number of parameters in this parametric description is much less than the number of contour points, n. The corresponding eigenvectors are arranged into the matrix Φ = (φ1 φ2 ...φt ) . The model parameters are computed by
b = Φ T (x − x)
(4)
and an approximation of the shape from the model parameters is obtained as:
x ≈ x + Φb
(5)
For a given contour a proper b vector is looked for. where all components of b are constrained in the interval ± m λi , where m is a properly selected constant. The search of the parametric description of the contour of an object starts from the mean shape, than two alternating steps are applied until some convergence or a certain number of iterations is reached. In the first step we try to move each contour point perpendicular to the contour. Several positions are tested on both sides of the contour and the best position is chosen. To decide the best position an intensity gradient profile model is built at each contour node and image resolution during the training. To find the best new position for the contour point Mahalanobis distance is used. After all the contour points are updated, to fit a model to the new point set the best b parameter is looked for according to (4). The whole process is repeated several times while the spatial resolution of the image is increased. C. Training and Results For obtaining the ASM-based contours a publicly available reference image database of 247 radiographs by the Japanese Society of Radiological Technology [5] has been used. The delineation of these pictures was made by the research group of van Ginneken [6]. We divided the image set to two parts. One half was used for training and the other for testing the algorithm. Several implementations were made. The execution time per image was 20s of the Matlab version and less than 6s of the C implementation. We used one core of a 2.3 GHz Core 2 Duo machine for the test runs. In both cases most of the time was spent on the Gaussian blurring.
The execution time of 6s is still not acceptable in realtime applications, in clinical examinations. Therefore we implemented the contour search algorithm on the graphical processing unit (GPU). This implementation needs fundamentally different programming techniques, because the program runs concurrently on many threads. The nVidia CUDA system was used for this implementation. We managed to reduce the execution time to 250ms. The ASM search algorithm itself took 30ms on average, the rest of the time was spent on Gaussian blurring and memory copying. We used an nVidia GeForce 9800 GTX for testing, which contains 128 processor cores. It is important to note that significantly better results cannot be achieved with a more advanced GPU, since the memory copies between the host computer and the GPU take remarkable part of the current execution time. We evaluated the results on a per-pixel level. We were able to get an average sensitivity of 0.956 at the specificity level of 0.984.
III. BONE DETECTION A. Algorithm Looking at chest X-ray images an apparent regularity can be observed at first glance: the ribs have a quasi parallel arrangement. So it seems feasible to model the ribcage somehow. Even though the two sides of the ribcage appear to be symmetric they have to be processed separately. The possible gain from utilizing this symmetry isn't worth while considering the complexity of handling the occasional disorders and irregularities. To build a complete model which is capable of enumerating the ribs one-by-one and to specify their positions would require too many parameters and still wouldn’t be accurate enough to handle all the anomalies. Thus we chose a different approach. We restrict our model only to the slopes of the ribs. Our model assigns a slope to every point of the image, this way it neglects the position information. These slopes can be described by a function with two arguments and graphed by a slope field. Several functions were investigated for this purpose and it is revealed that 3 parameters are enough to describe the ribcage. The final version of the function was produced by applying principal component analysis on a third order rational function of two variables. On Fig. 1 the top right image shows an example of this model. During model fitting the missing coefficients are found by an exhaustive search. In order to measure the fitness of the model to the image an edge detection step has to be performed. After selecting some points from the area of the lung, the best fitting arcs could be found in the vicinity of
IFMBE Proceedings Vol. 29
Segmentation of Anatomical Structures on Chest Radiographs
each point, resulting in a series of arcs defined by their positions, slopes, and curvatures. This kind of edge detection assures that the edge segments have a uniform distribution over the lung. The position and slope parameters of the arcs can serve as samples to adjust the model. From the obtained model quasi parallel curves can be generated including the real rib outlines. The lower and upper outlines of the ribs are not discriminated. By selecting the best fitting curves we can get to the approximate outlines of the ribs. This selection process is not straightforward, because next to a good match several other curves are present with similar high fitness values. Hence a local maximum selection has to be incorporated which considers the admissible distances between these outlines too. If these restrictions are well configured we can get the most likely approximation of the outlines by applying a dynamic programming on these constraints. To refine the approximate curves we introduced a parallel version of the dynamic programming active contour algorithm. This method handles the lower and upper outlines of the ribs simultaneously this way ensuring the quasi parallelism of these curves. Not only parallelism but other constraints can be incorporated: the rib thickness also can be restricted to the admissible interval and this method can foster the smoothness as well. By applying these constraints false results can be strongly reduced. This algorithm takes the middle-curve as a basis and then shifts its points along the normal of the curve at each point, this way creating several copies running parallel to the starting curve. Then these copies are split into short segments. The segments can be arranged into a matrix where each curve represents a row and each curve-segment can be assigned to an element in this row. The cell values represent the fitness of a given segment to the image. Now the goal is to find two quasi parallel path between the left and right side of the matrix with maximal sum. From every column two cells have to be selected. A pair of paths can be described by a mid path and a thickness value in every column. Thus the problem is mapped to another domain. The thickness defines the distance between the two selected cells in the given column. Constraints can be introduced for the adjacent path elements and for the adjacent thickness values. The cell which represents the middle of the rib cannot move more than one row at every column border. The adjacent thickness values are also restricted to change only one unit at a time. Constraints for the overall thickness of a rib can also be applied here. This problem can be transformed to a longest path problem in a graph, where nodes represent the cell and thickness pairs and the edges express the allowed steps between the adjacent columns. The edge values reflect the value of the cell which are pointed by the edge plus the additional
361
penalty values for changing row or changing thickness value. By solving this problem the rib outlines can be obtained with decent accuracy as seen on the middle left image of Fig. 1.
Fig. 1 Results at the main steps of the processing of the first image in the JSRT database. Images at the bottom show a magnified part from this image
This method was applied for the clavicles as well. But in contrast to the ribs’ case it is not required to find approximate clavicles in advance. Predefined clavicle templates can be used instead. If the templates have sheer slope and stark convex curvature they can only fit to clavicles when applying this method. The templates are placed on the images and their relative position to the lung outline was estimated by analyzing several radiographs. This segmentation data is then used to remove the bones from the images in order to enhance the structure of the lungs. The elimination is based on creating intensity profiles
IFMBE Proceedings Vol. 29
362
S. Juhász et al.
enough with the use of GPU for real-time evaluation. The results were good enough to be used for further processing steps. Innomed
JSRT 40
Percent of cases
on vertically differentiated images, and then subtracting them from this image and returning to the original domain by integration. The bottom images of Fig. 1 depict the result. The execution time of the current algorithm is around 2 minutes on a 2.3 GHz Core 2 Duo machine, which is far from acceptable, but the introduction of GPU accelerated subroutines are already begun successfully. The last part of the procedure which refines the outlines and originally run for 30 seconds could be reduced to 1 second, but including the transfer overhead between the video card and the memory it raised to 6 seconds. The overhead can be mitigated by executing longer continuous parts on the GPU.
30
Found
20
Correct
10 0 0
1
2
3
4
5
6
7
8
9
10
0
1
2
3
4
5
6
7
8
9
10
Fig. 2 Distribution of ribs per lung B. Results We tested our algorithm on two sets of images. The first was the first 100 images of the JSRT database [5] and the other was from Innomed Medical Co. The former contains images taken by analog devices and is quite homogeneous. The shadows of the bones are faint but the whole scene is fairly clear. The latter set came from digital machines. It is heterogeneous; it has shoots from different devices with different resolutions and different photon energies. The bone shadows are darker, but the details are blurry and distorted by noise. The patients are mostly elderly having different diseases resulting in abnormal bone and tissue structure. Thus the latter set better represents real usage. Table 1 97% 81.5%
ACKNOWLEDGMENT This work was partly supported by the National Development Agency under contract KMOP-1.1.1-07/1-2008-0035.
REFERENCES
1. Cootes T F, Edwards G J, Taylor C J (2001) Active Appearance Models. IEEE Trans. Pattern Anal. Mach. Intell. 23(6):681–685
2. Cootes T F, Cooper D, Taylor C J, Graham J (1995) Active Shape Models – Their Training and Application. Comput. Vis. Image Underst. 61(1):38–59
3. Goldstein M (1972) k-Nearest Neighbor Classification. IEEE Trans. Inform. Theory 18(5):627–630
4. van Ginneken B, Frangi A F, Staal J J, ter Haar Romeny B M, Viergever M A (2002) Active Shape Model Segmentation with Optimal Features. IEEE Trans. Med. Imaging 21(8):924–933
5. Shiraishi J, Katsuragawa S, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K, Matsui M, Fujita H, Kodera Y, Doi K (2000) Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. Am. J. Roentgenol. 174:71–74
6. van Ginneken B, Stegmann M B, Loog M (2006) Segmentation of Anatomical Structures in Chest Radiographs using Supervised Methods: A Comparative Study on a Public Database. Med. Image Anal. 10:19–40
Author: Áron Horváth
Institute: Budapest University of Technology and Economics
Street: Magyar tudósok körútja 2.
City: Budapest
Country: Hungary
Email: [email protected]
Lung Nodule Detection on Rib Eliminated Radiographs
G. Orbán, Á. Horváth, and G. Horváth
Budapest University of Technology and Economics/Department of Measurement and Information Systems, Budapest, Hungary
Abstract— A lung nodule detection algorithm was developed and tested on a comprehensive set of radiographs. Two new algorithms were utilized in the detection scheme. A preprocessing step eliminates ribs and collarbones on the image to enhance the visibility of nodules. The next step uses the Constrained Sliding Band Filter (CSBF) to raise the intensity of round shaped objects while suppressing other areas. The suspicious areas are then processed by a Support Vector Machine (SVM), based mostly on textural features, to reduce the number of false detections. The algorithms were tested on the public database created by the Japanese Society of Radiological Technology (JSRT) and on a private database. The new methods showed promising results, while the overall performance, 61% sensitivity at 2.5 false positives per image, is comparable with state-of-the-art algorithms.

Keywords— Nodule detection, rib elimination, chest radiographs, CAD, sliding band filter.
I. INTRODUCTION

Lung cancer is one of the most common causes of cancer death. Many cures are known, but most of them are effective only in the early, symptomless stage of the disease. Screening can help early diagnosis, but an accurate, cheap and side effect free method has to be used to enable mass usage. Standard chest radiography mostly meets these requirements, except that current methods have moderate accuracy. Efficiency can be improved by analyzing the radiographs with a Computer Aided Detection (CAD) system. Current CAD applications can only be used as a second reader, as they mark several suspicious areas on the radiographs and the examiner has to determine the real nodules. The most important problem of existing CAD systems is the high number of false detections: although they can detect 60-70% of cancerous tumors, they also mark approximately four healthy regions on each image [1]. Even though radiologists are able to filter false detections, observer studies show that the number of false positive diagnoses increases with CAD, indicating that current methods have to be improved.

For our solution we used the following three step scheme. The first step frees the image from unnecessary
objects and noise, thus making the nodule more visible. The next step enhances round shaped objects like the target lung nodules by using image processing algorithms. The last step selects suspicious areas on the enhanced image with the help of a classifier.
II. MATERIALS AND METHODS

A. Elimination of Bone Shadows

To eliminate ribs and clavicles, they are first segmented based on a previously calculated rough model; afterwards the segmentation is refined, and finally the objects are cleared from the image. The segmentation steps are not detailed here; a comprehensive description can be found in [2]. The segmentation data is used to remove the bone shadows from the images in order to enhance the structure of the lungs. The elimination is based on creating intensity profiles on vertically differentiated images, which are subtracted from the differentiated original image. An integration step returns to the original domain and produces the bone shadow free image [3]. An example result can be seen in Fig. 1.
Fig. 1 The output of bone shadow removal
B. Enhancement of Nodules

The aim of the next step in the scheme is the enhancement of nodules on chest radiographs. These objects are darker than their surroundings, mostly round shaped, and have an approximate radius of 5-35 mm. In our experience, shape information provides a better clue for finding nodules, because of their usually low contrast. A commonly used filter family called Convergence Index (CI) approximates object borders and enhances them if their shape is approximately round. A common property of round shaped objects is the radial direction of the gradient vectors along their border. The filters consider the surroundings of each pixel: the output depends on the angles between the vectors connecting the center with the surrounding points and the gradient vectors at those points. One of the most successful realizations is the Sliding Band Filter (SBF), using the following idea [4]; the most important parameters are illustrated in Fig. 2. The algorithm considers each pixel of the image as a potential center of an object. For each center it slides a band of fixed width in different directions within given bounds. For each band position the algorithm takes the points inside the band and sums the cosines of the angles between the radial vectors (the vectors connecting the center and the given points) and the gradient vectors at those points. The final position of the band for a direction is the one with the highest sum. Note that a high sum is caused by the convergence of negative gradient vectors towards the center and can signal an object border. For a round shaped dark object the negative gradients are convergent in every direction if the starting point is the center of the object. Using this idea, the algorithm sums the maximal band values over all directions, and a high final sum indicates a nodule. A weakness of the algorithm is the independence of the bands in each direction, which also enhances very spiculated and distorted objects. An intuitive solution is to apply a constraint on the bands in the different directions. Our proposed algorithm, the Constrained Sliding Band Filter (CSBF), links the positions of the bands, allowing only smaller distortions. It ensures that the final band positions satisfy a circularity constraint controlled by a coefficient, which forces an upper bound on the ratio of the distances of the farthest and the closest band from the center. The enhanced pixel values can be calculated with the following formula:
Fig. 2 Illustration of the SBF algorithm

$$\mathrm{CSBF}(x,y)=\max_{R_{\min}\le r\le (R_{\max}-d)/c}\;\frac{1}{N}\sum_{i=1}^{N}C^{\max}_{i,r},\qquad C^{\max}_{i,r}=\max_{r\le n\le r\cdot c}\;\frac{1}{d}\sum_{m=n}^{n+d}\cos\theta_{i,m}$$
where Rmin, Rmax are the bounds of the target object radius, c is the shape constraint coefficient, N is the number of directions considered, d is the width of the band, and θi,m is the angle between the mth gradient vector along the ith radial direction and the corresponding radial vector. An output identical to CSBF could be achieved by running several SBF filters with different bounds (Rmin, Rmax) and taking the minimum for each center; however, the execution time would be much greater. Thanks to the implementation, the CSBF takes only slightly longer than one SBF run. For very large c values the CSBF works as a standard SBF, so it is a more general algorithm. Furthermore, for c = 1 the CSBF is identical to the Iris filter [4], another realization of the CI family.

Parameter selection for CSBF was made heuristically. Rmin and Rmax were set to match the smallest and largest nodules to be found. The parameter d affects the sensitivity to noise and was set to 5.6 mm. N was set to 16 as a good compromise between precision and speed. The circularity parameter was optimal around 1.2.

C. False Positive Reduction

The last step of the CAD scheme concerns the areas with a high CSBF value. A good practice is to collect many areas and select the suspicious ones with the help of a classifier. Our solution uses a Support Vector Machine (SVM) [5], due to its good generalization capability. The training sample set is extracted from a radiograph database with validated nodules. For the SVM to work efficiently, the dimensionality of the input has to be reduced. This is done by calculating various features that describe the shape, texture and symmetry of the area to be classified. This way the raw image of the area is reduced to a vector of 140 dimensions, which is further reduced to 12 dimensions by relevance-based dimension reduction techniques. The 12 features that turned out to be the most useful are the following. The coordinates of the nodule serve as an important clue, due to the uneven distribution of tumors on the image. Concerning the texture, several statistical features based on the distribution of directed pixel pairs provided useful results. These were the contrast, angular second moment and various entropy-related measurements described in [6]. Other important features are related to the output of a Laplacian of Gaussian filter and a so-called Average Fraction Under the Minimum (AFUM) filter. The former is a well-known filter for edge detection and can also be used for nodule finding. The value of the pixel at the center, the average and the entropy of the filter output were used as classifier inputs. The AFUM algorithm is another filter for finding round shaped objects, described in [7]. The filter output at the center of the area proved to help classification. For the SVM itself, we use a general purpose radial kernel function, which showed good performance on low dimensional input compared to polynomial kernels. Parameter selection was made by a simple algorithm, which searches the parameter space on a logarithmic scale and iteratively refines the search near the found optimal solution.
III. RESULTS

First we compared the new CSBF with an existing SBF solution, then we analyzed the effect of rib removal, and finally we measured the overall performance. For testing we used two separate databases: a set of 247 images widely used for benchmarking, created by the Japanese Society of Radiological Technology (JSRT) [8], and a private database of 150 images originating from a Hungarian clinic. They contain 154 and 100 nodules, respectively, of various subtleties. While the JSRT radiographs come from an analogue X-ray machine and were digitized by a scanner, the ones in the private database were made directly by a digital detector. For the SBF versus CSBF comparison we disabled both the rib removal and the classifying phase to get a more accurate picture of the nodule enhancing capability. Nodules are selected from the enhanced image by simple adaptive thresholding. The results on the JSRT database can be seen in Table 1. The new algorithm succeeded in five more cases at finding the real nodule than the original SBF, meaning a 3% increase in sensitivity. We then ran the complete three step algorithm on the JSRT database to test the effect of rib removal. This way we are able to get a clear view of how this preprocessing step can help the subsequent algorithms.
As the removal algorithm failed to detect ribs on a few images, but a future improvement will likely fix the issue, we used only the images where rib removal succeeded. As a test method we chose 4-fold cross-validation. After optimizing the parameters and using the original images, 57% of the real nodules were found while producing on average 4 false detections per image. Utilizing the rib removal algorithm, the sensitivity increased to 61% for the same number of false positives, which is a clear improvement. The low absolute performance is caused by the incomplete database. To approximate overall performance, we ran experiments on both databases. Because of the mentioned problem we could not run the three step solution on the complete databases; to provide results comparable with the literature we needed to process all the images, so we chose to omit the bone shadow removal algorithm when calculating overall performance. For the JSRT database we plotted the results on a Free-Response Receiver Operating Characteristic (FROC) curve, shown in Fig. 3. Here the sensitivity is shown as a function of the average number of false positives per image. An appropriate working point can be the 61% sensitivity with 2.5 false positives on average. The performance on the private database turned out to be somewhat worse; for example, the sensitivity was 60% at a false positive rate of 4. The execution time of the original SBF based system without preprocessing was 10 seconds on a 2.6 GHz Intel Pentium Dual Core processor. With the CSBF implementation the execution time remained 10 seconds, while with rib removal it increased to 30 seconds.
Table 1 Nodule detection performance of SBF and CSBF

Method   Sensitivity (%)   Avg. no. of false positives
SBF      71.4              19.5
CSBF     74.7              19.5
Fig. 3 Overall performance on the JSRT database
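The FROC evaluation behind Fig. 3 (sensitivity as a function of the average number of false positives per image) can be computed from scored candidate detections as in the following sketch; the data layout and function name are hypothetical, not taken from the paper:

```python
import numpy as np

def froc_points(images, thresholds):
    """FROC operating points from scored candidate detections.

    `images`: list of dicts, one per radiograph, with
       'cands'  - list of (score, hit) pairs, hit=True if the candidate
                  overlaps a true nodule,
       'n_true' - number of true nodules on that image.
    Returns (sensitivity, avg_fp_per_image) arrays over `thresholds`.
    """
    n_true_total = sum(im['n_true'] for im in images)
    sens, fps = [], []
    for t in thresholds:
        tp = fp = 0
        for im in images:
            hits = sum(1 for s, hit in im['cands'] if hit and s >= t)
            # cap at n_true so several marks on one nodule count once
            tp += min(im['n_true'], hits)
            fp += sum(1 for s, hit in im['cands'] if not hit and s >= t)
        sens.append(tp / n_true_total)
        fps.append(fp / len(images))
    return np.array(sens), np.array(fps)
```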
IV. DISCUSSION

The main reason behind the improvement when using CSBF is the insensitivity of the algorithm to distorted or spiculated, but nodule sized, objects. When these objects, for example parts of the bronchia, are less enhanced, the adaptive threshold gets lower, causing more nodules to rise above the threshold. We found the optimum of the circularity parameter around 1.2; this means that objects whose largest radius is more than 1.2 times their smallest radius are usually not nodules. If we set the constraint to a lower, and thus stricter, value, we start to lose a considerable number of real nodules.

The bone shadow removal algorithm has a complex effect on nodule finding performance. When the nodule is not overlapped by a bone shadow, the removal does not change anything in the original image. However, the output of the CSBF often increases if the nodule is close to an eliminated shadow: a neighboring edge modifies the gradient vectors of the nodule border, usually towards the opposite direction, reducing the CSBF value, so in this case bone shadow removal helps nodule finding. If the nodule is overlapped by a bone shadow, it becomes slightly dimmer on the radiograph after removal. The CSBF algorithm somewhat enhances the bony structures, which causes a lower output for the nodule on bone shadow free images, so this effect reduces the performance. In most cases lung nodules are not overlapped by bone shadows, as these cover less than 50% of the lung area; thus the first effect is dominant and recognition efficiency increases. It would still be important not to suppress nodules overlapped by bone shadows, but for this problem further improvements are needed.

Concerning the overall performance, our system shows results comparable to the best solutions in our scope using the JSRT database [1]. 2.5 false positives mean acceptable extra work for the examiner, and the over 60% sensitivity ensures that the CAD system will produce some findings that the examiner would otherwise have missed. On the other hand, the results on the private database showed that we can sometimes face worse results in real-world scenarios, and the JSRT may not be fully representative. The main reason behind the worse overall results on the private database is the greater variety of lungs in it, with most of the patients having other disorders affecting their lungs. This is because the included radiographs come from a lung clinic and not a screening station, so most of the cases are not like an average healthy lung with a lung nodule. The disorders can cause contrast changes in some parts of the lung, creating false detections and decreasing detection performance.

The execution time without preprocessing is appropriate for on-line usage, because the time while the radiologist examines the radiograph without markings is enough for the algorithm to run in the background. Unfortunately, the approximately half minute run time when using rib removal can be frustrating for some radiologists; however, it still seems to be useful for most of the examiners.
V. CONCLUSIONS

In conclusion, the proposed algorithms can improve lung nodule detection accuracy. Both the CSBF and the bone shadow removal caused a small but clear increase in performance, making them useful for future nodule detection systems. The achieved performance should enable our system to help radiologists; however, the current number of false positives leaves the need for further improvements. Furthermore, execution times have to be shortened for convenient usage. To gain experience of live operation and usefulness, the system has been built into the software of an X-ray machine and is used experimentally at a clinic.
ACKNOWLEDGMENT This work was partly supported by the National Development Agency under contract KMOP-1.1.1-07/1-2008-0035.
REFERENCES

1. Hardie R C, Rogers S K, Wilson T, Rogers A (2008) Performance analysis of a new computer aided detection system for identifying lung nodules on chest radiographs. Med. Image Anal. 12/3:240–258
2. Juhász S, Horváth Á, Nikházy L, Horváth G, Horváth Á (2010) Segmentation of anatomical structures on chest radiographs. Submitted.
3. Simkó G, Orbán G, Máday P, Horváth G (2008) Elimination of clavicle shadows to help automatic lung nodule detection on chest radiographs. IFMBE Proc. Vol. 22, 4th European Conference of the International Federation for Medical and Biological Engineering, Antwerp, Belgium, 2008, pp 488–491
4. Pereira C S et al. (2007) Evaluation of Contrast Enhancement Filters for Lung Nodule Detection. ICIAR Proc. Vol. 1, International Conference on Image Analysis and Recognition, Montreal, Canada, 2007, pp 878–888
5. Altrichter M, Horváth G, Pataki B, Strausz Gy, Takács G, Valyon J (2006) Neurális Hálózatok. Panem, Budapest
6. Haralick R, Shanmugam K, Dinstein I (1973) Textural Features for Image Classification. IEEE Transactions on Systems, Man and Cybernetics SMC-3(6):610–621
7. Heath M D, Bowyer K W (2000) Mass Detection by Relative Image Intensity. IWDM Proc., International Workshop on Digital Mammography, Toronto, Canada, 2000, pp 219–225
8. Shiraishi J, Katsuragawa S, Ikezoe J, Matsumoto T, Kobayashi T, Komatsu K, Matsui M, Fujita H, Kodera Y, Doi K (2000) Development of a Digital Image Database for Chest Radiographs With and Without a Lung Nodule. AJR Am J Roentgenol 174:71–74
An improved algorithm for out-of-plane artifacts removal in digital tomosynthesis reconstructions
K. Bliznakova, Z. Bliznakov and N. Pallikarakis
BIT Unit, Department of Medical Physics, School of Health Sciences, University of Patras, 26500, Rio, Patras, Greece

Abstract— Digital Tomosynthesis (DTS) is a method of limited angle reconstruction of tomographic images produced at variable heights, on the basis of a set of angular projections taken in an arc around the human anatomy. Tomograms reconstructed from unprocessed original projection images, however, are invariably affected by tomographic noise, such as blurred images of objects lying outside the plane of interest superimposed on the focused image of the fulcrum plane. This work addresses a post-processing method for reconstructing tomograms based on constructing a noise mask from all planes in the reconstructed volume; this noise is subsequently subtracted from the in-focus plane. The algorithm was applied in conjunction with the Multiple Projection Algorithm (MPA) used to reconstruct planes from the projection images. Comparison between unprocessed and processed tomograms shows that the latter are less noisy and exhibit higher CNR and improved feature contrast for both low- and high-contrast details.

Keywords— Digital Tomosynthesis, Multiple Projection Algorithm, noise removal

I. INTRODUCTION
Digital Tomosynthesis (DTS) is a method of limited angle reconstruction of tomographic images produced at variable heights, on the basis of a set of angular projections [1]. With the development of flat panel detectors during the last five years, this technique has gained great popularity as a potential diagnostic X-ray imaging technique to detect breast microcalcifications, pulmonary nodules in the chest, and early stages of dental caries, to mention but a few. Particularly in breast imaging, studies including clinical cases demonstrated that DTS may provide superior image quality in comparison to conventional mammographic images [2]. In DTS, the imaged volume is reconstructed from the two-dimensional projections to provide three-dimensional structural information of the human anatomy. Acquisition geometries for DTS vary in terms of the systems, detectors and sources of radiation used. In each case low dose projection images (9 to 100 in total) of the human anatomy are acquired over an angular range of 30° to 60°, taken at angular increments of 2° to 5°. In all cases the X-ray source rotates, while the detector either rotates or remains stationary.
Tomosynthetic slices exhibit high resolution in planes that are parallel to the detector plane. The basic algorithms used for reconstruction are the backprojection and shift-and-add methods. However, these algorithms lead to significant tomographic blur from structures that do not lie on the plane of interest, resulting in poor object detectability in the in-focus plane. Two major types of methods have addressed this problem: one category uses pre-processing of the projection data (for example prefiltering of projections with various filters), and another involves post-processing of the reconstructed tomograms. Among the latter are methods that reconstruct the noise, which is then subtracted from the in-focus plane [3]. The Biomedical Technology Unit at the Department of Medical Physics, University of Patras, Greece is recognized as one of the pioneers and major contributors to the development of DTS, including the development, validation and implementation of a DTS imaging prototype based on the Multiple Projection Algorithm (MPA) [4], as well as techniques to improve the quality of the reconstructed slices [3,5]. The objective of this work is to further improve and generalize the noise removal technique reported by Kolitsi et al. [3], developed for the separation and subsequent removal of unrelated structures from the reconstructed planes. The algorithm was tested using simulated and experimental projection data. The results indicate that post-processing of the reconstructed volume should be introduced to increase the feature contrast and decrease the tomosynthetic noise in the reconstructed planes.

II. MATERIALS AND METHODS
A. Description of the algorithm

DTS involves two steps: image acquisition, during which the projection data are acquired, and tomographic reconstruction. Figure 1 depicts the isocenter acquisition geometry used for DTS for the purposes of this study. The x-ray tube and the detector rotate synchronously about a fixed central point, the isocenter. During rotation, projection images at different angles in a limited arc (<60°) are acquired. Tomograms are reconstructed using the Multiple Projection Algorithm (MPA) [4]. This algorithm is equivalent to the backprojection algorithm. While the latter is based on
operations that involve individual pixels, the MPA reconstructs planes of different orientations using transformations of groups of pixels.
B. Evaluation Phantoms

a) Mathematical phantoms. Two simple mathematical phantoms were created for the initial design and testing of the new algorithm. The first phantom comprised three rows of cylinders placed at distances of -5, 0 and 10 mm in the vertical direction away from the isocenter (fig 2a). Each row consisted of 5 cylinders with identical characteristics, i.e. each one with a radius and height of 2 mm, while the material corresponded to Al. They were placed in a polymethylmethacrylate (PMMA) slab. The second mathematical phantom (fig 2b) was similar to the first one, with the cylinders replaced by spheres. The upper row consisted of 5 spheres with a diameter of 0.25 mm, while the middle and lower rows consisted of spheres with radii of 0.5 mm and 2 mm, respectively. The rows are located at z = -10, 0 and 10 mm away from the isocenter.
Fig. 1 DTS acquisition geometry. SID is the source to isocenter distance, SDD is the source to detector distance
For each acquired angle, the projection data are first geometrically projected onto the “image formation plane”, then shifted depending on the position of the reconstructed plane, and normalized to the magnification of the isocenter plane. In this way, non-filtered projections are used to reconstruct the volume of interest. The algorithm for reducing the noise in reconstructed planes is based on noise mask subtraction from the planes of the volume originally reconstructed using MPA (MPA-NM). The algorithm, initially described by Kolitsi et al. [3], was further improved to account for all planes in the reconstructed volume. Specifically, all reconstructed images are projected onto the plane of interest (ipi) that is subjected to noise reduction, for all viewing angles, in order to form noise masks (NM). The final noise mask for the ipi (NMpi) is obtained by summing all noise masks with appropriate weighting:

$$\mathrm{NM}_{pi}=\sum_{i}\mathrm{NM}(i)\,W(i),\qquad W(i)=\frac{a(i)}{\sum_{i}a(i)},\qquad a(i)=\frac{1}{\left|\,i-i_{pi}\,\right|},\quad a(i_{pi})=0 \qquad (1)$$
where i varies within the whole thickness of the reconstructed volume, and W(i) is the weighting coefficient for the corresponding NM(i). NMpi is then subtracted from the reconstructed plane of interest (ipi) in order to obtain a noise-free image. The same procedure is applied to each reconstructed image within the whole volume.

Fig. 2 Mathematical phantoms

b) Hardware phantom. In order to study the performance of the algorithm in the case of an inhomogeneous background, a complex phantom was designed. The phantom was constructed by combining the TOR-MAX and TOR-MAM image quality phantoms (Leeds Test Objects Ltd, Leeds, UK). Simulated versions of the two phantoms are depicted in figure 3. The combined phantom was obtained by clamping the two phantoms together.
Fig. 3 Complex phantom
The region of interest (ROI), through which the beam passes, is shown in the same figure. This region of interest contains 6 mm diameter low-contrast details and 0.25 and 0.5 mm high-contrast details incorporated in the TOR-MAX phantom. The irradiated part of the TOR-MAM phantom contained background structures that mimic breast tissue with embedded microcalcification groups. The total phantom thickness was 3 cm.

Projection images

a) Simulated projection images. Projection images of the two mathematical phantoms were simulated with the X-Ray Imaging Simulator [6], using the acquisition geometry shown in figure 1. Details of the imaging chain parameters are summarized in Table 1. In order to compare with the experimental data, the incident beam was set to parallel. Twenty-one projections, acquired at 2° increments over an acquisition arc of 40°, were simulated. The detector is assumed to have 100% efficiency, i.e. to absorb all incoming photons.
Table 1 Imaging chain parameters

                   Mathematical    Mathematical    Hardware
                   phantom 1       phantom 2       phantom
Detector size      2000 x 2000     3000 x 3000     2048 x 2048
Pixel resolution   50 µm           35 µm           14 µm
SID/SDD, mm        1000 / 1300     1000 / 1300     23000 / 23090
Beam energy        19 keV          19 keV          19 keV

b) Experimental images. Projections of the hardware phantom were acquired at the SYRMEP beamline of the ELETTRA Synchrotron Light Laboratory, Trieste, Italy, as shown in figure 4. The radiation is a polychromatic beam emitted from a bending magnet of the storage ring. The phantom was mounted on a scanning stage that could rotate and move vertically, while the detector was placed on a mechanism that could move horizontally and vertically. The detector used was a water-cooled CCD camera (Photonic Science X-ray Hystar). The phantom was scanned by moving the stage vertically at a speed of 1.865 mm/s, and the angular projections were obtained by rotating the phantom stage to the corresponding angle. The phantom was imaged 21 times in the angular range of -20° to 20° with an angular step of 2°.

Fig. 4 Experimental setup for acquisition of images

Fig. 5 Projection images taken at 0°: (a, b) simulated projections of the mathematical phantoms and (c) experimentally acquired projection

Figure 5 shows projection images at an angle of 0°, i.e. with the source-detector axis perpendicular to the object plane.
III. RESULTS
For each phantom two reconstructed volumes were created using MPA and MPA-NM. Slices were reconstructed every 1 mm. Figures 6 and 7 show the slices reconstructed at planes where objects of interest are simulated. The upper row of each figure displays the reconstructed slices using MPA, while the lower row shows the reconstructions with MPA-NM.
Fig. 6 Tomographic slices obtained with MPA (a-c) and MPA-NM (d-f) for the first mathematical phantom
Similarly, reconstructed slices from the experimental projection data are shown in figure 8. The features reconstructed on a tomosynthesis plane belonging to the TOR-
MAX phantom are depicted in figure 8a for MPA and figure 8c for MPA-NM, respectively. The MPA-NM demonstrates significantly better object appearance and detection. Similarly, figures 8b and 8d display the features reconstructed at a plane belonging to the TOR-MAM phantom; those reconstructed with MPA-NM (fig. 8d) show significantly better visibility.
To quantitatively evaluate the reconstructed image quality, the Contrast-to-Noise Ratio (CNR) was calculated. The CNRs of the three low-contrast circular objects (Fig. 8a, b) are similar for both MPA and MPA-NM. For the high-contrast features on this plane, the MPA-NM demonstrates a ten-fold improvement in CNR compared to using just MPA; similarly, it was calculated that the CNR of the high-contrast features is 3 times higher compared to MPA.
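The paper does not spell out the CNR definition; a commonly used form, assumed in the sketch below, is the absolute difference of the feature and background means divided by the background standard deviation:

```python
import numpy as np

def cnr(image, feature_mask, background_mask):
    """Contrast-to-Noise Ratio of a feature against its background."""
    f = image[feature_mask]          # pixels of the detail of interest
    b = image[background_mask]       # pixels of the surrounding background
    return abs(f.mean() - b.mean()) / b.std()
```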
Fig. 7 Tomographic slices obtained with MPA (a-c) and MPA-NM (d-f) for the second mathematical phantom

Fig. 8 Reconstructed planes that contain objects of interest: (a, b) MPA and (c, d) MPA-NM
IV. DISCUSSION AND CONCLUSIONS
The visual and quantitative assessment of the tomograms shows that the application of noise removal methods to the obtained tomograms significantly improves the visualization of high- and low-contrast features by eliminating the blurred information from other planes. The implementation of the noise removal technique for planes containing high-contrast structures causes “black” wings around the objects. The noise removal is attained by subtracting from the tomosynthetic reconstruction plane a blur mask, which is a properly weighted sum of the restored set of blurred replicas of all tomosynthetic planes. Higher weighting coefficients are assigned to the noise masks of planes near the reconstructed plane of interest. As a result, artifacts due to excessive removal are observed in the final processed image. To diminish them, an automatic search can be performed in these planes to seek similar high-contrast structures that should be excluded from the preparation of the noise mask. The method is especially valuable in the case of a heterogeneous tissue-mimicking background, as it enhances the visibility of all the features by reducing tissue overlap.
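As a compact illustration of the noise-mask subtraction of Eq. (1), here is a minimal Python sketch. The geometric re-projection of each plane onto the plane of interest is omitted (the planes are treated as already co-registered), which is a simplification of the actual MPA-NM:

```python
import numpy as np

def mpa_nm_plane(volume, i_pi):
    """Noise-mask subtraction of Eq. (1) for the plane of interest i_pi.

    `volume` is the MPA-reconstructed stack of planes (planes, H, W).
    NM(i) is approximated here by plane i itself, i.e. the geometric
    re-projection step is skipped.
    """
    n = volume.shape[0]
    idx = np.arange(n, dtype=float)
    a = np.zeros(n)
    other = idx != i_pi
    a[other] = 1.0 / np.abs(idx[other] - i_pi)   # a(i) = 1/|i - i_pi|, a(i_pi) = 0
    w = a / a.sum()                              # W(i) = a(i) / sum_i a(i)
    noise_mask = np.tensordot(w, volume, axes=(0, 0))   # NM_pi
    return volume[i_pi] - noise_mask             # noise-reduced plane
```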
REFERENCES

1. Grant D G (1972) Tomosynthesis - a three-dimensional radiographic imaging technique. IEEE Trans. Biomed. Eng. 19:20–28
2. Poplack S, Tosteson T, Kogel C, Nagy H (2007) Digital breast tomosynthesis: initial experience in 98 women with abnormal digital screening mammography. AJR 189:616–623
3. Kolitsi Z, Panayiotakis G, Pallikarakis N (1993) A method of selective removal of out-of-plane structures in digital tomosynthesis. Med. Phys. 20:47–50
4. Kolitsi Z, Panayiotakis G, Anastassopoulos V, Skodras A, Pallikarakis N (1992) A multiple projection method for digital tomosynthesis. Med. Phys. 19:1045–1050
5. Badea C, Kolitsi Z, Pallikarakis N (1998) A wavelet-based method for removal of out-of-plane structures in digital tomosynthesis. Comput. Med. Imaging Graph. 22(4):309–315
6. Bliznakova K (2003) Study and development of software simulation for X-ray imaging. PhD thesis, University of Patras, Greece

Author: Kristina Bliznakova
Institute: BIT Unit, Department of Medical Physics, School of Health Sciences, University of Patras
Street: University Campus
City: Rio, Patras
Country: Greece
Email: [email protected]
Magnetic Resonance Imaging of Irreversible Electroporation in Tubers
Mohammad Hjouj MS1,* and Boris Rubinsky PhD1,2
1 Center for Bioengineering in the Service of Humanity and Society, School of Computer Science and Engineering, Hebrew University of Jerusalem, Israel, [email protected]
2 Department of Mechanical Engineering and Graduate Program in Biophysics, University of California at Berkeley, USA
* Corresponding author.
Abstract–– Purpose: A preliminary study on the magnetic resonance imaging (MRI) characteristics of non-thermal irreversible electroporation (NTIRE) in (vegetable) tissue. NTIRE is an emerging minimally invasive surgery technique for tissue ablation in which microsecond, high electrical field pulses form permanent nano-scale defects in the cell membrane. Materials and Methods: The potato is used as the simplest conceivable first order tissue model for studying the MRI characteristics of electroporation because, while made of structured cells, it does not have blood flow, it is relatively homogeneous, and cell damage is readily visible through a natural oxidation process of the intracellular enzymes. Clinical NTIRE pulse sequences were applied to potato tubers, and MRI sequences of the treated area were compared with colorimetric photographs. Results: A comparison of T1 weighted, T2 weighted, fluid attenuation inversion recovery (FLAIR) and short TI inversion recovery (STIR) images of NTIRE shows that T1 weighted signals, such as FLAIR, produce brighter images of the treated areas. In contrast, the images of the treated areas are completely lost with liquid-enhancing and fat-eliminating sequences, such as STIR. Conclusion: MRI can image tissue treated by NTIRE. A possible explanation for the findings is that, by producing nanoscale defects in the cell membrane lipid bilayer, NTIRE causes an enhanced signal from the cell membrane lipids, which in the treated cells are no longer bound in a restrictive liquid-gel membrane structure and have more degrees of freedom.

Keywords–– non-thermal irreversible electroporation, cell membrane, MRI, T1 weighted, T2 weighted, FLAIR, STIR.
I. INTRODUCTION

Electroporation, or electropermeabilization, is a phenomenon in which cell membrane permeability to ions and macromolecules is increased by exposing the cell to short (microsecond to millisecond) high electric field pulses. The increase in membrane permeability is associated with the formation of nanoscale defects, or pores, in the cell membrane, leading to the term “electroporation” [1], [2]. Electrical fields that cause electroporation in which the defects
reseal are known to cause “reversible electroporation”. Reversible electroporation of living tissues is the basis for very promising new therapeutic maneuvers in clinical use or under study for clinical implementation [5,6]. Electrical fields in which the electroporation leads to cell death are said to cause “irreversible electroporation” [7]. When the irreversible electroporation pulses do not simultaneously produce thermal damage due to the electrical Joule heating phenomenon [8,9], the electroporation is known as non-thermal irreversible electroporation (NTIRE). Recently, NTIRE has begun to be used as a minimally invasive tissue ablation surgical technique [10]. Other minimally invasive or non-invasive tissue ablation surgical techniques, such as radiation, cryosurgery, ultrasound, radiofrequency and microwave heating, indiscriminately affect all the molecules in the volume of treated tissue. In contrast, non-thermal irreversible electroporation (NTIRE) affects only the cell membrane lipid bilayer, while all the other molecules in the volume of tissue remain intact. Studies on NTIRE can be found in [11-18]. NTIRE has reached the clinical stage and is being tested in over 20 hospitals and research centers worldwide.

Intra-operative medical imaging is central to the successful use of minimally and non-invasive surgery [19]. The outcome of NTIRE in the liver was examined with ultrasound [12-14], which showed a hypoechoic image in the treated area that turns hyperechoic within 24 hours. The image was clearly visible in a vascular organ such as the liver [12,13], but was not clear in avascular prostate tissue [14]. This suggests that the image may be related to blood flow effects caused by the treatment and is only an indirect measure of the NTIRE damage to cells. We have undertaken this study to try to develop a fundamental understanding of the ability of MRI to image NTIRE-affected tissue. We have chosen to perform this first study on the potato for several reasons. Following the replacement concept of the 3 R's approach to animal testing (reduction of the number of animals, refinement of procedures to reduce distress, and replacement of animals with non-animal
techniques [20]), it has been recognized in the field of electroporation that some vegetables can be a proper alternative for studying the bioelectrical aspects of tissue electroporation. In particular, the raw potato tuber is a good choice because any irreversibly electroporated area becomes distinctively darker about 5 hours after electroporation. Such darkening is probably due to an accelerated oxidation of chemical constituents caused by a de-compartmentalization of certain enzymes and substrates [21] that occurs upon the cell membrane lysis caused by electroporation. Furthermore, in this case, the effects of NTIRE on the cell membrane and the consequent MRI imaging can be determined in the absence of blood flow. In this study we examine various MRI imaging sequences and correlate the MRI images with colorimetric photographs to develop a fundamental understanding of NTIRE imaging with MRI.
II. MATERIALS AND METHODS

The study was performed on the Dutch-bred potato (Solanum tuberosum) cv. ‘Desiree’, the world's most popular red skinned, yellow fleshed main crop potato. All the potatoes used in this study were from the same batch. The experiments were performed on 1 cm slices of potatoes, through which two 20 gauge stainless steel needles were inserted parallel to each other at a distance of 1 cm, using a specially designed holder which ensured the repeatability of the experiments. To treat a volume of tissue, all that is required is the insertion of two (or more) needles in such a way that the IRE effect encompasses the undesirable tissue, and the delivery of microsecond to millisecond electrical pulses. The entire surgical procedure is completed within seconds. For electroporation the needles were connected to a conventional electroporator power supply (BTX 830, Harvard Apparatus, Holliston, MA). Various electroporation sequences were chosen according to the calculations in [22], to produce irreversible electroporation while avoiding thermal effects (NTIRE). The electroporation sequences are typical of those used in other IRE experiments [12-14]. Following the electroporation, the potato was introduced into a Marconi Eclipse 1.5 T MRI scanner using a head coil. The potatoes were imaged with various MRI sequences, T1 weighted, T2 weighted, fluid attenuation inversion recovery (FLAIR) and short TI inversion recovery (STIR), at various times after the procedure (for all sequences a 16 cm FOV, 3 mm slice thickness, and no gap were used). Photographs of the MRI imaged slices were taken using a digital camera
(Olympus, 7.1 megapixel) at the same time as the MRI, to correlate the area of oxidation (dark) with the MRI image of the same treated area at the same time.
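As a rough orientation (our arithmetic, not stated explicitly in the paper): with the 1 cm electrode spacing used here, the nominal field strength is the applied voltage divided by the spacing,

$$E \approx \frac{U}{d}, \qquad \text{e.g. } U = 1000\ \mathrm{V},\ d = 1\ \mathrm{cm} \;\Rightarrow\; E \approx 1000\ \mathrm{V/cm},$$

so the 250-2500 V pulses reported below correspond to nominal fields of roughly 250-2500 V/cm.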
III. RESULTS

Figures 1 and 2 are from the part of the study focusing on the MRI characteristics of irreversible electroporation. The electroporation protocol employed 25 pulses of 100 microseconds each, delivered at a frequency of 1 Hz. The figures show MRI images of irreversibly electroporated potatoes as a function of time after the application of the IRE pulses, for different MRI sequences and for different electrode voltages. The imaging sequences were T1 weighted, T2 weighted, FLAIR and STIR. Figure 1 was obtained for IRE with pulses of 250 V, 500 V and 1000 V applied to the electroporation electrodes; the IRE was applied at three different locations on the potato, with the 250 V amplitude pulse at the top location, the 500 V amplitude pulse in the middle and the 1000 V amplitude pulse at the bottom. Figure 2 was obtained for IRE with pulses of 500 V, 1500 V and 2500 V applied to the electroporation electrodes; the IRE was applied at three different locations on the potato, with the 500 V amplitude pulse at the top location, the 1500 V amplitude pulse in the middle and the 2500 V amplitude pulse at the bottom. The figures compare untreated controls with images one hour, three hours, six hours and twelve hours after the electroporation. Figure 3 shows a comparison between colorimetric photographs and FLAIR-MRI images of irreversible electroporation treated potatoes as a function of time after the electroporation, for the various voltages applied to the electrodes and listed on the figures. It is important to notice the regions of interest (ROI) in these figures, where the electroporation was administered. Dark areas in the photographs are areas in which the cell enzymes have oxidized and are indicative of regions of cell damage. Bright areas in the FLAIR-MRI images are indicative of free lipids and also of damage to cells.
IV. DISCUSSION

The regions of interest (ROI) are around the electroporation needles and show up brighter on the MRI images. The first column in Fig. 1 was obtained with T1 weighted sequences; brighter signals are seen in the ROI almost immediately after the electroporation. (The sites at which the needles were inserted are seen as two dark dots in some of the images.)
Fig. 1 Time dependent MRI images of the electroporated potato. From left: first column - T1 weighted images, second column - T2 weighted images, third column - FLAIR images, fourth column - STIR images. From top: first row - untreated controls, second row - one hour after treatment, third row - six hours after treatment, fourth row - twelve hours after treatment. Treatment was done with a sequence of 25 pulses of 100 microseconds, delivered at a frequency of 1 Hz. Three different voltages were applied during the treatment: in the top treated area of the potato the voltage between the electrodes was 250 V, in the middle treated area 500 V and in the bottom treated area 1000 V. The three round bright areas, seen best in the third and fourth columns, are the treated areas
Fig. 3 Comparison between photographic images of the IRE treated potato (left) and FLAIR-MRI images (right). The voltage used for electroporation is listed (500 V at the top, 1500 V in the middle and 2500 V at the bottom). The affected region is dark on the photographs due to oxidation and bright on the MRI due to the signals from the lipids. The dimensions of the affected areas are listed on the images. Times after IRE: A) 1 h, B) 3 h, C) 6 h, D) 12 h
Fig. 2 Time dependent MRI images of the electroporated potato. From left: first column - T1 weighted images, second column - T2 weighted images, third column - FLAIR images, fourth column - STIR images. From top: first row - untreated controls, second row - one hour after treatment, third row - six hours after treatment, fourth row - twelve hours after treatment. Treatment was done with a sequence of 25 pulses of 100 microseconds, delivered at a frequency of 1 Hz. Three different voltages were applied during the treatment: in the top treated area of the potato the voltage between the electrodes was 500 V, in the middle treated area 1500 V and in the bottom treated area 2500 V. The three round bright areas, seen best in the third and fourth columns, are the treated areas.

In T1 weighted images the tissue with the shortest T1 gives the strongest signal. Since fat has the shortest T1, the signal we start seeing after electroporation may come from fat or from any other molecule with a T1 similar to that of fat. The second column in Fig. 1 shows T2 weighted images, in which liquids produce the strongest signal. The signals in the ROI in the second column of Figs. 1 and 2 are not as clearly delineated as in the first column, especially when the electroporation was done with lower voltages, and at earlier times after the electroporation event. Fluid attenuation inversion recovery (FLAIR) images utilize a long inversion time in order to suppress the signal coming from fluid. After applying this sequence we still see a strong (bright) signal in the ROI in the third column of images, which means that the tissue giving this strong signal has a T1 different from that of liquids. The short TI (inversion time) inversion recovery (STIR) sequence is a special case of the inversion recovery - spin echo (IR-SE) pulse sequence. In this sequence TI is chosen to have such a value that the signal from fat, or any tissue with a T1 similar to that of fat, is suppressed. Since it is
known that the T1 of fat is usually the shortest of all the molecules, the inversion time (TI) value is also quite short. The fourth column in Figure 1 shows that the bright images from the ROI have disappeared: STIR suppresses the signal from fat (or anything that has the same T1 as fat). Because any bright image in the ROI is lost in STIR-MRI imaging, the strong signal seen on T1 and FLAIR in the ROI must be caused by fat or a molecule with the same T1 as fat. A possible explanation of this finding could be related to the mechanism of action of irreversible electroporation. The irreversible electroporation mode of cell death involves nanoscale defects in the cell membrane and a breach of the integrity of the cell membrane. One possible explanation for the observation in this study is that the lipids in the intact cell membrane are organized in a tight structure, so the protons in the molecule are not free to vibrate. When nanoscale molecular defects are induced in the cell membrane, the lipids in the membrane gain additional degrees of freedom, resulting in a stronger signal in T1 weighted images. This suggests that when irreversible electroporation is imaged with MRI, a brighter image of the ROI is produced by imaging sequences that suppress the effects of the liquid and enhance the effects of lipids, such as the FLAIR sequence. Obviously this explanation is not yet a proven hypothesis, and much work remains to demonstrate the proposed mechanism. Regardless of the validity of the explanation, we believe that the fact that the images obtained with FLAIR-MRI show the ROI much brighter is of fundamental importance in understanding MRI of electroporation. The ability of certain MRI sequences to detect cell death after minimally invasive surgery with irreversible electroporation may be of importance for monitoring this new minimally invasive surgical procedure.
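For reference, the inversion-time reasoning above follows from standard inversion-recovery physics (not derived in the paper): assuming full longitudinal recovery between repetitions, the signal of a tissue with relaxation time T1 is nulled when

$$M_z(TI) = M_0\left(1 - 2e^{-TI/T_1}\right) = 0 \quad\Longrightarrow\quad TI_{\mathrm{null}} = T_1\ln 2 \approx 0.69\,T_1,$$

which is why the short TI of STIR suppresses short-T1 fat, while the long TI of FLAIR suppresses long-T1 fluid.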
V. CONCLUSION

A study of the magnetic resonance characteristics of irreversible electroporation was performed using the potato as a model system. A variety of MRI sequences were tested to evaluate their ability to detect cell damage from irreversible electroporation as a function of time and in comparison with colorimetric enzyme oxidation tests. The study has shown that sequences weighted toward enhancing the signal from fat and reducing that from liquid, such as FLAIR-MRI, provide a better and much earlier indication of cell damage than the colorimetric tests. The effect may be related to the mechanism of cell damage by irreversible electroporation, which is related to the disruption of the cell membrane lipid bilayer and the formation of permanent nanoscale defects in the membrane. The disruption of the lipid
bilayer may be responsible for the increased signal from the lipids. This finding may become important in developing non-invasive techniques for monitoring tissue damage during minimally invasive surgery with irreversible electroporation.
ACKNOWLEDGEMENT

This study was supported by a gift from the Adelson Family Foundation. We are grateful to the Medical Imaging Department, Makassed Hospital, Jerusalem, for making this study possible.
REFERENCES

1. Weaver JC. Electroporation of cells and tissues. IEEE Transactions on Plasma Science 2000;28(1):24-33.
2. Neumann E, Schaefer-Ridder M, Wang Y, Hofschneider PH. Gene transfer into mouse lyoma cells by electroporation in high electric fields. EMBO J 1982;1(7):841-845.
3. Baker PF, Knight DE. Calcium-dependent exocytosis in bovine adrenal medullary cells with leaky plasma membranes. Nature 1978;276(5688):620-622.
4. Kinosita K Jr, Tsong TT. Hemolysis of human erythrocytes by transient electric field. Proc Natl Acad Sci U S A 1977;74(5):1923-1927.
5. Gehl J. Electroporation: theory and methods, perspectives for drug delivery, gene therapy and research. Acta Physiol Scand 2003;177(4):437-447.
6. Mir LM. Therapeutic perspectives of in vivo cell electropermeabilization. Bioelectrochemistry 2001;53(1):1-10.
7. Rubinsky B. Irreversible electroporation in medicine. Technol Cancer Res Treat 2007;6(4):255-260.
8. Lee RC, Gaylor DC, Bhatt D, Israel DA. Role of cell membrane rupture in the pathogenesis of electrical trauma. J Surg Res 1988;44(6):709-719.
9. Lee RC, Kolodney MS. Electrical injury mechanisms: electrical breakdown of cell membranes. Plast Reconstr Surg 1987;80(5):672-679.
10. Davalos RV, Mir IL, Rubinsky B. Tissue ablation with irreversible electroporation. Ann Biomed Eng 2005;33(2):223-231.
11. Edd JF, Horowitz L, Davalos RV, Mir LM, Rubinsky B. In vivo results of a new focal tissue ablation technique: irreversible electroporation. IEEE Trans Biomed Eng 2006;53(7):1409-1415.
12. Lee EW, Loh CT, Kee ST. Imaging guided percutaneous irreversible electroporation: ultrasound and immunohistological correlation. Technol Cancer Res Treat 2007;6(4):287-294.
13. Rubinsky B, Onik G, Mikus P. Irreversible electroporation: a new ablation modality - clinical implications. Technol Cancer Res Treat 2007;6(1):37-48.
14. Onik G, Mikus P, Rubinsky B. Irreversible electroporation: implications for prostate ablation. Technol Cancer Res Treat 2007;6(4):295-300.
15. Maor E, Ivorra A, Leor J, Rubinsky B. The effect of irreversible electroporation on blood vessels. Technol Cancer Res Treat 2007;6(4):307-312.
16. Maor E, Ivorra A, Leor J, Rubinsky B. Irreversible electroporation attenuates neointimal formation after angioplasty. IEEE Trans Biomed Eng 2008;55(9):2268-2274.
17. Maor E, Ivorra A, Rubinsky B. Non thermal irreversible electroporation: novel technology for vascular smooth muscle cells ablation. PLoS ONE 2009;4(3):e4757.
18. Lavee J, Onik G, Mikus P, Rubinsky B. A novel nonthermal energy source for surgical epicardial atrial ablation: irreversible electroporation. Heart Surg Forum 2007;10(2):E162-167.
19. Gilbert JC, Onik GM, Haddick W, Rubinsky B. The use of ultrasonic imaging for monitoring cryosurgery. 1984. p 5.
20. Russell WMS, Burch RL. The principles of humane experimental technique. London: Methuen & Co. Ltd.; 1959.
21. Ashie INA, Simpson BK. Application of high hydrostatic pressure to control enzyme related fresh seafood texture deterioration. Food Research International;29(5-6):569-575.
22. Davalos RV, Rubinsky B. Temperature considerations during irreversible electroporation. International Journal of Heat and Mass Transfer 2008;51(23-24):5617-5622.
Superposition of activations of SWI and fMRI acquisitions of the motor cortex
M. Matos1, M. Forjaz Secca1,2, and M. Noseworthy3,4
1 Physics Department, Cefitec, Monte de Caparica, Portugal
2 Ressonância Magnética - Caselas, Lisboa, Portugal
3 Electrical and Computer Engineering, School of Biomedical Engineering, McMaster University, Hamilton, Canada
4 Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Canada
Abstract — Functional Magnetic Resonance Imaging (fMRI) has been an important tool for understanding the neural basis of cognition and behavior in the past years. Most studies rely on changes in the blood oxygenation level dependent (BOLD) contrast to get an insight into the metabolic activity of specific areas of the brain, yet the particular physiological phenomenon being measured is not fully understood. The present work aims to identify the correlation between fMRI signals and the venous structures being activated during the same tasks. By co-registering fMRI, Susceptibility Weighted Imaging (SWI) and T1 weighted images we can highlight the specific areas being activated during a behavioral task and correlate the fMRI signal to the spatial location of the active vein closest to the activated cluster. The SWI activation is derived from the subtraction of images obtained during rest and during behavioral tasks, providing images of the venous structures activated during a task. Although most of the SWI subtraction data used was too noisy, the results were quite promising. For one particular case, we managed to apply registration techniques to the fMRI, SWI and T1 weighted image sets, showing coherence between the fMRI activation of the motor cortex and the vein identified in the SWI. Further development of the technique under better controlled conditions is required, in order to reduce the noise and deal with the difficulties we encountered. We hope to add extra information to the problem of the physiological mechanisms that underlie behavioral brain activation.

Keywords— fMRI, BOLD, SWI, brain activation, image registration
I INTRODUCTION

Over the past years, functional Magnetic Resonance Imaging (fMRI) has been used as a tool in the study of the neural basis of cognition and behavior [1], with most studies relying on qualitative changes in the blood oxygenation level dependent (BOLD) contrast, in which hemoglobin is used as an endogenous contrast agent. fMRI measures the correlation of neural activity with the hemodynamic response, which is characterized by a chain of physiologic events [2-5]. The interpretation of BOLD fMRI signals relies on the complex interplay of changes in cerebral blood flow, cerebral blood volume and blood oxygenation [6-7], making the particular physiological phenomenon being measured unclear [8].
I INTRODUCTION Over the past years, functional Magnetic Resonance Imaging (fMRI) has been used as a tool in the study of the neural basis of cognition and behavior [1], with most studies relying on qualitative changes in the blood oxygenation level dependent (BOLD) contrast, in which hemoglobin is used as an endogenous contrast agent. fMRI measures the correlation of neural activity to the hemodynamic response, that is characterized by a chain of physiologic events [2-5]. The interpretation of BOLD fMRI signals relies on the complex interplay of changes in cerebral blood flow, cerebral blood volume and blood oxygenation [6-7], making the particular physiological phenomenon being measured unclear [8].
II MATERIALS AND METHODS

All MRI images were obtained on a 3.0 T Signa GE Healthcare system. For the fMRI we acquired a 28-slice BOLD EPI sequence with an 8-channel phased array head coil, using a flip angle of 90°, TE = 35 ms, TR = 3 s, slice thickness = 5.0 mm, FOV = 24 × 24 cm2 and a 64 × 64 matrix, with a total acquisition time of 282 s. During this acquisition, a motor activation paradigm, consisting of simple closing and opening of the hand in 30 second blocks of activation and rest, was performed.
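To make the timing concrete, the sketch below (not the authors' code) builds the expected BOLD regressor for this design: a 30 s rest/activation boxcar sampled at TR = 3 s over the 282 s run, convolved with a generic gamma-shaped HRF whose parameters are illustrative assumptions.

```python
# Sketch of the expected BOLD regressor for the 30 s block paradigm.
# Assumptions: TR = 3 s, 282 s run (94 volumes), generic gamma HRF.
import numpy as np

TR = 3.0
n_vols = 94                              # 282 s / 3 s
t = np.arange(n_vols) * TR

# Boxcar: alternating 30 s rest and 30 s hand open/close blocks
boxcar = ((t // 30) % 2).astype(float)

# Illustrative gamma-shaped HRF sampled at TR
ht = np.arange(0.0, 30.0, TR)
hrf = ht**5 * np.exp(-ht) / 120.0        # gamma(6) = 5! = 120
hrf /= hrf.sum()

# Expected BOLD response: boxcar convolved with the HRF
regressor = np.convolve(boxcar, hrf)[:n_vols]
```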
The fMRI post-processing was performed with FSL, FMRIB's Software Library (Oxford, UK). The analysis was carried out using FEAT (fMRI Expert Analysis Tool). The following pre-statistics processing was applied: motion correction using MCFLIRT [12]; non-brain removal using BET [13]; spatial smoothing using a Gaussian kernel of FWHM 5 mm; mean-based intensity normalization of all volumes by the same factor; and high-pass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma = 50.0 s). Time series statistical analysis was carried out using FILM with local autocorrelation correction [14]. Z (Gaussianised T/F) statistic images were thresholded using clusters determined by Z > 3.5 and a cluster significance threshold of p = 0.05 [15]. Registration to high resolution was carried out using FLIRT [12, 16]. To visualize the functional MRI in 3D and display the regions of activation on a 3D cortex surface we used the BrainVoyager QX software, Version 2.0.7 (Maastricht, Netherlands). To better differentiate between sulci and gyri and to see the specific localization of the BOLD activation, we applied flattening techniques. Two high resolution 3D SWI sets were acquired with fully velocity-compensated gradient echo sequences. The SWI post-processing (phase filtering) was performed on a GE Advantage Windows workstation. The first set was obtained with the subjects at rest and the other while they performed the previously defined motor task; the movement was performed for the whole length of the acquisition sequence. In order to see the SWI activation of the veins, a series of processing steps was performed in the FSL software to obtain the subtraction of the two SWI image sets. The subtraction was performed with basic commands, already in high resolution space. In order to reduce the noise in the subtraction, we applied the SUSAN tool [17]. With this, we managed to highlight the changes in the blood vessel signal and in the oxyhemoglobin content between the rest and activation tasks [11].
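The subtraction itself is simple image arithmetic; the sketch below reproduces the idea in Python under stated assumptions (NIfTI exports of the two co-registered SWI sets, hypothetical file names), with nibabel standing in for the FSL command-line tools and a plain Gaussian filter standing in for SUSAN's edge-preserving noise reduction.

```python
# Minimal stand-in for the FSL-based SWI subtraction described above.
# Assumes the rest and task SWI sets are co-registered NIfTI files.
import nibabel as nib
import numpy as np
from scipy.ndimage import gaussian_filter

rest = nib.load("swi_rest.nii.gz")    # hypothetical file names
task = nib.load("swi_task.nii.gz")

# Task-minus-rest difference exposes oxygenation-related vessel change
diff = task.get_fdata() - rest.get_fdata()

# Simple smoothing as a rough substitute for SUSAN noise reduction
diff_smooth = gaussian_filter(diff, sigma=1.0)

nib.save(nib.Nifti1Image(diff_smooth, task.affine), "swi_subtraction.nii.gz")
```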
III RESULTS

Although most SWI subtraction data was too noisy to give a proper subtraction, for one particular case the SWI subtraction was very evident and a 3D representation of the activated vein was obtained (Fig. 2). By applying registration techniques, we managed to superimpose both the fMRI activation and the SWI activated vein on the 3D FSPGR (Fig. 3), located exactly on the area specific for the motor task performed [18]. The activated vein actually lies in the sulcus, which can be located on the flattened brain by raising the activation threshold up to a certain level (Fig. 4). This allows us to get a good spatial correlation of the various processes involved in brain activation.

Fig. 2: 3D representation of the activated vein.

Fig. 3: Superposition of SWI and fMRI BOLD, on a 3D FSPGR anatomical sequence.

Fig. 4: BOLD activations on 3D cortex surface (a) and on flattened space (b).
IV CONCLUSIONS
We were able to use post-processing to reveal coherence between the fMRI activation of the motor cortex and the vein identified in the SWI in only one of our 5 study subjects, due to the noise involved in the SWI subtraction images. However, in this case, the images were quite promising. We need to further develop the technique under better controlled conditions, in order to reduce the noise and deal with the difficulties we encountered, and we hope to add some extra information on the problem of the physiology of brain activation.
V REFERENCES

1. Nair D. About being BOLD. Brain Research Reviews, 2005. 50: p. 229-243.
2. Glover G. Deconvolution of impulse response in event-related BOLD fMRI. NeuroImage, 1999. 9: p. 416-29.
3. Martindale J, Mayhew J, Berwick J, Martin M, et al. The hemodynamic impulse response to a single neural event. Journal of Cerebral Blood Flow and Metabolism, 2003. 23: p. 546-55.
4. Yang Y, Engelien W, Pan H, Xu S, Silbersweig D, Stern E. A CBF-based event-related brain activation paradigm: characterization of impulse-response function and comparison to BOLD. NeuroImage, 2000. 12: p. 287-97.
5. Pu Y, Liu HL, Spinks J, Mahankali S, Xiong J, et al. Cerebral hemodynamic response in Chinese (first) and English (second) language processing revealed by event-related functional MRI. Magnetic Resonance Imaging, 2001. 19: p. 643-47.
6. Fox P, Raichle M. Focal physiological uncoupling of cerebral blood flow and oxidative metabolism during somatosensory stimulation in human subjects. Proc. Natl. Acad. Sci., 1986. 83: p. 1140-44.
7. Fox P, Raichle M, Mintun M. Nonoxidative glucose consumption during focal physiologic neural activity. Science, 1988. 241: p. 462-64.
8. Logothetis N. The ins and outs of fMRI signals. Nature Neuroscience, 2007. 10: p. 1230-1232.
9. Haacke EM, Xu Y, Cheng YC, Reichenbach JR. Susceptibility weighted imaging (SWI). Magn Reson Med, 2004. 52: p. 612-618.
10. Ogawa S, Lee TM, Nayak AS. Oxygenation-sensitive contrast in magnetic resonance image of rodent brain at high magnetic fields. Magn. Reson. Med., 1990. 14: p. 68-78.
11. Secca M, Noseworthy M, Fernandes H, Koziak A. SWI brain vessel change coincident with fMRI activation. IFMBE Proceedings, 2008. 22: 1680-0737.
12. Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage, 2002. 17(2): p. 825-841.
13. Smith S. Fast robust automated brain extraction. Human Brain Mapping, 2002. 17(3): p. 143-155.
14. Woolrich M, Ripley B, Brady J, Smith S. Temporal autocorrelation in univariate linear modelling of fMRI data. NeuroImage, 2001. 14(6): p. 1370-1386.
15. Worsley K, Evans A, Marrett S, et al. A three-dimensional statistical analysis for CBF activation studies in human brain. Journal of Cerebral Blood Flow and Metabolism, 1992. 12: p. 900-918.
16. Jenkinson M, Smith SM. A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 2001. 5(2): p. 143-156.
17. Smith SM, Brady JM. SUSAN - a new approach to low level image processing. International Journal of Computer Vision, 1997. 23(1): p. 45-78.
18. Yousry T, Schmid U, Alkadhi H, Schmidt D, Peraud A, Buettner A, Winkler P. Localization of the motor hand area to a knob on the precentral gyrus. Brain, 1997. 120: p. 141-157.
Contact: Mário Forjaz Secca
Institute: Physics Department, Cefitec, Univ. Nova de Lisboa
Street: Quinta da Torre, 2829-516
City: Monte da Caparica
Country: Portugal
Email: [email protected]
A New Optical Method for Measuring Creatinine Concentration During Dialysis

I. Fridolin1, J. Jerotskaja1, K. Lauri1, F. Uhlin1,3 and M. Luman2

1 Department of Biomedical Engineering, Tallinn University of Technology, Ehitajate Rd 5, 19086 Tallinn, Estonia
2 Department of Dialysis and Nephrology, North-Estonian Medical Centre, J.Sütiste Rd 19, 13419 Tallinn, Estonia
3 Department of Nephrology, University Hospital, Linköping, S-581 85 Linköping, Sweden
Abstract— The aim of this study was to compare measurements of the creatinine (Cr) concentration removed during dialysis by two optical algorithms, based on single-wavelength and multiwavelength UV-absorbance. Ten uremic patients, three females and seven males, mean age 62.6 ± 18.6 years, on chronic thrice-weekly hemodialysis were included in the study. A double-beam spectrophotometer (Shimadzu UV-2401 PC, Japan) was used for the determination of UV-absorbance in the collected spent dialysate samples. Two optical algorithms were developed to calculate the Cr concentration removed during dialysis from the measured UV-absorbance: (i) an algorithm utilizing only a single wavelength, giving the Cr concentration Cr_sw; (ii) an algorithm utilizing several wavelengths (multiwavelength algorithm), giving the Cr concentration Cr_mw. The mean value of Cr estimated at the laboratory was 107 ± 46.7 micromol/l, while by UV-absorbance it was 107 ± 42.7 micromol/l as Cr_sw (242 nm) and 107 ± 44.7 micromol/l as Cr_mw. The mean concentrations were not significantly different (P = 0.99). The systematic errors, using Cr_lab as a reference, were -2.7% for Cr_sw and -1.7% for Cr_mw, and the random errors were 17.3% and 13.6% for Cr_sw and Cr_mw, respectively. The systematic error was not significantly different between the two optical algorithms (P = 0.25). The random error decreased significantly (P < 0.05) using the Cr_mw algorithm compared to the Cr_sw model. In summary, the creatinine concentration removed during dialysis can be estimated with the UV-absorbance technique.

Keywords— Creatinine, absorbance, dialysis, monitoring, nutrition.
I. INTRODUCTION
A study initiated by the European Renal Association-European Dialysis and Transplantation Association (ERA-EDTA) QUality European STudies (QUEST) initiative pointed out that the European Best Practice Guidelines within end-stage renal disease care are implemented unsatisfactorily regarding the assessment of dialysis quality [1]. One reason could be the discomfort of blood sampling and laboratory analysis in everyday clinical practice. On-line monitoring of dialysis quality offers a new perspective on this bottleneck. Several spectrophotometric sensors for on-line monitoring of total ultra-violet (UV) absorbance or urea in the spent dialysate have been presented, aiming to
follow a single hemodialysis session continuously [2], [3], [4]. According to the guidelines, protein-energy malnutrition should be avoided in maintenance haemodialysis because of poor patient outcome [5]. Malnutrition should be diagnosed by a number of assessment tools, including normalized Protein Nitrogen Appearance (nPNA), formerly normalized Protein Catabolic Rate (nPCR), a nutrition parameter based on the marker molecule urea. nPNA is usually estimated using blood samples and the patient's body weight. An interesting alternative is nPNA estimation by the UV-absorbance technique [6]. Cr-based parameters, the Creatinine Index (CI) and Lean Body Mass (LBM), are related to the preservation of muscle mass and protein nutritional status and can be seen as markers of protein-energy malnutrition. Moreover, CI and LBM predict survival better than the urea-based Kt/V and nPNA, and are more stable than nPNA, which is highly dependent on protein intake and dialysis dose [7]. To effectively implement CI and LBM into clinical practice, an instrument capable of measuring the Cr concentration removed during dialysis would be favorable. An earlier study by our group [8] indicated that approximately 90% of the cumulative and integrated UV-absorbance measured by the optical dialysis adequacy sensor originates from the 10 main peaks for a particular dialysis treatment. A small water-soluble uremic solute, creatinine, was responsible for one of the main peaks. Furthermore, the most interesting wavelength region for further investigations regarding instrumental design for Cr concentration estimation by the UV-technique seemed to be around 237 nm [9]. The aim of this study was to compare measurements of the Cr concentration removed during dialysis by two optical algorithms, based on single-wavelength and on multiwavelength UV-absorbance.
II. PATIENTS

This study was performed after the approval of the protocol by the Tallinn Medical Research Ethics Committee at the National Institute for Health Development in Estonia.
Informed consent was obtained from all participating patients. Ten uremic patients, three females and seven males, mean age 62.6 ± 18.6 years, on chronic thrice-weekly hemodialysis were included in the study. Three different polysulphone dialysers, F8 HPS (N=14), F10 (N=3), and FX80 (N=13) (Fresenius Medical Care, Germany), with effective membrane areas of 1.8 m2, 2.2 m2, and 1.8 m2, respectively, were used. The dialysate flow was 500 mL/min and the blood flow varied between 245 and 350 mL/min. The dialysis machine used was a Fresenius 4008H (Fresenius Medical Care, Germany).
III. MATERIALS AND METHODS

Spent dialysate samples were taken at 10, 60, 120, and 180 minutes after the start of the dialysis session, and at the end. The concentration of creatinine was determined at the Clinical Chemistry Laboratory of the North-Estonian Medical Centre (Cr_lab) using standardized methods. The accuracy of the methods for the determination of Cr in dialysate was ±5%. A double-beam spectrophotometer (Shimadzu UV-2401 PC, Japan) was used for the determination of UV-absorbance in the collected spent dialysate samples. Spectrophotometric analysis over a wavelength range of 190-380 nm was performed in an optical cuvette with an optical path length of 1 cm. Some of the measured values (absorbance or concentration) were excluded from the data before the calculation of the correlation coefficient r; the exclusion criteria were incorrect or illogical values of the measured concentration or absorbance. In addition to Cr_lab, two optical algorithms were used to calculate the Cr concentration removed during dialysis from the measured UV-absorbance, utilizing the self-prediction method: (i) an algorithm utilizing only a single wavelength, giving the Cr concentration Cr_sw; (ii) an algorithm utilizing several wavelengths (multiwavelength algorithm), giving the Cr concentration Cr_mw. The multiwavelength algorithm for calculating the Cr concentration was obtained using regression analysis, with the absorbances at wavelengths within 230-330 nm as the independent parameters. The Cr concentration removed during dialysis according to the three methods was finally compared with regard to mean values and SD. The random error was also calculated for the different methods as the SD over the sessions' Accuracy. For a single Cr value, Accuracy was

Accuracy = (Cr_lab - Cr_sw) / Cr_lab × 100%   (1)

where Cr_lab and Cr_sw are the Cr concentration values in the spent dialysate from the laboratory and from the single-wavelength UV-absorbance model, respectively. The Cr concentration from the multiwavelength algorithm, Cr_mw, was used instead of Cr_sw when Accuracy was calculated for Cr_mw. The systematic error and the random error were calculated as the mean value and the SD of Accuracy over the total material. Student's t-test (two-tailed) and Levene's Test of Homogeneity of Variances were used to compare the means of the different methods and the SD values, respectively.

IV. RESULTS

Figure 1 shows an example of the absorbance spectrum obtained from the spent dialysate samples at different time moments over a wavelength range of 210-380 nm during a single dialysis session. A lower UV-absorbance value is measured at all wavelengths as time increases, due to the decreasing concentration of the UV-absorbing compounds in the blood, which are transported through the dialyser into the dialysate and removed from the blood during the dialysis treatment.

Fig. 1 An example of the absorbance spectrum obtained over a wavelength range of 210-380 nm on the spent dialysate samples at different times (10, 60, 120, 180 and 240 min) during a single dialysis session.
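For illustration, the two model types and the error metric of Eq. (1) can be sketched as follows; `A242`, `A_sel` and `cr_lab` are hypothetical arrays (absorbance at 242 nm, absorbances at the selected wavelengths, and laboratory Cr values), and the code is not the authors' implementation.

```python
# Sketch of the single- and multiwavelength models and the Accuracy metric.
import numpy as np

def fit_single_wavelength(A242, cr_lab):
    """Cr_sw = k * A(242 nm) + c, ordinary least squares."""
    k, c = np.polyfit(A242, cr_lab, 1)
    return k, c

def fit_multiwavelength(A_sel, cr_lab):
    """Multiple linear regression on several selected wavelengths."""
    X = np.column_stack([A_sel, np.ones(len(A_sel))])    # add intercept
    coef, *_ = np.linalg.lstsq(X, cr_lab, rcond=None)
    return coef

def accuracy(cr_lab, cr_est):
    """Per-sample Accuracy, Eq. (1): the mean gives the systematic error,
    the SD gives the random error."""
    acc = (cr_lab - cr_est) / cr_lab * 100.0
    return acc.mean(), acc.std(ddof=1)
```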
The correlation coefficient r between the UV-absorbance over a wavelength range of 210-380 nm and the creatinine concentration in the spent dialysate (Fig. 2) shows the highest correlation for creatinine (0.913) at the wavelength 242 nm. Figure 3 shows the scatter plot of the predicted Cr_sw at 242 nm, and of Cr_mw, against Cr_lab, illustrating the linear
relationship between the Cr concentration measured at the laboratory and that estimated by UV-absorbance.

Fig. 2 Value of the correlation coefficient r between UV-absorbance over a wavelength range of 210-380 nm and creatinine in the spent dialysate.

The final Cr_mw model included the wavelengths 238, 242, 249, 259, 263 and 268 nm. A higher correlation coefficient r and determination coefficient R2 are obtained using the Cr_mw model (Cr_mw: y = 0.916x + 9.04, r = 0.957, R2 = 0.916; Cr_sw at 242 nm: y = 0.834x + 17.81, r = 0.913, R2 = 0.834).

Fig. 3 The scatter plot of the predicted Cr_sw (marked as "o", 242 nm) and Cr_mw (marked by rectangles) against Cr_lab (N = 119).

The mean value of Cr estimated by Cr_lab was 107 ± 46.7 micromol/l, 107 ± 42.7 micromol/l by UV-absorbance as Cr_sw (242 nm), and 107 ± 44.7 micromol/l as Cr_mw. The mean values were not statistically different (P = 0.99). The SDs were not significantly different (P = 0.76) for any of the methods. Figure 4 presents the systematic and the random error for the Cr_sw (242 nm) and Cr_mw models using Cr_lab as a reference. The systematic error was -2.7% for the Cr_sw and -1.7% for the Cr_mw model. The systematic error decreased when the Cr_mw algorithm was applied, although it was still not significantly different (P = 0.25) from that of the Cr_sw model. The random error using Cr_lab as a reference was 17.3% for Cr_sw and 13.6% for Cr_mw. The random error decreased significantly using Cr_mw (P < 0.05) compared to the Cr_sw model.

Fig. 4 The systematic and the random errors for the Cr_sw and Cr_mw models using Cr_lab as the reference.

V. DISCUSSION
The highest r value in the spent dialysate for creatinine was obtained at the wavelength 242 nm (Fig. 2), which is very close to earlier results, in which the highest r value for creatinine in the spent dialysate was obtained at the wavelength 237 nm [9]. The regression models utilizing the self-prediction method gave one algorithm utilizing only a single wavelength and one algorithm utilizing several wavelengths (multiwavelength algorithm), which estimated the Cr concentration as Cr_sw and Cr_mw, respectively. The Cr concentrations by means
of the mean values ± SD estimated by Cr_lab, by UV-absorbance as Cr_sw (242 nm), and as Cr_mw were not different. This indicates that optical measurement of the creatinine concentration removed during dialysis, based on single-wavelength and multiwavelength UV-absorbance methods, is possible. A decrease of both the systematic and the random error using Cr_mw occurred, and the random error was significantly different (P < 0.05) from the Cr_sw model (Fig. 4). This means that predicting the Cr concentration in the spent dialysate with the model including six wavelengths improves Accuracy compared to the one-wavelength model on the studied material. The final algorithm should probably take into account several parameters relating to the diffusive and convective transport over the membrane, as these differ depending on the type of dialyser. Moreover, cross-validation should be applied to the obtained algorithms, utilizing data material not included in the model build-up, to further prove the validity of the models. The material used to validate the model should preferably include a new set of values for the model parameters which did not exist during the model build-up (e.g. new patients, dialysis filters, etc.). Including new patients is the most sensitive option, due to the possibly different composition of the UV-absorbing compounds filtered from the blood into the dialysate during dialysis. In summary, utilizing UV-absorbance, the creatinine concentration removed during dialysis can be estimated with reasonable accuracy. The UV-method is versatile, does not interfere with the dialysis machine's operation, requires no blood samples, disposables or chemicals, is fast, and allows a single hemodialysis session to be followed continuously and deviations in dialysis efficiency to be monitored. Validation of the algorithm with data material not included in the model build-up will be an issue for further studies.
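A leave-one-patient-out scheme of the kind proposed above could be organized as in the sketch below; `patient_ids`, `X` and `y` are hypothetical arrays (per-sample patient labels, absorbances at the selected wavelengths, Cr_lab values), and this validation was not performed in the present study.

```python
# Sketch of leave-one-patient-out cross-validation of the Cr_mw model.
import numpy as np

def loo_patient_cv(patient_ids, X, y):
    accuracies = []
    for pid in np.unique(patient_ids):
        test = patient_ids == pid                 # hold out one patient
        Xtr = np.column_stack([X[~test], np.ones((~test).sum())])
        coef, *_ = np.linalg.lstsq(Xtr, y[~test], rcond=None)
        Xte = np.column_stack([X[test], np.ones(test.sum())])
        pred = Xte @ coef
        accuracies.append((y[test] - pred) / y[test] * 100.0)
    return np.concatenate(accuracies)             # Accuracy on unseen patients
```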
VI. CONCLUSIONS

The presented results show the possibility of optically estimating the creatinine concentration removed during dialysis by utilising UV-absorbance. The multiwavelength UV-absorbance method seems to improve the measurement accuracy in terms of relative error compared to the algorithm based on a single wavelength. A more general validation of the UV-technique for creatinine concentration measurements in the spent dialysate should be performed in future studies.

ACKNOWLEDGMENT

The authors wish to thank Galina Velikodneva for assistance during the clinical experiments, Rain Kattai for skillful technical assistance, and also those dialysis patients who so kindly participated in the experiments. The study was partly supported by the Estonian Science Foundation Grant No 6936, the Estonian targeted financing project SF0140027s07, and by the European Union through the European Regional Development Fund.

REFERENCES

1. Couchoud C, Jager KJ, et al. (2009) Assessment of urea removal in haemodialysis and the impact of the European Best Practice Guidelines. Nephrology Dialysis Transplantation 24(4): 1267-1274
2. Uhlin F, Fridolin I, et al. (2006) Dialysis dose (Kt/V) and clearance variation sensitivity using measurement of ultraviolet-absorbance (on-line), blood urea, dialysate urea and ionic dialysance. Nephrol Dial Transplant 21(8): 2225-2231
3. Jensen P, Bak J, et al. (2004) Online monitoring of urea concentration in dialysate with dual-beam Fourier-transform near-infrared spectroscopy. J Biomed Opt 9(3): 553-557
4. Olesberg JT, Arnold MA, et al. (2004) Online measurement of urea concentration in spent dialysate during hemodialysis. Clin Chem 50(1): 175-181
5. Fouque D, Vennegoor M, et al. (2007) EBPG Guideline on Nutrition. Nephrol Dial Transplant 22 [Suppl 2]: ii45-ii87
6. Luman M, Jerotskaja J, et al. (2009) Dialysis dose and nutrition assessment by an optical on-line dialysis adequacy monitor. Clinical Nephrology 72(4): 303-311
7. Desmeules S, Levesque R, et al. (2004) Creatinine index and lean body mass are excellent predictors of long-term survival in haemodiafiltration patients. Nephrology Dialysis Transplantation 19(5): 1182-1189
8. Lauri K, Tanner R, et al. (2006) Optical dialysis adequacy sensor: contribution of chromophores to the ultra violet absorbance in the spent dialysate. Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York City, New York, USA, pp 1140-1143
9. Jerotskaja J, Lauri K, et al. (2007) Optical dialysis adequacy sensor: wavelength dependence of the ultra violet absorbance in the spent dialysate to the removed solutes. Proceedings of the 29th Annual International Conference of the IEEE EMBS, Lyon, France, August 23-26, 2007
Author: Ivo Fridolin
Institute: Department of Biomedical Engineering, Technomedicum, Tallinn University of Technology
Street: Ehitajate tee 5
City: 19086 Tallinn
Country: Estonia
Email: [email protected]
The Role of Viscous Damping on Quality of Haptic Interaction in Upper Limb Rehabilitation Robot: A Simulation Study

J. Oblak1, I. Cikajlo1, T. Keller2, J.C. Perry2, J. Veneman2, and Z. Matjačić1

1 University Rehabilitation Institute, Ljubljana, Republic of Slovenia
2 Fatronik-Tecnalia, Biorobotics/Health Department, Donostia/San Sebastian, Spain
Abstract— In this paper we present a haptic device developed for the neurological rehabilitation of the upper limb. The main feature of the device is a variable parallel mechanism: by simple mechanical reconfiguration of the joints in the parallel mechanism, the device can be used in different operational modes, which enables gradual rehabilitation of the upper limb. The device is actuated by a 2-DOF haptic drive based on a Series Elastic Actuator (SEA). A simulation study was undertaken to explore the effect of added viscous damping in the actuation of the device. The results of the simulation indicate that the addition of a damping element improves the haptic performance.

Keywords— Haptics, neurological rehabilitation, arm, wrist, series elastic actuation, damper.
I. INTRODUCTION

The number of patients suffering from motor function disorders following stroke has substantially increased. There is a great effort to upgrade the conventional rehabilitation methods carried out by therapists. Typically, rehabilitation after stroke consists of repeating simple tasks, which demands a lot of therapist time. That places a financial strain on the health care system, which is usually followed by reduced quantity and quality of rehabilitation. Rehabilitation robotics is a rapidly evolving field due to its potential to increase the ratio between the outcome and the cost of rehabilitation. Numerous haptic devices have been developed and successfully introduced into clinical rehabilitation of the upper limb [1, 2]. The drawback of the referenced devices is their limited number of Degrees of Freedom (DOF), which provides a training environment for only one of the fundamental upper extremity movements: either arm reaching or wrist orienting. Devices such as [3, 4], with more DOFs (up to 7), can provide an environment to train both arm reaching and wrist orienting. The disadvantage of multi-DOF devices is a complex, and therefore expensive, design, which impedes their widespread use. Our goal was to build a low cost universal haptic device (UHD) [5] for comprehensive upper limb rehabilitation. The main feature of the UHD is a variable parallel mechanism, which enables gradual rehabilitation of the complete upper limb in different mechanical modes, with only a 2-DOF drive.
Fig. 1 A photograph of the UHD prototype

The second important feature is the UHD's drive, which is based on the Series Elastic Actuation (SEA) principle [6]. A photograph of the actual UHD system is presented in Fig. 1.
II. DIFFERENT MODES OF OPERATION

Very few rehabilitation robots have implemented a parallel kinematic structure, which may be due to their inherently limited workspace, deemed unsuitable for the purposes of movement rehabilitation. On the other hand, parallel mechanisms usually have mechanical joints with many DOFs that greatly exceed the resulting DOFs of the whole mechanism. The main parts of the parallel mechanism presented in Fig. 2 A are three joints (I, II, III) with the option of being locked or released. By locking or releasing DOFs in those joints, we
can completely change the mechanical configuration of the rehabilitation device, which enables the use of the device in three operational modes: “ARM”, “WRIST” and “REACH” mode. An analysis by means of the Grübler equation [7] reveals that the device has exactly 2 DOFs in each mechanical mode configuration, which is in agreement with the 2-DOF drive requirement (a mobility check is sketched below). “ARM” mode: the feasible training workspace derived from pick & place experiments in a healthy population has shown that a human moves the arm toward a given target position in a way that requires only 2 DOFs, while the amplitude of these movements is limited to 20 cm away from the body [8]. By locking joint I and releasing the DOFs in joints II and III, we can achieve 2-DOF quasi-planar movements in the forward/backward/left/right directions, as shown in Fig. 2 B. It is important to point out that the forearm, which is strapped to the support, remains parallel to the ground at all times. We thereby overcame the issue of gravity compensation in the case when the user is not able to keep the elbow up. “WRIST” mode: in the mechanical configuration termed “WRIST” mode, a subject holding the handle bar can perform movements of the wrist, see Fig. 2 C. The role of the parallel mechanism in this mode, where joints II and III are locked and joint I is released, is that the forearm is rigidly strapped to the forearm support. For this reason, the center of the patient's wrist is aligned with the center of the device's DOF, which is a requirement for optimal rehabilitation. The orientation of the handle bar can be easily adjusted. The advantage of this option is that the same handle bar can be used for both right and left arm rehabilitation, simply by re-orienting the offset position of the handle bar. Furthermore, 3 DOFs are needed for complete wrist and forearm exercising: flexion/extension, radial/ulnar deviation and pronation/supination. However, the UHD allows only 2-DOF actuated movements. By setting the offset orientation of the handle bar in the horizontal or vertical position, we can achieve alternating activation of all 3 DOFs, see Fig. 2 C. “REACH” mode: by locking joints I and II and releasing the DOFs in joint III, the mechanical configuration of the device ensures training of forward/lateral reach movements, which are very common in activities of daily living.
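The mobility check mentioned above can be reproduced with a few lines; the sketch below implements the Grübler (Kutzbach) criterion in its generic form, and the worked example is a classic planar four-bar linkage rather than the UHD's actual joint inventory.

```python
# Grübler (Kutzbach) mobility criterion: M = lam*(n - 1 - j) + sum(f_i),
# with lam = 6 for spatial and lam = 3 for planar mechanisms.
def grubler_mobility(n_links, joint_dofs, spatial=True):
    lam = 6 if spatial else 3
    j = len(joint_dofs)
    return lam * (n_links - 1 - j) + sum(joint_dofs)

# Example: planar four-bar linkage, 4 links and 4 revolute joints -> M = 1
print(grubler_mobility(4, [1, 1, 1, 1], spatial=False))
```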
Fig. 2 (A, B) The main feature of the device is a variable parallel mechanism, which enables using the device in different operational modes; switching between modes is easily achieved by locking or releasing joints I, II, III. The workspaces in “ARM” mode (B), “WRIST” mode (C) and “REACH” mode (D) are presented

III. ACTUATION

The actuation design of the UHD is presented in Fig. 3 A. It consists of two sets of DC motors with gears and encoders, which are connected in series with elastic springs by means of string wires and pulleys. The string wires are connected to the actuated bar perpendicularly to one another, which makes it possible to drive the UHD in 2 DOFs. Introducing an elastic element in series with the motor provides many benefits in force control; the principle is known
as the Series Elastic Actuator (SEA). These benefits include more accurate and stable force control, lower reflected inertia, and attenuation of the effects of backlash and friction nonlinearities [6]. The use of compliance in the actuators also results in reduced forces during an accidental impact, which means better safety performance. However, these benefits come with one shortcoming, which relates to the reduction of the achievable bandwidth: the stiffer the spring, the larger the bandwidth of the actuation system that can be achieved [6]. Therefore, the selection of a suitable spring stiffness requires a compromise between the above two contradicting requirements. We experimentally determined the spring stiffness to be 8000 N/m, which enables an actuator bandwidth of 2 Hz. This is sufficient, since the UHD device is predominantly intended for rehabilitation purposes, where relatively slow movements can be expected during training. The device performance from the perspective of actuation is described in detail in [5]. Because we are using an impedance control strategy, the required force we want to exert on the patient's hand is set by the selection of the virtual impedance. However, there are two distinct situations. The first situation is when we want to simulate a “LOW” impedance environment; in other words, the patient should not feel any force while performing movements in the UHD workspace. This situation is typical for patient-in-charge oriented exercises. The other situation occurs when we want to simulate the largest possible resistive force, to prevent the patient from moving in a certain direction. This situation is common for the robot-in-charge mode, or when we want to simulate a “HIGH” impedance environment. In Fig. 3 B, a simple linear model of an actuator with a series elastic element in one DOF is illustrated, where all parameters and variables are converted from rotational to translational motion. FM is the motor's force converted from the motor torque; K (8000 N/m) is the stiffness of the spring, while xL and xM are the positions of the load and the motor. The values M (36.7 kg) and B (400 Ns/m) stand for the reflected mass and viscous damping of the motor and planetary gearhead [5]. We estimated that the reflected mass at the bottom of the actuated bar equals approximately m = 20 kg. Our goal in this study was to find a value of viscous damping (b) on the load (actuated bar) which would ensure optimal impedance-based force control, see Fig. 3 B and C.
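A rough numerical rendering of the 1-DOF model in Fig. 3 B is sketched below using the stated M, B and K; the prescribed load motion, the proportional force-control gain and the integration step are illustrative assumptions, and the controller is a simplified stand-in for the impedance control strategy of Fig. 3 C, not the authors' Simulink model.

```python
# Sketch of the 1-DOF SEA: motor mass M (damping B) drives a spring K whose
# deflection sets the force on the load; the load position x_l is prescribed.
import numpy as np

M, B, K = 36.7, 400.0, 8000.0     # values quoted in the text
Kc = 20.0                         # hypothetical force-control gain
dt, T = 1e-4, 5.0

x_m = v_m = 0.0
errs = []
for i in range(int(T / dt)):
    t = i * dt
    x_l = 0.02 * np.sin(2 * np.pi * 0.5 * t)   # prescribed load motion
    F_L = K * (x_m - x_l)                       # spring force on the load
    F_V = 0.0                                   # "LOW" impedance: render zero force
    F_m = Kc * (F_V - F_L)                      # proportional force controller
    a_m = (F_m - B * v_m - F_L) / M             # motor-side dynamics
    v_m += a_m * dt                             # semi-implicit Euler step
    x_m += v_m * dt
    errs.append(abs(F_V - F_L))

print("mean |F_V - F_L| =", np.mean(errs), "N")
```

In a simulation of this kind, adding a damping term on the load side is what allows the controller gain to be raised while the system remains stable, which is the effect the study quantifies.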
IV. RESULTS OF SIMULATION STUDY

Fig. 3 (A, B) The actuation of the UHD consists of: 1 - DC motors with gears and encoders, 2 - elastic springs and string wires, 3 - linear potentiometers, 4 - pulleys and 5 - the actuated bar. (B) Linear model of the actuator in 1 DOF. (C) Schematic diagram of the impedance control strategy used in the simulation study

A simulation study based on the model presented in Fig. 3 was made in Simulink (MATLAB). The values of the parameters M, B and m matched the real system, while the viscous damping of the load (b), which is negligible in the real system, was varied from 1 Ns/m up to 500 Ns/m. The input signal in the simulation model was a sinusoidal movement of the load with an amplitude of 0.02 m, which corresponds to ±8 cm movements of the handle bar in “ARM” mode, see Fig. 3 C. The simulation was repeated with three different frequencies/speeds of the input signal. Our principal goal is to build a low cost rehabilitation device, and low cost mechanical components are usually accompanied by significant backlash. For this reason, simulations were made for two different values of backlash (0.001 and 0.004 m). The UHD performance was verified for both “extreme” situations, the “LOW” and “HIGH” impedance environments, where the quality was determined by comparing the desired/virtual
force FV and the force on the load FL, see Fig. 3 C. The results of the simulation study are presented in Fig. 4.

Fig. 4 The plots present the results of the simulation study, in which the impact of the load's viscous damping on the haptic performance was investigated. The quality of the haptic interaction was estimated by comparing the desired/virtual force FV and the force on the load FL. In the plots, quality is expressed as the average difference between FV and FL; the minimum of each curve therefore represents optimal haptic interaction
V. DISCUSSION

It is obvious from Fig. 4 that the haptic performance can be greatly improved by introducing viscous damping on the load of at least b = 100 Ns/m. The improvement of the haptic performance is a consequence of the fact that, by introducing viscous damping on the load, the regulator gain can be increased while the system remains stable. The simulation study also revealed that the haptic interaction does not change critically when greater backlash has to be dealt with; for that reason, low cost mechanical parts can be used. In the real system, viscous damping can easily be implemented by mounting a mechanical damping element on the actuated bar, as depicted in Fig. 3 B. Further work will focus on the implementation of a real mechanical damper on the UHD.
ACKNOWLEDGMENT

The development of the universal haptic drive was supported by the Slovenian Research Agency and Fatronik-Tecnalia, which are gratefully acknowledged.
REFERENCES

1. Krebs HI, Hogan N, Volpe BT, Aisen ML, Edelstein L, Diels C (1999) Overview of clinical trials with MIT-MANUS: a robot-aided neurorehabilitation facility. Technology and Health Care 7(6):419-423
2. Sanchez RJ Jr, Wolbrecht E, Smith R, Liu J, Cramer S, Rahman T, Bobrow JE, Reinkensmeyer DJ (2005) A pneumatic robot for re-training arm movement after stroke: rationale and mechanical design. Proceedings of the 2005 IEEE 9th ICORR, 500-504
3. Krebs HI, Volpe BT, Williams D, Celestino J, Charles SK, Lynch D, Hogan N (2007) Robot-aided neurorehabilitation: a robot for wrist rehabilitation. IEEE TNSRE 15:327-335
4. Nef T, Mihelj M, Kiefer G, Perndl C, Müller R, Riener R (2007) ARMin - exoskeleton for arm therapy in stroke patients. Proceedings of the 2007 IEEE 10th ICORR, 68-74
5. Oblak J, Cikajlo I, Matjačić Z (2009) Universal Haptic Drive: a robot for arm and wrist rehabilitation. IEEE TNSRE, in press
6. Pratt G, Williamson MM (1995) Series elastic actuators. Proceedings of the IEEE International Conference on Intelligent Robots and Systems 1:399-406
7. Tsai LW (1999) Robot Analysis: The Mechanics of Serial and Parallel Manipulators. Wiley, New York
8. Yeong CF, Melendez-Calderon A, Burdet E (2009) Analysis of pick-and-place, eating and drinking movements for the workspace definition of simple robotic devices. Proceedings of the 2009 IEEE 11th ICORR, 46-52
Author: Jakob Oblak
Institute: University Rehabilitation Institute
Street: Linhartova 51
City: Ljubljana
Country: Republic of Slovenia
Email: [email protected]
A New Fibre Optic Pulse Oximeter Probe for Monitoring Splanchnic Organ Arterial Blood Oxygen Saturation

M. Hickey1, N. Samuels2, N. Randive2, R. Langford2 and P.A. Kyriacou1

1 School of Engineering and Mathematical Sciences, City University London, London, UK
2 Anaesthetic Laboratory and Department, St Bartholomew's Hospital, London, UK
Abstract— A new continuous method of monitoring splanchnic organ oxygen saturation (SpO2) would make the early detection of inadequate tissue oxygenation feasible, reducing the risk of hypoperfusion, severe ischaemia and, ultimately, death. In an attempt to provide such a device, a new fibre optic based reflectance pulse oximeter probe and processing system were developed, followed by an in vivo evaluation of the technology on seventeen patients undergoing elective laparotomy. Photoplethysmographic (PPG) signals of good quality were obtained from the small bowel, large bowel, liver and stomach. Simultaneous peripheral PPG signals from the finger were also obtained for comparison purposes. Analysis of the amplitudes of all acquired PPG signals indicated much larger amplitudes for the signals obtained from splanchnic organs than for those obtained from the periphery. Estimated SpO2 values for splanchnic organs showed good agreement with those obtained from the peripheral fibre optic probe and those obtained from a commercial device. These preliminary results suggest that a miniaturized ‘indwelling’ fibre optic sensor may be a suitable method for pre-operative and post-operative evaluation of splanchnic organ SpO2 and organ health.

Keywords— Fibre optics, pulse oximetry, photoplethysmography, perfusion, splanchnic organs.

I. INTRODUCTION
If an organ or tissue is not sufficiently perfused with oxygenated blood, cell death and tissue necrosis can ensue. Failure of one organ due to malperfusion may lead indirectly to the dysfunction of distant organs through the release of various toxins into the portal blood stream [1]. This could result in the onset of multiple organ failure, which is a common cause of morbidity following major surgery [2]. Previous studies have indicated that the gastrointestinal tract may be the canary of the body, making early detection of malperfusion feasible [3]. Therefore, a continuous method for monitoring perfusion of the splanchnic area would be invaluable in the early detection of inadequate tissue oxygenation [4]. Current methods for assessing splanchnic perfusion have not been widely accepted for use in the clinical care environment. Techniques such as polarographic oxygen electrodes and positron emission tomography remain research tools [2], while laser Doppler, Doppler ultrasound [5], and
intravenous fluorescein [2] methods are complex and expensive, do not measure oxygenation directly, and are not suitable for routine monitoring. Gastric tonometry, although one of the few techniques currently used in clinical practice for estimating intestinal hypoxia, has not been widely accepted due to the intermittent, heavily operator-dependent and time-consuming nature of the device [6]. Pulse oximetry has also been used experimentally in both animals and humans [7, 8], where it was found to be a rapid, reproducible, as well as a highly sensitive and specific technique for detecting small bowel ischaemia. The use of commercial pulse oximeters for estimating splanchnic perfusion in humans has been found to be impractical (bulky probes, cannot be sterilized, etc.) [4]. More recently, a custom-made reflectance pulse oximeter has shown for the first time that good quality photoplethysmographic (PPG) signals can be detected from various human abdominal organs (bowel, kidney, liver) during open laparotomy [4]. However, this probe is not suitable for prolonged continuous monitoring in the abdomen. In an attempt to overcome the limitations of the current techniques for measuring splanchnic perfusion, a new prototype fibre-optic probe was developed for investigating PPG signals from various splanchnic organs and for the estimation of arterial blood oxygen saturation (SpO2) of splanchnic organs during open laparotomy. An electrically isolated instrumentation system and a virtual instrument were also developed for driving the optical components of the sensor, and for pre-processing and displaying the acquired PPG signals on the screen of a laptop computer. The developed system was evaluated in vivo on seventeen patients undergoing surgery.

II. METHODS

A. Fibre Optic Pulse Oximeter Probes

A reflectance fibre optic splanchnic pulse oximeter probe was designed using 600 μm core silica glass step-index fibres, infrared (850 nm) and red (650 nm) emitters, and a 1 mm2 active area photodiode [9]. In order to facilitate the evaluation of the fibre optic probe during open laparotomy, it was decided to configure the probe as a handheld device.
Fig. 1(a) shows the finished splanchnic probe. For comparison purposes, an identical reflectance fibre optic probe was also developed to enable the monitoring of PPG signals from a peripheral site (finger or toe) (Fig. 1(b)).

Fig. 1 (a) The developed splanchnic fibre optic probe and (b) the identical peripheral probe

B. Isolated Instrumentation System and Virtual Instrument

An electrically isolated instrumentation system was designed and developed to drive the optical components of the fibre optic probes and also to detect and pre-process the red and infrared ac and dc PPG signals. A virtual instrument (VI) implemented in LabVIEW (National Instruments, USA) was also developed. The VI is used for driving the various hardware sections of the instrumentation system and for the acquisition, display, analysis and storage of all acquired PPG signals. Detailed technical descriptions of the processing system are given by Hickey and Kyriacou [9].

C. Preliminary Investigation of the Fibre-Optic Probe during Open Laparotomy

Ethics Committee approval was obtained to study patients undergoing elective laparotomy. Photoplethysmographic measurements were made in seventeen patients (four male and sixteen female, mean age (±SD): 54 ± 9.7). To enable the use of the fibre-optic PPG sensor in the sterile surgical site, the sensor was placed in a sterile medical ultrasound cover which was transparent to the light being emitted. At an appropriate time during the surgery, the surgeon placed the splanchnic PPG sensor on the surface of each accessible abdominal organ. For comparison purposes, the identical fibre optic PPG peripheral sensor was also placed on the finger or toe. Signals were monitored and acquired for approximately two minutes at each site. Blood oxygen saturation from a commercial pulse oximeter (GE Healthcare) was also simultaneously monitored and recorded.

III. RESULTS

Good quality PPG signals with large amplitudes and a high signal-to-noise ratio (80 dB) were recorded in all attempts from the small bowel (n = 17), large bowel (n = 14), liver (n = 5) and stomach (n = 5). Figures 2 to 5 depict typical ac red (R) and infrared (IR) PPG traces from the splanchnic organs and the corresponding peripheral signals.

Fig. 2 ac red (R) and infrared (IR) PPG signals from the small bowel and periphery with simultaneous ECG

Fig. 3 ac red (R) and infrared (IR) PPG signals from the large bowel and periphery with simultaneous ECG

The low frequency artifact present on the splanchnic PPG traces was due to the mechanical ventilator and movement of the handheld sensor.
Fig. 4 ac red (R) and infrared (IR) PPG signals from the liver and periphery with simultaneous ECG

Fig. 5 ac red (R) and infrared (IR) PPG signals from the stomach and periphery with simultaneous ECG

Fig. 6 Mean (±SD) ac PPG amplitudes for the small bowel (n = 17), large bowel (n = 14), liver (n = 5), stomach (n = 5) and the periphery (n = 17)

Fig. 7 Mean (±SD) dc PPG amplitudes for the small bowel (n = 17), large bowel (n = 14), liver (n = 5), stomach (n = 5) and the periphery (n = 17)
In order to provide an indication of how PPG amplitudes differ between sites, the mean splanchnic ac and dc PPG amplitudes for each site were calculated. The mean peripheral ac and dc amplitudes were also calculated (Fig. 6 and 7). Although this is an uncalibrated system, preliminary mean SpO2 values were calculated for the small bowel, large bowel, liver, stomach and periphery (Fig. 8). The mean SpO2 values from the commercial pulse oximeter are also included for comparison purposes.
Fig. 8 Mean SpO2 (±SD) values for small bowel, large bowel, liver, stomach, and periphery. The mean SpO2 (± SD) value from the commercial device (GE Healthcare) is also indicated for comparison
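For context, pulse oximeters conventionally derive SpO2 from the "ratio of ratios" of the red and infrared ac and dc PPG amplitudes; the sketch below uses a generic textbook calibration line, not a calibration of the present system, which the authors note is uncalibrated.

```python
# Generic ratio-of-ratios SpO2 estimate from PPG amplitudes (illustrative).
def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r        # generic empirical calibration line

# Example with made-up amplitude values
print(spo2_estimate(0.02, 1.1, 0.03, 1.0))
```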
IV. CONCLUSIONS

A new fibre optic pulse oximeter probe, an instrumentation system and a virtual instrument were successfully developed and evaluated on seventeen patients during surgery. Good quality PPG signals were recorded in all attempts from various splanchnic organs. From Fig. 6 and Fig. 7 it can be seen that there is a significant difference between the mean ac and dc PPG amplitudes obtained from the splanchnic sites and those obtained from the periphery. These differences might be due to differences in tissue type and vasculature among the sites investigated. It is possible that the arteries, and therefore the blood supply, are closer to the surface of the tissue in splanchnic organs than in a peripheral site such as the finger. In that case, the light travelling through the splanchnic tissue will possibly encounter more pulsatile arterial blood along its path than light travelling in the finger. This may explain the larger red and infrared ac PPG signals obtained from the various splanchnic organs in comparison with those obtained from the finger. Furthermore, due to the thick epidermis layer present in the peripheral tissue, light travelling in the finger may undergo more absorption by non-pulsatile tissue than light travelling in the splanchnic region. Again, this may explain the smaller red and infrared dc PPG amplitudes obtained from the finger. Despite the differences in the amplitude of the splanchnic PPGs, the mean SpO2 values from all splanchnic sites, together with the peripheral SpO2 values estimated from the peripheral fibre optic pulse oximeter and the SpO2 values obtained from the commercial pulse oximeter, showed broad agreement, as depicted in Fig. 8. This preliminary evaluation has provided sufficient confidence that the PPG signals acquired from splanchnic organs using the new fibre optic pulse oximeter probe are of sufficient quality to be used in the determination of splanchnic arterial blood oxygen saturation.

ACKNOWLEDGMENT

The authors would like to thank the EPSRC (UK) for funding this work.
REFERENCES

[1] Rittoo D, Gosling P, Bonnici C et al. (2002) Splanchnic oxygenation in patients undergoing abdominal aortic aneurysm repair and volume expansion with eloHAES. Cardiovascular Surgery 10: 128-133
[2] Jury of the Consensus (1996) Tissue hypoxia: how to detect, how to correct, how to prevent? Intensive Care Med 22: 1250-57
[3] Dantzker DR (1993) The gastrointestinal tract. The canary of the body? JAMA 270: 1247-8
[4] Crerar-Gilbert AJ, Kyriacou PA, Jones DP, Langford RM (2002) Assessment of photoplethysmographic signals for the determination of splanchnic oxygen saturation in humans. Anaesthesia 57: 442-45
[5] Lynn Dyess D, Bruner BW, Donnell CA, Ferrara JJ, Powell RW (1991) Intraoperative evaluation of intestinal ischemia: a comparison of methods. Southern Medical Journal 84: 966-970
[6] Kinnala PJ, Kuttila KT, Gronroos JM, Havia TV, Nevalainen TJ, Niinikoski JHA (2002) Splanchnic and pancreatic tissue perfusion in experimental acute pancreatitis. Scandinavian Journal of Gastroenterology 37(7): 845-849
[7] DeNobile J, Guzzetta P, Patterson K (1990) Pulse oximetry as a means of assessing bowel viability. Journal of Surgical Research 48: 21-3
[8] Ouriel K, Fiore WM, Geary JE (1988) Detection of occult colonic ischemia during aortic procedures: use of an intraoperative photoplethysmographic technique. J Vasc Surg 7: 5-9
[9] Hickey M, Kyriacou PA (2007) Development of a new splanchnic perfusion sensor. Conf Proc IEEE Eng Med Biol Soc, 2952-2955
Author: Dr Panayiotis A Kyriacou
Institute: City University London
Street: Northampton Square
City: London
Country: United Kingdom
Email: [email protected]
Electrical properties of teeth regarding the electric vitality testing

T. Marjanović1, Z. Stare1 and M. Ranilović1

1 University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
Abstract— In this paper we concentrate on the electrical properties of the tooth during electrical stimulation using various electrode-tooth interfaces. The established resistances of the tooth connections are compared for each type of electrode. The total impedance as well as the voltage step response of the tooth is recorded with the different electrodes. A current-stabilized pulp-tester was built and used to measure the required parameters of the stimuli in order to obtain stable and comparable readings; for that purpose the intensity-duration curve was recorded. The use of current and voltage stimuli by pulp-testers is reviewed.
Keywords— Tooth impedance, electric pulp vitality tester, intensity-duration curve of tooth, step response of tooth.

I. INTRODUCTION

Pulp testing is used in endodontic diagnosis to determine the degree of vitality of the pulp tissue. Testing is performed by applying various stimuli: heat and cold, electrical stimulation, palpation, percussion and tooth sleuth tests are commonly performed to determine whether the tooth is alive [1]. During electrical pulp testing, electric stimuli are applied with increasing strength until a sensation is reached. No response from the tooth generally indicates that the tooth has died and needs root canal therapy or removal. A very quick response compared to the adjacent teeth generally indicates that the tooth is inflamed and is probably heading toward pulp death. If it responds the same as the other teeth, it is considered to be healthy [2]. Monopolar and bipolar electrical stimulation techniques are possible. In the monopolar technique, an active electrode is placed on the surface of the tooth and the current flows through the body to the neutral electrode. In the bipolar technique, both electrodes are placed on the tooth. The bipolar technique offers no significant advantage and is less feasible, so the monopolar technique is commonly used [3]. Electrical stimuli can be voltage or current pulses with different shapes, durations and repetition frequencies [1]. Voltage pulses are generally inappropriate for pulp stimulation due to the high electrode-tooth and tooth resistances, which could lead to a large variation of measurement results. The threshold for current stimuli on vital teeth is in the range of 1 to 50 μA; if the threshold is only reached above 150 μA, pulp vitality should be doubted. A larger current can stimulate the periodontal nerves instead of the pulp, which could lead to misinterpretation of vitality [3]. Regrettably, even today many devices offered by renowned manufacturers use inconsistent stimuli and display results in incomparable and arbitrary units, or even only as a positive or negative assessment [4]. Some devices do not even have an adequate neutral electrode, and the return current flows through the dentist's hand, so the readings strongly depend on the rubber gloves worn by the dentist [5, 6]. If measurements are to be compared, it is important to understand the influence of the various kinds of electrodes and to define the parameters of the voltage or current stimuli.

II. MATERIALS AND METHODS
Four experiments were performed, in which we measured: DC resistances, impedances, voltage step responses and intensity-duration curves, using a custom-made automated pulp-tester, Fig. 1. In the first experiment, the DC resistances of teeth were measured for different types of tooth-electrode contact. Measurements were performed on 56 different teeth of 14 volunteers (age 22 to 28). We used stainless steel electrodes with a rounded tip and conductive rubber electrodes, 2 and 4 mm in diameter each. The electrodes were used dry, moistened with tap water, or applied with an electro-conductive gel on top. A current-limited voltage source was used for the measurement. The positive active electrode was applied at the middle third of the facial surface of teeth with intact enamel. A large-area
Fig. 1 A block diagram of the built current-stabilized pulp-tester
neutral electrode was held in the hand and the return current was measured. Each tooth was dried with a towel and blown dry with air prior to the experiment. In the second experiment, the impedances of teeth with various types of electrode-tooth contact were measured. Measurements were performed on 9 teeth of 3 volunteers. Impedances were measured using a Hewlett Packard HP4284A precision LCR meter controlled by a personal computer. Impedances were recorded at 8 frequencies in the range of 40 Hz to 100 kHz using a 1 V AC stimulus. In the third experiment we recorded the response of the tooth to a voltage step using different types of electrodes. Positive voltage pulses 10 V in amplitude were applied to the tooth, and the return current was measured using a digital oscilloscope. In the fourth experiment, intensity-duration (I-TD) curves were measured. For that purpose the current-stabilized automated pulp-tester was built, Fig. 1. The microprocessor was programmed to generate an ascending sequence of current pulses in the range of 1 to 250 μA until the sensation was achieved and the button pressed. Each time the button is pressed, the sequence restarts with a different pulse duration. The intensity-duration curve was measured on the frontal teeth of 9 subjects. The maximal repetition frequency of the current pulses was determined prior to the fourth experiment and kept low during the experiment. For measuring the maximal repetition frequency, the microprocessor was reprogrammed to increase the repetition rate each time the button was pressed and the pulse sequence restarted, while keeping the pulse duration long and constant.
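The ascending-sequence protocol can be summarized as in the sketch below; it is an illustration only, with the subject's button press replaced by a hypothetical Weiss-law threshold function.

```python
# Sketch of the pulp-tester protocol: for each pulse duration, step the
# current up from 1 to 250 uA until "sensation" (button press) occurs.
def measure_itd_curve(durations_ms, feels_stimulus):
    curve = {}
    for d in durations_ms:
        for i_ua in range(1, 251):           # ascending 1..250 uA
            if feels_stimulus(i_ua, d):      # button pressed
                curve[d] = i_ua
                break                        # restart with the next duration
    return curve

# Hypothetical subject obeying the Weiss law (rheobase 12 uA, chronaxie 2 ms)
feels = lambda i, d: i >= 12 * (1 + 2.0 / d)
print(measure_itd_curve([0.5, 1, 2, 5, 10, 20], feels))
```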
Fig. 2 Resistances of healthy teeth with intact enamel contacted with different types of electrodes
III. RESULTS
The mean DC resistance of the tooth with each type of electrode contact is drawn as a single point in Fig. 2. The range of obtained values (96% of occasions) is drawn as an error bar around that point. The labels used in the following figures are defined in Table 1.
Fig. 3 Frequency dependence of Rp for different electrode types
Table 1 Definition of used labels

LABEL  electrode material   diameter  electrolyte
M2D    stainless steel      2 mm      dry electrode
M2V    stainless steel      2 mm      tap water
M2G    stainless steel      2 mm      conductive gel
M4D    stainless steel      4 mm      dry electrode
M4V    stainless steel      4 mm      tap water
M4G    stainless steel      4 mm      conductive gel
R2D    conductive rubber    2 mm      dry electrode
R2V    conductive rubber    2 mm      tap water
R2G    conductive rubber    2 mm      conductive gel
R4D    conductive rubber    4 mm      dry electrode
R4V    conductive rubber    4 mm      tap water
R4G    conductive rubber    4 mm      conductive gel
Fig. 4 Frequency dependence of Cp for different electrode types
Resistances were measured with 10 V applied to the active electrode, except for M2D and M4D, where a value of 100 V was used. The impedance of the tooth with the tooth-electrode contact was interpreted, for each frequency, as a parallel combination of a resistance (Rp) and a capacitance (Cp). The values of Rp and Cp for each type of electrode were averaged over all teeth and the results are shown in Figs. 3 and 4. The pulse response of the tooth for each type of electrode is shown in Fig. 5. The current was measured as a response to 2 ms wide 10 V pulses, but is normalized in Fig. 5 to represent the response to a 1 V voltage step. Figures 3 to 5 lack the values for M2D because stable measurements could not be obtained due to the large electrode impedance.
Fig. 5 Pulse response of tooth for different types of electrode
Fig. 6 Intensity-duration curves of frontal teeth
Figure 5 also lacks the values for M4D, for the same reason. Fig. 6 shows the intensity-duration curves of frontal teeth for each examined subject together with the averaged I-TD curve. The curves were measured with a 200 ms pause between pulses, since this was determined to be slow enough not to influence the results. Subjects felt an increased sensation from 10 ms current pulses when the repetition period was below 100 to 150 ms.
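For orientation, intensity-duration data of this kind are classically summarized by the Weiss-Lapicque strength-duration relation. The paper does not use it explicitly, but it makes the role of the rheobase measured here concrete:

$I(t) = I_{rh}\,(1 + t_{ch}/t)$,

where $I(t)$ is the threshold current for a pulse of duration $t$, $I_{rh}$ is the rheobase (the threshold for very long pulses) and $t_{ch}$ is the chronaxie. For durations of several chronaxies the threshold approaches $I_{rh}$, which is consistent with the requirement derived below that pulses be at least 5 ms wide for a correct rheobase reading.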
IV. DISCUSSION
Fig. 2 shows that the 2 mm stainless steel electrode moistened with tap water performs questionably regarding the quality and repeatability of the realized contact. Yet this type of electrode is the one most commonly used [7]. Even the large (4 mm) wet metal electrode realized a relatively poor contact. We recommend using conductive gel with metal electrodes. The performance of conductive rubber electrodes also increases significantly when they are moistened or coated with a conductive gel. During the DC resistance measurements we noticed a voltage dependency of the resistance of up to 0.5 %/V.
The conductivity of the tooth with the applied active electrode increases at higher frequencies for each type of electrode, as seen in Fig. 3. Together with Cp (Fig. 4), this leads to a stronger current flow at the beginning and at the end of stimulus pulses. The phenomenon can be seen in Fig. 5, where a higher current is detected at the beginning of the voltage stimulus. Both Rp and Cp strongly depend on the electrodes used. In the case of voltage-regulated pulp-testers, pulse rise and fall times should be limited to about 0.3 ms to avoid premature stimulus sensation. Pulses shorter than a few milliseconds are not recommended, because their shape depends strongly on the transient response of the electrode used. In the case of voltage-controlled pulp-testers, the problem of resistance volatility and dependency on the electrodes used remains even if the transient is avoided. Since the sensation depends on the current amplitude and on the pulse duration, the current must be detected and indicated when voltage stimuli are used. The applied voltage should be calibrated at least on the first applied pulse to compensate for the electrode resistance; however, better results can be achieved if the current is monitored constantly.
When the I-TD curves were measured using the built current-controlled pulp-tester (Fig. 1), the results showed no significant dependency on the electrode used, as long as the impedance was low enough for the 400 V source to drive the desired current. If the current could not be achieved, the analog-to-digital converter (ADC in Fig. 1) would report a low amplitude to the processor and the measurement should be aborted.
We found that a 400 V source is high enough to drive currents up to 250 μA for the M2G, M4G, R2V, R2G, R4V and R4G electrodes in most cases. If an M4V, M2V, R4D or R2D electrode is used on a tooth with intact enamel, the measurement may be limited to low currents only. Therefore, undercurrent signalization must be implemented in pulp-testers, especially if a lower voltage is used for stimulation (e.g. 270 V in [3]). In the case of dry metal electrodes (M2D and M4D), not even a few microamperes could be reached with the 400 V source in most cases. Fig. 6 shows that the pulse duration for a correct rheobase measurement on frontal teeth should be at least 5 ms, with at least 200 ms between pulses. Unfortunately, many of the pulp-testers offered on the market do not comply with the stated requirements [8]. During the measurements we also concluded that the output stage of the current-stabilized pulp-tester shown in Fig. 1 is inadequate for commercial use, not only due to the potential hazard in the case of a fused transistor Q, but also because of the large parasitic capacitance of Q. If the voltage on Q is low during the initial contact with the tooth, the subject could sense a shock while the output capacitance of Q charges. We observed that even less than 10 pF can be felt as a shock when charged instantly to 400 V. The initial output capacitance of a high-voltage low-leakage FET can be 200 pF or more when the drain voltage is low. At higher drain voltages the capacitance reduces significantly; however, a charge of a few nC has to flow through the tooth until the drain voltage has increased. This charge can be enough for the patient to feel pain. Additional circuitry must be embedded to recharge all parasitic capacitances to the high voltage each time a current pulse finishes. Otherwise, if the electrode is momentarily disconnected during the measurement, the current pulse would discharge the parasitic capacitances, and when the electrode reconnects a shock could be sensed regardless of whether the next current pulse is generated. In the case of voltage-controlled pulp-testers, no electric shock due to parasitic capacitances can be felt in the case of a loose electrode contact during the measurement.
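A back-of-the-envelope charge estimate (our arithmetic, not stated in the paper) supports the shock figures quoted above: charging a 10 pF capacitance instantly to 400 V transfers

$Q = C \cdot U = 10\ \mathrm{pF} \times 400\ \mathrm{V} = 4\ \mathrm{nC}$,

which is exactly the "few nC" regime mentioned; delivered within a few microseconds, this corresponds to a momentary current in the milliampere range, orders of magnitude above the 1-50 μA sensation thresholds.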
V. CONCLUSIONS
To measure the rheobase correctly, pulp-testers should generate stimulus pulses at least 5 ms wide, with a repetition period no shorter than 200 ms. If voltage stimuli are to be used, the current must be detected and indicated in order to obtain stable and comparable readings. The rising and falling edges of voltage stimuli should be limited to about 0.3 ms. Short voltage pulses (e.g. less than a millisecond) cannot provide repeatable measurements due to the variability of electrode impedances. Unfortunately, many pulp-testers offered on the market do not satisfy any of these terms. In the case of the current-stabilized pulp-tester, we found that the output stage as shown in Fig. 1 is inadequate for commercial purposes: there is a potential hazard if the transistor fuses, and a possible electrical shock caused by the charging of the parasitic capacitance of the output transistor. The most commonly used metal electrode, 2 mm in diameter and moistened with tap water, will not always accomplish sufficient contact on the tooth. Pulp-testers must have undercurrent signalization implemented.

REFERENCES
1. Lin J, Chandler NP (2008) Electric pulp testing: a review. Int Endod J 41:365-374. DOI: 10.1111/j.1365-2591.2008.01375.x
2. Ahlquist ML, Edwall LGA, Franzén OG et al (1984) Perception of pulpal pain as a function of intradental nerve activity. Pain 19:353-366. DOI: 10.1016/0304-3959(84)90081-2
3. Pepper MG, Smith DC (1981) An electric tooth pulp vitality tester. Med Biol Eng Comput 19:208-214. DOI: 10.1007/BF02442716
4. Wang IW, Young ST (1996) An improved electric pulp tester. IEEE Eng Med Biol Mag 15:112-115. DOI: 10.1109/51.482851
5. Cailleteau JG, Ludington JR (1989) Using the electric pulp tester with gloves: a simplified approach. J Endod 15:80-81. PMID: 2607273, DOI: 10.1016/S0099-2399(89)80113
6. Myers JW (1998) Demonstration of a possible source of error with an electric pulp tester. J Endod 24:199-200. PMID: 9558588, DOI: 10.1016/S0099-2399(98)80184-2
7. Daskalov I, Indjov B, Mudrov M (1997) Electrical dental pulp testing. Defining parameters for proper instrumentation. IEEE Eng Med Biol Mag 16:46-50. DOI: 10.1109/51.566152
8. Dummer PMH, Tanner M, McCarthy JP (1986) A laboratory study of four electric pulp testers. Int Endod J 19:161-171. DOI: 10.1111/j.1365-2591.1986.tb00472.x

Author: Tihomir Marjanović
Institute: Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Stiffness of a small tissue phantom measured by a tactile resonance sensor
V. Jalkanen1,3, B.M. Andersson1,3, O.A. Lindahl2,3
1 Department of Applied Physics and Electronics, Umeå University, 90187 Umeå, Sweden
2 Department of Computer Science and Electrical Engineering, Luleå University of Technology, 97187 Luleå, Sweden
3 Center for Biomedical Engineering and Physics, Umeå University and Luleå University of Technology, Sweden
Abstract— Many pathological conditions, for instance cancer, alter the elastic stiffness of tissues. Therefore, it is of interest to objectively quantify the stiffness of tissue samples. Tactile resonance sensor technology has been proven to measure the stiffness of tissues in a variety of medical applications. The technique is based on a vibrating piezoelectric sensor element that changes its resonance frequency when it is put in contact with a soft object to be measured. The frequency change is related to the mechanical properties of the soft object. This principle is implemented in an indentation setup in which the impression force and impression depth can also be measured. The aim of this study was to investigate how the measured parameters of a tactile resonance sensor system depend on the limited size of a small gelatin tissue phantom sample. Indentation measurements were conducted at different locations on a small gelatin sample. The results showed that the force and frequency change were dependent on the measurement location and thus on the sample geometry. The estimated stiffness was independent of the measurement location. Further studies must be conducted to determine the full value of the method for measuring the stiffness of small tissue samples.
Keywords— resonance sensor, stiffness, small tissue sample.
I. INTRODUCTION
Soft tissue mechanical properties have been of interest in a variety of medical applications for the diagnostic information they provide. Many pathological conditions, for instance breast and prostate cancer, alter the elastic stiffness of the tissues [1], and imaging modalities for imaging the soft tissue elastic modulus have been developed [2]. Quantitative understanding and performance evaluation of the imaging techniques rely on information about the elastic properties of abnormal and normal tissues. A sensor technique has emerged for breast cancer detection [3]. This sensor technique, called the tactile resonance sensor technique [4, 5], measures the elastic stiffness properties of soft tissue. The stiffness parameter used is the frequency change (Δf) of a vibrating piezoelectric resonance element in a feedback system. This Δf is obtained when the sensor makes contact with the soft tissue under an applied load.
Fig. 1 The resonance sensor indentation experiment setup and illustration of the measured frequency change (Δf = f0 − f), force (F), and impression depth (d).
A physical relationship between Δf and the applied force (F), including the elastic modulus (E) and the mass density (ρ), has been proposed [6]. Theoretically, an elastic stiffness sensitive parameter is
$\partial F / \partial \Delta f \propto \sqrt{E/\rho}$   (1)
The sensor technique has been evaluated to objectively quantify liver tissue stiffness of normal and fibrotic tissue [7] and of normal lymph node and metastatic lymph node tissue [8]. More recently [6, 9], measured stiffness variations in prostate tissue have been explained by the quantified content of cancer tissue and stroma in relation to normal glandular tissue. The above mentioned studies on the elastic stiffness of tissues have been made on in vitro tissue specimens of varying size and usually of very limited size, i.e. small tissue samples. It can be assumed that the geometrical boundaries might affect the measured stiffness. This might be evident for measurements done across tissue surfaces, to map the tissue stiffness of a sample, as some measurements are close to the boundaries [9]. The aim of this study was to investigate how the measured parameters of a tactile resonance sensor system depend on the limited size of a tissue phantom when conducting measurements on different locations on the tissue phantom surface.
Fig. 2 Photograph of the small rectangular gelatin tissue phantom with circular markings showing the approximate locations of the measurements. View from above the measured surface. Measurement order was from right to left. Scale is in centimeters.
II. MATERIAL AND METHODS
A. Resonance sensor system
A resonance sensor system, Venustron® (Axiom Co., Ltd., Koriyama, Fukushima, Japan), equipped with a resonance sensor, a force sensor, and a position sensor was used in the experiments. The sensors are arranged into a single probe within a motorized mounting, enabling controlled indentation to be performed. During indentation, with a speed of 1 mm/s down to 2 mm impression depth, Δf, F, and the impression depth (d) into the tissue phantom were recorded at 200 Hz. An illustration of the setup is shown in Figure 1. The system has been described earlier [6, 9].
B. Tissue phantom preparation
In this study a soft tissue phantom was made from a gelatin-water mixture. Gelatin powder (BDH-Prolabo, VWR International AB, Sweden) was mixed with heated water at approximately 5% gelatin-to-water weight ratio. The mixture was poured into a Petri dish (height 13 mm, diameter 87 mm) and stored in a refrigerator for five hours, which resulted in a soft gel-like solid. A small piece of gelatin (length 24.5 mm, width 8.5 mm, thickness 8.5 mm) was cut and used as a small tissue phantom in the measurements.
C. Measurements, parameters, and analysis
Fig. 3 The measured force (F) and frequency change (Δf) related to the impression depth (d) for an indentation measurement into the gelatin tissue phantom. The circles show the values at d = 1 mm. The right panel displays F versus Δf up to a limit of d = 1 mm, denoted by the circle. The slope of the linear relation was used to estimate the stiffness parameter.
Indentation measurements were performed with the resonance sensor system on the gelatin tissue phantom. Five locations on the tissue phantom were each subjected to a total of six measurements (Figure 2). During each measurement the parameters F, Δf, and d were registered (Figure 3), and from the linear relation between F and Δf the stiffness parameter (equation (1)) was calculated as the slope of the least-squares line fit.
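As a minimal sketch of this estimation step (our illustration; the array names are assumed, not from the paper), the slope can be obtained with an ordinary least-squares line fit over the samples up to 1 mm impression depth:

    import numpy as np

    def stiffness_parameter(F, df, d, d_limit=1.0):
        """Return dF/dΔf (mN/Hz) for one indentation recording.

        F, df, d: recorded force (mN), frequency change (Hz) and
        impression depth (mm), sampled at 200 Hz (hypothetical arrays).
        """
        mask = d <= d_limit                          # keep samples up to 1 mm
        slope, _intercept = np.polyfit(df[mask], F[mask], deg=1)
        return slope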
Mean values and standard deviations are presented. An analysis of variance (ANOVA) test followed by Tukey-Kramer's multiple comparison tests was used to test for differences between groups. A test result with p < 0.05 was considered statistically significant.
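In, for example, Python this test chain could look as follows; a sketch only, with placeholder data (the paper does not state which software was used; pairwise_tukeyhsd implements the Tukey-Kramer procedure for possibly unequal group sizes):

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    groups = [rng.normal(4.0, 0.5, 6) for _ in range(5)]   # placeholder data,
                                                           # 5 locations, n = 6
    f_stat, p = stats.f_oneway(*groups)                    # one-way ANOVA
    if p < 0.05:                                           # follow up pairwise
        values = np.concatenate(groups)
        labels = np.repeat(np.arange(1, 6), [len(g) for g in groups])
        print(pairwise_tukeyhsd(values, labels, alpha=0.05))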
III. RESULTS
An example of the measured F and Δf during the indentation into the gelatin tissue phantom is shown in Figure 3. From the measurements at the five measurement locations it was seen that the measured F and Δf were dependent on the measurement location (Figure 4), and significant differences (p < 0.05) were found among the different measurement locations. For F, only measurement location 5 differed significantly from locations 2 and 3, while for Δf it was locations 1 and 5 that differed significantly from 2, 3, and 4 and also from each other. The stiffness parameter F/Δf was not dependent on the measurement location (Figure 4), and there were no significant differences in F/Δf between the different measurement locations (p > 0.05).
IV. DISCUSSION
In this study the tactile resonance sensor technique was used to measure the stiffness of a tissue phantom sample of limited size. It was shown that the measured parameters of force and frequency change were dependent on the location of the indentation measurement. However, it was observed that the calculated stiffness sensitive parameter was independent of the indentation location. The force measurement alone would falsely estimate the elastic stiffness of the tissue phantom sample at the peripheral indentation locations (Figure 4). During an indentation at the periphery, the resistance to force of the sample was lower due to the limited sample geometry. A similar dependence was observed between Δf and the sample geometry; in this case, the limited sample geometry was assumed to affect the load-sensitive Δf. The results for F and Δf were shown at d = 1 mm as an example (Figure 4), but similar results were observed at other d-values; at higher d the effect was more pronounced. The calculated stiffness parameter (equation (1)) measured the homogeneous stiffness of the tissue phantom, unaffected by the limited sample geometry (Figure 4). This was assumed to be because it is a combination of F and Δf in which the geometry dependence is minimized. The variation in the measured parameters (Figure 4) includes effects due to measurement time, i.e. drying of the gelatin sample. These results suggest that the stiffness parameter is suitable for measuring the stiffness of prostate tissue samples, which are of limited size. Therefore, in earlier studies [9, 10], it can be assumed that the effect of the limited prostate sample size on the estimated stiffness was small. In this study a gelatin tissue phantom was used as a soft-tissue-mimicking phantom material because it is relatively easy to prepare, and relatively easy to handle and cut into the desired shape. In addition, the gelatin tissue phantom was homogeneous in stiffness. These properties of the tissue phantom were important for conducting this study.
Fig. 4 The mean and standard deviation of the force (F), frequency change (Δf), and stiffness parameter (F/Δf) shown for the five measurement locations (n = 6 at each location). Both F and Δf are at an impression depth of d = 1 mm, while F/Δf was calculated from the impression depth interval d = 0 to 1 mm.
V. CONCLUSIONS
The findings of this study suggest that the stiffness of a homogeneous small tissue phantom can be measured with a tactile resonance sensor. The observed geometry independence of the measured stiffness parameter is promising for measurements on small biological tissue samples in vitro. Further studies must be done to determine the full value of the method and the reported findings.
ACKNOWLEDGMENT
The study was supported by grants from the Objective 2 Norra Norrland EU Structural Fund.
REFERENCES
1. Krouskop TA, Wheeler TM, Kallel F, Garra BS, Hall T (1998) Elastic moduli of breast and prostate tissues under compression. Ultrason Imaging 20:260-74
2. Greenleaf JE, Fatemi M, Insana M (2003) Selected methods for imaging elastic properties of biological tissues. Annu Rev Biomed Eng 5:57-78
3. Murayama Y, Haruta M, Hatakeyama Y, Shiina T, Sakuma H, Takenoshita S, Omata S, Constantinou CE (2008) Development of a new instrument for examination of stiffness in the breast using haptic sensor technology. Sensors Actuators 143:430-8
4. Omata S, Terunuma Y (1992) New tactile sensor like the human hand and its applications. Sensors Actuators 35:9-15. DOI 10.1016/0924-4247(92)87002-X
5. Lindahl OA, Constantinou CE, Eklund A, Murayama Y, Hallberg P, Omata S (2009) Tactile resonance sensors in medicine. J Med Eng Technol 33:263-73
6. Jalkanen V, Andersson BM, Bergh A, Ljungberg B, Lindahl OA (2008) Explanatory models for a tactile resonance sensor system – elastic and density-related variations of prostate tissue in vitro. Physiol Meas 29:729-745. DOI 10.1088/0967-3334/29/7/003
7. Kusaka K, Harihara Y, Torzilli G, Kubota K, Takayama T, Makuuchi M, Mori M, Omata S (2000) Objective evaluation of liver consistency to estimate hepatic fibrosis and functional reserve for hepatectomy. J Am Coll Surg 191:47-53
8. Miyaji K, Furuse A, Nakajima J, Kohno T, Ohtsuka T, Yagyu K, Oka T, Omata S (1997) The stiffness of lymph nodes containing lung carcinoma metastases – a new diagnostic parameter measured by a tactile sensor. Cancer 80:1920-5
9. Jalkanen V, Andersson BM, Bergh A, Ljungberg B, Lindahl OA (2006) Resonance sensor measurements of stiffness variations in prostate tissue in vitro – a weighted tissue proportion model. Physiol Meas 27:1373-1386. DOI 10.1088/0967-3334/27/12/009
10. Jalkanen V, Andersson BM, Lindahl OA (2009) Instrument towards faster diagnosis and treatment of prostate cancer – Resonance sensor stiffness measurements on human prostate tissue in vitro. IFMBE Proc. vol. 25(7), World Congress on Med. Phys. & Biomed. Eng., Munich, Germany, 2009, pp 145-48

The address of the corresponding author:
Author: Ville Jalkanen
Institute: Dept. of Applied Physics and Electronics, Umeå University
City: 901 87 Umeå
Country: Sweden
Email: [email protected]
Vectorial magnetoencephalographic measurements for the estimation of radial dipolar activity in the human somatosensory system
J. Haueisen1,2, K. Fleissig2, D. Strohmeier1, R. Huonker2, M. Liehr2, and O.W. Witte2
1 Institute of Biomedical Engineering and Informatics, Technical University Ilmenau, Germany
2 Biomagnetic Center, Department of Neurology, University Hospital Jena, Germany
Abstract— Radial dipolar activity is difficult to estimate with standard single-component magnetoencephalographic measurements. However, recent technological developments allow for vectorial magnetoencephalographic measurements. We tested the hypothesis that radial dipolar activity can be estimated on the basis of vectorial magnetoencephalographic measurements. Eleven healthy participants received right median nerve stimulation, while the biomagnetic field was measured over the contralateral hemisphere with a novel vector-biomagnetometer. In this measurement system, SQUID based magnetometer sensors are arranged in perpendicular triplets at each measurement location. Subsequently, source analysis of radial and tangential cortical dipoles was performed. We found that both radial and tangential dipolar activity could be estimated in ten out of eleven participants. Dipole locations were found in the vicinity of the central sulcus, and dipole orientations were predominantly tangential for the first cortical activity, N20m, and predominantly radial for the second cortical activity, P25m. The mean location difference between the tangential and the radial dipole was 11.9 mm and the mean orientation difference was 97.5 degrees. We conclude that radial dipolar activity can be estimated from vectorial magnetoencephalographic measurements.
Keywords— magnetoencephalography, three-component MEG, primary somatosensory cortex, median nerve, magnetic field measurements.
I. INTRODUCTION
Electroencephalography (EEG) and magnetoencephalography (MEG) are non-invasive modalities for recording brain activity with high temporal resolution. Source reconstruction based on EEG and MEG is a widely used technique in the neurosciences allowing for spatio-temporal disentanglement of overlapping brain activity. Commonly, dipolar source models are employed in order to describe the electrical activity in a certain brain area. EEG and MEG have different sensitivities with respect to superficial and deep sources and also radial and tangential sources (e.g. [1]). Consequently, the combination of MEG and EEG is used to distinguish radial and tangential as well as superficial and deep sources [2, 3]. One specific property of MEG is its lower sensitivity to radial sources as compared to tangential sources. It has been shown both experimentally [4] and in simulation studies [5] that radially oriented dipoles produce 4 to 10 times weaker magnetic fields outside the head than tangentially oriented dipoles at the same position. Thus, MEG is considered inappropriate for the localization of radial sources. However, all previous studies used only one vectorial component (either Bz or Br values) of the vectorial biomagnetic field. Recently, vector-biomagnetometers were developed [6-8], which allow for the recording of all three vectorial components of the biomagnetic field. It has been shown in measurements [6] and simulations [9] that the information content is higher for measurements performed with a vector-biomagnetometer as compared to standard biomagnetometers. The aim of this study is to investigate experimentally the feasibility of reconstructing radially oriented dipoles based on vectorial measurements of biomagnetic fields. We have chosen the somatosensory system for its well known generator structure of a radial (Brodmann area 1) and a tangential (Brodmann area 3b) dipolar source overlapping in time.
II. METHODS
A. Participants and measurements
Eleven healthy volunteers (6 males, 5 females), 10 right-handed and one left-handed, underwent examination with the ARGOS 200 vector-biomagnetometer (AtB SrL, Pescara, Italy) positioned over the somatosensory cortex. Contralateral to the magnetic recording site, the median nerve was electrically stimulated (stimulation strength: motor plus sensory threshold; constant current square wave impulses with a length of 200 µs). The stimulus onset asynchrony was randomized between 0.7 and 1.4 s. A total of 512 epochs was averaged. Data were sampled at 1 kHz. For artifact detection, the ECG and the horizontal and vertical EOG were recorded. Isotropic T1-weighted magnetic resonance (MR) images with 1 mm resolution were taken of each participant's head to provide realistic head modeling for the source localization procedure. Co-registration between the MRI and MEG coordinate systems was obtained by digitizing and
rigidly transforming anatomical landmarks (nasion, left and right pre-auricular point). The study was approved by the ethics committee of the medical faculty of the Friedrich-Schiller-University Jena. All participants gave their written informed consent.
B. Vector-biomagnetometer
The vector-biomagnetometer used includes 195 Superconducting QUantum Interference Devices (SQUIDs). The sensors are fully integrated planar SQUID magnetometers produced using Nb technology with integrated pick-up loops. The sensing area is a square of 8 mm side length. The intrinsic noise level of the SQUIDs is below 5 fT Hz^-1/2 at 10 Hz. Triaxial vector magnetometers are formed by grouping three basic sensor elements into a triplet. In each triplet the magnetic field vector, Bx, By, and Bz (with respect to the local coordinate system of the triplet), is measured, since the three square sensor elements are arranged perpendicular to each other on the three adjacent planes of the corner of a cube (Fig. 1 inset). In order to have a similar distance of all three SQUIDs to the bottom of the measurement system, the corner of the cube is placed closest to the bottom of the cryostat (the cube is standing on its corner). An additional advantage of this arrangement is that one can obtain the commonly measured Bz component (i.e. perpendicular to the cryostat bottom) of the magnetic field by simply adding the magnetic flux measured by all three SQUIDs.
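This summation property follows directly from the geometry (our derivation, not spelled out in the paper): with the cube standing on its corner, the axis perpendicular to the cryostat bottom is the cube diagonal $(1,1,1)/\sqrt{3}$ in the triplet's local frame, so the field component along it is the projection

$B_\perp = \mathbf{B} \cdot \frac{(1,1,1)}{\sqrt{3}} = \frac{B_x + B_y + B_z}{\sqrt{3}}$,

i.e. the sum of the three SQUID outputs equals the conventional single-component signal up to the constant factor $1/\sqrt{3}$.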
The triplets are distributed over four levels. The lower level, the main measurement plane, is a planar sensor array consisting of 56 sensor triplets laid out on a hexagonal grid, covering a circular planar surface with a diameter of about 25 cm (Fig. 1). The second level contains seven triplets, and the third and fourth levels one triplet each. The second level is on a plane positioned parallel to the measurement plane at a distance of 98 mm. The third level is 196 mm above the first plane, and the fourth level is 254 mm above the first plane. The centre (cube corner point) of the triplets in the third and fourth levels is located at the x, y position (0,0). The triplets located in the second, third, and fourth levels are used for noise cancellation. The dynamic range of the SQUID electronics is 22 bit, with a lowest resolution of 2.05 fT and a range of ±4.31 nT. The system is installed in a magnetically shielded room with three layers of highly permeable material and one layer of aluminum. For visualization purposes, the local Bx, By, and Bz at each triplet (note the different orientations of the triplets in Fig. 1) were transformed to global Bx, By, and Bz values at each triplet's center of gravity.
C. Source localization
For each participant a realistic three-compartment boundary element model accounting for skin, skull, and brain was derived by segmenting the MR images. The triangle side length was set to 7 mm for each of the surfaces. We assumed conductivities of 0.33, 0.0042 and 0.33 S/m (skin, skull, brain). MEG data preprocessing consisted of artifact rejection, filtering (3rd order Butterworth, 0.1–170 Hz), and baseline correction (−100 to 0 ms). A two-step spatio-temporal dipole localization procedure was performed, considering the time interval of the first cortical components N20m and P25m. First, a single dipole was fitted in the upstroke of the N20m. This source activity was then projected out for the entire time interval and a second dipole was fitted, representing mainly the activity of the P25m component. Source localization was done with Curry version 4.6 (Compumedics NeuroScan, Charlotte, NC, USA).
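The filtering and baseline-correction step described above could look as follows; this is a sketch under stated assumptions (zero-phase filtering, a channels × samples array, stimulus at 100 ms), since the paper does not specify the implementation:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess(epochs, fs=1000.0, baseline_samples=100):
        """Band-pass filter (3rd-order Butterworth, 0.1-170 Hz) and
        subtract the -100..0 ms pre-stimulus baseline.

        epochs: array of shape (channels, samples), sampled at 1 kHz,
        with the stimulus at sample index baseline_samples.
        """
        b, a = butter(3, [0.1, 170.0], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, epochs, axis=-1)   # zero-phase filtering
        baseline = filtered[:, :baseline_samples].mean(axis=-1, keepdims=True)
        return filtered - baseline                   # subtract pre-stimulus mean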
Fig. 1: ARGOS 200 sensor configuration.
III. RESULTS
Fig. 2 shows an example of the measured magnetic field distributions. The Bz field pattern is similar to the field pattern measured with standard biomagnetometers and shows a typical dipolar arrangement for the tangential component and a monopolar arrangement for the radial component. For the tangential component, the Bx field pattern shows a slightly quadrupolar arrangement (two negative maxima in the center of the field pattern) and the By field pattern a tri-polar arrangement. For the radial component, both Bx and By show dipolar arrangements. The center points of the field patterns of the tangential and radial components are slightly different.
Fig. 2: Measured magnetic field distributions of one volunteer for the tangential N20m (top row) and the radial P25m (bottom row); the columns show Bx, By, and Bz. Small squares indicate triplet positions and small crosses indicate channels omitted due to artifacts (if one channel was omitted, the entire triplet was switched off). The line increment is 50 fT in the top row and 20 fT in the bottom row.
We found the expected source locations for both dipoles (N20m and P25m) in the vicinity of the central sulcus for ten out of eleven participants. In one out of eleven participants the radial P25m component could not be localized reliably. The mean distance between the radial and the tangential dipole was 11.9 ± 5.4 mm, which is within the expected range for the distance between the underlying Brodmann areas 3b and 1. The angle between the two dipoles was found to be 97.5 ± 28.5 degrees, which is also in accordance with the expected range of angles.
IV. CONCLUSIONS
To our knowledge this is the first study which reconstructs radially oriented dipoles based on vectorial biomagnetic measurements. We conclude that radial dipolar activity can be estimated from such measurements and that three-component magnetoencephalographic measurements might provide more insight into brain activity as compared to standard magnetoencephalography.
ACKNOWLEDGMENT
This work was in part supported by the Deutsche Forschungsgemeinschaft (DFG grant Ha 2899/7/8-1) and the BMBF grant 03IP605.
REFERENCES
[1] Goldenholz DM, Ahlfors SP, Hämäläinen MS et al. (2009) Mapping the signal-to-noise-ratios of cortical sources in magnetoencephalography and electroencephalography. Hum Brain Mapp 30(4):1077-86
[2] Wood CC, Cohen D, Cuffin BN, et al. (1985) Electrical Sources in Human Somatosensory Cortex - Identification by Combined Magnetic and Potential Recordings. Science 227(4690):1051-1053
[3] Jaros U, Hilgenfeld B, Lau S, et al. (2008) Nonlinear interactions of high-frequency oscillations in the human somatosensory system. Clinical Neurophysiology 119(11):2647-57
[4] Melcher JR, Cohen D (1988) Dependence of the MEG on dipole orientation in the rabbit head. Electroencephalogr Clin Neurophysiol 70(5):460-72
[5] Haueisen J, Ramon C, Czapski P et al. (1995) On the Influence of Volume Currents and Extended Sources on Neuromagnetic Fields: A Simulation Study. Ann Biomed Eng 23(11):728-739
[6] Kobayashi K, Uchikawa Y (2001) Estimation of multiple sources using spatio-temporal data on a three dimensional measurement of MEG. IEEE Trans Magn 37:2915-2917
[7] Schnabel A, Burghoff M, Hartwig F et al. (2004) A sensor configuration for a 304 SQUID vector magnetometer. Neurol Clin Neurophysiol 70
[8] Liehr M, Haueisen J (2008) Influence of anisotropic compartments on magnetic field and electric potential distributions generated by artificial current dipoles inside a torso phantom. Phys Med Biol 53:245-254
[9] Arturi CM, Di Rienzo L, Haueisen J (2004) Information Content in Single Component versus Three Component Cardiomagnetic Fields. IEEE Trans Magn 40(2):631-634
Corresponding author:
Author: Jens Haueisen
Institute: Institute of Biomedical Engineering and Informatics, Technical University Ilmenau
Street: Gustav-Kirchhoff Str. 2
City: Ilmenau
Country: Germany
Email: [email protected]
Registration of Chest X-Rays
J. Csorba, B. Kormanyos, and B. Pataki
Budapest University of Technology and Economics, Department of Measurement and Information Systems, Budapest, Hungary
[email protected], [email protected], [email protected]
Abstract— Doctors often want to compare X-ray images of one patient made at different times. In chest X-ray analysis it is hard to find the typically small differences between the two images, and software supporting this comparison could help considerably. We analyzed the possible methods and developed a prototype of a feature-based registration software. Characteristic pairs of points were used for the registration transformation. These points were found automatically on the contour and in the inner part of the lung. Several transformations (rigid and non-rigid ones) were examined to find the best one for registration.
Keywords— chest X-ray, medical image registration, feature-based registration, rigid and non-rigid transformations, disorder detection.
I. INTRODUCTION
Lung cancer is a leading cause of death worldwide. It is responsible for 1,000,000 deaths yearly according to WHO statistics. The cancer death rate could be decreased if cases were detected and treated early. X-ray is one of the cheapest common tools for diagnosis. During examinations the doctor always compares the patient's past X-ray images, if available, with the present one. It is a difficult task, and even expert physicians can easily make a mistake, especially when a high number of patients must be examined. A small fault can lead to misdiagnosis, so it is important to develop supporting software for this job. The core of the support software is the registration of the two X-ray images. We analyzed and rated the possible methods of registration and created a prototype of the software to support doctors' work. An automatic method without any human interaction was chosen, because a large number of images are to be processed and doctors have no time to help the program. There are intensity-based and feature-based methods of registration [1]. Because the different images have different exposure characteristics, a feature-based method based on representative points of the lung was chosen.
II. ALGORITHM STRUCTURE
A. Main Structure
First we preprocessed the two X-ray images of the same person made at different dates. Then pairs of characteristic points were looked for: one representative point in the current image and its counterpart in the older one. We first searched for these points on the contour of the lungs and then in the inner parts of the lungs. We cleared from the set the pairs which are possibly mismatches, and then transformed the older image to the same position as the current one using the cleared set of point-pairs.
B. Preprocessing
In this first step the brightness of the images was normalized based on the area of the lungs. After that we used a recently developed tool [2] which can remove the clavicles and the ribs from X-ray images. Removing the bones is important because they could also generate some false point-pairs. Such pairs cannot be used for registration, because the bones often move independently of the lung tissue.
C. Contour Points
First we searched for points on the contour of the lungs. Different areas presented different difficulties, such as a low signal-to-noise ratio near the mediastinum, or air bubbles in the stomach, which could be a problem in finding the inner or the bottom edge of the lung. Because of these problems, several methods had to be used to get the full contour of the lung.
Topmost point: The first point that we determined was the topmost point of the lung. To find it we searched for candidate points in the image which are in the expected region (at the top of the lung) and have a big intensity difference between their upper and lower neighbours. When we had the candidates, we had to choose the real top point. There have to be a lot of similar points in its surroundings, to make sure it is not just an isolated candidate caused by random noise, and the candidate also has to be in as top a position as possible, because it is the top of the lung.
Bottom line: After the top point we determined the bottom contour of the lung. For that we need a reliable point of that contour part. We searched for candidates here too. These points were in the expected region of the bottom of the lung, and their vertical neighbours had a big intensity difference. There also had to be a lot of other candidates in the surroundings to avoid false hits. The next step was a Sobel transformation on the image to heighten the vertical contours. In the new image the original contour is a high-intensity ridge. Along this ridge we can follow the high-intensity path from the reliable starting point to both the left and the right side. Because of the big noise we used the following algorithm: from a point already assigned to the ridge, we search for the angle at which the average intensity along a vector of a given length is the biggest, and we step to the next contour point along that vector. We continue this algorithm as long as the average intensity is big enough. After we had found points on the bottom contour in the two images, we had to pair them. The algorithm almost always finds the outermost points of the bottom line, but it sometimes makes mistakes at the innermost points (closest to the mediastinum) because of the low signal-to-noise ratio near the mediastinum. That is why we started pairing the points on the two images from the outermost points, going along the contour.
Sides: Now we know the top point and the bottom outermost point of the contour. The outer side of the contour is a line connecting them; the same is true for the inner side. To find these lines we searched for points with proper intensity differences in their surroundings. These candidates could be points of the contour line. Knowing the ends of the line and its possible points (the candidates), we can find the whole line. We had to find point pairs on the contour lines of the two images. On the outer side we did not have characteristic points, because the outer part of the lung is very homogeneous; our only information was the position of a point on the line. We also did not have much information about the inner points because of the low signal-to-noise ratio near the mediastinum. Considering these problems, we simply split every section of the contour into smaller parts of equal length. The selected points were the ends of these small parts, and we could pair them in the two images one after the other.
D. Inner Points
Looking for characteristic points: Finding characteristic inner points is a difficult task even for the human brain. But there are some clearly identifiable parts of the lung: branches of the vessels, enlargements of (blood) vessels,
and different pathological disorders of the lung. What these helpful parts have in common is that they all appear on the X-ray as a nodule. The detection of these nodules could be a starting point in order to find coherent pairs of points. To detect nodules we apply a small fixed-size window – called the scanning window – to every pixel [3]. We count the average ($I_{avg}$) and the minimal ($I_{min}$) grey levels inside the scanning window. If the grey level of the actual (central) pixel is less than a suitable threshold, then we mark the pixel as part of a suspected nodule. The threshold is calculated as a weighted average of $I_{min}$ and $I_{avg}$. Thus the pixel (i, j) is possibly part of a nodule if its intensity satisfies:
$I(i,j) \le w_1 I_{avg} + w_2 I_{min}$   (1)
The size of the scanning window determines the size of the nodules we want to find, while the threshold determines how large a change in contrast we would like to detect. Our preliminary selection is the following: the length and height of the window are 1/50 of the original size of the images, and the weights are $w_1 = 0.25$, $w_2 = 0.75$.
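A minimal sketch of this scanning-window detector follows (our illustration; the sliding-window statistics are computed here with scipy.ndimage filters, which the paper does not prescribe):

    import numpy as np
    from scipy.ndimage import uniform_filter, minimum_filter

    def nodule_mask(img, w1=0.25, w2=0.75):
        """Boolean mask of suspected-nodule pixels according to eq. (1).

        img: 2-D grayscale chest X-ray with the bones already removed.
        """
        win = max(3, img.shape[0] // 50)          # window = 1/50 of image size
        i_avg = uniform_filter(img.astype(float), size=win)  # I_avg per window
        i_min = minimum_filter(img, size=win)                # I_min per window
        return img <= w1 * i_avg + w2 * i_min     # True where eq. (1) holds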
Parameters of the points: We may assign to each nodule its center of gravity, calculated from the points belonging to this nodule. We can use these center points: this step does not cause significant inaccuracy, because the diameter of the nodules is small. Different features were defined in order to pair the nodule center points, which were found in the two images independently:
• Morphological parameters of the nodules (area, diameter and a commonly used shape factor – the circularity).
• "Characteristic" parameter of the nodule: this is calculated from the difference between the intensity of the points in the nodule and the above threshold. The more "characteristic" a nodule is, the better it stands out from the image.
• A one-variable function assigned to every single point in the following way: scan the picture with radial lines of a given length from the center of a nodule, and count the average intensity of the pixels found along each line. Doing this for different angles gives a function. This function can be used to pair the centers of the nodules on different images, because its values are similar at corresponding points and significantly different at non-corresponding ones.
Pairing the points: If we want to find the best fitting pair of a point marked in the old image without having any preliminary information, we theoretically need to check all characteristic points found in the new image. This situation
can be significantly improved when the contour point pairs are used to calculate an average translation between the old and new lung images. To determine the pair of a point, we then only need to look in the neighborhood of the translated position. In this neighborhood, the point with the smallest distance in the space of the above-defined features is chosen. The reliability of the pair is inversely proportional to this distance.
E. Data Cleaning
A consistency test is made with the existing pairs of points for the sake of reducing the mismatched pairs. For this purpose we define for every pair of points the "shift vector" as the difference of the coordinates of the points of a pair (new − old). We expect that for two different point pairs, if the two points in the old image are close to each other, then the difference of the two shift vectors cannot be large. Based on this we do the following: start with an arbitrary pair and calculate the shift vector. After that, take the pairs found one by one. In each step choose a new pair; it is accepted if its shift vector does not differ too much from the shift vectors of any previously chosen pair. "Not too much" means here that the length of the difference of the shift vectors is not larger than 10% of the distance of the points of the pairs in the old image. If a pair is not accepted, then there is an inconsistency between our pairs; in this situation we throw away the less reliable of the two pairs in conflict. After this procedure we get a set of consistent pairs; a sketch of this test is given below.
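    import numpy as np

    # Sketch of the shift-vector consistency test described above. Each pair
    # is (old_point, new_point, reliability); processing the pairs in order
    # of decreasing reliability approximates discarding the less reliable
    # pair of a conflicting couple (an assumption; names are illustrative).
    def clean_pairs(pairs):
        accepted = []
        for old, new, rel in sorted(pairs, key=lambda p: -p[2]):
            shift = np.subtract(new, old)
            consistent = True
            for q_old, q_new, _q_rel in accepted:
                q_shift = np.subtract(q_new, q_old)
                dist = np.linalg.norm(np.subtract(old, q_old))
                if np.linalg.norm(shift - q_shift) > 0.1 * dist:
                    consistent = False       # conflicts with an accepted pair
                    break
            if consistent:
                accepted.append((old, new, rel))
        return accepted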
F. Transformations
The pairs of points found can be used to create the registration transformation. Different transformations were examined; in the following part we present the mappings which were analyzed. We start with different well-known rigid transformations and continue with some non-rigid mappings.
Linear transformation: This map enlarges with a factor λ, rotates by α degrees, and translates with a vector $(d_x, d_y)$, so it has 4 free parameters.
Affine transformation: The image of the point $(x_0, y_0)$ is given by:

$(u_0, v_0) = (x_0, y_0, 1) \begin{pmatrix} A & B \\ C & D \\ E & F \end{pmatrix}$   (2)
So it has 6 free parameters.
Projective transformation: The image of the point $(x_h, y_h, w)$, given in homogeneous coordinates, is obtained as:

$(u, v, h) = (x_h, y_h, w) \begin{pmatrix} A & D & G \\ B & E & H \\ C & F & I \end{pmatrix}$   (3)
This transformation is given by 8 free parameters.
Polynomial transformation: These can be considered a generalization of the affine transformation. In the affine case the coordinate functions are linear combinations of first-degree polynomials, while here the coordinate functions are linear combinations of higher-degree polynomials.
Local weighted mean transformation: If a point (x, y) in the original image is close to a control point (a characteristic point found and paired) in this image, then we expect the point corresponding to (x, y) in the transformed image to be close to the control point (in the other image) [4]. In the original image take the N closest control points to (x, y) and denote them by $P_1, P_2, \ldots, P_N$. Now we can calculate
the (u, v) point corresponding to (x, y) using a fittingly weighted average of $P_1, P_2, \ldots, P_N$.
Fig. 1 The pairs of points that have been found
Interpolation based transformations: A transformation could be imagined as a two-variable function with two values. This function is known at the already found pairs of
points. A two-variable function with two values can be considered as two two-variable functions with one value each. These are surfaces defined at some points. The surfaces which are important for the transformation can be determined with a chosen interpolation procedure.
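For illustration, the second-degree polynomial transformation (the best performer in Table 1 below) can be fitted to the cleaned point pairs by ordinary least squares; a sketch with assumed array names, not the authors' implementation:

    import numpy as np

    def _design(pts):
        x, y = pts[:, 0], pts[:, 1]
        # 6 monomials per coordinate -> 12 free parameters in total
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_poly2(src, dst):
        """Fit u(x, y), v(x, y) as 2nd-degree polynomials from N point pairs."""
        A = _design(src)
        coeff_u, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
        coeff_v, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
        return coeff_u, coeff_v

    def apply_poly2(coeff_u, coeff_v, pts):
        A = _design(pts)
        return np.column_stack([A @ coeff_u, A @ coeff_v])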
III. TEST RESULTS
8 pairs of images (an old and a new X-ray image of the same patient) were used for testing. In each pair a human expert marked 8 inner and 8 contour points that were coherent. We could then measure the difference (in pixels) between the transformed marked control points of the old image and the corresponding marked points in the new image. The mean square error was calculated from the differences. This procedure describes the accuracy of the whole registration and thereby compares the transformations indirectly. The resolution of the original images was 2000×2000 pixels. We got the following errors for the different transformations (errors are averaged over the 8 images).
Table 1 Results of registrations by transformations

Transformation           Average error (in pixels)
Linear                   25.38
Affine                   25.13
Projective               26.75
Polynomial (2nd order)   23.88
Polynomial (3rd order)   31.75
Local weighted mean      31.13
Interpolation (linear)   27.50
Interpolation (cubic)    29.50

The rigid transformations performed better than the non-rigid mappings, and among them the second-degree polynomial turned out to be the most efficient one. This can be explained by the degrees of freedom (DOF) of the linear, affine and projective transformations being too small to model the deformations of the lung, while the third-degree polynomial mapping, with its 20 DOF, can already make changes that are too special. In the case of the non-rigid transformations, a single wrongly found pair of points can cause a mistake in the neighborhood of that point; applying them is too risky. In contrast to this, the rigid transformations are fault-tolerant, which is an important aspect of decision support software for doctors.
Fig. 2 New image, old image and difference image
IV. CONCLUSIONS
Based on our test results, the second-degree polynomial transformation is the closest to the real deformation of the lung among the examined mappings. The reached 20-pixel imprecision is acceptable for pictures of similar resolution; the error is just barely visible to the human eye. Thus the second-degree polynomial is proposed to model the shift of the lungs. The sufficiently accurate result permits the algorithm to be used in different applications. One option is the alternating display of the transformed old image and the new one; another possibility is to display the "difference picture" in order to detect changes in hardly noticeable nodules. Both tools are useful for making doctors' diagnostic work easier and more efficient. The algorithms of these two functions were implemented. The results of this application are encouraging for creating high-precision automated detection software in the future.
ACKNOWLEDGMENT
This work was partly supported by the National Development Agency under contract KMOP-1.1.1-07/1-2008-0035.
REFERENCES
[1] Medical Image Computing and Computer-Assisted Intervention – MICCAI 2002, 5th International Conference, Tokyo, Springer, 517-524, 2002
[2] G. Horváth, G. Orbán, Á. Horváth, G. Simkó, B. Pataki, P. Máday, S. Juhász and Á. Horváth, "A CAD System for Screening X-ray Chest Radiography", World Congress 2009 Medical Physics and Biomedical Engineering, Vol. 25, 210-213, 2009
[3] Kim Le, "Automated Detection of Early Lung Cancer and Tuberculosis Based on X-Ray Image Analysis", the 6th World Scientific & Engineering Academy and Society (WSEAS) International Conference, 2006
[4] Ardeshir Goshtasby, "Image Registration by Local Approximation Methods", Image and Vision Computing, vol. 6, no. 4, 255-261, 1988
Arterial Pulse Transit Time Dependence on Applied Pressure
K. Pilt, K. Meigas, M. Viigimaa, J. Kaik, R. Kattai, and D. Karai
Tallinn University of Technology/Technomedicum/Department of Biomedical Engineering, Tallinn, Estonia
Abstract— The dependence of arterial pulse transit time on applied pressure is analyzed experimentally. The pressure was applied to the brachial artery. Pulse transit times between the left and right hand were compared by calculating the correlation at different applied pressures. In addition, the pulse transit time characteristics were analyzed at different pressures. It was revealed that the pulse transit time is not influenced by the applied pressure once the pressure is lowered to a certain level. This level can be located from the piezoelectric signal amplitude.
Keywords— Pulse transit time, PPG, piezoelectric transducer, applied pressure.
I. INTRODUCTION
Pulse transit time (PTT) is the time a pulse wave takes to travel between two arterial sites. The time delay between two registered pressure waves, which are measured from the same artery, is directly proportional to blood pressure [1]. PTT depends on arterial stiffness: when the arterial wall becomes stiffer, the PTT is shortened, and vice versa. In addition to a blood pressure rise, arterial stiffness is also increased with age, arteriosclerosis and diabetes mellitus, resulting in a shortening of the PTT [2-4]. Different methods have been used to register the pressure wave, such as a piezoelectric pressure sensor, PPG, Doppler ultrasound, etc. PPG is a non-invasive optical technique for measuring changes in blood circulation. This method is widely used in oxygen saturation measurement, but it can also be used to register the pressure wave. The optical radiation from the light emitting diode (LED), which is often red or infrared, is directed to the skin. The light is absorbed, reflected and scattered in the tissue and blood, and only a small fraction of the light intensity changes is received by the photodiode. The pulsating AC component of the registered PPG signal corresponds to the pressure wave. There are two main PPG sensor design modes: the reflection and the transmission mode. In the reflection mode, a photodiode is placed adjacent to the LED and directed toward the skin. The photodiode measures the reflected and scattered light intensity from the skin surface. In the transmission mode, the photodiode and the light source are placed on opposite sides of the measured volume, and the photodiode measures the transmitted light intensity. The transmission sensor
measurement sites are limited because of its geometry, whereas a reflection-mode sensor can be placed at any point on the skin surface. Still, the PPG signal has been easier to obtain from fingers, toes and earlobes using a transmission-mode sensor (unpublished observations). Mechanical pulsation of the arteries can be registered with a piezoelectric transducer. This is a relatively simple method for the detection of the pressure wave: a piezoelectric transducer generates a measurable voltage when a deforming mechanical force is applied. A piezoelectric transducer can be used to measure the pressure wave at measurement sites where it is difficult to obtain the PPG signal. For example, at the elbow the brachial artery is hidden behind other types of tissue, and the obtained PPG signal has a low signal-to-noise ratio (SNR). This article presents a preliminary study for wider research in the area of diagnosis of cardiovascular diseases using PTT as one of the parameters. The pressure wave measurement sites are the wrist and the elbow. The pressure wave signal from the brachial artery is planned to be obtained with a piezoelectric transducer, which is placed above the artery and fixed with a ribbon fastened around the elbow. This article analyzes how the additional pressure from the piezoelectric transducer influences the brachial artery and the pressure wave transit time.
II. METHODS
To obtain a piezoelectric signal with a high SNR, the transducer should be applied to the skin surface above the brachial artery with additional pressure. The brachial artery is occluded when a certain amount of pressure is applied. When the pressure is lowered, the blood flow recovers and it is possible to register the pulsating pressure wave. At a certain pressure the piezoelectric signal has its maximum amplitude; when the pressure is lowered further, the signal amplitude starts to decrease again. At the optimal pressure the piezoelectric signal should be registered with as high a signal amplitude as possible. At the same time, the transducer pressure applied to the brachial artery should not influence the pressure wave velocity, as the second sensor is placed on the radial artery of the same hand.
Fig. 1 Experimental device to analyze the dependency between PTT and applied arterial pressure
The time interval measured between the peak of the R-wave of the electrocardiogram (ECG) and the rising front of the pressure wave signal can be taken as the PTT [5]. In this article we denote this time interval as the R-wave gated pulse transit time (RWPTT). Under simplified conditions, the arteries leading from the heart to the left and right radial arteries at the wrist have identical parameters. The RWPTTs for both hands are then comparable and highly correlated. The mechanical properties of the left-hand brachial artery are influenced by applying pressure to it; because of this, the correlation between the left and right hand RWPTTs starts to decrease. At the moment when the brachial artery is closed, there is no correlation between the two time intervals. An experimental device was built to analyze the PTT dependency on the piezoelectric transducer pressure. The device schematic is given in Figure 1. It consists of a solid plastic tube large enough for a human hand to fit through. The piezoelectric transducer is fixed inside the tube, and on the opposite side a cuff is placed which can be filled with air. The pressure applied to the cuff is measured with a manometer. A stem is fixed inside the tube, with the piezoelectric transducer attached to its top. During the experiment the subject's hand is placed through the tube and rests on the cuff, positioned so that the transducer is aimed at the brachial artery above the elbow. The position of the brachial artery is located through palpation beforehand. The cuff expands under the hand as air is pumped into it, and the piezoelectric transducer comes into contact with the skin surface. As air pumping continues, the pressure under the elbow rises and a direct force from the transducer is applied to the artery. Three different pressure wave signals are registered synchronously during the experiment: a piezoelectric signal from the transducer fixed inside the plastic tube and placed on the left hand, and two PPG signals measured with reflectance sensors from the left and right wrists.
Fig. 2 Three different R-wave gated pulse transit times are measured between the ECG signal and the pressure waves

In addition, the ECG signal, which describes the electrical activity of the heart, was registered synchronously. For all three registered pressure wave signals the corresponding RWPTTs are measured: RWPTTPPG, the time interval between the ECG and the left-hand PPG signal; RWPTTrefPPG, the time interval between the ECG and the right-hand PPG signal; and RWPTTpiezo, the time interval between the ECG and the piezoelectric signal (Figure 2). According to previous research, the RWPTT is suggested to be measured between the ECG R-peak and the point at 50% of the PPG signal rising front [6]. The pressure wave front in the piezoelectric signal was detected as the maximum point of each period. RWPTTPPG corresponds to the pressure-influenced time interval, and RWPTTrefPPG corresponds to the reference time interval. To analyze the influence of pressure on the artery, the correlation is calculated between RWPTTPPG and RWPTTrefPPG.
III. RESULTS

Experiments were conducted on 4 healthy volunteers, and 3 experiments were carried out on each volunteer. The ages of the subjects were 22, 31, 38 and 61 years. During the experiments the room temperature was around 24 °C and the subjects were at rest.
Fig. 3 Results of four of the conducted experiments: the correlations rPPG, rpiezo, rstart, rstop and the amplitude of the piezoelectric signal at different applied pressures. a) 38-year-old subject, b) 61-year-old subject, c) 31-year-old subject, d) 23-year-old subject
The raw PPG and piezoelectric signals were obtained with a laboratory-built circuit. For the PPG signals, Nellcor Max-Fast reflectance sensors were used. The PPG sensors were placed at the left and right wrists, fixed on the skin surface above the radial artery, which was located through palpation. For the piezoelectric signal the ADInstruments MP100 transducer was used. The PPG and piezoelectric signals were digitized with a National Instruments PCI MIO-16-E1 data acquisition card and recorded in the LabVIEW environment. The ECG signal was obtained and digitized with an ADInstruments PowerLab 4/20T device and recorded with the ADInstruments Chart software. All signals were digitized with a sampling frequency of 1 kHz. Before the experiments it was ensured that the signals were registered synchronously, and the signals were recorded continuously during the whole experiment.

The subject's left arm was fixed inside the experimental device described above. The cuff under the arm was filled with air, and at a certain pressure the piezoelectric sensor occluded the blood flow in the brachial artery. The closure of the artery was detected from the left-hand PPG signal. The pressure in the cuff, measured with the manometer, was then lowered step by step, and each pressure level was marked synchronously with the recorded signals. Before and after every experiment the signals were recorded without pressure on the artery, in order to determine the reference correlation between RWPTTPPG and RWPTTrefPPG afterwards.

Post-processing of the signals was carried out in MATLAB. The signals were filtered with high-pass and low-pass FIR filters with cut-off frequencies of 0.1 Hz and 30 Hz, respectively. The R-peaks of the ECG signal were detected using the Hamilton-Tompkins algorithm [7]. The pulse wave rising fronts were detected from the PPG and piezoelectric signals as described previously. Three different intervals, RWPTTPPG, RWPTTpiezo and RWPTTrefPPG, were measured between the R-peak of the ECG signal and the pressure waves. For each pressure, two correlations were calculated between the RWPTTs: rpiezo is the correlation between RWPTTpiezo and RWPTTrefPPG, and rPPG is the correlation between RWPTTPPG and RWPTTrefPPG. In addition, the reference correlations rstart and rstop were calculated between RWPTTPPG and RWPTTrefPPG; the time intervals for the rstart and rstop calculations were taken from the parts of the signals recorded before and after the pressure was applied, respectively. All correlations were calculated using a constant number of time intervals.

Figure 3 shows, for each subject, the average amplitude of the piezoelectric signal and the dependency of the two correlations rpiezo and rPPG on the pressure applied to the cuff. In addition, each graph includes two marker lines corresponding to rstart and rstop.
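For illustration, the extraction and correlation steps described above can be sketched as follows. This is a minimal sketch that assumes the filtered signals and the ECG R-peak indices (e.g. from a Hamilton-Tompkins detector) are already available; the search window and the number of intervals per correlation are placeholder values, not the paper's exact settings:

```python
import numpy as np

FS = 1000  # sampling frequency [Hz], 1 kHz as in the experiment

def rwptt_series(r_peaks, pulse_wave, fs=FS, search_s=0.5):
    """RWPTT per beat: time from each ECG R-peak to the point where the
    pulse wave's rising front crosses 50% of its beat-local amplitude."""
    win = int(search_s * fs)
    rwptts = []
    for r in r_peaks:
        seg = pulse_wave[r:r + win]
        if seg.size < win:
            break
        half = seg.min() + 0.5 * (seg.max() - seg.min())
        idx = int(np.argmax(seg >= half))  # first crossing of the 50% level
        rwptts.append(idx / fs)
    return np.array(rwptts)

def rwptt_correlation(a, b, n_intervals=30):
    """Pearson correlation over a constant number of time intervals,
    as used for r_PPG, r_piezo, r_start and r_stop."""
    n = min(n_intervals, a.size, b.size)
    return float(np.corrcoef(a[:n], b[:n])[0, 1])
```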
Fig. 4 RWPTTs measured from the 23-year-old subject at different applied pressures
The correlations rPPG and rpiezo are given starting from the pressure at which a pulsating PPG signal became detectable. It is visible that in every experiment, when rPPG is below 0.5, the piezoelectric signal amplitude is near its maximal value. A low correlation means that the pressure on the artery influences the pulse transit time. As the pressure on the artery is lowered, rPPG increases towards 1 and varies from pressure to pressure around rstart and rstop; it can then be assumed that the pulse transit time is no longer influenced by the pressure on the artery.

In Figure 3 b) rPPG follows rpiezo. Near a cuff pressure of 15 mmHg, rPPG decreases, which means that the properties of the artery in the right arm differ from those in the left arm. This difference between the arteries is not caused by the pressure applied by the transducer, because rpiezo decreases as well; rpiezo describes the correlation between the RWPTTs of the left and right arms without any applied pressure. In Figures 3 b) and d) the correlation first decreases to its minimum and then starts to increase as the pressure is decreased. Figure 4 shows the RWPTTs whose correlations are plotted in Figure 3 d). RWPTTPPG is higher than RWPTTrefPPG in the beginning; after the pressure is decreased, the transition of the time interval takes place. This means that the high correlation around pressures of 21-25 mmHg, shown in Figure 3 d), does not describe the influence of the applied pressure. It is visible that a high pressure on the artery causes a constant increase in the RWPTT. The high correlation around a pressure of 23 mmHg in Figure 3 b) is explained similarly.
IV. CONCLUSIONS

The PTT dependence on the applied arterial pressure was analyzed experimentally. The pressure was applied to the left brachial artery with a piezoelectric transducer. The RWPTTs were measured between the ECG R-peak and the pressure waves registered from the left and right radial arteries. The correlations rpiezo and rPPG were calculated at different applied pressures, and the RWPTT characteristics were analyzed at different applied pressures as well.

It can be concluded that the PTT is not influenced by the applied pressure once the pressure is lowered to a certain level. This pressure level can be set according to the piezoelectric signal amplitude, which reaches its maximum as the pressure is lowered. At pressures below the point of maximum amplitude, the pulse transit time does not depend on the applied pressure. As future work, the pulse shape should be analyzed at different pressures. In addition, the problem should be analyzed analytically.
ACKNOWLEDGMENT

This study was supported by the Estonian Science Foundation (grant No. 7506), by the Estonian targeted financing project SF0140027s07, and by the European Union through the European Regional Development Fund.
REFERENCES

1. Allen J (2007) Photoplethysmography and its applications in clinical physiological measurement. Physiol Meas 28:R1-R39
2. Smith R P, Argod J, Pepin J L, Levy P A (1999) Pulse transit time: An appraisal of potential clinical applications. Thorax 54:452-457
3. Hlimonenko I, Meigas K, Viigimaa M, Temitski K (2008) Aortic and arterial pulse wave velocity in patients with coronary heart disease of different severity. Estonian J Eng 14(2):167-176
4. O'Rourke M F, Hayward C S (2003) Arterial stiffness, gender and heart rate. J Hypertens 21:487-490
5. Naschitz J E et al (2005) Pulse transit time by R-wave-gated infrared photoplethysmography: Review of the literature and personal experience. J Clin Monit Comput 18:333-342
6. Lass J, Meigas K, Kattai R, Karai D, Kaik J, Rosmann M (2004) Optical and electrical methods for pulse wave transit time measurement and its correlation with arterial blood pressure. Proc Estonian Acad Sci Eng 10:123-136
7. Hamilton P S, Tompkins W J (1986) Quantitative investigation of QRS detection rules using the MIT/BIH arrhythmia database. IEEE Trans Biomed Eng 33(12):1157-1165
Author: Kristjan Pilt
Institute: Department of Biomedical Engineering
Street: Ehitajate tee 5
City: Tallinn, 19086
Country: Estonia
Email: [email protected]
Influence of an Artificial Valve Type on the Flow in the Ventricular Assist Device

D. Obidowski, P. Klosinski, P. Reorowicz, and K. Jozwik

Institute of Turbomachinery, Technical University of Lodz, Lodz, Poland

Abstract— Numerical fluid mechanics methods play a significant role in the design process of many devices, thus increasing the comfort of life. One of the fields where their application is of the widest interest is medicine. The authors used the latest Computer Aided Design and Computational Fluid Dynamics software to analyze the flow in a pneumatic Ventricular Assist Device (VAD) with two different types of valves applied. In the study, a MEDTRONIC-HALL tilting disc mechanical artificial heart valve was compared with a three-leaflet polyurethane artificial heart valve designed at the Foundation of Cardiac Surgery Development, Zabrze. The comparison was made on the basis of the flow visualization inside the VAD chamber and the size of the stagnation regions where the flowing blood may coagulate. The presented results were obtained for steady state flow conditions and on the assumption that the walls of the assist device, adapters and valves were rigid. The simulated fluid was blood. The dynamic viscosity of blood was defined according to the non-Newtonian Power Law. Simulations were performed for systole and diastole. The Ansys CFX v12 code was used to perform the preprocessing, solving and postprocessing stages. Deformations of the three-leaflet polyurethane valve were obtained in SolidWorks 2009 and imported into Ansys ICEM v12. On the basis of the performed analysis, it has been shown that the disc mechanical heart valve generates better flow conditions inside the heart chamber, especially where a risk of coagulation is concerned. Moreover, the flow observed inside the chamber when the disc valve was used is more homogeneous, and a single swirl occurring in the central part enables good washing of the connection of the diaphragm and chamber regions. The analysis presented here is an integral part of the investigations conducted within the Polish Artificial Heart Programme.

Keywords— ventricular assist device, artificial heart, artificial heart valves, numerical fluid mechanics.
I. INTRODUCTION

Heart diseases are among the most frequent causes of death and pose a serious threat to human life. A proper diagnostic procedure may help to treat them, but in many cases the only available way to rescue the patient's life is a heart transplantation. An insufficient number of donors is a significant problem in the treatment of heart disease; since heart malfunction is always dangerous to life, many efforts are made to create an artificial heart that may be implanted in place of the human heart. This would make the treatment independent of the limited number of donors. One method that may help to heal the heart, or to extend the time a recipient can wait for a transplantation, is to use an external artificial heart that supports the patient's heart by forcing an extra blood flow rate.

The presented research is devoted to the examination of the effect of the artificial heart valve type on the flow inside the chamber and in the adapter. The experiment was conducted on the original Ventricular Assist Device developed at the Foundation of Cardiac Surgery Development (FCSD). Two different types of artificial valves were introduced into the VAD: a Medtronic-Hall tilting disc mechanical valve was the first type, and a three-leaflet polyurethane valve developed at the Foundation (FCSD) was the second one. An evaluation of the different solutions was made on the basis of the flow visualization inside the VAD chamber and the spread of the blood flow stagnation regions. The analysis was based on a numerical experiment performed with the Computational Fluid Dynamics software Ansys v12. The non-Newtonian model of blood, based on the Power Law, was applied.

Two stages of the device operation were simulated. Diastole: the diaphragm is in its most external position and the volume of blood inside the chamber is at its highest; at this point of the heart cycle the inlet valve is totally open and the outlet valve is almost closed (5% open). The opposite deflection of the diaphragm, systole, occurs when most of the blood is discharged from the chamber; the outlet valve is totally open, whereas the inlet valve is almost closed (5% open). All the computations were performed at the Institute of Turbomachinery, Technical University of Lodz, Poland.
II. NUMERICAL EXPERIMENT

A. Computational Domain

A model of the pneumatically driven Ventricular Assist Device – POLVADEXT – has been used in the presented study. The whole model consists of a blood chamber, an air chamber separated by a diaphragm, two artificial heart valves, and two adapters (Fig. 1). The pneumatic part of the VAD is irrelevant in this study, hence it is omitted.
The CFD software solves the Navier-Stokes equations employing the Finite Volume Method, thus the computational domain has to be divided into small volumes. The geometry has been imported into Ansys ICEM v12, in which it has been discretized. Due to the complexity of the geometry, an unstructured mesh has been used. In the boundary layer, in the vicinity of the walls, prismatic elements have been employed to accurately resolve the flow in the regions of the highest velocity gradients. In all the presented cases, the number of elements in the whole domain exceeded 14 million.
The Shear Stress Transport turbulence model has been used. The quality of the mesh has been checked in a mesh independence study and with the Yplus parameter, which was smaller than 8 in all cases. Low residual levels have been achieved in the solution. Boundary conditions were estimated on the basis of a literature survey. At the inlet, a velocity profile has been introduced with the maximum value listed in Table 1. The profile is based on the cork shape, typical of turbulent flows, and is described by the following formula:

$$V = V_{max}\left(1 - \frac{r}{R}\right)^{1/7} \qquad (1)$$

where V is the velocity at a particular node of the mesh at a distance r measured from the axis of the adapter, Vmax is the velocity along the axis of the adapter, and R is the radius of the adapter. At the outlet cross-section, pressure has been assigned. The velocity and pressure values used in the presented study are listed in Table 1.

The non-Newtonian blood model based on the Power Law [1, 2] has been used in the study, with the blood dynamic viscosity described as a function of strain. The basic Power Law model has been limited in the range of very low strain values, below 1e-9 s-1. It is known that the Power Law underestimates the dynamic viscosity in the range of high strain values; thus, for strains larger than 327 s-1, the Newtonian model is proposed, for which μ = 0.00345 Pa·s. The dynamic viscosity used in the study is described by equation (2):

$$\mu = \begin{cases} 0.554712 & \text{for } \dfrac{\partial v}{\partial y} < 10^{-9} \\[6pt] \mu_0 \left( \dfrac{\partial v}{\partial y} \right)^{n-1} & \text{for } 10^{-9} \le \dfrac{\partial v}{\partial y} < 327 \\[6pt] 0.00345 & \text{for } \dfrac{\partial v}{\partial y} \ge 327 \end{cases} \qquad (2)$$

where μ is the dynamic viscosity and ∂v/∂y is the strain. The blood density was assumed to be equal to 1045 kg/m3.

Table 1 Boundary conditions for all cases under investigation

            Inlet maximum velocity [m/s]    Outlet static pressure [kPa]
Diastole    3.815                           13.65
Systole     0.315                           -0.2821

Fig. 1 Computational domain with described model elements
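The two relations above translate directly into code. The sketch below uses the numerical constants printed in equations (1) and (2); the consistency index mu0 and the exponent n of the Power Law segment are exposed as parameters because their values are not stated in this excerpt (the defaults are our own assumption, chosen so that mu0 * 327**(n-1) is approximately 0.00345 Pa·s, i.e. continuous at the upper Newtonian cap):

```python
import numpy as np

def inlet_velocity(r, v_max, R):
    """Cork-shaped (1/7th power) turbulent inlet profile, equation (1)."""
    return v_max * (1.0 - r / R) ** (1.0 / 7.0)

def blood_viscosity(strain_rate, mu0=0.035, n=0.6):
    """Piecewise Power Law dynamic viscosity [Pa s], equation (2).
    Only the two Newtonian caps (0.554712 Pa s below 1e-9 1/s and
    0.00345 Pa s above 327 1/s) come from the paper; mu0 and n are
    placeholder values typical of non-Newtonian blood models."""
    g = np.asarray(strain_rate, dtype=float)
    return np.where(g < 1e-9, 0.554712,
           np.where(g < 327.0, mu0 * g ** (n - 1.0), 0.00345))
```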
B. Numerical Study Results

The most dangerous complication connected with the application of a VAD is blood coagulation. It may occur when blood platelets are activated by deformation or by contact with air or with the material of the implanted device.

Fig. 2 Streamlines with velocity vectors – a disc valve during diastole
The risk of coagulation increases when the exposure time is prolonged. This leads to the conclusion that regions of very low velocity are potentially the most dangerous, especially in the vicinity of the VAD walls. Two types of visualization methods are used: streamlines with velocity vectors (Figs 2, 4, 6) and surfaces enclosing low-velocity regions (Figs 3, 5, 7, 8).
Fig. 3 Stagnation regions – a disc valve during diastole

The swirls that can be seen in Fig. 2 have an axis around which the blood circulates. Blood flows in the region where the diaphragm is connected to the artificial heart chamber. In Fig. 3 no significant regions of stagnation are visualized. The volume of the regions where the velocity is lower than 0.01 m/s is marked in the colours depicted in the legend. In Figure 3 only very small spots, in the vicinity of the inlet valve, are visible, which shows that the velocity almost everywhere in the domain is larger than the value mentioned.

It has been shown in earlier studies [3] that the angular positioning of the disc valve plays a significant role in the proper optimization of the stream. In this paper, the best angular position is presented. For the same angular position, the velocity streamlines during systole are illustrated in Fig. 4. The high velocity (greenish lines) at the connection of the diaphragm and the chamber is worth noting: it ensures a low risk of coagulation due to the short contact time of blood platelets with the material of the chamber. Stagnation regions are depicted in Fig. 5. Although those regions are large, it is necessary to remember that stagnation occurs only during a limited part of the heart cycle. The most dangerous situation is when stagnation is observed in the same regions for both diastole and systole, especially in the vicinity of the diaphragm. This conclusion has been drawn from clinical observations.
Fig. 4 Streamlines with velocity vectors – a disc valve during systole
Fig. 5 Stagnation regions – a disc valve during systole
Fig. 6 Streamlines with velocity vectors – a three-leaflet valve during diastole
The main aim of this study was to compare a disc heart valve and a polyurethane three-leaflet valve operating under the same conditions in the same chamber. Only the diastole streamlines are presented in this paper, since diastole was shown to provide the better flow conditions in the case of the disc valve.

When compared to the tilting disc valve, the polyurethane valve shows considerably worse flow conditions. The streamline and velocity vector plots shown in Fig. 6 present several regions of swirls within the chamber that do not lie along one axis. Moreover, separation is observed near the wall just after the blood enters the chamber. Backflows that can be seen on the external sides of the leaflets cause the stagnation regions visualized in Fig. 7 and Fig. 8.

Fig. 7 Stagnation regions – a three-leaflet valve during diastole

Fig. 8 Stagnation regions – a three-leaflet valve during systole

Most importantly, there are regions of stagnation, visible in both systole and diastole, between the leaflets of the valve and the adapter wall. This occurs only for the inlet valve. In those regions the risk of blood coagulation is considerable. Another problematic region observed in the case of the three-leaflet valve during systole is the connection of the diaphragm and the chamber. This is an additional disadvantage of applying the polyurethane three-leaflet valve in the pneumatic Ventricular Assist Device described in the paper.

III. CONCLUSIONS

In the presented numerical analysis it is shown that the three-leaflet valve exhibits a high risk of blood coagulation in the place where the leaflet is attached to the adapter wall. Significant regions of stagnation were observed for both diastole and systole. The tilting disc mechanical valve, when inserted in the optimal angular position, ensures good flow circulation during both diastole and systole. In the case of diastole, no significant regions of stagnation have been observed. Angular positioning of the three-leaflet valve would not provide any improvement in flow conditions, as the regions of stagnation are not related to the geometry of the chamber but to the design of the valve itself. A further study with time-dependent boundary conditions and, possibly, involving fluid-structure interaction simulations [4] should be carried out to support the conclusions drawn here.

ACKNOWLEDGMENT

This work has been supported by the “Polish Artificial Heart” governmental project.

REFERENCES
1. Jozwik K, Obidowski D (2010) Numerical simulations of the blood flow through vertebral arteries. Journal of Biomechanics 43(2):177-185
2. Johnston B, Johnson P, Corney S, Kilpatrick D (2004) Non-Newtonian blood flow in human right coronary arteries: Steady state simulation. Journal of Biomechanics 37:709-720
3. Jozwik K, Obidowski D, Klosinski P et al (2009) Modifications of an artificial ventricle assisting heart operation on the basis of numerical methods. Turbomachinery 135:61-68
4. De Hart J, Baaijens F P T, Peters G W M, Schreurs P J G (2003) A computational fluid-structure interaction analysis of a fiber-reinforced stentless aortic valve. Journal of Biomechanics 36:699-712
A New Stimulation Technique for Electrophysiological Color Vision Testing M. Zaleski and K. Penkala West Pomeranian University of Technology/Faculty of Electrical Engineering/Department of Systems, Signals and Electronics Engineering, Szczecin, Poland
Abstract— In this paper a novel kind of stimulus for ERG (Electroretinography) and VEP (Visual Evoked Potentials) color vision tests is presented, along with a special-purpose generator. These stimuli are based on color changes at constant luminance (isoluminance). Sample results obtained in laboratory experiments, as well as possible applications of this stimulation technique, are also discussed.

Keywords— Electroretinography (ERG), Visual Evoked Potentials (VEP), color vision testing, color stimulation, objective anomaloscopy.
Table 1 Main features and parameters of the generator

Feature                   Value (range)   Unit
No. of channels           3               -
LED current               0 – 30          mA
Flicker frequency         1 – 50          Hz
Light pulse duration      1 – 1000        ms
Optical isolation         >1000           V
The generator’s software works in two main modes: the luminance equalization mode and the test mode (Figure 1).
I. INTRODUCTION

Objective color vision testing is a very important issue in ophthalmology. Electrophysiological methods and techniques like Electroretinography (ERG) and Visual Evoked Potentials (VEP) are commonly used in clinical practice. However, the color stimulation tools implemented in commercially available equipment for these tests cannot be used in objective investigations of color vision mechanisms, particularly in diagnosing several kinds of abnormalities of this visual function ("objective anomaloscopy"), because the electrical responses recorded from the visual system depend on the luminance component of the light stimuli. The method presented in this paper is free of this disadvantage: the technique is based on color alterations of strictly defined spectral characteristics, with equal, constant luminance (isoluminant color stimulation).
II. MATERIALS AND METHODS

A. The Generator

A microcontroller-based generator is used to drive a tri-color LED. The hardware is based on an Atmel ATmega8 microcontroller and a three-channel adjustable current source. The LED drive current can be independently adjusted over a wide range. A typical BNC connector allows external triggering from an electrophysiology system, e.g. the UTAS E-2000 (LKC, USA), which was used in the preliminary tests. The input is optically coupled to meet the safety requirements.
Fig. 1 Simplified algorithm for green-red color alterations
1. Luminance equalization mode

In this mode the patient is exposed to a two-color flicker with a 50% duty cycle. The luminance of one of the colors (LEDs) is set by the examiner; the luminance of the other color and the flicker frequency are controlled by the patient. As described in [1], the maximum color flicker recognition frequency is slightly lower than the critical fusion frequency for luminance flicker. This allows the patient to equalize the luminance of the two given colors. In [2], where the idea of the isoluminant color test was presented, it was shown that this subjective method is reliable and offers repeatable results.

2. The test mode

In this mode, one of the previously chosen colors is the basic color, and during the test it changes to the other color when a trigger pulse comes from the UTAS E-2000 (or any other ERG/VEP unit). The light pulse duration is controlled in a wide range from 1 ms to 1000 ms in 1 ms steps, and the duty factor can be set according to the requirements of the test. If one of the colors is switched off, the generator can simply be used as a source of color flash stimuli of equal luminance.

B. The Optical Module

The optical part consists of a high-luminance tri-color SMD LED and Maxwellian View optics [3].

Fig. 2 Maxwellian View optics – a simplified diagram

Maxwellian View optics provides high luminance by focusing the light coming from the source directly on the patient's pupil (Figure 2). It consists of a black cylinder made of non-reflecting material, a lens and a diaphragm system. Thanks to this arrangement, together with the electronic generator, direct light stimulation of 10° of the central retina is obtained, with a wide range of flexible brightness adjustment for the three different colors.

C. The Colors

The three basic colors are created by a light emitting diode (LED). It is important to note that the LED red, green and blue wavelengths partially correspond with the sensitivity curves of the cones (Table 2 and Figure 3).

Table 2 Light emission maxima of the LED

Color    Maximum intensity wavelength
Blue     470 nm
Green    530 nm
Red      630 nm

Fig. 3 Relative spectral sensitivity curves of the three types of cone populations in the human retina (S – short wavelength, M – medium wavelength and L – long wavelength)

D. Testing Procedure

The generator is capable of creating a wide range of color stimuli:

- blue flash,
- green flash,
- red flash,
- isoluminant color alterations: all possible combinations (red-to-green, green-to-red, green-to-blue, blue-to-green, red-to-blue, blue-to-red).
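As a concrete illustration of the test mode, the sketch below builds one such stimulus as a sample-by-sample timeline. Only the pulse-duration range comes from the generator's specification; the timeline resolution and the trigger instants are invented for illustration:

```python
import numpy as np

FS = 10_000               # timeline resolution [samples/s]; ours, not the generator's
T_TOTAL = 2.0             # seconds of stimulus to build
PULSE_MS = 200            # light pulse duration, within the 1-1000 ms range
triggers_s = [0.4, 1.2]   # example trigger instants from the ERG/VEP unit

stim = np.zeros(int(T_TOTAL * FS), dtype=int)  # 0 = basic color, 1 = alternate color
for t in triggers_s:
    start = int(t * FS)
    stim[start:start + PULSE_MS * FS // 1000] = 1
# 'stim' now encodes which of the two luminance-matched colors is shown per sample
```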
In the preliminary experiments, simultaneous monocular ERG and VEP responses were recorded using the UTAS E2000 system (LKC Inc., USA), in a patient with normal color vision. Electrodes for ERG recordings were placed at the cornea (active, DTL), ipsilateral outer canthus (reference, gold cup) and forehead Fz (ground, gold cup), according to the International Society for Clinical Electrophysiology of Vision (ISCEV) ERG standard [4].
IFMBE Proceedings Vol. 29
416
M. Zaleski and K. Penkala
Electrodes for VEP recordings (Figure 4) were placed in standard locations for ISCEV Flash VEP [5] – occipital scalp Oz (active), forehead Fz (reference) and earlobe A1 or A2 (ground) - all gold cup electrodes.
Fig. 5 Sample VEP responses to color flash stimuli of equal luminance
Fig. 4 Electrode placement (adapted from [5])

The recording time was 1 s with a 1 ms time resolution. The amplifiers were set to 10 μV/div sensitivity and a 0.3-500 Hz bandwidth. In each test 80 responses were averaged with proper artifact rejection, which required ca. one hundred test cycles. For the single color flash responses the maximum available luminance was used. For the color alteration responses, the luminances of the respective colors were equalized as described above. This allowed the recording of responses to color (spectral content) changes with no influence of luminance.
III. RESULTS

Sample results for the three color flash VEP responses and one color alteration (red-green) response are shown in Figures 5 and 6, respectively. It can be seen that the cortical responses to color flashes (Figure 5) differ in morphology (polarity, amplitude and time parameters). The first wave of the red response is positive, while in the green as well as the blue signal it is negative. The blue response voltage is much higher than in the red and green recordings, and the first waves of the blue and green responses appear earlier than that of the red one. We also obtained clear responses to isoluminant color changes (Figure 6). They seem to have properties similar to the single color flash responses: a green stimulus evokes negative signal polarity, while red makes it positive. The ERG recordings (Figure 7) are smaller in amplitude and, as a result of noise, more difficult to interpret. However, at the retinal stage of signal and information processing, the morphological differences between the responses to color flashes of equal luminance seem to be much less pronounced.
Fig. 7 Sample ERG recordings obtained in color flash stimulation with equal luminance
IV. DISCUSSION

Although the first results are new and rather incomparable to anything recorded before, they seem very promising for further research on color vision processes [6, 7]. Importantly, the electrical responses of the visual system may be recorded dependent only on the spectral characteristics of the light stimulus, because the luminance contribution is eliminated in this technique. Recently an attempt has been made to extract isolated responses from the L, M and S cones. The extraction, performed in the MATLAB environment, is based on comparing the cone sensitivity curves with the LED emission curves.
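The idea behind this extraction can be sketched as follows: cone excitations are modelled as inner products of the LED emission spectra with the cone sensitivity curves, giving a 3x3 matrix that can be inverted to find LED drive weights that nominally isolate one cone class. The Gaussian spectra below are crude stand-ins for the measured curves, so the numbers are purely illustrative (negative weights would correspond to modulations around a background):

```python
import numpy as np

wl = np.arange(400, 701)                    # wavelength grid [nm]

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# LED peaks from Table 2; cone peaks and all bandwidths are assumed values
leds  = np.stack([gauss(470, 12), gauss(530, 15), gauss(630, 10)])   # B, G, R
cones = np.stack([gauss(420, 25), gauss(534, 35), gauss(564, 40)])   # S, M, L

A = cones @ leds.T        # A[i, j]: excitation of cone i per unit drive of LED j
w_L_only = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(w_L_only)           # LED drive weights that nominally excite only L cones
```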
V. CONCLUSIONS

The new kind of biosignals obtained in the isoluminant color-alteration mode may be useful in investigations of color vision mechanisms (particularly in solving the "color coding" phenomenon), in modeling those processes, as well as in the clinical diagnosis of color vision disorders ("electrophysiological, objective anomaloscopy").

REFERENCES

1. Le Grand Y (1960) Les yeux et la vision. Dunod, Paris
2. Penkala K (1989) A model of the photostimulator with light emitting diodes for electrophysiological evaluation of colour vision (in Polish). Probl Techn Med XX(3):141-149
3. Beer R D, Macleod I A D, Miller P T (2005) The extended Maxwellian View (BIGMAX): A high-intensity, high-saturation color display for clinical diagnosis and vision research. Behavior Research Methods 37(3):513-521
4. Marmor M F, Fulton A B et al (2009) Standard for clinical electroretinography. Doc Ophthalmol 118:69-77
5. Odom J V, Bach M et al (2004) Visual evoked potentials standard. Doc Ophthalmol 108:115-123
6. Zaleski M, Brykalski A, Lubinski W, Penkala K (2009) A new approach to ERG/VEP stimulation in colour vision testing. Proc. ISTET'09 XV Int. Symp. on Theor. Electr. Eng., Lubeck, Germany, 2009, at http://www.istet09.de
7. Penkala K, Zaleski M, Lubinski W (2009) Isoluminant colour stimuli for electrophysiological tests. 47th ISCEV Symp., Padova - Abano Terme, Italy, p 61, at http://www.iscev2009.org/iscev2009

Corresponding author:
Author: Krzysztof Penkala
Institute: Department of Systems, Signals and Electronics Engineering, Faculty of Electrical Engineering, West Pomeranian University of Technology
Street: Sikorskiego 37
City: 70-313 Szczecin
Country: Poland
Email: [email protected]
Novel TiN-based dry EEG electrodes: Influence of electrode shape and number on contact impedance and signal quality

P. Fiedler1, S. Brodkorb1, C. Fonseca2,3, F. Vaz4, F. Zanow5 and J. Haueisen1,6

1 Institute for Biomedical Engineering and Informatics, Ilmenau University of Technology, Ilmenau, Germany
2 INEB – Instituto de Engenharia Biomédica, Divisão de Biomateriais, Universidade do Porto, Porto, Portugal
3 Faculdade de Engenharia, Departamento de Engenharia Metalúrgica e de Materiais, Universidade do Porto, Porto, Portugal
4 Departamento de Física, Universidade do Minho, Guimarães, Portugal
5 eemagine Medical Imaging Solutions GmbH, Berlin, Germany
6 Biomagnetic Center / Department of Neurology, University Hospital Jena, Jena, Germany
Abstract—The usability of conventional wet electrodes for electroencephalography (EEG) depends on a set of requirements, including time-consuming and complex preparation of the subject's skin, thus limiting possible applications. A new class of "dry" electrodes, without the need for electrolyte gels or pastes, is being investigated. The dry application scenario of these novel electrodes requires a stable and reliable contact with the subject's skin. In order to develop an electrode shape with a large contact surface for low electrode-skin impedance, while also ensuring sufficient hair layer penetration, several studies were performed. In this paper a distinct titanium electrode substrate shape for titanium nitride (TiN) coated electrodes was analyzed regarding the influence of the number of interconnected electrodes and the contact surface on electrode-skin impedance and biosignal quality. As a result, 10 interconnected TiN pins had the lowest impedance values of 14 to 55 kΩ (depending on signal frequency), in comparison to 2 to 44 kΩ using conventional Ag/AgCl electrodes. The mean absolute deviation (MAD) of 5-second-long EEG episodes was also computed. The lowest MADs of 2.00 to 2.25 µV were determined using three interconnected TiN pins. In comparison to MADs of 2.13 to 2.54 µV using a second set of Ag/AgCl electrodes, this leads to the conclusion that most of the error was related to spatial distance. This first step in the optimization of the electrode shape for dry TiN-based electrodes showed very promising results and enables their use for EEG acquisition.

Keywords— titanium nitride, biomedical electrodes, bioelectric signals, electrochemical characterization, electroencephalography

I. INTRODUCTION
In clinical routine and medical research electroencephalography (EEG) is a common technique for investigating the human central nervous system. In the majority of cases 32 to 256 electrodes are placed on the head of the subject in order to measure the electrical activity of the brain by detecting and recording electrical potential fluctuations at different areas of the human scalp [1]. Direct signal acquisition at the skin is still a difficult, time-consuming process and therefore susceptible to many
error sources. Conventional silver/silver-chloride (Ag/AgCl) electrodes are the most commonly used type of electrodes and can be considered the "gold standard". They have to be used in combination with different types of pastes and/or gels containing the necessary electrolytes. Prior to the application of the electrodes, the intended areas of the scalp must be cleaned and prepared in order to lower the skin-electrode impedance. Additional technological limitations, e.g. the limited long-time stability of the pastes and gels as well as the risk of conductive bridges between adjacent electrodes, can lead to changing electrochemical characteristics and therefore directly to faulty measurements. Due to such drawbacks of the current technology, alternative types of electrodes and materials are being investigated [1, 2, 3]. This new class of electrodes is called "dry electrodes", since there is no need for the application of electrolyte gels or pastes. Additionally, the preparation time and complexity are strongly decreased by eliminating the need for cleaning and other types of skin preparation.

In the current study the authors focused on conductive titanium nitride (TiN), deposited as a thin film on a titanium substrate. TiN is well known for its chemical and mechanical stability as well as its outstanding biocompatibility [4, 5]. The need for low contact impedance, and therefore a large contact surface, conflicts with hair layer penetration. As a possible solution, several single electrode pins, separately penetrating the hair layer, can be interconnected, thus combining the single contact surfaces. The electrode shape, pin number and hair layer penetration, as well as a stable and reliable electrode-skin contact, still need further investigation.

The aim of the present study was to analyze the influence of the electrode pin number on electrode-skin impedance and signal quality. Therefore impedance measurements were performed and EEG biosignal episodes were recorded on a biological subject using electrodes with different numbers of single pins. The results were compared to simultaneously acquired data from conventional Ag/AgCl electrodes.
II. MATERIALS AND METHODS
A. TiN-coated biosignal electrodes

Identical TiN films were deposited, by reactive DC magnetron sputtering in a custom-made laboratory deposition system, on titanium pin substrates cut and turned from a rod of 99.96 % pure titanium (GoodFellow Metals, London, UK). The pin shape is shown in Figure 1b. Preparation of the blank pins included abrasion, polishing, cleaning and drying. To create a homogeneous coating, a custom rotating substrate holder was used and positioned in front of a Ti target. During the sputtering, a gas atmosphere composed of argon and nitrogen was applied. The coating parameters were selected according to the results of a full series of previous tests of different kinds of TiN films [5]. Shielded copper signal cables were glued to the backside of the TiN electrode pins using conductive silver glue (Elecolit 325) in order to connect the electrodes.
Fig. 1: TiN electrode array schematic and pin schematic: (a) electrode pin array (grey) and wiring schematic (top view); (b) TiN electrode pin schematic (side view)

B. Electrode setup

The ten TiN pins were arranged in a fixed star-like shape on an acrylic base plate, according to the arrangement and wiring scheme in Figure 1a. For a direct comparison of the results obtained with the different electrode setups and with conventional Ag/AgCl electrodes, regarding differences in impedance characteristics and EEG signal quality, both types of electrodes were placed close together on the head of a subject. In order to minimize artifacts due to relative movement between electrode and skin, a silicone headband was used for fixation. Figure 2 shows the electrode positions of the conventional Ag/AgCl reference electrodes as well as of the TiN array, at positions Fp2 and POz according to the international 10-20 system [6].

Fig. 2: Electrode placement on the head of the subject: star-shaped TiN array (black) and Ag/AgCl (grey) electrodes

The Ag/AgCl electrodes were always used in combination with EEG paste (D.O. Weaver and Co. Ten20 conductive) applied to previously cleaned positions on the head. No additional liquid, gel or paste was added at the position of the TiN array, thus creating a realistic dry application scenario for these electrodes.

C. Impedance measurement

A Hewlett Packard 4192A LF impedance analyzer was used for the impedance measurements. The impedance between one frontal Ag/AgCl electrode and the TiN array at the parietal position was measured. In order to evaluate the impedance drop due to an increasing electrode number and therefore contact surface, the number of interconnected electrode pins of the TiN array was varied between one and ten. As a reference, an additional measurement between the frontal and a parietal Ag/AgCl electrode was carried out. All impedance measurements were executed sequentially using the parameters given in Table 1.

Table 1: Impedance measurement parameters

                     Start frequency   Stop frequency   Frequency steps
Frequency range 1    5 Hz              200 Hz           10 Hz
Frequency range 2    200 Hz            5 kHz            100 Hz
The total measurement time per sample was two seconds. Measurement control and recording, as well as the subsequent calculations, were done using custom-implemented MATLAB tools.

D. Biosignal acquisition

Two inputs of a commercial 12-channel DC amplifier (TheraPRAX from neuroConn GmbH, Germany) were used for the amplification and recording of the biosignals. The frontal electrode was used as patient ground in a bipolar measurement setup using the patient ground reference. Temporally parallel EEG acquisition with two different types of electrodes was therefore possible, while also minimizing the spatial distance between adjacent electrodes; the actual distance was approx. 4 cm (center to center). Due to the fixation of the large, plane TiN array on an acrylic plate, sufficient contact between all TiN electrode pins and the scalp was only possible in distinct regions of the subject's head, including the parietal position. A reference measurement for the evaluation of signal differences caused by the spatial electrode distance was performed by replacing the TiN array with a third Ag/AgCl electrode. During the measurements, different signal episodes were recorded, containing eye movement and eye blinking as well as alpha activity provoked by closed eyes. All signals were recorded using the neuroConn software. Further investigation was performed using MATLAB.
III. RESULTS

A. Impedance measurement

The averaged results of the impedance measurements are plotted in Figure 3. It is clearly visible that the impedance decreases with an increasing number of electrode pins. Using a single TiN pin leads to an impedance maximum of 220 kΩ at 5 Hz and a minimum of 60 kΩ at 5 kHz. The conventional Ag/AgCl electrodes show the lowest impedances of all electrode setups, with a maximum of 44 kΩ at 5 Hz and a minimum of 2 kΩ at 5 kHz. The 10-pin TiN setup has slightly higher impedances, with 55 kΩ and 14 kΩ respectively, and is the setup with the lowest impedances of all TiN arrays.

Fig. 3: Averaged impedance measurement results for different numbers of electrode pins

B. Biosignal acquisition

After filtering all EEG signals using a band pass with cut-off frequencies at 2 Hz and 30 Hz, the results of the TiN electrode array and the Ag/AgCl electrode can be compared as shown in Figure 4.

Fig. 4: Overlay plot of potentials produced by eye blinking, recorded using an array of three dry TiN electrode pins in comparison to Ag/AgCl electrodes in combination with EEG paste

Corresponding episodes X (TiN array / second Ag/AgCl) and Y (reference Ag/AgCl) of 5 seconds (N samples) were selected from both signals, and the mean absolute deviation (MAD) according to equation (1), as well as the standard deviation of the MAD (SD-MAD), were calculated:

$$MAD = \frac{1}{N} \sum_{i=1}^{N} \left| x_i - y_i \right| \qquad (1)$$

Due to a too high impedance it was impossible to record an EEG with one TiN pin only. The results of the remaining electrode arrangements are shown in Table 2.

Table 2: Mean absolute deviation (MAD) and standard deviation of MAD (SD-MAD) between recorded signals of reference and TiN array setup

                  Eye blinking              Alpha activity
Electrode setup   MAD [µV]   SD-MAD [µV]    MAD [µV]   SD-MAD [µV]
Ag/AgCl           2.54       2.31           2.13       1.62
TiN - 2 Pins      4.01       3.34           2.55       2.04
TiN - 3 Pins      2.25       2.03           2.00       2.03
TiN - 5 Pins      3.53       3.51           2.12       1.44
TiN - 10 Pins     2.64       2.27           2.40       1.81
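Equation (1) and the SD-MAD reported in Table 2 amount to the following computation (a direct transcription; the synthetic example data and the 1 kHz rate are our own assumptions):

```python
import numpy as np

def mad_and_sd(x, y):
    """MAD (equation 1) and SD-MAD between two equally long episodes."""
    d = np.abs(np.asarray(x, float) - np.asarray(y, float))
    return d.mean(), d.std()

# example with two synthetic 5 s episodes at an assumed 1 kHz rate
t = np.arange(0, 5, 0.001)
x = 10 * np.sin(2 * np.pi * 10 * t)          # a stand-in waveform
y = x + np.random.normal(0.0, 2.0, t.size)   # same waveform plus noise
print(mad_and_sd(x, y))
```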
With MADs of 4.01 µV (eye blinking) and 2.55 µV (alpha activity), the combination of two TiN pins shows the highest MADs of all electrode setups, while the combination of three TiN pins shows the lowest MADs of 2.25 µV (eye blinking) and 2.00 µV (alpha activity). The MAD of the 10-pin array is lower than the MAD of the five-pin arrangement for eye blinking, while in the case of alpha activity the relation is reversed. The MADs of the EEG signals acquired using a second set of Ag/AgCl electrodes are in fact higher than those of the arrangement of three TiN pins. For all electrode setups the MAD of the eye blinking recordings is higher than that of the alpha activity recordings; this is caused by the higher signal amplitudes. The fact that higher signal amplitudes lead to larger differences between the two signals is also visible in the overlay plot in Figure 4.

IV. DISCUSSION

In relation to the measurement setup and method used, the resulting impedance values are cumulative impedances of several components, which can be summarized in simplified form according to equation (2), where ZFp2 is the impedance of the electrode at the frontal position Fp2 and ZPOz is the impedance of the electrode at the parietal position POz:

$$Z = Z_{Fp2} + Z_{EEG\,paste} + Z_{Scalp} + Z_{POz} \qquad (2)$$

A possible further impedance reduction could be achieved using more than 10 pins in an optimized arrangement with a higher electrode pin density. Due to the dependency between signal quality and spatial distance, which was proven by comparing the signals from two adjacent Ag/AgCl electrode setups, the signal quality can be improved by an increased density of the electrode pin arrangement.

V. CONCLUSION

In the present study, different TiN electrode setups varying in the number of electrode pins were analyzed regarding their influence on electrode-skin impedance and EEG signal quality, in order to evaluate their applicability for EEG biosignal acquisition as well as to find an optimal number of pins per electrode. The novel TiN electrodes were therefore investigated under a realistic dry application scenario. TiN films were selected because of their known excellent chemical, mechanical and biocompatibility properties. The used electrode shape is capable of sufficiently penetrating the hair layer of a biological subject. The TiN coatings revealed excellent electrochemical characteristics and enable EEG biosignal acquisition with adequate signal quality. Due to the low difference between the acquired signals compared to Ag/AgCl electrodes, it is possible to conclude that the novel TiN-coated electrodes are appropriate for EEG acquisition. This suggests further investigation and optimization in order to develop an optimal dry electrode system based on a low-cost, time-effective and environmentally friendly manufacturing process. An electrode design incorporating several electrode pins on a single base plate and an increased pin density will be developed, thus further decreasing the electrode-skin impedance and increasing the signal quality. Future developments will also include the assembly of a custom multichannel EEG cap system. This will enable the use of the novel dry electrodes in clinical routine as well as in research scenarios including source localization and functional coupling [7, 8].

ACKNOWLEDGMENT

This work was supported by the company ANT B.V., Enschede, The Netherlands. Additional financial support was granted by the Landesentwicklungsgesellschaft Thüringen mbH and the European Regional Development Fund (TNA XII-1/2009), the German Academic Exchange Service (D/07/13619) as well as the German Federal Ministry of Education and Research (03IP605).

REFERENCES

1. Taheri B A, Knight R T, Smith R L (1994) A dry electrode for EEG recording. Electroen Clin Neuro 90:376-383
2. Searle A, Kirkup L (2000) A direct comparison of wet, dry and insulating bioelectric recording electrodes. Physiol Meas 21:271-283
3. Ng W C, Seet H L, Lee K S et al. (2009) Micro-spike EEG electrode and vacuum-casting technology for mass production. J Mater Process Tech 209:4434-4438
4. Vaz F, Ferreira J, Ribeiro E et al. (2005) Influence of nitrogen content on the structural, mechanical and electrical properties of TiN thin films. Surf Coat Tech 191:317-323
5. Cunha L T, Pedrosa P, Tavares C J et al. (2009) The role of composition, morphology and crystalline structure in the electrochemical behavior of TiNx thin films for dry electrode sensor materials. Electrochim Acta 55:59-67
6. Jasper H H (1958) The ten-twenty electrode system of the International Federation. Electroen Clin Neuro 10:371-375
7. Haueisen J, Leistritz L, Süsse T et al. (2007) Identifying mutual information transfer in the brain with differential-algebraic modeling: Evidence for fast oscillatory coupling between cortical somatosensory areas 3b and 1. NeuroImage 37:130-136
8. Graichen U, Witte H, Haueisen J (2009) Analysis of induced components in electroencephalograms using a multiple correlation method. BioMed Eng OnLine 8:21

Corresponding author:
Author: Patrique Fiedler
Institute: Institute for Biomedical Engineering and Informatics, Ilmenau University of Technology
Street: Gustav-Kirchhoff Str. 2
City: Ilmenau
Country: Germany
Email: [email protected]
A finite element method study of the current density distribution in a capacitive intrabody communication system

Ž. Lučev, A. Koričan and M. Cifrek

Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

Abstract— In this paper we present a finite element method (FEM) study of a capacitive intrabody communication (IBC) system. We analyze the current density distribution at the frequencies of 100 kHz, 1 MHz, and 10 MHz. We investigate the ratio between the capacitive and resistive current density components inside the human body and the influence of skin humidity, as well as of the electrode size, on the total current density distribution. We show that the highest total current density is achieved inside the muscle tissue, and that the total current density increases with frequency, skin humidity and the size of the excitation electrodes. The surface potential shows the same trend and is on the order of microvolts. At the frequency of 100 kHz the safety limits on the total current density are exceeded for wet skin and for larger electrodes. At the higher frequencies (1 MHz and 10 MHz) the maximum allowed current density is not exceeded.

Keywords— finite element method (FEM), intrabody communication (IBC), human arm model, current density

I. INTRODUCTION
In intrabody communication (IBC) the human body is used as a signal transmission medium. The signal is sent through the body using the transmitter excitation electrodes, and is measured by the receiver. The received signal strength is affected by the orientation of the transmitter with respect to the receiver and by the number of ground electrodes connected to the body [1–6]. Moreover, the signal transmission path highly depends on the surrounding environment [1–3]. In vivo measurements [1, 2] showed that the highest received signal strength is achieved when both transmitter electrodes and the receiver signal electrode are connected to the human body, while the receiver ground electrode remains disconnected.

The dielectric properties of the human body, the electrical conductivity and the relative permittivity, determine the flow of electric current and the magnitude of the polarization effects, respectively. The most relevant source of dielectric properties of human tissues, for a frequency range from 10 Hz to 10 GHz and for different tissues, is given by Gabriel et al. [7]. It is shown there that the dielectric properties of tissues depend on the type of tissue, the frequency, the temperature, and the amount of water in a particular tissue. At lower frequencies the permittivities of biological tissues are mostly high, so in order to develop a realistic human arm model, the tissue capacitive properties should be taken into account together with the tissue resistive properties.

In order to understand the signal propagation through the human body and to improve the IBC hardware, it is essential to investigate and simulate the current pathway through the human tissue and to assess the influence of the human anatomy on the signal propagation. In [5, 6] the authors developed a human arm model to investigate the electrode structure and the effects of the ground electrode on capacitive intrabody signal transmission. The proposed model is formed as a parallelepiped with a 5 cm x 5 cm base, and the results are calculated at a 5 MHz frequency using an FDTD-based EM simulator, under the assumption that the human body is a lossy dielectric material. In [3] the author modeled a galvanic IBC system and investigated the influence of the distance between the coupler and the detector, the influence of joints, the sensitivity to resistivity changes of the tissue layers, and different coupling by wet, dry and combined electrode interfaces. It was shown that the majority of the current flows between the coupler electrodes in the fat and muscles, without penetrating into the bone structure.

In this paper we extend the study of the capacitive IBC system to include conditions of wet and dry skin and a lower frequency range (<1 MHz), using a finite element method (FEM) approach. We investigate the ratio of the capacitive and resistive current density components (i.e. the displacement and conduction current densities) inside the human body. Furthermore, we analyze the effects of the electrode size on the total current density distribution. The extension of the study to lower frequencies is an important step towards the realization of IBC hardware with lower power consumption that still satisfies the safety standards.

II. MODELING
A. Geometry Using COMSOL [8] we developed a numeric human arm 3-D model. The human arm is modeled as a cylinder with 5 cm radius and is 45 cm long. It consists of four concentrical and homogeneous layers relevant for the intrabody signal transmission: skin (pink), fat (yellow), muscle (red) and cortical bone (grey), positioned as in Figure 1. The radius of
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 422–425, 2010. www.springerlink.com
A Finite Element Method Study of the Current Density Distribution in a Capacitive Intrabody Communication System
423
the cortical bone is set to 12.5 mm, and the thicknesses of the other layers are set to 31 mm, 5 mm, and 1.5 mm for the muscle, fat, and skin layer, respectively. Biological tissue conductivities σ and relative permittivities εr at a temperature of 37 °C for the three frequencies used in the simulations are specified in Table 1 [7, 9]. The excitation electrodes are modeled as quadratic plates with a size of 1 cm². They are placed on the skin 10 cm away from the nearer base of the cylinder, with an inter-electrode distance of 3 cm. As the developed model is axially symmetric, in order to reduce the computational time we used only the half with x ≤ 0 in our simulations, as in Figure 1.

Fig. 1 A 3-D view of the numerical human arm model

Table 1 Dielectric properties of the human tissues [7, 9]

Tissue        | 100 kHz          | 1 MHz            | 10 MHz
              | σ [S/m]   εr     | σ [S/m]   εr     | σ [S/m]   εr
Dry skin      | 0.0005    1119   | 0.0132    991    | 0.1973    362
Wet skin      | 0.0658    15357  | 0.2214    1833   | 0.3660    222
Fat           | 0.0244    93     | 0.0251    27     | 0.0292    14
Muscle        | 0.3619    8089   | 0.5027    1836   | 0.6168    171
Cortical bone | 0.0208    228    | 0.0244    145    | 0.0428    37

B. Numerical implementation

According to [4, 10], a simple tissue model is a parallel combination of a capacitor and a conductor. Assuming that an alternating voltage is applied to such a combination, the total current flowing through the tissue is the sum of the conduction (resistive component) and displacement (capacitive component) currents. When the wavelength of the electromagnetic field in biological tissue is much larger than the tissue dimensions, inductive effects and wave propagation can be neglected. Therefore, the quasistatic approximation can be used, where the governing equation is given by:

∇·((σ + jωε0εr)∇u) = 0,   (1)

where σ, ω, ε0, εr, and u are the tissue conductivity, field frequency, vacuum permittivity, tissue relative permittivity and electric potential, respectively. It is known that the electric field E can be expressed in terms of the electric potential u as:

E = -∇u,   (2)

and the current density distribution J is given by:

J = (σ + jωε0εr)E.   (3)

The ratio of the displacement current density Jdisp and the conduction current density Jcond can be predicted using the simple Plonsey equation [11]:

Jdisp/Jcond = ωεrε0/σ.   (4)

In the simulations we used only one half of the geometry (x ≤ 0, Figure 1), so we utilized the symmetry boundary conditions [12]. On the signal electrode we applied the Dirichlet boundary condition:

u = U0/2,   (5)

where U0 = 1.15 V is the voltage between the transmitter electrodes placed on the skin in [1, 2]. On the arm model midplane (x = 0), an antisymmetry boundary condition is set:

u = 0.   (6)

On the skin surface a zero current flux is considered and the Neumann boundary condition (7) is employed, where n is the surface normal. On all interior layer boundaries the continuity of the current flux (8) is applied:

n·J = 0,   (7)

n·(J1 - J2) = 0.   (8)

The geometry of the arm cylinder was meshed with second-order tetrahedral elements. The finest discretization was used around the electrodes, while a coarser mesh resolution was used in the other regions of the model. The same mesh was used for all simulations with the same excitation electrode size: a mesh of 306605 elements for the 1 cm² electrodes and a mesh of 305332 elements for the 4 cm² electrodes. Postprocessing was carried out in MATLAB 7.1.
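As a quick plausibility check, equation (4) can be evaluated directly with the muscle parameters from Table 1. The following minimal Python sketch is illustrative only and not part of the original COMSOL/MATLAB toolchain:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

# Muscle parameters (sigma [S/m], eps_r) taken from Table 1
muscle = {100e3: (0.3619, 8089), 1e6: (0.5027, 1836), 10e6: (0.6168, 171)}

for f, (sigma, eps_r) in sorted(muscle.items()):
    omega = 2 * math.pi * f
    ratio = omega * eps_r * EPS0 / sigma  # equation (4): Jdisp / Jcond
    print(f"{f/1e6:g} MHz: Jdisp/Jcond = {ratio:.4f}")
# -> approx. 0.124, 0.203, 0.154, in line with the peak-value
#    ratios 0.1244, 0.2032 and 0.1542 quoted in Sec. III
```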
III. RESULTS AND DISCUSSION
Firstly, we measured the total current density in an x-y cross-section 9 cm away from the excitation electrode, as in Figure 2a. The results obtained for the frequencies of 100 kHz, 1 MHz, and 10 MHz are depicted in Figure 2b,
from left to right, respectively. It can be seen that the total current density values increase with frequency. For each frequency the highest current densities are achieved in the muscle tissue layer, since the conductivity of muscle is higher than the conductivity of the other simulated layers at these frequencies. The safety limits on current density define maximum current densities of 0.2 A/m² at 100 kHz, 2 A/m² at 1 MHz, and 20 A/m² at 10 MHz [13], and they are not exceeded in any case (see the sketch after Fig. 2). Since the penetration depth for the given frequencies and tissues is larger than the tissue dimensions [9], it can be assumed that the current density is homogeneous along a cross-section within a particular tissue. The main results acquired in the muscles along the arm are summarized in Figure 3. The results displayed in the first, second and third column are obtained for the frequencies of 100 kHz, 1 MHz, and 10 MHz, respectively. The displacement and conduction components of the current density, the effects of dry and wet skin, and the electrode size are depicted in the three rows of Figure 3, respectively. Comparison of the results obtained for the different frequencies shows that the total current density in the muscle increases with frequency for all setups. As expected, according to equations (2) and (3), the highest values of the total current density are distributed just beneath the signal electrodes (z = 10.5 cm), where the electric field is the highest.

Fig. 2 a) Position of the measurement plane, 9 cm from the excitation electrode; b) Total current densities in the depicted measurement plane for different signal frequencies. The total current density range is from 0 A/m² (blue) to 0.02 A/m² (red)
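The three safety limits quoted above appear to correspond to the frequency-proportional general-public basic restriction of ICNIRP 1998 [13], J = f/500 mA/m² with f in Hz; this reading is an assumption made here for illustration. A small check:

```python
def icnirp_public_limit(f_hz):
    """General-public current-density limit in A/m^2 for 100 kHz - 10 MHz,
    assuming the ICNIRP 1998 [13] rule J = f/500 mA/m^2 (f in Hz)."""
    return (f_hz / 500.0) / 1000.0

for f in (100e3, 1e6, 10e6):
    print(f"{f/1e6:g} MHz: limit = {icnirp_public_limit(f):g} A/m^2")
# -> 0.2, 2 and 20 A/m^2, the values used in the text
```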
Fig. 3 Current densities in muscles at the frequencies of 100 kHz (first column), 1 MHz (second column), and 10 MHz (third column): comparison of displacement and conduction current density (first row), current densities for dry and wet skin (second row), and different sizes of electrodes (third row)
Because of equations (2) and (3), the surface potential that the IBC receiver measures also increases with frequency and falls off with the distance from the electrode. At greater distances the surface potential is almost constant and lies in the microvolt range. An overview of the ratio of the displacement (Jdisp) and conduction (Jcond) current densities for each simulated frequency is given in the first row of Figure 3. The ratio of the peak values of the displacement and conduction current densities obtained from these graphs is 0.1244, 0.2032, and 0.1542 for the 100 kHz, 1 MHz, and 10 MHz frequencies, respectively, which is in accordance with equation (4). The highest percentage of the tissue capacitive component in the total current is observed at the frequency of 1 MHz. The influence of the skin properties on the total current density in the muscle layer is depicted in the second row of Figure 3. At the frequency of 100 kHz the peak value of the total current density in the muscle layer is about three times higher for wet than for dry skin, and it exceeds the safety limits. At the frequency of 1 MHz this ratio is lower, around 1.5, while at the frequency of 10 MHz the total current density distribution is almost the same for wet and dry skin. The reason is that the differences in conductivity and permittivity between wet and dry skin decrease with frequency; for frequencies above 100 MHz these values are almost the same [7]. Finally, increasing the electrode area fourfold, from 1 cm² to 4 cm², increases the peak values of the total current density distribution in the muscle layer by about a factor of two: 2.7 at 100 kHz, 2.1 at 1 MHz, and 1.8 at 10 MHz. Nevertheless, at the frequency of 100 kHz the safety limits are exceeded.
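The wet/dry trend can be read off Table 1 directly; a minimal, purely illustrative sketch:

```python
# (sigma [S/m], eps_r) for dry and wet skin, from Table 1
skin = {
    100e3: {"dry": (0.0005, 1119), "wet": (0.0658, 15357)},
    1e6:   {"dry": (0.0132, 991),  "wet": (0.2214, 1833)},
    10e6:  {"dry": (0.1973, 362),  "wet": (0.3660, 222)},
}

for f in sorted(skin):
    s_ratio = skin[f]["wet"][0] / skin[f]["dry"][0]
    print(f"{f/1e6:g} MHz: sigma_wet/sigma_dry = {s_ratio:.1f}")
# The conductivity contrast collapses with frequency
# (about 131.6 -> 16.8 -> 1.9), which is why wet and dry skin
# give nearly identical distributions at 10 MHz.
```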
IV. CONCLUSIONS

We simulated capacitive intrabody communication through a human arm model to assess the influence of the human anatomy on signal propagation, in order to improve the IBC hardware. We showed that the highest total current density is achieved inside the muscle tissue, and that the total current density increases with frequency, skin humidity and the size of the excitation electrodes. The surface potential shows the same trend. At the greatest distance from the electrodes the surface potential is in the order of microvolts, which makes the requirements on the IBC receiver design more stringent. At the frequency of 100 kHz the safety limits for the total current density are exceeded for wet skin and for the larger electrodes. At the higher frequencies (1 MHz and 10 MHz) the maximum allowed current density is not exceeded.

ACKNOWLEDGMENT

This research has been supported by the Croatian Ministry of Science, Education and Sport through the research project "Noninvasive measurements and procedures in biomedicine", grant number 036-0362979-1554.

REFERENCES

1. Lučev Ž, Krois I, Cifrek M (2009) A Multichannel Wireless EMG Measurement System Based on Intrabody Communication, IMEKO XIX World Congress, Lisbon, Portugal, 2009, pp. 1711–1715
2. Lučev Ž, Krois I, Cifrek M et al. (2009) Effects of Transmitter Position and Receiver Ground Plane on Signal Transmission in Intrabody Wireless EMG Measurement System, IFMBE Proc. vol. 25(7), 11th World Congress on Med. Phys. & Biomed. Eng., Munich, Germany, 2009, pp. 887–890 DOI 10.1007/978-3-642-03885-3_246
3. Wegmüller M S (2007) Intra-Body Communication for Biomedical Sensor Networks. PhD Thesis Diss. ETH No. 17323, ETH Zurich, Switzerland
4. Wegmüller M S, Oberle M, Kuster N, Fichtner W (2006) From dielectrical properties of human tissue to intra-body communications, IFMBE Proc. vol. 14, World Congress on Med. Phys. & Biomed. Eng., Seoul, Korea, 2006, pp. 613–617
5. Sung J B, Hwang J H, Hyoung C H et al. (2006) Effects of ground electrode on signal transmission of human body communication using human body as transmission medium, IEEE Ant and Prop Soc Intl Symposium 2006, Albuquerque, USA, 2006, pp. 491–494 DOI 10.1109/APS.2006.1710566
6. Oh J, Park J, Lee H, Nam S (2007) The electrode structure to reduce channel loss for human body communication using human body as a transmission medium, IEEE Ant and Prop Soc Intl Symp 2007, Hawaii, USA, 2007, pp. 1517–1520 DOI 10.1109/APS.2007.4395795
7. Gabriel S, Lau R W, Gabriel C (1996) The dielectric properties of biological tissues: III. Parametric models for the dielectric spectrum of tissues. Physics in Medicine and Biology 41: 2271–2293
8. COMSOL Multiphysics v3.4 at http://www.comsol.com/
9. Dielectric properties of body tissues calculator, IFAC, at http://niremf.ifac.cnr.it/tissprop/, December 2009
10. Miklavčič D, Pavšelj N, Hart F X (2006) Electric properties of tissues. Wiley Encyclopedia of Biomedical Engineering. DOI: 10.1002/9780471740360.ebs0403
11. Kuiken T A, Stoykov N S, Popovic M et al. (2001) Finite Element Modeling of Electromagnetic Signal Propagation in a Phantom Arm. IEEE Trans on Neural Systems and Rehabilitation Engineering, vol. 9, 4: 346–353
12. Lacković I, Magjarević R (2005) Computer simulation of transesophageal pacing with conventional and selective leads using 3-D models, IFMBE Proc. vol. 11(1), European Med. and Biol. Eng. Conf. EMBEC'05, Prague, Czech Republic, 2005, pp. 2864–2869
13. ICNIRP (1998) Guidelines for Limiting Exposure to Time-Varying Electric, Magnetic, and Electromagnetic Fields (up to 300 GHz). Health Phys 74(4): 494–522

Author:
Željka Lučev
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Voice Controlled Neuroprosthesis System D.C. Irimia, M.S. Poboroniuc, M.C. Stefan, and Gh. Livint Faculty of Electrical Engineering, ‘Gheorghe Asachi’ Technical University, Iasi, Romania
Abstract— This paper presents a voice controlled neuroprosthesis system intended to help a disabled patient perform different tasks related to his/her rehabilitation process. Before a neuroprosthesis is used on disabled people within a clinical environment, we propose that any new control strategy be tested in simulation and on a mechatronic device that emulates the human body movements while an electrical stimulus is applied over the controlled group of muscles. Our tests have shown that a mechatronic device that mimics the human body movements while standing up, standing and sitting down in paraplegia may offer a better understanding of how to tune controller parameters for different control strategies that aim to support standing exercises in paraplegia. Keywords— Functional Electrical Stimulation, Neuroprosthesis, Voice control, Paraplegia, Mechatronic Device.
I. INTRODUCTION

Paralysis due to spinal cord injury leaves the muscles and their innervating motor neurons below the level of the lesion largely intact. It is estimated that the annual incidence of spinal cord injury in the USA, not including those who die at the scene of the accident, is approximately 40 cases per million population, or approximately 12000 new cases each year. Functional electrical stimulation (FES) is a technology that uses small electrical impulses to artificially activate peripheral nerves, causing muscles to contract, in order to restore body functions. FES-assisted standing for patients with spinal cord injury has been reported since the early 1970s [1], [2]. Stimulation of the quadriceps muscles is regarded as the minimum to achieve standing, though other muscle groups have been used in the different phases of the sit-stand-sit manoeuvre [3], [4] and in controlling posture while standing [5]. Regular standing in spinal cord injured subjects is thought to help in preventing osteoporosis; in preventing contracture by preserving the range of movement at lower limb joints; in improving digestion, respiration and urinary drainage; in reducing the chance of decubitus ulcers by relieving pressure; and in contributing to the psychological benefit by enhancing personal esteem [6], [7].
The devices that deliver electrical stimulation and aim to substitute for the control of body functions that have been impaired by neurological damage are termed neuroprostheses. The acceptance of these FES-based systems into the daily lives of spinal cord injured (SCI) persons depends on their performance, the benefits for patients, and the ease of donning and doffing. A major requirement for FES systems used to assist SCI individuals is to work autonomously. Applications include FES-assisted balancing during standing exercises, torso control during sitting, and FES-based control of walking. Therefore, most of the work is directed towards the development of suitable algorithms for closed-loop control of the delivered stimulation. During the last few years human-machine interaction has become a very important research topic [8]. People with disabilities who use artificial devices during their rehabilitation may benefit from different methods of controlling them. Instead of using switches and instrumented crutches, people who have difficulty with or are unable to use their hands (e.g., in Parkinson's disease and other degenerative syndromes) can use a voice controlled device. Neuroprosthesis control may be performed by means of a voice recognition system or by a more complex method, a brain computer interface (BCI) [9].
II. MATERIAL AND METHODS

Communication in a natural language increases the speed of performing tasks and is user friendly. Practically any device which interacts with humans can be provided with a voice recognition system, making its use easier. Speech recognition (SR) is a technology which converts spoken words into machine-readable input. Using speech recognition to control a neuroprosthesis can be useful for people who have difficulty with or are unable to use their hands, from mild repetitive stress injuries to involved disabilities that require alternative input for support with accessing the computer. Fig. 1 presents the idea of a voice controlled neuroprosthesis aiming to help paraplegics perform a chained standing-up, standing and sitting-down exercise.
Fig. 1 Neuroprosthesis control based on a voice recognition system

A. Voice Control Device

The VR (Voice Recognition) Stamp is based on the RSC-4128 signal processor, designed to support Hidden Markov Model (HMM) as well as Neural Network technologies to perform speech recognition. The CPU core embedded in the RSC-4128 is an 8-bit, variable-length-instruction microcontroller. The instruction set has a variety of addressing modes, MOV and 16-bit instructions. The RSC-4128 processor avoids the limitations of dedicated A, B and DPTR registers by having completely symmetrical sources and destinations for all instructions. The 8-bit microcontroller integrates speech-optimized digital and analog processing blocks into a single-chip solution capable of accurate speech recognition. The words spoken by a human operator are recorded by the VR Stamp kit and compared with a predefined vocabulary. When a spoken word is recognized as being part of the vocabulary, the processor sends through its serial port a command assigned to the recognized word towards the microcontroller belonging to the neuroprosthesis or robot. Fig. 2 presents the block diagram of the RSC-4128 microcontroller.

Fig. 2 RSC-4128 block diagram

The vocabulary of the voice controlled neuroprosthesis was defined using the program "QuickT2SI", which allows introducing words as text in specially designed fields. The first word, "run", is a trigger word that interrupts the continuous-listening (CL) state of the microcontroller. Once pronounced, it starts, for a period of 3 seconds, the recognition of a new spoken word, considered as a command, selected from the vocabulary list. The implemented commands are "stand up", "sit down", "muscle up" and "muscle down", the latter two increasing and decreasing the maximum stimulation level seen as a parameter of the standing control strategy. A sketch of this trigger/command flow is given below.
IFMBE Proceedings Vol. 29
428
D.C. Irimia et al.
FES-based controllers testing. Within the patient model as developed in [11], shoulder forces and moment representing the patient voluntary arm support are calculated on a basis of a look-up table, as functions deviations of horizontal and vertical shoulder joint position and trunk inclination from the desired values, and their velocities. In fact, the shoulder forces and movement model is based on a reference trajectory of the shoulder position and trunk inclination during the sit-to-stand transfer obtained during an experiment on a sole paraplegic patient. A simplest and experimentally tested model of patient voluntary arm effort has been implemented in our developed model [12]. In this case the vertical shoulder forces can be modeled as a function for measured knee angles by means of a fuzzy controller. To conclude, the implemented human body model will be provided with the electrical stimulus parameters (pulse and frequency) for any selected muscle groups, in accordance with the desired control strategy of a motion task. The outputs are the angles, angular velocities and accelerations computed at the ankle, knee and hip joints levels. Any new proposed controller is implemented as a Matlab function which provides the stimulation parameters in accordance with the measured/computed joints variables (angles, angular velocities and accelerations). C. The Mechatronic Device Emulating the Human Body It is assumed that a FES-based controller will work for a half body, and therefore in practice two controllers have to be tuned for each side, while the patient arms will provide balance in the transversal plane. Four pulse-proportional servos HITEC HS-422 (two for the ankle joint and one for each of the knee and hip joints) are used to control the posture of a half robot-like human body in accordance with the simulation results. The HITEC HS-422 servos (range: 0 to 180°, voltage: 4.8 – 6.0 VDC, torque: 57 oz.-in., speed: 0.16s / 60 degrees) have been chosen to provide the desired speed and torque of the driven joints [12]. During simulation, a Lynxmotion SSC-32 controller board is used to provide the required pulse width to the servos in accordance with the angles and angular velocities provided by the Simulink&Matlab human body model. A wood light structure has been superposed on the 3-DOF robot links to provide the user with a better understanding of the human body reactions while using a neuroprosthesis.
real clinical trials, an open-loop control of standing-up that simply ramps up the stimulus intensity applied to the quadriceps is enough. Therefore, in accordance with real data recorded for the ankle, knee and hip joints, the standing-up is performed by the robot-like human body. For now the voice controls the initiation of different tasks (standing-up, standing, sitting-down) and tunning of few parameters required for the controllers (i.e. the maximum level allowed for the stimulation intensity). The human body model is initialized with statistically general dimensions for shank, thigh and upper body (0.45 m, 0.5 m and 0.8 m), seat high (0.45 m), link mass (3.5 kg, 10 kg, 23.7kg) and center of mass (0.279 m, 0.244 m and 0.4685 m) for each of the three links. During real clinical trials two controllers are necessary to be tuned for each of the human body sides (left or right). In our case, the left and right controllers are considered to be tuned with the same parameters, only half body is taken into account, and therefore the link masses values are taken for a half body. A sitting-down motion task ONZOFF control has been tested on our test bench (motion sub-phases: Buckle1=5° (knees are unlocked), SitDown=80° (ramp down stimulation to zero), ZONE=70°/s). The form of the switching curve in the second sub-phase (Buckle2) is defined by the maximum knee angular velocity (120°/s) and the Ox intersections (0° and 100°). Firstly, the standing posture is obtained by applying a sequence of pulse width for the servos which produces the same movement like in an open-loop controlled real trial of standing-up in paraplegia. During the controlled sitting-down motion task, the user has the possibility to observe the entire movement and to compare with those which have been observed on real patients. Figure 3 shows the pulse width values which have to be applied over the quadriceps, hamstrings and gluteals by a neuroprosthesis, in accordance with the sitting-down control strategy.
III. RESULTS AND CONCLUSIONS

The main aim is to emulate the human body motion induced by a voice controlled neuroprosthesis during a chained standing-up, standing and sitting-down motion. In
Fig. 3 ONZOFF controller output for quadriceps, hamstrings and gluteals muscles
IFMBE Proceedings Vol. 29
Voice Controlled Neuroprosthesis System
429
In accordance with the ONZOFF controller’s output (pulse width for the controlled groups of muscles), the human body posture during a sitting-down motion task is defined by means of a three joint angles at the ankle, knee and hip level. The ankle, knee and hip angles, which are obtained during simulation for a sitting-down motion task, are converted in pulse width values for the servos imposing the movement of the robot-like human body, in the same time when the simulation is performed (figure 4).
Fig. 4 Pulse width for the driven servos of the robot-like human body structure FES has been in existence since the 1960’s but yet few SCI patients benefited from using FES-based assistive devices. This is due in part to the challenges that controlling FES presents: nonlinear, coupled and time-varying muscles response in the presence of the electrical stimulus; muscle fatigue, spasticity, etc. The proposed robot-like human body mechatronic device offers the possibility to include these effects into the simulation model and to intensively test any new control strategy. Once the results are those expected, one can pass to the next step, the clinical trials, which are expected to be decreasing in number of required trials since a neuroprosthesis is fitted to a patient. It is expected that the use of the proposed mechatronic device will increase the understanding of all the professionals involved within the FES-based rehabilitation field on tuning neuroprosthesis parameters.
ACKNOWLEDGMENT The authors would like to thank the Romanian National Authority for Scientific Research (ANCS-CNMP) for financial support under the grant SINPHA 11-068/2007.
REFERENCES 1. A. Kralj, S. Grobelnik and L. Vodovnik, “Electrical stimulation of paraplegic patients – feasibility study”, in “Proc. Int. Symp. External Control of Human Extremities”, Dubrovnik Yugoslavia, 1973, pp. 561-565. 2. A. Kralj and T. Bajd, “Functional electrical stimulation: standing and walking after spinal cord injury”, CRC Press, Inc., Boca Raton, 1989. 3. M. E. Roebroeck, C. A. M. Doorenbosch, J. Harlaar, R. Jacobs and G. J. Lankhorst, “Biomechanics and muscular activity during sit-to-stand transfer”, “Clin. Biomech.”, vol. 9, 1994, pp. 235-244. 4. D. E. Wood, V. J. Harper, F. M. D. Barr, P. N. Taylor, G. F. Phillips and D. J. Ewins, “Experience in using knee angles as part of a closedloop algorithm to control FES-assisted paraplegic standing”, in “Proc. 6th Int. Workshop of FES: Basics, Technology and Apllication”, Vienna, Austria, 1998, pp. 137-140. 5. H. Golee, K. J. Hunt and D. E. Wood, “New results in feedback control of unsupported standing in paraplegia”, “IEEE Trans. Neural Syst. Rehabil. Eng.”, vol. 12, 2004, pp. 73-80. 6. P. W. Axelson, D. Gurski and A. Lasko-Harvill, “Standing and its importance in spinal cord injury managemnet”, in “Proc. RESNA 10th Ann. Conf. Rehab. Tech..”, San Jose, CA, 1987, pp. 477-479. 7. R. J. Jaeger and G. M. Yarknoy, “Implementing a simple FNS standing protocol in a clinical setting”, in “Advances in External Control of Human Extremities”, Dubrovnik Yugoslavia, 1990, pp. 265-273. 8. T. Inagaki, “Smart collaboration between humans and machines based on mutual understandings”, Annual Reviews in Control, Volume 32, Issue 2, December 2008, pp. 253-261. 9. S.F. Giszter, “Spinal Cord Injury: Present and Future Therapeutic Devices and Prosthesis”, Neurotherapeutics, Volume 5, Issue 1, January 2008, pp. 147-162. 10. A. J. Mulder, P. H. Veltnic and H. B. K. Boom, “On/off control in FES-induced standing up: A model study and experiments”, Med. Biol. Eng. Comput., vol. 30, 1992, pp. 205-212. 11. R. Riener, T. Fuhr, “Patient-driven control of FES-supported standing up: a simulation study”, IEEE Trans. Rehabil. Eng., vol. 6, 1998, pp. 113-124. 12. M. S. Poboroniuc, T. Fuhr, D. E. Wood, R. Riener and N. d. N. Donaldson, “A Fuzzy Controller to Model Shoulder Forces within FES Model-Based Simulation of a Paraplegic Patient”, in Proceedings of the 3rd Academic Biomedical Engineering Research Group (ABERG) Workshop, Bournemouth, UK, 2002, pp. 11-16. Author: Institute: Street: City: Country: Email:
Danut C. Irimia Faculty of Electrical Engineering 23 D.Mangeron Iasi Romania [email protected]
Author: Institute: Street: City: Country: Email:
Marian S. Poboroniuc Faculty of Electrical Engineering 23 D.Mangeron Iasi Romania [email protected]
Author: Institute: Street: City: Country: Email:
Ciprian M. Stefan Faculty of Electrical Engineering 23 D.Mangeron Iasi Romania [email protected]
IFMBE Proceedings Vol. 29
Preoperative Planning Program Tool in Treatment of Articular Fractures: Process of Segmentation Procedure
M. Tomazevic (1), D. Kreuh (2), A. Kristan (1), V. Puketa (1), and M. Cimerman (1)
(1) Traumatology Department, University Clinical Centre, Ljubljana, Slovenia
(2) Ekliptik Ltd., Ljubljana, Slovenia
Abstract— A developing computer program for preoperative planning of articular fractures is presented. The program consists of three closely integrated tools: the 3D viewing tools, the segmentation tools, and the reduction and fixation simulation tools. Data from a CT scan of the fracture in DICOM format are used. First the 3D model is made, and then segmentation is carried out, where each fracture segment becomes an individual object. In reduction, each fracture segment can be moved in all three directions and rotated in all planes, and its pivot point of rotation can be changed. After reduction, fixation can be undertaken, either with plates that can be automatically contoured or with pre-curved plates that are already in the program database. The plan of the automatically contoured plates can be drawn and printed out at 1:1 scale. Most importantly, all the steps can be carried out on a personal computer by the surgeon who is doing the preoperative planning. This is a complete novelty, since the segmentation can be performed by the surgeon; in that way all the fracture lines are studied during preoperative planning. The procedure is quick and easy. This is why we performed segmentation in 20 consecutive cases of fresh articular fractures admitted to our department for which CT was indicated and performed. The steps needed in the segmentation process were recorded, and the fractures were described as to whether they were luxation, multifragmentary or impaction fractures and classified according to the AO classification. The presented computer program is an easily usable application which brings significant value and new opportunities in clinical practice, teaching and research. Keywords— Preoperative, Planning, Articular, Fractures, Computer.
I. INTRODUCTION

Because articular fractures extend into joint surfaces, and because joint motion or loading may cause movement of the fracture fragments, intra-articular fractures can present challenging treatment problems. Most intra-articular fractures heal, but if the alignment and congruity of the joint surface are not restored, the joint may be unstable and, in some instances, especially if the fracture is not rigidly stabilized, healing may be delayed or nonunion may occur. However, prolonged immobilization of a joint with an intra-articular fracture frequently causes joint stiffness. For these reasons, surgeons usually attempt to reduce and securely fix
unstable intra-articular fractures. This approach ideally restores joint alignment and congruity and allows at least some joint motion while the fracture heals. Unfortunately, restoring joint alignment, congruity and stability in patients with severe intra-articular fractures may require extensive surgical exposure that further compromises the blood supply to the fracture site. Even after reduction and adequate initial stabilization, intra-articular fractures may displace due to high transarticular forces, failure of the stabilization or collapse of the subchondral cancellous bone [1]. A study of contact stress aberrations following imprecise reduction of experimental human cadaver tibial plateau fractures showed that peak local cartilage pressure generally increased with increasing joint incongruity (fracture fragment step-off), but the results varied among joints. In most specimens, cartilage pressure did not increase significantly until the fragment step-off exceeded 1.5 mm. When the step-off was increased to 3 mm, the peak cartilage pressure averaged 75% greater than normal [2, 3]. Although the functional anatomy of the joints is well studied and 3D CT has much improved imaging, complete understanding of the fracture lines and fragments is still difficult. Another problem is the choice of the correct operative approach. Reduction of bone fragments, which is usually very demanding, represents a key element for the normal post-operative biomechanical function in articular fractures. Complete and precise control of the reduced fragments is also problematic, because visualization of whole fragments and the joint surface is often technically impossible. After reduction, the problem of fixation occurs. The plates must be precisely contoured in all three planes to fit an individual bone. Taking all this into account, it is obvious that strict preoperative planning is a crucial step in articular surgery. It is not therefore surprising that new technologies have been introduced in orthopedics and trauma to help the surgeon plan and perform operative procedures more precisely. Computer assisted orthopedic surgery (CAOS) has been developed as the application of computer-based technology to assist the surgeon in improving the precision of the operative procedure [4, 5, 6, 7]. There are reports on the use of virtual planning in the resection of pelvic bone tumors, for individual modeling of prosthetic substitutes [5]
and maxillofacial surgery [8, 9, 10]. Programs for the planning of fracture treatment have been described before but, because the segmentation process was so demanding, a group of people had to be involved in it [5, 6]. Together with computer engineers from Ekliptik Ltd., we have developed an experimental computer program which enables performance of the complete procedure, from imaging through the segmentation process to the virtual operation on a fractured bone. The key element of the segmentation process is to know all the fracture lines, so the surgeon really studies the fracture. The purpose of virtual surgery is to perform all the steps of the "real" surgical procedure. We have been using this software in our institution first for dealing with acetabulum fractures but, because it proved so useful and uncomplicated, we started to use it for all articular fractures. Since the segmentation process is a novelty, we studied which steps are needed in different kinds of fractures so that the fracture segmentation is carried down to the last fragment. We performed segmentation on 20 consecutive cases of articular fractures admitted to our clinic and classified them according to the AO classification [11].
II. MATERIALS AND METHODS

The EBS software enables complete preoperative planning of intra-articular fractures on a model acquired from real patient data. Data from CT in DICOM format are used; slices of 3 mm or less are required. We used various slice thicknesses, depending on the joint that was injured: the thinner they are, the better the resolution of the reconstructed model. Models for the simulation are produced semi-automatically, on a PC in the Windows environment, by the surgeon who is doing the preoperative planning. The first part of this process is the determination of the area from which the model is to be built and of the threshold that specifies the densities which define bone. The computer then automatically builds a 3D model. The second part, the segmentation process, is divided into two stages. Basic segmentation is where we clear the acquired segment of artifacts and divide the bones which are in close contact with the fractured bone. Fracture segmentation is where we divide the fragments of the fractured bones. In the segmentation process different tools can be used. The merge tool is used to merge different segments into one segment. The paint tool is used where a fracture line needs to be drawn because the computer does not find it. The fill-hole tool can be used in osteoporotic bone. The separate tool can be used where new objects need to be created from unconnected bones. The split tool is used where segmentation is done by positioning seeding points on different fracture sites, with the computer finding the fracture line in between. The cut tool is used where we cut through the virtual bone although no fracture line is seen; this can be used in planning osteotomies or in impaction fractures. After the segmentation process, each fracture fragment becomes a separate object. In the rendering process, each bony fragment can be colored. After this procedure, the simulation model is ready for use and the surgeon can start to perform the virtual operation. Basic commands are made in a user-friendly manner and the screen is similar to other programs run on regular PCs in the Windows environment. The pelvis can be turned around in all directions during the virtual operation, so each step of the procedure can be studied in relation to the operative approach. Bone fragments can be moved and rotated in all three planes, and the pivot point can be changed, so the reduction of the fracture can be performed and the key bone fragments identified. After reduction, fixation can be performed. The surgeon can choose the appropriate reconstruction plate and put it across the fracture. Contouring of the plate is performed automatically. The screws can be chosen and inserted into the plate or across the fracture, and the length of the screws can be measured accurately. The direction of the screws can be controlled by making the bones more transparent. A special feature of the software is the simulation of an intraoperative C-arm. In our study we recorded which steps were used in different kinds of fractures. We did not measure the time, because it depends on the learning curve and the operator, but segmentation takes around 15 minutes for a surgeon with basic computer skills and an average laptop PC. Later the segmentation process is shown step by step.
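To make the thresholding and "separate" steps concrete, here is a minimal, hedged sketch (not the EBS implementation) of how a bone mask and its unconnected components could be extracted from a CT volume; the ~200 HU threshold is a typical illustrative value for cortical bone, not a value from the paper:

```python
import numpy as np
from scipy import ndimage

def separate_bone_fragments(ct_hu, threshold_hu=200.0):
    """Threshold a CT volume (Hounsfield units) and label each
    unconnected bone region as a separate object."""
    bone_mask = ct_hu > threshold_hu
    labels, n_objects = ndimage.label(bone_mask)  # 3D connected components
    return labels, n_objects

# Usage: each label then becomes one selectable "virtual bone"
volume = np.random.normal(0, 50, size=(64, 64, 64))  # stand-in for CT data
labels, n = separate_bone_fragments(volume)
print(f"{n} unconnected objects found")
```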
III. RESULTS

Among the twenty consecutive cases of articular fractures there were five proximal humerus fractures, one distal humerus fracture, one distal radius fracture, two proximal tibia fractures, one distal cruris fracture, three spine fractures, one pelvis fracture, four acetabulum fractures, one metacarpal fracture and one midfoot fracture. In the basic segmentation process only the separate and split tools were needed. In the fracture segmentation process all tools except the fill-hole tool had to be used. Importantly, the paint tool, the most time-consuming tool, was needed only once; all the other tools are semi-automatic and time-sparing. It was needed in a compression spine fracture where the spine had already undergone degenerative changes. The results are listed in Table 1. We can see that the most challenging are compression fractures: because there are no fracture lines, only bone that is denser than usual, either the paint tool or the cut tool is necessary in these cases.
Table 1 Tools needed for segmentation in different kinds of fractures

Fracture site     | AO classification | Basic segmentation (number of tools needed) | Fracture segmentation (number of tools needed)
Proximal humerus  | 11 B1 | 1 | 3
Proximal humerus  | 11 C1 | 2 | 1
Proximal humerus  | 11 C1 | 2 | 1
Proximal humerus  | 11 C1 | 1 | 1
Proximal humerus  | 11 C2 | 2 | 1
Distal humerus    | 13 C3 | 2 | 3
Distal radius     | 23 B3 | 2 | 1
Proximal tibia    | 41 B2 | 2 | 3
Proximal tibia    | 41 C3 | 2 | 2
Distal cruris     | 43 C3 | 3 | 1
Spine             | 53 A1 | 2 | 4
Spine             | 53 A1 | 2 | 2
Spine             | 53 A2 | 2 | 2
Pelvis            | 61 B1 | 2 | 1
Acetabulum        | 62 A1 | 2 | 1
Acetabulum        | 62 A1 | 2 | 3
Acetabulum        | 62 B1 | 2 | 3
Acetabulum        | 62 B1 | 2 | 3
Metacarpal        | 74 C2 | 2 | 2
Midfoot           | 82 B2 | 2 | 4
The segmentation process of a proximal humerus fracture is shown step by step. First the data are loaded and the field of interest is chosen (Fig. 1); then the 3D model is built automatically (Fig. 2).

Fig. 1 Data is acquired from DICOM images

Fig. 2 3D image

Using the separate tool, the unconnected items are automatically segmented: the computer finds the boundaries between the fragments and the unconnected bones by itself (Fig. 3).

Fig. 3 Automatic segmentation

Then the virtual bones that are of no interest to us are subtracted by simply unmarking them. Next we start with the segmentation of the bones that are in close contact with the fractured bone: they are marked with seeding points, and the virtual bones in close contact with the fracture site are separated (Fig. 4, Fig. 5). To get a better view and a larger working area, it is useful to subtract the unbroken bones before solving the fracture segmentation and, later, the reduction and virtual operation. A sketch of such seed-based splitting follows below.
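The split tool described above, with seeding points from which the program finds the fracture line, behaves much like classical marker-based watershed segmentation. The following sketch, using scikit-image, is an assumption for illustration and not the EBS algorithm:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_by_seeds(bone_mask, seeds):
    """Split one connected bone mask into fragments, one per seeding point."""
    markers = np.zeros(bone_mask.shape, dtype=int)
    for i, (z, y, x) in enumerate(seeds, start=1):
        markers[z, y, x] = i
    # Watershed ridges on the inverted distance map tend to follow the
    # thin separations (fracture gaps) between the seeded regions.
    distance = ndimage.distance_transform_edt(bone_mask)
    return watershed(-distance, markers, mask=bone_mask)
```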
Fig. 4 Setting the seeding points on the bones that are in close contact with the fractured bone

Fig. 5 The semi-automatic segmentation: clavicle, scapula and humerus are separated

When we start working on the fractured bone, in our case the proximal humerus, it is useful to turn it around so we get a general idea of the fracture type and position. Then we again place seeding points, this time on the fracture fragments, and the computer finds the fracture segments on its own; we can control them in all three planes on the CT images. When pleased with the segmentation (Fig. 6), classification can be done and the virtual surgery can proceed.

Fig. 6 Segmented 11 C1 AO fracture

IV. CONCLUSION

Articular fractures are very demanding to treat, which is why preoperative planning is essential. There were programs before with which one could perform the virtual operation, but to really know the fracture the surgeon must be involved in the segmentation process. With the EBS program the surgeon can completely prepare himself for the operation. The segmentation process is easy and can be done without any special computer skills. After segmentation, virtual reduction and fixation are performed on the simulation model, so the real operation seems like just another rehearsal.

REFERENCES

1. Bucholz RW, Heckman JD, Court-Brown C (2006) Rockwood and Green's Fractures in Adults. Lippincott Williams & Wilkins, London
2. Brown TD, Anderson DD, Neola JV, et al. (1988) Contact stress aberrations following imprecise reduction of simple tibial plateau fractures. J Orthop Res 6:851-862
3. Trumble T, Allan CH, Miyano et al. (2001) A preliminary study of joint surface changes after an intraarticular fracture: a sheep model of a tibia fracture with weight bearing after internal fixation. J Orthop Trauma 15:326-332
4. Nolte LP, Beutler T (2004) Basic principles of CAOS. Injury 35(Suppl 1):6-15
5. Dahlen C, Zwipp H (2001) Computer-assistierte OP-Planung: 3D Software für den PC. Unfallchirurg 104:466-479
6. Cimerman M, Kristan A (2007) Preoperative planning in pelvic and acetabular surgery: The value of advanced computerised planning modules. Injury 38:442-449
7. Citak M, Gardner MJ, Kendoff J et al. (2008) Virtual 3D planning of acetabular fracture reduction. J Orthop Res 26:547-552
8. Gellrich NC, Schramm A, Hammer B, et al. (2002) Computer-assisted secondary reconstruction of unilateral posttraumatic orbital deformity. Plast Reconstr Surg 110(6):1417-1429
9. Langlotz F, Bachler R, Berlemann U, et al. (1998) Computer assistance for pelvic osteotomies. Clin Orthop Relat Res 354:92-102
10. Marchetti C, Bianchi A, Bassi M, et al. (2006) Mathematical modeling and numerical simulation in maxillo-facial virtual surgery (VISU). J Craniofac Surg 17(4):661-667
11. Rüedi TP, Buckley RE, Moran CG (2007) AO Principles of Fracture Management. Thieme Verlag, Stuttgart
Neuroimaging of emotional activation: Issues on Experimental Methodology, Analysis and Statistics
C. Styliadis (1, 2, 4), C. Papadelis (3, 4), P.D. Bamidis (2)
(1) Laboratory for Human Brain Dynamics, AAI Scientific Cultural Services Ltd., Nicosia, Cyprus
(2) Laboratory of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
(3) Laboratory of Functional Neuroimaging, Center for Mind/Brain Sciences (CIMeC), University of Trento, Mattarello (TN), Italy
(4) Laboratory for Human Brain Dynamics, Brain Science Institute (BSI), RIKEN, Saitama, Japan
Abstract— Objective: To emphasize significant issues in neuroimaging methods, for a SAM group analysis, when studying the neuronal dynamics of the emotional activation induced by emotional visual stimuli. Efficient tackling of such issues can grant access to the localization ability needed to exploit the temporal resolution, with an accuracy of a few milliseconds, for measuring the neuronal dynamics of emotional processing. Methods: Magnetoencephalography (MEG) was used to record brain activation during a mixed-picture paradigm. Neuronal sources of the MEG data were accessed using Synthetic Aperture Magnetometry (SAM), which provided continuous 3D images of cortical power changes. SAM images were spatially coregistered to anatomical scans. Both anatomical and functional images were spatially normalized into the Montreal Neurological Institute (MNI) template space. Nonparametric permutation methods unveiled the most significant regions among participants. Use of virtual channels revealed the neuronal dynamics for the main effects of the emotional stimulus. Conclusion: A step-by-step high-precision procedure for the coregistration and normalization required for SAM group analysis was presented, and the basis of safety and cooperation with the participants on which the study was built was emphasized. Accurate localization of neuronal sources during emotional activation induced by emotional visual stimuli can be accessed to exploit the temporal resolution, with an accuracy of a few milliseconds, for measuring the neuronal dynamics of emotional processing. Keywords— MEG, Coregistration, Normalization, SAM
I. INTRODUCTION
Over the last decade, neurophysiologic research has been systematically conducted, both quantitatively and qualitatively, to enhance our understanding of the way emotions are embodied in the brain, resulting in activity of specific neuronal sources. A key issue in functional neuroimaging is to create neurophysiologic maps as sensitive as possible to neuronal activity that can be accurately coregistered to high-resolution structural magnetic resonance imaging (MRI) scans. Precise and valid localization reconstructions within well-defined brain regions are
made possible, exploiting the temporal resolution for measuring the neuronal dynamics of the functional processes induced by emotional stimuli. Sensitivity can be enhanced when the measured signal is directly associated with the neuronal activity. Magnetoencephalography (MEG) was used to record brain activation while the participants passively viewed emotional stimuli selected from the International Affective Picture System (IAPS) [1] collection. Neuronal sources of the MEG data were accessed, with an accuracy in the order of millimeters, using Synthetic Aperture Magnetometry (SAM) [2], a distributed source localization method. The use of SAM for spatial localization of cortical activity associated with sensory and cognitive processes has been demonstrated in numerous studies [3], [4], and, interestingly, some have suggested good spatial accuracy and validity through direct comparison of the MEG source solution and the BOLD response in functional MRI (fMRI) experiments [4]. SAM provided, for each participant, continuous 3D functional images of cortical power changes based on the whole run, and each of these was spatially coregistered to the corresponding MRI scan. An important necessity in SAM group analysis is precise and valid spatial normalization [5]. Inter-participant variability in the sulcal and gyral patterns of the brain can introduce spatial uncertainty, and if brain areas are not properly aligned between participants, sensitivity is lost [6]. Normalization of each participant's brain into a standardized space results in a direct correspondence between their brains. Automated image-matching algorithms using nonlinear warping [7], [8] were employed to minimize the difference between the participant's structural and functional image and the Montreal Neurological Institute (MNI) standard template space. Nonlinear warping aligns the sulci and other structures down to a spatial scale specified by the parameterization of the nonlinear warp. Nonparametric permutation methods (SnPM) [9] unveiled the statistical parametric mapping of the most significant regions among participants, and their spatial extent was set out using cytoarchitectonic maps [10]. Virtual channels were associated with target coordinates of regions of interest (ROIs)
in order to provide temporal insights of the neuronal dynamics for the main effects of the emotional stimulus. This paper aims to present in a step by step fashion those methodological issues that are crucial for an MEG study on emotional activation.
II. METHODS AND RESULTS
A. Participants

Participants attended an MRI session followed, within the following days, by an MEG session. This study was approved by the Research Ethics Committee of RIKEN and was part of the Affection project [11]. After a complete and detailed description of the study to the participants, written informed consent was signed prior to the experimental task. Participants with a history of psychiatric, neurological or other serious physical illness, drug or alcohol abuse, regular consumption of medication, as well as with metal implants in their body, were excluded. Each participant's noise was measured inside the magnetically shielded room (MSR) prior to any measurement. A note about the participant's alertness on a scale of 1 to 5 (drowsiness to alertness) was taken after the end of each run. All participants could terminate the experiment at any time.

B. Affective Stimuli

Participants passively viewed pictures from the IAPS [1] collection on a homogenous black background. In brief, IAPS pictures follow the principle that emotional behavior is organized across affective valence (ranging from pleasant to unpleasant) and arousal (ranging from excitement to calm). The stimuli selection was in accordance with our previous study [12].

C. Visual Task Design

Stimulus delivery was controlled by Presentation software (Neurobehavioral Systems, Inc., Albany, CA, USA). Accurate timing of the visual stimulus onset times was determined by luminance detection with an optical sensor positioned on the visual stimulus screen. In aid of this, a white patch of 20 x 20 pixels was inserted into the right corner of the screen's black background; although it had the same onset and offset as each stimulus, it remained unseen, being covered by the optical sensor attached to the screen. Stimuli were back-projected onto a screen via a DLP projector with a 96 Hz refresh rate (HL8000Dsx+, NEC Viewtechnology Ltd., Tokyo, Japan) located outside the MSR (Fig. 1a). Each participant's head was positioned directly under the gantry, and the back of his/her head rested at
the back of the dewar so that the distance between the occipital lobe and the MEG sensors was minimized. The screen was positioned in such a way that each picture was projected centered onto it (Fig. 1c). Participants were requested to keep their eyes open and fixate on the center of the screen. Pictures were presented at 400 x 400 pixel resolution on a 10-inch MSR-compatible screen, 55 cm from the participant's visual field, at a visual angle of 4 degrees. The task, performed in the pitch darkness of the MSR, was in accordance with the design of our previous study [12].
Fig. 1 (a) The DLP projector placed outside the MSR, (b) view of the MSR, (c) the optimal configuration of the experimental setup
D. Head Localization

Three head localization coils were attached to the nasion and the left and right pre-auricular points, respectively, and defined a coordinate system based on these fiducial points (Fig. 2). Head localization data were collected at the beginning and end of each run by activating the three coils. These data were essential in calculating the participant's head position relative to the sensors throughout the measurement. If a participant had moved excessively during a run (>5 mm), the run was repeated. By registration of the head position at these three points, the MEG data can be superimposed on the corresponding anatomical scan with an accuracy of a few millimeters. A sketch of the fiducial-based frame construction is given below.
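As an illustration of the fiducial-based head frame (a common CTF-style convention, with the origin midway between the pre-auricular points and the x-axis toward the nasion, assumed here for concreteness) and of the >5 mm movement criterion:

```python
import numpy as np

def head_frame(nasion, lpa, rpa):
    """Build a head coordinate frame from the three fiducials
    (CTF-style convention assumed): origin midway between LPA and
    RPA, x toward the nasion, z up out of the fiducial plane."""
    nasion, lpa, rpa = map(np.asarray, (nasion, lpa, rpa))
    origin = (lpa + rpa) / 2.0
    x = nasion - origin
    x /= np.linalg.norm(x)
    z = np.cross(x, lpa - rpa)        # perpendicular to the fiducial plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                # toward the left pre-auricular point
    return origin, np.vstack([x, y, z])

def moved_too_much(coils_start, coils_end, tol_mm=5.0):
    """Repeat the run if any localization coil moved more than tol_mm."""
    d = np.linalg.norm(np.asarray(coils_end) - np.asarray(coils_start), axis=1)
    return bool(np.any(d > tol_mm))
```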
C. Visual Task Design Stimulus delivery was controlled by Presentation software (Neurobehavioral Systems, Inc., Albany, CA, USA). Accurate timing of visual stimulus onset times was determined by luminance detection with an optical sensor positioned on the visual stimulus screen. In aid to this, a white pixel, at 20 x 20 pixel resolution, was imported into the right corner of the screen’s black background and although having the same onset and offset as each stimuli, it remained unseen, by the attached to the screen optical sensor. Stimuli were back-projected onto a screen via a DLP projector with a 96 Hz refresh rate (HL8000Dsx+, NEC Viewtechnology Ltd., Tokyo, Japan) located outside the MSR (Fig. 1a). Each participant’s head was positioned directly under the gantry and the back of his/her head was rested at
Fig. 2 Anatomical scan of a participant’s head with the fudicial points placed on the scalp
E. Co-Registration of MEG and MRI, Polhemus and 3D Camera The spatial volume of the head can be represented in a variety of three-dimensional coordinate systems, each one expressed as points in three axes: (x, y, z). Coregistration between the MEG coordinate system estimated from the
IFMBE Proceedings Vol. 29
436
C. Styliadis, C. Papadelis, and P.D. Bamidis
relevant position of the MEG coils, the MRI coordinate system and Polhemus data based on the digitizer’s transmitter location was needed for the system to translate locations to a common reference (see Fig. 3). Each system has its own distinct coordinate system. The MEG head coordinate system served as a common reference. High-resolution anatomical T1-weighted scans were collected using a 1.5-T Scanner, (Model ExcelArt, Toshiba Medical Systems). Before the MEG experiment, five coils (the three main head-localization coils mentioned before and two additional coils which introduce more precision) were attached on the left and right forehead of the participant’s head. The surface of the head and face, with all five coils, was digitalized by the means of a 3D digitizer (FASTTRAK, Polhemus, Colchester, VT, USA) as well as a 3D camera system (VIVID 9i 3D Digitizer, Konica Minolta Holdings, Inc., Tokyo, Japan). One-by-one activation of the five coils allowed the calculation of their position in respect to the participant’s head coordinate system. The combination of the head and face surface details was used to reconstruct each participant’s head shape as accurately as possible (see Fig 4). The digitized head shape was fitted on the MRI to get a transformation matrix between the coils (fiduciary and extra) and the MRI coordinate system [13]. Coregistration results were manually checked, and in the case the fit was not accurate, the digitization process was repeated. F. Data Acquisition Magnetic fields were recorded using the CTF (VSM MedTech Ltd.) whole head 151-channel system (Omega 151, CTF Systems, Inc., Vancouver, B.C., Canada) inside the MSR, at the Brain Science Institute, RIKEN, in Saitama, Japan. This system is equipped with synthetic 3rd gradient balancing, an active noise cancellation technique that uses a set of reference channels to subtract background interference. Electrooculography (EOG) and Electrocardiography (ECG) were simultaneously recorded. Signals from all channels of different modalities as well as the photodiode and the trigger from the stimulus PC were collected through a 200 Hz low-pass filter and digitized at a sampling rate of 1250 Hz by the CTF acquisition program of the MEG hardware. Epochs were in accordance to those of our previous study [12].
Fig. 4 The different coordinate systems used in the coregistration procedure: (a) the MRI (b) the 3D head surface, (c) the coils, (d) the 3D face surface, (e) the accurate fitting of all coordinate systems except for the MRI, f) the head shape
G. Data Processing VSM/CTF software was used for MEG/MRI data processing by using a 3rd order gradient filter, a 50 Hz notch filter (and its harmonics) and by removing the DC. The recorded MEG signal was visually inspected for possible artifacts and bad channels inferring noise were removed. Prior to SAM analysis, a multisphere head model was created for each participant based on MRI scans. A multisphere’s advantage over a single sphere model is that each sphere (one per MEG sensor) is fitted to a small patch of the head model (directly under the sensor) resulting in better modeling of the local return currents. H. Synthetic Aperture Magnetometry (SAM) Analysis SAM is a beamformer method, with a spatial filter designed to detect signals from a specified location and attenuate signals from all other locations [2]. SAM provided continuous 3D images of cortical power changes within the frequency band of 0 to 40Hz and was applied to the raw MEG data. SAM Suite of VSM/CTF software was employed to analyze these data by the use of dual-state imaging. The active and the control intervals were 0 to 1000 ms and -1000 to 0 ms respectively and their source power difference was calculated and normalized with respect to the noise variance to give a pseudo t-statistic [2]. A significant part of this analysis was conducted at the Laboratory for Human Brain Dynamics, in Nicosia, Cyprus. I. SPM Normalization
Fig. 3 Optimal precision of the head shape’s fitting onto an MRI
For the needs of group analysis, each participant’s structural and functional image was spatially normalized into the MNI template space using the SPM5 software [14].
Fig. 5 Typical SAM images of two different participants displaying a
437
Fig. 6 Spatial localization of regions and its temporal dynamics
pseudo-t value on similar areas of activation
J. Post SAM Statistical Analysis

Nonparametric permutation methods (SnPM) [9], via the use of paired t-tests, were used to derive the group averages of the t-statistic images. The main effects of the emotional stimuli were set out in the form of statistical parametric maps of spatial distribution at a significance level of P < 0.05. Activated regions were considered significant if their spatial extent was in the order of 10 voxels or more, and they were labeled using the cytoarchitectonic maps provided in the SPM Anatomy toolbox [10], [14].
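As a minimal sketch of the sign-flipping idea behind such paired permutation tests (illustrative only; SnPM [9] operates on whole statistic images with image-level thresholding):

import numpy as np

def paired_permutation_p(diff, n_perm=10000, seed=0):
    """diff: within-pair differences, one per subject. Returns a two-sided p-value."""
    rng = np.random.default_rng(seed)
    t_obs = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
    t_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=len(diff))   # exchangeable under H0
        d = diff * signs
        t_null[i] = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return np.mean(np.abs(t_null) >= abs(t_obs))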
K. Virtual Channels

The SAM Suite of the VSM/CTF software was employed to associate virtual channels with the target coordinates of regions of interest, in order to extract the temporal dynamics of the neuronal activity for the main effects of the emotional stimulus.
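Conceptually, a virtual channel is just a weighted sum of the sensor time series, with the weight vector taken from the beamformer solution for the target coordinate. A toy sketch with placeholder data:

import numpy as np

n_sensors, n_samples = 151, 1250           # one second at 1250 Hz, as acquired
B = np.random.randn(n_sensors, n_samples)  # sensor data (placeholder)
w = np.random.randn(n_sensors)             # beamformer weights (placeholder)
virtual_channel = w @ B                    # source time course at the target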
III. CONCLUSION
This paper has presented a step-by-step, high-precision procedure for the coregistration and normalization required for SAM group analysis, as well as the approach, in terms of safety and cooperation, that must be followed towards participants, and the crucial role that a suitable and well-informed participant can play in a study. Accurate localization of neuronal sources during emotional activation, induced by emotional visual stimuli, can be exploited together with the temporal resolution of a few milliseconds to measure the neuronal dynamics of emotional processing. Figure 6 illustrates the spatial localization of two regions involved in emotional processing, as well as their corresponding temporal dynamics expressed in the form of virtual channels.
ACKNOWLEDGMENT

This work has benefited from a grant by the Greek General Secretariat for Research and Technology and the Cyprus Research Promotion Foundation (grant Upgrade/0308/09).
REFERENCES

1. Lang P J, Bradley M M, Cuthbert B N (2005) International affective picture system (IAPS): Instruction manual and affective ratings. Technical Report A-6, The Center for Research in Psychophysiology, University of Florida
2. Robinson S E, Vrba J (1999) Functional neuroimaging by synthetic aperture magnetometry (SAM). In: Yoshimoto T, et al. (Eds.) Recent Advances in Biomagnetism. Tohoku Univ Press
3. Hillebrand A, Singh K D, Holliday I E, Furlong P L, Barnes G R (2005) A new approach to neuroimaging with magnetoencephalography. Hum Brain Mapping 25:199-211
4. Singh K D, Barnes G R, Hillebrand A, Forde E M, Williams A L (2002) Task-related changes in cortical synchronization are spatially coincident with the hemodynamic response. NeuroImage 16(1):103–114
5. Hillebrand A, Barnes G R (2003) The use of anatomical constraints with MEG beamformers. NeuroImage 20(4):2302-2313
6. Brett M, Johnsrude I S, Owen A M (2002) The problem of functional localization in the human brain. Nat Rev Neurosci 3:243–249
7. Ashburner J, Friston K J (1997) The role of registration and spatial normalization in detecting activations in functional imaging. Clinical MRI/Developments in MR 7(1):26-28
8. Toga A W, Thompson P M, Mori S, Amunts K, Zilles K (2006) Towards multimodal atlases of the human brain. Nat Rev Neurosci 7:952–966
9. Nichols T E, Holmes A P (2002) Nonparametric permutation tests for functional neuroimaging experiments: A primer with examples. Human Brain Mapping 15:1-25
10. Eickhoff S, Stephan K E, Mohlberg H, Grefkes C, Fink G R, Amunts K, Zilles K (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage 25(4):1325-1335
11. The Affection Project (2008) at http://kedip.med.auth.gr/affection
12. Lithari C, Frantzidis C A, Papadelis C, Vivas A B, Klados M A, Kourtidou-Papadeli C, Pappas C, Ioannides A A, Bamidis P D (2010) Are Females More Responsive to Emotional Stimuli? A Neurophysiological Study Across Arousal and Valence Dimensions. Brain Topography 23(1):27-40
13. Hironaga N, Schellens M, Ioannides A A (2002) Accurate co-registration for MEG reconstructions. In: Nowak H, Haueisen J, Gießler F, Huonker R (Eds.) Proceedings of the 13th International Conference on Biomagnetism. VDE Verlag, Berlin, pp 931–933
14. SPM at http://www.fil.ion.ucl.ac.uk/spm

Author: Styliadis Charalampos
Institute: Laboratory for Medical Informatics, Medical School, Aristotle University of Thessaloniki
Street: P.O. Box 323, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
Using Grid Infrastructure for the Promotion of Biomedical Knowledge Mining

A. Chatziioannou1, I. Kanaris2, C. Doukas2, I. Maglogiannis3

1 Institute of Biological Research and Biotechnology/National Hellenic Research Foundation, Athens, Greece
2 Department of Information and Communication Systems Engineering/University of the Aegean, Karlovassi, Samos, Greece
3 Department of Computer Science and Biomedical Informatics/University of Central Greece, Lamia, Greece
Abstract— Transcriptomic technologies (DNA microarrays, next-generation sequencers) represent a major innovation in biomedical research, contributing an unprecedented wealth of data for the genome-wide inspection of an organism. The GRISSOM web application is a microarray analysis environment exploiting Grid technologies. In this work we present how the novel functionalities it incorporates through the use of various web services gradually transform it into a generic paradigm for versatile biological computing, semantic mining and knowledge discovery.

Keywords— grid computing, DNA microarrays, genomic analysis, functional genomics, GRISSOM.

I. INTRODUCTION
Transcriptomic technologies (DNA microarrays, next-generation sequencers) represent a major innovation in biomedical research, contributing an unprecedented wealth of data for the genome-wide inspection of an organism. Experiments that focus on global gene expression monitoring lead, among others, to the identification of significant alterations in the transcript levels of a certain organic system, or even to the derivation of prognostic and diagnostic genetic signatures. The widespread adoption of this approach has led to an ever-growing number of transcriptomic studies, based on various types of microarrays (oligonucleotide, cDNA), and to the creation of a huge amount of data. The bottleneck in this procedure, however, resides in the meaningful interpretation of these experiments. Thus, independent scientists, institutions and research centers are looking for tools and platforms for microarray analysis that are powerful and user-friendly at the same time, in order to tackle the enormous underlying complexity characterizing the interpretation of gene profiling experiments. On the other hand, there is an increasing need for computational power as the cost of conducting microarray experiments drops, thus facilitating the development of larger experimental datasets. In this paper, we present the case of a biological portal, the GRISSOM (GRids for In Silico Systems BiOlogy and Medicine) microarray analysis environment, and how it can be transformed into a successful generic paradigm for versatile biological computing. GRISSOM is an application programmed following various Grid computing
methodologies (i.e. parallel job dispatchers, web services, distributed databases), and the more recent innovations with respect to its workflows gradually transform it into a strong meta-mining environment.

II. A DISTRIBUTED BIOINFORMATIC COMPUTING PARADIGM
A. Related Work

Web-based microarray analysis platforms have been implemented so far, providing various levels of functionality regarding both the statistical analysis and annotation steps. SNOMAD [1] represents a rather simple implementation, utilizing R statistical functions for microarray analysis. The Engene platform [2] uses C++ routines for statistical analysis and a PHP-based web interface that provides visualization of output or clustered data. The GEPAS system [3, 4] performs pre-processing, statistical analysis and meta-analysis of differentially expressed genes using clustering, classification and annotation. GEPAS is based on cluster technology in order to provide high availability to user requests. However, the analysis algorithms are not parallelized for maximum performance, and all analysis and annotation steps are single-threaded executions. A seminal effort in utilizing Grid computing for DNA microarray analysis was the HECTOR platform [5], exploiting the Hellenic Grid infrastructure through the use of MPI technology. GEMMA [6] is another example of such a solution, deployed over the Italian EGEE infrastructure. BioVLAB-Microarray is a cloud-computing-inspired solution [7]. These latest developments emphasize the benefit of fusing bioinformatics with Grid computing technologies.

B. GRISSOM Brief Overview

The GRISSOM platform [8] is a web shell capable of providing experts with a powerful, labor-free, rapid computational pipeline, able to manipulate huge volumes of DNA microarray data. The design of the GRISSOM application, which is given in Fig. 1, allows a wide range of users with different levels of expertise to perform versatile analyses through its web-based interface. Experimental analysis is divided into several steps, including data importing, pre-
processing, statistical analysis and clustering, while real-time help is available for every step. The user only needs to provide the platform with the necessary data and define the options and values for the experimental analysis. The processing is not interactive, since it runs on the GRID infrastructure. Annotated output results are exportable, can be submitted for pathway analysis exploiting Gene Ontology terms, and can be integrated upon request into the GRISSOM repository. The analysis pipeline adopted by GRISSOM is based upon the Gene ARMADA application [9], which has been extensively tested and used in several multi-level analysis studies [10, 11].
Fig. 1 GRISSOM Architecture

C. Distributed computing features of the application

The implementation of the web server, which transparently interfaces with the end users, is based upon PHP scripts over a MySQL database. Utilizing the gLite [12] and Globus [13] toolkits, the web server acts as a grid UI (user interface), communicating with the GRID infrastructure for job submission and monitoring. The instantiation of the jobs in the Grid infrastructure is performed through bits of code written in the JDL language, through which the input data, together with the analysis parameters, are transferred to Grid storage elements and a description of the job to be executed is created. In order to cope with the augmented use of the distributed processing power by a multitude of applications that are heterogeneous in terms of resources and duration demands, and to minimize the queuing time, the application has been redesigned to support distributed computing methodologies through job schedulers that perform supervised job management in the Grid, as derived by a special Directed Acyclic Graph (DAG), written in Python and the Octave Forge mathematical language, instead of using MPI computing workflows. The DagMan (Directed Acyclic Graph Manager) system, a tool of the Condor platform [14], is the solution exploited for this scope. The normalization steps are selected for parallelization because of their high computational cost (O(n^3 log n)). After the end of data normalization, the data are mustered and merged into a single Octave Forge data structure in order to perform the final statistical testing, which is not computationally intensive. The application implements a repository for microarray data submitted by its users, fully stored in the Grid storage elements, complying with the MIAME [15] and MINiML (MIAME Notation in Markup Language) [16] annotation systems. The compatibility of the distributed repository with these standards for microarray data exchange ensures unobstructed access to published data from public repositories such as NCBI GEO [17] and ArrayExpress [18]. In this way, the users can also easily access through the application, and process through its pipeline, already published microarray experiments.
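To illustrate the scatter-gather structure of such a DAG (parallel normalization of data chunks, then a serial merge and a cheap final test), here is a minimal local sketch in which concurrent.futures stands in for DagMan/Grid dispatch and the processing functions are placeholders:

from concurrent.futures import ProcessPoolExecutor

def normalize(chunk):                 # expensive step, dispatched in parallel
    return sorted(chunk)              # placeholder for per-chunk normalization

def merge(parts):                     # gather step into a single structure
    return [x for part in parts for x in part]

if __name__ == "__main__":
    chunks = [[3, 1], [9, 4], [2, 8]]
    with ProcessPoolExecutor() as pool:
        normalized = list(pool.map(normalize, chunks))
    merged = merge(normalized)        # cheap statistical test would follow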
D. Incorporation of Web Service Technologies to promote the versatility of the application

Web Services are emerging as a promising technology for building distributed applications. Service Oriented Architecture (SOA) [19] enables the concept of loosely coupled, open-standard, language- and platform-independent systems. The loose coupling allows service providers to modify backend functions while maintaining the same interface to clients. The core service functions are encapsulated and remain transparent to clients. The open-standard approach supports collaboration and integration with other services. Web Services are platform, programming language, tool and network infrastructure independent, fostering the reuse of existing back-end infrastructure. The basic SOA includes three service components: provider, requester and registry. The current implementation of Web Services mainly utilizes XML technologies and obeys W3C-defined standards. WSDL (Web Service Description Language) is commonly defined by the service provider for invoking the service. SOAP (Simple Object Access Protocol) is adopted as the message transfer protocol between requester and provider. UDDI (Universal Description, Discovery and Integration) is used for service registration and discovery. In order to facilitate meta-analysis and massive annotation of experiment-related platforms and genes, GRISSOM integrates an interface to the BioMart system [20], a query-oriented descriptive biological data management system. BioMart provides transparent access through the use of web services or APIs written in Perl and Java. These web services were integrated into the GRISSOM application and adapted to querying the EBI ArrayExpress [18] and Ensembl [21] databases. GRISSOM can be integrated in other application workflows through the use of its own web service. Access pertains
to experiment management (submission and monitoring) and query submission for experiment retrieval from the GRISSOM repository. The respective Web Service is described through the appropriate WSDL representation of the service, as illustrated in Fig. 2.

Fig. 2 WSDL instance of the GRISSOM WSDL describing basic functions for submitting and monitoring experiments to the platform
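As a hypothetical sketch of how a client could invoke such a SOAP service from Python, the following uses the third-party zeep library; the WSDL URL and the operation names (submitExperiment, getExperimentStatus) are invented for illustration and are not the actual GRISSOM interface:

from zeep import Client  # third-party SOAP client (pip install zeep)

client = Client("http://example.org/grissom/service?wsdl")   # hypothetical URL
job_id = client.service.submitExperiment(dataset="GSE0000",  # hypothetical op
                                         pipeline="affy")
status = client.service.getExperimentStatus(job_id)          # hypothetical op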
E. Incorporation of novel knowledge mining features

In order to link groups of genes to specific pathways and biological processes, additional analysis is required, which exploits available knowledge databases. GRISSOM performs meta-analysis exploiting the Gene Ontology (GO) combined with established statistical methods, through the incorporation of the RankGO algorithm [10], in order to link genes with important broader cellular processes. The meta-analysis algorithm performs statistical enrichment analysis of GO terms together with bootstrapping, in order to tackle the inherent bias introduced by the GO tree structure and to derive a ranked list of biological processes linked to GO terms, where the prioritization is based both on statistical measures and on the biological content (number of genes) of each GO term.
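For a single GO term, the core enrichment computation reduces to a hypergeometric tail probability; a minimal sketch with illustrative counts (RankGO's bootstrapping and ranking steps are not reproduced here):

from scipy.stats import hypergeom

N, K = 20000, 150   # annotated genes overall / genes carrying the GO term
n, k = 300, 12      # differentially expressed list / hits within it
p_enrich = hypergeom.sf(k - 1, N, K, n)  # P(X >= k) under random sampling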
Another novel feature of GRISSOM, in the direction of becoming a versatile Systems Biology platform, is the incorporation of the KEGGConverter functionality. KEGGConverter [22] is a web-based application which uses KGML files as its source in order to construct cellular networks, integrating several biochemical pathways, as described in the KEGG Pathways biological database, into a single SBML model fully functional for simulation purposes. This functionality is gained through the development of a specific web service, which provides full control with respect to the number and the identity of the pathways that are to be incorporated into a model, as well as the version of the SBML model that the service will create, that is, whether it will only be a stoichiometric model or will also include kinetic information about the pertaining reactions. Towards the direction of providing support for as many experimental DNA microarray technologies as possible, novel workflows have been developed which implement a distributed analysis pipeline for Illumina microarrays. Technically, this pipeline has again been implemented through the use of the DagMan job scheduler, which automatically partitions the submitted job and monitors its dispatch in the Grid infrastructure. Another very effective feature for the apposite interpretation of DNA microarray experiments is the Batch Enactor. Currently it enacts multiple independent analysis workflows for the same or multiple datasets, which run in a distributed fashion by exploiting the GRID infrastructure, through a batch-mode implementation that has been developed. Fig. 3 illustrates the parallelization that the batch enactor implements.
Fig. 3 Different processing scenarios that the batch enactor processes in parallel

III. CONCLUSIONS
The computational complexity of the analysis workflows of modern high-throughput biological experimental techniques sets as a critical priority the exploitation of computational methodologies that improve processing performance and reduce the computing time of the respective workflows. GRISSOM represents a grid computing environment for versatile DNA microarray analysis, yet the generic nature of its workflows makes them applicable to a wide range of problems amenable to statistical processing. The incorporation of new functionalities through the use of different web services of biological data mining applications, exploiting various knowledge bases, transforms the application into a biological knowledge mining environment, gradually shifting its power to extensive semantic processing
operations. In this sense, both the already developed web services, as well as those currently under design, which focus on the development of graph-theoretic algorithms that can be combined with specific biomedical ontologies to derive detailed molecular descriptions as interaction networks of biomolecules, introduce a systems biology perspective, aiming to derive whole cellular networks from the experimental data through extensive in-silico testing. Other future work will attempt to introduce knowledge mining from protein-DNA and protein-protein interaction databases, as well as biological text mining functionalities.
ACKNOWLEDGMENT

This work is funded by the Information Society Technology program of the European Commission "e-Laboratory for Interdisciplinary Collaborative Research in Data Mining and Data-Intensive Sciences (e-LICO)" (IST-2007.4.4231519).

REFERENCES

1. C. Colantuoni, G. Henry, S. Zeger, J. Pevsner: 'SNOMAD (Standardization and NOrmalization of MicroArray Data): web-accessible gene expression data analysis', Bioinformatics Vol. 18 no. 11, 2002
2. García de la Nava J, Santaella DF, Cuenca Alba J, María Carazo J, Trelles O, Pascual-Montano A: 'Engene: the processing and exploratory analysis of gene expression data', Bioinformatics 2003 Mar 22;19(5):657-8
3. Herrero J, Al-Shahrour F, Díaz-Uriarte R, Mateos A, Vaquerizas JM, Santoyo J, Dopazo J: 'GEPAS: A web-based resource for microarray gene expression data analysis', Nucleic Acids Res. 2003 Jul 1;31(13):3461-7
4. Tárraga J, Medina I, Carbonell J, Huerta-Cepas J, Minguez P, Alloza E, Al-Shahrour F, Vegas-Azcárate S, Goetz S, Escobar P, Garcia-Garcia F, Conesa A, Montaner D, Dopazo J: 'GEPAS, a web-based tool for microarray data analysis and interpretation', Nucleic Acids Res. 2008 Jul 1;36
5. I. Maglogiannis, A. Chatzioannou, J. Soldatos, V. Mylonakis, J. Kanaris: 'An Application Platform Enabling High Performance Grid Processing of Microarray Experiments', Proc. of the 20th IEEE International Symposium on Computer Based Medical Systems (CBMS 2007), Maribor, Slovenia, June 20-22, 2007
6. I. Porro, L. Torterolo, L. Corradi, M. Fato, A. Papadimitropoulos, S. Scaglione, A. Schenone, F. Viti: 'A Grid-based solution for management and analysis of microarrays in distributed experiments', BMC Bioinformatics 2007, 8(Suppl 1):S7
7. Y. Yang, J. Youl Choi, K. Choi, M. Pierce, D. Gannon, S. Kim: 'BioVLAB-Microarray: Microarray Data Analysis in Virtual Environment', Fourth IEEE International Conference on eScience, pp. 159-165, 2008
8. A. Chatziioannou, I. Kanaris, I. Maglogiannis, C. Doukas, P. Moulos, E. Pilalis, F.N. Kolisis: 'GRISSOM web based Grid portal: Exploiting the power of Grid infrastructure for the interpretation and storage of DNA microarray experiments', Proceedings of the 9th International Conference on Information Technology and Applications in Biomedicine (ITAB), November 5-7, 2009, Larnaca, Cyprus
9. A. Chatziioannou, P. Moulos, F.N. Kolisis: 'Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB', BMC Bioinformatics 10:354, 2009
10. Tzouvelekis A, Harokopos V, Paparountas T, Oikonomou N, Chatziioannou A, Vilaras G, Tsiambas E, Karameris A, Bouros D, Aidinis V: 'Comparative expression profiling in pulmonary fibrosis suggests a role of hypoxia-inducible factor-1alpha in disease pathogenesis', Am J Respir Crit Care Med 2007, 176(11):1108-1119
11. Welboren WJ, van Driel MA, Janssen-Megens EM, van Heeringen SJ, Sweep FC, Span PN, Stunnenberg HG: 'ChIP-Seq of ERalpha and RNA polymerase II defines genes differentially responding to ligands', EMBO J 2009, 28(10):1418-1428
12. E. Laure, S.M. Fisher, A. Frohner, C. Grandi, P. Kunszt, A. Krenek, O. Mulmo, F. Pacini, F. Prelz, J. White, M. Barroso, P. Buncic, F. Hemmer, A. Di Meglio, A. Edlund: 'Programming the Grid with gLite', Computational Methods in Science and Technology 12(1), 33-45, 2006
13. I. Foster, C. Kesselman: 'Globus: a Metacomputing Infrastructure Toolkit', International Journal of High Performance Computing Applications, Vol. 11, No. 2, 115-128, 1997
14. T. Tannenbaum, D. Wright, K. Miller, M. Livny: 'Condor - A Distributed Job Scheduler', in Thomas Sterling (Ed.), Beowulf Cluster Computing with Linux, The MIT Press, 2002
15. Brazma A, Hingamp P, Quackenbush J, (…), Vilo J, Vingron M: 'Minimum information about a microarray experiment (MIAME) - toward standards for microarray data', Nat Genet. 2001 Dec;29(4):365-71
16. Edgar R, Barrett T: 'NCBI GEO standards and services for microarray data', Nat Biotechnol 2006, 24(12):1471-2
17. Barrett T, Troup DB, Wilhite SE, Ledoux P, Rudnev D, Evangelista C, Kim IF, Soboleva A, Tomashevsky M, Marshall KA, Phillippy KH, Sherman PM, Muertter RN, Edgar R: 'NCBI GEO: archive for high-throughput functional genomic data', Nucleic Acids Res. 2009 Jan;37(Database issue):D885-90, Epub 2008 Oct 21
18. A. Brazma, H.E. Parkinson, U. Sarkans, M. Shojatalab, J. Vilo, N. Abeygunawardena, E. Holloway, M. Kapushesky, P. Kemmeren, G. Garcia Lara, A. Oezcimen, P. Rocca-Serra, S.-A. Sansone: 'ArrayExpress - a public repository for microarray gene expression data at the EBI', Nucleic Acids Research 2003, 31(1):68-71
19. E. Newcomer, G. Lomow: 'Understanding SOA with Web Services', Addison Wesley, 2005; ISBN 0-321-18086-0
20. Haider S, Ballester B, Smedley D, Zhang J, Rice P, Kasprzyk A: 'BioMart Central Portal - unified access to biological data', Nucleic Acids Research 2009
21. Hubbard T, Barker D, Birney E, Cameron G, (…), Vastrik I, Clamp M: 'The Ensembl genome database project', Nucleic Acids Res. 2002 Jan 1;30(1):38-41
22. K. Moutselos, I. Kanaris, A. Chatziioannou, I. Maglogiannis, F.N. Kolisis: 'KEGGconverter: a tool for the in-silico modelling of metabolic networks of the KEGG Pathways database', BMC Bioinformatics 10:324, 2009

Corresponding Author: Aristotelis Chatziioannou
Institute: Institute of Biological Research and Biotechnology, National Hellenic Research Foundation
Street: 48 Vassileos Constantinou Ave.
City: Athens
Country: Greece
Email: [email protected]
A Laboratory Scale Facility for the Parametric Characterization of the Intraocular Pressure of the Human Eye

A.V. Michailidou1, P. Chatzi1, P.G. Kalozoumis1, A.I. Kalfas1, M. Pappa2, I. Tsiafis2, E.I. Konstantinidis3, P.D. Bamidis3

1 Department of Mechanical Engineering, Laboratory of Fluid Mechanics and Turbomachinery,
2 Department of Mechanical Engineering, Laboratory for Machine Tools and Manufacturing Engineering,
3 Lab of Medical Informatics, Medical School,
Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece
Abstract- This paper presents the initial phases of the design and development of a laboratory scale facility for the characterization of the intraocular pressure in the human eye. The development phase included the mechanical design of the various parts of the facility. Several manufacturing techniques have been employed, including multi-axis computer-aided manufacturing as well as three-dimensional shaping of porous media. Furthermore, contact lens manufacturing technology has been employed in order to form the various parts of the eye model, such as the cornea and the eyelid. Specialized contact lenses are envisaged that would be used to obtain fluid motion and mechanical stress data from the flow simulating the biological flow in the anterior chamber. Particular effort has been devoted to the characterization of the mechanical properties of the materials used to simulate the cornea, in order to guarantee dynamic similarity between the model and the biological flow field.

Keywords- eye, glaucoma, intraocular pressure, laboratory scale device.
I. INTRODUCTION

The increasing effect of glaucoma on the general population, and especially on young and elderly people, in conjunction with its serious implications, necessitates research into new measurement techniques for the fluctuating intraocular pressure (IOP). Glaucoma is considered to be the second most frequent cause of permanent visual disability or blindness in the developed world and is indissolubly connected with increased intraocular pressure. It is estimated that the prevalence in the general population is approximately 2%, whereas this percentage rises to 3% for patients over 50 years old. Pathological increase of the intraocular pressure can occur by one of the following mechanisms:

• Blockage of aqueous humor (AH) flow from the posterior chamber to the anterior chamber through the pupil
• Blockage of AH outflow through the trabecular meshwork
• Blockage of AH flow towards the trabecular meshwork
• Increase of the episcleral venous pressure
In the past, several methods have been used to measure the IOP, in vivo or experimentally. These methods are divided into direct and indirect. Direct measurements of IOP are invasive, whereas indirect measurements are non-invasive and demand the use of contact and non-contact tonometers. Contact tonometers (Goldmann, Schiotz, Draeger, Mackay-Marg, etc.) are based on the applanation principle, which states that when a flat surface is pressed against a fluid-filled sphere with a flexible membrane, the internal pressure may be measured from the force exerted on the plane and the area of contact [1]. On the other hand, non-contact tonometers (pneumotonometry), such as the "air puff", direct a puff of air at the cornea in order to flatten a portion of the cornea, as happens with the contact tonometers. The air puff pressure is increased in a linear fashion until it flattens an area of the cornea [2].

The above-mentioned methods are associated with some disadvantages. The former induce a difficulty in cooperation with the patient and can also cause inflammation of the eye, while the latter overestimate the real IOP. However, the most serious drawback, which was the reason for this study, is that IOP fluctuates considerably during the day. All the mentioned methods take a limited number of measurements, only at a specific time of the day, which can lead to an inaccurate diagnosis.

Some first steps towards the continuous measurement of the IOP were made by [3], who presented a minimally invasive approach to IOP monitoring based on a sensing contact lens. The key element of this measurement method was a soft contact lens with an embedded microfabricated strain gauge allowing the measurement of changes in corneal curvature correlated to variations in IOP. A prototype of this sensing contact lens was adapted and tested on enucleated porcine eyes. Later, the same researchers [4] removed the microfabricated strain gauge and replaced it with a wireless sensor mounted on a soft contact lens. In both cases the results were undoubtedly optimistic. However, the fact that the experiments were made on juvenile porcine eyes, and not on a living eye, indicates that several physiological parameters which can affect the sensitivity of the method (perturbations caused by movements of the eye, blinking of the eyelid, etc.) were not taken into account.
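As a small numeric illustration of the applanation principle, internal pressure equals applanating force divided by the flattened area (the Imbert-Fick relation); using the classical Goldmann contact diameter of 3.06 mm:

import math

diameter_m = 3.06e-3                              # Goldmann applanation diameter
area = math.pi * (diameter_m / 2) ** 2            # flattened area, m^2
force_per_mmHg = 133.322 * area                   # N of force per mmHg of IOP
print(f"{force_per_mmHg * 1000:.2f} mN per mmHg")  # ~0.98 mN, i.e. ~0.1 gf

This is roughly 0.1 gram-force per mmHg, which is why that particular contact diameter is convenient in practice.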
Fig. 1 Section view of the human eye model. P denotes the different parts of the model while S denotes the spaces. The direction of the flow is presented with red arrows.
The purpose of this study is to design a novel, laboratory scale, simplified eye model, and to select the appropriate materials for its construction. A measurement device for minimally invasive continuous IOP monitoring over prolonged periods is going to be designed and tested on this model. The simplified eye model will also help in the understanding of the flow phenomena that take place in the anterior and the posterior chamber, while it will offer the ability to compare in vivo measurements with the experimental results.

II. MATERIALS & METHODS
A. Design of the mechanical device

As can be seen in Fig. 1, the model consists of 8 parts, listed here from the outer to the inner parts of the model. First is the eyelid (P3), a structure that simulates the periodic movement of the human eyelid. Then there is the cornea (P2), which is going to be constructed from a material that is used for the construction of contact lenses. The drainage ring (P4) simulates the function of the trabecular meshwork. The cornea will be placed on the drainage ring and those two parts will be adjoined through welding. The cover of the model (P5) will shield the whole structure by preventing the leakage of the water that will flow into the model. In addition, the mechanism that moves the eyelid will be supported on the cover. Last but not least, the inner surface of the cover will hold both the drainage ring and the iris in the proper position. The iris (P6), which will be constructed from an elastic material with a Young's modulus similar to that of the human iris, forms a 6° angle with the horizontal axis in order to simulate the gap between it and the lens; consequently, the posterior chamber is formed. The lens (P7) will be constructed from polymethyl methacrylate (PMMA) and its dimensions will be the same as those of the lens of the human eye. There will be a radial projection so that the lens can be placed on the base of the porous material (P8).
Fig. 2 (a) Assembly model, (b) Lens, (c) Iris, (d) Eyelid, (e) Cornea (section view)
The base, constructed from porous material, will simulate the function of the ciliary body. The internal surface of this base is going to be coated with a thin film of insulating material. The empty space (S2) that is formed is going to be filled with silicone. Between the base of the structure and the outer surface of the porous base an empty space (S1) is formed, which will be filled with water. From the upper segment of the porous base, water will enter the posterior chamber. The base of the structure will be constructed from metal or plastic. The water will enter the model through 4 small holes placed radially on the base. The cover of the model and the base of the model are connected with two screws, and this connection will hold all the parts in their right position, as can be seen in Fig. 2a.

B. Material research and selection

For the construction of the intraocular lens, the eyelid and the cornea of the model, four materials were examined: PMMA, GMA-49, TYRO-97 and HARMONY. These materials were tested through nanoindentations and the load-depth curves of each material were obtained (Fig. 3). For the nanoindentation experiments, a computer-controlled microhardness tester (Fischerscope H100) with a Berkovich indenter was used. Using these nanoindentation results and employing the "SSCUBONI" algorithm, the stress–strain curves were determined. "SSCUBONI" (Stress Strain CUrve Based On NanoIndentation) is a FEM-supported algorithm for the continuous simulation of the nanoindentation, introduced by the Laboratory for Machine Tools and Manufacturing Engineering, Mechanical Engineering Department, Aristotle University of Thessaloniki, enabling the extraction of materials' stress–strain elastic–plastic laws [5]. This algorithm simulates stepwise the physical procedure of the indenter penetration into the examined material and simultaneously determines the stress–strain curve of the tested material [6, 7].
In this procedure the Berkovich pyramid was replaced by equivalent cones. After the FEM-supported simulation of nanoindentation, the Young's modulus and the yield stress of each material were obtained (Fig. 3). All the materials used in the model will have physical properties similar to those of the structures of the human eye; they are presented in Table 1.
IV. DISCUSSION

In this paper, the design and the development of a laboratory scale device for the characterization of the intraocular pressure is presented. The size of the device will be equal to the human eye's size. Most of the parts of the model, except for the eyelid, will have the same dimensions as those of the structures of the human eye that they simulate. The dimensions of the eyelid are not those of the human eyelid: the structure of the eyelid simulates only the periodic movement of the eyelid (blinking), with the help of the appropriate mechanism. The periodic movement of the eyelid exerts a periodic force on the human eye. This periodic force on the cornea causes an instant increase in IOP; ordinary blinking causes a 10 mmHg increase in IOP [8]. IOP returns to its normal levels as soon as this force stops acting. An attempt was made to model only the parts of the eye that affect AH flow. The retina is absent from the model, as it has little or no effect on the AH flow from the posterior chamber to the anterior chamber. Silicone is going to be placed under the lens and inside the porous base, and it will prevent the displacement of the lens to the bottom of the model. The pupil diameter of the human eye can vary from 1 to 9 mm [9]. In this model, the pupil diameter is set steady at 4 mm. In the future, the iris of the model could be replaced with a structure that allows the pupil diameter to vary from 1 to 9 mm. After the construction of the model, a series of experiments is going to be conducted in order to achieve the mass flow rate of AH, which is 2.5 μL/min [10], and a normal IOP of approximately 15 mmHg; normal IOP is in the range of 10-21 mmHg [2]. The model should simulate the real flow of AH, and especially the deformation of the cornea's curvature, in order to design and construct a device for the continuous, non-invasive measurement of IOP over prolonged periods. This device is going to detect changes in the corneal curvature due to variations in intraocular pressure. Previous studies on the correlation between IOP and corneal curvature in humans [11], [12] have shown that a 1 mmHg change in IOP can cause a change of 3 μm in the central corneal radius of curvature. This measurement device will be tested on our model with the prospect of its later use on humans.

Fig. 3 Nanoindentation experiments and stress–strain curves determined by FEM-supported simulation

Table 1 Materials of each item
Item                       Material
Eyelid                     GMA-49
Cornea                     GMA-49
Drainage Ring              Porous material
Cover of the model         Metal or plastic
Iris                       Elastic material
Lens                       PMMA
Base of porous material    Porous material
Base of the model          Metal or plastic
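Given the curvature sensitivity cited above from [11], [12] (about 3 μm of central corneal radius change per mmHg), the readout of such a device reduces, for small excursions, to a linear mapping; a minimal sketch under that linearity assumption:

SENSITIVITY_UM_PER_MMHG = 3.0   # from the correlation reported in [11], [12]

def iop_change_mmHg(delta_radius_um: float) -> float:
    """Map a measured central-curvature change (um) to an IOP change (mmHg)."""
    return delta_radius_um / SENSITIVITY_UM_PER_MMHG

print(iop_change_mmHg(30.0))    # 30 um of curvature change ~ 10 mmHg (a blink)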
V. CONCLUSIONS

This study presents the design of a laboratory scale device of the human eye. The human eye model simulates the fundamental flow parameters that govern the flow inside the anterior and the posterior chamber of the human eye. The ultimate goal of the current study has been the design and development of a minimally invasive device for the continuous measurement of the IOP during the course of the normal daylight activities of humans who suffer from early stages of glaucoma. The human eye model enables the testing of various boundary conditions of ocular aqueous humor flow used in the simulation of the flow conditions met in healthy individuals as well as in individuals suffering from glaucoma. Moreover, a variety of realistic fluids simulating the biological fluids can be tested in the laboratory scale device. Furthermore, future work will include computational simulations aiming to compare the experimental results with the numerical ones, as well as to visualize the flow field in the anterior and posterior chambers of the human eye.
NOMENCLATURE

AH: Aqueous Humor
IOP: Intraocular Pressure
PMMA: Common name for poly(methyl methacrylate)
GMA-49: Common name for GM Advance with 49% water content (nature of the material: terpolymer based on glycerol methacrylate)
TYRO-97: Common name for hofocon A, rigid gas permeable spherical, aspheric, toric and bifocal contact lenses for daily wear
HARMONY: Common name for a hybrid contact lens material with a gas permeable centre and a soft skirt
ACKNOWLEDGMENTS

The authors would like to acknowledge the help and support of the "Eyeart" company, and in particular Mr Eleftherios Karageorgiadis, optometrist, for their cooperation and invaluable help in the selection of the appropriate materials for the human eye model. Thanks are also due to Dr Anastasios Kapetanios, MD, and Dr Konstantinos Papastathopoulos, MD, ophthalmologists of the "Henry Dunant" Hospital, Athens, for their continuing support and consultations during the course of the development of the human eye model. The authors would also like to thank Dimitrios Raptis, MD, PhD candidate at the Aristotle University of Thessaloniki, for his cooperation.
REFERENCES

1. Weinreb R N, Brandt J D, Garway-Heath D F, Medeiras F A (2007) Intraocular Pressure. Reports and Consensus Statements of the 4th Global AIGS Consensus Meeting on Intraocular Pressure, Kugler Publications
2. Katuri K C, Asrani S, Ramasubramanian M K (2008) Intraocular pressure monitoring sensors. IEEE Sensors Journal 8(1):13-19
3. Leonardi M, Leuenberger P, Bertrand D, Bertsch A, Renaud P (2004) First steps towards noninvasive intraocular pressure monitoring with a sensing contact lens. Investigative Ophthalmology & Visual Science 45:3113-3117
4. Leonardi M, Pitchon E M, Bertsch A, Renaud P, Mermoud A (2009) Wireless contact lens sensor for intraocular pressure monitoring: assessment on enucleated pig eyes. Acta Ophthalmologica 87(4):433-437
5. Bouzakis K D, Michailidis N, Hadjiyiannis S, Skordaris G, Erkens G (2002) A continuous FEM simulation of the nanoindentation to determine actual indenter tip geometries, material elastoplastic deformation laws and universal hardness. Carl Hanser Verlag, München, Z. Metallkd. 93
6. Michailidis N, Pappa M (2009) Application of strength properties determined by nanoindentations to describe the material response in micro- and macro-indentation. CIRP Annals – Manufacturing Technology 58:511-514
7. Bouzakis K D, Michailidis N (2006) Indenter surface area and hardness determination by means of a FEM-supported simulation of nanoindentation. Thin Solid Films 494:155-160
8. Coleman D J, Trokel S (1969) Direct-recorded intraocular pressure variations in a human subject. Arch Ophthalmol 82(5):637-640
9. Heys J J, Barocas V H, Taravella M J (2001) Modeling passive mechanical interaction between aqueous humor and iris. ASME Journal of Biomechanical Engineering 123(6):540–547
10. McLaren J W, Trocme S D, Relf S, Brubaker R F (1990) Rate of flow of aqueous humor determined from measurements of aqueous flare. Invest Ophthalmol Vis Sci 31(2):339-346
11. Hjortdal J, Jensen P K (1995) In vitro measurement of corneal strain, thickness, and curvature using digital image processing. Acta Ophthalmol Scand 73:5–11
12. Lam A K C, Douthwaite W A (1997) The effect of an artificially elevated intraocular pressure on the central corneal curvature. Ophthalmic Physiol Opt 17:18–24

Author: Alexandra Michailidou
Institute: Department of Mechanical Engineering, Aristotle University of Thessaloniki
Street: University Campus, GR-54124
City: Thessaloniki
Country: Greece
Email: [email protected]
AM-FM Texture Image Analysis in Multiple Sclerosis Brain White Matter Lesions

C.P. Loizou1, V. Murray2, M.S. Pattichis2, M. Pantziaris3, I. Seimenis4, C.S. Pattichis5

1 Intercollege, Department of Computer Science, P.O.Box 51604, CY-3507, Limassol, Cyprus
2 Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, N.M., USA
3 Cyprus Institute of Neurology and Genetics, Nicosia, Cyprus
4 Medical Diagnostic Centre "Ayios Therissos", 2033 Nicosia, Cyprus
5 Department of Computer Science, University of Cyprus, Nicosia, Cyprus
Abstract—In this study we investigate the use of multiscale Amplitude Modulation-Frequency Modulation (AM-FM) methods for analyzing brain white matter lesions associated with multiple sclerosis, imaged with MRI at 0 and at 6-12 months. We use the instantaneous amplitude (IA) and the instantaneous frequency (IF) to assess disease progression. The IA and the IF were calculated in transverse sections of T2-weighted magnetic resonance (MR) images acquired from 38 symptomatic untreated subjects between the first and the second examination scan. The findings suggest that the high-, medium-, and low-frequency scale instantaneous amplitude and frequency can be used to differentiate between normal tissue and lesions at 0 and 6-12 months. Moreover, support vector machine (SVM) models gave satisfactory results for differentiating lesions at 0 months using the medium scale IA and IF components for expanded disability status scale (EDSS) <=2 and EDSS >2. Further work is needed with more subjects to validate the proposed AM-FM analysis.

Keywords—Multiple sclerosis, AM-FM analysis, brain white matter, disease progression

I. INTRODUCTION
Multiple Sclerosis (MS) is a chronic idiopathic disease that results in multiple areas of inflammatory demyelination within the central nervous system. Within individuals, the clinical manifestations are unpredictable, especially when it comes to predicting the development of disability [1]. Some of the correlating factors between MS and disability were investigated in [2]. In [3], disease subgroups were classified based on their MS disease severity. The use of texture characteristics for differentiating between normal and abnormal lesions was proposed in [4]-[8]. In [4], it was shown that texture features can reveal discriminant factors for tissue classification and image segmentation. The classification of active and non-active brain lesions in MS patients from brain MRI was investigated in [6] using texture analysis, where it was shown that active lesions can be identified without frequent gadolinium injections. In [7], the performance of texture analysis and tissue discrimination between MS lesions and normal appearing white matter (related to the patient group) and brain white matter (BWM) was investigated for supporting early
diagnosis in MS. Also, in [8], shape and texture features were computed on 10 subjects for differentiating between normal and abnormal lesions. Significant differences in texture between normal and diseased spinal cord in MS patients were found in [10], as well as a significant correlation between texture features and disability. Our objective here is to investigate the progression of the textural characteristics of MS brain lesions through the use of new multiscale Amplitude Modulation-Frequency Modulation (AM-FM) methods [11], [12] and to compare/combine them with standard texture features. AM-FM models have been used in a variety of applications, including image reconstruction, image retrieval, and video processing such as motion estimation and video analysis [11]. A theoretical framework for understanding the role of multidimensional frequency modulation was reported in [12]. In [13], for carotid plaque ultrasound images, AM-FM texture features were shown to provide better results than classical texture features. We propose to study changes in AM-FM characteristics that can be associated with MS disease progression at 0 and 6-12 months. Here, we use measurements of the instantaneous amplitude (IA) and instantaneous frequency (IF) at different scales to investigate the progression of the disease.

II. MATERIALS AND METHODS
A. Study group and MRI acquisition

In agreement with the Cyprus national bioethics committee rules on clinical trials, thirty-eight subjects (17 male and 21 female), aged 34.1±10.5 years (mean age ± standard deviation), with a clinically isolated syndrome (CIS) of MS and MRI-detectable brain lesions were scanned twice with an interval of 6-12 months. The images covered a field of view of 230 mm at a pixel resolution of 2.226 pixels per mm. The data were processed retrospectively. All subjects were initially untreated and remained untreated between the baseline MRI and the repeat MRI. They were also clinically examined by the MS neurologist (co-author M. Pantziaris) following the MRI and at the end of the study were given an EDSS (expanded disability status scale) score [14] of 2.07±0.75 (mean EDSS ± standard deviation). Additionally,
10 healthy, age-matched (mean±SD: 30.8±7.6) volunteers (4 male and 6 female) were scanned, for image texture analysis of normal BWM. The images used for analysis were obtained using a T2-weighted turbo spin echo pulse sequence (TR=4408 ms, TE=100 ms, echo spacing=10.8 ms). For more details on the MRI protocol and the acquisition parameters, we refer to [8], [10].

B. Manual delineations and visual perception

All detectable lesions were identified and segmented by an experienced MS neurologist and confirmed by a radiologist for the 38 subjects studied. The manual delineation procedure and the visual perception evaluation can be found in [8] and [9]. Similarly, normal BWM areas, cerebrospinal fluid (CSF) and air from the sinuses were also segmented from the 10 healthy subjects.

C. Interscan intensity normalization

A normalization algorithm was used to match the image brightness between the first (baseline) and the follow-up images (see [8] for details). The original image histogram was stretched and shifted in order to cover all the gray scale levels in the image, as follows:
f(x,y) = \frac{g_{HIR} - g_{LIR}}{g_{max} - g_{min}} \left( g(x,y) - g_{min} \right) + g_{LIR}    (1)
If the original histogram of the initial image g(x,y) starts at g_min and extends up to g_max brightness levels, then we can scale the image so that the pixels in the new image, f(x,y), lie between a minimum level (g_LIR) and a maximum level (g_HIR). This is done by scaling the intensity levels according to (1). We first quantified global signal characteristics by determining the average high (cerebrospinal fluid, g_max) and low (air from the sinuses, g_min) intensity values of the brain.
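A direct transcription of Eq. (1) in numpy form, assuming a float-valued image and g_min/g_max taken from air and CSF as described:

import numpy as np

def normalize(g, g_min, g_max, g_lir=0.0, g_hir=255.0):
    """Linearly map intensities from [g_min, g_max] to [g_lir, g_hir], per Eq. (1)."""
    return (g_hir - g_lir) / (g_max - g_min) * (g - g_min) + g_lir

img = np.array([[40.0, 120.0], [200.0, 260.0]])
print(normalize(img, g_min=40.0, g_max=260.0))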
D. Amplitude-Modulation Frequency-Modulation (AM-FM) methods

Over each segmented lesion we compute an AM-FM representation given by [11], [15]:

I(k_1, k_2) \approx \sum_{n=1}^{M} a_n(k_1, k_2) \cos \varphi_n(k_1, k_2),    (2)

where n = 1, 2, …, M denote different scales, a_n denote the slowly-varying instantaneous amplitude (IA) functions and \varphi_n denote the instantaneous phase (IP) functions. Here, the \cos \varphi_n components capture fast-changing FM texture components. The IA can be used to quantify the contributions from each component. The instantaneous frequency functions are defined in terms of the gradient of the phase. AM-FM demodulation is applied over a dyadic filterbank after the image is filtered through an extended 2-D Hilbert filter. Let I_{AS} = I + jH_{2D}\{I\}, where H_{2D} denotes the 2-D Hilbert operator. Also, let f_{AS} denote the output of one of the band-pass filters. We estimate the IA and IP using [15], [16]:

a(k_1, k_2) = |f_{AS}(k_1, k_2)|    (3)

and

\varphi(k_1, k_2) = \arctan\left( \frac{\mathrm{imag}(f_{AS}(k_1, k_2))}{\mathrm{real}(f_{AS}(k_1, k_2))} \right).    (4)

The IF is computed using a variable spacing, local linear phase (VS-LLP) method, as described in [15], [16]:

\frac{\partial}{\partial k_1} \varphi(k_1, k_2) \approx \frac{1}{n_1} \cos^{-1}\left( \frac{g(k_1 + n_1, k_2) + g(k_1 - n_1, k_2)}{2\, g(k_1, k_2)} \right),

where g(k_1, k_2) = f_{AS}(k_1, k_2) / |f_{AS}(k_1, k_2)|, and similarly for the second component of the instantaneous frequency. In this paper we apply a dyadic filterbank using low, medium, and high frequency scale bands.
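A simplified one-dimensional sketch of AM-FM demodulation via the analytic signal (the paper's 2-D extended Hilbert filter, dyadic filterbank and VS-LLP estimator are more elaborate; this only illustrates how IA and IF arise):

import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * 50 * t)  # AM at 2 Hz on a 50 Hz carrier

z = hilbert(x)                               # analytic signal x + jH{x}
ia = np.abs(z)                               # instantaneous amplitude
phase = np.unwrap(np.angle(z))
if_hz = np.diff(phase) / (2 * np.pi) * fs    # instantaneous frequency, ~50 Hz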
Fig. 1 Box plots for the medium IA (MIA) for the normal tissue (NIA0, NIA6), normal appearing white matter (NWMIA0, NWMIA6) and lesions (LIA0, LIA6) for 0 and 6-12 months, respectively. Inter-quartile range (IQR) values are shown above the box plots. In each plot we display the median, lower, and upper quartiles and confidence interval around the median. Straight lines connect the nearest observations within 1.5 of the Inter-Quartile Range (IQR) of the lower and upper quartiles. Unfilled circles indicate possible outliers with values beyond the ends of the 1.5 x IQR.
For each lesion, for each frequency scale band, we compute 32-bin histograms of the IA, the IF magnitude (|IF|) and the IF angle.
E. Statistical Analysis

The Wilcoxon rank sum test was used in order to identify whether, for each set of measurements, a significant difference (S) or not (NS) exists between the extracted AM-FM texture features, with a confidence level of 95%. For significant differences, we require p<0.05.

F. Classification and Support Vector Machines Analysis

Classification analysis was carried out to classify lesions at 0 months into two classes: (i) lesions with EDSS<=2, and (ii) lesions with EDSS>2. We performed the classification using the Support Vector Machines (SVM) classifier and the leave-one-out cross-validation method. SVM classification is based on the construction of hyperplanes able to separate the input data into two classes. We use a quadratic kernel [17] to solve the non-linear classification problem. We performed the classification considering the medium scale component. For this scale, we computed the results in terms of: a) IA, b) |IF|, c) IA and |IF|, and d) IA, |IF| and IF angle.
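A minimal sketch of this protocol using scikit-learn as a stand-in implementation (the feature matrix and labels here are random placeholders; the actual features are the 32-bin medium-scale histograms):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(38, 32))        # 38 lesion feature vectors (placeholder)
y = rng.integers(0, 2, size=38)      # EDSS<=2 vs EDSS>2 labels (placeholder)

clf = SVC(kernel="poly", degree=2)   # quadratic kernel
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
sensitivity = np.mean(y_pred[y == 1] == 1)
specificity = np.mean(y_pred[y == 0] == 0)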
III. RESULTS

In Fig. 1 we present the medium frequency components of the IA for the normal tissue, the NAWM, and the lesions, at 0 and 6-12 months respectively. An increase of the IA is demonstrated for both the NAWM and the lesions between 0 and 6-12 months. Table 1 presents statistical comparisons between lesions, NAWM, and normal tissue, collected at 0 and 6-12 months. Our primary goal here is to detect significant changes in the lesions that are also associated with the advancement of the disease. The results in Table 1 indicate that several AM-FM scales can be used to differentiate between early and advanced disease stages and between lesions and normal tissue. Here, the more advanced disease stages are characterized by the 6-12 months scans, the earlier disease stages by the 0 months scans. As an example, we discuss the case of comparing lesions and normal tissue after 6 months, shown in the first three rows of Table 1. Here, the low-frequency, medium-frequency and/or high-frequency IA values can be used to differentiate between lesions and normal tissue. Similarly, we can alternatively use the low-frequency or medium-frequency IF to differentiate between lesions and normal tissue. The combination of these findings suggests that AM-FM features can be used to reliably detect advanced progression of the disease.
Table 1 Statistical analysis for the low, medium, and high IA and IF (IA/IF) components based on the Wilcoxon rank sum test at p<0.05. A significant difference is depicted with the name of the component, whereas "-" depicts no significant difference.

                  Normal Tissue               NAWM
Lesions           0 months      6 months      0 months      6 months
0 months          LIA/LIF       LIA/LIF       LIA/LIF       LIA/LIF
                  MIA/MIF       MIA/MIF       MIA/MIF       MIA/MIF
                  HIA/HIF       HIA/-         HIA/HIF       HIA/-
6 months          LIA/LIF       LIA/LIF       LIA/LIF       LIA/LIF
                  MIA/MIF       MIA/MIF       MIA/MIF       MIA/MIF
                  HIA/HIF       HIA/-         HIA/HIF       HIA/-
Table 2 Classification results using the SVM classifier in terms of Sensitivity (Sen.), Specificity (Spe.) and Correct Rate (CR) for the medium scale component for differentiating lesions at 0 months for EDSS<=2 and EDSS>2

Feature set   MIA    M|IF|   MIA & M|IF|   MIA & M|IF| & MIF angle
Sen.          0.79   0.64    0.71          0.79
Spe.          0.57   0.52    0.52          0.57
CR            0.66   0.57    0.60          0.66
Furthermore, Table 2 presents the classification results for classifying lesions at 0 months as EDSS<=2 or EDSS>2, in terms of Sensitivity (Sen.), Specificity (Spe.) and Correct Rate (CR, the percentage of lesions correctly classified). Table 2 shows that there are certain feature combinations for which good results were obtained for all the performance criteria used. These are: i) using the IA histograms of the medium scale, and ii) using the IA & |IF| & IF angle of the medium scale.

IV. CONCLUDING REMARKS
The objective of this study was to investigate whether changes in AM-FM characteristics can be associated with MS disease progression. AM-FM features were extracted and investigated, based on statistical measures and univariate statistical analysis, from the manually segmented MS lesions of 38 subjects with CIS of MS, and from NAWM areas, in an attempt to quantify the pathological changes that occur in MS. All subjects were scanned twice with an interval of 6-12 months. The results indicate that the high-, medium- and low-frequency IA can be used to differentiate between early and advanced cases for the lesions. Similar findings were also obtained for the IF component. Furthermore, satisfactory results were obtained using SVM models for classifying the lesions at 0 months as EDSS<=2 or EDSS>2. Texture analysis was also carried out by our group in a
smaller number of subjects [8]. It was found that there was a significant difference between the texture features extracted from the NAWM tissue and the corresponding texture features extracted from the lesions at both 0 and 6-12 months (i.e. standard deviation, contrast, difference variance, difference entropy, inverse difference moment, and sum variance). The images in this study were intensity normalized in order to eliminate the effects of the intensity diversity between images obtained at different time points. Although the diversity in the intensity range across the images and subjects in the database might not interfere with the segmentation process of a single image, it has a major effect when trying to compare between different images and when trying to generate global tissue models for tissue classification [5]. The normalization process proposed in this study uses prior knowledge of the high and low intensity values of the brain, so that the new intensity histogram of the lesion has its maximum peak close to its average gray scale value [18]. It should be noted that AM-FM analysis as presented in this study depends, as noted in [4], on: i) the MR acquisition parameters, ii) the quality assessment of the MRI device, and iii) the methods of image reconstruction. Furthermore, as documented in [4] for texture analysis, an open remaining question concerns the correspondence between texture analysis and histologic parameters, as the voxel resolution is very large compared to histologic structures. Further research work on a larger number of subjects is required to validate the results of this study and to find additional AM-FM features that may provide information for differentiating between normal tissue and MS lesions, as well as for the longitudinal monitoring of these lesions.
ACKNOWLEDGMENT
This work was funded through the project Quantitative and Qualitative Analysis of MRI Brain Images, ΤΠΕ/ΟΡΙΖΟ/0308(ΒΙΕ)/15, 12/2008-12/2010, of the Program for Research and Technological Development 2007-2013, of the Research Promotion Foundation of Cyprus.
16. 17. 18.
REFERENCES

1. Fazekas F, Barkhof F, Filippi M, et al. (1999) The contribution of magnetic resonance imaging to the diagnosis of multiple sclerosis. Neurology 53:448-456
2. Filippi M, Paty DW, Kappos L, Barkhof F, Compston DA, Thompson AJ, Zhao GJ, Wiles CM, McDonald WI, Miller DH (1995) Correlations between changes in disability and T2-weighted brain MRI activity in multiple sclerosis: a follow-up study. Neurology 45:255-260
3. Dehmeshki J, Barker GJ, Tofts PS (2002) Classifications of disease subgroups and correlation with disease severity using magnetic resonance imaging whole-brain histograms: application to magnetisation transfer ratios and multiple sclerosis. IEEE Trans Med Imag 21(4):320-331
4. Herlidou-Meme S, Constans JM, Carsin B, Olivie D, Eliat PA, et al. (2003) MRI texture analysis on texture test objects, normal brain and intracranial tumours. Magn Reson Imag 21:989-993
5. Meier DS, Guttmann CRG (2003) Time-series analysis of MRI intensity patterns in multiple sclerosis. NeuroImage 20:1193-1209
6. Yu O, Mauss Y, Zollner G, Namer IJ, Chambron J (1999) Distinct patterns of active and non-active plaques using texture analysis of brain NMR images in multiple sclerosis patients: preliminary results. Magn Reson Imag 17(9):1261-1267
7. Zhang J, Wang L, Tong L (2007) Feature reduction and texture classification in MRI texture analysis of multiple sclerosis. IEEE/ICME Conf Complex Med Eng, 2007, pp 752-757
8. Loizou CP, Pattichis CS, Seimenis I, Eracleous E, Schizas CN, Pantziaris M (2008) Quantitative analysis of brain white matter lesions in multiple sclerosis subjects: preliminary findings. IEEE Proc Int Conf Inf Techn and Appl in Biomed, ITAB 2008, Shenzhen, China, pp 58-61
9. Loizou CP, Pantziaris M, Seimenis I, Pattichis CS (2009) MRI intensity normalization in brain multiple sclerosis subjects. ITAB 2009, 10th Int Conf on Inform Techn and Applic in Biomed, Larnaca, Cyprus, pp 1-5
10. Mathias JM, Tofts PS, Losseff NA (1999) Texture analysis of spinal cord pathology in multiple sclerosis. Magn Reson Med 42:929-935
11. Murray Herrera VM (2008) AM-FM methods for image and video processing. Ph.D. dissertation, University of New Mexico
12. Pattichis MS, Bovik AC (2007) Analyzing image structure by multidimensional frequency modulation. IEEE Trans Pattern Anal Mach Intellig 29(5):753-766
13. Christodoulou CI, Pattichis CS, Murray V, Pattichis MS, Nicolaides AN (2008) AM-FM representations for the characterization of carotid plaque ultrasound images. MBEC'08, 4th Eur Conf Int Feder Med Biolog Eng, Antwerp, Belgium, pp 1-4
14. Yu O, Mauss Y, Zollner G, Namer IJ, Chambron J (1999) Distinct patterns of active and non-active plaques using texture analysis of brain NMR images in multiple sclerosis patients: preliminary results. Magn Reson Imag 17(9):1261-1267
15. Murray V, Rodriguez P, Pattichis MS (2010) Multi-scale AM-FM demodulation and reconstruction methods with improved accuracy. IEEE Trans Imag Proces, to appear
16. Havlicek JP (1996) AM-FM image models. Ph.D. dissertation, The University of Texas at Austin
17. Cristianini N, Shawe-Taylor J (2000) An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, First Edition. Cambridge University Press, Cambridge
18. Collewet G, Strzelecki M, Mariette F (2004) Influence of MRI acquisition protocols and image intensity normalization methods on texture classification. Magn Reson Imag 22:81-91
Author: Christos P. Loizou
Institute: Intercollege
Street: 2, Ayias Phylaxeos Str., P.O. Box 51604
City: CY-3507, Limassol
Country: Cyprus
Email: [email protected]; [email protected]
Reliable Hysteroscopy Color Imaging

I. Constantinou1,2, V. Tanos3, M. Neofytou1,2, and C. Pattichis1

1 University of Cyprus, Department of Computer Science, Nicosia, Cyprus
2 MedTechSol (Medical Technology Solutions)
3 Aretaeion Medical Center, Nicosia, Cyprus
Abstract— The aim of this study was to investigate the reliability of two hysteroscopy cameras, the Circon IP 4 and the Karl Storz HD, in relation to white balance, camera response over time, and color correction. Experimental results show that, for both the Circon and the Karl Storz cameras: (i) for white balancing, either gauze, a white sheet, or the white color checker can be used, (ii) white balance can be carried out at any distance between 0.5 and 3.0 cm, (iii) white balance can be carried out at 1 cm and at any angle between 0 and 45 degrees, (iv) there was no camera color variation over either short (60 min) or long (4 weeks) time intervals, and (v) color correction algorithm 2 gave better results. Most importantly, there was no significant difference between the MSE of the Circon and Storz cameras. The above results will be incorporated in a standardized protocol for texture feature analysis of endoscopic imaging for gynaecological cancer developed by our group, and will enable multi-center quantitative analysis (constrained by the use of the two cameras investigated).

Keywords— Endoscopy imaging, laparoscopy imaging, hysteroscopy imaging, white balance, gamma correction algorithm, color systems, CCD medical camera.
I. INTRODUCTION

Endoscopy is considered to be the gold-standard technique for the diagnosis of intrauterine pathology [1]. The physician guides the telescope, connected to a camera, inside the human body in order to investigate lesions suspicious for cancer [2]. However, apart from the clinical advantages that the endoscopy procedure provides, quantitative endoscopy image analysis remains a big challenge for scientists aiming to develop algorithms that provide early diagnostic results [3], [4]. One of the main difficulties of endoscopy image analysis is that the camera response signal is device-dependent. This affects the output signal of the camera and the physician's diagnosis, because the original colors of the target captured with one device differ from those captured with another medical camera. This problem makes it difficult to exchange or to compare similar endoscopy images of the same pathology extracted from different medical cameras (that might be at different medical centers) [5], [6]. The aim of this study is to investigate the reliability of the hysteroscopy color imaging of two cameras with respect to: (i) white balance, (ii) camera response over
time, and (iii) color correction. This will facilitate the development of Computer Aided Diagnostic (CAD) systems, which will make the endoscopy procedure independent of the device (camera) used. The white balance procedure is proposed by the camera manufacturer to calibrate the camera before the examination. This procedure has been exhaustively investigated in digital imaging, and there are already a number of automatic white balance methods in the bibliography, based both on software and on hardware approaches [7]. Even so, in practice the medical camera manufacturers propose that white balancing is carried out manually before the examination, using a white material (usually a piece of gauze) [8]. The effect of color correction has been studied extensively in several research centers. Color correction algorithms have been proposed for different tasks, using different approaches depending on the application [9]. The most popular methods according to [10] are based on the polynomial correction method [11] and on neural network transformations [12]. However, a very limited number of studies have used color correction for hysteroscopy or laparoscopy imaging [6]. In this paper, sections II, III, and IV present the methodology, results, and concluding remarks, respectively.
II. METHODOLOGY

The Circon and Karl Storz HD medical cameras were used [8]. The analog output signal of the camera (PAL, 475 horizontal lines) was digitized at 720x576 pixels, using 24-bit color at 25 frames per second, and was saved in the AVI format on a laptop PC. The Circon and Karl Storz light sources were used in the respective systems. In addition, we used the Minolta luminance meter [13] to measure the light intensity (cd/m2). The x-rite standard mini color checker (24 colors) and the x-rite standard mini white balance checker were used [14]. The testing targets were fixed in a case model, mimicking the real conditions during an endoscopy examination. The Karl Storz telescope, with a 30-degree viewing angle, was used with both cameras and was stabilized in a holding base. The analysis was carried out by cropping ROIs of 25x25 pixels from the color palette. Then, for each ROI, we computed the mean intensity of each channel, using the intensity percentile range from 5% to 95%.
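As an illustration of the ROI measurement just described, a small sketch; the function and parameter names are ours, not from the paper:

```python
import numpy as np

def roi_channel_means(frame, top_left, size=25, p_lo=5, p_hi=95):
    """Mean intensity of each RGB channel of a size x size ROI,
    restricted to the 5-95 intensity percentile range.
    'frame' is an HxWx3 array; 'top_left' is (row, col)."""
    r, c = top_left
    roi = np.asarray(frame, dtype=float)[r:r + size, c:c + size, :]
    means = []
    for ch in range(3):
        vals = roi[:, :, ch].ravel()
        lo, hi = np.percentile(vals, [p_lo, p_hi])
        sel = vals[(vals >= lo) & (vals <= hi)]  # trim extreme pixels
        means.append(sel.mean())
    return means  # [mean_R, mean_G, mean_B]
```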
The Wilcoxon rank sum test [15] was used to identify, for each set of measurements, whether a significant (S) or non-significant (NS) difference exists between the MSE values extracted for the experiments investigated, at a 95% confidence level. For significant differences, we require p < 0.05.

A. White Balance Experiments

1.1 White Balance Procedure vs Different Materials

The white balance procedure was studied using different target materials at a constant distance of 1 cm from the tip of the telescope to the target. The target materials were: (i) white gauze, (ii) white sheet, (iii) the mini white balance color checker, and (iv) the response of the camera without the white balance procedure. The selection was restricted to materials easily available in the operating theatre. The camera output signal was evaluated on 12 of the 24 standard palette colors against the original palette colors using the MAE, MSE, and RMSE measures.

1.2 White Balance vs Distance

The white balance procedure was studied for different target distances (0.5 - 3 cm), keeping the camera angle constant. The experiments were carried out using white gauze, because it is the most common material that the physician uses in the operating theatre for the white balance procedure. The camera response in relation to distance was again evaluated using the standard color palette.

1.3 White Balance vs Camera Angle

In this experiment, the white balance procedure was studied for three different camera angle views: (i) 15, (ii) 30, and (iii) 45 degrees. The material was white gauze and the target distance was 1 cm from the tip of the telescope. After applying the white balance procedure, we captured images of the color palette at a distance of 6 cm.

B. Camera Response over Time

The reliability of the camera response vs time was also investigated. After applying white balance using a white gauze, we measured the response of the camera during 1 hour of continuous operation by capturing the 24 ROIs of the color palette from an 8 cm distance every 5 minutes. The experiments were repeated every week for a month. We measured the mean absolute difference within each channel for every color.
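A sketch of the rank sum comparison used throughout the results, via scipy's implementation of the Wilcoxon rank sum test; the MSE arrays below are placeholders, not values from the study:

```python
from scipy.stats import ranksums

# Per-color MSE values from two experimental conditions
# (e.g., white balance with gauze vs with white cloth); placeholder data.
mse_a = [218, 366, 479, 354, 136, 317]
mse_b = [223, 391, 488, 367, 295, 433]

stat, p = ranksums(mse_a, mse_b)
print("S" if p < 0.05 else "NS", p)  # significant (S) or not (NS) at 95%
```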
C. Color Correction

1.1 Color Correction Algorithm 1

The gamma correction algorithm based on the following equations was used:

Rout = aR * Rin^γR + bR
Gout = aG * Gin^γG + bG          (1)
Bout = aB * Bin^γB + bB

Rin, Gin, and Bin denote the original red, green, and blue color signals, and Rout, Gout, and Bout denote the corrected output color signals. The equations were solved using the non-linear least squares algorithm (see the lsqnonlin function in MATLAB [16]).

1.2 Color Correction Algorithm 2

This model is based on the following equations [17]:

[Rout]   [a11 a12 a13] [Rin]   [k1]
[Gout] = [a21 a22 a23] [Gin] + [k2]          (2)
[Bout]   [a31 a32 a33] [Bin]   [k3]

Rcor = 255 (Rout / 255)^γR
Gcor = 255 (Gout / 255)^γG          (3)
Bcor = 255 (Bout / 255)^γB

In equation (3), [Rout Gout Bout]^T denotes the original input red, green, and blue components of the target image (the output of the linear stage of equation (2)), and [Rcor Gcor Bcor]^T denotes the output (corrected) RGB components. The input signal is first multiplied by the linear matrix A and offset by the constant vector k (equation (2)); the non-linear gamma model is then applied through equation (3). The computation of the parameters was based on the non-linear least squares algorithm (see the lsqnonlin function in MATLAB [16]). We estimate the matrix A, the vector k, and the gamma values of each channel (γR, γG, γB), and the result is the correction of the output signal.
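As a sketch of how equations (2) and (3) can be fitted, the following uses scipy.optimize.least_squares in the role of MATLAB's lsqnonlin; the parameter packing, initialization, and clipping are our assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def model2(params, rgb_in):
    """Equations (2)-(3): 3x3 linear matrix A, offset k, then per-channel
    gamma. rgb_in is an Nx3 array of camera colors; returns Nx3 output."""
    A = params[:9].reshape(3, 3)
    k = params[9:12]
    gamma = params[12:15]
    lin = rgb_in @ A.T + k                 # equation (2)
    lin = np.clip(lin, 1e-6, 255.0)        # keep the power well defined
    return 255.0 * (lin / 255.0) ** gamma  # equation (3)

def fit_model2(rgb_measured, rgb_reference):
    """Minimize residuals between corrected camera colors and the
    reference palette colors (both Nx3 arrays of ROI means)."""
    x0 = np.concatenate([np.eye(3).ravel(), np.zeros(3), np.ones(3)])
    res = least_squares(
        lambda p: (model2(p, rgb_measured) - rgb_reference).ravel(), x0)
    return res.x
```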
III. RESULTS

A. White Balance Experiments

Table 1 tabulates the white balance results for gauze, white cloth, the white color checker, and without white balance. The results showed no significant difference in the MSE error between the different white balance materials; the only exception was the no-white-balance condition for the Circon camera, a difference that was not observed for the Storz camera. Thus, any of the gauze, the white cloth, or the white color checker can be used for white balancing.
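A minimal sketch of the three error measures used throughout the tables, assuming measured and reference are N x 3 arrays of palette colors (columns R, G, B):

```python
import numpy as np

def error_measures(measured, reference):
    """MAE, MSE, and RMSE between measured palette colors and the
    original palette colors, computed per channel."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    mae = np.mean(np.abs(d), axis=0)
    mse = np.mean(d ** 2, axis=0)
    return mae, mse, np.sqrt(mse)
```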
Table 1 Results of the White Balance procedure in relation to different material for 12 colors

                      Gauze                   White Cloth             White Color Checker     Without White Balance
        Channels      R    G    B    Y        R    G    B    Y        R    G    B    Y        R    G    B    Y
Circon  MAE           11   16   18   15       9    14   15   13       12   16   17   15       -    18   27   22
        MSE           218  366  479  354      136  317  433  295      223  391  488  367      -    451  1338 759
        RMSE          15   19   22   19       12   18   21   17       15   20   22   19       -    21   37   27
Storz   MAE           22   31   27   27       25   30   30   28       26   34   32   31       -    37   35   34
        MSE           636  1180 1294 1037     867  1160 1239 1089     889  1390 1517 1265     -    1675 1793 1544
        RMSE          25   34   36   32       29   34   35   33       30   37   39   35       -    41   42   39

(R-channel values for the without-white-balance condition were not recoverable.)
Table 2 Results of the White Balance procedure in relation to distance for 12 colors: MAE, MSE, and RMSE of the R, G, B, and Y channels for the Circon and Storz cameras at distances of 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 cm (individual cell values not recoverable)

Table 3 White Balance procedure vs viewing angle for 12 colors: MAE, MSE, and RMSE of the R, G, B, and Y channels for the Circon and Storz cameras at 15, 30, and 45 degrees (individual cell values not recoverable)
Table 2 tabulates the results of the white balance for 12 colors in relation to distance. The results showed that there was no significant difference in the MSE error between the 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 cm distances. Thus, any distance in the range of 0.5 to 3.0 cm can be used. Table 3 tabulates the results of the white balance for 12 colors in relation to angle, at a distance of 1 cm. The results showed that there was no significant difference in the MSE error between 15, 30, and 45 degrees. Thus, any angle in the range of 0 to 45 degrees can be used at 1 cm.

B. Camera Response over Time
Table 4 Camera output signal vs time

                      Short (60 min)          Long (4 weeks)
        Channel       R     G     B           R     G     B
Circon  Min           0.9   0.5   0.9         0.9   0.6   1.0
        Max           2.2   3.1   3.6         2.2   3.1   3.7
        Mean          1.4   1.2   1.6         1.4   1.3   1.7
        Std           0.3   0.5   0.6         0.3   0.6   0.6
Storz   Min           1.1   1.0   1.4         1.0   0.9   1.4
        Max           5.8   3.5   6.2         5.7   3.5   6.1
        Mean          2.6   2.1   2.6         2.5   2.1   2.5
        Std           1.1   0.7   1.4         1.1   0.6   1.4
Table 4 tabulates the results of the camera response over time. It is clearly shown that there is no difference in the absolute difference statistics of the color variation over time (for both short (60 min) and long (4 weeks) time intervals) for either camera. The time variability experiments showed that the camera response is not sensitive to short and long intervals. This permits the color correction calibration to be carried out only once, under stable conditions.

C. Color Correction Results

Table 5 illustrates that color correction based on algorithm 2 significantly reduced the error measures for both cameras. Figure 1 shows a significant improvement in the image color quality following color correction using algorithm 2 for both cameras. Furthermore, there was no significant difference between the MSE of the Circon and Karl Storz cameras, or between the camera color response and the palette original colors.

Table 5 Color correction results for 24 colors

                Without Correction             Model 1                        Model 2
Channels        R      G      B      Y         R      G      B      Y        R      G      B      Y
Circon  MAE     16.9   10.8   25.7   18        16.5   6.9    20.1   15       7.9    6.8    9.9    8
        MSE     464.9  178    1160   601       443.2  73     710    409      105.9  63.4   183.7  118
        RMSE    21.5   13.3   34.0   23        21.1   8.5    26.6   19       10.3   7.9    13.5   11
Storz   MAE     28.5   15.8   25.9   23        21     10.5   20     17       7.8    5.2    3.8    6
        MSE     1241.6 402.6  964.2  869       792.1  164.9  613.6  524      96.5   40.5   20.2   52
        RMSE    35.2   20.1   31.1   29        28.1   12.8   24.8   22       9.8    6.4    4.5    7

Fig. 1 (a) Circon original output image, (b) Circon output image after gamma correction, (c) Karl Storz original output image, and (d) Karl Storz output image after gamma correction

IV. CONCLUDING REMARKS

The main difficulty of quantitative hysteroscopy image analysis is that the camera response signal is device dependent. The objective of this study was to investigate the reliability of hysteroscopy color imaging of two cameras with respect to: (i) white balance, (ii) camera response over time, and (iii) color correction. The results of this study demonstrate the following for both the Circon and Karl Storz cameras:

• For white balancing, either gauze, a white sheet, or the white color checker can be used.
• White balance can be carried out at any distance between 0.5 and 3.0 cm.
• White balance can be carried out at 1 cm and at any angle between 0 and 45 degrees.
• There was no camera color variation over both short (60 min) and long (4 weeks) time intervals.
• Color correction algorithm 2 gave better results. Most importantly, there was no significant difference between the MSE of the Circon and Storz cameras.

The above results will be incorporated in a standardized protocol for texture feature analysis of endoscopic imaging for gynaecological cancer developed by our group [6], [18]. It is important to note that gamma color correction needs to be carried out only about once a month, and there was no significant difference in the color response between the two cameras (Circon and Karl Storz). The latter finding will enable multi-center quantitative analysis (constrained by the use of the two cameras investigated). The proposed quantitative hysteroscopy image analysis system will be further evaluated on more subjects collected from different medical centers.
ACKNOWLEDGMENT

This study was partially funded by the Ministry of Commerce and Industry, Cyprus, Call for High Technology and Innovative Incubator Companies, project entitled: “A Computer Aided Diagnostic tool in Gynaecological Endoscopy (CAD_Gen)”, May 2008 – April 2010.
REFERENCES

[1] Fayez J.A., Vogel M.F. (1991) Comparison of different treatment methods of endometriomas by laparoscopy. Obstet. Gynecol., Vol. 78, pp. 660-665.
[2] Jansen F.W., Vredevoogd C.B., et al. (2000) Complications of hysteroscopy: a prospective, multicenter study. Obstet. Gynecol., Vol. 92, Issue 2, pp. 266-270.
[3] Srivastava S., Rodríguez J.J., et al. (2008) Computer-aided identification of ovarian cancer in confocal microendoscope images. Journal of Biomedical Optics 13(2), 024021.
[4] Karkanis S.A., Iakovidis D.K., et al. (2003) Computer-aided tumor detection in endoscopic video using color wavelet features. IEEE Transactions on Information Technology in Biomedicine 7(3):141-152.
[5] Neophytou M.S., Pattichis C.S., et al. (2005) The effect of color correction of endoscopy images for quantitative analysis in endometrium. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 1-4 September 2005, Shanghai, China, pp. 3336-3339.
[6] Neofytou M.S., Pattichis C.S., et al. (2007) A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer. BioMedical Engineering OnLine 6:44, http://www.biomedical-engineering-online.com/content/6/1/44.
[7] Bianco S., Gasparini F., et al. (2007) Combining strategies for white balance. In: Digital Photography III (Martin R.A., DiCarlo J.M., Sampat N., eds.), Proceedings of the SPIE, Volume 6502, pp. 65020D.
[8] Karl Storz at http://www.karlstorz.com/
[9] Lee H.-C. (2005) Introduction to Color Imaging Science. Cambridge University Press.
[10] Shao F., Peng Z., et al. (2008) Color correction for multi-view video based on background segmentation and dominant color extraction. WSEAS Transactions on Computers, Issue 11, Volume 7, ISSN 1109-2750, November 2008.
[11] Vander Haeghen Y., Naeyaert J.M.A., et al. (2000) An imaging system with calibrated color image acquisition for use in dermatology. IEEE Trans. Med. Imag. 19(7):722-730.
[12] Cheung V., Westland S., et al. (2004) A comparative study of the characterization of color cameras using neural networks and polynomial transforms. Coloration Technology, Vol. 120, pp. 19-25.
[13] Konica Minolta at http://www.konicaminolta.eu/
[14] X-Rite at http://www.xrite.com
[15] Corder G.W., Foreman D.I. (2009) Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach. Wiley, New Jersey.
[16] MathWorks at http://www.mathworks.com
[17] Grossberg M.D., Nayar S.K. (2004) Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell. 26(10):1272-1282.
[18] Constantinou I.P., Koumourou C.A., et al. (2009) An integrated CAD system for supporting the diagnosis of endometrium cancer. 9th International Conference on Information Technology and Applications in Biomedicine, ITAB 2009, November 5-7, 2009, Larnaca, Cyprus.
Comparison of Methods of Measurement of Head Position in Neurological Practice

P. Kutilek, J. Charfreitag, J. Hozman

Czech Technical University in Prague, Faculty of Biomedical Engineering, Czech Republic

Abstract - In this paper we describe advanced methods for precise head posture measurement and explain our method of non-invasive head position measurement by cameras and accelerometers. The methods are designed for use in neurology to discover relationships between some neurological disorders (such as disorders of the vestibular system) and postural head alignment. The main goal of this study is to compare the possibilities of the methods. The results are presented for rotation and flexion of the head. It was experimentally checked that the accuracy of the methods is in the order of tenths of degrees; therefore they satisfy the general physicians' requirement for a measurement accuracy of about 1-2°.
Keywords - head posture, neurology, camera calibration, head tracking.
I. INTRODUCTION
Head posture can be influenced negatively by many diseases of the nervous, visual, and vestibular systems. These can be divided into several groups:
- Cervical blockades and diseases of the cervical spine often cause abnormalities of the head position over a wide range.
- "Movement disorders" from the group of dystonias, for which an abnormal position of the affected body segments is typical (see Fig. 1).
- Paralyses of the eye muscles also often cause a compensatory position of the head, where the insufficient function is compensated by a tilt of the head in the direction of the affected muscle.
In many cases, the abnormalities of the head position can be small and hard to observe. In clinical practice, it has so far been possible to quantify only those deviations that are well visible. Despite the fact that an accurate method for measuring the head postural alignment could contribute to the diagnosis of vestibular and some other disorders, this issue has not been systematically studied. This article explains methods of head position measurement. The methods are designed for use in neurology to discover relationships between some neurological disorders and postural head alignment. The main goal of this study is to compare the possibilities of the methods and their validation tests. Above all, we compared the application of a system based on cameras and a system based on accelerometers for the measurement of head position. It was experimentally checked that the accuracy of the methods satisfies the general physicians' requirement for a measurement accuracy of about 1-2°.
Fig. 1 Head position abnormalities: a) Torticollis, b) Laterocollis, c) Retrocollis, d) Anterocollis

II. METHODS
At the present time, the use of an orthopedic goniometer is the standard way to evaluate angles simply and rapidly in clinical practice. But there are some limitations, especially in the case of head posture measurement: because of the combination of three components of movement, it is problematic to use only the goniometer. In Ferrario, V.F. et al., 1995 [1], a new method based on television technology was developed as a faster method than conventional photographic analysis. The subject's body and face were identified by 12 points. All subjects were pictured using a standardized technique for frontal views of the total body and lateral views of the neck and face. After 20 seconds of standing, two 2-second films were taken for each subject. Based on the image analysis program, the specified angles were calculated after the digitization of the recorded films. In Galardi, G. et al., 2003 [2], an objective method to measure posture and voluntary movements in patients with cervical dystonia using Fastrack was developed. The Fastrack is an electromagnetic system consisting of a stationary transmitter station and four sensors. The head position in space was reconstructed (based on the sensor signals) and observed in the axial, sagittal, and coronal planes.
Hozman, J. et al., 2004 [3] proposed a new method based on the application of three digital cameras with stands and appropriate image processing software. The new method of non-invasive head position measurement was designed for use in neurology to discover relationships between some neurological disorders and postural alignment. The objective was to develop a technique for precise head posture measurement or, in other words, for the measurement of the native position of the head in 3D space. The technique was supposed to determine the differences between the anatomical coordinate system and the physical coordinate system with an accuracy of one to two degrees in the case of tilt and rotation. Pictures of the head, marked on the tragus and the outer eye canthus, are taken simultaneously by three digital cameras aligned by a laser beam. A similar technique had not previously been developed in a form that could be widely and easily used in neurological clinical practice. Head position was measured with a precision [3] of 0.5° in three planes (rotation-yaw, flexion-pitch and inclination-roll).
In this way we can avoid influencing the patient during the measurement of the inclination (roll), flexion (pitch), and rotation (yaw) of the head. This is a very important advantage for medical doctors, because they can apply various examinations that need open space in front of the face.

Fig. 2 Anatomical horizontal (a) and anatomical axis (b)

Fig. 3 Flowchart of clinical measurement: after the start of a measurement, if it is the first measurement, pictures for the correction calculation are captured and the correction values are calculated; otherwise a picture of the patient's head is captured, the images are analyzed, the inclination, flexion, and rotation are calculated, and the results are displayed and stored in a database
In our most recently designed method [5], two cameras are required for the determination of head positions. The rotation and inclination of the head are evaluated from the difference between the tragus coordinates in the left-profile and right-profile images (Fig. 5). The coordinates of the left and right tragus (Fig. 2) are automatically evaluated by finding the centre of the rounded mark attached to the tragus, using the Hough transform. The images are captured at the same time using two cameras situated on the same optical axis, which is parallel with the frontal plane of the subject. The tilt in the sagittal plane (flexion/extension) is evaluated using a profile photograph: the flexion value is measured relatively, as the inclination of the connecting line between the tragus and the exterior eye corner (Fig. 2).
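An illustrative sketch of the two steps just described (circular-mark detection via the Hough transform and flexion as a line inclination), using OpenCV; all parameter values are assumptions, not those of [5]:

```python
import cv2
import numpy as np

def tragus_center(gray):
    """Locate the rounded tragus mark in a grayscale profile image
    via the circular Hough transform. Parameter values are illustrative."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=100, param2=30,
                               minRadius=3, maxRadius=20)
    if circles is None:
        return None
    x, y, _r = circles[0, 0]  # strongest detection
    return float(x), float(y)

def flexion_angle(tragus_xy, eye_corner_xy):
    """Flexion as the inclination (degrees) of the tragus-to-outer-eye-
    canthus line in the profile image."""
    dx = eye_corner_xy[0] - tragus_xy[0]
    dy = tragus_xy[1] - eye_corner_xy[1]  # image y-axis points downward
    return np.degrees(np.arctan2(dy, dx))
```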
The problems of deviations of the CCD sensors and of the optical axes of the two cameras can be excluded by scanning a correction mark on a transparent mask [5]. In this way we find the differences of the coordinates of this point in both frames; these differences represent the deviations that are used for the correction calculation. At present, a test version uses a software correction based on the MATLAB Camera Calibration Toolbox. This software enables accurate detection of the mutual positions of the optical axes and provides information on their mutual displacement and rotation. The exact values of the displacements and rotations of the optical axes can be added to the calculation of the displacement correction or used to refine the angles. It appears, however, that the angle correction software is time consuming and impractical for medical practice, and for this reason only the correction of the displacement of the optical axes and the CCD sensors is used.

The second main group of designed methods we tested for application in neurological practice uses accelerometers [6]. The headtracker in the eMagin Z800 3DVisor® personal display can measure head position in 3D space. For the acquisition of the head motion we programmed the software FBMI SPH in the C# language, based on the Z800 3DVisor® SDK
2.2. The SW (software) retrieves the position of the head from the built-in headtracker through the USB connection and saves the measured results into a CSV (comma-separated values) file. The first measured values were used as initial, i.e. zero, values and were used as a correction for all subsequent values. Head position was measured with a precision of 1.0° in three planes (rotation, flexion and inclination) [6]. The result is that the accuracy of the method alone is within eighths of a degree over ten measurements. This is the dynamic error of the low-cost headtracker, which has a long stabilization time after the previous measurement.

Table 1 Differences in changes of angles of flexion measured by cameras and accelerometers

Approx. flexion   Angle identified   Change of angle     Angle identified by   Change of angle           Difference in
angle [°]         by cameras [°]     (cameras) [°]       accelerometers [°]    (accelerometers) [°]      changes [°]
0                 13.2               -                   1.2                   -                         -
-5                9.0                4.2                 -2.9                  4.1                       0.1
-10               4.0                5.0                 -8.0                  5.1                       0.1
-15               -0.5               4.5                 -12.6                 4.6                       0.1
-20               -5.3               4.8                 -18.3                 5.7                       0.9

Table 2 Differences in changes of angles of rotation to the right measured by cameras and accelerometers

Approx. rotation  Angle identified   Change of angle     Angle identified by   Change of angle           Difference in
angle [°]         by cameras [°]     (cameras) [°]       accelerometers [°]    (accelerometers) [°]      changes [°]
0                 -1.0               -                   -0.6                  -                         -
-5                -5.4               4.4                 -6.9                  6.3                       1.9
-10               -9.7               4.3                 -11.5                 4.6                       0.3
-15               -14.6              4.9                 -16.0                 4.5                       0.4
-20               -18.9              4.3                 -21.2                 5.2                       0.9
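As a quick arithmetic check of how the table columns relate (the "change of angle" columns are steps between successive head positions, and the last column is the disagreement between the two systems), using the Table 1 flexion data:

```python
import numpy as np

# Flexion angles identified by the cameras and by the accelerometers
# (Table 1, decimal commas replaced by decimal points).
cam = np.array([13.2, 9.0, 4.0, -0.5, -5.3])
acc = np.array([1.2, -2.9, -8.0, -12.6, -18.3])

d_cam = -np.diff(cam)  # 4.2, 5.0, 4.5, 4.8
d_acc = -np.diff(acc)  # 4.1, 5.1, 4.6, 5.7
print(np.round(np.abs(d_cam - d_acc), 1))  # 0.1, 0.1, 0.1, 0.9
```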
Fig. 4 The tested eMagin Z800 3DVisor

III. RESULTS
A result of this study is a recommendation to use cameras and accelerometers in neurological practice. The system based on two identical digital cameras is sufficiently accurate for the determination of the inclination, flexion, and rotation of the head in neurological practice. An advantage of the system is that it is easy to determine the angles between the anatomical horizontal and axis and the physical coordinate system defined by the camera positions. The cameras measure with a precision of 0.05° ideally, if there are no large abnormalities of the head position [4]. A disadvantage of the camera system is the increasing error of the detected angle with increasing abnormality of the head position / measured angles. The reason is a large deviation of the head position from the optimal location in the middle distance between the two cameras, which causes large differences in the distances between the CCD sensors (cameras) and the measured head [5]. The special glasses (headtracker) do not allow determining the angles between the anatomical horizontal and axis and the physical coordinate system. Nevertheless, this measurement method is a portable and faster way to measure the motion of the head.
Fig. 5 User interface of the developed software during the comparative measurements with accelerometers
A further disadvantage of the camera system is the increasing error of the detected angle with increasing motion of the measured subject. We cannot measure the position/angles of fast-moving patients, for example in the case of a tremor.
On the contrary, the advantage of the accelerometer system is that it measures all angles/head positions with a constant error. This means that we can measure large abnormalities of the head position as well. It was experimentally checked that the accuracy of the methods satisfies the general physicians' requirement for a measurement accuracy of about 1-2°. The cameras measure with a precision of 0.05° ideally, and the headtracker with a precision of 1.0° in three planes. A result of the study is a recommendation to use a headtracker with a smaller dynamic error (less than 0.3°/s) for the measurement of head position; the overall accuracy of the method could in this way be markedly increased. The above description of both systems shows the possibility of combining the advantages of cameras and accelerometers. The most useful modification of these systems could allow the clinical measurement of head and shoulder posture [7]. Generally, the combination of the identification of the positions of specific anatomical markers by cameras with motion measurement by accelerometers can provide many uses, not only in medicine.

IV. CONCLUSIONS
We designed two systems/methods for the evaluation of head motion and position in neurological practice. Both systems are cheap in comparison with sophisticated systems that use accelerometers, magnetometers, and gyroscopes. The described ways of measuring head posture could be applied in other engineering, medical, and scientific areas as well.

ACKNOWLEDGMENT

This is the work of the Department of Biomedical Technology, Faculty of Biomedical Engineering, Czech Technical University in Prague, in the frame of the research program No. MSM 6840770012 "Transdisciplinary Biomedical Engineering Research II" of the Czech Technical University, sponsored by the Ministry of Education, Youth and Sports of the Czech Republic.

REFERENCES
1. Ferrario V.F., Sforza C., Germann D., Dalloca L.L., Miani A. (1994) Head posture and cephalometric analyses: an integrated photographic/radiographic technique. American Journal of Orthodontics & Dentofacial Orthopedics, Vol. 106, No. 3 (Sep), pp. 257-264.
2. Ferrario V.F., Sforza C., Tartaglia G., Barbini E., Michielon G. (1995) New television technique for natural head and body posture analysis. Cranio, Vol. 13, No. 4 (Oct), pp. 247-255.
3. Cerny R., Strohm K., Hozman J., Stoklasa J., Sturm D. (2006) Head in space - noninvasive measurement of head posture. In: 11th Danube Symposium 2006 - International Otorhinolaryngological Congress, Bled, Slovenia, September 27-30, 2006. Medimond S.r.l., Bologna, Italy, pp. 39-42.
4. Hozman J., Zanchi V., Cerny R., Marsalek P., Szabo Z. (2007) Precise advanced head posture measurement. Proceedings of the 3rd WSEAS International Conference on Remote Sensing (REMOTE'07), WSEAS Press, ISBN 978-960-6766-17-6, pp. 18-26.
5. Kutilek P., Hozman J. (2009) Non-contact method for measurement of head posture by two cameras and calibration means. In: Proceedings of the 8th Czech-Slovak Conference Trends in Biomedical Engineering. Bratislava: STU, ISBN 978-80-227-3105-8, pp. 51-54.
6. Charfreitag J., Hozman J., Cerny R. (2009) Specialized glasses - projection displays for neurology investigation. 4th European Conference of the International Federation for Medical and Biological Engineering, Springer Berlin Heidelberg, ISSN 1680-0737, pp. 97-101.
7. Harrison A.L., Wojtowicz G. (1996) Clinical measurement of head and shoulder posture variables. The Journal of Orthopaedic & Sports Physical Therapy (JOSPT), Vol. 23, No. 6 (June), pp. 353-361.
Authors: Patrik Kutilek, Jaroslav Charfreitag
Institute: FBMI CTU in Prague
Street: Sq. Sitna 3105
City: Kladno
Country: Czech Republic
Email: [email protected]
The Nanoporous Al2O3 Material Used for the Enzyme Entrapping in a Glucose Biosensor

C. Ravariu1, A. Popescu2, C. Podaru2, E. Manea2, and F. Babarada1

1 Faculty of Electronics, BioNEC Group, Politehnica University of Bucharest, Romania
2 Institute of Microtechnology IMT, Bucharest, Romania
Abstract— This paper presents the enhanced performance of a glucose biosensor constructed with a Si/SiO2/Si3N4/Al2O3 dielectric multilayer. While the Si/SiO2 system is present in all silicon integrated sensors, biosensor processing requires special attention to the intermediate layer responsible for entrapping the biological receptors. Initial experience dictates the use of a Si3N4 layer over the SiO2 layer, mainly to stop the diffusion of Na+ and K+ ions from the bio-environment toward the silicon transducer. These results are reflected in the traced C-V curve and a minimum leakage current. The nitride, together with a nanometric top Al2O3 layer grown by anodization, helps toward better enzyme entrapping, maintains the GOD active centers, and is in agreement with the planar technology requirements. An original Al2O3 deposition technique is presented. The final multilayer structure was characterized as a capacitor by C-V-metry, ellipsometry, and impedance measurements.
I. INTRODUCTION For the biosensors fabrications as integrated microelectronics devices is known the microcell technique with 3 electrodes: a single Work Electrode (WE), a Counter Electrode (CE) and a Reference Electrode (RE), [1, 2, 3]. In this paper, a glucose biosensor 4 electrodes – a Counter Electrode (CE), two Work Electrodes (WE) and a Reference Electrode (RE) – was considered. One of the difficulties in the integrated biosensor manufacturing is related to the Reference Electrode allocation. This electrode needs standard materials, unusual for microelectronics devices. Some authors preferred to avoid this RE electrode, constructing a separate ISFET with those specials metals, [3]. We propose here a solution of integration on the same chip of all the electrodes. Another delicate problem encountered in the biosensor manufacturing is related to a good immobilization of the biological receptor element onto an inorganic surface [4], maintaining in the same time its catalytic properties without alleviations due the immobilization technique. A key enzyme, GOD in this case, must be entrapped on the silicon oxide surface of the transducer element manufactured in the planar Si-technology, [5]. Many techniques were proposed and all of
them take into account some intermediates and complicates buffer layers, like Fe3O4-SiO2 nanoparticles, [6]. In this work we propose a simple solution for the GOD immobilization, using a cheap material, Al2O3 “ γ ” type, achieved by anodization. This nano-porous material captures inside him the biological enzyme due to the capillary effect exerted by pores and adhesion forces, of course, after some technological processes. The present glucose biosensor design and conception is based on the amperometric transducer principle, which detects the blood glucose by the current variations due to its metabolism products during an enzyme assisted reaction. The electro-catalytic oxidation of glucose produces glucono-δ-lactone acid and H2O2 in the Glucose-oxidase (GOD) enzyme presence, [7]. When a 0.7V potential is applied versus the Reference Electrode (RE), the oxygenated water is discomposed in water and Oxygen – O- ions being easily to be detected with some Ions Sensitive Electrodes ISE. Therefore, the glucose concentration into the blood stream is a function of the current, collected by a special transducer element, designed in this scope.
II. THE KEY TECHNOLOGICAL STEPS FOR THE BIOSENSOR

The biosensor manufacturing started from a silicon substrate with <100> orientation and an initial resistivity of 40 Ωcm. After a standard chemical cleaning process, the wafers were annealed at 1160 °C for 80 minutes in a mixed atmosphere of wet oxygen and Cl2. The thermally grown SiO2 film ensures a good electrical isolation among the electrodes and between the electrodes and the substrate. Because in the bio-environment there is a risk of SiO2 contamination with alkaline ions such as Na+ and K+, which can lead to ionic conduction through the oxide pores, an additional Si3N4 layer is deposited onto the oxide. This layer plays the role of a barrier against the alkaline ions from the test solutions and improves the electrical isolation between the electrodes and the substrate, due to its intrinsic dielectric properties. Over the Si/SiO2/Si3N4 sandwich, an additional Al2O3 layer with adsorbent properties is placed. The technology of the aluminum oxide deposition consists in the anodic oxidation
of an Al layer. This film was deposited in vacuum, at a low deposition rate, with an 800 Å initial thickness. The "γ" catalytic type of Al2O3 was achieved. The anodic aluminum oxide, γ-type, has a double advantage for the glucose biosensor: high adsorbent properties for the GOD enzyme immobilization and good catalytic properties for the electro-oxidation of some organic substances. For the enzyme membrane preparation, the glucose oxidase (GOD) was immobilized in serum albumin, using a glutaraldehyde solution of 2% concentration as the polymerization agent. The quantities of GOD enzyme entrapped in serum albumin were 50 mg/80 mg. The glucose biosensor, conceived as a Clark microcell, has 4 electrodes: a Counter Electrode (CE), two Work Electrodes (WE) made from TiPt, and a Reference Electrode (RE) formed with TiAg/AgCl (Fig. 1). The structure, designed with two Work Electrodes, allows a differential signal processing for two kinds of membranes deposited onto the cell electrodes:
- a membrane with entrapped enzyme on the first Work Electrode (WE),
- a membrane without entrapped enzyme on the second Work Electrode of the microcell (WE).
The Pt and reference electrodes were manufactured by the standard planar lift-off process. The Pt electrodes were deposited at a low rate of 10 Å/s, with a 2000 Å metal thickness. The reference electrode is carried out by an Ag deposition at 40 Å/s.

Fig. 1 Four-electrode mask used in the glucose biosensor

Both Work Electrodes were designed and deposited on the microcell so that they permit the preliminary characterization of the biological membrane conduction properties on the wafer, using a dedicated test circuit.

III. THE Al2O3 NANOSTRUCTURED LAYER PROCESSING

The proposed multilayer dielectric structure Si/SiO2/Si3N4/Al2O3 used in this biosensor was conceived as a micro-electronic device that ensures a transducer function with high performance concerning the residual currents and a suitable compatibility between the inorganic Si3N4 layer and the active biological material, the GOD enzyme. The Al2O3 anodic oxide film with a porous and anisotropic structure was produced by an anodic oxidation technology, starting from an aluminum film deposited in high vacuum at a controlled growth rate of 1-2 Å/s in order to ensure the best adherence to the substrate. The initial metal thickness is 80 nm. The presence of impurities in the Al film, the deposition rate of the film in vacuum, the pre-processing steps previous to the Al deposition, the anodization potential at constant voltage, the experimentally established optimum current through the electrolyte, the temperature of the anodization bath, and the post-anodization temperature influence the final ordered Al2O3 structure and the pore sizes. The anodization process was accomplished at a constant potential and variable current. The electrolyte was oxalic acid and orthophosphoric acid in the ratio of 0.11 mol/l : 0.4 mol/l. Then the aluminum oxide was thermally processed at T = 420 °C, in an N2 environment, for 30 minutes. The post-anodization annealing occurred at 320 °C, resulting in the final nanostructured Al2O3 (Fig. 2).

Fig. 2 The Al2O3 anodic oxide film with a porous structure visualized by SEM

The finally processed γ-type Al2O3, with adsorbent properties, was analyzed by a series of investigations: ellipsometry, C-V curves, and pico-ammeter measurements.
IV. TEST RESULTS

In this section, the characterization of the multilayer system Si/SiO2/Si3N4/Al2O3 was performed on capacitors with the same sizes as those of the amperometric glucose biosensor. The ellipsometric analysis in the IR range concerns the determination of the optical constants of the material for two cases: Al2O3 onto SiO2 and Al2O3 onto Si3N4 (Fig. 3). The optimum solution occurs for the Al2O3 onto Si3N4 case. The capacitive test structure was additionally annealed in an inert atmosphere at 450 °C, in correlation with the post-anodization process for Al2O3. The C-V curves were recorded before and after the annealing. The experimental capacitance-voltage (C-V) curves from Fig. 4 were traced on a CP 38 plotter. Curve 1 was traced for the un-annealed capacitive structure. After heating at 250 °C, under an electrical stress of about +5 V and -5 V, curve 2 and curve 3, respectively, were recorded. The capacitance ranges from 130 pF to 210 pF and rests at a value specific for dielectrics, while the applied voltage is varied from -17 V to +2 V.
The horizontal shift of the curves in the vicinity of -6 V proves the inherent capture of alkaline ions in the SiO2 film, as happens in MOS or SOI technologies [8, 9]. The measured C-V shift demonstrates the dielectric stability at the working bias of the device, +0.7 V.

Fig. 3 The ellipsometry spectrum of the Al2O3 layer, manufactured in the anodization technological step

Fig. 4 The C-V curves traced for the optimal post-anodization process, at an optimal annealing temperature of 450 °C, for the multilayer capacitor carried out from Si/SiO2/Si3N4/Al2O3 in unstressed conditions and under thermal and electrical stress

In order to measure the leakage current through the capacitive multilayer structure, two wires were placed on the top and on the bottom of the structure and connected to the measuring system. The currents were read from a Keithley 236 pico-ammeter at a maximum applied voltage of +2 V. The measured leakage currents remain below 50 pA, suitable for the insulating properties of the Si/SiO2/Si3N4/Al2O3 multilayer.

V. CONCLUSIONS
This paper discussed some technological aspects of the microfabrication of a glucose biosensor. The study focused first on the manufacturing process of a nanoporous γ-type Al2O3 oxide by anodization. Starting from an ultra-thin Al layer deposited in high vacuum, with a thickness of tens of nanometers, the Al2O3 with nano-pores was electrochemically prepared by anodization. In conclusion, the optimum annealing temperature of 420 °C in N2 for 30 minutes, followed by a post-anodization treatment, can be mentioned. The anodic aluminum oxide, γ-type, has a double advantage: high adsorbent properties for the GOD enzyme and good catalytic properties for the electro-catalytic oxidation of glucose.
The Si3N4 layer might seem not so important in the biosensor construction, but it fulfills a key role as a barrier against the alkaline ions from the test solutions. Additionally, the nitride improves the electrical isolation between the electrodes and the substrate. The properties of the multilayer glucose biosensor were highlighted in the second part of the paper. The performance of the Si/SiO2/Si3N4/Al2O3 multilayer structure was analyzed by a series of investigations: ellipsometry and C-V-metry. The measured C-V shift demonstrates the dielectric stability at the working bias of the device, +0.7 V. Also, the leakage currents between the top and bottom of the structure remain lower than 50 pA, small enough not to disturb the current flow from the Counter Electrode to the Work Electrode. The proposed structure proved high performance at low fabrication costs, being suitable also for the implementation of other enzyme biosensors.
ACKNOWLEDGMENT

This work reports on the PN2 no. 12095 and 62063 Partnership National Romanian Program, ANCS.
REFERENCES

1. Pallikarakis N, Moore R (2007) Health technology in Europe - regulatory framework and industry perspectives of the "New Approach". IEEE Engineering in Medicine and Biology Magazine, Vol. 26, Issue 3:14-17, DOI 10.1109/MEMB.2007.364923
2. Ravariu C, Ravariu F (2007) A test two-terminals biodevice with lipophylic and hidrophylic hormone solutions. Journal of Optoelectronics and Advanced Materials (JOAM), Bucharest, Romania, vol. 9, no. 8:2589-2592
3. Doretti L, Ferrara D, Lora S, Schiavon F, Veronese FM (2000) Acetylcholine biosensor involving entrapment of acetylcholinesterase and poly(ethylene glycol)-modified choline oxidase in a poly(vinyl alcohol) cryogel membrane. Enzyme and Microbial Technology, vol. 27:279-285
4. Ravariu C, Mihaiescu D, et al. (2008) Non-linear electrical conduction through testosterone undecanoate and rethinol oily solutions. 6th European Symposium on Biomedical Engineering, ESBME 2008, Chania, Crete, Greece, 19-21 June
5. Li Chi, Li Yin, et al. (2001) Study on separative structure of ENFET to detect acetylcholine. Sensors and Actuators B 71:68-72
6. Qiu J, Peng H, Liang R (2007) Ferrocene-modified Fe3O4-SiO2 magnetic nanoparticles as building blocks for construction of reagentless enzyme-based biosensors. Electrochemistry Communications, Vol. 9, Issue 11:2734-2738
7. Xu L, Zhu Y, Li Y, Yang X, Li C (2008) Bienzymatic glucose biosensor based on co-immobilization of glucose oxidase and horseradish peroxidase on gold nanoparticles-mesoporous silica matrix. Proc. Nanoelectronics Conference, INEC 2nd IEEE International, 2008, pp 390-393
8. Ravariu C, Rusu A (2006) Interface electric charge modeling and characterization with delta-distribution generator strings in thin SOI films. Microelectronics Journal, vol. 37, no. 3:943-947
9. Spyrou S, Bamidis PD, Maglaveras N, Pangalos G, Pappas C (2008) A methodology for reliability analysis in health networks. IEEE Transactions on Information Technology in Biomedicine, Vol. 12, Issue 3:377-386, DOI 10.1109/TITB.2007.905125

The address of the corresponding author:

Author: Cristian Ravariu
Institute: Politehnica University of Bucharest, Faculty of Electronics
Street: Splaiul Independentei 313
City: Bucharest
Country: Romania
Email: [email protected] or [email protected]
Hand-held resonance sensor instrument for soft tissue stiffness measurements – a first study on biological tissue in vitro

V. Jalkanen1,3, O.A. Lindahl2,3

1 Department of Applied Physics and Electronics, Umeå University, 90187 Umeå, Sweden
2 Department of Computer Science and Electrical Engineering, Luleå University of Technology, 97187 Luleå, Sweden
3 Center for Biomedical Engineering and Physics, Umeå University and Luleå University of Technology, Sweden
Abstract— A stiffness-sensitive sensor capable of measuring the stiffness of a soft object through contact has been implemented using the resonance sensor technique. This technique is based on a piezoelectric element in a feedback circuit configuration. On contact with a soft tissue, the resonance frequency changes, and together with a force measurement it is possible to estimate the elastic stiffness of the measured object. Earlier sensor implementations have been limited to controlled indentation setups. Recently, the hand-held resonance sensor concept was introduced and evaluated on a soft tissue phantom. The aim of this study was to investigate the concept on soft biological tissue, specifically to investigate whether the measured stiffness was independent of the impression speed. Measurements were conducted on porcine muscle tissue, and a stiffness parameter and an impression speed parameter were calculated. Correlation analysis showed a weak, non-significant correlation, suggesting that the stiffness was independent of the impression speed. This is promising for further studies with the hand-held resonance sensor on soft biological tissue.

Keywords— hand-held, resonance sensor, stiffness, soft biological tissue.
I. INTRODUCTION
A type of resonance sensor technology has been demonstrated to measure the stiffness of soft objects [1]. The sensor consists of a piezoelectric transducer element oscillating at its resonance frequency by means of a feedback circuit. This transducer element is mounted into a sensor tip. When the sensor tip makes contact with a soft object, the resonance frequency changes and a frequency change (Δf = f0 - f) is observed (Figure 1(a)). At a constant applied load, Δf reflects the stiffness of the soft object. This technique has been implemented in a balance arm setup. Due to the sensor's stiffness measuring properties, it has been used to objectively measure soft tissue stiffness in a variety of medical applications [2]. For instance, it has been shown that increased lymph node stiffness is related to the presence of metastases [3] and that increased liver stiffness is correlated to liver fibrosis [4]. The sensor technique is also promising for characterizing oedema [5].
Fig. 1 (a) The principle of the resonance sensor technique, where a vibrating piezoelectric element changes its resonance frequency (Δf = f0 - f) upon contact with a soft object. The contact force (F) is also measured. (b) The stiffness is obtained from the line slope between the measured F and Δf. (c) Illustration of the hand-held resonance sensor system for soft tissue stiffness measurements. (d) Photograph of the Venustron® resonance sensor probe. Scale is in centimeters.

We have demonstrated with an indentation-controlled resonance sensor system that prostate tissue stiffness variations can be measured and related to the prostate tissue histological variation [6, 7]. In this sensor configuration, the resonance sensor, a force sensor, and a position sensor are arranged in a motorized mounting attached to a stable stand. During a measurement the sensor tip is pressed into the tissue by the motor, allowing Δf, the force (F), and the impression depth (d) to be measured during a ramp indentation. Furthermore, a linear relation exists between Δf and F (Figure 1(b)), and the line slope is related to the elastic stiffness modulus (E) and the density (ρ) [8]. For prostate tissue it has been shown that the density-related variations are non-significant [8]. The stiffness can be estimated by the derivative, i.e.
∂F/∂Δf ∝ E/ρ     (1)

Fig. 2 Two porcine muscle tissue specimens with approximate measurement locations marked with circles. Scale is in centimeters.
In a recent study [9] the concept of a hand-held resonance sensor is demonstrated for stiffness measurements on a soft tissue phantom made of gelatin with a tumor phantom inclusion made of silicone rubber. In a hand-held setup, the sensor is pressed into the tissue phantom by freehand movement (Figure 1(c)). Linear elastic theory, thus equation (1), is shown to be valid for the hand-held setup and impression speed independence is verified on the elastic gelatin tissue phantom [9]. The aim of this study was to investigate if the measured stiffness (equation (1)) was impression speed independent for hand-held resonance sensor stiffness measurements on soft biological tissue.
II. MATERIAL AND METHODS
Fig. 3 Freehand measured force (F) and frequency change (Δf) as functions of measurement time. The circles show an initial time interval of F and Δf values. In the right panel the circles display the linear relation between F and Δf in the initial time interval. The line is the least squares line fit, and the resulting slope value was used to estimate the stiffness parameter.

A. Resonance sensor instrumentation

In this study a resonance sensor system, Venustron® (Axiom Co., Ltd., Koriyama, Fukushima, Japan), was used in the experiments (Figure 1(d)). This sensor system is equipped with the resonance sensor, a force sensor, and a position sensor, all arranged into a probe where a motor is used to control the impression of the hemispherically shaped sensor tip (radius 2.5 mm). In this study the motor and position sensor were disabled to allow freehand measurements. From the start of a measurement, Δf and F were sampled at fs = 200 Hz. As Δf is contact sensitive [8], a threshold level of 20 Hz was used to detect the point of contact.

B. Biological tissue specimens and measurements

In this study porcine muscle tissue from pork chops was used to evaluate the impression speed independence on soft biological tissue. The tissue was stored in a refrigerator prior to measurement. Pieces of bone and fat were cut away to obtain as homogeneous a piece of muscle tissue as possible. Two pieces were obtained, both approximately 15 mm thick. Three measurement locations were chosen on each of the tissue specimens (Figure 2). Physiological saline solution was brushed on the tissue surface before ten measurements were performed on each location. Measurements were conducted at room temperature (range 22–24 °C).

C. Stiffness data acquisition and analysis

From the freehand stiffness measurements, F and Δf were recorded during the measurement time; an example is illustrated in Figure 3. The stiffness parameter ∂F/∂Δf (equation (1)) was calculated as the line slope from a least squares line fit on F and Δf (Figure 3). This was done by linear regression on the F and Δf data in an initial time interval (Δt) limited by a Δf-limit of 150 Hz. The time interval Δt = N/fs, where N was the number of samples in the interval. The impression speed was not directly measurable, but it was estimated by Δt⁻¹. A correlation analysis was done between ∂F/∂Δf and Δt⁻¹ to investigate the role of the impression speed. A test result with p < 0.05 was considered statistically significant.
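The slope computation described above is an ordinary least-squares fit of F against Δf over the initial contact interval. A minimal C sketch of this analysis step is given below; the function and array names are ours, the Δf-limit of 150 Hz and fs = 200 Hz are taken from the text, and the arrays are assumed to start at the detected contact point:

```c
#include <stddef.h>

#define FS        200.0   /* sampling rate (Hz), from the paper     */
#define DF_LIMIT  150.0   /* Δf-limit defining the initial interval */

/* Least-squares estimate of the stiffness parameter dF/dΔf from
 * force f[] (mN) and frequency change df[] (Hz). Also returns the
 * impression speed parameter 1/Δt via dt_inv, where Δt = N/fs and
 * N is the number of samples with Δf below the 150 Hz limit. */
double stiffness_slope(const double *df, const double *f, size_t n,
                       double *dt_inv)
{
    size_t N = 0;
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;

    while (N < n && df[N] < DF_LIMIT) {        /* initial interval Δt */
        sx  += df[N];
        sy  += f[N];
        sxx += df[N] * df[N];
        sxy += df[N] * f[N];
        N++;
    }
    if (N < 2)
        return 0.0;                            /* not enough samples  */

    if (dt_inv)
        *dt_inv = FS / (double)N;              /* Δt⁻¹ = fs / N       */

    /* slope = (N·Σxy − Σx·Σy) / (N·Σxx − (Σx)²) */
    return ((double)N * sxy - sx * sy) /
           ((double)N * sxx - sx * sx);
}
```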
III. RESULTS
A typical freehand stiffness measurement of F and Δf is shown in Figure 3. From the ten measurements on each of the six measurement locations, the stiffness parameter ∂F/∂Δf was seen to be independent of the impression speed parameter Δt⁻¹, as demonstrated by the weak and non-significant correlation coefficients of -0.08, -0.07, 0.20, 0.24, 0.25, and 0.48 (p > 0.05, n = 10) (Figure 4). The corresponding p-values were 0.82, 0.85, 0.58, 0.50, 0.50, and 0.16. Over repeated measurements, the measured stiffness appeared to drop on average compared with the first measurement (Figure 5).
IV. DISCUSSION
In this study we have demonstrated that the measured stiffness was independent of the impression speed for measurements done on porcine muscle tissue in vitro using a hand-held resonance sensor. Similar findings were obtained from stiffness measurements on a gelatin tissue phantom [9]. The stiffness, as given by equation (1), was acquired from the measurements of F and Δf in a small time interval Δt limited by a Δf-limit of 150 Hz, on average less than one third of the peak Δf. This small Δf-limit was chosen to represent a small impression depth limit, since it has been shown that Δf ∝ ρd^(3/2) [8]. The impression speed was represented by Δt⁻¹, since Δt was the time to reach the Δf-limit. Each of the six measurement locations showed large stiffness variations (Figure 4). These variations were not related to the impression speed, as shown by the correlation analysis. Instead, it was assumed that the large variation was due to the nature of soft biological tissue.
Fig. 4 The stiffness parameter ∂F/∂Δf in relation to the impression speed parameter Δt⁻¹. The markers correspond to the six different measurement locations. The lines are least squares line fits illustrating the dependence.
Fig. 5 The ten stiffness measurements normalized with the first measurement. The circles and bars show the mean and standard deviation over the six different measurement locations.

Porcine muscle tissue was chosen as a model to represent soft biological tissue in this first study. Generally, biological tissue is a complex structure with both solid and fluid parts, giving it complex mechanical properties [10]. In this study we observed that repeated measurements left a remaining impression cavity on the tissue surface at the measurement location. A tissue pressure drop due to fluid translocation, possibly caused by the repeated indentations, could result in a lower apparent stiffness. This might be what is indicated in Figure 5, where a drop in the apparent stiffness is seen already after the first measurement. This behavior could be used for describing complex mechanical properties of soft tissue and for characterizing pathological conditions, for instance oedema [5] or intratumoral pressure as opposed to normal healthy tissue. It was observed that the physiological saline solution brushed onto the surface may have accumulated in small amounts in these surface cavities. Such accumulation would affect the stiffness measurements and stiffness calculations, since Δf is contact sensitive [8]: Δf would register contact while F remained zero, resulting in a falsely low stiffness. However, this effect was assumed to be minimized by using a threshold level of Δf = 20 Hz when detecting the point of contact with the tissue surface.
V. CONCLUSIONS
This study on biological tissue suggests that the stiffness measured by a hand-held resonance sensor is independent of the impression speed. This is promising for further development of resonance sensor based instruments and methods for characterizing mechanical properties of soft biological tissue.
ACKNOWLEDGMENT

The study was supported by grants from the Objective 2 Norra Norrland – EU Structural Fund.

REFERENCES

1. Omata S, Terunuma Y (1992) New tactile sensor like the human hand and its applications. Sensors Actuators 35:9-15. DOI 10.1016/0924-4247(92)87002-X
2. Lindahl OA, Constantinou CE, Eklund A, Murayama Y, Hallberg P, Omata S (2009) Tactile resonance sensors in medicine. J Med Eng Technol 33:263-273. DOI 10.1080/03091900802491188
3. Miyaji K, Furuse A, Nakajima J, Kohno T, Ohtsuka T, Yagyu K, Oka T, Omata S (1997) The stiffness of lymph nodes containing lung carcinoma metastases – a new diagnostic parameter measured by a tactile sensor. Cancer 80:1920-1925
4. Kusaka K, Harihara Y, Torzilli G, Kubota K, Takayama T, Makuuchi M, Mori M, Omata S (2000) Objective evaluation of liver consistency to estimate hepatic fibrosis and functional reserve for hepatectomy. J Am Coll Surg 191:47-53
5. Lindahl OA, Omata S (1995) Impression technique for the assessment of oedema: comparison with a new tactile sensor that measures physical properties of tissue. Med Biol Eng Comput 33:27-32
6. Jalkanen V, Andersson BM, Bergh A, Ljungberg B, Lindahl OA (2006) Prostate tissue stiffness as measured with a resonance sensor system: a study on silicone and human prostate tissue in vitro. Med Biol Eng Comput 44:593-603. DOI 10.1007/s11517-006-0069-6
7. Jalkanen V, Andersson BM, Bergh A, Ljungberg B, Lindahl OA (2006) Resonance sensor measurements of stiffness variations in prostate tissue in vitro – a weighted tissue proportion model. Physiol Meas 27:1373-1386. DOI 10.1088/0967-3334/27/12/009
8. Jalkanen V, Andersson BM, Bergh A, Ljungberg B, Lindahl OA (2008) Explanatory models for a tactile resonance sensor system – elastic and density-related variations of prostate tissue in vitro. Physiol Meas 29:729-745. DOI 10.1088/0967-3334/29/7/003
9. Jalkanen V (2009) Hand-held resonance sensor instrumentation towards faster diagnosis of prostate cancer – stiffness measurements on a soft tissue phantom. IFMBE Proc vol. 25(7), World Congress on Med Phys & Biomed Eng, Munich, Germany, 2009, pp 808-811
10. Fung YC (1993) Biomechanics – Mechanical Properties of Living Tissues. Springer-Verlag, New York

The address of the corresponding author:

Author: Ville Jalkanen
Institute: Dept. of Applied Physics and Electronics, Umeå University
City: 901 87 Umeå
Country: Sweden
Email: [email protected]
Head Position Monitoring System

P. Cech1, J. Dlouhy1, M. Cizek1, J. Rozman1, and I. Vicha2

1 Brno University of Technology/Department of Biomedical Engineering, Brno, Czech Republic
2 University Hospital Brno/Department of Ophthalmology, Brno, Czech Republic
Abstract— The head position monitoring system is designed for monitoring a patient's head after a special ophthalmological operation called vitrectomy. In the final phase of this operation a certain amount of vitreous is replaced with an expansive gas or oil bubble. This bubble presses the retinal parts together, allowing better recovery. A correct head position during the recovery time is therefore important. The designed system consists of a telemetric monitoring unit and a communication unit. The monitoring unit provides periodic tilt sensing using a digital MEMS accelerometer. The tilt angle and the corresponding time are stored in built-in memory. The patient is also warned by a sound alarm when the head posture is incorrect. The second part of the system is the communication unit, which communicates with the monitoring unit via a wireless interface. The communication supports single or continual mode, so the measured data can be available immediately or after the treatment period. The third important part of the entire system is a special software application, which handles communication tasks such as setting up the device and downloading the measured data; it also serves for data visualization and analysis. The presented system is a combination of a biofeedback and a data logging system. Its goal is to help the patient maintain a correct head posture; secondly, it is a powerful diagnostic tool for ophthalmological specialists.

Keywords— MEMS, vitrectomy, tilt sensing, head position monitoring, software processing.

I. INTRODUCTION

The human eye can be affected by many diseases and injuries. A damaged retina in particular can lead to partial or full blindness. Pars plana vitrectomy is a special kind of retinal microsurgery. In the final phase of this operation a certain amount of vitreous is removed and replaced by an expansive gas (SF6 or C3F8) or oil bubble. This bubble causes a local pressure increase which allows faster retina recovery. Over several weeks the bubble is gradually absorbed and replaced by liquid. For optimal treatment results following vitreoretinal surgery, the patient must often maintain a head posture which allows the injected oil or gas bubble to push on the affected place on the retina [1][5]. It is almost impossible to monitor the bubble inside the eye directly. However, because the eye is swollen after the operation, its position relative to the rest of the head is constant. Therefore, sensing the head position is a way to monitor the bubble placement inside the postoperative eye. The head-mounted electronic sensor described in the present paper is capable of warning the patient in case of an incorrect tilt of the head. The measured head position data are collected into a flash memory and can be analyzed and visualized by the affiliated software application.

II. PRINCIPLE OF MEASUREMENT

The position of the bubble inside the postoperative eye is mainly influenced by the resulting direction of the applied acceleration. This acceleration can be split into two main parts: the gravitational acceleration and the acceleration originating from movements of the patient's body. Figure 1 shows a simplified view of how the bubble position depends on the resultant acceleration vector direction.

Fig. 1 The resulting acceleration vector direction influences the position of a gas bubble floating inside the postoperative eye

The utilized triple-axis MEMS accelerometer is capable of sensing accelerations in a frequency range from DC to several hundreds of Hz and measures the three basic Cartesian components of the resultant applied acceleration. For the purposes of head posture monitoring it is necessary to calculate the angle between the actual acceleration vector and the acceleration vector initially measured with the patient's head in the correct position determined by the clinical specialist.
III. MEASUREMENT SYSTEM HARDWARE

The designed measuring unit is based on a low-power 8-bit microcontroller. A triple-axis LIS3LV02DL accelerometer communicating through an I2C interface is used for measuring the acceleration vector components. Basic noise suppression is done by averaging the input signals. The angular deviation from the desired head position is calculated by the microcontroller firmware using vector algebra:
α = arccos(a1n · a0n),  a0n = a0/|a0|,  a1n = a1/|a1|    (1)
where α is the computed angular deviation, a0 is the acceleration vector initially measured with the patient's head in the correct position, a1 is the actual measured acceleration vector, and a0n and a1n are the normalized vectors a0 and a1. The achievable accuracy of the angular measurement is approximately 1 degree, which is sufficient for clinical practice demanding a 10-degree angular resolution.
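Equation (1) maps directly onto a few lines of floating-point code. The following sketch is illustrative only (the actual device firmware is not published); the function name is ours:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Angular deviation (degrees) between the reference acceleration
 * vector a0 (head in correct position) and the current vector a1,
 * per eq. (1): alpha = arccos(a1n · a0n). Components are in raw
 * accelerometer units; scaling cancels out after normalization. */
double head_tilt_deg(const double a0[3], const double a1[3])
{
    double n0  = sqrt(a0[0]*a0[0] + a0[1]*a0[1] + a0[2]*a0[2]);
    double n1  = sqrt(a1[0]*a1[0] + a1[1]*a1[1] + a1[2]*a1[2]);
    double dot = (a0[0]*a1[0] + a0[1]*a1[1] + a0[2]*a1[2]) / (n0 * n1);

    /* Guard against rounding pushing the cosine outside [-1, 1]. */
    if (dot > 1.0)  dot = 1.0;
    if (dot < -1.0) dot = -1.0;

    return acos(dot) * 180.0 / M_PI;
}
```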
A 3.6 V lithium cell is used as the power supply. Thanks to the power-saving mode of the MCU, a single battery is sufficient for powering the device during the whole monitoring period without replacement. Flash memory is used for storing the data obtained during the periodic head position measurements. Each record in the memory contains a time stamp and angular deviation data. The contents of the memory can be downloaded to a personal computer via the wireless communication interface and analyzed during regular postoperative examinations at the clinic. The sampling and storing periods are user configurable. The default values of 1 second and 5 minutes mean that the patient can be warned of improper head posture within 1 second, while the measured data are stored in the external memory every 5 minutes.
Fig. 3 Printed circuit boards of a PC wireless communication interface (left) and a head mounted intelligent sensor (right)

Because the MCU has to be woken up from its power-saving mode before taking a sample of the measured acceleration, the sampling period influences the power consumption of the device. The effective use of the available memory capacity depends on the frequency of data logging. With the default settings, the device is capable of 45 days of continuous monitoring, which is sufficient for most clinical cases.
Fig. 2 The block diagram of the whole designed monitoring system

An acoustic alarm is activated whenever the patient holds the head outside the desired range of positions. This sort of bio-feedback actively helps the patient in maintaining the optimal head position that was determined by the ophthalmologist at the time of the operation.
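Assuming a simple duty-cycled main loop, the interplay of the sampling period, the alarm and the storing period could look as follows. All hardware-access routines (sample_tilt_deg, alarm_set, flash_append, low_power_sleep) are hypothetical placeholders rather than the actual firmware API, and the alarm timing refinement described later in Section IV is omitted here for brevity:

```c
#include <stdint.h>
#include <stdbool.h>

#define SAMPLE_PERIOD_S  1U      /* default sampling period          */
#define STORE_PERIOD_S   300U    /* default storing period (5 min)   */

typedef struct {                 /* one log record: time + angle     */
    uint32_t timestamp;          /* seconds since start              */
    uint16_t angle_deg10;        /* tilt angle in 0.1 degree units   */
} log_record_t;

/* Hypothetical hardware-access routines (not from the paper). */
extern double sample_tilt_deg(void);
extern void   alarm_set(bool on);
extern void   flash_append(const log_record_t *rec);
extern void   low_power_sleep(uint32_t seconds);

void monitoring_loop(double threshold_deg)
{
    uint32_t t = 0;

    for (;;) {
        double angle = sample_tilt_deg();        /* wake + measure   */
        alarm_set(angle > threshold_deg);        /* bio-feedback     */

        if (t % STORE_PERIOD_S == 0) {           /* log every 5 min  */
            log_record_t rec = { t, (uint16_t)(angle * 10.0 + 0.5) };
            flash_append(&rec);
        }
        low_power_sleep(SAMPLE_PERIOD_S);        /* back to LPM      */
        t += SAMPLE_PERIOD_S;
    }
}
```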
Fig. 4 The sensor can be mounted to the head with an elastic strap
IV. SOFTWARE APPLICATION

The designed software provides two functions: communication and data analysis. Both functions are included in a special software application called Head Position Monitor (Figure 5). The communication between the telemetric monitoring unit and the communication unit is provided by the wireless interface, as mentioned above. For this purpose a special communication protocol has been developed. The communication is based on a master-slave model, where the communication unit is the master and the monitoring unit is a slave device. It is possible to communicate with more than one monitoring unit, which is important when online monitoring is required. A typical situation for online monitoring is during hospitalization. Once the communication between the monitoring device and the computer (via the communication unit) is established, the user is informed about the monitoring device status and can access the device. The device must be set up before a new treatment period. The ophthalmologist determines the maximum allowed deviation from the optimal position, and this value is set in the device as the angular threshold. The function of the monitoring device can be adjusted by the following parameters:

• Sampling period
• Storing period
• Improper head position time before alarm activation
• Proper head position time for alarm termination

The sampling period is the period of measurement sampling and is set to 1 second. The storing period is the time interval between records in memory; each record consists of the time and the actual tilt angle. The other two parameters represent the minimum times for alarm initiation and alarm termination, respectively. The patient's personal data, such as name, surname, date of surgery, etc., are also stored in the device.
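The two alarm-timing parameters suggest a small debounce state machine: the alarm turns on only after the deviation has remained above the threshold for the configured minimum time, and off only after it has remained below it long enough. A hedged sketch, with identifiers of our own choosing:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    double   threshold_deg;   /* max allowed deviation (set by MD)  */
    uint32_t on_delay_s;      /* improper-posture time before alarm */
    uint32_t off_delay_s;     /* proper-posture time ending alarm   */
    uint32_t bad_s, good_s;   /* running counters                   */
    bool     alarm;
} alarm_fsm_t;

/* Call once per sampling period (1 s) with the current deviation. */
bool alarm_update(alarm_fsm_t *a, double angle_deg)
{
    if (angle_deg > a->threshold_deg) {
        a->good_s = 0;
        if (++a->bad_s >= a->on_delay_s)
            a->alarm = true;
    } else {
        a->bad_s = 0;
        if (++a->good_s >= a->off_delay_s)
            a->alarm = false;
    }
    return a->alarm;
}
```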
Fig. 5 Head Position Monitor
The communication software is also capable of downloading the measured data from the device. Following a successful retrieval, the data are saved to an external file for further analysis. The data file consists of basic patient information and the whole treatment monitoring record. Each patient has a single file, so the analysis can be done whenever necessary. The Head Position Monitor also serves for data visualization and analysis. The measurement data can be loaded from the device or from an external file, so the application can operate in online or offline mode. The application window consists of the information area (left) and the graph area. The evolution of the measurement in time is displayed in the graph area: the time and date of measurement are shown on the horizontal axis, and the tilt angle deviation is shown on the vertical axis. This area can be individually zoomed. The threshold angle defined by the ophthalmologist is shown in the graph as a horizontal line with its specific value. All measurement points exceeding the threshold angle are marked by a small square sign. The information panel on the left side of the application window contains device and monitoring data. The user is informed whether the communication unit and the monitoring unit are connected; the monitoring device battery status is also displayed. The monitoring data panel contains information about the patient and the monitoring. All important characteristics of the measurement are calculated from the measured data. The following parameters are important for treatment efficiency assessment:

• Threshold angle
• Start of monitoring
• End of monitoring
• Overall monitoring time
• Out of threshold time

The threshold angle was set by the ophthalmologist at the beginning of monitoring and defines the maximum deviation from the required head position. The overall monitoring time is calculated as the difference between the end of monitoring and the start of monitoring. The out-of-threshold time is calculated as the sum of all intervals in which the angular deviation exceeds the threshold angle; this value is also expressed as a percentage of the overall monitoring time.
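These statistics are straightforward to compute from the stored records. The sketch below assumes records are equally spaced by the storing period, so each above-threshold record contributes one storing interval to the out-of-threshold time; the type and function names are ours:

```c
#include <stdint.h>
#include <stddef.h>

/* Record layout as described in the text: time stamp + tilt angle. */
typedef struct {
    uint32_t timestamp;       /* seconds                            */
    double   angle_deg;
} record_t;

typedef struct {
    uint32_t total_s;         /* overall monitoring time            */
    uint32_t out_s;           /* out-of-threshold time               */
    double   out_percent;     /* out_s as a percentage of total_s    */
} stats_t;

stats_t compute_stats(const record_t *r, size_t n, double thr_deg,
                      uint32_t store_period_s)
{
    stats_t s = {0, 0, 0.0};
    if (n < 2)
        return s;

    s.total_s = r[n - 1].timestamp - r[0].timestamp;  /* end - start */
    for (size_t i = 0; i < n; i++)
        if (r[i].angle_deg > thr_deg)
            s.out_s += store_period_s;

    if (s.total_s > 0)
        s.out_percent = 100.0 * (double)s.out_s / (double)s.total_s;
    return s;
}
```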
V. CONCLUSIONS

The presented system is focused on head position monitoring after a special microsurgery operation called vitrectomy. The head position can be described by sensing the angle of tilt. Based on real testing, MEMS accelerometers appear to be an efficient means of head tilt sensing. The achieved accuracy of the tilt angle measurement is approximately 1 degree, which is sufficient for clinical use demanding a resolution of 10 degrees. The device is designed to warn the patient by a sound alarm in case of an improper head position. Such a biofeedback system may help the patient maintain a proper head position and thus improve the postoperative treatment. The designed system can also provide valuable feedback for ophthalmologists. By analyzing the data logged during the treatment period, postoperative complications caused by improper head position can be separated from those caused by other relevant clinical factors, allowing ophthalmologists to assess the effectiveness of the treatment more precisely. We started the clinical testing in the third quarter of 2009. Based on the achieved results we will continuously fine-tune the system in collaboration with the clinical specialists. The expected optimization will focus on measured data analysis, proper head fixation and software application improvements.
ACKNOWLEDGMENT

The research is supported by Czech Science Foundation project No. 102/08/1373 and by the research plan MSM 0021630513.
REFERENCES

1. Vicha I., Rozman J., Vlkova E., Girgle R., Cizek M., Dlouhy J., Vaclavik V. A New Electronic System for Postoperative Monitoring of Patient's Head Position. Abstracts of the 8th European Vitreo-Retinal Society Congress, Prague, 2008
2. Cullen R. Macular hole surgery: helpful tips for preoperative planning and postoperative face-down positioning. J Ophthal Nursing Technol 1998; 17:179-181
3. Dahl A., Stöppler M. Retinal Detachment Causes, Symptoms, Signs, Treatment and Risks. MedicineNet.com. Online: <www.medicinenet.com/retinal_detachment/article.htm>. 2007
4. Jacobs P. M. Vitreous loss during cataract surgery: prevention and optimal management. Eye. Online: . Feb. 2008
5. Vitreous–retina–macula consultants of New York. Vitrectomy. Online: <http://www.vrmny.com/pe/vitrectomy.html>. New York, 2006
Author: Petr Cech
Institute: Department of Biomedical Engineering, Brno University of Technology
Street: Kolejni 4
City: Brno
Country: Czech Republic
Email: [email protected]
Short Range Wireless Link for Data Acquisition in Medical Equipment

N.M. Roman1, S. Gergely2, R.V. Ciupa1, and M.V. Pusca1

1 Technical University of Cluj-Napoca/Biomedical Engineering Department, Cluj-Napoca, Romania
2 National Institute for Research and Development of Isotopic and Molecular Technologies, Cluj-Napoca, Romania
Abstract— Most patient monitoring equipment has numerous cable connections to the main data processing unit. As a result, emergency rooms in particular are usually packed with unwieldy equipment that can make access to the patient difficult. One solution to this issue is the use of low-power radio remote data transmission.

Keywords— wireless, link, MSP430F5419, MSP430F5500, CC1000
I. INTRODUCTION

The signal types suitable for a radio link are respiratory, ECG, EMG and temperature signals. This paper describes an application for transmitting 12 analog or digital signal channels over a short-range radio link. The wireless link has two modules: the acquisition module and the host module. There is one transceiver on each module, both having the same program structure. The structure of the designed wireless link is shown in Figure 1.
Fig. 1 Link structure

II. DESCRIPTION OF THE SYSTEM

The PC controls the host interface through the full-speed USB port. A good solution for the host microcontroller is the Texas Instruments MSP430F5500. The Texas Instruments MSP430 family of ultra-low-power microcontrollers consists of several devices featuring different sets of peripherals well targeted for this application. The architecture, combined with five low-power modes, is optimized to achieve extended battery life in portable measurement applications. The device features a powerful 16-bit RISC CPU with 16-bit registers and high code efficiency. The USB module is a fully integrated USB interface compliant with the USB 2.0 specification. The module supports full-speed operation of control, interrupt, and bulk transfers, and includes an integrated LDO, PHY, and PLL. The PLL is highly flexible and can support a wide range of input clock frequencies. USB RAM, when not used for USB communication, can be used by the system. The USB protocol used does not require installing a device-specific driver: it uses the Human Interface Device (HID) protocol, which programs the USB registers through the embedded firmware USB protocol. The registers listed below must be programmed [1] to achieve the desired data packet size, by programming the endpoint registers correspondingly:

USB Configuration Registers (Base Address: 0900h)           Offset
USB key/ID                           USBKEYID               00H
USB module configuration             USBCNF                 02H

USB Control Registers (Base Address: 0920h)                 Offset
Input endpoint #0 configuration      IEPCNF_0               00H
Input endpoint #0 byte count         IEPCNT_0               01H
Output endpoint #0 configuration     OEPCNF_0               02H
Output endpoint #0 byte count        OEPCNT_0               03H
Input endpoint interrupt enables     IEPIE                  0EH
Output endpoint interrupt enables    OEPIE                  0FH
Input endpoint interrupt flags       IEPIFG                 10H
Output endpoint interrupt flags      OEPIFG                 11H
USB interrupt vector                 USBIV                  12H
USB frame number                     USBFN                  1AH
At start-up the host is programmed for the desired number of channels, the transmission rate and the data resolution as a number of bits/sample. The microcontroller's ports are also programmed for the data exchange with the host transceiver, using a very simple routine. During transmission, data is embedded in a structure that allows the correct per-channel
distribution in the PC-running application. In our case the structure is presented in figure 2.
Fig. 2 Transmitted data structure

After the host establishes the connection with the PC, the next step is to program the host transceiver. The first step when opening the communication channel is to program the desired number of channels at the acquisition module. The data structure is the same as in Figure 2, with the observation that after the DC-balanced preamble, the packet start-ID code, which contains a command code, is repeated until the interface is completely programmed. The end-data section contains the acknowledge code transmitted by the interface to the host module by reversing the transmission direction.
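The described frame can be modelled as a C structure. The field widths below are illustrative assumptions; the paper fixes the section order (DC-balanced preamble, start-ID with command code, per-channel data, end-data with acknowledge/CRC) but not the bit-level layout:

```c
#include <stdint.h>

#define NUM_CHANNELS 12            /* up to 12 acquisition channels  */

/* Illustrative model of the transmitted data structure (Fig. 2).   */
typedef struct {
    uint8_t  preamble[4];          /* DC-balanced '010101...' pattern */
    uint8_t  start_id;             /* packet start-ID                 */
    uint8_t  command;              /* command code (programming phase)*/
    uint16_t sample[NUM_CHANNELS]; /* 12-bit samples, one per channel */
    uint16_t crc;                  /* end-data: CRC / acknowledge     */
} radio_packet_t;

/* Fill the preamble with the 0x55 pattern the CC1000 data slicer
 * needs to settle its averaging filter. */
static void packet_init(radio_packet_t *p)
{
    for (unsigned i = 0; i < sizeof p->preamble; i++)
        p->preamble[i] = 0x55;     /* '01010101' on air               */
}
```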
In receive mode the CC1000 is configured as a traditional superheterodyne receiver. The RF input signal is amplified by the low noise amplifier (LNA) and converted down to the intermediate frequency (IF) by the mixer (MIXER). In the intermediate frequency stage (IF STAGE) this down-converted signal is amplified and filtered before being fed to the demodulator (DEMOD). As an option, an RSSI signal, or the IF signal after the mixer, is available at the RSSI/IF pin. After demodulation the CC1000 outputs the digital demodulated data on the pin DIO. Synchronization is done on-chip, providing a data clock at DCLK. In transmit mode the voltage controlled oscillator (VCO) output signal is fed directly to the power amplifier (PA). The RF output is frequency shift keyed (FSK) by the digital bit stream fed to the pin DIO. The internal T/R switch circuitry makes the antenna interface and matching very easy. The frequency synthesizer generates the local oscillator signal, which is fed to the MIXER in receive mode and to the PA in transmit mode. The frequency synthesizer is programmed to achieve the desired transmission and reception frequencies. The device is programmed through the 3-wire digital serial interface (CONTROL) at a very fast rate of up to 10 MHz. Throughout the data transmission the registers of both transceivers are programmed according to the desired transmission direction. Because of the bandwidth limitation we preferred the synchronous NRZ transmission mode. The data transmission mode is shown in Figure 4.
Transceiver module

Our application uses the Texas Instruments CC1000 transceiver. The CC1000 is a true single-chip UHF transceiver designed for very low power and very low voltage wireless applications [2]. The circuit is mainly intended for the ISM (Industrial, Scientific and Medical) and SRD (Short Range Device) frequency bands at 315, 433, 868 and 915 MHz. We selected a transmission frequency of 433 MHz. Figure 3 shows the simplified block diagram of the CC1000.
Fig. 4 Synchronous NRZ mode
Fig. 3 Simplified block diagram of the CC1000
There are 28 8-bit configuration registers, each addressed by a 7-bit address. A Read/Write bit initiates a read or write operation. A full configuration of the CC1000 requires sending 22 data frames of 16 bits each (7 address bits, an R/W bit and 8 data bits). At reception, data is detected through a digital demodulator: the IF signal is sampled and its instantaneous frequency is detected; the result is decimated and filtered. In the data slicer, the data filter output is compared to the average filter output to generate the data output. The averaging filter is used to find the average value of the incoming data. While the averaging filter is running and acquiring samples, it is important that the numbers of high and low bits received are equal. Therefore all modes, including synchronous NRZ mode, need a DC-balanced preamble for the internal data slicer to acquire the correct comparison level from the averaging filter. The preamble used is a '010101…' bit pattern. This is necessary for the bit synchronizer to synchronize correctly. The averaging filter must be locked before any NRZ data can be received. If the averaging filter is locked (MODEM1.LOCK_AVG_MODE = '1'), the acquired value will be kept also after Power Down or Transmit mode. After a modem reset (MODEM1.MODEM_RESET_N), or a main reset (using any of the standard reset sources), the averaging filter is reset. In a polled receiver system the automatic locking can be used. The programming of the CC1000 requires an initializing routine which consists of programming all registers, starting with address 00H up to 46H, followed by a PLL and VCO calibration routine. Finally the TX/RX routine allows data transmission. Due to the bandwidth requirement, the programmed baud rate is set to the maximum value, which is 76.8 kBaud in synchronous NRZ mode.
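A single register write therefore packs the 7-bit address, the R/W flag and the 8 data bits into one 16-bit frame clocked out over the 3-wire interface. The following sketch assumes MSB-first shifting and hypothetical GPIO helpers (pclk_set, pdata_set); the exact timing should be taken from the CC1000 datasheet:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical GPIO helpers for the 3-wire interface (PCLK/PDATA). */
extern void pclk_set(bool level);
extern void pdata_set(bool level);

/* Write one CC1000 configuration register: 7 address bits, an R/W
 * bit (1 = write) and 8 data bits, shifted out MSB first. */
void cc1000_write_reg(uint8_t addr, uint8_t data)
{
    uint16_t frame = ((uint16_t)(addr & 0x7F) << 9)  /* 7-bit address */
                   | (1u << 8)                       /* R/W = write   */
                   | data;                           /* 8 data bits   */

    for (int bit = 15; bit >= 0; bit--) {
        pdata_set((frame >> bit) & 1u);   /* present bit on PDATA     */
        pclk_set(true);                   /* data clocked on PCLK edge*/
        pclk_set(false);
    }
}
```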
The data acquisition module

The acquisition board is based on the MSP430F5419 microcontroller. The main reason for using this microcontroller is its on-board 12-bit ADC. Its multi-I/O architecture also allows direct-to-chip analog signal input of all 12 channels without external multiplexing. The 12 external and 4 internal analog signals are selected as the channel for conversion by the analog input multiplexer. The input multiplexer is a break-before-make type to reduce input-to-input noise injection resulting from channel switching. The assigned transceiver is programmed using the serial data output from the CC1000 transceiver. Figure 5 shows the simplified block diagram of the interface module.
sion end. The cyclic redundancy check (CRC) module provides a signature for a given data sequence. The CRC module produces a signature for a given sequence of data values. The signature is generated through a feedback path from data bits 0, 4, 11, and 15. The CRC signature is based on the polynomial given in the CRC-CCITT-BR polynomial shown in eq1: The CRC generator is first initialized by writing a 16-bit word (seed) to the CRC Initialization and Result (CRCINIRES) register. Any data that should be included into the CRC calculation must be written to the CRC Data Input (CRCDI or CRCDIRB) register in the same order that the original CRC signature was calculated. The actual signature can be read from the CRCINIRES register to compare the computed checksum with the expected checksum. To allow parallel processing of the CRC, the linear feedback shift register (LFSR) functionality is implemented with an XOR tree. The power management module (PMM) includes an integrated voltage regulator that supplies the core voltage to the device and contains programmable output levels to provide for power optimization. The PMM also includes supply voltage supervisor (SVS) and supply voltage monitoring (SVM) circuitry, as well as brownout protection. The brownout circuit is implemented to provide the proper internal reset signal to the device during power-on and poweroff. The SVS/SVM circuitry detects if the supply voltage drops below a user-selectable level and supports both supply voltage supervision (the device is automatically reset) and supply voltage monitoring (SVM, the device is not automatically reset). SVS and SVM circuitry is available on the primary supply and core supply. In standby mode (LPM3 RTC Mode) the overall measured current consumption of the microcontroller was 2.80 μA. This is a very good value for a portable interface unit. After the preconditioning of the signals, the implemented software filtering routine is capable of a real time noise reduction for a maximum of 3 allocated channels. Noise filtering is done by n=25 point moving average routine. A tremendous advantage of the moving average filter is that it can be implemented with an algorithm that is very fast. The moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form, this is written in eq2:
Fig. 5 Acquisition interface module block diagram The End-Data section from the data transmission structure contains a computed CRC value at the end of each transmis-
where the ci are the known smoothing coefficients for symmetric averaging (ci = 1/n for a plain moving average). The moving average filter is in fact a convolution using a very simple filter kernel. The only limitation
of such a filter is that it is not suitable for frequency-domain shaping, because the moving average filter cannot separate one band of frequencies from another. An improvement in computational efficiency can be achieved if we perform the calculation of the mean in a recursive fashion. A recursive solution [3] is one which depends on a previously calculated value. Suppose that at any instant k, the average of the latest n samples of a data sequence xi is given by eq. (3):

y(k) = (1/n) · Σ (i = k−n+1 … k) x(i)    (3)
Similarly, at the previous time instant, k−1, the average of the latest n samples is:

y(k−1) = (1/n) · Σ (i = k−n … k−1) x(i)
which on rearrangement gives:

y(k) = y(k−1) + (x(k) − x(k−n)) / n

In fact this is a very simple filtering method, but in practice the results are very good, and the low computation time allowed us to use the routine for 3 acquisition channels. These channels also offer automatic signal level scaling; all inputs are calibrated for 1 V. If the 3 noise-reduction-capable channels are programmed for differential mode, then obviously the maximum channel number is reduced to 9. Further signal processing is done by the host PC, which has much more powerful DSP routines.
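In code, the recursion replaces the n−1 additions per output sample with one addition and one subtraction. A minimal C sketch of the n = 25 point routine, using a circular buffer and identifiers of our own choosing (the structure must be zero-initialized before first use):

```c
#define MA_N 25                        /* filter length from the text */

typedef struct {
    double buf[MA_N];                  /* last n input samples        */
    int    head;                       /* index of the oldest sample  */
    double sum;                        /* running sum of the buffer   */
} mavg_t;

/* Recursive update: y(k) = y(k-1) + (x(k) - x(k-n)) / n, realized
 * by maintaining the running sum instead of re-adding all n points. */
double mavg_step(mavg_t *m, double x)
{
    m->sum += x - m->buf[m->head];     /* add newest, drop oldest     */
    m->buf[m->head] = x;
    m->head = (m->head + 1) % MA_N;
    return m->sum / MA_N;
}
```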
III. CONCLUSIONS

This wireless link is fairly simple and is more suitable for patient monitoring than for diagnosis. Because the resulting overall bandwidth is in the mid range, the acquisition rate may be programmed from 400 up to 4000 samples/s. By the sampling theorem, which states that

fs ≥ 2 · fmax,

the frequency content of an analog ECG signal, which lies in the 0.5–100 Hz domain, is acquired without significant losses even at the lowest sampling rate. These sampling rate values are correlated with the number of channels used. Regarding the averaging filtering method: if the spectra of the noise and signal components do not overlap in the frequency domain, one can simply design a filter that keeps or enhances the desired signal term, x(t), and discards the unwanted noise term, n(t). While this is a simple and useful way of cleaning up a signal, the approach does not work in many instances, because the biological signal and noise spectra may overlap. This is especially true in the case of an acquisition tool designed for in-depth signal analysis, in other words for medical research. As a future development of this wireless link we intend to add to the host PC a set of wavelet routines for the rejection of signal baseline wobble and for selective power supply noise removal. A multi-address wireless network may also allow more patients to be monitored using a single host computer.

ACKNOWLEDGMENT

We would like to thank Texas Instruments for the electronic samples and software which enabled us to produce the experimental model.

REFERENCES

1. Texas Instruments. Mixed Signal Microcontrollers
2. Chipcon Products, Texas Instruments. Single Chip Very Low Power RF Transceiver
3. Mitra S.K. Digital Signal Processing. A Computer-Based Approach. McGraw-Hill
4. Luecke J. Analog and Digital Circuits for Electronic Control System Applications Using the MSP430 Microcontroller. Elsevier, ISBN 0-7506-7810-0
5. Enderle J., Blanchard S., Bronzino J. Introduction to Biomedical Engineering. Elsevier Academic Press, ISBN 0-12-238662-0
6. Prutchi D., Norris M. Design and Development of Medical Electronic Instrumentation. Wiley-Interscience, ISBN 0-471-67623-3

Author: Nicolae Marius Roman
Institute: Technical University of Cluj-Napoca/Biomedical Engineering Department
Street: 26-28 Gh. Baritiu
City: Cluj-Napoca
Country: Romania
Email: [email protected]; [email protected]
Corneal Quantitative Fluorometry – A Slit-Lamp Based Platform

J.P. Domingues1,2, Isa Branco2, and A.M. Morgado1,2

1 Biomedical Institute for Research on Light and Image – University of Coimbra, Coimbra, Portugal
2 Department of Physics, University of Coimbra, Coimbra, Portugal
Abstract— Ocular fluorometry has long been used (since the early eighties) to measure non-invasively the presence and concentration of tracers in ocular tissues and fluids. The most common tracer has been sodium fluorescein, after systemic administration, but tissue native fluorescence has also been clinically valuable. Our goal is the development of a cooled-CCD-camera based instrument configured as an accessory to a slit-lamp – a common instrument in ophthalmic observation of the anterior eye – capable of measuring fluorescence in the eye from the cornea to the anterior vitreous with sufficient sensitivity and spatial resolution. A sensitivity of 0.1 ng/ml fluorescein equivalent concentration and a 100 µm axial spatial resolution have been achieved in in vitro tests. This represents a crucial step forward in slit-lamp based quantitative measurements, as several new clinical issues can be addressed: corneal auto-fluorescence and its relation with diabetic retinopathy, and corneal function evaluation, are two of them. With these figures of sensitivity and spatial resolution one can use narrower excitation bands at different wavelengths to address different fluorophores, and also reduce slit widths and optimize angular positioning in order to reach inner locations in the eye with sufficient axial resolution. Accurate corneal in vivo fluorescence quantification, evaluating its relation with age and with pathologies like diabetes, is our first step; some results have already been achieved and are presented. These developments will also make it possible to improve the quantification of Blood-Aqueous Barrier (BAB) leakage into the anterior chamber and to assess anterior vitreous fluorescence resulting from Blood-Retinal Barrier (BRB) breakdown. Both are closely related with diabetes progression.

Keywords— Ocular Fluorometry, Slit-Lamp, CCD multielement sensors.
I. INTRODUCTION

Diabetes has gained a strong social and economic impact on health care over the last decades. Its prevalence worldwide – mainly in developed countries – has increased dramatically: the 2007 National Diabetes Fact Sheet [1], the most recent data available in the US, states that in the age group of 20 years or older 23.5 million people (or 10.7% of the total) have diabetes and 57 million have pre-diabetes. Undiagnosed diabetes accounts for 5.7 million, and 1.6 million new cases are diagnosed each year (mostly in the age group of 40–59 years). Loss of vision is one of the major complications, and diabetes is the leading cause of new cases of
blindness among adults aged 20–74 years (diabetic retinopathy causes 12,000 to 24,000 new cases of blindness each year in the US). In the European Union, according to a report issued by the International Diabetes Federation [2], the number of people suffering from diabetes increased by almost 20% (to 31 million) during the 2003–2006 period. The prevalence rate forecast for 2025 made in 2003 had already been reached and surpassed in some countries by 2006. Only 13 of the EU's 27 member states have national plans to address diabetes, and in some of them the direct costs reach 18% of total health care spending. Diabetic Retinopathy (DR) and Diabetic Neuropathy (DN) are among the major chronic complications of diabetes mellitus. DR is the leading cause of blindness among the adult population, and peripheral DN is responsible for 50–75% of non-traumatic amputations. In both diseases early diagnosis is a key factor in defining treatments and new therapies. Ocular fluorometry has long been used as an early diagnostic tool for DR [3][4][5] through quantification of Blood-Retinal Barrier leakage. In the eighties the first commercial ocular fluorometer became available (Fluorotron Master, Ocumetrics, USA), and since then several research studies have been published relating the amount of sodium fluorescein leakage into the vitreous (after systemic administration of the sodium fluorescein tracer) to the grade of diabetic retinopathy, and even to pre-retinopathy states in diabetic patients [3][4][5][7]. More recently this line of research has been reinforced using more sophisticated instrumentation, including ocular fundus angiographs [6]. The Blood-Aqueous Barrier (BAB) has also proved to be a good indicator of alterations in blood vessel permeability and, consequently, of the possibility of measuring DR progression [3][7][13]. Finally, consistent studies indicate that corneal auto-fluorescence is also a good indicator of metabolic control in diabetic patients and is thus related to DR grading [8][9][11][14]. When measuring naturally occurring fluorescence, such as corneal auto-fluorescence, there is no need for tracer injection, which is an enormous advantage since adverse reactions to fluorescein can occur. There is also no need to take blood samples, as in the case of blood-ocular barrier permeability evaluation. Therefore, accurate quantification of corneal auto-fluorescence, which is usually as low as 10
ng/ml fluorescein equivalent concentration, is much needed and opens new possibilities in clinical diagnosis.
II. MATERIALS AND METHODS

A. Ocular Fluorometer Hardware

A slit-lamp based ocular fluorometer has been developed by our group (US patent 06,013,034, EP 0 656 759 B1) with a multi-element sensor to quantify fluorescence along line segments of the ocular globe by electronic scanning. A clinical study to measure fluorescein leakage into the anterior chamber after systemic administration has already been conducted [13]. A new data acquisition system has been developed to improve sensitivity, measurement resolution, portability and programmability. This is achieved by using a dsPIC microcontroller (dsPIC30F6012A, Microchip, USA) together with a 16-bit, 1.25 MSPS ADC, allowing the possibility of using cooled CCD cameras. Communication with the PC (for PIC programming and data reading) is done either by USB or RS-232, and a robust power supply for the overall instrument has been included. Figure 1 depicts a simplified block diagram. In vitro performance tests have been performed with this new architecture. This new hardware setup, and mainly the possibility of using a cooled, highly sensitive CCD camera, represents a crucial advancement in performance (sensitivity and spatial resolution), which will allow other locations in the eye (cornea and vitreous) to be addressed and, mainly, the quantitative evaluation of different clinical situations to be tested (corneal auto-fluorescence and its relation with the progression of diabetes, corneal epithelial and endothelial function, effects of contact lenses on the cornea, inflammation follow-up, vitreous fluorophotometry). Of course, it keeps compatibility with a set of lower grade NMOS multi-element image sensors. The system is configured as an add-on to an ordinary slit-lamp – the most common ophthalmic equipment for anterior segment observation – which gives the system the capability of widespread clinical use (Fig. 1).

B. Optical Setup

To reach the best results making use of the slit-lamp basic optics, some additional components can be used. Excitation filters must be selected according to the application; the standard slit-lamp filter set does not usually fit our demands. For preliminary tests we used a band pass filter (460–490 nm) with peak transmission (90%) at 480 nm. On the emission side we introduced a standard high pass filter, HP 500 nm. A Zeiss 30SL/M slit-lamp was used. Optical amplification is given by the combination of an objective lens and a focusing lens in a classic two-lens system (M = f2/f1), which can be used in conjunction with the slit-lamp built-in Galilean system. Another possibility is the use of a cylindrical lens to increase the image illuminance over the one-dimensional array detector for better detectivity. With high sensitivity camera sensors, narrower excitation bands can be used, improving fluorophore selectivity, and narrower slit widths can be selected, improving spatial resolution. A lower pixel pitch can, of course, also contribute to a better resolution.
Fig. 1 Overall system diagram
III. RESULTS

A. In Vitro

A graphical user interface and software tools for data collection and analysis have been developed using MatLab, and preliminary tests have been performed using a Hamamatsu C5809 multichannel detector head. This uses a thermoelectrically cooled FFT-CCD sensor with a 24 µm pixel size in line binning operation, using the charge integration method. Figures 2 and 3 show results of in vitro measurements to determine linearity at low fluorescein concentrations (of the order of corneal auto-fluorescence equivalent values, about 10 ng/ml fluorescein equivalent). Linearity was determined to be 2% (largest percentage deviation of experimental points from the best line fit).
Fig. 2 Linearity of measurements for low fluorescein concentrations

Fig. 3 Output response for in vitro measurements

Another important parameter is the Lowest Level of Detection (LLOD), defined as the background value (0 ng/ml) plus twice the standard deviation of its measurement distribution. It was found to be 0.1 ng/ml with the cooled CCD camera and 1.5 ng/ml fluorescein with conventional NMOS sensors. We also measured the lateral spatial resolution using the USAF 1951 target. Figure 4 depicts one of the results obtained with Group 1, Element 6 (3.56 LP/mm). Measurement resolution depends on the Galilean system optical amplification and on the focusing lenses used. Of course, pixel-to-pixel pitch and overall optics quality are important. We were able to reach 100 µm resolution. This relates directly to ocular axial resolution as long as 90° slit-lamp geometry measurements can be done, which is the case for corneal measurements.

Fig. 4 Measurements of lateral spatial resolution with USAF 1951 test target

B. Preliminary Corneal Measurements

The fluorometer proved to be accurate enough to measure lens auto-fluorescence and anterior chamber fluorescence (after either IV or oral fluorescein administration). Cornea auto-fluorescence has significantly lower intensity, and its axial extent is only about 500 µm. We used the already mentioned cooled CCD camera with very low dark charge and tested different measurement geometries, optical amplifications, filter sets and slit widths to obtain the best results. Signal amplification must also be accurately defined. We were able to perform preliminary in vivo corneal measurements in some volunteers with different measurement setups, but measurement calibration and the definition of standard protocols must proceed and be established. However, promising results can already be foreseen. Figure 5 shows measurements with 90° slit-lamp geometry and 1 second integration time. The optical setup was the one already briefly described, using a 65 mm focal length focusing lens and 30× Galilean amplification. The left-end peak is the beginning of the lens auto-fluorescence peak (not shown). Two consecutive scans are shown, with the Y-scale representing sensor output in ADC relative units. Inter-scan reproducibility was found to be 7%. Much remains to be done towards calibration standards and geometry/optical settings optimization and, finally, in vivo validated studies to ensure the efficiency of the instrument in reporting clinically relevant parameters and diagnoses.
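For reference, the LLOD figure quoted above follows from repeated blank measurements as the mean plus twice the standard deviation. A short C sketch, under the assumption that the readings have already been converted to ng/ml fluorescein equivalent units:

```c
#include <math.h>
#include <stddef.h>

/* Lowest Level of Detection: mean of repeated blank (0 ng/ml)
 * measurements plus twice their standard deviation. Requires n >= 2. */
double llod(const double *blank, size_t n)
{
    double mean = 0.0, var = 0.0;

    if (n < 2)
        return 0.0;

    for (size_t i = 0; i < n; i++)
        mean += blank[i];
    mean /= (double)n;

    for (size_t i = 0; i < n; i++)
        var += (blank[i] - mean) * (blank[i] - mean);
    var /= (double)(n - 1);            /* sample variance             */

    return mean + 2.0 * sqrt(var);
}
```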
Fig. 5 Preliminary results of quantification of corneal auto-fluorescence (two scans)
REFERENCES

1. 2008 National Diabetes Fact Sheet: general information and national estimates on diabetes in the United States. US Department of Health and Human Services, Atlanta, GA, 2008
2. Hall M. Together we are stronger. Report presented to the IDF Europe General Assembly, Sep. 2008
3. Yoshida A. et al. Permeability of blood-ocular barriers in adolescent and adult diabetic patients. Br J Ophthalmol 1993; 77:158-161
4. Cunha-Vaz J.G. et al. Blood-retinal barrier permeability and its relation to progression of retinopathy in patients with type 2 diabetes. A four-year follow-up study. Graefe's Arch Clin Exp Ophthalmol 1993; 231:141-145
5. Cunha-Vaz J.G. The blood-ocular barriers: past, present and future. Documenta Ophthalmologica 1997; 93:149-157
6. Lobo C., Bernardes R., Figueira J. et al. Three-year follow-up study of blood-retinal barrier and retinal thickness alterations in patients with type-2 diabetes mellitus and mild non-proliferative diabetic retinopathy. Arch Ophthalmol 2004; 122:211-217
7. Schalnus R., Ohrloff C., Jungmann E., Maaß K., Rinke S., Wagner A. Permeability of the blood-retinal barrier and the blood-aqueous barrier in type I diabetes without diabetic retinopathy: simultaneous evaluation with fluorophotometry. German J Ophthalmol 1993; 2:202-206
8. Stolwijk T.R., van Best J.A. Corneal auto fluorescence by fluorophotometry as indicator of diabetic retinopathy. Invest Ophthalmol Vis Sci 1991; 32(Suppl):1067
9. van Schaik H.J., Coppens J., van den Berg T.J., van Best J.A. Autofluorescence distribution along the corneal axis in diabetic and healthy humans. Exp Eye Res 1999; 69(5):505-510
10. Cunha-Vaz J., Domingues J.P.P., Correia C.M.B.A. Ocular Fluorometer. European patent EP 0 656 759 B1 (1998)
11. van Best J. et al. Simple, low-cost, portable corneal fluorometer for detection of the level of diabetic retinopathy. Applied Optics 37(19):4303-4311
12. Cunha-Vaz J., Domingues J.P.P., Correia C.M.B.A. Ocular Fluorometer. US patent 06,013,034 (2000)
13. Domingues J.P.P., Figueira J., Correia C.M., Cunha-Vaz J.G. Blood-aqueous barrier permeability assessment by ocular fluorescence measurements after oral and IV fluorescein administration. IFMBE Proceedings, Vol. 11 (EMBEC'05), Prague, Czech Republic, 2005
14. Ishito S. et al. Corneal and lens autofluorescence in young insulin-dependent diabetic patients. Ophthalmologica 1998; 212:301-305

Author: José Paulo Domingues
Institute: Biomedical Institute for Research on Light and Image
Street: Az. Sta Comba - Celas
City: Coimbra
Country: Portugal
Email: [email protected]
Automatic Detection of Patients' Spontaneous Activity During Pressure Support Ventilation

G. Matrone1, F. Mojoli2, A. Orlando2, A. Braschi2, and G. Magenes1

1 Dept. of Computer Engineering and Systems Science, University of Pavia, Pavia, Italy
2 Dept. of Surgical Sciences - Anaesthesia and Intensive Care, University of Pavia, Pavia, Italy
Abstract— The occurrence of significant patient-ventilator asynchronies in assisted ventilation modes is a pressing problem in clinical practice. Addressing this question, an original software tool has been developed and is proposed here. This tool implements a new automatic technique to identify the beginning and the end of the patient's respiratory effort, events that are sometimes missed, or detected with significant delay, by the ventilator. Its performance has been evaluated on a set of signals coming from 6 ICU patients and including 6445 respiratory acts; it proved to outperform the machine, increasing the proportion of respiratory acts assisted without significant delay from 22% to 70%. The presented tool is the first step in the development of a hardware-software device to be directly interfaced with the ventilator, in order to act as a monitoring aid for the clinician and possibly to directly drive the device activity.

Keywords— Pressure Support Ventilation, asynchrony, inspiratory trigger, expiratory trigger.
I. INTRODUCTION
Positive Pressure mechanical Ventilation (PPV) is the fundamental assistive resource in modern Intensive Care Units (ICU) for patients suffering from respiratory failure. Basically, it aims to provide the patient with adequate oxygenation and removal of carbon dioxide. Mechanical ventilation devices are said to work in controlled mode when the breathing cycle is completely machine-controlled, or in assisted mode when the intervention is synchronized to the patient's spontaneous breathing activity. Among the latter, Pressure Support Ventilation (PSV) is the most widespread assisted ventilation mode in clinical practice. In assisted mode, the ventilator synchronizing systems (i.e. the inspiratory and expiratory triggers) should detect respiratory muscle activations and relaxations in order to correctly drive the device activity [1], which consists in the opening/closing of the inspiratory and expiratory valves. In modern ventilators, both inspiratory and expiratory triggering systems are flow-based. The first opens the inspiratory valve, providing pressure support to the patient whenever his/her spontaneous activity generates an inspiratory flow exceeding a set threshold. The expiratory
trigger instead causes the inspiratory valve closure while opening the expiratory one, but only when the inspiratory flow falls below an adjustable threshold (usually defined as a fraction of the peak inspiratory flow). Particularly when treating patients with obstructive pulmonary disease, patient-machine asynchronies are prone to occur frequently. In these cases the machine is often unable to recognize the patient's spontaneous breathing activity, or recognizes it only with a significant delay. As a matter of fact, surveys in the literature report that asynchronies can be noticed in 10 to 97% of respiratory acts during assisted ventilation [2]. A non-optimal interaction between patient and ventilator can either directly damage the respiratory muscles or, in any case, cause complications leading to prolonged mechanical ventilation, a longer ICU stay and a worse outcome [3][4]. Alternative ventilation modes have been developed in order to address this problem [4][5]. However, they are not always implemented in common ICU ventilators, and their superiority has not yet been assessed in everyday clinical practice. In this paper a new software tool for the automatic detection of the patient's spontaneous respiratory activity – i.e. the contraction and subsequent relaxation of the respiratory muscles – is presented. The automatic detection technique introduced here is based on the identification of sudden changes in the flow trajectory due to the patient's respiratory activity. This improved capability will be employed to monitor patient-ventilator synchronization in real time during traditional (flow-based) trigger operation, and it could also be used to directly drive the ventilator activity.
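The paper does not disclose the detection algorithm itself at this point; purely as an illustration of the idea (flagging a sudden, sustained change in the flow trajectory), one could compare the short-term flow slope against the preceding baseline trend. Every window length, threshold and identifier below is our assumption, not the authors' implementation:

```c
#include <stddef.h>
#include <stdbool.h>

#define FS          68.0    /* sampling rate used in the study (Hz)  */
#define BASE_WIN    10      /* samples defining the baseline trend   */
#define SUSTAIN     4       /* samples the deviation must persist    */
#define SLOPE_THR   50.0    /* slope jump threshold (ml/s per s)     */

/* Returns true at sample k if the local flow slope deviates from the
 * preceding baseline slope by more than SLOPE_THR for SUSTAIN
 * consecutive samples - a crude marker of inspiratory effort onset. */
bool effort_onset(const double *flow, size_t k)
{
    if (k < BASE_WIN + SUSTAIN)
        return false;

    /* Baseline slope from a first/last difference over BASE_WIN. */
    double base = (flow[k - SUSTAIN] - flow[k - SUSTAIN - BASE_WIN])
                  * FS / BASE_WIN;

    for (size_t i = k - SUSTAIN + 1; i <= k; i++) {
        double slope = (flow[i] - flow[i - 1]) * FS;
        if (slope - base < SLOPE_THR)
            return false;              /* deviation not sustained    */
    }
    return true;
}
```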
II. MATERIALS AND METHODS
A. Patient recruitment

Data presented in this paper refer to a set of signals recorded from 6 ICU patients, comprising 6445 respiratory acts. All the enrolled subjects were hospitalized at the Intensive Care Unit, Policlinico S. Matteo, Pavia, Italy, and assisted by means of the Galileo ICU ventilator (Hamilton Medical AG, Rhäzüns, Switzerland), running in PSV mode. Selected patients were characterized by a high
frequency of asynchronies, as was detectable on the ventilator monitor screen. These patients were all difficult to wean from mechanical ventilation; as a matter of fact, they had undergone PSV for about 15 days. Patients' respiratory data recordings, used for the subsequent analysis, lasted about one hour on average. The ventilator respiratory cycling operations, i.e. the inspiratory/expiratory triggers, were flow-controlled; their threshold values were set to 3 l/min and to 30% of the inspiratory flow peak value, respectively. The ventilator was connected via an RS-232 serial port to a stand-alone personal computer (Intel Core 2 Duo, 2.4 GHz) running the data acquisition software. Pressure, flow and volume data recorded by the machine sensors were acquired at a sampling rate of 68 Hz. While running, the software stored all data records in a text file, which was available to the clinician at the end of the recording for further analysis.

B. Visual analysis

Modern ventilators are able to display the patient's airway pressure and airflow variations in real time, allowing clinicians to visually evaluate patient-ventilator interaction [3]. In particular, patient-ventilator asynchronies are usually identified by the clinician by carefully observing the airflow curve (Fig. 1) [1]. Such events can be broadly classified as inspiratory or expiratory asynchronies. Concerning the former, the ventilator may be significantly late in assisting the patient's inspiratory act (inspiratory delay) or may not deliver any support at all (ineffective effort). The inspiratory delay is defined as the time between the beginning of the inspiratory effort and the opening of the inspiratory valve by the ventilator; the real beginning of the inspiratory effort can be recognized as a sudden and sustained upward deviation of the flow signal (Fig. 1). Expiratory asynchronies, on the other hand, take place when the expiratory valve is opened too early (early cycling-off) or too late (delayed cycling-off) with respect to the relaxation of the patient's inspiratory muscles. Also in this case, the patient's relaxation (i.e. the end of the inspiratory effort) can be detected as a sudden change in the flow trajectory. Similarly to the inspiratory delay, the expiratory delay represents the time between the typical flow trajectory change and the opening of the expiratory valve (Fig. 1). In this work, visual inspection was performed by a single operator (A. O.) and used as the reference standard for software validation.

Fig. 1 Typical patient-ventilator asynchrony patterns visible on the airflow waveform: inspiratory delay, expiratory delay and ineffective effort.

C. Data visualization and analysis software

A new software tool has been developed in order to graphically represent and process respiratory data coming from mechanically ventilated ICU patients (Fig. 2). Up to now, it has been used as an off-line processing tool, working on data recordings already stored in text files by the ventilator data acquisition software (sampling rate = 68 Hz). A real-time implementation is currently under development, in order to combine data acquisition with online visualization and processing. The visualization and analysis software was designed ad hoc using LabVIEW (National Instruments Corp., Austin, TX, USA).

Fig. 2 The designed software graphical interface. Airway pressure (top), flow (middle) and the synthetic signal (bottom) waveforms are displayed.

As far as signal display is concerned, after choosing the data file to be processed, the tool can represent airway pressure (cmH2O), volume (ml) and airflow (ml/s) curves, dynamically evolving in time. The clinician can select which of these waveforms are to be displayed and set the width of the temporal window shown per screen (5-15-30 s). The evolution of the selected waveforms on the screen can be paused in order to switch to a more accurate visual analysis mode. Moveable cursors are made visible on each of the three paused graphs, and the user can always see which value (both time and amplitude) the cursor is pointing at while it is moved over the curve. Cursors can also be used to mark any point of interest on
plots, to be saved into a text file for subsequent analysis (e.g. the contractions and relaxations of the respiratory muscles). The system does not merely display physiological curves: the user can also choose to visualize an additional waveform, synthetically generated by the software itself (Fig. 2, third graph). This new signal will represent the basis for the development of an innovative and more accurate triggering system; henceforth, it will be called the flow trajectory trigger. The conceived algorithm mainly involves the computation of the airflow signal derivative. Since flow data can be affected by high-frequency noise, digital low-pass prefiltering is necessary in order to avoid the generation of undesired spikes and alterations of the new signal's shape. After a preliminary Fourier analysis, different filter implementations were tested in order to find the most appropriate one and to determine a trade-off between noise filtering and signal delay. In particular, both a moving average and a 3rd order digital Butterworth IIR low-pass filter with a cut-off frequency of 2 Hz (the significant frequency band of the flow signal lies almost entirely under 5 Hz) produced the desired result. However, considering that IIR filters have a non-linear phase response and can also alter the signal shape, in the end the airflow signal was smoothed using a moving average filter (span = 25 samples). In order to generate the synthetic signal (ml/s2), the digitally filtered airflow signal must be numerically differentiated over time. We have defined two temporal windows in which the synthetic signal is calculated by the system (otherwise it is set to zero). The first time window is placed between the inspiratory peak of the filtered flow and the cycling-off time (expiratory valve opening and inspiratory valve closure). The second one starts when the filtered flow reaches 90% of the peak expiratory (filtered) flow and lasts until the opening of the inspiratory valve.
Fig. 3 Behavior of the new monitoring system during inspiratory/expiratory delays and ineffective efforts. Typical asynchrony patterns are shown by the flow curve (top) and by the corresponding synthetic signal (bottom). Green and red dots are produced by the system to identify the patient's activity.
If Δd is the variation of the filtered flow derivative, computed in each of these temporal windows, the signal value equals Δd when the derivative is positive and −Δd when the derivative is negative. This way, the signal is positive whenever the flow trajectory deviation represents a contraction of the inspiratory muscles, and negative whenever it represents a relaxation of the inspiratory muscles. A simple threshold comparison can then be used to identify triggering events. An inspiratory trigger occurrence is "announced" by our automatic system when the variation of the synthetic signal significantly exceeds a user-defined positive threshold (e.g. 150 ml/s2). This event is pointed out by the software by displaying a green dot (Figs. 2, 3) just below the flow curve. Similarly, an expiratory trigger is signaled by a red dot whenever the signal variation falls significantly below a negative threshold (e.g. -100 ml/s2) (Figs. 2, 3). This new expiratory trigger works together with the traditional flow-based one in a competitive manner: as a matter of fact, a red dot also appears when the flow decreases below a user-defined flow threshold. All the mentioned threshold values can be set and modified by the clinician using the graphical interface. Yellow horizontal line segments, instead, correspond to the mechanical inspiratory time, that is, the time between inspiratory valve opening and closure (Fig. 3).
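The processing chain just described (moving-average smoothing with a 25-sample span, numerical differentiation, signed derivative variation, threshold comparison) can be sketched as follows. This is a simplified reconstruction, not the LabVIEW implementation: the window logic tied to the valve timings is omitted, and all names and event-selection details are our own.

```python
import numpy as np

def flow_trajectory_events(flow_mls, fs=68.0, span=25,
                           pos_thresh=150.0, neg_thresh=-100.0):
    """Sketch of the flow-trajectory trigger on a flow signal in ml/s.

    Smooths the flow with a moving average (span samples), differentiates
    it, and signs the derivative variation as in the text: positive values
    flag an inspiratory-muscle contraction ("green dot"), negative values a
    relaxation ("red dot"). Thresholds are in ml/s^2.
    """
    kernel = np.ones(span) / span
    smooth = np.convolve(flow_mls, kernel, mode="same")   # low-pass prefilter
    deriv = np.gradient(smooth) * fs                      # flow derivative, ml/s^2
    delta = np.diff(deriv, prepend=deriv[0])              # variation of the derivative
    synthetic = np.where(deriv >= 0, delta, -delta)       # signed synthetic signal
    green = np.where(synthetic > pos_thresh)[0]           # inspiratory trigger samples
    red = np.where(synthetic < neg_thresh)[0]             # expiratory trigger samples
    return green, red

# Toy demo: an abrupt inspiratory effort at t = 2 s on an otherwise flat signal
t = np.arange(0, 5, 1 / 68.0)
flow = np.where(t > 2.0, 400.0, 0.0)                      # ml/s
green, red = flow_trajectory_events(flow)
print(green[:3], red[:3])
```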
III. RESULTS
Patient recordings analyzed in this paper lasted about 6 hours altogether. Visual analysis succeeded in identifying 6445 respiratory acts with their corresponding muscular contraction and relaxation times. In some cases, the visual analysis was not able to recognize the patient's activity with sufficient reliability; these data (less than 2%) were therefore discarded. The operator was able to visually identify 1758 ineffective efforts (27% of all patients' efforts). Among the 4687 assisted acts, the average inspiratory delay was 330±241 ms, while the average expiratory delay was 65±15 ms. A significant inspiratory delay (>200 ms) occurred in 3164 acts (49%); expiratory delays greater than 200 ms were instead observed in 650 acts (10%). Altogether, 51% of respiratory acts were assisted by the ventilator with significant delay; therefore, only 22% of the patients' efforts were correctly supported. On the other hand, our software recognized 6425 patients' respiratory acts (99.7%). In 5798 cases, both the contraction and the relaxation of the inspiratory muscles were detected; in 647 acts (10% of 6445) only the relaxation was identified. The beginning of the patients' inspiratory effort was recognized 126±96 ms after the operator did (visual inspection); in 1535 acts (17.6% of 6445) the delay was more than 200 ms. Considering the mechanically supported respiratory acts, the new system anticipated the ventilator
Fig. 4 Patient–ventilator interactions during traditional flow triggering versus flow trajectory triggering.
by 224±16 ms. Muscular relaxation was identified 15±17 ms after the operator did; only in 158 cases (2.5%) was this delay more than 200 ms. The percentages of non-assisted, well-assisted and delayed-assisted acts are summarized in Figure 4 for two different conditions: traditional flow-based triggering (the real condition) and our flow trajectory-based trigger (a hypothetical condition). In the latter case the new triggering system was assumed to drive the ventilator.
IV. DISCUSSION AND CONCLUSIONS
In our case study, the ventilator correctly supported the patient in little more than one case out of five. A high number of ineffective efforts was observed (in almost one out of four respiratory acts) and inspiratory/expiratory delays were detected in about 50% of assisted breaths. The newly developed software has been used by clinicians as a visualization and processing aid, in order to identify patient-ventilator asynchronies and to evaluate the quality of the mechanical ventilation device's behavior. Our visualization tool is highly sensitive, immediately highlighting ineffective efforts (which are actually the most typical asynchronies in PSV) in almost 99% of occurrences; it behaves similarly to, or sometimes even better than, previously developed methods [6] [7]. As far as triggering is concerned, our software identifies more than 63% of the efforts not assisted by the ventilator (Fig. 3), thanks to the automatic identification algorithm introduced above, based on the computation of the synthetic signal. Thus, if this tool were running in synergy with the machine itself, almost nine out of ten spontaneous breathing acts would probably be recognized and assisted. Even when the ventilator supports the patient's breathing activity, the software detects the muscular activity in advance (~220 ms) with respect to the machine; this way, the inspiratory delay could be significantly reduced. For example, we can see in Figure 3 that the first respiratory act is recognized by the ventilator, but with meaningful inspiratory and expiratory delays (yellow horizontal segment). The second respiratory act is completely missed by the ventilator. Our automatic system instead promptly identifies both the beginning and the end of the two patient's inspiratory efforts (green and red dots). Altogether, if our triggering system were to drive the ventilator, the correctly (without delays) assisted acts would be 70%, compared to 22% in the real case. Summarizing all the obtained results, we can assert that the implemented algorithm outperforms the machine's flow-based triggering system. So far, our software has proved to be a reliable offline monitoring system which is likely to improve patient-ventilator interaction. Further developments are foreseen in the immediate future. First of all, the described software functionalities are being improved in order to supply the tool with an automatic system for both asynchrony detection and classification. Next, a real-time implementation will be developed, including the data acquisition operations. Connecting the PC running our program to the ventilator will provide the clinician with an additional, more reliable monitoring and data analysis system. The long-term objective of this work is to provide the ventilator machine with a new flow trajectory based triggering system.
REFERENCES
1. Mojoli F, Venti A, Pozzi M, Via G, Braschi A (2009) Patient-ventilator interaction during Pressure Support Ventilation: how to monitor and to improve it. Proc. 63rd SIAARTI Conf., Florence, Italy, 2009, 75(7-8):533-536
2. Thille AW, Rodriguez P, Cabello B, Lellouche F, Brochard L (2006) Patient-ventilator asynchrony during assisted mechanical ventilation. Intensive Care Med 32(10):1515-1522
3. Georgopoulos G, Prinianakis G, Kondili E (2006) Bedside waveforms interpretation as a tool to identify patient-ventilator asynchronies. Intensive Care Med 32:34
4. Xirouchaki N, Kondili E, Vaporidi K et al. (2008) Proportional assist ventilation with load-adjustable gain factors in critically ill patients: comparison with pressure support. Intensive Care Med 34(11):2026-2034
5. Brander L, Leong-Poi H, Beck J et al. (2009) Titration and implementation of neurally adjusted ventilatory assist in critically ill patients. Chest 135(3):695-703
6. Mulqueeny Q, Ceriana P, Carlucci A et al. (2007) Automated detection of ineffective triggering and double triggering during mechanical ventilation. Intensive Care Med 33:2014-2018
7. Younes M, Brochard L, Grasso S et al. (2007) A method for monitoring and improving patient: ventilator interaction. Intensive Care Med 33:1337-1346
Author: Giulia Matrone
Institute: Dept. of Computer Engineering and Systems Science, University of Pavia
Street: via Ferrata, 1
City: 27100 Pavia
Country: Italy
Email: [email protected]
Determination of In Vivo Three-Dimensional Lower Limb Kinematics for Simulation of High-Flexion Squats

P.D. Wong1, B. Callewaert2, K. Desloovere2, L. Labey1, and B. Innocenti1

1 European Centre for Knee Research, Smith & Nephew, Leuven, Belgium
2 University Hospital Pellenberg, Katholieke Universiteit Leuven, Leuven, Belgium
Abstract—In vitro and numerical simulations of the knee require reasonable kinematic and load inputs and boundary conditions, in order to help ensure their clinical relevance. However, previous simulations of high-flexion squats often have applied loads and motions that possibly oversimplify the true knee kinematics. This study aimed to improve future simulations of squatting by obtaining three-dimensional squat kinematics from a cohort of healthy adults. Seventeen subjects (age range 24-75) underwent motion capture sessions using a standard, systematic clinical procedure. Joint positions were normalized versus femur and tibia segment lengths, and ground reaction forces were normalized versus body weight. Range of motion and velocity decreased with age. The ankle was more anterior to the hip with decreasing hip height. Dynamic squat kinematics were reported.
Keywords— squat, high flexion, motion analysis, knee simulator, healthy subjects
I. INTRODUCTION
Researchers often perform in vitro or computational studies to simulate in vivo knee biomechanics. This offers an alternative when in vivo studies are impractical or invasive for patients. The clinical relevance of these simulations then relies on the definition of plausible load inputs and boundary conditions. For example, previous studies have often used electromechanical systems and computer modeling to simulate the knee joint during a squat, which requires various assumptions about motion curves, loads, and muscle connections [1,2]. Although such studies have produced much useful information so far, their clinical relevance may still be limited. The squat kinematic simulators in the literature today are modeled on the "Oxford Rig" design reported in 1997 [3]. This machine advanced research capabilities at the time, as it was a controllable six-degree-of-freedom joint simulator that could produce vertical motion. However, it lacked the ability to control anteroposterior or mediolateral motion, and therefore could only simulate a simplified squat, more like a squat up against a wall (Fig 1). Better knee simulations should incorporate the full three-dimensional motions of the lower limb to be more realistic. This could hypothetically allow more accurate simulations of joint loads, which could then better aid the evaluation of knee pathology and treatments. However, the literature lacks attempts to define "average" or standard squat kinematics; without such data, the design of better test systems is based more on assumption than on population measurements. Considering this problem, this study attempted to define the three-dimensional lower-body kinematics of typical adult subjects while they performed high-flexion body-weight squats, to normalize the data, and to report them in a general form that can act as inputs to knee kinematics simulations, particularly for electromechanical machines more complex than the first Oxford Rig.
Fig. 1 Schematic lateral view of a simulated deep squat where the hip lies directly over the ankle, versus more realistic squat kinematics
II. MATERIALS AND METHODS
Seventeen adult subjects (age range 24-75, 6 female, 11 male) with no reported musculoskeletal pathologies volunteered for this study after giving informed consent. They each underwent one motion analysis session, using a 14-camera optical motion tracking system (Vicon, Oxford, UK), two forceplates (AMTI, Watertown, MA, USA), and a standard clinical kinematic model (Plug-in-Gait [4] with Knee Alignment Device [5], Vicon, Oxford, UK). In each session, a subject was asked to stand with the feet over two separate forceplates, spaced 115 mm apart, and then perform a high-flexion squat. This consisted of descending as far down as comfortably possible, and then
rising back up to the standing position, without using upper limb support (e.g. not holding the thighs with the hands). Beyond these instructions, all subjects used self-selected speeds and postures. Three repeated squat trials were taken, and one trial per subject with no loss of balance was identified for further analysis. Subject age, height, and mass were recorded. The motion tracking system measured ground reaction forces, calculated joint centers based on skin marker trajectories, and calculated joint rotations with Euler angles. Femur and tibia segment lengths were measured from the motion tracking data with automated algorithms (Matlab, Mathworks, Natick, MA, USA). Femur length was taken as the average distance between the hip joint center and the knee joint center throughout the squat, and tibia length was taken similarly between the knee and the ankle. The femur-to-tibia length ratio (Fem/Tib) and the total femur-plus-tibia leg length (Fem+Tib) were recorded for each subject. The data were then generalized so that they could be used as inputs into typical load- and motion-controlled knee simulators, which often use linear actuators. Linear translations were measured as follows (Fig 2). The vertical distance between the hip and ankle joint centers was taken as the hip height (HH). The distance of the ankle away from the hip was also recorded in the anteroposterior (AP) and mediolateral (ML) directions, with anterior and medial positions of the ankle giving positive values. The HH, AP, and ML directions were perpendicular and defined according to the laboratory coordinate system, since subjects faced the same direction throughout the squat. These distances were normalized by dividing them by the total Fem+Tib length. Ground reaction forces were normalized against individual body mass and expressed as N/kg. All dynamic data were also normalized in time across a 0-100% squat cycle, where the start and end points of a squat were defined at the times of maximal knee extension. The normalized data were averaged using a random leg of each subject, and the resulting curves were reported. Correlations were analyzed between age and the other discrete measurements, with the significance of the correlations tested at α=0.05.

Fig. 2 Dynamic joint distances reported between the ipsilateral ankle and hip.
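A minimal sketch of the segment-length and distance-normalization steps just described, assuming the joint-centre trajectories are available as N×3 arrays in the laboratory frame; the axis convention, function names and resampling choice below are ours, not the Plug-in-Gait output format.

```python
import numpy as np

def segment_lengths(hip, knee, ankle):
    """hip, knee, ankle: (N, 3) joint-centre trajectories in mm."""
    femur = np.linalg.norm(hip - knee, axis=1).mean()    # mean hip-knee distance
    tibia = np.linalg.norm(knee - ankle, axis=1).mean()  # mean knee-ankle distance
    return femur, tibia

def normalized_distances(hip, ankle, fem_plus_tib, n_points=101):
    """HH, AP and ML as fractions of Fem+Tib, resampled over a 0-100% cycle.

    Assumed laboratory axes: x = anterior, y = medial for the analysed leg,
    z = vertical; anterior/medial ankle positions come out positive.
    """
    hh = (hip[:, 2] - ankle[:, 2]) / fem_plus_tib
    ap = (ankle[:, 0] - hip[:, 0]) / fem_plus_tib
    ml = (ankle[:, 1] - hip[:, 1]) / fem_plus_tib
    cycle = np.linspace(0.0, 1.0, n_points)              # 0-100% squat cycle
    old = np.linspace(0.0, 1.0, len(hh))
    return tuple(np.interp(cycle, old, v) for v in (hh, ap, ml))
```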
III. RESULTS
Average subject characteristics (n=17) are summarized in Table 1. Subjects had a healthy average body-mass index but still exhibited a wide variety of characteristics.

Table 1 Subject characteristics

                      Mean    SD     Min    Max
Age (y)               49.8    15.3   23.9   75.4
Mass (kg)             71.8    13.5   45.3   97.6
Height (cm)           172.9   9.5    158.0  190.0
Body-mass index       23.9    3.8    17.7   30.8
Femur length (mm)     404.1   30.0   364.0  455.5
Tibia length (mm)     407.1   26.8   364.7  454.1
Fem+Tib (mm)          811.1   54.5   733.4  909.6
Fem/Tib ratio         0.993   0.040  0.931  1.067
Squat cycle time (s)  4.207   1.411  2.320  7.050
For each year older, squat times slowed by 0.0634 s, minimum HH increased by 0.50% of Fem+Tib length, and maximum knee flexion decreased by 0.70° (p<0.01) (Fig 3).
Fig. 3 Squat cycle time, minimum hip height (HH) and maximum knee flexion versus subject age, with linear best-fit trends (squat cycle time: y = 0.0634x + 1.0493, R² = 0.4757; HH minimum: y = 0.00503x + 0.23136, R² = 0.42499; knee flexion maximum: y = −0.6961x + 147.1, R² = 0.4283). HH is expressed as a percent of the femur+tibia length.
Ground reaction forces were nearly constant during the squat, with normalized mean anterior forces of 0.014 N/kg (SD 0.028), medial forces of 0.439 N/kg (SD 0.127), and upward vertical forces of 4.90 N/kg (SD 0.234) (Fig 4).
Fig. 6 Mean hip height (HH) superior to the ankle, anteroposterior distance of the ankle from the hip (AP), and mediolateral distance of the ankle from the hip (ML). Anterior and medial positions of the ankle are positive.
Fig. 4 Mean ground reaction forces during squats.

The average knee rotation curves in the three anatomical planes were plotted versus the squat cycle (Fig 5). The mean maximum knee flexion angle was 112.4° (SD 16.3).

Mean AP was plotted versus mean HH, along with possible analytical estimates of the curve (Fig 7). A least-squares parabolic best-fit curve of the data had the equation:

AP = −0.904·HH² + 0.971·HH − 0.067    (1)

An ellipse could better reflect the motion of some individual subjects, and one visual best-fit ellipse had the equation:

AP = 0.227·√(1 − HH²)    (2)
Overall the ankle was more anterior with lower hip height. No subjects showed the ankle becoming more posterior in any part of descent, even for those subjects who descended past 50% hip height.
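To make the two analytical fits concrete, the short sketch below evaluates Equations (1) and (2) over the measured range of normalized hip heights; the sampled HH values and variable names are illustrative, not part of the study.

```python
import numpy as np

hh = np.linspace(0.48, 1.0, 9)                       # normalized hip height (fraction of Fem+Tib)
ap_parabola = -0.904 * hh**2 + 0.971 * hh - 0.067    # Eq. (1): least-squares parabola
ap_ellipse = 0.227 * np.sqrt(1.0 - hh**2)            # Eq. (2): visual best-fit ellipse
for h, p, e in zip(hh, ap_parabola, ap_ellipse):
    print(f"HH = {h:.2f}   AP(parabola) = {p:+.3f}   AP(ellipse) = {e:+.3f}")
```

At full hip height both fits give AP near zero, and at the mean lowest hip height (about 48% of Fem+Tib) both predict an ankle roughly 19-20% of Fem+Tib anterior to the hip, consistent with the measured 21.2%.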
Fig. 5 Mean 3D knee rotation angles during squats.
Mean 3D distances between hip and ankle joints were plotted versus the squat cycle (Fig 6). The ML position of the ankle was nearly constant, staying lateral to the hip by 10.5% (SD 5.2) of the Fem+Tib length. The mean AP ankle position changed throughout the squat, starting and ending at 3.4% (SD 4.7) posterior to the hip and going to 21.2% (SD 5.8) anterior to the hip. The mean HH at the lowest point of the squat was 48.2% (SD 11.9).
Fig. 7 Mean ankle anteroposterior (AP) distance from the hip vs. hip height (HH) above the ankle, during descent and ascent. Possible analytical curves to fit the data and extrapolate to smaller HH values are shown: an ellipse and a parabola.
IV. DISCUSSION AND CONCLUSIONS
This study investigated the 3D lower-body kinematics of an unrestrained, high-flexion squat. The output data are intended to be usable as inputs into electromechanical or computational lower-body squat simulations. To this end, the study analyzed healthy adults with a range of ages, anatomies, and masses. Overall patterns of posterior hip movement (or anterior ankle position) with decreasing hip height were clear. In these subjects, no relationships were found between the normalized squat kinematics and either gender or BMI, but a larger sample size could possibly show a connection. Age, however, was found to have significant effects. The results presented can be used to simulate the mean data of this limited cohort, but they can also be used to simulate more realistic squats of individuals with specific ages, ranges of motion, squat velocities, and bone lengths. For example, the mean dynamic lower-body kinematics and ground reaction forces reported in Figs 4-7 can be input into a machine like that used previously by Victor et al [1], which can test cadaver specimens. Additionally, the mean curves can be adjusted according to the specimen characteristics. For example, a 90 kg donor with no musculoskeletal asymmetry would be predicted to see an ankle load under each leg with a vertical component of 90 kg × 4.90 N/kg ≈ 440 N, or about half the body weight, based on Fig 4. The squat range of motion and velocity can be estimated from the donor's age, in combination with the measurements of the femur and tibia segments. For example, using the linear best-fit equations in Fig 3, an 80-year-old donor would be predicted to squat down and back up in 6.12 s, up to 91.4° of knee flexion, and down to a lowest hip height above the ankle of 63.4% of the total length of the femur and tibia. The length of the undissected specimen could then be measured, from the femoral head, to the knee center, to the ankle center, and all distance and time measurements scaled according to these values. The dynamic kinematic curves found here can be estimated by cosine functions, if a simple analytical model is necessary, using the general equation:
y = ((S − L)/2)·cos(2πt/tmax) + ((S + L)/2)    (3)

such that:
t = independent variable of time
y = dependent variable to be modeled
S = the start value
L = the value at the lowest point of the squat
tmax = time to complete the squat cycle

Examples of this model are shown (Fig 8).
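A small sketch of Equation (3) combined with the age-based predictions from the Fig. 3 regressions; the function names are ours, and the 80-year-old case reproduces the worked numbers quoted above.

```python
import numpy as np

def cosine_profile(t, start, lowest, t_max):
    """Equation (3): cosine interpolation from the start value down to the
    lowest-point value and back, over one squat cycle of duration t_max."""
    return (start - lowest) / 2.0 * np.cos(2.0 * np.pi * t / t_max) \
        + (start + lowest) / 2.0

def predict_from_age(age):
    """Age-based predictions from the linear best-fit equations of Fig. 3."""
    cycle_time = 0.0634 * age + 1.0493        # squat cycle time (s)
    hh_min = 0.00503 * age + 0.23136          # minimum HH (fraction of Fem+Tib)
    max_flexion = -0.6961 * age + 147.1       # maximum knee flexion (deg)
    return cycle_time, hh_min, max_flexion

# 80-year-old donor, as in the worked example above (~6.12 s, ~63.4%, ~91.4 deg)
t_max, hh_min, flex = predict_from_age(80)
t = np.linspace(0.0, t_max, 50)
hh = cosine_profile(t, start=1.0, lowest=hh_min, t_max=t_max)  # normalized HH trajectory
print(round(t_max, 2), round(hh_min, 3), round(flex, 1))
```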
Fig. 8 Example simple cosine models using Equation 3, for the parameter sets (S = 750 mm, L = 380 mm, tmax = 4 s), (S = 800 mm, L = 320 mm, tmax = 5 s) and (S = 850 mm, L = 250 mm, tmax = 6 s).
Individual knee simulators would customize the input curves and scales appropriately to suit their specific system requirements. Possible systems that could use these data are various mechanical test systems, finite element simulations, and numerical rigid-body simulations.
ACKNOWLEDGMENT
The authors thank Dr. Hilde Vandenneucker, Prof. Johan Bellemans, Alberto Leardini, Stanley Tsai, and the staff of Smith & Nephew for supporting this study.
REFERENCES
1. Victor J, Labey L, Wong P et al. (2009) The influence of muscle load on tibiofemoral knee kinematics. J Orthop Res. Nov 4 [Epub ahead of print] doi:10.1002/jor.21019
2. Baldwin MA, Clary C, Maletsky LP et al. (2009) Verification of predicted specimen-specific natural and implanted patellofemoral kinematics during simulated deep knee bend. J Biomech. 42(14):2341-8
3. Zavatsky AB. (1997) A kinematic-freedom analysis of a flexed-knee-stance testing rig. J Biomech. 30(3):277-80
4. Kadaba MP, Ramakrishnan HK, Wootten ME. (1990) Measurement of lower extremity kinematics during level walking. J Orthop Res. 8(3):383-92
5. Schache AG. (2006) Defining the knee joint flexion-extension axis for purposes of quantitative gait analysis: an evaluation of methods. Gait Posture. 24(1):100-9
Corresponding Author: Bernardo Innocenti, PhD
Institute: European Centre for Knee Research, Smith & Nephew
Street: Technologielaan 11 bis
City: Leuven
Country: Belgium
Email: [email protected]
Evaluation of Chronic Diabetic Wounds with the Near Infrared Wound Monitor

Michael Neidrauer1, Leonid Zubkov1, Michael S. Weingarten2, Kambiz Pourrezaei1, and Elisabeth S. Papazoglou1

1 Drexel University, School of Biomedical Engineering, Philadelphia, USA
2 Drexel University College of Medicine, Department of Surgery, Philadelphia, USA
Abstract— Sixteen human diabetic foot ulcers were interrogated using a near infrared wound monitor that is based on the Diffuse Photon Density Wave (DPDW) methodology of near infrared spectroscopy. Temporal changes of oxy- and total hemoglobin concentration were significantly different in healing vs. non-healing wounds.

Keywords— Frequency-domain near infrared spectroscopy (NIRS), Diffuse Photon Density Wave, chronic diabetic foot ulcers, wound healing, hemoglobin

I. INTRODUCTION
Diabetic foot ulcers are a growing problem as the prevalence of diabetes increases worldwide. The Diffuse Photon Density Wave (DPDW) methodology of near infrared spectroscopy can be used to measure the concentrations of oxyhemoglobin and deoxyhemoglobin in tissue at depths of up to several centimeters [1], and may therefore provide clinicians with valuable information to supplement traditional wound assessment methodologies, which consist primarily of wound surface assessment. We have previously demonstrated that changes in the optical properties in an animal model of acute wounds could be quantified using a near infrared wound monitor, and that these changes corresponded to changes in wound vascularization and oxygenation [2, 3]. These animal studies led us to develop a model of the expected behavior of hemoglobin concentration during the course of healing: our results from independent animal studies demonstrate that optical absorption coefficients increase compared to the established pre-wound baseline values and later return to the pre-wound baseline condition after healing is complete. The clinical success of any optical device depends on the validity of this hypothesis in human studies. In a human study, however, patients present with existing open wounds, so baseline measurements are not available. In this case, tissue changes due to wound healing will be manifested by changes in optical properties and hemoglobin concentrations, while the absence of any dynamic change would correspond to non-healing wounds. In this paper, we report the results of a pilot human study in which chronic diabetic foot ulcers were monitored over the course of several weeks using the DPDW methodology of near infrared spectroscopy.
II. METHODS
Details of the frequency domain near infrared instrument have been described previously [2, 3]. Briefly, one optical fiber was used to deliver intensity modulated light (70 MHz) to the tissue from three diode lasers (λ = 685, 780, and 830 nm). Four optical fibers were used to deliver backscattered light from the tissue to the instrument. A Teflon probe was used to hold the fibers in place, with the four detector fibers at fixed distances (ρ = 4, 8, 12, and 16 mm) from the source fiber. Using the diffusion approximation, the optical absorption and reduced scattering coefficients (μa and μ′s) were calculated from the amplitude and phase shift of the backscattered light at each detector position [3]. Values of the oxyhemoglobin concentration [HbO2] and deoxyhemoglobin concentration [Hb] were calculated from the optical absorption coefficients at each wavelength [2]. The total hemoglobin concentration [Tot Hb] was calculated as the sum of [HbO2] and [Hb]. Sixteen patients with diabetic foot ulcers were enrolled in the study. All subjects were between the ages of 30 and 65. All patients had a previously documented history of diabetes mellitus of at least 6 months. Areas evaluated were strictly ankle and foot wounds secondary to complications from diabetes that presented with a minimum surface area of 1 cm2. Patients were only enrolled in the study if the Ankle-Brachial Index was > 0.75, indicating adequate blood supply to the wounded limb. All wounds were debrided of necrotic tissue before entering into the study. Patients diagnosed with osteomyelitis had excision of the infected bone and treatment with antibiotics before enrollment. Each of the sixteen patients underwent a standard wound care routine for their foot ulcers, which consisted of weekly or biweekly debridement, offloading when possible, and treatment with moist wound healing protocols. When indicated, active wound healing modalities such as hyperbaric oxygen, negative pressure wound healing, and active biosynthetic skin substitutes were used. Optical measurements were performed prior to the weekly or biweekly debridement. Serial measurements were obtained at every patient visit to the clinic from the time of enrollment until complete wound closure, amputation of the limb, or a maximum of 20 visits without closure or amputation. A wound was classified as "non-healing" for the purposes of this study if it did not heal by the 20th visit or if the limb was amputated.
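The chromophore step described above amounts to a small linear inversion: given μa at the three wavelengths and tabulated extinction coefficients, solve for [HbO2] and [Hb] in a least-squares sense. The sketch below illustrates the idea only; the extinction coefficients are placeholder values with roughly the right ordering, not the coefficients used in the study, and water absorption is ignored.

```python
import numpy as np

# PLACEHOLDER decadic extinction coefficients in 1/(mM*cm) at 685, 780, 830 nm.
# Rows: wavelengths; columns: (HbO2, Hb). Illustrative ordering only.
EPSILON = np.array([[0.6, 2.5],
                    [1.7, 2.4],
                    [2.3, 1.8]])

def hemoglobin_from_mua(mua_per_cm):
    """Least-squares inversion of mu_a = ln(10) * EPSILON @ c,
    with c = ([HbO2], [Hb]) in mM."""
    c, *_ = np.linalg.lstsq(np.log(10) * EPSILON, np.asarray(mua_per_cm), rcond=None)
    hbo2, hb = c
    return hbo2, hb, hbo2 + hb   # [HbO2], [Hb], [Tot Hb]

# Example absorption coefficients (1/cm) at the three wavelengths
print(hemoglobin_from_mua([0.10, 0.12, 0.13]))
```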
The optical measurements of wounds were conducted at varying sites, depending on the geometry and location of each wound. Locations measured included (1) the area directly on the wound, (2) intact skin at the edge of the wound, (3) non-wound tissue on the wounded limb at a distance of at least 2 cm from the wound, and (4) non-wound tissue on the contralateral limb, as symmetric to the wound location as possible. The control site chosen varied between a location on the wounded or the contralateral limb, depending on access to the site due to other wounds or previous amputations. Tegaderm transparent sterile dressing (3M Health Care) was used to cover the fiber optic probe
during all measurements. During every visit a digital photograph was taken of the wound after the near infrared data were collected. Cross-polarizing filters were used to reduce surface reflections. A paper ruler was used in each photograph to correct for variations in the distance between the camera and the wound. The wound boundary was traced and the surface area was calculated using an image analysis program created with MATLAB computing software (MathWorks, Inc.).

III. RESULTS AND DISCUSSION
Of the 16 wounds studied, 7 wounds completely healed and 9 wounds remained unhealed or resulted in amputation. In both healing and non-healing wounds, the oxyhemoglobin concentration [HbO2] and total hemoglobin concentration [Tot Hb] during the initial measurement session were greater at the wound centers and wound edges than at the control sites. In the seven wounds that healed, wound measurements of [HbO2] and [Tot Hb] decreased gradually over time and converged with the control site values of [HbO2] and [Tot Hb]. An example of data obtained from a healing wound is given in Figure 1. In the nine non-healing wounds, [HbO2] and [Tot Hb] at the wound sites remained elevated throughout the duration of the study and did not converge with the control site values. An example of data obtained from a non-healing wound is given in Figure 2.

Fig. 1 Example of data obtained from a healing wound that closed after 41 weeks. (upper) Oxyhemoglobin concentration [HbO2] as measured by the NIR device at the wound center (●), wound edge (Δ), and control site on the same limb as the wound (+). The solid line is the linear trendline associated with data obtained from the wound center (slope = −3.9 μM/wk); the dashed line is the linear trendline associated with data obtained from the wound edge (slope = −2.9 μM/wk). The slopes of both trend lines are negative, as is characteristic of the healing wounds in this study. (lower) Wound sizes measured on each day.

Fig. 2 Example of data obtained from a non-healing wound that resulted in amputation of the limb after 28 weeks of participation in the study. (a) Oxyhemoglobin concentration [HbO2] as measured by the NIR device at the wound center (●), wound edge (Δ), control site on the same limb as the wound (+), and control site on the contralateral limb (×). The solid line is the linear trendline associated with data obtained from the wound center (slope = 0.4 μM/wk); the dashed line is the linear trendline associated with data obtained from the wound edge (slope = 0.0 μM/wk). The slopes of both trend lines are nearly zero or slightly positive, as is characteristic of the non-healing wounds in this study. (b) Wound sizes measured on each day.

The rates of change in hemoglobin concentration over time were quantified by fitting a linear trend line to the measured values. The [HbO2] and [Tot Hb] slopes for all healing wounds were negative, while the slopes of the non-healing wounds were nearly zero or slightly positive. The mean and standard error of the slopes obtained from healing and non-healing wounds are compared in Figure 3. The healing and non-healing groups were compared using two-tailed heteroscedastic t-tests, and a significant difference was found between the [HbO2] and [Tot Hb] slopes of healed and non-healing wounds (p < 0.05).
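The trend-line and group-comparison steps above amount to a per-site linear regression followed by a two-tailed Welch (heteroscedastic) t-test on the resulting slopes. A minimal sketch with illustrative numbers follows; the arrays below are made up for the example, not the study data.

```python
import numpy as np
from scipy import stats

def weekly_slope(weeks, conc_mM):
    """Linear trend of a wound-site concentration series, in uM per week."""
    slope, _intercept = np.polyfit(weeks, np.asarray(conc_mM) * 1000.0, 1)  # mM -> uM
    return slope

# Illustrative per-wound slope sets (uM/wk) -- not the study data.
healing = np.array([-3.9, -2.9, -5.1, -4.4, -6.0, -2.2, -3.3])          # N = 7
non_healing = np.array([0.4, 0.0, 0.6, -0.1, 0.3, 0.2, 0.5, 0.1, 0.4])  # N = 9

# Two-tailed heteroscedastic (Welch) t-test, as used in the paper
t_stat, p_value = stats.ttest_ind(healing, non_healing, equal_var=False)
print(t_stat, p_value)
```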
Fig. 3 Mean ± standard error of the temporal slopes of total, oxy-, and deoxy-hemoglobin concentration for healing and non-healing wounds. A significant difference was found between the [HbO2] and [Tot Hb] slopes of healed and non-healing wounds (*p < 0.05, two-tailed heteroscedastic t-tests).

These results indicate that temporal changes in the concentration of hemoglobin, derived from diffuse near infrared measurements of the optical absorption coefficient in diabetic foot ulcers, can be used to monitor healing progress. The linear rate of change of hemoglobin concentration over time was used to differentiate healing from non-healing wounds in a study of human diabetic foot ulcers, indicating that this method may be able to help wound care clinicians in the assessment of overall wound health when treating diabetic foot ulcers.

IV. ACKNOWLEDGMENTS

The authors would like to thank Varshana Gurusamy, Sarah Kralovic, Usha Kumar, and Xiang Mao for their help with wound measurements. This research was made possible by the generous support of the Wallace H. Coulter Foundation and the U.S. Army Medical Research Acquisition Activity. This investigation was funded in part under Cooperative Agreement W81XWH 04-1-0419; the U.S. Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD 21702-5014, is the awarding and administering acquisition office. The content of the information herein does not necessarily reflect the position or the policy of the U.S. Government or the U.S. Army, and no official endorsement should be inferred.

V. REFERENCES

1. J. Mobley and T. Vo-Dinh, Optical properties of tissue, in Biomedical Photonics Handbook. 2003, CRC Press, Boca Raton, Fla.
2. E. S. Papazoglou, M. S. Weingarten, L. Zubkov, M. Neidrauer, L. Zhu, S. Tyagi, and K. Pourrezaei, Changes in optical properties of tissue during acute wound healing in an animal model. Journal of Biomedical Optics, 2008. 13: p. 044005.
3. E. S. Papazoglou, M. S. Weingarten, L. Zubkov, L. Zhu, S. Tyagi, and K. Pourrezaei, Optical Properties of Wounds: Diabetic Versus Healthy Tissue. IEEE Transactions on Biomedical Engineering, 2006. 53(6): p. 1047-1055.
Non-contact UWB Radar Technology to Assess Tremor

G. Blumrosen1, M. Uziel2, B. Rubinsky1, and D. Porrat1

1 Hebrew University of Jerusalem, School of Engineering and Computer Science, Jerusalem, Israel
2 Hebrew University of Jerusalem, Applied Physics Department, Jerusalem, Israel
Abstract— This work quantifies and analyzes tremor using Ultra Wide Band (UWB) radio technology. UWB provides a new technology for non-contact tremor assessment with extremely low radiation and penetration through walls. Tremor is the target symptom in the treatment of many neurological disorders such as Parkinson's disease (PD), midbrain tremor, essential tremor (ET) and epilepsy. The common instrumental approaches for the assessment of tremor are motion capture devices and video tracking systems. The new tremor acquisition system is based on the transmission of a wideband electromagnetic signal with extremely low radiation, and on the analysis of the received signal, composed of many propagation paths reflected from the patient and the surroundings. An efficient UWB radar detection technique adapted to tremor detection is developed: periodicity in the time of arrival of the received signal is detected to obtain tremor characteristics. For a feasibility test we built a UWB acquisition system and examined its performance with an arm model that fluctuated in the range of clinical tremor frequencies (3-12 Hz). A development of this work can lead to a monitoring system installed in any home, hospital or school to continuously assess and report tremor conditions during daily life activities.

Keywords— Tremor, UWB, human radar signature, detection techniques.
I. INTRODUCTION

Tremor is the target symptom in the treatment of numerous neurological disorders such as Parkinson's disease (PD), midbrain tremor, and essential tremor (ET) [1]. Quantification and analysis of tremor are significant for diagnosis and for the establishment of treatments. For clinical research purposes, a number of scales have been developed for the semiquantitative assessment of tremor frequency and magnitude [2]. Motion capture devices such as accelerometers [2] or gyroscopes [3] are the most popular for tremor assessment, but they must be attached to the patient's body and have limited capabilities in giving a precise tremor amplitude due to amplitude drift [3]. Video recording is another popular technology for tremor assessment in gait analysis laboratories [4], but it requires the patient to be inside the range of the video camera lens and consequently cannot be used for continuous assessment of tremor during daily life activities. A narrow band radar has been used [5] for the detection and classification of people's movements and location based
on their Doppler signatures. When humans walk, the motion of various components of the body, including the torso, arms, and legs, produces a characteristic Doppler signature. Fourier transform techniques were used to analyze these signatures and to identify key features representative of the human walking motion. In [6] a classifier is applied to the human body radar signature to characterize gait, in particular step rate and mean velocity. Radar techniques based on Doppler cannot detect tremor, as the signal bandwidth they use, and correspondingly their temporal and spatial resolution, is usually too low to detect typical tremor. Ultra-wideband (UWB) is a radio technology that can be used with very low energy levels for short-range high-bandwidth communications, by using a large portion of the radio spectrum. The potential strength of the UWB radio technique lies in its use of extremely wide transmission bandwidths, which result in accurate position location and ranging, and in material penetration. Most recent applications target sensor data collection and locating and tracking applications such as [7]. In [8] biomedical applications of UWB radar are suggested for cardiac biomechanic assessment, chest movement assessment, and OSA (obstructive sleep apnoea) and SID (sudden infant death syndrome) monitoring. We propose to quantify and analyze tremor with UWB radar technology. The UWB radar technology is based on the transmission of a wideband electromagnetic signal and on the analysis of the received signal reflected from the patient to assess tremor characteristics. We provide data analysis tools for the UWB tremor acquisition system and give preliminary results for a UWB tremor acquisition system prototype we built. This paper is organized as follows. Section 2 describes the UWB tremor acquisition system and efficient algorithms to assess tremor characteristics. Section 3 describes the experimental set-up, which consists of an arm model with tremor and a UWB tremor acquisition system; we performed a series of experiments with different distances, in the range of 1-2 meters, between the acquisition system and the arm, and with different sources of disturbance. Section 4 analyzes the performance of the UWB acquisition system. Section 5 concludes the work and gives suggestions for future research.
II. SYSTEM AND METHODS
A. System Model

Our system is composed of a transmitter and a receiver. A high bandwidth pulse is transmitted into the medium where the patient is located. The signal that has propagated through a wireless channel consists of multiple replicas (echoes, mainly caused by reflections from objects in the medium) of the originally transmitted signal, named Multipath Components (MPCs). Each MPC is characterized by an attenuation and a delay. The received signal at time instance t is:

r(t) = Σm Σk βm,k p(t − mT − τm,k) + n(t)    (1)
where p(t) is a pulse with a typical duration of around 10 ns, m is the pulse index and T is the pulse repetition time, τm,k is the k'th MPC delay in the m'th pulse, βm,k is its related attenuation factor, which is assumed constant for a short observation time, and n(t) is an additive noise component. The noise includes thermal and amplifier noise, which can be modeled by white Gaussian processes, distortion from the non-linearity of amplifiers, and interference from other radio signals from narrow band systems. The received signal can be further separated into desired MPCs reflected from Tremoring Body Parts (TBPs), non-desired MPCs from other reflectors in the medium, and the noise. We sample the received signal in (1) every period T. For each pulse, we reduce the observation period to N temporal samples that include only reflections from around the patient's center body (torso), which can be obtained by any UWB tracking mechanism. The received signals for M consecutive pulses are stored in an observation matrix r of size N×M. The column dimension of r represents the time dimension of pulse repetition. The row dimension represents delay, which is equivalent to the spatial dimension, since for a given MPC, multiplying the MPC's delay by the speed of light c gives twice the distance the transmitted pulse propagated in space from the reflecting object to the acquisition system.

B. Data Analysis Algorithm

We divide the data analysis into two stages. First we extract from the received signal the MPCs' delays, which relate to the TBPs' displacements. Then we analyze the MPCs' delays and obtain tremor characteristics. If we choose an observation period small enough so that the patient is stationary and the TBP's displacements are around a center location, the MPCs related to the TBP differ in time mainly by a weight time shift. This weight time shift is in a range that is determined by the tremor amplitude, and the frequency of the change of the weight time shift in the observation period is determined by the tremor frequency. A linear Minimum Mean Square Error (MMSE) criterion with the tremor amplitude and frequency constraints is:
{ŝ, τ̂} = argmin{s,τ} E[ Σm ‖ rm − sm wτm ‖² ]
s.t.  |τm| ≤ 2A/c ,   2 Hz ≤ fτ ≤ 12 Hz    (2)
where rm is an N-length vector of the sampled received signal for pulse index m, 1≤m≤M, sm is a scalar representing the signal energy reflected from the TBP surface for pulse index m, τm is the m'th weight time shift, wτm is an N-length weight vector w shifted by τm, wτm[n] = w[n − τm], τ is an M-length vector that includes the weight time shifts τ1, τ2, ..., τM, fτ denotes the frequency content of τ over the observation period, A is the maximal clinical tremor displacement (in the range of 1-4 cm) and E[·] is the expectation operator over the observation period. The first constraint in (2) operates on the observation matrix rows and limits the solution to the clinical tremor amplitude range; this is a spatial constraint. The second constraint operates on the observation matrix columns and limits the solution during the observation period to changes of tremor in the range of clinical frequencies; this is a temporal constraint. An MMSE-optimal solution to (2) is based on matched filtering of the received signal with the transmitted pulse shape, combining the results with optimal MMSE weights. It can be shown that the constraints can be translated into ones that satisfy the Karush–Kuhn–Tucker (KKT) conditions, so a solution derived by methods of nonlinear programming (NLP) [9] is optimal. Such an optimal solution is cumbersome, requires nonlinear programming and unavailable statistics, and is sensitive to distortion in the pulse shape. A suboptimal solution, with no significant sacrifice in performance, is to apply the constraints in (2) one after another to the matched filter outputs. A further efficient approximation uses, instead of the MMSE weights, the Maximal Ratio Combining (MRC) weights, which combine the MPCs according to their Signal-to-Noise Ratios (SNRs); MRC is optimal if the MPCs are well separated [10], and has the advantage that it does not require the usually unavailable a-priori statistical information. The full tremor characteristics can be derived from the approximated weight time shift vector τ̂ = [τ̂1, τ̂2, ..., τ̂M]. A set of tremor frequencies and amplitudes is obtained by the Fourier transform of τ̂. For a single dominant tremor frequency, the estimated tremor frequency and amplitude are:
f̂ = argmaxf |Fτ̂(f)| ,   Â = (c/2) |Fτ̂(f̂)|    (3)
where Fτ̂(f) denotes the Fourier transform of the weight time shift vector, c is the speed of light, and Â and f̂ are the approximated tremor amplitude and frequency. More advanced pattern matching algorithms, based on the spectra of
known pathological tremor patterns over time, can be applied in the future. In the common case of multiple TBPs, we need to map the MPCs to the different TBPs. One way to map them is according to the proximity of the MPCs, where paths with similar delays are more likely to be related to the same TBP; this mapping is not accurate in a medium rich with scatterers that has no direct paths. Another way is to map according to the MPCs' pattern change in time. With a metal marker attached to the TBP of interest, the related MPC amplitudes are enhanced and become more distinct than the MPCs related to other TBPs.
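A simplified sketch of the two-stage analysis described above: a peak detector stands in for the matched filter to track the dominant reflection's delay in each pulse, and a Fourier transform of the delay sequence yields the tremor frequency and amplitude. All parameter values and names are ours, and the MRC combining across MPCs is omitted for brevity.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def tremor_from_pulses(r, fs_fast, prf):
    """r: (M, N) observation matrix, one sampled received pulse per row.

    Tracks the delay of the strongest reflection in each pulse (a crude
    stand-in for matched filtering), then Fourier-analyses the delay
    sequence tau[m] within the clinical tremor band.
    """
    tau = np.argmax(np.abs(r), axis=1) / fs_fast       # delay per pulse (s)
    tau = tau - tau.mean()                             # remove the static range offset
    spec = np.fft.rfft(tau)
    freqs = np.fft.rfftfreq(len(tau), d=1.0 / prf)
    band = (freqs >= 2.0) & (freqs <= 12.0)            # clinical tremor frequencies
    k = np.argmax(np.abs(spec) * band)
    f_hat = freqs[k]
    a_hat = (C / 2.0) * (2.0 / len(tau)) * np.abs(spec[k])  # d = c*tau/2 (two-way path)
    return f_hat, a_hat

# Synthetic demo: arm at ~1 m with a 2 cm, 5 Hz tremor; PRF 85 Hz, 20 GS/s sampling
prf, fs_fast, M, N = 85.0, 20e9, 256, 400
m = np.arange(M)
dist = 1.0 + 0.02 * np.sin(2 * np.pi * 5.0 * m / prf)  # reflector distance (m)
idx = np.round(2.0 * dist / C * fs_fast).astype(int)   # round-trip delay in samples
r = np.zeros((M, N))
r[m, idx] = 1.0                                        # one ideal reflection per pulse
print(tremor_from_pulses(r, fs_fast, prf))
```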
III. EXPERIMENTAL SETUP

The experimental setup consisted of a UWB prototype system and an arm model to model a TBP of a patient. For modeling arm tremor we used a conduction coil, an AC generator source and a solid arm model with a small magnet attached. The AC generator induced a periodic electrical current. The generator was wired to a transformer, which created a varying magnetic field in its core that induced a varying electromotive force. The force acted on the magnet attached to the solid arm model and generated a periodic movement of the arm at the AC generator frequency. We attached a metal strip to the arm to magnify the UWB reflection. The UWB sensor node prototype consisted of a transmitter, a receiver, a processing unit and a storage unit. The transmitter was based on a pulse generator (Picosecond Pulse Labs 4015D). The pulse width Tp was 100 ps, the pulse amplitude was 1.35 V, the pulse repetition frequency was 85 Hz, and the bandwidth was 8 GHz, similar to commercial UWB dongles. The pulse generator was connected to an omni-directional antenna (EM-6865, Electro-Metrics) via a 30 dB amplifier (Herotek AF2 1828A). The transmission power was extremely low, with a peak power of 52 mWatt/cm2 and an average power of 105 µWatt/cm2 measured at a distance of 1 meter from the antenna. The UWB signal was received by an omni-directional antenna (EM-6865, Electro-Metrics) with an amplifier (Herotek AF2 1828A) and then fed to the receiver. The receiver was based on an oscilloscope (Agilent DSO81304A) with a sampling rate Ts of 20 GS/s. The receiver was synchronized to the transmitter by a trigger from the pulse generator. The raw data were sent to a storage unit. To enhance the antenna gain and improve directionality, we added a metal cover structure over both the transmit (Tx) and receive (Rx) antennas. We isolated the receive and transmit antennas with a cardboard wrapped in aluminum foil to avoid a direct path. We used the oscilloscope's segmented memory for storage. The processing unit was a common notebook computer (Lenovo T61) and the software we used for processing was Matlab.
Fig. 1 UWB tremor acquisition system prototype and arm model. The arm model in the back was moving back and forth, toward and away from the UWB acquisition system, in a way that maximized the reflection surface.

The arm surface was moving back and forth, away from and toward the UWB acquisition system, along a single axis. The UWB antennas were placed in an optimal orientation to capture the maximal reflection from the arm model. The arm model was placed at 1 to 2 meters from the acquisition system. Figure 1 shows the UWB tremor acquisition system prototype; the arm model can be seen in the back. We performed two sets of experiments. The first set was performed with different distances between the arm and the acquisition system; for each distance, the arm model trembled with a single frequency, which varied from 3 to 12 Hz. The second set of experiments was performed with different disturbers. We recorded for 20 seconds with relatively stationary channel conditions (arm fixed in one place, no change in the environment).
IV. RESULTS

For each experiment we followed the steps described in Section II-B. First we approximated the matched filtering to the pulse shape by a simple peak detector. Then the constraints in (2) were applied one after another to the approximated matched filter outputs. The approximated weight time shifts related to the arm (having similar frequency content) were combined with the MRC weights. From the weight time shifts τ̂ we estimated the tremor frequency and amplitude according to (3). Figure 2 shows the
estimated amplitude as a function of tremor frequency, for distances between the arm and the acquisition system of 1, 1.5 and 2 meters. The approximated amplitude decreases with frequency. Near the frequency of 5 Hz there is a peak, which indicates the arm model's resonance frequency. The amplitude estimations for the different distances are correlated, with a correlation factor of 0.97, and the average deviation from a video reference estimation was 0.1 cm. The amplitude estimation at a distance of 1.5 meters was lower by a factor of 2 than the other amplitude estimations. This difference is explained by the variation of the tremor amplitude along the arm, as the tremor amplitude near the body is lower. With a smaller marker surface, the amplitude variation of the TBP can be minimized and the variance of the amplitude estimation can be improved. The average tremor frequency estimation error was 0.01 Hz. The accuracy achieved by the tremor frequency estimation is explained by the single tremor frequency present in our arm model (unlike the tremor amplitude, which varied along the arm).
Fig. 2 Tremor amplitude approximation for distances of 1, 1.5 and 2 meters between the acquisition system and the arm model. The tremor amplitude declines with tremor frequency, but near 5 Hz, the resonance frequency, there is an increase in amplitude. The amplitude estimation at a distance of 1.5 meters is lower by a factor of 2 due to the variation of tremor amplitude along the arm surface.

We verified system performance with different noise sources: static metal reflectors, a wooden partition that separated the arm model from the acquisition device, and a person with his hand covered with metal in the background. We tested the system at a distance of 1.5 meters with a tremor frequency of 5 Hz. The system showed tolerance to all noise sources. In all cases the frequency estimation was excellent, with an absolute error of less than 0.03 Hz. Amplitude estimation error from a video reference was less than 1 mm.
V. CONCLUSIONS
We proposed UWB technology to quantify tremor for the diagnosis of different patient pathologies. We built a UWB tremor acquisition system prototype and provided data analysis tools for the acquisition system. A feasibility test showed accurate tremor frequency and amplitude estimations. This new technology can offer non-contact tremor assessment, utilizing extremely low radiation; it can penetrate walls, work in any light condition, and collect accurate data continuously. In the future it can be deployed in any home and transmit the collected data to a remote hospital for continuous tremor monitoring and analysis at minimal cost. Directional antennas, higher sampling rates, higher bandwidth, higher radiation power, smaller marker size and more advanced equalization techniques can all improve the system performance.
High frequency mechanical vibrations stimulate the bone matrix formation in hBMSCs (human Bone Marrow Stromal Cells)
D. Prè1,3, G. Ceccarelli2,3, M.G. Cusella De Angelis2,3 and G. Magenes1,3
1 Department of Computer and System Science, University of Pavia, Pavia, Italy
2 Department of Experimental Medicine, University of Pavia, Pavia, Italy
3 C.I.T., Tissue Engineering Centre, Pavia, Italy
Abstract— The aim of this work is to test the effects of a specific mechanical stimulation (Low Amplitude, High Frequency Vibrations) on the bone matrix formation of hBMSCs. Previous studies demonstrated that chemical culture conditions can influence the differentiation of hBMSCs toward bone: by plating the cells in an appropriate osteogenic culture medium, hBMSCs differentiate into osteoblasts [1-3]. In our experiment the cells were treated for 21 and 40 days by vibrating the wells for 45 minutes a day at a working frequency of 30 Hz. In order to separate the effects of the induction toward bone caused by the osteogenic culture medium from the effects of the high frequency vibrations, we divided the cells into four samples: in normal medium with or without mechanical treatment, and in osteogenic medium with or without mechanical treatment. Afterwards, in order to measure the level of calcium deposition and, consequently, the formation of bone matrix, the Alizarin Red assay was performed. The results show a strong increase in calcium deposition in the extracellular matrix for the vibrated samples with respect to the non-mechanically treated ones after 40 days of treatment.
Keywords— Bioreactor, differentiation, hBMSCs, Bone matrix.
I. INTRODUCTION
Bone tissue engineering offers innovative therapeutic opportunities for the repair of bone tissues damaged by diseases or injuries. Despite many advances in cell-based tissue engineering, significant challenges remain with respect to cell sourcing, expansion, and differentiation toward bone tissue for application in patients. The identification of various adult stem cells retaining the capability to differentiate into multiple cell types has been a critical step in providing potential cell sources for bone tissue engineering. One of the first adult stem cell types investigated to reproduce osteoblasts was hBMSCs (human Bone Marrow Stromal Cells). Good results have been obtained for the differentiation of this cell line toward bone tissue [1, 4-7], but hBMSCs are difficult to harvest, very delicate and slow in their proliferation [8]. Consequently, it is useful to reduce the time needed to obtain osteoblasts from BMSCs. In order to accelerate their differentiation, we used chemical factors, by adding the osteogenic supplement to the culture medium, and we tested the effects of low amplitude, high frequency vibration on the differentiation of the cells. In particular, the deposition of calcium was investigated because it is the first step of extracellular bone matrix formation. Our previous studies demonstrated that high frequency vibrations increase the expression of many osteogenic proteins in SAOS-2 human osteoblasts [9]. Thus, our aim is to improve the differentiation of the hBMSCs and to reduce the time required to deposit mineralized bone matrix through the association of chemical and mechanical factors. In fact, the possibility to create a bone matrix is on one hand a strong indicator of differentiation of BMSCs toward bone, and on the other hand it makes it possible to create prostheses by removing the cellular part from the composite and leaving only the extracellular one: the structure will then be patient-independent, and it could be used as a general prosthesis.
II. MATERIALS AND METHODS
A. Cell cultures
We used BMSCs (bone marrow stromal cells) taken from a young male patient at the 3rd passage. Iliac crest bone marrow aspirates were collected. The bone marrow sample was centrifuged at 500 g for 10 min and about 25 ml of plasma was collected. Plasma obtained by this low-speed centrifugation contained the platelet fraction. The buffy coat (pellet) was resuspended in an equal volume of DMEM and the resulting suspension was centrifuged on a Ficoll separating solution for 20 min at 1200 × g. The mononucleated cells were recovered from the interphase and counted on a hemocytometer. They were then plated in flasks of 75 cm2 surface containing the proliferative medium and incubated at 37 °C in a humidified atmosphere (95% air, 5% CO2), until reaching confluence.
B. The bioreactor
A device producing vibrating stimuli for in vitro cell cultures was used. Its detailed description has already been reported in [10]. The system is composed of an eccentric motor producing the displacement (Maxon Motor™ Brushless E-Series) with a voltage of 24 V and a diameter of 22 mm. The imposed displacement is thus half of the diameter (11 mm). The angular velocity (i.e. vibration frequency) of the motor is voltage-controlled and can be modified through an electronic controller. By changing the voltage supply from 2.5 V up to 24 V, the vibration frequency varies between 1 Hz and 120 Hz. A 3D accelerometer was added to detect the acceleration forced on the platform by the motor.

C. Bioreactor Cultures
The cells were divided into four groups: the first with proliferative medium (PM) and subjected to the mechanical treatment of high frequency vibration (TP), the second with the same medium but without any mechanical treatment (CP), the third with osteogenic medium (OM) and subjected to high frequency vibration (TO), and the last with OM but without any mechanical treatment (CO). The treated samples were subjected to high frequency vibration treatment at 30 Hz for 45 minutes every day. The four groups of cells were stopped and subjected to the tests after 21 and 40 days. The media were replaced every 6 days. All cells were plated in 9 cm diameter dishes (with an area of 28.27 cm2 each) at a density of 5000 cells/cm2. After the end of the experiment the Alizarin Red test was performed and the results were normalized to the cell number.

D. Alizarin Red Test
The Alizarin Red test was used as a biochemical assay to quantitatively determine, by colorimetry, the presence of calcific deposition by cells of an osteogenic lineage. It is an early-stage marker of matrix mineralization, a crucial step towards the formation of the calcified extracellular matrix associated with true bone. The cells were stained with pH-adjusted (4.1–4.3) 2% Alizarin Red solution (Electron Microscopy Sciences, Fort Washington, PA), washed, and then photographed using transmitted light. The stain was eluted by adding 1 ml of 10% cetylpyridinium chloride per well for 10 min at room temperature under gentle agitation. Afterwards the intensity of the color, proportional to the calcium deposition, was measured with the Nanodrop™ (Nanodrop Technologies, Wilmington, USA) at a wavelength of 562 nm. The level of Alizarin was normalized to the number of cells.
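As a minimal illustration of this normalization step, assuming the 562 nm absorbance is converted to an Alizarin concentration through a linear standard curve (the paper does not state its calibration; all names are illustrative):

```python
def alizarin_per_cell(od_562nm, cell_count, slope_pg_per_ml):
    """Normalize an eluted Alizarin Red reading to the number of cells.

    od_562nm: absorbance of the eluate at 562 nm
    cell_count: cells counted in the same dish (Burker's chamber)
    slope_pg_per_ml: slope of an assumed linear standard curve (pg/ml per OD unit)
    Returns a value in pg/(ml*cell), the unit used in the Results section.
    """
    concentration_pg_per_ml = slope_pg_per_ml * od_562nm
    return concentration_pg_per_ml / cell_count
```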
E. Cell counting
The number of cells used to normalize the Alizarin Red values was evaluated by counting cells in a Burker's chamber. To follow the evolution of the number of cells during the experiment, this analysis was performed after 0, 21 and 40 days of treatment. Cells were detached using trypsin 1X and resuspended in an appropriate volume of PBS before counting.

F. Statistical Analysis
The experiments were repeated three times in order to obtain better statistical confidence. To evaluate the effects on the vibrated cells with respect to the controls, a two-way analysis of variance (ANOVA) was performed for the XTT test and the Molecular Biology tests. We used the statistical toolbox of Matlab 7.1. Statistical significance was declared at p≤0.05, where p is the probability under the null hypothesis.
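The analysis was run in Matlab 7.1; an equivalent sketch in Python, using statsmodels, of a two-way ANOVA with treatment and medium as factors might look like the following (the data values and column names are illustrative, not the study's measurements):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per dish: Alizarin value, treatment (T = vibrated, C = control)
# and medium (PM = proliferative, OM = osteogenic); values are illustrative.
df = pd.DataFrame({
    "alizarin":  [480, 500, 490, 370, 390, 380,
                  3050, 3180, 3130, 1580, 1640, 1610],
    "treatment": ["T", "T", "T", "C", "C", "C"] * 2,
    "medium":    ["PM"] * 6 + ["OM"] * 6,
})

# Two-way ANOVA with interaction between treatment and medium
model = ols("alizarin ~ C(treatment) * C(medium)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))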
III. RESULTS
The results of the Alizarin Red test on 21-day cultures are shown in Figure 1 for the proliferative medium (1A) and the osteogenic medium (1B). In the proliferative medium the concentration of calcium deposition in the control sample is equal to that of the treated one (20 pg/(ml*cell)), and in the differentiative medium the concentration is slightly higher for the control sample (48.75 pg/(ml*cell) versus 46.25 pg/(ml*cell) for the treated), although the difference is not statistically significant (p>0.05). The graphics summarizing the results obtained with the Alizarin Red test after 40 days of treatment are shown in Figure 2.
Figure 1: Alizarin Red level at 21 days for treated and control samples in both culture media (proliferative medium, A; differentiative medium, B). The vertical blue lines represent the standard deviations of the data.
Figure 2: Alizarin Red test at 40 days for proliferative (A) and osteogenic (B) medium on hBMSCs. The p-values are reported on the graphics and the vertical blue lines represent the standard deviations of the data.
The results show a strong increase in the deposition of extracellular matrix for the mechanically treated samples with respect to the non-treated ones. In fact, the normalized level of Alizarin Red in the treated samples in proliferative medium is 27% higher than in the non-mechanically treated sample in the same medium (Figure 2A). The difference in the Alizarin Red level for the samples kept in osteogenic medium is even more significant (Figure 2B). In fact, the normalized level of Alizarin Red (and, consequently, of the calcium deposition in the extracellular matrix) of the mechanically treated samples after 40 days is 3120 pg/(ml*cell), compared with a value of 1610 pg/(ml*cell) for the control samples. Thus, the mechanical treatment increases the deposition of calcium (i.e. the production of extracellular matrix) by 93%. All the p-values are less than 0.001. As can be observed in Figure 3, the difference between the mechanically treated and the control samples in osteogenic medium is clearly visible. The red colouring represents a high concentration of calcium deposition. By analyzing the samples under optical microscopy, it is possible to observe the difference in red-coloured calcium deposition between treated and control samples, as reported in Figure 4.

IV. DISCUSSION
Previous experiments [9] demonstrated an inductive effect of high frequency vibrations on the differentiation toward bone of the SAOS-2 cell line. However, SAOS-2 is a tumoral cell line and cannot be considered for future clinical applications. Consequently, we decided to test the effect of the same stimulation on normal human stem cell lines: the BMSC line was the first choice, because of its demonstrated capability to differentiate into bone. We changed the duration of the treatment as a consequence of the different proliferation rates of SAOS-2 and BMSCs, following the mathematical modeling proposed in [8]. Since the inductive properties of the osteogenic medium have already been demonstrated, we studied the effects of the mechanical treatment separately by creating four groups of samples: in this way, we can highlight the effect of the high frequency vibration by itself on the differentiation of the cells. One of the first parameters for assessing the differentiation level is the amount of calcium deposition, measured by the Alizarin Red test. Our results demonstrate the strong differentiative effect of the treatment, in particular when it is associated with the inductive effect of the osteogenic medium. By observing the results on the deposition of calcium at 21 days, it is possible to conclude that the high frequency vibration treatment does not have any effect on this parameter in either culture medium. On the contrary, the effect of the osteogenic medium on the calcium deposition at 21 days is evident. However, 21 days is a period of early differentiation for this cell line, so the cells are probably not yet in the mineralization phase. In fact, to understand the effects of the mechanical treatment on the deposition of calcium, it is important to evaluate the difference between treated and control samples in the mineralization phase, after 40 days. The results show the positive effects of the combination of the osteogenic medium and the mechanical treatment on BMSCs, and this is the first strong signal of the differentiative effect of the treatment.
Figure 3: Picture of the plates of BMSCs after 40 days of culture in differentiative medium for a control sample (C, on the left) and a mechanically treated one (T45, on the right).
Figure 4: Pictures with light microscopy of the plates of BMSCs in differentiative medium after 40 days. (A) Control sample at a magnification of 20x. (B) Treated sample at a magnification of 20x. (C) Control sample at a magnification of 10x. (D) Treated sample at a magnification of 10x.
Further analyses of the osteogenic genes and of their subsequent translation into proteins should confirm these preliminary results. Afterwards, it will be possible to affirm the efficacy of the high frequency vibration treatment on in vitro cell cultures.

V. CONCLUSIONS
The results obtained suggest a differentiative effect of the mechanical treatment on hBMSCs: in particular, the association with the osteogenic medium increases the efficacy of the treatment. The following steps require the association of the cell culture with a scaffold; after that, an in vivo study on immunodeficient mice and finally on humans could be performed. In the end, it will be possible to treat the hBMSCs of a patient with the HFV treatment to induce their differentiation toward bone in order to build autologous prostheses.
VI. REFERENCES
1. Cheng, S.L., et al., Differentiation of human bone marrow osteogenic stromal cells in vitro: induction of the osteoblast phenotype by dexamethasone. Endocrinology, 1994. 134(1): p. 277-86.
2. Anselme, K., et al., In vitro control of human bone marrow stromal cells for bone tissue engineering. Tissue Eng, 2002. 8(6): p. 941-53.
3. Agata, H., et al., Feasibility and efficacy of bone tissue engineering using human bone marrow stromal cells cultivated in serum-free conditions. Biochem Biophys Res Commun, 2009. 382(2): p. 353-8.
4. Ashman, R.B., et al., A continuous wave technique for the measurement of the elastic properties of cortical bone. J Biomech, 1984. 17(5): p. 349-61.
5. Gomes, M.E., et al., Influence of the porosity of starch-based fiber mesh scaffolds on the proliferation and osteogenic differentiation of bone marrow stromal cells cultured in a flow perfusion bioreactor. Tissue Eng, 2006. 12(4): p. 801-809.
6. Jiang, Y., et al., Pluripotency of mesenchymal stem cells derived from adult marrow. Nature, 2002. 418(6893): p. 41-9.
7. Marolt, D., et al., Bone and cartilage tissue constructs grown using human bone marrow stromal cells, silk scaffolds and rotating bioreactors. Biomaterials, 2006. 27(36): p. 6138-49.
8. Prè, D., Ceccarelli, G., Benedetti, L., Cusella De Angelis, M.G., Magenes, G., A comparison between the proliferation rate of SAOS-2 human osteoblasts and BMSCs (Bone Marrow Stromal Cells) using mathematical models, in World Congress 2009 Medical Physics and Biomedical Engineering. 2009. Munich, Germany.
9. Prè, D., et al., Effects of Low Amplitude, High Frequency Vibrations on Proliferation and Differentiation of SAOS-2 Human Osteogenic Cell Line. Tissue Eng Part C: Methods, 2009.
10. Prè, D., et al., A high frequency vibration system to stimulate cells in bone tissue engineering. 2008. Shanghai, China.
Corresponding Author: Deborah Prè
Institute: University of Pavia
Street: Via Ferrata 1, 27100
City: Pavia (Italy)
E-mail: [email protected]
Mobispiro: A Novel Spirometer Eleni J. Sakka, Pantelis Aggelidis, and Markela Psimarnou Ρ.Α. Mobihealth LTD, Nicosia, Cyprus
Abstract— Respiratory function monitoring and pulmonary disease evaluation usually involve patient examination with spirometers, for the recording and monitoring of two principal parameters: the total air volume that the lungs can inhale and exhale, and the peak of the exhaled air flow. The purpose of implementing a portable spirometer device is for it to be used by patients who suffer from chronic pulmonary diseases and frequently need to record and monitor their respiratory function. Mobispiro is a novel spirometer that enables recording and transmission of vital signs over GSM/GPRS networks and supports patient-centric models of healthcare provision. The core of the device includes all the essential hardware components, as well as software providing the necessary functionality, and it is expected to fulfill all expert needs for monitoring the patient's disease. Mobispiro measurements are transferred to the doctor through the GSM network. In addition, the use of the GPRS service for data transmission gives the device the ability to communicate with receiving points, e.g. a PC, PDA or cell phone. The volume of data to be transferred is anticipated to be relatively low, and thus the above connectivity solutions are satisfactory for this kind of medical use. The device is programmed to send data to predefined access points. The solution is integrated with a web server application which gives the ability to access the data simply by using an internet connection. The web server application is responsible for the service administration, the retrieval of measurements from the database and their presentation. Mobispiro is also provided as an OEM solution and complies with the international standard for medical data transmission and storage and the ATS recommendations for diagnostic spirometers. Keywords— Spirometry, GSM/GPRS.
I. INTRODUCTION
Pulmonary diseases such as chronic obstructive pulmonary disease (COPD) and asthma can be evaluated with the help of spirometry. In this examination the monitoring of the respiratory function involves the recording of two principal parameters: the total air volume that the lungs can inhale and exhale, and the peak of the exhaled air flow. Chronic obstructive pulmonary disease (COPD) is a lung ailment that is characterized by a persistent blockage of airflow from the lungs. It is an under-diagnosed, life-threatening lung disease that interferes with normal breathing and is not fully reversible. Chronic obstructive pulmonary disease is more than a “smoker’s cough”. An
estimated 210 million people have COPD worldwide, and more than 3 million people died of COPD in 2005, equal to 5% of all deaths globally that year. According to the WHO, almost 90% of COPD deaths occur in low- and middle-income countries. The primary cause of COPD is tobacco smoke, either through tobacco use or second-hand smoke. The disease now affects men and women almost equally, due in part to increased tobacco use among women in high-income countries. COPD is not curable, but treatment can slow the progress of the disease. Total deaths from COPD are projected to increase by more than 30% in the next 10 years without interventions to cut risks, particularly exposure to tobacco smoke. Asthma is a chronic disease characterized by recurrent attacks of breathlessness and wheezing, which vary in severity and frequency from person to person. Symptoms may occur several times in a day or week in affected individuals, and for some people become worse during physical activity or at night. During an asthma attack, the lining of the bronchial tubes swells, causing the airways to narrow and reducing the flow of air into and out of the lungs. Recurrent asthma symptoms frequently cause sleeplessness, daytime fatigue, reduced activity levels and school and work absenteeism. Asthma has a relatively low fatality rate compared to other chronic diseases. The WHO estimates that 300 million people currently suffer from asthma. Asthma is the most common chronic disease among children. It is a public health problem not just for high-income countries; it occurs in all countries regardless of the level of development. Most asthma-related deaths occur in low- and lower-middle income countries. Often asthma is under-diagnosed and under-treated. It creates a substantial burden for individuals and families and often restricts individuals' activities for a lifetime. For the monitoring of both COPD and asthma, spirometry is an imperative solution. Most of the spirometers that are available on the market are designed to record the air flow data and show the measurements either on a display or send them to the doctor's PC via cable. In contrast, Mobispiro has embedded communication capabilities that enable it to communicate measurements to remote destinations. At least two use case scenarios show the potential added value of Mobispiro. In the first, the device is operated by a
GP, and the mobile unit is programmed to send a) data packets to an FTP server which collects the measurements, and b) a notification SMS to a predefined medical expert's cell phone, who is thereby informed of the transmission. In the second use case scenario Mobispiro is operated by the patient and the communication module is programmed to send a) data packets to the medical expert's email address and b) an SMS with specific measurement parameters to his cell phone. The measurements' receiving node can be either a PC or an advanced cell phone or PDA, where control, advanced processing and printing of the data are possible through a handy application.
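A minimal sketch of the first scenario's transmission flow, written in Python as a stand-in for the device's Open AT firmware; the host name, credentials and file naming scheme are illustrative assumptions, not values from the paper:

```python
import io
from ftplib import FTP

def send_measurement(payload: bytes, patient_id: str) -> None:
    """Upload one spirometry record to the collecting FTP server."""
    with FTP("ftp.example-clinic.org") as ftp:        # hypothetical host
        ftp.login(user="mobispiro", passwd="secret")  # hypothetical account
        ftp.storbinary(f"STOR {patient_id}_measurement.dat", io.BytesIO(payload))
    # The notification SMS to the expert's phone is issued by the GSM module
    # itself, typically via standard AT commands such as AT+CMGS.
```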
II. MOBISPIRO
A. Mobispiro Spirometer Overview
Mobispiro is a high quality spirometer that facilitates patient-centric, continuous monitoring models of healthcare provision. Mobispiro can be used to perform a comprehensive set of spirometry measurements (e.g. FEV1, FVC, FEF 25-75%, etc.). Measurements are locally stored in the device's memory and also presented on the device's screen. The device has diverse communication capabilities, both wired (e.g. Ethernet) and wireless (GSM/GPRS). Mobispiro is a patented spirometer that is designed and developed to comply with international standards. It is a portable medical device targeting chronic patients with pulmonary diseases and is easy to use. Its user friendliness makes it suitable even for users with limited possibilities or special needs.
Fig. 1 Mobispiro prototype
B. Implementation
The implementation of the spirometer was based on the standards of the American Thoracic Society (ATS) for diagnostic spirometers [2]. According to these standards, these medical devices should comply with the values presented in Table 1.

Table 1 Minimal recommendations for diagnostic spirometry

Test | Range/Accuracy (BTPS) | Flow range (L/s) | Time (s) | Resistance and back pressure | Test signal
VC | 0.5 to 8 L, ±3% of reading or ±0.050 L, whichever is greater | zero to 14 | 30 | | 3-L Cal Syringe
FVC | 0.5 to 8 L, ±3% of reading or ±0.050 L, whichever is greater | zero to 14 | 15 | Less than 1.5 cm H2O/L/s | 24 standard waveforms, 3-L Cal Syringe
FEV1 | 0.5 to 8 L, ±3% of reading or ±0.050 L, whichever is greater | zero to 14 | 1 | Less than 1.5 cm H2O/L/s | 24 standard waveforms
Time zero | The time point from which all FEVt measurements are taken | | | | Back extrapolation
PEF | Accuracy: ±10% of reading or ±0.400 L/s, whichever is greater; precision: ±5% of reading or ±0.200 L/s, whichever is greater | zero to 14 | | Same as FEV1 | 26 flow standard waveforms
FEF 25-75% | 7.0 L/s, ±5% of reading or ±0.200 L/s, whichever is greater | ±14 | 15 | Same as FEV1 | 24 standard waveforms
V | ±14 L/s, ±5% of reading or ±0.200 L/s, whichever is greater | zero to 14 | | Same as FEV1 | Proof from manufacturer
MVV | 250 L/min at TV of 2 L, within ±10% of reading or ±15 L/min, whichever is greater | ±14 (±3%) | 12 to 15 | Pressure less than ±10 cm H2O at 2-L TV at 2.0 Hz | Sine wave pump
The basic components of Mobispiro are a microcontroller, the communication add-in, the airflow sensor, the digital display and the algorithms for processing the measurements. For the implementation of Mobispiro, the Wavecom [3] Wireless CPU Q2686 was used as the main microcontroller of the device. This microcontroller has a USB 2.0 port and two UART outputs. Digital control can be
obtained through two INT lines, two SPI buses, an I2C bus and a 5 x 5 keyboard. The advanced feature of this microcontroller is that it supports an embedded GSM/GPRS communication protocol covering GSM bands worldwide, i.e. 800/900/1800/1900 MHz. With the appropriate libraries installed, the microcontroller makes it possible to connect and register to the cellular network and access all the available functionality of GSM networks. The microcontroller was programmed on the Deployment Kit Q26 of Wavecom, whereas for the source code Microsoft Visual Studio .NET 2003 [4] and the development platform Open AT v1.08.02 were used. The airflow sensor is based on MEMS technology and was developed by THEON Sensors [5]. It was developed to comply with the ATS standards. It is a brand new sensor developed for the Mobispiro spirometer and its operating principle is the hot film anemometer: it is an ohmic sensor which is warmed by an electric signal and exposed to the airflow. The sensor has a digital UART output, making the measurement process more efficient and easier.

Fig. 2 MEMS Air flow Sensor

The LCD display that was used is the Crystalfontz [6] CFA635-TFE-KU1. The display is connected through a UART port to the microcontroller and its overall size is 142x37 mm, with an 82.95x27.5 mm display surface. It also has four LED lights and six buttons. The last component of the Mobispiro spirometer is the software that implements the sensor's data handling, data processing, display of the measurements on the LCD, communication with the base station and, finally, transmission of data to the server. The Mobispiro software was implemented in Microsoft .NET Visual Studio 2003. The algorithms for data processing and the calculations for the pulmonary function evaluation are based on the sum of squares formula. The following parameters are calculated and sent to the Mobispiro service centre (a computational sketch follows the list):
• VC (vital capacity): the maximum volume of air which can be exhaled or inspired during either a forced (FVC) or a slow (VC) manoeuvre.
• FEV1 (Forced Expiratory Volume in 1 Second): the volume expired in the first second of maximal expiration after a maximal inspiration; a useful measure of how quickly full lungs can be emptied.
• FEV3 (Forced Expiratory Volume in 3 Seconds): the volume expired in the first three seconds of maximal expiration after a maximal inspiration.
• FEV1/FVC (FEV1%): the FEV1 expressed as a percentage of the VC or FVC (whichever volume is larger); gives a clinically useful index of airflow limitation.
• PEF (Peak Expiratory Flow): the maximal expiratory flow rate achieved; this occurs very early in the forced expiratory manoeuvre.
• FEF 25-75% or 25-50% (Forced Expiratory Flow 25-75% or 25-50%): the average expired flow over the middle half of the FVC manoeuvre; regarded as a more sensitive measure of small airways narrowing than FEV1.
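A minimal sketch of how such parameters can be derived from a uniformly sampled expiratory flow signal; the device's actual sum-of-squares-based algorithms are not detailed in the paper, and the sampling rate and names here are illustrative:

```python
import numpy as np

FS = 100.0  # sampling rate of the flow signal (Hz, illustrative)

def spirometry_params(flow_l_per_s):
    """Derive basic indices from one forced expiration (flow in L/s)."""
    flow = np.asarray(flow_l_per_s)
    t = np.arange(len(flow)) / FS
    volume = np.cumsum(flow) / FS            # integrate flow to volume (L)
    fvc = volume[-1]
    fev1 = np.interp(1.0, t, volume)         # volume expired at t = 1 s
    pef = flow.max()                         # peak expiratory flow (L/s)
    # FEF 25-75%: mean flow while expired volume is between 25% and 75% of FVC
    mid = (volume >= 0.25 * fvc) & (volume <= 0.75 * fvc)
    fef_25_75 = flow[mid].mean()
    return {"FVC": fvc, "FEV1": fev1, "FEV1%": 100.0 * fev1 / fvc,
            "PEF": pef, "FEF25-75%": fef_25_75}
```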
In order to evaluate the device, special tests were performed by pneumonologists. In these tests the reliability of the device was verified. Several patients were asked to take a spirometry test before and after a bronchodilator, and these measurements were compared to measurements taken by spirometers available on the market.

C. Mobispiro Service
The Mobispiro spirometer can also be used as part of an integrated patient telemonitoring service that includes a centralized database and a web-based application for the presentation of the measurements. The Mobispiro Service is user-oriented, aiming at the collection and transmission of pulmonary function parameters with minimum user intervention. The user only needs to blow into the mouthpiece of the medical device; then, by pressing a button, the measurement is transmitted to a web server for centralized storage. A web-based application allows user-friendly retrieval, visualization and processing of the medical data by specialized healthcare professionals. For each user/patient a comprehensive Electronic Health Record can be created, including the patient demographics and her medical profile. The distinctive diagnostic value of the Mobispiro Service lies in the fact that medical data recording is feasible at the time when the patient feels pain, discomfort or any other symptom, which is not necessarily present when and while being in a hospital.
Fig. 3 The Mobispiro service

Overall, Mobispiro enables the provision of cost-effective telemonitoring services to chronic patients with pulmonary diseases (e.g. asthma, COPD, etc.). The novel spirometer along with the web-based application constitute a unique m-health solution, bringing the point of care closer to the patient and enabling personalized treatment plans and efficient health monitoring.
III. CONCLUSIONS
The monitoring and examination of patients suffering from pulmonary diseases usually involve examination with spirometers, for the recording of two principal parameters: the total air volume that the lungs can inhale and exhale, and the peak of the exhaled air flow. Mobispiro is a novel spirometer that complies with the ATS recommendations for diagnostic spirometers, enables recording and transmission of vital signs over GSM/GPRS networks, and supports patient-centric, continuous monitoring models of healthcare provision. Mobispiro is a prototype, and future steps include improving the user friendliness of the device as well as fixing the software bugs that were revealed during the verification tests.

ACKNOWLEDGMENT
Mobispiro was implemented during the incubation period at the Diogenes Business Incubator [7], University of Cyprus Ltd, in Nicosia, Cyprus.
REFERENCES
1. WHO, Chronic obstructive pulmonary disease (COPD) at www.who.int
2. American Thoracic Society, Standardization of Spirometry, 1994
3. Wavecom at http://www.wavecom.com
4. Microsoft .NET Visual Studio at http://msdn.microsoft.com/en-us/vstudio/default.aspx
5. Theon Sensors at http://www.theon.com/en/homepage_en.php
6. Crystalfontz at http://www.crystalfontz.com/
7. Diogenes Business Incubator at http://www.diogenes.com.cy/

Corresponding author:
Author: Eleni J. Sakka
Institute: Ρ.Α. Mobihealth LTD
Street: 91 Aglandjia Avenue
City: 1678, Nicosia
Country: Cyprus
Email: [email protected]
A computer program for the functional assessment of the rotational vestibulo-ocular reflex (VOR)
A. Böhler1, M. Mandalà2 and S. Ramat3
1 Medical Device Technology, Upper Austria University for Applied Sciences, Linz, Austria
2 Dipartimento di Scienze Anestesiologiche e Chirurgiche, University of Verona, Verona, Italy
3 Dipartimento di Informatica e Sistemistica, University of Pavia, Pavia, Italy
Abstract— The vestibulo-ocular reflex (VOR) uses head angular acceleration information transduced by the semicircular canals in the inner ear to drive eye movements that compensate for head rotations, and thus stabilize the visual scene on the retina. Peripheral and central vestibular pathologies may impair the function of the VOR so that compensation becomes incomplete, making clear vision during head movement impossible. The clinical assessment of vestibular function is made difficult by the adaptive processes activated by the central nervous system of the patient, which quickly learns to use residual vestibular information or information provided through other senses to supplement the deficient VOR, especially for slow head movements. Clinical assessment may still be made using the head impulse test, where a compensatory saccade at the end of the head movement is the clinical sign of a vestibular deficit. Here we propose a new computerized technique for assessing vestibular function at different head angular accelerations, based on evaluating the ability of the patient to read a character briefly displayed on a computer screen while the head is being rotated. Keywords— inertial sensor, vestibular system, clinical testing, VOR
I. INTRODUCTION
The vestibular system contributes to the control of balance by transducing head accelerations and triggering reflex responses aimed at stabilizing gaze and body position in space. The VOR is responsible for the stabilization of gaze, and without an effective VOR vision would be impaired every time the head moves. The life-long prevalence of dizziness is about 30% in the general population, and this figure is even larger in the elderly, since aging affects the function of the vestibular system. The assessment of dizziness needs to evaluate the vestibular system, yet such evaluation may prove difficult since the central nervous system tries to compensate vestibular dysfunction through adaptation and substitution processes. Such mechanisms are especially efficient in response to low acceleration rotational stimuli, yet most diagnostic tools also investigate movements
in the same stimulus range. The head thrust test [1] is generally accepted as the clinical test of reference for high acceleration stimuli: the patient is asked to fixate a target (typically the examiner's nose) while the examiner briskly rotates the head. A normally working VOR will hold gaze steady; otherwise a corrective saccade will be needed at the end of the head movement to bring the image of the target back to the fovea. Such a saccade represents the clinical sign of dysfunction of the semicircular canal towards which the head has been rotated. The detection of the corrective saccade may be difficult at the bedside, and performing the test therefore requires an experienced clinician. In the few laboratories equipped with a magnetic search coil system it is possible to implement a quantitative, yet invasive, version of the test by simultaneously recording both the eye and the head movement [2], but such an approach clearly cannot be of widespread use. Previous work has suggested a measure of vestibular function in terms of dynamic visual acuity (DVA), i.e. the assessment of the visual acuity of a subject during head movement [3, 4, 5]. With such a testing technique the head is either actively [6] or passively [5] rotated and its angular velocity is recorded, so that an optotype is presented on a computer display when a fixed threshold velocity is exceeded. Current studies aim at assessing the dynamic visual acuity so that the resulting measurement is expressed in terms of the logarithm of the minimum angle resolvable (logMAR), which typically results in a lower visual acuity for dynamic vs. static conditions. The DVA may be used for diagnostic assessment either by comparing the decrease in visual acuity with a normative database or by comparing the performance in one direction of rotation with that in the other. Predictable head motion, such as self-generated or sinusoidal head rotations, has been shown to cause low sensitivity of DVA as a diagnostic tool [4, 7]. Our approach differs from that of DVA assessment as we want to test vestibular function in order to gain information on which parameters of the head movement, and not those of the visual stimulus, affect a subject's ability to stabilize gaze in space. With such information we will be able to improve the sensitivity and the specificity of the test while
understanding which natural activities may be impairing vision for each patient and should therefore be considered as potentially dangerous for himself and the people around him. In the following we describe a software program implementing a new technique for the assessment of vestibular function at different head angular accelerations. The subject's ability to stabilize gaze during head movement is challenged only by the intensity of head angular acceleration. The size of the optotype being displayed is normalized to the individual visual acuity and enlarged enough for readability not to be an issue.
II. METHOD
Based on the head impulse test rationale, we have developed a Python software program for acquiring angular velocity and linear accelerations from a head mounted sensor and displaying an optotype on a computer monitor when the imposed head angular acceleration exceeds a user-defined threshold. The software simultaneously verifies that the head rotational stimulus is correctly delivered. The sensing device consists of two parts: the sensor attached to the patient's head and the acquisition system, attached to a computer.
A. Sensor System
The inertial sensor is made up of a gyroscope (ADXRS300) and a 3-axis accelerometer (ADXL330), both manufactured by Analog Devices. The gyroscope gives accurate information about the rotation of the head within a range of ±300°/s, whereas the accelerometer allows the verification of the movement within a range of ±3 g. The sensors are packed together on a 2 cm by 2 cm circuit board, weighing only a few grams. This assembly is then mounted with an elastic band to the subject's head, thus allowing natural head movements.
B. Data Acquisition and Data Processing
The analog outputs of the sensors are captured by data acquisition hardware from National Instruments. We successfully tested our software on various USB-based cards, even some bus-powered ones, allowing the portable use of the system. We developed a simple wrapper around the C library provided by NI using Python's ctypes library. The code is based on sample code from the scipy.org Cookbook as well as NI's C example code. The entire system is GUI-driven, with a GTK+ interface designed using Glade-3. The flowchart in Figure 1 shows how the communication with the hardware runs in its own thread (ListenThread) and calls a function in the calculation thread (CalcThread) as soon as the specified number of data points is captured. In order to reduce the effect of sampling noise, raw data is captured at 10000 Hz but smoothed to 100 Hz (100 consecutive samples are averaged to provide one angular velocity or linear acceleration data point).

Fig. 1: Flowchart

The CalcThread does the actual data processing: it first converts volts to °/s and then calculates the derivative (acceleration). When the threshold is reached, the DisplayThread is notified of the change and asked to display the optotype on screen. The CalcThread continues to capture data for one second and then dispatches the dataset to the EvalThread. The EvalThread in turn does an offline analysis of the dataset: it verifies the accelerometer data to assess the correctness of the stimulus (see section Head Movement Criteria) and assigns the trial to the correct head acceleration bin, based on the maximum angular acceleration reached. The DisplayThread itself is responsible for the presentation of the letter, after a user-selectable delay and for a user-modifiable number of video frames. After presenting the letter, the DisplayThread signals the GUI to ask the user for the letter displayed.
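A minimal sketch of the smoothing and threshold-detection logic described for the CalcThread; the gyro scale factor and names are illustrative, and the actual program additionally handles calibration offsets and GUI signalling:

```python
import numpy as np

RAW_RATE = 10000       # raw sampling rate (Hz)
SMOOTH_RATE = 100      # rate after block averaging (Hz)
BLOCK = RAW_RATE // SMOOTH_RATE   # 100 raw samples -> 1 data point
VOLTS_TO_DPS = 200.0   # gyro scale factor in (deg/s)/V, illustrative

def smooth(raw_volts):
    """Average each block of 100 raw samples into one data point."""
    raw = np.asarray(raw_volts)
    n = len(raw) // BLOCK
    return raw[:n * BLOCK].reshape(n, BLOCK).mean(axis=1)

def crossed_threshold(gyro_volts, accel_threshold_dps2):
    """True when the head angular acceleration exceeds the threshold."""
    omega = smooth(gyro_volts) * VOLTS_TO_DPS     # deg/s
    alpha = np.diff(omega) * SMOOTH_RATE          # deg/s^2
    return bool(np.any(np.abs(alpha) > accel_threshold_dps2))
```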
C. Head Movement Criteria
In order to ensure that the test is correctly performed, our software looks for characteristics of the imposed head movement which may undermine the reliability of the test, and excludes trials presenting these patterns from those considered in the assessment of vestibular functionality. Two main concerns are addressed: (1) the variability of the axis of head rotation and (2) the presence of a translational movement of the head. In order to correctly stimulate one vestibular canal at a time, head rotation needs to occur in the plane of that canal. Horizontal semicircular canals, for instance, are tilted about 20° up with respect to the gravitational horizontal. Therefore,
in order to properly stimulate that canal pair with a horizontal head rotation, the head needs to be tilted nose-down. As a general rule of thumb the head should be pitched forward by an angle between 20° and 30°. Once the subject is positioned accordingly, our software ensures that during the rotational impulse the orientation of the head does not vary beyond a predefined tolerance (e.g. 5°). When the tolerance is exceeded, the trial is discarded and a message detailing the error type is displayed. Second, the software monitors the presence of translational components in the delivered stimulus, a condition that may frequently occur when the experimenter attempts to deliver higher acceleration stimuli. The compensation of a head translation requires the intervention of the translational VOR (tVOR), which depends on the otolith organs, and whose performance is known to be less than compensatory even in normal subjects [8]. Therefore, introducing a translational component in the stimulus would reduce the subject's possibility of reading the displayed optotype and make the results of the test unreliable. To avoid such a biasing factor, our software verifies that the linear acceleration relative to the plane in which the head rotation occurs does not exceed that expected from the instantaneous head angular velocity. Ideally our head movement sensor should be positioned on the axis of head rotation, so that the only acceleration sensed by the three-axis accelerometer will be that of gravity in both static and dynamic conditions. If the sensor is displaced with respect to the axis of head rotation by a distance r, the accelerometer will instead pick up two components of head acceleration: a centripetal (directed radially, ar) and a tangential (at) acceleration, as shown in Eqs. 1 and 2:

ar = −ω² · r    (1)
at = r · dω/dt    (2)
Prior to beginning the test we therefore perform some example head rotations while monitoring head angular acceleration, and we reposition the sensor in order to minimize the linear accelerations transduced by the accelerometer along the radial and tangential directions. This implies reducing the distance r in Eqs. 1 and 2, thus improving the positioning of the sensor so that it is closer to the axis of head rotation. During the test of individual semicircular canal function, we can then use the angular velocity and acceleration data acquired through the gyroscope to verify that the radial and tangential accelerations match those expected based on Eqs. 1 and 2. Trials with lateral head accelerations greater than an adaptive threshold (Eq. 3) will be rejected:

max(4, 1.1 · αT / 1000)    (3)

where αT is the angular acceleration threshold in °/s².
III. PERFORMANCE & VERIFICATION
A. Timing
The use of the Python language has both advantages and disadvantages with respect to performance. Although Python is a very efficient language, it is still interpreted. Normally timing is a problematic issue, but fortunately the acquisition hardware by National Instruments offers hardware-timed sampling, allowing software-based timers to be avoided.
B. Verification
In order to verify the timing of the system, which is crucial for the accuracy of the test, we attached a photodiode to the screen that detects a white square popping up together with the letter. The output of the photodiode is also captured via the data acquisition system and fed to the analysis software. The software can save the raw dataset in an HDF5 file for later analysis. The photodiode itself is a generic RS-Components diode, BPW21, RS part no. 303-719. It has a typical rise time of 1 μs, which was therefore neglected.

Fig. 2: Raw dataset and timing verification

To verify the timing of the visual stimulus we developed simple Matlab software measuring real-world delays based on the known sampling rate. A typical raw dataset is shown in Figure 2. The threshold was set at 2000°/s² (marked with a circle); the diode threshold was about 3.5 V (marked with an asterisk). The delay, that is, the time from overcoming the threshold until the diode detects the optotype, is 34 ms.
The time on screen, that is, the time the diode's value is high, is 16 ms, corresponding to one frame at 60 Hz. The artificial delays before displaying the letter and the display time of the letter are controlled via a call to the sleep() function. The accuracy of this approach was similarly verified. Figure 2 also shows how, with the current prototype and test setup, the letter is displayed around the time when maximum head acceleration is reached.
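The published analysis was done in Matlab; an equivalent sketch in Python of the delay computation on the saved raw dataset (channel names are illustrative) could be:

```python
import numpy as np

FS = 100.0  # sampling rate of the stored dataset (Hz)

def display_delay_ms(alpha_dps2, diode_volts,
                     alpha_thresh=2000.0, diode_thresh=3.5):
    """Time from the acceleration-threshold crossing to the photodiode
    detecting the optotype, in milliseconds (first crossings are used)."""
    i_alpha = int(np.argmax(np.abs(np.asarray(alpha_dps2)) > alpha_thresh))
    i_diode = int(np.argmax(np.asarray(diode_volts) > diode_thresh))
    return (i_diode - i_alpha) * 1000.0 / FS
```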
C. Operating System Influence
Our tests showed severe differences between Windows XP and Windows Seven. On Windows XP, the optotype display is about 25 ms faster than on Windows Seven (on the same hardware); see Table 1 for details. The first test was done using Windows Seven 64-bit (but 32-bit Python!), the second one using Windows XP Professional Tablet PC Edition 2005, 32-bit. The program was set with no artificial delay and a character display time of 16 ms.

Table 1: Timing Comparison (ms)
OS | Delay | Display Time
Windows Seven | 65.7 | 32.7
Windows XP | 41.5 | 34.7
IV. REPORTS AND RESULTS
The analysis software currently supports four different modes of representing results. The “Detailed Results” page lists every trial, the letter it asked for, the user's response, and the direction as well as the rate and (max) acceleration. For better readability correct results are colored green, false answers are colored red. The “Bins” page puts all results within a range of e.g. 1000°/s² into one bin and lists the percentage and number of correct answers in every direction. The “Vestibulogram” graphically shows the percentage of correct answers per bin with respect to the rotation direction. Finally, the “Error-Plot” represents the number of errors per bin, also divided into clockwise and counterclockwise. Figure 3 compares the results of two recordings. While the left-hand picture is from a healthy subject, the picture on the right shows the answers of a patient with bilateral deficits. As expected, the normal subject can read nearly 100% of the presented letters, while the vestibular patient has severe problems identifying the letter even at lower accelerations, and his performance drops dramatically at higher accelerations.

Fig. 3: Left: Healthy subject; Right: Patient with deficits
V. DISCUSSION
We have developed a software program for performing a test of the functionality of the individual semicircular canals of the vestibular system. The test evaluates the ability of the vestibulo-ocular reflex to maintain stable vision during passive, high acceleration impulsive head rotations of a range of intensities. The software displays a letter optotype on the test screen for a predefined number of video frames when the acceleration of the head overcomes an adjustable threshold. We have verified the timing of the visual stimulus with respect to the head acceleration recordings and confirmed that the display occurs when the acceleration reaches its peak values. The software ensures that the stimuli delivered to the head are correct by discarding trials presenting either changes of pitch angle or spurious translations. Four different pages summarize the test results and provide detailed diagnostic information about the performance of the subject. Head movement and test performance data are saved to disk for further analysis.
REFERENCES
1. G. M. Halmagyi and I. S. Curthoys, “A clinical sign of canal paresis,” Arch. Neurol., vol. 45, no. 7, pp. 737-739, July 1988.
2. S. T. Aw, T. Haslwanter, G. M. Halmagyi et al., “Three-dimensional vector analysis of the human vestibuloocular reflex in response to high-acceleration head rotations. I. Responses in normal subjects,” J. Neurophysiol., vol. 76, no. 6, pp. 4009-4020, Dec. 1996.
3. S. J. Herdman, “Role of vestibular adaptation in vestibular rehabilitation,” Otolaryngol. Head Neck Surg., vol. 119, no. 1, pp. 49-54, July 1998.
4. J. R. Tian, I. Shubayev, and J. L. Demer, “Dynamic visual acuity during yaw rotation in normal and unilaterally vestibulopathic humans,” Ann. N.Y. Acad. Sci., vol. 942, pp. 501-504, Oct. 2001.
5. M. C. Schubert, A. A. Migliaccio, and C. C. Della Santina, “Dynamic visual acuity during passive head thrusts in canal planes,” J. Assoc. Res. Otolaryngol., vol. 7, no. 4, pp. 329-338, Dec. 2006.
6. S. J. Herdman, R. J. Tusa, P. Blatt et al., “Computerized dynamic visual acuity test in the assessment of vestibular deficits,” Am. J. Otol., vol. 19, no. 6, pp. 790-796, Nov. 1998.
7. M. C. Schubert, S. J. Herdman, and R. J. Tusa, “Vertical dynamic visual acuity in normal subjects and patients with vestibular hypofunction,” Otol. Neurotol., vol. 23, no. 3, pp. 372-377, May 2002.
8. S. Ramat and D. S. Zee, “Ocular motor responses to abrupt interaural head translation in normal humans,” J. Neurophysiol., vol. 90, no. 2, pp. 887-902, Aug. 2003.
New Application for Automatic Hemifield Damage Identification in Humphrey Field Analyzer (HFA) Visual Fields A. Salonikiou, V. Kilintzis, A. Antoniadis, and F. Topouzis AUTH, Laboratory of Research and Clinical Applications in Ophthalmology, AHEPA Hospital, Thessaloniki, Greece
Abstract— Glaucoma is a disease affecting the optic nerve. Structural as well as functional changes occur as the disease progresses. Glaucomatous optic discs appear with certain patterns of structural damage, and the disease's main consequence is damage to the patient's visual field. Taking therapeutic decisions for glaucomatous patients requires visual field examination, which to this day is the cornerstone of the management of glaucomatous patients. The Humphrey Field Analyzer (HFA) is the most commonly used device for visual field examination. There are certain demographic and anatomic risk factors contributing to glaucoma occurrence and progression. Vascular risk factors as well as ocular blood flow have also been studied, but their role remains unclear. Studying the correlation between various risk factors and structural and functional damage in glaucoma has lately been one of the most interesting research fields. The purpose of this project was the development of software that reads HFA data from large databases and categorizes visual fields according to hemifield damage. The results can be used for the correlation of hemifield damage with differences in ocular blood flow between the upper and lower half of the patients' retina, as measured with the Heidelberg Retina Flowmeter (HRF). The software can also be used for the calculation of the binocular visual field, after the proper modifications. The software was developed in PHP and the interface was designed so as to offer a user-friendly environment and both a summary and a descriptive results display. Keywords— glaucoma, Humphrey Field Analyzer, visual field, hemifield, structure-function correlation.
I. INTRODUCTION Glaucoma is the leading cause of irreversible blindness that could be prevented in the world [1]. It is a complex disease [2, 3] with unknown etiology and includes a group of different clinical entities. It affects the optic nerve causing characteristic damage to the optic disc and the retinal nerve fiber layer (RNFL). Visual field (VF) damage is its consequence if it remains untreated. Its course is long lasting and does not cause symptoms in the majority of the patients until the visual field is severely damaged and the central vision is affected.
Risk factors for glaucoma occurrence and progression have been determined and include age, central corneal thickness, family history of glaucoma and elevated intraocular pressure [4-7]. The latter is the only modifiable risk factor. Vascular risk factors are also being studied, but their role remains unclear [8-11]. Diagnosis and management are based firstly on clinical examination. VF examination is necessary for determining the stage of the disease and is the cornerstone of further management. Automated perimetry is the method used to examine patients' visual fields, and the Humphrey Field Analyzer (HFA) is one of the most frequently used devices. Nowadays new laser imaging technologies have been developed for the diagnosis and further management of glaucoma. This offers the opportunity to study different structural aspects of the disease and provides quantified data enabling us to study the correlation between structure and function. Correlation of visual field damage and blood flow has been performed using various methods. Hemifield damage has been correlated to mean blood flow measured with the Heidelberg Retina Flowmeter (HRF) [12], while VF indices like the mean deviation (MD) or Corrected Pattern Standard Deviation (CPSD) have been correlated with blood flow calculated using 10x10 pixel window analysis of the HRF [13] or laser speckle flowgraphy (LSFG) [14]. Finally, other studies have correlated VF damage to RNFL measurements with Optical Coherence Tomography (OCT) [15, 16] or scanning laser polarimetry (GDx) [17]. The purpose of this project was the development of software that reads HFA data from large databases and identifies the hemifield with the more extended VF defect. Current HFA software does not provide this information, and identification of the hemifield with the more extended damage is currently done by clinicians when they evaluate the printout of the VF test. Evaluating printouts manually is impractical for large databases. This software provides a useful tool to automatically obtain the hemifield with the more extended damage in large VF databases. This can be used firstly for the correlation of hemifield damage with differences in ocular blood flow between the
upper and lower half of the patients' retina, as measured with the Heidelberg Retina Flowmeter (HRF) using the pixel-by-pixel technique [18]. Furthermore, other potential applications include hemifield structure-function correlation with Heidelberg Retina Tomograph (HRT) measurements. Moreover, after the proper modifications, the software could be used i) to evaluate hemifield damage not only with regard to how extended it is but also according to how deep it is, and ii) to calculate the binocular visual field.
II. METHODS A. HFA Operation and Description of the Printout The HFA records the sensitivity of certain test points in the visual field to light targets of known intensity. The sensitivity is measured in decibels (dB). The HFA has a range of suprathreshold and full threshold strategies, with the 30-2 and 24-2 SITA standard programs being the most commonly used [19]. The 24-2 program tests 54 points, while the 30-2 tests 76 points. At the end of the examination, the HFA gives the measured sensitivity of each test point in a printout (Fig. 1) containing several displays, including the greyscale, the numerical, the total deviation and the pattern deviation. The total deviation display represents the deviation of the patient's results from those of age-matched controls, while the pattern deviation is adjusted for any generalized depression in the overall field that might be caused by other factors such as lens opacities or miosis [19]. There are also indices summarizing the test results in a single number. The most often used is the Mean Deviation (MD), which is a measure of the overall field loss [19]. An important piece of information on the printout are the reliability indices: fixation losses, false positives and false negatives. Values >33% for fixation losses or >20% for false positives and false negatives render the test unreliable, according to the European Glaucoma Society (EGS) Guidelines [20]. This means that the test results may not represent the true visual field status. However, a value of more than 33% for false negative answers may be a sign of disease severity [19]. Some studies consider false positive and false negative values of up to 33% acceptable [21]. It should be mentioned at this point that, to date, there is no open source software for the extraction and processing of HFA data.
Fig. 1 Visual Field printout

B. VF Defect Scoring

Evaluation of a VF test examination is based on certain rules, since not all defects are attributed to glaucoma. The most commonly used criteria are those suggested by the Advanced Glaucoma Intervention Study (AGIS), a multicenter, randomized clinical trial [22]. The investigators of this study developed quantitative methods to assess test reliability and to measure the severity of glaucomatous visual field defects with the 24-2 threshold program of the Humphrey Visual Field Analyzer. More specifically, the scoring of visual field defects is based on the following:
• Defects may occur in the upper or lower hemifield, or in the nasal field.
• The total deviation plot is used. Test locations above and below the center of the physiologic blind spot are excluded from scoring since they are not reported on the total deviation plot.
• For a defect in the hemifields to be considered, three or more adjacent sites within the hemifield must be affected. Two locations are adjacent if they are side by side either horizontally, vertically or obliquely. Three or more locations in a cluster of sites are adjacent if each location in the cluster is adjacent to at least one other in the cluster.
• To be considered defective, the depression of a patient's threshold at a test site must be sufficiently large, compared with age-adjusted normal values, as to be unlikely due to spontaneous intra-test fluctuation [23].
• The defect should be caused by glaucoma and not by other ocular diseases. This is based on the clinical ophthalmic examination.
• The amount of depression that renders a test site defective varies with its location, as shown in Fig. 2.
• A cluster of three or more depressed sites in a hemifield constitutes a hemifield defect.

Fig. 2 Minimum amount of depression, in decibels, that identifies test locations as defective in the Humphrey Field Analyzer threshold 24-2 test total deviation plot of the STATPAC-2 printout. Array shown is for the right eye; the left eye is a mirror image. Sites above and below the center of the blind spot are not counted [22]
C. Software Development

The developed web-based application incorporates the advantages of a user-friendly interface and remote access capability. The development language was PHP5 and the service was supported by the Apache Web Server, both commonly used open source tools. We used VF data from our Laboratory's HFA database. The conversion from the sensitivity map to the deviation map was done with an external commercial program linked to the HFA, producing a comma separated value (csv) file.
• At first, a script that parses the csv file was implemented. Two 4x9 arrays were defined to hold 24-2 VF data, or two 5x10 arrays for 30-2 VFs. Each of the two arrays represents the upper and lower hemifield respectively. Peripheral points with no values in the VF were given the value 99 so as to be ignored during the calculations in the next steps. The same value was given to the points corresponding to the "blind spot" (the points corresponding to the optic nerve head).
• With the use of the appropriate loops, data from the csv file were read and test point values were attributed to the proper array positions. Data in the file are stored in a different order for right and left eyes, so the parsing script manipulates the data differently according to eye. The result of this procedure was the reconstruction of the total deviation plot.
• Then the deviation from the sensitivity thresholds was calculated, using the AGIS array shown in Fig. 2. This was done by algebraically adding the minimum amount of depression to each test point's sensitivity value. Since the array contains thresholds only for 24-2 VFs, the peripheral values of 30-2 VFs were ignored and these specific VFs were handled as if they were 24-2. As soon as respective values for these test points appear in the literature, they will be incorporated into the program.
• If the number resulting from the above procedure had a negative sign, the specific test point had a reduction in sensitivity outside the normal range. All test points with a negative value were detected and their neighboring test points were checked for a possible negative value, with properly designed loops. Each time a defective neighbor was detected, a counter increased its value by one. At the end of the procedure, a new array of the same dimensions as the initial one was produced, containing for each point the number of its defective neighbors.
• Then, clusters of defective test points were detected. When the sum of the values of two adjacent points was >2, meaning that the specific point and one of its neighbors had at least one more defective neighbor, a cluster existed. A counter held the number of hemifield test points belonging to clusters. The comparison of the number of points forming clusters for the upper and lower hemifield revealed the hemifield with the most extended defect (see the sketch below).
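As a rough illustration of the procedure described in the list above, the following minimal sketch reproduces the defective-neighbor count and the cluster test for one hemifield. It is written in Python, whereas the original application was implemented in PHP5; the array shapes, the 99 sentinel and the neighbor-sum rule follow the text, while all function names are ours.

```python
import numpy as np

SENTINEL = 99  # assigned to untested peripheral points and the blind spot;
               # being positive, such points are never counted as defective

def neighbour_counts(dev):
    """For every defective point (negative deviation from the AGIS
    thresholds), count its defective 8-connected neighbours."""
    rows, cols = dev.shape
    defective = dev < 0
    counts = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if defective[r, c]:
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                                and defective[rr, cc]:
                            counts[r, c] += 1
    return counts, defective

def cluster_points(dev):
    """Number of hemifield points belonging to clusters: a point counts when
    it and an adjacent defective point have neighbour counts summing to >2,
    as described in the last bullet above."""
    counts, defective = neighbour_counts(dev)
    rows, cols = dev.shape
    in_cluster = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            if not defective[r, c]:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                            and defective[rr, cc] \
                            and counts[r, c] + counts[rr, cc] > 2:
                        in_cluster[r, c] = in_cluster[rr, cc] = True
    return int(in_cluster.sum())

# upper and lower are 4x9 (24-2) deviation arrays; the hemifield with the
# larger count carries the more extended defect:
# worse = "upper" if cluster_points(upper) > cluster_points(lower) else "lower"
```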
The reliability of VFs was checked according to the aforementioned criteria [20]. If a VF test was found unreliable, it was displayed in red, allowing exclusion of the specific test from further investigations. The interface is designed in a user friendly way. The user chooses the local csv file to be analysed. A notification appears if a wrong file type is selected. There is the option to see the results either in a list format or in a more descriptive display. The results can be saved as .txt files for further processing. The application home page is illustrated in Fig. 3.
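A compact sketch of the EGS reliability rule applied here (the thresholds come from Section II.A; the function name and the use of percentages as inputs are our illustrative choices):

```python
def is_reliable(fixation_losses, false_positives, false_negatives):
    """EGS criteria [20]: >33% fixation losses, or >20% false positives or
    false negatives (all given in percent), render the test unreliable."""
    return (fixation_losses <= 33
            and false_positives <= 20
            and false_negatives <= 20)
```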
Fig. 3 Application home page
III. CONCLUSIONS This software offers the opportunity to study large VF databases, allowing pointwise VF data processing. For the time being, our program runs in a local environment. However, the fact that it is a web-based application allows it to be used remotely and therefore to serve, for example, databases from large population-based multicentric studies. Hemifield-based analysis of glaucomatous VFs, along with measurements using imaging technologies, may reveal interesting information on the structure-function correlation in glaucoma. Furthermore, after the proper adjustments to the program, reconstruction of the binocular visual field will be useful for evaluating the consequences of VF damage on glaucomatous patients' everyday living.
ACKNOWLEDGMENT We would like to thank the scientific as well as the technical staff of the Laboratory of Research and Clinical Applications in Ophthalmology for their valuable advice and help during the realization of this project.
REFERENCES
1. Quigley HA (1996) Number of people with glaucoma worldwide. Br J Ophthalmol 80:389-393
2. Copin B, Brézin AP, Valtot F et al. (2002) Apolipoprotein E-promoter single-nucleotide polymorphisms affect the phenotype of primary open-angle glaucoma and demonstrate interaction with the myocilin gene. Am J Hum Genet 70:1575-1581
3. Wiggs JL, Allingham RR, Hossain A et al. (2000) Genome-wide scan for adult onset primary open angle glaucoma. Hum Mol Genet 9:1109-1117
4. Leske MC, Wu SY, Hennis A et al. (2008) Risk factors for incident open-angle glaucoma: the Barbados Eye Studies. Ophthalmology 115:85-93
5. Leske MC et al. (2004) Factors for progression and glaucoma treatment: the Early Manifest Glaucoma Trial. Curr Opin Ophthalmol 15:102-106
6. Leske MC, Heijl A, Hussein M et al. (2003) Factors for glaucoma progression and the effect of treatment: the Early Manifest Glaucoma Trial. Arch Ophthalmol 121:48-56
7. Leske MC, Heijl A, Hyman L et al. (2007) Predictors of long-term progression in the Early Manifest Glaucoma Trial. Ophthalmology 114:1965-1972
8. Wilson MR, Hertzemark E, Walker AM et al. (1987) A case-control study of risk factors in open angle glaucoma. Arch Ophthalmol 105:1066-1071
9. McLeod SD, West SK, Quigley HA, Forzard JL (1990) A longitudinal study of the relationship between intraocular and blood pressures. Invest Ophthalmol Vis Sci 31:2361-2366
10. Gasser P, Flammer J (1991) Blood cell velocity in the nailfold capillaries in patients with normal-tension and high-tension glaucoma. Am J Ophthalmol 111:585-588
11. Drance SM, Douglas GR, Wisjman K et al. (1988) Response of blood flow to warm and cold in normal and low-tension glaucoma patients. Am J Ophthalmol 105:35-39
12. Sato EA, Ohtake Y, Shinoda K et al. (2006) Decreased blood flow at neuroretinal rim of optic nerve head corresponds with visual field deficit in eyes with normal tension glaucoma. Graefes Arch Clin Exp Ophthalmol 244:795-801
13. Ciancaglini M, Carpineto P, Costagliola C et al. (2001) Perfusion of the optic nerve head and visual field damage in glaucomatous patients. Graefes Arch Clin Exp Ophthalmol 239:549-555
14. Yaoeda K, Shirakashi M, Fukushima A et al. (2003) Relationship between optic nerve head microcirculation and visual field loss in glaucoma. Acta Ophthalmol Scand 81:253-259
15. Harwerth RS, Vilupuru AS, Rangaswamy NV et al. (2007) The relationship between nerve fiber layer and perimetry measurements. Invest Ophthalmol Vis Sci 48:763-773
16. Hood DC, Kardon RH (2007) A framework for comparing structural and functional measures of glaucomatous damage. Prog Retin Eye Res 26:688-710
17. Mai TA, Reus NJ, Lemij HG (2007) Structure-function relationship is stronger with enhanced corneal compensation than with variable corneal compensation in scanning laser polarimetry. Invest Ophthalmol Vis Sci 48:1651-1658
18. Mavroudis L, Harris A, Topouzis F et al. (2008) Reproducibility of pixel-by-pixel analysis of Heidelberg retinal flowmetry images: the Thessaloniki Eye Study. Acta Ophthalmol 86:81-86
19. Kanski J (2003) Clinical Ophthalmology: A Systematic Approach. Butterworth Heinemann, Elsevier, USA
20. European Glaucoma Society (2008) Terminology and Guidelines for Glaucoma, 3rd edition
21. Topouzis F, Wilson MR, Harris A et al. (2007) Prevalence of open-angle glaucoma in Greece: the Thessaloniki Eye Study. Am J Ophthalmol 144:511-519
22. The Advanced Glaucoma Intervention Study Investigators (1994) Advanced Glaucoma Intervention Study 2. Visual field test scoring and reliability. Ophthalmology 101:1445-1455
23. Jampel HD, Vitale S, Ding Y et al. (2006) Test-retest variability in structural and functional parameters of glaucoma damage in the Glaucoma Imaging Longitudinal Study. J Glaucoma 15:152-157
Author: Angeliki Salonikiou
Institute: Aristotle University of Thessaloniki, Laboratory of Research and Clinical Applications in Ophthalmology, A' Department of Ophthalmology, AHEPA Hospital
Street: Stilponos Kyriakidi 1
City: Thessaloniki
Country: Greece
Email: [email protected]
The Effect of Mechano– and Magnetochemically Synthesized Magnetosensitive Nanocomplex and Electromagnetic Irradiation on Animal Tumor
V.E. Orel1, A.V. Romanov1, I.I. Dzyatkovska1, M.O. Nikolov1, Yu.G. Mel'nik1, N.M. Dzyatkovska1 and I.B. Shchepotin2
1 National Cancer Institute/Medical Physics & Bioengineering Laboratory, Kyiv, Ukraine
2 National Cancer Institute/Department of Tumors of Abdominal Cavity and Retroperitoneal Space, Kyiv, Ukraine
Abstract— Research in animals with Guerin carcinoma showed that a mechano– and magnetochemically synthesized magnetosensitive nanocomplex (MNC) based on Fe3O4 nanoparticles, KCl and doxorubicin had a greater antitumor effect than conventional doxorubicin and than a similar, only mechanochemically synthesized MNC, when followed by local electromagnetic irradiation and mild hyperthermia of the animal tumors. The survival rate of tumor-bearing animals was maximal in the experiments in which mechanochemically or mechano– and magnetochemically synthesized MNC was introduced and the tumors were subsequently irradiated. Keywords— mechano– and magnetochemically synthesized magnetosensitive complex, magnetic nanoparticles, electromagnetic irradiation, mild hyperthermia, tumor. I. INTRODUCTION
A central problem of cancer therapy is that the dose of systemically applied chemotherapeutics needed to annihilate all tumor cells would often also end the life of the cancer patient. The use of magnetic nanoparticles is one attempt to overcome this dilemma. Direct injection of the magnetic particles themselves for inducing local hyperthermia is currently under investigation. Magnetic drug targeting is another promising approach for treating malignancies. This method makes use of superparamagnetic nanoparticles bound to chemotherapeutics, focused on the tumor region by a strong external magnetic field. This leads to higher doses of the chemotherapeutic agent in the region of the malignancy, even if the overall dose is reduced [1]. The use of a spatially inhomogeneous electromagnetic field for local inductive hyperthermia at physiological temperatures increased the antitumor effect of the drug doxorubicin (DR) on transplanted DR-resistant Guerin carcinoma and was accompanied by a change of thermodynamic entropy [2]. We suppose that the magnetosensitive spin-dependent reaction between structural defects initiated by mechanochemical activation [3] will probably increase the antitumor effect of DR. In this study we focus on the effect of a mechano– and magnetochemically synthesized magnetosensitive
nanocomplex (MNC) and spatially inhomogeneous electromagnetic irradiation (EI) on the nonlinear growth dynamics of Guerin carcinoma. II. MATERIALS AND METHODS
A. Experimental animals and tumor transplantation. The study used 56 male rats weighing 100 ± 15 g, bred in the vivarium of the National Cancer Institute. The transplantation of Guerin carcinoma was performed according to the established procedure. All animal procedures were carried out according to the rules of the regional ethics committee. Animals were housed in 7 groups: 1 – control (no treatment); 2 – treatment by DR; 3 – DR + EI; 4 – treatment by mechanochemically synthesized MNC; 5 – mechano– and magnetochemically synthesized MNC; 6 – mechanochemically synthesized MNC + EI; 7 – mechano– and magnetochemically synthesized MNC + EI. B. Mechano– and magnetochemical synthesis. Electromagnetic irradiation. The MNC was mechano– and/or magnetochemically synthesized from Fe3O4 nanoparticles, KCl and DR by a laboratory magnetic-resonance high-precision tribogenerator. Mechanical processing was performed at a frequency of 35 Hz and an amplitude of 9 mm for 5 min, using an input mechanical energy of 20 W/g and 27.7 MHz EI with an initial power of 100 W. The mean diameter of the Fe3O4 nanoparticles was 20–40 nm. The first prototype of the device for EI, called "Magnetotherm" (Radmir, Ukraine), was used (Nikolov et al., 2008). The frequency of EI was 40 MHz with an initial power of 100 W. The animal tumors were irradiated locally by inductive coaxial applicators that had a spatially inhomogeneous electromagnetic field and initiated mild hyperthermia (37.9 °C) in the tumors [2]. Experimental animals were treated by MNC: DR (Pharmacia & Upjohn) in the dose 1.5 mg/kg, Fe3O4 + KCl in the dose 3 mg/kg. The weight percentage of KCl was 3%. The treatment was performed three times, by drug and EI, from the 3rd
day after tumor transplantation, every other two days. A permanent magnet with H = 1990 A/m was placed over the tumor area for localization of the MNC within the tumor. C. The analysis of nonlinear kinetics of tumor volume. The nonlinear kinetics of tumor volume was evaluated by the growth factor M according to the autocatalytic equation and by the braking ratio [4]. Statistical processing of the numerical results was carried out using the Statistica 6.0 (© StatSoft, Inc. 1984–2001) computer program with the parametric Student's t-test. III. RESULTS AND DISCUSSION
The growth kinetics of the animal tumors is shown in Table 1. Tumor growth in the 7th group showed the minimal response, under the influence of mechano– and magnetochemically synthesized MNC and EI.

Table 1 The growth kinetics of Guerin carcinoma from 7 to 24 days after transplantation

N   Treatment                                               M, day-1 (M ± m)   N
1   Control (without DR, MNC and EI)                        0.31 ± 0.04        1
4   Mechanochemically synthesized MNC                       0.28 ± 0.04        1.08
2   DR                                                      0.18 ± 0.01*       1.66
6   Mechanochemically synthesized MNC + EI                  0.16 ± 0.01*       1.94
3   DR + EI                                                 0.16 ± 0.02*       1.89
5   Mechano– and magnetochemically synthesized MNC          0.16 ± 0.01*       1.88
7   Mechano– and magnetochemically synthesized MNC + EI     0.13 ± 0.03*       2.43

* Statistically significant difference from control group
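The autocatalytic equation from [4] is not reproduced in the paper; purely as an illustration, the following sketch assumes the common logistic (autocatalytic) form dV/dt = M·V·(1 − V/Vmax), reads the table's N column as a braking ratio M(control)/M(treated), and uses invented volume data — all of these are our assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, v0, vmax, m):
    """Solution of the autocatalytic (logistic) law dV/dt = M V (1 - V/Vmax)."""
    return vmax / (1.0 + (vmax / v0 - 1.0) * np.exp(-m * t))

def growth_factor(days, volumes):
    """Least-squares estimate of the growth factor M (day^-1)."""
    p0 = (volumes[0], 1.5 * volumes.max(), 0.2)   # rough initial guesses
    (_, _, m), _ = curve_fit(logistic, days, volumes, p0=p0, maxfev=10000)
    return m

# Hypothetical tumor volumes (cm^3) measured on days 7-24 after transplantation
days    = np.array([7., 10., 13., 16., 19., 22., 24.])
v_ctrl  = np.array([0.5, 1.4, 3.5, 7.1, 11.0, 14.2, 15.5])   # control group
v_treat = np.array([0.5, 0.9, 1.6, 2.6, 3.8, 5.1, 5.9])      # treated group

m_ctrl, m_treat = growth_factor(days, v_ctrl), growth_factor(days, v_treat)
print(f"M(control) = {m_ctrl:.2f} 1/day, M(treated) = {m_treat:.2f} 1/day, "
      f"braking ratio = {m_ctrl / m_treat:.2f}")
```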
The greatest survival rate was observed for the animals from the 6th and 7th groups (Fig. 1). It exceeded the survival rate of the control group by 77%, and that of the 4th group (injected with mechanochemically synthesized MNC) by 50%. For the animals of the 5th group, whose MNC was synthesized both mechano– and magnetochemically before introduction, the survival rate increased by 39% in comparison with the 4th group, but was 20% lower than in the 6th and 7th groups.
Fig. 1 The survival rate of animals with Guerin carcinoma: 1 – control (without DR, MNC and EI); 2 – DR; 3 – DR + EI; 4 – mechanochemically synthesized MNC; 5 – mechano– and magnetochemically synthesized MNC; 6 – mechanochemically synthesized MNC + EI; 7 – mechano– and magnetochemically synthesized MNC + EI

Our cooperative studies with Dr. A.P. Burlaka and Dr. S.N. Lukin showed that the electron spin resonance spectra of mechano– and magnetochemically synthesized MNC had a broad peak with g-factors in the range 4.25–6.0. These are typical g-factors for the iron-transport proteins methemoglobin and transferrin [5]. Fe3O4 nanoparticles have g-factors of 2.0839 and 2.18838 [6]. For conventional and mechanochemically synthesized DR, the g-factors equal 2.005, 2.003 and 1.97 [7]. We propose that the increased antitumor effect of the MNC was the result of spin conversion in radical electron pairs during mechano– and magnetochemical synthesis of the MNC and EI of the animal tumor. IV. CONCLUSION
Research in animals with Guerin carcinoma showed that mechano– and magnetochemically synthesized MNC based on Fe3O4 nanoparticles, KCl and doxorubicin had a greater antitumor effect than conventional doxorubicin and than a similar, only mechanochemically synthesized MNC, when followed by local electromagnetic irradiation and mild hyperthermia of the animal tumors. The survival rate of tumor-bearing animals was maximal in the experiments in which mechanochemically or mechano– and magnetochemically synthesized MNC was introduced and the tumors were subsequently irradiated.
REFERENCES
1. Peng X, Qian X, Mao H et al (2008) Targeted magnetic iron oxide nanoparticles for tumor imaging and therapy. Int J Nanomedicine 3:311-321
2. Orel V, Romanov A (2010) The Effect of Spatially Inhomogeneous Electromagnetic Field and Local Inductive Hyperthermia on Nonlinear Dynamics of the Growth for Transplanted Animal Tumors. In: Nonlinear Dynamics (Ed. Todd Evans). INTECH, Croatia
3. Golovin Y (2004) Mechanochemical reaction between structural defects in magnetic fields. J Mat Sci 39:5129-5134
4. Emanuel N (1977) Kinetics of experimental tumor processes. Nauka, Moscow (in Russian)
5. Saifutdinov R, Larina L, Vakulskaya T et al (2001) Electron paramagnetic resonance in biochemistry and medicine. Kluwer Academic/Plenum Publishers, New York
6. Köseoglu Y, Yildiz F, Kim D et al (2004) EPR studies on Na-oleate coated Fe3O4 nanoparticles. Physica Status Solidi 12:3511-3515
7. Orel V, Kudryavets Y, Bezdenezhnih N et al (2005) Mechanochemically activated doxorubicin nanoparticles in combination with 40 MHz frequency irradiation on A-549 lung carcinoma cells. Drug Delivery 12:171-178

Author: Valerii E. Orel
Institute: National Cancer Institute
Street: 33/43 Lomonosova Str
City: Kyiv
Country: Ukraine
Email: [email protected]
Verification of Measuring System for Automated Intra-Abdominal Pressure Measurement
T. Tóth, M. Michalíková, L. Bednarčíková, M. Petrík, and J. Živčák
Technical University of Košice, Faculty of Mechanical Engineering, Department of Biomedical Engineering, Automation and Measurement, Košice, Slovakia

Abstract— The newest medical research and studies keep increasing the level of equipment in health centers. In contrast, some areas of medicine in the Slovak Republic and abroad do not yet exploit all current knowledge and possibilities. One of these is the measurement of intra-abdominal pressure in critically ill patients. Analysis shows that intra-abdominal pressure is most often measured via the bladder, a minimally invasive method of considerable importance. The measurement techniques in common use at present do not meet the criteria for modern diagnostic methods. This paper describes the verification of a measuring system for intra-abdominal pressure measurement. The measuring system is the basic part of a proposed device for automated measurement of intra-abdominal pressure.

Keywords— intra-abdominal pressure, compartment syndrome, measurement.

I. INTRODUCTION

Accidents in the abdominal area and polytraumas cause the number of patients with intra-abdominal hypertension and abdominal compartment syndrome to increase. One of the methods which allows preventing complications due to high abdominal pressure is its measurement. Untreated intra-abdominal hypertension has a high death rate. Intra-abdominal pressure (IAP) was first described in a work of Marey in 1863. In 1865 Braune registered the first measurement of intra-abdominal pressure; the measurement was performed via the rectum. Over the next 25 years the results of treatment and measurement of intra-abdominal pressure were documented. Heinricius from Germany (1890) determined that a pressure between 27 and 45 cmH2O (approx. 20–33 mmHg, 2.64–4.41 kPa) is lethal for animals with breathing disease, as it decreases the blood pressure and the diastolic distension of the heart. Around 1911, Haven Emerson issued his work on intra-abdominal pressure. Its results are:
• the contraction of the diaphragm during inspiration is the main factor raising intra-abdominal pressure,
• anesthesia and muscle paralysis reduce the pressure in the abdomen,
• high abdominal pressure decreases vessel perfusion and may cause death.

The intra-abdominal compartment syndrome was described by Kron, Harman and Nolan in 1984, and Fietsam was the first to use the term "abdominal compartment syndrome", in 1989 [8]. The standardization of terms began at the Second World Conference on Abdominal Compartment Syndrome, and the final report was published in 2004.

II. BASIC TERMS
The abdomen can be considered a closed box with walls either rigid (costal arch, spine, and pelvis) or flexible (abdominal wall and diaphragm). The elasticity of the walls and the character of the contents determine the pressure within the abdomen at any given time. Since the abdomen and its contents can be considered relatively noncompressive and primarily fluid in character, behaving in accordance with Pascal's law, the IAP measured at one point may be assumed to represent the IAP throughout the abdomen [1, 3].

Intra-Abdominal Pressure (IAP)
IAP is defined as the steady-state pressure within the abdominal cavity. IAP increases during inspiration (diaphragm contraction) and decreases during expiration (diaphragm relaxation). It depends directly on the volume of the organs, the presence of disease, and limitations of abdominal wall expansion [2, 4, 6]. Figure 1 illustrates intra-abdominal hypertension; its value depends on the clinical scenario.

Intra-Abdominal Hypertension (IAH)
Pathological IAP is a continuum ranging from mild IAP elevations without clinically significant adverse effects to substantial increases in IAP with grave consequences to virtually all organ systems in the body. [2, 4, 6]
Fig. 1 Description of the pressure status in the abdomen and organ dysfunction depending on the intra-abdominal pressure

Abdominal Compartment Syndrome (ACS)
Critical IAP in the majority of patients, as outlined above, appears to reside somewhere between 10 and 15 mmHg. It is at this pressure that reductions in microcirculatory blood flow occur, and the initial development of organ dysfunction and failure is first witnessed. ACS is the natural progression of these pressure-induced end-organ changes and develops if IAH is not recognized and treated in a timely manner. [2, 4, 6]

The recognition of the significance of IAP monitoring in IAH diagnostics and management started the development of direct (invasive) and indirect (noninvasive) measurement methods. In medical praxis the most used method is the indirect measurement via the bladder. Table 1 gives correlation coefficients for intra-abdominal pressure measured via the bladder. [3, 5]

Table 1 Pressure dependency between bladder pressure and intra-peritoneal pressure

Author        Published in     Year    Correlation Coefficient
Ridings       J Trauma         1995    0.98
Johna         CC forum         1999    0.92
Fusco         J Trauma         2001    0.88
Davis         Int Care Med     2005    0.95
Risin         Am J Surg        2006    0.96
Schachtrupp   Crit Care Med    2006    0.95

III. VERIFICATION OF MEASURING SYSTEM

The testing device for pressure sensor verification is set up from two parts (Figure 2). The first part represents the abdomen model, made from a 250 ml saline bag (replacing the bladder). The saline bag is set at the bottom of a 35 L container, which allows creating pressures up to 25 mmHg. The bag is connected to the bottom of the container with Velcro. The second part is the sensing system. For level detection two glass pipes are used as level gauges. The first gauge is connected via a reduction to the saline bag and measures the pressure (the water column above the bag surface) in the bag. The second gauge directly measures the level of the water column. The gauge inputs are at the same distance from the bottom of the container. The sensing system is connected to the reduction through a hose-pipe with a 4 mm inner diameter. [10]

Fig. 2 Schematic representation of the proposal for the experimental verification of the sensing system

The measurement was performed with the following approach:
• the saline bag was filled with 100 ml of water,
• a pressure of 5 mmHg was created through the water column; the value was read from the level gauge,
• stabilization of the water level (15–20 s),
• measuring process,
• the pressure was increased up to 25 mmHg in 5 mmHg steps, measuring after each increment,
• after reaching 25 mmHg the pressure was decreased in 5 mmHg steps, measuring after each decrement,
• with this approach 20 measurement packs were obtained, each pack containing 5 levels (5–25 mmHg in 5 mmHg steps). [9]

One measurement contains 50 values with a 100 ms pause between two values. The pressure sensor has an analog output (0–5 V), which is processed in a PIC microprocessor. The program in the PIC was designed for reading the data from the sensor, A/D conversion (10-bit) and sending the data to a PC. The values from the A/D converter are recalculated to pressure values (Table 2).
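The paper does not state the sensor transfer function; as a sketch only, the fragment below assumes a ratiometric 10-bit conversion and a hypothetical linear calibration (the slope and offset are invented for illustration and would in practice come from the level-gauge calibration described above):

```python
VREF = 5.0             # the sensor output spans 0-5 V
ADC_FULL_SCALE = 1023  # 10-bit converter

# Hypothetical linear calibration p = a*U + b (mmHg)
A_MMHG_PER_V, B_MMHG = 12.5, -1.0

def adc_to_pressure(code):
    """Recalculate a raw 10-bit ADC code to a pressure value in mmHg."""
    voltage = code * VREF / ADC_FULL_SCALE
    return A_MMHG_PER_V * voltage + B_MMHG
```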
The results of the measurements are affected by the measurement methodology. The biggest problem is the connection of the saline bag to the container: the bag is connected with Velcro only along its centre line.

Table 2 Recalculated pressure values (mmHg) for the 20 measurement packs

Pack            p(5)     p(10)    p(15)    p(20)    p(25)
1               5.413    10.326   15.180   20.084   24.889
2               5.368    10.194   15.072   19.870   24.834
3               5.408    10.267   15.154   19.950   24.845
4               5.377    10.232   15.113   19.978   24.821
5               5.411    10.284   15.157   20.001   24.840
6               5.430    10.249   15.114   19.944   24.886
7               5.430    10.273   15.255   19.985   24.817
8               5.403    10.214   15.075   19.911   24.837
9               5.368    10.267   15.075   19.933   24.817
10              5.443    10.236   15.092   19.762   24.824
11              5.422    10.271   15.114   19.982   24.804
12              5.421    10.226   15.041   19.889   24.839
13              5.361    10.227   15.095   19.960   24.786
14              5.371    10.254   15.051   19.905   24.826
15              5.431    10.249   15.063   19.971   24.783
16              5.491    10.309   15.097   19.997   24.905
17              5.504    10.352   15.166   20.006   24.853
18              5.415    10.283   15.107   20.026   24.865
19              5.472    10.279   15.157   20.015   24.837
20              5.443    10.315   15.135   20.006   24.861
Mean p          5.419    10.265   15.116   19.959   24.838
Std. dev. sp    0.040    0.040    0.052    0.069    0.032
p + 3sp         5.538    10.384   15.270   20.166   24.934
p − 3sp         5.300    10.146   14.961   19.752   24.743
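A minimal sketch (the 20x5 array layout and function name are ours) of how the Table 2 statistics and the ±3sp band can be reproduced from the measurement packs:

```python
import numpy as np

def pack_statistics(packs):
    """packs: 20x5 array of recalculated pressures (mmHg), one column per
    nominal level (5, 10, 15, 20, 25 mmHg). Returns the mean, the sample
    standard deviation (assumed) and the mean ± 3*sp band of Table 2."""
    mean = packs.mean(axis=0)
    sp = packs.std(axis=0, ddof=1)
    return mean, sp, mean + 3 * sp, mean - 3 * sp

# Linearity check against the expected levels (cf. correlation 0.98-1):
levels = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
# r = np.corrcoef(levels, packs.mean(axis=0))[0, 1]
```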
The dependence of the measured values on the expected value is linear, with correlation coefficients from 0.98 to 1. The total error of the measurement is given by the sum of the maximum errors of the sensor ($\varepsilon_s$) and of the A/D converter ($\varepsilon_c$) [7]:

$\varepsilon = \varepsilon_s + \varepsilon_c = 0.375 + 0.035 = 0.41$ mmHg
IV. CONCLUSION The design of measuring devices for medical applications must satisfy the requirements of safety and reliable use. One of these requirements is the sterilization of all parts which come into touch with body liquids (urine). This condition had a fundamental effect on the sensor selection. This measurement has the character of a pilot measurement for acquiring the basic parameters of the sensor and of the elements of the measuring chain.
The measurements show that the first problem is the fixation of the saline bag to the container. One possibility is fixation along the bag border. To decrease the total error of the measurement, a stand-alone 16-bit A/D converter can be used; the total error would in this case equal the sensor error. After application of the proposed changes, additional testing is necessary, including long-term testing of sensor stability.
ACKNOWLEDGMENT This research has been supported by the research project 1/0829/08 VEGA - Correlation of Input Parameters Changes and Thermogram Results in Infrared Thermographic Diagnostic.
REFERENCES
1. Malbrain ML, Cheatham ML, Kirkpatrick A, Sugrue M, Parr M, De Waele J, Balogh Z, Leppäniemi A, Olvera C, Ivatury R, D'Amours S, Wendon J, Hillman K, Johansson K, Kolkman K, Wilmer A (2006) Results from the International Conference of Experts on Intra-abdominal Hypertension and Abdominal Compartment Syndrome. I. Definitions. Intensive Care Med 32(11):1722-1732
2. Malbrain ML, Cheatham ML, Kirkpatrick A, Sugrue M, Parr M, De Waele J, Balogh Z, Leppäniemi A, Olvera C, Ivatury R, D'Amours S, Wendon J, Hillman K, Wilmer A (2007) Results from the International Conference of Experts on Intra-abdominal Hypertension and Abdominal Compartment Syndrome. II. Recommendations. Intensive Care Med 33(6):951-962
3. Malbrain ML, Deeren DH (2006) Effect of bladder volume on measured intravesical pressure: a prospective cohort study. Critical Care 10:R98, http://ccforum.com/content/10/4/R98
4. Efstathiou E, Zaka M, et al. (2005) Intra-abdominal pressure monitoring in septic patients. Intensive Care Medicine 31, Supplement 1(131):S183, Abstract 703
5. Kinball EJ (2007) IAP measurement: Bladder techniques. WCACS, Antwerp
6. Malbrain ML, Cheatham ML, Kirkpatrick A, Sugrue M, De Waele J, Ivatury R (2006) Abdominal compartment syndrome: it's time to pay attention! Intensive Care Medicine 32(11):1912-1914
7. Kozlíková K (2003) Základy spracovania biomedicínskych meraní I. Askepios, Bratislava, ISBN 80-7167-064-2
8. Ivatury R, Cheatham M, Malbrain M, Sugrue M: Abdominal Compartment Syndrome. Landes Biosciences, ISBN 978-1-58706-196-7
9. Tóth T (2009) Návrh zariadenia na meranie intra-abdominálneho tlaku. Doctoral dissertation, Košice
10. Tóth T, et al. (2009) Meranie intra-abdominálneho tlaku. In: Automatizácia a riadenie v teórii a praxi, ARTEP 2009: Workshop odborníkov z univerzít, vysokých škôl a praxe v oblasti automatizácie a riadenia: Zborník príspevkov: 4.3.-6.3.2009, Stará Lesná, SR. TU Košice, pp. 68-1–68-7, ISBN 978-80-553-0146-4
Author: Teodor Tóth
Institute: Technical University of Košice, Faculty of Mechanical Engineering, Department of Biomedical Engineering, Automation and Measurement
Street: Letná 9
City: Košice
Country: Slovakia
Email: [email protected]
Evolution in Bladder Pressure Measuring Implants Developed at K.U.Leuven
P. Jourand1, J. Coosemans1,2 and R. Puers1
1 Katholieke Universiteit Leuven, Departement Elektrotechniek, ESAT-MICAS, Leuven, Belgium
2 now with Zenso, Heverlee, Belgium
Abstract— Bladder pressure monitoring devices have been a topic of great interest for the past two decades. Three devices developed at ESAT-MICAS in this time are reviewed, showing the evolution of these devices. Two of these devices are diagnostic tools small enough to be inserted into the bladder cavity through minimal invasive cystoscopy. The third device is a long term bladder pressure monitoring implant that, if used to drive an artificial sphincter muscle, forms a urological pacemaker that could diminish or rule out urinary incontinence. After a summary of the three devices, recent results are presented from one of the diagnostic tools. Keywords— Bladder pressure, Capacitive sensor, Battery operated, Silicone embedding
Fig. 1: A long term bladder pressure monitoring device [5] developed and produced at ESAT-MICAS
I. INTRODUCTION The study of urodynamics allows for the direct assessment of possible lower urinary tract dysfunctions [1]. After recording micturition patterns and performing a free flow study, a cystometry is often needed to obtain a correct diagnosis or a visual inspection of the bladder. The standard procedure (which is considered minimally invasive) for a urological investigation is to introduce a small sized catheter directly into the bladder cavity through the natural opening. This allows the study of pressure variations under filling and voiding conditions. If a visual inspection of the bladder is needed, a cystoscope is used instead of a catheter. The catheter or cystoscope remains in place during the procedure, which introduces discomfort, pain and an elevated risk of infections for the patient. Furthermore, these clinical investigations are far from "normal life" conditions since the patient has to remain in an uncomfortable position in the inspection room. The only viable solution to overcome the discomfort and to improve the quality of the recorded signals is the use of a wireless embedded device that either communicates through telemetry [2, 3], or logs the data [4]. The typical technical specifications recommended for the recording of bladder pressure are [1]: • ±1 cmH2O or ±∼0.98 mbar pressure resolution. • Ranges of 0−250 cmH2O or ∼0−245 mbar (relative to atmospheric pressure). • Measurement frequency of 10 Hz.
II. DEVELOPMENTS IN UROLOGICAL TOOLS At the ESAT-MICAS labs of the K.U.Leuven, investigations on bladder pressure measurements have been ongoing since the mid 80s [2, 3]. Two approaches are taken in this research, depending on the term of implantation. Diagnostic tools are used to investigate pathologies under normal life conditions for a short term. For obvious reasons, the use of such tools requires a non- or minimally invasive procedure. On the other hand, long term implants intend to realize the ultimate dream of the urological pacemaker, where an invasive procedure can be justified. A. Long term implantation: towards a urologic pacemaker A proper sensing of bladder pressure is the first step in the idea of a full bladder control system, introduced in 1987 [6]. An autonomous and reliable bladder pressure measurement can ensure adequate stimulation of the detrusor and bladder sphincter muscles. In this way, the muscles are not overstimulated and the lifetime of the "urological pacemaker" is prolonged. This idea can even be extended one step further by adding a bidirectional telemetric link: • The device signalling the patient when the pressure buildup has reached a critical point, alerting the need to void. • The patient instructing the urological pacemaker when he or she is ready to commence voiding, stopping the stimulation of the sphincter muscles.
An invasive approach was chosen for the development of such a long term bladder pressure monitoring system [5], which is depicted in Fig. 1. Bladder pressure is indirectly measured by placing a pressure transducer (Fig. 1 right) on the outside of the bladder wall inside the abdomen. An inductive link (Fig. 1 left) is used for both powering [7, 8] and communication [9], and is placed right beneath the skin to get a good coupling with the external coil. Both parts are interconnected with flexible tracks. The implant coil has an outer diameter of 23 mm and is 1.5 mm thick. All electronics needed for communication, processing of the results and power conversion are placed within this coil on a single sided flex. A combination of Pulse Position Modulation (PPM) and Binary Phase Shift Keying (BPSK) is used to reduce the ON time of the load modulation for the downlink communication. A PIC16F88-ML from Microchip is used to digitise the pressure readings and for communication. By using an external clock of 132 kHz, extracted from the RF carrier frequency instead of the internal 4 MHz clock, the current drain of the device drops from 0.61 mA at 2 V to 0.20 mA at 1.8 V. The device is embedded with Nusil Med4210, a biomedical grade polydimethylsiloxane (PDMS) designed for encapsulating medical devices. The PDMS is poured after mixing the two components. Any entrapped air is removed by exposing the device to a vacuum, followed by a curing for 4 hours at 60 °C. While this inductively powered device introduces restrictions on patient movement, should the bidirectional communication link be used to drive an artificial sphincter, a urological pacemaker with unlimited (power-wise) lifetime would be created. B. Short term: diagnostic tools Diagnostic tools require non- or minimally invasive procedures. A completely non-invasive procedure focuses on the externally applied pressure required to interrupt the urinary flow. Using a penile cuff, the internal bladder pressure is assessed by the applied pressure on the cuff. The results are fair but yield limited information. Furthermore, the procedure restricts patient movement and is considered slightly uncomfortable [10, 11]. A minimally invasive approach is reported in [12], allowing some movement to the patient, yet the catheter tube of this device remains in place during the procedure, elevating both risk of infections and discomfort. Creating a diagnostic tool while placing all electronics inside the bladder cavity is the most challenging method, yet the only way to create a truly imperceptible bladder pressure measurement system. Research on this method started as early as 1984 [2] and has been ongoing ever since [4, 13, 14].
Fig. 2: A short term bladder pressure monitoring device developed and produced at ESAT-MICAS
The following approach is used: the system is introduced into the bladder cavity by inserting it through a cystoscope, rendering the procedure minimally invasive. Powering such devices can either be achieved by wireless transfer [7, 8] or by incorporating batteries. Data must either be logged on a memory module and read out after retrieval, or transmitted wirelessly. Using such a cystoscope as an introduction tool extremely restricts the size of the application. The diameters of catheters and cystoscopes used in urology [15] vary depending on both procedure and patient, using the French scale to indicate the size. Campbell and Walsh's bible on urology [16] suggests F8 to F12 and F16 to F25 cystoscopes, for paediatric and adult cystometry respectively. Taking into account that the internal diameter has typical values of 85-90% of the outer diameter, using an F20 cystoscope limits the diameter of the bladder pressure device to 5 mm. Two devices were developed at ESAT-MICAS over the years: the first was developed in 1984 and is depicted in Fig. 2. The device measured pressure values using a resistive Honeywell-Philips pressure sensor, at a sampling rate of 10 Hz. The results were transmitted using a small inductor. The complete hybrid system operated on a Leclanché SR33 5 mAh mercury battery with a nominal voltage of 1.45 V, a diameter of 3.3 mm and a length of 3.4 mm. The complete structure measured 4 mm in diameter by 40 mm in length. Although lifetime was increased significantly by switching the pressure sensor, it was still rather limited. Recent technological achievements with capacitive pressure sensors enabled the creation of the second device, depicted in Fig. 3. The device is built up with off-the-shelf discrete components placed on a Kapton foil and embedded in a protective silicone encapsulation. This results in a low cost flexible bladder pressure measurement system. Because the device is battery operated and it logs and stores the pressure data on an EEPROM module, it is imperceptible for the patient, since no external devices are mandatory during recording. The device is powered by a BR316 lithium cell with a capacity of 13 mAh. Its power can be switched off magnetically with a reed switch to increase shelf-life after assembly. The E1.3N
capacitive pressure sensor from MicroFab was chosen for its operating range (0.5 to 1.3 bar absolute pressure) and size. Within normal bladder pressures (1 to 1.3 bar absolute pressure), the capacitance value varies between 6.0026 and 6.260 pF. These values are digitized to a 12-bit value by an AD7153 capacitance-to-digital converter (CDC) from Analog Devices. The pressure data is reduced to 5 bits by a PIC10F206 microcontroller from Microchip and further combined with 11 bits
of timing data counting the number of unchanged samples. These 16-bit data samples are stored on a 64 kbit EEPROM through I²C communication.

Fig. 3: The latest short term bladder pressure monitoring device developed at ESAT-MICAS and produced by CMST-Gent

The devices were tested with a Druck DPI 600 reference pressure, showing a resolution of 30 mbar with the 5-bit pressure result, while consuming 320 μA on average at 3 V. Having a diameter of 5 mm and a length of 40 mm, the device can be inserted using an F20 cystoscope and can log and store over 24 hours of pressure data on an EEPROM module. The system specifications can be summarized as:
• Sizing 40 mm in length and 5 mm in diameter, allowing insertion through an F20 cystoscope.
• Detecting absolute pressures between 1000 and 1300 mbar. Without a connection to the outside world, the bladder cavity lacks a reference pressure, explaining the need for an absolute pressure sensor.
• Operating on battery power for over 24 hours. The diameter of the battery must not exceed 4 mm.
• Logging pressure data on a 64 kbit EEPROM module.
• Containing an on/off switch for extended shelf life.
• Low-cost implementation and production.
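A small sketch of how such samples could be packed (only the 5-bit + 11-bit split is stated in the text; the exact bit layout and the run-length convention are our assumptions):

```python
def encode(samples):
    """Run-length encode a stream of 5-bit pressure codes (sampled at 2 Hz)
    into 16-bit EEPROM words: 5 pressure bits plus an 11-bit count of
    subsequent unchanged samples (bit layout assumed)."""
    words, i = [], 0
    while i < len(samples):
        j = i + 1
        while j < len(samples) and samples[j] == samples[i] and j - i < 2048:
            j += 1
        assert 0 <= samples[i] < 32
        words.append((samples[i] << 11) | (j - i - 1))   # value, repeats
        i = j
    return words

# Example: an hour at 2 Hz with one pressure change compresses to two words:
# encode([12] * 7000 + [13] * 200)  ->  [(12 << 11) | 1999, ...] etc.
```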
III. MEASUREMENTS AND RESULTS The most recent device was tested in its non-encapsulated form inside a dedicated pressure chamber with a Druck DPI
digital reference pressure.

Fig. 4: Accuracy and resolution test of an unembedded prototype (upper panel: "Range and Accuracy Test of an Unembedded Sample"; lower panel: "Detail of Accuracy Test"; both plot calculated pressure (mbar) versus time (s))

After sensor calibration, the pressure curve P(x) was found to be:

$P(x) = 1.70\cdot10^{-7}\,x^{3} - 0.94\cdot10^{-3}\,x^{2} + 2.20\,x + 470$   (1)

In this equation x is the digital CDC read-out, obtained either in its 12-bit form from serial communication or as its 5-bit read-out from the EEPROM. The first is only available during testing, needing a wired connection which will not be available in the final implementation. Nevertheless, these 12-bit results are used to obtain a clear view of the potential pressure resolution. Using equation (1), the system was put through the following test:
1. The sample was kept at a pressure of ∼1016 mbar for a duration of 100 seconds.
2. Pressure was increased to ∼1300 mbar in 180 seconds.
3. Pressure was decreased to atmospheric pressure (∼1005 mbar) in 70 seconds by opening a valve.
4. Pressure was increased to ∼1036 mbar.
5. Pressure was decreased in steps of 0.1 mbar by moving a membrane within the Druck DPI 600 to accurately assess the resolution of the sample.
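For illustration, a short sketch evaluating calibration curve (1) and the 5-bit windowing used for storage (the bit positions follow the statement that the 5 least significant and the 2 most significant bits are discarded):

```python
def pressure_mbar(x):
    """Calibration curve (1): pressure from the digital CDC read-out x."""
    return 1.70e-7 * x**3 - 0.94e-3 * x**2 + 2.20 * x + 470.0

def window_5bit(x12):
    """Reduce a 12-bit CDC result to the stored 5-bit window by discarding
    the 5 least significant and the 2 most significant bits."""
    return (x12 >> 5) & 0x1F

# One 5-bit step spans 2**5 CDC codes, so the 0.7 mbar single-code
# resolution coarsens to roughly 32 * 0.7 ≈ 22 mbar (cf. the text below).
```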
The result is shown in Fig. 4. During this test, the current drain was monitored and found to be 320 μA. The test reveals a resolution of 0.7 mbar. This resolution, however, is not further exploited, in order to save memory space. Since only a window of five out of the twelve bits is used to indicate the pressure, the resolution of the final device can then be calculated by a multiplication by 2^5, since the 5 least significant bits and the 2 most significant bits will be discarded. This yields a resolution of 22 mbar, which is acceptable for the envisioned tests. An encapsulated sample was also tested, revealing a decrease in sensitivity of ∼40%, resulting in a resolution of 1 mbar and 32 mbar for the 12-bit and 5-bit results respectively.
IV. CONCLUSION Three devices produced at ESAT-MICAS have been reviewed, showing a clear evolution in functionality. One of the devices presents a major step forward towards the idea of a "urological pacemaker": a long term bladder pressure monitoring device with inductive powering and bidirectional communication. The other two devices are diagnostic tools for short term implantation. The first one was developed over 20 years ago. The second takes advantage of capacitive pressure sensors and uses a silicone embedding as a protective encapsulation. Using 5-bit pressure values at a sampling rate of 2 Hz to conserve both power consumption and memory, a pressure resolution of ∼32 mbar is achieved. While this does not yet meet the standards set for urological tools, should the available power and memory be increased, both can be augmented by reprogramming the microcontroller. In such a case, the full 12-bit pressure readings are available, presenting an accuracy of 0.7 mbar. With the current parameters, the device consumes on average 320 μA at 3 V, logging pressure data for a period of over 24 hours.
REFERENCES
1. Schäfer W, Abrams P, Liao L, et al. (2002) Good Urodynamic Practices: Uroflowmetry, Filling Cystometry, and Pressure-Flow Studies. Neurourology and Urodynamics 21:261-274
2. Puers R, Sansen W, Vereecken R (1984) Development Considerations of a Micropower Control Chip and Ultraminiature Hybrid for Bladder Pressure Telemetry. In: Biotelemetry VIII, Dubrovnik, Yugoslavia, pp. 328-332
3. Puers R, Sansen W, Vereecken R (1984) Realisation of a telemetry capsule for cystometry. In: IEEE Frontiers of Engineering and Computing in Health Care 1984, pp. 711-714
4. Coosemans J, Puers R (2005) An Autonomous Bladder Pressure Monitoring System. Sensors and Actuators A: Physical 123-124:155-161
5. Coosemans J (2008) Wireless and Battery-less Medical Monitoring Devices. Katholieke Universiteit Leuven
6. Sansen W, Vereecken R, Puers R, Folens G, Van Nuland T (1987) A closed loop system to control the bladder function. In: Proceedings of the Ninth Annual Conference of the IEEE Engineering in Medicine and Biology Society, Boston, USA, pp. 1149-1150
7. Lenaerts B, Puers R (2007) An inductive power link for a wireless endoscope. Biosensors & Bioelectronics 22:1390-1395
8. Carta R, Tortora G, Thoné J, et al. (2009) Wireless powering for a self-propelled and steerable endoscopic capsule for stomach inspection. Biosensors & Bioelectronics 25:845-851
9. Carta R, Jourand P, Hermans B, et al. (2009) Design and implementation of advanced systems in a flexible-stretchable technology for biomedical applications. Sensors and Actuators A: Physical 156:79-87
10. Blake C, Abrams P (2004) Non invasive techniques for the measurement of isovolumetric bladder pressure. Journal of Urology 171:12-19
11. Harding CK, Robson W, Drinnan MJ, Ramsden PD, Griffiths C, Pickard RS (2006) Variation in Invasive and Noninvasive Measurements of Isovolumetric Bladder Pressure and Categorization of Obstruction According to Bladder Volume. Journal of Urology 176:172-176
12. Tan R, McClure T, Lin CK, et al. (2009) Development of a fully implantable wireless pressure monitoring system. Biomedical Microdevices 11:259-264
13. Wang C-C, Huang C-C, Liou J-S, et al. (2008) A Mini-Invasive Long-Term Bladder Urine Pressure Measurement ASIC and System. IEEE Transactions on Biomedical Circuits and Systems 2:44-49
14. Jourand P, Puers R (2009) An Autonomous, Capacitive Sensor Based and Battery Powered Internal Bladder Pressure Monitoring System. In: Proceedings of the Eurosensors XXIII Conference, Procedia Chemistry 1, Lausanne, Switzerland, pp. 1263-1266
15. ApexMed at http://www.apexmed.eu/ (February 2010)
16. Wein AJ, Kavoussi LR, Novick AC, Partin AW, Peters CA (2009) Campbell-Walsh Urology, 9th Edition. Saunders
ACKNOWLEDGEMENTS This research has been developed in the frame of BIOFLEX, an IWT funded project, contract number IWT040101. Special thanks to Michel De Cooman for producing the flex prints used in the testing of the devices, the Centre for Microsystems and Technology Gent for moulding the devices and Alexander Thomas from ESAT-VISICS for editing the ifmbe LaTeX style-file to handle multiple affiliations.
Author: Philippe Jourand
Institute: Katholieke Universiteit Leuven, ESAT-MICAS
Street: Kasteelpark Arenberg 10
City: B-3000 Leuven
Country: Belgium
Email: [email protected]
Including the effect of the thermal wave in theoretical modeling for radiofrequency ablation
J.A. López Molina1, M.J. Rivera1, M. Trujillo1, V. Romero-García2 and E.J. Berjano3
1 Departamento de Matemática Aplicada, Instituto de Matemática Pura y Aplicada, Universidad Politécnica de Valencia, Valencia, Spain
2 Centro de Tecnologías Físicas: Acústica, Universidad Politécnica de Valencia, Valencia, Spain
3 Departamento de Ingeniería Electrónica, Universidad Politécnica de Valencia, Valencia, Spain
Abstract— In this paper we outline our main findings on the differences between the use of the Bioheat Equation and the Hyperbolic Bioheat Equation in theoretical models for RF ablation. Until now we have worked on the analytical approach to solve both equations; more recently, we have considered numerical models based on the Finite Element Method (FEM). As a first step in using FEM, we conducted a comparative study between the temperature profiles obtained from the analytical solutions and those obtained from FEM. Keywords— Ablation, COMSOL, Finite Element Method, theoretical model, radiofrequency ablation. I. INTRODUCTION
Radiofrequency (RF) heating of biological tissues is currently employed in many surgical and therapeutic procedures such as the elimination of cardiac arrhythmias, the destruction of tumors, the treatment of gastroesophageal reflux disease, and the heating of the cornea for refractive surgery. In order to investigate and develop new RF ablation techniques, besides understanding the complex electrical and thermal phenomena involved in the heating process, numerous theoretical models have been employed [1]. To date, all these models have employed the Bioheat Equation (BE) proposed by Pennes [2], in which the heat conduction term is based on Fourier's theory (i.e. they have employed a parabolic heat transfer equation). It therefore relates the heat flux $\vec{q}$ to the temperature gradient in the following way:

$\vec{q}(\vec{r},t) = -k \nabla T(\vec{r},t)$   (1)

where $k$ is the thermal conductivity and $T(\vec{r},t)$ the temperature at point $\vec{r}$ at time $t$. This approach assumes an infinite thermal energy propagation speed, and although it might be suitable for most RF ablation procedures, it has been suggested that under certain conditions (such as very short heating times), a non-Fourier model should be considered by means of the Hyperbolic Bioheat Equation (HBE), i.e. considering a thermal relaxation time (τ) for the tissue ≠ 0 [3]. It is known that heat is always found to propagate at
a finite speed [4], and in fact Cattaneo [5] and Vernotte [6] simultaneously suggested a modified heat flux model in the form:

$\vec{q}(\vec{r},t+\tau) = -k \nabla T(\vec{r},t)$   (2)

where τ is the thermal relaxation time of the biological tissue. Equation (2) assumes that the effect (heat flux) and the cause (temperature gradient) occur at different times and that the delay between heat flux and temperature gradient is τ. The particular case of τ = 0 obviously corresponds to the BE. In order to study how the temperature profiles could be altered when the HBE is considered in place of the BE, we have conducted different theoretical studies based on one-dimensional analytical models [7-9]. In these models, we solved both the BE and the HBE under different circumstances. Obviously, since the analytical approach does not easily allow considering complex geometries or solving non-linear equations, we have recently been using a complementary approach based on numerical techniques, specifically the Finite Element Method (FEM). In this paper we summarize the main findings of the analytical approach and present new results on the use of COMSOL Multiphysics to solve the HBE in models for RF ablation. II. ANALYTICAL APPROACH
For this approach, we used a very simple model geometry. Briefly, we considered a spherical electrode of radius r0 completely embedded in and in close contact with the biological tissue (see Fig. 1), which had an infinite dimension. This model presented radial symmetry and a one-dimensional approach was possible. Regarding the electrical problem, we always modeled a constant-power protocol, i.e. the source term for the BE and HBE (the Joule heat produced per unit volume of tissue, Q(r,t)) was always:

$Q(r,t) = H(t)\,\dfrac{P\,r_0}{4\pi r^4}$   (3)
where P is the total applied power (W), r0 the electrode radius, and H(t) the Heaviside function. Although this temporal function had not been included in the previous study by Erez and Shitzer [10], it was later crucial to the study of the pulsed protocol in RF ablation [11].
To set the boundary condition at r = r0 we adopted a simplification, assuming the thermal conductivity of the electrode to be much larger than that of the tissue (i.e. assuming that the boundary condition at the electrode-tissue interface is mainly governed by the thermal inertia of the electrode). This obviously models a dry electrode; other thermal boundary conditions should be considered for the case of internally cooled electrodes [13].
Fig. 1 Schematic diagram of the model geometry. A spherical electrode (grey circle) of radius r0 is completely embedded in and in close contact with the biological tissue, which has an infinite dimension. As a result, the model presents radial symmetry and a one-dimensional approach is possible (the spatial variable is r).
The HBE was obtained by combining the energy equation:

\[ -\nabla\cdot\vec{q}(r,t) + Q(r,t) = \rho c\,\frac{\partial T(r,t)}{\partial t} \qquad (4) \]

where ρ is the density and c the specific heat, with the heat transfer model proposed by Özişik and Tzou [12]:

\[ \vec{q}(r,t) + \tau\,\frac{\partial \vec{q}(r,t)}{\partial t} = -k\,\nabla T(r,t) \qquad (5) \]

The result was:

\[ -\Delta T(r,t) + \frac{1}{\alpha}\left(\frac{\partial T(r,t)}{\partial t} + \tau\,\frac{\partial^2 T(r,t)}{\partial t^2}\right) = \frac{1}{k}\left(Q(r,t) + \tau\,\frac{\partial Q(r,t)}{\partial t}\right) \qquad (6) \]

where α is the thermal diffusivity. Finally, we combined (3) and (6) to obtain the HBE:

\[ -\alpha\left(\frac{\partial^2 T(r,t)}{\partial r^2} + \frac{2}{r}\,\frac{\partial T(r,t)}{\partial r}\right) + \frac{\partial T(r,t)}{\partial t} + \tau\,\frac{\partial^2 T(r,t)}{\partial t^2} = \frac{P\,\alpha\,r_0}{4\pi k\,r^4}\,\bigl(H(t) + \tau\,\delta(t)\bigr) \qquad (7) \]

where δ(t) is Dirac's delta function. It is important to emphasize that in all these cases neither the BE nor the HBE considered the blood perfusion term; hence they are only useful to model RF ablation in non-perfused tissue (e.g. cornea) or in those tissues where this term has been suggested to be negligible (e.g. cardiac tissue far from large vessels).
III. NUMERICAL APPROACH

The majority of real heat transfer problems involve complex geometries, are non-linear, or have initial and boundary conditions that force the use of numerical methods; this is certainly true in RF ablation. Widespread numerical methods for this kind of problem are the Finite Element Method (FEM) and the Finite Difference Method. Abundant software is available for building models, solving them by these methods, and post-processing the results. We chose COMSOL Multiphysics (Burlington, MA, USA), which has been broadly employed in the study of RF ablation of biological tissues; however, all of those previous studies considered the BE. In this respect, our recent objective has focused on validating COMSOL Multiphysics for using the HBE to obtain the temperature distribution during RF ablation. This issue is especially important considering the cuspidal-type singularities found in the analytical solutions of the HBE, which materialize as a temperature peak traveling through the medium at a finite speed [7]. In other words, it is necessary to know whether this behavior will be accurately reproduced by numerical methods in general, and by COMSOL in particular. For this reason, we built with COMSOL the same one-dimensional model previously solved by analytical methods and then obtained the numerical solution; the idea was to validate COMSOL by comparing the numerical and analytical solutions. We used COMSOL Multiphysics version 3.2b, which can model and solve virtually any physical phenomenon that can be described by Partial Differential Equations (PDE) using the FEM; COMSOL offers several modes to solve a wide range of PDEs. In a previous modeling study we tried to validate COMSOL using a 2D model, but we found important differences (up to 13 °C after 60 s) between the analytical and numerical solutions [14]. Now we have chosen a one-dimensional problem. We used the automatic mesh generated by COMSOL and conducted a sensitivity analysis to check that more refined meshes did not produce results closer to the analytical ones. The control parameter used to
conduct this sensitivity analysis was the maximum temperature reached at the electrode-tissue interface after 60 s. In order to compare the analytical and numerical solutions we plotted the temperature progress from each solution. For the analytical solution we used Mathematica 7.0 (Wolfram Research, Champaign, IL, USA); the plots of the numerical solution were made with the post-processing option of COMSOL. In order to plot the results, we particularized the solutions for a specific case. As biological tissue we chose the liver, with the following characteristics: density ρ of 1060 kg/m³, specific heat c of 3600 J/kg·K and thermal conductivity k of 0.502 W/m·K. The electrode characteristics were a density of 21500 kg/m³ and a specific heat of 132 J/kg·K. The initial temperature of the tissue was 37 °C and the applied power was P = 1 W. Moreover, we included the blood perfusion term in both the BE and the HBE. These numerical solutions were compared to those obtained analytically (data not yet published).
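As an illustration of how such a numerical solution can be set up, the following minimal sketch solves Eq. (7) by explicit finite differences. It is not the authors' COMSOL model: the electrode radius, the grid, the boundary conditions and the omission of the τ·δ(t) source term are simplifying assumptions.

import numpy as np

# Liver properties and heating parameters taken from the text above.
rho, c, k = 1060.0, 3600.0, 0.502      # density, specific heat, conductivity
alpha = k / (rho * c)                  # thermal diffusivity (m^2/s)
tau = 16.0                             # thermal relaxation time (s)
P, r0 = 1.0, 1.5e-3                    # applied power (W); assumed electrode radius (m)

nr, dt, t_end = 400, 1e-3, 60.0
r = np.linspace(r0, 50.0 * r0, nr)
dr = r[1] - r[0]
Q = P * r0 / (4.0 * np.pi * r**4)      # Joule heat source of Eq. (3), H(t) = 1 for t > 0

T = np.full(nr, 37.0)                  # temperature at step n
T_old = T.copy()                       # temperature at step n-1 (needed for tau*T_tt)

for n in range(int(t_end / dt)):
    lap = np.zeros(nr)
    lap[1:-1] = ((T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
                 + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2.0 * dr))
    rhs = alpha * lap + (alpha / k) * Q            # tau*T_tt + T_t = rhs (Eq. 7)
    a = tau / dt**2 + 1.0 / (2.0 * dt)             # centered-in-time update
    T_new = (rhs + (2.0 * tau / dt**2) * T
             + (1.0 / (2.0 * dt) - tau / dt**2) * T_old) / a
    T_new[0] = T_new[1]                            # zero-flux at the electrode (simplified)
    T_new[-1] = 37.0                               # far field held at body temperature
    T_old, T = T, T_new

print("T at r = 2*r0 after 60 s: %.2f degC" % T[np.argmin(abs(r - 2.0 * r0))])

The centered-in-time update treats the wave-like term τ·∂²T/∂t² explicitly; the grid and time step above are chosen well inside the stability limit of the scheme.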
IV. RESULTS AND DISCUSSION

Regarding the analytical solutions, we found, from a mathematical point of view, that the HBE solution shows cuspidal-type singularities in the form of a temperature peak traveling through the medium at finite speed (see Fig. 2). This peak arises at the electrode surface and clearly reflects the wave nature of the thermal problem. In [11] we provided an explanation of this behavior based on the interaction of forward and reverse thermal waves.

Fig. 2 Dimensionless temperature progress during RF heating of the biological tissue at three normalized locations: on the electrode surface (ρ = 1), and at ρ = 1.7 and 2.7. The thermal relaxation time of the biological tissue was 16 s. Two solutions are shown: BE (dashed line) and HBE (solid line).

At the beginning of heating (i.e. when the considered time was comparable to or shorter than the thermal relaxation time), the BE provided temperature values lower than those provided by the HBE. In general, the speed of temperature change in the case of the HBE was slower than for the BE. This can be explained by the fact that, when using the HBE, a period of time is needed for heat to travel to a particular location inside the tissue. When these conclusions were particularized for specific tissues, once more the differences between the BE and HBE temperature profiles were greater for shorter times and shorter distances. For this reason, our results suggested that the HBE should be considered in the case of RF heating of the cornea (heating time 0.6 s) and for short-time ablation in cardiac tissue (less than 30 s) [8]. When the BE and HBE were analytically solved for a pulsed application of RF power, we found three typical waveforms for the temperature progress, depending on the relation between the duration of the RF pulse and λ(ρ − 1), λ being the dimensionless thermal relaxation time and ρ the dimensionless position. In the HBE solution we also observed that the temperature at any location is the result of the overlapping of different heat sources with different delays (each heat source being produced by an RF pulse of limitless duration).
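The finite propagation speed of that peak can be made concrete: for the Cattaneo–Vernotte model the thermal wave travels at √(α/τ). A quick illustrative check with the liver properties used in this paper, and τ = 16 s as in Fig. 2:

import math

rho, c, k = 1060.0, 3600.0, 0.502     # liver properties listed in Section III
tau = 16.0                            # thermal relaxation time (s)
alpha = k / (rho * c)
v = math.sqrt(alpha / tau)            # propagation speed of the thermal wave
print("thermal wave speed = %.2e m/s" % v)   # about 9e-05 m/s, i.e. ~0.09 mm/s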
Fig. 3 Progress of the dimensionless temperature of the HBE for three conditions: (A) the duration of the RF pulse is greater than λ(ρ − 1); (B) the RF pulse is shorter than λ(ρ − 1); (C) the transitional case, where the duration of the RF pulse is equal to λ(ρ − 1). The solid lines correspond to the temperature from the HBE and the dashed lines to the temperature from the BE.
Regarding the numerical results, Figure 4 shows the temperature progress for the HBE for two values of the thermal relaxation time (1 and 16 s) and for two blood perfusion conditions (no perfusion, ω = 0, and perfusion with ω = 0.01 1/s). These results were almost coincident with those obtained from the analytical approach, both for the BE and the HBE, which suggests that COMSOL can be a suitable tool to model the heating of biological tissues using the BE and the HBE. Future work will implement theoretical models based on FEM (COMSOL) with more realistic geometries.
ACKNOWLEDGMENT

This work received financial support from the Spanish "Plan Nacional de Investigación Científica, Desarrollo e Innovación Tecnológica del Ministerio de Educación y Ciencia" (TEC2008-01369/TEC) and FEDER Projects MTM2007-64222 and MAT2009-09438.
Fig. 4 Temperature progress obtained from COMSOL during 60 s of RF ablation using the HBE, for two values of the thermal relaxation time (1 and 16 s) and two blood perfusion conditions (no perfusion, ω = 0, and perfusion with ω = 0.01 1/s). The plots correspond to the location r = 2r0.
V. CONCLUSION
In this paper, we have outlined our main findings on the differences between the BE and HBE models for RF ablation. These differences encourage the use of the HBE approach for processes in which large amounts of heat are transferred to a material in very short times, e.g. RF heating of the cornea. Until now we have worked on the analytical modeling of the HBE; more recently we have considered numerical models based on FEM. As the first step in using FEM should be validation, we conducted a comparative study between the temperature profiles obtained from the analytical solutions and those obtained with FEM.
REFERENCES

1. Berjano EJ (2006) Theoretical modeling for radiofrequency ablation: state-of-the-art and challenges for the future. Biomed Eng Online 5:24
2. Pennes HH (1998) Analysis of tissue and arterial blood temperatures in the resting human forearm. 1948. J Appl Physiol 85:5–34
3. Liu J, Chen X, Xu LX (1999) New thermal wave aspects on burn evaluation of skin subjected to instantaneous heating. IEEE Trans Biomed Eng 46:420–428
4. Hader MA, Al-Nimr MA, Abu Nabah BA (2002) The dual-phase-lag heat conduction model in thin slabs under a fluctuating volumetric thermal disturbance. Int J Thermophysics 23:1669–1680
5. Cattaneo C (1958) Sur une forme de l'équation de la chaleur éliminant le paradoxe d'une propagation instantanée. Comptes Rendus 247:431–433
6. Vernotte P (1958) Les paradoxes de la théorie continue de l'équation de la chaleur. Comptes Rendus 246:3154–3155
7. López-Molina JA, Rivera MJ, Trujillo M et al (2008) Effect of the thermal wave in radiofrequency ablation modeling: an analytical study. Phys Med Biol 53:1447–1462
8. López-Molina JA, Rivera MJ, Trujillo M et al (2008) Assessment of hyperbolic heat transfer equation in theoretical modeling for radiofrequency heating techniques. Open Biomed Eng J 2:22–27
9. Tung MM, Trujillo M, López-Molina JA et al (2009) Modeling the heating of biological tissue based on the hyperbolic heat transfer equation. Mathematical and Computer Modelling 50:665–672
10. Erez A, Shitzer A (1980) Controlled destruction and temperature distributions in biological tissues subjected to monoactive electrocoagulation. J Biomech Eng 102:42–49
11. López Molina JA, Rivera MJ, Trujillo M et al (2009) Thermal modeling for pulsed radiofrequency ablation: analytical study based on hyperbolic heat conduction. Med Phys 36:1112–1119
12. Özişik MN, Tzou DY (1994) On the wave theory in heat conduction. ASME J Heat Transfer 116:526–535
13. Rivera MJ, Molina JA, Trujillo M et al (2009) Theoretical modeling of RF ablation with internally cooled electrodes: comparative study of different thermal boundary conditions at the electrode-tissue interface. Math Biosci Eng 6:611–627
14. Romero-García V, Trujillo M, Rivera MJ et al (2009) Hyperbolic heat transfer equation for radiofrequency heating: comparison between analytical and COMSOL solutions. Proceedings of the COMSOL Conference 2009, Milan

Author: Enrique J Berjano
Institute: Departamento de Ingeniería Electrónica (7F)
Street: Camino de Vera s/n
City: Valencia 46022
Country: Spain
Email: [email protected]
Textile Integrated Monitoring System for Breathing Rhythm of Infants

H. De Clercq 1, P. Jourand 1 and R. Puers 1

1 Katholieke Universiteit Leuven, ESAT-MICAS, Kasteelpark Arenberg 10, 3001 Heverlee, Belgium
Abstract— Monitoring the breathing rhythm of infants during sleep can be life saving, but today most monitoring systems lack patient comfort. In this paper an innovative biomedical monitoring system with textile-integrated sensors is developed and tested. Monitoring breathing activity is used as a case study, yet the platform is extendable through an architecture that can contain up to twenty modular sensor channels, divided over several sensor islands; it is therefore useful for all kinds of (biomedical) applications. Flexible carriers for the electronic circuit lead to better textile integration and more wearing comfort. Quantification of breathing rhythm and volume is performed by accelerometers: the breathing signals are calculated from the change in the angle of the gravitation vector in the coordinate systems of the accelerometers caused by movement of the abdomen. Differential use of two accelerometers makes this measurement insensitive to movement and posture. The comparison of these signals with a spirometer yields promising results.

Keywords— Textile integration, Breathing measurement, Accelerometers, Infant monitoring, Home monitoring
I. INTRODUCTION

In the era of ubiquitous miniature intelligent systems, an ever-increasing amount of portable electronics is carried about by people. Textile has the potential to integrate most of these voluminous devices into one coherent wearable system. For biomedical applications in particular, measurement devices can "seamlessly" be integrated into textile to enhance both patient comfort and ease of use in everyday monitoring. Special precautions need to be taken during the design of such systems, the prerequisite being of course absolute patient safety. This requirement includes guaranteed electrical, allergen and toxic safety, but also avoiding manual configuration during set-up to reduce human error. Additionally, the patient should not be hampered during his/her normal activities by cables or by rigid or voluminous electronics. Just-in-time processing of signals ought to provide the user and/or physician with the appropriate information at the right moment, using an easily interpretable interface. An emerging market for textile-integrated electronics is the monitoring of vital parameters during sleep. Particularly Sudden Infant Death Syndrome (SIDS), which goes together with deceleration of breathing and low
oxygenation of body tissues, can severely threaten infants' lives. Although deaths due to SIDS have consistently decreased during the past decades thanks to improved knowledge about its causes, an important risk group, as well as a major concern among infants' parents, remains. A low-cost, reliable and easy-to-use system for everyday monitoring was designed to measure breathing activity; after all, continuous monitoring seems to be the most reliable detection technique for the symptoms of SIDS. The whole system is powered by a flexible lithium-polymer battery (3.7 V). Integration with an existing inductive coupling [1] is envisaged as a next step.
II. ACCELEROMETER-BASED ESTIMATION OF RESPIRATORY ACTIVITY
An accurate measure of breathing rate and, if possible, breathing volume is required e.g. for assessment of the symptoms of SIDS in infants. In a clinical environment, this respiratory activity is mostly measured with a spirometer. This device consists of a mouthpiece through which the patient has to breathe; inside, a propeller rotates as a function of the air flow. This technique is very precise in both volume and rhythm measurement and is therefore used as reference. A spirometer is however not usable for long-term measurements, nor is it very comfortable for the patient. Hence other monitoring techniques more suitable for home monitoring were introduced, such as impedance variation measurement [2] and respiratory inductive plethysmography (RIP) [3]. Although these methods are more comfortable, they still suffer from disadvantages like skin irritation and short life span. This paper discusses a recent technique with more promising results regarding these issues.

A. Two techniques based upon accelerometers

A technique based upon accelerometers to estimate the breathing waveform was refined from [4] for increased robustness to motion and changes of posture. Dual-axis accelerometers (ADIS16003) were placed in the transversal body plane by bending the flexible substrate 90 degrees up from the abdomen wall (Fig 1) to register only relevant signals, since the third axis would normally be horizontal during sleep. Table 1 contains the most important specifications of these accelerometers.
Fig. 1: The accelerometers mounted on a flexible PCB.

Table 1: Specifications of the ADIS16003 accelerometer

Parameter | Value | Unit
Measure range | ±1.7 | g
Cross sensitivity | 5 | %
Noise density | 110 | μg/√Hz rms
Resolution | 10 | bits
Non-linearity | < 2.5 | %
Offset | 140 | LSB
Power | 5.55 | mW (typ. @ 3.7 V)

The first technique is based upon inclination variations when the accelerometers are placed more laterally (about 8 cm left/right from the umbilicus), since the abdomen, during inhalation, expands outwards, slightly tilting both y-axes inwards (Fig 2). The inclination of the accelerometers was derived from the orientation of the gravitation vector in the xy-plane of the accelerometer. One accelerometer was placed on each side of the umbilicus, resulting in a common-mode signal (sum of the sensed inclination angles) because of the patient's posture, and a differential signal (difference of the sensed inclination angles) because of the patient's breathing movements. The angle is derived with:

\[ \mathrm{angle}(t) = \frac{180}{\pi}\left[\arctan\!\left(\frac{y(t)}{x(t)}\right) - \frac{\pi}{2}\bigl(\operatorname{sign}(x(t)) - \operatorname{sign}(|x(t)|) + 1\bigr)\right] \]

Fig. 2: Inclination variation technique for registration of respiration using two accelerometers (sensor axes x1, y1 and x2, y2; inclination angles α1, α2 and difference Δα with respect to the gravitation vector Y = g).

The second technique is formulated using the rhythmical motion of the abdomen wall due to breathing. This causes the accelerometer to sense magnitude variations, added to the dominant DC signal due to gravitation. The magnitude was derived as the modulus of the sensed accelerations in the x- and y-direction:

\[ \mathrm{magnitude}(t) = \sqrt{x(t)^2 + y(t)^2} \]

The same placement of accelerometers was used, summing both magnitude variations to increase sensitivity.
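A compact sketch of both estimates follows (the variable names and the pairing of the two sensors are assumptions; this is not the authors' firmware):

import numpy as np

def angle_deg(x, y):
    # Orientation of the gravitation vector in the sensor's xy-plane; a
    # four-quadrant arctangent equivalent to the sign-corrected formula above.
    return np.degrees(np.arctan2(y, x))

def breathing_signals(x1, y1, x2, y2):
    # x1,y1 / x2,y2: in-plane accelerations (in g) of the left/right sensor.
    a1, a2 = angle_deg(x1, y1), angle_deg(x2, y2)
    posture = a1 + a2                  # common-mode signal: posture
    breathing = a1 - a2                # differential signal: breathing movement
    magnitude = np.hypot(x1, y1) + np.hypot(x2, y2)   # summed magnitude variations
    return posture, breathing, magnitude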
B. Results & Discussion

Two techniques were tested and compared to a Jaeger PulmAssist spirometer, used as a reference for the breathing measurements, evaluating their ability to extract the breathing rhythm and amplitude in an accurate and reliable way. Results for both assumptions, together with the spirometer signal, are shown (Fig 3).

Fig. 3: A comparison of 3 methods for measuring respiration activity (inclination in degrees, magnitude in g, and the spirometer signal versus time), including detection of breathing cycles.
Since during relaxed breathing respiration is mostly coordinated by the diaphragm, measurements are performed on the abdomen. The necessary information is extracted to relate symptoms of SIDS to the patient's posture and movements, for it is well known that a prone sleep position leads to an increased risk for SIDS [5]. To extract the breathing rhythm and amplitude, the robust yet simple respiration rate estimation algorithm proposed by Lukocius [6] was adopted. A differential moving window scans the signal for changes in slope, after which the maxima and minima in between are extracted (Fig 4). A threshold determines whether a detected peak originates from respiration, and is updated adaptively based upon the information of the last 8 respiration cycles. From the respective location and amplitude of the extrema, breathing rate and
volume were calculated and plotted in real-time. The same processing sequence was applied to all respiration signals.
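A sketch of this processing chain is given below (the window length and the adaptive-threshold factor are illustrative assumptions, not values from [6]):

import numpy as np

def detect_breaths(sig, fs, win_s=0.5):
    # Differential moving window scans the signal for changes in slope; local
    # maxima in between are extracted and tested against an adaptive threshold
    # updated from the last 8 accepted respiration cycles.
    w = max(1, int(win_s * fs))
    slope = sig[w:] - sig[:-w]
    peaks, amps, thr = [], [], 0.0
    for i in range(1, len(slope)):
        if slope[i - 1] > 0.0 >= slope[i]:          # slope sign change: candidate maximum
            j = i + int(np.argmax(sig[i:i + w]))    # refine the peak position
            amp = sig[j] - np.min(sig[max(0, j - 2 * w):j + 1])
            if amp > thr:                           # accepted as a respiratory peak
                peaks.append(j)
                amps.append(amp)
                thr = 0.4 * float(np.mean(amps[-8:]))  # assumed update rule
    rate_bpm = 60.0 * fs / float(np.mean(np.diff(peaks))) if len(peaks) > 1 else 0.0
    return np.array(peaks), rate_bpm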
Fig. 4: Peak detection algorithm proposed by Lukocius [6].

Both techniques for the accelerometer were tested and compared vis-à-vis the spirometer, during two-minute tests where the patient was asked to vary his/her breathing amplitude and frequency in a random way. In supine lying position, the mean respiration rate error for the inclination technique was 3.8% (Fig 5), while the magnitude assumption showed an error of 6.8% (Fig 6). On the other hand, the respective breathing amplitudes were compared in a scatter plot. Here, the R²-value of the correlation again shows better results for the first assumption (0.85, Fig 7, compared to 0.46 for the second assumption, Fig 8). With the patient lying on the side, the accuracy of rate detection and correlation of the amplitudes remained quite constant for the first assumption, but declined significantly for the second.

Fig. 5: Comparison of the respiration frequency between spirometer (red) and angle method (blue); mean relative error = 3.8%.

Fig. 6: Comparison of the respiration frequency between spirometer (red) and magnitude method (blue); mean relative error = 6.8%.

Fig. 7: Correlation between the amplitude of the angle method and the amplitude of the spirometer (R²-value = 0.85).

Fig. 8: Correlation between the amplitude of the magnitude method and the amplitude of the spirometer (R²-value = 0.46).
During change of lying position, the peak detection algorithm showed a robust detection, although movements during transition were not fully canceled. Results are shown in Fig. 9, where the patient’s posture (delayed due to processing) was added.
Fig. 9: Respiration and posture signals derived from the accelerometers. The patient changes sides after 20 and 40 seconds.
III. A POLYVALENT ARCHITECTURE

Because of the wide range of applications of textile-integrated electronics [7], the system consists of modular sensor islands, which can easily be exchanged for other islands according to the application (Fig 10).
Fig. 10: The system architecture, which can process up to 20 sensor signals, divided over 5 islands, and communicate wirelessly with a processing unit (sensor islands 1-5 connected over SPI, 3.7 V battery, master island, Nordic wireless data transmission, receive island with RS232 link).

A slave island contains up to four sensors, sensor processing electronics and an A/D-converter (MCP3204). The obtained digital signals (twelve-bit quantization) are sent to the master island using SPI. These bits give a resolution of 0.9 mV, which also stipulates the maximum sensitivity of the analog sensor outputs. The master island (Fig 1) contains a microcontroller (PIC16F687) and a wireless transceiver (nRF24L01) that sends the data wirelessly to a receive island preceding the processing unit. The bandwidth of 1000 Hz for each sensor is more than satisfactory for most biomedical signals, but can be improved if necessary by making the system dedicated to one application and converting the processing unit into an ASIC. A real-time visualisation of some measured signals is shown (Fig 11); in this case the signals are an ECG, a respiration signal measured by the accelerometers, and a pulse oximeter signal.
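The quoted 0.9 mV resolution follows directly from twelve-bit quantization of the 3.7 V supply range (assuming the ADC reference equals the battery voltage):

vref, bits = 3.7, 12
lsb = vref / 2**bits
print("LSB = %.2f mV" % (lsb * 1e3))   # 0.90 mV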
Fig. 11: Real-time visualization of measured signals. Detected signal peaks are indicated with vertical lines.
IV. CONCLUSION

Two methods using accelerometers for breathing monitoring are compared. The inclination method gives noticeably better results, especially concerning posture and movement dependency. A comparison is made with a spirometer, resulting in a very small error. The comparison of breathing volume extraction is also promising. Finally, a polyvalent system architecture is described.
REFERENCES
1. J. Coosemans, B. Hermans, and R. Puers, "Integrating wireless ECG monitoring in textiles," Sensors and Actuators A, vol. 130-131, pp. 48–53, 2006.
2. C. S. Poon, Y. C. Chung, T. T. C. Choy, and J. Pang, "Evaluation of two noninvasive techniques for exercise ventilatory measurements," Engineering in Medicine and Biology Society, 1988.
3. M. N. Fiamma, Z. Samara, T. S. P. Baconnier, and C. Straus, "Respiratory inductive plethysmography to assess respiratory variability and complexity in humans," Respiratory Physiology and Neurobiology, vol. 156(2), pp. 234–239, May 2007.
4. P. D. Hung, S. Bonnet, R. Guillemaud, E. Castelli, and P. Yen, "Estimation of respiratory waveform using an accelerometer," in Biomedical Imaging: From Nano to Macro, vol. 5, June 2008, pp. 1493–1496.
5. M. Willinger, H. J. Hoffman, and R. B. Hartford, "Infant sleep position and risk for sudden infant death syndrome," National Institutes of Health, vol. 22, p. 42, November 1993.
6. R. Lukocius, J. A. Virbalis, J. Daunoras, and A. Vegys, "The respiration rate estimation method based on the signal maximums and minimums detection and the signal amplitude evaluation," Electronics and Electrical Engineering, vol. 8, pp. 51–54, 2008.
7. R. Carta, P. Jourand, B. Hermans, J. Thone, D. Brosteaux, T. Vervust, F. Bossuyt, F. Axisa, J. Vanfleteren, and R. Puers, "Design and implementation of advanced systems in a flexible-stretchable technology for biomedical applications," Sensors and Actuators A, vol. 156, pp. 79–87, November 2006.
Email: [email protected]
Comparison between VHDL-AMS and PSPICE Modeling of Ultrasound Measurement System for Biological Medium

N. Aouzale 1, A. Chitnalah 1, H. Jakjoud 1, D. Kourtiche 2, M. Nadi 2

1 L.S.E.T, Université CADI AYYAD FST BP 549, 40000 Gueliz Marrakech Maroc
2 L.I.E.N, Nancy Université, Faculté des Sciences et techniques, BP 70239, 54506 Vandœuvre, France
Abstract— Piezoelectric materials are commonly used in many applications. Different approaches have been developed to predict piezoelectric transducer behaviour. Among them, the resolution of the piezoelectric equations by numerical methods is commonly used. Another method is based on equivalent electrical circuit simulation with Pspice or VHDL-AMS tools. This paper proposes a comparison between VHDL-AMS and Pspice models for a pulse-echo ultrasonic system. The simulation is based on the Redwood model and its parameters are deduced from the transducer's acoustical characteristics. The electrical behaviour of the proposed model is in very good agreement with the real system behaviour.

Keywords— Ultrasound, VHDL-AMS, Modeling, Redwood, Measurement

I. INTRODUCTION
Ultrasound systems are widely used, with many applications in engineering, medicine, biology, and other areas [1]. Modelling and simulation of such systems is a difficult task due to the presence of multi-physics effects and their interactions. The VHDL-AMS language is appropriate for the design methodology of ultrasonic systems because it can take into account the entire transducer environment, including microelectronic stimulation and acoustic load. The use of behavioural models in simulation simplifies the physics and allows interactions between different domains to be explored in a reasonable amount of time. This paper presents a method for multi-domain behavioural modelling of an ultrasound measurement system. We validated this methodology through a case study in linear ultrasound measurement, in which the ultrasound transducer model takes great importance. To perform the implementation, a virtual-prototyping environment, the ADVance® MS (ADMS) tool from Mentor Graphics, is used. This environment provides the multi-level model integration required for real system design and analysis. The obtained results are compared to those obtained with Pspice simulation and by measurement.
II. TRANSDUCER MODELLING
Many electrical equivalent circuits have been devised to represent the ultrasonic transducer. The model of Mason represents the transducer in the form of an electrical equivalent circuit where the transducer's acoustic port is represented by localized elements connected to the electrical port by an ideal transformer. This model presents some disadvantages, such as the negative capacitor, and it cannot model multilayered transducers. Redwood improved this electromechanical model by incorporating a transmission line, making it possible to extract useful information on the temporal response of the piezoelectric component [2, 3]. Figure 1 presents the transducer and its Redwood transmission-line version of Mason's equivalent circuit. The parameters appearing in this model are as follows: Q1 and Q2 are the acoustic particle velocities at the front and the back faces of the disc, and F1 and F2 are the acoustic forces at the transducer faces. T is an ideal electro-acoustic transformer with ratio h33·C0, where h33 is a piezoelectric stiffness constant of the ceramic and C0 the clamped capacitance. The Redwood model is divided in two parts. The first is the electrical port, composed of the capacitors C0 and –C0 that represent the motional capacitance effect; the electrical port is connected to a resistance R and a voltage source V. The second part is composed of the two acoustic ports. The piezoceramic layer is assimilated to a propagation line characterized by its thickness e and the acoustic impedance of the ceramic Zt = ρ·c0·A, with c0 the sound velocity, ρ the material density and A the area of the ceramic. One branch of the piezoceramic layer is in contact with the back medium (Zback) and the other with the propagation medium (Zfront). Transducer modelling with the VHDL-AMS language is based on writing the equations of the different elements of the Redwood scheme. The parameters appearing in this model are defined below: v1 and v2 (m/s) are the acoustic particle velocities at the front and the back faces of the disk, and the parameter k is
the wave number for the piezoelectric ceramic, k = ω/vD, where ω is the angular frequency (rad/s) and vD (m/s) is the speed of compressional waves in the piezoelectric plate, given by $v^D = \sqrt{C_{33}^D/\rho}$ in terms of the elastic constant of the piezoelectric ceramic at constant electric flux density, C33D (N/m²), and the density of the ceramic, ρ (kg/m³). h33 (V/m) is the piezoelectric stiffness constant of the ceramic, and C0 is the clamped capacitance of the plate, given by $C_0 = A/(\beta_{33}^s d)$, where A (m²) is the area of the ceramic, β33s (m/F) is the dielectric impermeability of the ceramic at constant strain, and d (m) is the ceramic thickness. The quantity Zt = A·ρ·vD (Rayl) is the plane-wave acoustic impedance of the piezoelectric ceramic, while ZB (Rayl) is the corresponding acoustic impedance of the backing, which is a function of frequency [4, 5].
Fig. 1 Model (1-D) for the electrical and acoustical parameters of the piezoelectric ceramic and its representation as a three-port system (acoustic ports F1/Q1 and F2/Q2, electrical port V3/I3, thickness d, area A, propagation axis z).

Fig. 2 Mason's equivalent circuit model of the three-port system, with series arms ZA = −j·Zt·tan(kd/2), shunt arm Z = −j·Zt·sin(kd), and transformer ratio Φ = h33·C0.

III. ULTRASOUND PULSE-ECHO SYSTEM

The most common configuration of an ultrasonic system widely used for acoustical measurements is shown in Figure 3.

Fig. 3 Block diagram of the experimental setup (transmitter-receiver device with transducer, propagating medium with a reflector at distance d/2, oscilloscope display).

It involves the generation, propagation and reception of the signal. The system operates in pulse-echo mode. The ultrasonic waves generated by the transducer propagate through the medium, and the received echo is converted by the same transducer to an electrical signal. The ultrasonic transducer is a bi-physical device that transforms an electrical signal into an acoustical wave and vice versa. To obtain a VHDL-AMS description of this pulse-echo measurement system, one must give: i) a description of the ultrasonic transducer using the equivalent circuit of Mason as adapted by Redwood — the mechanical part of the piezoelectric transducer is easily represented using a transmission-line model, and two parameters are sufficient to entirely define it: the impedance and the sound propagation delay through the transducer; ii) a description of the propagation medium by means of a transmission line. This model corresponds to the electric equivalent circuit of Branin [5]. The transmission-line parameters are calculated according to the characteristics of the biological medium.

IV. VHDL-AMS MODELLING OF THE EXPERIMENTAL SETUP
The global scheme with the pulse-echo transducer is implemented with Redwood's model (Fig. 4). The lower part is the transducer in reception mode; it is connected to a load (Cscope, Rscope) which represents the input impedance of the electrical measurement tool. The piezoceramic layer equivalent circuit is shown as a linear propagation diagram. The upper part corresponds to the transducer in emission mode, which is connected to the
electric source. The associated test-bench in the VHDL-AMS language for the experimental setup is presented in Fig 5. The study of the transducers' frequency response is essential to predict the sensitivity of the system for the various analyzed biological mediums. The studied transducers are produced with PZT ceramic of P188 type (Quartz et Silice), whose characteristics are recalled in Table 1.

Table 1 Transducer acoustic characteristics (Type A)

Parameter | Quantity | Value
F0 | Resonance frequency | 2.25 MHz
A | Area | 132.73 mm²
e | Thickness | 1 mm
Zt | Acoustic impedance | 34.9 MRayls
co | Acoustic velocity | 4530 m/s
Co | Capacitance of the ceramic disc | 1109.8 pF
ε33 | Dielectric constant | 650.0
kt | Thickness coupling factor | 0.49
h33 | Piezoelectric constant | 1.49·10⁹
Fig. 4 Measurement cell schema to be implemented in VHDL-AMS (emitter and receiver Redwood models, each with a piezoceramic layer loaded by Zback and coupled through the linear medium; the receiver is loaded by Rscope and Cscope).

The VHDL-AMS code of the measurement cell (Fig. 5) is reproduced below (missing semicolons restored; the duplicated instance label of the receiving transducer is renamed to T2):

ENTITY Measure_cell IS
END Measure_cell;

ARCHITECTURE structure OF Measure_cell IS
  TERMINAL n1,n2,n3,n4,n5,n6,n8,n9 : ELECTRICAL;
  TERMINAL Tb,Tb2,Tf,Tf2 : kinematic_v;
  CONSTANT A      : real := 132.73e-3;   -- area (as printed in the original listing)
  CONSTANT e      : real := 1.0e-3;
  CONSTANT Co     : real := 1109.8e-12;
  CONSTANT Va     : real := 4530.0;
  CONSTANT kt     : real := 0.49;
  CONSTANT epsi0  : real := 8.8542e-12;
  CONSTANT epsi33 : real := 650.0;
  CONSTANT ro     : real := 3300.0;
  CONSTANT h      : real := kt*Va*sqrt(ro/(epsi0*epsi33));
  CONSTANT K      : real := h*Co;
  CONSTANT ZT     : real := 34.9e6;
  CONSTANT Zfront : real := 1.5e6;
  CONSTANT Zback  : real := 445.0;
  QUANTITY vinput ACROSS ie THROUGH n1 TO electrical_ground;
BEGIN
  IF now > 0.0 AND now < 222.2e-9 USE   -- -100 V excitation pulse of 0.222 us
    vinput == -100.0;
  ELSE
    vinput == 0.0;
  END USE;

  R1 : entity Resistor(bhv) generic map (50.0) port map (n1, n2);
  T1 : entity Redwood(bhv) generic map (Co, K, A*ZT, e/Va)
       port map (n3, kinematic_v_ground, n4, kinematic_v_ground, n2, electrical_ground);
  Medium : entity linearMedium(bhv)
       generic map (1.5e6, 20e-9, fo, 1500.0, 5.0, 0.9, 0.045, 1.0e-9)
       port map (n3, kinematic_v_ground, n5, kinematic_v_ground);
  back : entity Resiskinematic(bhv) generic map (A*Zback)
       port map (n4, kinematic_v_ground);
  back2 : entity Resiskinematic(bhv) generic map (A*Zback)
       port map (n8, kinematic_v_ground);
  T2 : entity Redwood(bhv) generic map (Co, Kt, A*ZT, e/Va)  -- label renamed from a duplicated T1
       port map (n3, kinematic_v_ground, n8, kinematic_v_ground, n9, electrical_ground);
  RScope : entity Resistor(bhv) generic map (1.0e6) port map (n9, ground);
  Cscope : entity Capacitor(bhv) generic map (13.0e-12)       -- printed as "13.0 e12"; 13 pF assumed
       port map (n9, ground);
END ARCHITECTURE structure;

Fig. 5 VHDL-AMS code of the measurement cell
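The derived constants of the listing can be cross-checked independently; the short Python fragment below recomputes the piezoelectric stiffness constant h and the transformer ratio K = h·Co from the same numerical values (a verification sketch, not part of the original test-bench):

import math

Co, Va, kt = 1109.8e-12, 4530.0, 0.49
epsi0, epsi33, ro = 8.8542e-12, 650.0, 3300.0

h = kt * Va * math.sqrt(ro / (epsi0 * epsi33))   # as defined in the listing
K = h * Co                                       # transformer ratio h33*Co
print("h = %.3e V/m, K = %.3e" % (h, K))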
V. RESULTS
The tools used for simulation are ADMS v3.0.2.1 from Mentor Graphics for VHDL-AMS simulation and OrCAD software for PSPICE simulation. To perform the transducer frequency analysis, we used a VHDL-AMS test-bench where a frequential source is connected to the electrical input. The simulation code used is presented in Fig 5. The simulation results with VHDL-AMS and PSPICE are compared with an in vitro measurement of the transducer. The pulse voltage is −100 V during 0.222 μs, and we consider a 50.0 Ω resistor between the electrical transducer input and the voltage source. The time 0.222 μs corresponds to 1/(2·F0), where F0 is the resonance frequency of the transducer. The study covers the time response and its spectral (frequency) analysis. The results are presented in Figures 6 and 7; the simulated voltage is V3, the electrical transducer port. The simulations show good agreement with the measurement obtained with a real transducer. The peak voltage obtained with VHDL-AMS (−4.20 V) is larger than the one obtained with PSPICE (−1.30 V) and thus nearest to the measured one (−3.85 V). The signal waveform obtained with VHDL-AMS is not disturbed by irregularities, contrary to the PSPICE simulation.

Fig. 6 Comparison between the measured time response and the VHDL-AMS simulation (a) and their Fourier transforms (b).

Fig. 7 Comparison between the measured time response and the PSPICE simulation (a) and their Fourier transforms (b).
VI. CONCLUSIONS
In this paper, based on previous works, an approach to modeling an ultrasonic transducer system is presented. The use of the VHDL-AMS language shows the advantage of combining multiphysical domains. The approach can be readily used in a current electronic design flow to include distributed physics effects in the modeling and simulation process with VHDL-AMS. The transducer is simulated by the Redwood model, while the propagation medium is represented by a transmission line, which supposes plane-wave theory. The model allows further optimisation with respect to electrical matching and transmitted waveform. It could also be extended to include other phenomena, like diffraction and distortion of the acoustic wave propagating in the biological medium under test. Usual medium modelling is based on transmission-line theory. To improve measurement sensitivity, we can easily adjust the transducer acoustic parameters in simulation and thereby determine the best parameters for the transducer design. The transducer response obtained in simulation shows a good correlation with measurement and indicates that the simulation of an ultrasound sensing device, including both electronics and transducers (electromechanical), is possible using VHDL-AMS.
REFERENCES

1. Mason W.P. (1942) Electromechanical transducers and wave filters. 2nd ed., New York: Van Nostrand
2. Redwood M. (1961) Transient performance of a piezoelectric transducer. J Acoust Soc Amer 33:527–536
3. Morris S.A., Hutchens C.G. (1986) Implementation of Mason's model on circuit analysis programs. IEEE Trans Ultrason Ferroelect Freq Contr 33:295–298
4. Leach W. (1994) Controlled-source analogous circuits and SPICE models for piezoelectric transducers. IEEE Trans Ultrason Ferroelect Freq Contr 41:60–66
5. Guelaz R. et al. (2007) Modelling and simulation of ultrasound non linearities measurement for biological mediums. 11th Mediterranean Conference on Medical and Biomedical Engineering and Computing 2007, Springer Berlin Heidelberg, 16:377–380
Author: Djilali Kourtiche
Institute: LIEN, Nancy Université, Faculté des sciences et technologie
Street: BP 70239, Boulevard des Aiguillettes
City: Vandoeuvre, 54506
Country: France
Email: [email protected]
Stimulation Parameter Testing and Verification during Pacing

Martin Augustynek 1, Marek Penhaker 1, Pavel Sazel 1, and David Korpas 2

1 VSB - Technical University of Ostrava / Department of Measurement and Control, Ostrava, Czech Republic
2 Palacký University / Faculty of Medicine, Olomouc, Czech Republic
Abstract— The main object of this work is the measurement and verification of the adjusted values of a cardiostimulator's parameters. For measuring these parameters we used the Impulse 7000D device from the Fluke company. On a dual-chamber pacemaker we measure the pulse width, amplitude and impedance at the pacemaker output.
The CONTAK RENEWAL® TR 2 cardiac resynchronization therapy pacemaker (CRT-P), Model H145 is meant to provide cardiac resynchronization therapy (CRT). Cardiac resynchronization therapy is for the treatment of heart failure (HF) and uses biventricular electrical stimulation to synchronize ventricular contractions.
Keywords— transform, pacemaker, measurement methods, stimulation voltage, detection.
I. INTRODUCTION

The cardiostimulator is an electronic device whose primary function is stimulation of the myocardial muscle by generating electric impulses in patients with sinus node dysfunction or cardiac conduction system dysfunction. Stimulation can be classified from various standpoints: into indirect stimulation (through surrounding tissues) and direct stimulation (performed in the heart cavity); according to the duration of the cardiostimulator application, into temporary (with the stimulator outside the patient's body) or permanent (the stimulator placed under the skin); according to the dependence on the heart action, into asynchronous or synchronous stimulation; and according to the point of stimulation, into single-chamber and dual-chamber stimulation.
Fig. 1 CONTAK RENEWAL® TR 2, Model H145

The ZOOM® LATITUDE™ Programming System Model 3120 Programmer/Recorder/Monitor (PRM) is intended to be used as a complete system to communicate with Guidant or Boston Scientific implantable pulse generators. The software in use controls all communication functions for the pulse generator. For detailed software application instructions, refer to the System Guide for the Guidant or Boston Scientific pulse generator being interrogated.
Table 1 The pacing mode is designated most often by a three-letter NBG code

Stimulated chamber | Sensed chamber | Response mode
A - atrium | A - atrium | T - triggering
V - ventricle | V - ventricle | I - inhibition
D - both | D - both | D - both
O - none | O - none | O - none
II. MATERIALS AND METHODS

The device evaluated is the Contak Renewal TR 2 CRT-P pacemaker (model H145, type DDDR) with attached electrodes. The other components are the ZOOM® LATITUDE™ Programming System Model 3120 PRM and the Impulse 7000D parameter-testing system from the Fluke company.
Fig. 2 The ZOOM® LATITUDE™ Programming System Model 3120
The Impulse 7000DP Defibrillator/Transcutaneous Pacer Analyzer Test Systems are rugged, portable precision test instruments that ensure proper operation and ultimate performance of critical life-support cardiac-resuscitation equipment. The Impulse 7000DP test capabilities encompass the spectrum of worldwide-established pulse shapes, offer AED technology compatibility, and excel in accuracy and standards compliance. Additionally, the Impulse 7000DP incorporates the tests and the extensive range of test loads and measurement algorithms needed to test external transcutaneous pacemakers.
Fig. 3 The Impulse 7000DP Defibrillator/Transcutaneous Pacer Analyzer

A. Impedance Measurement with a Dual-Chamber Pacemaker

For this measurement we used only one ventricular electrode. This electrode was stripped and then connected to the Impulse 7000DP, which was set to its pacemaker testing mode. The following table shows the results of the impedance measurement.

Table 2 The results of the impedance measurement

Settings on the FLUKE [Ω] | Measured [Ω] | Deviation [Ω] | Absolute departure [Ω]
50 | <100 | - | -
100 | 133 | 33 | 6,10
150 | 190 | 40 | 0,90
200 | 236 | 36 | 3,10
250 | 293 | 43 | 3,90
300 | 345 | 45 | 5,90
350 | 398 | 48 | 8,90
400 | 446 | 46 | 6,90
450 | 503 | 53 | 13,90
500 | 538 | 38 | 1,10
550 | 589 | 39 | 0,10
600 | 648 | 48 | 8,90
650 | 701 | 51 | 11,90
700 | 739 | 39 | 0,10
750 | 803 | 53 | 13,90
800 | 852 | 52 | 12,90
850 | 879 | 29 | 10,10
900 | 938 | 38 | 1,10
950 | 971 | 21 | 18,10
1000 | 1006 | 6 | 33,10
1050 | 1085 | 35 | 4,10
1100 | 1164 | 64 | 24,90
1150 | 1215 | 65 | 25,90
1200 | 1256 | 56 | 16,90
1250 | 1256 | 6 | 33,10
1300 | 1315 | 15 | 24,10
1350 | 1380 | 30 | 9,10
1400 | 1433 | 33 | 6,10
1450 | 1511 | 61 | 21,90
1500 | 1511 | 11 | 28,10
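The "absolute departure" column of Table 2 behaves as the absolute difference between each deviation and the mean deviation, whose value also reproduces the 39,1 Ω figure quoted in the conclusions; a short check (our reading of the table, not the authors' stated procedure):

deviations = [33, 40, 36, 43, 45, 48, 46, 53, 38, 39, 48, 51, 39, 53, 52,
              29, 38, 21, 6, 35, 64, 65, 56, 6, 15, 30, 33, 61, 11]
mean_dev = sum(deviations) / len(deviations)
departures = [abs(d - mean_dev) for d in deviations]
print("mean deviation = %.1f Ohm" % mean_dev)                     # 39.1
print("departures[:3] =", ["%.1f" % x for x in departures[:3]])   # 6.1, 0.9, 3.1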
Fig. 4 Deviation of impedance

B. Measurement of Pulse Width and Amplitude

For this measurement we used a Tektronix TPS2014 oscilloscope. To measure the atrial stimulation pulse, the atrial electrode was connected to the oscilloscope probe. We then used the pacemaker programmer Model 3120 to set the pacemaker to AAI mode and to program the individual pacemaker parameters. The real records and values were captured by the oscilloscope.
Table 3 Results of amplitude and pulse width

Programmed width [ms] | Measured width [ms] | Programmed amplitude [V] | Measured amplitude [mV]
2,00 | 2,000 | 5 | 588
1,50 | 1,532 | 5 | 584
1,00 | 1,031 | 5 | 584
0,50 | 530,6 | 5 | 584
0,06 | 0,092 | 5 | 580
2,00 | 2,000 | 3 | 360
1,50 | 1,531 | 3 | 356
1,00 | 1,032 | 3 | 356
0,50 | 0,532 | 3 | 360
0,06 | 0,092 | 3 | 356
2,00 | 2,000 | 1 | 120
1,50 | 1,532 | 1 | 120
1,00 | 1,032 | 1 | 124
0,50 | 0,532 | 1 | 116
0,06 | 0,094 | 1 | 120
Fig. 5 Comparison of measurements on the impedance board and the Impulse 7000DP. Blue points: Impulse 7000DP; green points: impedance board.
III. CONCLUSIONS

Measurement and verification of pacemaker stimulation parameters is very important for proper function: on one side there are influences of the time-changing electrode impedance, and on the other side changes of the stimulation therapy. During the verification we found that the dispersion between the set-up impedance and the real measured values was at maximum 39,1 Ω.
The verification of the A-V delay also showed that the measured dispersion at A 120 ms / V 50 ms is within the tolerance stated by the cardiostimulator producers. Further research into the stability of the timing parameters is needed; these should be simulated, measured, and then compared.
ACKNOWLEDGMENT

The work and the contribution were supported by the project of the Grant Agency of the Czech Republic – GACR 102/08/1429 "Safety and security of networked embedded system applications". This work was supported by the Ministry of Education of the Czech Republic under Project 1M0567, and partially by the faculty internal project "Biomedical engineering systems V". Grant-aided student, Municipality of Ostrava, Czech Republic.
Author: Martin Augustynek
Institute: VSB - TU Ostrava, FEI
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
Biosignal Monitoring and Processing for Management of Hypertension

A. Stan 1, R. Lupu 1, M. Ciorap 1 and R. Ciorap 2

1 „Gh. Asachi" Technical University of Iaşi, Romania
2 "Gr.T. Popa" University of Medicine and Pharmacy, Iaşi, Romania
Abstract— Today the high number of patients suffering from chronic diseases puts great pressure on the health system through their demand for homecare monitoring. In this paper we present the design of a wearable medical device for long-term monitoring of pulse wave, SpO2 and NIBP. The monitoring device acquires the signals, stores them in local memory and sends all the data to a central station where the data are processed. The pulse wave was processed to calculate the second derivative of the photoplethysmogram as an indicator of the stiffness of the blood vessel [1]. The work focused on the design and implementation of an ultra-low-power wearable device able to acquire the pulse wave while causing minimal discomfort and allowing high mobility.
Keywords— patient monitoring, e-health, wearable device, pulse wave, photoplethysmogram
Fig. 1 Diagram of monitoring system I. INTRODUCTION
Cardiovascular disease affects individuals in their peak mid life years disrupting the future of the families dependant on them and undermining the development of nations by depriving valuable human resources in their most productive years. In developed countries lower socioeconomic groups have greater prevalence of risk factors, higher incidence of disease and higher mortality. Hypertension affects approximately 72 million people in the United States and is associated with considerable cardiovascular morbidity and mortality. [2]. Hemodynamic abnormalities that parallel the underlying cardiovascular changes in hypertension include changes in cardiac output, vascular resistance, fluid volume status, endothelial function, pulse wave velocity, and arterial stiffness [3]. These abnormalities can be evaluated monitoring the photoplethysmogram (PPG) and calculate the acceleration plethysmogram (APG) who is the mathematical second derivative of the PPG waveform. In this paper we presents the partial results of the research project PNCDI-II 11-070/2007 intitled “e-Health integrated solution for vital parameters monitoring for patients with chronically disease – SIMPA”. II. MATERIALS AND METHODS
The monitoring device is build using custom developed hardware and application software. Low power amplifiers
The system uses the ICnova AP7000 OEM platform that integrates the AVR 32 processor from ATMEL and memory (8MB Flash and 64 MB SDRAM). The processor runs with Linux operating system. The electronic board designed for this application contains: - power supply module - user interface module: o graphic LCD with touch screen o 3 LED’s o 3 push button - 4 serial interfaces (RS232) - SD card connector - Ethernet interface Data provided by the sensors are retrieved and written to a log file according to a prescribed format. At established intervals by the configuration file, this log file is sent via Internet to a server using one of the communication modules available. For acquiring the photoplethysmogram wave, SpO2 and blood pressure we use OEM modules PERL 100 respectively NIB Scan. The heart rate(HR) is calculate from photoplethysmogram wave. These OEM modules are connected with system using one of the 3 serial ports (UART1-3). Pulseoximetry employs the different absorption phenomena of oxygenated and deoxygenated hemoglobin at (800940 nm) and (600-700 nm) respectively to determine arterial oxygen saturation. The pulseoximeter also measures the heart rate which is an indicator of cardiac condition. The
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 537–540, 2010. www.springerlink.com
538
A. Stan et al.
pulseoxymeter thus utilizes two photo-plethysmograms (PPGs) signals: an infrared (IR) signal and visible red signal. A mathematical model for pulseoximetry begins by considering light at two wave lengths, Ȝ1 and Ȝ2, passing through tissue and being detected at a distant location. At each wave the total light attenuation is described by four different component absorbances; oxyhemoglobine in the blood (concentration c0, molar absorptivity İ0, and effective path length l0), “reduced” deoxyhemoglobine in the blood (concentration cr, molar absorptivity İr, and effective path length lr), specific variable absorbances that are not from the arterial blood (concentration cx, molar absorptivity İx, and effective path length lx), and all other non specific sources of optical attenuation, combined as Ay , which can include
possible to start the measurements directly ("manually") by transmitting the "start" command[14]. Connection to the board is done via serial, asynchronous communication with a baudrate of 4800 Baud. All commands and messages begin with a Start of Text character, ASCII 02, and close with an End of Text character, ASCII 03. The measuring unit is controlled by the host via command frames.
light scattering, geometric factors, and characteristics off the emitter and detector elements.
AO1 ® ¯ AO 2
H o colo H r cr lr Ay H o colo H r cr lr Ay 1
1
1
2
2
2
(1)
The blood volume change due to arterial pulse results in a modulation of the measured absorbances. By taking the time rate of change of the absorbances, the two last terms in each equation are effectively zero. Those equations can also be considered as a linearization. In general, a linearization procedure is not easy and not ensured that it should be correct. At established intervals the system connects to the server. This is possible through one of the present communication modules (Ethernet, WiFi) or GPRS if we use the third serial port The PEARL100 module uses the proprietary PEARL algorithm, that constantly shifts a time window of six seconds over a data buffer that contain samples of the red and infrared waveforms. The algorithm detects the correct pulse rate by convoluting a template over the waveform at different phase angles. On each detected pulse, a block with new saturation, pulse rate and quality information is transmitted. The pulse wave sample points are transmitted continiously with 50 bytes per second. Their values are located between 0 and 0xF7. Values that are higher than 0xF8 are used for marking the following data byte as a new data value (0xF9 for SpO2 and 0xFA for HR).[13] The NIBScan uses the oscillometric method for measuring a person's systolic, mean and diastolic pressure. During inflation and deflation of the cuff, the current cuff pressure is transmitted 5 times per second. The module has a selectable internal "cycling" mode, that automatically starts a measurement after a given time. The intervals of these cycles are adjustable by commands sent by the user. It is also
Fig. 2 Monitoring device III. RESULTS
The signals are continuously recorded in separate files on flash memory for feature analysis. Once pathological abnormality is detected, the monitoring device requests a transmission to the remote care centre. The communication between tasks is implemented with semaphores and waiting queues allowing a high level of parallelism between processes. Each process may be individually enabled or disabled. This feature is very important in increasing the flexibility of the application: if real time monitoring is desired. Then SD Card Process may be disabled and Transmitting Process is enabled, if only long term monitoring is desired then SD Card Process is enabled and Transmitting Process may be disabled. This has a positive impact on power consumption because only the resources that are needed are enabled for use. We calculate the mathematical second derivative on photoplethysmogram who is called acceleration plethysmogram (APG). This second derivative more clearly separates the components of the PPG waveform and easily allows the measurement of the time differential between wave peaks. The APG helps to more closely determine a representative “biological age” of the arteries.
IFMBE Proceedings Vol. 29
Biosignal Monitoring and Processing for Management of Hypertension
539
Fig. 4 The PPG and APG wave
Fig. 3 The screen shot of software application IV. CONCLUSIONS
This analysis gives valuable information about the heart rate, artery flexibility, hydration levels and overall cardiovascular health. This technique measures the wave patterns created by heart every time it beats. The pulse wave pattern tells a story about how the blood travels through the body and just how healthy and flexible the blood vessels are. The waveform results from the ejection of blood from the left ventricle and moves with a velocity much greater than the forward movement of the blood itself. The PPG sensor reads this from the small arteries of the fingertip as the pulse waves travel down the arterial walls. The height of the diastolic component of the waveform equates to the strength of the pressure wave, and the shape and other components of the waveform relates to the tone of the arteries. For example, the timing of the diastolic component relative to the systolic component depends on how fast the wave passed through the aorta and large arteries. The stiffer the arteries, the quicker the blood and waveform pass through. This produces changes in the dicrotic notch in the waveform which is a characteristic of arterial elasticity. In figure 4 is shown the relation between PPG and APG.[7]
For a “normal” subject the APG wave have the following characteristics: - a-b, a-c, a-d, a-e equal the time between each wave peak - a and b waves correspond to the early systolic component - c and d waves correspond to the late systolic component - b/a is an index of arterial wall elasticity where b gets smaller with cardiovascular age d/a is an index of vasoconstriction and vasodilation where d gets larger with cardiovascular age.
ACKNOWLEDGMENT This work is supported by Romanian Minister of Education, Research and Innovation in the framework of National Program for Research, Development and Innovation (PNCDI-2) under partnership project no. SIMPA 11-070
REFERENCES 1.
2.
3.
4. 5. 6.
Takazawa Kenji, Aizawa Akira, Kano Mineko et al. Measurement of vascular age by second derivative of photoplethysmogram and its usefulness, Japanese Journal of Clinical and Experimental Medicine, 82: 1032-1036, 2005 Rosamond,W., Flegal, K., Friday, G., Furie, K., Go, A., Greenlund, K. et al. “Heart disease and stroke statistics – 2007 update: a report from the American Heart Association Statistics Committee and Stroke Statistics Subcommittee”, Circulation 115: e69–171, 2007 Ferrario C. M., Basile J., Bestermann W., et al. The role of noninvasive hemodynamic monitoring in the evaluation and treatment of hypertension, Therapeutic Advances in Cardiovascular Disease, 1(2) 113–118, 2007 Kamat, Vijaylakshmi Pulse oximetry..: Indian Journal of Anaesthsia, Vols. 46(4):261-268, 2002 Bronzino J.D. The Biomedical Engineering Handbook – second edition, CRC Press in cooperation with IEEE Press, Boca Raton, Florida, 2000 Ciorap R., Zaharia D., Corciovă C., Ungureanu Monica , Lupu R, Stan A. Dispozitiv wireless pentru monitorizarea pacienĠilor cu afecĠiuni cronice, Revista Medico-Chirurgicală, Vol. 112, Nr.4 , Septembrie-Decembrie 2008
IFMBE Proceedings Vol. 29
540 7.
A. Stan et al.
Ciorap R., Corciovă C., Andritoi D, Turnea M., Zaharia D., Monitorizarea saturatiei de oxigen si a pletismogramei in bolile cronice, Revista Medico-Chirurgicală, Vol 113, nr.2. apr-iun.2009 8. Wong, A.K.Y.; Kong-Pang Pun; Yuan-Ting Zhang; Ka Nang Leung, “A Low-Power CMOS Front-End for Photoplethysmographic Signal Acquisition With Robust DC Photocurrent Rejection”, Biomedical Circuits and Systems, IEEE Transactions on, 2(4):280 - 288, 2008 9. R. A. Payne, D. Isnardi, P. J. D. Andrews, S. R. J. Maxwell and D. J. Webb “Similarity between the suprasystolic wideband external pulse wave and the first derivative of the intra-arterial pulse wave”, British Journal of Anaesthesia 99(5):653-661, 2007 10. Wilkinson IB, Fuchs SA, Jansen IM, et al. Reproducibility of pulse wave velocity and augmentation index measured by pulse wave analysis. J Hypertens; 16: 2079–84, 1998 11. Sherebrin M. H., Sherebrin R. Z. “Frequency Analysis of the Peripheral Pulse Wave Detected in the Finger with a Photoplethysmograph, Biomedical Engineering IEEE Transactions on, 37(3): , 1990
12. Jochanan E. Naschitz, Stanislas Bezobchuk, Renata MussafiaPriselac, Scott Sundick, Daniel Dreyfuss, Igal Khorshidi, Argyro Karidis, Hagit Manor, Mihael Nagar, Elisabeth Rubin Peck, Shannon Peck, Shimon Storch, Itzhak Rosner, and Luis Gaitini, “Pulse Transit Time by R-Wave-Gated Infrared Photoplethysmography: Review of the Literature and Personal Experience”, Journal of Clinical Monitoring and Computing, 18: 333–342, 2004 13. Medlab – PEARL 100 – Pulse Oximeter OEM Board – Technical manual, Medlab 2004-2008 14. Medlab – NIBScan - noninvasive blood pressure board – Technical manual, Medlab 2004-2008
Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Radu Ciorap “Gr.T.Popa” University of Medicine and Pharmacy Kogalniceanu 9-13 Iasi Romania [email protected]
Design and Development of an Electrophysiological Signal Acquisition System: A technological aid for research, teaching and clinical application E. Villavicencio1, D. García1, L. Navarro1, M. Torres1, R. Huamaní1 and L.F. Yabar1 1
Universidad Tecnológica del Perú/Facultad de Ingeniería Electrónica y Mecatrónica, Grupo de I&D en Ingeniería Biomédica, Lima, Perú
Abstract— The present work aims to the study and design of a prototype that allows us to view and analyze the ECG, EMG, EEG and EOG signal. The acquisition and interpretation of these signals is often used in the clinical field for monitoring in critically ill patients and the diagnosis of pathologies, also the reading of these signals is quite used in the field of research within the biosciences, to assess the effects of different substances on certain organisms. Furthermore, the analysis and processing of these signals has led to remarkable advances in the areas of engineering, among them we can mention the development of the popular brain-computer interfaces (BCI). In this sense, we propose the development of this prototype that will allows us to record and display data, and apply tools for electrophysiological signal analysis. Keywords— ECG, EEG, Electrophysiological signal, EMG, EOG. I. INTRODUCTION
The first records of electrophysiological signals date back to 1901, with the appearance of the first electrocardiograph. Over the years, new signals were discovered and developed new methods and improving the already known, up to modern specialized equipment ECG, EEG and other that allows us to acquire electrophysiological signals quite accurately, and that are found in all hospitals and clinics today as conventional monitoring and diagnosis tools. However, this need to record electrophysiological signals has also been presented at various universities and institutes around the world, where they have been developing research work connected with the acquisition and processing of such signals. In some publications chose to use commercial systems for the acquisition and focus work on the stage of processing [1], in other cases developed the hardware used for the acquisition, adding characteristics according to the application, amount and type of variable gain [2]. These investigations have given off in the fields of biological sciences, medicine and engineering. It is for this reason that companies began to surface acquisition system manufacturers with benefits for teaching, research and clinical field. The first systems of this type were produced by companies Biopac (USA) and ADInstruments (New Zealand), emerged in 1985 and 1988 respectively. Today we find these devices
produced by different brands such as Eastern Technology, DeLorenzo, among others. Significantly, these systems have different tools and options for viewing, processing and analyzing signals, besides being able to purchase multiple electrophysiological signals, unlike the proprietary systems for clinical use that have specific options according to the requirement of the doctor and , usually specializing in one type of electrophysiological signal [3]. Furthermore, some universities have managed to develop systems with features similar to those of the systems mentioned: In 2007, the Universidad de los Andes (Venezuela) developed a system which enabled the acquisition of ECG and EMG signals by using a commercial purchase card and a software developed by themselves [4]. Later, in 2008, the Universidad Pontificia Bolivariana (Colombia) Completed development of a modular system, called Biolab, which allowed the acquisition and analysis of ECG, EMG and EEG signals, and the generation of visual and auditory stimuli for its application in study of evoked potentials [5]. Currently these systems are applied in various disciplines in the area of biological sciences are used to acquire such equipment EEG signals and evaluate the benefits of new medicinal plants [6], in the area of engineering these systems are used to develop prosthetic devices or vehicles controlled by EEG and EMG signals, in order to improve the quality of life of patients with certain disabilities [7]. However, despite offering a variety of applications, and that our country has distributors that sell these systems, very few universities in our midst who have opted to buy one of these, either through ignorance or lack of subject budget. Moreover, most publications related to the acquisition of electrophysiological signals, developed by universities and institutes of our country, show no significant contribution to the fields of medicine, life sciences or engineering. These works are limited to acquiring a single signal, most times not even apply additional processing to find new results. All the situations described above motivated us to study and design of a system for acquiring electrophysiological signals.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 541–544, 2010. www.springerlink.com
542
E. Villavicencio et al. II.
METHODOLOGY
Table 1 Electrophysiological signals amplitude range and bandwidth
Both anatomical-physiological basis and the technological aspects are the basis for understanding the present work, the first one provides us information about the signals to be studied, and the second describes the technology that we use. Thus, both aspects will be detailed for better comprehension. A. Anatomical and physiological aspects Electrocardiogram is the record of a periodic signal that provides information from the heart's electrical behavior. As shown in Fig. 1, the signal can be acquired depending on the plane (front, horizontal) or polarity (unipolar or bipolar).
Signal ECG EMG EOG EEG
Signal Amplitude range 0.5 mV – 4 mV 0.1 mV – 5 mV 50 µV – 3.5 mV 5 µV – 300 µV
Signal bandwidth 0.01 Hz – 250 Hz 0 Hz – 10 KHz 0 Hz – 50 Hz 0 Hz – 150 Hz
B. Technological Aspects a) Transducers: For acquisition of the four electrophysiological signals the use of electrodes will be required. The type of electrode to use depends on the application and type of electrophysiological signal we want to record, as shown in Fig. 3. For most applications in humans surface electrodes will be used, but there are other cases where it will be necessary to use needle electrodes [9].
Fig. 1. Location of electrodes for ECG signal acquisition at different planes.
Electroencephalography measures the spontaneous electrical activity of the cerebral cortex, unlike the ECG it does not present a pattern to follow as the QRS complex. Its analysis is in amplitude and frequency, where the classification is into four groups: Alpha, Beta, Theta and Delta, as shown in Fig. 2.
Fig 2. EEG signal components.
The electrooculogram (EOG) is the measurement and registration of potential generated between the cornea and retina of the eye. These potentials allow us to identify the direction of gaze respect to the head. It is called electromyography (EMG) registration of voltage variations in muscle fibers during voluntary or spontaneous contraction of muscle [8,9].
Fig 3. Surface electrodes for EOG recording and needle electrodes for EMG recording.
b) Signal Conditioning: To raise the small potential sensed, it will be necessary to use instrumentation amplifiers. We suggest using integrated instrumentation amplifiers, such as the INA116 or AD620A, for presenting an excellent common mode rejection ratio and greater stability than discrete operational amplifiers, not to mention its high input impedance. Furthermore, to select the pass band signal according to gain, as shown in Table 1 [9] we propose to use Butterworth filters, because of its flat frequency response (more stable). c) Signal Digitalization, Processing and Transmission : For signal digitization and processing we’re going to use a digital signal microcontroller. The microcontroller code to use is dsPIC33FJ32GP710 , manufacturer Microchip. We propose to use this microcontroller because it’s provided by high processing speed, multiplier and divider, internal ADC module, high sampling rate and built-in serial port; features that make it ideal for scanning of multiple signals, application of digital signal processing techniques and communication with the PC via the serial port.
IFMBE Proceedings Vol. 29
Design and Development of an Electrophysiological Signal Acquisition System
d) User Interface: In principle we suggest using LabVIEW 8.0 software since it has an intuitive graphical programming, as well as show a more aesthetic mask. On the other hand C language is widely used for Digital Signal Processing for its wide variety of mathematical libraries and tools for real time applications, in addition to its portability that makes it surpass others languages like Pascal or Forlan. C. Block Diagram The block diagram proposed to develop the prototype is shown in Fig. 4. The system has several stages, described below: First, the signal will be sensed by the electrodes and then carried to the instrumentation amplifier in charge of raising the small potential sensing to the order of volts. Then the signal will pass through filters, which are responsible for mitigating the undesirable components, we can fit the bandwidth of the sensed signal. Then they proceed to add to the DC signal level constant, so that the resulting signal does not present negative values, this is done so condition the signal for subsequent digitization. Once conditioned, enter the desired signal to the microcontroller through an internal multiplexer and then becomes digitized via the ADC module. Once you have input signal to the microcontroller may implement the various known processing techniques to obtain more information from the acquired signal.
543
ticularly useful for display, separately, the EEG signal components because, when implemented by software, will help us reduce the circuit size, not to mention the lower cost. A control algorithm will lead the direction of the information according to user requirements. In case the user just want to see the signal, the data will be transmitted via the serial port (UART) microcontroller directly to the PC. In case you want to apply additional processing to the signal, the data will be transmitted to a second microcontroller, whose task is devoted exclusively to this process. This second microcontroller will external EEPROMs for cases that require processing to make a large number of samples. After the information has been pro-sada, then sent back to the first microcontroller, which in turn transmit this information to the PC. Final-mind, the information coming to the PC will be received and interpreted by the graphical interface, which enables the interaction between the user and the system. Initially the user must select the signal with which to work (ECG, EEG, EOG or EMG) and place the corresponding electrodes, and thereafter, this signal can be displayed on the screen of the PC through a graphical environment that will enable the user to modify certain parameters, such as zooming or removal of the signal or choose the components of the signal that you want displayed (Lead II, alpha signal, etc.).. Furthermore, it may use tools to extract more information in the signal and store information for later analysis such as frequency spectrum display of the signal acquired, acquisition of heart rate, obtaining the number of spikes in an EEG signal, record data in a spreadsheet, etc. III.
RESULTS
a) Interferences supression: It’s expected to significantly reduce the effects generated by unwanted components (shown in Fig. 5) such as motion artifact caused by changes in position of voluntary and involuntary patient, 60 Hz network interference, variation in baseline, muscle noise, and other interference that occurs in the electrophysiological signals of ECG, EEG, EMG, EOG; for which we intend to use fourth and eighth order Butterworth filters.
Figura 4. Diagrama de bloques del sistema
For example, in the case of acquiring the ECG signal, it can implement an algorithm for detecting the QRS segment and, through this, find the heart rate. Another tool that will use is the application of digital filters. These filters are par-
Fig. 5 Interference by biological sources and artifacts: a)Motion artifact; b)Baseline variation; c)Muscle noise; d)Power line noise
IFMBE Proceedings Vol. 29
544
E. Villavicencio et al.
b) Acquisition and Processing: By choosing suitable sampling frequency and using different techniques of digital signal processing is to develop analytical tools that allow us to extract additional information and monitor different variables from electrophysiological signal acquired, as shown in Fig. 6.
Fig. 6 Graphs of eye position, speed and acceleration, obtained from an EOG signal.
c) Graphic interface: Taking advantage of the LabVIEW software and well knowing user requirements for most typical applications (zoom, frequency spectrum, peak count in EEG signal, recording data in a spreadsheet) is expected to develop a system that covers the expectations of the user, provide a user-friendly graphical interface and with an aesthetic plotter like the shown in Fig. 7.
IV.
The electrophysiological signal acquisition applications undergraduate level research carried out predominantly on a non-invasive (surface electrodes) in the case of human patients and so invasive (needle electrodes) in the case of animals. On the other hand, is more appropriate to develop the processing algorithms in the microcontroller signals instead of programs on the PC, as this makes the system performance depends mainly microcontroller and not the type of PC that used. This represents an advantage for the user because it is not necessary that your computer has too much RAM. With the development of the prototype proposed in this paper we obtain a system that can be used in universities as teaching materials in courses at both undergraduate and graduate as well as being used as an indispensable tool for carrying out various researchings, projects and developments by the groups of life sciences, health sciences and engineering. Finally, considering the variety of filters with different bandwidths that are needed for the analysis of signals and for displaying the desired components, we propose the use of digital filters as a better alternative because of its greater insensitivity to environmental conditions, its versatility to perform various types of filtering without having to modify the hardware and ability to work with very low frequency signals, making them suitable for applications in biomedical instrumentation.
REFERENCES 1.
2.
(a)
3. 4.
5. (b)
CONCLUSIONS
6.
Fig. 7. a) EMG visualization b) ECG (Lead II) visualization 7. 8. 9.
Mendoza A., Archila L., Ardila A. (2001). “Caracterización del intervalo QT en una señal electrocardiográfica usando la transformada Wavelet”. Memorias II Congreso Latinoamericano de Ingeniería Biomédica (Electronic version). P. Niño, O. Avilés, J. Saavedra, M. Orejuela, M. De la Hoz (2001). “Módulo de adquisición para prueba de esfuerzo cardiovascular (MAPEC)”. Memorias II Congreso Latinoamericano de Ingeniería Biomédica (Electronic version). Biopac MP Research Catalog. Available at: http://www.biopac.com/Manuals/research_catalog.pdf Abel Romero, Diego Jugo, Marco Parada (2007). “Diseño e implementación de un instrumento virtual para la adquisición y procesamiento de señales fisiológicas”. Revista del Instituto Nacional de Higiene “Rafael Angel”, Nº 38 (Electronic version). Sayra M. Cristancho, Carlos D. Giraldo, Alex A. Monclou (2008). “Integración de la Adquisición y Visualización de Señales Biomédicas: BIOLAB”. E-magazine: RevistaeSalud.com, Vol 4, Nº15. Sanchez Mendoza María Elena (2007). “ Mecanismo de acción relajante de Berberina aislada de Argemone ochroleuca Sweet en anillos de tráquea aislada de cobayo”. Thesis presented at Instituto Politécnico Nacional de México. Daniel Di Lorenzo, Joseph Bronzino. NEUROENGINEERING. Editorial Taylor & Francis Group (2008). Arthur Guyton & John Hall. “Tratado de Fisiología Médica”. Editorial Mc. Graw Hill, Décima Edición (2001). John G. Webster. Medical Instrumentation: Application and Design. Editorial John Wiley & Sons, Tercera Edición (1998).
IFMBE Proceedings Vol. 29
Numerical Models of an Artery with Different Stent Types M. Brand1, M. Ryvkin2, S. Einav3 ,I. Avrahami4, J. Rosen5, M. Teodorescu6 1
Department of Mechanical Engineering and Mechatronics, Faculty of Engineering, Ariel University Center of Samaria, Ariel, Israel 2 School of Mechanical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel 3 Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel 4 Afeka College of Engineering, tel Aviv, Israel 5 Department of Computer Engineering, Baskin School of Engineering, University of California, Santa Cruz , CA, USA 6 Department of Automotive Engineering, School of Engineering, Cranfield University, Bedford, UK
Abstract — Main cause of restenosis after balloon angioplasty is due to the stresses generated in the artery as well as from the stent artery interaction. Understanding the factors that are involved in this interaction, and the ability to evaluate the stresses that are formed in the artery, could help to lessen the number of failures. The goal of the present study is to develop computationally efficient numerical models for estimating the Damage Factor (DF) as the contact stresses, and to investigate their influence upon stent design, artery and plaque parameters. At first the artery was taken as a hollow cylinder with homogenous, linear elastic properties of the material. Later, the artery was taken as a two dissimilar layers model, with non-linear hyper-elastic properties. The variation in the Damage Factor value as a function of the mismatch between stent’s and artery’s diameters is nearly linear, and as much as the diameter of the artery increases, the Damage Factor for the same mismatch decreases. For arteries with 75% blocking and mismatch of 1mm, the Damage Factor is 3.8. Keywords — Stent, Artery, Interaction, Numerical model, Damage Factor.
I. INTRODUCTION At the final stage of the balloon angioplasty a stent is inserted into the artery. The stent keeps the internal space of the artery, the lumen, from decreasing, and for this end it requires a specific geometry and mechanical properties [1]. Main cause of restenosis after stent implantation is due to stresses generated in the artery by the stent [2]. The mismatch between the stent and the artery diameters cause high stresses in the arterial wall as well as local injury of the artery. These factors cause formation of new layer producing a narrowing of the arterial lumen and increase the risk for a restenosis [3-5]. The goal of the present study is to formulate numerical models in order to calculate the DF, a dimensionless parameter which defined as the value of the interface pressure normalized relative to a value of average blood pressure. Initially a plane two dimensional numerical model was developed. Later, a more sophisticated and accurate spatial three dimensional model was employed. Comparison be-
tween the results obtained for the two models showed that a good match exists. In recent years, many research works were published, where the blocked artery was considered to be consisting of several layers with non linear properties [6, 7]. Hence, additional two dimensional and three dimensional numeric models were formulated, in which the artery is modeled as a bi-layered structure with non linear constituents. In this latter case the comparison between the results obtained by the numerical models showed good match. This agreement enables us to assume that it is feasible to compute the DF using a two dimensional model. The two dimensional model is “easy” to use and significantly faster, and enables us to receive results that are found to highly match the results received from the more accurate three dimensional model. II. MODELS OF A STENT INSERTED INTO A HOMOGENOUS ARTERY WITH LINEAR ELASTIC MATERIAL PROPERTIES
A. Plane 2D Model of a Stent Inserted into an Artery The two dimensional model was developed using the so called "strip theory" of structural mechanics, consider the cross section perpendicular to the longitudinal x-axis of the stent-artery system (Fig. 1). The rotational cyclic symmetry of the domain, together with the symmetry of the loading being the internal pressure, allows us to consider a single sector ABCD (Fig. 2).
Fig. 1 - Cross sections location for a 2D model
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 545–548, 2010. www.springerlink.com
Fig. 2 - Artery and stent cross section at the middle of stent’s beam
546
M. Brand et al.
For the consider case, 18 beams cross sections are distributed along the inner circle uniformly. The dimensions of the inner and outer radii of this sector before the stent inserting are identical to the dimensions of a blocked artery after its inflation. The stent strut is represented by a square cross section which is located in the center of the sector (Fig. 3).
ployed for the two dimensional model. The axial size of the sector was defined in accordance with the length of the stent’s beam. The geometry of the stent’s beam is obtained from a cylindrical envelope with suitable dimensions wherein the difference between the inner and outer diameters of the envelope is equal to the thickness of the stent’s beam (Fig. 4).
Fig. 4 - Geometry of an artery’s sector and a stent’s beam.
Fig. 3 – 2D plane problem for a stent being inserted into an artery with the appropriate boundary conditions
The radial stiffness of the artery is defined by its geometry, and elastic properties of the wall material. In contradistinction, the stiffness of the stent cannot be represented in a similar manner, and hence its radial stiffness was defined by adding a spring element (Fig. 3). The stiffness of the spring was computed in accordance with the radial stiffness of the stent, as determined by a numeric manner [8]. Boundary conditions appropriate for this problem were formulated as illustrated in Fig. 3. In addition, a radial displacement was formulated for the end of the spring, at a value that is equal to the mismatch that is defined by the difference between the stent and artery initial radii: Ur=R0S-R0A, where R0A is the inner radius of the artery before stent’s insertion, and R0S is the outer radius of the stent at its free state. This situation simulates the displacements of the stent and the artery after the insertion of the stent, wherein the artery moves outwards and the stent moves inwards in the radial direction. The displacement of the stent is represented by the contraction of the spring.
The boundary conditions were formulated as illustrated in Fig. 5. The stiffness of the stent is defined from the stent’s geometric shape, mechanical properties and boundary conditions.
Fig. 5 - Boundary conditions as defined in the 3D model.
C. Stent Inserted into an Artery with Linear Isotropic mechanical Properties – Numerical Results Table 1 presents the values of the DF for a 4.25 mm diameter stent inserted into arteries smaller than it, with mismatch values of 0.5, 0.75 and 1.0 mm (the recommended mismatch range).
B. Spatial 3D Model of a Stent Inserted into an Artery In order to enable us to find the level of accuracy of the results obtained when using the two dimensional model, a three dimensional numerical model was developed and a comparison between the two sets of results for these models was made. The repetitive module of the artery is a 3D sector of a hollow cylinder. The radii of the cylinder, the inner and the outer, are determined in a manner identical to the one em-
Table 1 - DF for a 4.25 mm stent with mismatch values of 0.5, 0.75 and 1.0 mm. Artery diameter after being inflated [mm] Mismatch -
Δd
[mm]
DF computed for a two dimensional model DF computed for a three dimensional model The difference (%) in the DF value between the 2D and 3D numerical models
IFMBE Proceedings Vol. 29
3.25 1
3.5 0.75
3.75 0.5
3.85 3.8 1.32
2.98 2.95 1.02
2.1 2.11 0.47
Numerical Models of an Artery with different Stent Types
III. MODELS OF A STENT INSERTED INTO A BI LAYERED ARTERY WITH HYPER ELASTIC MATERIAL PROPERTIES
In the following, we will examine whether the match between the results for the 2D and the 3D models, is obtained and valid also if it relates to the artery made up of two layers, healthy and plaque (diseased), with non-linear and hyper-elastic material’s properties. Once the Ogden model was selected as the model representing the strain’s energy W, the constants of the materials, the healthy artery and the plaque layer, have to be found. In order to find these constants it is necessary to express the behavior of the material by a stress – strain diagram. This behavior was defined by using the data given in the paper by Holzapfel [6]. In this work the artery with a plaque layer is consider as tissue containing eight layers with different orthotropic hyper elastic properties (Fig. 6).
547
B. Results In order to examine the match of the DF between the two numeric models, the results of the models for cases of arteries with 75% blocking into which stents of different diameters were inserted, were compared. Fig. 7 depicts DF values for a 3.5 mm diameter artery.
Fig. 7 - DF of a 3.5 mm diameter blocked artery (75%) as a function of the radial stent-artery mismatch.
The two models are highly matched. The DF has higher values in the three dimensional model, wherein the mismatch is smaller, and smaller values when the mismatch increases. IV. EMPLOYING THE SUGGESTED 2D MODEL FOR DIFFERENT STENT TYPES
Fig. 6 – Stress – strain diagram of the artery layers A – Adventitia, M – Media, I – Intima, nos - non diseased, f/fm - fibrous, fl – collagenous, lp – lipid pool. (based on Holzapfel, [7])
We are interested in computing the DF for the case of a stent inserted into an artery composed of two layers with hyper elastic properties. For achieving it, “the average values” were computed from a stress – strain diagram of the diseased and healthy layers (Fig. 6), and they were selected to represent the properties of the artery for those two layers. A. The Numerical Models Numerical models (2D and 3D) were developed, in order to calculate the DF of an artery made of two layers with hyper elastic properties. These numerical models are similar to the models cited earlier. The first difference is by the geometry of the artery which consists two symmetric layers. The outer layer represents the healthy layer and the inner layer represents the plaque layer. The second difference is related to the properties of the materials of the two layers, as already explained above.
Employing the numerical models enables to calculate stresses in an artery into which different stents were inserted; each stent has a suitable periodical structure with a uniform beams cross section. In principle, it is possible to apply this method for a diversity of stents, but in this section we calculate only the stresses developed in an artery as a result of inserting a Micro II type stent into it. This stent was selected because we have the data regarding his stiffness and geometrical dimensions. The results showed that the stiffness of the stent increases when its diameter decreases. Hence for various cases in which stents of different sizes are inserted, it is necessary to pay attention to the appropriate value of the stent’s stiffness. In the following case the artery was considered as being a hollow, two layered entity, with non linear and hyper elastic properties. A. Potential Damage Factor for a Micro II Type Stent In this model, we present the results for a Micro II type stent (Fig. 8). The radial stiffness was computed relying on the data in the cited paper by Schrader and Beyar (1998), see Fig. 9. From the above stress-strain diagram one obtains
IFMBE Proceedings Vol. 29
548
M. Brand et al.
radial stiffness value of 1 ⋅ 106 N / m 2 for a 3 mm diameter stent and 0.7 ⋅106 N / m2 for a 4 mm diameter stent.
Fig. 8 - Micro II stent (Serruys and Kutryk, 1998).
Fig. 9 - Stress – strain diagram for Micro II stent expanded to 3mm (*line) and 4mm (+-line), and after relaxation (dash-dotted O-line) (Schrader and Beyar, 1998)
The values of the Potential Damage Factor for the collection of cases, in which the artery’s blocking percentage and stent to artery mismatch values varies, are presented in Fig. 10. A surface describes the Potential Damage Factor for a stent inflated to 3mm diameter wherein the diameter of the artery after recoil is smaller than the stent’s one by a value in the range of 0.1 to 0.5mm. The blocking percentages of the artery selected for the same assemblage of cases was 15% to 90%.
Finally, use was made of the two dimensional model for examining additional stents, e.g. those with periodic structures that were inserted into arteries, where their geometry and radial stiffness were known. From the results obtained it was possible to conclude that: For arteries with small diameters – larger stresses are obtained relative to the case of arteries with larger cross section (for identical blocking and radial mismatch). A similar phenomenon was reported by Brand et al. [8, 9] wherein they treated different stents. The results we obtained can be applied by the designers of stent as well as by medical personnel for choosing the most suitable stent for a specific patient. Future research should use the match that was found in order to obtain results for the Potential Damage Factor in the case of asymmetric artery stenosis.
REFERENCES 1.
2. 3.
4.
5.
6.
Fig. 10 - Potential Damage Factor diagram for a Micro II type 3 mm stent inserted into an artery as a function of stent to artery radial mismatch and artery’s blocking percentage.
7.
8.
V. SUMMARY AND CONCLUSIONS The goal of this research is to suggest a numerical approach for calculating the contact stresses applied to the wall of the artery following the insertion of a net structured stent into it. Representation of the interface pressure between stent and artery was performed using a dimensionless parameter, the Damage Factor (DF). Two kinds of numeric models were examined, two dimensional and three dimensional, and a good match was obtained between them. This match enables us to compute the stresses using the two dimensional model, which is simpler and much faster in computing the results as compared to the three dimensional model.
9.
Schrader, C. S., and Beyar, R., 1998, Evaluation of the Compressive Mechanical Properties of Endoluminal Metal Stents, Cathet Cardiovasc. Diagn., 44, pp. 179–187. De Belder, A., and Thomas, M. R., 1998, The Pathophysiology and Treatment of In-stent Restenosis, Stent, 1(3), pp. 74–82. Akiyama, T., Di Mario, C., Reimers, B., Ferraro, M., Moussa, I., Blengino, S., and Colombo, A., 1997, Does the High-Pressure Stent Expansion Induce More Restenosis? J. Am. Coll. Cardiol., 29, p. 368A. Oesterle, S. N., Whitbourn, R., Fitzgerald, P. J., Yeung, A. C., Stertzer, S. H., Dake, M. D., Yock, P. G., and Virmani, R., 1997, The Stent Decade: 1987 to 1997, Am. Heart J., 136, pp. 578–599. Rachev, A., Manoach, E., Berry, J., and Moore, J. E., Jr., 2000, A Model of Stress-Induced Geometrical Remodeling of Vessel Segments Adjacent to Stents and Artery/Graft Anastomoses, J. Theor. Biol., 206, pp. 429–443. Holzapfel, G. A., Stadler, M., and Schulze-Bauer, C. A.J., 2002, A Layer- Specific Three-Dimensional Model for the Simulation of Balloon Angioplasty using Magnetic Resonance Imaging and Mechanical Testing, Ann. Biomed. Eng., 30, pp. 753–767. Holzapfel, G. A., Stadler, M., Changes in the Mechanical Environment of Stenotic Arteries During Interaction With Stents: Computational Assessment of Parametric Stent Designs, J. Biomech. Eng., February 2005, Volume 127, Issue 1, 166 (15 pages). Brand M., Ryvkin M., Einav S., The SciMED RADIUS Stent-Artery Interaction, Proceedings of the 9th Biennial ASME Conference on Engineering Systems Design and Analysis, ESDA08, 2008, Haifa, Israel. Brand M., Ryvkin M., Einav S. and Slepyan L., 2005, The Cardiocoil Stent-Artery Interaction, Journal of Biomechanical Engineering, 127, pp 337-344. Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Moshe Brand Ariel University Center of Samaria P.O.B. 3 Ariel Israel [email protected]
A macro-quality field control of dual energy X-ray absorptiometry with anatomical, chemical and computed tomographical variables A. Scafoglieri1, S. Provyn1,2, O. Louis3, J.A. Wallace1,4, J. De Mey3 and J.P. Clarys1, 1 Department of Experimental Anatomy (EXAN-LK), Vrije Universiteit Brussel, Belgium Anatomy, Morphology and Biomechanics Department, Haute Ecole Paul-Henri Spaak, Brussels, Belgium 3 Department of Radiology, University Hospital, UZ-Brussel, Belgium 4 Centre for Movement Sciences and Ergonomics, University of Aberystwyth, Wales, UK
2
([email protected]) Abstract – Dual-energy X-ray absorptiometry (DXA) studies both in humans and animals one cannot avoid obtaining a controversial impression concerning its use. The pro’s and contra’s however are dictated by the model DXA was verified with. This study will cross-validate and compare fan beam data with both dissection and computed tomography (CT) scanning data. Twelve porcine carcasses, viscera included, were measured with DXA and CT before dissection into its major components. Soft tissue samples allowed for chemical and hydration analyses and the complete skeleton was ashed. This macro-quality evaluation confirms that part of the existing problem is the result of terminology and that the predictive character of DXA is good. The verification of the precision capacity of DXA variables resulted into significant differences indicating that clinical precision for the individual patient is at risk. Keywords – ashing, CT, DXA, quality control, tissue dissection
I. INTRODUCTION Although body composition (BC) data acquisition and ad hoc analysis are both popular and important, selecting an appropriate method or technique for accurate and/or precise assessment of individuals and/or groups remains a challenging task within various sectors of veterinary science and public health. Reviewing the literature of DXA application one cannot avoid obtaining a very controversial impression of this new method of choice. On the one hand one finds an important number of application studies that support DXA technique as convenient for % Fat, Lean Body Mass (LBM) and Bone Mineral Content (BMC) measures. Other studies suggest violation of basic assumptions underlining that DXA is indirect and lacking accuracy for measuring BC. Validation or even cross-validation in between indirect methods cannot guarantee both accuracy and reality precision [1-4]. Perfect correlations and low coefficients of variation allow for predictions and assumptions only not accuracy or precision. In addition the accuracy of absorptiometry can be affected by the choice of calibrating materials. As a consequence both absolute and relative values can differ substantially between manufacturers, between instruments and the ad hoc software used [5,6]. This controversy
related to DXA studies gives rise to concern of its measuring quality. Validations against direct values are suggested before one can be confident about the accuracy of absorptiometry [6]. Two statements resulting from the literature are retained: a) dissection and direct comparison combined with bone ashing is considered as the most accurate direct validation technique [7] and b) further research with direct dissection and ashing is needed in particular for the influence of abdominal an thoracic organs associated with dispersed gas/air pockets and internal panniculus adiposus [3,6]. DXA is increasingly used in clinical practice, and clarity about the content and meaning of “lean” as produced by DXA is needed. Tissue combinations, e.g. skin, muscle, viscera and bone will be related to the DXAlean variable. In order to cross-validate this study will compare fan beam data, with both dissection and computed tomography (CT) scanning data. II. METHODS AND PROCEDURES Twelve, young “Belgian Negative” pigs, prepared for human consumption were acquired immediately after electroshock slaughter (6 female and 6 castrated males, mean weight standard deviation (sd), 39.509 4.335 kg). The carcasses were exsanguinated and decapitated between the atlas and the occipital bone. To minimize further dissection error front and hind legs were disarticulated distal from humeri and femora e.g. on elbow and knee level, respectively. The mean weight sd of the remaining carcass plus viscera was 33.051 3.324 kg (whole carcass weights being taken with a digital hang scale (KERN-HUS-150K50) accurate to 50g. A. Dual energy X-ray absorptiometry, DXA A QDR 4500A upgraded to Discovery HOLOGIC DXA device (Hologic, Waltham, MA, USA) utilizes a constant X-ray source producing fan beam dual energy radiation with effective dose equivalents (EDE) of 5 µSv (e.g. to situate this low radiation in terms of an example: a one-way transatlantic flight produces 80 µSv EDE and a spinal radiograph approximately 700 µSv EDE ) [6]. The estimations of fat and lean mass are based on extrapolation of the ratio of soft tissue attenuation of two
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 549–553, 2010. www.springerlink.com
550
A. Scafoglieri et al.
X-ray energies in non-bone-containing pixels. The two Xray energies are produced by a tungsten stationary anode X-ray tube pulsed alternately as 70 kVp and 140kVp. The software (for Windows XP version 12.4:3) performs calculations of the differential attenuations of the two photon energies and presents data for each carcass of percentage of fat, fat mass (g), lean mass (g), bone mineral mass (g), bone mineral density (BMD) in g/cm2 and total weight. According to the manufacturer, a coefficient of variation (CV) for human BMD of 0.5% can be expected during repeated measurements. To determine the reliability of DXA measurements, each pig carcass was scanned three times consecutively without (2x) and with (1x) repositioning. The DXA equipment was calibrated daily with a spine phantom (supplied by the manufacturers) to assess stability of the measurements, but also weekly, using a step phantom to allow for correction of sources of error related to beam hardening.
After the dissection and weighing procedures samples of all tissues of 100g to 150g were deep-frozen. Small parts were cut off and weighed in recipients of known weight before lyophilisation overnight. With the dried samples the water content could be measured and after storing into metal cells. Fat (lipid) is extracted with technical Hexane using a Dionex accelerated solvent extractor. After the hexane evaporation of the extraction, total (final) lipid content was determined (weighed). The skeleton was diamond-cut into pieces in order to fit in the ashing furnace (type Nabertherm, Liliental, Germany). After incineration each sample was heated using a ramped temperature protocol of two hours to 800°C and ashed for eight hours, as determined by prior pilot work. Before weighing on the Mettler Toledo precision scale the ash was cooled undercover and collected in a main container. The ashing of one full porcine skeleton took between 50 to 60 hours.
B. Computed tomographic scanning
Data are reported as mean(x) ± standard deviation(sd). Normality of all variables was verified with a Kolmogorov-Smirnov goodness of fit test and all DXA, CT and dissection data were (matrix) compared with Pearson correlation coefficients whereas differences were verified with one-way analysis of variance repeated measures (Anova). Reliability and consistency of these results were verified with intra-class correlations (ICC) and Bland-Altman [9] plots were used to access agreement of the direct carcass dissection data with the indirect DXA and CT estimates. All statistical tests were performed using SPSS 16.0 for windows and p values of <0.05 indicated significant differences.
A whole body scan of the pigs was taken with a CT scanner (type Philips Brilliance BZC 16, Koninklijke Philips Electronics NV, Eindhoven, The Netherlands) using the following settings: 120 kVp, 200 mAs, pitch 0.641, slice collimation 64 x 0.625 mm, reconstructed slice width 0.75 mm and using the BrillianceTM V2.3.0.16060 software. Tissues (Adipose tissue = AT, soft tissue = ST and bone = B) were classified based on Hounsfield Units (HU) and their respective volumes were calculated using a maximum likelihood Gaussian mixture estimator implemented in Matlab (The Mathworks Inc., Natick, United States). The following optimal classification scale was employed to determine each tissue: AT: -180..-7 HU; ST: -6..142 HU and B: 143..3010 HU. Tissue volumes were multiplied by their reference densities to obtain tissue weight estimates. C. Dissection, bone ashing, chemical fat and hydration analysis After the DXA measurements the carcasses were dissected into its various components: skin, muscle, adipose tissue, viscera and bones. The subcutaneous, intramuscular and intra-visceral AT was combined as one tissue. Blood vessels and nerves within tissues were attributed to it. Bones were carefully scraped, ligaments were added with muscle tendons to muscle tissue, cartilage remained part of the bone tissue. Seven expert pro-sectors and anatomists worked simultaneous and each dissected particle was collected minimizing or eliminating evaporation [3,8]. Masses were measured during the dissection using Mettler-Toledo digital scales (Excellence XS precision balance Model 40025) accurate to 0.01g. Once a bone fully prepared, the same procedure was continued and completed with its hydrostatic weight suspended to the same scale allowing for the volume-based bone density (g/cm3) calculation.
D. Statistical analysis
III. RESULTS Table I lists the data of all directly obtained measures and the complete set of indirect estimates made by DXA and CT. Its purpose is to evaluate the predictive quality of both DXA and CT, but also to evaluate precision and accuracy between direct and indirect values. This study considers r 0.90 as a good, r 0.80 as a medium and r 70 an average indicator of prediction confirmed or rejected by the ICC. If not significantly different with the dissection reference, one can assume an acceptable level of measurement precision. Table I indicates for almost all soft tissue comparisons a majority of good correlations (r 0.90), two medium correlations (r 0.80) and two average (r 0.70). Adiposity prediction expressed in % seems to be problematic for the CT. Despite the majority of good prognoses for prediction we do find significant differences in accuracy for total masses (DXA), adiposity (g and %)(DXA and CT) for all non-adipose soft tissue combinations (DXA and CT) and for all bony comparisons. Except for the ashing there are indications of acceptable precision and comparability with DXA-BMC.
IFMBE Proceedings Vol. 29
A Macro-quality Field Control of Dual Energy X-Ray Absorptiometry with Anatomical, Chemical and Computed Tomographical Variables TABLE I
551
COMPARISON BETWEEN DIRECT DISSECTION DATA VALUES WITH THE CORRESPONDING DXA AND CT VALUES
Variables Total mass (g)
Total tissue mass (g)
Dissection x ± sd
DXA x ± sd
CT x ± sd
r
33051.3 ± 3323.8
33192.3 ± 3336.6
--
1.00
17.903**
1.00***
33051.3 ± 3323.8
--
33041.7 ± 3337.8
0.99
0.006
0.99***
--
33192.3 ± 3336.6
33041.7 ± 3337.8
0.99
1.463
0.99***
32723.4 ± 3427.0
33192.3 ± 3336.6
--
1.00
24.061***
0.99***
--
33041.7 ± 3337.8
0.98
2.689
0.98***
--
0.91
268.516***
0.85***
5508.3 ± 844.7
0.72
131.446***
0.69**
5508.3 844.7
0.80
0.777
0.80**
32723.4 ± 3427.0 Adipose tissue / Fat (g)
3571.6 ± 632.8
Adipose tissue / Adipose tissue (g)
3571.6 ± 632.8
Fat / Adipose tissue (g)
--
Adipose tissue / Fat (%)
10.8 1.27
Adipose tissue / Adipose tissue (%)
10.8 1.27
Fat / Adipose tissue (%)
--
ATFM / Lean + BMC (g)
29479.7 ± 2874.7
ATFM / Soft Tissue + Bone (g)
5653.1 ± 934.1 -5653.1 934.1 17.0 1.87 --
ICC
0.81
370.409***
0.76**
16.6 1.19
0.31
195.514***
0.31
16.6 ± 1.19
0.46
0.594
0.41
27544.7 ± 2681.5
--
0.99
227.14***
0.99***
29479.7 ± 2874.7
--
27525.0 ± 2559.9
0.98
142.665***
0.98***
--
27544.7 ± 2681.5
27525.0 ± 2559.9
0.97
0.012
0.97***
Muscle / Lean (g)
17684.3 ± 1908.8
27103.1 ± 2647.3
--
0.95
1012.029***
0.90***
Muscle / Soft Tissue (g)
17684.3 ± 1908.8
--
24166.7 ± 2270.1
0.94
790.922***
0.93***
--
27103.1 ± 2647.3
24166.7 ± 2270.1
0.97
196.183***
0.96***
Lean + BMC / Soft Tissue + Bone (g)
Lean / Soft Tissue (g)
17.0 ± 1.87
--
Anova F
Skin
1326.7 ± 244.0
--
--
--
--
--
Muscle + Skin / Lean (g)
19011.1 ± 2092.3
27103.1 ± 2647.3
--
0.95
960.440***
0.93***
Muscle + skin / Soft Tissue (g)
19011.1 ± 2092.3
--
24166.7 ± 2270.1
0.95
642.421***
0.95***
Viscera
7465.3 ± 803.8
--
--
--
--
--
Muscle + skin + viscera / Lean (g)
26476.4 ± 2593.8
27103.1 ± 2647.3
--
0.99
61.326***
0.99***
Muscle + skin + viscera / Soft Tissue (g)
26476.4 ± 2593.8
--
24166.7 ± 2270.1
0.97
162.206***
0.97***
Skeleton mass / BMC (g)
2505.3 ± 317.5
--
0.62
641.302***
0.24
Skeleton mass / Bone mass (g)
2505.3 ± 317.5
3358.3 ± 446.1
0.59
65.404***
0.55*
3358.3 ± 446.1
0.40
566.598***
0.11
BMC / Bone mass (g)
--
441.6 ± 64.6 -441.6 ± 64.6
Ash weight (g) / BMC
445.6 ± 66.2
441.6 ± 64.6
--
0.73
0.086
0.73**
Skeleton Density (g/cm3) / BMD (g/cm2)
1.201 ± 0.02
0.782 ± 0.09
--
0.68
370.144***
0.24
1.720 ± ND
ND
ND
ND
1.720 ± ND
ND
ND
ND
3
Skeleton Density / Bone density (g/cm ) BMD (g/cm2) / Bone density (g/cm3)
1.201 ± 0.02 --
-0.782 ± 0.09
DXA = dual energy X-ray absorptiometry, CT = computed tomography, x = mean, sd = standard deviation, r = Pearson correlation coefficient, ICC = intra-class correlation coefficient, ATFM = adipose tissue free mass, BMC = bone mineral content, *P<0.05, **P<0.01, ***P<0.001, ND = not determined (CT considers bone density as a constant value).
IV. DISCUSSION The ICC and the Bland-Altman [9] plots confirm these findings. In Table II, dissection tissue masses were subdivided according to anatomic segmentation into respectively upper limb (Up limb), lower limb (Lo limb) and trunk (e.g. for skin, muscle and bone). For adipose tissue additional differentiation was made for subcutaneous (e.g. external) and visceral (e.g. internal) trunk AT. For each segment the water content and the fat (e.g. lipid) content was determined for the respective tissues and presented as % of the studied mass per tissue.
Given the basic reasoning that the measurement of whole body adiposity (in g or %), or non adipose tissue (in g) and density (in g/cm3) with different techniques and different equipment should produce similar, if not identical results on the same individuals cannot be supported because the underlying different assumptions or models [1,10]. Body fat (BF) is defined as the etherextractable constituent of body tissues, and must be considered as a chemical component of the body lipids. The interchangeable use of the terms BF and AT has led and is leading still to ambiguities.
IFMBE Proceedings Vol. 29
552
A. Scafoglieri et al. TABLE II WATER (LYOPHILISATION) AND LIPID (ETHER EXTRACTION) CONTENT OF TISSUES
Tissue Skin
Adipose
Muscle
Bone
Segment
Water (%) x ± sd
Lipid (%) x ± sd
r
Up limb Lo limb Trunk Ext. Up limb Ext. Lo limb Ext. Trunk Int. Trunk Up limb Lo limb Trunk Up limb Lo limb Trunk
61.0 ± 8.6 60.7 ± 4.9 50.1 ± 9.3 47.2 ± 7.0 47.2 ± 6.6 21.0 ± 5.3 50.1 ± 10.6 75.4 ± 1.4 74.5 ± 2.7 73.8 ± 3.9 39.0 ± 8.2 39.5 ± 8.1 49.4 ± 2.4
4.6 ± 6.0 4.3 ± 1.4 10.2 ± 7.4 15.0 ± 7.0 15.6 ± 6.9 29.0 ± 7.3 19.0 ± 6.7 1.4 ± 1.0 3.1 ± 3.2 3.7 ± 2.3 10.9 ± 2.7 9.8 ± 1.9 7.7 ± 3.3
- 0.73 - 0.55 - 0.20 - 0.72 - 0.84* - 0.16 - 0.70 - 0.86* 0.16 - 0.70 - 0.84* - 0.71 - 0.20
x = mean, sd = standard deviation, r = Pearson correlation coefficient, *P<0.01.
Adipose tissues are masses separable by gross dissection and includes not just lipid but also the non-lipid constituents of cells, such as water and protein and of course the bulk of the subcutaneous AT and tissue surrounding organs, viscera and variable amounts between muscles, e.g. the intramuscular AT. These phenomena are known since the fifties and sixties and were reinforced recently [3,4]. In attempts to identify the physiologically relevant tissues, the concept of Lean Body Mass (LBM) was introduced more than half a century ago [11]. This consists of the Fat Free Mass (FFM) plus the essential fat whose specification has varied from 2 to 10% for the FFM. Because of its imprecise definition, Lean or LBM also has led to much confusion and is often erroneously used as a synonym for FFM. If we look at the mean value level of the respective variables in Table I there cannot be any doubt that both DXA and CT are producing anatomical-morphological quantities, not chemical as claimed by the DXA manufacturer. In addition DXA and CT do not take into account the water content and lipid content variations (Table II) of both its adipose and non adipose constituents. Small variation of tissue hydration may explain important differences of ad hoc estimates [12]. Both in CT and DXA, BF is calculated on the constancy assumption that 74.6% of LBM (e.g. Lean or Lean + BMC) is water [13]. This assumed constancy of hydration e.g. the observed ratio of total body water to FFM was confirmed in humans by Wang et al. [12]. However, this assumption is subject to some questions that highlight the need for more research on the matter. Viewing Tissue Water Content (TWC) obtained by lyophylisation in several tissues (Table II) one can make two observations: 1) assuming a constant % of water may be jeopardized by the variable TWC within and between
the tissues. and 2) water content in AT is highly variable e.g. ranging from ±21% to ±50% in this study. Constancies claimed by DXA and CT cannot be maintained (e.g. with fluid ranging between ±50 to ±61% for skin, between ±39 to ±49% for bone but little variability for muscle. Lipid content is expressed as % of the measured sample mass. From sample masses being identical for hydration and lipid fractionation (Table II) we learn that lipid content of tissues is variably related to its ad hoc fluid content. If the extremities are considered separately one notices an apparent constancy both in hydration and lipid fractionation. The fact that all trunk tissue data (e.g. in skin AT, muscle and bone) deviate both but non systematically in hydration and lipid content from the upper and lower extremities indicate the importance of the trunk as discriminating segment. As Elowsson et al. [7] and Provyn et al. [3] were previously evaluating the accuracy of DXA with dissection in animals also, both studies motivated the choice of using plain carcasses (decapitated pigs without abdominal and thoracic organs) or just hind legs to minimize various errors. According to Elowsson et al. [7] this would increase DXA’s underestimation. This can no longer be supported, on the contrary, not measuring the internal trunk will increase the error because of the assumption of segment constancy of hydration and ad hoc lipid fractionation. The segmental data presented in Table II dismisses the idea of constant hydration and the assumed ad hoc constancy of 0.73 cannot be retained. Regardless the existing mechanisms and regardless the hydration and lipid (fat) content of non-adipose tissue this study has not been able to detect what the content is of the DXA non-adipose variables, e.g. “Lean” and/or “Lean + BMC”. We still do not know what DXA is exactly measuring under these ad hoc headings. “Lean” compared with muscle tissue, with muscle plus skin tissue and with muscle plus skin plus viscera (dissection and CT) resulted in equally high correlations (r-values between 0.94 and 0.99) assuming a good prediction estimate but with systematic significant difference confirming its imprecision. “Lean plus BMC” is certainly not measuring ATFM (e.g. skin + muscle + viscera + bone) although its high r = 0.99, but again with a significant difference (p <0.001) indicating a lack of precision and accuracy. Contrarily to Bloebaum et al. [14] but in agreement with Louis et al. [15], BMC seems a good estimate (r = 0.73) with no significant difference of its ash weight. This study cannot confirm what the nonadipose component of DXA is measuring, but it does confirm that all the DXA components and the CT bone components are subject to measurement error, and to terminology error and violation of basic assumptions. In addition it is known that density in its weight/volume quantification (g/cm3) can be considered as a separate dimension of BC. The DXA-derived BMD, however, is a weight/surface quantification (g/cm2) and therefore not a true density, nor the density on which osteoporosis classifications were studied [2,16].
IFMBE Proceedings Vol. 29
A Macro-quality Field Control of Dual Energy X-Ray Absorptiometry with Anatomical, Chemical and Computed Tomographical Variables
In a pilot (dissection) study using porcine hind legs in which DXA BMD was compared with bone covered with muscle, AT and skin tissue and compared with scraped bones only [3,4] it was found that DXA BMD underestimates true density with more than 40%. In this study (Table I) under whole body conditions one notices a similar level of high underestimation of DXA but with a better correlation, e.g. r = 0.68 for the whole body value against r = 0.39 for the hind leg study. The extensive work done by Bolotin [17] shows DXA measured BMD methodology (in vivo) to be an intrinsically flaw and misleading indicator of bone mineral status and an erroneous gauge of relative fracture risk. The transfer of their findings to the in situ carcass situation of this study confirms that the DXA methodology cannot provide nor accurate, nor quantitative precise, not meaningful determinations of true bone densities and proper bone mass because of the contamination of independent soft tissue, e.g. fluid and lipid content contributions. V. CONCLUSIONS The acceptance and understanding of the DXA estimate quality rests solely upon a number of (multiconfirmed) in vivo and in situ significant high correlations. The hypothesis that DXA methodology provides accurate and clinically relevant BC determinations are proven to be unwarranted and misplaced. ACKNOWLEDGEMENT The authors are grateful for the assistance of the prosectors during dissection and for the technical assistance of P. Clerinx for the CT-calculations. REFERENCES [1] V.H. Heyward, “Evaluation of body composition. Current issues,” Sports Med., vol. 22, no. 3, pp. 146–156, 1996. [2] H.H. Bolotin, and H. Sievänen, “Inaccuracies inherent in dual-energy X-ray absorptiometry in vivo bone mineral density can seriously mislead diagnostic/prognostic interpretations of patient-specific bone fragility,” J. Bone Min. Res., vol. 16, no. 5, pp. 799–805, 2001. [3] S. Provyn, J.P. Clarys, J. Wallace, A. Scafoglieri, and T. Reilly, “Quality control, accuracy and prediction capacity of the dual energy X-ray absorptiometry variables and data acquisition,” J. Physiol. Anthropol., vol. 27, no. 6, pp. 317–323, 2008. [4] J.P. Clarys, S. Provyn, J. Wallace, A. Scafoglieri, and T. Reilly, “Quality control of fan beam scanning data processing with in vitro material,” in Transactions of 2008 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Singapore, pp 208–212, 2008. [5] J.L. Clasey, J.A. Kanaley, L. Wideman et al. “Validity of methods of body composition assessment in young and older men and women,” J. Appl. Physiol., vol. 86, no. 5, pp. 1728–1738, 1999.
553
[6] A.
Prentice, “Application of dual-energy X-ray absorptiometry and related techniques to the assessment of bone and body composition,” in Body composition techniques in health and disease, P.S.W. Davis, and T.J. Cole (eds), Cambridge University Press: England, 1995, pp 1–13. [7] P. Elowsson, A.H. Forslund, H. Mallmin, U. Feuk, I. Hansson, and J. Carlsten, “An evaluation of dual-energy X-ray absorptiometry and underwater weighing to estimate body composition by means of carcass analysis in piglets,” J. Nutr., vol. 128, no. 9, pp. 1543–1549, 1998. [8] J.P. Clarys, A.D. Martin, M.J. Marfell-Jones, V. Janssens, D. Caboor, and D.T. Drinkwater, “Human body composition: a review of adult dissection data,” Am. J. Hum. Biol., vol. 11, no. 2, pp. 167–174, 1999. [9] J.M. Bland, and D.G. Altman, “Statistical methods for assessing agreement between two methods of clinical measurements,” Lancet, vol. 8, no. 1, pp. 307–310, 1986. [10] A.D. Martin, V. Janssens, D. Caboor, J.P. Clarys, and M.J. Marfell-Jones, “Relationships between visceral, trunk and whole body adipose tissue weights by cadaver dissection,” Ann. Hum. Biol., vol. 30, no. 6, pp. 668–677, 2003. [11] A.R.Jr. Behnke, B.G. Feen, and W.C. Welham, “The specific gravity of healthy men. Body weight divided by volume as an index of obesity. 1942,” Obes. Res., vol. 3, no. 3, pp. 295–300, 1995. [12] Z. Wang, P. Deurenberg, W. Wang, A. Pietrobelli, R.N. Baumgartner, and S.B. Heymsfield, “Hydration of fatfree body mass: new physiological modeling approach,” Am. J. Physiol., vol. 276, no. 6, pp. E995–E1003, 1999. [13] R. Brommage, “Validation and calibration of DEXA body composition in mice,” Am. J. Endocrinol. Metab., vol. 285, no. 3, pp. E454–E459, 2003. [14] R.D. Bloebaum, D.W. Liau, D.K. Lester, and T.G. Rosenbaum, “Dual-energy X-ray absorptiometry measurement and accuracy of bone mineral after unilateral total hip arthroplasty,” J. Arthroplasty, vol. 21, no. 4, pp. 612–622, 2006. [15] O. Louis, P. Van Den Winkel, P. Covens, A. Schoutens, and M. Osteaux, “Dual-energy X-ray absorptiometry of lumbar vertebrae: relative contribution of body and posterior elements and accuracy in relation with neutron activation analysis,” Bone, vol. 13, no. 4, pp. 317–320, 1992. [16] E.M. Lochmüller, P. Miller, D. Bürklein, U. Wehr, W. Rambeck, and F. Eckstein, “In situ femoral dual-energy X-ray absorptiometry related to ash, weight, bone size and density, and its relationship with mechanical failure loads of the proximal femur,” Osteoporosis Int., vol. 11, no. 4, pp. 361–367, 2000. [17] H.H. Bolotin, “DXA in vivo BMD methodology: an erroneous and misleading research and clinical gauge of bone mineral status, bone fragility, and bone remodeling,” Bone, vol. 41, no. 1, pp. 138–154, 2007.
Corresponding author: Prof. J.P. Clarys, Dept. of Experimental Anatomy (EXAN-LK), Vrije Universiteit Brussel (VUB), Bldg B, Rm B037, Laarbeeklaan 103, B1090 Brussels, Belgium, Email: [email protected]; http://www.vub.ac.be/EXAN/
IFMBE Proceedings Vol. 29
Digital filter in hardware loop for on line ECG signal baseline wander reduction A. Petrenas1, V. Marozas2, S. Daukantas1, A. Lukosevicius1 1
Biomedical engineering institute, Kaunas University of Technology, Kaunas, Lithuania 2 Signal processing department, Kaunas University of Technology, Kaunas, Lithuania
Abstract— This paper presents the method for the ECG signal baseline wander reduction and prevention the analog to digital converter from saturation. The method is based on digital low pass filter in feedback loop. The digital filter estimates the ECG signal low pas variation (<0,5Hz) and via digital to analog converter feeds back the signal to reference input of instrumentation amplifier. The change of the signal voltage at the reference input is opposite in sign to the change of the signal at instrumentation amplifier input. Thus the baseline wander is significantly reduced. The method is suitable for on line processing of ECG signals, is resistant to high motion intensities and can be implemented using low power mixed signal microcontrollers. Keywords— low power ECG sensor, base line wander, motion artifacts, feedback loop.
I. INTRODUCTION
Baseline wander in electrocardiography (ECG) signal (slowly changing isoelectric line or artifact) can be caused by respiration or skeletal muscles, electrode impedance change and body movements [1]. It can lead to saturation of signal amplifier and analog to digital converter (ADC). Thus baseline wander makes manual or automatic analysis of ECG signal difficult and must be somehow reduced as early as possible in signal processing chain. The conventional method to reduce the drift of the ECG signal base line is to use signal inverting RC integrator (low pass filter) with differential amplifier in the feedback loop from the instrumentation amplifier output to its reference input [2,3]. However this method has disadvantage because RC integrator must have relatively large integration constant (>0,5s) which calls for additional reset circuit. The additional circuits increase the size and weight of the ECG sensor. The limitations of analog approach can be reduced by moving the integrator to digital domain. Digital to analog converter (DAC), which is inside the microcontroller, can be used to feed back some correction signal to reference input of the instrumentation amplifier [4]. In addition, digital implementation of integrator allows much more flexibility in its design. In order to increase effectiveness of base
line wander reduction we used the double integrator which can be interpreted as being two pole Gaussian filter. Gaussian filters show the least possible lag introduced into the signal. We present here the implementation and evaluation of this method.
II.
METHOD
A. Front end design A block diagram of a electrocardiograph baseline wander reduction stage is shown in Fig. 1. It consists of 1 instrumentation amplifier (INA333, Texas Instruments Inc.) as the first stage of amplification and base line regulation, the second stage amplifier for additional subtraction of estimated base line wander from ECG signal and for setting the signal midline (1/2 Vcc), 1 operational amplifier (OPA333, Texas Instruments Inc.) for low pass aliasing filter and microcontroller (MSP430F618, Texas Instruments Inc.) with two ADC inputs (12bits resolution) and two integrated DAC’s (12bits resolution).
Fig. 1. Block diagram of ECG front end for reduction of base line wander
The signal coming out of instrumentation amplifier and sensed at ADC1 is composite: low pass part (<0,5Hz) represents mainly baseline wander, while higher frequencies- ECG signal. The purpose of digital low pass filter is to select from the composite signal spectrum the lower part i.e. to estimate the base line wander. This part of the signal is fed via DAC1 output back to instrumentation amplifier’s
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 554–557, 2010. www.springerlink.com
Digital Filter in Hardware Loop for On Line ECG Signal Baseline Wander Reduction
555
reference input to automatically reduce the base line variation. In addition, the signal from DAC0 is fed to negative input of summator and thus by direct subtraction reducing base line wander in the ECG signal even further.
Sampling frequency is F = 1 T = 500 Hz thus the difference equation can be expressed as:
B. Digital filter design.
Control theory says that system function of control loop can be written as:
y[n ] = 0,0063x[n ] + 0,9937 y[n − 1]
Here we will show the design of digital low pass filter. The simple analog RC integrator was used as the model for digital filter design. Its system function is:
1 RC s + 1 RC
H ( s) =
(1)
By substitution of like terms we can obtain the digital filter system function:
z RC z + e −T RC
H [z] =
(2)
Here T – sampling period. Simplification
[1 − α ] = e −T
H [z] =
RC
z − [1 − α ]
z −1 1 + z −1 H ( z ) H ( z )
(10)
If we insert (3) into (10), we get final system function of the system:
z −1 + 2(α − 1) z −2 + (α − 1) 2 z −3 H loop ( z ) = (10) 1 + (α 2 + 2α − 2) z −1 + (α − 1) 2 z − 2 The magnitude frequency response of the system was found using Matlab and is shown in Fig. 3. C. Simulink model.
leads to:
αz
H loop ( z ) =
(9)
(3)
The ECG front end system behavior was analyzed by using simplified Simulink model (see Fig. 2):
And the frequency response of the digital filter can be written as:
H [e jωT ] =
αe jωT e jωT − [1 − α ]
(4)
The cut off frequency of such filter is the point at which the amplitudes of the two terms in denominator are equal:
e −ωT = [1 − α ]
(5) Fig. 2. Simulink block diagram of ECG front end for reduction of base line wander
Thus the cut off frequency can be estimated as:
ω=−
ln(1 − α ) T
Our desired cut off frequency is stant α can be expressed as:
(6)
f = 0,5Hz thus the con-
α = 1 − e −2πfT
(7)
The difference equation of digital integrator can be derived from system function in (3):
y[n] = αx[n] + (1 − α ) y[n − 1]
(8)
Two types of signals were used in the investigation: a) synthetic composite signal consisting from train of 0,1s pulse width rectangular pulses added to low frequency (0,05Hz) sine signal (base line wander), b) approximately 2 min long real ECG signal recorded with developed ECG sensor but disconnected feedback loop. Loop disconnection leads to DC coupled input. The simplified Simulink model models the loop: instrumentation amplifier, digital filter, reference input of instrumentation amplifier. The digital filter is a cascade of two equal filters. Block “Unit Delay1” is used in order to make the model realizable.
IFMBE Proceedings Vol. 29
556
A. Petrenas et al.
III.
REZULTS
The frequency response of the base line wandering reduction loop is shown in Fig. 3. It can be clearly observed that the ECG processing loop has a high pass character. Thus it is able to reduce the low frequency base line wander and to protect ADC from saturation. Magnitude frequency response Magnitude K, dB
2 0 -2 -4 -6 -1
10
0
10
1
10
2
10
The upper graph shows ECG signal with significant variations of the signal base line. The second graph shows the signal which can be seen in instrumentation amplifier output. The third graph represents estimated and inverted ECG baseline wander. Finally, the fourth graph shows the conditioned ECG signal which is later amplified and filtered with antialiasing filter. Figure 5 shows ECG processing front end behavior in case of synthetic signal. European standard EN 60601-2-51. 2003 [5] recommends testing ECG front end circuit with rectangular pulses of 100ms duration and 3 mV in amplitude. These pulses should not produce an offset on the ECG record from the isoelectric line. Results (see Fig. 5) show significant reduction of low frequency sine line representing base line variation. The negative offset produced by the circuit is not significant.
log frequency, Hz
Fig. 3. Digital integrator magnitude frequency response characteristic
Figure 4 shows an illustration of the circuit behavior during 2 min time period of ECG signal registration.
Fig. 5. Illustration of intermediate results in ECG signal processing chain Fig. 4. Illustration of intermediate results in ECG signal processing chain
The proposed ECG base line wander reduction approach was implemented in ECG sensor for exercise monitoring (Fig. 6). The sensor has embedded algorithm for estimation
IFMBE Proceedings Vol. 29
Digital Filter in Hardware Loop for On Line ECG Signal Baseline Wander Reduction
of RR intervals. BlueTooth wireless link is used for telemetry of RR intervals and one lead raw ECG signal to remote devise.
a)
b)
Fig. 6. The developed ECG sensor for exercise monitoring: a) inner electronics, b) comparison setup with Polar’s WearLink+ W.I.N.D. sensor [6]
In order to check effectiveness of the proposed solution during intensive exercising, the developed sensor was compared with Polar’s WearLink+ W.I.N.D. sensor. The proposed method helped to keep the ECG signal inside the ADC range and resulted in less RR interval estimation errors (Fig. 7):
557
The drawback may be compensated by increasing the time constant of digital integrator (decreasing α). However this measure leads to larger lag of the filter and thus decreases responsiveness to sudden variations in the base line. It is argued in [7] that the usage of high resolution (>20 bits) ADC in the ECG acquisition system enables DC coupling and all the high pass filtering can be avoided. However, most of existing low power microcontrollers has 16 bit data bus, 32 bits multiplier. Thus we believe that with today’s microcontroller technology our approach is better. The main advantage of the proposed circuit – it keeps the signal steady in the middle of ADC range even at high motion intensity during exercises. This is very important as no signal processing could help after signal registration with saturated ADC. Further digital processing may be used for high fidelity reduction of ECG signal base line wandering, for example, using these methods [8, 9].
ACKNOWLEDGMENT This work was partially supported by Lithuanian State Science and Studies foundation, project VitaActiv.
REFERENCES 1. 2. 3. 4.
Fig. 7 The illustration of comparison results between developed ECG sensor (VitaActiv) and Polar’s Wearlink+ W.I.N.D. sensor
IV. DISCUSSION AND CONCLUSIONS
5. 6. 7.
8.
The online method for ECG signal base line wander reduction and ADC saturation prevention in low power voltage miniaturized sensor is proposed. The method is based on digital filter replacing the analog integrator in the feedback loop. Even simple low power microcontroller is able to do the processing in real time. The performance of the proposed ECG front end is satisfactory as it keeps the base line steadily in the middle of 12bits ADC range (see the lower graph in Fig. 4). The main drawback of the approach is reduced amplitudes in lower part of ECG signal spectrum.
9.
Rajendra Acharya U, Jasjit S. Suri, Jos A.E. Spaan, S .M. Krishnan. Advances in Cardiac Signal Processing. Springer, 2007. Raju.M. Heart-Rate and EKG Monitor Using the MSP430FG439. Application note at http://focus.ti.com/lit/an/slaa280a/slaa280a.pdf INA 333 Micro-Power, Zero-Drift, Rail-to-Rail Out Instrumentation Amplifier data sheet at http://focus.ti.com/lit/ds/symlink/ina333.pdf Bosch E.; Hartmann E. ECG Front-End Design is Simplified with MicroConverter. Analog Dialogue 37 – 11, 2003. European Standard EN 60601-2-51. 2003 Polar WearLink+W.I.N.D, http://www.polar.fi/usen/products/accessories/WearLink_transmitter_WIND R. Abächerli, H. Schmid. Meet the challenge of high-pass filter and ST-segment requirements with a DC-coupled digital electrocardiogram amplifier. Journal of Electrocardiology, 2009:574-579. M.Mneimneh, E. Yaz,M. Johnson, and R. Povinelli, “An adaptive Kalman filter for removing baseline wandering in ECG signals,” in Proc. 33rd Annu. Int. Conf. Comput. Cardiol., 2006:253–256. S. Hargittai. Efficient and fast ECG baseline wander reduction without distortion of important clinical information, Computers in Cardiology, 14-17 Sept. 2008: 841-844. Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Vaidotas Marozas Kaunas university of technology Studentu 65-107 Kaunas Lithuania [email protected]
Assessment of a Patient-Specific Silicon Model of the Human Arterial Forearm K. Van Canneyt, F. Giudici, P. Segers, and P. Verdonck Institute Biomedical Technology – bioMMeda, Ghent University, Ghent, Belgium Abstract— The complex branching topology of the local vascular bed contributes to the complex nature of the blood pressure and flow in the human forearm. The aim of this work is to develop a full scale hydraulic bench model as a research and validation tool of a computer model of the vasculature of the arm. The work fits within the framework of research into arterio-venous fistula creation. A silicon 3D-model of the brachial, ulnar, radial and anterior interosseous artery completed with the palmar arch was constructed in full scale. The geometry was based on patient-specific functional measurements and MR-data. The patient-specific arterial model was built in a mock loop consisting of an upstream reservoir, a pulsatile pump, a windkessel and variable resistances downstream. Pressure profiles at seven locations throughout the model were assessed. Wave Intensity Analysis (WIA) was performed to assess wave reflection patterns in the model. A realistic WIA pattern at the brachial inlet was found. The in-vitro model shows the complexity of the wave reflections and can be used as validation tool for a computational model.
AVFs, using native vessels, have without doubt the best long-term outcomes and lowest complication rates. AVF function depends on several patient-specific factors, such as vessel disease status, presence of stenoses, presence of accessory veins on the venous outflow, and intra-operative blood flow rate: for this reason, predicting VA maturation and the onset of long-term complications, or even planning intervention, has proven challenging. Within this context, a European Seventh Framework project, ARCH project (Project n. 224390), has been created. The ARCH project has the goal to build patient-specific image-based computational modelling tools and an ICT service infrastructure for surgical planning and assistance in the management of complications arising from VA creation in patients requiring chronic HD treatment. The general purpose is to improve treatment quality and to decrease the costs of AVF creation and management (Fig. 1).
Keywords— Experimental model, vascular access, wave intensity analysis, validation.
I. INTRODUCTION More than 500,000 End-Stage Renal Disease (ESRD) patients in Europe live by chronic renal replacement therapies and similar numbers are reported for the USA. The global maintenance dialysis population was reported to be over 1.1 million patients in 2001, with an estimated increasing rate of 7%/year [1]. A United States Renal Data System (USRD) statistic projection for 2020 forecast an ESRD population of more than 700.000 patients only in the USA. Successful hemodialysis (HD) treatment critically depends on the availability of a vascular access (VA) that can achieve a blood flow in excess of 350 ml/min. As there is no such peripheral and accessible blood vessel in the human body, it has to be created in an artificial way. The three main options in vascular access for HD are the arteriovenous fistula (AVF), the arterio-venous graft (AVG), and central venous catheters (CVC). VA, however, remains the Achilles heel of HD. Each year in Europe more than 90,500 new VA surgical procedures, around 90,000 replacements and 298,000 interventions are required to start or to prolong a successful HD treatment [2].
Fig. 1 ARCH-project overview The core of the project consists in the creation of computational models to simulate hemodynamic changes induced by AVF surgery and long term vascular and cardiac adaptation [3]. This manuscript describes a full scale hydraulic bench model, consisting of a patient-specific pre-operative silicone model of the arm arterial bed. This model can be used for the experimental validation the described computational model.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 558–561, 2010. www.springerlink.com
Assessment of a Patient-Specific Silicon Model of the Human Arterial Forearm
559
II. MATERIAL AND METHODS A. Construction of Experimental Model The experimental in-vitro model was based on patientspecific data used in the computational model of Huberts et al. [4]. The 2D-data (vessel diameters and lengths) was complemented with 3D-data (angles of bifurcations) by use of MR-images of the same patient. With this complete dataset a 3D CAD-model was created with a geometry as shown in Fig. 2. An STL-file of the geometry was generated, which served as input to a Rapid Prototype (RP) printer. From the RP-model, a full scale silicon Patient-specific Arterial Model (PAM) could be constructed. This silicon model was then built in a mock loop (Fig. 3). The circuit consisted of a reservoir connected to a pulsatile pump (Pulsatile blood pump 1423; Harvard, US) which first pumps the blood to a windkessel. This windkessel, with one exit to the PAM and one directly to the reservoir, mimics the global compliance and resistance of the arterial network. The flow through the PAM was guided over outflow resistances towards the outflow reservoirs. B. Collection of Boundary Conditions The inflow profile was measured during all measurements by use of a Transonic flow meter (TS420; Transonic Systems Inc., Ithaca, NY, USA), With the probe (Transonic Flow probe ME16PXL202) placed between the windkessel and the model inlet (proximal brachial artery). The signal of the flow meter was sampled (using a data-acquisition (DAQ) unit (SC-2345; National Instruments, Austin, Texas, USA) at 1000 Hz. The acquisition software was an in-house program written in LabVIEW 7.0 express (2003) (National Instruments Inc., Austin, Texas, USA). The inflow profile is shown in Fig. 4.
Fig. 3
Mock loop consisting of a reservoir, pump, windkessel, the PAM, variable resistances and outflow reservoirs (Inter, f1-4)
The used fluid was a mixture of water and glycerin with a dynamic viscosity of 4.38 mPa.s measured with a capillary Ubbelohde viscometer (Schott, Germany) and a density of 1115 kg/m³ measured with a constant volume flask at a temperature of 20°C. The outflow distribution was measured using the outflow reservoirs to collect the flow for at least 30 minutes. Distensibility of the model was calculated from the measured pressure profile and the measured diameter change at the same location for one cardiac cycle. The diameter changes were obtained using a dedicated walltracking algorithm [5] based on radiofrequent ultrasound signals (Vivid 7 PRO, GE VingMed Ultrasound, Horten, Norway), using a 10–12 MHz linear array vascular probe (12L). The pressure used for this calculation was measured using the method described below (output pressure profiles). In Table 1, the compliance (C) and the distensibility coefficient (DC) of the different arteries, calculated with equation 1 and 2 respectively, is shown. (CSA: CrossSectional Area)
Fig. 2 3D CAD-model as base for Rapid Prototyping IFMBE Proceedings Vol. 29
560
K. Van Canneyt et al.
C = (CSAsyst −CSAdiast ) (Psyst − Pdiast ) DC =
(CSA
syst
−CSAdiast )
CSAdiast
(P
syst
(1) (2)
− Pdiast )
same DAQ unit as described for the inflow measurement. It was verified that the frequency response of the measuring system was sufficiently high to allow for reliable measurements. Table 2 In-vitro pressure measurement locations
Table 1 Distensibility of silicon model
Pressure measurement locations Compliance
Brachial artery Radial artery Ulnar artery
[cm²/mmHg] 4.84 E-04 1.68 E-04 1.51 E-04
Distensibility Coefficient [1/mmHg] 2.46 E-03 2.09 E-03 2.64 E-03
Proximal brachial artery Distal brachial artery Proximal radial artery Distal radial artery Proximal ulnar artery Middle ulnar artery Distal ulnar artery
Exact location 1 cm distal to inlet 1 cm proximal to brachial-radialulnar bifurcation 1 cm distal to brachial-radial-ulnar bifurcation 4.1 cm proximal to palmar arch 1cm distal to brachial-radial-ulnar bifurcation 1cm distal ulnar-interosseous bifurcation 3.8 cm proximal to palmar arch
D. Wave Intensity Analysis The technique of Wave Intensity Analysis (WIA) was performed to assess hemodynamics and wave reflection patterns. WIA is a mathematical technique for evaluating wave transmission in the cardiovascular system; it allows a time domain approach to the interpretation of arterial hemodynamics, as an alternative to a frequency domain approach based on the arterial input impedance analysis [6] [7]. In the WIA approach, waveforms are considered as a superposition of successive wavelets, described as the changes in pressure and velocity during a sampling period. Fig. 4 Flow measurement at the proximal brachial artery
C. Measurements of Output Pressure Profiles After the patient-specific model was build and all boundary conditions were set and measured, the output pressure waves were measured using fluid-filled catheters connected to disposable pressure transducers (DT-X Plus; Becton Dickinson, USA). The measurements were performed at seven different locations along the model (brachial proximal, brachial distal, radial proximal, radial distal, ulnar proximal, ulnar middle – after the interosseous bifurcation, and ulnar distal), as described by Table 2, sliding a fluid filled (epidural) catheter (Portex Epidural Catheter; Smiths Medical ASD, inc. Keene, USA) through the model and placing the catheter tip each time at one of the seven fixed reference points, permanently marked on the model surface. The signal of the pressure transducers was connected to the
Fig. 5 Pressure
measured at the proximal brachial artery (black), at distal radial artery (grey) and at distal ulnar artery (dotted)
IFMBE Proceedings Vol. 29
Assessment of a Patient-Specific Silicon Model of the Human Arterial Forearm
561
measured and pressure profiles were assessed at seven locations. Anticipated WIA patterns were found in the brachial artery: the contraction of the heart first generates an initial FCW which is distally reflected in a BCW. This reflected wave causes the pressure amplification along the arm arteries. Our data also display the existence of forward and backward running expansion waves later in the heart cycle. In our model, these are probably caused by a negative reflection of the BCW on the upstream windkessel. It remains to be shown whether these patterns also reflect human forearm hemodynamics. Nonetheless, these results open the way for full experimental validation of the computer model and study of more complex phenomena as arterio-venous fistula creation in the human arm. Fig. 6 Wave intensity at brachial artery for the experimental model. Forward compression waves (FCW), backward compression waves (BCW), forward expansion waves (FEW) and backward expansion waves (BEW) are highlighted
ACKNOWLEDGMENT
WIA allows the calculation of wave intensity dI, defined as power per unit area, from the changes in pressure and flow velocity (dI = dP dU), and it allows the separation of forward and backward travelling pressure and flow waves in the arterial system.
The research leading to these results has received financial support from the European Community's Seventh Framework Program (FP7/2007-2013: ARCH, Project n. 224390). Ms. F. Giudici was supported by an Erasmus scholarship and worked in collaboration with the Laboratory of Biological Structure Mechanics, Politecnico di Milano, Milan, Italy.
III. RESULTS
REFERENCES
In Fig. 5, the pressure profiles for the proximal brachial, the distal radial and the distal ulnar artery are shown. It can be seen that the pressure profiles have a realistic shape; the profile consists of a first peak added with one to two reflection peaks. The pulse pressure (PP) (maximum minus minimum) rises from 67.9 mmHg for the proximal brachial artery to 72.5 mmHg for the distal brachial artery. The PP in the distal radial and ulnar artery even rises to 75.9 mmHg and 74.3 mmHg, respectively. The experimental wave intensity calculated by the WIA at the proximal brachial artery is shown in Fig. 6. The signal consists of a forward compression wave (FCW) generated by the compression of the heart, which is reflected distally as a backward compression wave (BCW). Open end reflections in the windkessel and relaxation of the pulsatile pump produce a forward expansion wave (FEW) with its again reflected backwards as an expansion wave (BEW).
IV. CONCLUSIONS We constructed a model of the human forearm based on a patient-specific geometry. All boundary conditions could be
1. Lysaght MJ (2002) Maintenance dialysis population dynamics: Current trends and long-term implications. J Am Soc Nephrol 13(Suppl 1): S37–S40 2. Tordoir JH, Keuter X, Planken N et al. (2006) Autogenous options in secondary and tertiary access for haemodialysis. Eur J Vasc Endovasc Surg 31(6): 661-666 3. Huberts W, Bosboom M, Bode A et al. (2009) Computational simulations of vascular access. EVC textbook: Vasc. Access: 51-63 4. Huberts W, Bosboom EMH, Van De Vosse FN (2009) A lumped model for blood flow and pressure in the systemic arteries based on an approximate velocity profile function. Math Biosc Eng 6: 27-40 5. Rabben SI, Bjaerum S, Sorhus V, Torp H (2002) Ultrasound-based vessel wall tracking: An auto-correlation technique with RF center frequency estimation. Ultrasound Med Biol 28(4): 507-517 6. Zambanini A, Khir AW, Byrd SM, Parker KH et al. (2002) Wave intensity analysis: a novel non-invasive method for determining arterial wave transmission. Computers in Cardiology (2002) 29:717-720 7. Avolio A, Westerhof BE, Siebes M, Tyberg JV (2009) Arterial hemodynamics and wave analysis in the frequency and time domains: an evaluation of the paradigms. Med. Biol. Eng. Comput. 47: 107-110 Author: Koen Van Canneyt Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Institute Biomedical Technology, Ghent University De Pintelaan 185, block B Ghent Belgium [email protected]
Numerical Investigations of the Strain-Adaptive Bone Remodeling in the Prosthetic Pelvis A. Bouguecha1, I. Elgaly1, C. Stukenborg-Colsman2, M. Lerch2, I. Nolte3, P. Wefstaedt3, T. Matthias1 and B.-A. Behrens1 1
Institute of Metal Forming and Metal-Forming Machines, Leibniz Universität Hannover, Garbsen, Germany 2 Department of Orthopedics, Hannover Medical School, Hannover, Germany 3 Small Animals Clinic, University of Veterinary Medicine Hannover, Hannover, Germany
Abstract— Bone remodeling due to stress shielding is a major cause of hip implants aseptic loosening. Previously, investigations on this phenomenon were based mainly on clinical observations. Currently, the finite element method (FEM) has been established as a reliable and efficient computing method to examine stress shielding after total hip arthroplasty and the related bone remodeling in the prosthetic Femur. The life of hip implants is not only limited by the loosening of the prosthesis stem, but also frequently affected by the loosening of acetabular components caused - among other factors - by resorption of the bone surrounding the prosthesis due to stress shielding. However, only few numerical research studies, which focus on the FE computation of the strainadaptive bone remodeling in the prosthetic pelvis, have been published. The aim of the work presented here is to estimate the changes in the physiological load distribution and the resulting bone remodeling in a pelvis provided with a cemented acetabular prosthesis component (cup) based on FEM. Therefore, a three-dimensional FE model of the left intact pelvis half was initially reconstructed based on CT data of a male patient with 85 kg weight. In a further step, a FE model of the prosthetic pelvis was built. The anchoring of the considered acetabular prosthesis component was carried out using a homogeneous cement layer of 2 mm thickness. Then the change of the apparent bone density in the prosthetic pelvis was calculated by means of FEM and the strain-adaptive bone remodeling in the acetabular region was analyzed. The analysis of the numerical results reflects significant estimated bone loss, especially in the acetabular limbus adjacent to the cup. As a result, loosening of the cemented prosthesis component can be anticipated. Keywords— bone remodeling, pelvis, cemented cup, FEM I. INTRODUCTION
For the treatment of severe degenerative or traumatic damages of the hip joint, total hip arthroplasty (THA) has become a well-proven and prevalent practice [1]. Nevertheless, bone remodeling due to stress shielding is a major cause of aseptic implant-loosening [2]. This loosening makes revision surgery in most cases inevitable
[3]. In the past, investigations on the stress shielding after total hip arthroplasty (THA) were based mainly on clinical observations. Currently, the finite element method (FEM) has been established as a reliable and efficient computing method to examine this phenomenon and the related bone remodeling in the prosthetic femur [4-6]. The life of hip implants is not only limited by the loosening of the prosthesis stem, but also frequently affected by the loosening of acetabular components (cup) [7-11]. According to several researchers [2, 4, 5, 6, 12], the implant loosening is caused, among other factors, by resorption of the bone surrounding the prosthesis due to stress shielding. However, to our knowledge, only few numerical investigations focused on the FE computation of the strain-adaptive bone remodeling in the prosthetic pelvis. For example, cementless cups have been investigated numerically using two-dimensional FE models [12]. The aim of the work presented here is to estimate the changes in the physiological load distribution and the resulting bone remodeling in a pelvis provided with a cemented acetabular prosthesis component (cup). These investigations are carried out based on FEM applied on 3D models. II. MATERIALS AND METHODS
A. FE model In order to compute the strain-adaptive bone remodeling in the acetabulum provided with a cemented cup, a closed surface STL (Surface Triangulation Language) model of the left half of the pelvis based on CT data of a male patient with 85 kg weight was initially reconstructed by means of the 3D medical image processing and editing software Mimics (Materialize, Leuven, Belgium). The pelvis solid model was meshed using ten-noded tetrahedral elements via the preprocessor HyperMesh (Altair Engineering GmbH, Böblingen, Germany). To build the FE model of the pelvis/cup assembly, the region in which the cup is implanted, was edited in the
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 562–565, 2010. www.springerlink.com
Numerical Investigations of the Strain-Adaptive Bone Remodeling in the Prosthetic Pelvis
563
reconstructed model of the intact natural pelvis simulating the surgical procedures, and the cemented cup was then assembled as shown in Fig. 1. The anchoring of the considered acetabular prosthesis component was carried out using a homogeneous cement layer of 2 mm thickness.
Fig. 2 Fig. 1
FE model of the pelvis/cup assembly
B. Strain-adaptive bone remodeling
The properties of the bone structure in the FE model were defined as follows: First, the apparent bone density (ABD) was calculated for each FE element of the pelvis based on the measured Hounsfield Unit (HU), as given by Eq. 1 in [13]:
U
0.114 0.916 10 3 HU
(1)
The correlation function between the Young’s modulus of the bone and the ABD in each tetrahedron is given by Eq. 2 [14]: E
3790 U 3
The evolution of the hip joint force during the whole gait cycle [19]
(2)
The cup material was defined as an ultra high molecular weight polyethylene (UHMWPE) polymer. Each of cement and cup was modeled as homogenous and isotropic material. The Young’s modulus was taken as E = 1500 MPa for UHMWPE [15] and as E = 3200 MPa for the considered cement [16]. The Poisson’s ratios of both materials are assumed to be Ȟ = 0.3. For the loading situation, the whole gait cycle (the most frequent dynamic activity of a patient after THA [17]) was examined. The pelvis constraints are taken from [12, 18]. The evolution of the hip contact forces during the whole gait cycle, as shown in Fig. 2, are obtained from the clinical investigations of Bergmann et al. [19]. In this way, the effect of muscles and ligaments around the hip joint were taken into account without having to model the muscle and ligament group according to the assumption of Bachtal et al. [18].
The Principle of the FE calculation of the bone remodeling is shown in Fig. 3. The physiological load distribution in the intact pelvis under the considered loading history was computed in one cycle. For this single cycle, the strain energy density D was calculated according to Eq. 3: D
1 T V H 2
(3)
Where: H represents the strain vector and VT the transposed stress vector. From these vectors, the strain energy per unit of mass S is determined using Eq. 4: D
S
(4)
U
These results serve as reference data to compute the strain-adaptive bone remodeling. After THA, some changes in the distribution of the physiological load in the prosthetic pelvis were noticed [12]. The stimulus [ for the bone remodeling is defined by the ratio of the strain energy per unit of mass in the prosthetic pelvis Spro to that in the natural intact one Sref (Eq. 5). [
S pro Sref
100%
(5)
The changes in the ABD in the pelvis after THA were determined according to Eq. 6:
IFMBE Proceedings Vol. 29
U ( n 1)
U ( n ) U 't
(6)
564
A. Bouguecha et al.
Where: U(n) and U(n+1) represent the prosthetic ABD in the current (n) and next computing step (n+1), U the ABD evolution rate and 't the used time increment. Furthermore, the value of the new Young’s modulus in the considered bone structure was calculated using Eq. 2. This iterative simulation of the strain-adaptive bone remodeling is ended when the change in the bone mass converges.
Fig. 4
Bone adaptation law
III. RESULTS
In Fig. 5, the progress of the bone mass loss in the prosthetic pelvis is presented. The initial state in the simulation (computation step 1) corresponds to the medical situation directly after THA, while the stationary final state (computation step 31) representing the result of the FE computation after reaching convergence corresponds to the clinical long-term situation. According to our numerical calculations, a total bone mass loss of 3.6% in the whole prosthetic pelvis can be expected (Fig. 5).
Fig. 3
The principle of the FE calculation of the bone remodeling
In order to calculate the strain-adaptive bone remodeling in the prosthetic pelvis (provided with a cemented cup) by quantifying the change in the apparent bone density, the bone adaptation law (Fig. 4) presented in our recent publications [6, 20] was implemented in the FE solver MSC.MARC (MSC.SOFTWARE Corp., Santa Ana, USA). This bone adaptation law allows the calculation of the ABD evolution rate as a function of the bone remodeling stimulus (Eq. 7). U
f [
(7)
The value for the dead zone, where changes in the load situation – compared to that in the natural pelvis - do not cause bone remodeling, was defined in this study as z = 75 %. The chosen threshold level for severe overloading ([ > y) causing necrosis in the acetabular bone structure and hence decrease in the ABD (bone resorption) was set to y = 400 %.
Fig. 5
Progress of the bone mass loss in the prosthetic pelvis over the computation steps
For a better localization of the strain-adaptive bone remodeling, a comparison between the initial and the final ABD distributions is represented by frontal cross-sectional view through the midsection of the cup at the, as shown in Fig. 6. It is clear that, due to the change in the load situation compared to that in the intact natural pelvis, the ABD decrease leads to a considerable bone resoption in the acetabular limbus adjacent to the cemented cup. In this region, the most force transmission from the implant into the bone occurs.
IFMBE Proceedings Vol. 29
Numerical Investigations of the Strain-Adaptive Bone Remodeling in the Prosthetic Pelvis 4.
5. 6.
7.
8.
Fig. 6
Post-convergence distribution of the ABD in the prosthetic pelvis
IV. CONCLUSIONS
10.
In this study, a three-dimensional FE model of a pelvis provided with a cemented cup is presented. The numerical simulation of the strain-adaptive bone remodeling in the acetabular region considers elastic properties modification due to ABD changes. The FE calculations show significant bone loss especially in the acetabular limbus adjacent to the cup. This result confirms the findings of clinical observations [9] and proves that bone remodeling is a relevant factor causing loosening of the cemented cup. Verification of the built models and validation of the FE calculations through DEXA investigations are being in scope of our current research work.
The investigations are a part of the subproject D6 performed within the framework of the Collaborative Research Centre 599 “Sustainable degradable and permanent implants out of metallic and ceramic materials”. The authors would like to thank the German Research foundation (DFG) for the financial support.
REFERENCES
2. 3.
11.
12. 13. 14. 15. 16.
ACKNOWLEDGMENT
1.
9.
Fritz ST, Volkmann R, Winter E and Bretschneider, C (2001): The BICONTACT hip endoprosthesis- a universal system for hip arthroplasty for traumatic and idiopathic indications results after 10 years. European J. Trauma, E-Supplement 2001, 1, 18-22 Huiskes R and van Rietbergen B (1995): Preclinical Testing of Total Hip Stems. Clinical Orthopaedics and Related Research, 319, 64-76 Havelin LI, Espehaug B, Vollset SE and EngesaeterLB (1995): Early aseptic loosening of uncemented femoral components in primary total hip replacement. A review based on the Norwegian Arthroplasty Register. J. Bone Joint Surg, Br 77/1, 11-17
17. 18. 19. 20.
565
Engh CA and Amis AA (1999): Correlation between pre-operative periprosthetic bone density and post-operative bone loss in THA can be explained by strain-adaptive remodelling. J. Biomechanics, 32, 695-703 Kuiper JH and Huiskes R (1997): The predictive value of stress shielding for quantification of adaptive bone resorption around hip replacements. J. Biomech. Eng. 119, 228-231 Behrens B-A, Wefstaedt P, Nolte I, Stukenborg-Colsman C, Bouguecha A (2009): Numerical investigations on the strain-adaptive bone remodelling in the periprosthetic femur: Influence of the boundary conditions. BioMedical Engineering OnLine 2009, 8:7 Garcia-Cimbrelo E, Diaz-Martin A, Madero R and Munera L (2000): Loosening of the cup after low-friction arthroplasty in patients with acetabular protrusion. The importance of the position of the cup. J. Bone Joint Surg, Br 82/1, 108-115 Grant P and Nordsletten L (2004): Total hip arthroplasty with the Lord prosthesis. A long-term follow-up study. J Bone Joint Surg, Am86, 2636-2641 Shetty NR, Hamer AJ, Kerry RM, Stockley I, Eastell R and Wilkinson JM (2006): Bone remodelling around a cemented polyethylene cup - A longitudinal densitometry study. J. Bone Joint Surg, Br 88/4, 455-459 Laursen MB, Nielsen PT and Søballe K (2007): Bone remodelling around HA-coated acetabular cups - A DEXA study with a 3-year follow-up in a randomised trial. Int. Orthopaedics, 31/2, 199–204 Wright JM, Pellicci PM, Salvati EA, Ghelman B, Roberts MM and Koh JL (2001): Bone density adjacent to press-fit acetabular components. A prospective analysis with quantitative computed tomography. J Bone Joint Surg, Am83, 529–536 Levenston M, Beaupré G, Schurman D and Carter D (1993): Computer simulations of stress-related bone remodeling around noncemented acetabular components. J. Arthroplasty, 8/6, 595-605 Rho J Y, Hobatho M C and Ashman R B (1995): Relations of mechanical properties to density and CT numbers in human bone. Medical Engineering and Physics, 17, 347-355 Carter D R and Hayas W C (1977): The compressive behaviour of bone as a two-phase porous structure. J of Bone & Joint surgery, Am59, 954-962 Eyerer P (1986): Kunststoffe in der Gelenkendoprothetik. Z. Werkstofftech., 17/10, 384-391 Kuehn KD, Ege W and Gopp, U (2005): Acrylic bone cements: mechanical and physical properties. Orthop. Clin. Am.36, 29-39 Morlock M, Schneider E, Bluhm A, Vollmer M, Bergmann G, Müller V and Honl M (2001): Duration and frequency of every day activities in total hip patients. J. Biomechanics, 34, pp 873-881 Bachtar F, Chen X, Hisada T (2006): Finite element contact analysis of the hip joint. Med. Bio. Eng. Comput., 44, 643–651 Bergmann G, Deuretbacher G, Heller M O, Graichen F, Rohlmann A, Strauss J and Duda G N: Hip contact forces and gait patterns from routine activities. J. Biomechanics, 2001, 34, pp 859-871 Behrens B-A, Nolte I, Wefstaedt P, Stukenborg-Colsman C, and Bouguecha A (2009): Femoral load change and caused bone remodeling after hip arthroplasty. 11th International Congress of the IUPESM, 07.-12.09.2009, Munich
Author: Institute: Street: City: Country: Email:
IFMBE Proceedings Vol. 29
Anas Bouguecha Institute of Metal Forming and Metal-Forming Machines An der Universität 2 Garbsen Germany [email protected]
Development of a System Dynamics Model for Cost Estimation for the Implantation and Revision of Hip Joint Endoprosthesis J. Schröttner and A. Herzog Institute of Health Care Engineering, Graz University of Technology, Graz, Austria
Abstract— The increasing population, particularly regarding the proportion of people over the age of 60 causes new challenges to public health care systems. Simulations of existing health care models indicate that especially orthopedic and arthritic conditions show a considerable impact of population aging on total health care costs. To demonstrate the estimated total amount of costs, a model was developed in this work, which allows simulating the costs regarding hip joint endoprosthesis. The benefit of this model will be the chance to compare the results of different simulation scenarios to each other, with regard to their economic effectiveness. The model simulates the number of implantations as well as revisions of total and part hip endoprosthesis and the incurred costs, based on real statistical data, until the year 2040. For modeling the system dynamic technique was used. To calculate the distribution of the population with a hip implant the parameters rate of implantation, type of implant, mortality and the rate of revision have been considered. The model incorporates 898 elements of which 133 represent adjustable variables. As a result of these properties, a wide range of different simulation scenarios can be investigated, such as scenarios based on capacity limits, changes in implantation and revision times, variations in implant costs, expense-related costs or changes in the allocation of cemented and un-cemented implants as well as bearing combinations. Therefore the model can be enhanced by adapting it for future developments. The model presented in this paper represents a comprehensive tool, which for the first time enables a lot of possibilities for different simulation scenarios in the medical area of hip arthroplasty regarding cost estimations and resource management. Keywords— health care, modeling, hip joint, endoprosthesis, system dynamics.
I. INTRODUCTION

It is well known that the population distribution has changed in the last decades and will change further in the near future. Increasing life expectancy and a low fertility rate are the main causes of the so-called "double aging" effect. Not only does the number of older people increase, but their percentage within the population also rises dramatically. Estimations of the population development in Austria show that the total population rises from 8.23 million people in the year 2005 to 9.52 million in the year 2040, a relative increase of 15.7% based on 2005 [1, 2]. Population projections for the age group of 60 years and more show that the proportion of the elderly increases from 22% in the year 2005 to 34% in the year 2040. Applied to this group, this means an absolute increase of 80% of the elderly population in the next 30 years. This causes new challenges for public health care systems, particularly for disease patterns which occur for the most part in the group of people over the age of 60. Simulations of existing health care models indicate that especially orthopedic and arthritic conditions show a considerable impact of population aging on total health care costs [3]. The main diagnoses for the implantation of a hip endoprosthesis are arthrosis and injury of the hip [4]. These diagnoses cover 92% of all implanted hip endoprostheses in Austria. Investigations showed that implantations caused by arthrosis are predominantly performed in the population group aged between 65 and 80 years, while implantations caused by injury of the hip show a significant increase starting from the age of 80. In Austria, expenditures for purchasing implants alone amounted to about 25 million euros in the year 2002, whereas the cost of materials accounts for only about 12% of the total costs of an implantation [5, 6]. The total costs of the implantation of a hip joint endoprosthesis in Austria are calculated at between 6,880 and 8,280 euros. Additionally, it has to be considered that because of limited lifetime or technical failure a revision may become necessary, in which case the costs of the revision surgery of the endoprosthesis increase by about 22% [6, 7]. The demographic transition is an important challenge for the development of future health care. Building models and running them with probable prospective scenarios can support the decision-making process in different medical areas. Furthermore, the simulation results can be used to assess future developments and their outcome in order to react early enough if necessary. It was therefore the aim of this work to develop a model to estimate the future expenditures for the implantation and revision of hip joint endoprostheses.
II. METHODS AND MATERIAL

For modeling, the system dynamics technique (AnyLogic® software, XJ Technologies) was used, in which the real processes are represented in terms of stocks, flows between them, and equations as well as functions that define the values of the flows. To model the "flow" of the distribution of the population with a hip implant, the parameters rate of implantation, type of implant, mortality and rate of revision have been considered. As input for the hip arthroplasty model a predefined population model was used [2]. The process of aging in the population model was calculated separately for each age with constant values for fertility, mortality and migration based on data from the year 2005. In the hip arthroplasty model the population was then divided into age classes of five years.
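To make the stock-and-flow formulation concrete, the following minimal Python sketch propagates the patient-stock logic described above for a single age class. It is an illustration only, not the authors' AnyLogic® implementation: all rate values are invented placeholders, and the real model additionally distinguishes age classes, genders, implant types and cost sub-modules.

# Minimal stock-and-flow sketch of the patient-stock logic used in the model.
# All numerical rates below are invented placeholders, not data from the paper.

DT = 1.0  # simulation time step: one year

def simulate(years, population, p_implant, p_revision, p_death):
    """Propagate two stocks: patients with a primary implant and patients
    who already had their first revision (a second revision removes the
    patient from the model, as in module 3 of the paper)."""
    primary, revised = 0.0, 0.0
    history = []
    for _ in range(years):
        new_implants = p_implant * population        # inflow: implantations
        first_rev = p_revision * primary             # flow: primary -> revised
        second_rev = p_revision * revised            # outflow: leaves the model
        primary += DT * (new_implants - first_rev - p_death * primary)
        revised += DT * (first_rev - second_rev - p_death * revised)
        history.append((primary, revised, first_rev + second_rev))
    return history

# Example run for one hypothetical age class of 100,000 persons
for year, (prim, rev, n_rev) in enumerate(simulate(35, 1e5, 0.005, 0.02, 0.03), start=2006):
    if year % 5 == 0:
        print(f"{year}: primary={prim:8.0f}  revised={rev:7.0f}  revisions/yr={n_rev:6.0f}")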
Fig. 1 Modules of the hip arthroplasty model

The structure of the hip arthroplasty model can be divided into four modules: calculation of the implantations of endoprostheses per year, population with a hip implant and revisions per year, calculation of the rate of revision, and calculation of the total costs. Figure 1 shows the interrelation between these four modules.

In module 1, the number of implantations for each age class, separated for men and women, is calculated via the rate of implantation. The rate of implantation is based on data from Statistics Austria [1] for five-year age classes from 1997 to 2006 for the diagnoses arthrosis and injury of the hip. Additionally, the proportions of cemented and uncemented mounting as well as the proportions of the different types of prosthesis (metal, ceramic or polyethylene material) are considered. As mentioned before, these two diagnoses cover 92% of all implanted hip endoprostheses in Austria.

The patient stock (population with a hip implant) in module 2 is the central part of the model because it identifies all implanted prostheses related to age class and gender. To enable the estimation for the future, the aging of the patients, the date of implantation and the mortality of the population have to be considered. Furthermore, this module includes a second stock, which identifies all patients who have already had their first revision surgery. Finally, this module allows the definition of a stock of patients who already received a hip implant before the start of a simulation run.

Module 3 includes the calculation of the rate of revisions and is closely connected to the stock of patients of module 2. To consider the rate of revision for each age group of the model, the calculation is based on data of the Swedish Hip Arthroplasty Register [7] and the Australian Orthopaedic Association National Joint Replacement Registry [8]. Figure 2 shows an example of the percentage of non-revised cemented implants used for calculating the rate of revision. This module allows the integration of possible influencing parameters, such as the weight of the patients or technical improvements, which could strongly affect the rate of revision and therefore its contribution to the total cost estimation. The patient flow through module 3 ends with the condition that one person receives a maximum of two revision surgeries. If this is the case, the patient leaves the model environment and is not considered for a third revision.

Fig. 2 Percentage of non-revised implants over postoperative years for cemented implants, all reasons for revision, from [7]

In module 4 the total costs are calculated. The total costs are summarized from the hospital costs, the costs for implantations and the costs for revisions. The calculations of the hospital costs are based on the Austrian DRG (Diagnosis Related Groups) system of the year 2009 [9]. The cost estimation for implantations and revisions considers different material costs depending on the type of prosthesis. This module also contains parameters which take effect on cost reduction, such as retentions for different age groups or high-risk groups.

III. RESULTS

The model was developed with the aim of obtaining a tool for future cost estimations for the implantation and revision of hip joint endoprostheses. With 898 input parameters and variables, 133 of which are implemented as adjustable parameters, the model allows the simulation of many different scenarios for cost estimations in this special medical area. For example, the implementation of module 1 enables the simulation of a displacement of the time of implantation or a limitation of medical services. Additionally, the type of mounting (uncemented or cemented), the type of prosthesis (partial or total endoprosthesis) as well as the implant materials are adjustable. By modelling the rate of implantation, not only new implants but also persons who already had a hip implant before are considered for future health care expenditures. The additional stock of revisions in module 3 makes it possible to simulate a time shift or a limitation of revision surgeries as well. Moreover, the limitation of revision surgery can be adjusted in such a way that it becomes effective for people above defined age boundaries. The model even allows simulating a possible influence of overweight on the life span of the prostheses and therefore on the rate of revisions. Module 4 offers the possibility to analyse the impact of different cost adjustments. Not only are the costs for implantations and revisions differentiated, but types and materials can also be considered in each sub-module. Furthermore, the simulation experiments can be adapted in case of changes to the actual DRG system simply by adjusting the new values on the user interface.

Figure 3 shows first results of the final model regarding the calculated number of implantations in comparison with the recordings of Statistics Austria from the years 1996 to 2006 for the province of Styria in Austria. The solid lines present the results for total endoprostheses and the dashed lines represent partial endoprostheses; the black lines are results of the model and the grey lines show the recordings of Statistics Austria. This comparison shows a maximum deviation of 15.7% at the beginning of the simulation time. After the year 2001 the deviations decrease to between 0.5% and 4.6%. It has to be mentioned that the modality of data recording of Statistics Austria changed in the year 2001, which explains the higher deviation at the beginning.

Fig. 3 Number of implantations according to the recordings of Statistics Austria from the years 1996 to 2006 (black: model results, grey: Statistics Austria [1]) for total endoprostheses (solid lines) and partial endoprostheses (dashed lines)

IV. CONCLUSIONS

It can be concluded that the developed model represents a comprehensive tool, which for the first time enables a wide range of different simulation scenarios in the medical area of hip arthroplasty regarding cost estimations and resource management. Particularly the number of adjustable parameters, such as the proportions of the different prosthesis types or the detailed implementation of the cost contributions, makes this model an outstanding tool for the assessment of prospectively planned cost reduction activities, and it can be used for decision making in health care systems. Furthermore, the modular concept of the model allows an easy transition to the evaluation of other related medical areas such as knee arthroplasty.
REFERENCES

1. Bundesanstalt Statistik Austria – SuperSTAR Database at www.statistik.gv.at/web_de/services/datenbank_superstar/index.html
2. Schröttner J, König E, Leitgeb N: A Population Prospect for Future Health Care Models Based on a System Dynamics Model, Proceedings of the European Medical & Biological Engineering Conference, IFMBE Proceedings 22, pp. 1018–1021, 2008
3. Martini EM, Garrett N et al.: "The Boomers Are Coming: A Total Cost of Care Model of the Impact of Population Aging on Health Care Costs in the United States by Major Practice Category", Health Services Research, 42 Part 1, pp. 201–218, February 2007
4. Felson, McAlindon: Osteoarthritis: New Insights Part 1, Annals of Internal Medicine, vol. 133, pp. 635–646, October 2000
5. Österreichischer Rechnungshof: Wahrnehmungsbericht des Rechnungshofs (Reihe Bund 2003/3), Einkauf von Hüftendoprothesen, pp. 81–106, Wien, Juli 2003
6. Effenberger H, Zumstein MD, Rehart S, Schuh A: Benchmarking in der Hüftendoprothetik. Orthopädische Praxis 2008, 44: 213–225
7. Kärrholm et al.: Swedish Hip Arthroplasty Register. Annual Report 2007, Göteborg, 2008
8. Graves et al.: Australian Orthopaedic Association National Joint Replacement Registry. Annual Report 2008, Adelaide, 2008
9. Embacher: Leistungsorientierte Krankenanstaltenfinanzierung. Modell 2009, Wien, 2008

Author: Jörg Schröttner
Institute: Institute of Health Care Engineering, Graz University of Technology
Street: Inffeldgasse 18
City: Graz
Country: Austria
Email: [email protected]
The Volume Regulation and Accumulation of Synovial Fluid between Articular Plateaus of Knee Joints
M. Petrtyl1, J. Danesova2 and J. Lísal1
1 Czech Technical University/Faculty of Civil Engineering/Department of Mechanics, Laboratory of Biomechanics and Biomaterial Engineering, Prague, Czech Republic
2 Czech Technical University/Faculty of Civil Engineering/Department of Mathematics, Prague, Czech Republic
Abstract— The viscoelastic properties of the peripheral zone of articular cartilage and its molecular structure ensure the regulation of the transport and accumulation of synovial fluid between articular plateaus. The viscoelastic properties of articular cartilage in the peripheral zone ensure that during cyclic loading some amount of synovial fluid always remains accumulated between the articular plateaus, which were presupplemented with it in the previous loading cycle. Keywords— articular cartilage, synovial fluids, viscohyperelastic properties, residual strains
I. INTRODUCTION
Articular cartilage (AC) is a viscohyperelastic composite biomaterial whose biomechanical functions consist (1) in transferring physiological loads into the subchondral bone and further to the spongious bone, (2) in ensuring the lubrication of articular plateaus of joints and (3) in protecting the structural components of cartilage from higher physiological forces. The peripheral zone of articular cartilage (Fig. 1) has two fundamental biomechanical safety functions, i.e. to regulate the lubrication of articular surfaces and to protect the chondrocytes and extracellular matrix from high loading.
Fig. 1 Complex structural system of articular cartilage (collagen fibres of the 2nd type are not drawn)

The properties and behaviour of AC have been studied from numerous aspects. A number of biomechanical models of the properties and behaviour of AC are available today. The traditional model presents cartilage as a homogeneous, isotropic and biphase material [1]. There also exist models of transversally isotropic biphase cartilage material [2], [3], non-linear poroelastic cartilage material [4], models of poroviscoelastic [5] and hyperelastic cartilage material [6], models of triphase cartilage material [7], [8], and other models [9], [10]. The published models differ, more or less, in the angle of their authors' view of the properties and behaviour of articular cartilage during its loading. The authors base their theories on various assumptions concerning the mutual links between the structural components of the cartilage matrix and their interactions on the molecular level.

II. CONTENTS

In agreement with our analyses, the properties and behaviour of articular cartilage from the biomechanical perspective may be described by means of a complex viscohyperelastic model (Fig. 2). The biomechanical compartment is composed of the Kelvin-Voigt viscoelastic model (in the peripheral and partially in the transitional zone of AC) and of the hyperelastic model (in the middle transitional zone and the low zone of AC). The peripheral zone is histologically delimited by oval (disk-shaped) chondrocytes. The viscohyperelastic properties of AC are predetermined by the specific molecular structures.

Fig. 2 Mechanical diagram of the complex viscohyperelastic model of articular cartilage. The mechanical compartment is composed of the Kelvin-Voigt viscoelastic model (in the peripheral and transitional zone of AC) and of the hyperelastic model (in the middle transitional zone and the low zone of AC)
The mechanical/biomechanical properties of articular cartilage are topographically non-homogeneous. The material variability and non-homogeneity depend on the type and the size of the physiological loading effects [11], [14]. AC is composed of cells (chondrocytes), of an extracellular composite material representing a reinforcing component – collagen type II [13] – and of a non-reinforcing, molecularly complex matrix [12]. The matrix is dominantly composed of glycoprotein molecules and firmly bonded water. In the peripheral zone, there is synovial fluid unbound by ions. The principal construction components of the matrix are glycoproteins. They possess a saccharide component (80–90%) and a protein component (ca. 10–20%). The polysaccharides are composed of molecules of chondroitin-4-sulphate, chondroitin-6-sulphate and keratan sulphate. They are bonded onto the bearing protein, which is further bonded onto the hyaluronic acid macromolecule by means of two binding proteins. Keratan sulphates and chondroitin sulphates are proteoglycans which, through bearing and binding proteins and together with the "supporting" macromolecule of hyaluronic acid, constitute the proteoglycan (or glycosaminoglycan) aggregate. As the saccharide part contains spatial polyanion fields, the presence of a large number of sulphate, carboxyl and hydroxyl groups results in the creation of extensive fields of ionic bonds with water molecules. The proteoglycan aggregate, together with bonded water, creates the amorphous extracellular material (matrix) of cartilage, which is bonded onto the reinforcing component – collagen type II. Glycosaminoglycans are connected onto the supporting fibres of collagen type II by means of electrostatic bonds.

In articular cartilage, nature took special care in safeguarding the biomechanical protection of the chondrocytes in the peripheral zone. From the biomechanical perspective, chondrocytes are protected by the glycocalix (i.e. a spherical saccharide envelope with firmly bonded water). The glycocalix is composed of a saccharide envelope bonded onto the chondrocytes via transmembrane proteoglycans, transmembrane glycoproteins and adsorbed glycoproteins. During loading in the peripheral zone of AC, the glycocalix envelopes gradually create an incompressible continuous layer (Fig. 3).

Our research has been focussed on analyses of the viscoelastic deformations of the upper peripheral cartilage zone. The peripheral cartilage zone consists of chondrocytes packaged in proteoglycans (GAGs) with firmly bonded molecules of water. In the intercellular space, there is unbound synovial fluid which contains water, hyaluronic acid, lubricin, proteinases and collagenases. Synovial fluid exhibits non-Newtonian flow characteristics. Under load, the synovial fluid is relocated onto the surfaces of AC.
The articular cartilage matrix with its viscoelastic properties functions dominantly as a "protective pump" and as a regulator of the amount of synovial fluid permanently maintained (during cyclic loading) between the articular plateaus. The importance of the "protective pump" is evident from the retention of AC strains during cyclic loading. Due to the slowed-down viscoelastic deformation, part of the accumulated (i.e. previously discharged) synovial fluid from the preceding loading cycle is retained in the articular cartilage (Fig. 3).
Fig. 3 Application of the Kelvin-Voigt viscoelastic model for the expression of the step-by-step increments of strains $\varepsilon_{t_i}$ in the peripheral zone of AC during cyclic loading (e.g. while walking or running)
Fig. 3 in its upper part (a) shows the loading cycles, e.g. during walking, while in the lower part (b) the strains during the strain-time growth and during strain relaxation are visible. The strain-time growth occurs during the first loading (see the first concave curve OA of the strain growth). At the time $t_1$, after unloading, strain relaxation occurs (see the convex shape of the second curve AB). At the time $t_2$ the successive (second) loading cycle starts. The strain-time growth during the successive loading cycle, however, does not start at a zero value (as was the case during the initial, first loading cycle), but at point B, i.e. at the value of the residual deformation $\varepsilon_{t_2}$. The first residual strain provides the initial presupplementation of the articular plateaus with synovial fluid. Fig. 3 shows that the envelope curve OBDF grows slightly during cyclic loading to stabilize after a certain time at a steady value characterizing the long-term deformation (during the time of cyclic loading) and the long-term presupplementation of the articular space with synovial fluid. After cyclic loading stops (i.e. after AC unloading during the last loading cycle, as seen in Fig. 3(b)), synovial fluid is sucked back into the peripheral layer of AC. The mechanism of viscous strain-time growth and viscous strain relaxation creates a highly efficient "protective pump" functioning not only to discharge and suck back synovial fluid, but also to pump (accumulate) it into the articular space.
Stresses in the peripheral zone may be expressed for the Kelvin-Voigt model by the constitutive equation:

$$\sigma(t) = \eta\,\frac{d\varepsilon(t)}{dt} + E\,\varepsilon(t)\,, \qquad (1)$$

where $\eta$ is the coefficient of viscosity, $E$ is the modulus of elasticity, $\varepsilon(t)$ is the strain of AC and $d\varepsilon(t)/dt$ is the strain rate of the cartilage tissue in the peripheral zone. Equation (1) is a first-order linear differential equation for the unknown function $\varepsilon(t)$. The solution of the non-homogeneous equation (1) under the given initial conditions determines the time-related strain of articular cartilage. In our case it has the form:

$$\varepsilon(t) = e^{-\frac{E}{\eta}t}\left(\frac{1}{\eta}\int_{t_0}^{t}\sigma(\tau)\,e^{\frac{E}{\eta}\tau}\,d\tau\right). \qquad (2)$$

Let us further consider the case where articular cartilage is loaded by a constant load $\sigma(t) = \sigma_c = \mathrm{const}$ (Fig. 3). Then

$$\varepsilon(t) = \frac{\sigma_c}{E}\left(1 - e^{-\frac{E}{\eta}t}\right). \qquad (3)$$

Equation (3) implies that the strain of AC is a function of time depending also on the magnitude of the constant stress $\sigma_c$ (generated, for example, by shifting an individual's weight onto one foot). The presence of residual strain (marked by a thick line in Fig. 3(b)) ensures the accumulation of synovial fluid between articular plateaus. It means that during each step (during cyclic loading) the articular plateaus are presupplemented with the lubrication medium – synovial fluid. The magnitudes of the residual strains of AC play a key role in the presupplementation of the AC surface plateaus with synovial fluid. The magnitudes of the residual strains may be determined from the functions expressing strain during the strain-time growth and during the strain relaxation of AC; this may be performed separately for each loading cycle of cartilage (Fig. 3(b)).

For the 1st phase of the first loading cycle (for $t \in \langle t_0; t_1\rangle$, Fig. 3), the concave curve is defined by function (3) for the articular cartilage deformation:

$$\varepsilon(t) = \frac{\sigma_c}{E}\left(1 - e^{-\frac{E}{\eta}(t - t_0)}\right). \qquad (4)$$

The discrete strain at the time $t_0$ is $\varepsilon_{t_0} = 0$; at the time $t_1$ the discrete strain is:

$$\varepsilon_{t_1} = \frac{\sigma_c}{E}\left[1 - e^{-\frac{E}{\eta}(t_1 - t_0)}\right]. \qquad (5)$$

For the 2nd phase of the first loading cycle (for $t \in \langle t_1; t_2\rangle$, Fig. 3), the convex curve AB is defined by the function for the articular cartilage strain:

$$\varepsilon(t) = \varepsilon_{t_1}\,e^{-\frac{E}{\eta}(t - t_1)}. \qquad (6)$$

The discrete strain at the time $t_2$ is:

$$\varepsilon_{t_2} = \varepsilon_{t_1}\,e^{-\frac{E}{\eta}(t_2 - t_1)}. \qquad (7)$$

The magnitudes of the strains during cyclic loading at the starting points of loading and unloading of articular cartilage may be expressed by recurrent relations. For the times $t_i$ with an odd index, the strain at the respective nodal points is:

$$\varepsilon_{t_{2k+1}} = \frac{\sigma_c}{E}\left[1 - e^{-\frac{E}{\eta}(k+1)\,l}\right], \qquad k = 0, 1, 2, \ldots \qquad (8)$$

where $l$ is the length of the time interval. For the times $t_i$ with an even index, the strain is:

$$\varepsilon_{t_{2k}} = \frac{\sigma_c}{E}\left[e^{-\frac{E}{\eta}l} - e^{-\frac{E}{\eta}(k+1)\,l}\right], \qquad k = 0, 1, 2, \ldots \qquad (9)$$

During long-term cyclic loading and unloading, for $k \to \infty$ the strain $\varepsilon_{t_{2k+1}}$ asymptotically approaches the steady state $\sigma_c/E$; for $k \to \infty$ the deformation $\varepsilon_{t_{2k}}$ asymptotically approaches the steady state $(\sigma_c/E)\,e^{-\frac{E}{\eta}l}$. It is evident that for $k \to \infty$ it holds true that:

$$\varepsilon_{t_{2k+1}} = \frac{\sigma_c}{E} \;>\; \varepsilon_{t_{2k}} = \frac{\sigma_c}{E}\,e^{-\frac{E}{\eta}l}\,. \qquad (10)$$
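As a numerical illustration of equations (8)–(10), the following short Python sketch evaluates the recurrent nodal strains and their limits. The values of E, η, σ_c and l are placeholders, not values from the paper:

import math

# Illustrative material and loading parameters (placeholders, not paper data)
E = 1.0e6        # modulus of elasticity [Pa]
eta = 2.0e6      # coefficient of viscosity [Pa.s]
sigma_c = 5.0e4  # constant stress during each loading phase [Pa]
l = 0.5          # duration of one loading or unloading interval [s]

q = math.exp(-E / eta * l)   # decay factor per interval

def eps_odd(k):   # strain at the nodes t_{2k+1}, Eq. (8)
    return sigma_c / E * (1.0 - math.exp(-E / eta * (k + 1) * l))

def eps_even(k):  # strain at the nodes t_{2k}, Eq. (9)
    return sigma_c / E * (math.exp(-E / eta * l) - math.exp(-E / eta * (k + 1) * l))

for k in (0, 1, 2, 5, 20):
    print(f"k={k:2d}  eps_odd={eps_odd(k):.3e}  eps_even={eps_even(k):.3e}")

# Eq. (10): the two limits differ, so a residual strain always remains
print("limits:", sigma_c / E, ">", sigma_c / E * q)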
CONCLUSIONS

The viscoelastic properties of the peripheral zone of articular cartilage and its molecular structure ensure the regulation of the transport and accumulation of synovial fluid between articular plateaus. The hydrodynamic lubrication biomechanism adapts with high sensitivity to biomechanical stress effects. The viscoelastic properties of articular cartilage in the peripheral zone ensure that during cyclic loading some amount of synovial fluid always remains accumulated between the articular plateaus, which were presupplemented with it in the previous loading cycle.

During long-term harmonic cyclic loading and unloading, the deformations stabilize at limit values. The limit deformation value of articular cartilage during loading is always greater than its limit deformation value after unloading. Shortly after loading, the strain rate is always greater than before unloading. In this way, the hydrodynamic biomechanism quickly presupplements the surface localities with lubrication material. Shortly after unloading, the deformation rate is high; during strain relaxation it slows down. This is how the articular cartilage tissue attempts to retain the lubrication material between the articular plateaus of synovial joints for as long as possible during cyclic loading.

Analogously to the low and middle zones of articular cartilage, where under high loads an incompressible zone arises whose dominant function is to bear high loads and protect the chondrocytes and the intercellular matrix from destruction, a Partial Incompressible Zone also arises in the peripheral zone, whose function is to bear high loads and protect the peripheral tissue from mechanical failure. The appearance of the incompressible tissue in all zones is synchronized, aiming at the creation of a single (integrated) incompressible "cushion". The existence of an incompressible zone secures the protection of chondrocytes and extracellular material from potential destruction.

Articular cartilage is a complex viscohyperelastic biomaterial possessing supporting and protective functions. It transfers dynamic effects into the subchondral and spongious bone and protects chondrocytes (and the matrix material) from destruction. Under cyclic loads, it also ensures regulated long-term protection of the articular cartilage plateaus.
ACKNOWLEDGEMENT

The work presented in this paper was partially supported by Research Grant MSM No. 6840770012.
REFERENCES

1. Armstrong CB, Lai WM, Mow VC (1984) J Biomech Eng 106, 165–173
2. Cohen B, Lai WM, Chorney GS, Dick HM, Mow VC (1992) Advances in Bioengineering, ASME, 22, 187–190
3. Cohen B, Gardner TR, Ateshian G (1993) Transactions Orthopaedic Research Society, 39, 185
4. Li LP et al. (1999) Clinical Biomechanics, 14, 673–682
5. Wilson W, van Donkelaar CC, van Rietbergen B, Ito K, Huiskes R (2005) J Biomech, 38, 2138–2140
6. Garcia JJ, Cortes DH (2006) Journal of Biomechanics, 39(16), 2991–2998
7. Lai WM et al. (1991) J Biomech Eng 113(3), 245–258
8. Ateshian GA et al. (2004) J Biomech, 37(3), 391–400
9. Wilson W, van Donkelaar CC, van Rietbergen B, Ito K, Huiskes R (2004) J Biomech, 37, 357–366
10. Jurvelin J, Kiviranta I, Saamanen AM, Tammi M, Helminen HJ (1990) J Biomech, 23(12), 1239–1246
11. Akizuki S, Mow VC, Muller F, Pita JC, Howell DS, Manicourt DH (1986) Journal of Orthopaedic Research, 4, 379–392
12. Bjelle A (1977) Connective Tissue Research, 3, 141–147
13. Benninghoff A (1925) Zeitschrift für Zellforschung, 2, 783–862
14. Petrtyl M, Lisal J, Danesova J (2008) Locomotor Systems, Advances in Research, Diagnostics and Therapy, 15, 3+4, 173–183
The corresponding author:
Author: Prof. Miroslav Petrtyl, DrSc., PhD., MSc.
Institute: CTU Prague, Faculty of Civil Engineering, Laboratory of Biomechanics and Biomaterial Engineering
Street: Thakurova 7
City: 160 00 Prague 6
Country: Czech Republic
Email: [email protected]
Anthropometric Measurements and Model Evaluation of Mass-Inertial Parameters of the Human Upper and Lower Extremities G.S. Nikolova Institute of Mechanics, Bulgarian Academy of Sciences/Laboratory of Biomechanics and Human Assisted Rehabilitation, Acad. G. Bonchev Str., Building 4, Sofia 1113, Bulgaria
Abstract— Based on our own anthropometric measurements of 50 Bulgarian men and 52 women, which complement the representative anthropological investigation [1] of the Bulgarian population aged 30-40 years, we present a method for determining the mass-inertial characteristics of the upper arm, lower arm, thigh and shank of the human body using 3D geometrical modelling. The segments are modeled via three-dimensional versions of right elliptical stadium solids. The comparison performed between our model results and data reported in the literature demonstrates that this modelling is successful, being closer to the real shape of the segments envisaged. Keywords— Mass-inertial parameters, anthropometry, human body modeling.
I. INTRODUCTION

The investigation of the mass-inertial parameters of human extremities is an integral part of the biomechanics of human motion. Body segment parameters can be estimated by a number of different methods, including cadaver-based studies [2-4], gamma mass scanning [5], magnetic resonance imaging [6], computerized tomography [7], etc. Geometrical modelling techniques represent the shape of different body segments by means of standard geometric solids [8-11]. In the present study we present a specific 16-segmental simplified 3D biomechanical model of the human body, which provides the possibility to calculate the mass-inertial parameters of all segments of the body. The main goal of the current article is to improve the 16-segmental mathematical model of the human body of the average Bulgarian man and woman [12] by modelling the upper arm, lower arm, thigh and shank with versions of right elliptical stadium solids instead of using frustums of a cone.
II. METHODS

We start with a 3D model of the human body which consists of 16 segments [12], [13]: head + neck, upper part of torso, middle part of torso, lower part of torso, thigh, shank, foot, upper arm, lower arm and hand, assumed to be relatively simple geometrical bodies. The explanation of one of the possible ways to determine the numerical values of the geometrical parameters, the choice of the anthropometric landmarks, the way the segments are modeled, etc., is described, e.g., in [12], to which we refer the interested reader for details. In the current article we will change the geometrical modelling of the thigh, shank, foot, upper arm and lower arm of the average Bulgarian males and females and will verify that the newly suggested model describes the inertial parameters of these segments better than the model used in [12]. More specifically, in the model used in the current study, the upper arm, lower arm, thigh and shank are considered to be right elliptical stadium solids (see Fig. 1). Let us immediately stress that one of the consequences of modelling these segments via right elliptical stadium solids is the lack of the "left-right" symmetry for the inertial moments of these segments. This symmetry was preserved in [12] and is usually also present in most of the geometrical models of the human body we are aware of. In [12] the main part of the geometrical data needed to determine the geometrical parameters of the segments of the body is taken from a detailed representative anthropological investigation of the Bulgarian population [1], in which the authors measured a total of 5290 individuals – 2435 males and 2855 females. Unfortunately the data collected does not include all the data needed to model the thigh, shank, upper arm and lower arm as right elliptical stadium solids. For that reason we made our own complementary anthropometric measurements of these segments on an additional 102 Bulgarians – 50 men and 52 women.

A. Anthropometric Measurements

In the anthropometric measurements performed, data for Dl, Dt and Lcir (see Fig. 1) have been collected. We shape the upper arm (acromion-radiale) as a right elliptical stadium solid. For this segment we measure the circumference across the epicondyles (Lcir), the epicondylar diameter of the humerus (Dl,0), the diameter perpendicular to the humerus (Dt,0), as well as the axillary circumference at the proximal end. The vertical lengths of this segment, as well as of all the other segments (L), are taken from [1].

Fig. 1 A right elliptical stadium solid

The lower arm (radiale-stylion) is approximated by a stadium solid. The parameters for its proximal end are the above-mentioned dimensions for the distal end of the upper arm. For the wrist (distal end) of the lower arm (see Fig. 1, but turned upside down) we measured the breadth of the radio-ulnar joint (Dl,1), the thickness in the middle of the radio-ulnar joint perpendicular to its breadth (Dt,1) and the radio-ulnar joint circumference (Lcir).

For the length of the thigh we use the real anthropometric length defined via the distance between tibiale and iliospinale. The measured knee parameters (distal end of the thigh) are: the subepicondylar diameter of the femur (Dl,0), the sagittal diameter perpendicular to the subepicondylar diameter of the femur (Dt,0) and the knee circumference across the epicondyles (Lcir).

The shank (tibiale-sphyrion) is modeled as a stadium solid. The parameters for modelling its proximal end are the above-mentioned dimensions (distal end of the thigh). The parameters measured for the ankle are: the transversal supramalleolar diameter (Dl,1), the sagittal supramalleolar diameter perpendicular to the transversal one (Dt,1) and the shank circumference over the malleoli (Lcir).

The average data for the directly measured independent geometrical parameters described above, as well as their standard and mean deviations for males and females, are summarized in Table 1. With the data measured for Dl, Dt and Lcir, and having in mind the analytical properties of the stadium solid, one can determine the values of a, rl and rt of the corresponding segments (a hypothetical reconstruction of this computation is sketched after Fig. 2 below). In order to determine the values of these parameters for the proximal ends of the upper arm and thigh, we assumed that a rescaling of the parameters of the distal end to the proximal end can be used, based on the ratio of the measured circumferences of the envisaged segments at these ends. The average values given in Table 1, as well as the average data for the remaining parameters presented in [1] (see also [12]), define the so-called "average" man and "average" woman. Let us recall that, as given in [1, 12], the height and weight of the average man are 1.71 m and 77.7 kg, while for the average woman they are 1.58 m and 65.3 kg.

Table 1 The average values of the directly measured independent parameters (cm) for males (M) and females (F). In brackets the standard deviation (SD) and mean deviation (MD) are given. Here Dl and Dt shall be understood as Dl,0 and Dt,0 for elbow and knee and as Dl,1 and Dt,1 for wrist and ankle

Parameter                  | Dl, M           | Dl, F           | Dt, M           | Dt, F           | Lcir, M          | Lcir, F
Axillary arm circumference | -               | -               | -               | -               | 33.2 (3.5) (2.7) | 25.5 (3.8) (3.3)
Elbow                      | 7.9 (0.3) (0.2) | 6.6 (0.2) (0.2) | 6.1 (0.3) (0.2) | 5.1 (0.2) (0.2) | 27.1 (1.7) (1.4) | 22.8 (1.6) (1.2)
Wrist                      | 5.4 (0.3) (0.3) | 4.6 (0.3) (0.2) | 3.5 (0.3) (0.2) | 3.0 (0.2) (0.2) | 17.2 (0.8) (0.6) | 14.9 (0.7) (0.6)
Knee                       | 10.0 (0.8) (0.6)| 9.1 (0.7) (0.6) | 11.1 (0.8) (0.7)| 10.0 (0.8) (0.6)| 39.0 (2.5) (1.9) | 36.0 (2.7) (2.2)
Ankle                      | 6.1 (0.6) (0.5) | 5.6 (0.5) (0.4) | 8.0 (0.8) (0.6) | 7.4 (0.7) (0.6) | 24.1 (1.7) (1.3) | 21.7 (1.6) (1.3)

Fig. 2 presents, as an example, the histogram (Fig. 2a) and the probability density function (Fig. 2b) for one of the measured anthropometric parameters – the male shank circumference over the malleoli. The histogram demonstrates that the mean value of the male shank circumference over the malleoli is 24.1 cm with a standard deviation of 1.7 cm. The width of the bin intervals is 1.8 cm. Let us note that 42% of the measured male shank circumferences are in the interval (23.2 ± 0.9) cm, 30% in (25.0 ± 0.9) cm, 13% in (21.4 ± 0.9) cm, 10% in (26.8 ± 0.9) cm, 4% in (28.4 ± 0.9) cm and 1% in (20.3 ± 0.9) cm. By determining the fraction of values divided by the bin width one can build the density of the probability distribution of the measured data. This can, in turn, be compared with the normal probability distribution (the red bold curve) centered around the calculated mean value of 24.1 cm and having the corresponding standard deviation of 1.7 cm. We conclude that our data, to a good approximation, can indeed be considered as being normally distributed.

Fig. 2 Histogram (a) and probability density function (b) for the male shank circumference over the malleoli
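The paper does not state explicitly how a, rl and rt follow from the measured Dl, Dt and Lcir. A plausible reconstruction, sketched below in Python, takes rl = Dl/2 and rt = Dt/2 and recovers a from the circumference by subtracting the perimeter of the corresponding full ellipse (Ramanujan's approximation) from Lcir, the two straight sides of the stadium contributing 4a. This is an assumption for illustration only, not the authors' stated procedure:

import math

def stadium_params(D_l, D_t, L_cir):
    """Estimate r_l, r_t and a of an elliptical stadium from the two
    measured diameters and the circumference (hypothetical relations)."""
    r_l, r_t = D_l / 2.0, D_t / 2.0
    # Ramanujan's approximation for the perimeter of the full ellipse
    p_ellipse = math.pi * (3.0 * (r_l + r_t)
                           - math.sqrt((3.0 * r_l + r_t) * (r_l + 3.0 * r_t)))
    a = max(0.0, (L_cir - p_ellipse) / 4.0)  # the two straight sides add 4a
    return r_l, r_t, a

# Male knee values from Table 1: D_l = 10.0 cm, D_t = 11.1 cm, L_cir = 39.0 cm
print(stadium_params(10.0, 11.1, 39.0))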
III. RESULTS AND DISCUSSION

On the basis of the original experimental data and our own anthropometric measurements, after deriving analytical expressions for the moments of inertia of the right elliptical stadium solid (see the Appendix) and performing the corresponding numerics, we determined the principal moments of inertia of the extremities of the human body – the upper arm, lower arm, thigh and shank. Table 2 contains the so-obtained results for males, and Table 3 for females, respectively. Inspecting the data in these tables one can conclude that the model, for most of the quantities of interest, produces results closer to those reported in [13] and [14] than the model described in [12]. One indeed observes good agreement between our simple model results and those experimentally found and reported in the literature available for other Caucasians. Thus, in the way described above, we have achieved the goal of the current study: to improve the existing 3D biomechanical geometrical model of the human body [12]. The main advantage of the new model over the preceding one is that in the new model some of the body segments have been approximated by elliptical stadium solids, which are closer in shape to the real segments of the human body. Furthermore, in contrast with [12], no adjustments of the measured geometric parameters using appropriate regression equations were necessary. Although the method described above has been applied to the "average" Bulgarian man and woman, the lack of any adjustment and the quality of the results obtained suggest that it should work reasonably well also for the determination of the corresponding characteristics of a given individual, provided some relatively simple geometrical measurements of this individual are performed.

Table 2 Moments of inertia of the body segments through the centre of mass (kg.cm2) for males

Segment   | Zatsiorsky [13]: IXX IYY IZZ | Shan and Bohn(a) [14]: IXX IYY IZZ | Nikolova and Toshev [12]: IXX IYY IZZ | Our data: IXX IYY IZZ
Upper arm | 114.4 127.3 38.9             | 108.8 103.8 28.4                   | 220.8 220.8 25.1                      | 169.8 163.2 34.4
Lower arm | 60.2 64.7 12.6               | 49.8 54.6 7.3                      | 54.7 54.7 8.5                         | 46.6 49.4 8.9
Thigh     | 1999.4 1997.8 413.4          | 1872.6 1879.9 420.6                | 1564.0 1564.0 307.7                   | 1932.6 2066.3 367.2
Shank     | 371.0 385.0 64.6             | 357.3 408.9 88.3                   | 231.9 231.9 34.0                      | 337.0 363.9 52.9

(a) The data are obtained by using the regression equations derived by [14] applied to the average Bulgarian male person.

Table 3 Moments of inertia of the body segments through the centre of mass (kg.cm2) for females

Segment   | Zatsiorsky [13]: IXX IYY IZZ | Shan and Bohn(a) [14]: IXX IYY IZZ | Nikolova and Toshev [12]: IXX IYY IZZ | Our data: IXX IYY IZZ
Upper arm | 80.7 92.3 26.2               | 88.6 87.0 19.6                     | 123.5 123.5 15.8                      | 85.9 83.5 13.6
Lower arm | 39.7 40.9 5.3                | 29.9 31.8 4.2                      | 34.6 34.6 4.0                         | 23.7 22.7 4.3
Thigh     | 1647.3 1690.1 324.2          | 1111.1 1118.2 299.8                | 1714.7 1714.7 290.5                   | 1578.9 1678.0 373.1
Shank     | 399.7 409.9 48.6             | 256.2 298.8 69.0                   | 119.4 119.4 24.8                      | 224.0 239.8 34.5

(a) The data are obtained by using the regression equations derived by [14] applied to the average Bulgarian female person.
ACKNOWLEDGMENT The support via grant № BG051PO001/07/3.3-02 55/17.06.2008 of ESF, OP “Human Resources Development”, is gratefully acknowledged.
REFERENCES 1. Yordanov Y et al. (2006) Anthropology of the Bulgarian population at the end of the 20-th century (30-40 years old persons). Professor Marin Drinov Academic Publishing House, Sofia, Bulgaria. 2. Dempster WT (1955) Space requirements of the seated operator. WADC Tech Report 55-159, Ohio. 3. Clauser CE et al. (1969) Weight, volume, and center of mass of segments of the human body. Tech Report AMRL-TR-69-70, Ohio. 4. Chandler RF et al. (1975) Investigation of inertial properties of the human body. Tech Report AMRL-TR-74-137, Ohio. 5. Zatsiorsky, VM, Seluyanov VN (1983) The mass and inertia characteristics of the main segments of the human body. Biomechanics VIIIB, Human Kinetics, Champaign, IL, 1152-1159. 6. Munigiole M, Martin, PE (1990) Estimating segment inertial properties: comparison of magnetic resonance imaging with existing methods. J Biomech 23: 1039-1046. 7. Erdmann WS (1997) Geometric and inertial data of the trunk in adult males. J Biomech 30: 679-688. 8. Hanavan EP (1964) A mathematical model of the human body. AMRL-TR-64-102, Ohio. 9. Jensen RK (1987) Estimation of the biomechanical properties of three body types using a photogrammetric method. J Biomech 11: 349-358. 10. Hatze H (1980) A mathematical model for the computational determination of parameter values of anthropomorphic segments. J Biomech 13: 833-843. 11. Kwon Y-H (1999) Kwon3D motion analysis, at http://kwon3d.com/. 12. Nikolova G, Toshev Y (2007) Estimation of male and female body segment parameters of the Bulgarian population using a 16-segmental mathematical model. J Biomech 40: 3700-3707. 13. Zatsiorsky VM (2002) Kinetics of human motion. Human Kinetics, Champaign, IL. 14. Shan G, Bohn C (2003) Anthropometrical data and coefficients of regression related to gender and race. Appl Erg 34: 327-337.
APPENDIX: ANALYTICAL RESULTS FOR THE MOMENTS OF INERTIA OF THE ELLIPTICAL STADIUM SOLID

Let us take an ellipse lying in the (x, y) plane with a major diameter $2r_{l,0}$ along the x axis and a minor diameter $D_t = 2r_{t,0}$ along the y axis. Splitting the ellipse along the minor axis into two halves and inserting in-between a rectangle with sides $2a_0$ and $2r_{t,0}$, where the first one is along the x axis, we obtain a planar figure $S_0$ which we call an elliptical stadium. Let O be the intersection of the diagonals of the rectangle.

We define the origin of the coordinate system to be at the point O, with the z axis pointing up, orthogonal to $S_0$. Let us consider a point P on the z axis at some distance h above $S_0$. Let us then denote by $V_{el}$ all points which lie on any straight line that connects P with any point of $S_0$. We will call the so-formed figure $V_{el}$ an elliptic cone (or elliptic pyramid). Let us now consider the intersection $S_L$ of $V_{el}$ with a plane parallel to $S_0$ at a distance L apart. It is easy to show that $S_L$ is also an elliptical stadium, characterized by the parameters $2a_1$, $2r_{t,1}$ and $2r_{l,1}$. All the points between the two stadia $S_0$ and $S_L$ form a three-dimensional geometrical body which we call a right elliptic stadium solid or, briefly, a stadium solid – see Fig. 1.

It can be shown that the principal moments of inertia of the right elliptical stadium solid are:

$$I_{XX} = \frac{L\rho}{240}\Big\{\,4L^2 r_{t,0}\,(8a_0 + 12a_1 + 2\pi r_{l,0} + 3\pi r_{l,1}) + \big[128a_0 + 32a_1 + 3\pi(4r_{l,0} + r_{l,1})\big]r_{t,0}^3 + r_{t,1}\Big(12L^2\big[4a_0 + 16a_1 + \pi(r_{l,0} + 4r_{l,1})\big] + (96a_0 + 64a_1 + 9\pi r_{l,0} + 6\pi r_{l,1})\,r_{t,0}^2\Big) + (64a_0 + 96a_1 + 6\pi r_{l,0} + 9\pi r_{l,1})\,r_{t,0}r_{t,1}^2 + \big[32a_0 + 128a_1 + 3\pi(r_{l,0} + 4r_{l,1})\big]r_{t,1}^3 - \frac{10L^2\big[\pi(r_{l,0}+r_{l,1})r_{t,0} + \pi(r_{l,0}+3r_{l,1})r_{t,1} + 4a_0(r_{t,0}+r_{t,1}) + 4a_1(r_{t,0}+3r_{t,1})\big]^2}{\big[8a_0 + 4a_1 + \pi(2r_{l,0}+r_{l,1})\big]r_{t,0} + \big[4a_0 + 8a_1 + \pi(r_{l,0}+2r_{l,1})\big]r_{t,1}}\,\Big\},$$

$$I_{YY} = \frac{L\rho}{240}\Big\{\,32a_0^3(4r_{t,0}+r_{t,1}) + 32a_0^2a_1(3r_{t,0}+2r_{t,1}) + 32a_0a_1^2(2r_{t,0}+3r_{t,1}) + 32a_1^3(r_{t,0}+4r_{t,1}) + 4L^2\big[(8a_0 + 12a_1 + 2\pi r_{l,0} + 3\pi r_{l,1})\,r_{t,0} + 3\big(4a_0 + 16a_1 + \pi(r_{l,0} + 4r_{l,1})\big)r_{t,1}\big] - \frac{10L^2\big[\pi(r_{l,0}+r_{l,1})r_{t,0} + \pi(r_{l,0}+3r_{l,1})r_{t,1} + 4a_0(r_{t,0}+r_{t,1}) + 4a_1(r_{t,0}+3r_{t,1})\big]^2}{\big[8a_0 + 4a_1 + \pi(2r_{l,0}+r_{l,1})\big]r_{t,0} + \big[4a_0 + 8a_1 + \pi(r_{l,0}+2r_{l,1})\big]r_{t,1}} + 3\pi\big[r_{l,0}^3(4r_{t,0}+r_{t,1}) + r_{l,0}^2r_{l,1}(3r_{t,0}+2r_{t,1}) + r_{l,0}r_{l,1}^2(2r_{t,0}+3r_{t,1}) + r_{l,1}^3(r_{t,0}+4r_{t,1})\big]\Big\},$$

and

$$I_{ZZ} = \frac{L\rho}{240}\Big\{\,32a_0^3(4r_{t,0}+r_{t,1}) + 32a_0^2a_1(3r_{t,0}+2r_{t,1}) + 32a_1^3(r_{t,0}+4r_{t,1}) + 32a_1\big(r_{t,0}^3 + 2r_{t,0}^2r_{t,1} + 3r_{t,0}r_{t,1}^2 + 4r_{t,1}^3\big) + 32a_0\big[\big(4r_{t,0}^3 + 3r_{t,0}^2r_{t,1} + 2r_{t,0}r_{t,1}^2 + r_{t,1}^3\big) + a_1^2(2r_{t,0}+3r_{t,1})\big] + 3\pi\Big( r_{t,0}\big[4r_{l,0}^3 + 3r_{l,0}^2r_{l,1} + 2r_{l,0}r_{l,1}^2 + r_{l,1}^3 + (4r_{l,0}+r_{l,1})\,r_{t,0}^2\big] + \big[\big(r_{l,0}^3 + 2r_{l,0}^2r_{l,1} + 3r_{l,0}r_{l,1}^2 + 4r_{l,1}^3\big) + (3r_{l,0}+2r_{l,1})\,r_{t,0}^2\big]r_{t,1} + (2r_{l,0}+3r_{l,1})\,r_{t,0}r_{t,1}^2 + (r_{l,0}+4r_{l,1})\,r_{t,1}^3 \Big)\Big\}.$$
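Because the closed-form expressions above are long and easy to mistype (and their grouping had to be reconstructed here from a damaged source), a numerical cross-check may be useful. The following Python sketch estimates the mass and the moments of inertia of a right elliptical stadium solid by Monte Carlo integration, about axes through the centre of the base stadium (a shift to the centre of mass via the parallel-axis theorem is straightforward). The parameter values are arbitrary test inputs, not anthropometric data:

import random

def inside_stadium(x, y, a, rl, rt):
    """Point-in-cross-section test for an elliptical stadium: a rectangle
    2a x 2rt with half-ellipses (semi-axes rl, rt) attached on both sides."""
    if abs(y) > rt:
        return False
    if abs(x) <= a:
        return True
    return ((abs(x) - a) / rl) ** 2 + (y / rt) ** 2 <= 1.0

def mc_inertia(a0, a1, rl0, rl1, rt0, rt1, L, rho=1.0, n=100_000, seed=1):
    """Monte Carlo estimate of mass and of Ixx, Iyy, Izz about axes through
    the base centre, for the tapered stadium solid defined in the Appendix."""
    random.seed(seed)
    xmax, ymax = max(a0 + rl0, a1 + rl1), max(rt0, rt1)
    box_vol = (2 * xmax) * (2 * ymax) * L
    m = ixx = iyy = izz = 0.0
    for _ in range(n):
        x = random.uniform(-xmax, xmax)
        y = random.uniform(-ymax, ymax)
        z = random.uniform(0.0, L)
        t = z / L  # linear interpolation between the two end cross-sections
        a, rl, rt = a0 + t*(a1-a0), rl0 + t*(rl1-rl0), rt0 + t*(rt1-rt0)
        if inside_stadium(x, y, a, rl, rt):
            m += 1
            ixx += y*y + z*z
            iyy += x*x + z*z
            izz += x*x + y*y
    w = rho * box_vol / n   # weight carried by each accepted sample
    return m * w, ixx * w, iyy * w, izz * w

mass, Ixx, Iyy, Izz = mc_inertia(a0=2.0, a1=1.0, rl0=3.0, rl1=2.0,
                                 rt0=2.5, rt1=1.5, L=10.0)
print(f"mass={mass:.2f}  Ixx={Ixx:.1f}  Iyy={Iyy:.1f}  Izz={Izz:.1f}")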
Author: Gergana Stefanova Nikolova
Institute: Institute of Mechanics, Bulgarian Academy of Sciences
Street: Acad. G. Bonchev str., bl. 4
City: Sofia
Country: Bulgaria
Email: [email protected]
Validation of a Person Specific 1-D Model of the Systemic Arterial Tree
P. Reymond1, Y. Bohraus1, F. Perren2, F. Lazeyras3, and N. Stergiopulos1
1 Laboratory of Hemodynamics and Cardiovascular Technology, Ecole Polytechnique Fédérale de Lausanne EPFL, Switzerland
2 Clinical Neurosciences Department, Hôpitaux Universitaires de Genève HUG, Switzerland
3 Radiology Department, Hôpitaux Universitaires de Genève HUG, Switzerland

Abstract— The aim of this study is to validate a person-specific distributed model of the main systemic arterial tree, coupled to a model of the left ventricle of the heart. This model is built and validated with non-invasive measurements on the same person, leading therefore to a coherent set of physiological data. Although previous studies of 1-D models of arterial trees have demonstrated their aptitude for modeling wave propagation, they were mainly based on generic, disparate arterial trees and could not provide both a qualitative and a quantitative validation. The 1-D form of the fluid equations is applied over each large arterial segment. The geometric dimensions of the systemic arterial tree are deduced from angio-MR performed with contrast agent injection. A non-linear viscoelastic constitutive law for the arterial wall is considered. Arterial wall distensibility is based on literature data and adapted to match the specific subject, using the traveling time of the waves obtained by ECG-referenced tonometry measurements. The intimal shear stress is modeled using the Witzig-Womersley theory. The arterial tree is coupled to the heart, which is modeled using the time-varying elastance model. To validate the model predictions, we performed non-invasive measurements of pressure and flow waveforms. Pressure was measured using applanation tonometry and flow rate using transcranial ultrasound and PC-MRI. The model predicts the shape and wave features of the pressure and flow waveforms with high qualitative agreement compared to the in-vivo measurements. The quantitative aspect of the pressure and flow waveforms is also well reproduced. The results obtained let us conclude that a 1-D model based on a person-specific arterial tree geometry is able to predict qualitatively and quantitatively the pressure and flow waveforms in the main systemic circulation. Keywords— Wave propagation, heart model, cerebral circulation, nonlinear viscoelasticity, noninvasive vascular imaging.
I. INTRODUCTION

The aim of this study is to validate a person-specific distributed model of the main systemic arterial tree. This model is built and validated with non-invasive measurements on the same person, leading therefore to a coherent set of physiological data. One-dimensional (1-D) models have been used for more than 30 years to predict or analyze pressure and flow in the arterial tree (Avolio [1], Stergiopulos et al. [2], Westerhof et al. [3]), demonstrating their aptitude for modeling wave propagation; however, they have never been validated using in vivo measurements. A quantitative validation was performed in vitro in an elastic tube network dimensioned to resemble the human arterial tree by Matthys et al. [4]. The results were supportive of the 1-D model's capacity to yield good predictions; however, neither the form of the waves nor the elastic properties of the in vitro tube network faithfully matched their physiological counterparts, so the interest in quantitatively validating the 1-D model in vivo remained.
II. METHODS

Mathematical and geometrical description of the model: The 1-D form of the fluid equations is applied over each large arterial segment. The arterial tree, including 105 arterial segments, was constructed on geometry data derived from measurements on the same person by MR angiography acquisition carried out with the addition of a contrast agent. Diameter values were deduced from averaged measures, while length measures were acquired along the midline of the arterial segments. Arterial wall distensibility is based on literature data and adapted to match the specific subject, using the traveling time of the pressure waves measured with tonometry. A non-linear viscoelastic constitutive law for the arterial wall is considered, based on the model of Holenstein et al. [5]. The intimal shear stress and the non-linear convective acceleration terms are modeled using the Witzig-Womersley theory. At its proximal end (root of the ascending aorta), we coupled the arterial tree to a left heart ventricle model, which is based on the time-varying elastance model. All distal vessels and vascular beds are terminated with three-element Windkessel models to account for the proximal resistance, the distal resistance and the compliance of the distal small arteries, arterioles and capillaries (a minimal numerical sketch of such a terminal model follows at the end of this section).

In vivo measurements: To validate the model predictions, we performed non-invasive measurements of pressure and flow waveforms. Pressure was measured using applanation tonometry on superficial arteries (radial, carotid and temporal arteries) and flow rate using Doppler ultrasound in precerebral and cerebral arteries (external carotid, internal carotid, vertebral and middle cerebral artery) and PC-MRI in the main systemic arteries (ascending, descending, thoracic and abdominal aorta, iliac and femoral arteries). The flow rate from Doppler ultrasound is deduced from velocity measurements using the Witzig-Womersley theory and based on diameter measurements (angio-MR or M-mode Doppler). The set of equations with the boundary conditions described above is solved using an implicit finite difference scheme to yield pressure and flow waveforms over the entire arterial tree.
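As an illustration of the terminal boundary condition, the following minimal Python sketch integrates a single three-element Windkessel with an explicit Euler step and a toy half-sine inflow. All parameter values are invented placeholders; the actual model solves the coupled 1-D equations with an implicit finite-difference scheme, which this fragment does not reproduce:

import math

# Three-element Windkessel: characteristic (proximal) resistance Zc,
# distal resistance R and compliance C. Illustrative placeholder values.
Zc, R, C = 0.05, 1.0, 1.2      # mmHg.s/ml, mmHg.s/ml, ml/mmHg
T, dt = 0.8, 1e-4              # cardiac period and time step [s]

def q_in(t):
    """Toy half-sine systolic inflow, zero during diastole [ml/s]."""
    tau = t % T
    return 300.0 * math.sin(math.pi * tau / 0.3) if tau < 0.3 else 0.0

p_c = 80.0                     # pressure over the compliance [mmHg]
t = 0.0
for step in range(int(10 * T / dt)):       # run ten beats to reach steady state
    q = q_in(t)
    p_c += dt * (q - p_c / R) / C          # C * dPc/dt = Q - Pc/R
    t += dt
    if t > 9 * T and step % int(0.1 / dt) == 0:
        print(f"t = {t:5.2f} s   P = {p_c + Zc * q:6.1f} mmHg")  # P = Pc + Zc*Q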
III. RESULTS AND DISCUSSION

The model predicts the shape and wave features of the pressure and flow waveforms with high qualitative agreement compared to the in-vivo measurements (Fig. 1). The quantitative aspect of the pressure and flow waveforms is also well reproduced, even though we are conscious that deducing quantitative flow rates from Doppler velocity measurements is based on many assumptions [6] (Fig. 1.A and 1.B). The absolute quantification of flow rate is more reliable from the MRI acquisitions in the large arteries (Fig. 1.C).

Fig. 1 Model predictions (dark lines) compared to in vivo measurements of flow and pressure waves for different systemic and cerebral arteries. Flow is measured with the B-mode color-coded duplex flow imaging technique in the external carotid artery ECA (A) and the middle cerebral artery MCA (B), and with PC-MRI in the thoracic aorta (C). Pressure was measured with applanation tonometry in the common carotid artery CCA (D)
Even though the geometry and the flow and pressure measurements were all performed on the same individual, there were still data (terminal compliances and resistances) that were not person-specific. The peripheral resistance and compliance data were estimated based on the measured flows and on literature values for a young healthy adult.
IV. CONCLUSIONS

The results obtained let us conclude that a 1-D model based on a person-specific arterial tree geometry is able to predict, in a quantitatively plausible fashion, the pressure and flow waveforms in the main systemic circulation, thereby validating the suitability of the 1-D model to predict pressure and flow waves in the entire systemic arterial tree.
REFERENCES 1. A.P. Avolio (1980) Multi-branched model of the human arterial system. Med Biol Eng Comput, 18(6), 709-718 2. N. Stergiopulos, D. F. Young, T. R. Rogge (1992) Computer simulation of arterial flow with applications to arterial and aortic stenoses. Journal of biomechanics, 25(12) 3. N. Westerhof, F. Bosman, C.J. De Vries, A. Noordergraaf (1969) Analog studies of the human systemic arterial tree. Journal of biomechanics, 2(2), 121-143 4. K.S. Matthys, J. Alastruey, J. Peiro, A.W. Khir, P. Segers, P.R.Verdonck, K.H. Parker, S.J. Sherwin (2007) Pulse wave propagation in a model human arterial network: assessment of 1-D numerical simulations against in vitro measurements. Journal of biomechanics, 40(15), 3476-3486 5. R. Holenstein, P. Niederer, M. Anliker (1980) A viscoelastic model for use in predicting arterial pulse waves. Journal of biomechanical engineering, 102(4), 318-325 6. H. A. Kontos (1989) Validity of cerebral arterial blood flow calculations from velocity measurements. Stroke; a journal of cerebral circulation 20: 1-3
Author: Philippe Reymond
Institute: Institute of Bioengineering
Street: Station 15
City: 1015 Lausanne
Country: Switzerland
Email: [email protected]
First Trimester Diagnosis of Trisomy-21 Using Artificial Neural Networks
C.N. Neocleous1, K. Nikolaides2, K. Neokleous3 and C.N. Schizas3
1 Department of Mechanical Engineering, Cyprus University of Technology, Lemesos, Cyprus
2 Harris Birthright Research Centre for Fetal Medicine, King's College Hospital Medical School, London, United Kingdom
3 Department of Computer Science, University of Cyprus, Nicosia, Cyprus
Abstract— Langdon Down in 1866 reported on a syndrome in which individuals have skin appearing to be too large for the body, a small nose and a flat face. This chromosomal disorder, caused by the presence of all or part of an extra 21st chromosome, is known as Down syndrome, trisomy 21, or trisomy G. In the last fifteen years it has become possible to observe these features by ultrasound examination in the third month of intrauterine life. About 75% of trisomy 21 fetuses have an absent nasal bone. In the present work, neural network schemes that have been applied to a large database of findings from ultrasound examinations of fetuses are reported, aiming at generating a predictor for the risk of Down syndrome. A good number of feed-forward neural structures, both standard multilayer and multi-slab, were tried for the prediction. The database was composed of 23513 cases of fetuses in the UK, provided by the Fetal Medicine Foundation in London. For each subject, 19 parameters were measured or recorded. Out of these, 11 parameters were considered the most influential at characterizing the risk for this type of chromosomal defect. The best results were obtained with a multi-slab neural structure. In the training set there was a correct classification of 98.9% of the trisomy 21 cases, and in the test set 100%. The prediction for the totally unknown verification set was 93.3%. Keywords— Trisomy 21, Down syndrome, neural networks.

I. INTRODUCTION
During the last 35 years, extensive research has aimed at developing a non-invasive method for prenatal diagnosis based on the isolation and examination of fetal cells found in the maternal circulation. The examination of fetal cells from maternal peripheral blood is, however, more likely to find an application as a method for the assessment of risk rather than as a non-invasive prenatal diagnosis of chromosomal defects. In addition, there is contradictory evidence concerning the concentration of cell-free fetal DNA in trisomy 21 pregnancies. Invasive techniques, on the other hand, increase the risk of miscarriage even if the testing is carried out by an appropriately trained and experienced operator [1].
In the 1990s, screening by a combination of maternal age and fetal nuchal translucency (NT) thickness at 11 to 13+6 weeks of gestation was introduced. This method has now been shown to identify about 75% of affected fetuses for a screen-positive rate of about 5% [2]. Subsequently, maternal age was combined with fetal NT and maternal serum biochemistry (free β-hCG and PAPP-A) in the first trimester to identify about 85-90% of affected fetuses. The level of free β-hCG in maternal blood normally decreases with gestation, and in trisomy 21 pregnancies free β-hCG is increased. The level of PAPP-A in maternal blood normally increases with gestation, and in trisomy 21 pregnancies the level is decreased. In 2001, it was found that in 60-70% of fetuses with trisomy 21 the nasal bone is not visible by ultrasound at 11 to 13+6 weeks, and preliminary results suggest that this finding can increase the detection rate of the first trimester scan and serum biochemistry to more than 95%. The risk for trisomies in women who have had a previous fetus or child with a trisomy is higher than the one expected on the basis of their age alone. In women who had a previous pregnancy with trisomy 21, the risk of recurrence in the subsequent pregnancy is 75% higher than the maternal and gestational age-related risk for trisomy 21 at the time of testing. Thus, for a woman aged 35 years who has had a previous baby with trisomy 21, the risk at 12 weeks of gestation increases from 1 in 249 to 1 in 87 [1]. The Fetal Medicine Foundation (FMF), which is a UK registered charity, has established a process of training and quality assurance for the appropriate introduction of NT screening into clinical practice. We suggest building an automatic and user-friendly tool using artificial neural networks, trained with the data collected by the FMF during recent years. Such a tool may improve the detection of chromosomal defects, for instance as a reliable predictor or as a method for the effective and early identification of an abnormality. This tool would be of great help to obstetricians and, of course, to pregnant women and unborn children. In recent years, neural networks and other computationally intelligent techniques have been used as medical
diagnosis tools aiming at achieving effective medical decisions incorporated in appropriate medical support systems [3], [4], [5]. Neural networks in particular have proved to be quite effective and have also resulted in some relevant patents [6], [7].
II. DATA
The data were obtained from the greater London area and South-East England, from pregnant women attending routine clinical and ultrasound assessment of the risk for chromosomal abnormalities. This assessment was performed by measurement of fetal nuchal translucency thickness, maternal serum free β-human chorionic gonadotropin (free β-hCG) and serum pregnancy-associated plasma protein A (PAPP-A) at 11 to 13+6 weeks of gestation. Gestational age was derived from the fetal crown-rump length (CRL). The database was composed of 23513 cases. These were provided by the Fetal Medicine Foundation (FMF) in London. For each case, 19 parameters that were presumed to contribute to the diagnosis were recorded. Based on recommendations from medical experts, some parameters were excluded from the study. Ultimately, only 11 of the 19 parameters were considered the most influential in characterizing the risk of chromosomal defect occurrence, and those were used in building the neural predictor. These parameters are shown in Table 1.
A test set of 109 cases was extracted and used to test the progress of training. This data set included 25 cases (23%) of fetuses that were abnormal. Also, a hold-out verification data set of 93 cases, of which 15 (16%) were abnormal, was extracted to serve as a data set totally unknown to the neural network, and thus to be used for checking the prediction capabilities of each attempted network. The data set under examination is very difficult to study because of its unbalanced nature: it has 23513 cases, of which 23296 are normal and 217 abnormal. The nature of this problem makes the use of artificial neural networks all the more interesting, since their ability to handle such complex problems can be demonstrated.
III. NEURAL PREDICTOR
A number of feed-forward neural structures, both standard multilayer, with varying numbers of layers and neurons per layer, and multi-slab, of different structures, sizes and activation functions, were tried for the prediction. This was done in a planned and systematic manner so that the best architecture would be obtained. Considering the results of this search, a multi-slab neural structure having five slabs, connected as depicted in Figure 1, was ultimately selected and used.
Table 1 Parameters used for the first trimester diagnosis of trisomy 21
Maternal age
CRL, Crown Rump Length (mm)
GA, Gestational Age when the crown rump length (CRL) was measured (in days)
Previous T21, T18, T13 (yes or no)
NT, Nuchal Translucency (mm)
FHR, Fetal Heart Rate
NB, Nasal Bone (normal, abnormal)
TF, Tricuspid Flow from RA to RV (normal, abnormal)
DV, Ductus Venosus Flow (normal, abnormal)
Defects
Serum marker PAPP-A
β-hCG, Human Chorionic Gonadotropin
Fig. 1. The neural structure that was ultimately selected and used for the first trimester diagnosis of chromosomal defects
These parameters were encoded on appropriate numerical scales so as to make the neural processing most effective.
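As an illustration of such an encoding (the paper does not specify the exact scales, so every mapping, field name and scaling factor below is a hypothetical placeholder, not the authors' scheme), one might map the Table 1 parameters onto small numeric ranges as follows:

```python
# Hypothetical sketch of encoding the Table 1 parameters numerically.
# The scales and field names are placeholders chosen for illustration.

def encode_case(case: dict) -> list:
    """Encode one fetal record into a numeric feature vector."""
    yes_no = {"no": 0.0, "yes": 1.0}
    normal_abnormal = {"normal": 0.0, "abnormal": 1.0}
    return [
        case["maternal_age"] / 50.0,            # scale years to roughly [0, 1]
        case["crl_mm"] / 100.0,                 # crown-rump length (mm)
        case["nt_mm"],                          # nuchal translucency (mm)
        yes_no[case["previous_t21_t18_t13"]],
        normal_abnormal[case["nasal_bone"]],
        normal_abnormal[case["tricuspid_flow"]],
        normal_abnormal[case["ductus_venosus"]],
        case["papp_a"],                         # serum marker PAPP-A
        case["free_bhcg"],                      # free beta-hCG
    ]

example = {
    "maternal_age": 35, "crl_mm": 62.0, "nt_mm": 2.1,
    "previous_t21_t18_t13": "no", "nasal_bone": "normal",
    "tricuspid_flow": "normal", "ductus_venosus": "normal",
    "papp_a": 1.0, "free_bhcg": 1.1,
}
print(encode_case(example))
```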
Based on extensive previous experience, all the weights were initialized to 0.3, while the learning rate was the same for all connections, with a value of 0.1. Similarly, the momentum rate was 0.2 for all links. The test set was applied at the end of each epoch to test the progress of training. If the results of the testing at time t
were better than those at time t – 1, the weights were saved and kept as the better set. The training progress was monitored in order to observe whether there was improvement during the application of the training and test set data. For most of the network structures attempted, there was little generalization improvement after about 1500 epochs. Different sets of inputs were used to find an effective neural structure that would predict first trimester chromosomal defects to an acceptable level. The inputs that were ultimately selected are those shown in Table 1.
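The weight-checkpointing schedule just described can be sketched as follows. This is only a hypothetical skeleton: `train_one_epoch`, `evaluate` and `weights` are placeholder names assumed here, not the authors' software.

```python
# Sketch of the training schedule described above: weights initialized
# to 0.3, learning rate 0.1, momentum 0.2, and after each epoch the test
# set is evaluated; weights are saved whenever test performance improves.
import copy

def train(network, train_data, test_data, epochs=1500, lr=0.1, momentum=0.2):
    best_score = -1.0
    best_weights = None
    for epoch in range(epochs):
        network.train_one_epoch(train_data, lr=lr, momentum=momentum)
        score = network.evaluate(test_data)   # test result at time t
        if score > best_score:                # better than at time t - 1?
            best_score = score
            best_weights = copy.deepcopy(network.weights)
    return best_weights, best_score
```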
IV. RESULTS

Following a rigorous investigation of different neural structure topologies, sizes and initial conditions, the structure described before gave the best results, which are summarized in Tables 2 and 3 below. The sizes of the various data sets used for the training and verification are shown in Table 2.

Table 2. Data sets used for the development of the neural predictor of trisomy 21

SET                                  SIZE
Total number of cases                23513
Normal cases                         23296
T21 cases                            217
Training set cases                   23311
Training set, normal cases           23134
Training set, T21 cases              177
Test set cases                       109
Test set, normal cases               84
Test set, T21 cases                  25
Verification set cases               93
Verification set, normal cases       78
Verification set, T21 cases          15

The results obtained for the neural structure described before are given in Table 3. It is seen that a high correct classification of 98.9% was obtained for all the cases of the totally unknown verification set. In this set there were 15 unknown cases of trisomy 21, of which 14 (93.3%) were correctly predicted. The correlation coefficient for this unknown verification data set was 95.5%.

Table 3. Results of the data sets used for the development of the neural predictor of trisomy 21

SET                                  SIZE    % of total cases
Training set correct cases           23307   99.9
Training set T21 correct cases       175     98.9
Test set correct cases               109     100.0
Test set T21 correct cases           25      100.0
Verification set correct cases       92      98.9
Verification set T21 correct cases   14      93.3

V. CONCLUSIONS AND FUTURE WORK

The results summarized in Table 3 are very encouraging, because they give a 100% diagnostic yield during testing and a 93.3% diagnostic yield on a totally unknown T21 data set. It should be mentioned that we have not come across any previous work using artificial neural networks for this problem. Statistical techniques have been used, however, with results of lower performance. The results obtained in this study are, to our knowledge, the highest recorded in the literature. The study is currently under further development for improvement; in addition, other chromosomal defects are being studied, namely T-18, T-13 and Turner syndrome, with additional data from the FMF about these cases [1]. We aim to develop, with the help of the doctors from the clinic, a complete and user-friendly diagnostic tool that will cover the whole spectrum of chromosomal defects during the first trimester.
ACKNOWLEDGMENT
The Fetal Medicine Foundation (FMF) is a UK registered charity (No. 1037116). This research is partly supported by the University of Cyprus International Affairs Committee.
REFERENCES
1. Nicolaides K (2004) The 11-13+6 weeks scan. Fetal Medicine Foundation, London.
2. Nicolaides K (2004) Nuchal translucency and other first-trimester sonographic markers of chromosomal abnormalities. Am J Obstet Gynecol 191:45-67.
3. Rumbold A, Crowther C, Haslam R, Dekker G, Robinson J; ACTS Study Group (2006) Vitamins C and E and the risks of preeclampsia and perinatal complications. N Engl J Med 354:1796-1806.
4. Brause R (2001) Medical analysis and diagnosis by neural networks. Computer Science Department, Frankfurt a. M., Germany.
5. Temurtas F (2009) A comparative study on thyroid disease diagnosis using neural networks. Expert Systems with Applications 36:1.
6. Tourassi G, Floyd C, Lo J (1999) A constraint satisfaction neural network for medical diagnosis. Neural Networks, vol. 5.
7. Neocleous C, Anastasopoulos P, Nikolaides K, Schizas C, Neokleous K (2009) Neural networks to estimate the risk for preeclampsia occurrence. Proceedings of the International Joint Conference on Neural Networks, Atlanta, USA.
Numerical Analysis of a Novel Method for Temperature Gradient Measurement in the Vicinity of Warm Inflamed Atherosclerotic Plaques
Z. Aronis1, E. Massarwa2, L. Rosen3, O. Rotman1, R. Eliasy2, R. Haj-Ali2,4, and S. Einav1
1 Tel-Aviv University/Biomedical Engineering Department, Tel-Aviv, Israel
2 Tel-Aviv University/Mechanical Engineering Department, Tel-Aviv, Israel
3 CorAssist Cardiovascular Ltd., Herzliya, Israel
4 Georgia Institute of Technology/School of Civil Engineering, Atlanta, GA 30332, USA
Abstract— Thermography is a method used mainly for the detection of warmer arterial wall regions, as an indication of the presence of inflamed atherosclerotic plaques. A new method, utilizing injection of cold saline into the bloodstream and measuring temperature gradients within the flow instead of at the wall, is numerically investigated. Results show an almost 12-fold increase in expected temperature gradients, emphasizing the usefulness of such a method for novel catheter design.
Keywords— Thermography, Atherosclerosis, Plaque, Numerical Simulations.
I. INTRODUCTION
Atherosclerosis is an inflammatory disease [1]. It is generally assumed that macrophages play an important role in atherosclerotic plaque inflammation, by secreting cytokines, growth factors, and matrix metalloproteinases (MMPs), which destabilize the plaque and promote its rupture. During the past decade, several studies aimed to assess thermal heterogeneity over plaque surfaces, to demonstrate that inflamed lesions are hotter. One of the first studies [2] aimed at predicting thrombotic events from the heat released by activated macrophages on the plaque surface or under its thin cap. By measuring intimal surface temperatures at different sites of ex-vivo human carotid artery plaques, temperature differences of 0.2 to 2.2°C were measured. Temperature differences also correlated positively with the density of the underlying cells (mostly macrophages), and inversely with cap thickness. To further investigate the subject, a thermography catheter was designed and developed for in vivo measurements of thermal heterogeneity in the human arterial system [3]. The distal tip of the intravascular catheter was equipped with accurate thermistor temperature microsensors, enabling measurement of temperature when in close contact with the vascular wall. This research included patients with unstable angina and with acute myocardial infarction, and found that median temperature differences at the site of the lesion were increased by 1.025°C and by 2.15°C, respectively, from the core temperature. It has also been found that systemic
markers of inflammation such as CRP (C-reactive protein) and SAA (serum amyloid A) correlated with temperature differences, hence implying that the increased local heat production of coronary atherosclerotic lesions may be due to an inflammatory response. In another study conducted by the same group [4], it was demonstrated that the difference between atheromatous plaque temperature and background temperature was a strong predictor of cardiac events in patients after a successful percutaneous intervention. Moreover, a threshold of 0.5°C was found, above which the rate of these adverse cardiac events increased significantly. These findings were followed by an in vivo animal study [5], which demonstrated the presence of temperature heterogeneity in hypercholesterolemic rabbits and its absence in normocholesterolemic rabbits. Another animal model showed a thermal heterogeneity of 1.5 to 2.0°C inside the aorta of hypercholesterolemic rabbits [6]. Following the aforementioned pioneering human thermography studies, smaller-scale ones [7] measured more modest temperature increases, in the range of 0.1 to 0.36°C, in patients with stable angina, unstable angina, and acute myocardial infarction, yet such an increase was not found in all of the subjects. Consequently, analytic and numerical studies, modeling inflamed coronary artery plaques and their relevance to temperature heterogeneity, have been initiated, taking advantage of the fact that such models can easily change various meaningful model parameters. Mathematical simulation of a coronary artery model with a heat source confirmed that the measured temperature is strongly influenced by blood flow and also by cap thickness and source geometry [8]. Cases in which the heat source was located in the proximal or distal shoulder of the plaque were further studied [9], as well as the implications of the blood temperature profile in multifocal coronary artery disease and the design of the thermography catheter. Another aspect that should be taken into consideration, per another numerical study [10], is the presence of the thermography catheter in the artery and its effect on the measurement, which was previously demonstrated to be meaningful. Another significant result of a numerical model was
the prediction that the temperature increase (due to metabolism) in the center of the plaque is less than 0.1°C, thereby claiming that higher reported values should be attributed to other factors [11]. Recently, a contra-hypothesis, claiming that previous measurements of lesion wall temperature increase should be attributed to blood pressure effects rather than heat release from the unstable plaque wall, was presented in-vitro and in-vivo [12]. This hypothesis was based on pressure and temperature recordings acquired while inflating a balloon proximal and distal to the plaque. To further analyze this assumption and its implications for the interpretation of previous clinical findings, there is obviously a need for additional thorough in-vitro and in-vivo study, with more varying parameters (i.e. flow, pressure, temperature, etc.). Furthermore, a better diagnosis method, with improved sensitivity to warmer regions of the arterial wall, is sought in order to improve robustness. As a first step toward such a study, the present work aims to estimate the temperature differences of the blood flow in the vicinity of warmer arterial wall regions, compared to the core temperature, both in a baseline condition of physiologic blood temperature measurement and also while injecting the blood flow with cold saline in order to amplify the temperature gradients. Flow conditions are then varied in both situations, in order to map flow and pressure influences on the measured temperature gradients.
II. METHODS
A 3-D parametric model of an LAD coronary artery with a plaque was designed in CAD software (Solidworks Corporation©, Concord, Massachusetts, USA) and imported into numerical Fluid-Structure-Interaction (FSI) software (ADINA R & D, Inc., MA, USA). The outer dimensions of the media and lumen were taken from a sequence of IVUS-VH LAD artery images, and incorporated into the CAD software only as the circular outer dimensions of these tissues. The LAD was chosen for this simulation due to its significance in left ventricle perfusion, and also because one third of coronary stenoses tend to occur in this artery [13]. The model consisted of a 3-D stenosed vessel section (see Fig. 1), with a length and average diameter of 35 mm and 3.5 mm, respectively. A 36 mm long extension was added to the proximal part of the vessel in order to allow the fluid to fully develop as it enters the arterial section. The stenosed LAD model was investigated under a typical physiological LAD flow profile. The flow inside the vessel was pulsating. It is generally agreed that under physiological conditions, the Newtonian model for blood rheology can be considered acceptable as a first level approximation. For this reason,
the simulations considered blood as an incompressible and Newtonian fluid. The flow was assumed to be laminar, as the mean Reynolds number in the coronary arteries is about 120 under resting conditions [14]. Blood and arterial wall properties were taken from the literature [8].
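As a quick plausibility check of the laminar-flow assumption, the following sketch computes the Reynolds number for the 3.5 mm lumen. The blood density and viscosity are typical literature values assumed here for illustration, not quoted from the paper; the 59 cm/s value anticipates the peak velocity reported later in the Results.

```python
# Back-of-the-envelope Reynolds number check for the coronary model.
rho = 1060.0      # blood density, kg/m^3 (assumed typical value)
mu = 3.5e-3       # dynamic viscosity, Pa*s (assumed Newtonian blood)
D = 3.5e-3        # mean lumen diameter, m (from the model description)

def reynolds(v):
    """Reynolds number for a mean velocity v in m/s."""
    return rho * v * D / mu

# ~0.113 m/s reproduces the quoted resting Re of about 120; even at the
# 59 cm/s peak velocity, Re stays far below the ~2300 laminar threshold.
for v in (0.113, 0.59):
    print(f"v = {v:5.3f} m/s -> Re = {reynolds(v):6.0f}")
```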
Fig. 1 A. The geometry of the lumen; B. a longitudinal cross-section of the artery, including the media (brown), adventitia (green), eccentric plaque at the bottom (purple) and necrotic tissue within the plaque (blue).
The parametric model allows a variety of geometrical factors to be altered, including the luminal and medial dimensions, the depth and width of a plaque or necrotic volume, and the location, size and distribution of microcalcifications. Altering these parameters allows for a better comprehension of the contribution of each structural factor both to the temperature distribution in the vessel wall and the blood flow, and to the mechanical stress distribution resulting from the fluid-structure coupling. The simulations presented in the current work include only the Adina-CFD module, presenting the temperature distribution of the blood flow in the luminal space (Fig. 1-A). The region of the eccentric plaque in contact with the lumen was set with a uniform temperature boundary condition of 39°C, warmer than the core temperature, which was set at 37°C. In a second case the core temperature was set at 17°C, which is well within the range of temperatures used for similar procedures, such as thermodilution. The temperature of the plaque was maintained at 39°C, increasing the initial temperature difference by 20°C compared to the first case. Next, the blood flow velocity was reduced by half for both cases to check its influence on the temperature distribution.
III. RESULTS The released heat from the arterial wall is transferred by conduction and convection to the surrounding neighborhood. Regions in which the velocity is higher contribute to faster cooling of the blood, while regions in which velocity
is lower are characterized by higher blood temperature. The peak velocity of the flow reached 59 cm/sec at the center of the narrowest area of the lumen (Fig. 2-A). The temperature distribution is presented in Fig. 2-B: a warm region of blood flow develops in the vicinity of the hot plaque, and cools down towards the counter wall, which is closer to the core temperature.
Fig. 2 A. Velocity vectors; B. Temperature distribution of case 1 (blood core temperature at 37°C); the warm plaque wall is located at the bottom. Points I and II indicate temperature measurement points.
Two points were selected at the same axial position of the artery (Fig. 3): point I at a distance of 0.9 mm from the border of the warm plaque with the lumen, representing a heated area of the flow in the vicinity of the plaque, located midway between the plaque wall and the center of the flow, and point II at a distance of 0.3 mm from the counter wall, representing a cooler region with a temperature closer to the core temperature, which can actually be measured almost anywhere in the artery, as long as it is not in the proximity of the plaque. The difference between the measured values at points I and II in both cases is depicted in Fig. 3. When temperature is measured at peak velocity at measurement points I and II, the values are 37.49°C and 37.02°C, respectively, for case 1, in which the blood core temperature is kept at the physiological value of 37°C (Fig. 3-A). However, if the blood is cooled down to 17°C, as in case 2, the temperatures measured at points I and II are 22.46°C and 17.2°C, respectively. Thus, cooling the blood by 20°C for the period of temperature acquisition may increase the temperature gradient between a region in the vicinity of the warm plaque and a region representing the core temperature almost 12-fold, from 0.47°C to 5.26°C. Reducing the blood flow by half (thus reaching a maximal peak velocity of 0.29 m/sec) did not affect the temperature gradients in case 1. In case 2, reducing the flow had a minor effect, as the temperature measured at points I and II was 22.42°C and 17.18°C respectively, reducing the temperature gradient by a mere 0.38%, from 5.26°C to 5.24°C.
Fig. 3 Temperature measured at points I and II in A. case 1 (blood core temperature at 37°C), and B. case 2 (blood core temperature cooled down to 17°C). Temperature values are shown in K.
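The arithmetic behind the reported amplification can be checked directly from the quoted point temperatures; this fragment simply reproduces those numbers, nothing more.

```python
# Gradient between points I and II in each simulated case, from the text.
case1 = 37.49 - 37.02   # core at 37 C  -> 0.47 C gradient
case2 = 22.46 - 17.20   # core at 17 C  -> 5.26 C gradient
print(f"case 1 gradient: {case1:.2f} C")
print(f"case 2 gradient: {case2:.2f} C")
print(f"amplification:   {case2 / case1:.1f}x")   # ~11.2, i.e. almost 12-fold
```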
IV. DISCUSSION
Current thermography studies focus on arterial wall temperature as a marker of atherosclerosis. Different kinds of catheters have been developed in order to map temperature distributions along blood vessels in vivo, most of them equipped with thermistors or thermocouples that come in contact with the blood vessel wall to follow its contour while being pulled backwards along a guidewire. The results of the present study demonstrate how broadening current methods, by searching for blood temperature variations in the coronary arteries, and in particular the temperature levels distal to inflammation-suspicious areas, may help in locating vulnerable plaques. When measurements of wall temperature are compared in-vivo to blood core temperature, the variations are mostly in the range of less than 1°C, and only in severe cases, such as MI, may reach 2°C [5,6]. The current results suggest a possible benefit of applying a method in which cold saline is injected into the area of measurement prior to the acquisition of temperatures. Such a method does not require the catheter to be in touch with the wall, which carries the risk of injuring the already vulnerable thin-capped plaques. Even by
measuring the temperature in the lumen at a distance of 0.9 mm from the plaque, and comparing it with a reference temperature measured at any other region (e.g. in the vicinity of the opposing wall, or at any region which is proximal or distal to the inflamed region), an almost 12-fold increase in temperature gradients can be reached, rendering this method much easier for detecting regions suspected of being biologically active and inflamed. Reducing the blood flow did not have a significant effect on the temperature gradients. Nevertheless, only two flow waveforms were inspected here, and a much more thorough analysis should be conducted in order to reach firm conclusions regarding the effect of flow or pressure on the temperature distribution within the lumen. The results of this study emphasize the need for, and the potential benefits of, new designs of thermography catheters, having a spatial thermistor configuration enabling measurement of temperature variations in various cross sections of the artery, and utilizing injection of cold saline for the period of measurement. In the current study, a uniform plaque temperature was used. In practice, the various components within the plaque may have different temperature distributions. Thus, further analysis will be conducted in order to map the temperature distribution of an overall design of a vessel wall embedded with warm cores, acting as internal heat generators and conducting heat through the thin fibrotic cap of vulnerable plaques.
V. CONCLUSIONS
The vulnerable, highly-inflamed atherosclerotic plaque is hard to detect by conventional imaging techniques, and current efforts focus on the development of new modalities that will enable its localization by relying not only on its morphologic characteristics but also on its cellular activity and its consequences. The present work demonstrates the potential contribution of cold saline injection to the temperature gradients measured for the detection of actively inflamed regions of the artery. These high temperature gradients can indicate the location and severity of the inflammation of vulnerable atherosclerotic coronary plaques.
REFERENCES 1. Ross, R., Atherosclerosis--an inflammatory disease. N Engl J Med, 1999. 340(2): p. 115-26. 2. Casscells, W., et al., Thermal detection of cellular infiltrates in living atherosclerotic plaques: possible implications for plaque rupture and thrombosis. Lancet, 1996. 347(9013): p. 1447-51. 3. Stefanadis, C., et al., Heat production of atherosclerotic plaques and inflammation assessed by the acute phase proteins in acute coronary syndromes. J Mol Cell Cardiol, 2000. 32(1): p. 43-52. 4. Stefanadis, C., et al., Increased local temperature in human coronary atherosclerotic plaques: an independent predictor of clinical outcome in patients undergoing a percutaneous coronary intervention. J Am Coll Cardiol, 2001. 37(5): p. 1277-83. 5. Verheye, S., et al., In vivo temperature heterogeneity of atherosclerotic plaques is determined by plaque composition. Circulation, 2002. 105(13): p. 1596-601. 6. Madjid, M., et al., Thermal detection of vulnerable plaque. Am J Cardiol, 2002. 90(10C): p. 36L-39L. 7. Schoenhagen, P., et al., Coronary plaque morphology and frequency of ulceration distant from culprit lesions in patients with unstable and stable presentation. Arterioscler Thromb Vasc Biol, 2003. 23(10): p. 1895-900. 8. ten Have, A.G., et al., Temperature distribution in atherosclerotic coronary arteries: influence of plaque geometry and flow (a numerical study). Phys Med Biol, 2004. 49(19): p. 4447-62. 9. Rosen, L., et al. Temperature distribution in atherosclerotic coronary arteries: influence of plaque in multifocal coronary artery disease - a CFD model. in 3rd European Medical and Biological Engineering Conference. 2005. Prague, Czech Republic. 10. ten Have, A.G., et al., Influence of catheter design on lumen wall temperature distribution in intracoronary thermography. J Biomech, 2007. 40(2): p. 281-8. 11. Lilledahl, M.B., E.L. Larsen, and L.O. Svaasand, An analytic and numerical study of intravascular thermography of vulnerable plaque. Phys Med Biol, 2007. 52(4): p. 961-79. 12. Cuisset, T., et al., In vitro and in vivo studies on thermistor-based intracoronary temperature measurements: effect of pressure and flow. Catheter Cardiovasc Interv, 2009. 73(2): p. 224-30. 13. Wang, J.C., et al., Coronary artery spatial distribution of acute myocardial infarction occlusions. Circulation, 2004. 110(3): p. 278-84. 14. Nerem, R.M. and W.A. Seed, Coronary artery geometry and its fluid mechanical implications, in Fluid Dynamics as a Localizing Factor for Atherosclerosis, G. Schettler, Editor. 1983, Springer: Berlin.
Author: Ze'ev Aronis
Institute: Tel-Aviv University
Street: Kiryat-Ha'Universita
City: Tel-Aviv
Country: Israel
Email: [email protected]
A Multilevel and Multiscale Approach for the Prediction of Oral Cancer Reoccurrence
Konstantinos P. Exarchos1,2, G. Rigas1, Yorgos Goletsis3, Dimitrios I. Fotiadis1,*
1 Unit of Medical Technology and Intelligent Information Systems, Dept of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
2 Dept of Medical Physics, Medical School, University of Ioannina, Ioannina, Greece
3 Dept of Economics, University of Ioannina, Ioannina, Greece
Abstract— Oral cancer is the predominant neoplasm of the head and neck. Annually, more than 0.5 million new patients are diagnosed with oral cancer worldwide. After initial treatment and patient remission, reoccurrence rates still remain quite high. Early identification of such relapses is of crucial significance. Up to now, several approaches have been proposed for this purpose, yielding, however, unsatisfactory results. This is mainly attributed to the non-unified nature of these studies, which focus only on a subset of the factors involved in the development and reoccurrence of oral cancer. Here we propose an orchestrated approach based on Dynamic Bayesian Networks (DBNs) for the prediction of a potential relapse after the disease has reached remission. A broad range of heterogeneous data sources featuring clinical, imaging and genomic information is assembled and analyzed over a predefined time-span, in order to decipher new and informative feature groups that correlate significantly with the progression of the disease and to identify early potential relapses (local or metastatic) of the disease.
Keywords— Oral Cancer, Dynamic Bayesian Networks, Cancer Evolution Monitoring.
I. INTRODUCTION
Oral cancer is the cancer type that arises in the head and neck region, i.e. in any part of the oral cavity or oropharynx. Oral cancer constitutes the eighth most common cancer in the worldwide cancer incidence ranking [1]. Oral cancer is highly related to the sex of the patient, with men facing twice the risk of being diagnosed with oral cancer compared to women [2]. Furthermore, many more risk factors have been associated with oral cancer, such as smoking, especially coupled with alcohol consumption, as well as sun exposure [1] and HPV infection [3]. In terms of reoccurrence, oral cancer is quite aggressive: locoregional relapses during remission have been reported in the range of 25-48%, and 95% of cases occur within 24 months from remission. Currently implemented methods aiming to predict oral cancer reoccurrence after remission have reported quite inadequate results [4,5,6]. Although several factors have been associated with the reoccurrence of oral cancer, such as age, site and stage of the primary tumor as well as certain
histological features, they have not been studied altogether in a collective study. Moreover, especially regarding the molecular basis of the disease, currently available biomarkers are limited in number and efficiency [7,8]. The identification of new features and the efficient combination of the already known ones will also greatly benefit the accurate stratification of patients in terms of staging. In the general framework of disease prognosis and modeling, several diverse approaches have been proposed in the literature. Most of them involve a prognostic model which implements a risk score depicting the progression of the disease and the general condition of the patient. Based on this score, simple decision rules are used to stratify the patients into several risk categories [9,10]. More recent approaches utilize advanced machine learning algorithms, such as Artificial Neural Networks (ANNs) or Support Vector Machines (SVMs), which accept several variables as input and provide a prediction of the desired outcome. However, most of these approaches work as "black boxes" and thus do not provide adequate reasoning for their decisions [11,12]. In addition, it is very cumbersome, if not infeasible, to properly represent temporal problems using these algorithms. These issues pose significant limitations on the acceptability of the produced decision systems both by the medical community and by the patients. Especially in the case of oral cancer, physicians are extremely interested in knowing if, when and why a reoccurrence will appear. Hence, especially for the problem under consideration (i.e. oral cancer reoccurrence prediction), it is very important to provide sufficient justification for the prediction, but also to introduce the time dimension into the modeling procedure. In this work, we propose a unified approach in order to collect and systematically analyze the factors that are putatively associated with the reoccurrence of oral cancer after remission. To this end, we utilize heterogeneous clinical, imaging and genomic data, thus facilitating the multiscale and multilevel modeling of the disease progression over time. Due to the constantly evolving nature of the disease, we employ DBNs, which efficiently cope with temporal causalities, thus identifying the timing of a potential reoccurrence. Moreover, the intuitive design of DBNs
allows for comprehensible decisions coupled with adequate reasoning. The diverse set of gathered data is likely to uncover the factors that dictate the evolution and development of the disease during remission, both at the molecular level and at the phenotype level. Subsequently, we are able to stratify patients more accurately according to a risk status; knowing in advance of a potential deterioration in the disease progression is a key factor towards the determination of the most proper treatment.
II. MATERIALS AND METHODS
A. Clinical scenario
The clinical scenario employed in the current study is depicted in Figure 1. Initially a patient is diagnosed with cancer through traditional clinical procedures. At this point the physician collects the required data in order to extract the baseline profile, and the patient is treated appropriately. After the therapeutic intervention (i.e. surgical removal, chemo/radio-therapy), the patient either reaches complete remission or particles of the cancerous tissue remain intact. In the latter case the patients do not qualify for the purposes of our study, whereas from the patients in complete remission data are further collected at scheduled regular visits, forming the post-treatment profile. Afterwards, and during an 18-month time span, data are gathered from the patient on a regular basis, thus formulating a set of follow-up "snapshots". The analysis of these data aims to stratify the patients into two clusters: i) low risk of disease reoccurrence and ii) high risk of reoccurrence. Hence, we are able to duly identify relapses of the disease and adjust the follow-up treatment accordingly.
Fig. 1: Clinical scenario of the proposed study (from diagnosis and baseline profiling, through treatment and identification of the post-treatment profile, to the follow-up snapshots that are checked for reoccurrence over time).
B. Data collection
In the present study, the progress of the disease in a total of almost 150 patients with oral squamous cell carcinoma is evaluated. According to the available literature, 70-80% of these patients are expected to achieve complete remission of the disease after treatment, and approximately 30-40% of them will develop a reoccurrence [1]. Due to the complex nature of cancer, a major challenge towards its diagnosis and treatment is to formulate a collective approach in order to "frame" every possible aspect. For this purpose we propose an orchestrated approach which involves the integration and analysis of multiscale and multilevel data. Specifically, clinical, imaging and genomic data are assembled, ranging in scale of dimension and localization, as follows:
Clinical Data
For the diagnosis and monitoring of patients with oral cancer the following types of clinical data are assembled:
• Anamnesis
• Demographics
• Risk factors
• TNM staging
• Pathological data
Anamnesis refers to the detailed medical review of the patient's past health state. Demographic data along with several risk factors are also assembled in order to aid the diagnosis. Next, the tumor's clinicopathological stage and developmental phase are evaluated using TNM staging, developed and maintained by the American Joint Committee on Cancer (AJCC) and the International Union Against Cancer (UICC). The T stands for tumor and describes its size and whether it has invaded nearby tissue. The N in TNM refers to the node status, and determines whether or not the cancer has spread into nearby lymph nodes. Due to the large number of lymph nodes in the head and neck area, careful assessment of the lymph nodes is an important part of staging. The M refers to the presence of metastasis or distant metastasis (spread of cancer from one body part to another). Moreover, several markers have been proven to affect the patient's response to adjuvant and neo-adjuvant treatments [13,14]. In the present study we compile an extensive list containing all relevant clinical factors, featuring in total 97 attributes, in order to perform a collective study of their relation with oral cancer progression and treatment efficacy.
Genomic Data
In the present study we employ oligonucleotide and complementary DNA arrays in order to unravel the molecular
basis of oral cancer. After obtaining the gene expression data from the microarray experiments, several preprocessing steps take place, aiming to enhance the quality of the obtained features. Furthermore, data with high variability, data with too low a signal, and genes with a large number of missing values, constituting unreliable expression levels, are carefully filtered out.
Imaging Data
Image data from the cancerous tissue can reveal certain significant characteristics of the localization and progress of the disease; the present study employs MRI and CT images. The manipulation of the employed images involves the following main steps, which are also depicted in Figure 2:
• Image preprocessing
• Definition of regions of interest (ROIs)
• Extraction and selection of features
• Classification of the selected ROIs

Fig. 2: Image data analysis and manipulation.

Initially, the images need to be preprocessed properly in order to remove certain types of imaging data contamination, such as noise and artifacts. These problems can be attributed to several factors, such as human error, measuring device limitations, etc. Other types of image preprocessing involve edge enhancement (e.g. unsharp masking, wavelet transformation), image contrast enhancement (histogram equalization) and image standardization. In the next step, we detect certain regions of interest, i.e. regions of the preprocessed image bearing an enhanced role for our purposes. Afterwards, several features are extracted from the ROIs in order to uniquely characterize the image itself or structures contained in the data. Some of these features represent quantitative measurements with a certain physical meaning that a specialized physician would take into account in order to formulate the diagnosis. However, in some cases features with no apparent physical meaning can be extracted, due to their enhanced discriminative potential.
Personalized genetic signature
A personalized genetic signature aims to capture patient-specific perturbations of the disease evolution at its molecular basis. For each patient, the gene expression values before treatment (cancerous profile) and in the first stages of remission (cancer-free profile) are compared. The outcome is a limited set of differentially expressed genes, representative for each patient, which constitutes a personalized genetic signature. The expression of these genes at each follow-up visit is compared in turn with the cancerous and the cancer-free profiles, calculating the correlation and the Euclidean distance; these metrics provide, respectively, a qualitative and a quantitative measure of the patient's prognosis. For the Euclidean distance, a weighted variant is employed which takes into account the significance of each gene in the personalized genetic signature. This weighting factor is proportional to the differential expression of each gene between the cancerous and the cancer-free profile, as in the sketch that follows.
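As a rough illustration of these two metrics (purely a sketch with made-up expression values, not part of the authors' pipeline), the following fragment computes the correlation and the gene-weighted Euclidean distance of a follow-up expression vector against the two reference profiles:

```python
# Sketch of the follow-up comparison: correlation and a gene-weighted
# Euclidean distance against the cancerous and cancer-free profiles.
# All numbers below are illustrative placeholders.
import numpy as np

def weighted_euclidean(a, b, w):
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

cancerous   = np.array([5.2, 1.1, 3.8, 0.4])   # pre-treatment profile
cancer_free = np.array([2.0, 2.9, 1.2, 1.6])   # early-remission profile

# Weights proportional to each signature gene's differential expression.
w = np.abs(cancerous - cancer_free)
w = w / w.sum()

follow_up = np.array([2.3, 2.6, 1.5, 1.4])     # one follow-up visit

for name, profile in [("cancerous", cancerous), ("cancer-free", cancer_free)]:
    r = float(np.corrcoef(follow_up, profile)[0, 1])
    d = weighted_euclidean(follow_up, profile, w)
    print(f"vs {name:11s}: correlation = {r:+.2f}, weighted distance = {d:.2f}")
```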
C. Disease Evolution Monitoring
In the present study we employ DBNs in order to identify potential relapses of the disease early, during the follow-up. As described in the clinical scenario, a snapshot of the patient's medical condition is acquired by the doctor at every predefined follow-up visit. By exploiting the information of past snapshots, we aim to model the progression of the disease in the future. The proposed prognostic model is based on DBNs, which are temporal extensions of Bayesian Networks (BNs) [15]. A BN can be described as a directed acyclic graph whose nodes represent random variables and whose edges encode conditional dependencies between them; a DBN replicates this structure over consecutive time slices.
In order to build a model that successfully evaluates the current state or predicts a state in the future (the next time slice), we need to fine-tune both the intra- and the inter-slice dependencies of the DBN, using expert knowledge as a prior model and experimental data to obtain a more accurate posterior model. After the training procedure, we obtain a model such as the one shown in Figure 3. By providing some evidence to the model, we are able to infer the probability of any variable for every time slice, including of course the probability of reoccurrence.
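For intuition, the following sketch shows the kind of temporal inference a DBN performs: a hidden reoccurrence state propagated across follow-up snapshots by a transition model and updated by the likelihood of each observed snapshot (standard forward filtering). This is not the authors' implementation, and all probabilities are illustrative, not learned from the NeoMark data.

```python
# Forward filtering over a two-state "reoccurrence" chain.
import numpy as np

# P(state_t | state_{t-1}); states: 0 = no reoccurrence, 1 = reoccurrence
T = np.array([[0.92, 0.08],
              [0.00, 1.00]])

# P(observed snapshot | state), one likelihood pair per follow-up visit
evidence_likelihood = [
    np.array([0.80, 0.30]),   # follow-up 1: looks benign
    np.array([0.60, 0.55]),   # follow-up 2: ambiguous
    np.array([0.20, 0.90]),   # follow-up 3: suspicious
]

belief = np.array([0.97, 0.03])   # prior belief after remission
for t, lik in enumerate(evidence_likelihood, start=1):
    belief = belief @ T           # predict the next time slice
    belief = belief * lik         # incorporate the observed snapshot
    belief = belief / belief.sum()
    print(f"follow-up {t}: P(reoccurrence) = {belief[1]:.2f}")
```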
Fig. 3: Provisional architecture of the employed DBN model. Each time slice (pre-treatment, follow-up 1, ..., follow-up n) contains clinical (e.g. age, gender), imaging (e.g. periosteal infiltration, perineural spread), genomic (e.g. p53, EGFR) and personalized genetic signature nodes, all connected to a reoccurrence node.

III. DISCUSSION AND CONCLUSIONS

In the present study we propose an advanced framework that exploits heterogeneous sources of data towards the prediction of oral cancer reoccurrence in patients that have reached remission. A large number of clinical, genomic and imaging features are analyzed in order to extract biomarkers that are highly associated with relapses of oral cancer. Thus, we overcome a major limitation of similar studies in the field, which employ only a confined subset of the features associated with oral cancer. Another significant challenge is to capture the disease progression over time. For this purpose we employ DBNs, which are specifically designed to represent temporal causalities. The inclusion of the time dimension is very important, as most doctors are interested – even with a rough approximation – in the timing of the reoccurrence. Furthermore, DBNs are able to provide reasoning for the reported decisions, thanks to their transparent architecture. This characteristic is very appealing to, if not a prerequisite of, the medical community. Hence, not only are we able to predict a certain outcome, but we also gain insight into the rationale of every decision. Overall, the currently proposed framework contributes significantly towards the monitoring of oral cancer evolution, since it can answer if, when and why a reoccurrence might appear.

ACKNOWLEDGMENT
This work is part funded by the European Commission NeoMark project (FP7-ICT-2007-224483 – ICT enabled prediction of cancer reoccurrence).

REFERENCES
1. Haddad, R.I., Shin, D.M.: Recent advances in head and neck cancer. The New England Journal of Medicine 359 (2008) 1143-1154
2. http://www.cancer.net: Oral and Oropharyngeal Cancer (2008)
3. Mork, J., Lie, A.K., Glattre, E., Hallmans, G., Jellum, E., Koskela, P., Moller, B., Pukkala, E., Schiller, J.T., Youngman, L., Lehtinen, M., Dillner, J.: Human papillomavirus infection as a risk factor for squamous-cell carcinoma of the head and neck. The New England Journal of Medicine 344 (2001) 1125-1131
4. Forastiere, A., Weber, R., Ang, K.: Treatment of head and neck cancer. The New England Journal of Medicine 358 (2008) 1076; author reply 1077-1078
5. Godden, D.R., Ribeiro, N.F., Hassanein, K., Langton, S.G.: Recurrent neck disease in oral cancer. J Oral Maxillofac Surg 60 (2002) 748-753; discussion 753-745
6. Sciubba, J.J.: Oral cancer. The importance of early diagnosis and treatment. American Journal of Clinical Dermatology 2 (2001) 239-251
7. D'Silva, N.J., Ward, B.B.: Tissue biomarkers for diagnosis & management of oral squamous cell carcinoma. The Alpha Omegan 100 (2007) 182-189
8. Lippman, S.M., Hong, W.K.: Molecular markers of the risk of oral cancer. The New England Journal of Medicine 344 (2001) 1323-1326
9. Knaus, W.A., Wagner, D.P., Draper, E.A., Zimmerman, J.E., Bergner, M., Bastos, P.G., Sirio, C.A., Murphy, D.J., Lotring, T., Damiano, A., et al.: The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest 100 (1991) 1619-1636
10. Le Gall, J.R., Lemeshow, S., Saulnier, F.: A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study. JAMA 270 (1993) 2957-2963
11. Cruz, J.A., Wishart, D.S.: Applications of Machine Learning in Cancer Prediction and Prognosis. Cancer Informatics 2 (2006) 59-78
12. Delen, D., Walker, G., Kadam, A.: Predicting breast cancer survivability: a comparison of three data mining methods. Artificial Intelligence in Medicine 34 (2005) 113-127
13. Woolgar, J.A., Rogers, S., West, C.R., Errington, R.D., Brown, J.S., Vaughan, E.D.: Survival and patterns of recurrence in 200 oral cancer patients treated by radical surgery and neck dissection. Oral Oncology 35 (1999) 257-265
14. Woolgar, J.A., Scott, J., Vaughan, E.D., Brown, J.S., West, C.R., Rogers, S.: Survival, metastasis and recurrence of oral cancer in relation to pathological features. Annals of the Royal College of Surgeons of England 77 (1995) 325-331
15. Murphy, K.P.: Dynamic Bayesian Networks: Representation, Inference and Learning. University of California (2002)
An Automated Method for Levodopa-Induced Dyskinesia Detection and Severity Classification
M.G. Tsipouras1, A.T. Tzallas1, G. Rigas1, P. Bougia1, D.I. Fotiadis1 and S. Konitsiotis2
1 Unit of Medical Technology and Intelligent Information Systems, Department of Material Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
2 Department of Neurology, Medical School, University of Ioannina, 45110 Ioannina, Greece

Abstract— In this paper we propose an automated method for levodopa-induced dyskinesia (LID) detection and classification of its severity. The method is based on the analysis of signals recorded from accelerometers placed at certain positions on the patient's body. The signals are analyzed using a moving window and several features are extracted. Based on these features, a decision tree is used to detect whether LID symptoms occur and to classify them according to their severity. The method has been evaluated using a group of patients and the obtained results indicate high classification ability (95% classification accuracy). Furthermore, extensive evaluation has been performed in order to determine the optimal positioning of the sensors and the selection of the classification algorithm.
Keywords— Levodopa-induced dyskinesia detection, levodopa-induced dyskinesia severity classification, automated diagnosis.
I. INTRODUCTION
Levodopa-induced dyskinesia (LID) is a disabling and distressing complication of chronic levodopa therapy in patients who suffer from Parkinson's disease [1]. LID is most commonly manifested as a jerky, dance-like movement of the arms and/or head. These movements (called choreic or dystonic) range from 1-5 Hz [1,2]. LID symptoms can be rated in various ways: by their topography (affected body regions), by their duration or consistency of effect, by the disability they impart, by the extent of enhanced severity from activation due to volitional movement, and by their severity [2]. The presence and severity of LID can change during the day and, as such, the detection, assessment and following of changes in these signs during daily activities are of great interest. In addition, the effective characterization and quantification of LID not only improves our understanding of its pathophysiological mechanisms, but also helps diagnosis and the evaluation of treatment. Current assessment of LID relies mainly on clinical methods [2,3]. Unfortunately, clinical methods lack objectivity and are not feasible for long-term assessment by experts [2,4]. To overcome the limitations of subjective assessment of LID and to gain insight into its
pathophysiology, several computer-based methods have been developed using quantitative instrumental techniques such as: movement sensors (accelerometers and gyroscopes) [2-6], (surface) electromyography [2,7], force gauges (instruments used to measure the force during push or pull tests) [2,8], position transducers (force transducers that measure arm movements) [2,8] and Doppler ultrasound systems [2,8,9]. Methods based on accelerometer signal analysis differ greatly in the body segments from which movements are measured and in the number of accelerometers per segment. A major challenge for a method that automatically assesses LID is to be able to distinguish dyskinesias from voluntary movements. Most of the studies that tried to detect LID focused on the frequency domain of the signals from the movement sensors, while time-domain features have also been used. The severity of LID has been determined using linear discriminant analysis and artificial neural networks (ANNs). An important drawback of the aforementioned studies is the small number and short duration of the tasks involved, as well as the fact that they were performed in laboratory settings. Keijsers et al. [6] monitored patients while performing a large variety of daily life activities in a natural environment for a long period of time. Hoff et al. [10] used continuous ambulatory multi-channel accelerometry (CAMCA) to identify accelerometer characteristics of LID. In this study we propose a method for LID detection and classification of its severity. Six accelerometers are placed on the patient's body and the recorded signals are analyzed in order to extract several features. The analysis is performed using moving windows. All features extracted from a specific window of the signal, for all signals (from different sensors), form a feature vector that is used to detect whether LID symptoms are present in this window and to determine their severity. The classification technique employed is a decision tree. Several experimental settings related to the number of sensors used have been evaluated and the results are presented. In addition, based on the best experimental settings determined from the above analysis, other classification techniques are also tested and the obtained results are presented.
II. MATERIALS AND METHODS
A. Experimental Setup
In this study three patients, two males (aged 65 and 75 years) and one female (aged 60 years), were enrolled. They suffered from LID and showed a severity of LID varying from no dyskinesia to moderate (rated between 0 and 3 on the Unified Parkinson's Disease Rating Scale, UPDRS [11]). The experiments were approved by the Medical Ethical Committee of the Hospital of the University of Ioannina in Greece. Following a standard procedure, also used in clinical trials with medication and accepted in the literature, patients should have received the last dose of their medications 12 hrs before testing time, which is usually around 8 pm of the night before. Twelve hours after the last dose of medication the patient is expected to be in the "off" state. Recording started with the patient always being in the "off" state and lying on his bed. The protocol consists of three major tasks:
• lying on the bed (5 min),
• rising from the bed and sitting on a chair located just by the bed (5 min),
• standing up from the chair and performing a series of activities (for a total of approximately 8 min): walking for a distance of 5 m, opening a door, closing the door, opening the door and stepping out of the room, walking in the corridor for a straight distance of 10 m, returning to the room, making a stop, drinking a few sips from a glass of water, returning to the chair.
Then, the patient takes his first dose of medication for that day and, when he turns "on" (verified on site by an expert neurologist), another cycle of recording with the above prespecified tasks follows. If the patient had LID (while in the "on" state), then the recording was selected for this study. The UPDRS score was obtained immediately before the patient started performing the predefined tasks. The final annotation of LID severity, based on the UPDRS, is made from video recordings of the patient obtained during the protocol procedure.
593
three signals, one for each axis (x,y and z axis). The above sensor’s placement on the patient’s body is illustrated in Fig. 1. All sensors transmit data using Bluetooth to a portable PC equipped with data acquisition hardware and software to collect and store the signals. The sensors’ size is no bigger than a matchbox. Sensors on the arms and legs are attached on specially designed elastic bands which allow fixation to any wrist or ankle size. Sampling rate is set to 62.5 Hz.
Fig. 1 Schematic overview of the position of accelerometers on the patient’s body.
C. Signal Analysis The recorded signals are used for feature extraction. A moving window with 2 seconds duration and 1.75 seconds antepossition is used over a single lead and, for each window the mean value of the signal is calculated: ݉݅ (݆) =
1 2݂ݏ+1
) ݆( ݓ+݂ݏ σ݊=݆(ݓ )െ݂)݊( ݅ݔ ݏ,
(1)
where ݉݅ (݆) is the mean value of the ݆ ݄ݐwindow of the ݅ ݄ݐ signal, ݂ ݏis the sampling frequency, )݆(ݓis the time position of the ݆ ݄ݐwindow and )݊( ݅ݔis the ݊ ݄ݐsample of the ݅ ݄ݐsignal. The above procedure is applied to each recorded signal. Then, very slow movements are modeled: = )݆( ݅ݕ
1 2݂ݏ+1
+݂ݏ σ݆݊=݆ െ݂)݊( ݅ݔ ݏ,
(2)
and subsequently subtracted from the recorded signals: ݅ݔԢ = ݅ݔെ ݅ݕ.
(3)
A moving window with 2 seconds duration and 1.75 seconds anteposition is used over a single lead and for each window the standard deviation of the signal is calculated: = )݆( ݅ݏට
1 2݂ݏ+1
)+݂ݏ σ݆(ݓ ( ݔԢ (݊) െ ݔഥ݅Ԣ )2 , ݊=) ݆( ݓെ݂݅ ݏ
(4)
where )݆( ݅ݏis the standard deviation of the ݆ ݄ݐwindow of the ݅ ݄ݐsignal and ݔഥ݅Ԣ is the corresponding mean value.
IFMBE Proceedings Vol. 29
594
M.G. Tsipouras et al.
D. LID Assessment
Table 2 Classification Accuracy (%) of the Experimental Settings
For each window a feature vector is created. This vector consists of the two features (݉݅ , ) ݅ݏfor each signal. Thus, the dimension of the feature vector is 2 כ3ܰ, where ܰ is the number positions on the patient’s body (for each position three signals are recorded). This feature vector is used for LID assessment using a decision tree. Decision trees are a widely used classification technique. They represent the acquired knowledge in the form of a tree. The tree can be easily transformed to a set of rules with mutually exclusive and exhaustive rules. The construction of the decision tree is implemented using the C4.5 inductive algorithm [13]. This algorithm constructs a decision tree from the training data. Each internal node of the tree corresponds to a principal component, while each outgoing branch corresponds to a possible range of that component. The leaf nodes represent the class to be assigned to a sample. The C4.5 algorithm applies to a set of data and generates a decision-tree, which minimizes the expected value of the number of tests for the classification of the data.
Experimental setting
Positions
Classification Accuracy (%)
1
LW
82.4
2
RW
80.6
3
LL
85.4
4
RL
85.4
5
CH
83.3
6
WS
88.9
7
LW, CH
87
8
LW, WS
89.7
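As an illustrative sketch of this classification step (not the authors' implementation), a decision tree can be trained and evaluated with stratified 10-fold cross-validation as follows; scikit-learn implements CART rather than C4.5, and the data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in data: 2 * 3N features per window (N = 6 positions here)
# and LID severity labels 0-3; real features come from Eqs. (1)-(4).
X = rng.standard_normal((1000, 2 * 3 * 6))
y = rng.integers(0, 4, size=1000)

clf = DecisionTreeClassifier()  # CART, standing in for C4.5
acc = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=10)).mean()
print(f"stratified 10-fold CV accuracy: {100 * acc:.1f}%")
```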
III. RESULTS
Based on the signal analysis described above, a classification dataset was formed. The number of instances per patient and per LID severity is shown in Table 1.

Table 1 The dataset used in this study

LID severity   Patient 1   Patient 2   Patient 3   Total instances per LID severity
0              4109        0           1359        5468
1              2264        446         3780        6490
2              0           2482        1857        4339
3              0           4748        47          4795
Total                                              21092

Several experimental settings have been used, related to the combination of signals used for LID assessment. For each of them, results are obtained in terms of sensitivity, specificity and classification accuracy. Stratified 10-fold cross-validation is used in all cases. The combinations of signals used in each experimental setting and the obtained classification accuracy are presented in Table 2.

Table 2 Classification accuracy (%) of the experimental settings

Experimental setting   Positions              Classification accuracy (%)
1                      LW                     82.4
2                      RW                     80.6
3                      LL                     85.4
4                      RL                     85.4
5                      CH                     83.3
6                      WS                     88.9
7                      LW, CH                 87
8                      LW, WS                 89.7
9                      RW, CH                 87
10                     RW, WS                 89.1
11                     LL, CH                 89.3
12                     LL, WS                 89.4
13                     RL, CH                 88.8
14                     RL, WS                 89.4
15                     LW, RW                 85
16                     LL, RL                 89.4
17                     LW, LL                 89.1
18                     RW, RL                 87.1
19                     LW, LL, WS             89.3
20                     RW, RL, WS             90
21                     LW, RW, LL, RL         90.2
22                     LW, RW, LL, RL, CH     92
23                     LW, RW, LL, RL, WS     90.4

Additionally, for the final two experimental settings, i.e. the hands, legs and chest sensors (setting 22 in Table 2) and the hands, legs and waist sensors (setting 23 in Table 2), other classifiers are evaluated. These include the Naive Bayes Classifier (NBC), k-Nearest Neighbour (k-NN), Fuzzy Lattice Reasoning (FLR [14]), Decision Trees (C4.5) and Random Forests (RF [15]). The results in terms of classification accuracy (%) are presented in Table 3.

Table 3 Classification accuracy (%) for experimental settings 22 and 23 (Table 2)

Classifier   LW, RW, LL, RL, CH (22)   LW, RW, LL, RL, WS (23)
NBC          73.06                     73.75
k-NN         92                        90.41
FLR          71.84                     72.51
C4.5         92                        90.4
RF           92.4                      90.8
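The comparison in Table 3 can be reproduced in outline as follows (a sketch under the same assumptions as above; FLR is omitted because no standard scikit-learn implementation exists):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 36))   # synthetic stand-in features
y = rng.integers(0, 4, size=1000)     # synthetic severity labels

classifiers = {
    "NBC": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "C4.5 (CART stand-in)": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
}
cv = StratifiedKFold(n_splits=10)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv).mean()
    print(f"{name}: {100 * acc:.2f}%")
```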
IV. DISCUSSION

A method for the automated detection of LID and the classification of its severity, based on the analysis of signals obtained from accelerometers placed on the patient's body, is presented. The method has been evaluated using recordings from three patients presenting LID severities 0 to 3 on the UPDRS. The features extracted from the signals carry sufficient information for LID severity detection and classification, since they are the local mean value, which is related to very slow dystonic movements, and the local standard deviation, which reflects the faster jerky, dance-like movements of the limbs and/or head. LID effects may be present in a single part of the patient's body (e.g. only one hand) or in several (e.g. both hands and the head). Also, the effects may be present in the limbs (hands/legs) or affect the whole body (waist/chest). Thus, the feature vector included accelerometer signals from several positions on the patient's body. The obtained results indicate that the proposed method is highly efficient for automated LID severity detection and classification. The results presented in Table 2 indicate that the experimental settings that include signals from almost all positions on the patient's body give the best results. This conclusion was anticipated since, as mentioned earlier, LID effects may be present in a single part or in several parts of the patient's body. It also confirms that in our dataset the presence of LID effects is time-varying, i.e. the same patient may present LID effects in different parts of his body during different time intervals. Thus, a method based on a selection that includes signals from several positions on the patient's body (such as experimental settings 22 and 23) is expected to give the best results, compared to selections that include a limited number of recorded signals. In this study, the selected classification technique is a decision tree based on the C4.5 algorithm. The results presented in Table 3 indicate that selecting a more advanced technique, such as random forests, does not significantly improve the obtained results.

ACKNOWLEDGMENT

This work is partly funded by the ICT programme of the European Commission (PERFORM project: FP7-ICT-2007-1-215952).

REFERENCES

1. Keijsers NL, Horstink MW, Gielen SC et al (2003) Online monitoring of dyskinesia in patients with Parkinson's disease. IEEE Eng Med Biol Mag 22:96-103
2. Hoff JI, van Hilten BJ, Roos RA (2001) A review of the assessment of dyskinesias. Mov Disord 14(5):737-743
3. Keijsers NL, Horstink MW, Gielen SC (2003) Movement parameters that distinguish between voluntary movements and levodopa-induced dyskinesia in Parkinson's disease. Hum Mov Sci 22(1):67-89
4. Burkhard PR, Shale H, Langston JW, Tetrud JW (1999) Quantification of dyskinesia in Parkinson's disease: validation of a novel instrumental method. Mov Disord 14(5):754-763
5. Keijsers NL, Horstink MW, van Hilten JJ, Hoff JI, Gielen SC (2000) Detection and assessment of the severity of levodopa-induced dyskinesia in patients with Parkinson's disease by neural networks. Mov Disord 15(6):1104-1111
6. Keijsers NL, Horstink MW, Gielen SC (2003) Automatic assessment of levodopa-induced dyskinesias in daily life by neural networks. Mov Disord 18(1):70-80
7. Yanagisawa N (1984) EMG characteristics of involuntary movements. In: Bruyn GW (ed) Dyskinesias. Sandoz BV, Uden, pp 142-159
8. Liu X, Carroll CB, Wang SY, Zajicek J, Bain PG (2005) Quantifying drug-induced dyskinesias in the arms using digitized spiral-drawing tasks. J Neurosci Meth 144(1):47-52
9. Haines J, Sainsbury P (1972) Ultrasound system for measuring patients' activity and disorders of movement. Lancet 2(7781):802-803
10. Hoff JI, van den Plas AA, Wagemans EA, van Hilten JJ (2001) Accelerometric assessment of levodopa-induced dyskinesias in Parkinson's disease. Mov Disord 16:58-61
11. Fahn S, Elton R, UPDRS Development Committee (1987) Unified Parkinson's Disease Rating Scale. In: Fahn S, Marsden C, Calne D (eds) Recent Developments in Parkinson's Disease. Macmillan Health Care Information, Florham Park, NJ, pp 153-164
12. ANCO at http://www.anco.gr/
13. Quinlan JR (1993) C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA
14. Hoff JI, van den Plas AA, Wagemans EA, van Hilten JJ (2001) Accelerometric assessment of levodopa-induced dyskinesias in Parkinson's disease. Mov Disord 16:58-61
15. Fahn S, Elton R, UPDRS Development Committee (1987) Unified Parkinson's Disease Rating Scale. In: Fahn S, Marsden C, Calne D (eds) Recent Developments in Parkinson's Disease. Macmillan Health Care Information, Florham Park, NJ, pp 153-164

Author: D.I. Fotiadis
Institute: Department of Material Science and Engineering, University of Ioannina
City: Ioannina
Country: Greece
Email: [email protected]
Electrospinning Poly(o-methoxyaniline) Nanofibers for Tissue Engineering Applications

Wen-Tyng Li1,3, Mu-Feng Shie1, Chung-Feng Dai2 and Jui-Ming Yeh2

1 Department of Biomedical Engineering, Chung-Yuan Christian University, Chung-Li, Taiwan
2 Department of Chemistry, Chung-Yuan Christian University, Chung-Li, Taiwan
3 Center for Nano-Technology, Chung-Yuan Christian University, Chung-Li, Taiwan
Abstract—In this study, we prepared electroactive nanofibers of poly(o-methoxyaniline) (POMA) using an electrospinning method and investigated the effects of the nanofibers on the proliferation and differentiation of myoblasts, as well as their in vivo biocompatibility. The POMA nanofibers showed a smooth fiber structure and consistent fiber diameters ranging from 200 to 300 nm. The material exhibited a well-defined redox potential in cyclic voltammetry analysis. The attachment and proliferation of C2C12 myoblasts cultured on POMA nanofibers were comparable to those of cells grown on tissue culture plates. The myogenic gene expression of myoD, myogenin, myf-5 and MRF4 was not affected under electrical stimulation. In vivo biocompatibility was assessed by implanting POMA nanofibers subcutaneously in the backs of Wistar rats. Some neutrophils and macrophages appeared after 1 and 4 weeks, and angiogenesis was observed after 8 and 12 weeks. Our study suggests that electrospun POMA nanofibers have good biocompatibility and are suitable for the growth and differentiation of myoblasts. Electroactive POMA nanofibers may be applied in muscle tissue engineering.

Keywords—poly(o-methoxyaniline); electrospinning; conducting polymer; myoblast
I. INTRODUCTION

Conducting polymers have been shown to be compatible with many biomolecules and able to modulate cellular activities via electrical stimulation. Besides biocompatibility, other advantages of using conducting polymers in biomedical applications include the ability to entrap and controllably release biological molecules, the ability to transfer charge from a biochemical reaction, and the potential to easily alter their electrical, chemical and physical properties to better suit the specific application. The application of conducting polymers at the interface between biology and electronics is becoming an area of great importance. Conducting polymers can be applied as biosensors, scaffolds for tissue engineering, neural probes, drug-delivery devices, and bioactuators [1]. The most commonly used conducting polymers are polypyrrole, polyaniline, polythiophene, and their derivatives. Several groups have demonstrated that both the non-conductive emeraldine base and the conductive salt forms of polyaniline are biocompatible in vitro and in vivo
[2-4]. Furthermore, Bidez III et al. reported that polyaniline supports the adhesion and proliferation of cardiac myoblasts [5]. Most of this research investigated the biological properties of films rather than nanofibers. Electrospinning provides a simple and cost-effective means to produce scaffolds for tissue engineering with an interconnected pore structure and fiber diameters in the submicron range [6]. The rigid macromolecular chain and strong intramolecular interactions of polyaniline have limited its processability. Li et al. described that the addition of polyaniline to gelatin results in homogeneous electrospun nanofibers and that polyaniline-gelatin blend fibers are biocompatible [7]. Jun et al. developed electrically conductive composite fibers of poly(L-lactide-co-ε-caprolactone) blended with polyaniline via an electrospinning method and demonstrated that these fibers can modulate the induction of myoblasts into myotube formation [8]. Here, we have prepared neat poly(o-methoxyaniline) (POMA) nanofibers, without blending with other thermoplastic polymers, via an electrospinning technique. The present work investigated the effects of POMA nanofibers on myoblast expansion, the expression of typical myogenic genes and in vivo biocompatibility.

II. MATERIALS AND METHODS

A. Electrospinning of poly(o-methoxyaniline) nanofibers

The o-methoxyaniline (Fluka) was purified by distillation under reduced pressure before use. It was added to CaCl2/H2O, which was cooled with stirring to 0 °C. A homogeneous solution was obtained when ammonium peroxydisulfate (Aldrich) was dissolved in the solution, which was stirred for 12 hr at 0 °C throughout the entire polymerization reaction. An intense blue-green precipitate was collected on a funnel using a water aspirator. The precipitate was dedoped by stirring in 500 mL of 1.2 M NH4OH for 48 hr. Afterwards, the emeraldine base of POMA was filtered and dried at 70 °C in a vacuum drier for 24 hr. To obtain the electrospinning solution, the emeraldine base POMA powder was dissolved in THF/DMF (1:1) to make a 5 wt% solution. The solution was placed in a
plastic syringe and the flow rate was 0.02-0.04 mL/min. The syringe was fitted with a metallic needle, which was connected to a high-voltage power supply set at 8-12 kV. A piece of aluminum plate was placed 8-14 cm horizontally from the tip of the needle to collect the nanofibers. The samples were then baked at 100 °C for 1 hr and sterilized using ultraviolet irradiation. The chemical structure and morphology of POMA nanofibers are shown in Fig. 1.

Fig. 1 Chemical structure and SEM morphology of POMA nanofibers

B. Electroactivity measurement

The electrochemical behavior of POMA nanofibers prior to and after electrical stimulation (ES) was determined with a potentiostat/galvanostat Autolab PGSTAT-10 (Eco Chemie, Utrecht, Holland), interfaced to a computer running GPES (General Purpose Electrochemical System) software. The sensing part of the system is a three-electrode cell compartment including a bare platinum working electrode (2 mm in diameter), an Ag/AgCl (saturated KCl) reference electrode, and a platinum wire counter electrode. All potentials applied in this work were versus the Ag/AgCl electrode. DPBS (Dulbecco's phosphate buffered saline) was used as the electrolyte in the three-electrode cell. Cyclic voltammograms were registered at a scan rate of 100 mV/s.

C. Cell attachment, proliferation and differentiation

C2C12 myoblasts (Food Industry Research and Development Institute, Hsinchu, Taiwan) were used for the cell attachment, proliferation and differentiation studies on the nanofibers. C2C12 cells were maintained in DMEM (Dulbecco's Modified Eagle's Medium) high glucose supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin under standard conditions (37 °C, 5% CO2). Differentiation of myoblasts into myotubes was induced when the cells had reached 80% confluence by replacing the medium with DMEM supplemented with 2% horse serum, 10 ng/mL insulin and 5 ng/mL transferrin. After 48 h of differentiation, the medium was changed every day. The nanofibers (1 x 1 cm) were secured in 24-well culture plates and soaked in DMEM prior to seeding with C2C12 cells (10⁴ cells/well). Cell attachment and proliferation of C2C12 cells on POMA were measured by staining with 80 μL PI (propidium iodide, 50 μg/mL) and 20 μL FDA (fluorescein diacetate, 5 μg/mL). After 10 min, viable cells and dead cells showed green and red fluorescence, respectively, under an excitation wavelength of 450-490 nm. Cell counts were done in six randomly selected microscope fields (100×) using a fluorescence microscope (Leica DMIL, Germany).

D. Electrical stimulation

The nanofibers (1 x 1 cm) were placed between platinum electrodes connected to a power supply within a well filled with 1 mL of DMEM in a 24-well plate. Electrical stimulation at 100 mV was applied through the direct-current power supply for 60 min. After electrical stimulation, the nanofibers were kept in DPBS until use.

E. Reverse-transcriptase polymerase chain reaction

RT-PCR (reverse-transcriptase polymerase chain reaction) was used to measure the expression levels of myoD, myogenin, myf-5 and MRF4 semi-quantitatively. Total RNA from the C2C12 cells cultured on the nanofibers for 5 days was extracted with TRIzol reagent. RNA purity was ensured by obtaining a 260/280 nm OD ratio >1.80. RNA (5 μg) was reverse transcribed with oligo(dT)20 and the SuperScript III First-Strand Synthesis System. cDNA (2.5 μL) was then amplified using 5 μM of each primer (Table 1), 1× PCR buffer, 1.5 mM MgCl2, 0.2 mM dNTPs, and 0.25 unit of Taq DNA polymerase in a final volume of 25 μL. PCR products were visualized on a 1.4% Tris-acetate-EDTA agarose gel stained with ethidium bromide, and the relative quantity of products was estimated via normalization with 100 bp DNA ladders and GAPDH (glyceraldehyde 3-phosphate dehydrogenase) using ImageJ software.

Table 1 Nucleotide sequences of primers and annealing temperatures (TA) used for PCR amplification

Gene       Forward/Reverse sequence              TA (°C)
GAPDH      5'-GGTGAAGGTCGGTGTGAACGGATT-3'        55
           5'-ATGCCAAAAGTTGTCATGGATGACC-3'
myoD       5'-GGGTACGACACCGCCTACTA-3'            60
           5'-GTTCTGTGTCGCTTAGGGAT-3'
myogenin   5'-CCAGTGAATGCAACTCCCACAGC-3'         60
           5'-AGACATATCCTCCACCGTGA-3'
myf-5      5'-CCTGTCTGGTCCCGAAAGAAC-3'           60
           5'-TAGACGTGATCCGATCCACAAT-3'
MRF-4      5'-GCACCGGCTGGATCAGCAAGAG-3'          68
           5'-CTGAGGCATCCACGTTTGCTCC-3'
F. Subcutaneous implantation

The animal study followed the guidelines approved by the Institutional Animal Care and Utilization Committee. Nanofibers (0.5 x 0.5 cm) were subcutaneously implanted into female Wistar rats (150-200 g, Laboratory Animal Center, National Taiwan University Hospital, Taipei, Taiwan). General anesthesia was induced and maintained by an intraperitoneal injection of Zoletil 50 (Virbac, France) (50 mg/kg). Each animal was shaved in the dorsal lumbar region and the sterile material was implanted subcutaneously through a 2 cm incision in each dorsal lumbar region. The implants and the surrounding skin and subcutaneous tissue were retrieved after 1, 4, 8 and 12 weeks of implantation from the euthanized animals, washed in DPBS, immersed in 10% formalin for 2 days, and dehydrated in an ethanol series and xylol. The specimens were embedded in paraffin wax and then sectioned (4-6 μm) along the longitudinal axis of the implant. The sections were stained with hematoxylin and eosin (H&E).

G. Statistical analysis

The results are expressed as the mean ± standard deviation. A two-tailed Student's t test was used for comparing the results between the test and control groups. A P value of <0.05 was considered to be statistically significant.
III. RESULTS AND DISCUSSION

A. Electroactivity of poly(o-methoxyaniline) nanofibers

The electrochemical characteristics of POMA nanofibers were studied by cyclic voltammetry, as shown in Fig. 2. The cyclic voltammograms of the nanofibers before and after ES in DPBS display well-defined redox peaks, with peak cathodic potentials (Epc) of ca. 375 and 425 mV and peak anodic potentials (Epa) of ca. -60 and -125 mV (at 100 mV/s), respectively. The redox potential (E0') decreased from 157.5 mV to 150 mV after ES. The peak cathodic current (ipc) corresponding to Epc increased from 0.9 to 1.25 μA after ES, suggesting that the electroactivity was increased.

Fig. 2 Cyclic voltammograms of POMA nanofibers before and after electrical stimulation in DPBS at a scan rate of 100 mV/s
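The reported E0' values are consistent with the usual midpoint definition of the formal potential (a standard electrochemistry relation, stated here for clarity rather than taken from the paper):

```latex
E^{0\prime} = \frac{E_{pa} + E_{pc}}{2}, \qquad
\frac{-60 + 375}{2} = 157.5\ \mathrm{mV}\ (\text{before ES}), \qquad
\frac{-125 + 425}{2} = 150\ \mathrm{mV}\ (\text{after ES}).
```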
B. C2C12 cell attachment and proliferation

In Fig. 3A, C2C12 cell attachment, as determined by double-staining with PI and FDA, is reported as the percentage of cells adhered after 24 hr of incubation, normalized to the total cell number seeded on the tissue culture plate (TCP). Under these standard conditions, C2C12 cell attachment to POMA nanofibers was 65±11%, while cell adhesion to electrically stimulated POMA nanofibers was higher, at 74±2%. In Fig. 3B, cells on the POMA nanofibers, with or without ES, exhibited a growth curve similar to that of cells grown on the TCP.

C. C2C12 cell gene expression

MyoD and Myf5 are expressed in proliferating myoblasts, whereas myogenin and MRF4 play a pivotal role in the terminal differentiation of myotubes. The expression levels of MyoD, Myf5, myogenin and MRF4 were determined by RT-PCR. There was no significant difference in the expression levels of the myogenic genes among the TCP control, POMA and POMA with ES groups, as shown in Fig. 4.

D. Subcutaneous implantation

All animals survived the duration of the study with no adverse effects. At 1 week after subcutaneous implantation (Fig. 5A and 5B), some acute inflammatory cells such as neutrophils appeared around the implants. After 4 weeks (Fig. 5C and 5D), the neutrophils had disappeared, and macrophage and fibroblast infiltration was observed. Angiogenesis started at 8 weeks postimplantation (Fig. 5E and 5F). More capillary vessel infiltration and a thin layer of fibrotic tissue surrounding the implants were found at 12 weeks (Fig. 5G and 5H). The healing process after implantation of POMA nanofibers was similar to the normal wound healing process. The results of the histological analysis suggested that electrospun POMA nanofibers have good biocompatibility.
IV. CONCLUSIONS

Here, we prepared electroactive POMA nanofibers via an electrospinning technique and found that they have good biocompatibility and are suitable for myoblast proliferation and differentiation. Taken together, our results indicate that POMA nanofibers can potentially be used as a substrate for muscle tissue engineering.
Fig. 3 C2C12 cell attachment and proliferation on POMA and TCP control. Cell attachment (A, cell adhesion %) and proliferation (B, cell number at 12, 24 and 48 hr for TCP control, POMA and POMA+ES) were measured by double-staining with PI and FDA.

Fig. 4 Gene expression of C2C12 cells cultured on POMA nanofibers
ACKNOWLEDGMENT This work was supported in part by the National Science Council, Taiwan, under Grants NSC 97-2218-E-033-001 and NSC 97-2627-M-033-005, and the project of the specific research fields in the Chung Yuan Christian University, Taiwan, under grant CYCU-98-CR-BE.
Fig. 5 H&E staining of POMA nanofibers and surrounding tissue. Representative histological sections of implants at 1 (A, B), 4 (C, D), 8 (E, F) and 12 (G, H) weeks postimplantation are shown at 100× (A, C, E, G) and 200× (B, D, F, H) magnification.

REFERENCES

1. Guimard NK, Gomez N, Schmidt CE (2007) Conducting polymers in biomedical engineering. Prog Polym Sci 32:876-921
2. Wang C, Dong Y, Kang E et al (1999) In vivo tissue response to polyaniline. Synth Met 102:1313-1314
3. Kamalesh S, Tan P, Wang J et al (2000) Biocompatibility of electroactive polymers in tissues. J Biomed Mater Res 52:467-478
4. Mattioli-Belmonte M, Giavaresi G, Biagini G et al (2003) Tailoring biomaterial compatibility: in vivo tissue response versus in vitro cell behavior. Int J Artif Organs 26:1077-1085
5. Bidez III PR, Li S, MacDiarmid AG et al (2006) Polyaniline, an electroactive polymer, supports adhesion and proliferation of cardiac myoblasts. J Biomater Sci Polym Edn 17:199-212
6. Pham QP, Sharma U, Mikos AG (2006) Electrospinning of polymeric nanofibers for tissue engineering applications: a review. Tissue Eng 12:1197-1212
7. Li M, Guo Y, Wei Y et al (2006) Electrospinning polyaniline-contained gelatin nanofibers for tissue engineering applications. Biomaterials 27:2705-2715
8. Jun I, Jeong S, Shin H (2009) The stimulation of myoblast differentiation by electrically conductive sub-micron fibers. Biomaterials 30:2038-2047

Author: Wen-Tyng Li, Ph.D.
Institute: Chung-Yuan Christian University
Street: 200 Chung-Pei Road
City: Chung-Li
Country: Taiwan
Email: [email protected]
Diagnosis of Asthma Severity Using Artificial Neural Networks

E. Chatzimichail1, A. Rigas1, E. Paraskakis2, and A. Chatzimichail2

1 Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi, Greece
2 Department of Pediatrics, Democritus University of Thrace, Alexandroupolis, Greece
Abstract— During the last years, neural networks have become a very important method in the field of medical diagnostics. In this work, a technique is proposed that involves training a Multi-Layer Perceptron with the back-propagation learning algorithm in order to recognize three classes of asthma severity from the results of breathing tests. The breathing test parameters and the physicians' diagnoses for 200 child patients, aged 10-12 years, from Alexandroupolis Hospital in Greece are used in the supervised training method to update the network parameters. The method was implemented to diagnose three asthma classes according to severity: mild, moderate and severe asthma. Results obtained using the Neural Network Toolbox of Matlab show that the proposed ANN can be used in asthma diagnosis with 98% success. This work improves asthma diagnosis accuracy and consistency, helping to specify the seriousness of a patient's condition and the appropriate course of medical treatment.

Keywords— Artificial Neural Networks, Multilayer Perceptron Network, Back-Propagation Learning Algorithm, Asthma Diagnosis.
I. INTRODUCTION

Artificial Neural Networks (ANNs) are particular implementations of Artificial Intelligence (AI) systems, and they are regarded by many as the wave of the future in computing [1]. ANNs have been applied in many fields such as finance [2], telecommunications [3], robotics [4], defense [5], medicine [6-8], energy problems [9] and signal processing [10]. Due to the progress that has been made in artificial intelligence techniques, ANNs provide a superior tool for many problems in medical science [11]. The implementation of ANNs has been investigated in problems such as the analysis of electrocardiogram and electroencephalogram signals, the interpretation of medical images [12] and the diagnosis of a variety of diseases [6,7]. In orthopedics, research has been done using ANNs in order to predict the osteoporosis risk factor [6]. Osteoporosis risk prediction has been viewed as a pattern classification problem based on a set of clinical parameters; Multi-Layer Perceptrons (MLPs) and Probabilistic Neural Networks (PNNs) were used to address it. Moreover, ANNs have also been used in Pediatrics for the
estimation of abdominal pain in childhood [7]. In that study the implementation of ANN architectures was examined, using MLP and PNN architectures, in order to specify the appropriate ANN structure for abdominal pain estimation in childhood. The aim of the proposed MLP neural network was to assist surgeons in appendicitis prediction, avoiding unnecessary operative treatment. One of the major problems in medicine is making the diagnosis. Many software applications have been tried in order to help human experts by offering solutions. In some cases, computer-assisted diagnoses have been claimed to be even more accurate than those by clinicians [11], as computers have the advantage of not being affected by fatigue, distractions or emotional stress. Computers in medicine cannot replace human experts, but they can be used by experts to double-check their diagnoses. In this paper, with the help of pediatricians who are specialized in childhood asthma, a novel applicable method for diagnosing asthma is proposed. The remainder of the paper is organized as follows: Section II describes the data collection and the newly proposed neural network asthma model. The experimental verification is presented in Section III and, finally, the conclusion of the whole paper is given in Section IV.
II. MATERIALS AND METHODS

A. Data Collection

Asthma is a chronic lung disease that inflames and narrows the airways [13]. It causes recurring periods of wheezing, chest tightness, shortness of breath, and coughing. The coughing often occurs during the night or early in the morning. Although it most commonly starts in childhood, asthma can begin at any age. At least 1 out of 10 children, and 1 out of 20 adults, have asthma. Asthma runs in some families, but many people with asthma have no other affected family members. The symptoms may flare up from time to time, often with no apparent reason. However, some people find that symptoms are triggered, or made worse, under certain conditions, and it may be possible to avoid certain triggers, which might help to reduce the symptoms. Things that may trigger asthma
symptoms are the following: infections, exercise, climate, altitude, diet, living conditions, and allergies to animals and some foods. Patients with asthma frequently have poor recognition of their symptoms and poor perception of their severity. Monitoring of coughing, wheezing, and breathing patterns may be inaccurate or incomplete in the assessment of asthma severity. Breathing tests are today available to measure the severity of diseases with airway obstruction, such as asthma. Such tests measure lung function using spirometers and peak flow meters, and provide a direct assessment of airflow limitation, variability, and reversibility. These measurements contribute to the diagnosis and monitoring of the disease, and the diagnosis of asthma can typically be confirmed using these breathing tests. If the first set of breathing tests shows abnormal results, the patient is asked to take bronchodilator drugs and repeat the tests. Measurements of lung function are expressed as a percentage of predicted normal values, using the European Respiratory Society prediction equations for PEFR and FEV1 [14]. Such measurements were used to determine asthma severity, with <60%, 60-80%, and >80% of the predicted FEV1 and PEF values representing severe, moderate and mild asthma, respectively, in accordance with the National Asthma Education and Prevention Program (NAEPP) guidelines [15] for the diagnosis and management of asthma. The asthma data used in the design of the specific ANN structure were obtained from the Pediatric Clinical Information System of Alexandroupolis University Hospital in Greece. Two artificial neural networks have been designed in order to predict the severity of asthma. The first one estimates the predicted normal value of PEFR using sex and height, while the second uses the output of the first one together with the other parameters in order to approximate a function that estimates the condition of the asthmatic patient. This study is based on a data set consisting of 200 records (child patients aged 10-12 years with symptoms of asthma). The data were divided into a set of 150 records and another set of 50 records; the first set was used for training the artificial network and the second for testing.
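The severity banding above reduces to a simple decision rule; the following sketch is illustrative only, and the assignment of the boundary values (exactly 60% or 80% of predicted) is our assumption, since the text does not specify it:

```python
def asthma_severity(percent_predicted):
    """Band % of predicted FEV1/PEF into NAEPP severity classes."""
    if percent_predicted < 60:
        return "severe"
    if percent_predicted <= 80:   # boundary handling is assumed
        return "moderate"
    return "mild"
```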
B. Proposed Artificial Neural Network Structure

The proposed network is a fully connected, four-layer, feed-forward perceptron neural network trained with the back-propagation learning algorithm. "Fully connected" means that the output of each input and hidden neuron is distributed to all of the neurons in the following layer. "Feed-forward" means that values can move only from the input layer through the hidden layers to the output layer; no values are fed back to earlier layers. In order to predict the normal value of PEFR according to sex and height, an MLP network with one input and one output has been created. The development of ANNs requires data preprocessing. For this study, the sex variable was coded as 1 for females and 2 for males, whereas height was obtained as recorded in the database. The asthma severity classification is based on 22 clinical and laboratory factors, which are used as the inputs of the network. These factors are: age, height, sex and the results of various medical tests such as FVC (forced vital capacity), FEV1 (forced expired volume in one second), PEF (peak expiratory flow) and FEF50% (forced mid-expiratory flow), each as values before and after bronchodilator and as percentages before and after bronchodilator. The determination of the number of hidden layers, hidden neurons, connections and transfer functions, which define the ANN structure, was achieved by trial and error. Three transfer functions were used: log sigmoid, positive linear and saturating linear. The mathematical equations of these transfer functions are shown in Table 1.

Table 1 Transfer functions

Transfer function            Mathematical equation
Log sigmoid (logsig)         f(x) = 1 / (1 + e^(-x))
Positive linear (poslin)     f(x) = 0 for x ≤ 0; f(x) = x for x ≥ 0
Saturating linear (satlin)   f(x) = 0 for x ≤ 0; f(x) = x for 0 ≤ x ≤ 1; f(x) = 1 for x ≥ 1
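The three transfer functions of Table 1 correspond to MATLAB's logsig, poslin and satlin; a minimal NumPy transcription (ours, not the authors' code) is:

```python
import numpy as np

def logsig(x):
    """Log sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def poslin(x):
    """Positive linear (zero for negative inputs)."""
    return np.maximum(0.0, x)

def satlin(x):
    """Saturating linear (clipped to [0, 1])."""
    return np.clip(x, 0.0, 1.0)
```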
The Levenberg-Marquardt back-propagation learning algorithm was selected for training the ANNs [28]. As an estimation criterion of the performance of the ANNs, the mean squared error, obtained by summing the squared error e²(n) over all n and then normalizing with respect to the set size N, has been used. This is shown in equation (1).
$$\mathrm{MSE} = \frac{1}{N}\sum_{n=1}^{N} e^2(n) = \frac{1}{N}\sum_{n=1}^{N}\left[d(n)-y(n)\right]^2, \qquad (1)$$
where e(n) is the error signal at the output, N is the number of patterns, d(n) is the predicted output and y(n) the target.
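Using the paper's naming (d(n) the predicted output, y(n) the target), equation (1) transcribes directly; this is a sketch for clarity, not part of the original toolbox code:

```python
import numpy as np

def mse(d, y):
    """Eq. (1): mean squared error over N patterns."""
    e = np.asarray(d, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(e ** 2))
```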
III. EXPERIMENTAL RESULTS

The networks have been simulated using the MATLAB Neural Network Toolbox [16]; of the 200 available data series, 150 were used for training the networks and 50 for testing them. The ANN which predicts PEFR consists of three layers and has a 1-3-1 structure. The proposed ANN architecture performs well over the overall testing set (98% successful prognosis) as well as the overall training set (99% successful prognosis). Table 2 shows the results of the best-implemented ANNs for asthma severity prediction. In particular, the ANN architectures, the transfer functions of each architecture,
the mean squared error (MSE) for the training and test sets, the percentage of successful prognosis over the severe cases of the test set, and finally the percentage of successful prognosis over the 50 patients of the test set are presented in this table. During the implementation phase, the initial weights and biases of the MLP neural networks were varied, keeping the other parameters unchanged. In particular, the 2nd-4th, 6th-7th and 8th-9th neural network models have the same architectures, but their initial conditions were different. It is evident that different initial conditions for MLP training result in a variation of the neural networks' performance. The ANN with the best performance over the overall test set (98%) as well as over the severe cases of the test set (94%) is the 8th, with 4 layers. The input layer consists of 22 neurons and the two hidden layers of 4 neurons each, while the output layer contains 2. The log sigmoid transfer function is used in the hidden layers and the positive linear transfer function is applied to the output neurons.
Table 2 Results of the best artificial neural networks used

No.  Architecture  Transfer functions              Training   MSE over      MSE over  Successful prognosis over     Successful prognosis over
                                                   algorithm  training set  test set  severe cases of test set (%)  test set (%)
1    22-3-5-3-1    logsig, logsig, logsig, poslin  trainlm    0.0087        0.0053    89                            96
2    22-3-5-1      logsig, logsig, poslin          trainlm    0.0083        0.0245    89                            96
3    22-3-5-1      logsig, logsig, poslin          trainlm    0.00063295    0.0186    89                            96
4    22-3-5-1      logsig, logsig, poslin          trainlm    7.8800e-005   0.0222    94                            98
5    22-3-5-2      logsig, logsig, poslin          trainlm    0.0100        0.0312    89                            96
6    22-3-4-1      logsig, logsig, poslin          trainlm    8.1280e-004   0.0413    84                            94
7    22-3-4-1      logsig, logsig, poslin          trainlm    9.9129e-005   0.0401    84                            94
8    22-4-4-2      logsig, logsig, poslin          trainlm    0.0047        0.0117    94                            98
9    22-4-4-2      logsig, logsig, satlin          trainlm    0.0498        0.0634    89                            96
In order to classify the three classes of asthma severity, the outputs of the networks were encoded. In the networks with one output neuron, the predicted output was encoded as 0 for mild asthma, 1 for moderate asthma and 2 for severe asthma. In the networks with two output neurons, binary output codes were employed, corresponding to the three classes of asthma. The target values for each node are either zero or one, depending on the desired output class. For example, a target output of 0-0 corresponds to a mild case, 0-1 to a moderate case, and 1-0 to a severe case. The final ANN structure used in asthma prediction, which is a combination of the two smaller ANNs, is depicted in Fig. 1.

Fig. 1 The final ANN architecture used for asthma classification
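The two-node output coding described above can be captured in a small lookup table; the 0.5 threshold used here to binarize the analogue outputs is our assumption, not stated in the paper:

```python
CLASS_TO_BITS = {"mild": (0, 0), "moderate": (0, 1), "severe": (1, 0)}
BITS_TO_CLASS = {bits: name for name, bits in CLASS_TO_BITS.items()}

def decode(outputs, threshold=0.5):
    # Binarize the two analogue outputs; (1, 1) has no assigned class
    # and yields None.
    bits = tuple(int(o > threshold) for o in outputs)
    return BITS_TO_CLASS.get(bits)
```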
IV. CONCLUSION

In this paper the potential of neural networks in the diagnosis of childhood asthma severity is investigated. The data were collected from the breathing test results of 200 child patients examined by pediatricians at Alexandroupolis Hospital in Greece. These data samples were used for training and testing the neural network. The number of hidden layers, the neurons and the transfer functions in each layer were determined through trial and error. After several trials, the best result was obtained from a four-layer network. In this network the log sigmoid function was used in the input and hidden layers, while the positive linear function was used in the output layer. The most suitable network configuration found was 22 x 4 x 4 x 2, i.e. 4 neurons in each of the two hidden layers. For training the network with the back-propagation learning algorithm, the MSE performance function with a goal value of 0.01 and the "trainlm" function were used. The proposed ANN architecture handles asthma prediction quite satisfactorily. The method achieved up to 94% successful prognosis over the test set. This computational
method can assist general physicians and specialists in predicting asthma severity, thus avoiding further examinations. In future work, artificial intelligence techniques will be applied in order to classify the different types of asthma according to the symptoms of the patient, such as allergic, intrinsic, exercise-induced, occupational or cough-variant asthma.
REFERENCES

1. Hecht-Nielsen R (1990) Neurocomputing. Addison-Wesley
2. Zhang G, Patuwo BE, Hu MY (1998) Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting 14:35-62
3. Ansari N, Chen Y (1990) Dynamic digital satellite communication network management by self-organization. In: International Joint Conference on Neural Networks, 1990, vol 2, pp 567-570
4. Ding H, Wang J (1999) Recurrent neural networks for minimum infinity-norm kinematic control of redundant manipulators. IEEE Trans Syst 29:269-276
5. Azimi-Sadjadi MR, Huang Q, Dobeck G (1998) Underwater target classification using multiaspect fusion and neural networks. In: International Symposium Aerospace/Defense Sensing Control, Orlando, FL, 1998, pp 334-341
6. Mantzaris D, Anastassopoulos G, Lymberopoulos D (2008) Medical disease prediction using artificial neural networks. In: 8th IEEE International Conference on BioInformatics and BioEngineering, 2008, pp 1-6
7. Mantzaris D, Anastassopoulos G, Adamopoulos A, Gardikis S (2008) A non-symbolic implementation of abdominal pain estimation in childhood. Information Sciences 178:3860-3866
8. Wongseree W, Chaiyaratana N, Vichittumaros K, Winichagoon P, Fucharoen S (2007) Thalassaemia classification by neural networks and genetic programming. Information Sciences 177:771-786
9. Moreno J, Ortúzar ME, Dixon JW (2006) Energy-management system for a hybrid electric vehicle using ultracapacitors and neural networks. IEEE Trans Ind Electron 53:614-623
10. Arnott R (1997) Diversity combining for digital mobile radio using radial basis function networks. Signal Processing 63:1-16
11. Baxt WG (1995) Application of artificial neural networks to clinical medicine. Lancet 346:1135-1138
12. Monoyiou E, Ventouras E, Ktonas P, Paparrigopoulos T, Dikeos D, Uzunoglou N, Soldatos C (2001) Multi-layer perceptrons for the detection of sleep EEG transient waveforms. In: Proceedings of the 4th International Conference on Neural Networks and Expert Systems in Medicine and Healthcare, 2001, pp 47-50
13. Venables KM, Chan-Yeung M (1997) Occupational asthma. Lancet 349:1465-1469
14. Colice GL, Vanden Burgt J, Song J et al (1999) Categorizing asthma severity. Am J Respir Crit Care Med 160:1962-1967
15. National Asthma Education and Prevention Program (1997) Expert panel report 2: guidelines for the diagnosis and management of asthma. National Institutes of Health, Bethesda, MD, pp 15-24
16. Howard D, Mark B (2004) Neural Network Toolbox for Use with MATLAB. The MathWorks Inc.
Author: Chatzimichail Eleni
Institute: Democritus University of Thrace
Street: Filis 39
City: Alexandroupolis
Country: Greece
Email: [email protected]
Enhanced Stem Cell Characteristics of Fibroblastic Mesenchymal Cells from HHT Patients

G. Silvani1,3, L. Benedetti1,3, N. Crosetto4, C. Olivieri2, D. Galli1,3, B. Magnani1, G. Magenes3 and M.G. Cusella De Angelis1,3

1 Department of Experimental Medicine, University of Pavia, Pavia, Italy
2 Department of Hereditary and Human Pathology, University of Pavia, Pavia, Italy
3 C.I.T., Tissue Engineering Centre, University of Pavia and S. Matteo Hospital, Pavia, Italy
4 Institute of Biochemistry II, University of Frankfurt Medical School, Frankfurt am Main, Germany
Abstract— Mesenchymal stem cells (MSC) are self-renewing, multipotent cells present in many adult tissues, such as adipose tissue, bone marrow, trabecular bone and muscle. Dermal skin-derived fibroblasts exhibit a mesenchymal surface antigen immunophenotype and differentiation along the three main mesenchymal lineages (bone, fat and cartilage). We isolated human dermal fibroblasts from 4 patients with Hereditary Hemorrhagic Telangiectasia (HHT) and from 2 healthy controls. The aim of this work was to evaluate the mesenchymal properties of HHT cells in comparison with the control cells. HHT cells possess the same surface marker expression as mesenchymal stem cells and are highly clonogenic, showing a high proliferation potential and an enhanced capacity for self-renewal.
Keywords— mesenchymal stem cell, HHT, cell proliferation

I. INTRODUCTION

Mesenchymal stem cells (MSC) were first identified in the bone marrow [1] and characterized as a population of non-hematopoietic multipotent stem cells that possess the potential for self-renewal and for differentiation in vitro or in vivo into several cell types (neuronal cells, adipocytes, osteoblasts, chondrocytes, myocytes, and beta-pancreatic islet cells) [2]. MSC are present in many adult tissues including trabecular bone, amniotic fluid, peripheral blood, adipose tissue, dermis, muscle and bone marrow [3]. These undifferentiated cells do not express hematopoietic or endothelial markers (such as CD45, CD117, and CD31 (PECAM)) but express the mesenchymal markers CD90 and SH2 (endoglin or CD105). Dermal skin-derived fibroblasts have been found to exhibit a mesenchymal surface antigen immunophenotype and differentiation potential along the three main mesenchymal lineages [4]. We isolated human dermal fibroblasts from skin biopsies of 4 persons suffering from Hereditary Hemorrhagic Telangiectasia (HHT) and from two healthy controls. HHT is an autosomal dominant disease characterized by diffuse visceral and muco-cutaneous telangiectases; a person with HHT has a tendency to form blood vessels that lack the capillaries between an artery and a vein. A typical HHT patient has epistaxis, muco-cutaneous telangiectases and GI bleeding in later life, even though this clinical scenario represents only one of the possible HHT patterns. The disease is caused by mutations in endoglin on chromosome 9 (ENG; HHT1) and in ACVRL1/ALK1 on chromosome 12 (HHT2) [5]. The objective of this study was to evaluate the mesenchymal potential of fibroblasts derived from HHT patients in comparison to those from normal subjects.

II. MATERIALS AND METHODS
Cell isolation and collection: Skin biopsies from adult patients were taken during surgical operations after formal consent. To establish cell cultures, the tissue was minced with scissors without any enzymatic digestion. The fragments were then plated in growth medium composed of DMEM (Dulbecco's Modified Eagle's Medium) supplemented with 10% FCS (Fetal Calf Serum) and left for one week.
Fig. 1 HHT cells
Culture conditions: 5% CO2 and 37 °C, without medium changes. When cultures reached confluence, cells were passaged using trypsin-EDTA.
Flow cytometry: Cells obtained from primary culture were incubated for 20 minutes in the dark at room temperature with 10 μl of fluorochrome-conjugated monoclonal antibody (BD Pharmingen and MACS). Cells were analyzed for a battery of haematopoietic (CD34, CD45, CD68, CD14), mesenchymal (CD90, CD105, CD29, CD166), and endothelial markers (PECAM [CD31]) by fluorescence-activated cell sorting (FACS). Cell fluorescence was evaluated by flow cytometry in a FACSCalibur instrument (Becton Dickinson; BD, Heidelberg, Germany).
Colony-forming unit assays: The colony forming unit assay is a cytologic technique for measuring the functional capacity of stem cells by assaying their self-renewal potential. For these tests, cells of both origins (HHT and control) were plated at 1, 2 or 3 cells/well in a 96-well plate in growth medium and cultured for 14 days without medium changes. At the end of the culture period, the cells were stained with Wright's staining and CFUs were quantified by counting colonies of >50 cells.
Wright's staining: For the colony forming unit tests, cell morphology was evaluated using Wright's staining (VWR), a hematoxylin/eosin-type staining for cells based on the combination of an acid dye (eosin, red) and a basic dye (methylene blue).
Proliferation assay: The xCELLigence™ System (Roche) monitors cellular events in real time without the incorporation of labels. The system measures electrical impedance across interdigitated micro-electrodes integrated into the bottom of tissue culture E-Plates. The impedance measurement provides quantitative information about the biological status of the cells, including morphology, cell number and viability.
Fig. 2 RTCA SP station
The Real Time Cell Analyzer (RTCA) SP Instrument is composed of four main components: the Analyzer, the SP Station, the Control Unit and the E-Plate 96. The RTCA SP station (Fig. 2) is located inside a tissue culture incubator and is capable of switching any one of the wells of the E-Plate 96 to the RTCA analyzer for measurement. This impedance technology offers a solution to a number of limitations of current assay systems. First, impedance measurements are non-invasive, so the cells remain in a more normal physiological state during assays of cell proliferation. Second, the system does not require labeling the cells with expensive reagents. Finally, and most importantly, it allows real-time, rather than end-point, measurement of cell proliferation, viability and cytotoxicity.
Cell proliferation assays: For each cell type (in dodecuplicate), we seeded 5000 cells/cm² (980 cells/well) in 180 μl of DMEM 10% FCS in an E-Plate 96. Cell proliferation was monitored for 1 week, with an impedance measurement every minute; the cell-sensor impedance is expressed as an arbitrary unit called the Cell Index (CI).
Microarray: Total RNA from 1.5 x 10⁶ cells from each HHT clone and from the controls was extracted using the PureLink™ Micro-to-Midi Total RNA Purification System (Invitrogen). Reverse transcription was performed using 1 μg of total RNA with the BioRad iScript cDNA Synthesis Kit. The cDNA was used for RT² Profiler PCR Arrays (SuperArray, Frederick, MD, USA) for human apoptosis and human cell cycle; the arrays profile the expression of 84 genes involved in apoptosis or programmed cell death and 84 genes key to cell cycle regulation, respectively. The human apoptosis array includes the TNF ligands and their receptors; members of the caspase, bcl-2, IAP, CARD, CIDE, death domain, death effector domain and TRAF families; as well as genes involved in the ATM and p53 pathways. The human cell cycle array contains genes that both positively and negatively regulate the transitions between the cell cycle phases, DNA replication checkpoints and arrest of the cell cycle.
β-galactosidase staining for senescence: HHT cells at passage 31 (P31) and control cells at passage 27 (P27) were plated at a density of 3500 cells/cm² in growth medium; after 10 days of culture they were first rinsed twice in PBS for 10 minutes, then fixed in 4% buffered paraformaldehyde for 15 minutes and washed twice in PBS. Cells were then stained with the staining solution reported by Dimri et al. [6] at pH 6.0 for senescence-associated β-galactosidase histochemistry at room temperature. The β-galactosidase staining solution contained 1 mg 5-bromo-4-chloro-3-indolyl β-D-galactoside (X-Gal) in a 20 mg/ml dimethylformamide stock solution containing 40 mM citric acid and phosphate buffer, pH 6.0, as required for senescence-associated β-galactosidase staining. The staining solution also contained
5 mM potassium ferrocyanide, 5 mM potassium ferricyanide, 150 mM NaCl, and 2 mM MgCl2. After the appropriate staining period, cells were washed twice in PBS for 10 minutes at room temperature. β-galactosidase activity (blue cells) at pH 6 is present only in senescent cells and is not found in presenescent, quiescent or immortal cells.
III. RESULTS

Flow cytometry: FACS analysis revealed that HHT and control cells possess the same surface marker expression as mesenchymal stem cells; the cells showed expression of mesenchymal markers (Fig. 3a, b) and no expression of endothelial markers (Fig. 3c) or other hematopoietic markers (Fig. 3d). Examination of cell size and granularity, as shown by FACS analysis, indicates the homogeneity of the HHT cell population.

Fig. 3 Surface marker profile: positivity for CD90 (a) and CD105 (b); negativity for CD31 (c) and CD45 (d)

Cells showed simultaneous expression of cell surface markers including CD166, CD13, CD90 and CD105 (>97%) with a concomitant absence of CD45, CD31, CD117 and CD34 (<4%); this expression represents a specific phenotype for cultured MSC.

Colony-forming unit assays: When plated at 1 cell/well, HHT cells showed a colony forming efficiency of 97.4%, at 2 cells/well of 96.9% and at 3 cells/well of 96.3%, whereas control cells showed a colony forming efficiency of 73.4% at 1 cell/well, 25.8% at 2 cells/well and 28.9% at 3 cells/well.

Proliferation assay: After 1 week of culture, HHT cells (mean CI of the 4 clones) showed a proliferation rate higher than that of control cells, as shown in Fig. 4.

Fig. 4 Growth comparison graph between ctrl and HHT cells

Microarray: The results of the Human Apoptosis array show expression of three genes involved in apoptosis suppression (IGFR1, CFLAR and TRAF), but not in tumor progression. Microarray analysis for the cell cycle confirms that, in HHT cells, there is an up-regulation of genes that positively regulate the cell cycle with respect to control cells (Fig. 5).

Fig. 5 Bar chart of cell cycle gene expression (microarray analysis)

β-galactosidase staining for senescence: β-galactosidase activity has recently been used as a histochemical marker of replicative senescence in human fibroblasts; it is microscopically revealed by the presence of a blue, insoluble precipitate within the senescent cell. Control cells at passage 27 showed specific staining, while HHT cells did not, even when cultured for 31 passages.
IV. CONCLUSIONS

In this work, using a combination of phenotypic (flow cytometry and microarray), morphologic (senescence), and functional (colony forming unit assay, proliferation assay) criteria, we compared the characteristics of HHT cells with control cells. HHT cells showed a proliferation rate higher than that of control cells (Fig. 4); this result was confirmed by the expression of several genes that positively regulate the cell cycle and of genes that suppress apoptosis (IGFR1, CFLAR and TRAF) without promoting tumoral growth, indicating a greater proliferative capability of these cells. Senescence is the phenomenon by which normal diploid cells lose their ability to divide. When cultured for 27 passages, control cells showed a flattened shape and expressed senescence-associated β-galactosidase activity (blue areas), a marker of cellular senescence [8]. Cells from HHT patients neither changed their morphology nor expressed β-galactosidase activity when cultured for 31 passages; this suggests that HHT cells grow old later than control cells. Our data suggest that human dermal fibroblasts from HHT patients share common in vitro characteristics with hMSC. Furthermore, they are highly clonogenic (colony forming efficiency about 97%) and show a high proliferation potential and an enhanced capacity for self-renewal, along with greater stability across passages even when cryopreserved or subcultured. Further studies will be necessary to evaluate the differentiation capacity of fibroblasts derived from HHT patients.

ACKNOWLEDGMENTS

This project is partially funded by FIRB (2005 RBIP06FH7J_001) and by the Cariplo Project 2006.
REFERENCES

1. Friedenstein AJ, Chailakhjan RK, Lalykina KS (1970) The development of fibroblast colonies in monolayer cultures of guinea-pig bone marrow and spleen cells. Cell Tissue Kinet 3:393-403
2. Pittenger MF, Mackay AM, Beck SC, Jaiswal RK, Douglas R, Mosca JD, Moorman MA, Simonetti DW, Craig S, Marshak DR (1999) Multilineage potential of adult human mesenchymal stem cells. Science 284:143-147
3. Javazon EH, Beggs KJ, Flake AW (2004) Mesenchymal stem cells: paradoxes of passaging. Exp Hematol 32:414-425
4. Lorenz K, Sicker M, Schmelzer E, Rupf T, Salvetter J, Schulz-Siegmund M, Bader A (2008) Multilineage differentiation potential of human dermal skin-derived fibroblasts. Exp Dermatol 17(11):925-932
5. Guttmacher AE, Marchuk DA, White RI Jr (1995) Hereditary hemorrhagic telangiectasia. New England Journal of Medicine 333:918-924
6. Dimri GP, Lee X, Basile G et al (1995) A biomarker that identifies senescent human cells in culture and in aging skin in vivo. Proc Natl Acad Sci USA 92:9363-9367
7. RTCA SP Instrument Operator's Manual, xCELLigence, Roche
8. van der Loo B, Fenton MJ, Erusalimsky JD (1998) Cytochemical detection of a senescence-associated beta-galactosidase in endothelial and smooth muscle cells from human and rabbit blood vessels. Exp Cell Res 241(2):309-315
Author: Giulia Silvani
Institute: DIMES, University of Pavia
Street: Via Forlanini 8
City: Pavia
Country: Italy
Email: [email protected]
High Frequency Vibration (HFV) Induces Muscle Hypertrophy in Newborn Mice and Enhances Primary Myoblast Fusion in Satellite Cells

G. Ceccarelli1,3, L. Benedetti1,3, D. Prè2,3, D. Galli1,3, L. Vercesi1,3, G. Magenes2,3 and M.G. Cusella De Angelis1,3

1 Department of Experimental Medicine, University of Pavia, Pavia, Italy
2 Department of Informatics and Systems, University of Pavia, Pavia, Italy
3 C.I.T., Tissue Engineering Centre, University of Pavia and S. Matteo Hospital, Pavia, Italy
Abstract— The present study aimed at verifying, in vivo and in vitro, the effects of mechanical vibration on muscle development and on the differentiation of satellite cells, the "stem cells" of muscle tissue. We built a bioreactor composed of an eccentric motor which produces a displacement of 11 mm, at frequencies between 1 and 120 Hz, on a plate connected to the motor. On the plate we fixed a cage for the animals and the dishes for the satellite cells, and the linear acceleration imparted by the motor to the samples was measured. We used 30 Hz as the stimulation frequency and treated newborn mice from birth for four weeks, 1 h/day, and satellite cells for 1 and 4 days, 1 h/day, always at 30 Hz. Every week we collected the tibialis anterior muscles from a control and a treated mouse and performed Western Blot analysis and quantitative Real-Time PCR (qRT-PCR) to investigate proteins and genes involved in the hypertrophy and atrophy pathways of skeletal muscle. On satellite cells we studied genes involved in the differentiation and fusion of primary myoblasts. The results demonstrate that mechanical vibration induces muscle hypertrophy within the first week of treatment and enhances the terminal differentiation of myoblasts in treated satellite cells with respect to control ones.

Keywords— mechanical vibration, bioreactor, satellite cells, newborn mice, Western Blot
I. INTRODUCTION
The study of the underlying mechanisms by which cells respond to mechanical stimuli represents a new and important area in the morphological sciences. Several cell types, such as osteoblasts and fibroblasts as well as smooth, cardiac and skeletal muscle cells, are activated by mechanical strain [1,2]; muscle in particular offers one of the best opportunities for studying this type of mechanotransduction because it is highly responsive to changes in functional demands. High frequency vibration (HFV) has been shown to induce improvements in muscular strength and performance; in fact, Bosco's studies regarding whole body vibration (WBV) demonstrate the efficacy of this treatment on muscles [3,4], so
it’s very widespread in fitness centre and in sportsman training. On the other hand, WBV is also used in clinical therapy and medical rehabilitation for bone diseases, such as osteoporosis and others bone and muscle senile diseases[5,6]. Therefore, the aim of this work is to understand this phenomena at cellular level. We focused on the effects of the stimuli to differentiation and hypertrophy of skeletal muscle and, for this reason, we designed and realized a system to produce vibration with all the adaptations for “in vivo” and “in vitro” studies. The device is composed by an eccentric, voltage-controlled motor effecting a displacement of 11 mm and working between 1 Hz and 120 Hz, and by a plate bound to the motor that allows to stimulate a cage with mice and dishes for cells. We have taken 20 newborn mice CD1 wild type, male and female, divided in two groups, one as control and one treated group. Ten mice of treated group were stimulated with a vibration of 30 Hz for 4 weeks 5 days/week for 1h/day. Every week we collected tibialis anterior muscles from mice and extracted total RNA and total proteins in order to investigate genes and proteins involved in hypertrophy and atrophy pathways (Fig.1) such as Akt phosphorylated on Ser473, Akt, Myostatin. We used Western Blot to visualize presence/absence of proteins and quantitative Real time PCR (qRT-PCR) to better quantify gene expression of Myostatin and Foxo-1, the most important genes of atrophy pathway. Moreover, we analyzed effects of HFV on satellite cells, In fact, muscle satellite cells are considered the myogenic cells responsible for postnatal muscle growth, regeneration and repair for the maintenance of skeletal muscle [7]. We have isolated satellite cells from newborn mice, divided in treated dishes and control dishes and we have treated these cells for 1 and 4 days at 30 Hz 1h/day. For cells we performed qRT-PCR to investigate some genes involved in primary myoblasts fusion, such as dysferlin and m-cadherin [8,9], and in differentiation to mature muscle, as myosin creatin-kinase (MCK) indicates. Results suggest a strong relationship between HFV treatment and hypertrophy of muscle, especially in the first week of treatment as demonstrated by Western Blot. qRT-PCR on satellite cells
indicated that HFV enhances the fusion of primary myoblasts in treated satellite cells compared to controls.
Fig. 1: Representation of the hypertrophy (a) and atrophy (b) pathways in skeletal muscle.

II. MATERIALS AND METHODS

A. The vibrating system

The device used to deliver the vibratory stimulus to mice and to muscle satellite cells is shown in Fig. 2. The eccentric, voltage-controlled motor (Maxon Motor™ Brushless E-Series) has a nominal voltage of 24 V and a diameter of 22 mm. The imposed displacement is thus half the diameter (11 mm in both directions of the plane perpendicular to the motor axis). Supplying voltages from 2.5 V up to 24 V varies the rotational speed, and thus the frequency, between 1 Hz and 120 Hz. A three-axis accelerometer (range ±3 g, sensitivity 300 mV/g) was fixed to the platform to measure the actual acceleration imposed on it by the motor; knowledge of the acceleration peaks ensured the reproducibility of the experiments. Using Matlab 7.1™ we also studied these signals in the frequency domain, and verified that the acceleration signal imposed by the motor was in phase with, and at the same frequency as, that perceived by the mice and by the cells in the cage. The signals were recorded with a National Instruments™ data logger and its certified software, and were analyzed with Matlab 7.1™.
B. Mice treatment and satellite cells isolation

Twenty newborn wild-type CD1 mice, male and female, were divided into two groups of ten, one treated and one control. The treated group was stimulated with high frequency vibration at 30 Hz for 4 weeks, 5 days/week, 1 h/day. Every week we collected tibialis anterior muscles for Western Blot analysis to investigate proteins involved in the hypertrophy/atrophy pathways. Muscle satellite cells were isolated from 6-day-old CD1 mice. After sterile dissection, leg muscles were minced in PBS/Gentamicin and digested with a 1 mg/ml Collagenase/Dispase solution (Roche). After three washes in PBS, lysates were filtered through a 40 µm cell strainer, and the cells were then plated on 6-cm dishes and grown in DMEM with 10% FCS. In four days the satellite cells differentiated into myotubes, and they were then treated for 1 and 4 days, 1 h/day, at a frequency of 30 Hz.

C. Western Blot analysis

Muscles (20 mg) were homogenized in a lysis buffer (100 mM NaHCO3, 1 mM EDTA, 2% SDS, 1000x protease inhibitor, 1000x Na3VO4). Homogenates were clarified by centrifugation at 13,000 rpm for 20 min at 4 °C before determination of the protein concentration with the Micro BCA™ Protein Assay Kit. Electrophoresis was performed using 8% acrylamide gels with an acrylamide/bis-acrylamide ratio of 100:1. Antibodies against Akt phosphorylated on Ser473 and against Akt were purchased from Cell Signaling Technologies®, and the anti-Myostatin antibody from Millipore®. Blots were normalized with β-actin (Santa Cruz Biotechnology®) and revealed with an enhanced chemiluminescence system (Pierce® ECL Western Blotting Substrate).

D. Molecular biology analysis: RNA extraction and Real-Time PCR

RNA was extracted from treated and non-treated muscles with the Trizol® Reagent kit (Invitrogen) in order to evaluate gene expression. Muscles were homogenized in Trizol lysis buffer, frozen in liquid nitrogen and stored at -80 °C until extraction, which involved several steps including the addition of chloroform, isopropanol and 75% ethanol. Satellite cell RNA was instead extracted with the RNeasy kit (Qiagen®) from treated and non-treated dishes. Cells were homogenized in β-mercaptoethanol and lysis buffer, frozen in liquid nitrogen and stored at -80 °C; the extraction included the addition of 70% ethanol and a DNase step to eliminate any genomic DNA contamination. Total RNA extracted from muscles and cells was then retrotranscribed with the iScript™ cDNA Synthesis kit (Bio-Rad, Hercules, CA, USA).
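The paper reports qRT-PCR expression normalized against PGK; one common way to compute such relative expression is the 2^-ΔΔCt method, sketched below under the assumption that raw Ct values are available (the authors do not state which quantification scheme was used, and all names and values here are illustrative):

```python
def fold_change(ct_target_treated, ct_pgk_treated,
                ct_target_control, ct_pgk_control):
    """Relative expression (2^-ddCt) of a target gene normalized to PGK."""
    d_treated = ct_target_treated - ct_pgk_treated  # normalize treated sample
    d_control = ct_target_control - ct_pgk_control  # normalize control sample
    dd_ct = d_treated - d_control                   # treated relative to control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for Myostatin vs. the PGK housekeeping gene:
# print(fold_change(28.1, 20.3, 26.5, 20.1))  # a value < 1 means downregulation
```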
Fig. 2: Representation of the bioreactor used for stimulation, showing the motor controller, the mouse cage, and the platform for cage and cell dishes mounted on the motor.

III. RESULTS

Results of the Western Blot assays on tibialis muscles are shown in Fig. 3. The upper panel (a) shows the Western Blot of Akt phosphorylated on Ser473, normalized against Akt, in control and treated tibialis over the four weeks of treatment, while the lower panel (b) shows the Western Blot of Myostatin normalized against β-actin.

Fig. 3: Western Blot assays on tibialis muscles.

Molecular biology analyses (qRT-PCR) of Foxo-1 and Myostatin (weeks 1 and 3) in tibialis muscles are shown as bar charts (Fig. 4), normalized against the expression of PGK (3-phosphoglycerate kinase), a housekeeping gene.

Fig. 4: qRT-PCR of Foxo-1 and Myostatin in the 1st and 3rd weeks of HFV treatment.

Molecular biology analyses (qRT-PCR) on satellite cells are shown as bar charts in Fig. 5. The genes analyzed were Dysferlin, M-cadherin and muscle creatine kinase (MCK), normalized against PGK expression.

Fig. 5: qRT-PCR of Dysferlin, M-cadherin and MCK on 4-day satellite cells.

IV. DISCUSSION
Mechanical vibration is widely used in sports training and in clinical therapy to enhance muscle functionality and to treat several senile diseases. Nevertheless, this phenomenon is not well understood at the cellular and tissue levels, so we decided to investigate the intracellular pathways involved in mechanotransduction, both in vivo and in vitro. For this reason we built this particular type of bioreactor (Fig. 2) to investigate the effects of high frequency vibration (HFV) on muscle and on satellite cells, the "stem cells" of muscle tissue. We treated 10 newborn mice for 4 weeks, 1 h/day at 30 Hz, and satellite cells derived from newborn mice for 4 days, 1 h/day, also at 30 Hz. On tibialis muscles we performed Western Blot for Akt phosphorylated on Ser473, Akt, and Myostatin; studies in both rats and humans have confirmed that Akt activity
is increased in response to muscle contractile activity, although it remains to be established how mechanical stress is converted into Akt activation [10]. On the other hand, Myostatin, a member of the TGF-β family, is expressed and secreted predominantly by skeletal muscle and functions as a negative regulator of muscle growth [10]. Our results confirmed that phospho-Akt is activated in the first week of treatment (Fig. 3) in treated tibialis compared to controls, but this activity is not maintained in the following weeks. On the other hand, Myostatin protein appeared downregulated in treated tibialis compared to controls (Fig. 3), suggesting a possible role of HFV in muscle hypertrophy. qRT-PCR on two genes involved in the atrophy pathway, Foxo-1 and Myostatin (Fig. 4), confirmed the Western Blot data: these genes were not expressed in treated tibialis in the 1st and 3rd weeks. Moreover, qRT-PCR on satellite cells seems to indicate that HFV promotes the fusion of primary myoblasts; indeed, dysferlin has been implicated in vertebrate muscle membrane fusion, and M-cadherin is present on quiescent satellite cells and is upregulated during skeletal muscle regeneration and myotube fusion [8,9]. This experiment represents a preliminary investigation of the effects of high frequency vibration at the cellular and tissue levels. Nevertheless, it demonstrates that HFV has a positive effect on muscle tissue, even with short treatments over a short time. Further studies will be necessary, especially to investigate muscle development and muscle regeneration after injury during treatment with a vibrating system.
REFERENCES
1. Lohman EB, Petrofsky JS, Maloney-Hinds C, Betts-Schwab H, Thorpe D (2007) The effect of whole body vibration on lower extremity skin blood flow in normal subjects. Med Sci Monit 13(2). PMID: 17261985
2. Martineau LC, Gardiner PF (2001) Insight into skeletal muscle mechanotransduction: MAPK activation is quantitatively related to tension. J Appl Physiol 91(2):693-702
3. Cardinale M, Bosco C (2003) The use of vibration as an exercise intervention. Exerc Sport Sci Rev 31(1):3-7. Review
4. Bosco C, Colli R, Introini E, Cardinale M, Tsarpela O, Madella A, Tihanyi J, Viru A (1999) Adaptive responses of human skeletal muscle to vibration exposure. Clin Physiol 19(2):183-187
5. Tihanyi TK, Horváth M, Fazekas G, Hortobágyi T, Tihanyi J (2007) One session of whole body vibration increases voluntary muscle strength transiently in patients with stroke. Clin Rehabil 21(9):782-793
6. Machteld R et al. (2004) Whole-body-vibration training increases knee-extension strength and speed of movement in older women. JAGS 52:901-908
7. Péault B, Rudnicki M, Torrente Y, Cossu G, Tremblay JP, Partridge T, Gussoni E, Kunkel LM, Huard J (2007) Stem and progenitor cells in skeletal muscle development, maintenance, and therapy. The American Society of Gene Therapy
8. Doherty KR, Cave A, Davis DB, Delmonte AJ, Posey A, Earley JU, Hadhazy M, McNally EM (2005) Normal myoblast fusion requires myoferlin. Development 132:5565-5575. doi:10.1242/dev.02155
9. Zeschnigk M, Kozian D, Kuch C, Schmoll M, Starzinski-Powitz A (1995) Involvement of M-cadherin in terminal differentiation of skeletal muscle cells. J Cell Sci 108:2973-2981
10. Sandri M (2008) Signaling in muscle atrophy and hypertrophy. Physiology 23:160-170. doi:10.1152/physiol.00041.2007. Review
ACKNOWLEDGMENTS
This project was partially funded by FIRB (RBIP06FH7J_001) and by the Cariplo Project 2006.
Author: Gabriele Ceccarelli
Institute: Department of Experimental Medicine, University of Pavia
Street: Via Forlanini, n.8
City: Pavia
Country: Italy
Email: [email protected]
Surface Characterization of Collagen Films by Atomic Force Microscopy

A. Stylianou, S.B. Kontomaris, M. Kyriazi and D. Yova
Laboratory of Biomedical Optics and Applied Biophysics, School of Electrical and Computer Engineering, National Technical University, Athens, Greece

Abstract— Collagen is the most abundant extracellular matrix protein and is important for a variety of functions, including tissue scaffolding, cell adhesion, cell migration, angiogenesis, tissue morphogenesis and tissue repair. Collagen is considered one of the most useful biomaterials since it is hydrophilic, exhibits negligible cytotoxicity and good hemostatic properties, and is readily available and biocompatible. As the majority of biological reactions occur on surfaces or interfaces, it is of great importance to nanostructure collagen thin films. These films are useful for addressing a variety of biological issues, including cell morphology and the influence of surface properties on intracellular signaling, and can be used to cover non-biological surfaces, conferring biocompatibility. In this paper thin films of collagen were formed and, depending on the methodology used, different collagen structures were created. Atomic force microscopy (AFM) enabled the observation of the different patterns and the measurement of specific surface characteristics. Additionally, the AFM tip was used to nanomanipulate the collagen structures in air. The AFM topographs showed that differently nanostructured collagen films with pre-determined characteristics were formed. These films can be used to direct cellular processes in a variety of research and, later, medical applications.

Keywords— Nanobiomaterials, Atomic Force Microscopy, Collagen Films, Roughness, Nanomanipulation

I. INTRODUCTION
Biomaterials have to fulfill many requirements in order to be used for research or biomedical applications. The surface properties of biomaterials play an important role in biomedicine, as the majority of biological reactions occur on surfaces or interfaces, and the quality of the surface features limits their applications. Nanostructured biomaterials, the so-called nanobiomaterials [1], present improved surface characteristics. As a result they have wide applications in biomedicine, in fields such as tissue engineering, implant design and biosensors, where it is critical to pre-determine the topography of the surfaces. Collagen is an extracellular matrix protein which is important for a variety of functions, including tissue scaffolding, cell adhesion, cell migration, angiogenesis, tissue morphogenesis and tissue repair. Additionally, its concentration or structure is altered in specific pathological conditions [2,3]. Type I collagen is the most widely occurring fibrillar collagen; its molecules consist of three amino acid chains that form rod-shaped triple helices, which assemble to form fibrils [3]. Collagen is considered one of the most useful biomaterials since it is hydrophilic, exhibits negligible cytotoxic responses and good hemostatic properties, and is readily available and biocompatible [4]. Additionally, collagen contains amino acid sequences that may be recognized by specific receptors, and it has been shown that cells respond to the mechanics and topography of a collagen matrix [5,6]. The characteristics of collagen can thereby trigger cell behavior, such as adhesion and spreading. Due to its filamentous shape, its associative properties and the possibility of manipulating its surface structure, type I collagen is a very promising molecule for the creation of nanostructures and nanobiomaterials. AFM is a very powerful instrument that can image biological systems in real time with molecular or submolecular resolution in air, in vacuum and even in liquid [7]. AFM images can be used to obtain qualitative or quantitative information about the properties of biomaterials, and the AFM can operate in many different modes relevant to biological systems. Among other uses, the AFM tip can be a unique tool for nanomanipulation [8], and AFM offers the possibility of probing physicochemical and mechanical properties of a surface, such as roughness. Surface roughness is of much interest as it is directly related to the optical properties and the thickness of a film, and it is an important index for assessing the quality of a surface. Additionally, surface roughness strongly affects the interaction between the biomaterial and cells, due to the increase in the surface available for cell adhesion and growth [9]. AFM is a widely used instrument for providing morphological information and quantitative measurements from collagen samples. With the AFM it is possible to study several collagen structures, from fibrils to separated collagen molecules [10]. AFM has the advantage that only a minimal collagen concentration is required, and it performs better with collagen thin films [11]. Thin films are of great interest in the fields of biomaterials and tissue engineering since they possess unique nanoscale characteristics [6,8]. They are useful for addressing a variety of biological issues, including cell morphology and the influence of surface properties on intracellular signaling, and they can be used to cover non-biological surfaces, conferring biocompatibility.
In a previous work it was demonstrated that collagen thin films maintain their non-linear optical properties and that AFM can contribute significantly to the investigation of their optical characteristics [12]. Additionally, the characteristics of the collagen films can be controlled by changing simple physicochemical factors, and the resulting topography differences can be imaged and measured by AFM [13]. In this paper thin films of collagen were formed using two different methodologies. The first was based on the development of oriented fibrous structures under the hydrodynamic flow of the collagen solution. The second combined the hydrodynamic flow with a spin coating procedure in order to form collagen structures with a double, perpendicular orientation. AFM enabled the observation of the different patterns and the measurement of specific surface characteristics, such as the average roughness and the diameters of the fibrous structures. Additionally, the possibility of nanomanipulating the collagen structures in air with the AFM tip was studied. The results suggest that these methodologies could be applied for the formation of nanostructured collagen films with pre-determined surface characteristics.
II. MATERIALS AND METHODS
A. Collagen Films Preparation

Type I collagen from bovine Achilles tendon (Fluka 27662) was dissolved in acetic acid at a final concentration of 1 mg/ml. The solution was then homogenized at 24,000 rpm (IKA T18 Basic) and stored at 4 °C as a stock solution. Part of the stock solution was diluted and neutralized with a buffer solution, Phosphate Buffered Saline (PBS), to a final concentration of 0.6 µg/ml (pH 7.2). Onto freshly cleaved mica discs (F7019, Agar Scientific) 30 ml of PBS were dropped and then:

1st methodology: 30 ml of collagen solution was flushed over the mica surface, creating a hydrodynamic flow. After an adsorption time of 10, 30 or 60 minutes, or 5 days, the samples were rinsed with PBS to remove loosely bound molecules.

2nd methodology: the same as the 1st, but after adsorption the samples were spin coated (WS-400B-6NPP/LITE, Laurell Technologies) for 40 s at 6000 rpm.

The stock solution was also diluted with PBS to a final concentration of 1.2 µg/ml (pH 7.2) and the previous methodologies were repeated with 10 minutes of adsorption in order to examine the effect of the solution concentration on the collagen films.

B. Atomic Force Microscopy

AFM images were obtained in air in contact mode using a commercial microscope (CP II, Veeco). All images were acquired at room temperature with a typical anisotropic AFM probe (MPP-31123-10). Image processing and quantitative measurements, such as the average roughness (Ra) and fiber diameter (Da), were made using the image analysis software (DI SPMLab NT ver. 6.0.2, Veeco). Each sample was imaged at several locations and at different image sizes, but only the most representative images are shown. Under each mica disc a copper locator grid (G2761C, Agar Scientific) was glued in order to map the film surface.
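As an aside, the average roughness (Ra) reported throughout the results is conventionally defined as the mean absolute deviation of heights from the mean height of the scanned area; a minimal sketch of that definition follows (the SPMLab software may apply additional corrections, such as plane fitting, which are omitted here):

```python
import numpy as np

def average_roughness(height_map):
    """Ra: mean absolute deviation of heights from the mean height.

    height_map is a 2-D array of heights (e.g., in nm) from an AFM scan.
    """
    z = np.asarray(height_map, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Example on synthetic data (hypothetical 512x512 scan):
# z = np.random.normal(0.0, 14.0, size=(512, 512))
# print(average_roughness(z))  # ~11 nm for this noise level
```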
III. RESULTS AND DISCUSSION

With the first methodology and 10 minutes of adsorption, only a few collagen structures could be found on the mica surface. The majority of them had an indefinable shape and only very rare filamentous structures were observed (Fig. 1). In general the surface was not uniform and the observed structures were not repeatable. These characteristics result from both the very short adsorption time and the rinse with PBS: few collagen molecules were adsorbed, and the loosely bound ones were removed.
Fig. 1 AFM height image obtained on a collagen layer adsorbed on mica for 10 minutes with a hydrodynamic flow. Only rare amorphous structures are observed. The average roughness was Ra = 11 nm and the structures have a height of ~30 nm.

For a 30-minute adsorption time, fibrous structures were formed and, what is more, they had a uniform orientation (Fig. 2A), corresponding to the direction of the hydrodynamic flow of the collagen solution. Moreover, these fibrous structures presented a 'dendritic' form, with small branches observed on both sides of the principal fiber axis. This pattern might be caused by the presence of minerals in the collagen solution [14] or by the crystalline structure of the mica surface. Similar structures were observed over almost the whole mica surface and their orientation was invariable. By increasing the adsorption time to 60 minutes, dendritic structures were again formed, but with a noticeable increase in the size of the side branches (Fig. 2B).
Fig. 2 Representative AFM height images of the dendritic structures. (A) 30 minutes of adsorption, Ra = 40 nm, Da = 3.5 µm; (B) 60 minutes of adsorption, Ra = 47 nm, Da = 3.8 µm, with broader branches.

On some occasions the growth of the branches was so extensive that the free space among the structures was significantly reduced (Fig. 3A). After 5 days of adsorption the collagen structures had merged, and dendritic or fibrous patterns could hardly be observed (Fig. 3B). This is because there was plenty of time for adsorption, and the collagen molecules bonded with the rest of the collagen structures. Additionally, the sample had completely dried, and PBS could not remove the collagen that had not been adsorbed.
Fig. 3 Representative AFM height images showing the loss of the dendritic shape. (A) Adsorption time of 60 minutes, Ra = 111 nm; (B) 5 days of adsorption, Ra = 99 nm. In the top left corner a dendritic structure can barely be seen.

With the second methodology, the collagen films presented totally different surface structures. For 10 minutes of adsorption, structures with two main orientations were observed (Fig. 4A). The first orientation, consisting of fibrous structures with larger diameters, corresponded to the direction of the hydrodynamic flow. The second orientation was almost perpendicular to the first and consisted of smaller fibers. This orientation might be a result of the centrifugal forces of the spin coating procedure, since its direction always pointed toward the center of the sample. These structures, which could be said to look like 'ideograms' (Fig. 4B), showed great repeatability over the whole film, and on some occasions more elongated structures with both orientations were formed (Fig. 4C). When the adsorption time was increased (30 minutes), the structures covered spherical areas and consisted of both bulky and fibrous formations (Fig. 5A). The fibrous structures, in some cases, presented the same double orientation that was observed at 10 minutes (Fig. 5B, C).
Fig. 4 Representative AFM height images of the double orientation formed by the 2nd methodology after 10 minutes of adsorption. Ra is 2.3 nm and the Da of the fibrous structures of the second orientation is 2.3 µm. The structures of the main orientation did not possess constant diameters, as illustrated in panel B.

The spherical pattern might be a consequence of the evaporation of the collagen solution. At 60 minutes, more bulky structures were formed and the previously noticed spherical coverage areas were absent. Due to the increased amount of adsorbed collagen molecules, the fibrous shape was lost.
Fig. 5 Representative AFM height images of the structures covering spherical areas after 30 minutes of adsorption. Ra = 30 nm; the spherical areas were almost constant, with a coverage area of about 80 µm² (circles), while the bulky structures covered ~16 µm² (arrows).

The ratio of spherical to bulky areas was remarkably constant, indicating that the formation follows a repeatable pattern over the whole surface. When the concentration of the collagen solution was increased to 1.2 µg/ml, irrespective of the methodology used, the amount of adsorbed collagen increased too. With the first methodology, mostly amorphous collagen structures were formed, with only a hint of orientation (Fig. 6A). Compared with the low-concentration films (Fig. 1), much more collagen was adsorbed, as expected. On the other hand, the second methodology created fibrous structures presenting the two orientations (Fig. 6B); the higher concentration in this case led to the formation of a collagen net. These results indicate that the collagen nanostructures on a mica surface can be modulated by changing the concentration of the collagen solution and the film preparation methodology. The quantitative measurements indicated that the average roughness increased with adsorption time, as more collagen molecules were adsorbed. In the majority of the samples the roughness was constant over the whole surface, showing that the films possessed uniform characteristics, as also shown by the qualitative observations.
Fig. 6 AFM height images of collagen structures at a concentration of 1.2 µg/ml and 10 minutes of adsorption. With the first methodology (A) amorphous collagen structures were formed (Ra = 41 nm), while with the second (B) fibrous structures with two orientations were created (Ra = 37 nm).

Finally, the AFM tip was used to nanomanipulate the collagen structures. The procedure was carried out in air by increasing the applied force to 20 nN and scanning the surface. The results indicated that, with the first methodology and for short adsorption times, the collagen structures were malleable and could be nanomanipulated (Fig. 7). The required forces are higher than those previously reported for the creation of nanoscopic collagen matrices [8], since the manipulation was performed in air. The results showed that the reformation of the collagen structures can be achieved without the presence of liquid, which would make the procedure more demanding and time consuming. After 60 minutes of adsorption the films had hardened and the reorientation of the fibrous structures was implausible. The manipulation of the 2nd-methodology samples was also impossible regardless of the applied force. This is a consequence of the fact that these samples were fully dried by the spin coating procedure, and the collagen structures were too hard to manipulate.

Fig. 7 Controlled nanomanipulation of collagen in contact mode, in air. The film was first imaged at low force (1 nN) over an area of about 10 µm (A). The center of the image was scanned with a force of 20 nN in order to re-orient the collagen fibers. The area was then re-imaged at low force, and the thin fiber was oriented in a different direction (B).

IV. CONCLUSIONS

The film preparation methodologies and the nanostructured collagen films described in this paper broaden the biomedical applications of collagen biomaterials. The films could be useful for directing cellular processes in several research applications, such as the study of healing progress. Additionally, these nanostructured patterns could be used to mimic collagen-rich tissues, such as cornea, or as templates for nanoparticles, which could enhance their properties. Moreover, the ability to nanomanipulate the surface topography of these films extends their research and future medical applications.

REFERENCES
1. Hasirce V, Vrana E et al. (2006) Nanobiomaterials: a review of the existing science and technology, and new approaches. J Biomater Sci Polymer Edn 17:1241-1268
2. Kadler KE, Baldock C, Bella J, Boot-Handford RP (2007) Collagens at a glance. J Cell Sci 120:1955-1958
3. Fratzl P (2008) Collagen: Structure and Mechanics. Springer, New York
4. Skopinska-Wisniewska J, Sionkowska A et al. (2009) Surface characterization of collagen/elastin based biomaterials for tissue regeneration. Appl Surf Sci 255:8286-8292
5. Plant AL, Bhadriraju K, Spurlin TA, Elliot JT (2009) Cell response to matrix mechanics: focus on collagen. BBA Mol Cell Res 1793:893-902
6. Lu JT, Lee CJ et al. (2007) Thin collagen film scaffolds for retinal epithelial cell culture. Biomaterials 28:1486-1494
7. Gadegaard N (2006) Atomic force microscopy in biology: Technology and techniques. Biotech Histochem 81:87-97
8. Jiang F, Khairy K et al. (2004) Creating nanoscopic collagen matrices using atomic force microscopy. Microsc Res Technique 64:435-440
9. Covani U et al. (2007) Biomaterials for orthopedics: A roughness analysis by atomic force microscopy. J Biomed Mater Res 82A:723-730
10. Cisneros AD, Hung C et al. (2006) Observing growth steps of collagen self-assembly by time lapse high-resolution atomic force microscopy. J Struct Biol 154:232-245
11. Abraham L et al. (2008) Guide to collagen characterization for biomaterial studies. J Biomed Mater Res B 87:264-285
12. Stylianou A, Kyriazi M, Politopoulos K, Yova D (2009) Combined SHG signal information with AFM imaging to assess conformational changes in collagen. 9th International Conference on Information Technology and Applications in Biomedicine, 4-7 November 2009, Larnaca, Cyprus. DOI 10.1109/ITAB.2009.5394311
13. Stylianou A, Kyriazi M, Yova D (2009) Study of the influence of physicochemical factors in the creation of thin collagen films with atomic force microscopy. 1st Meeting of Biomaterials Society & H.A.O.S.T., Athens, Greece
14. Tong W, Eppell SJ (2002) Control of surface mineralization using collagen fibrils. J Biomed Mater Res 61:346-353

Author: Stylianou Andreas
Institute: National Technical University
Street: Iroon Polytexneiou 9, Zografou Campus, 15780
City: Athens
Country: Greece
Email: [email protected]
Changes in Electrocardiogram during Intra-Abdominal Electrochemotherapy: A Preliminary Report

B. Mali1, T. Jarm1, E. Gadžijev2, G. Serša2, and D. Miklavčič1
1 Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
2 Institute of Oncology Ljubljana, Ljubljana, Slovenia
Abstract— Electrochemotherapy is an effective local antitumor treatment and is reported to be a safe method. The high voltage electric pulses used in electrochemotherapy of internal tumors could potentially lead to adverse heart-related effects, especially if electroporation pulses were delivered within the vulnerable period of the heart or coincided with certain arrhythmias. We report preliminary findings on the influence of novel intra-abdominal electrochemotherapy of liver tumors on the functioning of the heart in three human patients. For this purpose the electrocardiogram recorded during the surgical procedure was analyzed. Morphological changes in the electrocardiograms were investigated by analyzing the RR, QR and QT intervals and the potential relation between the changes in these parameters and the electrical current and energy applied during electroporation pulse delivery. On the basis of the 95% statistical prediction interval, a transient decrease of the corrected QT interval was found during electroporation pulse delivery. We found statistically significant correlation between changes in the corrected QT interval and the electrical current and energy applied during electroporation pulse delivery. We suggest that it is possible to affect the functioning of the heart by electrochemotherapy of internal tumors, but most probably such changes would be transient. No adverse effects due to intra-abdominal electrochemotherapy were found, which may be due to the synchronization of electroporation pulse delivery with the electrocardiogram, but the probability of complications could increase when using pulses of longer duration, a larger number of pulses, an increased pulse repetition frequency, and/or in the treatment of tumors in the immediate vicinity of the heart. For this reason, when internal tumors are treated, synchronization of electroporation pulses with the refractory period of the cardiac cycle should be implemented in medical equipment for electroporation in order to maximize the safety of the patients.

Keywords— electrochemotherapy, electrocardiogram, internal tumors.
I. INTRODUCTION

The combined treatment in which delivery of a chemotherapeutic drug is followed by the local application of high-voltage electric pulses to the tumor has been termed electrochemotherapy (ECT). The local delivery of electric pulses causes electroporation, which transiently increases
membrane permeability, including to molecules such as bleomycin or cisplatin. ECT has been successfully used for the treatment of cutaneous and subcutaneous tumors irrespective of their histological origin, in different tumor models and in humans [1-2]. In these studies, a typical ECT protocol involved eight electroporation pulses (EP pulses) with amplitudes of up to 1000 V, duration 100 μs, repetition frequency 5 kHz and inter-electrode distance 8 mm. However, new applications using endoscopic or surgical means to access internal tumors are also being developed, with EP pulse amplitudes reaching up to 3000 V. Recently, initial treatments with intra-abdominal ECT of liver tumors have been performed at the Institute of Oncology in Ljubljana. ECT is reported to be an efficient and safe method; no adverse effects have been reported so far. EP pulse delivery causes minor side effects in patients, such as transient lesions in areas in direct contact with the electrodes [3] and acute localized pain due to contraction of muscles in the vicinity of the electrodes [1]. The induced contraction could present a problem if provoked in the heart muscle [4]. Currently used EP protocols could thus interfere with the functioning of the heart, although no practical evidence of this has been reported so far. In our previous study, the influence of EP pulses on the functioning of the heart during ECT of cutaneous and subcutaneous tumors was investigated, and no arrhythmias or other pathological morphological changes due to the application of EP pulses were found [5]. The only demonstrated effect of EP pulses on the electrocardiogram (ECG) was a transient decrease of the RR interval, which was however attributed to the anxiety and stress of the patient undergoing ECT. Among the various irregularities in the functioning of the heart that the application of EP pulses could induce (e.g., atrial and ventricular flutter and fibrillation, premature atrial and ventricular contractions), the most dangerous is ventricular fibrillation [4]. Fibrillation can be induced if an electrical stimulus is delivered during the vulnerable period of the heart. For the ventricular myocardium, the vulnerable period coincides with the T wave; for the atria, the vulnerable period lies somewhere in the S wave [4]. Externally applied electric pulses delivered outside the vulnerable period have an extremely low probability of inducing ventricular fibrillation [4]. Therefore the synchronization of EP pulse delivery with the ECG was suggested in order to increase the safety of the patient [5].
The changes in ECG during a novel intra-abdominal ECT, performed on three patients, are reported here. Morphological changes in the ECG signals were examined by analyzing the RR, QR and QT intervals, together with the potential relation between the changes in these parameters and the electrical current and energy applied during EP pulse delivery.
II. MATERIALS AND METHODS

A. Patients and Electrochemotherapy

Three patients (two male and one female) were treated with intra-abdominal ECT of metastases of colorectal carcinoma in the liver. The treatment was performed under general anesthesia at the Institute of Oncology in Ljubljana and was approved by the National Medical Ethics Committee of the Republic of Slovenia. Bleomycin was administered intravenously (bolus injection) at a dose of 10 mg/m2 for the first patient and 15 mg/m2 for the second and third patients. The ECG was recorded during the surgical procedure. None of the patients had a pre-existing cardiac condition. EP pulses were generated by the electric pulse generator Cliniporator VITAE (IGEA, Italy), which allows delivery of EP pulses with amplitudes up to 3000 V and a maximum current of 50 A. Needle electrodes (VG1240) with a diameter of 1.2 mm and either a 3 or 4 cm long active (conductive) part were used. The delivery of EP pulses was synchronized with the ECG via an AccuSync 42 (AccuSync Medical Research Corporation, USA), a commercially available ECG triggering device. Standard ECG lead II was used in all applications. The trigger signal from the AccuSync 42 was further analyzed by the algorithm incorporated in the Cliniporator VITAE, which calculated the interval between two successive trigger signals; EP pulses were delivered only if the value of this interval was within a certain range. ECG signals from the AccuSync 42 were acquired for further analysis at a sampling frequency of 1000 Hz using a BIOPAC data acquisition and measurement system (BIOPAC Systems, USA). Altogether 240 EP pulses of 100 µs duration were delivered (128 on the first, 32 on the second, and 80 on the third patient). The EP pulses were delivered in trains of eight pulses (each pulse synchronized with a normal heartbeat), alternately between different pairs of needle electrodes, resulting in 30 trains of EP pulses. The minimal interval between two successive trains of EP pulses was 2.5 s (an equivalent of three heartbeats). The number of electrodes, the pairs of electrodes among which EP pulses were delivered, and the distances and voltages applied were defined according to individualized treatment planning, based on the size of the tumor (acquired from CT scans of the treatment area). The electrode
configuration and the protocol for delivery of the electric pulses (pairs of electrodes and voltages used) were calculated based on mathematical modeling of the electric field distribution within the treatment area, where an electric field of at least 600 V/cm was required in all regions of the tumor [6]. The time course of voltage and current during EP pulse delivery was recorded, which enabled us to establish the voltage, current and energy applied for each EP pulse.

B. Analysis of Electrocardiogram

The primary analysis of the ECG signals recorded during intra-abdominal ECT was made using a QRS detection algorithm based on the analysis of a single-lead ECG, developed for synchronization of EP pulse delivery with the ECG [7]. Further analysis of the ECG signals was performed to estimate the effect of EP pulse delivery on the ECG. For this purpose the peaks and ends of the T waves were determined using program routines described previously [5]. Briefly, the corrected QT interval (QTc interval) was calculated as the QT interval divided by the square root of the corresponding RR interval. Because the Q and T wave peaks are detected more reliably than the isoelectric level and the end of the T wave, we also calculated the peak-to-peak QT interval and its corrected value (ppQTc interval). Four trains of EP pulses (32 EP pulses) were excluded from the analysis due to many small distinctive disturbances in the ECG during EP pulse delivery, which made the detection of heartbeat features unreliable. For the same reason, in six trains of EP pulses only three of the eight EP pulses were included in the evaluation. Altogether 178 relatively noiseless heartbeats coinciding with 178 EP pulses were included in the evaluation of changes in the ECG. For each train of EP pulses, the updated average of each parameter was calculated from the eight values of the parameter preceding EP pulse delivery. Because there were only three heartbeats without EP pulse delivery between two trains of EP pulses, in such cases the updated average included the values of the parameter extracted from these three heartbeats plus the five newest values from the previous updated average. The difference between the value of a parameter from a heartbeat coinciding with an EP pulse and the corresponding updated average of that parameter was calculated and termed the "single-to-mean" difference. For each train of EP pulses we also calculated the mean of the values of the parameter extracted from all heartbeats in that train; the difference between this mean and the corresponding updated average was termed the "mean-to-mean" difference.
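The interval computations described above can be sketched as follows; the correction (QT divided by the square root of RR) and the eight-value updated average are taken from the text, while the function and variable names are illustrative:

```python
import numpy as np

def qtc(qt, rr):
    """Corrected QT: QT divided by the square root of the RR interval
    (intervals in seconds), as defined in the text."""
    return qt / np.sqrt(rr)

def single_to_mean(value, preceding):
    """'Single-to-mean' difference: a beat's parameter value minus the
    updated average of the eight values preceding pulse delivery."""
    return value - float(np.mean(preceding[-8:]))

# Example: QT = 0.36 s at RR = 0.80 s gives QTc = 0.36 / 0.894 ~ 0.402 s
```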
C. Statistical Analysis

For evaluation of the effects of EP pulse delivery on the ECG, the prediction interval (PI) was calculated. A PI is an interval, estimated from a chosen number of previous observations of a parameter, within which the next observation of the same parameter is expected to fall with a chosen confidence level. We used a 95% PI, which means that about 95% of the time the next observation will fall inside this interval. The PI was calculated from 16 values of the evaluated parameters (RR, QR, QTc and ppQTc intervals) before EP pulse delivery. The value immediately preceding EP pulse delivery was not included in the determination of the PI and was used as a control value of the parameter. The PI of each parameter was updated for each train of EP pulses. Because there were only three heartbeats without EP pulses between two trains, the updated PI included the parameter values extracted from the first two of these three heartbeats and the 14 newest values from the previous PI; the value extracted from the third heartbeat was again used as a control value. The degree of association between the electrical parameters applied during EP pulse delivery and the changes in the ECG parameters was assessed by linear regression analysis. A p value of less than 0.05 was considered an indication of statistically significant correlation. The statistical analysis was performed using the SigmaPlot 11.0 package.
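The paper does not spell out the PI formula; a minimal sketch using the standard normal-theory prediction interval for a single future observation, mean ± t(0.975, n-1) · s · sqrt(1 + 1/n), applied to the 16-sample window described above, could look like this:

```python
import numpy as np
from scipy import stats

def prediction_interval(samples, confidence=0.95):
    """Two-sided prediction interval for the next observation,
    based on n previous samples (assumes approximate normality)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    half_width = t * s * np.sqrt(1.0 + 1.0 / n)
    return m - half_width, m + half_width

# lo, hi = prediction_interval(last_16_qtc_values)  # hypothetical 16-value window
# A new QTc outside (lo, hi) is flagged as a change during pulse delivery.
```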
III. RESULTS

No pathological morphological changes caused by EP pulses were observed in the patients subjected to intra-abdominal ECT. The durations of the RR, QR, QTc and ppQTc intervals of one heartbeat before each train of EP pulses and of the heartbeats during EP pulse delivery were evaluated. For both cases, the numbers of evaluated parameters below the lower and above the upper limit of the PI were determined, and the percentage of values falling outside the corresponding PI was calculated. A significant increase in the percentage of QTc values outside the PI was found during EP pulse delivery in comparison with the same value in the absence of EP pulse delivery (Table 1). All 86 QTc intervals evaluated during EP pulse delivery that fell outside the 95% PI fell below the lower limit of the PI, showing a transient decrease of the QTc interval during EP pulse delivery (Table 1). The linear regression between the single-to-mean difference for the RR and QTc intervals and the current was statistically significant (Table 2). Moreover, statistically significant linear regression between the mean-to-mean and single-to-mean differences for the QTc interval and the energy was also detected (Table 2).
Table 1 Percentage of values of the evaluated parameters outside the 95% PI. The left side of the table summarizes results for control values of the parameters in the absence of EP pulse delivery (n = 26); the right side summarizes results for values during EP pulse delivery (n = 178). For both cases, the numbers of evaluated parameters below the lower and above the upper limit of the PI (Nbelow and Nabove, respectively) are given.

Evaluated        In absence of EP              During EP
parameter        %      Nbelow   Nabove        %      Nbelow   Nabove
RR interval      7.69   2        0             29.21  30       22
QR interval      0.00   0        0             7.87   7        7
QTc interval     3.85   1        0             48.31  86       0
ppQTc interval   0.00   0        0             10.67  15       4
Table 2 Statistical significance (p values) of the linear regression between changes in the RR, QR, QTc and ppQTc parameters (dependent variable) and the electric current or energy (independent variable). For each parameter, the mean-to-mean difference (n = 26) and the single-to-mean difference (n = 178) were considered.

           RR interval        QR interval        QTc interval       ppQTc interval
           n=26    n=178      n=26    n=178      n=26    n=178      n=26    n=178
Current    0.053   <0.001     0.621   0.860      0.302   0.001      0.464   0.219
Energy     0.539   0.212      0.944   0.962      0.029   <0.001     0.453   0.097
IV. DISCUSSION

The QT interval is considered an indicator of the total duration of ventricular electrical activity. A significant change in the QT interval is one of the most important indicators of arrhythmias [8]. However, its value also depends on the heart rate (the faster the heart rate, the shorter the QT interval) and has to be adjusted to aid interpretation; for this reason the QTc interval is used in practice. In our study, a significant increase in the percentage of QTc values falling outside the PI was detected during the application of EP pulses compared with the percentage in the absence of EP pulse delivery (from 3.85% to 48.31%, Table 1). This significant increase could indicate a clinically relevant effect of EP pulse delivery on the ECG. The transient decrease of the QTc interval (median value -34.4 ms) disappeared immediately after the end of EP pulse delivery. On the other hand, the absence of a significant increase for the ppQTc interval could lead to the opposite conclusion, namely a clinically irrelevant effect of EP pulse delivery on the ECG. Small disturbances sometimes present in the ECG signal during EP pulse delivery could hamper the detection of the end of the T wave and thus the determination of the QTc interval. EP pulses were synchronized with the refractory period of the cardiac cycle. Nevertheless, the likelihood of electroporation influencing the functioning of the heart depends on
the applied electrical current and energy; the duration, number and repetition frequency of the EP pulses; and the electric current pathway [4]. Our preliminary results showed statistically significant linear regression between the single-to-mean difference for the QTc interval and the current or energy (Table 2). In the study by Gomes et al. it was shown that the threshold energy level that stimulates the heart strongly depends on the age of the subject (i.e., old animals have lower threshold levels) [9]. Due to the high energy of the applied EP pulses (median value 7.05 J) and a decreased threshold energy level owing to the relatively high age of the patients (median age 54 years), it is theoretically possible that EP pulse application could lead to heart arrhythmia. This is in agreement with the results of a study by Lavee et al. using irreversible electroporation for epicardial atrial ablation in the treatment of atrial fibrillation, which showed that ablation pulses (amplitudes of 1500 to 2000 V, duration 100 µs) caused transient but not permanent arrhythmia, without any other rhythm disturbance apart from the rapid atrial pacing during the pulse sequence application [10]; immediate resumption of sinus rhythm following the ablation was observed. However, the results of the work by Al-Khadra et al. showed no arrhythmias in association with electroporation applied directly to the heart [11]. According to the strength-duration curve of the heart, a very large current would be required to cause a single premature ventricular contraction at very short EP pulse durations (the millisecond range) [4]. Since no premature ventricular contractions due to EP pulses were detected in our preliminary study, it is most improbable that EP pulses alone could create the inhomogeneity (altered states of depolarization-repolarization) which is a prerequisite for the onset of ventricular fibrillation. It therefore seems possible to affect the functioning of the heart by ECT treatment of internal tumors located close to the heart muscle, but most probably these changes would only be transient. For this reason, when internal tumors are treated, synchronization of EP pulses with the refractory period of the cardiac cycle should be used in medical equipment for EP pulse delivery whenever there is a possibility of electroporation pulses influencing the functioning of the heart, in order to maximize the safety of the patients.
V. CONCLUSIONS

In our preliminary study of the changes in ECG during intra-abdominal ECT in three patients, a transient QTc interval decrease during EP pulse delivery was identified. We also
found statistically significant correlation between the changes of the QTc interval and the electrical current and energy applied during EP pulse delivery. Although no adverse effects due to intra-abdominal ECT were found, which may be due to the synchronization of electroporation pulse delivery with the electrocardiogram, the probability of complications could increase when using pulses of longer duration, a larger number of pulses, an increased pulse repetition frequency, and/or in the treatment of tumors in the immediate vicinity of the heart.
ACKNOWLEDGMENTS
The research was supported by project P2-0249 and by various grants from the Research Agency of the Republic of Slovenia.
REFERENCES
1. Mir LM, Glass LF, Sersa G et al. (1998) Effective treatment of cutaneous and subcutaneous malignant tumors by electrochemotherapy. Br J Cancer 77:2336-2342
2. Marty M, Sersa G, Garbay JR et al. (2006) Electrochemotherapy – An easy, highly effective and safe treatment of cutaneous and subcutaneous metastases: Results of ESOPE (European Standard Operating Procedures of Electrochemotherapy) study. Eur J Cancer Suppl 4:3-13
3. Mir LM, Orlowski S (1999) Mechanisms of electrochemotherapy. Adv Drug Deliv Rev 35:107-118
4. Reilly JP (1998) Applied Bioelectricity: From Electrical Stimulation to Electropathology. Springer Verlag, New York
5. Mali B, Jarm T, Čorović S et al. (2008) The effect of electroporation pulses on functioning of the heart. Med Biol Eng Comput 46:745-757
6. Miklavčič D, Snoj M, Županič A et al. (2010) Towards treatment planning and treatment of deep seated solid tumors by electrochemotherapy. BioMed Eng OnLine, in press
7. Mali B, Jarm T, Jager F et al. (2005) An algorithm for synchronization of in vivo electroporation with ECG. J Med Eng Technol 29:288-296
8. Anderson ME (2006) QT interval prolongation and arrhythmia: an unbreakable connection? J Intern Med 259:81-90
9. Gomes PAP, Galvão KM, Mateus EF (2002) Excitability of isolated hearts from rats during postnatal development. J Cardiovasc Electrophysiol 13:355-360
10. Lavee J, Onik G, Mikus P et al. (2007) A novel nonthermal energy source for surgical epicardial atrial ablation: Irreversible electroporation. Heart Surg Forum 10:96-101
11. Al-Khadra A, Nikolski V, Efimov IR (2000) The role of electroporation in defibrillation. Circ Res 87:797-804
Author: Barbara Mali
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Tržaška 25
City: SI-1000 Ljubljana
Country: Slovenia
Email: [email protected]
Studying postural sway using wearable sensors: fall prediction

A. Turcato and S. Ramat
Dipartimento di Informatica e Sistemistica, University of Pavia, Italy

Abstract— The study of postural sway during quiet stance has proved a useful approach for investigating the function of the balance system. Recent studies have suggested that providing information on postural sway to vestibular patients through various biofeedback devices may improve their balance awareness and therefore reduce their risk of falling. One drawback common to these approaches is related to timing: informing a patient about current balance conditions may not allow enough time to react and avoid a fall. Here we propose a new technique for predicting relevant balance-related variables based on the recording of inertial information on trunk movement using a wearable device. Our preliminary results show that such an approach may allow critical balance conditions to be predicted a few hundred milliseconds in advance, and is thus promising for the development of a fall prevention device.

Keywords— Postural stability, fall prediction, extrapolated center of mass XCoM.

I. INTRODUCTION
The analysis of postural sway during quiet stance has been shown to be a useful tool for studying the function of the human balance system. Based on the popular inverted pendulum model [1], such assessment, commonly referred to as static posturography, is performed by measuring the sway of the center of pressure (CP) of the body using a force platform. Several measures have been proposed in the literature to quantify the sway of the CP, and they may be used for discriminating normal from pathological performance in the control of balance [2-4]. Impaired control of balance may have different causes, yet dizziness related to vestibular dysfunction is a prominent cause of unsteadiness and may lead to falls in patients and in the elderly. Peripheral vestibular function worsens with aging [5], and in the population over 65 one subject in three experiences at least one fall per year [6]. These considerations highlight the need to go beyond the diagnosis of vestibular dizziness or impaired balance control, and call for systems able to monitor balance performance in conditions other than quiet stance. Several studies have recently proposed monitoring movement-related variables using wearable systems in order to provide feedback to the patient, with the aim of improving the ability to control balance. Two main approaches have been proposed thus far in the literature along these lines of research. One involves monitoring trunk sway using gyroscopes and providing auditory [7] or multisensory
[8-10] biofeedback using a head-mounted device. The other focuses on monitoring plantar pressures using sensitized insoles and provides biofeedback through a tactile actuator placed on the tongue [11,12]. Common to these approaches, though, is the fact that the biofeedback being provided is based on the current values of the monitored variables, i.e. the biofeedback tries to tell the patient what is currently happening to his balance based on the parameters being measured. Possible, albeit small, delays are added by the signal processing of the biofeedback system, and further delays are added by the patient's sensory system before the provided information can be taken into account to produce a postural response. Taken together, these aspects undermine the possibility of using these techniques as valuable tools to prevent the patient from falling when balance is lost. Based on these considerations, we decided to explore the feasibility of a new approach aimed at predicting the displacement of balance-related variables, since a fall may be defined as the condition in which the center of body mass (CM) exits the base of support (BoS, i.e. the area on the ground enclosed by the contour of the feet). To this end we chose a simple, wearable, inertial sensor measuring angular velocities and linear accelerations of the body, positioned on the back of our subjects at the level of L3, a rough approximation of the location of the body center of mass. The acquired data are then used off-line to train a predictive model of the displacement of the CP, the CM and the extrapolated CM (XCoM).
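For reference, the extrapolated center of mass is commonly defined in the inverted-pendulum literature (for example, by Hof and colleagues) as the CM position plus its velocity scaled by the pendulum eigenfrequency; in one horizontal direction:

```latex
% XCoM in one direction (x_CM: CM position, v_CM: CM velocity,
% l: height of the CM above the ground, g: gravitational acceleration)
\[
  \mathrm{XCoM} = x_{CM} + \frac{v_{CM}}{\omega_0},
  \qquad
  \omega_0 = \sqrt{g / l}
\]
```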
A. SUBJECTS & EXPERIMENTAL PARADIGM

Five young adult subjects (26±6 y.o.) were monitored while standing upright, for a total of eight registrations, each lasting 60 s. Subjects stood on a force platform fixed on a trolley with four mobile wheels, with eyes closed and arms straight along the body. Their feet overlapped a pair of footprints drawn on the platform so that the heels were 5 cm and the toes 15 cm apart. An inertial sensor was placed in the lumbar region, close to vertebra L3, whose height l (distance from the ground) was measured at the beginning of every experimental session and considered as an anthropometric measure. Three quiet standing series (the 1st, the 4th and the last) were registered. Five registrations com-
prised a set of perturbations, each consisting of a 10 cm rapid displacement of the trolley. Every set of 8 perturbations was designed to be unpredictable for our subjects, and the sequence was randomly changed for every trial. Perturbations were delivered every 5 seconds; four were directed along the anterior-posterior (AP) body axis and four along the medio-lateral (ML) one: two backward, two forward, two rightward and two leftward. This expedient was meant to create perturbed time series with similar directional sway content.

B. DATA ACQUISITION

Data were collected using a National Instruments data acquisition card connected to a single ADXL330K 3-axis accelerometer and sampled at 120 Hz. Ground reaction forces were measured with a commercially available, low-cost force platform: the Nintendo Wii Balance Board (BB). The BB has four piezoelectric sensors, one in each corner, and measures 51.1 x 31.6 cm. After establishing a Bluetooth connection to the PC, data were read at a frequency of 100 Hz and down-sampled to 50 Hz.

C. DATA ANALYSIS

Offline data processing began with time series resampling: data were interpolated and re-sampled to a common sampling interval, in order to allow a sample-to-sample synchronized comparison between accelerometric and force platform data. Raw acceleration signals (a1, a2, a3) were low-pass filtered (0.01 Hz) in order to extract the gravitational components; the resulting vector a0 represents the orientation of the sensor with respect to the gravitational field (i.e., the earth vertical). The gravity projections on the sensor axes were subtracted from the raw accelerations to obtain three net translational accelerations (at1, at2, at3), which were then low-pass filtered at 10 Hz to focus on the human content of body sway. All our filters were 2nd-order Butterworth low-pass filters. The net accelerations (at1, at2, at3) were finally rotated:

ax = at1 sin(α) + at3 sin(γ) sin(β)
ay = at2 sin(β) + at3 sin(γ) sin(α)
az = -at1 cos(α) - at2 cos(β) - at3 cos(γ)

to obtain the horizontal projections of the net accelerations. Acquisition, filtering and projection are the only steps needed to build the main inputs of our system: (ax, ay, az) and (α, β, γ) are our candidate regressors, the former representing the planar accelerations of the L3 sensor and the latter the trunk inclination angles with respect to the gravity vector. BB signals were processed to obtain the resultant of the ground reaction forces: instantaneous means were extracted from the single-sensor voltage values and multiplied by the BB width and length to obtain the CP position (X, Y).
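A minimal sketch of this preprocessing chain, assuming the raw 120 Hz accelerometer channels are available as a NumPy array. The filter orders and cut-off frequencies are those stated above; deriving the inclination angles from the normalized gravity components is our own illustrative interpretation, and all function names are hypothetical:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_ACC = 120.0  # accelerometer sampling rate, Hz

def lowpass(x, cutoff_hz, fs, order=2):
    """2nd-order Butterworth low-pass filter (zero-phase)."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def preprocess(a_raw, fs=FS_ACC):
    """a_raw: (N, 3) raw accelerations (a1, a2, a3).
    Returns the horizontal net accelerations (ax, ay) and the
    inclination angles (alpha, beta, gamma) of the sensor axes."""
    # gravitational components: very slow content of each channel
    a_grav = np.column_stack([lowpass(a_raw[:, i], 0.01, fs) for i in range(3)])
    # net translational accelerations, band-limited to body-sway content
    a_t = np.column_stack(
        [lowpass(a_raw[:, i] - a_grav[:, i], 10.0, fs) for i in range(3)])
    # one possible angle extraction: angle of each axis w.r.t. gravity
    g = a_grav / np.linalg.norm(a_grav, axis=1, keepdims=True)
    alpha, beta, gamma = (np.arccos(np.clip(g[:, i], -1, 1)) for i in range(3))
    # projection onto the horizontal plane (formulas from the text)
    ax = a_t[:, 0] * np.sin(alpha) + a_t[:, 2] * np.sin(gamma) * np.sin(beta)
    ay = a_t[:, 1] * np.sin(beta) + a_t[:, 2] * np.sin(gamma) * np.sin(alpha)
    return ax, ay, (alpha, beta, gamma)
```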
Low-pass filtering of the CP time series (2nd-order Butterworth LP filter, 0.4 Hz cut-off) provides a good approximation of the horizontal projection of the CM [13], called CP-filtered (CPf). This technique was used to build a reference measure for CM planar excursions, instead of resorting to more complicated techniques such as motion tracking technology. An inverted pendulum (with length l) was assumed as the biomechanical model underlying the body dynamics involved in our study. Such a simple model also justifies the assumption that the CM planar acceleration behavior is hidden in the CP dynamics, which have a higher-frequency content. Moreover, in [14] Hof gives a wider interpretation of postural stability, highlighting a condition for the dynamical recovery of balance. That work extrapolates the CM position (XCoM) from dynamical and biomechanical considerations:

XCoM = CM + (1/ω0) · dCM/dt, where ω0 = √(g/l)    (1)

D. PREDICTIVE MODEL LEARNING

Various black-box predictive models were trained with the elaborated variables (ax, ay, az, α, β, γ) in order to emulate the CP and CM (CPf) reference time series recorded by the BB. Two accelerations (ax, ay) and two angular velocities (β and α) were thus chosen as inputs of a black-box model with the aim of predicting three target variables: two positions, CP and CM (i.e., CPf), and one velocity, dCM/dt (vCM). AP and ML oscillations were considered separately, thus two independent models were identified to learn the X and Y components of the CM excursions. Every black-box predictive model considers n samples of each of the four input variables as regressors:

{pi(t - mT)}, where m = k+1, ..., k+n    (2)

Thus n indicates the extent of time during which inertial information is gathered from the L3 sensor, while k determines the target prediction time advance; i = 1, ..., 4 indexes the input variable.
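Under the same assumptions, the reference CM (CPf) and the XCoM of Eq. (1) can be derived from the CP trace along these lines; the 0.4 Hz cut-off, the pendulum length l and ω0 = √(g/l) come from the text, while the rest is an illustrative sketch:

```python
import numpy as np
from scipy.signal import butter, filtfilt

G = 9.81  # gravitational acceleration, m/s^2

def cp_to_cm(cp, fs=50.0, cutoff_hz=0.4):
    """CPf: low-pass filtered CP approximates the horizontal CM [13]."""
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, cp)

def xcom(cp, l, fs=50.0):
    """Extrapolated centre of mass, Eq. (1):
    XCoM = CM + (1/omega0) * dCM/dt, with omega0 = sqrt(g/l)."""
    cm = cp_to_cm(cp, fs)
    v_cm = np.gradient(cm, 1.0 / fs)  # numerical derivative of CM
    return cm + v_cm / np.sqrt(G / l)
```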
Fig. 1 Recorded (solid line), low-frequency (dashed-dotted line) and predicted (dotted line) CM displacements.
The larger k is, the further ahead in time the prediction is made. We considered n ranging from 3 to 30 and k from 3 to 40 samples,
which at 50 Hz sampling (20 ms sampling interval) corresponds to 60 to 600 ms (regressive time) and 60 to 800 ms (anticipatory time lead). Target signals consisted of 60 s CP, CM and vCM time series, adequately re-centered by removing the position offset computed on the whole registration. Model identification was performed by custom leave-one-out training: each model is relative to one recording from one subject, while the data from all the other trials act as the training set. As a result, five predictive models were estimated for each subject.

III. RESULTS
Since noise in the accelerometer data rules out the possibility of relying on the double-integration technique to compute displacements, the research was directed towards a black-box model able to replicate a target variable on a sample-by-sample basis. Such a model uses a limited number of data samples, which are combined to predict the desired target time series. In terms of single-variable approximation (considering RMSE as an eligible measure), the different tested models provided similar results; we therefore chose a least-mean-squares linear regression model for its ease of use. CM (CPf, extracted from the CP) and XCoM (computed according to Eq. 1, with dCM/dt predicted and l measured) were the only target parameters; we chose them for the role they play in postural stability and fall prediction.
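As a sketch of how such a predictor can be identified from the regressors of Eq. (2), assuming the four input signals and one target series are available as arrays sampled at 50 Hz (all variable names are illustrative):

```python
import numpy as np

def lagged_design(inputs, n, k):
    """inputs: (N, 4) regressor signals (ax, ay, beta, alpha).
    Builds rows [p_i(t - mT)], m = k+1, ..., k+n, as in Eq. (2)."""
    N = inputs.shape[0]
    t0 = k + n  # first sample with a complete lag window
    rows = [inputs[t - k - n:t - k, :].ravel() for t in range(t0, N)]
    return np.asarray(rows), t0

def fit_predictor(inputs, target, n, k):
    """Least-squares linear predictor of target(t) with a k*T time lead."""
    X, t0 = lagged_design(inputs, n, k)
    w, *_ = np.linalg.lstsq(X, target[t0:], rcond=None)
    return w

# e.g. a 400 ms anticipatory lead at 50 Hz corresponds to k = 20 samples:
# w = fit_predictor(train_inputs, train_xcom, n=15, k=20)
```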
Table 1 Performance of XCoM predictions

              200 ms        350 ms        500 ms        650 ms
No filter     11.1 (5.90)   11.7 (6.07)   13.1 (7.76)   14.8 (9.97)
HP filtered   7.95 (5.76)   8.37 (6.62)   10.3 (8.09)   12.2 (10.3)

Mean distance (and standard deviation) in mm between the predicted and real position of the XCoM for a representative trial. 1st row: fitting of raw data, including low frequencies. 2nd row: fitting of high-pass filtered data.
We considered various approaches to optimize the prediction: varying the prediction interval, the number of samples, the sampling frequency and the noise filtering. The best strategy arose from observing the fitting: appropriate filtering showed that errors were larger in the presence of very low frequency (<0.05 Hz) sway. Such low-frequency sway was found both during quiet stance and in perturbed trials, in both the ML and AP directions. This behavior cannot be perceived even with a 1-s-wide regressive window and is supposedly due to postural adjustment and weight redistribution.
Fig. 4: High-pass filtered target (solid) and prediction (dotted).
Fig. 2: CM high-frequency sway (solid) and prediction (dotted).

We found that the quality of our predictions significantly improved (see Table 1) when model learning and parameter estimation were limited to the high-frequency content of body sway.
The performance improvement related to the high-pass filtering of the target variables can be further appreciated in Figure 4. A significant improvement is also noticed in a custom-designed setup in which the BoS was divided into three concentric zones (with radii σ and 2σ) and the classification accuracy of the predictive system was tested. A mean decrease of 3 mm in the XCoM RMSE corresponds to an accuracy increase from 55% to 70%. However, although 15 percentage points is a clear improvement, the analysis also shows that figures based on matching individual data points (e.g. MSE or classification accuracy) are inadequate to describe the performance of the approach in relation to the predictive nature of the problem. Event-detecting figures, e.g. based on radial displacement thresholds, could lead to easier identification of losses of balance, as shown in Fig. 5. As expected, our prediction worsens as the required advance increases: the further the prediction is moved into the future, the worse its precision.
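One possible reading of this zone-based scoring, assuming predicted and reference planar XCoM positions centred within the BoS (the radii σ and 2σ follow the text; the implementation details are our own):

```python
import numpy as np

def zone(xy, sigma):
    """Classify planar positions into three concentric zones
    (inside sigma, between sigma and 2*sigma, beyond 2*sigma)."""
    return np.digitize(np.linalg.norm(xy, axis=1), [sigma, 2.0 * sigma])

def zone_accuracy(xcom_pred, xcom_true, sigma):
    """Fraction of samples whose predicted zone matches the real one."""
    return float(np.mean(zone(xcom_pred, sigma) == zone(xcom_true, sigma)))
```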
Fig. 5: Predicted (dotted) against real (solid) displacements of the XCoM.

IV. CONCLUSIONS
We have investigated the possibility of predicting the displacement of different parameters related to the control of balance (i.e. CP, CM and XCoM) based on the recording of the linear accelerations and angular velocities of the trunk using a wearable accelerometer positioned at the level of L3. Our work has produced two main findings: 1) the high-frequency components of the fitted parameters may be efficiently predicted with our approach; this was true for all three tested target parameters. 2) Body sway presents an additional, low-frequency component, which causes a slow displacement of the average CM position and which was not captured by our prediction model. In summary, we have shown that our approach allows the higher-frequency displacements of the body center of mass and of its extrapolated counterpart (XCoM) to be reliably predicted as much as 400 ms in advance. Thus, considering that a fall occurs when the CM exits the base of support, the suggested approach is a potentially valuable tool for developing a new, wearable system for fall prevention. In fact, once our algorithm is integrated with a biofeedback device, it should allow warning the subject of an oncoming fall with sufficient advance to react to the loss of balance and regain stability.
REFERENCES
[1] L. M. Nashner, "Sensory feedback in human posture control," Sc.D. thesis, MIT, Cambridge, MA, 1970.
[2] T. E. Prieto, J. B. Myklebust, R. G. Hoffmann, E. G. Lovett, and B. M. Myklebust, "Measures of postural steadiness: differences between healthy young and elderly adults," IEEE Trans. Biomed. Eng., vol. 43, no. 9, pp. 956-966, Sept. 1996.
[3] L. Chiari, A. Cappello, D. Lenzi, and C. U. Della, "An improved technique for the extraction of stochastic parameters from stabilograms," Gait Posture, vol. 12, no. 3, pp. 225-234, Dec. 2000.
[4] J. J. Collins and C. J. De Luca, "Open-loop and closed-loop control of posture: a random-walk analysis of center-of-pressure trajectories," Exp. Brain Res., vol. 95, no. 2, pp. 308-318, 1993.
[5] R. W. Baloh, J. Enrietto, K. M. Jacobson, and A. Lin, "Age-related changes in vestibular function: a longitudinal study," Ann. N. Y. Acad. Sci., vol. 942, pp. 210-219, Oct. 2001.
[6] R. O. Dominguez and A. M. Bronstein, "Assessment of unexplained falls and gait unsteadiness: the impact of age," Otolaryngol. Clin. North Am., vol. 33, no. 3, pp. 637-657, June 2000.
[7] J. Hegeman, F. Honegger, M. Kupper, and J. H. Allum, "The balance control of bilateral peripheral vestibular loss subjects and its improvement with auditory prosthetic feedback," J. Vestib. Res., vol. 15, no. 2, pp. 109-117, 2005.
[8] C. G. Horlings, M. G. Carpenter, F. Honegger, and J. H. Allum, "Vestibular and proprioceptive contributions to human balance corrections: aiding these with prosthetic feedback," Ann. N. Y. Acad. Sci., vol. 1164, pp. 1-12, May 2009.
[9] L. J. Janssen, L. L. Verhoeff, C. G. Horlings, and J. H. Allum, "Directional effects of biofeedback on trunk sway during gait tasks in healthy young subjects," Gait Posture, vol. 29, no. 4, pp. 575-581, June 2009.
[10] L. L. Verhoeff, C. G. Horlings, L. J. Janssen, S. A. Bridenbaugh, and J. H. Allum, "Effects of biofeedback on trunk sway during dual tasking in the healthy young and elderly," Gait Posture, vol. 30, no. 1, pp. 76-81, July 2009.
[11] N. Vuillerme, O. Chenu, N. Pinsault, A. Fleury, J. Demongeot, and Y. Payan, "Can a plantar pressure-based tongue-placed electrotactile biofeedback improve postural control under altered vestibular and neck proprioceptive conditions?," Neuroscience, vol. 155, no. 1, pp. 291-296, July 2008.
[12] N. Vuillerme, O. Chenu, N. Pinsault, A. Moreau-Gaudry, A. Fleury, J. Demongeot, and Y. Payan, "Pressure sensor-based tongue-placed electrotactile biofeedback for balance improvement - biomedical application to prevent pressure sores formation and falls," Conf. Proc. IEEE Eng. Med. Biol. Soc., vol. 2007, pp. 6114-6117, 2007.
[13] P. G. Morasso, G. Spada, and R. Capra, "Computing the COM from the COP in postural sway movements," Human Movement Science, vol. 18, no. 6, pp. 759-767, Dec. 1999.
[14] A. L. Hof, M. G. Gazendam, and W. E. Sinke, "The condition for dynamic stability," J. Biomech., vol. 38, no. 1, pp. 1-8, Jan. 2005.
Author: Stefano Ramat
Institute: Università degli Studi di Pavia
Street: Via Ferrata, 1
City: Pavia
Country: Italy
Email: [email protected]
From Biomedical Research to Spin-Off Companies for the Health Care Market

O.A. Lindahl1,2, B. Andersson1,4, R. Lundström1,3, and K. Ramser1,2

1 Centre for Biomedical Engineering and Physics, Umeå University, Umeå, Sweden
2 Department of Computer Science and Electrical Engineering, Luleå University of Technology, Luleå, Sweden
3 Department of Biomedical Engineering and Informatics, Umeå University Hospital, Umeå, Sweden
4 Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden
Abstract— Through research at the Centre for Biomedical Engineering and Physics (CMTF), seven new companies have been established in Northern Sweden. These activities have generated growth both in academia, at the universities, and in industry in Northern Sweden. Cooperation was built up between the 23 research projects and more than 20 established companies in the field of biomedical engineering. A researcher-owned company for the business development of the research results from the CMTF, CMTF Business Development Co Ltd, has been established and launched its first spin-off company in the autumn of 2009. It has also increased the interest in commercialisation and entrepreneurship among the scientists in the centre. So far, seven spin-off companies have resulted from the CMTF research.
Keywords— Spin-off companies, Biomedical engineering, business development, innovation, science centre.
I. INTRODUCTION

Commercialisation of scientific research results is well established in Northern Sweden. The Centre for Biomedical Engineering and Physics (CMTF) was formed in order to create an organisation for triple-helix cooperation between scientific research, the biomedical industry and health care. The goals were to maintain intense co-operation with the health care industry and to create a good milieu for growing new innovations and starting spin-off companies, to the benefit of the patient. The economic basis for the centre was funded through local support from regional foundations and the EU structural funds. In total, the CMTF turned over 6 million Euro during the years 2000-2007, and has a budget of 7.2 million Euro for 2008-2011 (Figure 1). Since the year 2007 the two northernmost universities in Sweden, Umeå University (UmU) and Luleå University of Technology (LTU), have joined forces, combining the strong technical research at LTU with the strong medical/biomedical research at UmU.
Fig. 1 Pie chart showing the funding of research at CMTF 2008-2011 (total 72 million SEK). NLL = County Council Norrbotten, CDH = Centre for distance-spanning health care, VR = Swedish Research Council, LTU = Luleå University of Technology, UmU = Umeå University, VLL = County Council Västerbotten, SLU = Swedish University of Agricultural Sciences, LK = Luleå City, UK = Umeå City, BD = County Administrative Board Norrbotten, AC = County Administrative Board Västerbotten, EU Obj. 2 = EU Structural Fund Objective 2.

The joint work at CMTF is organised in 23 research projects and one joint management. A joint company, CMTF Business Development (CMTF BD) Co. Ltd., owned by the scientific leaders and the local innovation system represented by Uminova Innovation Co. Ltd. and LTU Holding Co. Ltd., was inaugurated in 2007 to help with the business development of the scientific research results from the centre. The aim of the CMTF establishment was to create a strong, sustainable and virtual organisation for scientific research and business development in Northern Sweden. A further aim was to form a model for how to develop new viable biomedical spin-off companies from the research results in the centre.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 624–626, 2010. www.springerlink.com
From Biomedical Research to Spin-Off Companies for the Health Care Market
II. METHODS
The CMTF was organised with a board of directors appointed by the universities. The board was chosen to give the CMTF a stable leadership and to ensure good cooperation between the two counties and the two universities, as well as with industry and health care. The broad expertise collected in the board guaranteed high competency for making decisions on industrial as well as scientific matters as representatives of the users. Before joining the CMTF, all 23 research projects were evaluated by the board against three criteria: scientific excellence, clinical and industrial relevance, and scientific research management. Approved projects could use the CMTF logotype and refer to CMTF as their research milieu. About 150 researchers were engaged within CMTF at the start of 2010. A majority of the projects in CMTF had both scientific and industrial cooperation with international partners, both outside the EU (e.g. Japan and the USA) and within the EU (e.g. Norway, Finland and Italy). As a next step, the research company CMTF BD was established in 2007. The company was owned by the scientists/project managers from CMTF in order to form an organisation that could be a part of the existing innovation system but with a special emphasis on launching biomedical engineering research results on the commercial market. The scientific leaders signed over the future IPR to the company through an agreement. For identified business ideas, a contract was signed with the scientists about the sharing of future profit from the innovation, a so-called incentive agreement. The CMTF research and development was funded by several organisations (Figure 1). The biggest funder was the EU structural fund, Objective 2, with 36 million SEK. The CMTF BD was funded with support from private means from the scientific leaders, Uminova Innovation Co Ltd, LTU Holding Co. Ltd, the County Administrative Boards in Västerbotten and Norrbotten, and from consulting activities.
III. RESULTS
At the moment, 7 companies have been established from research results from CMTF (Table 1). Five of them were based on patents and one is the CMTF Business Development Co. Ltd. (CMTF BD). Since its establishment in 2007, the CMTF BD has started one new company for the health care market.

Eleven patents have been filed from the research in CMTF and 7 innovations are currently under business verification through CMTF BD and the innovation system in Umeå and Luleå.

Eighteen workshops have been arranged together with industry and established spin-off companies, with a mean of 60 participants, except for one workshop, the Nordic Baltic Conference on Biomedical Engineering and Physics in the year 2005 (NBC2005), which had about 200 participants. CMTF has built up an industrial network for cooperation and currently about 15 national and international companies are involved in the CMTF projects. The cooperation with investment companies is intense in order to finance new company start-ups.
IV. DISCUSSION

The CMTF BD model has been successful in implementing the research results on the health care market. Six new product companies have been established and several new innovations are under development at CMTF BD. The CMTF BD is established within the innovation system in Luleå and Umeå and its strategy is to be a branch- and market-oriented complement to the existing organisations, e.g. Uminova Innovation Co Ltd [1] and LTU Innovation [2] (Figure 2). The aim is to minimize the so-called "time to market".
Table 1 Results from ten years of work with the CMTF: the number of spin-off companies and related activities.

Year/Activity           2000-2006    2000-2009
Spin-off companies      5            7 (+2)
Senior scientists       15           25 (+10)
Sc. publications        100          150 (+50)
PhD/Lic exams           15           25 (+10)
Graduation works        50           75 (+25)
Projects                10           23 (+13)
Patents                 6            11 (+5)
Workshops/conferences   10           18 (+8)
New employments         24           34 (+10)
Industry cooperation    10           15 (+5)
CMTF has also been very successful in fundraising (Fig. 1) and has received strong support from the EU structural fund and from local and national funds. This funding has been of utmost importance for building up the centre organisation and thus for the establishment of new spin-off companies.
Fig. 2 The business model for CMTF BD Co Ltd (flow: ideas → evaluation → verification → development → incentive contract → business idea → product company → commercial phase, with co-operation from the innovation system and CMTF BD as a support in the system). Ideas from the CMTF, from the hospitals (VLL, NLL) and from the universities (UmU, LTU, SLU) are captured by CMTF BD together with the innovation system. The ideas are developed into a product and in the commercial phase CMTF BD establishes a spin-off company.
The growth environment established by CMTF BD [3] has also become a place for the CMTF scientific leaders to meet and discuss the business development of research results. The company has contributed to the encouragement of entrepreneurship. As can be seen from the results (Table 1), there are several spin-off companies from the CMTF [4]. For example, Bioresonator Co Ltd was originally developed from a patent on a new eye-pressure device using resonance sensor technology. This company now has a wider business idea, to develop new sensor products and devices for better diagnosis in general, and works close to CMTF. Another spin-off company is Likvor Co Ltd, which has developed equipment for pressure-flow measurements in the spinal column. A third example is DDD North Co Ltd, which develops sensor systems for dermatologic diagnosis.
V. CONCLUSION

The CMTF research network and CMTF BD have stimulated the initiation of spin-off companies in the area of biomedical engineering in Northern Sweden. This has resulted in increased growth of biomedical engineering activities both in academia and in industry in Northern Sweden.

ACKNOWLEDGMENT

The study was supported by the EU structural fund Objective 2, Norra Norrland.
REFERENCES
1. Uminova Innovation at http://www.uminova.se
2. LTU Innovation at http://www.ltu.se
3. www.cmtfab.se
4. www.cmtf.umu.se
Corresponding author:
Author: Olof Lindahl
Institute: Department of Computer Science and Electrical Engineering
Street: Luleå University of Technology
City: Luleå
Country: Sweden
Email: [email protected]
A Continuous-Time Dynamical Model for the Vestibular Nucleus

A. Korodi, V. Ceregan, T.L. Dragomir, and A. Codrean

University "Politehnica" of Timisoara, Department of Automation and Applied Informatics, Timisoara, Romania

Abstract— The Vestibular Nucleus (VN) nervous center is involved in several reflex mechanisms, among them the vestibular-sympathetic reflex. No model of the VN exists in the literature. In the current paper, based on experimental frequency characteristics, a continuous-time dynamical model of the VN is developed. In order to successfully approximate its behavior, the parameters of the proposed model are determined through a genetic algorithm (GA).

Keywords— vestibular nucleus, frequency characteristics, genetic algorithm, vestibular-sympathetic reflex.
I. INTRODUCTION

Among the many different nervous centers of the Central Nervous System (CNS), the vestibular nucleus (part of the vestibular system) has received an increasing amount of attention due to its involvement in several reflex mechanisms - from the vestibular-spinal reflex, to the vestibular-ocular reflex, and most recently to the vestibular-sympathetic reflex. Numerous reports in the literature have confirmed the role of the vestibular-sympathetic reflex in the nervous control of the cardiovascular system (CVS), alongside the well known baroreflex mechanism (e.g. [1], [2]). The interactions between these two reflex mechanisms occur at the level of the Medulla Oblongata (MO) and, from an informational point of view, they are illustrated in the block diagram of Fig. 1. The nervous center involved in the baroreflex is the Nucleus Tractus Solitarius (NTS), while the nervous center relevant for the vestibular-sympathetic reflex is the Vestibular Nucleus (VN). The NTS receives information from the baroreceptors (nbr signal) and from the cardiopulmonary receptors (ncp signal), while the VN receives information from the vestibular receptors (VR) (nvr signal). The VR in turn sense a change in head acceleration - the a signal. The outputs of the VN and the outputs of the NTS converge at the level of the Ventrolateral Medulla (VLM), which transmits an output signal ns to the sympathetic system. The NTS also transmits a signal to the Nucleus Ambiguus/Dorsal Motor Nucleus of the vagus (NA/DMNX), which in turn influences the parasympathetic system (np signal). Next, the sympathetic and parasympathetic systems transmit the nervous control signals to the CVS. All the mentioned nervous centers are influenced, in certain situations and to varying degrees, by Higher Nervous Centers.
Fig. 1 Block diagram of the nervous centers in the MO involved in the nervous control of the CVS (taken from [5])
While the actions of the baroreflex mechanisms on the CVS are quite well understood through models validated with experimental data (e.g. [3]), a quantitative understanding of the vestibular-sympathetic reflex mechanism is still lacking and no model currently exists in the literature. In order to obtain a more complete model of the nervous control of the CVS that could then be used in different clinical scenarios (e.g. orthostatic stress, head-up tilt test), a quantitative model of the vestibular-sympathetic reflex must be developed. Because several models of the VR currently exist in the literature, the main task is to develop a model of the VN; after that, the interactions between the output of the VN and the output of the NTS at the level of the VLM should be determined. S. du Lac et al. have obtained nonparametric models of the VN neurons through experimental frequency characteristics ([4]). This class of nonparametric models has been generalized in [5] through interpolation. With these results as a starting point, the current paper focuses on developing a method for obtaining parametric models of the VN from nonparametric models using genetic algorithms.

II. THE STRUCTURE OF THE GENERAL MODEL

The informational transfer features of the channel svr → nvn were illustrated in [4] through a family of Bode diagrams (nonparametric models) depending on the parameter smfr. A more complete characterization was obtained in [5] by using a pair of interpolators, tuned according to the diagrams taken from [4]. Basically, the interpolators are able to produce Bode diagrams for all values of the parameter smfr in the
II. THE STRUCTURE OF THE GENERAL MODEL The informational transfer features of the channel svr → nvn were illustrated in [4] through a family of some Bode diagrams (nonparametric models) depending on the parameter smfr. A more complete characterization was obtained in [5] by using a pair of interpolators, tuned according to the diagrams taken from [4]. Basically, the interpolators are able to produce Bode Diagrams for all values of the parameter smfr in
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 627–630, 2010. www.springerlink.com
628
A. Korodi et al.
range [10, 56] spikes/sec. In order to provide a parametric mathematical model for the channel svr → nvn of the VN, the Bode diagrams generated through interpolation are converted into continuous-time dynamical systems. The Bode diagrams marked with (1) in Fig. 2 were determined in [5]. They correspond to smfr = 56 spikes/sec. It is important to observe that both characteristics follow a specific pattern that reveals a combination of low-pass and high-pass filters. Generally, the combination depends on the mean firing rate smfr.
Fig. 3 The general structure of the model

P_B(s) / P_A(s) = (b3·s^3 + b2·s^2 + b1·s + b0) / (s^4 + a3·s^3 + a2·s^2 + a1·s + a0)    (2)
Fig. 2 The original frequency characteristics, the characteristics of the initial individual and the characteristics obtained through GA for smfr = 56

Following these observations, it was desired to design a general model which would emulate the dynamic behaviors for all mean firing rates without modifying its structure. After a quasi-empirical search, we obtained the linear model of Fig. 3 in the frequency domain and, in the time domain, the corresponding input-output dependence (1):

nvn^(4)(t) + a3·nvn^(3)(t) + b3·Kr·nvn^(3)(t - τ1 - τ2) + a2·nvn^(2)(t) + b2·Kr·nvn^(2)(t - τ1 - τ2) + a1·nvn^(1)(t) + b1·Kr·nvn^(1)(t - τ1 - τ2) + a0·nvn(t) + b0·Kr·nvn(t - τ1 - τ2) = b3·Kd·svr^(3)(t - τ1) + b2·Kd·svr^(2)(t - τ1) + b1·Kd·svr^(1)(t - τ1) + b0·Kd·svr(t - τ1)    (1)
On the direct path, the structure from Fig. 3 has a fourth-order filter with the transfer function (2) (8 parameters) and a time-delay block with the parameter τd. A second time-delay block (τr) lies on the feedback path. Besides these 10 parameters, two other gains, Kr and Kd, were necessary.
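Rearranging (1) in the Laplace domain gives the closed-loop transfer function H(s) = Kd·e^(-s·τ1)·P_B(s) / (P_A(s) + Kr·e^(-s·(τ1+τ2))·P_B(s)), whose Bode diagrams can be evaluated numerically. A sketch, under the assumption that τ1 corresponds to τd and τ2 to τr:

```python
import numpy as np

def vn_freq_response(params, omega):
    """Frequency response of the VN model derived from Eq. (1).
    params: [b3, b2, b1, b0, tau_d, tau_r, Kr, Kd, a3, a2, a1, a0],
    i.e. the chromosome layout of Eq. (3). Returns (dB, degrees)."""
    b3, b2, b1, b0, td, tr, Kr, Kd, a3, a2, a1, a0 = params
    s = 1j * omega
    PB = b3 * s**3 + b2 * s**2 + b1 * s + b0
    PA = s**4 + a3 * s**3 + a2 * s**2 + a1 * s + a0
    H = Kd * np.exp(-s * td) * PB / (PA + Kr * np.exp(-s * (td + tr)) * PB)
    return 20 * np.log10(np.abs(H)), np.degrees(np.unwrap(np.angle(H)))

# example: Bode curves for the smfr = 56 spikes/sec row of Table 1
omega = np.logspace(-1, 3, 400)
mag_db, phase_deg = vn_freq_response(
    [-5.937, 24.443, 19557, 2354.1, 0, 0, 16.7, 1305.7,
     -0.004, 1840.2, 157580, -157], omega)
```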
To determine the values of these 12 parameters, the genetic algorithm (GA) approach from Section III was used. We wanted to get as close as possible to the aforementioned diagrams (1) from Fig. 2, and we started with an initial individual (chromosome) corresponding to the diagram-pair (2) from the same figure. Finally, with the GA, a set of 12 parameters was obtained, i.e. a mathematical model of form (1), to which the diagrams (3) correspond. Considering that the shape of the Bode diagrams changes continuously with the value of smfr, the result for smfr = 56 spikes/sec was then used to initialize the calculation of all 12 parameters of the model (1) for a smaller value of smfr. In this manner, step by step, the lowest level smfr = 10 spikes/sec was reached. By doing so, the following advantages were foreseen: shorter search duration, increased probability of finding an acceptable solution, a smaller number of generations needed to obtain acceptable solutions, and a smaller solution space.
III. THE GENETIC ALGORITHM

The conceived GA is used to obtain a chromosome (with 12 genes corresponding to the 12 parameters mentioned above)

I = [b3 b2 b1 b0 τd τr Kr Kd a3 a2 a1 a0],    (3)
as good as possible, for the Bode diagram-pair generated by interpolation for each given smfr in the range [10, 56]. Naturally, in the best case this means that the Bode diagrams determined for the system (1), with the parameters taken from such a chromosome, reproduce the given diagrams. To evaluate an individual (chromosome), a function with two terms was used:

feval = fe1 + fe2.    (4)
IFMBE Proceedings Vol. 29
A Continuous-Time Dynamical Model for the Vestibular Nucleus
where fe1 results from evaluating the amplitude-frequency characteristics and fe2 from evaluating the phase-frequency characteristics. First, the evaluation algorithm calculates the differences between the given characteristic, obtained through interpolation, and the one obtained through the model (1), at 19 points (frequencies). Then, each difference between the characteristics (at all of the 19 analyzed points) is compared with two limits and, as a result, discrete values (0, 1 or 7) are added to fe1 and fe2. By using discrete values for feval, the algorithm avoids getting stuck in local maxima. When the desired feval (as close as possible to 0) is obtained for a certain number of individuals, the best individual is chosen by decreasing the lower comparison limits. The flowchart in Fig. 4 shows the calculation steps of the GA.
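A sketch of this discrete evaluation scheme; the 19 comparison points and the {0, 1, 7} penalty values follow the description above, while the two comparison limits are not specified in the text, so the defaults below are placeholders:

```python
import numpy as np

def f_eval(ref_mag, mod_mag, ref_phase, mod_phase, lim_lo=1.0, lim_hi=5.0):
    """Two-term discrete fitness feval = fe1 + fe2, Eq. (4).
    Each of the 19 per-frequency differences contributes 0, 1 or 7
    depending on which comparison limit it exceeds."""
    def term(ref, mod):
        diff = np.abs(np.asarray(ref) - np.asarray(mod))
        score = np.zeros_like(diff)
        score[diff > lim_lo] = 1
        score[diff > lim_hi] = 7
        return score.sum()
    return term(ref_mag, mod_mag) + term(ref_phase, mod_phase)
```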
The decision block "Solution?" is provided to stop the calculation. Two situations were taken into account: i) feval < 4; ii) the generation index g exceeds a maximum limit gmax. Some important details are listed below. The Initial Population: The gene values of the 100 individuals in the initial population are assigned randomly within an interval of ±p% around the gene values of the initial individual. We chose p = 2 for 10 individuals and p = 50 for the others. Selection 1: The selection procedure consists of two operations: an evaluation of the individuals, and a sorting operation which retains only the fittest N = 100 of them. Let Ci, i = 1..N, be the retained individuals. Obviously, for the first generation all N individuals are retained. Crossover: The crossover procedure operates on the Ci, i = 1..N, individuals mentioned above and creates 3N new individuals. The structure of the new intermediate population is as follows: 10 descendants are obtained through a crossover between the parent pairs (C2i+1, C2i+2), i = 0..4, with the crossover point at the 4th gene; 40 individuals are obtained by setting the crossover point at the 6th gene, operating on the parent pairs (C2i+1, C2i+2), i = 5..24; 50 individuals are obtained through a crossover between the parent pairs (C2i+1, C2i+2), i = 25..49, with a random crossover point; 100 individuals are created from the parent pairs (Ci, Ci+j+1), i = 1..10, j = 1..5, with the crossover point set randomly for every pair; 100 individuals are created from the parent pairs (Ci, Ci+50), i = 1..50, with the crossover point set randomly for every pair.
Fig. 4 The flowchart of the genetic algorithm

Each time, the first generation is generated around an initial individual. The initial individual for smfr = 56 spikes/sec was the chromosome Ind0 = [-120.540 -5962.67 53699.62 18933.36 0.04 0.04 20 1850 -23.58 -55886.54 1041073.00 246375.30], determined through a quasi-empirical investigation. To generate the initial individual for all other values of smfr, the step-by-step approach mentioned above was used. After initialization, the calculation loop contains, as usual, crossover, selection and mutation procedures. The size of the initial generation is N = 100 individuals. For the next generation, during the crossover and mutation operations the size becomes 3N, and after the selection operation it is reduced back to N.
This newly created population is sent to the Selection 2 procedure, where it is filtered and the best N are kept for further usage. Let Mi, i = 1..N, be the retained individuals. Mutation: The mutation procedure uses the Mi, i = 1..N, individuals mentioned above and creates 3N new individuals. The structure of the new intermediate population is: 10 descendants are obtained through a mutation of the 12th gene, assigning a randomly chosen value from an interval of ±p010% around the original one; 90 individuals are created through a mutation of a random gene, assigning a randomly chosen value from an interval of ±p090% around the original one; 100 descendants are obtained through a mutation of a random gene, assigning a new value in an interval of ±p100% around the original one;
100 individuals are obtained through a mutation of two randomly chosen genes. The random selection is realized once for the first 50 descendants and once for the other 50. In both situations the newly assigned value belongs to an interval of ±p200% around the original one.
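As an illustration, one of these mutation operators might look as follows; the percentages (p010, p090, p100, p200) are constants of the algorithm that the text does not quantify, so p is a placeholder parameter:

```python
import numpy as np

rng = np.random.default_rng()

def mutate_random_gene(parent, p):
    """Descendant with one randomly chosen gene re-drawn uniformly
    within +/- p % of its current value."""
    child = np.array(parent, dtype=float)
    g = rng.integers(child.size)
    child[g] *= 1.0 + rng.uniform(-p, p) / 100.0
    return child

# e.g. the 90 descendants mutated at a random gene within +/- p090 %:
# offspring = [mutate_random_gene(m, p=p090) for m in selected[:90]]
```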
It has to be mentioned that p010
IV. RESULTS

The parameters of the general model (1) for twelve values of smfr covering the whole range [10, 56] spikes/sec were determined by applying the GA. The values of the parameters for six situations are shown in Table 1. For smfr = 56 spikes/sec and smfr = 16 spikes/sec the Bode diagrams are plotted in Figures 2 and 5, respectively. As can be seen, the curves (3) obtained with the model (1) and the parameters of the table represent a good approximation of the curves (1).
Table 1 Set of parameters for six values of smfr

smfr   b3       b2       b1       b0       τd     τr
10     1.5636   108.6    9003.9   2210.9   0.02   0
16     1.4915   231.6    22117    4815.3   0.02   0
24     1.9091   391.79   53807    12625    0.01   0.01
32     2.1935   120      74292    5137.9   0.02   0.01
44     -1.287   51.062   72954    6265.5   0.02   0.01
56     -5.937   24.443   19557    2354.1   0      0

smfr   Kr     Kd       a3       a2       a1       a0
10     13.4   1992.3   -0.2     -8207    126310   35360
16     15.1   2045.7   -0.475   -14685   345150   59605
24     17.9   1933.2   -0.773   -12538   681340   45811
32     20     1609.2   -0.661   -11233   405150   36803
44     17.9   1372     -0.425   -2384    408975   13366
56     16.7   1305.7   -0.004   1840.2   157580   -157
V. CONCLUSIONS

The present work continues the development of a mathematical model for the channel svr → nvn of the Vestibular Nucleus, begun in the previous paper [5]. The linear continuous-time dynamical system (1), i.e. the system with the structure from Fig. 3, is associated to the Bode diagrams obtained in [5]. The 12 parameters that appear are obtained using a GA. The Bode characteristics of the model (1) obtained in this manner approximate sufficiently well the initial Bode diagrams taken from [5]. All the results were computed using the Matlab/Simulink environment.
REFERENCES
1. J.R. Carter, A.R. Chester, "Sympathetic responses to vestibular activation in humans", Am. J. Physiol. Regul. Integr. Comp. Physiol., 294: R681-R688, 2008.
2. A. Radtke, K. Popov, A.M. Bronstein, M.A. Gresty, "Vestibulo-autonomic control in man: Short- and long-latency vestibular effects on cardiovascular function", Journal of Vestibular Research, Vol. 13, No. 1, pp. 25-37, 2003.
3. M. Olufsen, J. Ottesen, H. Tran, L. Lipsitz, and V. Novak, "Modeling baroreflex regulation of heart rate during orthostatic stress", Am. J. Physiol. Regul. Integrative Comp. Physiol., 291: R1355-R1368, 2006.
4. S. du Lac, S.G. Lisberger, "Cellular Processing of Temporal Information in Medial Vestibular Nucleus Neurons", The Journal of Neuroscience, pp. 8000-8010, December 1995.
5. A. Codrean, V. Ceregan, T.-L. Dragomir, A. Korodi, "Interpolative frequency characteristics generators for the Vestibular Nucleus Activity", IMECS Proc., Hong Kong, 2010, in press.
Fig. 5 The original frequency characteristics, the characteristics of the initial individual and the characteristics obtained through GA for smfr=16
Author: Adrian Korodi
Institute: University "Politehnica" of Timisoara, Faculty of Automation and Computers, Department of Automation and Applied Informatics
Street: Bd. Vasile Parvan no. 2
City: Timisoara
Country: Romania
Email: [email protected]
Fast Optical Signal in the Prefrontal Cortex Correlates with EEG

A.V. Medvedev1, J.M. Kainerstorfer2,3, S.V. Borisov4 and J. VanMeter1

1 Center for Functional and Molecular Imaging, Georgetown University Medical Center, Washington, DC
2 Dept. of Physics, University of Vienna, Vienna, Austria
3 Section on Analytical and Functional Biophotonics, PPITS, NICHD, National Institutes of Health, Bethesda, MD
4 Department of Neurology and Brain Imaging Center, Goethe University, Frankfurt, Germany

Abstract— Near-infrared spectroscopy (NIRS) is a developing technology which provides a cost-effective imaging tool for noninvasive functional brain imaging complementary to the more traditional fMRI and PET techniques. An attractive feature of NIRS is that it can potentially measure both brain hemodynamics (slow signal) and neuronal activity (fast optical signal, FOS). FOS is presumed to be generated by changes in light scatter as a result of electrophysiological activity at neuronal membranes. Because of its relatively low signal-to-noise ratio (SNR), it is still debatable how reliably FOS can be measured from the human scalp. We recorded FOS in combination with high density EEG and compared the temporal profiles of both signals during a Go-NoGo task (fast presentation of visual scenes with animals as targets) in 11 right-handed subjects. Optical probes were placed bilaterally over the prefrontal areas. Cardiac and movement artifacts were removed using Independent Component Analysis (ICA). The correlation coefficient in the best correlated FOS-EEG pairs of independent components, correlated pairwise over all trials, reached ~0.1 and was highly significant (p < 10^-8). Several typical components of the event-related optical signal (EROS) could be identified within the grand average response, which were similar to the ERP components. The most robust and stable optical response developed at t = 200-300 ms as a negative wave due to a decrease in signal amplitude ('optical N200', oN200). oN200 also showed a significant difference between targets and nontargets. EROS was well localized and greater in the right hemisphere in the majority of subjects. We demonstrate for the first time a significant correlation between electrical and optical signals recorded from the scalp, and that at least some FOS components 'reflect' electrical brain processes directly. Compared to EEG, FOS is better localized and can therefore aid in the localization and cortical mapping of cognitive processes.

Keywords— Fast optical signal (FOS), Event-related optical signal (EROS), High density EEG, Independent Component Analysis, Object recognition.
I. INTRODUCTION
Optical methods have been used to explore brain function since 1945, when it was discovered that neuronal activity causes changes in the optical properties of nervous tissue [1]. This finding has been confirmed by many research groups using brain slices [2] and intact cortical tissue [3]. Optical methods can use contrast agents (extrinsic signal) or measure the light detected from a source (e.g., a laser diode) propagating through the tissue (intrinsic signal). The optical signal that depends on changes in the optical properties of nervous cells has a high temporal resolution (1-10 ms) and is much faster than the more traditional hemodynamic signal (5-10 s), which depends on the absorption of light by hemoglobin. It has been suggested that the fast optical signal (FOS) 'reflects' fluctuations in membrane potential associated with neuronal activity, which lead to changes in the refractive properties of those membranes and therefore cause changes in light scattering [4]. Another possible mechanism influencing light scattering may be related to changes in cell volume. During the last 10 years there have been several attempts to record the fast optical signal noninvasively from human subjects [5-10]. The results of these studies, however, have been controversial because some studies failed to demonstrate FOS reliably [10, 11]. Importantly, Steinbrink et al. (2005) have emphasized the problem of motion artifacts in optical recordings and cautioned that even small stimulus-correlated movement artifacts may potentially mimic fast optical signals [10]. The fast optical signal has a relatively low signal-to-noise ratio (SNR) and it remains debatable whether it can be reliably detected from the human scalp. In this study, we recorded optical signals along with high density EEG; this approach allowed us to directly compare signals from both modalities and verify FOS against EEG. In the text below, we refer to the raw and preprocessed electrical and optical data as 'EEG' and 'FOS', respectively, and to the event-related electrical and optical data (after trial averaging) as 'event-related potential' (ERP) and 'event-related optical signal' (EROS).

II. MATERIALS AND METHODS
A. Participants and recording probes

Experiments (approved by the Georgetown University Institutional Review Board) were performed on eleven right-handed individuals (five females; mean age 23) after signing the consent form. All participants had normal (or cor-
rected to normal) vision and performed a battery of behavioral tests including the assessment of IQ and handedness. The EEG sensor net had 128 electrodes (Electrical Geodesics, Inc. (EGI), Eugene, OR). On top of the EEG net, two optical probes were placed (Fig. 1). The ends of the optical fibers transmitting light to/from the head ('optodes') were arranged on a supporting flexible plastic base (15 x 8 cm) holding 3 source optodes and 8 detector optodes. Two such probes were placed bilaterally, one on each side of the head, to cover the inferior and middle frontal gyri (IFG and MFG), using the EGI electrodes and the anatomical landmarks of the international 10-20 EEG system as reference points.
Fig. 1 Positions of optical probes with sources (red) and detectors (blue) on subject’s head
B. Experimental paradigms

The Go-NoGo task was based on a paradigm introduced by Thorpe et al. [12] to study the rapid detection of familiar objects (animals). Black-and-white pictures of natural scenes were presented on a computer LCD monitor at a viewing distance of 75 cm and an angular size of 10°. Stimulus onset asynchrony (SOA) was varied randomly from 900 to 1700 ms. Stimulus presentation was organized in 12 blocks, each containing 100 pictures. Subjects were instructed to detect whether an animal (target) was present in the picture displayed for 26 ms and, in response to targets only, to press two buttons on a button box using both thumbs as quickly and as accurately as possible.

C. Data collection and analysis

Optical signals were recorded with a continuous-wave NIRS instrument, CW5 (TechEn, Milford, MA). Each probe accommodated 11 optodes, with three dual-wavelength (690 and 830 nm) laser sources and eight detectors for each hemisphere (Fig. 1). EEG data were simultaneously recorded using a high density EGI instrument. The sampling rate (200 Hz), data preprocessing and artifact removal using Independent Component Analysis (ICA) were the same for both the FOS and EEG signals. Event-related responses were calculated in 'z-score' units relative to the prestimulus baseline (200 ms) and were considered significant if at least three consecutive time bins showed significant deviation from zero within the time window of 100-500 ms in the right inferior anterior channel (channel s4-d10 in Fig. 1, where the locus of activation was observed in all subjects).
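A sketch of this z-scoring and significance rule, assuming single-channel epochs time-locked to stimulus onset. The 200 ms baseline, the 100-500 ms window and the three-consecutive-bins criterion come from the text; whether the z-score is computed on the trial average or per trial is not specified, so the version below is one possible reading:

```python
import numpy as np

FS = 200.0              # sampling rate, Hz
N_BASE = int(0.2 * FS)  # 200 ms prestimulus baseline, in samples

def event_related_z(epochs):
    """epochs: (n_trials, n_samples), stimulus onset at sample N_BASE.
    Trial-average response in z-score units w.r.t. the baseline."""
    avg = epochs.mean(axis=0)
    mu, sd = avg[:N_BASE].mean(), avg[:N_BASE].std()
    return (avg - mu) / sd

def is_significant(z, z_crit=1.96, win=(0.1, 0.5), run=3):
    """True if at least `run` consecutive bins inside the 100-500 ms
    post-stimulus window deviate from zero beyond z_crit."""
    i0, i1 = N_BASE + int(win[0] * FS), N_BASE + int(win[1] * FS)
    hits = np.abs(z[i0:i1]) > z_crit
    return bool((np.convolve(hits, np.ones(run), "valid") >= run).any())
```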
Fig. 2 Representative segments of EEG (black) and FOS (red). A – the raw data. B – EEG and FOS independent components containing cardiac artifact. C, D – the best correlated FOS-EEG pairs from two subjects. Horizontal lines in D indicate segments with a good match between individual waves of FOS and EEG
III. RESULTS
Raw FOS and EEG data were not correlated (Fig. 2A). After ICA decomposition and artifact identification and removal (Fig. 2B), as described previously [13], all optical and EEG independent components (ICs) were correlated pairwise over all trials and the best correlated pair was identified for each subject. The correlation coefficient for such pairs reached ~0.1 and was highly significant (p < 10^-8) in every subject. Although not high in absolute terms (~0.1), the highly significant correlation was the result of a good 'match' between the corresponding components of FOS and EEG, which was observed over all 1200 trials (Fig. 2C-D). After trial averaging, statistically significant event-related responses were found in all subjects. Typically, only a few independent components (out of a total of 28 for each wavelength of FOS and of 127 ICs for EEG)
showed significant responses in each subject, as demonstrated in Fig. 3. The temporal profile of the optical ICs showing a response was very similar to the profile of the corresponding best correlated electrical ICs (Fig. 3, bottom).
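A sketch of the IC-level comparison described here, using FastICA from scikit-learn as one possible ICA implementation; the component counts follow the text, while the correlation bookkeeping is illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

def best_correlated_pair(fos, eeg, n_fos=28, n_eeg=127):
    """fos, eeg: (n_samples, n_channels) data concatenated over trials.
    Returns indices and correlation of the most correlated
    FOS/EEG independent-component pair."""
    ics_f = FastICA(n_components=n_fos, random_state=0).fit_transform(fos)
    ics_e = FastICA(n_components=n_eeg, random_state=0).fit_transform(eeg)
    zf = (ics_f - ics_f.mean(0)) / ics_f.std(0)
    ze = (ics_e - ics_e.mean(0)) / ics_e.std(0)
    r = zf.T @ ze / zf.shape[0]          # (n_fos, n_eeg) correlations
    i, j = np.unravel_index(np.abs(r).argmax(), r.shape)
    return i, j, r[i, j]
```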
Fig. 3 Representative examples of event-related responses (produced by trial averaging of independent components of EEG and FOS) from one subject. Best optical responses are shown in bold. Note the good correspondence between the temporal profiles of the best correlated ERP and EROS ICs (bottom panels), as well as the similar responses for targets and nontargets observed in both ERP and EROS.
Fig. 4 Grand average event-related responses for optical (EROS) and electrical (ERP) signals. Red asterisks indicate significant differences between targets and nontargets (p < 0.05)
After removal of the artifact components, the FOS and EEG signals were restored from the remaining ICs. The artifact-free signals were averaged over all trials and subjects. Several typical components of EROS could be identified within the grand average response, which were similar to the ERP components (Fig. 4). Thus, after an initial increase in signal amplitude (optical positive wave) at t = 100 ms (oP100), the most robust and stable component developed at t = 200-300 ms as a negative wave due to a decrease in signal amplitude (oN200), which was followed by the later components oP400, oN500 and oN600 (Fig. 4; red asterisks denote significant 'target > nontarget' differences; p < 0.05). EROS was well localized (observed in one or a few anterior channels) and greater in the right hemisphere in the majority of subjects. Comparison between the grand average EROS and ERP showed good temporal correspondence between the individual waves of both signals (Fig. 4). The earliest significant difference between target- and nontarget-related responses occurred at t = 200-250 ms, i.e., the N200 wave (for ERP) and the oN200 wave (for EROS) were most sensitive to the target detection process. The behavioral response (button pressing) occurred at t = 419 ± 45 ms and was followed by the late waves developing at t = 500-600 ms, which also showed significant differences between targets and nontargets in both ERP and EROS (Fig. 4).
Differential (target > nontarget) scalp maps of optical signals showed initial activation in the right middle frontal cortex (140 ms), then the right inferior frontal cortex (210 ms), followed by co-activation of the left inferior frontal cortex (250 ms) (Fig. 5).

IV. DISCUSSION
We demonstrate for the first time a significant correlation between electrical and optical signals recorded from the scalp, and that at least some FOS components are likely to 'reflect' electrical brain processes directly. Using Independent Component Analysis for artifact removal, we were able to improve the SNR of the optical signals and reliably record event-related optical signals in all subjects. Moreover, a highly significant correlation was found between independent components of the optical and electrical signals. The Go-NoGo paradigm used in this study allowed us to directly compare target- and nontarget-related trials and use the latter as a 'control' without any possible contamination by motor artifacts. Given the very similar waveforms of the target- and
nontarget-related responses (Figs. 3-4), we conclude that the responses observed in this study are not produced by motor artifacts. In the majority of previous studies of fast optical signals the investigators have used simpler tasks (e.g., sensorimotor stimulation or an oddball task). During oddball tasks, when targets are rare (10-20% of all stimuli), only responses to targets have been reported (see, for example, [14]). In the current study, significant responses were observed for both targets and nontargets, which was probably due to the equal presence of targets and nontargets (50% each) in the stimulus set.
Fig. 5 Temporal evolution of scalp maps of a grand average differential EROS (targets versus nontargets). Note initial activation in the right prefrontal cortex at 140-210 ms followed by co-activation of the homologous area in the left hemisphere at 230-250 ms
Importantly, optical responses showed excellent temporal correspondence to the ERPs and several components were identified within the EROS closely matching the ERP components. The greatest ‘target vs. nontarget’ difference was observed in the oN200 optical component and this corresponded well to a similar ‘target vs. nontarget’ sensitivity observed in the N200 ERP component. Involvement of the prefrontal cortex (PFC) at the early phases (200-250 ms) of object recognition is debatable. Our results provide supportive evidence for top-down influences from the PFC onto the ascending processing route [15]. Compared to ERP, optical signal is better localized and therefore can aid in localization and cortical mapping of cognitive processes.
ACKNOWLEDGMENT

Supported by NIH/NIBIB/NEI grant EB006589 to A.M. and DARPA grant HB1582-05-C-0045 to J.V.

REFERENCES
1. Hill DK, Keynes R (1945) Opacity changes in stimulated nerve. J Physiol 108:278-281
2. Lipton P (1973) Effects of membrane depolarization on light scattering by cerebral cortical slices. J Physiol 231:365-383
3. Rector DM, Poe GR, Kristensen MP et al. (1997) Light scattering changes follow evoked potentials from hippocampal Schaeffer collateral stimulation. J Neurophysiol 78:1707-1713
4. Stepnoski RA, LaPorta A, Raccuia-Behling F et al. (1991) Noninvasive detection of changes in membrane potential in cultured neurons by light scattering. Proc Natl Acad Sci U S A 88:9382-9386
5. Gratton G, Fabiani M, Corballis PM et al. (1997) Fast and localized event-related optical signals (EROS) in the human occipital cortex: comparisons with the visual evoked potential and fMRI. Neuroimage 6:168-180
6. Steinbrink J, Kohl M, Obrig H et al. (2000) Somatosensory evoked fast optical intensity changes detected non-invasively in the adult human head. Neurosci Lett 291:105-108
7. Gratton G, Fabiani M (2003) The event-related optical signal (EROS) in visual cortex: replicability, consistency, localization, and resolution. Psychophysiology 40:561-571
8. Morren G, Wolf U, Lemmerling P et al. (2004) Detection of fast neuronal signals in the motor cortex from functional near infrared spectroscopy measurements using independent component analysis. Med Biol Eng Comput 42:92-99
9. Franceschini MA, Boas DA (2004) Noninvasive measurement of neuronal activity with near-infrared optical imaging. Neuroimage 21:372-386
10. Steinbrink J, Kempf FC, Villringer A et al. (2005) The fast optical signal - robust or elusive when non-invasively measured in the human adult? Neuroimage 26:996-1008
11. Radhakrishnan H, Vanduffel W, Deng HP et al. (2009) Fast optical signal not detected in awake behaving monkeys. Neuroimage 45:410-419
12. Thorpe S, Fize D, Marlot C (1996) Speed of processing in the human visual system. Nature 381:520-522
13. Medvedev AV, Kainerstorfer J, Borisov SV et al. (2008) Event-related fast optical signal in a rapid object recognition task: improving detection by the Independent Component Analysis. Brain Res 1236:145-158
14. Low KA, Leaver E, Kramer AF et al. (2006) Fast optical imaging of frontal cortex during active and passive oddball tasks. Psychophysiology 43:127-136
15. Bar M, Kassam KS, Ghuman AS et al. (2006) Top-down facilitation of visual recognition. Proc Natl Acad Sci U S A 103:449-454

Author: A.V. Medvedev
Institute: Center for Functional and Molecular Imaging, Georgetown University Medical Center
Street: 3900 Reservoir Rd, NW, Preclinical Science Bldg, LM-14
City: Washington, DC
Country: USA
Email: [email protected]
Using Social Semantic Web Technologies in Public Health: A Prototype Epidemiological Semantic Wiki

C. Bratsas1, A. Tzalavra1, V. Vescoukis2, and P. Bamidis1

1 Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
2 School of Surveying Engineering, National Technical University of Athens, Athens, Greece
Abstract— Public health information systems could play a really important role in improving the quality of human life. Such systems require a user-friendly environment that enables public organizations, health centers and even citizens to acquire knowledge in an organized way. That means that content can be accessed, evaluated, organized and reused with ease by anyone concerned. Social Software (i.e., wikis, weblogs) as well as Semantic Web technologies could be of major importance in such public health information systems. Social Software technologies support the collaboration of people anytime, anywhere, and enable users to choose their own processes. Semantic Web technologies, on the other hand, enable information to be structured for easy retrieval, reuse and exchange between different systems and tools. Semantic wikis are a really powerful tool that combines Social Software and Semantic Web technologies. In this article a prototype Epidemiological Semantic Wiki is described. Moreover, the functionality of this wiki is illustrated through a use case.

Keywords— Social Software, Semantic Web, Semantic Wiki, public health, information systems.
I. INTRODUCTION

The obvious and severe dangers of a global epidemiological outbreak have led developed countries to establish centers for the surveillance of infectious diseases. These centers include special departments responsible for the early detection of infectious diseases, as well as for investigating and combating them. It has long been recognized that infectious diseases spread worldwide without respecting borders, and that combating them requires the collaboration of all countries at both the local and the global level. To cope with these continuously increasing needs, the healthcare systems of these countries need to be reformed and restructured. Web 2.0 technologies have helped significantly in this direction. Social networking, wikis and blogs are only some of the applications that are clearly relevant to public health information systems. Web 2.0 emphasizes participation and, in many cases, provides content by turning input data into visible and useful results in medicine [1, 2]. The seven principles on which it is based are: 1) participation, 2) decentralization, 3) standards, 4) openness, 5) modularity, 6) identity and 7) user control. Despite all these advantages, Web 2.0 has a serious disadvantage: it is based only on the syntactic encoding of content. As a result, web pages based on Web 2.0 technologies can be understood only by human beings, and automatic processing is not possible. A web that could process content requires better content description, as well as algorithms that make computing machines smarter [3]. This is what Sir Tim Berners-Lee envisaged when he created the solution that is today called the "Semantic Web" [4].

Traditional wikis are web pages that allow users to edit their content quickly and easily, without having to register. Wikis are a helpful tool for web developers, since they provide a friendly environment for users as well as many collaborative features. Semantic wikis extend traditional wikis with technologies such as RDF (Resource Description Framework) [5], OWL (Ontology Web Language) [6] and LinkedData [7]. They inherit all the basic characteristics of traditional wikis, and additionally allow the existing navigational links to be annotated with symbols that describe their meaning, by separating them into categories according to their type. In this way, the information described in these wikis becomes accessible to machines and not only to human beings. Moreover, semantic wikis can alter the way web pages are displayed according to the semantic annotations of each page. Semantic search respects the context as well as the content, and semantic wikis enable users to easily pose semantic queries. Furthermore, in semantic wikis the presentation of web pages is enriched through easy access to relevant related information. Semantic Media Wiki [8] is currently the most popular semantic wiki, mainly because it makes semantic technologies easily accessible to all users, whether or not they are experts. Semantic Media Wiki (SMW) is an extension of MediaWiki. SMW supports WYSIWYG ("what you see is what you get") editing of page content and metadata, as well as
page tagging. Unlike traditional wikis, which contain only text that computers can neither understand nor evaluate, SMW allows a wiki to function as a collaborative database through semantic annotation. Some of the benefits of SMW are: automatically generated lists, visual display of information, improved data structure, information searching, inter-language consistency, external reuse, and the integration and mash-up of data. This paper describes an architecture for using social semantic web technologies in public health. Specifically, a prototype epidemiological semantic wiki is presented, and a test case illustrating how the proposed semantic wiki can be used to provide information about the H1N1 flu is described.
II. DESCRIBING THE APPROACH

The Epidemiological Semantic Wiki offers a simple user interface for creating wiki pages for public health, including metadata that follows W3C standards. Figure 1 illustrates the Epidemiological Semantic Wiki architecture, which consists of the following modules:

a) Semantic Media Wiki, as a powerful tool for sharing data semantically.

b) The following Semantic Media Wiki extensions:
• Semantic Forms [9], to add, edit and query data using forms.
• Halo extension [10], to facilitate the use of the semantic wiki through advanced semantic annotation, auto-completion, a graphical query interface and an ontology browser.
• Semantic Drilldown [11], to browse data using categories and filters on semantic properties.
• Semantic Google Maps [12], to view and edit coordinate data.
• Google Geocoder [13], to convert addresses to geographic coordinates.
• SimpleFeed [14], to display an RSS feed.
• SemanticSignup [15], to add semantic data to user pages upon registration.
• Semantic Results Format [16], to use special graphs within the wiki.

c) An ontology editor tool, to provide the semantic framework for the content of the Epidemiological Semantic Wiki pages.
Fig. 1 The architecture of Semantic Media Wiki (an application server running Semantic MediaWiki with its extensions and a client application, a database server hosting the Semantic MediaWiki database, and an ontology editor producing OWL ontologies)

III. THE ONTOLOGY USED

Each page of the Epidemiological Semantic Wiki corresponds to an instance of the epidemiological ontology. Thus, wiki pages can play many roles: individual elements; categories, which are used to classify the individual elements; properties, which are used to relate pages (object properties); and types, which are used to distinguish the property types (datatype properties). Semantic Media Wiki provides a number of types that can be used with properties. The most commonly used are "String" (for character data), "Number" (for numeric data), "Geographic coordinate" (for locations) and "Page" (for links to other pages).
Fig. 2 Overview of the ontology

Figure 2 depicts the main classes of the ontology used in building the Public Health Semantic Wiki. More specifically, six main classes are defined: Diseases, Hospitals, Location, Symptoms, LocationDiseaseRisk and Users. The Diseases class (structured according to the ICD-10 [17]) semantically describes the registered diseases with the following properties: causativeAgent (the disease's causative agent), diseaseName (its name), hasSymptoms (its symptoms), hasLocation (the locations in which it appears, as instances of the Location class), hasPrevention (how it can be prevented), hasSource (the source of the information about the disease), susceptibleGroup (its susceptible group), transmissionRoute (its transmission route) and hasTreatment (its treatment). The Hospitals class semantically describes public hospitals worldwide and is annotated with the following properties: hospitalName (its name), hasAddress (its address), hospitalCity (the city in which it is located), hasTelephone (its telephone number) and hasLocation (the hospital's location, as an instance of the Location class). The Location class semantically describes all the locations where infectious diseases have been registered worldwide, with the following properties: hasLocationName (the location's name) and hasMap (the spatial data on a map). The Symptoms class describes all of the symptoms mentioned in the wiki; the only property it is annotated with is
hasDescription (a short description of each symptom). The LocationDiseaseRisk class semantically describes the disease risk in every geographical area in which a disease appears, according to the semantic wiki, with the following properties: hasLocation (where the disease appears, as an instance of the Location class), hasDisease (the epidemiological disease, as an instance of the Diseases class), hasMortalityRate (the disease's mortality rate) and hasMorbidity (the disease's morbidity rate). The Users class semantically describes the user profiles with the following properties: hasFirstName (the user's first name), hasLastName (the user's last name) and hasLocation (the user's location, as an instance of the Location class).
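To make the class and property structure concrete, the following minimal sketch shows how one Diseases instance and one LocationDiseaseRisk instance could be encoded as RDF triples in Python with the rdflib library. The namespace URI, the resource names and the literal values are hypothetical illustrations, not part of the deployed wiki.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EPI = Namespace("http://example.org/epi#")   # hypothetical ontology namespace

g = Graph()
g.bind("epi", EPI)

# A Location instance
greece = EPI.Greece
g.add((greece, RDF.type, EPI.Location))
g.add((greece, EPI.hasLocationName, Literal("Greece")))

# A Diseases instance with a subset of its properties
h1n1 = EPI.H1N1
g.add((h1n1, RDF.type, EPI.Diseases))
g.add((h1n1, EPI.diseaseName, Literal("Influenza A (H1N1)")))
g.add((h1n1, EPI.hasLocation, greece))                 # object property

# A LocationDiseaseRisk instance linking disease, location and rate
risk = EPI.H1N1_Greece_Risk
g.add((risk, RDF.type, EPI.LocationDiseaseRisk))
g.add((risk, EPI.hasDisease, h1n1))
g.add((risk, EPI.hasLocation, greece))
g.add((risk, EPI.hasMorbidity, Literal(0.45, datatype=XSD.double)))  # datatype property

print(g.serialize(format="turtle"))
```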
IV. EXAMPLE TEST CASE

To illustrate the utility of the system, a test case is presented. Assume that a public organization, in the course of research on public health, wants to be informed about the H1N1 flu. From the main page of the Epidemiological Semantic Wiki (Fig. 3) the organization can: 1) find in the list of registered diseases the one it is concerned with, H1N1 in our case, 2) see the geographical distribution of H1N1, 3) read the latest H1N1 news via RSS (Really Simple Syndication), and 4) see a pie chart representing the total morbidity of all infectious diseases, based on the ontology instances.

Fig. 3 The main page of the wiki

The organization can navigate through all the features of the registered diseases, or semantically classify them according to their symptoms or geographic location. Furthermore, information about all the public hospitals is provided. Moreover, semantic queries (e.g. retrieving all the diseases, and their locations, whose morbidity rate is greater than or equal to 0.4) can easily be composed (Fig. 4).

Fig. 4 A Semantic Query
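The morbidity query from the test case can be phrased as a SPARQL query. The sketch below assumes the toy graph g from the previous example and filters LocationDiseaseRisk instances by the 0.4 threshold mentioned above.

```python
# Assumes the toy graph g built in the previous sketch.
query = """
PREFIX epi: <http://example.org/epi#>
SELECT ?name ?locName ?rate WHERE {
    ?risk a epi:LocationDiseaseRisk ;
          epi:hasDisease ?d ;
          epi:hasLocation ?loc ;
          epi:hasMorbidity ?rate .
    ?d epi:diseaseName ?name .
    ?loc epi:hasLocationName ?locName .
    FILTER (?rate >= 0.4)
}
"""
for row in g.query(query):
    print(row.name, row.locName, row.rate)
```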
V. CONCLUSIONS

Looking ahead to the remainder of the 21st century, with infectious diseases such as malaria, AIDS and H1N1 in exacerbation, information on public health matters is of major importance. With this in mind, a prototype Epidemiological Semantic Wiki has been implemented using Social Semantic Web technologies. The potential of using the Epidemiological Semantic Wiki in public health appears to be significant. People can remotely access well-structured, valid information at any time, while health organizations can edit it as conditions change (disease features can be updated, hospitals can be added, etc.). Furthermore, the system is user friendly and easy to manage. Semantic Media Wiki is still under development, and numerous new extensions planned for the near future will further enhance its capabilities.

REFERENCES

1. Kaldoudi E, Dovrolis N, Konstantinidis ST, Bamidis PD (2009) Social networking for learning object repurposing in medical education. The Journal on Information Technology in Healthcare 7:233-243
2. Bratsas C, Kapsas G, Konstantinidis S, Kotsouridis G, Bamidis P (2009) A semantic wiki within Moodle for Greek medical education. In: Proc. 22nd IEEE International Symposium on Computer-Based Medical Systems, IEEE Comp Soc Press, Albuquerque, New Mexico, USA, 3-4 August 2009, pp 1-6
3. Bratsas C, Koutkias V, Kaimakamis E, Bamidis PD, Pangalos GI, Maglaveras N (2007) KnowBaSICS-M: An ontology-based system for semantic management of medical problems and computerised algorithmic solutions. Comput Meth Progr Biomed 88:39-51
4. Berners-Lee T, Hendler J, Lassila O (2001) The Semantic Web. Scientific American, May 2001
5. Resource Description Framework at http://www.w3.org/RDF
6. Ontology Web Language at http://www.w3.org/TR/owl-features/
7. LinkedData at http://linkeddata.org/
8. Semantic Media Wiki at http://semantic-mediawiki.org
9. Semantic Forms at http://www.mediawiki.org/wiki/Extension:Semantic_Forms
10. Halo extension at http://www.mediawiki.org/wiki/Extension:Halo_Extension
11. Semantic Drilldown at http://www.mediawiki.org/wiki/Extension:Semantic_Drilldown
12. Semantic Google Maps at http://www.mediawiki.org/wiki/Extension:Semantic_Google_Maps
13. Google Geocoder at http://www.mediawiki.org/wiki/Extension:Google_Geocoder
14. SimpleFeed at http://www.mediawiki.org/wiki/Extension:SimpleFeed
15. SemanticSignup at http://www.mediawiki.org/wiki/Extension:SemanticSignup
16. Semantic Results Format at http://www.mediawiki.org/wiki/Extension:Semantic_Result_Formats
17. International Classification of Diseases at http://apps.who.int/classifications/apps/icd/icd10online/
Author: Charalampos Bratsas
Institute: Laboratory of Medical Informatics, School of Medicine, Aristotle University of Thessaloniki
Street: P.O. Box 323, 54124
City: Thessaloniki
Country: Greece
Email: [email protected]
Patellofemoral Contact during Simulated Weight Bearing Squat Movement: A Cadaveric Study

A. Van Haver1,4, J. Quintelier1,4, M. De Beule2, P. Verdonk3, F. Almqvist3, and P. De Baets4

1 University College Ghent / Mechanical Engineering Department, Ghent, Belgium
2 Ghent University / Department of Civil Engineering, bioMMeda – IBiTech, Ghent, Belgium
3 University Hospital Ghent / Orthopedic Department, Ghent, Belgium
4 Ghent University / Department of Mechanical Construction and Production, Ghent, Belgium
Abstract— The Ghent Knee Rig was built in 2006 for studying the biomechanical behavior of post-mortem human knees. To validate this test rig, the patellofemoral contact pressures and areas were investigated in 3 post-mortem knees tested under the same circumstances, and compared to results from the literature. To load the quadriceps, the vastus intermedius and rectus femoris were separated and clamped together. The pulling cable was aligned with the shaft of the femur to keep the Q-angle at physiological values. A pressure film was inserted in the patellofemoral joint to measure the patellofemoral contact area and pressure. The results follow the generally accepted trends of patellofemoral contact during knee flexion and extension: when the patella enters the trochlear groove at approximately 20 degrees of knee flexion, the intra-articular contact pressure and area start to build up, and the contact area on the patella shifts from distal to proximal. Although working with cadaveric specimens remains a simulation of the in vivo situation, with well-known limitations, the test rig shows good repeatability and reliability. The next stage of this research project is a comparison of normal with pathological knees.

Keywords— knee biomechanics, patellofemoral joint, contact pressure, cadaveric study.
I. INTRODUCTION

The purpose of this study was to validate the Ghent Knee Rig, developed in 2006 by the Department of Mechanical Construction and Production at Ghent University [1]. Our main interest is the patellofemoral contact area and pressure, since disturbed patellofemoral contact is often associated with anterior knee pain. The patella plays an essential role in increasing the mechanical advantage of the quadriceps mechanism. The main biomechanical function of the patella consists of increasing the moment arm of the quadriceps by shifting the quadriceps tendon anteriorly [2, 3]; as a result, the knee extension torque increases during extension. Due to the insertion of the patellar tendon on the tibial tuberosity, a great amount of force is necessary to displace the rather small weight of the foot, leading to high compressive forces in the patellofemoral joint. In closed kinetic chain movements, such as squatting, the quadriceps force rises sharply towards 90°; the contact area also increases, but not in proportion, so the contact stress in the patellofemoral joint rises with deeper knee flexion. Research on patellofemoral biomechanics often focuses on patellar kinematics, extensor forces, and patellofemoral contact pressure and contact area. In this study, cadaveric knees were mounted in the Ghent Knee Rig to simulate a weight bearing squat. During this dynamic flexion-extension movement, the patellofemoral contact areas and pressures were continuously monitored.
II. MATERIALS AND METHOD

A. Specimens and Specimen Preparation

Three post-mortem knees were obtained from the anatomy lab of Ghent University and were tested in the Ghent Knee Rig. The mean age was 90 years (± 7.4). All knees were embalmed with a mixture of formol, phenol and thymol and were considered to be macroscopically intact; radiographic images did not reveal any bony abnormalities. Each knee was amputated through the tibia and femur at approximately 20 cm from the apex of the patella. For mounting purposes, a complete dissection of all structures surrounding the bones was done at approximately 8 cm from the free end of the tibia, fibula and femur. The bones were placed in an aluminium cylinder and fixed with a polyester resin. At the knee joint, care was taken to protect the retinacula, the medial and lateral collateral ligaments, and the quadriceps and patellar tendons from damage. The quadriceps was then further dissected into its four parts. The vastus intermedius (VI) and rectus femoris (RF) were separated from the femur and their tendons were clamped together at approximately 5 cm from the proximal pole of the patella. The clamping system (Figure 1), based on a polymer toothed rack, was designed especially for this purpose.
Fig. 1 The clamping system for the rectus femoris and vastus intermedius
Applying the definition given by Insall et al. [4] and Minkowitz et al. [5], the pulling cable was aligned with the shaft of the femur; in this way the Q-angle was kept at physiological values.

B. Test Set-Up

For the kinematic tests, the Department of Mechanical Construction and Production developed a test rig based on the Oxford Knee Rig [6]. The set-up of the Ghent Knee Rig is shown in Figure 2.
Fig. 2a & 2b Experimental set-up of the Ghent Knee Rig

Starting from a solid table, two vertical bars are responsible for the smooth gliding of a bridge construction, allowing the knee to flex and extend smoothly. The total weight of the movable part of the construction is approximately 30 kg, which simulates the body weight of a person during a squat movement. The linear electric motor unit is mounted on the bridge construction; the force is transmitted to the quadriceps tendon through a steel cable and two pulleys. As a result, tension in the quadriceps tendon builds up until the knee starts to extend. The knee flexion angle, the forces applied to the quadriceps tendon and the axial forces on the tibia are measured continuously, together with the rotations of the tibia during flexion and extension. The test rig is built in such a way that it allows the six degrees of freedom of the knee joint. In order to measure the contact area of the patellofemoral joint, a 5051 I-scan pressure film (Tekscan Inc., South Boston, MA, USA), especially designed for intra-articular measurements, was inserted in the patellofemoral joint through a lateral incision, as described by Ostermeier et al. [7].

III. RESULTS

For the three selected knees, 8 successful tests in total were obtained, each consisting of 5 flexion-extension cycles.

In each test, the knees expressed a similar pattern of contact pressure distribution during the flexion and extension cycles. Pressure increased as the knee flexed and decreased as the knee extended. The mean patellofemoral contact area measured in this study ranged from 68.8 (± 8) mm2 at 20° to 336.5 (± 64.7) mm2 at 60° of knee flexion. The mean contact pressure ranged from 0.7 (± 0.15) MPa at 20° flexion to 5.5 (± 1) MPa at 60°. The contact areas on the patellar facets shifted from distal to proximal during knee flexion and from proximal to distal during extension, as visualized in Figure 3. Another remarkable observation is the difference between the flexion and extension phases.

Fig. 3 Patellofemoral contact area for 3 different knee angles (20°, 40°, 60°) and for the flexion and extension phase separately

Statistical analysis of the contact area and pressure was done for knee flexion angles of 20°, 30°, 40°, 50° and 60°, and for the flexion and extension phases separately. Out of the 5 x 2 conditions, 3 did not have a normal distribution, so a Wilcoxon signed-rank test was performed; a significant difference between the two movement phases was found for the contact area (p < 0.001, z = -4.341) as well as for the contact pressure (p < 0.001, z = -4.627), with higher values for the extension phase compared to the flexion phase. This difference can be observed in Figure 4.

Fig. 4 Mean patellofemoral contact area and contact pressure ± 2SE for 5 different knee angles
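For reference, the phase comparison reported above corresponds to a standard Wilcoxon signed-rank test on paired samples; a minimal sketch in Python/SciPy is shown below, with placeholder arrays standing in for the measured contact areas.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired samples: contact area (mm^2) at matched knee
# angles, one value per cycle, for the two movement phases.
area_flexion = np.array([70.1, 150.3, 210.8, 280.4, 330.9])
area_extension = np.array([75.6, 162.2, 228.5, 301.7, 348.2])

# Paired, non-parametric comparison of the two phases.
stat, p_value = wilcoxon(area_flexion, area_extension)
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```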
To reveal the predictors of the contact area and pressure, a linear regression was performed with the knee angle, the flexion-extension phase and the quadriceps force as independent variables. For the contact area as well as the contact pressure, the multiple regression models with these 3 independent variables fit the data very well, with R² = 0.88 and R² = 0.85, respectively. However, care should be taken in the interpretation of these results, since the quadriceps force is highly correlated with the knee angle (p < 0.001) as well as with the movement phase (flexion - extension) (p < 0.001). Mean values for the applied quadriceps force are reported in Table 1.

Table 1 Mean quadriceps force for different knee angles and for the flexion - extension phase separately
This collinearity does not reduce the predictive power or reliability of the model as a whole; it only affects the reliability of the individual predictors. The model cannot provide valid results about any individual predictor, or about which predictors are redundant with others.
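The regression and the collinearity issue can be illustrated with a short sketch (Python/NumPy); the predictor and response values below are synthetic, chosen only to mimic the qualitative relationships described above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic predictors: knee angle (deg), phase (0 = flexion, 1 = extension)
angle = np.array([20, 30, 40, 50, 60, 20, 30, 40, 50, 60], dtype=float)
phase = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=float)
# Quadriceps force made to correlate strongly with angle and phase.
force = 10.0 * angle + 80.0 * phase + rng.normal(0.0, 20.0, angle.size)
area = 6.0 * angle + 25.0 * phase + 0.05 * force   # synthetic response

X = np.column_stack([np.ones_like(angle), angle, phase, force])
coef, *_ = np.linalg.lstsq(X, area, rcond=None)    # multiple linear regression
print("coefficients:", coef)

# The collinearity that destabilizes the individual coefficients:
print("corr(angle, force):", np.corrcoef(angle, force)[0, 1])
print("corr(phase, force):", np.corrcoef(phase, force)[0, 1])
```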
IV. DISCUSSION

The experiments with the Ghent Knee Rig differ from previous experiments because the current knee rig simulates a dynamic weight bearing squat, which is a very demanding exercise for the knee joint. Previous work on cadaver specimens focused either on dynamic measurements without simulation of a body weight, or on static measurements with a certain load. The assumption was made that a more complete picture of pressure alterations could be obtained by a dynamic simulation of a squat movement. Technical limitations, however, have led to some restrictions. The RF and VI are the only two muscle parts loaded in this test rig; this choice was based on a study by Elias et al., whose EMG research on the quadriceps muscle revealed that approximately 70% of the force necessary for knee extension is generated by the RF and the VI [8]. Nevertheless, this approach remains a simplified representation of the complex activity pattern of the quadriceps muscle. More and more authors tend to load all four or even six parts of the quadriceps in order to obtain a more realistic representation of the quadriceps force. Besides the challenge of
implementing very heavy extra linear motors on the bridge construction, it is not feasible in cadaver studies to realistically simulate the physiological interactions between different muscles, such as co-contraction, the muscle activation sequence and the interaction between antagonists and agonists. In order to measure the intra-articular contact pressures and contact areas, an I-scan pressure-sensitive film was inserted in the patellofemoral joint. Although the applicability of the I-scan system in the patellofemoral joint has been proven, careful interpretation is warranted. To insert the sensor, the knee joint was opened through a lateral incision, which might have an effect on the pressure distribution. Ostermeier and colleagues studied the effect of a lateral release on eight cadaver knees: no medialisation of the patella due to the lateral release was found, and they observed no reduction of lateral instability of the patella, especially in extension. However, they state that there might be a relieving effect on the lateral patellar facet in knee flexion [7]. Therefore, the influence of the sensor placement through lateral release on the measurements remains uncertain. Despite this potential influence on the knee joint, the results follow the generally accepted pattern of pressure distribution during knee flexion and extension. When the patella engages in the femoral trochlear groove at approximately 20 degrees of knee flexion, the intra-articular pressure and contact area start to build up, and increase further with deeper knee bending. Early in vitro studies by Hehne, Huberti et al. and Goodfellow et al. have reported on this phenomenon [9, 10]. More recently, Luyckx et al. found that a maximum contact area is obtained at ninety degrees of knee flexion [11]. Besides the increase in contact area, a shift in contact area from distal to proximal on the patella was also observed during knee flexion. The results presented here show a similar pattern. As shown in Figure 4, the contact area increases with the flexion angle. Furthermore, Figure 3 clearly demonstrates that the contact area moves from distal to proximal during knee flexion. Besides these well-described patterns from the literature, the present results also revealed a difference in contact area and intra-articular pressure between the upward and downward squatting phases: lower contact pressures and smaller contact areas were observed during the downward phase. This difference needs to be further investigated; in the upward phase the linear motor systematically produces a greater force than in the downward phase, and this difference in quadriceps force is correlated with the knee flexion angle, which makes it impossible to determine to what extent the difference in contact area and pressure between the extension and flexion phases can be attributed to the knee angle, the quadriceps force or the direction of movement. Some investigators have pointed out that the contact areas in the patellofemoral joint alter when the quadriceps
tendon is loaded with different amounts of force [9, 12]. Most likely, these changes can be attributed to the greater amount of cartilage that comes into contact when the loads on the patellofemoral joint are augmented. The load on the quadriceps tendon should therefore always be kept in mind when investigating the contact areas and pressures. Comparing the contact pressures and areas of the present study with those found in the literature, some similarities and some differences can be noticed. One possible explanation for the wide variation between the results is the difference in the methods used to investigate the knee joint. These differences are shown in Table 2. First of all, none of the other studies performed measurements in a weight bearing situation, and only in the study by Bohnsack et al. were measurements performed under dynamic conditions. Comparing the contact areas, it can be observed that the contact areas at 30° knee flexion in both our study and Bohnsack's are smaller. This could possibly be explained by the test circumstances: a dynamic versus a static test setup.

Table 2 Overview of cadaver experiments
A wider variation can be seen in the intra-articular patellofemoral pressure. The intra-articular pressures found in the current work appear to be greater than those found by other investigators. A possible explanation for this phenomenon is the substantially greater forces applied to the quadriceps tendon in the current study, which might result in larger intra-articular pressures. Due to the different test setups, it remains difficult to compare the studies with each other. In addition, only a limited number of knees is presented in this study, and further work is necessary; however, the general trends found in previous biomechanical research on the patellofemoral joint can be seen in the results of the current work as well.
V. CONCLUSIONS

Analysis of these data shows that our findings are consistent with previous findings reported in the literature. It has also been shown that good quality cadaveric knees tested in the Ghent Knee Rig can provide reproducible results. It must be noted, however, that in the in vivo patellofemoral joint the investigated variables cannot be considered independent, but should be seen as a complex, interrelated unit that interacts in a way that is still not fully understood. Thanks to the repeatability and reliability of the designed test rig, it offers great potential for further research on knee biomechanics.
REFERENCES

1. Quintelier J, Lobbestael F, Verdonk P et al. (2008) Patellofemoral contact pressures. Acta Bioeng Biomech 10(2):23-28
2. Bellemans J (2003) Biomechanics of anterior knee pain. Knee 10(2):123-126
3. Mason JJ, Leszko F, Johnson T et al. (2008) Patellofemoral joint forces. J Biomech 41(11):2337-2348
4. Insall J, Falvo KA, Wise DW (1976) Chondromalacia patellae. A prospective study. J Bone Joint Surg Am 58(1):1-8
5. Minkowitz R, Inzerillo C, Sherman OH (2007) Patella instability. Bull NYU Hosp Jt Dis 65(4):280-293
6. Zavatsky AB (1997) A kinematic-freedom analysis of a flexed-knee-stance testing rig. J Biomech 30(3):277-280
7. Ostermeier S, Holst M, Hurschler C (2007) Dynamic measurement of patellofemoral kinematics and contact pressure after lateral retinacular release: an in vitro study. Knee Surg Sports Traumatol Arthrosc 15(5):547-554
8. Elias JJ, Bratton DR, Weinstein MD et al. (2006) Comparing two estimations of the quadriceps force distribution for use during patellofemoral simulation. J Biomech 39(5):865-872
9. Hehne HJ (1990) Biomechanics of the patellofemoral joint and its clinical relevance. Clin Orthop Relat Res (258):73-85
10. Goodfellow J, Hungerford DS, Zindel M (1976) Patello-femoral joint mechanics and pathology. 1. Functional anatomy of the patello-femoral joint. J Bone Joint Surg Br 58(3):287-290
11. Luyckx T, Didden K, Vandenneucker H et al. (2009) Is there a biomechanical explanation for anterior knee pain in patients with patella alta?: influence of patellar height on patellofemoral contact force, contact area and contact pressure. J Bone Joint Surg Br 91(3):344-350
12. Matthews LS, Sonstegard DA, Henke JA (1977) Load bearing characteristics of the patello-femoral joint. Acta Orthop Scand 48(5):511-516
13. Huberti HH, Hayes WC (1984) Patellofemoral contact pressures. The influence of q-angle and tendofemoral contact. J Bone Joint Surg Am 66(5):715-724
14. Csintalan RP, Schulz MM, Woo J et al. (2002) Gender differences in patellofemoral joint biomechanics. Clin Orthop Relat Res (402):260-269
15. Bohnsack M, Klages P, Hurschler C et al. (2009) Influence of an infrapatellar fat pad edema on patellofemoral biomechanics and knee kinematics: a possible relation to the anterior knee pain syndrome. Arch Orthop Trauma Surg 129(8):1025-1030
16. Melegari TM, Parks BG, Matthews LS (2008) Patellofemoral contact area and pressure after medial patellofemoral ligament reconstruction. Am J Sports Med 36(4):747-752

Author: Annemieke Van Haver
Institute: University College Ghent
Street: Schoonmeersstraat 52
City: 9000 Ghent
Country: Belgium
Email: [email protected]
Rapid Prototype Development for Studying Human Activity*

A. Fevgas1, P. Tsompanopoulou1,2, and S. Lalis1,2

1 Computer & Communication Eng. Dept., University of Thessaly, Volos, Greece
2 Centre for Research and Technology – Thessaly (CE.RE.TE.TH.), Volos, Greece

* This study was supported by the Innovation Pole, Center for Research and Technology Thessaly (CE.RE.TE.TH).
Abstract— In recent years there has been rapidly growing interest in the study of human motion. A large number of research projects deal with problems such as monitoring human motion, gesture and posture recognition, and fall detection. Wearable computers and electronic textiles have been successfully used for the study of human physiology, rehabilitation and ergonomics. We present a platform and a methodology for the rapid development of e-textile application prototypes for human activity monitoring.

Keywords— Gait analysis, motion analysis, wearables, electronic textiles, prototype construction.
I. INTRODUCTION

The study of human activity has become one of the most popular research areas in recent years, with applications in healthcare, sports and electronic games. Many research efforts have dealt with fall detection and with gesture, posture and activity recognition. Progress in wearable computers and electronic textiles has contributed significantly to this growth, as their main asset is that they remain present during the activity. Wearable sensor networks (WSNs) comprise devices attached to the human body, which acquire, process and/or transmit sensor data to a host device (e.g. a PDA). A wireless sensor infrastructure for healthcare applications is presented in [1]. The described infrastructure uses 802.15.4/ZigBee protocols for communication and has three types of devices: fixed wireless communication devices, mobile personal devices and mobile sensor nodes. Hanaoka et al. [2] also introduce an infrastructure for wearable sensor networks, in which the sensor nodes (called cookies) have the size of a small coin. Cookies communicate with the outside world through Muffin, a Linux-based host that acts as the gateway. Cinnamon, a high-level programming interface, provides the sensor data.

E-textiles are one of the most challenging areas of wearable computing, since they promise that pervasive computers can be truly wearable. E-textile research involves, among others, the creation of woven sensors [3] and stretchable electronics [4], the development of computation models [5], and the construction of prototype platforms [6][7]. Specifically, e-TAGs, originally presented in [6], are small computational devices which can be attached to fabric textiles. They comprise four different node types (master, microphone, LED, input) that are based on PIC microcontrollers. Moreover, Lilypad [7] is a construction kit for building electronic textiles. It includes a central processing node and several sensor and actuator boards. Lilypad was originally based on conductive laser-cut fabric PCB. It can be programmed in C via the Arduino open-source environment.

Wearables have successfully used accelerometers for capturing physical activity. Karantonis et al. [8] introduce a real-time human movement classifier using a 3-axis accelerometer unit placed on the waist. The system processes the acceleration data and discriminates between activity and rest (accuracy 100%), detects falls (accuracy 95.6%) and classifies walking (accuracy 83.3%). Accelerometers and gyroscopes have been used in [9] to automate the distinction between certain levels of expertise and to examine the quality of movement execution in martial arts. A considerable amount of work has also been done on fall detection. An initial effort to design a reliable fall detector is presented in [10]. A fall detector embedded in a wrist watch is presented in [11]; it detects forward falls with an accuracy of 100%, but for backward and sideways falls the success rate is reduced to 58% and 45%, respectively. In [12], two ±50 g, 2-axis accelerometers are placed orthogonally behind the ear lobe. The rationale behind placing sensors on the head is based on the assumption that high acceleration values in this area are associated with abnormal situations like falls.
II. PLATFORM DESIGN

The design and development of wearable applications for human activity monitoring involves decisions about the sensors and their placement, data processing, communication, power consumption, etc. Frequently, design decisions (e.g. sensor position) are based on empirical criteria [9][12] and/or experimental data [10]. Constructing prototypes to verify design goals may lead to increased cost and production time. The main goal of the proposed platform is to provide the tools and the methodology for the rapid development of e-textile applications for human activity monitoring. The platform aims to provide a toolbox for analyzing position
and acceleration data, and a hardware infrastructure for fast prototype construction. The toolbox should process motion data in order to provide an accurate estimation of the acceleration at different body points, enabling the user to choose the points on which accelerometers should be placed. Moreover, it can be used in the design and verification of data processing algorithms before any prototype is manufactured. The hardware infrastructure, on the other hand, provides the components required to construct smart-clothing prototypes.
III. HARDWARE INFRASTRUCTURE

The proposed rapid prototype development environment provides a hardware infrastructure for constructing smart clothes. Factors such as cost, usage, power consumption, communication, programming and scaling have been considered in the platform design. Wearable sensor networks and e-textile platforms like e-TAGs and Lilypad can also be used for prototyping smart clothes. In WSNs, a wireless node can be placed on any body point where a sensor or an actuator is required by the application. This introduces maintenance overhead for the wireless nodes (e.g. battery replacement) and, moreover, complicates communication and software development. Electronic textiles based on the e-TAGs approach have similar drawbacks, as different types of nodes are used. LilyPad introduces a different model, in which a single processing node is connected to many sensor, actuator and power nodes. In our work, the proposed architecture was designed at about the same time as Lilypad and combines the techniques used in it with those of body area networks. Specifically, a single processing node is used, with several sensors and actuators connected directly to it. The position of the sensors and/or actuators is independent of the position of the processing node. The processing node is equipped with wireless communication, for both software updates and interaction with other systems. It acquires data from the sensors, processes them and, if necessary, activates actuator(s) and/or establishes communication with external systems. The node's functionality can be changed through an application code update. The processing node, as well as its peripherals (sensors and actuators), are battery powered. More than one node can be incorporated into a smart cloth, for both scaling and redundancy.

A first prototype board, Figure 1 (left), was constructed for the evaluation of the design approach, followed by a final, more compact version, Figure 1 (right). The selection of an efficient processor for the central unit was based on factors such as the number of I/O ports, performance and power consumption. The PIC18LF4550, an 8-bit microcontroller from Microchip, was adopted for the processing unit. This microcontroller incorporates 35 I/O ports, 13 A/D channels, 2 KB of data memory and 32 KB of flash program memory, as well as several peripherals, among them a 10-bit A/D converter and a USART module. It supports a wide range of operating frequencies, from 32 kHz to 48 MHz, and provides a range of features that significantly reduce power consumption. Another important feature is its self-programmability, which enables it to write to its own program memory under software control. The microcontroller used in the prototype was clocked at 12 MHz and its operating voltage was set to 3.3 V. An XBee module from Digi (formerly MaxStream) is used for radio communication; XBee is a low-power, low-cost RF communication module built on ZigBee/802.15.4.
Fig. 1 The processing node

The processing node is battery powered; a 6 V, 128 mAh PX28L battery was used in the experiments. The voltage is regulated to 3.3 V by an LM2937-3.3 regulator from National Semiconductor. The node's current consumption is 4.36 mA in the idle state, 5.76 mA when sampling an accelerometer and 57.09 mA when transmitting the samples. As mentioned above, the proposed system should support application code updates, and a bootloader and a host application have been developed for this purpose. The bootloader supports read, write, erase and execute-application commands. The host application provides all the functionality required to control the programming procedure through a convenient user interface. The bootloader was developed in C, based on the architecture described in [13], and the host application was developed in Java.
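From the quoted capacity and current draws, a rough battery-life estimate can be derived for any assumed duty cycle; the sketch below (Python) uses a hypothetical duty cycle of 95% sampling and 5% transmitting.

```python
# Current draws reported for the processing node (mA) and battery capacity.
I_IDLE, I_SAMPLE, I_TX = 4.36, 5.76, 57.09
CAPACITY_MAH = 128.0  # PX28L battery

def battery_life_hours(frac_sample, frac_tx):
    """Estimated runtime for a given duty cycle (the remainder is idle)."""
    frac_idle = 1.0 - frac_sample - frac_tx
    avg_ma = frac_idle * I_IDLE + frac_sample * I_SAMPLE + frac_tx * I_TX
    return CAPACITY_MAH / avg_ma

# Hypothetical duty cycle: sampling 95% of the time, transmitting 5%.
print(f"{battery_life_hours(0.95, 0.05):.1f} h")  # about 15 h
```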
IV. MODA

Motion Data Analysis (MoDA) is a toolbox implemented in MATLAB, designed to load, process and display motion data (i.e., position or acceleration data) of human activities. Its graphical user interface enables the user to process data with ease, using existing functionality or adding new user-defined functions specific to each particular case.
Data describing the positions of specific points on the human body are collected in laboratories equipped with motion capture systems. These systems use markers placed on the subject and record their locations in a three-dimensional axis system during the experiment. The location data, treated with interpolation and differentiation methods, provide the acceleration of the corresponding body parts at all time instances, but the results may contain a large amount of noise. Prior to the MoDA implementation, experiments on interpolation using the MATLAB Spline Toolbox were carried out to determine the most appropriate method for this particular data type. This proved to be smoothing cubic splines with a given tolerance, which were therefore used in MoDA, since they smooth the acceleration according to a user-defined tolerance (a sketch of this step is given below). The correct choice of the tolerance is a matter of experience, and results in clearer data with reduced noise. Similarly to the location data, acceleration can be measured directly by high-accuracy accelerometers stitched onto the subject during experiments performed in suitably equipped laboratories. Since the computation of position given the acceleration is an inverse-type problem, it is very sensitive and the solution depends strongly on the method used. Once the data sets are available, the user is able to load, plot, smooth and observe the data, compute special quantities and properties, and study the behavior of the body during different kinds of motion. The graphs of the location data (in blue) of a marker placed on the waist, and of the corresponding acceleration data (in red), for the three axes, are depicted in the Observation Window (Figure 2); the right part of the window shows an enlargement of the left graphs at the moment of the fall.

Fig. 2 Observation window picturing data of a fall

MoDA provides the ability to study the motion data of a particular body point by loading data from more than one case (e.g. walking, fall). The user can also compare the data of more than one marker/accelerometer, in order to decide which body points offer the most information about a particular problem, and then proceed to a prototype construction (for further experimentation) with sensors placed on these points. Both the already implemented and the user-defined functions can be used to study the recorded and the computed data.
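Although MoDA itself is implemented in MATLAB, the smoothing-spline-plus-differentiation step can be sketched equivalently in Python with SciPy, as below; the sampling rate, the noise level and the smoothing factor are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

fs = 120.0                      # assumed motion-capture sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
# Placeholder vertical position of a waist marker (m), with measurement noise.
z = 1.0 - 0.3 * np.sin(2 * np.pi * t) + rng.normal(0.0, 0.002, t.size)

# Smoothing spline: the factor s plays the role of the user-defined tolerance.
spline = UnivariateSpline(t, z, k=5, s=t.size * 0.002 ** 2)

z_smooth = spline(t)                       # smoothed position
accel = spline.derivative(n=2)(t)          # second derivative = acceleration
print(accel[:5])
```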
V. PLATFORM EVALUATION

The proposed platform design is evaluated by building a smart cloth for fall detection, a popular research subject in movement analysis with many practical applications in healthcare, sports, etc. MoDA was utilized to study falls using position data obtained from [14] and downloaded from [15]. The acceleration amplitude on each axis (Figure 2) was studied, along with the Euclidean norm, for different body spots, in order to locate suitable points for placing acceleration sensors. It turned out that the acceleration values at the waist do not differ much between the various fall types. Considering the observation that all recovery attempts are initiated with an upper extremity movement, an accelerometer on the waist was used for fall detection, and an accelerometer on each wrist to identify the subject's condition (responding or not) after a fall.

MoDA was also used for designing and testing the fall detection algorithm. Position data were used, along with acceleration values gathered by an acceleration logger attached to the waist. The collected acceleration data include the gravitational component of the acceleration, which can determine the postural orientation. An acceleration threshold and the torso orientation were employed in the proposed algorithm to detect falls; the algorithm identifies the subject's condition using the wrist acceleration and tracks recovery attempts. An alarm is raised if the subject is not responding or recovery fails. To build our fall detector (Figure 3), a processing node and three accelerometers were attached to a conventional jacket. 3-axis accelerometers (ADXL330, Analog Devices) with a range of ±3 g were used, integrated on a sensor board by Dimension Engineering (DE-ACCM3D). The board features integrated op-amp buffers for direct connection to a microcontroller's analog inputs. The processing node was placed on the spine, above the waist, in order not to affect user comfort. The sensors were attached to the wrists and the waist, according to the findings of the fall analysis carried out with MoDA. The bootloader and the programming environment were used to program the smart jacket with the fall detection application.
The proposed platform design is evaluated by building a smart cloth for fall detection, a popular research subject in movement analysis with many practical applications in healthcare, sports, etc. MoDA was utilized to study falls by using position data obtained from [14] and downloaded from [15]. Acceleration amplitude on each axis (Figure 2) was studied along with Euclidean norm for different body spots in order to locate suitable points for placing acceleration sensors. It was figured out that acceleration values on the waist do not differ a lot for various fall types. Considering the observation that all recovery attempts are initiated with upper extremity movement, an accelerometer on the waist was used for fall detection and an accelerometer on each wrist to identify subject’s condition (responding or not) after a fall. MoDA was also used for designing and testing the fall detection algorithm. Position data were used, along with acceleration values gathered by an acceleration logger attached to the waist. The collected acceleration data include the gravitational vector of acceleration, which may determine the postural orientation. An acceleration threshold and the torso orientation were employed in the proposed algorithm to detect falls, which identifies subject’s condition using wrist acceleration and tracks recovery attempts. An alarm is raised, if the subject is not responding or recovery fails. To build our fall (Figure 3) detector, a processing node and three accelerometers were attached on a conventional jacket. 3-axis accelerometers (ADXL330, Analog Devices) with range ±3g were used and integrated to a sensor board by Dimension Engineering (DE-ACCM3D). The board features integrated op amp buffers for direct connection to a microcontroller’s analog inputs. The processing node was placed in the spine above waist in order not to affect user comfort. The sensors were attached to the wrists and waist, according to the findings of fall analysis carried out by MoDA. The bootloader and the programming environment were used to program the smart jacket with the fall detection application.
A set of experiments was carried out to test the smart jacket's functionality. The tests comprised intentional falls and activities of daily living (ADL) (Table 1). The ADL were included in the tests to check for false positives. The tests were performed by three healthy subjects aged 24 to 32, and all activities were repeated three times per subject. None of the ADL was recognized as a fall, while all falls were successfully recognized except the backward fall in which the torso remained in an upright stance.

Fig. 3 Smart jacket

Table 1 Evaluation scenarios

Activity      Description
Daily Living  From standing position sit down on a chair
              From seated position stand up
              From standing position lie down on a bed
              From lying position stand up
              Walking regularly for 15 m
              Walking fast for 15 m
              Walking up a 15-step staircase
              Walking down a 15-step staircase
Falls         Fall forward from a standing position, ending lying
              Fall forward while walking, ending lying
              Fall lateral from a standing position, ending lying
              Fall backward from a standing position, ending lying
              Fall backward from a standing position, ending with torso upright
VI. CONCLUSIONS

We presented a platform for the rapid prototyping of e-textile applications that monitor human activities. The platform comprises a toolbox named MoDA for studying human motion data and a hardware infrastructure for building e-textiles. MoDA supports the study of motion problems, by processing the available data, before any hardware is involved. The evaluation results confirm the technical feasibility of our design. Our future work aims to enhance MoDA's functionality, as well as to develop a high-level programming interface for the processing node.

REFERENCES
1. Arriola A, Brebels S, Valderas D et al. (2008) A wireless sensor network infrastructure for personal monitoring. 5th International Workshop on Wearable Micro and Nanosystems for Personalized Health, Valencia, Spain, 2008
2. Hanaoka K, Takagi A, Nakajima T (2006) A software infrastructure for wearable sensor networks. In: Proc. International Workshop on Real-Time Computing Systems and Applications (RTCSA'06), Sydney, Australia, 2006, pp 27-35
3. Paradiso R, Loriga G, Taccini N (2005) A wearable healthcare system based on knitted integrated sensors. IEEE Trans Inf Tech Biomed 9:337-344
4. Loher T, Manessis D, Heinrich R et al. (2007) Stretchable electronic systems. In: Proc. Conference on Electronics Packaging Technology, Singapore, 2006, pp 271-276
5. Marculescu D, Marculescu R, Khosla P (2002) Challenges and opportunities in electronic textiles modeling and optimization. In: Proc. ACM Conference on Design Automation, New Orleans, USA, pp 175-180
6. Lehn D, Neely C, Schoonover K, Jones M et al. (2004) e-TAGs: e-textile attached gadgets. In: Proc. Conference on Communication Networks and Distributed Systems Modeling and Simulation, San Diego, California, USA
7. Buechley L, Eisenberg M (2008) The LilyPad Arduino: toward wearable engineering for everyone. IEEE Perv Comp 7(2):12-15
8. Karantonis D, Narayanan M, Mathie M et al. (2006) Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans Inf Tech Biomed 10(1):156-167
9. Heinz E, Kunze K, Gruber M et al. (2006) Using wearable sensors for real-time recognition tasks in games of martial arts - an initial experiment. IEEE Symposium on Computational Intelligence and Games, pp 98-102
10. Doughty K, Lewis R, McIntosh A (2000) The design of a practical and reliable fall detector for community and institutional telecare. J Telemed Telecare 6:50-54
11. Degen T, Jaeckel H, Rufer M et al. (2003) SPEEDY: a fall detector in a wrist watch. 7th IEEE International Symposium on Wearable Computers, NY, USA, pp 184-187
12. Lindemann U, Hock A, Stuber M et al. (2005) Evaluation of a fall detector based on accelerometers: a pilot study. Med Biol Eng Comput 43(5):548-551
13. Fosler R, Richey R (2002) A FLASH bootloader for PIC16 and PIC18 devices at http://www.microchip.com
14. Laboratory for Human Movement Analysis, CERETETH, at http://www.inhuper.cereteth.gr/laboratories/biomechanics?set_language=en
15. Motion Capture Database, CMU at http://mocap.cs.cmu.edu/
Author: Athanasios Fevgas
Institute: Computer & Communication Eng Dept, Univ of Thessaly
Street: Glavani 37
City: Volos
Country: Greece
Email: [email protected]
Rheological and Electrical Properties of RBC Suspensions in Dextran 70. Changes in RBC Morphology

N. Antonova1, I. Ivanov1, Y. Gluhcheva2, and E. Zvetkova2

1 Institute of Mechanics and Biomechanics, Bulgarian Academy of Sciences, Sofia, Bulgaria
2 Institute of Experimental Morphology and Anthropology with Museum, Bulgarian Academy of Sciences, Sofia, Bulgaria
Abstract— The apparent viscosity and conductivity of red blood cell (RBC) suspensions in dextran 70 (Dx 70) at various concentrations were evaluated in vitro under steady and unsteady flow conditions. The time and shear rate dependences of the conductivity were studied, in parallel with the rheological properties of the samples, under transient flow regimes at different local structures of the uniform Couette flow, and their dependence on the Dx 70 concentration was evaluated as well. A concurrent measurement system based on a Contraves Low Shear 30 rotational rheometer was used in the study [1-3]. Compared to a non-aggregating control RBC suspension in PBS, the low-shear viscosity of the RBC suspensions in dextran 70 increased and their conductivity decreased, depending on the dextran concentration. The time course of the conductivity of the RBC suspensions in Dx 70 was recorded under different flow conditions and provides an experimental description of the RBC aggregation-disaggregation processes and other cell-cell interactions. Dx 70 induces morphological alterations in RBC shape and arrangement in the suspensions: echinocytes are observed at low Dx 70 concentrations, while spherocytes are found mainly in smears at higher Dx 70 concentrations. These morphological characteristics affect the electrical and mechanical properties of blood.

Keywords— RBC suspension, dextran 70, apparent viscosity, conductivity, RBC morphology.

I. INTRODUCTION

RBCs in the presence of high molecular weight polymers, such as plasma proteins or solutions containing large polymers (e.g. dextrans ≥ 40 kDa), aggregate to form rouleaux and rouleaux networks, which are the major determinant of the in vitro rheological properties of blood [4-8]. Red blood cell aggregation is a reversible phenomenon, and aggregates can easily be disrupted by mechanical forces. In vivo, RBC aggregation occurs at low shear forces or stasis and is a major determinant of low-shear blood viscosity, and thus of in vivo flow dynamics. Dextrans are complex polymers with molecular weights ranging from 10 to 500 kDa. They adsorb to the cell surface and promote the formation of RBC aggregates of different morphology and size (from short linear rouleaux to continuous networks), depending on the molecular weight and concentration of the dextrans used [4, 6, 9]. In vivo, dextrans increase plasma volume, improve blood flow and exhibit antithrombotic effects [10]. A large number of investigations focus on the evaluation of RBC aggregation using various optical, ultrasound, viscometric and other methods [4, 11]. In our previous investigations, using a novel technique, we quantified the electrorheological properties of blood under an electric field of 2 kHz and found that valuable information can be obtained about the mechanical properties of blood, in particular about the structuring and the kinetics of rouleaux formation [1-3]. The aim of the present study is to examine the rheological and electrical properties of normal human RBC suspensions in dextran 70 in PBS under steady and non-steady flow conditions, as well as the morphological characteristics and transformations observed in the same RBC suspensions.
II. MATERIALS AND METHODS

A. RBC Suspension Samples

Human red blood cells (RBCs) from a healthy donor, purchased from the National Center for Hematology and Transfusion in Sofia and conserved with CPDA-1 (63 ml CPDA-1 in 450 ml blood), were washed twice with phosphate-buffered solution (PBS, pH = 7.4) and centrifuged for 15 min at 3 000 rpm. The washed RBCs were resuspended either in PBS (control) or in Dx 70 and PBS, with final dextran concentrations in the samples of 1, 2, 3 and 3.5 g/dl and a hematocrit of H = 40%. Blood smears were prepared, fixed and stained with May-Grünwald-Giemsa (Sigma). Morphological characteristics were observed on an Opton light microscope at a magnification of x 630. Measurements were carried out within 2 h after the suspensions were prepared.

B. Apparent Viscosity and Conductivity Measurements

In the present study, the apparent viscosity of the RBC suspensions in PBS with Dx 70 and of the control were measured using a Contraves Low Shear 30 rotational viscometer at steady flow, at shear rates from γ = 0.0237 s-1 to γ = 128.5 s-1.
The conductivity of the RBC suspensions was quantified under non-equilibrium flow conditions by means of electrorheological techniques. A concurrent measurement system based on a Contraves Low Shear 30 rotational rheometer was used in the study. It includes a resin replica of the Couette-type measuring system MS 1/1 of the rheometer, with a pair of platinum electrodes embedded into the wall; a device constructed according to the conductometric method; and software (a data acquisition system) [1]. A method based on the dielectric properties of dispersed systems in Couette viscometric blood flow was applied to investigate the kinetics of RBC aggregation and the formation and break-up of the aggregates [13]. The main advantage of this technique is that the blood is subjected to a uniform shearing field in a Couette rheometric cell, and information about the mechanical and electrical properties of the fluid is obtained in parallel. The shear rate changes were programmed and controlled by a Rheoscan 100 programming unit. The measurements were done at different shear rates and at different local structures of the flow field, including triangular and trapezoidal shear rate profiles, as follows: I --- 0.277 - 128.5 - 0.277 s-1; II --- 0.277 - 128.5 - 0.277 s-1; III --- 0.277 - 128.5 - 0.277 s-1; IV --- 0 - 27.7 - 0 s-1; V --- 0 - 94.5 - 0 s-1 (Fig. 3). The time variation of the conductivity of the RBC suspensions under these transient flow conditions, at triangular and trapezium-shaped Couette viscometric flow, was investigated under an electric field of 2 kHz. To investigate the aggregation process in stasis and under flow, the RBC suspension was first sheared for 20 to 30 seconds to disperse all aggregates, after which the flow was stopped or decreased to allow the RBCs to aggregate. Immediately after the beginning and after the complete stoppage of shearing, the kinetics of the conductivity and torque signals were recorded. If higher shear rates had no further effect on the σ values measured during shearing, the applied shear rate was considered sufficient for the complete dispersion of the aggregates. The time interval between the measurements was 0.2 s. All measurements were carried out immediately after sample preparation, at 37 °C.
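For illustration, such programmed shear-rate profiles can be generated as simple ramps; the sketch below (Python) builds regime IV (0 - 27.7 - 0 s-1) as a triangular profile, with an assumed ramp duration, since the actual Rheoscan 100 settings are not specified here.

```python
import numpy as np

def triangular_profile(peak, ramp_s, dt=0.2):
    """Shear-rate ramp 0 -> peak -> 0 (s^-1), sampled every dt seconds;
    dt = 0.2 s matches the conductivity sampling interval used in the study."""
    up = np.arange(0.0, ramp_s, dt)
    rates_up = peak * up / ramp_s
    return np.concatenate([rates_up, rates_up[::-1]])

# Regime IV: 0 - 27.7 - 0 s^-1, with an assumed 60 s ramp each way.
gamma = triangular_profile(peak=27.7, ramp_s=60.0)
print(round(gamma.max(), 2), gamma.size)
```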
6 4 2 0
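As a rough illustration of the programmed shear rate regimes, the sketch below generates triangular and trapezoidal shear rate ramps of the kind listed above. It is not the Rheoscan 100 control software; the ramp duration is an assumed parameter, since only the shear rate limits and the 0.2 s sampling interval are given in the text.

```python
import numpy as np

def shear_program(peak, ramp_time, hold_time=0.0, base=0.0, dt=0.2):
    """Triangular (hold_time = 0) or trapezoidal shear rate program.

    Returns gamma(t) in s^-1 sampled every dt seconds (0.2 s matches
    the recording interval quoted in the text): base -> peak -> base.
    """
    n_up = int(round(ramp_time / dt))
    rise = np.linspace(base, peak, n_up + 1)
    hold = np.full(int(round(hold_time / dt)), peak)
    fall = rise[::-1][1:]  # back down, without repeating the peak sample
    return np.concatenate([rise, hold, fall])

# Regime IV of the protocol: 0 -> 27.7 -> 0 s^-1; the 60 s ramp time
# is an assumed value, not stated in the paper.
gamma_IV = shear_program(peak=27.7, ramp_time=60.0)
```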
III. RESULTS

RBC suspensions at hematocrit H=40% in PBS and dextran 70, with final concentrations in the sample of 1, 2, 3 and 3.5 g/dl, exhibit non-Newtonian rheological behaviour over a wide range of shear rates, from 0.0237 s-1 to 94.5 s-1 (Fig. 1). Comparison with the apparent viscosity of the control RBC suspension in PBS at hematocrit H=40% showed that the investigated RBC suspension samples in dextran 70 have increased viscosity, and that this increase is more pronounced at low shear rates, confirming the increased rouleaux formation in the presence of Dx 70 at the different concentrations.

At high shear rates (94.5 s-1 - 128.5 s-1) the differences in apparent viscosity between the control and the RBC suspensions in Dx 70 are less pronounced. These results are consistent with the literature data [9], where it is known that the addition of dextran 70 leads to increased erythrocyte aggregation and, correspondingly, increased blood viscosity (Fig. 1).

Fig. 1 Shear rate dependence of apparent blood viscosity for RBC suspensions in dextran 70 with concentrations 2, 3, 3.5 g/dl and of control RBCs in PBS. T=37 °C
Fig. 2 Concentration dependence of apparent blood viscosity for RBC suspensions in Dx 70 and of control RBCs in PBS at different shear rates (20.4, 51.2 and 94.5 s-1). T=37 °C

The obtained results for the concentration dependence show that the apparent viscosity of the RBC suspensions increases with increasing Dx 70 concentration. These relationships also depend on the shear rate: the increase is more pronounced at lower shear rates than at higher shear rates, in comparison with the non-aggregating control of RBCs in PBS. At lower shear rates, the apparent viscosity of the RBC suspensions in Dx 70 is more than two times higher than that of the control RBCs in PBS with the same hematocrit (H=40%). However, the trend of the change with Dx 70 concentration between 2 and 3.5 g/dl, where the apparent viscosity is almost the same, differs between the separate cases: for shear rates of 20.4 s-1 and 51.2 s-1 the blood viscosity increased, but at 94.5 s-1 it rose at first and then fell (Fig. 2).
Time-dependent changes of the conductivity of RBC suspensions in Dx 70 (3 g/dl) and of the control sample at different shear rate regimes are shown in Fig. 3. The results showed that the time-dependent conductivity (σ) of RBC suspensions in Dx 70 depends strongly on the regime of the applied shear rate. The analysis showed that an increased shear rate also led to an increased σ of the samples. Our results show that the conductivity of the control sample of non-aggregating RBC suspension in PBS with the same hematocrit (H=40%) is neither shear-rate nor time dependent and has a constant value (Fig. 3).

Fig. 3 Time-dependent change of blood conductivity of RBC suspensions in Dx 70 (3 g/dl) and of the control sample at different shear rate regimes. T=37 °C

Microscopic observations performed within this time reveal changes in RBC morphology. The morphological studies showed a dextran concentration dependence of the transition in erythrocyte shape, related to the transformation of normal discocytes into echinocytes (stages 1 and 2). In blood smears of RBCs treated in suspensions with different Dx 70 concentrations, echinocytes were the main cells observed (Figs. 4-5). These red blood cells were flat, discoid and irregularly contoured, with a few sharp superficial angulations (small cytoplasmic protrusions). Stage 1 echinocytes (Fig. 4) were observed after treatment of RBCs with low Dx 70 concentrations (1-2 g/dl).

Fig. 4 Stage 1 echinocytes in blood smears after treatment with low Dx 70 concentrations (1-2 g/dl). May-Grünwald-Giemsa staining, x 630

Spherocytes (spheroechinocytes; stage 2 echinocytes) are flat ovoid red blood cells, arranged in a continuous network; we observed them in suspensions/smears treated with high Dx 70 concentrations (3 g/dl and 3.5 g/dl) (Fig. 5). In the same blood smears one could also see rare discocytes and stomatocytes (see arrows and arrowheads).

Fig. 5 Stage 2 echinocytes, two stomatocytes (arrows) and a discocyte (arrowhead), after treatment with a high Dx 70 concentration (3.5 g/dl). May-Grünwald-Giemsa staining, x 630
IV. DISCUSSION

The results demonstrate that the shear rate, concentration and time-dependent changes in the apparent viscosity and conductivity of RBC suspensions in Dx 70 during the aggregation process differ in nature for suspensions with different Dx 70 concentrations. The conductivity changes of RBC suspensions in Dx 70, measured at different concentrations and flow conditions, follow the morphological transformations of the RBCs and of the RBC aggregates during the aggregation-disaggregation process. Our results confirm the data of Reinhart et al. (2008) that at different Dx 70 concentrations (from 1 to 3.5 g/dl) in suspensions of normal RBCs, the erythrocytes, which have been coated with considerable amounts of this polymer, can be transformed into echinocytes and stomatocytes – thus deeply changing their morphological characteristics and rheological properties, and increasing the suspension viscosity and RBC aggregability.

The shape of a normal red blood cell is well known: under resting conditions it is that of a biconcave discocyte. RBCs can easily undergo shape transformation to echinocytes and stomatocytes under the influence of both intrinsic and extrinsic factors, and return to their resting shape when these factors are removed [8, 12, 13]. Echinocytes and stomatocytes are the most commonly induced morphological shape changes of RBCs in vitro and in vivo, under the influence of various agents, as well as in different physiological and pathophysiological conditions [9]. Data of Eriksson (1990) [14] show that when RBCs are suspended in buffered salt solutions, they can be transformed into echinocytes. On the other hand, RBC shape and arrangement depend on the kinetics and dynamics of the flow, the deformation rate and the shear stress distribution [9, 15]. Every transformation in RBC shape, including those related to changes in cell membrane biochemistry and morphology, can induce rheological disturbances which are of great importance in clinical practice [9]. The same authors have analyzed the influence of drug-induced echinocytosis and stomatocytosis on suspension viscosities. It was found that the viscosity was increased by echinocytosis – a result which correlates well with the observations reported here. Reinhart and co-authors found that under experimental conditions with erythrocyte aggregation (4 g/dl Dx 70 at a low shear rate of 0.1 s-1), a small degree of echinocytosis produced the highest suspension viscosity. Interestingly, in all cases, the suspension viscosity could be normalized by re-transforming the echinocytes into normal RBCs – discocytes (characterized by lower viscosity and thus the best oxygen transport efficiency).
V. CONCLUSIONS

Understanding the morphological, mechanical and electrical properties of cells, as well as elucidating the complex cell-to-cell and cell-polymer interactions of RBC suspensions in Dx 70, provides an instrument for understanding the mechanism of RBC aggregation and offers a better tool for diagnostics, therapeutics and effective drug assays.
ACKNOWLEDGMENT

The authors thank Dr. Rumen Todorov from the Institute of Physical Chemistry of the Bulgarian Academy of Sciences for his assistance. This work was supported by the EU Operative programme "Human resources development" – project grant BG051PO001/07/3.3-04/21 / 28.08.2009.
REFERENCES

1. Antonova N, Riha P (2006) Studies of electrorheological properties of blood. Clin. Hem. and Microcirculation 35:19-29
2. Antonova N, Riha P, Ivanov I (2008) Time dependent variation of human blood conductivity as a method for an estimation of RBC aggregation. Clin. Hem. and Microcirculation 39:53-61
3. Kaliviotis E, Ivanov I, Antonova N, Yianneskis M (2009) Erythrocyte aggregation at non-equilibrium flow conditions: a comparison of characteristics measured with electrorheology and image analysis. Clin. Hem. and Microcirculation (in press)
4. Barshtein G, et al (1998) Red blood cell rouleaux formation in dextran solution: dependence on polymer conformation. Eur. Biophys. J. 27/2:177-181
5. Suresh S (2006) Mechanical response of human red blood cells in health and disease: some structure-property-function relationships. J Mater Res 21:1871-1877 DOI 10.1557/JMR.2006.0260
6. Pribush A, Zilberman-Kravits D, Meyerstein N (2007) The mechanism of the dextran-induced red blood cell aggregation. Eur Biophys J 36:85-94 DOI 10.1007/s00249-006-0107-1
7. Mohandas N et al (1980) Analysis of factors regulating erythrocyte deformability. J. Clin Invest. 66:563-573
8. Van Oss CJ, Arnold K, Coakley WT (1990) Depletion flocculation and depletion stabilization of erythrocytes. Cell Biochem and Bioph 17/1:1-10
9. Reinhart WH, Singh-Marchettin M, Straub PW (2008) The influence of erythrocyte shape on suspension viscosities. Eur. J. Clin. Invest. 22/1:38-44
10. Gruber U (1975) Dextran and the prevention of postoperative thromboembolic complications. Surg Clin North Am 55:679-696
11. Pribush A, Meyerstein D, Meyerstein N (2004) Conductometric study of shear dependent processes in red blood cell suspensions. I. Effect of red blood cell aggregate morphology on blood conductance. Biorheology 41(1):13-28
12. Van Oss CJ, Coakley WT (1988) Mechanisms of successive models of erythrocyte stability and instability in the presence of various polymers. Cell Biophys. 13/2:141-150
13. Fischer T (2004) Shape memory of human red blood cells. Biophys J 86:3304-3313
14. Eriksson L (1990) On the shape of human red blood cells interacting with flat artificial surfaces – the "glass effect". Biochim Biophys Acta 1036:193-201
15. Balan C, Balut C, Gheorghe L, Gheorghe C, Gheorghiu E, Ursu G (2004) Experimental determination of blood permittivity and conductivity in simple shear flow. Clin Hemorheol Microcirc 30:359-364
Author: N. Antonova, Institute of Mechanics and Biomechanics, Acad. G. Bonchev Street, Bl. 4, 1113 Sofia, Bulgaria. Email: [email protected]
Numerical Simulation in Magnetic Drug Targeting. Magnetic Field Source Optimization

A. Dobre and A.M. Morega

University POLITEHNICA of Bucharest, Bucharest, Romania
Abstract—This paper presents a numerical simulation model and results on magnetic drug targeting therapy. The study aims at investigating the interaction of the aggregate blood – magnetic carrier flow with an external magnetic field. Another objective was to find the optimal magnetic field source configuration that provides flows that best assist magnetic drug targeting. To evaluate the effects we used finite element analysis. The computational domains range from idealized 2D blood vessel models to more realistic 3D models.

Keywords—magnetic drug targeting, hemodynamics, numerical simulation, finite element analysis.

I. INTRODUCTION
Current research on methods for targeting chemotherapy drugs in the human body includes the investigation of biocompatible magnetic nanocarrier systems. For example, magnetic liquids such as ferrofluids can play an important role as drug carriers in the human body [1]. For instance, they can be used for drug targeting in modern locoregional cancer treatment. Magnetic nanoparticles have controllable sizes ranging from a few nanometres up to tens of nanometres, i.e., dimensions comparable to those of a cell (10–100 μm), a virus (20–450 nm), a protein (5–50 nm) or a gene (2 nm wide and 10–100 nm long), which means that they can 'get close' to a biological entity of interest [2]. They can be coated with biological molecules, providing a controllable means of 'tagging' or addressing biological entities. Super-paramagnetic nanoparticles interact with external magnetic fields. In this way they can be used as carriers to deliver anticancer drugs, or radionuclide atoms, to a targeted tumor region. Other applications concern the ability of magnetic nanoparticles to interact with time-variable magnetic fields; e.g., they may be heated up and used as hyperthermia agents [2]. Collateral effects, including the damage to healthy human cells from chemotherapy drugs, pose quantitative upper limits on the delivered medication doses. On the other hand, these limits reduce the chances of successfully annihilating the tumor formations. Hence, a main objective of modern cancer research is to devise means and tools to better focus chemotherapeutic drugs on tumor tissue
while reducing the overall exposure of the organism [1]. A remaining challenge for this medical application is the choice of clinical settings, e.g., the optimal adjustment of the magnetic field, the ferrofluid properties, etc. [1]. This paper is concerned with the mass transport interactions of a ferrofluid flow that models the blood and the magnetic carrier substance, in an external magnetic field. The model treats the blood and the magnetic carrier species as a homogeneous, isotropic fluid with super-paramagnetic macroscopic properties [1], [3, 4].

II. THE MATHEMATICAL MODEL
In this study the electromagnetic field is static (magnetostatic), produced by a permanent magnet. The blood-carrier aggregate fluid is Newtonian, and its flow is incompressible and unsteady (pulsatile). Therefore, the mathematical model is made of Maxwell's equations, the momentum balance, mass conservation, and the magnetic and fluid constitutive laws. We assume that the magnetic field – magnetic fluid flow interaction is one-way: the flow is influenced by the external magnetic field (the fluid is magnetized, hence a body force term occurs in the momentum equation). Therefore, the magnetic field problem is solved first, in the computational domain that includes the magnetic field source (the permanent magnet) and the blood vessel. A magnetic body force term then couples the magnetic field to the flow problem in the blood-vessel domain [3, 4].

A. The Magnetic Field Model

The magnetic field is static [7, 8], described by Ampère's law for the magnetic field strength (H)
∇ × H = 0,          (1)

the magnetic flux law

∇ ⋅ B = 0,          (2)

and the constitutive laws describing the relation between B and H in the different parts of the computational domain:

B = μ0 μr H + Brem  (permanent magnet),

B = μ0 [H + Mff(H)]  (aggregate fluid),          (3)

B = μ0 H  (tissue and air).
Here μ0 is the magnetic permeability of air; μr is the relative magnetic permeability of the permanent magnet; Brem is the remanent magnetic flux density; and Mff is the magnetization of the super-paramagnetic aggregate stream, which is a function of the magnetic field strength, H. The analysis is carried out for a 2D computational domain – a slice that cuts through the magnet and the blood vessel. A common approach is to use the magnetic vector potential A (and the divergence-free gauge condition)

B = ∇ × A,  ∇ ⋅ A = 0.          (4)

The magnetic vector potential has a single component, perpendicular to the computational domain, A = Az k. Summing up, the partial differential equations that make up the mathematical model for the static magnetic field are
∇ × [μ0⁻¹ μr⁻¹ (∇ × A − Brem)] = 0  (permanent magnet),

∇ × (μ0⁻¹ ∇ × A − Mff) = 0  (aggregate fluid),          (5)

∇ × (μ0⁻¹ ∇ × A) = 0  (tissue and air).

The boundary condition that closes the model is magnetic insulation (Az = 0). Due attention is devoted to placing this boundary far enough away, while keeping the numerical model of a convenient size.

B. The Hydrodynamic Model

Several simplifying assumptions are made: the arterial flow is assumed incompressible and laminar; the fluid (blood) is Newtonian, with constant properties; the aggregate of blood and magnetic carrier is assumed super-paramagnetic, and no mass transfer is considered; the vessel walls are rigid, and no flow-structure interaction occurs. The arterial flow is then governed by the momentum (Navier-Stokes) balance and by the mass conservation law

ρ [∂u/∂t + (u ⋅ ∇)u] = ∇ ⋅ [−pI + η(∇u + (∇u)ᵀ)] + fmg,          (6)

∇ ⋅ u = 0.          (7)

Here u is the velocity field, p is the pressure, ρ is the mass density, η is the dynamic viscosity, and I is the unit matrix; fmg is the magnetic body force due to the fluid magnetization under the influence of the external magnetic field,

fmg = μ0 (M ⋅ ∇) H.          (8)
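For readers who want to experiment outside COMSOL, the following sketch evaluates the magnetic body force of eq. (8) on a uniform 2D grid by finite differences. It is only an illustration under the paper's one-way coupling assumption; the grid spacing and the magnetization and field arrays are hypothetical inputs, not part of the authors' model setup.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of air [H/m]

def magnetic_body_force(Mx, My, Hx, Hy, dx, dy):
    """Evaluate fmg = mu0 (M . grad) H, eq. (8), on a uniform 2D grid.

    Mx, My, Hx, Hy are 2D arrays indexed as [iy, ix]; dx, dy are the
    (hypothetical) grid spacings. Returns the force components [N/m^3].
    """
    dHx_dy, dHx_dx = np.gradient(Hx, dy, dx)  # derivatives of Hx
    dHy_dy, dHy_dx = np.gradient(Hy, dy, dx)  # derivatives of Hy
    fx = MU0 * (Mx * dHx_dx + My * dHx_dy)
    fy = MU0 * (Mx * dHy_dx + My * dHy_dy)
    return fx, fy
```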
The boundary conditions that close the flow problem are as follows: on the vessel walls, u = v = 0; a uniform pressure profile at the outlet; and a parabolic, time dependent, normal inflow velocity profile at the inlet

u(t) = 4 ⋅ Uin(t) ⋅ s ⋅ (1 − s),          (9)

where s is the curvilinear coordinate along the inlet boundary (a segment in the 2D model). To emulate the mass flow produced by the heart beat we use the velocity [3]

Uin(t) = (1/2) ⋅ U0 ⋅ [sin(ωt) + sin²(ωt)].          (10)

The pulsation ω is set to 2π [rad/s], which corresponds to a heart beat rate of 60 [beats/minute]. The mathematical model made of eqs. (5)-(7), the boundary conditions and the constitutive laws was solved by the finite element method (FEM), as implemented in COMSOL Multiphysics [5]. This numerical simulation environment is particularly suited for coupled, multiphysics problems such as the present magnetic field – magnetic fluid flow problem.
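A minimal sketch of the inlet boundary condition of eqs. (9)-(10) is given below, useful for checking the waveform rather than as the COMSOL boundary specification. The velocity scale U0 is an assumption here (the text quotes Uin ≈ 0.16 m/s only for the 3D example of Fig. 6), and s is taken as normalized to [0, 1] so that the parabolic factor 4s(1−s) peaks at 1.

```python
import numpy as np

def u_inlet(t, s, U0=0.16, omega=2.0 * np.pi):
    """Pulsatile parabolic inflow profile, eqs. (9)-(10).

    t : time [s]; s : normalized curvilinear coordinate, 0 <= s <= 1
    U0 : velocity scale [m/s] (assumed; borrowed from the Fig. 6 data)
    omega = 2*pi rad/s gives a heart rate of 60 beats/minute.
    """
    U_in = 0.5 * U0 * (np.sin(omega * t) + np.sin(omega * t) ** 2)
    return 4.0 * U_in * s * (1.0 - s)

# Velocity at the inlet centerline (s = 0.5) over one heart beat
t = np.linspace(0.0, 1.0, 101)
u_center = u_inlet(t, 0.5)
```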
III. NUMERICAL MODEL

The simplified 2D computational domain represents an idealized (straight, circular, rigid-walled) blood vessel, a permanent magnet, the surrounding tissue and air, Fig. 1,a.

Fig. 1 The FEM mesh made of quadratic Lagrange elements: a. FEM mesh discretized by the Delaunay technique; b. detail of the FEM mesh – the bifurcation region

An example of FEM mesh is shown in Fig. 1,b. Meshes made of approx. 6,000-8,000 Lagrange quadratic triangular elements (flow) and second order vector elements (magnetic field) provide mesh-independent numerical results.

IV. RESULTS AND DISCUSSION

The physical coupling between the two problems (magnetic field and flow) is one-way (through the magnetic body force term in the flow equation), hence we solve first for the magnetic field. Next, using this solution, we solve the unsteady hydrodynamic problem.
A. Arterial Aggregate Flow in an Arterial Vessel with One Level of Bifurcation

Figure 2 shows the flow in the absence of the external magnetic field. The results evidence two regions of intense flow: one immediately downstream of the bifurcation, and the upstream flow that adapts to the vessel constraints.
Fig. 2 The flow in the region of an arterial bifurcation in the absence of the external magnetic field – velocity field at t = 1 s

Figure 3 shows the effect of an external magnetic field (produced here by a permanent magnet).

Fig. 3 The flow in the region of an arterial bifurcation – t = 1 s

The flow is significantly modified, as seen from the recirculation cells. These secondary flows – which occur during the minimum mass flow sequence – recirculate the fluid aggregate, and may contribute to a more vigorous mass transfer of the active substance to the vessel walls. During the peak of the mass flow sequence, the effect of the magnetic field is minute, and the flow is virtually unconstrained. It should be noticed that, as expected, the magnetic body force term, eq. (8), is more important in regions with a nonuniform, higher gradient magnetic field. We conjecture that magnetic targeting (localization) may be enhanced by designing the magnetic field source such as to "spread" the field as much as possible (e.g., by utilizing the end-effect). This may be done, for instance, by using the geometric aspect ratio height/width of the magnet as an optimization parameter, while keeping the area (volume) constant. This optimization result is reported in [7]. Here we present a different layout for the magnetic field source (Fig. 4): a pair of permanent magnets 20 mm high and 10 mm wide, placed 1 mm from each other. Compared to the layout in Fig. 3, it can easily be seen that the magnetic field has a more focused and intense effect on the blood flow.

Fig. 4 Velocity field for the arterial flow with a single bifurcation – optimized magnetic field source [6] – velocity field at t = 1 s

The effect of the magnetic field and the accuracy of the numerical solutions were also investigated through the interaction between the flow and the blood vessel, namely the wall shear stress. Figure 5 shows this quantity for the cases reported in Figs. 2, 3 and 4.

Fig. 5 The wall shear stress for the arterial flow: a. in the absence of the external magnetic field (Fig. 2); b. in the presence of an external magnetic field (Fig. 3); c. optimized magnetic field source (Fig. 4)

These results evidence the paired permanent magnet configuration as having a precise and strong effect upon the blood flow. The magnetic field is highly concentrated strictly within its action range, only slightly affecting neighboring areas.

B. The 3D Model

More realistic hemodynamic studies may be conducted on 3D models of the blood vessels that rely on computational domains obtained via imagistic reconstruction based on DICOM image sets acquired via angio-MRI [6]. We conducted such a study and the results are reported in [8]. The computational domain, generated using the Simpleware package [9], was imported in COMSOL, where the arterial flow was simulated under steady and unsteady (pulsatile) mass transport conditions. Figure 6 exemplifies this study.
Fig. 6 Arterial flow and stress for steady state flow – the pressure drop is ~160 Pa, Uin ~ 0.16 m/s: a. FEM mesh made of Lagrange tetrahedral elements; b. total force per surface area; c. velocity field [8]

The magnetic targeting for this 3D model is the object of future research.

V. CONCLUSIONS

In this paper we report a study of the effect that an external magnetic field may have on the arterial flow of the blood – magnetic carrier aggregate. Several simplified 2D models were analyzed. Using the numerical simulation approach, we searched for and identified an optimal permanent magnet configuration. This magnet design provides for an increase of the residence time of the magnetic carrier in the targeted region. During the time span that corresponds to the minimum mass flow rate phase, a more vigorous recirculation flow is stimulated, which facilitates the mass transport of the magnetic carrier from the stream to the vessel walls and contributes to increasing the efficiency of drug delivery. A more accurate analysis may be conducted by using 3D models generated by image reconstruction techniques. This paper reports the arterial flow simulated by this approach. The analysis of magnetic field targeting in this situation is the object of future research.

ACKNOWLEDGMENT

The work was conducted in the Laboratory for Electrical Engineering in Medicine – Multiphysics Models, the BIOINGTEH platform, at UPB. Part of this research was developed within the framework of the research grant "New cardiovascular planning and diagnostic tool for coronary arteries in BSEC countries using computational simulation".

REFERENCES

1. Voltairas P.A., Fotiadis D.I., and Michalis L.K., "Hydrodynamics of Magnetic Drug Targeting," J. Biomech., 35, pp. 813–821, 2002.
2. Pankhurst Q.A., Connolly J., Jones S.K., Dobson J., Applications of magnetic nanoparticles in biomedicine—Topical review, J. Phys. D: Appl. Phys. 36, pp. 167–181 (2003).
3. Oldenburg C.M., Borglin S.E., and Moridis G.J., "Numerical Simulation of Ferrofluid Flow for Subsurface Environmental Engineering Applications," Transport in Porous Media, 38, pp. 319–344 (2000).
4. Rosensweig R.E., Ferrohydrodynamics, Dover Publications, New York (1997).
5. COMSOL Multiphysics, v. 3.5a, COMSOL A.B., Sweden (2009).
6. Masumoto T., Hayashi N., Mori H., Aoki S., Abe O., Ohtomo K., Kaji N., Takahashi T., and Abe T., "Initial clinical experience with a hybrid interventional angio-MRI system", Proc. Intl. Soc. Mag. Reson. Med. 11, 2693 (2004).
7. Dobre A., Magnetic field control on transport processes in the arterial system, Diploma Thesis, University POLITEHNICA of Bucharest, June 2009.
8. Morega A.M., Dobre A., Morega M., and Mocanu D., "Computational Modeling of Arterial Blood Flow", MediTech International Conference, 23-26 September 2009, Cluj-Napoca, Romania.
9. Simpleware v. 3.2, Simpleware Ltd., UK (2009).
Author: Alexandru Morega
Institute: University POLITEHNICA of Bucharest
Street: Splaiul Independenţei nr. 313, sector 6
City: Bucharest, 060042
Country: ROMANIA
Email: [email protected]
Ontology for Modeling Interaction in Ambient Assisted Living Environments

J.B. Mocholí, P. Sala, C. Fernández-Llatas and J.C. Naranjo

TSB-ITACA, Universidad Politécnica de Valencia, Spain
Abstract— This paper describes a set of ontologies created in the framework of the European project VAALID that allows designers of Ambient Assisted Living (AAL) services to model and characterize an AAL environment, the involved actors and different kinds of spaces and devices. These ontologies also include aspects related to interactions among the different elements that have been defined in the modelled AAL solution. Interactions are described in terms of capabilities of each element by means of the Common Accessibility Profile. Keywords— Ontology, Modeling framework, AAL services, Accessibility validation, Human Computer Interaction.
I. INTRODUCTION
Ambient Assisted Living refers to electronic environments that are sensitive and responsive to the presence of people and provide assistive propositions for maintaining an independent lifestyle. The design of user interaction in AAL services is perhaps the most complex interaction design that a usability engineer can deal with, because the challenge for the designer is to experiment with new and innovative modalities of interaction that must be, of course, accessible and usable.

AAL services and solutions are gaining more and more presence in the common vocabulary; the reasons for this popularity are diverse, and one of the most recurrent is related to ageing and to helping the elderly maintain independent living at home. It rests on the fact that the population of Europe is ageing and that the number of people with impairments and disabilities is also increasing. As a result of prolonged life expectancies, 61.4 million inhabitants of the EU27 are expected to be aged 80 and over in 2060, almost a threefold increase compared with 21.8 million in 2008 [1]. At the same time there is a growing acknowledgement of the need to integrate older people and people with disabilities into society by enabling them to sustain their independence for as long as possible. The accessibility of products and services is an absolute prerequisite for the inclusion of the elderly and persons with disabilities in our modern information and communication society. Consequently, it is necessary to radically improve the accessibility and usability of new Information and Communication Technologies (ICT) solutions. Even though accessibility is more and more taken into account in ICT applications, a tool that facilitates the validation of accessibility in AAL systems from the first step of the design is still missing. Validation before deploying any AAL solution saves large amounts of money and time, and permits personalization and a better matching of the requirements and necessities of the end user. Therefore, it is necessary to have tools that allow designers to model and simulate the final AAL scenario.

Ontologies are used to model the basic conceptual terms and the semantics of these terms, and to define the relationships among them. The ontologies described in this paper have been developed in the framework of the European project VAALID [2]. The VAALID project aims at creating an open and descriptive formal model to define "the users", "the environments" and their "interactions" in AAL services and solutions [3]. The VAALID project intends to develop tools to assess that AAL products and services fulfill the accessibility and usability requirements for this new combination of interaction modalities when used in a service design. In VAALID, ontologies are used to provide a powerful and growing specification of the concepts involved in a user interaction scenario, and workflows are used to specify the service and the behavior of the environment.
II. MATERIALS AND METHODS
AAL services, like other ambient intelligence solutions, need to model the context in which the user is involved, this being one of their main goals. Context information is typically related to the environment, the users, the devices and descriptions of the available services. The information collected refers to:

• Information about the user concerning his/her profile (anthropologic, demographic), social relationships, activities (of all types: sports, hobbies, ...), etc.
• Information about the environment concerning the objects, software, localization (geographical or abstract, global or local), environmental conditions (lighting, humidity, noise, temperature, etc.), etc.
• Information about services, dealing with the description of their functionalities, the flow, the parameters, the type of invocation, etc.
• Information about the devices regarding potential users, network connections, CPU, display features, the type of device, whether it describes a specialized device like a sensor or not, the type and value of the signal measured, etc.
This information allows the designer to define flexible AAL services with a level of detail as high as needed, by describing the potential users (who will consume the applications and services offered), the localization, the activity of the users, their preferences, the time stamp, etc.

Several studies have been performed in the area of modeling contexts and environments; most of them relate to the definition of Domotic applications. In such studies, the concepts are related to the definition of spaces (rooms, walls, geo-localization, …), the definition of the objects placed inside the spaces (furniture, appliances, sensors, …), and sometimes the functionalities and services that some objects provide (devices, sensors, actuators, …). Concerning the definition of the features of users, several attempts have been made to define the nature of age-related changes, more precisely to categorize them as pathological versus normal, but due to the heterogeneity of the elderly population this is a difficult matter. According to the "Design for All" guidelines for ICT products and services developed by ETSI [4], the attributes that have a direct impact on the successful use of ICT products and services include:

• Sensory abilities such as seeing, hearing, touch, taste, smell and balance.
• Physical abilities such as speech, dexterity, manipulation, mobility, strength and endurance.
• Cognitive abilities such as intellect, memory, language and literacy.
The International Classification of Functioning, Disability, and Health (ICF) [5] covers all these aspects. ICF attempts to combine what is true of both the medical and social approaches without reducing the entire notion of disability to one or the other aspect. ICF describes disabilities and functioning as the product of interactions between health conditions and contextual factors, each of which has an impact on how disability is experienced by the person [6]. The descriptors used by ICF are divided into four main groups: Body Functions (Mental functions; Sensory functions and pain; Voice and speech functions; …), Body Structures (Structures of the nervous system; The eye, ear and related structures; Structures related to movement; …), Activities and Participation (Learning and applying knowledge; Communication; Mobility; …), and Environmental Factors (Products and Technology; Natural environment; Attitudes; …). Each descriptor is categorized by using one or more qualifiers, the ICF Qualifier being the most used. The ICF Qualifier is used to indicate the extent or magnitude of an impairment. The values and meanings of the qualifiers are:

• 0 means NO impairment (none, absence, ...), and represents a magnitude of 0-4 %.
• 1 means MILD impairment (slight, low, ...), 5-24 %.
• 2 means MODERATE impairment (medium, fair, ...), 25-49 %.
• 3 means SEVERE impairment (high, extreme, ...), 50-95 %.
• 4 means COMPLETE impairment (total, ...), 96-100 %.
• 8 means not specified.
• 9 means not applicable.
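The qualifier scale above maps directly onto a small lookup table. The sketch below is a hypothetical helper, not part of the VAALID ontologies, that returns the ICF qualifier code for a given impairment magnitude.

```python
# ICF qualifier codes with their labels and magnitude ranges (in %),
# as listed above; 8 and 9 carry no magnitude range.
ICF_QUALIFIERS = {
    0: ("NO impairment", (0, 4)),
    1: ("MILD impairment", (5, 24)),
    2: ("MODERATE impairment", (25, 49)),
    3: ("SEVERE impairment", (50, 95)),
    4: ("COMPLETE impairment", (96, 100)),
    8: ("not specified", None),
    9: ("not applicable", None),
}

def qualifier_for_magnitude(percent):
    """Return the ICF qualifier code for an impairment magnitude in %."""
    for code, (_, bounds) in ICF_QUALIFIERS.items():
        if bounds and bounds[0] <= percent <= bounds[1]:
            return code
    return 8  # not specified

assert qualifier_for_magnitude(30) == 2  # MODERATE impairment
```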
On the topic of defining interaction and accessibility, several approaches have been produced. Among them are the vision provided by Richter and Hellenschmidt in [7] and the vocabularies described by Obrenović in [8] and [9]. All these approaches describe interactions; however, a wider expressivity is needed in order to deal with the description of all the possible features of a user and the functionalities provided by devices and systems in terms of accessibility constraints. Concerning this point, Fourney presented the Common Accessibility Profile (CAP) [10]; Fourney defines CAP as a framework for identifying the accessibility issues of individual users with particular system configurations, defining and describing the needs and capabilities of systems, devices and users to communicate among them. CAP has been taken as the basis of the standard ISO/IEC 24756:2009 [11] (this standard defines CAP as Common Access Profile). CAP can be used to evaluate the accessibility of systems, services or solutions deployed in an environment for a specific user.
III. RESULTS
Based on the resources described in the previous section, VAALID has developed a set of ontologies that are used to model the environment, the user and the interactions between them. This section summarizes the work done.

A. Modeling the environment

In order to model the environment, VAALID has taken as a basis the concepts and terms related to the Domotic domain. However, the huge number of possible terms and concepts related to this domain cannot be managed efficiently. For this reason VAALID has simplified the concepts by defining Environments as entities that can comprise several Floor levels; each Floor level can be composed of several Spaces, and each of these Spaces can be categorized as a Room, Stairs or a Ramp. Spaces have Dimensions and Environmental Conditions (temperature, humidity, …) and can be bounded by a Polygon. Spaces can contain several objects or Elements; Elements are categorized as Controllable or Uncontrollable. Controllable elements can define States and Services (functionality) and can communicate events. VAALID has categorized Controllable elements as Appliances (white and brown goods), Devices (Sensors and Actuators), and Lighting. Uncontrollable elements have been categorized as Junctions (Doors and Windows) and Furniture (Beds, Tables, Closets, …).

B. Modeling the user

When defining a complete AAL service, in addition to modeling the context and the environment, it is also necessary to model the target user, in other words, to identify and characterize his or her abilities. Most current AAL services and solutions are addressed to the elderly and people with special requirements, disabilities or impairments; therefore it is also necessary to use concepts defining the characteristics and functionalities of the user to be modeled. Accordingly, the VAALID project has identified the elderly as the end beneficiaries of the VAALID framework. As a result, in addition to demographic (name, address, etc.) and anthropological data (age, gender, height, weight, etc.), the user is also characterized by habits and abilities.

The different types of habits identified and defined in the user model ontology were selected in the same way as the other concepts used in VAALID, that is, based on the idea of selecting concepts that can be useful when creating AAL solutions. That is why some habits were discarded, like those related to smoking and alcohol consumption. Following this approach, the concepts related to habits that were identified and added to the user model ontology were those related to medication intake, sleep-related habits and home-related tasks; all these habits can be used in an AAL service to determine whether the user is doing something out of the ordinary. For example, if the user usually wakes up at 8 am, an AAL service can use this datum to perform a related action if the user stays in bed more than a given time after 8 am. Sleep-related habits were classified as the usual times for going to bed, waking up, or taking a nap, whereas home-related tasks were classified as the usual timetables for going shopping, cooking, meals, dressing, washing, cleaning the home, etc.

In relation to the definition of user abilities, VAALID has chosen the ICF. The concepts defined by the ICF relevant to the simulation of interactions were adopted and defined in an ontology for user modeling. The concepts define the user's features regarding Sensory, Physical and Cognitive abilities, in terms of ICF Qualifiers. Sensory abilities refer to Seeing (Visual Field, Visual Acuity and Quality of Vision), Hearing (Sound and Speech discrimination), Balance and Touch (Temperature, Vibration and Pressure). Physical abilities refer to Endurance, Manipulation (Lifting, Carrying or Putting down objects), Speech (Production of sounds and Production of Speech sounds), Strength, Dexterity (Pulling, Catching, Pushing, …) and Mobility (Voluntary and Involuntary Movements). Finally, Cognitive abilities refer to Intellect functions, Attention (Sustaining, Shifting or Sharing), Orientation (Time, Place and Person), Language (Reception and Expression of language) and Memory (Short and Long term).

C. Modeling interactions

In order to model interaction VAALID has used the CAP. Fig. 1 depicts how this model is used in VAALID.

Fig. 1 Interaction Model

The overall CAP of the AAL service to be modeled and used during the simulation is composed of the CAP of the user (CAPuse), the CAP of the devices (CAPsys and CAPat) and the CAP of the environment (CAPenv). By following this approach, a set of CAPuse will be used in VAALID in order to model the interaction capabilities of a user; these CAPuse are aligned (partially automatically) with the description given using the ICF descriptors. A CAP is specified by describing Interacting Components (IC) that have Component Features (CF), and a CF can express a set of Capabilities. These CF can be features related to an Input Receptor (IR), an Output Transmitter (OT) or the Processing Functions (PF) that transform IR into OT. IRs and OTs involve an IC for which a direction (In, Out or Dual), a modality (Visual, Auditory or Tactile) and a set of properties are defined; the properties define a media, a language and an interaction style. The definition of the IC and its properties (direction, modality and media) makes it possible to perform matching and constraint checking automatically by creating a simple set of rules. A simple example of how this works is the following: a designer is developing an AAL service with an auditory alarm (and fills in the CAP of the system, the CAP of the devices, …), and selects a user with auditory problems (described with the ICF descriptors and his CAP); then, before going to simulation, a warning will be thrown. In the same way the designer can test the solution with a vast variety of users (with different CAPs) and check how it adapts to the necessities of each user.
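A toy version of such a rule is sketched below. The class names and attributes are illustrative only (the actual CAP/ISO 24756 structure is richer, with IR/OT/PF features, media and interaction styles); the sketch merely shows how a direction/modality match against the user's ICF-derived qualifiers can raise the warning described in the example.

```python
from dataclasses import dataclass

SEVERE, COMPLETE = 3, 4  # ICF qualifier codes

@dataclass
class InteractingComponent:
    direction: str   # "In", "Out" or "Dual"
    modality: str    # "Visual", "Auditory" or "Tactile"

@dataclass
class UserCAP:
    # ICF qualifier (0-4) per perceiving modality, e.g. {"Auditory": 3}
    impairment: dict

def check_accessibility(device_ics, user):
    """Warn for every device output the user may not perceive."""
    warnings = []
    for ic in device_ics:
        if ic.direction in ("Out", "Dual"):
            q = user.impairment.get(ic.modality, 0)
            if q >= SEVERE:
                warnings.append(
                    f"{ic.modality} output may be inaccessible "
                    f"(ICF qualifier {q})")
    return warnings

# The auditory-alarm example from the text:
alarm = InteractingComponent(direction="Out", modality="Auditory")
user = UserCAP(impairment={"Auditory": SEVERE})
print(check_accessibility([alarm], user))
```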
IV. CONCLUSIONS
In this paper an ontology to model AAL services was presented. This ontology collects concepts to model the environment, the context, the user involved, and especially the concepts related to modeling interactions. In order to define the possible interactions among the elements of an AAL solution, the Common Accessibility Profile has been adopted.
ACKNOWLEDGMENT

The authors wish to thank the European Commission for the project funding and the VAALID consortium for their support.
REFERENCES

1. Giannakouris K (2008) Ageing characterises the demographic perspectives of the European societies. Eurostat, Statistics in Focus, Issue 72/2008
2. VAALID Consortium. VAALID Project: Accessibility and Usability Validation Framework for AAL Interaction Design Process. http://www.vaalid-project.org 2008-2010
3. Naranjo J C, Fernández C, Sala P et al. (2009) A modelling framework for Ambient Assisted Living validation. Universal Access in HCI, Part II, HCII 2009, LNCS 5615:228-237
4. ETSI, EG 202 116 v1.2.1 (2002-09): Human Factors (HF); Guidelines for ICT products and services; "Design for All" at http://www.etsi.org
5. World Health Organization (2001) International Classification of Functioning, Disability, and Health (ICF) at http://www.who.int/classiffications/icf/en/
6. World Health Organization (2002) Towards a common language for functioning, disability and health: ICF. Training materials, ICF training Beginner's Guide at http://www.who.int/classiffications/icf/en/
7. Richter K, Hellenschmidt M (2004) Interacting with the Ambience: Multimodal Interaction and Ambient Intelligence, Proc of the W3C Workshop on Multimodal Interaction 2004
8. Obrenović Ž, Abascal J, Starčević D (2007) Universal Accessibility as a Multimodal Design Issue, Communications of the ACM Vol. 50 Issue 5:83-88
9. Obrenović Ž, Troncy R, Hardman L (2007) Vocabularies for Description of Accessibility Issues in Multimodal User Interfaces, Proc. of Workshop on Multimodal Output Generation, MOG'07, Aberdeen, Scotland, UK, 2007, pp 117-128
10. Fourney D (2007) Using a common accessibility profile to improve accessibility. Master Thesis submitted to the College of Graduate Studies and Research, University of Saskatchewan, Saskatoon, Canada
11. ISO/IEC 24756:2009, Information technology - Framework for specifying a common access profile (CAP) of needs and capabilities of users, systems, and their environments at http://www.iso.org/
Author: Juan B. Mocholí
Institute: ITACA – Universidad Politécnica de Valencia
Street: Camino de Vera s/n
City: Valencia
Country: Spain
Email: [email protected]
Protein Surface Atom Neighborhood Functional Description

P.D. Cristea, R. Tuduce, and O. Arsene

University "Politehnica" of Bucharest, Biomedical Engineering Center, Bucharest, Romania

Abstract— The paper presents an image-oriented modality to describe artificial and biological nanostructured surfaces, with specific applicability to the functional characterization of atom neighborhoods on the surface of proteins. The considered properties include the hydrophobicity around each surface atom. The actual discrete hydrophobicity distribution attached to the atoms that form the given atom's vicinity is replaced by an approximately equivalent hydrophobicity density distribution, computed in a standardized hexagonal or octagonal pattern around the atom. The purpose of this work is to create a database of molecular surfaces that will be used in several nanotechnology research fields.

Keywords— Molecular surface, Hydrophobicity, Molecular local resemblance, Molecular local interaction.
I. INTRODUCTION

The Protein Data Bank (PDB) [1] is currently the most widely used protein molecule repository. The PDB format contains a description of the molecule structure and properties, based on an internationally adopted standard [2]. We have analyzed a set of molecules from the PDB, selected by a team at the University of Liverpool (see the acknowledgement). The molecular solvent-excluded surface (Connolly surface) has been computed using the Connolly algorithm [3, 4] and the atom hydrophobicity was established as in [5-9]. The surface of such molecules contains hundreds to several thousands of atoms. Most proteins have about 12,000 atoms, mainly carbon (C), oxygen (O), nitrogen (N), sodium (Na) and sulfur (S), out of which about 4,000 are in the molecule surface domain. Each atom is described by a set of properties that includes its 3D coordinates, hydrophobicity, charge and radius [7, 8].

We introduce a new algorithm which uses a simplified, standardized description of the surface atom neighborhood. The large variety of discrete atom properties in the neighborhood of each surface atom is replaced with a distribution of surface densities of these properties, in a standardized hexagonal or octagonal frame. Each neighborhood frame is divided into equal area patches, at various resolutions. The density of a chosen property (e.g., the hydrophobicity) is computed cumulatively, by simple addition of the discrete values for all atoms in a patch. This approach generates a pattern which describes the molecule locally, in the proximity of an atom, from the point of view of the considered property [10].
II. DEFINITION OF LOCAL SURFACE PARAMETERS

The geometry of the neighborhood of a surface atom is described using the following basic parameters: Rsphere – the radius of the sphere which contains the atoms of the considered molecule; Rmax – the radius of the atom neighborhood; h – the average maximum distance at which atoms interact by hydrogen bonds, which determines a surface atom neighborhood. These parameters are linked by the relation

Rmax = √(Rsphere² − (Rsphere − h)²).          (1)

The distance between two atoms linked by a covalent bond, also called the inner radius r0, puts a lower limit on the circular domain in which the first order neighbors of an atom should be considered. We have chosen r0 = 1.45 Å, as an average of the most frequent covalent bond lengths in amino acids (C-N = 1.47 Å and C-O = 1.43 Å).
III. SURFACE ATOM NEIGHBORHOOD DESCRIPTION

The input of the algorithm we developed consists of a comma separated values (CSV) file containing the following surface atom properties: the type of each atom (C – carbon, O – oxygen, N – nitrogen, Na – sodium and S – sulfur), the amino acid to which it belongs, the position coordinates (x, y, z) and the hydrophobicity [7-9]. We have locally approximated the molecule surface around an atom by a sphere. We considered as general fixed parameters h = 2 Å and r0 = 1.45 Å, based on the average molecular and atomic parameters. The radius Rsphere of the sphere containing all the atoms of a molecule is found on the basis of the known coordinates of the molecule's atoms. Using the neighborhood geometry and equation (1), one determines Rmax, and thus the segment of the molecule containing the atoms in the neighborhood of a considered atom. The atoms in this volume contribute by their hydrophobicities to the molecule's interaction with an ideal flat wall.

The forces determined by the local hydrophobic/hydrophilic character of molecules are among the main forces in protein folding and in protein mutual interaction. There are more than 80 different published hydrophobicity scales. Fortunately, the various scales provide values which vary quite monotonously with respect to each other, so that most scales can be interchanged without essential contradictions, and, on the other hand, it is possible to approximately derive an atom-level hydrophobicity [9]. The atomic hydrophobicities can be used to describe the distribution of hydrophobicity on the surface of a protein and even to design artificial surfaces that mimic biomolecular surfaces and therefore elicit an expected activity from immobilized biomolecules. Consequently, we have used the scales of atomic hydrophobicities derived by M. Held and D. Nicolau [9], which are based on molecular dynamics simulations of peptides in an aqueous environment.

To reduce the extremely large variety of possible distributions of hydrophobicities attached to surface atoms, we have built a regular hexagonal or octagonal frame to partition the surface atom neighborhood into equal area patches characterized by constant hydrophobicity densities, as shown in figure 1 a. This approach approximates the discrete hydrophobicity distribution with a standardized piecewise-constant distribution of hydrophobicity densities. To avoid difficulties in computing comparable property distributions and interaction intensities for molecules of different sizes, we have used a unique reference frame, which is independent of the size of the molecules [10]. We have divided the surface of the flattened atom neighborhood into equal area patches. The set starts with the innermost circle (radius R1 ≥ r0), containing – at the considered resolution – the central atom and its closest neighbors. It continues with sets of eight octagonal (or six hexagonal) segments of the circular lanes delimited by the family of circles r0 ≤ R1 < R2 < ... < Rnmax ≤ Rmax. With the notation in Figure 1 a, from the equal area condition

A1 = A2,0 = A2,1 = ... = A2,7 = A3,0 = A3,1 = ... = A3,7 = ...,          (2)

one finds the radii of the successive frame circles:

Rn = R1 ⋅ √(a ⋅ (n − 1) + 1),          (3)

where a is the chosen number of polygon vertices (e.g., 6 or 8) and n is the circle index, going from 2 to nmax – the highest resolution of the description (typically, 8 or 9), with

nmax = [((Rmax / r0)² − 1) ⋅ (1/a)] + 1.          (4)
The atoms within the volume defined by the h and Rmax parameters have been used to compute the density of hydrophobicity in each of the patches of the atom neighborhood.
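The patch geometry of eqs. (1)-(4) and the cumulative density computation translate into a few lines of code. The sketch below is an illustration, not the authors' implementation; it assumes the neighbor atoms have already been projected onto the flattened neighborhood plane, and it uses R1 in place of r0 in eq. (4) (they coincide at the highest resolution).

```python
import numpy as np

def patch_radii(R1, Rmax, a=8):
    """Radii of the equal-area frame circles, eqs. (3)-(4).

    R1 : radius of the innermost circle (R1 >= r0); a : number of
    sectors per lane (6 for hexagonal, 8 for octagonal frames).
    """
    n_max = int(((Rmax / R1) ** 2 - 1) / a) + 1
    n = np.arange(1, n_max + 1)
    return R1 * np.sqrt(a * (n - 1) + 1)

def patch_index(x, y, R1, radii, a=8):
    """Map a flattened neighbor position to its patch (lane, sector)."""
    r = np.hypot(x, y)
    if r <= R1:
        return (1, 0)  # central patch
    lane = int(np.searchsorted(radii, r)) + 1
    sector = int(np.arctan2(y, x) % (2 * np.pi) // (2 * np.pi / a))
    return (lane, sector)

def hydrophobicity_density(neighbors, R1, Rmax, a=8):
    """Sum atomic hydrophobicities per patch (equal areas, fixed units).

    neighbors : iterable of (x, y, hydrophobicity) tuples in the
    flattened neighborhood plane, with the central atom at the origin.
    """
    radii = patch_radii(R1, Rmax, a)
    H = {}
    for x, y, h in neighbors:
        key = patch_index(x, y, R1, radii, a)
        H[key] = H.get(key, 0.0) + h
    return H
```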
Fig. 1 a. Octagonal standardized pattern; b. Sample of an atom-loaded flattened neighborhood area for a surface atom

For the example in figure 1 b, the nitrogen atom N is the central atom, and the other atoms are the neighbor atoms that determine the hydrophobicity densities in the 24 patches. As the area of every patch is the same, we simply compute the hydrophobicity density of a patch (in fixed units) as the sum of the hydrophobicities of all atoms within that area.

To illustrate this type of local surface description, we give in figure 2 the distribution of the surface atoms around the carbon atom with index 10 of the Human Immunoglobulin G (IgG) b12, which has the protein index 1HZH in the PDB protein database [1]. The human antibody IgG1 b12 recognizes the CD4-binding site of human immunodeficiency virus-1 (HIV-1) gp120 and is one of only two known antibodies against gp120 capable of broad and potent neutralization of primary HIV-1 isolates. A key feature of the antibody-combining site is the protruding, finger-like long CDR H3 that can penetrate the recessed CD4-binding site of gp120. Figure 2a shows the point representation of the surface atoms, specifying their types by the shading. The atoms are concentrated in the lower part of the field, the upper vicinity being almost empty because of the distorted local shape of the protein surface around the atom C index 10. Figures 2b and 2c give the neighborhood of the same C atom in a standardized hexagonal and octagonal frame, respectively, in which the hydrophobicity density is represented by colors or shades. The scale goes from full hydrophobicity (conventionally represented by red) to full hydrophilicity (conventionally represented by blue), passing through the neutral state (zero hydrophobicity, conventionally represented by white if hydrophobic and hydrophilic atoms compensate each other in a given area, or by black if atoms are simply missing).
Fig. 2 a. Distribution of surface atoms in the neighborhood of the C atom index 10 of Human Immunoglobulin G b12 antibody (index 1HZH in PDB); b and c. Description of the neighborhood in standardized hexagonal and octagonal panels, respectively
IV. RESULTS

To locally compare and relate the hydrophobicities of two molecules for the neighborhoods centered on any pair of atoms (A and B) belonging to the surfaces of the two molecules (or of the same one), we have defined two magnitudes: (1) the similitude and (2) the interaction. These magnitudes depend on the relative orientations of the two molecules around the common normal in the centers of the two atom neighborhoods. For each pair of atoms A and B, there are a = 6 (for hexagonal frames) or a = 8 (for octagonal frames) different values, for the corresponding possible rotations. Both the similitude and the interaction are computed in terms of the hydrophobicities of the neighborhoods of the considered pair of atoms A and B, and are defined in terms of the resemblance R(A, B, h) of the atom neighborhoods, equal to the sum of the products of the hydrophobicity densities in the patches having the same positions:

R(A, B, h) = HA(1) HB(1) + Σ HA(n, k) HB(n, k ⊕ h),  h ∈ {0, …, a − 1},          (5)

where the sum is taken over n ∈ {2, …, nmax} and k ∈ {0, …, a − 1}; HA(1) and HB(1) are the hydrophobicities of the two central patches of radius R1; HX(n, k), X ∈ {A, B}, are the hydrophobicities of the patches between the radii Rn−1 and Rn, nmax being the resolution, in the angular sector k; and h is the measure of the relative rotation around the common normal. The symbol ⊕ designates here the sum modulo a. The difference between the similitude and the interaction is determined by the mutual orientation of the two common normals on the molecular surfaces, which are parallel for the similitude, when one compares the distribution of hydrophobicities for similarly oriented surfaces, and anti-parallel for the interaction, when the molecules face one another. For each pair of surface atoms A and B, a maximum interaction (and a maximum similitude) can be defined, starting from the maximum resemblance

Rmax(A, B) = max over h ∈ {0, …, a − 1} of R(A, B, h),          (6)

which takes into account all mutual orientations of the molecules around the common normal. For illustration, figure 3 gives the maximum interaction distribution (the hodograph) for all the pairs of surface atoms of the proteins 135L and 1HZH [1]. Notice that the attractive (positive) interactions are given by the hydrophilic–hydrophilic or hydrophobic–hydrophobic interactions between the molecular surfaces, while the repulsive (negative) interactions occur between hydrophobic–hydrophilic surface atom neighborhoods. The hodograph in figure 3 gives the number of instances of the various values of the maximum attraction interaction between the two considered proteins, for any pair of atoms on their surfaces and for any mutual orientation. These interactions are truly attractive when positive, but repulsive when negative.

Fig. 3 Maximum interaction hodograph for all pairs of surface atoms of the proteins 135L and 1HZH [1]

Analogously, one can define for each pair of surface atoms A and B a minimum interaction (and a minimum similitude), on the basis of the minimum resemblance, which takes into account all mutual orientations of the molecules around the common normal:

Rmin(A, B) = min over h ∈ {0, …, a − 1} of R(A, B, h).          (7)
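Equations (5)-(8) reduce to simple array operations once the patch densities are stored as a (lane, sector) matrix. The following sketch is an illustrative implementation assuming those conventions; for the interaction (anti-parallel normals), one of the neighborhoods would first have to be mirrored, a step not shown here.

```python
def resemblance(HA, HB, h):
    """Eq. (5): resemblance of two neighborhoods at relative rotation h.

    HA, HB : (nmax, a) nested lists/arrays of patch hydrophobicity
    densities; row 0 holds the central patch value in column 0.
    """
    a = len(HA[0])
    r = HA[0][0] * HB[0][0]
    for n in range(1, len(HA)):                # lanes 2..nmax
        for k in range(a):
            r += HA[n][k] * HB[n][(k + h) % a]  # k (+) h, modulo a
    return r

def extreme_resemblances(HA, HB):
    """Eqs. (6)-(8): maximum, minimum and maximum-absolute resemblance."""
    a = len(HA[0])
    values = [resemblance(HA, HB, h) for h in range(a)]
    return max(values), min(values), max(values, key=abs)
```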
Figure 4 gives the hodograph of the minimum interactions for all the pairs of surface atoms of the same proteins. These interactions are repulsive when negative, but attractive when positive.

Fig. 4 Minimum interaction hodograph for all pairs of surface atoms of the proteins 135L and 1HZH [1]

Finally, one can use the maximum absolute resemblance, defined by

Rmax abs(A, B) = sgn[R(A, B, hmax abs)] ⋅ max over h ∈ {0, …, a − 1} of |R(A, B, h)|,          (8)

where hmax abs is the argument that maximizes the absolute value |R(A, B, h)| over all mutual orientations of the two molecules.

Fig. 5 Maximum absolute resemblance hodograph for 135L and 1HZH

Figure 5 shows the hodograph of the maximum absolute interactions for all the pairs of surface atoms of the same two proteins considered before. The distribution refers to the interactions maximum in absolute value, both positive – attractive, and negative – repulsive.

V. CONCLUSIONS

The study of molecular surfaces will continue with the classification of local molecule properties using the resemblance and the interaction, taking into account not only the global values of these magnitudes and the hodographs of their distribution over all the surface atoms, but also the actual structure of the surface atom neighborhoods. Such a classification will make it possible to predict the behavior of protein molecules when interacting with each other or with a nanostructured wall.

ACKNOWLEDGMENT

The work was partially supported by the project 214538 – 2008 - BISNES – "Bio-Inspired Self-assembled Nano-Enabled Surfaces", in the framework of the NMP-2007-1.1-2 Self-assembling and self-organisation, NMP-2007-1.1-1 Nano-scale mechanisms of bio/non-bio interactions.

REFERENCES

[1] Protein Data Bank [Online]. Available: http://www.rcsb.org/pdb/home/home.doc
[2] Lodish H., Berk A., Molecular Cell Biology, W.H. Freeman, 5th edition, 2007.
[3] Connolly M. L., MS: Molecular Surface Program, QCPE Program 429, Quantum Chemistry Program Exchange, Univ. of Indiana, Bloomington, 1983.
[4] Connolly M. L., Molecular Surfaces: A Review, 1996; Karplus P. A., Hydrophobicity regained, Protein Science, Vol. 6, 1997.
[5] Cornette J. L. et al., Hydrophobicity scales and computational techniques for detecting amphipathic structures in proteins, J. Mol. Biol., Vol. 195, 1987, pp. 659-685.
[6] Nicolau D. V. and Nicolau D. V., A database comprising biomolecular descriptors relevant to protein adsorption on microarray surfaces, Proc. SPIE 4626, 2002, pp. 109-115.
[7] Nicolau D. V., Fulga F. and Nicolau D. V., A new program to compute the surface properties of biomolecules, Asia-Pacific Biotech, 7(3), 2003, pp. 29-34.
[8] Nicolau D. V. and Nicolau D. V., Towards a theory of protein adsorption: predicting the adsorption of proteins on surfaces using a piecewise linear model validated using the Biomolecular Adsorption Database, 2nd Asia-Pacific Bioinformatics Conference; Conferences in Research and Practice in Information Technology, Vol. 29, 2004.
[9] Held M., Nicolau D. V., Estimation of atomic hydrophobicities using molecular dynamics simulation of peptides, Proc. of SPIE Vol. 6799, 2007.
[10] Cristea P. D., Tuduce R., Arsene O., Dinca A., Nicolau D. V., Fulga F., Modeling of Biological Nanostructured Surfaces, Proc. of SPIE Vol. 7574, Nanoscale Imaging, Sensing, and Actuation for Biomedical Applications VII, San Francisco, USA, 2010.

Author: Paul Dan Cristea
Institute: University "Politehnica" of Bucharest
Street: Spl. Independentei 313, sect 6
City: Bucharest
Country: Romania
Email: [email protected]
Performance Evaluation of a Grid-Based Heart Simulator
R.S. Campos1, M.P. Xavier1, M. Lobosco1 and R.W. dos Santos1
1 FISIOCOMP: Laboratory of Computational Physiology and High Performance Computing, Computer Science Department, Federal University of Juiz de Fora, Juiz de Fora, Brazil
Abstract— Over the last few years, computer models have become valuable tools for the understanding of complex biological phenomena, such as the electric propagation on cardiac tissue. We have developed a Heart Simulator that models this electric propagation as a non-linear system of partial differential equations. Solving this problem is computationally expensive even when a parallel approach is used. Depending on how accurate the simulation is, it can take many hours or even some days to compute a single heart beat. In addition, the simulation of some cardiac diseases, as well as the quantification of the cardiac response to drugs, demands thousands of single-beat simulations. This would only be feasible today on a Grid, an environment that provides computational resources that are remotely available through the Internet. In this work, we evaluate our Heart Simulator on a Grid environment. The preliminary performance tests suggest that the Heart Simulator is a promising tool and that the computational resources of the Grid may promote the use of new and more complex cardiac simulations.

Keywords— Heart Modeling, Grid Computing, Cardiac Electrophysiology
I. INTRODUCTION
In silico experiments have helped investigators to understand the multi-scale and multi-physics phenomena that underlie the complex biophysical structures and processes of the heart. They have allowed us to track the electro-mechanics of the heart from the sub-cellular to the whole-organ level, and to simulate many distinct pathologies such as Ventricular Arrhythmia, Myocarditis, Infarct, Chagas Disease, and Diabetes. Furthermore, there are some models that predict the heart response to drugs. But these simulations demand a lot of computational resources, and the execution of a simulation can take many hours, even using a parallel version of the code on a cluster of computers. Therefore, this kind of problem may only be tackled today using Grids, environments that provide access to thousands of computers through the Internet. In this work, we evaluate a cardiac simulator on a Grid environment. We have previously developed a parallel tool that simulates the electrical activity in cardiac tissue. This Heart Simulator is based on the bidomain equations, a set of non-
linear system of partial differential equations that models the intracellular and extracellular domains of cardiac tissue. The solution of the equations is a computationally expensive task due to the fine spatial and temporal discretization needed. The preliminary results reported in this work present the behavior of this application on a Grid environment. The overheads associated with the Grid execution, such as the time spent on queues and on moving files to and from the Grid, are also measured and presented. This paper is organized as follows. In the next section we present the Heart Simulator and the gridification process. Section III. presents our preliminary results while Section IV. concludes the work.
II. HEART SIMULATOR
The implementation of the Heart Simulator (HS) [1] is based on a set of non-linear partial differential equations (PDEs) that simulate the electric activity on the cardiac tissue, known as the bidomain equations [2], [3]. They describe the spread of the electrical wave through the whole cardiac tissue. The equations model both the intracellular and extracellular domains of the cardiac tissue. The coupling of the two domains is performed via non-linear models describing the current flow through the cell membrane, which leaves one domain to enter the other. The equations model how cellular electric potentials (transmembrane voltage, Vm, and extracellular potential, Ve) depend on the concentrations of several ionic species (Na+, K+, Ca2+), the conformation of several protein arrangements that cross the cell membrane (the ionic channels) and the associated ionic currents (Iion). The bidomain model is a reliable way to simulate the heart behavior and has been validated through animal experiments [4][5]. The model has also become a useful tool to understand defibrillation [6]. During fibrillation the cardiac electric activity works disorderly, so the heart muscle quivers instead of contracting. Defibrillation consists in applying electrical currents to the heart aiming at the restoration of the normal cardiac rhythm. The Heart Simulator involves the solution of three different sets of equations: a parabolic partial differential equation, an elliptic partial differential equation, and a non-linear system of ordinary differential equations.
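The paper does not reproduce the equations themselves; for reference, a standard statement of the bidomain system (following, e.g., [2], [3]) is sketched below, where σi and σe are the intracellular and extracellular conductivity tensors, χ the membrane surface-to-volume ratio, Cm the membrane capacitance, and η the vector of ionic state variables. The exact formulation used by the simulator may differ in details.

\chi \left( C_m \frac{\partial V_m}{\partial t} + I_{ion}(V_m, \eta) \right) = \nabla \cdot (\sigma_i \nabla V_m) + \nabla \cdot (\sigma_i \nabla V_e)

\nabla \cdot \left( (\sigma_i + \sigma_e) \nabla V_e \right) = -\nabla \cdot (\sigma_i \nabla V_m)

\frac{d\eta}{dt} = f(V_m, \eta)

The first (parabolic) equation and the third (the ODE system for the cell model) advance Vm and the ionic states in time, while the second (elliptic) equation recovers Ve, matching the three sets of equations listed above.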
To solve the partial differential equations, the Finite Element Method and the Conjugate Gradient algorithm are employed. Although a lot of numerical methods have been studied in order to make the simulations faster, parallel computing is still necessary. The parallel strategy of domain decomposition is adopted, and the implementation uses the MPI library. The HS was developed for GNU/Linux using the C programming language and the MPI (Message Passing Interface) and PETSc (Portable, Extensible Toolkit for Scientific Computation) libraries. PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. However, even using a cluster environment, solving these equations is still a time-consuming task, which makes a Grid infrastructure an important tool to improve the quality of the simulations, i.e., doing more realistic simulations in a shorter time.

A. Gridification Process
The application was gridified to the EELA Infrastructure [7]. The gLite middleware was adopted. This framework is responsible for submitting jobs to the grid, as well as for handling files and storage elements. It is also responsible for choosing the most suitable computational element to run a job. Other gLite services include security, the information system, workload management, storage, and cataloging and metadata. To access this grid infrastructure, the user is required to have an X.509 certificate issued by a Certification Authority. The certificate gives the user access permissions to different grid resources, identifies users and their data, and guarantees security. This certificate is stored in the user interface, a machine that the user can access through SSH, and it is used to start a proxy, a temporary certificate digitally signed by the grid user. Proxies are required for any operation on the grid environment, such as submitting jobs and retrieving job results. Before submitting a job, its characteristics must be described using the Job Description Language (JDL). A special file is created for this purpose, describing the job characteristics and requisites, such as the number of cores required, whether the application uses MPI, the name of the executable file, its parameters, and the input and output files required. After sending the job through the gLite command line, the middleware searches for the most suitable computational elements to run the job, according to the requirements described in the JDL file.
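For illustration, such a JDL file for an MPI job might look like the sketch below. The attribute names (Type, JobType, NodeNumber, the sandboxes) are standard gLite JDL; the file names, node count and argument values are hypothetical, not taken from the paper.

Type = "Job";
JobType = "MPICH";
NodeNumber = 16;
Executable = "mpi-wrapper.sh";
Arguments = "heart_simulator simulation.cfg";
StdOutput = "std.out";
StdError = "std.err";
InputSandbox = {"mpi-wrapper.sh", "simulation.cfg"};
OutputSandbox = {"std.out", "std.err"};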
The direct transfer of files between the grid front end and the working nodes is limited to 10 MB. This restriction affected the porting of our application, since the simulator requires input and output files of about 100 MB. The solution to this problem was to use the so-called Storage Elements (SE) to store the required files. The SE is the grid's virtual distributed file system, and it does not impose any restriction regarding file sizes. We developed a script that, at each execution of our simulator, copies all required files from the SE to each working node. This script is also responsible for changing file permissions and for configuring the environment variables needed to run the MPI job.

B. Methods
The results presented were obtained from a complex and realistic simulation of the cardiac electrical activity on a 2-dimensional slice of the human left ventricle. The geometric information of the model was obtained from a magnetic resonance image of a healthy person, as presented in Figure 1. After segmenting the short-axis MRI, different tissues and cell types were modeled, such as the torso, the cardiac tissue, the blood inside the ventricular cavities, as well as the epicardial, endocardial, and M-type ventricular myocytes. The model was discretized using a mesh of 769 × 769 points. All bidomain formulation parameters were taken from [8]. The spatial and temporal discretization steps of the numerical model were set to 150 μm and 10 μs, respectively. The simulation was carried out for 20 ms, a total of 2000 time steps. For simulating the ionic currents and concentrations of cardiac cells, we used the human ventricular model of ten Tusscher. Figure 1 shows a simulation result of the short-axis model. The color-coded image represents the transmembrane voltage distribution Vm for a certain time instant.
Fig. 1: Short-axis Simulation. Simulated electrical wave propagation (Vm distribution) overlapped on the Magnetic Resonance Image.
The simulations were run on the Grid and on a local cluster. The local cluster is a 32-core Linux cluster composed of 16 dual-core Intel Core Duo processors (2.13 GHz) connected by a Gigabit Ethernet switch. The EELA Grid provides access to the distributed computing, storage and network resources needed by applications from European-Latin American scientific collaborations. The current infrastructure has 41 resource
centers, with 3000 computing cores and more than 700 terabytes of storage space.
III. EXPERIMENTAL RESULTS
This section presents the experimental results obtained when executing our HS on the EELA Grid. We report the speedups obtained and the overheads associated with Grid execution, such as the time spent on queues and on transferring files to and from the Grid. Depending on the configuration parameters passed to the simulator, the execution can take many hours; we chose a configuration that leads to a small execution time. Our methodology consists in analyzing the log of the job status. The job can assume one of the following statuses:

Table 1: Job Statuses
Status           Description
Submitted        The job has been submitted by the user but not yet processed by the Resource Broker (RB)
Waiting          The job has been accepted by the RB but not yet matched to a Computational Element (CE), i.e. a cluster
Ready            The job has been assigned to a CE but not yet transferred to it
Scheduled        The job is waiting in the local batch system queue on the CE
Running          The job is running
Done (Success)   The job has finished successfully
Cleared          The Output Sandbox has been retrieved by the user
Each change in status causes the grid log system to automatically register the event. In addition, we have instrumented our code to register the time spent executing the simulation. The total time is the difference between the time the job is submitted and the time it has finished. The total time is broken into three distinct components: a) run, b) queue and c) transfer, representing, respectively, the time spent running the application, waiting on queues and transferring files. We submitted the HS ten times to the grid and report the average times. The time spent running the simulation was collected by our instrumented code. The queue time is calculated as the difference between the time we submit a job and the time it starts to run. Both values are provided by the grid log system. As we can observe in Table 1, the grid log system does not register the time spent transferring files. To calculate it, we collected the statistics of the MPI-wrapper script, which is responsible for the transfer of files and the execution of the application. The time spent transferring files can therefore be calculated as the difference between the running time of the script, collected by the grid log system, and the time spent executing the simulator, collected by our application. Figure 2 presents the results. The time spent on queue is influenced by the number of jobs running and waiting on queues. When there are many jobs running on the Grid or waiting in queues, the time on queue increases, which increases the total overhead and leads to a high standard deviation across executions. To avoid this effect, the Grid Information System was consulted to make sure that no other jobs were executing on the Grid when our jobs were submitted. This is the reason why the queue time is almost the same for all the submitted jobs. In a similar way, the transfer time is influenced by the current network traffic. Again, we observed that the transfer time is almost the same in all cases. This happened because there were no other jobs running, and the network kept the same transfer rate for all nodes. Table 2 compares the contribution of the overhead time to the total execution time. In the table, the values reported as Run Time do not account for any of the overheads introduced by the Grid. Since the overhead is almost constant and the running time decreases as more nodes are added, the relative contribution of the overhead increases with the number of nodes.
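For concreteness, the bookkeeping just described amounts to the following sketch (in Python; the timestamps below are invented for illustration, while the real values come from the gLite logging system and from our instrumented code):

from datetime import datetime, timedelta

submitted = datetime(2010, 3, 1, 10, 0, 0)     # job submission (grid log)
started   = datetime(2010, 3, 1, 10, 11, 0)    # job starts running (grid log)
finished  = datetime(2010, 3, 1, 12, 8, 0)     # MPI-wrapper script ends (grid log)
run_time  = timedelta(hours=1, minutes=45)     # measured by the simulator itself

queue_time    = started - submitted                # time waiting on queues
transfer_time = (finished - started) - run_time    # file staging to/from the SE
total_time    = finished - submitted
overhead      = queue_time + transfer_time
print(f"overhead = {overhead} ({100 * overhead / total_time:.0f}% of total)")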
Fig. 2: Execution time breakdown.

Figure 3 presents the speedups of the HS on the Grid and on the local cluster. The blue bars present the speedups obtained by the Grid. The red bars present speedup values for a Grid without any overheads, i.e., speedups calculated with the Run Times instead of the Total Times (see Table 2). The yellow bars present the speedups obtained by the local cluster. We observe that, even if the overhead is not considered, the speedups obtained with 32 computing cores are poor. However, the poor speedups were expected and are mainly due to the small size of the simulation considered. In addition, a previous analysis [9] concluded that a lot of time is spent on communication among processes, most of it synchronizing messages and in allreduce primitives. There is also the problem of unbalanced workload distribution. Nevertheless, the local cluster performed similarly. In order to improve the parallel efficiency of the Heart Simulator, new parallel implementations are needed, such as the pipeline and asynchronous methods recently proposed in [8]. Therefore, we may conclude that the performance penalty of executing the Heart Simulator on the Grid is due to the overheads reported in Table 2, which account for the time spent on queues and on file transfers.

Fig. 3: Speedups calculated when the overhead is considered (red) and when it is not (blue).
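As a quick check on the Run Times reported in Table 2 below, relative speedups with respect to the 2-node run can be computed directly; the snippet transcribes the table values in minutes:

run_min   = {2: 105, 4: 80, 8: 46, 16: 25, 32: 21}
total_min = {2: 117, 4: 93, 8: 58, 16: 38, 32: 34}
for n in (4, 8, 16, 32):
    print(n, round(run_min[2] / run_min[n], 2), round(total_min[2] / total_min[n], 2))
# Going from 2 to 32 nodes (16x more cores) improves the run time only 5x
# (105/21), and 3.4x (117/34) once the grid overhead is included.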
Table 2: Average Execution Times (in hours)
Nodes   Total Time   Run Time   Overhead
2       01:57        01:45      00:12 (11%)
4       01:33        01:20      00:12 (14%)
8       00:58        00:46      00:12 (22%)
16      00:38        00:25      00:13 (34%)
32      00:34        00:21      00:13 (38%)

IV. CONCLUSION
In this paper we have reported the performance of the Heart Simulator, a cardiac electrophysiology simulator, on the EELA Grid. The cardiac simulator was successfully ported to the Grid, which now allows several cardiac simulations to be executed in this powerful computational environment. Our initial performance tests indicate that the overhead introduced by the Grid may slow down the simulations. The overheads are mainly due to the time spent on queues and on file transfers, and can be as much as 40% of the total simulation time. In addition, as more nodes are used by the simulator, the impact of the overhead increases. Therefore, this must be taken into account when planning the execution of several cardiac simulations on the Grid. As future work we plan to study and employ new parallelization techniques to improve the performance of our cardiac simulator. We also plan to study the effects of distinct file sizes, configuration parameters and queue lengths on the application overhead.
ACKNOWLEDGEMENTS
The authors would like to thank CAPES, CNPq, FAPEMIG and UFJF for supporting this work.
REFERENCES
1. Vigmond E J, Santos R W, Prassl A J, Deo M, Plank G. Solvers for the cardiac bidomain equations.
2. Miller W T, Geselowitz D B. Simulation studies of the electrocardiogram.
3. Tung L. A bi-domain model for describing ischemic myocardial D-C potentials. PhD thesis.
4. Wikswo J P Jr, Lin S F, Abbas R A. "Virtual electrodes in cardiac tissue: a common mechanism for anodal and cathodal stimulation." Biophys. J. 1995;69:2195-2210.
5. Muzikant A, Henriquez C. "Validation of three-dimensional conduction models using experimental mapping: are we getting closer?" Prog. Biophys. Mol. Biol. 1998;69(2-3):205-223.
6. Trayanova N A. "Defibrillation of the heart: insights into mechanisms from modelling studies." Exp. Physiol. 2006;91(2):323-337.
7. EELA-2, 2010.
8. Xavier C R, Oliveira R S, Vieira V F, Santos R W, Meira W Jr. "Multi-level Parallelism for the Cardiac Bidomain Equations." International Journal of Parallel Programming. 2009.
9. Santos R W et al. "Parallel multigrid preconditioner for the cardiac bidomain model." IEEE Trans Biomed Eng. 2004.
Author: Rodrigo Weber dos Santos
Institute: Universidade Federal de Juiz de Fora
Street: Instituto de Ciencias Exatas, Bairro Martelos
City: Juiz de Fora, Minas Gerais
Country: Brazil
Email: [email protected]
A Web-Based Tool for the Automatic Segmentation of Cardiac MRI
T.H. de Paula, M. Lobosco, and R.W. dos Santos
FISIOCOMP: Laboratory of Computational Physiology and High Performance Computing, Computer Science Department, Federal University of Juiz de Fora, Juiz de Fora, Brazil

Abstract— In this work we describe the initial implementation of a web-based tool for the automatic segmentation of cardiac Magnetic Resonance Images. The application uses an active contours algorithm called Snakes, adapted and tailored to the specific task of automatic segmentation of the left ventricle of the heart in Magnetic Resonance Images. The application uses a client-server approach, and both the client and the server are implemented in Java. In particular, the server uses threads to explore data parallelism on shared-memory machines. Tests are performed on 150 short-axis images acquired from two healthy volunteers. Preliminary results suggest the proposed methods are promising and, with further development and validation, may be used, for instance, for the automatic calculation of the cardiac ejection fraction.

Keywords— MRI, Automatic Segmentation of Images, Medical Imaging, E-Health, Cardiovascular System.
I. INTRODUCTION
A common task performed in cardiac exams of Magnetic Resonance Imaging (MRI) is the segmentation of the heart. Segmentation is the process of partitioning a medical image into multiple regions, or sets of pixels, representing important objects and their boundaries [1]. After the segmentation and extraction of the contours of the ventricles, clinical parameters and information that characterize the function and the anatomy of the heart can be calculated. This information is then used to assist the medical specialist in the diagnosis of cardiac illnesses. For instance, the cardiac ejection fraction (CEF) can be calculated using the MRI. The CEF relates the blood cavity volume of the left ventricle during diastole to the volume during systole. Low CEF values may indicate damage of the myocardium caused by myocardial infarction or a cardiomyopathy, since such diseases impair the heart's ability to eject blood and therefore reduce its ejection fraction [2]. In a typical exam, many images are obtained from different positions of the heart and at different phases of contraction (from systole to diastole). The segmentation is then performed off-line by an image specialist. For instance, in order to calculate the CEF, the medical specialist may need to segment nearly one hundred two-dimensional images for a single patient. For estimating each volume, one segments the endocardium in different short-axis images, or slices, of
the ventricle (around 10 slices from apex to base) and calculates the areas of the blood cavity in each of these images. The majority of today's commercial software provides segmentation in a semi-automatic way. Therefore, during the segmentation of the cardiac endocardial surface, the specialist is forced to pick around six points on the border between the cardiac tissue and the blood cavity of each short-axis image. The segmentation of cardiac MRI images is of extreme importance, but it is, today, a tedious and error-prone task. In a previous work [3], we investigated and proposed the automatic segmentation of the endocardium in cardiac MRI images using the active contour technique named Snakes [4]. The new proposed method provided better results than those of the traditional Snakes method by combining Genetic Algorithms and a new force for the Snakes method. As mentioned before, a single patient exam may consist of over a hundred cardiac segmentations. Therefore, the computer implementations proposed in [3] explore the embarrassingly parallel nature of the algorithms using Java threads on a shared-memory machine. This work extends our previous work through the development of a Java-based interface that allows a physician to access our automatic segmentation tool as an internet service. The physician can use any device with internet access, such as a cell phone, a PDA, a netbook or a computer to access the system, as Figure 1 illustrates. The MRI exams can be transmitted and stored in a remote server, which performs the segmentation and returns the results. Also, making the MRI exam and the automatic segmentation services available on the internet contributes to the collaboration of different experts that may be located miles apart. The remainder of this paper is organized as follows. Section 2 gives an overview of the Snakes method. Section 3 gives a brief description of the architecture of our web-based tool. Section 4 presents some ideas for future work, and we state our conclusions in Section 5.
II. SNAKES METHOD
The idea behind the Snakes method is straightforward: it builds, iteratively and in an evolutionary fashion, a curve that approaches the object boundary. The main idea of the method lies in the minimization of an energy function that involves the snake curve and features of the image [4, 5].
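For reference, in the classical formulation of [4] this energy functional can be written as below, with v(s) the parametric snake curve, α and β weighting the stretching and bending terms, and E_ext the image-derived external energy. This is the textbook form, not a formula reproduced from this paper.

E_{snake} = \int_0^1 \left[ \tfrac{1}{2} \left( \alpha\,|v'(s)|^2 + \beta\,|v''(s)|^2 \right) + E_{ext}(v(s)) \right] ds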
Fig. 1 Any device with internet access can use the system

The energy associated to the curve is defined in such a way that its value is minimal when the curve is near the region of interest, i.e., near the borders of the image. This causes the snake curve to be deformed over a series of iterations. At each iteration, two forces cause the deformation of the Snake: internal and external forces. The first force controls the stretching and the bending of the curve, and prevents the Snake from becoming non-continuous or breaking during the iteration process of the optimization problem. The second one pushes the curve towards the edges of the image; the image gradient is used for this purpose. In a previous work [3] we modified the Snake method to refine it for the task of automatic segmentation of the left ventricle of the heart in cardiac MRI images. The main ideas of our proposed modifications were: (i) the use of a new external force with the purpose of increasing the region of influence of the external forces; (ii) the use of a new force named the Adaptive-Balloon, whose intensity varies along the Snake and during the iterations of the method; the strength of this force depends on information from the curve's neighborhood, and its objective is to overcome the problems caused by the existence of artifacts that have the same contrast as the object of interest; (iii) the use of a genetic algorithm [6, 7] that automatically adjusts the parameters of the Snake algorithm; and (iv) the use of parallel processing to speed up the segmentation of the images. The input of our implementation is a set of MRI images. These images are usually represented as a stack. The stack is then split into many subsets that are processed in parallel. Each thread uses the Snake algorithm to segment its subset of images. The final result is then built and exhibited to the user, who can see each of the images that form the complete set. Figure 2 illustrates the process.
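A minimal sketch of this split-and-segment pattern is shown below; the actual tool uses Java threads, so the Python rendering and the name segment_slice (standing in for the per-image Snakes routine) are illustrative only.

from concurrent.futures import ThreadPoolExecutor

def segment_stack(images, segment_slice, n_threads=4):
    # Each worker applies the Snakes routine to one image of the stack;
    # pool.map preserves the original stack order in the results.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(segment_slice, images))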
III. ARCHITECTURE AND SERVICES
The application uses a client-server approach, and both the client and the server are implemented in Java [8]. The Java language was chosen due to its portability, its rapid prototyping features, its embedded support for concurrent and distributed programming, and the wide availability of libraries for numerical methods and image processing. Figure 3 illustrates the architecture used. The server is implemented as a service that runs on the internet. The server listens to a port, whose address is configurable, waiting for requests from clients. When a request arrives, a new thread [9] is created to treat it. Then the main server thread returns to its task of waiting for new requests. The thread just created to treat the request determines which service is required by the client. Examples of available services are: file requests (sending a new file, deleting a file, creating a directory, and listing the contents of a directory, among others), segmenting a set of images, and changing the default parameters used to segment an image. If the client requests the segmentation of a set of images, the server creates new threads to do the job, using the same algorithm described in Section II. The set of segmented images is then transferred back to the client.
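The accept-and-dispatch loop just described can be sketched as follows, using the Python standard library; the real server is written in Java, and the one-line text protocol below is invented purely for illustration.

import socketserver

class RequestHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One thread per client connection: read the requested service name
        # and dispatch it (file operation, segmentation, parameter change, ...)
        service = self.rfile.readline().strip().decode()
        self.wfile.write(f"dispatching {service}\n".encode())

# socketserver.ThreadingTCPServer(("", 9999), RequestHandler).serve_forever()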
Fig. 2 The parallel implementation of the Snake Method. Each slave thread applies the Snakes method to a set of images of the stack and draws the segmentation results directly in the images

To run the services remotely, the client must first establish a connection to the server. The user must inform the IP address and the port where the service is running.
The user then chooses, in a menu, the desired operation. If image segmentation is chosen, the MRI files must already be stored in the server. After the segmentation has finished, the client exhibits the images received from the server. The user can use a slide bar to examine each image of the set individually. Figure 4 illustrates one of the images returned by the server after the execution of the segmentation method. The application can be executed in a stand-alone mode or as an applet in a web page. The application uses Sockets for communication.
Fig. 3 The architecture of the application
Fig. 4 Output of the segmentation algorithm

IV. FUTURE WORK
A typical MRI exam is constituted by 500 TIFF images. Uncompressed, one exam occupies about 200 MB of disk space. The segmentation of a complete exam takes about 15 seconds on a dual Xeon 1.6 GHz with 4 MB of cache and 4 GB of main memory, when using 4 threads.
The Grid technology is very important for improving the performance of our application, as well as for storing large amounts of images. The Grid is the combination of distinct computational resources, such as processors and disks, from distinct administrative domains, applied to problems that require a great number of processing cycles or storage capacity [10]. For this reason, we believe that it meets the future requirements of our application, and we propose the gridification of the services described in this paper. Recently, part of this task was completed. In [11], the segmentation service was made available on the Grid as GridSnake. In this first version the Snake algorithm has been embedded in a Grid service deployed on a Globus-based Grid [12]. The architecture of this GridSnake implementation comprises two main modules: (i) a GridSnake client implemented as a modified ImageJ plugin [13] (named the GridSnake plugin), and (ii) a GridSnake service, implemented as a Grid service. The communication between these two modules is based on SOAP [14] messages and GridFTP. The user first invokes ImageJ, then can transparently call the GridSnake plugin, and finally can select the main parameters of the algorithm. The client initially authenticates itself within the Grid by interacting with the authentication service; then it sends parameters and image data through the GridFTP protocol to the GridSnake service. The service collects the data, processes them, and finally sends the results back to the client, i.e., the detected contours and the segmented images. At the end of the computation the user can visualize the output of the segmentation. In the near future we will extend this implementation of the Snakes on the Grid by replacing the GridSnake client, which needs the ImageJ software to run, with new client code that uses pure Java. This new client is very similar to the client described in this work, but some modifications are necessary: a) the use of SOAP messages, b) the use of the GridFTP protocol, and c) the use of an authentication service. Currently we use Sockets for the communication between clients and server.

V. CONCLUSION
The segmentation of cardiac MRI images is of extreme importance, but it is a tedious and error-prone task, whereas automatic segmentation is a challenging one. The use of an automatic tool that helps this activity is of great relevance and importance. In this work we described a new tool for the automatic segmentation of cardiac MRI. This tool is available as a web service, so it can be used from anywhere and on almost any device, allowing a physician to access patients' records even while in transit or far
from the hospital. Also, making the MRI exam and the automatic segmentation services available on the internet contributes to the collaboration of different experts that may be located miles apart. Although a typical MRI exam is composed of hundreds of images, the use of parallel techniques in the implementation reduces the computation cost. In fact, the automatic segmentation algorithm is very efficient, automatically segmenting 150 images in less than 5 seconds. The preliminary results suggest that the methods are promising and, with further development and validation, they may be used, for instance, for the automatic calculation of cardiac ejection fractions.
ACKNOWLEDGMENT
The authors thank CAPES, FAPEMIG, UFJF and CNPq for the financial support. T. G. de Paula is a scholarship holder of PIBIC/CNPq.
REFERENCES
1. Pham D. L., Xu C., Prince J. L. Current methods in medical image segmentation. Annual Review of Biomedical Engineering, 2(1):315-337, 2000.
2. Kühl H. P., Schreckenberg M., Rulands D., Katoh M., Schäfer W., Schummers G., Bücker A., Hanrath P., Franke A. High-resolution Transthoracic Real-Time Three-Dimensional Echocardiography: Quantitation of Cardiac Volumes and Function Using Semi-Automatic Border Detection and Comparison with Cardiac Magnetic Resonance Imaging. J Am Coll Cardiol, vol. 43, pp. 2083-2090, 2004.
3. Teixeira G. M., Pommeranzembaum I. R., de Oliveira B. L., Lobosco M., dos Santos R. W. Automatic segmentation of cardiac MRI using snakes and genetic algorithms. In Bubak M., van Albada G. D., Dongarra J., Sloot P. M. A., editors, ICCS (3), volume 5103 of Lecture Notes in Computer Science, pages 168-177. Springer, 2008.
4. Kass M., Witkin A., Terzopoulos D. Snakes: Active Contour Models. International Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, 1987.
5. Xu C., Prince J. L. Gradient Vector Flow: A New External Force for Snakes. In IEEE Proc. of the Conference on Computer Vision and Pattern Recognition, pp. 66-71. IEEE Computer Society, Washington, 1997.
6. Eiben A. E., Smith J. E. Introduction to Evolutionary Computing. Springer, 2003.
7. Eshelman L. J., Schaffer J. D. Real-coded Genetic Algorithms and Interval-Schemata. In Foundations of Genetic Algorithms-2, pp. 187-202. Morgan Kaufmann Publishers, San Mateo, 1993.
8. Arnold K., Gosling J., Holmes D. The Java Programming Language, 4th edition. Prentice Hall PTR, 2005.
9. Lea D. Concurrent Programming in Java: Design Principles and Patterns, 2nd edition. Prentice Hall PTR, 1999.
10. Foster I., Kesselman C. (Eds). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 2005.
11. Cannataro M., Guzzi P. H., Lobosco M., dos Santos R. W. GridSnake: A Grid-based implementation of the Snake segmentation algorithm. In Proceedings of the 22nd IEEE International Symposium on Computer-Based Medical Systems, pp. 1-6, Albuquerque, USA, September 2009.
12. Foster I., Kesselman C. Globus: A Metacomputing Infrastructure Toolkit. Intl J. Supercomputer Applications, 11(2):115-128, 1997.
13. ImageJ, http://rsb.info.nih.gov/ij/
14. Box D., Ehnebuske D., Kakivaya G., Layman A., Mendelsohn N., Nielsen H. F., Thatte S., Winer D. "Simple Object Access Protocol (SOAP) 1.1," http://www.w3.org/TR/2000/NOTE-SOAP-20000508, 2000.
Author: Rodrigo Weber dos Santos
Institute: Federal University of Juiz de Fora
Street: Rua José Lourenço Kelmer, s/n, Campus Universitário, DCC / ICE / Terceira Plataforma, Bairro São Pedro, CEP: 36036-900
City: Juiz de Fora/MG
Country: Brazil
Email: [email protected]
Improved Modeling of Lane Intensity Profiles on Gel Electrophoresis Images
C.F. Maramis1 and A.N. Delopoulos1
1 Dept. of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece

Abstract— The quantitative information extraction from PCR-RFLP gel electrophoresis images requires the efficient modeling of the lane intensity profiles. To improve the acquired modeling accuracy, we introduce two novel ideas that can be incorporated in the modeling process. The first one proposes the use of the simplified integrated Weibull function as the basis function of the employed superposition model, and the second proposes switching the domain of the to-be-modeled intensity profile to the unexploited fragment length domain.

Keywords— intensity profile modeling, integrated Weibull, fragment length domain, gel electrophoresis, PCR-RFLP
I. INTRODUCTION
Gel electrophoresis is a very common technique for separating macromolecules (usually proteins or DNA molecules) on the basis of their size. Digitized images of gel electrophoresis experiments are widely used in many molecular biology applications (e.g. DNA footprinting [1], HPV typing [2]) to extract valuable information about the molecular material that exists on a matrix covered with gel (gel matrix). Although, at first, the extracted information was mainly of a qualitative nature [3], modern applications are more and more based on the extraction of quantitative information regarding the size and the concentration of molecular material on the gel matrix [1, 4]. In most cases, the problem of extracting the above information ends up being a task of modeling one-dimensional curves by an appropriate model function (see next section). This modeling procedure, however, is often not performed efficiently enough, and this has an impact on the accuracy of the extracted information. In this direction, we introduce two novel ideas that can be applied on digitized images of PCR-RFLP one-dimensional gel electrophoresis experiments to help improve the aforementioned modeling task. The paper is structured as follows: Section II describes the modeling problem we are treating. Section III presents the proposed methodologies for improving the modeling results. Section IV describes the experiments that investigate the efficiency of the proposed methodologies. Section V comments on the experimental results. Finally, Section VI draws the conclusion of this work.
II. PROBLEM STATEMENT
Molecular biologists often attempt to identify the DNA macromolecules that exist in a subject's molecular sample by combining the established molecular biology technique of PCR-RFLP with one-dimensional gel electrophoresis (e.g. [2]). First, the sample of interest is collected and the DNA contained in it is amplified with the use of the PCR technique. Next, the RFLP technique is employed to segment the DNA into a set of fragments of predefined length in base pairs. Then, a solution of the resulting material is injected into a gel matrix and is forced by an electrophoretic force to migrate in a direction parallel to the electric field. Larger DNA fragments have lower mobilities, thus covering smaller distances, while smaller fragments are more agile and cover greater distances. After the end of the electrophoresis, a digitized image of the gel matrix is acquired, looking like the one in Fig. 1. Such images consist of vertical stripes (five in the aforementioned image) called lanes, which bear the DNA that exists on the gel. On each lane, the DNA fragments of the same length tend to be grouped into blobs of horizontal orientation called bands.
Fig. 1: A sample PCR-RFLP gel electrophoresis image with five lanes.
The main idea behind the analysis of gel electrophoresis images for quantitative information extraction is the fact that the intensity of the image at some position can be related to the amount of material (material load) at the corresponding position of the gel matrix. Molecular biologists employ this idea in order to identify the DNA molecules that exist on each lane. This identification task involves locating the positions of the bands on the vertical axis and then associating these band positions with the corresponding lengths of the DNA fragments that form the bands. The set of discovered fragment lengths provides the information required for identifying
the existing DNA molecules. So far, it may seem to the reader that the band position information alone is sufficient. However, there are applications (for instance, HPV typing in the case of multiple infections) for which quantitative information about the volume of the material that forms each band also has to be inferred. In other words, not only the position but also the area of each band has to be computed. The early approaches to this problem included the binary detection (using some intensity threshold) of the bands on the two-dimensional lane image and the approximation of the material load as the sum of the intensities of the band's pixels. However, these approaches have proved to be inaccurate. Thus, we have passed to the next generation of methods, which are currently in use. These methods involve the extraction of the one-dimensional intensity profile of the lane along the vertical axis (the lane's intensity profile), i.e., the mean of the lane's intensity image along the horizontal direction. These approaches assume that the contribution of each band to the intensity profile can be modeled by a parametric function of appropriate shape (e.g. Gaussian [4]). With the appropriate band shape determined, a superposition model of the corresponding basis function is employed to fit the extracted intensity profile, and the resulting parameters of the model are used to estimate the position and area of the bands. Unfortunately, this modeling effort rarely results in the desired fitting efficiency and accuracy. Thus, in the following section, we describe two novel approaches for improving the attempted modeling.
III. PROPOSED METHODOLOGIES
A. Simplified Integrated Weibull Function
The most important step towards modeling the lane's intensity profile by a superposition of parametric functions is to determine the shape that best describes the contribution of each band to the profile. A lot of attention has been drawn to this issue, with many functions proposed to serve as basis functions [5]. The Gaussian and the Lorentzian functions are currently the prevailing candidates [4, 5, 6] and, indeed, the great majority of profile peaks can be satisfactorily modeled by one of the above functions: Gaussian for sharper bands and Lorentzian for bands with more prominent tails. However, there are also cases where the actual shape of some bands lies somewhere in the middle. In such cases, a more "agile" parametric basis function has to be employed, a function that has the freedom to take a wide range of shapes (including that of a Gaussian and a Lorentzian). This function, inspired by the probability density function of the
integrated Weibull distribution [7], is called the simplified integrated Weibull and is given by the following equation:

W(x; β, γ, x0) = (1/γ) exp(−|(x − x0)/β|^γ)    (1)

where x is the position along the vertical axis, and W(x) is the corresponding mean intensity along the horizontal axis. As one can observe from Eq. (1), the simplified integrated Weibull function has more independent parameters than both the Gaussian and the Lorentzian functions, and thus it is the perfect candidate for expressing a wide variety of band shapes. If we assume a lane that includes N bands, then the lane's intensity profile can be modeled by a superposition of N simplified integrated Weibull functions. The mathematical expression of the superposition model is given by the following equation:

I(x) = Σ i=1..N Ai Wi(x; βi, γi, x0i)    (2)
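For concreteness, a minimal sketch of such a fit is given below. It is not from the paper: the function and variable names are ours, and SciPy's curve_fit stands in for whatever optimizer is actually used.

import numpy as np
from scipy.optimize import curve_fit

def int_weibull(x, A, beta, gamma, x0):
    # Simplified integrated Weibull band shape, Eq. (1), scaled by amplitude A
    return A / gamma * np.exp(-np.abs((x - x0) / beta) ** gamma)

def profile_model(x, *params):
    # Superposition of N bands, Eq. (2); params = [A1, beta1, gamma1, x01, A2, ...]
    y = np.zeros_like(x, dtype=float)
    for A, beta, gamma, x0 in np.reshape(params, (-1, 4)):
        y += int_weibull(x, A, beta, gamma, x0)
    return y

# x, intensity: the extracted lane profile; p0: one (A, beta, gamma, x0) guess per band
# popt, _ = curve_fit(profile_model, x, intensity, p0=p0)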
B. Switching to the Fragment Length Domain
In all the existing scientific efforts on parametric modeling of intensity profiles, the domain of the intensity profile function is the one of pixel positions [3, 1, 4, 5, 6]. This is also evident in our above methodological proposition, by observing Eq. (1) and (2). However, no matter how common the use of the pixel position domain is, it is not the most straightforward approach. Let us elaborate more on this. As we have already mentioned in Sect. II, the ultimate goal of this modeling procedure is the determination of the DNA fragment lengths that are present on a lane and the subsequent identification of the DNA macromolecules that they compose. This means that we do not actually care about the band positions in pixels; we just employ them to estimate the associated lengths of the DNA fragments that form the bands. If this association between pixel positions and fragment lengths is known, then the utilization of an intensity profile function in the domain of fragment lengths makes much more sense. For this reason, we propose switching the intensity profile function from the pixel position domain to the fragment length domain and attempting to model the switched intensity profile by a superposition of appropriately shaped functions. This domain switching is performed by employing the association between positions along the vertical axis of the lane and DNA fragment lengths, which is provided by special lanes (called ladders) containing DNA of predefined length. The modeling of an intensity profile function defined on the fragment length domain is a much more direct approach, since the modeling result will directly provide the lengths of the DNA fragments that are present in the lane as the centers of the model's basis functions. Assuming that x = f(y) is the function that associates the DNA fragment lengths (y) to pixel positions (x), the intensity profile function on the fragment length domain, which will be employed for the estimation of the parameters of the superposition model, is given by the following equation:

IFL(y) = I(f(y))    (3)
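A sketch of the proposed switch is given below, under the assumption that the ladder calibration is reduced to a set of (fragment length, pixel position) pairs and that linear interpolation between them is acceptable; all values and names are illustrative, not from the paper.

import numpy as np

# Illustrative ladder calibration: fragment lengths (bp) and the pixel rows
# where the corresponding ladder bands were located (values are made up).
ladder_bp = np.array([100.0, 200.0, 300.0, 400.0])
ladder_px = np.array([260.0, 180.0, 130.0,  95.0])

def switch_domain(profile, lengths_bp):
    # Eq. (3): IFL(y) = I(f(y)), with f obtained by interpolating between the
    # ladder bands (larger fragments migrate shorter distances).
    x = np.interp(lengths_bp, ladder_bp, ladder_px)        # x = f(y)
    return np.interp(x, np.arange(profile.size), profile)  # I evaluated at f(y)

# Example: resample a pixel-domain profile onto 1 bp steps from 100 to 400 bp
# profile_fl = switch_domain(intensity_profile, np.arange(100, 401))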
IV. EXPERIMENTAL RESULTS
In order to check whether our proposed methodologies improve the intensity profile modeling procedure, we have designed and executed a series of experiments. The first experiment investigates whether the simplified integrated Weibull function can be used for modeling single bands, and it provides a quick check on whether the proposed function could serve as the basis function of the superposition model. We located a set of isolated bands, i.e., bands that practically do not overlap with other bands in the lane, and attempted to fit the resulting data points to a Gaussian, a Lorentzian (i.e., the state-of-the-art approaches) and a simplified integrated Weibull function. The results were very satisfactory, as in almost all cases the simplified integrated Weibull model outperformed the other two models. The outcome of the fitting procedure for four randomly selected isolated bands is given in Fig. 2, and the corresponding Root Mean Squared Error (RMSE) results in Table 1.
Table 1: RMSE of fitting for four isolated bands.
          Gaussian   Lorentzian   int. Weibull
Band A    0.4179     0.8652       0.3704
Band B    0.7353     0.8325       0.6693
Band C    1.2647     0.5434       0.5466
Band D    1.2919     1.1984       1.0669
Fig. 2: The fitting result of the three models to four isolated bands.

The first experiment pleads for the appropriateness of the proposed function as a model of single bands. The next step is to check whether the simplified integrated Weibull function can also be used as the basis function for the modeling of entire intensity profiles. This role is obviously more demanding, since an intensity profile usually consists of many bands that are often overlapping. For this reason, in the second experiment we extracted the intensity profiles of a number of lanes with the methodology described in [8] and attempted to fit the resulting data points to three parametric models: a superposition of Gaussian functions, a superposition of Lorentzian functions, and a superposition of simplified integrated Weibull functions. For each profile, the number of components of the three superposition models was set equal to the number of bands on the corresponding lane by visually inspecting the lane's image. This experiment has revealed that the proposed function, when used as the basis function of the superposition model, generally provides better results in fitting the extracted intensity profiles in comparison with the Gaussian and the Lorentzian functions. The result of the fitting procedure for the leftmost lane of Fig. 1 is given in Fig. 3.
Fig. 3: The fitting result of the three models to the lane's intensity profile on the pixel position domain.
The last experiment deals with the second proposed methodology, namely the switching of the intensity profile to the fragment length domain, and aims at investigating the effect of this approach on the modeling process. In other words, in this experiment we change the domain of the intensity profiles that were extracted for the previous experiment and fit our new datasets to the three superposition models described above (Gaussian, Lorentzian, and integrated Weibull), in order to examine whether this domain switching improves or deteriorates the fitting results when compared to the previous experiment.
This experiment reveals that, contrary to what we expected, in the domain of fragment lengths the simplified integrated Weibull is not the prevailing basis function. However, the domain switching approach results in a noticeable improvement of the fitting results for the Gaussian basis function. The result of the fitting procedure for the leftmost lane of Fig. 1 is given in Fig. 4. The RMSE of fitting for the same lane in the pixel position domain (experiment 2) and in the fragment length domain (experiment 3) is presented in Table 2.
Fig. 4: The fitting result of the three models to the lane’s intensity profile on the fragment length domain.
Table 2: RMSE of fitting for the lane's intensity profile on the two domains.
               Pixel Position Domain   Fragment Length Domain
Gaussian       1.9793                  1.7749
Lorentzian     1.8658                  3.5506
int. Weibull   1.4241                  3.3108
V. DISCUSSION
The first two conducted experiments have confirmed our guess: the simplified integrated Weibull function can very efficiently serve as the basis function for modeling the intensity profiles of interest. In fact, it is better than the current "golden standards", i.e., the Gaussian and Lorentzian functions, since it has the ability to take the shape of both with the appropriate selection of parameters. This feature is extremely useful when, in the same lane, some bands are better expressed by a sharper function (currently modeled as Gaussians) and others by functions with more prominent tails (currently modeled as Lorentzians). Based on our experience, such lanes with bands of diverse types do exist, and they cannot be treated by the classic approaches. Thus, it seems to us that the simplified integrated Weibull approach can prove very helpful in intensity profile modeling. Regarding the third experiment, the results were not the expected ones. It seems that the change of the intensity profile domain does not improve the overall best model, which is the simplified integrated Weibull. However, it appears to sensibly improve the Gaussian model. This provides a hint that domain switching could be scientifically/biologically sound, and a motivation to further investigate the proposed idea. Moreover, in the case where all the bands of a lane are sharp enough, i.e., when the Gaussian is the prevailing model in the pixel position domain, the proposed domain switching will indeed improve the overall modeling accuracy.

VI. CONCLUSION
In this paper, we have presented the issue of modeling one-dimensional lane intensity profiles from digitized images of PCR-RFLP gel electrophoresis experiments. Seeking ways to improve the modeling accuracy, we have introduced two innovations in the modeling procedure. These are the use of a new function to serve as the basis function of the parametric superposition model, and the switching of the intensity profile function, which is the data to be modeled, from its commonly used domain to an unexploited one. Finally, we have presented a series of experiments that investigate the effect of the proposed approaches on the modeling accuracy.
IFMBE Proceedings Vol. 29
Affective Learning: Empathetic Embodied Conversational Agents to Modulate Brain Oscillations
C.N. Moridis1, M.A. Klados2, V. Terzis1, A.A. Economides1, V.E. Karabatakis3, A. Karlovasitou4, and P.D. Bamidis2
1 University of Macedonia, Information Systems Department, Thessaloniki, Greece
2 Aristotle University, School of Medicine, Laboratory of Medical Informatics, Thessaloniki, Greece
3 Aristotle University, School of Medicine, Laboratory of Experimental Ophthalmology, Thessaloniki, Greece
4 Aristotle University, School of Medicine, Laboratory of Clinical Neurophysiology, AHEPA Hospital, Thessaloniki, Greece
Abstract— Integrating emotional feedback into educational systems has become one of the main concerns of the affective learning research community. This paper provides evidence that Embodied Conversational Agents (ECAs) could be effectively used as emotional feedback to improve brainwave activity towards learning. Further research integrating ECAs into tutoring systems is essential to confirm these results.

Keywords— Affective learning, brain oscillations, Embodied Conversational Agents, emotional feedback.
I. INTRODUCTION
Due to the work of neuroscientists [1], [2] and of humanistic psychologists and educators [3], [4], [5], the role of emotions in learning is more and more acknowledged. Theoretical models of learning support that learning occurs in the presence of emotions [6]. Positive and negative emotional states trigger different types of mental states [7], and this can have an important influence on the learning process. Certain neurofeedback studies concerning test anxiety indicated that the enhancement of the alpha frequency band would probably lead to a significant reduction in test anxiety [8]. Moreover, the stimulation of the alpha rhythm seems to improve personal competence [9], while beta stimulation has appeared to improve attention [10], [11], overall intelligence and short-term stress, and to relieve emotional exhaustion [9]. However, high beta frequencies have been associated with intensity or anxiety [12]. Thus, it is logically assumed that a tutoring system capable of providing students with the appropriate emotional feedback could probably help them improve their emotional state towards learning [13], [14], [15]. A key focus of research concerning any kind of computerized environment, ranging from video games to tutoring systems, is Embodied Conversational Agents (ECAs), which are digital models determined by computer algorithms, as well as avatars [16], which are digital models guided by real-time humans. In other words, avatars' interaction is human-controlled, while embodied agents have an automated,
predefined behavior. However, individuals react as in a social context to both human- and computer-controlled entities [17], [18], [19]. Commonly, humans use empathy to express their affection. According to [20], empathy is the ability to perceive another person's inner psychological frame of reference with precision, but without ever losing consciousness of the fact that it is a hypothetical situation. Therefore, empathy is to feel someone else's emotional state and to perceive the source of this state as perceived by the other person, without setting aside self-awareness. Relatedly, several studies support that the existence of empathic emotion in a computer agent has significant positive effects on the user's impression of that agent, and consequently ameliorates human-computer interaction [21]. The aim of this study is to examine the objective impact of an ECA. The aforementioned objectivity is based on the evaluation of the cerebral responses, as recorded by the electroencephalogram (EEG), when individuals are exposed to empathy with emotional facial expressions, to empathy with neutral facial expressions, and to empathetic encouragement with emotional facial expressions, as feedback to fear, sadness, and happiness, i.e., emotions provoked by pictures of the International Affective Picture System collection (IAPS; [22]). While other studies have attempted to examine empathy employing brain imaging methods [23], [24], [25], to the best of our knowledge this is the first brain imaging study that attempts to measure the impact of empathetic agents as feedback to human emotions for improving brainwave activity towards learning.
II. MATERIALS AND METHODS
A. EEG Data
EEG data were obtained from thirty healthy subjects [15 males (mean age: 23.47±3.39) and 15 females (mean age: 22.8±3.74)] during an emotion-evocative-stimuli experiment. EEG was recorded by nineteen scalp electrodes
placed according to the International 10-20 System. More specifically, sensors were placed at the Fp1, Fp2, F3, F4, F7, F8, Fz, C3, C4, Cz, T3, T4, T5, T6, P3, P4, Pz, O1 and O2 sites.
B. Experimental Procedure
The experimental protocol consisted of an Euler square of order three over three emotions (fear, sad, happy) triggered by IAPS images and three ECA behaviors (empathy with emotional facial expressions, empathy with neutral facial expressions, and empathetic encouragement with emotional facial expressions) displayed as emotional feedback. All subjects were exposed to three IAPS images with either fearful, sad, or happy content for twelve seconds (four seconds each), followed by a female ECA (Fig. 1) performing for five seconds. The ECA depicted either empathy with a neutral facial expression, or empathy with an emotional facial expression relevant to the images, or empathetic encouragement, i.e., an emotional facial expression relevant to the images for the empathy part followed by a happy facial expression for the encouragement part when the images had sad or fearful content. A neutral facial expression was used for encouragement by the "empathetic encouragement" ECA when the images had happy content. A more detailed description of the protocol, as well as the synchronized speech and facial expressions of the ECAs, is given in Table 1. It has to be mentioned that after each ECA an eight-second phase with neutral or relaxing IAPS images intervened (two images, four seconds each). In order to evaluate the accuracy of our results, it is crucial to address whether each ECA facial expression can be assigned to the relevant emotion and whether the participants were capable of perceiving it. Therefore, we had to evaluate each subject's ability to recognize the ECA's emotions through its facial expressions, in order to assess the correctness of the EEG results. So, at the end of the experimental protocol, each subject was asked to complete a questionnaire composed of the images of all the ECA facial expressions. More specifically, subjects were asked to assign to each ECA image one emotional state among angry, neutral, sad, happy, disgusted, surprised and scared.

Table 1 Summary of the experimental protocol. FEAR, SAD and HAPPY indicate the emotional content of the three displayed images, while 1, 2 and 3 denote the ECA's behavior in terms of synchronized speech and facial expression

FEAR-1:  Voice: "Somehow these images make you fear."  Facial expression: fear
SAD-2:   Voice: "Somehow these images make you sad."  Facial expression: neutral
HAPPY-3: Voice: "Somehow these images make you happy. Continue watching with attention."  Facial expression: happy, then neutral

SAD-3:   Voice: "Somehow these images make you sad. Cheer up, continue watching."  Facial expression: sad, then happy
HAPPY-1: Voice: "Somehow these images make you happy."  Facial expression: happy
FEAR-2:  Voice: "Somehow these images make you fear."  Facial expression: neutral

HAPPY-2: Voice: "Somehow these images make you happy."  Facial expression: neutral
FEAR-3:  Voice: "Somehow these images make you fear. Cheer up, continue watching."  Facial expression: fear, then happy
SAD-1:   Voice: "Somehow these images make you sad."  Facial expression: sad
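The order-three Euler (Graeco-Latin) square behind Table 1, which pairs each image emotion with each ECA behavior exactly once per row and column, can also be generated programmatically. The following is a minimal sketch assuming only the contents of Table 1; the function names and layout are illustrative, not the authors' implementation:

```python
# Sketch of the order-3 Euler square of Table 1: two mutually orthogonal
# Latin squares, one over image emotions and one over ECA behaviors.
EMOTIONS = ["FEAR", "SAD", "HAPPY"]   # affective content of the image triplet
BEHAVIORS = [1, 2, 3]                 # 1: empathy, emotional face
                                      # 2: empathy, neutral face
                                      # 3: empathetic encouragement

def euler_square(n=3):
    """Return an n x n grid of (emotion, behavior) conditions."""
    return [[(EMOTIONS[(r + c) % n], BEHAVIORS[(2 * r + c) % n])
             for c in range(n)] for r in range(n)]

for row in euler_square():
    print("  ".join(f"{e}-{b}" for e, b in row))
# FEAR-1   SAD-2    HAPPY-3
# SAD-3    HAPPY-1  FEAR-2
# HAPPY-2  FEAR-3   SAD-1
```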
C. Pre-processing and Artifact Rejection
EEG signals were digitized at a rate of 256 Hz and were further filtered using a band-pass filter at 0.5-40 Hz and a notch filter at 50 Hz for line-noise removal. It should also be mentioned that the double-banana bipolar montage was used in order to isolate external noise common to neighboring electrodes. The double-banana montage resulted in eighteen channels, because the Cz site was used only for referencing purposes. A robust version of the Second Order Blind Identification (SOBI) algorithm [26], [27] was used to decompose the EEG signal into statistically independent components. Three independent observers then marked and rejected the components contaminated by ocular and/or cardiac artifacts, resulting in artifact-free EEG signals.
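A minimal sketch of the filtering stage of this pre-processing chain (256 Hz sampling, 0.5-40 Hz band-pass, 50 Hz notch) is given below; the filter order and the zero-phase filtfilt choice are our assumptions, not stated in the paper:

```python
# Band-pass plus notch filtering of multichannel EEG with SciPy.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256.0  # sampling rate (Hz)

def preprocess(eeg):
    """eeg: (n_channels, n_samples) array -> 0.5-40 Hz band, 50 Hz notched."""
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)
    bn, an = iirnotch(50.0, 30.0, fs=FS)   # 50 Hz line-noise notch
    return filtfilt(bn, an, eeg, axis=-1)

# Example with 19 channels of 10 s of synthetic data:
clean = preprocess(np.random.randn(19, int(10 * FS)))
```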
Fig. 1 The ECA in sad and happy facial expressions
D. ERD/ERS
ERD/ERS illustrates the percentage change of the power spectrum during a test interval compared to a reference interval for certain brainwaves. For the purposes of our analysis, the band-power method [28] was adopted for the computation of the ERD/ERS index. Following this methodology, each EEG signal was band-pass filtered in the alpha1, alpha2, beta1 and beta2 frequency bands (8-10 Hz, 10-12 Hz, 12-18 Hz and 18-22 Hz, respectively) and squared in order to obtain each band's power, and the mean value for each
test and reference interval was computed. Finally, in order to obtain the ERD/ERS index, the following formula was used:
ERD/ERS = ((T − R) / R) × 100%
where T and R denote the power of a certain brain rhythm during the test and the reference interval, respectively. Positive ERD/ERS values indicate greater power in the test interval than in the reference interval, which reveals synchronization (ERS) of the respective brain oscillations, while negative ERD/ERS values denote desynchronization (ERD).
E. Statistical Analysis
The data in all groups deviated substantially from the normal distribution. Thus, the non-parametric Mann-Whitney test (two-sided P-value) was applied to check the null hypothesis that the ECAs shown after the fear, sad, and happy IAPS images had no significant influence on the alpha1, alpha2, beta1, and beta2 frequency bands. To obtain the confidence intervals for the recognition of the facial expressions' emotions, a binomial proportion confidence interval was used. The Adjusted Wald interval provides the best coverage for a specified interval when the sample size is small, so it was used with a confidence level of 95%.
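A compact sketch of the band-power ERD/ERS computation of Section D, together with the statistics of Section E, might look as follows; the band edges follow the text, while the filter order, interval choices and helper names are our own illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import mannwhitneyu

FS = 256.0
BANDS = {"alpha1": (8, 10), "alpha2": (10, 12),
         "beta1": (12, 18), "beta2": (18, 22)}  # Hz, as in the text

def erd_ers(signal, band, ref, test):
    """ERD/ERS (%) for one channel; ref and test are sample slices."""
    lo, hi = BANDS[band]
    b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
    power = filtfilt(b, a, signal) ** 2     # squared = instantaneous power
    R, T = power[ref].mean(), power[test].mean()
    return (T - R) / R * 100.0              # > 0: ERS, < 0: ERD

def adjusted_wald(successes, n, z=1.96):
    """Adjusted Wald 95% CI for a recognition proportion (small samples)."""
    p = (successes + z ** 2 / 2) / (n + z ** 2)
    half = z * np.sqrt(p * (1 - p) / (n + z ** 2))
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical use: a 2-s reference before ECA onset, a 2-s test after it,
# then a two-sided Mann-Whitney comparison across subjects per condition.
x = np.random.randn(int(20 * FS))
print(erd_ers(x, "beta2", slice(0, 512), slice(512, 1024)))
print(adjusted_wald(28, 30))                # 28/30 correct -> about (0.78, 0.99)
u, p = mannwhitneyu(np.random.randn(30), np.random.randn(30),
                    alternative="two-sided")
```

As a sanity check, feeding 28 correct recognitions out of 30 into the adjusted Wald interval reproduces the roughly 77%-99% range reported for the happy expression in Table 3.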
III. RESULTS
A statistically significant difference (p < 0.05) in the modulation of the beta2 band power was observed for the FEAR-1 (+10.14%) and FEAR-3 (−4%) ECAs. The HAPPY-1 ECA also significantly increased the beta2 frequency band power, by 5.39%. Concerning the beta1 band, the SAD-2 and HAPPY-2 ECAs resulted in significant increases of 2.83% and 5.49%, respectively. The alpha2 band power was significantly increased by the FEAR-3 (10.06%), SAD-2 (5.85%), and SAD-3 (18%) ECAs. The alpha1 band was significantly increased by the FEAR-3 (9.99%), SAD-2 (1.45%), and HAPPY-1 (48.47%) ECAs. A summary of these results is provided in Table 2. Table 3 summarizes the results for the emotion recognition of the facial expressions, with the relevant confidence intervals. Happy and sad facial expressions were easily recognized by the participants, with high percentages of 93% and 97%, respectively. Scared and neutral facial expressions were recognized with smaller percentages, 73% and 77%, respectively. Scared was mostly confused with surprised, and neutral with angry. Moreover, the male percentages are higher than the female percentages in all four categories. Scared and neutral
are two emotional states that are difficult to perceive from a single image. Most likely, during the experimental procedure the combination of the facial expressions with the voice's tone made the recognition easier.

Table 2 Significant differences in beta1, beta2, alpha1 and alpha2 band power due to ECA emotional feedback. "-" signifies non-significant results

          Beta1     Beta2     Alpha1    Alpha2
FEAR-1    -         +10.14%   -         -
FEAR-2    -         -         -         -
FEAR-3    -         -4%       +9.99%    +10.06%
SAD-1     -         -         -         -
SAD-2     +2.83%    -         +1.45%    +5.85%
SAD-3     -         -         -         +18%
HAPPY-1   -         +5.39%    +48.47%   -
HAPPY-2   +5.49%    -         -         -
HAPPY-3   -         -         -         -
Table 3 95% confidence intervals for facial expression-emotion recognition

Facial expression   95% CI (all)   95% CI (male)   95% CI (female)
Happy               77% - 99%      76% - 100%      60% - 97%
Sad                 81% - 99%      76% - 100%      68% - 99%
Scared              55% - 86%      54% - 93%       41% - 85%
Neutral             59% - 88%      54% - 93%       48% - 89%
IV. DISCUSSION AND CONCLUSIONS
This paper provides evidence that ECAs could be effectively used as emotional feedback to improve brainwave activity towards learning. The empathetic encouragement ECA appeared to be an effective emotional feedback to the fear IAPS images, as it desynchronized (−4%) the beta2 oscillations and considerably synchronized the alpha2 (+10.06%) and alpha1 (+9.99%) brain oscillations. Interestingly, the empathetic ECA showing a fearful facial expression after the fear IAPS images appeared to increase (+10.14%) the beta2 band power, indicating that its presence provoked even more intense emotions. Concerning the sad IAPS images, the empathetic ECA with a neutral facial expression could be an effective emotional feedback, as its appearance resulted in an increase in the beta1 (+2.83%), alpha1 (+1.45%), and alpha2 (+5.85%) frequency band powers. The empathetic encouragement ECA could also be an effective emotional feedback to a sad emotional state, as it considerably synchronized (+18%) the alpha2 oscillations. Regarding happy emotional states and learning, an emotional feedback that would help maintain concentration and avoid excessive
relaxation would be preferable. In this context, the empathetic ECA with a neutral facial expression appears to be a good solution, as it increases (+5.49%) the beta1 band power. Surprisingly, the empathetic ECA displaying a happy facial expression appears to increase (+5.39%) the beta2 band power, while it excessively increases (+48.47%) the alpha1 band power. However, these results should be confirmed by further research and tested in the context of a tutoring system, so as to prove their efficacy as emotional feedback for instructional technology.
ACKNOWLEDGMENT We would like to thank the nurse Amalia Giannopoulou from the Lab of Clinical Neurophysiology, AHEPA Hospital, Thessaloniki, Greece, for handling the EEG during the experiment.
REFERENCES
1. Damasio, A. R. (1994). Descartes' error: Emotion, reason and the human brain. New York: G. P. Putnam's Sons.
2. Damasio, A. R. (2003). Looking for Spinoza: Joy, sorrow and the feeling brain. London: Heinemann.
3. Best, R. (2003). Struggling with the spiritual in education. In Tenth international conference education spirituality and the whole child conference, University of Surrey Roehampton, London.
4. Bechara, A., Damasio, H., Tranel, D., Damasio, A. (1997). Deciding advantageously before knowing the advantageous strategy. Science, 275(5304), 1293-1295.
5. Goleman, D. (1995). Emotional intelligence. New York: Bantam Books.
6. Craig, S. D., Graesser, A. C., Sullins, J., Gholson, B. (2004). Affect and learning: An exploratory look into the role of affect in learning with AutoTutor. Journal of Educational Media, 29(3), 241-250.
7. Lithari, C., Frantzidis, C. A., Papadelis, C., Vivas, A. B., Klados, M. A., Kourtidou-Papadeli, C., Pappas, C., Ioannides, A. A., Bamidis, P. D. (2010). Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions. Brain Topogr, 23(1), 27-40.
8. Garrett, B. L., Silver, M. P. (1976). The use of EMG and alpha biofeedback to relieve test anxiety in college students. In Wickramasekera, I. (Ed.), Biofeedback, behavior therapy, and hypnosis. Chicago: Nelson-Hall.
9. Ossebaard, H. C. (2000). Stress reduction by technology? An experimental study into the effects of brainmachines on burnout and state anxiety. Appl Psychophysiol Biofeedback, 25(2), 93-101.
10. Patrick, G. J. (1996). Improved neuronal regulation in ADHD: An application of 15 sessions of photic-driven EEG neurotherapy. J Neurother, 1(4), 27-36.
11. Lane, J. D., Kasian, S. J., Owens, J. E., Marsh, G. R. (1998). Binaural auditory beats affect vigilance performance and mood. Physiol Behav, 63(2), 249-252.
12. Huang, L., & Charyton, C. (2008). A comprehensive review of the psychological effects of brainwave entrainment. Alternative Therapies, 14, 38-49.
13. Economides, A. A. (2006). Emotional feedback in CAT (Computer Adaptive Testing). International Journal of Instructional Technology & Distance Learning, 3, 11-20.
14. Economides, A. A. (2005). Personalized feedback in CAT. WSEAS Transactions on Advances in Engineering Education, 2(3), 174-181.
15. Konstantinidis, E. I., Hitoglou-Antoniadou, M., Luneski, A., Bamidis, P. D., Nikolaidou, M. M. (2009). Using affective avatars and rich multimedia content for education of children with autism. In Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments. ACM, 1-6. DOI= http://doi.acm.org/10.1145/1579114.1579172.
16. Bamidis, P. D., Luneski, A., Vivas, A., Papadelis, C., Maglaveras, N., Pappas, C. (2007). Multi-channel physiological sensing of human emotion: insights into emotion-aware computing using affective protocols, avatars and emotion specifications. Stud Health Technol Inform, 129(Pt 2), 1068-72.
17. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
18. Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge: Cambridge University Press.
19. Bamidis, P. D., Papadelis, C., Kourtidou-Papadeli, C., Pappas, C., Vivas, A. (2004). Affective computing in the era of contemporary neurophysiology and health informatics. Interacting with Computers, 16(4), 715-721.
20. Rogers, C. R. (1959). A theory of therapy, personality and interpersonal relationships, as developed in the client-centered framework. In S. Koch (Ed.), Psychology: A study of science (Vol. 3; pp. 210-211, 184-256). New York: McGraw-Hill.
21. Dehn, D. M., & Van Mulder, S. (2000). The impact of animated interface agents: A review of empirical research. International Journal of Human-Computer Studies, 52(1), 1-22.
22. Lang, P. J., Bradley, M. M., Cuthbert, B. N. (2005). International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. University of Florida, Gainesville, FL.
23. Shamay-Tsoory, S. G., Lester, H., Chisin, R., Israel, O., Bar-Shalom, R., Peretz, A., Tomer, R., Tsitrinbaum, Z., Aharon-Peretz, J. (2005). The neural correlates of understanding the other's distress: a positron emission tomography investigation of accurate empathy. Neuroimage, 27, 468-72.
24. Jackson, P. L., Brunet, E., Meltzoff, A. N., Decety, J. (2006). Empathy examined through the neural mechanisms involved in imagining how I feel versus how you feel pain: an event-related fMRI study. Neuropsychologia, 44, 752-61.
25. Cheng, Y., Yang, C. Y., Lin, C. P., Lee, P. L., Decety, J. (2008). The perception of pain in others suppresses somatosensory oscillations: a magnetoencephalography study. NeuroImage, 40, 1833-1840.
26. Belouchrani, A., Cichocki, A. (2000). Robust whitening procedure in blind source separation context. Electronics Letters, 36, 2050-2053.
27. Klados, M. A., Papadelis, C., Lithari, C., Bamidis, P. D. (2008). The removal of ocular artifacts from EEG signals: A comparison of performances for different methods. In J. Vander Sloten, P. Verdonck, M. Nyssen, J. Haueisen (Eds.), ECIFMBE 2008, IFMBE Proceedings 22, pp. 1259-1263.
28. Pfurtscheller, G., Aranibar, A. (1977). Event-related cortical desynchronization detected by power measurements of scalp EEG. Electroencephalogr Clin Neurophysiol, 42, 817-826.
Author: Christos Moridis Institute: University of Macedonia, Information Systems Department Street: 156 Egnatia Str. City: Thessaloniki Country: Greece Email: [email protected]
The Role of Electrically Stimulated Endocytosis in Gene Electrotransfer M. Pavlin, M. Kandušer, G. Pucihar, and D. Miklavčič University of Ljubljana, Faculty of Electrical Engineering, Laboratory of Biocybernetics Trzaska 25, SI-1000 Ljubljana, Slovenia
Abstract— Gene electrotransfer is an established method for gene delivery which uses high-voltage pulses to increase the permeability of the cell membrane and thus enables the transfer of genes into cells. Numerous studies have analyzed the influence of different pulse parameters on transfection efficiency. It was shown that it is crucial for the electric field strength to be above a certain threshold value, and that the process of DNA transfer into cells cannot be described by simple diffusion. However, the mechanisms of DNA uptake are still not known. It was suggested that electrically stimulated endocytosis (electroendocytosis) could be the mechanism of DNA entry, but none of the studies could provide clear evidence. For this reason, we decided to expose cells to electric pulses which were previously shown to introduce genes into cells, and to observe whether these pulses also stimulate endocytosis, by staining the membrane with the dye FM 1-43FX. First, we examined the temperature dependency of endocytosis and observed the formation of endocytotic vesicles at temperatures increasing from 4°C to 37°C. Furthermore, cells were exposed to electric pulses below and above the threshold field for gene electrotransfer. We found that electric pulses do not stimulate endocytosis, but cause intracellular vesiculation for electric fields both below and above the threshold value for gene electrotransfer. Thus, the observed electro-stimulated formation of vesicles does not correlate with gene electrotransfer efficiency, since expression of GFP could be observed only above the threshold electric field. Therefore, our results suggest that electroendocytosis may not be the crucial mechanism for gene electrotransfer.
Keywords— endocytosis, cells, gene electrotransfer, electroporation.
I. INTRODUCTION
Gene electrotransfer of cells was first achieved 25 years ago [1, 2]. It was shown that high-voltage pulses enable the delivery of DNA into the cell and successful expression of the gene. The method combines the addition of plasmid DNA with the local application of electric pulses, which increase the permeability of the membrane also for DNA molecules. Gene electrotransfer is already an established method for gene transfer in vitro and in vivo [3, 4], and the first clinical trials are in progress [5]. Gene electrotransfer presents a nonviral method for gene therapy, which, in comparison to
viral gene therapy, represents a safer method [6]. It is also the most versatile and efficient method compared to other nonviral methods: gene gun delivery is limited to exposed tissues, while complexes of DNA and cationic lipids or polymers can be unstable, inflammatory and toxic. Recent studies show that gene electrotransfer is a promising method for cancer gene therapy, DNA vaccination, autoimmune and inflammatory diseases and several other illnesses [3, 5]. Up to now, several mechanisms have been proposed for electric-field mediated gene transfer. The first hypothesis suggested that the electric pulses create pores in the cell membrane, which enable diffusion of the DNA into the cell [1]. Later studies confirmed that gene electrotransfer is a threshold process, where the electric field has to be higher than a threshold value; however, it was also demonstrated that the transfer of DNA molecules across the cell membrane is a more complex process than diffusion through pores created during electroporation [7]. Further, it was shown that one of the crucial steps is the interaction of DNA molecules with the cell membrane (formation of a complex), which is then followed by the translocation of DNA through the membrane pores in the minutes following pulse application, by some yet unknown process [3, 7, 8]. While electroendocytosis was suggested several times as a possible mechanism for the uptake of DNA into the cytoplasm during gene electrotransfer [9-12], none of the published studies clearly demonstrated that endocytosis was indeed the crucial process for DNA entry into the cytoplasm. Up to now there is no clear explanation of the mechanisms involved in gene electrotransfer, which can be attributed to the fact that direct visualization of DNA translocation across the cell membrane after exposure to electric pulses has so far not been achieved. For this reason, we set out to systematically analyze endocytosis during exposure to electric pulses which were previously shown to introduce genes into cells [13]. By this we wanted to determine whether endocytosis is the dominant process or whether other processes of DNA translocation are involved. With a set of experiments we first demonstrated that the process of endocytosis can be observed with the membrane dye FM 1-43FX, and we further analyzed whether endocytosis can be stimulated by the electric fields used for gene electrotransfer.
II. MATERIALS AND METHODS
A. Cells
Chinese hamster ovary cells (CHO-K1) were plated in Lab-Tek II chambers (Nalge Nunc International, USA) at 5×10^4 cells per chamber in the culture medium HAM-F12 supplemented with 8% fetal calf serum, 0.15 mg/ml L-glutamine (all three from Sigma-Aldrich, Steinheim, Germany), 200 units/ml benzylpenicillin (Pliva, Zagreb, Croatia), and 16 mg/ml gentamicin (Sigma-Aldrich, Steinheim, Germany), and incubated in 5% CO2 at 37°C. The experiments were performed 24 hours after plating.
with 1 ml of culture medium. Cells were kept either at 4°C, 24°C or 37 °C. Cells were monitored under a fluorescence microscope (×100 oil immersion objective, AxioVert 200, Zeiss, Germany), equipped with a Visicam 1280 CCD camera and a monochromator (both Visitron, Germany). The excitation wavelength was set to 510 nm and the emission was measured at 605 nm. The images of cells were acquired before and after the pulses in 1 minute time intervals for 30 minutes, using MetaFluor 5.1 software (Molecular Devices, GB).
III. RESULTS AND DISCUSSION B. Live Cell Monitoring of Endocytosis Endocytosis was observed using a fluorescent lipophilic styryl dye FM 1-43FX (Invitrogen, Eugene, Oregon, USA), which stains plasma membrane. On the day of experiments the culture medium was replaced with 500 µl of isoosmolar pulsing buffer (pH 7.4, 10 mM Na2HPO4/NaH2PO4, 1 mM MgCl2 and 250 mM sucrose). After five minutes of incubation, 6 µl of stock solution FM 1-43FX (100 µg/ml) was added to the pulsing buffer and the cells were stained with this solution for additional five minutes. Subsequently, the cells were thoroughly washed with fresh pulsing buffer to remove the excess dye from the medium. To determine the influence of the temperature on the endocytosis the above mentioned procedures of cell incubating, staining, and washing were performed at either 4°C, 24°C or 37 °C, and the cells were kept at these temperatures throughout the experiments. Since endocytosis is a temperature dependent process, cells maintained at 4°C provided a reference for the absence of endocytosis, while cells at 37°C provided a reference for endocytosis occurring at physiological temperatures. In addition, the influence of electroporation on the process of endocytosis was determined by exposing cells at different temperatures to electric pulses. C. Pulse Delivery Cells were exposed to a train of four rectangular electric pulses with duration of 200 µs and pulse repetition frequency of 1 Hz, generated with a CliniporatorTM device (IGEA s.r.l., Carpi, Modena, Italy). The amplitude of the pulses was set to 0.3 kV/cm or 1 kV/cm, which was below or above the critical amplitude for gene electrotransfer, respectively. The pulses were delivered to a pair of parallel Pt/Ir wire electrodes with 0.8 mm diameter and 4 mm distance between them (d), which were positioned at the bottom of the Lab-Tek chamber. Ten minutes after electric pulse delivery, pulsing buffer in the chamber was replaced
A. Visualization of Endocytosis – Temperature Dependency Since endocytosis is a temperature dependent process, we first analyzed temperature dependent formation of endocytotic vesicles. For cells maintained at 4°C practically no endocytosis was observed (Fig. 1A), while at 37° a number of endocytotic vesicles can be seen (Fig. 1B). Cells maintained at 4°C provided a reference for the absence of endocytosis (negative control), while cells at 37°C provided a reference for endocytosis occurring at physiological temperatures (positive control). A
Fig. 1 Visualization of endocytosis in CHO cells stained with the membrane dye FM 1-43FX. Cells were kept at (A) 4°C and (B) 37°C. Images were recorded 15 min after staining

B. Correlation between Gene Electrotransfer and Endocytosis
In the second part of our study we investigated the hypothesis that endocytosis is the dominant mechanism for the uptake of plasmid DNA during gene electrotransfer. Therefore, we studied the effect of electric pulses used for gene electrotransfer on the process of endocytosis for two cases: (i) cells exposed to electric fields below the threshold value for gene electrotransfer (0.3 kV/cm) and (ii) cells exposed to electric fields above the threshold (1 kV/cm), where
approximately 30 % of gene expression can be observed (see Fig. 2).
Fig. 2 Transfection efficiency determined by GFP expression (triangles) and cell survival (squares) of CHO cells exposed to a train of four electric pulses with 200 µs duration and 1 Hz repetition frequency, for different applied electric field strengths (U/d) [13]

Figure 3A shows cells stained with the membrane dye FM 1-43FX after exposure to electric pulses with an amplitude below the threshold value for gene electrotransfer (4×200 µs, E = 0.3 kV/cm). Intracellular vesiculation was observed immediately after pulse application, while vesicles typical for endocytosis, such as those shown in Fig. 1B, were not present. In Fig. 3B, cells were exposed to electric pulses with an amplitude above the threshold value for gene electrotransfer (4×200 µs, E = 1 kV/cm). In comparison to Fig. 3A, the intracellular vesiculation is more pronounced; however, endocytotic vesicles were again not observed. When comparing our results of GFP expression (Fig. 2) with the observations of vesiculation/endocytosis shown in Fig. 3, it is evident that the electrically stimulated vesiculation does not correlate with gene electrotransfer efficiency. Namely, at 0.3 kV/cm GFP expression was not obtained, even though considerable vesiculation was detected (Fig. 3A). The electric pulses which we used caused considerable intracellular vesiculation; however, these pulses apparently do not stimulate the formation of endocytotic vesicles similar to those observed in the control experiments (see Fig. 1B). Our results suggest that electrostimulated endocytosis or vesiculation is not the dominant mechanism for gene electrotransfer. One possible explanation for the observed vesiculation inside the cytoplasm after exposure to an electric field could be the response of the cell to mechanical and osmotic stress. Namely, when a cell is exposed to an electric field, an electromechanical force acts on the cell membrane; however, due to the cell cytoskeleton, cell deformation such as that seen in giant unilamellar vesicles cannot be observed. In addition to that,
when electropores are formed, the osmotic imbalance causes an inflow of water, which leads to cell swelling. For this reason, cell repair mechanisms trigger exocytosis of vesicles from the Golgi apparatus in order to repair the membrane damage [10]. Most of the studies which suggested endocytosis as a mechanism for DNA entry after exposure to electric pulses have drawn this conclusion based only on observations of uptake or of some vesiculation inside the cell. For example, in the paper of Rols et al., macropinocytosis was observed post-pulse but was not related to gene electrotransfer [11], while in Šatkauskas et al. [9] no endocytotic marker was used to determine that endocytosis is the mechanism of DNA uptake. Even though FM 1-43FX is an endocytotic marker which stains the plasma membrane, our results show that a more specific dye should be used to unambiguously determine whether endocytosis is the mechanism of DNA uptake during gene electrotransfer. Otherwise, it is impossible to distinguish between cell vesiculation caused by electromechanical and osmotic stress, and endocytosis.
Fig. 3 Observation of electro-stimulated intracellular vesiculation 10 min after electric pulse delivery. (A) Cells exposed to electric field below the threshold value for gene electrotransfer (E = 0.3 kV/cm), and (B) cells exposed to electric field above the threshold value for gene electrotransfer (E = 1 kV/cm). In both cases, the train of four 200 µs pulses with 1 Hz repetition frequency was delivered. Cells were kept at 24°C
IV. CONCLUSIONS
Our results show that electric pulses stimulate intracellular vesiculation which cannot be attributed to endocytosis. The formation of electrostimulated intracellular vesicles can be observed both below and above the threshold value of the electric field for gene electrotransfer. This is not in correlation with the results of gene electrotransfer, where transfection can be observed only above the threshold value of the field. However, additional experiments are needed in order to unambiguously confirm that electrostimulated endocytosis is not involved in gene electrotransfer. Thus the quest for
understanding of the mechanisms involved in electric-field-mediated DNA uptake continues.
ACKNOWLEDGEMENT This work was supported by the Slovenian Research Agency within projects J2-9770 and P2-0249.
REFERENCES
1. Wong TK, Neumann E (1982) Electric-field mediated gene-transfer. Biochem Biophys Res Com 107: 584-587
2. Neumann E, Schaeferridder M, Wang Y et al. (1982) Gene-transfer into mouse lyoma cells by electroporation in high electric-fields. EMBO J 1: 841-845
3. Escoffre JM, Mauroy C, Portet T et al. (2009) Gene electrotransfer: from biophysical mechanisms to in vivo applications. Biophysical Reviews 1: 177-184
4. Prud'homme GJ, Glinka Y, Khan AS et al. (2006) Electroporation-enhanced nonviral gene transfer for the prevention or treatment of immunological, endocrine and neoplastic diseases. Curr Gene Ther 6: 243-273
5. Daud AI, DeConti RC, Andrews S et al. (2008) Phase I trial of interleukin-12 plasmid electroporation in patients with metastatic melanoma. J Clin Oncol 26: 5896-5903
6. Ferber D (2001) Gene therapy: safer and virus free? Science 294: 1638-1642
7. Golzio M, Teissié J, Rols MP (2002) Direct visualization at the single cell level of electrically mediated gene delivery. Proc Natl Acad Sci USA 99: 1292-1297
8. Čemazar M, Golzio M, Serša G (2006) Electrically-Assisted Nucleic Acids Delivery to Tissues In Vivo: Where Do We Stand? Curr Pharm Des 12: 3817-3825
9. Šatkauskas S, Bureau MF, Mahfoudi A et al. (2001) Mol Ther 4: 317-323
10. Glogauer M, Lee W, McCulloch AG (1993) Induced endocytosis in human fibroblasts by electrical fields. Exp Cell Res 208: 232-240
11. Rols MP, Femenina P, Teissié J (1995) Long-lived macropinocytosis takes place in electropermeabilized mammalian cells. Biochem Biophys Res Com 208: 26-35
12. Zimmermann U, Schnettler R, Klock et al. (1990) Mechanisms of electrostimulated uptake of macromolecules into living cells. Naturwissenschaften 77: 543-545
13. Kandušer M, Miklavčič D, Pavlin M (2009) Mechanisms involved in gene electrotransfer using high- and low-voltage pulses — an in vitro study. Bioelectrochem 74: 265-271

Author: Mojca Pavlin
Institute: University of Ljubljana, Faculty of Electrical Engineering, Department of Biomedical Engineering
Street: Trzaska 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
A Frequency Synchronization Study on the Temporal and Spatial Evolution of Emotional Visual Processing Using Wavelet Entropy and IAPS Picture Collection
C.A. Frantzidis, C. Pappas, and P.D. Bamidis
Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Greece
Abstract— Traditional analysis of emotional neurophysiological data was mainly performed either by detecting temporal changes in the event-related potentials or by simple frequency analysis. Here, the notion of the time-evolving wavelet entropy is used to provide a measurement of the frequency synchronization during passive viewing of emotion-evocative pictures selected from the International Affective Picture System (IAPS). The aim of this work is to study the temporal and spatial evolution of the dynamical frequency changes modulating the event-related potentials evoked by pleasant and unpleasant visual stimuli of varying arousal. The results obtained from the method's application to short EEG data indicate that both the emotional dimensions and gender modulate the frequency synchronization patterns. Therefore, their study over short temporal windows may enhance the knowledge extracted from the classical methods used.
Keywords— EEG, Emotional Processing, Frequency-Synchronization, IAPS, Wavelet Entropy.
I. INTRODUCTION
In recent years, interest in the study of emotions has been rekindled. The establishment of a simple and robust theoretical model greatly facilitated the methodological investigation of the cognitive processes involved in emotional processing [1]. Early research highlighted the biological needs that shape human behavior. Evolutionary roots motivate species to adapt their behavior towards the promotion of their survival. These needs may be divided into preservative and protective ones [2]. The appearance of a novel situation evokes either an appetitive or a defensive behavior. Therefore, the emotional attitude is determined by two motivational mechanisms activated according to the affective value (attractive or aversive) of the stimulus. The activation degree of the two distinct motivational circuits is proportional to the arousal dimension of the presented stimulus [3]. Therefore, apart from genetic and learning influences that also occur, the current emotional theory is based on a biphasic emotional model consisting of two emotional dimensions (the affective valence, describing the activation of the appetitive or the defensive motivational strategy, and the arousal, characterizing the activation level of each motivational system).
Most previous research investigated the way that valence and/or arousal modulate brain activity [4], [5], [6], [7]. Temporal information (both amplitude and latency features) was extracted from the event-related potentials (ERPs) [5]. The ERP components are superimposed on the ongoing electroencephalographic (EEG) activity and reflect the synchronization of large neuronal populations due to the stimulus onset. The most prominent ERP component, named P300, is recorded 300-400 ms after the stimulus onset. It consists of two sub-components, which are linked with attention tasks (P3a) and later with memory processes (P3b) [8]. It was found to be greater for pleasant emotional pictures and is followed by a positive slow deflection which is proportional to the stimulus intensity [4]. Another study proposed the facilitated processing of emotional information, since emotional stimuli elicit augmented negative amplitudes (N1 and N2 ERPs) even when the participants are focused on attention tasks not related to emotional processing [9]. Despite their great contribution, all these studies face a common limitation, since they focus only on the temporal ERP analysis. However, recent findings demonstrated that the ERP waveforms are formed by the oscillatory brain activity in the different frequency ranges [10]. Combinations of signal processing and statistical techniques were employed to confirm that the rhythmic activity, mainly in the delta and theta range, is superimposed in order to form the various ERP components, such as N200 and P300 [11]. Therefore, aiming towards a more adequate investigation of emotions, forthcoming studies should integrate the temporal ERP study with the examination of the frequency synchronization of the brain activity. Previous attempts to derive frequency synchronization patterns from EEG data have used nonlinear dynamics [12], [13] and entropy measures derived either from the Fourier [14] or the Short-Time Fourier Transform (STFT) [15]. Nonlinear dynamics analyze the signal's complexity, while spectral entropy provides a measurement of the signal's distribution over the various frequency bands. However, both methods require time-series stationarity, and long-term recordings are needed for a reliable estimation of the chaoticity quantifier. In most cases neurophysiological time series are not stationary and last only a few seconds. Similarly, spectral entropy computation
using the Fourier Transform requires stationary data and does not provide any temporal information. When adopting the STFT, computations are performed on data segments which may be assumed to be stationary. Moreover, the entropy is defined as a function of time, but a compromise between time and frequency resolution has to be made through the choice of the window size; therefore, there is no adequate accuracy for all the EEG rhythms. Aiming to provide an accurate and robust quantification measure of the complexity of the frequency distribution, recent studies introduced the notion of the time-evolving wavelet entropy (WE) [16]. The Orthogonal Discrete Wavelet Transform (ODWT) was used to extract the brain rhythmic activity from short ERP data without requiring stationarity or parameter selection, while assuring accurate frequency and time resolution for all the frequency bands [17]. Then, the energy for each frequency band was computed using the wavelet coefficients. Finally, similarly to the classic Shannon entropy, the wavelet entropy was extracted as a function of time from the probability distribution of the energy concentrated in each frequency band. So, WE provides a frequency synchronization measurement which can be computed for very short data segments, thus reliably tracking the dynamical changes occurring due to cognitive stimuli. Adopting this methodology, we aim to investigate whether the use of WE provides a robust frequency synchronization measure able to differentiate the various emotional states. We hypothesize that the emotional modulation of both the early and late ERP data results in alterations of the brain oscillatory activity, providing an ideal framework for the application of this frequency synchronization measure. Therefore, we examine how the two emotional dimensions (valence/arousal) and gender influence the various frequency bands during early, mid and late emotional processing. Moreover, we investigate both temporal and spatial frequency synchronization patterns, which will shed light on the dynamical evolution of the neuronal mechanisms involved.
II. MATERIALS AND METHODS A. Experimental Procedure
characterized as having high or low value for each emotional dimension (valence and arousal). Therefore, the categories were denoted as HVHA (high valence and highly arousing stimuli), HVLA (high valence and low arousing stimuli), LVHA (low valence and highly arousing stimuli) and LVLA (low valence and low arousing stimuli). Each emotional category contained forty (40) stimuli. So, one hundred and sixty pictures were totally presented to each subject in a completely random order.[20]. B. Neurophysiological Recordings and Pre-processing The neurophysiological data were recorded using Ag/AgCl electrodes placed at 19 scalp sites according to the 10-20 International System. The sampling rate was 500 Hz. Two reference electrodes were placed at the ear lobes. Four additional electrodes were used for the identification of horizontal and vertical eye movements. All electrode impedances were kept lower than 10 Hz. The EEG data were pre-processed off-line using custom code written in MATLAB. Initially data were filtered by means of 2nd order Butterworth filters (high-pass with cutoff frequency of 0.5 Hz, notch filter centered at 50 Hz and finally a low-pass with cut-off frequency at 50 Hz). The ocular artifacts were then removed using the Independent Component Analysis (ICA) implemented in the EEGLAB [21]. Finally, neurophysiological data were synchronized with stimuli onsets and ERP data were extracted. Each trial’s (ERP’s) duration was 2.5 seconds consisting of 0.5 s pre-stimulus data, 1 sec of stimulus duration and 1 sec of post-stimulus data. C. Discrete Wavelet Analysis The first step is the choice of the mother wavelet ψ(t). This is going to be compared with the ERP data. So, its pattern should be as similar as possible with a typical ERP waveform. Therefore, the biorthogonal wavelet of order 3 was selected [20], [22]. A wavelet family ψa,b is produced from the mother wavelet by scaling and translating it according to the following equation: ψa,b (t)=(1/|a|) 0.5 × ψ((t-b)/a)
Twenty-eight volunteers (14 males) participated in this study. Their mean age was 28.2±7.5 for males and 27.1±5.2 for females. They had normal or corrected to normal vision. History of psychiatric or neurological illness was exclusion criterion. The study was approved by the local medical ethics committee and subjects signed an informed consent form. This work was part of the AFFECION project [18]. Emotion evocative pictures were selected from the IAPS collection [19]. Adopting the current emotional theory, the stimuli were divided to four emotional categories according to their valence and arousal distribution. Each category was
(1)
The discrete wavelet transform (DWT) is used because it results in non-redundant and robust wavelet representation [23]. It is implemented with a simple recursive filtering scheme. The signal decomposition is performed in 6 levels (k=1…6) and the total number of samples in each ERP segment is N=1250. Let the ERP data be given by the time series X=x[n], n=1…N. Then, the wavelet expansion is defined as
X(t) = ∑j=1…6 ∑n=1…N Cj(n) ψj,n(t)    (2)
During each decomposition level, a convolution between the signal and the wavelet takes place using a high-pass and a low-pass filter. The detail coefficients Dj are obtained from the high-pass and the approximation coefficients Aj from the low-pass filter at each decomposition level. Therefore, each ERP was decomposed into the following frequency bands: gamma (D3, 32-62 Hz), beta (D4, 16-32 Hz), alpha (D5, 8-16 Hz), theta (D6, 4-8 Hz) and delta (A6, 0.5-4 Hz). For each participant and each emotional category, the aforementioned wavelet coefficients were computed for each of the forty trials. Then, the coefficients were averaged over the trials in order to obtain responses phase-locked to the stimuli. The DWT mode was set to "periodization", so that each decomposition level results in half the number of coefficients of the previous level. Then, the ERP data were divided into short segments, chosen appropriately so that each segment contains at least one coefficient from each frequency band. So, we used 1024 sample points, which were divided into 16 time intervals, each lasting 128 milliseconds. The signal's energy Ej for each frequency band, as well as the total energy Etot, was computed for all the intervals:
2
5
Ej = ∑ Cj , Etot = ∑ Ej j =1
(3)
j =1
Then, the normalized energy values representing the relative wavelet energy were computed by pk=Ek/Etot, k=1…5. Finally, the time-evolving wavelet entropy is computed for each frequency band by 5 (4) WE = − p × log p
∑
k
2
k
k =1
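A minimal sketch of this WE pipeline with PyWavelets is shown below, assuming a 6-level DWT in "periodization" mode as in the text. The wavelet name ('bior3.3', one of the biorthogonal order-3 wavelets) and the segment handling are our assumptions, not the authors' exact code:

```python
import numpy as np
import pywt

def wavelet_entropy(erp, wavelet="bior3.3", level=6, n_intervals=16):
    """Time-evolving WE over 16 intervals of a 1024-sample averaged ERP."""
    coeffs = pywt.wavedec(erp, wavelet, mode="periodization", level=level)
    # coeffs = [A6, D6, D5, D4, D3, D2, D1]; keep delta..gamma = A6, D6..D3
    bands = coeffs[0:5]
    we = []
    for i in range(n_intervals):            # 16 intervals of 128 ms each
        energies = []
        for c in bands:
            n = len(c)                       # each band has N / 2**j coeffs
            seg = c[i * n // n_intervals:(i + 1) * n // n_intervals]
            energies.append(np.sum(seg ** 2))
        p = np.asarray(energies) / np.sum(energies)   # relative energies pk
        p = p[p > 0]                                  # avoid log2(0)
        we.append(-np.sum(p * np.log2(p)))            # Eq. (4)
    return np.asarray(we)

erp = np.random.randn(1024)                  # placeholder averaged ERP segment
print(wavelet_entropy(erp))
```

With 1024 samples and 6 periodized levels, each 1/16th interval holds exactly one A6 and one D6 coefficient, two D5, four D4 and eight D3 coefficients, matching the constraint that every segment contains at least one coefficient per band.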
III. RESULTS
Statistical analysis was performed separately for each of the nineteen electrodes in order to detect frequency synchronization differences occurring during the aforementioned intervals. The wavelet entropy values corresponding to the early (12-140 ms), mid (140-268 ms) and late (268-396 ms) post-stimulus intervals were analyzed by repeated-measures Analysis of Variance (ANOVA) using the SPSS software. Both emotional dimensions (valence and arousal) served as within-subjects factors, while gender was set as a between-subjects factor. The statistically significant results (both main effects and interactions) are presented in Fig. 1. Different color codes of varying intensity are used for each effect, whereas non-significant effects are left without any shading. As depicted in this figure, results with p < 0.05 were regarded as statistically significant; five classes were then defined according to the p-values, as indicated by the color bar. Scalp distributions of the wavelet entropy values over the electrode locations for the 3 intervals and the 4 emotional categories are visualized in Fig. 2. The entropy visualization was performed using the CARTOOL software [24].
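The paper ran the repeated-measures ANOVA in SPSS; a rough Python analogue for the within-subject factors (valence, arousal) could use statsmodels' AnovaRM, as sketched below. The between-subjects gender factor would require a mixed-design ANOVA, which AnovaRM does not handle, and the data values here are placeholders:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One WE value per subject and emotional category, for a given electrode
# and temporal window (hypothetical numbers for illustration).
df = pd.DataFrame({
    "subject": np.repeat(np.arange(28), 4),   # 28 participants
    "valence": ["H", "H", "L", "L"] * 28,
    "arousal": ["H", "L", "H", "L"] * 28,
    "we":      np.random.rand(112),
})
res = AnovaRM(df, depvar="we", subject="subject",
              within=["valence", "arousal"]).fit()
print(res)   # F and p values for valence, arousal, and their interaction
```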
Fig. 1 Visualization of the statistical analysis results performed for all the electrodes and for 3 temporal windows (early, mid & late processing)
IV. DISCUSSION
The objective of this study was to detect both the temporal and the spatial evolution of the frequency synchronization occurring due to passive viewing of emotional stimuli selected from the IAPS collection. Therefore, WE measures during early (P100 & N100), mid (P200 & N200) and late (P300) emotional processing were computed from ERP data from nineteen electrode sites. The analysis results verified the objective of the study, since both emotional dimensions affect the WE values. Moreover, interactions between them, as well as with gender, also occur. The triple interaction (valence by arousal by gender) is reported during early emotional processing at the right temporal area (T6 electrode) and later at the left temporal lobe (T5 electrode). The spatial evolution of the WE should also be taken into consideration. Major alterations of the WE distribution over the various electrode sites are observed as a function of time. As depicted in Fig. 2, right central areas and the entire parietal lobe (C4, P4, Pz and P3 electrodes) are strongly synchronized during the processing of intense stimuli. Synchronization due to the valence dimension is later encountered over fronto-central areas of the left hemisphere. Finally, lower entropy values, especially for unpleasant stimuli, are located at frontal sites. This paper investigated the feasibility of applying a frequency synchronization quantifier with accurate temporal and frequency resolution for the characterization of emotional stimuli differing in both their valence and arousal dimensions. Future work should consider reference WE values extracted from the pre-stimulus period in order to obtain more reliable synchronization measurements. Moreover, WE alterations during later stages of emotional processing should also be investigated.
Fig. 2 Spatial WE distribution. Indicative pictures from the 4 emotional states (HVHA, HVLA, LVHA and LVLA) are presented at the top. Below them, the scalp distributions for the early, mid and late temporal windows are presented from top to bottom
ACKNOWLEDGMENT The Cartool software has been programmed by Denis Brunet, from the Functional Brain Mapping Laboratory, Geneva, Switzerland, and is supported by the Center for Biomedical Imaging (CIBM) of Geneva and Lausanne.
REFERENCES
1. Lang P J, Bradley M M and Cuthbert B N (1998) Emotion, motivation, and anxiety: brain mechanisms and psychophysiology. Biological Psychiatry, Vol. 44, Issue 12:1248-1263
2. Konorski J (1967) Integrative Activity of the Brain: An Interdisciplinary Approach. Chicago. University of Chicago Press.
3. Cacioppo J T, Berntson G G (1994) Relationship between attitudes and evaluative space: a critical review, with emphasis on the separability of positive and negative substrates. Psychological Bulletin, Vol. 115(3):401-423
4. Cuthbert B N, Schupp H T, Bradley M M, Birbaumer N and Lang P J (2000) Brain potentials in affective picture processing: Covariation with autonomic arousal and affective report. Biological Psychology, Vol. 52(2):95-111
5. Olofsson J K, Nordin S, Sequeira H and Polich J (2008) Affective picture processing: An integrative review of ERP findings. Biological Psychology, Vol. 77, Issue 3:247-265
6. Lithari C, Frantzidis C A, Papadelis C, Vivas A B, Klados M A, Kourtidou-Papadeli C, Pappas C, Ioannides A A and Bamidis P D (2010) Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions. Brain Topography 23(1):27-40
7. Frantzidis C, Bratsas C, Klados M, Konstantinidis E, Lithari C, Vivas A, Papadelis C, Kaldoudi E, Pappas C and Bamidis P (2010) On the classification of emotional biosignals evoked while viewing affective pictures: an integrated data mining based approach for healthcare applications. IEEE Transactions on Information Technology in Biomedicine DOI 10.1109/TITB.2009.2038481
8. Polich J (2007) Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology 118:2128-2148
9. Schupp H T, Junghofer M, Weike A I and Hamm A O (2003) Attention and emotion: An ERP analysis of facilitated emotional stimulus processing. Neuroreport 14:1107-1110
10. Basar E, Basar-Eroglu C, Karakas S and Scurmann M (1999) Are cognitive processes manifested in event-related alpha, theta and delta oscillations in the EEG? Neuroscience Letters 259:165-168
11. Karakas S, Erzengin O U and Basar E (2000) A new strategy involving multiple cognitive paradigms demonstrates that ERP components are determined by the superposition of oscillatory responses. Clinical Neurophysiology 111:1719-1732
12. Stam C J, van Cappellen van Walsum A-M, van Dijk B W (2003) Nonlinear synchronization in EEG and whole-head MEG recordings of healthy subjects. Human Brain Mapping 19:562-574
13. Polychronaki G E, Ktonas P, Gatzonis S, Asvestas P A, Spanou E, Siatouni A, Tsekou H, Sakas D, Nikita K S (2008) Comparison of fractal dimension estimation algorithms for epileptic seizure onset detection. 8th IEEE International Conference on BioInformatics and BioEngineering DOI 10.1109/BIBE.2008.4696822
14. Inouye T, Shinosaki K, Sakamoto H, Toi S, Ukai S, Iyama A, Katsuda Y and Hirano M (1991) Quantification of EEG irregularity by use of the entropy of the power spectrum. Electroencephalography and Clinical Neurophysiology 79:204-210
15. Misra H, Ikbal S, Bourlard H and Hermansky H (2004) Spectral entropy based feature for robust ASR. IEEE Proc. vol. 5, International Conference on Acoustics, Speech, and Signal Processing, ISBN 0-7803-8484-9
16. Blanco S, Figliola A, Quian Quiroga R, Rosso O A and Serrano E (1998) Time-frequency analysis of electroencephalogram series. III. Wavelet packets and information cost function. Physical Review E:932-940
17. Rosso O A, Blanco S, Yordanova J, Kolev V, Figliola A, Schurmann M and Basar E (2001) Wavelet entropy: a new tool for analysis of short duration brain electrical signals. Journal of Neuroscience Methods 105:65-75
18. The AFFECTION Project (2008) at http://kedip.med.auth.gr/affection
19. Lang P J (1997) International affective picture system (IAPS): Technical manual and affective ratings. NIMH Center for the Study of Emotion and Attention, Gainesville, FL, 1997
20. Frantzidis C A, Bratsas C, Papadelis C, Konstantinidis E, Pappas C and Bamidis P D (2010) Towards emotion aware computing: An integrated approach using multi-channel neurophysiological recordings & affective visual stimuli. IEEE Transactions on Information Technology in Biomedicine DOI 10.1109/TITB.2010.2041553
21. Delorme A and Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods 134(1):9-21
22. Giannakakis G A, Tsiaparas N N, Xenikou S M-F, Papageorgiou S, Nikita K S (2008) Wavelet entropy differentiations of event related potentials in dyslexia. IEEE Proc., 8th IEEE International Conference on BioInformatics and BioEngineering
23. Rosso O A, Martin M T, Figliola A, Keller K and Plastino A (2006) EEG analysis using wavelet-based information tools. Journal of Neuroscience Methods 153:163-182
24. Cartool software at http://brainmapping.unige.ch/Cartool.htm
Author: Christos Frantzidis
Institute: Lab of Medical Informatics, Aristotle University
Street: Medical School, Aristotle University, PO Box 323, 54124
City: Thessaloniki
Country: Greece
E-mail: [email protected]
Frontal EEG Asymmetry and Affective States: A Multidimensional Directed Information Approach
P.C. Petrantonakis and L.J. Hadjileontiadis
Dept. of Electrical & Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
Abstract—Electroencephalogram (EEG)-based emotion recognition is a rapidly growing research field that examines the ability of EEG signals to discriminate emotions in humans. One of the major problems that this emotion recognition approach deals with is the reliability of the emotion induction in the subjects who participate in the relevant experiments. In this paper, Multidimensional Directed Information analysis was adopted to identify frontal EEG asymmetry and, thus, to evaluate the emotion elicitation procedure followed to evoke affective states in 16 healthy right-handed subjects using the IAPS database. The derived results have shown that the adopted method has the potential to become an efficient emotion elicitation evaluation criterion.
Keywords—frontal EEG asymmetry, multidimensional directed information, affective states.
I. INTRODUCTION
A new need to imbue machines with the ability to adjust their functionality according to the user's emotional state has recently emerged from new approaches to Human-Machine Interaction (HMI) [1]. Affective Computing (AC) [2] is a topic within HMI that encompasses these new approaches and relates to the research field of imbuing the computer with the ability to detect, recognize, model and take into account the user's emotional state. Electroencephalogram (EEG)-based emotion recognition (EEG-ER) is a relatively new field within the AC area, yet a very promising one due to the properties of the EEG, such as its high time resolution, its reflection of emotion-related information close to its origin, and its non-intrusiveness. Nevertheless, EEG-ER involves emotion elicitation techniques that evoke affective states in the subjects participating in the respective experiments. The majority of EEG-ER studies [3]-[5] elicit emotions from subjects by showing them pictures with certain affective content that is supposed to induce certain emotional states. An important question, however, is raised: how effectively has the emotion been evoked? In this paper, an initial effort to address the aforementioned question is attempted. The whole approach is based on the brain asymmetry [6] that is exhibited in people's EEG signals while they experience a certain emotion. To evaluate this asymmetry, a Multidimensional Directed Information (MDI) [7] analysis is applied to EEG signals
recorded from 16 healthy right-handed subjects. The results seem to reveal great potential for evaluating the reliability of emotion elicitation techniques.

II. MATERIALS AND METHODS
A. Frontal EEG asymmetry

The relevant psychophysiology literature has revealed the most prominent expression of emotion in brain signals, i.e., the asymmetry between the left and right brain hemispheres. Davidson et al. [6] developed a model that relates this asymmetric behavior to emotions, with the latter analyzed in two main dimensions, i.e., valence and arousal, corresponding to one's judgment of a situation as positive or negative and to one's excitation, spanning from calmness to excitement, respectively. According to that model, emotions are: i) organized around approach-withdrawal tendencies; and ii) differentially lateralized in the frontal region of the brain. The left frontal area is involved in the experience of positive emotions, such as joy or happiness (the experience of positive affect facilitates and maintains approach behaviors), whereas the right frontal region is involved in the experience of negative emotions, such as fear or disgust (the experience of negative affect facilitates and maintains withdrawal behaviors). Thus, the verification of this theory from the EEG signals that relate to emotional experience would introduce a valuable criterion of how effectively an emotion has been elicited during an artificial induction phase.

B. Experiments and dataset construction

A special experiment was designed in order to collect the EEG signals. Sixteen healthy volunteers participated in this experiment; all were right-handed subjects (9 males and 7 females) in the age group of 19-32 yrs. The whole experiment was designed to induce emotions within the valence/arousal space, specifically for four affective states, i.e., LALV = low arousal-low valence, LAHV = low arousal-high valence, HAHV = high arousal-high valence, and HALV = high arousal-low valence. Forty pictures (10 pictures per affective state) of the International Affective Picture System (IAPS) [8] were projected (in sequence: 10 for LALV, 10 for LAHV, 10 for HAHV, and 10 for HALV) for
5 s each. Each picture was preceded by a 5-s black-screen period and a 5-s period in which countdown frames (5→1) were shown to accomplish a relaxation phase and an emotion reset, owing to their neutral emotional content before the projection of the new picture, followed by a 1-s projection of a cross shape in the middle of the screen to attract the subject's gaze. After the picture's projection, a computerized 20-s Self-Assessment Mannequin (SAM) [9] procedure took place. The same 36-second procedure was repeated for every one of the 40 pictures. The EEG signals from each subject were recorded during the whole projection phase.

The EEG signals were acquired from the Fp1, Fp2, F3, and F4 positions, according to the 10-20 system (see Fig. 1), which relate to the expression of emotion in the brain, based on the asymmetry concept. The Fp1 and Fp2 positions were recorded as monopole channels (channels 1 and 2, respectively), whereas the F3 and F4 positions as a dipole (channel 3), resulting in a 3-EEG-channel set. The ground was placed at the right earlobe.

Fig. 1 The Fp1, Fp2, F3, and F4 electrode positions (marked with black) used for the EEG acquisition according to the 10-20 system.

After the acquisition part, the EEG signals were subjected to band-pass Butterworth filtering, to retain only the frequencies within the alpha (8-12 Hz) and beta (13-30 Hz) bands, in order to exploit their mutual relation to prefrontal cortical activation or inactivation, and to eliminate superimposed artifacts from various sources [10]. The EEG signals were then segmented into 5-s segments corresponding to the duration of each picture projection. Finally, the signals referring to the countdown phase were also cut and filtered, as they were intended to be used as the ground truth for the evaluation of the emotion elicitation (see Section II.D).

C. Multidimensional Directed Information (MDI)

Consider the simple case of two stationary time series $X$ and $Y$ of length $N$ divided into $n$ epochs of length $L = N/n$; each epoch of length $L = P + 1 + M$ is written as a sequence of two sections of length $P$ and $M$ before and after the sampled values $x_k$ and $y_k$ of the time series $X$ and $Y$ at time $k$, respectively, i.e.,

$$X = x_{k-P}\cdots x_{k-1}\,x_k\,x_{k+1}\cdots x_{k+M} = X^P x_k X^M, \qquad (1)$$

$$Y = y_{k-P}\cdots y_{k-1}\,y_k\,y_{k+1}\cdots y_{k+M} = Y^P y_k Y^M, \qquad (2)$$

where $X^P = x_{k-P}\cdots x_{k-1}$; $X^M = x_{k+1}\cdots x_{k+M}$; $Y^P = y_{k-P}\cdots y_{k-1}$; $Y^M = y_{k+1}\cdots y_{k+M}$. The mutual information between the time series $X$ and $Y$ is written as

$$I(X;Y) = \sum_k I_k(X;Y), \qquad (3)$$

where

$$I_k(X;Y) = I(x_k; Y^M \mid X^P Y^P y_k) + I(y_k; X^M \mid X^P Y^P x_k) + I(x_k; y_k \mid X^P Y^P). \qquad (4)$$
The three terms on the right-hand side of (4) correspond to: the information shared by the sample $x_k$ of $X$ at time $k$ and the future part $Y^M$ of $Y$ after time $k$ (first term); the information shared by the sample $y_k$ of $Y$ at time $k$ and the future part $X^M$ of $X$ after time $k$ (second term); and the information that is not contained in the past parts $X^P$ and $Y^P$ of $X$ and $Y$ but is shared by $x_k$ and $y_k$ (third term). Since $I_k(X;Y)$ represents mutual information, which is symmetric, we have $I_k(X;Y) = I_k(Y;X)$, meaning that it contains no directivity, while the three terms on the right-hand side of (4) contain a temporal relation which produces directivity. This directivity is defined by Kamitake et al. [11] as directed information and is depicted using an arrow for clarity. For example, the first term on the right-hand side of (4) can be written as

$$I(x_k; Y^M \mid X^P Y^P y_k) = I(x_k \to Y^M \mid X^P Y^P y_k), \qquad (5)$$

and analyzed as

$$I(x_k \to Y^M \mid X^P Y^P y_k) = \sum_{m=1}^{M} I(x_k \to y_{k+m} \mid X^P Y^P y_k), \qquad (6)$$

where each term on the right-hand side of (6) can be interpreted as information that is first generated in $X$ at time $k$ and propagated with a time delay of $m$ to $Y$, and can be calculated through the conditional mutual information as a sum of joint entropy functions:

$$I(x_k \to y_{k+m} \mid X^P Y^P y_k) = H(X^P Y^P x_k y_k) + H(X^P Y^P y_k y_{k+m}) - H(X^P Y^P y_k) - H(X^P Y^P x_k y_k y_{k+m}). \qquad (7)$$

According to [12], the joint entropy $H(z_1 \cdots z_n)$ of $n$ Gaussian stochastic variables $z_1, \ldots, z_n$ can be calculated using the covariance matrix $R(z_1 \cdots z_n)$ as

$$H(z_1 \cdots z_n) = \frac{1}{2}\log\left[(2\pi e)^n \left|R(z_1 \cdots z_n)\right|\right], \qquad (8)$$

where $|\cdot|$ denotes the determinant; by using (8), (7) can be written as
$$I(x_k \to y_{k+m} \mid X^P Y^P y_k) = \frac{1}{2}\log\frac{\left|R(X^P Y^P x_k y_k)\right|\cdot\left|R(X^P Y^P y_k y_{k+m})\right|}{\left|R(X^P Y^P y_k)\right|\cdot\left|R(X^P Y^P x_k y_k y_{k+m})\right|}. \qquad (9)$$

When the relation between three or more signals is to be examined, an extension of the mutual directed information is used, namely multidimensional directed information (MDI). Similarly to mutual information, the following expression is obtained for the simple case of three interacting signals $X$, $Y$, $Z$:

$$I(x_k \to y_{k+m} \mid X^P Y^P Z^P y_k z_k) = \frac{1}{2}\log\frac{\left|R(X^P Y^P Z^P x_k y_k z_k)\right|\cdot\left|R(X^P Y^P Z^P y_k z_k y_{k+m})\right|}{\left|R(X^P Y^P Z^P y_k z_k)\right|\cdot\left|R(X^P Y^P Z^P x_k y_k z_k y_{k+m})\right|}. \qquad (10)$$

Using (10) and (6), the total amount of information, namely $S$, that is first generated in $X$ and propagated to $Y$, taking into account the existence of $Z$, across the time-delay range is

$$S = I(x_k \to Y^M \mid X^P Y^P Z^P y_k z_k) = \sum_{m=1}^{M}\frac{1}{2}\log\frac{\left|R(X^P Y^P Z^P x_k y_k z_k)\right|\cdot\left|R(X^P Y^P Z^P y_k z_k y_{k+m})\right|}{\left|R(X^P Y^P Z^P y_k z_k)\right|\cdot\left|R(X^P Y^P Z^P x_k y_k z_k y_{k+m})\right|}. \qquad (11)$$

Fig. 2 $S_r$ against the $S_p$ values for all affective states and subjects.
The application of (11) to the emotion-related EEG recordings provides the means for evaluating the asymmetry concept during artificially evoked emotions, by measuring the total amount of information shared between two EEG channels (Fp1, Fp2) while considering the existence of the third (F3/F4).

D. The proposed approach

The experience of negative emotions is related to increased activity in the right frontal and prefrontal hemisphere, while positive emotions produce enhanced left-hemisphere activity. Based on this fact, it is assumed that the total amount of information $S$ (see (11)) hidden in the EEG signals and shared between the right and left hemisphere becomes maximal when the subject is calm (information symmetry), whereas $S$ becomes minimal when the subject experiences an emotion (information asymmetry). Thus, in order to investigate the above assumption, two values were calculated according to the MDI analysis, i.e., the $S_r$ and $S_p$ values. $S_r$ refers to the bidirectional information sharing between channel 1 and channel 2, taking into account channel 3, when the subject does not feel any emotion and hence is relaxed, i.e.,

$$S_r = S_{12}^r + S_{21}^r, \qquad (12)$$

whereas $S_p$ is the same shared information during the projection of an IAPS picture, when the subject is supposed to feel an emotion, i.e.,

$$S_p = S_{12}^p + S_{21}^p. \qquad (13)$$

According to what has already been assumed, $S_p$ will presumably be smaller than $S_r$ if the asymmetry concept holds. For the MDI analysis the epoch length was set to 32 samples, whereas the $P$ length was defined to be 8 samples after thorough optimization testing.

III. RESULTS

In Fig. 2, the $S_r$ values are plotted against the $S_p$ values for each one of the four affective states. Every dot in the figure represents the $S_p$ and $S_r$ values derived from the 5-s signals referring to the projection of the picture and to the relaxation phase that occurred before the specific picture preview, respectively. Since for each affective state 10 pictures were projected to every one of the 16 subjects, 160 $(S_p, S_r)$ pairs were produced for each affective state. Figure 2 shows that $S_r > S_p$ holds for the majority of the $(S_p, S_r)$ pairs. It is noteworthy that in the case of the HALV state this effect appears less intense. This might be due to the intensity of the emotion that the pictures in this category elicit. For the projection of pictures in this category, many subjects reported that some of the pictures produced such a negative emotional charge that they kept thinking about them while the procedure was proceeding. As a result, the experience of negative emotions during the relaxation (countdown) phase would reduce the symmetry during that phase; hence, the $S_r$ values would tend to be almost equal to the $S_p$ ones.
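The counting behind Fig. 2 reduces to a per-pair comparison; the sketch below shows that step on placeholder arrays shaped like the 16-subject, 10-picture data of one affective state (the real values come from the MDI analysis above).

```python
import numpy as np

def fraction_sr_greater(s_r, s_p):
    """Fraction of (S_p, S_r) pairs with S_r > S_p, i.e. dots above the
    diagonal of Fig. 2; the arrays hold one value per subject and picture."""
    return np.count_nonzero(s_r > s_p) / s_r.size

rng = np.random.default_rng(1)
s_r = rng.normal(1.0, 0.2, (16, 10))                  # placeholder relax-phase sums
s_p = s_r - np.abs(rng.normal(0.15, 0.10, (16, 10)))  # placeholder projection sums
print(f"S_r > S_p in {fraction_sr_greater(s_r, s_p):.0%} of the pairs")
```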
IV. DISCUSSION

In order to define a measure for the frontal EEG asymmetry, the distance of each dot in Fig. 2 from the diagonal (green line) was used as an Asymmetry Index (AsI), i.e.,

$$AsI = (S_r - S_p)\times\frac{\sqrt{2}}{2}. \qquad (14)$$

The mean asymmetry index $\overline{AsI}$ across all ten pictures for each one of the four affective states, for all subjects, is depicted in Fig. 3, along with the mean value of $\overline{AsI}$ of each subject across the four affective states (red solid line). From Fig. 3 it is obvious that each subject reacts differently to the emotional experience in comparison with the others. In order to better show this, the $S_r$ and $S_p$ values from subjects 1 and 16 only, corresponding to the largest and smallest $\overline{AsI}$, respectively, are depicted in Fig. 4. The latter shows that, using the proposed method, an evaluation of the elicitation of emotion in subjects whose EEG signals are used to discriminate affective states is feasible, leading to more pragmatic EEG-ER systems.

Fig. 3 Mean asymmetry index for all subjects and the four affective states, along with the mean value of asymmetry index across all affective states per subject (red line).

Fig. 4 $S_r$ against the $S_p$ values for all affective states and for subjects 1 (·) and 16 (x).

V. CONCLUSIONS

In this work, a novel method for the evaluation of how efficiently emotional states are elicited within an EEG-ER scenario was presented. The whole approach was based on the information shared between specific EEG locations in the brain, using MDI analysis. The asymmetry concept that rules the experience of negative or positive emotions was both verified and exploited to evaluate the degree to which a subject experiences a certain affective state. The encouraging results from the implementation of the proposed method pave the way for the development of more reliable emotion elicitation techniques.

ACKNOWLEDGMENT

The authors would like to express their gratitude to all 16 subjects who participated in the experiment.

REFERENCES

1. Reeves B, Nass C (1996) The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places. Cambridge University Press, New York
2. Picard R W (1997) Affective Computing. MIT Press, Cambridge, MA
3. Chanel G, Kronegg J, Grandjean D et al. (2005) Emotion assessment: Arousal evaluation using EEG and peripheral physiological signals. Technical Report, University of Geneva
4. Schaaff K, Schultz T (2009) Towards emotion recognition from electroencephalographic signals. International Conference on Affective Computing and Intelligent Interaction, 2009
5. Schaaff K, Schultz T (2009) Towards an EEG-based emotion recognizer for humanoid robots. 18th IEEE International Symposium on Robot and Human Interactive Communication, 2009, pp 792-796
6. Davidson R J, Schwartz G E, Saron C (1979) Frontal versus parietal EEG asymmetry during positive and negative affect. Psychophysiology 16:202-203
7. Sakata O, Shiina T, Saito Y (1999) Causality analysis of alpha rhythm by multidimensional directed information. Proc. of Natl. Conv. IEICE, 1999, pp 146
8. Lang P J, Bradley M M, Cuthbert B N (2008) International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8, University of Florida, Gainesville, FL
9. Morris J D (1995) Observations: SAM the self assessment mannequin—An efficient cross-cultural measurement of emotional response. Journal of Advertising Research 35(6):63-68
10. Fatourechi M, Bashashati A, Ward R K et al. (2007) EMG and EOG artifacts in brain computer interface systems: A survey. Clinical Neurophysiology 118:480-494
11. Kamitake T, Harashima H, Miyakawa H (1984) Time series analysis based on directed information. Trans IEICE, 103-110
12. Sakata O, Shiina T, Saito Y (2002) Multidimensional Directed Information and Its Application. Elec. and Com. in Japan 4:3-85

Author: Leontios J. Hadjileontiadis
Institute: Aristotle University of Thessaloniki
Street: University Campus, GR-54124
City: Thessaloniki
Country: Greece
Email: [email protected]
A Game-Like Interface for Training Seniors’ Dynamic Balance and Coordination

A.S. Billis1, E.I. Konstantinidis1, C. Mouzakidis2, M.N. Tsolaki2, C. Pappas1, and P.D. Bamidis1

1 Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki, Greece
2 Care Unit of Alzheimer Disease – Thessaloniki, Greece
Abstract— The current work focuses on the development of a game platform that can help elderly people exercise and maintain their physical status and well-being through an innovative, low-cost ICT platform, such as the Wii Balance Board. As is widely acknowledged, the Third Age suffers from severe problems such as frailty and instability; falling remains one of the main causes of severe injuries and death among people aged 65 or older. In the present paper, a set of games that make use of the Wii Balance Board is discussed, in combination with interface design principles that could improve accessibility and encourage seniors to engage in the training process through gaming. The main scope of the research conducted and presented here is the design and development of a game-like interface that incorporates the characteristics of a Human-Computer Interaction system, such as user input and system feedback according to the user's movement patterns, and the investigation of how such a platform could meet the special needs of a target group such as elderly people, as well as its potential use as a physical training platform in general. Accessibility issues and seniors' possible ease of adaptation to the system are also thoroughly discussed.

Keywords— Balance training, accessibility, Nintendo Wii, seniors, game.
I. INTRODUCTION

Nowadays, it is widely documented that the European population is ageing [1]. Demographic surveys reveal that the European population of age 65+ almost doubled during the last four decades, and it is expected that Europe will have around 173 million people aged 65+ by 2050 [2]. Older adults commonly lack physical fitness and often suffer from severe mobility problems of their upper and lower extremities [3]. To that extent, balance and strength training are the most suitable kinds of physical exercise to help seniors improve their movement patterns, gait and general body posture [4]. Balance training intervention programs mostly include exercises that target the improvement of both dynamic and static balance by means of controlling one's COP (Center of Pressure). Furthermore, coordination and agility tasks should also be covered and offered to seniors during a balance training set of exercises. Although a huge range of training programs is offered to seniors by health institutes and geriatric day care centers,
long-term adherence to any kind of physical activity is a serious matter of concern, since seniors do not seem really interested in them and, completely lacking motivation, drop out of the scheduled activities [5]. Moreover, a great proportion of seniors stay at home facing mobility problems and lack the possibility to visit day care centers and be supervised by a therapist. A possible solution that could alleviate these problems is the use of computer games and platforms. Games in general can serve multiple objectives: apart from their obvious usage as a form of entertainment, they can also serve as educational and training platforms [6]. Games of this kind that incorporate training can therefore increase the attractiveness of, and the overall engagement in, the training process, establishing games as powerful tools to be used in training and clinical settings [7], [8]. Games, reinforced by recent technological achievements and advances in the fields of virtual and mixed reality, have also made it possible to create an immersive place where patients are highly motivated and are offered treatment through playing. Virtual reality environments offer therapists the chance to implement interactive treatment and training programs in fully- or semi-controlled 2D or 3D environments, where peripheral devices or sensor networks can track and record users' responses to changes occurring in the game environment [9]. An alternative possibility concerning virtual reality interaction is physical interaction with such systems, using for example haptic interfaces [10], tabletop interfaces [11], sensor-enabled devices like the Nintendo Wii [12] and motion tracking cameras like those in the EyeToy game platform [13].
II. EXERGAMING AND FITNESS

The term exergame was first defined by Bogost [14] and stands for the game category that promotes and enhances users' physical health. The user's arm and leg movements activate a virtual character, which reproduces the intensity and the orientation of the user's movement in the game environment. A characteristic example of an exergame platform is Yourself!Fitness [15], which helps users stay fit using personalized training programs under the guidance of a virtual coach. The game difficulty level can easily adapt to the
user's body, height and heart rate, in order to make training neither too strenuous nor too unchallenging. In addition, a positive feature of exergames is the provision of motivation to players. This is accomplished by showing the total score, total progress, a list of top players and motivational messages [16]. Dance Dance Revolution (DDR) and Wii Fit are representative games that provide such feedback to users. Moreover, a great amount of research has been conducted concerning game inclusion in the physical therapy and treatment of elderly people. Betker et al. [17] made use of a foot center of pressure (COP) coupling device in order to assess the dynamic balance of users. With this device users were able to control and interact with a video game. This research revealed that this kind of interaction actually promoted seniors' dynamic balance, while higher levels of motivation encouraged seniors to participate and complete their training program with success by achieving their goals.
III. GAME ACCESSIBILITY GUIDELINES

As people grow older they face a number of changes concerning their senses (vision and/or hearing impairments), motor abilities, reaction speed and clarity of mind [18], [19]. All these declines create the need to specify special requirements for user interface design, so that interfaces can be easily used by seniors. Some of the most important issues that need to be taken into account when developing a game that targets senior citizens are briefly discussed under this topic. Increasing age is accompanied by a loss in visual acuity, both static and dynamic, and a serious decline in contrast and color sensitivity. Such problems may pose an insurmountable barrier for seniors trying to discriminate small objects displayed on the game screen, to read written text instructions and to locate significant information in a complex game interface. Therefore, font, size and color should be easily adjusted and controlled by the senior, in order to configure the game environment display in the most suitable way according to his or her special needs. Hearing also decreases with age, and seniors especially face challenges when it comes to hearing high-frequency tones. In addition, synthetic speech is sometimes difficult for seniors to discriminate, as it is often distorted. In general, it is suggested that information be provided to seniors through multiple alternative media (text, voice, images), so that there are plenty of options for them to access information. For example, a good alternative to sound effects could be a peripheral device which can deliver an alert or notification to the user via vibrations, like the Nintendo Wii Remote©.
What is more, the mobility changes that seniors face include slower reaction speed, instability, loss of coordination and greater movement variability. Hence, too-small objects or rapidly moving interface elements should be avoided and not included in the game environment. Another factor that may affect game design is the decline of cognitive functions such as working memory, attention and reasoning [20]. The game interface should therefore remain as simple as possible, without requiring that seniors put much effort into remembering things or keeping strict attention on a target for a long period of time. Apart from functional limitations, the elderly have to become accustomed to computer use in an efficient way, as it may be the first time they are dealing with such a system. This might cause the elderly to feel anxious about using computers and afraid that they will eventually fail to use them efficiently. In order to help inexperienced seniors relax and get familiarized with the game environment, it is recommended that games provide sufficient help and guidance throughout the whole game duration. Furthermore, motivational messages should encourage every step of the senior, thereby acknowledging his or her effort towards achieving his or her goals. As people in general love to socialize, be competitive, collaborate and share their experiences with relatives or friends [21], a strong case can be made for designing multiplayer games, where seniors can compete with each other, have fun and increase their self-confidence. Finally, it has to be stated that, for a game to be considered interesting and engaging by senior citizens, it should offer content that seniors are familiar with from their everyday activities and that is acceptable to them. Content and game scenarios therefore have to match the cultural and lifestyle diversity of seniors who come from different places and adapt to local customs. The most important requirement for a game to be widely used by this target group is to convince them of the benefit of its usage: an interesting game of no value will discourage seniors from spending time to learn it.
IV. BALANCE GAME IMPLEMENTATION

The implemented Balance Game makes use of both peripheral devices and software packages. The main input device is the Nintendo Wii Balance Board©. The Wii Balance Board communicates its four sensors' raw data and the Center of Pressure (COP) value through the Bluetooth© wireless communication protocol. These data are read using the API of an open source library written in C#.NET, namely WiimoteLib by Brian Peek [22]. Input data provided by the movements and the instant
change of the senior's center of balance handle, in a realistic manner, the game objects shown on the computer screen. Game logic and game objects are developed with Microsoft's XNA game development platform, which can be implemented and manipulated in the Visual Studio 2008 programming environment. XNA was chosen among several other game development APIs due to its free distribution and the simplicity of the way it renders and manages game content.

Fig. 1 Balance Game implementation scheme

The Balance Game comprises two game therapies. The first game is the well-known golf game. The user has to move his or her body in order to move the ball through barriers in the right direction, so as to shoot the ball into the hole. In this way seniors can practice their agility and balance by controlling their center of mass and guiding the ball with a speed dependent on the user's center of gravity. A screenshot of the golf balance game is shown in Fig. 2.

Fig. 2 Golf balance game

The second game therapy allows seniors to move their body in order to move a basket. The aim of this game is to move the basket in a different direction each time, so as to collect as many fruits as possible. Fruits appear on the game screen at random positions at several time intervals. This game challenges users to dynamically alter their center of mass, and consequently their balance, in order to change the direction of the basket's movement path. A score display provides feedback to the senior concerning his or her current performance. A snapshot of the second balance therapy is provided in Fig. 3.

Fig. 3 Dynamic balance game
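Both game therapies are driven by the same input: the board's four load-cell values reduced to a COP coordinate that steers the ball or the basket. The sketch below shows that reduction in Python for illustration (the actual game reads the values through the C# WiimoteLib API); the sensor layout, the 433 mm x 238 mm sensor spacing and the normalization are our assumptions, not values taken from the paper.

```python
def cop_from_sensors(tl, tr, bl, br, width=0.433, depth=0.238):
    """Centre of pressure from the four load values (top-left/right,
    bottom-left/right), in metres relative to the board centre; dividing by
    width/2 and depth/2 gives values in [-1, 1] for steering a game object."""
    total = tl + tr + bl + br
    if total <= 0.0:               # nobody is standing on the board
        return 0.0, 0.0
    x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
    y = (depth / 2.0) * ((tl + tr) - (bl + br)) / total
    return x, y

# e.g. leaning to the right and slightly forward:
print(cop_from_sensors(tl=18.0, tr=30.0, bl=17.0, br=25.0))
```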
Apart from the score indication, there are several metrics calculated by the therapies concerning the senior's balance efficacy. These metrics could be used by therapists who would like to track and assess seniors' progress. The data can also be represented graphically to present the senior's performance over several periods of time. Table 1 shows the most important outcome measures that can be calculated by our balance game.

Table 1 Outcome measures calculated for evaluation purposes and therapists' convenience

Golf balance game: Total distance (meters); Total time (seconds); Average speed (meters/sec); Deviation from the optimum suggested path; Success or not
Dynamic balance game: Total time (seconds); Path length (meters); Average speed (meters/sec); Percentage of apples gathered out of total apples
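As a rough illustration of how the Table 1 measures can be derived from a recorded COP trace, consider the sketch below; the sampling rate, the apple counts and the random test trace are illustrative assumptions.

```python
import numpy as np

def balance_metrics(cop_xy, fs=60.0, apples_hit=7, apples_total=10):
    """Outcome measures in the spirit of Table 1 (dynamic balance game) from
    a COP trace sampled at fs Hz. Names and defaults are placeholders."""
    steps = np.diff(np.asarray(cop_xy), axis=0)              # per-sample moves
    path_length = np.hypot(steps[:, 0], steps[:, 1]).sum()   # metres
    total_time = len(cop_xy) / fs                            # seconds
    return {"total time (s)": total_time,
            "path length (m)": path_length,
            "average speed (m/s)": path_length / total_time,
            "apples gathered (%)": 100.0 * apples_hit / apples_total}

# a 10-s random-walk COP trace stands in for a recorded session
trace = np.cumsum(np.random.default_rng(2).normal(0.0, 1e-3, (600, 2)), axis=0)
print(balance_metrics(trace))
```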
V. CONCLUSIONS AND DISCUSSION

This paper describes a set of computer-based balance game therapies that are controlled by the end-user's body movements, recognized by the Wii Balance Board, in order to train the participant's dynamic balance and agility. The interpretation of movements is accomplished by the game's internal logic, and a number of outcome measures are recorded and saved by the game for the evaluation of seniors' progress. The use of a low-cost tangible user interface for the training of the elderly seems really promising, as it offers its end-users the ability to physically interact with the game content and the evolution of the game scenario. Further investigation needs to be done concerning the introduction of new content and game scenarios which will best fit the preferences of senior users. In order to test and validate the above-mentioned game platform, a number of trials will be held with participating adults aged 65+, most of them in moderate physical condition. Usability, user acceptance and adaptability to new interaction technologies such as the Wii Balance Board, the pleasure derived from game play, as well as accessibility issues will be examined in order to draw important conclusions and make further suggestions for the improvement of our system. Finally, beyond balance training, we aim to develop a whole game training platform targeting seniors, which will incorporate strength, endurance, flexibility and static balance tasks. Furthermore, we aim to structure our platform architecture to be as open as possible, in order to support the integration of already existing exercise games and their seamless operation through our environment. The resulting game platform could be used both in health care centers and in individual homes because of its low cost and ease of use.
ACKNOWLEDGMENT

This work is partially funded by the LLM Project, under the ICT Policy Support Programme (ICT PSP), part of the Competitiveness and Innovation Framework Programme of the European Community.
REFERENCES

1. World Health Organisation [WHO], Active Ageing: A Policy Framework. Available from www.who.int/hpr/ageing/ActiveAgeingPolicyFrame.pdf
2. Eurostat 2008
3. Donald IP, Bulpitt CJ. The prognosis of falls in elderly people living at home. Age Ageing 1999;28:121-5
4. Skelton DA, Beyer N. Exercise and injury prevention in older people. Scand J Med Sci Sports 2003;13:77-85
5. Schneider JK, Eveker A, Bronder DR, Meiner SE, Binder EF. Exercise training for older adults: Incentives and disincentives for participation. J Gerontol Nursing 2003;29:21-31
6. Myers, D. (1999). Simulation as play: A semiotic analysis. Simulation and Gaming: An International Journal, 30(2), 147-162
7. Morales-Sanchez, A., Arias-Merino, E., Diaz-Garcia, I., Cabrera-Pivaral, C., & Maynard-Gomez, W. (2007). Effectiveness of an educative intervention on operative memory through popular games in the elderly. Alzheimer's and Dementia, 3(3), 127-127
8. Russ, S. (1995). Play psychotherapy research: State of the science. In Ollendick, T. and Prinz, R. (Eds.), Advances in clinical child psychology (pp. 365-391). New York: Plenum
9. Konstantinidis, E. I., Luneski, A., Frantzidis, C. A., Pappas, C., Bamidis, P. D. (2009). A Proposed Framework of an Interactive Semi-Virtual Environment for Enhanced Education of Children with Autism Spectrum Disorders. The 22nd IEEE International Symposium on Computer-Based Medical Systems, CBMS 2009, 3-4 August, Albuquerque, New Mexico, USA
10. Bonanni L., Vaucelle, C., Lieberman, J., & Zuckerman, O. (2006). TapTap: A Haptic Wearable for Asynchronous Distributed Touch Therapy. Ext. Abstracts CHI 2006, ACM Press (2006), 580-585
11. Mumford N., Duckworth, J., Eldridge, R., Guglielmetti, M., Thomas, P., Shum, D., Rudolph, H., Williams, G., and Wilson, P.H. A virtual tabletop workspace for upper-limb rehabilitation in Traumatic Brain Injury (TBI): A multiple case study evaluation. In Proc. of Virtual Rehabilitation, (2008), 175-180
12. Mäyrä, F. (2007). The Contextual Game Experience: On the Socio-Cultural Contexts for Meaning in Digital Play. Proceedings of DiGRA 2007 Conference: Situated Play, Tokyo, Japan, 810-814
13. Kizony, R., & Weiss, P. L. (2004). Virtual reality rehabilitation for all: Vivid GX versus Sony PlayStation II EyeToy. Proceedings of the 5th International Conference on Disabilities, Virtual Reality, and Associated Technologies, Oxford, UK, 87-94
14. Bogost, I. (2005). The Rhetoric of Exergaming. Paper presented at the Digital Arts and Cultures conference, Copenhagen, Denmark
15. Yourself! Fitness at http://www.yourselffitness.com/
16. Zwartkruis-Pelgrim E, de Ruyter B (2008) Developing an Adaptive Memory Game for Seniors. In: Markopoulos P et al (eds) Fun and Games 2008. LNCS, Springer
17. Betker, A. L., Szturm, T., Moussavi, Z. K., & Nett, C. (2006). Video game-based exercises for balance rehabilitation: a single-subject design. Archives of Physical Medicine and Rehabilitation, 87(8), 1141-1149
18. Czaja, S.J., & Lee, C.C. (2003). Designing computer systems for older adults. In J.A. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook – Fundamentals, Evolving Technologies and Emerging Applications (pp. 425). Mahwah, New Jersey: Lawrence Erlbaum Associates
19. Fisk A.D., Rogers A.R., Charness N, Czaja S.J., & Sharit J. (2004). Designing for Older Adults – Principles and Creative Human Factors Approaches. Boca Raton: CRC Press
20. Czaja, S.J., & Lee, C.C. (2007). The impact of aging on access to technology. Universal Access in the Information Society, 5, 341-349
21. Nielsen Interactive Entertainment (2005). Video gamers in Europe – 2005. Research Report Prepared for the Interactive Software Federation of Europe (ISFE)
22. WiimoteLib at http://www.codeplex.com/WiimoteLib

Author: Antonis Billis
Institute: Lab of Medical Informatics, Medical School, Aristotle University of Thessaloniki
Street:
City: Thessaloniki
Country: Greece
Email: [email protected]
Incorporating Electroporation-related Conductivity Changes into Models for the Calculation of the Electric Field Distribution in Tissue

I. Lacković1, R. Magjarević1 and D. Miklavčič2

1 University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
2 University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Abstract—Electroporation is a phenomenon caused by an externally applied high-intensity electric field to cells that results in the increase of cell membrane permeability to ions and various molecules such as drugs or DNA. In vivo tissue electroporation is the basis for electrochemotherapy and electrogenetherapy. Apart from the increased membrane permeability, which is observed long after the delivery of electric pulses, there is experimental evidence that, during the application of membrane-permeabilizing electric pulses, the electric conductivity of tissue increases. In this work we use a 3D finite-element modeling approach to investigate the difference in electroporated tissue volume when the tissue conductivity change due to electroporation is taken into account vs. the constant conductivity case. We modeled needle electrodes and assumed that tissue has a sigmoid-like conductivity dependence on electric field intensity. Our numerical studies showed that taking into account the dependence of tissue conductivity on electric field intensity affects the electric field distribution in tissue and, in consequence, the irreversibly and reversibly electroporated regions. For model validation we calculated the reaction current and compared it with the results of a previous study in which current was measured during in vivo tissue electroporation on experimental animals. We found a reasonably close match between the calculated current in our nonlinear model and the current measured in the experiment.

Keywords— electroporation, tissue electrical conductivity, numerical modeling, finite element method
I. INTRODUCTION

Electroporation is a phenomenon caused by an externally applied high-intensity electric field to cells (in culture or in tissue) that results in the increase of cell membrane permeability to ions and various molecules such as drugs or DNA. Therefore, electroporation is successfully used for in vivo drug delivery to tumors (electrochemotherapy) and for the delivery of foreign genes into tissues (gene electrotransfer) [1]. Recently, irreversible electroporation was proposed as a new tissue ablation modality [2]. Apart from the increased membrane permeability, which is observed long after the delivery of electric pulses, there is experimental evidence that, during the application of membrane-permeabilizing electric pulses, the electric conductivity of cells and tissue increases [3-6]. This conductivity change was neglected in early models for the calculation of the electric field distribution in tissue during electroporation [7]. Later on, improved models were developed taking this dependence into account [8]. An alternative to macroscopic models that use bulk tissue properties are microscopic models, like the transport-lattice model described in [9], but they cannot be used for 3D large-scale modeling of the electric field in tissue induced by a specific electrode configuration. The importance of models for the calculation of the electric field distribution in tissue during electroporation lies in the fact that they may serve as an important tool for predicting the volume of tissue that will be reversibly or irreversibly electroporated, and also in the optimization of electrodes and pulse parameters for a successful electroporation-based treatment [10]. When calculating the electric field distribution in tissue during electroporation for various electrode configurations, it would be interesting to know the difference in electroporated tissue volume when this conductivity change is taken into account or not. This issue was recently addressed on a vegetal model (potato) with 2D modeling [6]. In the present work we use a 3D finite-element modeling approach to investigate the difference in electroporated tissue volume when the tissue conductivity change due to electroporation is taken into account or not.

II. MATERIALS AND METHODS
A. Model for dependence of tissue conductivity on the local electric field intensity

Since liver was used in many electroporation experiments of our group and others, and since electroporation thresholds for the electrochemotherapy protocol with 8 pulses of 100 μs duration and 1 Hz repetition were previously determined (362±21 V/cm for the reversible and 637±43 V/cm for the irreversible threshold assuming constant conductivity [7]; 460 V/cm for the reversible and 700 V/cm for the irreversible threshold assuming electric-field-dependent conductivity [8]), we chose it for the present analysis. The nominal electrical conductivity of liver used in the calculations was 0.126 S/m [10]. We assumed that liver tissue has a sigmoid-like conductivity dependence on electric field intensity, with increase factors of 2×, 3× and 4× (Fig. 1).
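For illustration, a conductivity law of the assumed shape can be written as a smoothed step; the sketch below uses tanh for the smoothing and evaluates the three increase factors at representative field strengths. The mid-point and width are illustrative guesses, not the exact parameters behind Fig. 1.

```python
import numpy as np

def sigma_of_E(E, s0=0.126, factor=3.0, e_mid=40e3, e_width=15e3):
    """Sigmoid-like conductivity (S/m) vs field intensity E (V/m); s0 is the
    nominal liver conductivity from the text, e_mid/e_width are assumptions."""
    step = 0.5 * (1.0 + np.tanh((E - e_mid) / e_width))   # smooth 0 -> 1
    return s0 * (1.0 + (factor - 1.0) * step)

for f in (2.0, 3.0, 4.0):   # the assumed 2x, 3x and 4x increase factors
    vals = [sigma_of_E(E, factor=f) for E in (10e3, 46e3, 70e3)]  # V/m
    print(f, [round(v, 3) for v in vals])
```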
Fig. 1 Assumed dependences of tissue conductivity on electric field intensity. σ1 is the constant conductivity case (leading to a linear model), while the other three curves σ2, σ3 and σ4 assume increase factors of 2×, 3× and 4×.

The sigmoid-like function was actually implemented as a smoothed Heaviside function with a continuous second derivative. The conductivity increase factor was chosen to span around 3×, which is in agreement with the experimentally observed conductivity increase at the end of the pulse (see [3] Fig. 2b and [8] Fig. 3). The electric field intensity at which conductivity starts to increase was chosen lower than in [8], since conductivity starts to increase even below the reversible electroporation threshold.

B. Geometry

We modeled needle electrodes 0.7 mm in diameter, 8 mm apart, inserted to a depth of 7 mm in a tissue block of 32 mm × 32 mm × 17 mm. Geometry details are identical to our previous work [12] and follow the initial experimental and modeling study in which electroporation thresholds were determined [7]. Due to symmetry, only one quarter of the entire geometry was modeled. We used the COMSOL Multiphysics software environment.

C. PDE formulation for tissue with nonlinear conductivity

Field distribution in tissue is traditionally calculated by solving the Laplace equation for the scalar electric potential [7]. However, to account for the tissue conductivity increase due to electroporation, some modifications are needed. Measurement of the reaction current during delivery of 8 rectangular electroporation pulses of 100 µs duration and 1 Hz repetition reveals that, after a rapid capacitive transient (which is due to tissue capacitance), the current increases towards the end of the pulse. The rate of increase depends on the applied voltage. We also observe that increasing the applied voltage by a factor of 2 increases the current at the end of the pulse by more than 2. If we neglect the capacitive transient and the time course of the conductivity increase during the pulse, we may assume that the current density in tissue is divergence-free and the electric potential satisfies

$$-\nabla\cdot(\sigma\nabla\varphi) = 0, \qquad (1)$$

where σ is the conductivity tensor, which depends on the electric field intensity E:

$$\sigma = \sigma(E) = \sigma(|\nabla\varphi|). \qquad (2)$$

Equation (1) is thus nonlinear and was solved iteratively using COMSOL Multiphysics. This model has no time course of the conductivity increase incorporated in it, which is its main deficiency. However, we assume it holds for the end of the pulse.
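A minimal 2D finite-difference sketch of this nonlinear solve is given below: −∇·(σ(|∇φ|)∇φ) = 0 is relaxed by Picard (fixed-point) iteration, updating σ from the field of the previous iterate. The grid, the boundary handling and the point-electrode representation are toy assumptions, not the 3D COMSOL model of the paper.

```python
import numpy as np

def sigma_of_E(E, s0=0.126, factor=3.0, e_mid=40e3, e_width=15e3):
    # smoothed-step conductivity; parameters are illustrative (cf. Fig. 1)
    return s0 * (1.0 + (factor - 1.0) * 0.5 * (1.0 + np.tanh((E - e_mid) / e_width)))

n, h, U = 65, 0.5e-3, 800.0              # 32 mm x 32 mm grid, 0.5 mm step
phi = np.zeros((n, n))
vals = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[32, 24] = fixed[32, 40] = True     # two needle nodes, 16 steps = 8 mm apart
vals[32, 24], vals[32, 40] = U / 2.0, -U / 2.0

for picard in range(20):                 # Picard step: freeze sigma, relax phi
    gy, gx = np.gradient(phi, h)
    sig = sigma_of_E(np.hypot(gx, gy))
    sN = 0.5 * (sig + np.roll(sig, 1, 0))
    sS = 0.5 * (sig + np.roll(sig, -1, 0))
    sW = 0.5 * (sig + np.roll(sig, 1, 1))
    sE = 0.5 * (sig + np.roll(sig, -1, 1))
    for _ in range(400):                 # Jacobi sweeps of -div(sigma grad phi) = 0
        phi[fixed] = vals[fixed]
        nb = (sN * np.roll(phi, 1, 0) + sS * np.roll(phi, -1, 0)
              + sW * np.roll(phi, 1, 1) + sE * np.roll(phi, -1, 1))
        new = nb / (sN + sS + sW + sE)
        new[fixed] = vals[fixed]
        phi[1:-1, 1:-1] = new[1:-1, 1:-1]   # outer boundary held at 0 V

gy, gx = np.gradient(phi, h)
print("peak field: %.0f V/cm" % (np.hypot(gx, gy).max() / 100.0))
```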
III. RESULTS AND DISCUSSION

A comparison of the calculated electric field in tissue along a chosen line, for 800 V applied to the electrodes, for the constant conductivity case (linear model) and the field-dependent conductivity case σ3 is shown in Fig. 2. Since only a quarter of the geometry was modeled, the solution shown in Fig. 2 should be mirrored along x = 0 to obtain the electric field intensity around the second electrode (e.g., for positive x). When σ depends on |E|, the electric field near the electrodes is lower than in the constant σ case, but it reaches higher values at larger distances from the electrode (in the middle between the electrodes the difference is 45% for the particular case shown in Fig. 2).

Fig. 2 Calculated electric field in tissue along the line connecting two electrodes at half insertion depth (see Fig. 3) for 800 V applied to electrodes for a) the constant conductivity case σ1, and b) field-dependent conductivity σ3 according to the assumed σ(|E|) dependence shown in Fig. 1.
Fig. 3 Calculated isosurfaces of the electric field intensity 350 V/cm (green) and 700 V/cm (yellow) for different voltages applied to the electrodes, ranging from 200 V to 1200 V, for a) the constant conductivity case σ1, and b) field-dependent conductivity σ3 according to the assumed σ3(|E|) dependence shown in Fig. 1. Isosurface levels are chosen close to the reversible and irreversible electroporation thresholds for liver. Only one quarter of the entire model is shown, i.e., 16 mm × 16 mm × 17 mm. The dashed line in the 800 V case indicates the location along which the electric field intensity is shown in Fig. 2.
An even better insight into the difference in the electric field distribution for σ = σ1 = const. and σ = σ3(|E|) can be gained from Fig. 3, where the electric field isosurfaces 350 V/cm and 700 V/cm are shown for a wide range of electrode voltages. The isosurface levels were chosen close to the electroporation thresholds, meaning that the tissue volume encompassed by the inner isosurface will be irreversibly electroporated, the tissue volume between the two isosurfaces will be reversibly electroporated, and the remaining tissue outside the outer isosurface will not be electroporated at all. In order to validate our model for the electric field distribution in tissue, we decided to compare the results of current measurements at the end of the first pulse during liver electroporation (see the results of previous studies [4, 8]) with the reaction current calculated by our model. We calculated the reaction current I by numerical integration of the current density J for all the applied voltages U0 and all assumed σ(|E|) dependences. The results showing the calculated current vs. applied voltage for our models (both linear with σ = const. and nonlinear with σ2(|E|), σ3(|E|) and σ4(|E|)) are presented in Fig. 4. It can be observed that the nonlinear models with σ2(|E|) and σ3(|E|) fit the measured data in [8] Fig. 6b considerably better than the linear constant-conductivity model. Thus we may conclude that the conductivity increase factor due to electroporation is in this case 2-3×, which is very close to the previous results [4, 5, 8].
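A back-of-the-envelope sanity check of the current magnitudes (not the paper's FEM integration) is sketched below: the conductance of two parallel needle electrodes of radius a, spacing d and insertion depth L in a homogeneous medium is approximately G = πσL / arccosh(d/(2a)), so I = G·U for σ0 and for a fully electroporated bulk of about 3σ0 brackets the I-U curve the way the linear and nonlinear models do in Fig. 4. All values besides the geometry and σ0 quoted in the text are our assumptions.

```python
import numpy as np

a, d, L = 0.35e-3, 8e-3, 7e-3      # electrode radius, spacing, depth (m)
sigma0 = 0.126                      # nominal liver conductivity (S/m)
G0 = np.pi * sigma0 * L / np.arccosh(d / (2.0 * a))   # two-wire conductance
for U in (200, 400, 600, 800, 1000, 1200):
    print(f"U = {U:5d} V   I(linear) = {G0 * U * 1e3:7.1f} mA   "
          f"I(3x sigma) = {3.0 * G0 * U * 1e3:7.1f} mA")
```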
Fig. 4 Current I vs. applied voltage U0 for the needle electrode model. The dashed line corresponds to the linear constant-conductivity model (σ = const. = 0.126 S/m), while the other three curves correspond to the sigmoid-like σ(|E|) dependences σ2(|E|), σ3(|E|) and σ4(|E|) shown in Fig. 1.
IV. CONCLUSIONS

We have shown that incorporating electroporation-related conductivity changes into the model for the electric field calculation in tissue results in a different field distribution than in the constant conductivity case. Visualization of electric field isosurfaces in 3D provides valuable insight into the volume of tissue that is electroporated or not. Nonlinear models have the potential to better fit the experimental data. However, how tissue conductivity changes with the electric field is still poorly known and requires more experiments.
ACKNOWLEDGMENT

This work was funded within the program of bilateral scientific cooperation between the Republic of Croatia and the Republic of Slovenia.
REFERENCES

1. Mir LM (2001) Therapeutic perspectives of in vivo cell electropermeabilization. Bioelectrochemistry 53:1-10
2. Edd JF, Horowitz L, Davalos RV, Mir LM, Rubinsky B (2006) In vivo results of a new focal tissue ablation technique: irreversible electroporation. IEEE Trans. Biomed. Eng. 53:1409-1415
3. Pavlin M, Kandušer M, Reberšek M, Pucihar G, Hart FX, Magjarević R, Miklavčič D (2005) Effect of cell electroporation on the conductivity of a cell suspension. Biophys. J. 88:4378-4390
4. Cukjati D, Batiuskaite D, André F, Miklavcic D, Mir LM (2007) Real time electroporation control for accurate and safe in vivo non-viral gene therapy. Bioelectrochemistry 70:501-507
5. Ivorra A, Rubinsky B (2007) In vivo electrical impedance measurements during and after electroporation of rat liver. Bioelectrochemistry 70:287-295
6. Ivorra A, Mir LM, Rubinsky B (2009) Electric field redistribution due to conductivity changes during tissue electroporation: Experiments with a simple vegetal model. IFMBE Proc 25/XIII:59-62
7. Miklavcic D, Semrov D, Mekid H, Mir LM (2000) A validated model of in vivo electric field distribution in tissues for electrochemotherapy and for DNA electrotransfer for gene therapy. Biochim Biophys Acta 1523:233-239
8. Sel D, Cukjati D, Batiuskaite D, Slivnik T, Mir LM, Miklavcic D (2005) Sequential finite element model of tissue electropermeabilization. IEEE Trans. Biomed. Eng. 52:816-827
9. Gowrishankar TR, Weaver JC (2003) An approach to electrical modeling of single and multiple cells. Proc Natl Acad Sci USA 100:3203-3208
10. Miklavcic D, Snoj M, Zupanic A, Kos B, Cemazar M, Kropivnik M, Bracko M, Pecnik T, Gadzijev E, Sersa G (2010) Towards treatment planning and treatment of deep-seated solid tumors by electrochemotherapy. BioMedical Engineering OnLine 9:10
11. Haemmerich D, Staelin ST, Tsai JZ, Tungjitkusolmun S, Mahvi DM, Webster JG (2003) In vivo electrical conductivity of hepatic tumours. Physiol Meas 24:251-260
12. Lackovic I, Magjarevic R, Miklavcic D (2009) Three-dimensional finite-element analysis of Joule heating in electrochemotherapy and in vivo gene electrotransfer. IEEE Trans Diel El Insul 16:1338-1347

Author: Igor Lacković
Institute: University of Zagreb, Faculty of Electrical Engineering and Computing
Street: Unska 3
City: Zagreb
Country: Croatia
Email: [email protected]
Automated Estimation of 3D Camera Extrinsic Parameters for the Monitoring of Physical Activity of Elderly Patients

R. Deklerck, B. Jansen, X.L. Yao, and J. Cornelis

Vrije Universiteit Brussel, Dept. Electronics and Informatics-ETRO, IBBT, Brussels, Belgium

Abstract— This paper introduces a system for monitoring the physical activity of the elderly. The position-based monitoring system requires accurate calibration methods mapping the camera coordinate system onto a room-based system. An algorithm mapping a coordinate system defined by a 3D camera onto an external coordinate system is presented. The algorithm requires three sets of points to be known, each located in one of three mutually orthogonal planes. The method requires only one image, while no calibration objects or manual measurements are needed.
Keywords— 3D camera, physical activity monitoring, camera calibration.

I. INTRODUCTION AND STATE OF THE ART

Understanding the physical activity levels and patterns of activities of daily living of the elderly is becoming an important aspect of telemedicine, as changes in physical activity patterns can provide viable clinical information. A variety of sensors has been used for this goal, including PIR sensors, accelerometers and cameras. A major disadvantage of previous camera-based systems for this task (e.g. [1, 2]) is that regular 2D camera systems provide insufficient spatial information. Stereo vision systems could partially overcome this problem: given a mathematical model of the properties of the two cameras, 3D positions of objects can be calculated. However, correct spatial information relies on the automatic matching of corresponding pixels in each image; this process is computationally expensive and, moreover, not always reliable. Recently, 3D cameras have been developed which provide spatial information for the pixels by using the time-of-flight principle. The camera emits modulated infrared light and measures the time of flight of the light. This provides highly accurate depth information without complex pixel matching algorithms [3]. Studies comparing the accuracy of the depth measurements of stereo systems and 3D camera systems indeed show a superior accuracy of 3D camera systems [4, 5]. In past research, we have used such a 3D camera for the automatic detection of (in)activity of elderly patients. Using image processing techniques, the subject was identified in the image, an ellipse was fitted around the resulting blob, and the position of the subject was transformed from camera-based coordinates into room-based coordinates. As a result, the position of the elderly patient in the room is known up to an error of a few centimeters. From these 3D positions, various features can be calculated, e.g. the distance walked per time unit, the amount of time in bed, in a sofa, walking, etc. This initial prototype was described in [6].

Fig. 1 Activities registered during the night for an Alzheimer patient in his hospital room

The main requirement of our activity monitoring framework is that a good estimation of the camera extrinsics exists. The extrinsics relate the camera coordinate system to a room-based coordinate system. The estimation of camera extrinsics has been the subject of intensive research for 2D images, and many of the techniques can be extended to 3D images. In brief, two types of calibration methods exist: (1) methods requiring the availability of two sets of corresponding points, together with their coordinates in both spaces [7-10], and (2) methods requiring one or more views of a calibration object with known geometry (e.g. [11]). Both approaches have the disadvantage that they are not appropriate in telemonitoring setups, which have to be installed and maintained by care providers or nurses rather than technical personnel: they either require a large number of high-precision measurements to be performed or they require the availability of a dedicated calibration object. Therefore, we propose a novel calibration method which takes the geometry of the room as a calibration object. In our method the only restriction on the geometry of the room is that the walls are orthogonal. No manual interventions are needed to acquire physical measurements of the scene. The major walls in the room are automatically detected using a 3D Hough transform variant. This method is similar to work on the automatic detection of the orientation of the road in camera images taken from within cars, as described in [12].
II. METHOD

The task of the calibration method is to estimate the rotation matrix $R$ and the translation vector $T = [t_1\ t_2\ t_3]^\tau$ which relate the camera coordinate system $X$ and the room coordinate system $X'$, such that $X' = RX + T$. We assume that the intrinsic parameters and lens distortion are known, as the 3D camera automatically corrects for them. Whereas in typical calibration algorithms both $R$ and $T$ are estimated by providing a set of points expressed in both coordinate systems [13], the method we propose operates by providing sets of $N_1$, $N_2$ and $N_3$ points lying respectively in the three orthogonal planes ($x'=0$, $y'=0$ and $z'=0$) defining the reference frame of the room. The method is composed of three essential components: (A) an initial estimation $\rho$ of $R$ and $\tau$ of $T$; (B) a procedure to estimate the best rotation matrix $R^\bullet$ given $\rho$ [11]; (C) a method for calculating an optimized $R^*$ and $T^*$, starting from the $R^\bullet$ calculated in (B).

A. Initial Estimate of R and T

An initial estimate for $R$ and $T$ can be obtained by minimizing the following functional, where $i,j = 1,2,3$:

$$I(\rho_{ij}, t_i) = \sum_{i=1}^{3}\left[\frac{1}{N_i}\sum_{n=1}^{N_i}\left(\rho_{i1}x_n^i + \rho_{i2}y_n^i + \rho_{i3}z_n^i + t_i\right)^2 - \lambda_i\left(\rho_{i1}^2+\rho_{i2}^2+\rho_{i3}^2-1\right)\right] + \lambda_4(\rho_{11}\rho_{21}+\rho_{12}\rho_{22}+\rho_{13}\rho_{23}) + \lambda_5(\rho_{11}\rho_{31}+\rho_{12}\rho_{32}+\rho_{13}\rho_{33}) + \lambda_6(\rho_{21}\rho_{31}+\rho_{22}\rho_{32}+\rho_{23}\rho_{33}),$$

complemented by the 6 conditions for $\rho$ to be a rotation matrix:

$$\rho_{i1}^2+\rho_{i2}^2+\rho_{i3}^2 = 1, \quad i = 1,2,3,$$
$$\rho_{11}\rho_{21}+\rho_{12}\rho_{22}+\rho_{13}\rho_{23} = 0,$$
$$\rho_{11}\rho_{31}+\rho_{12}\rho_{32}+\rho_{13}\rho_{33} = 0,$$
$$\rho_{21}\rho_{31}+\rho_{22}\rho_{32}+\rho_{23}\rho_{33} = 0.$$

A set of 18 non-linear equations is obtained in this way. By relaxing the orthogonality constraints on $R$ (we assume $\lambda_4 = 0$, $\lambda_5 = 0$, $\lambda_6 = 0$), three decoupled sets of equations are obtained which correspond to three separate PCA problems estimating the normals of the best fitting planes through each of the three point sets $N_1$, $N_2$, $N_3$:

$$\tau_i = -\boldsymbol{\rho}_i\cdot\boldsymbol{\mu}_{N_i}$$

and

$$\Sigma_i\,\boldsymbol{\rho}_i = \min(\lambda_{i1},\lambda_{i2},\lambda_{i3})\,\boldsymbol{\rho}_i, \quad i = 1,2,3,$$

with $\boldsymbol{\rho}_i = [\rho_{i1}\ \rho_{i2}\ \rho_{i3}]^\tau$, $\boldsymbol{\mu}_{N_i} = [\mu_1^{N_i}\ \mu_2^{N_i}\ \mu_3^{N_i}]^\tau$ and

$$\Sigma_i = \begin{bmatrix} \sigma_{11}^{N_i}-(\mu_1^{N_i})^2 & \sigma_{12}^{N_i}-\mu_1^{N_i}\mu_2^{N_i} & \sigma_{13}^{N_i}-\mu_1^{N_i}\mu_3^{N_i} \\ \sigma_{21}^{N_i}-\mu_2^{N_i}\mu_1^{N_i} & \sigma_{22}^{N_i}-(\mu_2^{N_i})^2 & \sigma_{23}^{N_i}-\mu_2^{N_i}\mu_3^{N_i} \\ \sigma_{31}^{N_i}-\mu_3^{N_i}\mu_1^{N_i} & \sigma_{32}^{N_i}-\mu_3^{N_i}\mu_2^{N_i} & \sigma_{33}^{N_i}-(\mu_3^{N_i})^2 \end{bmatrix},$$

where for $a,b = 1,2,3$

$$\mu_a^{N_i} = \frac{1}{N_i}\sum_{n=1}^{N_i}X_n^i(a), \qquad \sigma_{ab}^{N_i} = \frac{1}{N_i}\sum_{n=1}^{N_i}X_n^i(a)X_n^i(b), \qquad X_n^i(1)=x_n^i,\ X_n^i(2)=y_n^i,\ X_n^i(3)=z_n^i.$$

Hence, $\boldsymbol{\rho}_i$ should be chosen as the eigenvector belonging to the eigenvalue with the smallest magnitude of the mean-centered covariance matrix $\Sigma_i$, in order to make it correspond to the normal of the best fitting plane. In general, due to noisy measurements, a non-orthogonal matrix $\rho = [\boldsymbol{\rho}_1, \boldsymbol{\rho}_2, \boldsymbol{\rho}_3]$ is obtained.
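A minimal sketch of step A follows, including a small synthetic test scene; the function names, the noise level and the sign convention of the eigenvectors are our own assumptions.

```python
import numpy as np

def initial_estimate(pts):
    """Step A: for each of the three point sets, the plane normal rho_i is the
    eigenvector of the smallest eigenvalue of the mean-centred covariance,
    and tau_i = -rho_i . mu_i. pts is a list of three (N_i, 3) arrays of
    camera coordinates."""
    rho, tau = [], []
    for P in pts:
        mu = P.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(P - mu, rowvar=False))
        nvec = evecs[:, np.argmin(np.abs(evals))]   # best-fit plane normal
        rho.append(nvec)
        tau.append(-float(nvec @ mu))               # tau_i = -rho_i . mu_{N_i}
    return np.array(rho), np.array(tau)             # rho: generally non-orthogonal

# synthetic room: X' = R X + T, with points lying in the planes x'=y'=z'=0
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
T_true = np.array([0.80, 1.20, 0.65])
pts = []
for i in range(3):
    Xr = rng.uniform(0.0, 2.0, (200, 3))
    Xr[:, i] = 0.0                                  # room plane i
    pts.append((Xr - T_true) @ R_true + 0.005 * rng.standard_normal((200, 3)))
rho, tau = initial_estimate(pts)
```

The rows of the recovered rho approximate the rows of R up to sign (an eigenvector is defined only up to sign), which is why the orthogonalization and refinement steps below are still needed.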
ρ i21 + ρ i22 + ρ i23 = 1 i = 1, 2, 3 ρ11 ρ 21 + ρ12 ρ 22 + ρ13 ρ 23 = 0 ρ11 ρ 31 + ρ12 ρ 32 + ρ13 ρ 33 = 0 ρ 21 ρ 31 + ρ 22 ρ 32 + ρ 23 ρ 33 = 0 A set of 18 non-linear equations is obtained in this way. By relaxing the orthogonality constraints on R (we assume λ4=0, λ5=0, λ6=0), three decoupled sets of equations are
B. Deriving an Orthogonal Matrix R• from ρ According to [11], the least square estimate of the best fitting orthogonal matrix R• to ρ, can be computed as R•=UVT, using the SVD (singular value decomposition) of ρ=USVT. C. Optimizing R• and τ The derived matrix R• can still be further optimized into R* by introducing a small rotation, linear in the angles, under the assumption that R• is already close to the optimum: −δ ϕ ⎤ ⎡ r1 1 ⎡ 1 ∂ R . R • = R * = ⎢⎢ δ 1 − θ ⎥⎥ . ⎢⎢ r2 1 ⎢⎣ − ϕ θ 1 ⎥⎦ ⎢⎣ r3 1 r1 2 − δ r2 2 + ϕ r3 2 ⎡ r1 1 − δ r2 1 + ϕ r3 1 ⎢ δr + r −θr δ r1 2 + r2 2 − θ r3 2 1 1 2 1 3 1 ⎢ ⎢⎣ − ϕ r1 1 + θ r2 1 + r3 1 − ϕ r1 2 + θ r2 2 + r3 2
IFMBE Proceedings Vol. 29
r1 2 r2 2 r3 2
r1 3 ⎤ r2 3 ⎥⎥ = r3 3 ⎥⎦
r1 3 − δ r2 3 + ϕ r3 3 ⎤ δ r1 3 + r2 3 − θ r3 3 ⎥⎥ − ϕ r1 3 + θ r2 3 + r3 3 ⎦⎥
Automated Estimation of 3D Camera Extrinsic Parameters for the Monitoring of Physical Activity of Elderly Patients
This has the advantage that a linear least squares problem is obtained: the minimization of the new functional I* results in a set of six linear equations with six unknowns ( t1* , t 2* , t 3* and δ, ϕ and θ). 3
∑
I* =
i =1
1 Ni
Ni
∑ ( ri1* x ni n =1
+ ri *2 y ni + ri *3 z ni + t i* ) 2
⎧ δ(σ_22^{•N1} + σ_11^{•N2}) − ϕσ_23^{•N1} − θσ_13^{•N2} − t_1*μ_2^{•N1} + t_2*μ_1^{•N2} = σ_21^{•N1} − σ_12^{•N2}
⎪ −δσ_32^{•N1} + ϕ(σ_33^{•N1} + σ_11^{•N3}) − θσ_12^{•N3} + t_1*μ_3^{•N1} − t_3*μ_1^{•N3} = −σ_31^{•N1} + σ_13^{•N3}
⎨ −δσ_31^{•N2} − ϕσ_21^{•N3} + θ(σ_33^{•N2} + σ_22^{•N3}) − t_2*μ_3^{•N2} + t_3*μ_2^{•N3} = σ_32^{•N2} − σ_23^{•N3}
⎪ δμ_2^{•N1} − ϕμ_3^{•N1} − t_1* = μ_1^{•N1}
⎪ −δμ_1^{•N2} + θμ_3^{•N2} − t_2* = μ_2^{•N2}
⎩ ϕμ_1^{•N3} − θμ_2^{•N3} − t_3* = μ_3^{•N3}

where for a, b = 1, 2, 3

μ_a^{•Ni} = (1/N_i) Σ_{n=1}^{N_i} X_n^{•i}(a),   σ_ab^{•Ni} = (1/N_i) Σ_{n=1}^{N_i} X_n^{•i}(a) X_n^{•i}(b)

with X_n^{•i}(1) = x_n^{•i}, X_n^{•i}(2) = y_n^{•i}, X_n^{•i}(3) = z_n^{•i} and

[x_n^{•i} y_n^{•i} z_n^{•i}]^T = R• · [x_n^i y_n^i z_n^i]^T
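An equivalent way to obtain δ, ϕ, θ and T* without transcribing the normal equations is to stack the linearized residuals of all points and solve one least-squares problem. A hedged NumPy sketch under that formulation (function and variable names are ours):

```python
import numpy as np

def refine(R_dot, planes):
    """Refine R_dot into R* and T* from three point sets.

    planes: three (N_i, 3) arrays of camera-frame points lying in the
    planes x'=0, y'=0, z'=0. Solves for [delta, phi, theta, t1, t2, t3].
    The per-plane 1/N_i weights of I* are omitted for brevity; scale each
    plane's rows by 1/sqrt(N_i) to match the functional exactly.
    """
    rows, rhs = [], []
    for i, pts in enumerate(planes):
        p = pts @ R_dot.T                              # rotated points (x•, y•, z•)
        for x, y, z in p:
            if i == 0:      # residual: x - delta*y + phi*z + t1
                rows.append([-y, z, 0.0, 1.0, 0.0, 0.0]); rhs.append(-x)
            elif i == 1:    # residual: delta*x + y - theta*z + t2
                rows.append([x, 0.0, -z, 0.0, 1.0, 0.0]); rhs.append(-y)
            else:           # residual: -phi*x + theta*y + z + t3
                rows.append([0.0, -x, y, 0.0, 0.0, 1.0]); rhs.append(-z)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    d, ph, th = sol[:3]
    S = np.array([[1.0, -d, ph], [d, 1.0, -th], [-ph, th, 1.0]])
    return S @ R_dot, sol[3:]                          # R*, T*
```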
III. RESULTS AND DISCUSSION

The whole procedure involving steps A, B and C was initially tested successfully on simulated data. In order to evaluate its performance in a real setting, images were captured using a Swissranger SR-3000 3D camera. The camera was positioned such that it captured the ground floor and two intersecting walls of a simple room. The distances to the ground and the two walls were respectively 80, 120 and 65 cm.

Table 1 Average errors of the calibration method in centimeters for the three orthogonal planes defined in the room coordinate system: x' = 0; y' = 0; z' = 0

M     dx'    dy'    dz'
10    1.43   1.17   1.12
50    1.52   1.07   1.22
100   1.34   1.02   1.19
150   1.46   1.10   1.16
200   1.31   1.02   1.07

The acquired image was shown on a computer screen and a user was asked to manually select 200 random points in each of the three planes visible in the acquired image. For each of the three planes and corresponding point sets, M points were randomly selected for calculating R* and T*, while three randomly selected points P_{x'=0}, P_{y'=0}, P_{z'=0}, each outside the respective sets of M points, were used for evaluation. The evaluation measures were the distances d_a' of the transformed points P*_{a'=0} to the plane a' = 0 for a' = {x', y', z'}. The entire evaluation experiment was repeated 1000 times. Average results for various values of M are shown in Table 1. The results show an error varying between 1.0 and 1.5 cm. For arbitrary points lying outside the three planes, measured with a ruler with 1 millimeter precision, similar errors per coordinate are observed when the pixel indicated by the user is of constant depth and not affected by partial area effects. The accuracy of the results is primarily limited by the noise on the camera measurements. Approximately ten spread-out points per plane are required to provide reliable results. This means that the user needs to manually click 30 points in the images, which is a simple operation. Alternatively, the major planes in the images could be detected automatically using a region growing approach [14] or a 3D Hough Transform variant [15], such that no additional user input is required.
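For completeness: once R* and T* are known, the evaluation measure d_a' is just the magnitude of one room-frame coordinate. A small illustrative helper (our naming, not from the paper):

```python
import numpy as np

def plane_distance(R_star, T_star, point, axis):
    """Distance of a camera-frame point to the room plane axis'=0 (axis: 0, 1, 2)."""
    room = R_star @ np.asarray(point) + T_star   # X' = R*X + T*
    return abs(room[axis])
```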
IV. FUTURE WORK

The procedure for estimating the extrinsics of a single camera can be extended into a method for relating multiple cameras to a single room-based coordinate system, at least if partially overlapping views exist. Using feature matching methods (e.g. SURF and SIFT), identical planes in the different images can be identified. Evaluation experiments on such a system are currently being performed.
V. CONCLUSION

The method proposed here computes the extrinsic camera parameters of a 3D camera. Within a home monitoring system for the analysis of physical activity of elderly patients, it is not feasible to perform actual measurements of the positions of key points in the room, or to use calibration methods requiring multiple images or complex calibration objects. The method we proposed operates on three sets of points known to lie in three mutually orthogonal planes. These sets can either be defined with a simple user interface or by semi-automatic plane fitting methods.
Fig. 2 Application of the method to a corner of the room. Points were automatically detected for the different planes. (a) visible image; (b) detected planes
Fig. 3 Application of the method in a patient room. Points were automatically detected for the three planes. One plane (the floor) was colored in green. The depth image is shown here
ACKNOWLEDGMENT This research was partially funded by IBBT (Interdisciplinary Institute for Broadband Technology).
REFERENCES
[1] H. Nait-Charif and S.J. McKenna. Activity summarisation and fall detection in a supportive home environment. In International Conference on Pattern Recognition (ICPR), Cambridge, 2004.
[2] D. Chen, H. Wactlar, and J. Yang. Towards automatic analysis of social interaction patterns in a nursing home environment from video. In 6th ACM SIGMM International Workshop on Multimedia Information Retrieval (MIR'04), pages 283–290, 2004.
[3] Thierry Oggier, Michael Lehmann, Rolf Kaufmann, Matthias Schweizer, Michael Richter, Peter Metzler, Graham Lang, Felix Lustenberger, and Nicolas Blanc. An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution. In SPIE Proceedings, pages 5249–65, St. Etienne, 2003.
[4] C. Beder, B. Bartczak, and R. Koch. A comparison of PMD-cameras and stereovision for the task of surface reconstruction using patchlets. In Proceedings of the second international ISPRS workshop BenCos 2007, 2007.
[5] P. Einramhof, S. Olufs, and M. Vincze. Experimental evaluation of state of the art 3D-sensors for mobile robot navigation. In Proceedings OAGM07, 2007.
[6] Bart Jansen, Sonja Rebel, Rudi Deklerck, Tony Mets and Peter Schelkens. Detection of activity pattern changes among elderly with 3D camera technology. In Proceedings of the SPIE Europe Photonics Europe International Symposium, April 7-10 2008, Strasbourg, France, 2008.
[7] Y.I. Abdel-Aziz and H.M. Karara. Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. In Proceedings of the Symposium on Close-Range Photogrammetry, pages 1–18. American Society of Photogrammetry, 1971.
[8] Raouf Benjemaa and Francis Schmitt. A solution for the registration of multiple 3D point sets using unit quaternions. Lecture Notes in Computer Science, 1407:34–50, 1998.
[9] H. Hatze. High-precision three-dimensional photogrammetric calibration and object space reconstruction using a modified DLT approach. J. Biomechanics, 21:533–538, 1988.
[10] Shinji Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376–380, 1991.
[11] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.
[12] A.D. Sappa, D. Geronimo, F. Dornaika, and A. Lopez. On-board camera extrinsic parameter estimation. Electronics Letters, 42(13), 2006.
[13] R.Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4):232–244, August 1987.
[14] B. Jansen, F. Temmermans and R. Deklerck. "3D human pose recognition for home monitoring of elderly," presented at The 29th IEEE EMBS Annual International Conference, Lyon, France, 2007.
[15] Wouter Belmans, Tim Schaeps and Bart Jansen. Pseudo-automatic camera extrinsic estimation using 3D Hough Transform. In Proceedings of the 4th European Congress for Medical and Biomedical Engineering, November 23-27 2008, Antwerp, Belgium, 2008.
Author: Bart Jansen
Institute: Vrije Universiteit Brussel-ETRO
Street: Pleinlaan 2
City: BE-1050 Brussels
Country: Belgium
Email: [email protected]
Robotic system for training of grasping and reaching
J. Podobnik1 and M. Munih1
1 Laboratory of Robotics and Biomedical Engineering, Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
Abstract— This paper presents the HEnRiE device (Haptic environment for reaching and grasping exercises). HEnRiE is designed for training of reaching, grasping and transporting virtual objects in haptic environments. The motivation was to develop a single system that retrains both hand grasping and releasing movements (which are essential to perform activities of daily living) and arm movements. The system combines a haptic interface and a grasping device, which is mounted on the end-effector of the haptic interface. Experiments with six healthy subjects and a series of experimental training sessions with two hemiparetic subjects are presented. Both hemiparetic subjects were able to successfully complete the experimental training. The results show that the basic mechanism of grasp and load force coordination and control appears to remain preserved in a virtual task.

Keywords— robot-aided neuro-rehabilitation, haptic interface, upper extremity, grasping, virtual environment
I. INTRODUCTION

Robot-aided neuro-rehabilitation has in the last decade become widely recognized and accepted as a novel and promising rehabilitation approach [1, 2]. Studies have shown that robot-aided neuro-rehabilitation facilitates motor recovery and changes motor map topography [2]. Robot-aided neuro-rehabilitation has also become closely linked to virtual reality, which has become a key component of systems for robot-aided neuro-rehabilitation [3]. Systems for robot-aided rehabilitation are intended to supplement conventional therapy and to aid physiotherapists as a new tool in their effort to maximize the positive outcome of rehabilitation. The aim of robot-aided neuro-rehabilitation is to improve the patient's motor performance, shorten the rehabilitation time, and provide objective parameters for patient evaluation [4]. The European project Gentle/S showed that subjects were motivated to exercise for longer periods of time when using an augmented virtual reality system composed of haptic and visual reality systems. Subjects could exercise "reach-and-grasp" type movements but without the grasping component, which was identified as one of the shortcomings of the Gentle/S prototype [5]. Later a Gentle/G integrated system for reach & grasp therapy [6] was designed, which combines Gentle/S hardware and software with a dedicated grasp assist unit, the Grasp Robot Exoskeleton. The Grasp Robot Exoskeleton has three active degrees of freedom, one for the thumb and two for the other four fingers.

This paper presents the HEnRiE device, which combines a haptic device for upper extremity training with a module for grasping and computer-generated haptic and graphical virtual environments. The HEnRiE device allows combined training of reaching and grasping movements. Combined training is reasonable because most activities of daily living require both arm movements and grasping [7]. Experiments were carried out on two chronic hemiparetic subjects and on six healthy subjects. The results of the two hemiparetic subjects are compared to the results of the control subjects. The experiments have shown that the HEnRiE device is appropriate for robot-aided neuro-rehabilitation training.

II. METHODS

A. Apparatus

The main system components are the HapticMaster robot, with additional external axes for arm weight compensation and training of grasping, and a 3D projection system. The HapticMaster robot is built with three active degrees of freedom in a translation-rotation-translation configuration. In addition to the three active degrees of freedom, a passive gimbal is attached at the robot end-effector. In the center of the gimbal a grasping device is attached, which is available in a passive configuration. A 3-axis force sensor is built into the HapticMaster robot. The sensor enables measurement of the three forces acting at the robot end-effector. These measured forces are available to the user for use in the robot control system. The HapticMaster robot and all external axes, including the arm weight compensation and the grasping device, are controlled using a custom designed controller. The 3D projection system consists of two InFocus projectors, a back projection screen and a multimedia computer. The system enables generation of visual 3D virtual environments. The grasping device is a passive mechanism with two degrees of freedom, mounted on the force sensor at the end-point of the haptic interface. It enables grasping of
virtual objects in virtual environments. The passive mechanism of the grasping device consists of two parallelogram mechanisms mounted on the frame of the grasping device, each with one degree of freedom. Each of the two parallelogram mechanisms is equipped with a force cell for measuring the force applied on the pads by the fingers. The range of each force cell is [−100, 100] N. The parallelogram mechanism allows the finger pads to remain parallel regardless of the distance between them. The user applies force on the first force cell with the thumb, and on the second with the other four fingers. On the back of the frame, each of the parallelogram mechanisms is connected to the frame with springs. The grasping device can therefore be described as a passive elastic haptic device. Zhai [8] classified input devices as isometric, isotonic, and elastic as an option in between. Isotonic devices can be moved freely or with a constant resistance. Isometric devices do not move and produce a reaction force equal to the force applied on the device. Haptic information is provided by four types of human somatosensory receptors: mechanoreceptors in joints, Golgi tendon organs, muscle spindles and cutaneous receptors in the skin. Since the joints of the hands and arms of the user do not move when using isometric devices, useful information comes only from the other three types of receptors and no information comes from the mechanoreceptors in joints. With an elastic device, however, useful information comes from all four types of receptors. This is the major difference between our elastic grasping device and rigid passive grasping devices, which are based on the principle of pseudo-haptics [9]. The grasping device frame is mounted on the wrist support mechanism, as shown in Figure 1. The user places the wrist in a wrist splint, which is attached to the wrist support mechanism. The splint restricts the movement of the hand at the wrist but does not affect finger mobility. The wrist support mechanism has two passive degrees of freedom.
Fig. 1: Figure (a) shows the grasping device with attached cuffs for fingers. Figure (b) shows the grasping device and the wrist support mechanism mounted on the end-point of the HapticMaster robot.
B. Pick and place task

In the pick and place task the subject must move the arm to the virtual object, grasp it, transport it to the designated location and release it there. When the object is released, a new virtual object comes into the workspace. If the subject does not apply a sufficiently large grasp force, the object falls down and has to be picked up again. The virtual objects in this task were apples; the subject has to carry them to a fruit stand (see Fig. 2).
Fig. 2: The Pick and Place task. The subject has to pick the apples which fall from the tree and place them on the platter on the fruit stand.
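The grasp/release logic sketched below is our reading of the task description; the two force thresholds are hypothetical values, since the paper only shows them graphically (in Fig. 4 later in the text):

```python
GRASP_THRESHOLD = 8.0    # N, hypothetical; not stated numerically in the paper
RELEASE_THRESHOLD = 4.0  # N, hypothetical; dropping below it releases the object

def update(holding, grasp_force, at_target):
    """One control tick of the pick-and-place task logic.

    Returns (still_holding, event), where the event reflects the task
    rules: an object must be grasped above one threshold, is dropped
    (and must be re-picked) if grasp force falls below the other, and
    counts as placed only when released at the target.
    """
    if not holding:
        if grasp_force >= GRASP_THRESHOLD:
            return True, "grasped"
        return False, "reaching"
    if grasp_force < RELEASE_THRESHOLD:
        return False, "placed" if at_target else "dropped"
    return True, "carrying"
```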
C. Subjects

Two hemiparetic post-stroke subjects, a woman (subject A) and a man (subject B), 5 and 7 years after stroke with chronic upper-extremity impairments, were recruited. Subject A was 40 years old and subject B was 45 years old. Both subjects had impairment of the right side of the body, and the right arm had been the dominant arm before the stroke. Both participants were free of other neurological deficits. Six healthy subjects (all male, aged between 26 and 29 years) also participated in the experiments.

D. Procedures

Figure 3 shows the training set-up with a hemiparetic subject. The subject was seated in a chair in front of the haptic interface. An arm weight compensation system was used to support the arm of the subject. Next, the wrist was placed in a splint to securely fixate it in the wrist support mechanism mounted on the end-effector of the haptic interface. Finally, the fingers were placed in the finger attachment cuffs of the grasping device. Each healthy subject participated in one training session, while the full training lasted 10 sessions for subject A and 9 sessions for subject B. One session consisted of four pick and place tasks, with a pause between each task. In each pick and place task the subject had to pick 20 apples.
Fig. 3: The HEnRiE device training set-up with a hemiparetic subject.
III. RESULTS

Figure 4 shows the grasp force, the position of the wrist, and the load force for 20 consecutive successful trials of transporting the virtual object. The load force is the z-component of the interaction force F applied by the user. The x-axis of Figure 4 shows normalized time. A full pick and place movement was divided into three phases: the grasping phase, the transport phase and the release phase. Coordination between grasp and load force was evaluated by computing the correlation between the grasp force signal and the load force signal. Correlation is considered a sensitive parameter for the precision of the coupling between the grasp and load force [10]. Table 1 shows correlation values between grasp and load force for the control subjects and for subjects A and B. The third column gives correlation values for all three phases of the pick and place movement.

Fig. 4: Grasp force applied by the user, position of the end-point of the HapticMaster, and load force applied by the user, measured with the force sensor mounted on the end-point of the robot (x-axis: normalized time). The grasping and releasing of the virtual object are indicated with x marks on the grasping and releasing interval lines. Grasp and release thresholds are shown on the grasp force plot.

IV. DISCUSSION

In our experiments we can observe the same lift synergies in grasp and load forces as in the case of lifting real objects, as described by Forssberg et al. [11]. This shows that adult subjects, when lifting an object in a virtual task, employ the same anticipatory control of the force output during the grasping phase as in a real situation. When transporting actual objects held with the fingers, the grasp force increases in parallel with the load force [12]. In the experiments performed by Flanagan and Wing [12] the grasp force was the force normal to the surface of the object and the load force was the force tangential to the surface of the object; the fingers thus applied both forces. In our experiments
the grasp force is the force in the normal direction, applied by the fingers, while the load force is the force measured between the wrist and the end-point of the haptic interface. Hence, the subject does not feel the perturbations with the fingers but at the wrist. The grasp force and the load force are therefore decoupled. This was necessary for a successful use of the HEnRiE device as a rehabilitation device for upper extremity and grasp rehabilitation. The HEnRiE device supports the subject's arm at the wrist, which is required for upper extremity rehabilitation. The help (or resistance) provided by the haptic interface was set according to the subject's level of upper extremity impairment, while the grasp part of the task was set according to the subject's level of grasp impairment. The HEnRiE device is thus intentionally designed for use in rehabilitation training with special emphasis on combined training of the upper extremity and grasp, and can therefore be adapted to the specific requirements of the subject's level of impairment.

The correlation value for the whole movement for subject A is significantly lower than for the control subjects, indicating that coordination is impaired. However, comparison of the correlation values for the grasping and transport phases between subject A and the control subjects shows that coordination in subject A is normal. In the release phase the load force is small, since the virtual object rests on the virtual ground. Since the load force is small, variations in the load force are relatively larger than in the other two phases. These variations of the load force result in a lower correlation between the grasp and load force in the release phase. Since this phase is short in control subjects, it does not affect the correlation value for the whole movement. However, this phase becomes longer in hemiparetic subjects, since they have an impaired ability to release the object. It is thus crucial to divide the full movement into phases to be able to properly examine the coordination between the grasp and load force. The correlation value during the release phase is significantly lower compared to control subjects not because of impaired coordination but because of an impaired ability to open the hand. As a result of the impaired ability to open the hand, the release phase is prolonged and therefore the correlation value for the whole movement is degraded, although coordination in the grasping and transport phases is normal. However, the correlation values for subject B show that coordination between the grasp and load force is degraded in all phases of the movement.

Table 1: Correlation values between the grasp and load force during the pick and place phases.

Subjects    Phase              Correlation values for a given phase
Controls    Grasping phase     0.67 (0.50)
            Transport phase    0.67 (0.34)
            Release phase      0.34 (0.49)
            Full movement      0.70 (0.24)
Subject A   Grasping phase     0.63 (0.35)
            Transport phase    0.78 (0.31)
            Release phase      0.12 (0.66)
            Full movement      0.53 (0.27)
Subject B   Grasping phase     0.21 (0.67)
            Transport phase    0.18 (0.63)
            Release phase      0.11 (0.76)
            Full movement      0.23 (0.56)
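The per-phase correlations in Table 1 can be computed directly from the two force signals; a short sketch (ours), assuming the phase boundaries are available as sample-index pairs:

```python
import numpy as np

def phase_correlations(grasp, load, boundaries):
    """Pearson correlation of grasp vs. load force per movement phase.

    grasp, load: 1-D arrays of equal length (force samples).
    boundaries: (start, end) index pairs for the grasping, transport
    and release phases. Returns one correlation per phase plus one
    for the full movement.
    """
    out = [np.corrcoef(grasp[s:e], load[s:e])[0, 1] for s, e in boundaries]
    out.append(np.corrcoef(grasp, load)[0, 1])   # full movement
    return out
```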
V. CONCLUSION

This paper provides an overview of the HEnRiE device, which allows the training of reaching and grasping movements, extending elbow and shoulder movement training with grasp training. The HEnRiE system was evaluated on a group of two hemiparetic post-stroke subjects during a one-month period of training and on a group of six healthy subjects. The experiments with post-stroke subjects have shown that the HEnRiE device is suitable for and capable of delivering long-term training of both arm movements and grasping for post-stroke subjects. Subjects reported that the ability to grasp objects in the virtual environment gives them the feeling of a more natural interaction with the virtual objects and that they feel more motivated to actively participate in the training.
ACKNOWLEDGEMENTS The authors acknowledge the financial support from the Slovenian Research Agency (ARRS). This work was partially supported by the EU Information and Communication Technologies Collaborative Project MIMICS grant 215756.
REFERENCES
1. Teasell R W, Kalra L. What's New in Stroke Rehabilitation. Stroke 2004;35:383–385.
2. Kwakkel G, Kollen B J, Krebs H I. Effects of Robot-Assisted Therapy on Upper Limb Recovery After Stroke: A Systematic Review. Neurorehabil Neural Repair 2008;22:111–121.
3. Holden M K. Virtual Environments for Motor Rehabilitation: Review. Cyberpsychol Behav 2005;8:187–211.
4. Harwin W S, Patton J L, Edgerton V R. Challenges and Opportunities for Robot-Mediated Neurorehabilitation. Proceedings of the IEEE 2006;94:1717–1726.
5. Loureiro R, Amirabdollahian F, Topping M, Driessen B, Harwin W. Upper Limb Mediated Stroke Therapy - GENTLE/s Approach. Journal of Autonomous Robots 2003;15:35–51.
6. Loureiro R C V, Lamperd B, Collin C, Harwin W S. Reach & Grasp Therapy: Effects of the Gentle/G System Assessing Sub-acute Stroke Whole-arm Rehabilitation. In IEEE 11th International Conference on Rehabilitation Robotics (Kyoto, Japan):755–760, 2009.
7. Fritz S L, Light K E, Patterson T S, Behrman A L, Davis S B. Active Finger Extension Predicts Outcomes After Constraint-Induced Movement Therapy for Individuals With Hemiparesis After Stroke. Stroke 2005;36:1172–1177.
8. Zhai S. Investigation of Feel for 6DOF Inputs: Isometric and Elastic Rate Control for Manipulation in 3D Environments. In Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting (Seattle, WA, USA), 1993.
9. Lécuyer A, Coquillart S, Kheddar A, Richard P, Coiffet P. Pseudo-Haptic Feedback: Can Isometric Input Devices Simulate Force Feedback? In IEEE International Conference on Virtual Reality (New Brunswick, USA):83–90, 2000.
10. Podobnik J, Munih M. Robot-assisted evaluation of coordination between grasp and load forces in a power grasp in humans. Adv Robot 2006;20:933–951.
11. Forssberg H, Eliasson A C, Kinoshita H, Johansson R S, Westling G. Development of human precision grip. I: Basic coordination of force. Exp Brain Res 1991;85:451–457.
12. Flanagan J R, Wing A M. Modulation of grip force with load force during point-to-point arm movements. Exp Brain Res 1993;95:131–143.

Author: Janez Podobnik
Institute: Faculty of Electrical Engineering, University of Ljubljana
Street: Trzaska c. 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
IFMBE Proceedings Vol. 29
Recognition and Identification of Red Blood Cell Size Using Angular Radial Transform and Neural Networks
G. Apostolopoulos1, S. Tsinopoulos2, and E. Dermatas1
1 Department of Electrical Engineering and Computer Technology, University of Patras, Patras, Greece
2 Department of Mechanical, TEI of Patras, Patras, Greece
Abstract— In this paper, a novel method for the estimation of human Red Blood Cell (RBC) size using light scattering images is presented. The information retrieval process includes image normalization and feature extraction using the Angular Radial Transform (ART). A Radial Basis Function Neural Network (RBF-NN) estimates the RBC geometrical properties. The proposed method is evaluated in both regression and identification tasks, where three important geometrical properties of the human RBC are estimated using a database of 1575 simulated images generated with the boundary element method. The experimental setup consists of a light beam at 632.8 nm and moving RBCs in a thin glass; additive noise distortion is simulated using white Gaussian noise from 60 to 10 dB SNR. The regression and identification accuracy of actual RBC sizes is estimated using three feature sets, giving a mean error rate of less than 1 percent of the actual RBC size for noisy image data at 10 dB SNR or better, and a mean identification rate of more than 97 percent.

Keywords— Human red blood cell, neural network, light scattering, Angular Radial Transform.
I. INTRODUCTION

The red blood cell (RBC) is the most common type of blood cell; it is filled with hemoglobin, a bio-molecule that can bind oxygen. Several blood diseases change the typical size and distribution of red blood cells. The scattering of electromagnetic waves by dielectric particles is a problem of great significance for a variety of applications, ranging from particle sizing and remote sensing to radar meteorology and the biological sciences [1]. In the area of medical diagnosis, understanding how a laser beam interacts with blood suspensions or a whole-blood medium is of paramount importance in quantifying the RBC inspection process in many commercial devices. The correlation between the obtained patterns and the physical characteristics of blood, such as the concentration of hemoglobin and the degree of oxygen saturation, is determined with the aid of simple multiple-scattering theories when dilute suspensions of blood are used [2-4]. Light scattering has been used for efficient and accurate measurement of the geometrical properties of micro-particles [5-7]. Most of those methods, based on light scattering
spectroscopy, are investigated in a back-scattering geometry, since it has been shown that the spectrum over wave-number of backscattered light has a periodic component with an oscillation frequency proportional to the particle size [8]. Analytical methods are based on theories such as those of Mie, Rayleigh, Fraunhofer and Rayleigh-Gans, and on anomalous-diffraction and ray-optics approximations [9]. Numerical methods, such as the method of moments, the boundary element method (BEM), the surface and volume integral-equation methods and the finite-element method, can solve very complicated scattering problems with no significant restrictions. Content-based image retrieval has been a topic of intensive research in recent years, and particularly the development of effective shape descriptors (SD). The MPEG-7 standard committee has proposed a region-based shape descriptor, the Angular Radial Transform (ART) [10,11]. This SD has many desirable properties: compact size, robustness to noise and scaling, invariance to rotation, and the ability to describe complex objects. These properties and the evaluation made during the MPEG-7 standardization process make the ART a unanimously recognized efficient descriptor. Furthermore, an important characteristic is the small size of the ART descriptor: for a huge database, it implies fast answers during retrieval processes. Recently, a device for the acquisition of scattering images of an RBC-flow was presented [13,14]. The block diagram of the experimental device is shown in Fig. 1, consisting of a monochromatic light source, two iris diaphragms, a Fourier convex lens, the flow chamber, a semi-transparent screen, a beam stopper, and a Charge Coupled Device (CCD) camera. The light source is a 5-mW vertically polarized laser, and the suspension is circulated continuously into the flow chamber at a constant flow rate by a syringe pump. In this paper, a novel method for the estimation of human RBC size using light scattering images of the simulated device [15] at 632.8 nm is presented and evaluated. The goal of this paper is twofold: to identify human RBCs, and to estimate the diameter and the maximum and minimum thickness of the RBC from multiple digital images, derived when a narrow-band light beam illuminates the RBC-flow, using image processing and non-linear regression techniques. The structure of this paper is as follows: in Section 2, the proposed estimation
and identification method for the human RBC, using ART-based features and an RBF neural network, is presented. The image database, the experiments, and the regression and identification results are presented and discussed in Section 3. Finally, a conclusion is given.

Fig. 1 The schematic presentation of the simulated device used to acquire scattering images in an RBC-flow [15]: a He-Ne laser (632.8 nm), two iris diaphragms, a Fourier convex lens, the flow chamber, a semi-transparent screen, a beam stopper, and a CCD camera connected to a PC

II. ESTIMATION OF RBC SIZE

In each image of the simulated device, scattering phenomena of a single RBC type at 632.8 nm are recorded. Taking into account that the mean brightness is not related to the RBC size, a linear normalization of the pixels' intensity, according to the brightest value, is applied to remove this irrelevant information. In the feature extraction module, the Angular Radial Transform (ART) is used to extract the scattering information from the image. The feature vector, composed of the ART shape coefficients, is used by the radial-basis function neural network (RBF-NN) to derive the actual RBC size.

A. Features Extraction

The Angular Radial Transform (ART) is a moment-based image description method that can be used to encode region-based information [12]. It gives a compact and efficient way to express the pixel distribution within a 2-D object region, and describes both connected and disconnected region shapes. The ART is a complex orthogonal integral transformation defined on a unit disk that consists of the complete orthogonal sinusoidal basis functions in polar coordinates [10,11]. The ART coefficients of order n and m are defined by:

f_{n,m} = ∫_0^{2π} ∫_0^1 ℜ{V_{n,m}(ρ,θ)} · f(ρ,θ) · ρ · dρ · dθ    (1)

where f(ρ,θ) is an image function in polar coordinates and V_{n,m}(ρ,θ) is the ART basis function, which is separable along the angular and radial directions:

V_{n,m}(ρ,θ) = A_m(θ) · R_n(ρ)    (2)

In order to achieve rotation invariance, an exponential function is used for the angular basis function; the radial basis function is defined by a cosine function:

A_m(θ) = (1/2π) e^{jmθ},   R_n(ρ) = { 1, n = 0 ; 2 cos(πnρ), n ≠ 0 }    (3)

The ART descriptor is defined as a set of normalized magnitudes of the set of ART coefficients. Rotational invariance is obtained by using the magnitude of the coefficients. In MPEG-7, twelve angular and three radial functions (Fig. 2) are used (n < 3, m < 12) [10]. The same coefficients are adopted here to describe the scattering image content. The ART shape feature vector is composed of the magnitudes of the ART coefficients:

h_ART = [f_{0,0}, f_{0,1}, …, f_{0,11}, …, f_{2,11}]    (4)

For scale normalization, the ART coefficients are divided by the magnitude of the ART coefficient of order f_{0,0}.

Fig. 2 Features extraction using real parts of the ART basis functions (f_{0,0} … f_{2,11}) applied to a scattering image of the human erythrocyte
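A compact NumPy sketch of Eqs. (1)-(4) (our illustration, not the authors' code, assuming a square grayscale image as a NumPy array): the basis functions are sampled on the pixel grid over the unit disk, projected onto the image, and scale-normalized by |f_{0,0}|.

```python
import numpy as np

def art_descriptor(img):
    """35-dim ART descriptor of a square grayscale image (n < 3, m < 12).

    Follows Eqs. (1)-(4): the image is sampled over the inscribed unit
    disk, projected on the real parts of the ART basis functions, and
    scale-normalized by |f_{0,0}|.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    rho = np.hypot(x - cx, y - cy) / min(cx, cy)   # radius mapped to unit disk
    theta = np.arctan2(y - cy, x - cx)
    mask = rho <= 1.0
    coeffs = []
    for n in range(3):
        radial = np.ones_like(rho) if n == 0 else 2.0 * np.cos(np.pi * n * rho)
        for m in range(12):
            basis = np.real(np.exp(1j * m * theta) / (2 * np.pi)) * radial
            # the Cartesian pixel-area element dx dy already equals rho drho dtheta
            coeffs.append(np.abs(np.sum(basis * img * mask)))
    coeffs = np.array(coeffs)
    return coeffs[1:] / coeffs[0]                  # drop f00, scale-normalize
```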
B. Features Normalization

The generalization capabilities and the training efficiency of the RBF network can be improved using a feature normalization process [16]. Therefore, each ART coefficient of the feature vector is divided by the corresponding standard deviation estimated on the complete set of training data.

C. Estimation of RBC Size Using a Neural Network

Artificial neural networks have been used widely in many regression and classification problems [17,18], using computing elements that simulate the information processing of biological neurons. The popular RBF networks have very efficient training algorithms, and only the number of hidden neurons must be defined by the network designer. The radius of the adopted radial-basis functions is denoted as the spread.
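A minimal RBF network in the spirit of this section can be written in a few lines; the sketch below (ours, a simplification reminiscent of MATLAB newrb-style training rather than the authors' exact setup) places one Gaussian unit per training sample with a common spread, solves the output weights linearly, and adds the nearest-neighbour identification rule used later in Section III.

```python
import numpy as np

class RBFRegressor:
    """Minimal RBF network: Gaussian hidden units, linear output layer."""

    def __init__(self, spread=7.0):
        self.spread = spread

    def _hidden(self, X):
        # squared distances to all centers, then Gaussian activations
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.spread ** 2))

    def fit(self, X, y):
        self.centers = X                                # one unit per sample
        H = self._hidden(X)
        self.w, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.w

def identify(pred, catalog):
    """Nearest classification rule over the database of RBC size triples."""
    catalog = np.asarray(catalog)
    return catalog[np.argmin(((catalog - pred) ** 2).sum(axis=1))]
```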
III. EXPERIMENTS – IMAGE DATABASE

The proposed method was evaluated in both regression and identification tasks. In the regression problems, the R1, R2 and R3/R2 parameters of the RBC (shown in Fig. 3) are estimated from the scattering images. In the identification experiments, each scattering image is classified into a set of three geometrical properties of the RBC (R1, R2 and R3/R2) out of 787 valid alternatives. According to [19,20], healthy human RBCs have an axisymmetric geometry, with z indicating the axis of symmetry. Typical values of R1, R2 and R3/R2 vary from 4.5 to 10.5 μm, 1.5 to 3 μm and 0.4 to 0.8, respectively. The image database has been designed to cover the suggested ranges in equal steps of 0.250 μm for R1 and R2 and 0.05 for the ratio R3/R2, producing 1575 images, sized 50 x 50 pixels each. The images were obtained by solving the scattering problems of a 632.8 nm wavelength EM plane wave and the above described human RBCs by means of the BEM code developed in [21], taking into account both the axisymmetric geometry of the scatterer and the non-axisymmetric boundary conditions of the problem.

Fig. 3 A representation of a normal RBC according to [19,20], in a cylindrical coordinate system (axes ρ, z), with diameter R1 and thickness parameters R2 and R3

The original images are distorted using additive white Gaussian noise at 10 dB SNR. This type of noise approximates several phenomena in real image acquisition systems, including thermal noise in CCD cameras and distortion from optical elements. For the training of the RBF-NN 788 images are used, and for the regression and identification experiments the remaining 787 images are selected. Both the training and testing sets are uniformly distributed over the RBC sizes. The identification process is completed by the nearest classification rule on the Euclidean distance between the regression values of the RBF-NN and the RBC sizes used to build the image database.

A. Estimation of RBC Size

In the regression experiment the ART shape features are evaluated, and the mean absolute error between the actual geometrical properties of the RBC and the RBF-based estimations, denoted as the regression error, is calculated. Each scattering image is converted to polar coordinates and the inner product between the image and each mask of the ART basis functions is obtained; thus each image can be described by a vector of 12 x 3 = 36 features. Since the coefficient f_{0,0} is used to normalize the remaining 35 coefficients of the feature vector, each image is eventually described by a 35-dimensional feature vector. The regression error versus the number of hidden neurons, for different spread values of the Gaussian radial function between 3 and 10, is shown in Figs. 4, 5 and 6. A better regression rate is obtained when the number of neurons and the spread of the RBF-NN are increased, in the case of a noise level of 10 dB SNR. The identification task is performed to analyze the discriminative power of the RBF-NN output. As shown in Fig. 7, the mean identification rate is dramatically improved when the spread of the RBF-NN is increased, while a mean error rate of less than 1% of the actual RBC size is obtained.

Fig. 4 Regression error versus the number of neurons in various RBF configurations (spread 3, 5, 7, 9, 10) for diameter R1, at SNR = 10 dB

Fig. 5 Regression error versus the number of neurons in various RBF configurations (spread 3, 5, 7, 9, 10) for maximum thickness R2, at SNR = 10 dB

Fig. 6 Regression error versus the number of neurons in various RBF configurations (spread 3, 5, 7, 9, 10) for the ratio R3/R2, at SNR = 10 dB

Fig. 7 Mean identification rate (%) versus SNR (10-60 dB) for spreads 3, 5, 7, 9 and 10, with 100 neurons
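The distortion described above (additive white Gaussian noise at a prescribed SNR) can be reproduced by scaling the noise to the mean image power; a small sketch (ours), using the standard SNR definition:

```python
import numpy as np

def add_awgn(img, snr_db, rng=np.random.default_rng()):
    """Add white Gaussian noise at a given SNR (in dB) to an image."""
    signal_power = np.mean(img.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)
```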
IV. CONCLUSIONS

In this paper, two novel fully automated methods for the estimation and classification of the human RBC from scattering images, using image processing and supervised neural network techniques, are implemented and evaluated. Excellent detection and identification rates for three important geometrical properties of the human RBC are obtained by the proposed methods, even in cases where the simulated images are distorted by white Gaussian noise in the very noisy environment of 10 dB SNR. The excellent performance of the proposed methods in all experiments is an important guideline for medical implementation in real-life applications.

REFERENCES
1. C. F. Bohren and D. R. Huffman (1983) Absorption and scattering of light by small particles, Wiley, New York 2. J. Plasek and T. Marik (1982) Determination of undeformable erythrocytes in blood samples using light scattering, Appl. Opt. 21, 4335-4338 3. J. M. Steinke and A. P. Shepherd (1988) Comparison of Mie theory and the light scattering of red blood cells, Appl. Opt. 27, 4027-4033 4. M. Hammer, D. Schweitzer, B. Michel, E. Thamm, and A. Kolb (1998) Single scattering by red blood cells, Appl. Opt. 37, 7410-7418 5. V. S. Lee and L. Tarasenko (1991) Absorptions and multiple scattering by suspensions of aligned red blood cells, J. Opt. Soc. Am. A 8 1135-1141 6. J. Kim and J. C. Lin (1998) Successive order scattering transport approximation for laser light propagation in whole blood medium, IEEE Trans. Biomed. Eng. 45 pp 505-510 7. A.H. Gandjbakhche, P. Mills, and P. Snarbe (1994) Light-scattering technique for the study of orientation and deformation of red blood cells in a concentrated suspension, Appl. Opt. 33, 1070-1078 8. G. N. Constantinides, D. Gintides, S.E. Kattis, K. Kiriaki, C.A. Paraskeya, A.C. Payatakes, D. Polyzos, S.V Tsinopoulos and S. N. Yannopoulos (1998) Computation of light scattering by axisymmetric nonspherical particles and comparison with experimental results, Appl. Opt. 37, 7310-7319 9. A. Katz, A. Alimova, M. Xu, E. Rudolph, M. Shah, H. E.Savage, R. Rosen, S. A. McCormick, and R. R. Alfano (2003) Bacteria size determination by elastic light scattering, IEEE J. Sel. Top. Quantum Electron. 9, 277 10. S. Jeannin (2001) Mpeg-7 Visual part of eXperimentation Model Version 9.0, in ISO/IEC JTC1/SC29/WG11/N3914, 55th Mpeg Meeting, Pisa, Italia 11. W.-Y. Kim and Y.-S. Kim (1999) A new region-based shape descriptor, in TR 15-01, Pisa 12. M. Bober (2001) Mpeg-7 visual shape descriptors, IEEE Trans. Circuits Syst. Video Technol., vol. 1(6) 13. V. Twersky (1991) Absorption and multiple scattering by biological suspensions, J. Opt. Soc. Am. A 8, 1135-1141 14. L. T. Perelman, V. Backman, M. Wallace, G. Zonios, R. Manoharan, A. Nusrat, S. Shields, M. Seiler, C. Lima, T. Hamano, I. Itzkan, J. Van Dam, J. M. Crawford, and M. S. Feld (1998) Observation of periodic fine structure in reflectance from biological tissue: A new technique for measuring nuclear size distribution, Phys. Rev. Lett. 80, 627 15. V. Backman, G. Gurjar, K. Badizadegan, I. Itzkan, R. R. Dasari, L. T. Perelman, and M. S. Feld (1999) Polarized light scattering spectroscopy for quantitive measurement of epithelial cellular structure in situ, IEEE J. Sel. Top. Quantum Electron. 5, 1019 16. W.C.O. Tsang (1975) The size and shape of human red blood cells, M.S. thesis, University of California at San Diego, San Diego, Calif. 17. L. Song and R. M. Donovan (1988) Segmentation of cell image using an expert system, 10th Annual Inter. Conf. , IEEE EMBS 1383-1384 18. F. Arman and J. A. Pearce (1990) Unsupervised classification of cell images using pyramid node linking, IEEE Trans. BME (37)6: 647-652 19. P.W. Kuchel and E.D. Fackerell (1992) Parametric equation representation of biconcave erythrocytes, Bull. Math. Biol. 61:209-220 20. S. Munoz San Martin, J.L Sebastian, M. Sanchol and G. Alavarez (2005) Modeling Human Erythrocytes shape and size abnormalities, q-bio. QM/0507024 21. S. V. Tsinopoulos and D. Polyzos (1999) Scattering of He-Ne laser light by an average-sized red blood cell, Appl. Opt. 25, 5499
Author: George Apostolopoulos
Institute: University of Patras
Street: Kato Kastritsi, 26500
City: Patras
Country: Greece
Email: [email protected]
Collagen Gel as Cell Extracellular Environment to Study Gene Electrotransfer
S. Haberl, D. Miklavčič, and M. Pavlin
University of Ljubljana, Faculty of Electrical Engineering, Tržaška 25, SI-1000 Ljubljana, Slovenia

Abstract— Gene electrotransfer is a promising non-viral method that enables the transfer of plasmid DNA into cells with electric pulses. Several in vitro and in vivo studies have analyzed different pulse durations; however, the question of the mechanisms involved in gene electrotransfer remains open. One of the main obstacles toward efficient gene electrotransfer in vivo is the relatively poor mobility of DNA in tissues. Since cells in tissues are connected to their extracellular environment, they behave differently than in standard in vitro conditions. We first grew cells on a collagen layer and furthermore developed a three-dimensional (3-D) in vitro model of CHO cells embedded in collagen gel as an in vitro model of tissue. We analyzed gene electrotransfer efficiency and viability using different pulse durations. We found that the gene electrotransfer efficiency of cells in the 3-D collagen model has a similar dependence on pulse duration as in in vivo studies. We suggest that our 3-D collagen model resembles the in vivo situation more closely than conventional 2-D cell cultures and thus provides an intermediate between in vitro and in vivo conditions to study mechanisms of gene electrotransfer.

Keywords— 3-D in vitro model, collagen gel, gene electrotransfer, GFP, CHO cells.
I. INTRODUCTION

Many different biochemical methods have been developed to transfer genes into cells, but many of them have either low efficiency or potential side effects [1, 2]. In the 1970s a temporary increase in membrane permeability was achieved by means of electric pulses, allowing small molecules to enter the cell. The method was named electroporation [3]. A decade later, Neumann et al. were the first to achieve successful transfection of a gene into eukaryotic cells by applying electric pulses [4]. Due to the ease of its application and its efficiency, gene electrotransfer has since become a routine method for introducing foreign genes into cells in vitro [5] and into different tissues in vivo [6, 7]. Nevertheless, the mechanisms involved in gene electrotransfer in vitro or in vivo remain largely unknown. One of the main obstacles to efficient gene electrotransfer in vivo is the extracellular matrix, which hinders DNA mobility in tissues [7, 8]. In vivo studies require a large number of sacrificed animals, and there are many other factors that influence gene electrotransfer efficiency [9, 10]. For this reason, the development of a reproducible model, where cells would be exposed to conditions that mimic the tissue environment, would be beneficial for studying the mechanisms of gene electrotransfer. Namely, the behavior of cells in classical in vitro culture differs considerably from that of cells in tissue. At present, diverse 3-D in vitro models of cell cultures are used in biomedical research [11, 12]. Also, 3-D spheroid models have been used as models of tumors for analyzing the transport of small molecules [13, 14] and for gene electrotransfer [14]. However, up to now there has been no analysis of gene electrotransfer in an environment made of collagen gel. Therefore, the aim of our study was to develop a cell environment based on collagen gel that would enable studies of gene electrotransfer in conditions closer to the in vivo environment. In the present study we used CHO cells grown on a collagen layer or embedded in 3-D collagen gels to study the effect of different pulse durations on gene electrotransfer efficiency. The results obtained with cells embedded in collagen gel (3-D model) are comparable to results obtained in vivo; therefore, such models could be used for the optimization of gene electrotransfer protocols.

II. MATERIALS AND METHODS

A. Preparation of Cells Grown on a Collagen Gel Layer

Type I collagen from rat tail was obtained from Sigma-Aldrich Chemie GmbH (Deisenhofen, Germany) as a powder, mixed with acetic acid to achieve a collagen solution concentration of 4.0 mg/ml, and stored at 4°C. After 24 h, 10x PBS, pH = 7.4, was added to the collagen solution in a ratio of 1:8. The pH of the mixture was adjusted to 7.2-7.6 with 0.1 M NaOH. To prevent gelation, the temperature of the mixture was maintained at 2–8°C. 200 μl of collagen was pipetted into each well of a multiwell dish and stored for 1 h at 37°C in a humidified 5% CO2 atmosphere in the incubator. The collagen polymerized and formed a gel layer. Chinese hamster ovary cells (CHO-K1) were grown as a monolayer culture on the collagen layer at a cell density of ρ = 5 x 10^4 cells/ml. Ham culture medium was added and the cells were stored for 24 h at 37°C in a humidified 5% CO2 atmosphere.

B. Preparation of Collagen Gel with Embedded Cells

The collagen solution was prepared as described above. CHO-K1 cells were prepared as a cell suspension and the cell pellet
was resuspended in liquid collagen solution to a cell density of ρ = 5.6 x 10^5 cells/ml. 180 μl of collagen with cells was pipetted into each well of a multiwell dish and stored for 1 h at 37°C in a humidified 5% CO2 atmosphere in the incubator. After raising the temperature to 37°C, the collagen polymerized and formed a gel with the cells embedded. Ham culture medium was gently added and the cells were stored for 24 h at 37°C in a humidified 5% CO2 atmosphere.
C. Gene Electrotransfer

Electroporation was performed on a 24 h old cell culture with standard electroporation medium (pH 7.4, 10 mM NaH2PO4/Na2HPO4, 1 mM MgCl2 and 250 mM sucrose). The culture medium was removed and the cells were incubated with plasmid DNA (pEGFP-N1), which codes for GFP, in electroporation medium for 30 min at room temperature (22°C). A train of eight square wave pulses of different pulse durations (200 μs, 1 ms, 5 ms and 10 ms), repetition frequency 1 Hz and electric field strength 0.8 kV/cm was used to deliver the DNA into the cells. A Jouan GHT 1287 electroporator was used for pulsing. The distance between a pair of plate stainless steel parallel electrodes was d = 4 mm and the plasmid DNA concentration was 90 µg/ml. After exposing the cells to the electric pulses, 70 µl of fetal calf serum (35% of the sample volume) was added to preserve cell viability. The cells were then incubated for 15 min at 37°C to allow cell membrane resealing and then grown for 24 h in cell culture medium at 37°C in a humidified 5% CO2 atmosphere in the incubator. Gene electrotransfer efficiency was determined by fluorescence microscopy (Zeiss 200, Axiovert, Germany) as the ratio between the number of cells expressing GFP and the number of viable cells. Cell viability was determined by measuring propidium iodide (PI) uptake; PI enters the cell if the membrane is damaged. The culture medium was removed and 200 µl of PBS with 6 µl of 0.15 mM PI was added to the cells. After 3 min of incubation, cell viability was determined by fluorescence microscopy as the ratio between the number of dead cells (cells with incorporated PI) and the total number of cells. The images were recorded using an imaging system (MetaMorph imaging system, Visitron, Germany).

III. RESULTS

The main objective of our study was to show how collagen gel, which mimics the cell extracellular environment in tissues, affects gene electrotransfer efficiency. Namely, cells surrounded by a collagen environment provide a physiologically more relevant approach for the analysis of gene electrotransfer than conventional in vitro 2-D cell culture (cells plated on plastic material). For this purpose we grew cells on a collagen layer (see Fig. 1) and embedded them in collagen gel (3-D model), and analyzed the effect of different pulse durations on gene electrotransfer efficiency (Fig. 2). Cell viability was also tested (Fig. 3).
Fig. 1 Gene electrotransfer of CHO cells plated on a collagen layer 24 h after pulse application. Eight pulses with electric field strength E = 0.8 kV/cm and of different durations were applied with a repetition frequency of 1 Hz to deliver DNA into the cells. No electric pulses were applied to the control sample. (A) phase contrast image of treated cells; (B) control sample, no pulses; (C) pulse duration 8 x 200 µs; (D) pulse duration 8 x 1 ms; (E) pulse duration 8 x 5 ms; (F) pulse duration 8 x 10 ms

Fig. 2 shows the percentage of transfection, which represents the gene electrotransfer efficiency for different pulse durations, for cells grown on a collagen layer and for cells embedded in collagen gel (3-D model). The applied electric field of 0.8 kV/cm and the repetition frequency of 1 Hz were the same in all experiments. We obtained a gradual increase in gene electrotransfer efficiency with increasing pulse duration. The highest efficiency was obtained when we applied 8 x 5 ms long pulses. Under this condition 9.5% of viable cells grown on the collagen layer and 2.6% of viable cells embedded in collagen gel were transfected. At 8 x 10 ms long pulses transfection was close to zero, due to the fact that almost no viable cells were observed (Fig. 3).
Fig. 2 Effect of different pulse durations on gene electrotransfer of: (●) cells grown on a collagen layer; (○) cells embedded in collagen gel (3-D model). Eight pulses of different durations, pulse repetition frequency of 1 Hz and E = 0.8 kV/cm were applied. The percentage of transfected cells is plotted as a function of pulse duration. The plasmid concentration in the electroporation medium was 90 μg/ml

With increasing pulse duration, a decrease in cell viability was also observed, as was shown in vitro before [5]. Fig. 3 shows the percentage of viable cells grown on the collagen layer and embedded in collagen gel (3-D model) for different pulse durations. The highest viability was observed when we applied the shorter pulses (8 x 200 µs). Under this condition 96% of cells grown on the collagen layer and 82% of cells embedded in collagen gel survived. At longer pulses the viability of cells was significantly lower: at 8 x 10 ms, 15% of cells grown on the collagen layer and 10% of cells embedded in collagen gel survived.

Fig. 3 Effect of different pulse durations on the viability of: (●) cells grown on a collagen layer; (○) cells embedded in collagen gel (3-D model). Eight pulses of different durations, pulse repetition frequency of 1 Hz and E = 0.8 kV/cm were applied. The percentage of viable cells is plotted as a function of pulse duration. The plasmid concentration in the electroporation medium was 90 μg/ml

IV. DISCUSSION

Gene electrotransfer is an established method to deliver genes both in vitro and in vivo. The main problem in gene electrotransfer of mammalian cells in vivo is currently its relatively low efficiency [7]. It was shown that the extracellular matrix represents a major obstacle and decreases the mobility of DNA through tissue [8], which hinders the transport of DNA into the proximity of cells, consequently leading to relatively low transfection. In order to improve the efficiency and to understand the mechanisms of gene electrotransfer in vivo, it is crucial to study gene electrotransfer of cells in an environment resembling the extracellular environment in tissue. Therefore, the aim of our study was to grow cells on and inside collagen gel, an environment that mimics in vivo conditions.

In the first part of the study, where cells were grown on a collagen layer, we found that pulse duration is crucial for successful gene electrotransfer, which is in agreement with in vivo studies [5]. Electric pulses of longer duration are supposed to contribute to the electrophoretic mobility of DNA towards and into the cells [5, 15-17]. The results of our experiments show that gene electrotransfer efficiency increases with increasing pulse duration up to 8 x 5 ms, where almost 10% of cells grown on the collagen layer were successfully transfected, which can be compared to the in vivo study of Rols et al. [7]. In order to mimic the in vivo tissue environment more closely, we also developed a 3-D model of cells embedded inside collagen gel and tried to achieve successful gene electrotransfer. The efficiency of gene electrotransfer in the 3-D model was considerably lower (see Fig. 2) than in cells grown on the collagen layer, which suggests that the mobility of DNA in collagen gel is severely impaired, as already shown in tissues [17]. However, at longer pulses cell viability dropped, which is in agreement with other studies, where it was suggested that pulse duration should be optimized to obtain sufficient gene electrotransfer efficiency while avoiding irreversible cell damage [5, 6].

V. CONCLUSION

Classical 2-D cell cultures do not reproduce the morphology and biochemical features that cells possess in tissue. As an alternative, cells grown in a more in vivo-like environment, such as collagen gel, offer the possibility to study different parameters of gene electrotransfer under conditions that resemble tissue more closely. Namely, the collagen matrix acts as a physical barrier that limits the diffusion of plasmid DNA to the cells, similarly to the extracellular matrix in vivo [8].
We successfully achieved gene electrotransfer of cells embedded in collagen gel. Such a model could be used to study mechanisms of gene electrotransfer as well as to design better protocols for in vivo gene electrotransfer.
ACKNOWLEDGMENT

This work was supported under various grants by the Slovenian Research Agency.
REFERENCES 1. Cotten M, Wagner E (1993). Non-viral approaches to gene therapy. Curr Opin Biotechnol 4:705-710 2. Marshall E (1999). Gene therapy death prompts review of adenovirus vector. Science 286:2244–2245 3. Neumann E, Rosenheck K (1972). Permeability changes induced by electric impulses in vesicular membranes. J Membr Biol 10:279-290 4. Neumann E, Schaefer-Ridder M, Wang Y, et al. (1982). Gene transfer into mouse lyoma cells by electroporation in high electric fields. EMBO J 7:841-845 5. Rols MP, Teissie J (1998). Electropermeabilization of mammalian cells to macromolecules: control by pulse duration. Biophys J 75:1415-1423 6. Mir LM, Bureau MF, Gehl J, et al. (1999). High-efficiency gene transfer into skeletal muscle mediated by electric pulses. Proc Natl Acad Sci USA 96:4262-4267 7. Rols MP, Delteil C, Golzio M, et al. (1998). In vivo electrically mediated protein and gene transfer in murine melanoma. Nat Biotechnol 16:168-171 8. Zaharoff DA, Barr RC, Li CY, et al. (2002). Electromobility of plasmid DNA in tumor tissues during electric field-mediated gene delivery. Gene Ther 9:1286-1290
9. Miklavcic D, Beravs K, Semrov D, et al. (1998). The importance of electric field distribution for effective in vivo electroporation of tissues. Biophys J 74:2152-2158 10. Somiari S, Glasspool-Malone J, Drabick JJ, et al. (2000). Theory and in vivo application of electroporative gene delivery. Mol Ther 2:178-187 11. Harkin DG, Hay ED (1996). Effects of electroporation on the tubulin cytoskeleton and directed migration of corneal fibroblasts cultured within collagen matrices. Cell Motil Cytoskeleton 35:345-357 12. Chevallay B, Herbage D (2000). Collagen-based biomaterials as 3D scaffold for cell cultures: applications for tissue engineering and gene therapy. Med Biol Eng Comput 38:211-218 13. Canatella PJ, Black MM, Bonnichsen DM, et al. (2004). Tissue electroporation: quantification and analysis of heterogeneous transport in multicellular environments. Biophys J 86:3260-3268 14. Wasungu L, Escoffre JM, Valette A, et al. (2009). A 3D in vitro spheroid model as a way to study the mechanisms of electroporation. Int J Pharm 379:278-284 15. Kanduser M, Miklavcic D, Pavlin M (2009). Mechanisms involved in gene electrotransfer using high- and low-voltage pulses - An in vitro study. Bioelectrochemistry 74:265-271 16. Bureau MF, Gehl J, Deleuze V, et al. (2000). Importance of association between permeabilization and electrophoretic forces for intramuscular DNA electrotransfer. Biochim Biophys Acta 1474:353-359 17. Satkauskas S, Andre F, Bureau MF, et al. (2005). Electrophoretic component of electric pulses determines the efficacy of in vivo DNA electrotransfer. Hum Gene Ther 16:1194-1201
Author: Mojca Pavlin
Institute: University of Ljubljana, Faculty of Electrical Engineering
Street: Tržaška 25
City: Ljubljana
Country: Slovenia
Email: [email protected]
The Influence of Seat Pan and Trunk Inclination on Muscles Activity during Sitting on Forward Inclined Seats

A. Mastalerz1,2 and I. Palczewska1

1 Institute of Industrial Design, Warsaw, PL
2 University of Physical Education, Warsaw, Faculty of Physical Education in Biala Podlaska, PL
Abstract— A sit-to-stand position requires lower muscle activity during motion and sitting. The prolonged low-level static load on the back during sitting can be hypothesized to affect back muscles adversely. The aim of the study was to estimate the influence of seat pan inclination on muscle activity during sitting on forward inclined seats. Two types of sitting positions were applied, with thigh angles of 120° and 135°. These conditions were modified by trunk inclinations of -30°, -15°, 0°, 15° and 30° and by backrest heights of Th5, Th8 and L1. During unsupported sitting a statistically significant influence of trunk inclination was noticed for ES, GL and TB (p<0.05). In particular, the inclination of the seat pan influenced TB activity (10%), although the EMG measured during sitting did not exceed 20% of MVC. No effect of the applied seatpan and backrest adjustments on trunk muscle EMG was found for the different backrest heights. Rectus femoris and gastrocnemius lat. revealed higher EMG activity with the seatpan adjusted to 135°. The backrest position influenced the EMG of tibialis anterior only. Keywords— sit-to-stand, inclined seats, muscle activity, seatpan and backrest adjustments.
I. INTRODUCTION Standing up and sitting down should be treated as postural transitions which are performed many times during the course of a day. Rising from a sitting position is regarded as one of the most difficult and mechanically demanding functional operations. The prolonged low-level static load on the back during sitting can be hypothesized to affect back muscles adversely. Westgaard and DeLuca [1] indicated that prolonged low-level activity of one muscle leads to higher activity of other muscle groups. Nearly forty years ago many scientists argued that sitting with right angles is biomechanically incorrect [2,3,4,5]. Bendix [6] advocated a forward sloping seat with a tilted desk as a means of improving seated posture. Mandal [7,8] was a principal exponent of steeply forward sloping seats, especially for schools, and chair manufacturers were quick to take up the idea in office furniture. In practice, most traditional office chairs have a 2-4° backwards slope at the area where the ischial tuberosities rest when the back of the sitter is in contact with the backrest. However, basic information about the correct selection of seat slope and its influence on
muscle load during sitting without back support is still needed. Therefore the aim of the study was to estimate the influence of seat pan inclination on muscle activity during sitting on forward inclined seats.
II. METHODS The group of subjects included thirteen healthy women without acute or chronic problems of the musculoskeletal system (body mass: 68±12 kg, body height: 170±5 cm, age: 22±1 years). Measurements were done on a specially prepared stand consisting of a self-adjustable chair and a working table. Based on anthropometric data, two types of sitting positions were applied with two different seatpan heights and angles, leading to angles between the thighs and the vertical line of the trunk of 120 and 135 degrees (called in this paper: position 120 deg and position 135 deg of seatpan slope). During the first stage, five unsupported trunk inclinations relative to the perpendicular were applied during sitting at both seatpan positions: -30, -15, 0, 15 and 30 degrees. Subjects were asked to sit for ten seconds with the trunk inclined at both seatpan positions. During the second stage subjects were asked to sit for ten seconds with the trunk inclined backward to -30 and -15 deg and vertically at 0 deg, with the back support adjusted at the thoracic (Th) level at Th5 and Th8 and at the lumbar (L) level of the spine at L1. The activity of five muscles was recorded by an Octopus EMG system (Bortec-Biomedical Ltd): trapezius p. transversus (TR), erector spinae (ES), rectus femoris (RF), gastrocnemius lat. (GL) and tibialis anterior (TA). Surface EMG electrodes (silver-silver chloride) were used to monitor muscle activity (EMG). The electrodes were placed bilaterally, in pairs 2 cm apart, and a reference electrode was placed over the acromion process of the scapula. The EMG signals were amplified and then A/D converted with a 12-bit, 8-channel A/D converter at 2048 Hz. The EMG signal was full-wave rectified and low-pass filtered with a second-order Butterworth filter. A parametric two-way ANOVA (p<0.05) was employed to compare muscle activity; the independent factors were thigh angle (120 and 135 deg) and trunk inclination
(-30, -15, 0, 15 and 30 degrees). A Duncan post-hoc test was used to compare the means of the positions.
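The signal conditioning chain just described (full-wave rectification followed by second-order Butterworth low-pass filtering of the 2048 Hz recordings) can be sketched as follows; the 10 Hz cut-off frequency and the MVC normalization helper are illustrative assumptions, since the paper does not state the cut-off used:

```python
# Minimal sketch of the EMG conditioning described above, assuming a 10 Hz
# low-pass cut-off (not reported in the paper).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2048.0  # A/D sampling rate [Hz], as reported

def emg_envelope(raw_emg: np.ndarray, cutoff_hz: float = 10.0) -> np.ndarray:
    """Full-wave rectify and low-pass filter a raw EMG recording."""
    rectified = np.abs(raw_emg)               # full-wave rectification
    b, a = butter(2, cutoff_hz / (FS / 2.0))  # 2nd-order Butterworth low-pass
    return filtfilt(b, a, rectified)          # zero-phase filtering

def percent_mvc(envelope: np.ndarray, mvc_level: float) -> np.ndarray:
    """Express the envelope as a percentage of the MVC reference activity."""
    return 100.0 * envelope / mvc_level
```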
III. RESULTS AND ANALYSIS Normalized mean values of muscle activity (AEMG) are presented in Figures 1-4. Each EMG value represents the activity during sitting with trunk inclinations at the two seatpan angles, relative to the MVC activity. A. The Influence of Trunk Inclination on Muscle Activity during Sitting on Forward Inclined Seats without Backrest Seatpan angle did not significantly influence the EMG of the trunk extensors. The EMG of trapezius was 5% lower in the 120 deg position when the trunk was moved forward and 3.8% higher when the trunk was moved backward. The EMG of erector spinae did not change in the backward and forward positions of the trunk; a 3.5% difference in EMG was observed in the middle position of the trunk.
Fig. 2 The influence of seatpan angle (120 and 135 deg - angle of thigh vs. perpendicular) and trunk inclination (0, 15 and 30 deg backward) on the relative AEMG values of lower extremity muscles: rectus femoris (RF), gastrocnemius lateralis (GL) and tibialis anterior (TA)
Fig. 1 The influence of seatpan angle (120 and 135 deg - angle of thigh vs. perpendicular) and trunk inclination (0, 15 and 30 deg backward) on the relative AEMG values of back muscles: trapezius p. transversus (TR) and erector spinae (ES) Additionally, the EMG of TR (18%) and ES (17%) was lower in the backward position. However, the statistical significance of those differences was confirmed only for ES (F=5.13; p=0.001). A statistically significant effect of seatpan position on the EMG of the lower extremity muscles was observed.
The EMG of rectus femoris (RF) during sitting did not exceed 7% of its MVC activity (Fig 2). Although only a 4% difference in RF EMG among all trunk angles was observed, the influence of seatpan angle was statistically significant at p=0.027 (F=5.20). The normalized activity of gastrocnemius lateralis was half that of rectus femoris. Both factors, seatpan angle (F=6.911; p=0.011) and trunk angle (F=3.61; p=0.011), influenced the EMG of this muscle. However, a significant (p<0.05) influence of seatpan adjustment on EMG was confirmed only when the trunk was inclined to 30 and 15 degrees. Moreover, significantly lower EMG was recorded for all trunk inclinations with reference to 30 deg, but only with the seatpan adjusted to 135 deg. The activity of tibialis anterior changed in the opposite direction to the other muscles described in this paper; that effect was more than twice as high for EMG recorded with the seatpan adjusted to 135 deg. A significant influence on EMG was noticed for both factors: seatpan (F=5.348; p=0.024) and trunk (F=3.319; p=0.017). However, only for the seatpan angle adjusted to 135 deg was the EMG recorded at the
trunk position of -30 deg significantly higher than the EMG recorded at 0, 15 and 30 deg (p<0.05).
B. Muscle Activity during Sitting on Forward Inclined Seats with Various Heights of Trunk Support The analysis of the influence of seatpan height and angle (positions 120 and 135 deg), trunk inclination (0, 15 and 30 deg backward) and back support level (Th5, Th8 and L1) on the normalized EMG values of the muscles is presented in figures 3 and 4.
Fig. 3 The influence of seatpan height (120 and 135 deg angle of thigh vs. vertical line), trunk inclination (0, 15 and 30 deg backward) and backrest support level (Th5, Th8 and L1) on the relative AEMG values of back muscles: trapezius p. transversus (TR) and erector spinae (ES)
Fig. 4 The influence of seatpan height (120 and 135 deg angle of thigh vs. vertical line), trunk inclination (0, 15 and 30 deg backward) and backrest support level (Th5, Th8 and L1) on the relative AEMG values of lower extremity muscles: rectus femoris (RF), gastrocnemius lateralis (GL) and tibialis anterior (TA)
The analysis showed that the only statistically significant differences occurred in the ES muscle at position 120 deg with the backrest support placed at the Th8 level and trunk inclinations of 15 deg (p<0.05) and 30 deg (p<0.05). The activity of all examined lower extremity muscles was significantly influenced by the height of the seatpan: EMG values were higher at position 135 deg than at position 120 deg. The difference in RF activity was significant at p<0.001 (F=50.23). In position 135 deg the activity of RF was significantly higher than in 120 deg with the back support adjusted at the Th5 level and 30 deg backward trunk inclination (p<0.003), with the back support adjusted at Th8 and 15 deg (p<0.002) and 30 deg (p<0.008), and with the back support adjusted at L1 and 0 (p<0.05), 15 (p<0.01) and 30 (p<0.03) deg of backward trunk inclination. There was also a significant difference between the RF activity registered with the trunk inclined backward 30 deg and positioned vertically in the position with back support adjusted at Th5 and position 135 deg (p<0.05).
Gastrocnemius lat. (GL) also revealed significantly higher overall activity in the position with the higher placed seatpan, leading to the wider angle between the thighs and the vertical (F=13.44, p<0.001); however, there were no significant influences of the various trunk inclinations and backrest height adjustments. Tibialis anterior showed significantly higher activity in position 135 deg than in 120 deg with respect to backward trunk inclination (F=10, p<0.001), back support height (F=32.9, p<0.001) and both factors simultaneously (F=6.3, p<0.002). Significantly higher AEMG values of TA were observed in position 135 deg than in 120 deg with trunk inclination 30 deg backward and back support adjusted at Th5 (p<0.004), Th8 (p<0.006) and L1 (p<0.001). In position 135 deg, significant differences were also observed between TA activity at trunk inclinations of 30 deg and 15 deg
and back support at Th5 (p<0.02), Th8 (p<0.03) and L1 (p<0.003).
IV. DISCUSSION The advantages of forward inclined seats have so far been limited to only some professions. First of all, people with considerable mobility should use this type of chair. However, the results of this work show the limitations of seat inclination. As a result of adjusting the seat to 135 deg, the EMG of the lower extremity muscles was more than twice as high as at 120 deg. Moreover, the EMG of the trunk muscles did not depend on seat inclination. Anderson and Helander [9] also investigated the effects of a forwardly-inclined seatpan (0°, 15° and 30°), with and without a backrest, on the pressure in the lumbar spine using EMG. Their studies indicate that increasing the forward slope up to an absolute maximum of 15° decreases spinal load, whereas greater seat slopes result in increased loading, but the results depend on the presence of a backrest. Mid and upper back comfort were improved by the forward seat inclination but leg comfort decreased. Prolonged sitting with a forward leaning posture may lead to leg pain, especially in its external part. We showed that trunk deviations of 15 deg did not influence muscle activity. Studies showed that if the user reclines as little as 20 degrees, the backrest can carry up to 47 percent of the upper body's weight [10,11]. Our results indicate that 30 deg of forward bending of the trunk significantly increased the EMG (20%) of the trunk extensors and the back side of the leg. Reverse inclination increased only the activity of tibialis anterior; this effect was more than twice as high for the 135 deg seatpan slope. In our opinion this position of the trunk influenced the position of the center of gravity, and this displacement may result in an unsupported foot position. Bidard et al. (2000) also found that the stabilization effort was greater for unsupported sitting than for standing because of the suboptimal alignment of the centers of mass. Our results suggest that back muscle activity is affected only to a small degree by the back support height (maximal differences in AEMG did not exceed 7%); however, the most advantageous backrest height, from this point of view, is the L1 height. Backrest height did not have any influence on lower extremity muscle activity. The activity of the lower extremity muscles resulted mostly from the trunk inclination angle. In our experiment the lowest activity of the calf and thigh was observed with a vertically positioned trunk. The
exception was the GL muscle, whose activity in the 135 deg position diminished along with backward trunk inclination. This muscle is a plantar flexor of the foot and its function depends on the contact of the foot with the ground. In position 135 deg, with the highly elevated seatpan and backward trunk inclination, loss of contact of the feet with the ground is possible. From the ergonomic and user-safety point of view, chairs designed for work in a sit-stand position should not allow backrest inclination exceeding 15 degrees.
ACKNOWLEDGMENT This work was supported by the Polish Ministry of Science and Higher Education, grant no. N R16 0001 04.
REFERENCES 1. Westgaard R.H., DeLuca C.J. (1999) Motor unit substitution in long-duration contractions of the human trapezius muscle, Journal of Neurophysiology 82: 501-504. 2. Branton P. (1969) Behaviour, body mechanics and discomfort. Ergonomics 12(2): 316-327. 3. Grandjean E., Hünting W. (1977) Ergonomics of posture — Review of various problems of standing and sitting posture. Applied Ergonomics 8(3): 135-140. 4. Bendix T. (1986) Sitting postures - a review of biomechanical and ergonomic aspects. Manual Medicine 2: 77-81. 5. Mandal A.C. (1981) The seated man: Homo sedens. The seated work position theory and practice. Applied Ergonomics 12(1): 19-26. 6. Bendix T. (1984) Seated trunk posture at various seat inclinations, seat heights, and table heights. Hum Factors 26: 695-703. 7. Mandal A.C. (1976) Work chair with tilting seat. Ergonomics 19(2): 157-164. 8. Mandal A.C. (1991) Investigation of the lumbar flexion of the seated man. International Journal of Industrial Ergonomics 8: 75-87. 9. Anderson L.M., Helander M.G. (1990) The effects of spinal shrinkage and comfort of a forwardly-inclined seat with and without a backrest. Proceedings of the International Ergonomics Association Conference on Human Factors in Design for Manufacturability and Process Planning, 1-10. 10. Grandjean E., Hunting W., Pidermann M. (1983) VDT workstation design: Preferred settings and their effects, Human Factors 25: 161-175. 11. Corlett E.N., Eklund J.A.E. (1984) How does a backrest work? Applied Ergonomics 15: 111-114.
Author: Andrzej Mastalerz
Institute: Institute of Industrial Design
Street: Swietojerska 5/7
City: Warsaw
Country: Poland
Email: [email protected]
Radiation Exposure in Routine Practice with PET/CT and Automatic Infusion System – Practical Experience Report

P. Tomše and A. Biček

Department of Nuclear Medicine, University Medical Centre, Ljubljana, Slovenia

Abstract— Especially when handling higher energy radioactive sources, such as F-18, the principle that radiation exposure is to be kept as low as possible must be considered. In the first two months of operation of our PET/CT scanner we performed measurements of the whole-body dose received at the different steps of the PET diagnostic procedure. We found that the dose received by the radiopharmacist was 0.6 ± 0.7 µSv per day and by the physician 1.2 ± 0.4 µSv per patient; both are reduced due to the use of an automatic F18-FDG infusion system. The dose received by the technologist was 1.3 ± 0.9 µSv due to the morning QC, and 2.5 ± 2.1 µSv was the cumulative dose per patient imaging. Estimation on the basis of these measurements shows that yearly doses would remain well below legal limits, but the large standard deviation implies that reduction of doses with routine practice is still possible.

Keywords— radiation exposure, PET/CT, automatic infusion system
I. INTRODUCTION
In December 2009 a second system for Positron Emission Tomography – Computed Tomography (PET/CT) scanning in Slovenia was installed in the Department of Nuclear Medicine at the University Medical Centre in Ljubljana. Introduction of this new technology to the department increased the risk of high staff radiation doses because of the higher radiation energy of PET isotopes (511 keV) compared to conventional nuclear medicine isotopes, the most common of which is technetium-99m (140 keV). The aim of our study was to measure the radiation doses of our staff involved in the various steps of the PET diagnostic procedure and to evaluate their standard deviations.
II. MATERIALS AND METHODS

The PET/CT system at the University Medical Centre in Ljubljana is a Siemens Biograph mCT. Routine work on the PET/CT scanner started at the beginning of 2010 and currently, twice or three times per week, on average 7 patients are imaged. Our study included 115 PET/CT procedures conducted in the first two months of scanner operation.

A. F18-FDG Manipulation and Protection Devices

Classical diagnostic procedures in PET departments begin with the morning quality control (QC) of the PET (or PET/CT) system, followed by preparation of the positron-emitting radiopharmaceutical, injection of the radiopharmaceutical into the patient, imaging, and occasionally transport of a wheel-chair patient to their residing hospital department. All these steps of the diagnostic procedure involve radiation exposure that depends on the distance from the radiation source, shielding and the time of exposure. Lately an increasing number of PET centers are replacing manual dosing of radiopharmaceuticals and their application to the patient with automatic infusion systems, thus reducing the radiation exposure of the staff. In our department a Medrad Intego PET Infusion System was installed along with the PET/CT system and has thus been used from the beginning of scanner operation. Other forms of shielding of the positron-emitting isotope in our department include mainly concrete and lead walls or screens.
Fig. 1 Automatic infusion system used in Department of Nuclear Medicine at University Medical Centre in Ljubljana
B. Dosimetry
Routinely, the staff in our department are monitored for radiation exposure with thermoluminescent dosimeters (TLD), which are changed once per month and will in the longer term reflect the radiation exposure due to PET/CT procedures compared to our conventional practice. Still, because of this time lag the information provided by TLD is not appropriate for the current study. We therefore designed our study to record the whole-body dose at each working step with electronic personal dosimeters (Rados RAD-60S) worn at waist level. Doses were recorded for each worker after each working day at the PET center; intermediate values (after each patient) were read out as well. Our F18-FDG PET procedure was divided into five working steps. The first step was receipt of the container with the daily amount of F18-FDG, placement of this container into the automatic infusion system and preparation of the system for daily use, which included daily QC, tubing setting and priming; this step was done by the radiopharmacist. The daily activity placed into the infusion system was 9.7 ± 1.5 GBq and was delivered to the department within a tungsten-shielded multi-dose vial that was not opened during placement. The second step was injection of 365 ± 31 MBq of F18-FDG into the patient. The activity of the tracer infusion for the individual patient was automatically measured and calibrated inside the shielded infusion system cart and then automatically delivered directly to the patient. The injection was supervised by the physician, who afterwards escorted the patient to the waiting room behind the lead wall if needed and, after the imaging, withdrew the catheter. The third step was the daily QC procedure of the PET/CT scanner, including a test with a Ge-68 PET phantom and a CT test, and was performed by the technologist. The fourth step included collecting the patient about 30 minutes to 1 hour after tracer infusion, escorting the patient to the PET/CT scanner, positioning within the camera and escorting the patient out of the PET room after the image acquisition. This step was performed by a team of two technologists, who spent most of the scanning time in the control room supervising the patient over a video camera. The fifth, last, step was the optional transfer of a wheel-chair patient to their residing hospital department. Besides the whole-body individual doses measured with the electronic dosimeters, we also performed measurements of the dose rate at various distances from the radiation source (patient and F18-FDG vial) with a dose rate meter (EGG-Berthold LB123).
III. RESULTS
Instantaneous dose rates measured at various distances from the radiation source are presented in Table 1 (F18-FDG vial) and Table 2 (patient).

Table 1 Dose rates (DR) measured from the F18-FDG vial with activity 9.7 ± 1.5 GBq

Distance from F18-FDG vial [m] | DR from vial in transport container [μSv/h] | DR from vial placed in infusion system [μSv/h]
0.0 | 157 ± 23 | 1.5 ± 0.3
0.5 | 43 ± 9 | 0.5 ± 0.2
1.0 | 19 ± 7 | 0.3 ± 0.2
2.0 | 10 ± 3 | background level
Dose rates near the F18-FDG vial in the transport container are very high, but the trained radiopharmacist is directly exposed to this dose rate for no more than 10 seconds. Since the dose rate from the F18-FDG vial placed in the automatic infusion system is reasonably small, no special restrictions regarding movement in the proximity of the infusion system are required. Dose rates at close contact and near the patient are also very high, but the time of staff exposure to these dose rates varies.

Table 2 Dose rates (DR) measured from patients injected with 365 ± 31 MBq F18-FDG

Distance from patient [m] | DR immediately after injection [μSv/h]
0.0 | 135 ± 21
0.5 | 43 ± 9
1.0 | 19 ± 7
2.0 | 10 ± 3
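As a rough illustration of how these dose rates translate into staff dose, the dose accumulated near a patient is simply the dose rate multiplied by the exposure time; the 30-second handling time in the sketch below is a hypothetical value, not one reported in the study:

```python
# Illustrative arithmetic only, using the Table 2 dose rates.
dose_rate_uSv_per_h = {0.0: 135.0, 0.5: 43.0, 1.0: 19.0, 2.0: 10.0}

def accumulated_dose(distance_m: float, seconds: float) -> float:
    """Dose [uSv] = dose rate [uSv/h] * exposure time [h]."""
    return dose_rate_uSv_per_h[distance_m] * seconds / 3600.0

# e.g. 30 s spent at 0.5 m from a freshly injected patient:
print(f"{accumulated_dose(0.5, 30):.2f} uSv")  # ~0.36 uSv
```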
The average doses received at the different working steps of the PET/CT diagnostic procedure are given in Table 3. All recorded doses are due to F-18 radiation; there is no recorded contribution from CT, because during CT operation the staff are kept outside the scanning room and the walls of the room are sufficiently shielded. The whole-body dose received by the radiopharmacist during morning infusion system preparation was 0.6 ± 0.7 μSv, whereas the physician received 1.2 ± 0.4 μSv per injection. Both doses are reasonably low as a result of the use of the automatic infusion system [1, 2]. The whole-body dose to the technologist during the QC procedure of the PET/CT system was 1.3 ± 0.9 μSv, while during patient imaging the cumulative dose received by the two technologists was 2.5 ± 2.1 μSv per patient. We note the large standard deviation of this measured dose, which is the consequence of
the different patient mobility and therefore the great variation in the time the technologists spend in proximity of the patient. On the basis of these dose measurements we estimate that the additional doses due to PET procedures on a yearly level would reach up to two times the doses received in our conventional nuclear medicine practice (in 2009: physician 0.36 ± 0.30 mSv; radiopharmacist 0.33 ± 0.34 mSv; technologist 0.97 ± 0.42 mSv) but would still remain well within the annual radiation dose limits. Our estimation is based on a yearly plan of 250 working days and 1500 patients, with 3 physicians, 3 radiopharmacists and 4 technologists working. By continuing good operators' practice and improving operational skills, staff doses may become lower. In the time of this introductory study, transport of patients to their resident hospital department was conducted in only 10 cases and accounted for radiation doses from 1 μSv to 6 μSv. This value is not easy to evaluate due to the small number of cases and the great variation in path distance. Transport of patients is expected to be needed in 10-20% of cases in the future and will be performed by an out-of-department service that employs a few tens of people; therefore very low doses per person are expected. Nevertheless, our result implies that a note should be issued to the service suggesting that potentially pregnant workers should not be assigned to transfers of PET patients. Table 3 Whole-body doses (WBD) received at the different working steps and the activity of the corresponding radiation source. In steps 4 and 5 the decay of the injected activity is taken into account.
Step | Short description | Activity [MBq] | WBD per procedure [μSv]
1 | Infusion system preparation | 9680 ± 1487 | 0.6 ± 0.7
2 | F18-FDG injection | 365 ± 31 | 1.2 ± 0.4
3 | PET/CT QC | 94 | 1.3 ± 0.9
4 | Patient imaging (1 or 2 technologists) | 280 ± 50 | 2.5 ± 2.1
5 | Wheel-chair transport | ~ 200 | 1 – 6
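Two pieces of arithmetic behind the table and the yearly projection can be sketched as follows; the even split of imaging work among the four technologists is our assumption, not a statement from the text:

```python
# (1) F-18 decay, as accounted for in steps 4 and 5 of Table 3.
F18_HALF_LIFE_MIN = 109.8  # physical half-life of F-18 [min]

def decayed_activity(a0_mbq: float, minutes: float) -> float:
    """Activity remaining after `minutes` of decay."""
    return a0_mbq * 0.5 ** (minutes / F18_HALF_LIFE_MIN)

# ~45 min after injecting 365 MBq, the activity is close to the
# 280 MBq listed for step 4:
print(round(decayed_activity(365.0, 45.0)))  # ~275 MBq

# (2) Yearly technologist dose from patient imaging (step 4), assuming
# the 1500 yearly patients are shared evenly by the 4 technologists.
yearly_mSv = 1500 * 2.5 / 4 / 1000.0
print(f"{yearly_mSv:.2f} mSv/year per technologist")  # ~0.94 mSv
```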
IV. CONCLUSIONS
Results of our study provide first data regarding routine occupational exposure to radiation in the PET diagnostic procedure which has recently been introduced to our department. The measured whole-body radiation doses of our staff showed different levels of received dose at the five working steps of the procedure. The automatic infusion system provides safe and accurate dose preparation and infusion of F18-FDG in the PET procedure. The radiopharmacist involved in the morning preparation is exposed to the highest dose rates, but due to the use of the automatic injection system the duration of exposure is no more than a few seconds. In this regard, usage of the automatic infusion system virtually eliminates manual dose preparation, and the occupational risk for the radiopharmacist involved in the PET procedure is lowered. Dose rates at close contact and near the patient are also very high and draw special attention to the received radiation dose of staff handling the patient. The dose depends on the time spent in proximity of the patient. The time that the physician spends near the patient during F-18 tracer application is minimally influenced by the patient's condition and can in the future be optimized with increased physician's skills. On the other hand, the patient's condition greatly influences the time spent on positioning within the PET/CT scanner and hence the technologist's radiation exposure. Our results show a large standard deviation of the measured dose and therefore stress the necessity of tracking the individual effective doses of technologists, which should result in temporary removal of an individual from working in PET if the recommended dose limit is approached.
REFERENCES
1. Guillet B, Quentin P, Waultier S et al. (2005) Technologist Radiation Exposure in Routine Clinical Practice with 18F-FDG PET. J Nucl Med Technol 33:175-179
2. Pant GS, Senthamizhchelvan S (2006) Radiation Exposure to Staff in a PET/CT Facility. IJNM 21:100-103

Author: Petra Tomše
Institute: Department of Nuclear Medicine, University Medical Centre Ljubljana
Street: Zaloška 7
City: Ljubljana
Country: Slovenia
Email: [email protected]
A Pilot Study for Development of Shoulder Proprioception Training System Using Virtual Reality for Patients with Stroke: The Effect of Manipulated Visual Feedback S.W. Cho1, J.H. Ku1, Y.J. Kang2, K.H. Lee2, J.Y. Song3, H.J. Kim2, I.Y. Kim1, and S.I. Kim1 1
2
Department of Biomedical Engineering, Hanyang University, Seoul, Korea Department of Rehabilitation medicine, Eulji University School of Medicine, Eulji General Hospital, Seoul, Korea 3 Nowon Eulji Hospital, Seoul, Korea
Abstract— The shoulder can control the range of motion of upper-limb. However, proprioception declines of shoulder of patients with stroke have affected the control of upper-limb motion. The motor learning regulates body movement posture by integrating the proprioception feedback (muscle force, joint position, etc.) as well as exteroceptive feedback (vision, audition). Proprioception feedback plays important role in the motor learning. Virtual reality (VR) is able to provide an environment which manipulates visual feedback of movement of stroke patients. In this study, we developed a system that can provide the continuous matching angle task with manipulating visual feedback using virtual reality for shoulder proprioception training of patients with stroke. Nineteen patients with stroke (age: 58.16 ± 12.27 years, onset: 40.16 ± 49.76 months, males: 17, females: 2) were recruited for this experiment. Participants performed the angle matching task and continuous matching angle task. In the results, error angle of angle matching task were not different between the first half of angle matching task and the second half of angle matching task (p = 0.202). Accumulated error angle of continuous matching angle task were more reduced in the second half of continuous matching angle task than in the first half of continuous matching angle task (p = 0.002). These results are similar to the result from a study assessing balance training using vision cue deprivation. It can be explained that manipulated visual feedback using virtual reality affects the proprioception of shoulder of patients with stroke. As conclusion, we found that visual feedback manipulation using virtual reality could provide an effective proprioception feedback for proprioception training of shoulder of patients with stroke. Keywords— Virtual reality, Shoulder, proprioception, Visual feedback.
I. INTRODUCTION The 80% of stroke patients need the continuous rehabilitation in life [1]. But, stroke patients have been trained mostly with therapist’s assistance and the posture correction with visual feedback of movement for their rehabilitation. The upper limb had regulated by the integration of multifeedback of three joint (shoulder, elbow, wrist). Especially,
the shoulder can control the range of motion of upper-limb. However, the upper-limb motion of patients with stroke has received an affect of decline of shoulder proprioception [2]. Declines of shoulder proprioception of patients with stroke have affected the control of upper-limb motion [3]. In the rehabilitation training of stroke patients, the motor learning regulates body movement posture by integrating the proprioception feedback (muscle force, joint position, and body position) as well as exteroceptive feedback (vision, audition) [4]. Proprioception feedback refers information about movement and posture of body, which transverse from muscle spindles into central nervous system [5] and it plays important role in the motor learning. Therefore, motion improvements of upper-limb of patients with stroke need the method for enhancing a manipulation of proprioception feedback. In the previous study, proprioception assessment of stroke patients have been used the movement that adjusts to a position of target angle in situation without visual feedback of movement [6]. However, the methods have demerits in that it is only for assessment but it cannot be applicable into the training. As previously described, patients with stroke require a training environments in which they could be trained for augementating sense of proprioception with continuous proprioception enhancing paradigm during rehabilitation training. Virtual reality (VR) has received considerable attention to supplement the demerit of rehabilitation training [7]. Because VR is able to provide environments which manipulate visual feedback of movement of patients with stroke, the training is possible with the remove or preservation of visual feedback to one’s movement. In this study, we developed a system that can provide the task with manipulated visual feedback of shoulder movement for proprioception training of patients with stroke. We hypothesized that patients with stroke would get more proprioception effects of shoulder from the task with manipulated visual feedback of movement.
II. METHOD
A. System Composition The developed proprioception training system intercepted the visual feedback of upper-limb movement using a head mounted display (eMagin) and measured the shoulder movements of the patient with stroke using an encoder (Autonics). B. Virtual Reality Contents a) Angle Matching Task The angle matching task (AMT) was constructed to assess shoulder proprioception using the assessment method of a previous study. The AMT has five target angles, from 10° to 50°. The AMT indicated the target angle using a red bar while the visual feedback of the shoulder movement was manipulated. When the participant's shoulder movement matched the target, the red bar returned to the start position. The AMT presented twenty trials in total, four per target angle. Participants performed passive movement guided by the red bar. To assess the proprioceptive accuracy of the shoulder, the error angle was calculated as the absolute difference between the target angle and the performed angle. b) Continuous Matching Angle Task The developed VR system manipulated the visual feedback of the shoulder to reinforce proprioceptive feedback. The continuous matching angle task (cMAT) has five target angles, from 10° to 50°. The target angle was presented by a cylinder in the 3D environment. The visual feedback of the shoulder was represented by a semitransparent cylinder when participants gave a specific response (clicked a button). If the participant came within an error angle of 3°, the target cylinder turned red, and the participant then returned the shoulder to the start position. The cMAT presented twenty trials in total, four per target angle. To assess the variance of shoulder proprioception accuracy, the accumulated error angle was calculated as the total error angle until the participant reached the target cylinder. C. Participants Nineteen patients with stroke (age: 58.16 ± 12.27 years, onset: 40.16 ± 49.76 months, males: 17, females: 2) were recruited for this experiment. Ten patients had an injured right upper limb and nine an injured left upper limb. The patients performed the AMT and the cMAT one by one.
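A minimal sketch of the two accuracy measures defined above is given below; the function names and the sampled-trajectory representation are illustrative assumptions, as the paper does not publish its implementation:

```python
# Sketch of the AMT error angle and the cMAT accumulated error angle.
import numpy as np

def amt_error_angle(target_deg: float, performed_deg: float) -> float:
    """AMT: absolute difference between target and performed shoulder angle."""
    return abs(target_deg - performed_deg)

def cmat_accumulated_error(target_deg: float,
                           samples_deg: np.ndarray,
                           tolerance_deg: float = 3.0) -> float:
    """cMAT: total error accumulated until the shoulder first comes within
    the +/-3 deg window of the target cylinder."""
    errors = np.abs(samples_deg - target_deg)
    hit = np.flatnonzero(errors <= tolerance_deg)
    end = hit[0] + 1 if hit.size else len(errors)  # stop at first success
    return float(errors[:end].sum())
```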
III. RESULTS

To investigate the shoulder proprioception of patients with stroke, we analyzed the results of the first half and the second half of the two tasks. For the AMT, the error angle did not differ between the first half and the second half of the task (p = 0.202). For the cMAT, the accumulated error angle was lower in the second half than in the first half of the task (p = 0.002).
IV. DISCUSSION AND CONCLUSIONS In this study, we developed a shoulder proprioception training system for patients with stroke which provides manipulated visual feedback of the shoulder using VR. In the results of this investigation, the AMT showed no reduction of the error angle of shoulder proprioception between the first half and the second half of the task (p = 0.202). This can be explained by the AMT being able to assess the decline of shoulder proprioception of patients with stroke, while not itself providing proprioception training for the shoulder. On the other hand, we observed that the accumulated error angle in the second half of the cMAT was lower than in the first half (p = 0.002). This result is similar to the results of a study assessing balance training using visual cue deprivation [8]. Considering these results, visual feedback manipulation using VR could provide effective proprioceptive feedback for shoulder proprioception training of patients with stroke. However, this study has the limitation that it cannot demonstrate the proprioception training effect of continuous training with the developed system. For a robust conclusion, a future study must test shoulder proprioception training of patients with stroke.
ACKNOWLEDGMENT This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2009-0060581).
REFERENCES 1. Asuman D, Guldal F, Meryem D et al. (2004) The Rehabilitation Results of Hemiplegic patients. Turk J Med Sci 34:385-389 2. Janwantanakul P, Margarey M, Jones M et al. (2001) Variation in Shoulder Position Sense at Mid and Extreme Range of Motion. Arch Phys Med Rehabil 82:840-844
3. Niessen M, Veeger D, Koppe P et al. (2008) Proprioception of the shoulder after stroke. Arch Phys Med Rehabil 89:333-338 4. Schmidt R, Wrisberg C (2000) Motor Learning and Performance. Human Kinetics, IL 5. Fremerey R, Lobenhoffer P, Zeichen J et al. (2000) Proprioception after rehabilitation and reconstruction in knees with deficiency of the anterior cruciate ligament. J Bone Joint Surg 82:801-806 6. Lonn J, Crenshaw A, Djupsjobacka M et al. (2000) Position Sense Testing: Influence of Starting Position and Type of Displacement. Arch Phys Med Rehabil 81:592-597 7. Betker A, Szturm T, Moussavi Z et al. (2006) Video game-based exercises for balance rehabilitation: a single-subject design. Arch Phys Med Rehabil 87:1141-1149
8. Bonan I, Yelnik A, Colle M et al. (2004) Reliance on Visual information After Stroke, Part II: Effectiveness of a Balance Rehabilitation Program With Visual Cue Deprivation After Stroke: A Randomized Controlled Trial. Arch Phys Med Rehabil 85:274-278

Author: Jeonghun Ku
Institute: Department of Biomedical Engineering, Hanyang University
Street:
City: Seoul
Country: Korea
Email: [email protected]
Computer Modeling to Study the Dynamic Response of the Temperature Control Loop in RF Cardiac Ablation

J. Alba1, M. Trujillo2, R. Blasco3, E.J. Berjano1

1 Instituto de Investigación e Innovación en Bioingeniería (I3BH), Universidad Politécnica de Valencia, Spain
2 Dpto. Matemática Aplicada, Instituto Universitario de Matemática Pura y Aplicada, Universidad Politécnica de Valencia, Spain
3 Instituto Universitario de Automática e Informática Industrial, Universidad Politécnica de Valencia, Spain
Abstract— Radiofrequency (RF) ablation is currently used to treat some types of cardiac arrhythmias such as atrial fibrillation (AF). Currently, the most used protocol for delivering RF energy is temperature-controlled. Commercially available RF generators employ a PI controller. This controller has fixed parameters, denominated Kp and Ki. There are no experimental or theoretical studies on the behavior of this controller under variations in the electrical and thermal characteristics of the tissue, electrode design, tissue penetration, and circulating blood flow. We built a theoretical 2D model which we solved using the Finite Element Method (FEM). We used COMSOL Multiphysics to implement the thermal-electric coupled problem and MATLAB to implement the temperature control loop. The results suggest that the values of Kp and Ki generally do not affect the dynamic response of the controller. However, for some values of the tissue characteristics, oscillations around the target temperature were observed.
Keywords— Computational modeling, PI controller, Radiofrequency cardiac ablation, Temperature control loop. I. INTRODUCTION
Radiofrequency (RF) ablation is currently used to treat some types of cardiac arrhythmias such as atrial fibrillation (AF) [1]. This ablation technique uses RF currents (500 kHz) to produce tissue necrosis [2]. Currently the most used protocol for delivering RF energy is temperature-controlled [3, 4]. This protocol may influence the dimensions of the thermal injury [5]. The temperature is recorded by a thermistor included in the electrode tip, and this reading is used to modulate the applied voltage, which produces the heating. The modulation is conducted by means of a controller inside the RF generator. Commercially available RF generators employ a PI (proportional-integral) controller. This controller has parameters denominated Kp and Ki, which are always fixed. We hypothesized that under some conditions the values of these parameters might not be suitable and could influence the dimensions of the thermal injury. For example, dispersion in tissue characteristics and some procedural factors (pressure between electrode and tissue) could lead to wrong behavior of the PI controller. Thus, obtaining full information about the dynamic response of the PI controller can importantly help to gain more control over the lesion size. In fact, there are no experimental or theoretical studies on the effect of the above-mentioned factors on the dynamic behavior of the PI controller. Our work is a theoretical study using computational modeling. This modelling technique provides vital information about the electrical and thermal behaviour of ablation in a fast and cheap way. Our aims were: (1) implementation of the electrical-thermal coupled problem of cardiac ablation in COMSOL Multiphysics; (2) implementation of a constant-temperature protocol using a PI controller; (3) study of the dynamic behaviour of this controller under changes in the tissue characteristics (density, specific heat and thermal conductivity) and procedural factors (electrode size, blood flow, depth of penetration of the electrode).
II. METHODS
A. Description of the theoretical model The RF ablation technique is based on the application of RF current (500 kHz) between an active electrode of small dimensions inside the cardiac chamber and a dispersive electrode of large dimensions located on the patient's back. In our study, we considered two active electrodes: one of 7Fr and 4 mm length, and the other of 8Fr and 8 mm length, both made of platinum-iridium and placed perpendicular to the tissue surface. We also modelled a thermistor inserted into the active electrode tip to record the temperature. This temperature is used to modulate the applied voltage. We built a theoretical model which we solved by the Finite Element Method (FEM). The model presented a rotational symmetry axis, so a two-dimensional approach was possible. Figure 1 shows the geometry of the proposed theoretical model, which represents the active electrode with the section of the polyurethane probe, a thermistor inserted into the active electrode tip, a section of coating material to fix the thermistor, and a fragment of cardiac tissue [5]. We considered three values for the insertion depth of the electrode in the cardiac tissue (P): 0.75 mm, 1.25 mm and 2.5 mm [6]. We used COMSOL Multiphysics (Stockholm, Sweden) to create the geometry of the model,
introduce the electrical and thermal conditions, mesh the spatial domain, solve the problem and post-process the results. Then, we exported the FEM structure to MATLAB (The MathWorks, MA, USA), where we implemented the PI control algorithm. The dimensions R, Z and L (Fig. 1) were estimated by means of a sensitivity analysis in order to avoid boundary effects [2].
Figure 1: Description of the theoretical model. The active electrode tip is inserted into the heart tissue to a depth P. The lengths R, Z and L were obtained by means of a sensitivity analysis (see construction of the numerical model). The design includes the active electrode, the thermistor, the coating and the polyurethane probe.
B. Governing Equations The electro-thermal heating in RF ablation is described by the bioheat equation and Laplace's equation. The combination of these equations describes the spatial and temporal distribution of temperature in the tissue. The bioheat equation is as follows:

$$\rho c \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + q - Q_p + Q_m \qquad (1)$$

where $\rho$ is the density of the tissue, c is the specific heat, k is the thermal conductivity, T is the temperature, q represents the internal heat sources (i.e. the RF power density in this case), $Q_p$ is the heat lost by blood perfusion and $Q_m$ is the metabolic heat generation. In cardiac ablation modeling, the terms $Q_p$ and $Q_m$ are insignificant and can be neglected [2]. We use a quasi-static approach to solve the electrical problem. The distributed heat source q (Joule losses) is given by $q = \mathbf{J} \cdot \mathbf{E}$, where $\mathbf{J}$ is the current density and $\mathbf{E}$ is the electric field strength. The values of these two vectors are obtained through Laplace's equation:

$$\nabla \cdot (\sigma \nabla V) = 0 \qquad (2)$$

where V is the voltage and $\sigma$ is the electrical conductivity. The combination of equations (1) and (2) gives the solution of the electro-thermal problem.
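A small sketch of the pointwise quantities involved: the Joule heat source q = σ|∇V|² and the temperature corrections for cardiac tissue quantified in section D below. Treating both corrections as relative to the 37 ºC values is our reading of the printed coefficients:

```python
# Sketch only: heat source and temperature-dependent tissue properties.
import numpy as np

SIGMA_37, K_37 = 0.541, 0.531  # tissue values at 37 degC (Table 1)

def sigma_tissue(temp_c: float) -> float:
    """Electrical conductivity of myocardium [S/m], +1.5%/degC."""
    return SIGMA_37 * (1.0 + 0.015 * (temp_c - 37.0))

def k_tissue(temp_c: float) -> float:
    """Thermal conductivity of myocardium [W/(m K)], +0.001195 K^-1."""
    return K_37 * (1.0 + 0.001195 * (temp_c - 37.0))

def joule_source(grad_v: np.ndarray, temp_c: float) -> float:
    """q = sigma(T) * |grad V|^2 [W/m^3] for a local field gradient [V/m]."""
    return sigma_tissue(temp_c) * float(np.sum(np.square(grad_v)))
```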
C. Initial and boundary conditions

The electrical boundary conditions were zero current (Neumann boundary condition) on the symmetry axis and at the blood-tissue interface, and an initially fixed voltage of V = 50 V on the active electrode (Dirichlet boundary condition) and V = 0 V on the dispersive electrode. The dispersive electrode is modelled as the right and bottom boundary in Fig. 1. The value of the voltage on the active electrode was modulated. The thermal boundary conditions were absence of heat flux on the symmetry axis and constant temperature at the dispersive electrode and the outer end of the plastic probe. The initial temperature was 37ºC. The effect of the blood circulation inside the cardiac chamber was modeled by means of boundary conditions based on forced thermal convection at the surfaces of the tissue and electrode. We studied the effect of three thermal convection coefficient values for the tissue and the electrode (htissue and helec), taken at three flow levels: low, medium and high. We used for htissue: 721, 3636 and 5446 W/m2K, respectively, and for helec: 44, 708 and 1417 W/m2K [7].

D. Physical characteristics of materials

Table 1 shows the values of the density ρ, specific heat c, thermal conductivity k and electrical conductivity σ that we employed for the model. These values were evaluated at 37°C and collected from previous studies [7]. We considered a change in the electrical conductivity of the cardiac tissue with temperature of +1.5% ºC-1 and a change in the thermal conductivity of the cardiac tissue with temperature of +0.001195 K-1 [5].

Table 1: Characteristics of materials used in the model (evaluated at 37°C; values taken from previous studies [7])

Material | Region | σ [S/m] | ρ [kg/m3] | c [J/kg·K] | k [W/m·K]
Cardiac tissue | Myocardium | 0.541 | 1060 | 3111 | 0.531
Pt-Ir | Electrode | 4 x 10^6 | 21.5 x 10^3 | 132 | 71.1
Coating | Coating | 10^-5 | 32 | 835 | 0.038
Thermistor | Thermistor | 10^-5 | 32 | 835 | 0.038
Polyurethane | Catheter body | 1 x 10^-5 | 70 | 1045 | 0.026

σ electrical conductivity, ρ density, c specific heat, k thermal conductivity.

E. Protocol of RF power delivery

We used a temperature-controlled protocol. Briefly, the error (e) between the temperature measured at the sensor (Tm) and the target temperature selected by the user (Ttarget) is used by the controller to create a modulating signal (u), which modulates the applied voltage (see Figure 2). The PI controller implies that the modulating signal has two parts: one proportional and the other integral to the error signal. Both parts have fixed constants, Kp (proportional constant) and Ki (integration constant). The modulating signal has the following expression:

$$u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau \qquad (3)$$
where Kp is the proportional constant, Ki is the integration constant and e is the error. The design of the parameters Kp and Ki was based on two parts: 1) identification of the system response, and 2) PI design. In our work, the system represents the tissue, which has an input signal (the applied voltage) and an output signal (the temperature measured at the thermistor).
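A minimal discrete-time sketch of the loop of Eq. (3) and Fig. 2 is shown below; the gains are hypothetical, since the designed Kp and Ki values are not listed in the paper, and plant_step() stands in for one thermal-electric FEM time step:

```python
# Sketch of the temperature control loop with a 1 s step (the simulation
# time-step). Gains KP, KI are illustrative placeholders.
T_TARGET, V_MAX = 55.0, 50.0   # target temperature [degC], max voltage [V]
KP, KI = 40.0, 4.0             # hypothetical PI gains

def run_loop(plant_step, duration_s: int = 120, dt: float = 1.0) -> float:
    integral, t_meas = 0.0, 37.0
    for _ in range(int(duration_s / dt)):
        e = T_TARGET - t_meas              # error e = Ttarget - Tm
        integral += e * dt
        u = KP * e + KI * integral         # PI law, Eq. (3); here u = V^2
        u = min(max(u, 0.0), V_MAX ** 2)   # clamp so that V <= Vmax
        t_meas = plant_step(u ** 0.5, dt)  # apply V = sqrt(u), read sensor
    return t_meas
```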
Figure 2: Temperature-controlled protocol. The error between the temperature at the sensor inserted in the electrode tip and the target temperature selected by the user, e = Ttarget - Tm, is used by the PI controller to modulate the applied voltage by means of the actuator signal u. The tissue was modeled as a non-linear system and simulated by FEM.
1) Identification of the system response: The aim of this part was to obtain the response of the system, i.e. the cardiac tissue. This response was assessed by the transfer function relating the input and output signals. We applied a constant voltage of 30 V for 300 s to characterize the system; the modulating signal is u(t) = V2(t). The results showed that the temperature stabilized around 50°C. Then, using the MATLAB command ident, which estimates a transfer function from an arbitrary recorded response, we estimated the response of the system and used a pole-zero design. 2) PI controller design: After obtaining the transfer function, we designed the PI controller, which implies obtaining the parameters Kp and Ki. The design was conducted with the MATLAB sisotool; the movement of the poles and zeros indicates the parameters of the controller. The transfer function was analyzed using root locus techniques. The settling time of the controller designed for the standard case was 20 s. The standard case corresponds to a 7Fr, 4 mm electrode, medium flow, an insertion depth of 1.25 mm into the tissue, and the tissue characteristics shown in Table 1. The parameter Vmax describes the maximum voltage applied; we chose a value of Vmax = 50 V. The target temperature Ttarget was 55°C. The duration of ablation in all cases was 120 s.
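As an illustration of the identification step, the sketch below fits a first-order model to a recorded step response; this is a simplification of the authors' MATLAB ident/sisotool workflow, not a reproduction of it:

```python
# First-order fit T(t) = 37 + K*(1 - exp(-t/tau)) to a 30 V step response.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, gain, tau):
    return 37.0 + gain * (1.0 - np.exp(-t / tau))

def identify_step_response(t_s: np.ndarray, temp_c: np.ndarray):
    """Return (gain, tau) of the fit; with the temperature stabilizing
    near 50 degC from 37 degC, gain should come out near 13."""
    (gain, tau), _ = curve_fit(first_order, t_s, temp_c, p0=(13.0, 20.0))
    return gain, tau
```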
F. Description of the analyzed cases: Apart from the standard case, other cases were considered, which were divided into two groups: 1) those where variations in tissue characteristics (specific heat, thermal and electrical conductivity) were considered, and 2) those where variations in the ablation procedure (electrode size, blood flow, depth of penetration of the electrode) were considered. Table 2 shows the variations in tissue characteristics and ablation procedure. Our study was divided into two parts. First, we studied the behaviour of the PI controller designed for the standard case under variations in the tissue characteristics and procedural factors. We also studied the behaviour of the PI controller under changes of blood flow; we considered that these variations occurred at 60 s, and we studied two changes of flow, from low flow to high flow and vice versa. Second, we studied the behaviour of a PI controller designed for each specific case.

Table 2: Description of the cases analyzed. The variations in tissue characteristics were taken from previous studies [8]

Variations in tissue characteristics: σ 50%, c 100%, c 50%, k 100%, k 50%
Variations in the ablation procedure: h (high flow), h (low flow), P = 0.75 mm, P = 2.5 mm, electrode 8Fr, Vmax = 100 V

σ electrical conductivity, c specific heat, k thermal conductivity, P depth of penetration of the electrode, h thermal convection coefficient, Vmax maximum voltage applied.
G. Assessed parameters: We evaluated two types of parameters. First, parameters related to the thermal lesion: specifically, the dimensions of the thermal lesion by means of the 50°C isotherm (width W and depth D, see Figure 1), and the maximum temperature reached in the tissue at 120 s (Tmax). Second, a parameter related to the dynamic response of the controller: the time the response of the controller takes to reach 55ºC (tsub).

III. RESULTS
The dimensions of our model were R = Z = 70 mm and L = 10 mm. The automatic meshing was optimal and the time-step was 1 s. The temperature at the active electrode thermistor stabilized around 55ºC with tsub = 30.5 s (see Figure 3). The maximum temperature Tmax did not reach values above 100ºC (overheating). The electrical and thermal conductivity were the most influential factors in the behaviour of the PI controller. The case σ 50% decreased the lesion size, and the settling time increased to around 104 s. The case c 50% increased Tmax by about 9°C, increasing the lesion size by 5 mm; in this case, we observed oscillations around the target temperature. The high-flow h case decreased Tmax and the lesion size, while the low-flow h case showed no differences with respect to the standard case. When the depth pen-
etration was 0.75 mm, we observed a decrease in lesion size; when the depth of penetration was 2.5 mm, we observed an increase in lesion size. The lesion size was greater using the 8Fr, 8 mm electrode than the 7Fr, 4 mm electrode.
ACKNOWLEDGMENT This work was partially funded by the "Plan Nacional de I+D+I del Ministerio de Ciencia e Innovación" of Spain (TEC2008-01369/TEC), and by the Ministerio de Educación y Ciencia and the European Regional Development Fund, project MTM2007-64222.
Figure 4 shows the performance of the PI controller of standard case when a blood flow step occurs. Figure 4A shows when there is step of low flow to high flow. The maximum temperature recorded by the thermistor decreases about 3.5ºC. Then the controller acted to resolve the decrease of temperature and applied voltage around a 23.5 V. The duration of the transition was 22.2 s. Figure 4B shows when there is a step of high flow to low flow. The maximum temperature recorded by the thermistor increases about 2°C. Then the controller acted to resolve this temperature decreasing the voltage applied around 17 V. The duration of the transition was 39 s. The maximum temperature in 60 s did not produce overheating.
REFERENCES 1. 2. 3.
4.
5. 6. 7.
8. Figure 4: Simulation of effect of changing flow. Figure 4A shows the behaviour of controller when there is leap of low flow to high flow. Figure 4B shows the behaviour of controller when there is a leap of high flow to low flow.
The results the PI controller design for each case were similar. However, the oscillations were eliminated in the case c50%. As a result, the settling time was influenced taking a slower response. This slowing was due a reduction about a half of its value Kp and Ki. IV. CONCLUSIONS
9.
Chiappini B, Di Bartolomeo R, Marinelli G. Radiofrequency ablation for atrial fibrillation: different approaches. Asian Cardiovasc Thorac Ann. 2004 Sep;12(3):272-7. Review. Berjano EJ. Theoretical modeling for radiofrequency ablation: stateof-the-art and challenges for the future. Biomed Eng Online. 2006 Apr 18;5:24. Review. Anfinsen OG, Aass H, Kongsgaard E, Foerster A, Scott H, Amlie JP. Temperature-controlled radiofrequency catheter ablation with a 10mm tip electrode creates larger lesions without charring in the porcine heart. J Interv Card Electrophysiol. 1999 Dec;3(4):343-51 Petersen HH, Chen X, Pietersen A, Svendsen JH, Haunso S. Tissue temperatures and lesion size during irrigated tip catheter radiofrequency ablation: An in vitro comparison of temperature- controlled irrigated tip ablation, power-controlled irrigated tip ablation, and standard temperature-controlled ablation. Pacing Clin Electrophysiol 2000;23(1):8–17. Schutt D, Berjano EJ, Haemmerich D. Effect of electrode thermal conductivity in cardiac radiofrequency catheter ablation: a computational modeling study. Int J Hyperthermia. 2009 Mar;25(2):99-107. Berjano EJ, Hornero F. Thermal-electrical modeling for epicardial atrial radiofrequency ablation. IEEE Trans Biomed Eng. 2004 Aug;51(8):1348-57. Panescu D, Whayne JG, Fleischman SD, Mirotznik MS, Swanson DK, Webster JG. Three-dimensional finite element analysis of current density and temperature distributions during radio-frequency ablation. IEEE Trans Biomed Eng. 1995 Sep;42(9):879-90. Tungjitkusolmun S, Woo EJ, Cao H, Tsai JZ, Vorperian VR, Webster JG. Thermal--electrical finite element modelling for radio frequency cardiac ablation: effects of changes in myocardial properties. Med Biol Eng Comput. 2000 Sep;38(5):562-8. Haemmerich D, Webster JG. Automatic control of finite element models for temperature-controlled radiofrequency ablation. Biomed Eng Online 2005;4(1):42. Corresponding author: Author: J. Alba Martínez Institute: Instituto de Investigación e Innovación en Bioingeniería Street: Cami de Vera S/N City: Valencia Country: Spain Email: [email protected]
Our results suggest that only under some specific conditions, the values of Kp and Ki designed for the standard case
Mechanical Properties of Long Bone Shaft in Bending

S.M. Rajaai1, K. PourAkbar Saffar2, and N. JamilPour3

1 Dept. of Mechanical Engineering, Faculty of Engineering, Azad University, Abhar, Iran
2 Dept. of Mechanical and Manufacturing Engineering, University of Calgary, Alberta, Canada
3 Dept. of Mechanical Engineering, Faculty of Engineering, Semnan University, Semnan, Iran
Abstract–– Long bones commonly suffer bending loads. It is important to predict appropriate limits for these loads, bone fracture behavior, and their relations with bone geometry. In this research, ten fresh specimens of sheep tibiae were prepared. Whole bone specimens were loaded in three-point bending according to standard wet bone test protocols, and mechanical properties were determined. Finite element models were built with simplified geometry, assuming linear elastic and isotropic properties. The elasticity modulus and fracture load, evaluated from the load-deformation curve, were applied to the finite element model, and the maximum stresses obtained in the test specimens and the models agreed closely: there was a difference of about 2% between the ultimate strength of the bone specimens and the maximum stress occurring in the models. The results showed that the fracture bending moment and bone extrinsic stiffness had significant relations with parameters of the fracture cross-section. However, fracture energy and ultimate strength did not show such relations with these parameters.

Keywords–– Mechanical properties of bone, three point bending, finite element modeling, sheep tibia, bone geometry.
I. INTRODUCTION

Bone undergoes complex patterns of loading during its lifetime. Predicting bone behavior under applied loads necessitates delineating its mechanical properties, and mechanical testing of bone is a common way to determine them. In some studies, compact or spongy bone is machined and tested with the desired geometry. However, there is a risk of damage to the bone tissue, caused by the high temperatures of the machining process, which can affect the results. Also, bone has an anisotropic structure, and its structural properties vary with load direction [1]. It is thought that, while performing tests, the related materials and geometric dimensions affect the bone strength (in flexion, torsion or compression), depending on the type and direction of the load applied to the whole bone [2]. Therefore, testing methods common for other materials cannot determine the bone's structural characteristics [2], and testing the whole bone may be another way to study its properties. Usually, three-point and four-point bending and torsion tests are used to determine whole bone mechanical strength.
The three-point bending test has commonly been used in the evaluation of bone strength in earlier studies, which have shown that bending fracture load and stiffness, as well as the intrinsic parameters, ultimate stress and elastic modulus, are good indicators of the mechanical strength of cortical bone [3]. In this paper, the procedure and results of three-point bending tests of sheep tibiae are discussed, and it is shown that variations of the long bone specimen geometry along its shaft did not significantly affect the bending strength of the whole bone.
II. MATERIALS AND METHOD

Test specimens can be chosen from animal models such as rabbits, dogs, pigs and sheep [3]. Sheep are a promising model for various reasons: they are docile, easy to handle and house, relatively inexpensive, available in large numbers, and spontaneously ovulating. Ten sheep tibiae were chosen for the tests. It was necessary to remove the bone specimens from the carcasses immediately after slaughtering and keep them wet, in order to minimize changes in the bone's in-vitro properties. Thus, they were taken from a slaughterhouse. Some factors were considered before slaughtering to diminish probable faults in the results, such as sex, color, and race. All specimens were chosen from the left tibiae of cream-colored female sheep of the same herd of Kurdish race, in the weight range of 52–65 kg. Immediately after slaughtering, the specimens were removed from the carcasses, cleaned, immersed in 0.9% physiologic saline, and kept at 4ºC [4]. All bone specimens were tested within 24 hours. Bending can be applied to the bone using either three-point or four-point methods. The advantage of three-point bending is its simplicity, but it has the disadvantage of creating high shear stress near the midsection of the bone. Four-point loading produces pure bending between the two loading points, which ensures that the transverse shear stresses are zero. However, four-point bending requires that the force at each loading point be equal. This requirement is simple to achieve in regularly shaped specimens but difficult in whole bone tests [3]. Therefore, three-point bending is used
more often for measuring the mechanical properties of long bones. According to the standard testing method published by the American Society of Agricultural Engineers, the three-point bending test should be used only when the bone is straight, has a symmetrical cross-section, and has a support length to diameter ratio greater than 10. The testing machine should be capable of applying a constant rate of crosshead movement with reproducible speed and an accuracy of ±1.0%. A three-point bending test fixture with adjustable fulcra should be used in order to obtain a support length to bone diameter ratio greater than 10. Details and previous histories of the animals from which the bones were taken should also be recorded [5].

A. Three-Point Bending Tests

A Zwick/Roell 321 htm 123 testing machine, in the Biomaterials Physical-Mechanical Properties Lab., Department of Biomedical Engineering, Amirkabir University of Technology, was used. The loading capacity of the apparatus was 2.5 tons. The sheep tibia was assumed to be a hollow shaft with an elliptical cross-section [5] (Fig. 1). Compact bone was considered homogeneous with linear elastic properties.
B. Bending Equations for Bone

Using beam bending theory and assuming that bone has linear elastic behavior, the ultimate bending strength, Young's modulus, and fracture strain can be determined from the following equations:

σ = F·L·C / (4·I)    (1)

E = F·L³ / (48·I·δ)    (2)

ε = δ · (12·C / L²)    (3)

where F is the fracture force, δ is the deformation at the fracture instant, L is the active length of the bone shaft (fulcra span length), and C is half of the small external diameter of the cross-section (D/2) at the load application point (midsection of the bone shaft). The area moment of inertia for the hollow elliptical cross-section is calculated using the following equation:

I = (π/64) · [(B·D³) − (b·d³)]    (4)

In the bending test, the intrinsic stiffness is equal to Young's modulus (E), which is the slope of the straight-line portion of the stress–strain diagram. The flexural rigidity is equal to EI, and the extrinsic stiffness is calculated from the term 48EI/L³.
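For concreteness, Eqs. (1)–(4) can be evaluated directly from the measured quantities. The sketch below uses illustrative numbers of the same order of magnitude as Table 1, not actual specimen data:

```python
import math

def bending_properties(F, L, delta, B, D, b, d):
    """Three-point bending quantities from Eqs. (1)-(4).

    F: fracture force (N); L: fulcra span (mm); delta: deformation (mm);
    B, D: large/small external diameters (mm); b, d: internal diameters (mm).
    """
    C = D / 2.0
    I = math.pi / 64.0 * (B * D**3 - b * d**3)   # Eq. (4), mm^4
    sigma = F * L * C / (4.0 * I)                 # Eq. (1), MPa
    E = F * L**3 / (48.0 * I * delta)             # Eq. (2), MPa
    eps = delta * 12.0 * C / L**2                 # Eq. (3), dimensionless
    stiffness = 48.0 * E * I / L**3               # extrinsic stiffness, N/mm
    return sigma, E, eps, stiffness

# Illustrative values only (same order as Table 1):
print(bending_properties(F=1500, L=106, delta=5.7, B=18, D=14, b=12, d=9))
```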
Fracture energy is the area under the force–deformation curve up to the point of fracture [3,4,5].

Fig. 1 A, B) Anterior and posterior views of sheep tibia; C) Variations of the cross-section, and its similarity to an ellipse, along the bone shaft

Specimens were propped up horizontally in the apparatus with the flattest side down and the anterior surface upwards, centered on the supports. The two support points were rounded to avoid shear loading and cutting. The fulcra span length was adjusted to the active length of each bone shaft, and the load point was set at the middle of this length. At this point, the exterior dimensions of the cross-section were measured using a vernier caliper. While load was applied at a rate of 10 mm/min [5], the machine produced the force–deformation curves. Interior dimensions were measured after fracture occurred.

C. Finite Element Modeling and Analysis
After evaluation and classification of the mechanical properties, the specimens were modeled in the ANSYS software. The geometric dimensions of the models were defined according to the measured dimensions of each specimen, and the variations of these dimensions along the specimens' length were disregarded; that is, the shaft was considered a uniform elliptical cylinder. This assumption yielded interesting results despite the varying cross-section dimensions along the bone shaft, as discussed below. The models were considered linear elastic and isotropic. Each model was meshed with 3-D, eight-node hexahedral elements with the same Poisson's ratio (γ = 0.3) [3]. The Young's modulus calculated from the test results was assigned to each model. Providing simple support boundary conditions at the two ends of each model's shaft, the fracture force was applied to the upper surface nodes of the midsection upper elements.
III. RESULTS

Using the force–deformation curves, the fracture force and deformation, fracture bending moment, fracture energy, and extrinsic stiffness were evaluated for all specimens. Table 1 presents the average, minimum, and maximum values and the standard deviations of the studied quantities. The standard deviations are acceptably small; clearly, more extensive statistics would yield more reliable results. After finite element modeling and analysis, the deformations and maximum (tensile) stresses evaluated from the nodal and element solutions were compared with the experimental results. The maximum (tensile) stresses in the models are remarkably close to the experimental fracture strengths of the bone specimens. As Table 2 shows, the average difference between the experimental ultimate (fracture) strength and the maximum (tensile) stress in the model is about 2%, which is acceptable.
IV. DISCUSSION

Since, in all specimens, the ratio of support length to the bone's small external diameter was greater than 10, the transverse shear stress in the shaft cross-section could be disregarded and the loading could be assumed to be pure bending. The presented ultimate strength is the bone's flexural strength; compressive, tensile, or shear strengths cannot be obtained from a bending test. Fracture strain was on average under 4%. In addition, failure began on the lower side of the bone shaft, i.e., tensile failure occurred before compressive failure. This reflects the quasi-brittle behavior of bone. As mentioned before, fracture energy is equal to the area under the force–deformation curve up to the point of fracture, whereas the toughness modulus is evaluated from the area under the stress–strain diagram. In some studies, the definition of the cross-section area moment of inertia has been based on an elliptic approximation of the cortical bone cross-section; however, this has been shown to result in significant errors. In this study, geometric irregularities and variations of the bone shaft cross-section dimensions were disregarded. For example, the cross-section was considered symmetric, the neutral axis was assumed to pass through the center of the elliptical cross-section of the shaft, and only the midsection dimensions were measured. However, the proximity of the experimental to the analytic results for the maximum (tensile) stresses may validate these assumptions. Therefore, when analyzing bending of a long bone shaft, it might be useful to consider the shaft as a uniform elliptical cylinder, and sufficient to know the geometric dimensions of the mid-cross-section, which is the critical cross-section of the shaft in three-point bending.

It should be noted that the finite element modeling and analysis were done with the assumption of linear elastic bone behavior, and the differences between the experimental and analytic deformations were due to the difference between the bone's actual behavior and this assumption. Thus, the experimental deformations were larger than the analytic deformations (see Fig. 2).

Several quantities were plotted against each other. The correlations showed that the fracture bending moment had close relations with the bone cross-section area and the area moment of inertia (with 95% confidence bounds). Extrinsic stiffness also had significant relations with the bone cross-section area and area moment of inertia. However, the ultimate strength and fracture energy did not have such significant relations with the geometric parameters, i.e., cross-section area, area moment of inertia, and active length of the bone shaft.
Fig. 2 Force-deformation curve for specimen 6
V. CONCLUSIONS

In this study, the intention was to examine the mechanical properties of whole bone, a crucial tissue of the living system, and the effects of whole bone geometry on its mechanical properties, which cannot be assessed with machined bone specimens. Thus, whole long bones were tested in three-point bending according to standard wet bone test protocols.

In this research, it was shown that the whole bone shaft bending strength did not have effective relations with the variations of the cross-section geometric dimensions along the bone length relative to the midsection, provided the whole bone was straight and its lateral curvature could be neglected; modeling could therefore be done while disregarding these variations.
Table 1 Statistical results for the specimens

Quantity | Minimum | Maximum | Average | Standard Deviation
1. Area Moment of Inertia (mm4) | 901.1 | 1889.0 | 1472.8 | 310.5
2. Midsection Cross-Section Area (mm2) | 88.00 | 138.66 | 118.67 | 16.03
3. Bone Shaft Active Length (mm) | 99 | 116 | 106 | 5.1
4. Fracture Energy (J) | 3.41 | 7.17 | 4.73 | 1.10
5. Fracture Bending Moment (N·m) | 29.16 | 50.62 | 40.92 | 6.78
6. Elasticity (Young's) Modulus (GPa) | 5.42 | 8.65 | 6.58 | 0.93
7. Toughness Modulus (MPa) | 2.38 | 5.35 | 3.77 | 0.90
8. Ultimate (Fracture) Strength (MPa) | 168.71 | 201.68 | 177.88 | 9.23
9. Extrinsic Stiffness (kN/mm) | 0.271 | 0.484 | 0.386 | 0.066
10. Fracture Strain | 0.0304 | 0.0477 | 0.0394 | 0.0056
11. Fracture Load (kN) | 1.178 | 1.947 | 1.547 | 0.244
12. Deformation (mm) | 4.88 | 6.64 | 5.75 | 0.56
Table 2 Some experimental results and comparable results evaluated from finite element modeling

Specimen | Fracture Force (kN) | Young's Modulus (GPa) | Experimental Deformation (mm) | Fracture Strength (MPa) | Model Deformation (mm) | Max. Stress in Model (MPa), Nodal Solution | Max. Stress in Model (MPa), Element Solution
1 | 1.178 | 6.08 | 6.58 | 182.01 | 5.03 | 178.18 | 179.44
2 | 1.947 | 6.61 | 6.58 | 201.68 | 4.88 | 202.03 | 202.99
REFERENCES

1. Liu D., Weiner S., Wagner H.D. (1999) Anisotropic mechanical properties of lamellar bone using miniature cantilever bending specimens, Journal of Biomechanics, 32, pp. 647–654.
2. Van Der Meulen M.C.H., Jepsen K.J., Mikic B. (2001) Understanding Bone Strength: Size Isn't Everything, Bone, Vol. 29, No. 2.
3. Cowin S.C. (2001) Bone Mechanics Handbook, 2nd Ed., CRC Press, Ch. 6 & 7.
4. Turner C.H., Burr D.B. (1993) Basic Biomechanical Measurements of Bone, Bone, Vol. 14, pp. 595–608.
5. ASAE/ANSI S459 DEC01 (2003) Shear and Three-Point Bending Test of Animal Bone, ASAE Standards, American Society of Agricultural Engineers, pp. 611–614.
Corresponding Author: Seyed Mohammad Rajaai
Institute: Faculty of Engineering, Azad University, Abhar, Iran
Email: [email protected]
Active Behavior of Peripheral Nerves during Magnetic Stimulation

M. Cretu, R. Ciupa, and L. Darabant

Technical University of Cluj-Napoca/Electrotechnics Department, Cluj-Napoca, Romania
[email protected]

Abstract— We present a model that predicts the electric field induced in the arm during magnetic stimulation of a peripheral nerve. The arm is represented as a homogeneous, cylindrical volume conductor. The electric field arises from two sources: the time-varying magnetic field and the accumulation of charge on the tissue–air surface. The effect of the induced electric field upon the nerve is determined with a cable model which contains active Hodgkin-Huxley elements. Once the coil's position and shape are given, and the resistance, capacitance and the initial voltage of the stimulating circuit are specified, the resulting transmembrane potential of the fiber is calculated as a function of distance and time.

Keywords— Magnetic stimulation, Stimulating coil, Hodgkin-Huxley model, Activation function.
I. INTRODUCTION

Magnetic nerve stimulation is a painless method for cortical stimulation or for the activation of deep-lying peripheral nerves. Being a new medical instrument, scientific research in this domain continuously brings substantial improvements, especially by controlling the stimulus parameters (amplitude, duration) and by accurate stimulus localization [1], [2]. Although many papers have demonstrated this phenomenon experimentally [3], both for peripheral nerves and for the cerebral cortex, the physical interaction between the induced electric field and the excitable tissues must be clearly explained and understood. The paper starts by describing the mechanism of magnetic stimulation (computation of the induced electric field, description of the stimulating circuit, and the behavior of the nerve fiber – the active cable model). Then, a computer model with all its characteristics is presented. Finally, the activation function and the active transmembrane potential are computed for two different positions of the stimulating coil with respect to the nerve fiber, and important conclusions are drawn.
II. THEORETICAL BACKGROUND

A. Mechanism of Magnetic Stimulation

The principle of magnetic nerve stimulation is emphasized in Figure 1.

Fig. 1 An alternating current flowing through the magnetic coil causes a time-varying magnetic field, which induces an electric field (eddy currents) in the nerve fibre

According to electromagnetic field theory, the electric field inside the tissue can be computed by means of the scalar electric potential and the magnetic vector potential:

E = −∂A/∂t − grad V = E_A + E_V    (1)
The first term of the electric field, E_A, is called the “primary electric field” and is due directly to the electromagnetic induction phenomenon, while the second term, E_V, represents the “secondary electric field”, due to charge accumulation on the tissue–air boundary. According to formula (1), the computation of the electric field due to electromagnetic induction is done by means of the magnetic vector potential:

A(r, t) = (μ0 · N · I(t) / 4π) · ∫coil dl / r    (2)
where the vector dl represents the differential element of the coil, the vector r is the distance from the coil element to the field point, and N is the number of turns of the coil. A common application of magnetic stimulation is to excite peripheral nerves. We assume that the arm can be modeled as a cylindrical volume conductor. The secondary electric field depends on the geometry of the tissue-air interface, considered a cylindrical surface. This term is computed knowing that on the surface, the boundary condition to be fulfilled is: n ⋅ E A = − n ⋅ E V . The electric potential inside this domain, V, is numerically evaluated by solving Laplace equation (ΔV = 0) with Neumann boundary conditions
inside the tissue (∂V/∂n = n · E_A) [4]. In order to solve this
problem, we implemented a Matlab routine based on the Finite Difference Method. The resulting system of equations is solved using the Gauss elimination algorithm. For computing the electric field induced in a cylindrical volume conductor, the computation domain is divided into a number of small surfaces, creating the mesh shown in Figure 2.
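For orientation, a minimal analogue of this scheme is sketched below. This is not the authors' Matlab routine: it is Python, a small square grid stands in for the cylindrical mesh of Fig. 2, and the boundary flux values are arbitrary assumptions.

```python
import numpy as np

# Finite-difference Laplace solver with Neumann boundary data
# dV/dn = n.E_A, assembled as a dense linear system and solved
# directly (the text uses Gauss elimination).
N, h = 20, 1.0
idx = lambda i, j: i * N + j
A = np.zeros((N * N, N * N))
b = np.zeros(N * N)
flux = lambda i, j: 0.1 if j == 0 else (-0.1 if j == N - 1 else 0.0)

for i in range(N):
    for j in range(N):
        k = idx(i, j)
        if i in (0, N - 1) or j in (0, N - 1):
            # one-sided difference approximating dV/dn on the boundary
            ii, jj = min(max(i, 1), N - 2), min(max(j, 1), N - 2)
            A[k, k], A[k, idx(ii, jj)] = 1.0 / h, -1.0 / h
            b[k] = flux(i, j)
        else:                                # interior: 5-point Laplacian
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = 1.0

A[0, :] = 0.0                                # pin one node: a pure Neumann
A[0, 0], b[0] = 1.0, 0.0                     # problem fixes V only up to a constant
V = np.linalg.solve(A, b).reshape(N, N)
print(V.min(), V.max())
```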
Fig. 2 Model of the cylinder mesh for solving the Laplace problem

The mesh size is hRHO = 3.125 mm, hTHETA = 22.5º, hz = 3.125 mm; the arm length is 500 mm, while its radius is 25 mm. An intuitive representation of the mesh is given in Figure 2 (the figure does not show the whole mesh and is not to scale). We assume that the tissue is homogeneous and isotropic.

B. Stimulating Circuit

The electric current required to induce the electric field (A is proportional to I – see (2)) is delivered by a magnetic stimulator (an RLC circuit). The circuit works in an overdamped transient state, the condition for which is (R/2L)² > 1/LC. The current waveform produced by discharging a capacitor with initial voltage U0 into the coil is:

I = (U0 / ωL) · sinh(ωt) · exp(−αt)    (3)

where α = R/(2L), ω = √(α² − 1/LC), C is the capacitance, and R and L are the resistance and inductance of the coil, respectively. The inductance of a circular coil having radius r, number of turns N, and wire conductor radius rw is given by the following formula:

L = μ0 · r · N² · (ln(8·r/rw) − 1.75)    (4)

The resistance is evaluated using the analytical formula:

R = ρCu · (2π · r · N) / (π · rw²)    (5)

where ρCu represents the copper resistivity.

C. Hodgkin-Huxley Model

Neuronal structures can be modeled in the form of a cable, and the membrane response can be computed by solving the equations describing the transmembrane potential across the membrane of the cable in the presence of induced electric fields. The relation for the transmembrane potential along an infinitely long nerve fiber (placed along the x axis) in the presence of induced electric fields is given by the passive cable model:

τ · ∂Vm/∂t + Vm − λ² · ∂²Vm/∂x² = −λ² · ∂Ex/∂x = f(x)    (6)
where Vm is the transmembrane voltage, Ex the axial component of the induced electric field, λ the space constant of the cable, and τ the time constant. The term on the right of equation (6) represents the activation function and is computed using the method described in paragraph A. While the passive cable model describes the interaction between the induced electric field and the nerve, it does not completely describe the dynamics of nerve stimulation. In order to study the stimulation and propagation of action potentials, we must consider an active membrane model. We use the Hodgkin-Huxley model to represent the nerve membrane [5]. To implement this model, we modify the initial passive cable model. The extracellular potential produced by the fiber's own activity is neglected; this assumption is valid because the extracellular potential produced by an action potential propagating along a single nerve axon lying in a large extracellular volume conductor is less than 1 mV. The resistance per unit length of the fiber, ri, can be expressed in terms of the fiber radius a and the resistivity of the axoplasm Ri as ri = Ri/(πa²). The membrane current per unit length im is related to the membrane current density Jm by im = 2πa·Jm; similarly, the membrane capacitance per unit length cm is related to the capacitance per unit area Cm by cm = 2πa·Cm. Finally, we replace the membrane resistance per unit length rm by an active model of time- and voltage-dependent sodium, potassium and leakage channels. With these changes, the cable equation becomes:

(a / 2Ri) · ∂²Vm/∂x² − (gNa·m³·h·(Vm − ENa) + gK·n⁴·(Vm − EK) + gS·(Vm − ES)) = Cm · ∂Vm/∂t + (a / 2Ri) · ∂Ex(x,t)/∂x    (7)

where gNa, gK and gS are the peak sodium, potassium and leakage membrane conductances per unit area, and ENa, EK
and ES are the sodium, potassium and leakage Nernst potentials. The gating variables m, n, h are dimensionless functions of time and voltage which vary between zero and one. We assumed that the resting potential is −65 mV; Vm is measured in mV, and α and β in ms⁻¹. The values of the model parameters used in our computations are given in Table 1:
For the following simulations, the initial voltage on the circuit’s capacitor is set to U0=30V. Figure 4 shows the induced electric field gradient as a function of time and distance along the fiber.
Table 1

ENa | Sodium Nernst potential | 50 mV
EK | Potassium Nernst potential | −77 mV
ES | Leakage Nernst potential | −54.387 mV
gNa | Sodium conductance | 120 mS/cm²
gK | Potassium conductance | 36 mS/cm²
gS | Leakage conductance | 0.3 mS/cm²
Cm | Membrane capacitance | 1 μF/cm²
Ri | Resistivity of axoplasm | 0.0354 kΩ·cm
a | Fiber radius | 0.0238 cm
Fig. 4 The activation function evaluated along the length of the nerve fiber. a) The coil is parallel, with a displacement of 25 mm from the cylinder's axis. b) The coil is perpendicular to the cylinder's axis
III. RESULTS AND DISCUSSIONS

The magnetic coil considered in our simulation has 30 turns, a radius of 25 mm, and a wire radius of 1 mm. The computed inductance of this coil is 0.165 mH and its resistance is 3 Ω. The coil is part of a magnetic stimulator that also comprises a capacitance, C = 200 μF. Two cases were studied, for different positions of the coil relative to the arm. Figure 3 (a, b) shows the two analyzed cases: in the first, the coil is perpendicular to the cylinder's axis; in the second, the coil is parallel to the tissue but displaced 25 mm with respect to the cylinder's axis. In order to obtain the transmembrane potential (the response of the nerve fiber to stimulation) as a function of distance and time, we first modulate the electric field gradient in time (∂Ez(z,t)/∂z). The electric field gradient represents the activation function and is calculated along the cylinder (the arm), on a line with y = 0 mm and x = 25 − 6.25 = 18.75 mm, that is, at a depth of 6.25 mm in the tissue, below the edge of the coil.
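As a quick numerical check of Eq. (3), the discharge current for the circuit values quoted above can be evaluated as follows (a sketch; the 30 V initial voltage matches the simulations below, and the time grid is arbitrary):

```python
import numpy as np

# Stimulator discharge current of Eq. (3) for the quoted circuit:
# L = 0.165 mH, R = 3 ohm, C = 200 uF, U0 = 30 V.
L, R, C, U0 = 0.165e-3, 3.0, 200e-6, 30.0

alpha = R / (2 * L)
assert alpha**2 > 1 / (L * C)        # overdamped condition (R/2L)^2 > 1/LC
omega = np.sqrt(alpha**2 - 1 / (L * C))

t = np.linspace(0.0, 2e-3, 2001)     # 0..2 ms, arbitrary grid
I = U0 / (omega * L) * np.sinh(omega * t) * np.exp(-alpha * t)

print(f"peak current {I.max():.1f} A at t = {t[I.argmax()] * 1e6:.0f} us")
```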
Fig. 3 Geometry of the problem
For describing the subthreshold behavior of the nerve fiber inside the tissue, we employ the cable equation (equation 6) to model the passive properties of the nerve fiber. Figure 5 shows the three–dimensional plot of the transmembrane potential Vm, as a function of distance along the fiber and elapsed time since the pulse is applied. The stimulus strength is below threshold, so the fiber behaves like a passive cable and the induced voltage is dissipated.
Fig. 5 The transmembrane potential for the subthreshold behavior of the cylindrical conductor (arm). a) The coil is parallel to the tissue but with a 25 mm displacement with respect to the cylinder's axis; b) The coil is perpendicular to the cylinder's axis

Direct comparison of Figure 4 and Figure 5 shows that the resulting transmembrane potential has a time course that resembles that of the activation function, although the response of the nerve is somewhat delayed due to the time required for the accumulation of charge on the membrane. The transmembrane potential Vm(x,t) and the three gating parameters m(x,t), n(x,t) and h(x,t) are computed using the method of finite differences, implemented in Matlab with an
iterative algorithm (we compute the value of each parameter from its value at the previous time step of 0.1 ms). The space discretization uses a step of 5 mm. It is assumed that the membrane is initially at rest:

∂Vm/∂t = ∂m/∂t = ∂h/∂t = ∂n/∂t = 0 for t = 0    (8)
The boundary conditions of the problem, applied at x = ±L, far from the region where the stimulus strength is large, are that the axial gradients of the transmembrane potential and the three gating parameters vanish:

∂Vm/∂x = ∂m/∂x = ∂h/∂x = ∂n/∂x = 0    (9)
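To make such a scheme concrete, a compact explicit finite-difference integration of Eq. (7) is sketched below. This is illustrative only: it is Python rather than the authors' Matlab, the standard Hodgkin–Huxley rate functions (for a −65 mV resting potential) are assumed, and a Gaussian pulse stands in for the coil-derived field gradient ∂Ex/∂x.

```python
import numpy as np

# Explicit finite-difference sketch of the active cable equation (7).
# Membrane parameters follow Table 1; dExdx is a placeholder pulse,
# not the field gradient computed from the coil model.
gNa, gK, gS = 120.0, 36.0, 0.3              # mS/cm^2
ENa, EK, ES = 50.0, -77.0, -54.387          # mV
Cm, Ri, a = 1.0, 0.0354, 0.0238             # uF/cm^2, kOhm*cm, cm

dx, dt = 0.5, 0.01                          # space (cm) and time (ms) steps
x = np.arange(-15.0, 15.0 + dx, dx)

# Standard Hodgkin-Huxley rate functions (V in mV, rates in 1/ms)
am = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
bm = lambda V: 4.0 * np.exp(-(V + 65) / 18)
ah = lambda V: 0.07 * np.exp(-(V + 65) / 20)
bh = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
an = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
bn = lambda V: 0.125 * np.exp(-(V + 65) / 80)

V = np.full_like(x, -65.0)                  # membrane initially at rest, Eq. (8)
m = am(V) / (am(V) + bm(V))
h = ah(V) / (ah(V) + bh(V))
n = an(V) / (an(V) + bn(V))

def dExdx(t):                               # assumed placeholder forcing
    return -600.0 * np.exp(-(x / 3.0) ** 2) * np.exp(-t / 0.2)

peak = V.max()
for step in range(int(5.0 / dt)):           # 5 ms of simulated time
    t = step * dt
    d2V = np.zeros_like(V)                  # vanishing end gradients, Eq. (9)
    d2V[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    Iion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
            + gS * (V - ES))                # ionic current, uA/cm^2
    dVdt = (a / (2 * Ri) * (d2V - dExdx(t)) - Iion) / Cm
    m += dt * (am(V) * (1 - m) - bm(V) * m)
    h += dt * (ah(V) * (1 - h) - bh(V) * h)
    n += dt * (an(V) * (1 - n) - bn(V) * n)
    V += dt * dVdt
    peak = max(peak, V.max())

print(f"peak Vm = {peak:.1f} mV")           # > 0 mV indicates an action potential
```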
The model is used to determine the response of the nerve membrane to the applied electric field, for different values of the initial voltage on the capacitor of the stimulation circuit. For the case when the coil is parallel, with a displacement of 25 mm from the cylinder's axis, an action potential is evoked at U0 = 40 V after a latency period of about 2 ms. When the coil is perpendicular, the action potential appears at U0 = 140 V, after a latency period of 2.25 ms. After the latency periods, the transmembrane potential rises rapidly to a value of about 50 mV.

In Figure 6 we plotted the action potential for these two positions of the coil, for different values of U0.

Fig. 6 Variation of the transmembrane potential and the three gating parameters in time for the two analyzed cases. When the coil is parallel to the tissue the latency period is 1 ms; when the coil is perpendicular the latency is about 1.55 ms

Figure 7 (a, b) shows the three-dimensional plot of the response of the nerve fiber to electromagnetic stimulation above threshold. This figure clearly indicates that the depolarized portion of the nerve has been stimulated, while the hyperpolarized portion has not.

Fig. 7 A three-dimensional plot of the response of the nerve fiber to electromagnetic stimulation – the action potential – above threshold. The vertical axis is the action potential, and the horizontal axes represent the distance along the fiber, x, and the time after the capacitor is discharged, t
IV. CONCLUSIONS

In our paper we have computed the response of the nerve fiber to magnetic stimulation. If the stimulus strength is below threshold, the nerve fiber behaves like a passive cable and the induced voltage is dissipated. If the stimulus strength is slightly larger, an action potential is evoked. The position of the coil with respect to the tissue to be stimulated influences the response of the nerve. When the coil is parallel to the tissue (with a displacement of 25 mm), stimulation occurs at a smaller voltage (U0 = 40 V) than when the coil is perpendicular to the nerve (U0 = 140 V). If the coil is parallel, the latency period is shorter, and the action potential is initiated at x = 0 mm, which corresponds to the position of the maximum of −∂Ex/∂x.
ACKNOWLEDGMENT This work has been supported within the research programme PNII_IDEI, No. 1078/2007 - “Neural Magnetic Stimulation: Research Concerning the Improvement of the Electrical Equipment Performances and Clinical Efficiency in Diagnostic and Treatment”.
REFERENCES

1. Thielscher A., Kammer T. (2004) Electric field properties of two commercial figure-8 coils in TMS: calculation of focality and efficiency. Clin. Neurophysiol. 2004 Jul;115(7):1697-708
2. Lin V., Hsiao I., Dhaka V. (2000) Magnetic Coil Design Considerations for Functional Magnetic Stimulation. IEEE Transactions on Biomedical Engineering, vol. 47, no. 5
3. Ruohonen J., Ravazzani P., Nilsson J., Panizza M. (1996) A volume-conduction analysis of magnetic stimulation of peripheral nerves. IEEE Trans. Biomed. Eng. Jul;43(7):669-78
4. Plesa M., Darabant L., Ciupa R. (2008) A Medical Application of Electromagnetic Fields: The Magnetic Stimulation of Nerve Fibers Inside a Cylindrical Tissue. OPTIM 2008, IEEE Catalog Number: 08EX1996C, ISBN: 2007905111, pp. 87-92
5. Roth B.J., Basser P.J. (1990) A Model of the Stimulation of a Nerve Fiber by Electromagnetic Induction. IEEE Transactions on Biomedical Engineering, vol. 37, no. 6
Preparations’ Methodology for the Introduction of Information Systems in Hospitals

J. Sarivougioukas1, A. Vagelatos2, and Ch. Kalamara1

1 General Hospital of Athens “G. Gennimatas”, Athens, Greece
2 R.A. Computer Technology Institute, Athens, Greece
Abstract— In the early stages of the introduction of an Information System in a Hospital, and before the initiation of the procedures for the actual installation, numerous preparatory actions must be concluded as part of the project. The set of these actions comprises a preparatory phase which must be considered part of the project and accommodated in the initial plans. Omitting or limiting the preparatory phase will cause noticeable delays, in some cases the failure of modules, and may even endanger the entire project. The successful completion of the preparatory actions relies on a systematic approach which ensures that all the influencing factors have been considered and all interacting conflicts have been resolved prior to the project’s commencement; this can be realized by employing a methodological framework such as the 4-by-3 structure with shafts, which is proposed in the present paper. The considered preparations provide a clear picture of, and feedback on, the existing organizational readiness and IT cultural status, thus allowing the appropriate and adequate measures to be taken in advance.

Keywords— Hospital Information Systems, Health Informatics.
I. INTRODUCTION
On a regular basis, when healthcare authorities are about to look in the market for an Information System (I.S.), their concerns focus on the preparation of a document adequate for a tender. The tender’s document contains descriptions requiring the latest state-of-the-art technology to be applied, in an implied way, at the intended healthcare organization to achieve a set of business goals. However, each of the desired information systems requires properly trained personnel following well-established internal procedures, applying the appropriate set of data along with suitably set up and configured hardware, in order to succeed in promoting the healthcare organization’s aims. In a tender’s process, most of the time is consumed examining the available market solutions, visualizing each vendor’s offer for the particular healthcare organization’s purposes, or even visiting other installations of the same vendor to evaluate the applied solution. Another concern is the employment of the latest technology through offers without assuring the satisfaction of the prerequisites related to compatibility with the existing technological environment and the ability of the personnel to become promptly familiarized with, and successfully adapted to, the application requirements. Hence, the healthcare organization’s business needs, the available personnel, the procedures followed so far, the current workload, and the organizational capacity issues are all assumed to be satisfied by technology, which most of the time turns out to be incapable of meeting the expectations. From past experience with public tenders concerning the procurement of Hospital Information Systems in Greece [8, 9], the interest in a tender’s document is typically focused on the description of the procedures carried out, followed by a set of documents describing with the greatest possible accuracy the technical specifications of the latest technology of both the hardware and software items required. In particular, state healthcare organizations go through an exhaustive analysis of the technical specifications of each and every item on the requesting list according to which a tender is going to take place. The situation is exacerbated when the time elapsed between the writing of the technical specifications and the actual tender turns out to be quite long (e.g., more than a year) and new hardware models (or even software revisions) have been announced in the market. The workflow of the healthcare organization is rarely placed on a detailed diagram; it is usually included in specifications describing either abstract or specialized works intended to be carried out by the employed technology, lacking parameters like trained personnel, time, consumables, and resources. In the past, we faced all these obstacles during the introduction of an IS in the “G. Gennimatas” hospital [8, 9]. More specifically, after reviewing that project’s outcomes, it was realized that most of its complications emerged due to the lack of a preparatory phase, which was needed in order to take the appropriate actions and measures for the smooth realization of the actual project. In this paper, the preparative procedures prior to the introduction of a hospital information system are discussed from a methodological point of view. In the following paragraphs, a literature review is presented first. Then the appropriate preparation steps are proposed, followed by a discussion of the impact of the lack of such preparation. Finally, the conclusions are presented.
II. A BRIEF LITERATURE REVIEW
The preparation phase of a project for the implementation of any information system in a hospital is usually only implicitly addressed in the relevant literature. Thus, one has to look for “lessons learned”, “barriers to success” or “factors of failure” in order to collect the appropriate research material. Nevertheless, the common root of all the problems reported in the relevant literature is the lack of a well-defined plan of preparations at the early stages of the project’s schedule to meet the requirements of the system chosen for implementation. In the following, some of the key findings of the relevant literature are described, concerning information systems for hospitals in general (including modules of a hospital information system). The authors in [1] identify four crucial parameters as barriers to the adoption of Computer Physician Order Entry (CPOE) – a significant module of any hospital I.S. – which have to be considered ahead of implementation. The first parameter concerns the physicians’ work practice, including the working conditions on the wards, in the office, or in the emergency rooms. The second parameter refers to the current level of technology and the concessions that the medical professionals are able to agree on. The third parameter points at the status of the existing commercial systems and the associated reliability and standardization. The fourth parameter is the most complicated, since it involves incentives of various origins, spanning from financial to scientific motivations. In [2] the authors, examining the systemic dimension of CPOE, argue that CPOE should not be the first IT module introduced on the wards or in clinics, due to its high complexity, which might provoke an even higher degree of resistance to the system’s acceptance by discouraged personnel. In [4] the authors claim that a strategic information management (SIM) plan must be developed prior to the actual introduction of any new information system in the hospital, predicting and satisfying all future needs. On the other hand, the introduction of CPOE influences the quality of the health care delivered in the hospital with respect to both the administrative and the clinical operations [3]. The successful implementation of CPOE requires proactive preparations to ensure the personnel’s participation in the introduction of the new system, a high level of leadership from the influential staff, specified commitment to the project’s framework by the personnel, and wide communication of the project’s issues. Opinions recorded in the relevant literature state that the lack of a well-known, standardized application driving the respective market strongly shapes the commonly developed perception [5]. With this in mind, all efforts focus on the provision and inclusion of the most advanced technology in HIS applications, which will inevitably face the raised issues. Other opinions emphasize the prime importance of personnel during the preparation, and the actions to be followed for the education and training of staff at all levels [6]. In particular, in [6] new positions are proposed as a result of the lack of combined, continuous knowledge and expertise in clinical, administrative, informatics, and managerial duties. There are success parameters, referring to the locally holding conditions, that have to be considered ahead of the introduction of an IS to a hospital [7]. The influence of the social culture of both patients and staff, the local perception and rules for the professional practice of hospital operations, the abilities of the available computer users and the design of an adequate training program, the complexity of the clinical and administrative procedures, the duration of the project, and the experience from similar projects constitute some of the most essential reasons for failing to introduce a new system.
III. METHODOLOGICAL STRATEGY AND PROPOSED PREPARATION STEPS
From a systemic point of view, hospital information systems comprise four major parameters: the technological parameter (H/W and S/W), which refers to both the hardware and the software; the information parameter (data), meaning the type and kind of data to be processed; the procedural parameter (procedures), which concerns the workflows; and the human parameter (personnel), corresponding to the people who both use the system and benefit from it. Each of these parameters can be further analyzed into categories and subcategories. All parameters interact pairwise, which drives the complexity of the Hospital Information System (HIS) as a whole. Among the aims of the introduction of an I.S. in a hospital is the application of Good Practices (GP) at all levels of a hospital’s activities (see the “e-Health Impact project”: http://www.ehealth-impact.org/). Although exceptional paradigms are few, GP itself provides two key components: standardization and localization. The application of GP standards seems a straightforward process, like a recipe, and hides fewer pitfalls, but it demands localization, which requires adaptation, adjustment, etc. Therefore, GPs must be applied to all four major system parameters mentioned above. Hospitals provide medical and social services to the local society and cannot be considered in isolation. In fact, the locally holding conditions in the government, the legal system, the market, and society in general affect the operation of the hospital and consequently the
requirements from the HIS. In other words, the four major parameters mentioned above must be examined against each and every one of the influencing external parameters. In organizing the preparation for the introduction of an Information System in a hospital, a set of actions must be performed based on an organizational structure consisting of three layers and four coordinates. As described above, the highest layer is dedicated to the interactions with all parties external to the hospital, the intermediate layer concerns the decision to apply GPs, and the lowest layer refers to the system itself, as analyzed above. The described structure provides the framework within which all preparation considerations and actions take place in a sound and complete manner. Additional layers may be added to increase the level of detail of the preparation steps. For convenience, the organizational structure described above will be referred to as the 4-by-3 structure (see Table 1).

Table 1 Functions as Preparatory Actions
Systemic Parameters | Business Requirements: Hospital specific operation (so) | Good Practices (gp) | External to the Hospital (ex)
Software and Hardware (sh) | F1(sh, ex) | F2(sh, ex) | F3(sh, ex)
Procedures (pr) | H1(pr, ex) | H2(pr, ex) | H3(pr, ex)
Personnel (pe) | G1(pe, ex) | G2(pe, ex) | G3(pe, ex)
Data (da) | R1(da, ex) | R2(da, ex) | R3(da, ex)
The preparations and actions prior to the attempt to introduce a HIS depend primarily on the decided quantified goals to be achieved by the operation of the HIS, on the organizational readiness of the hospital, on the availability of time and resources to perform the preparations, and on the understanding of the functionality of the new system. As in ordinary systems analysis, two approaches are available: top-down and bottom-up. Thus, given the 4-by-3 structure, the analysis can start with either the lowest or the uppermost layer. In either case, the 4-by-3 structure must be penetrated and traversed from one side until the opposite layer is reached. Each penetration’s routing corresponds to a single preparatory functional action, which is called, for convenience, a shaft. Therefore, visualizing the setting, shafts traverse the 4-by-3 structure, and each shaft corresponds to preparatory actions. As an application example, let us address the selection of candidate codification systems, the selection criteria, and the preparations required prior to their usage. In this example, consider the necessity for all HIS stakeholders to share the same meanings over the employed coding systems, as required by the software vendor. Following a bottom-up approach, the first examined dimension refers to the information coordinate, the data. In
this case, we consult the vendor’s requests for all codifications used in the application software, ending with a set of codes. Continuing the process, each codification is examined against the second dimension, the procedural coordinate (the applied procedures), trying to locate the corresponding workflow where the coding is used. In the next step, the coordinate of people, personnel, is inspected against all the findings of the previous coordinates, testing the needs of the various personnel classes involved in the usage of each coding; as professional classes are assigned to this coordinate, this reveals the required characteristics of each coding along with the needs of each of the people’s classes. The medical professionals’ needs for each coding formulate the educational and training requirements, spanning from formal training in the concepts included in the coding to applying the coding in the execution of everyday duties with the application software. The previous step completes the placing of shafts at the lowest level of the 4-by-3 structure, but the shafts continue at the intermediate level, where GP is applied. All decisions taken at the previous level are examined against the known GPs. Once a GP is chosen, for example the directions of the World Health Organization for the coding of diseases, the limits and completeness of the chosen codification must then be examined, e.g. for ICD-10 (http://www.who.int/classifications/icd/en/), and the appropriate actions taken on all previously decided items. Moving the shaft forward at the intermediate level, the localization attributes have to be revealed and adequate measures taken in the initial plan.
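To make the traversal concrete, the 4-by-3 structure and a bottom-up shaft can be modeled as plain data. This is a sketch only: the coordinate and layer names follow Table 1, while the example actions for a diagnosis codification are illustrative, not prescribed by the method.

```python
# Sketch of the 4-by-3 structure: four systemic coordinates crossed
# with three layers, traversed bottom-up by a "shaft" of actions.
COORDINATES = ["data", "procedures", "personnel", "hardware/software"]
LAYERS = ["hospital specific operation", "good practices", "external"]

def bottom_up_shaft(topic, actions_per_cell):
    """Walk one preparatory shaft through the structure, lowest layer first.

    actions_per_cell: dict mapping (coordinate, layer) -> action text.
    Returns the ordered list of preparatory actions for the topic.
    """
    shaft = []
    for layer in LAYERS:                       # lowest layer first
        for coord in COORDINATES:
            action = actions_per_cell.get((coord, layer))
            if action:
                shaft.append(f"[{topic}] {coord} / {layer}: {action}")
    return shaft

# Illustrative shaft for adopting a diagnosis codification (e.g. ICD-10):
actions = {
    ("data", "hospital specific operation"): "collect vendor code lists",
    ("procedures", "hospital specific operation"): "map codes to workflows",
    ("personnel", "hospital specific operation"): "derive training needs",
    ("data", "good practices"): "check completeness against ICD-10",
}
for step in bottom_up_shaft("diagnosis coding", actions):
    print(step)
```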
IV. THE IMPACT OF MISSING THE PREPARATION
There are many failure stories in the introduction of an information system to a healthcare organization [9, 10]. A number of serious causes have been stated in each such case, without succeeding in addressing the similarities and differences of the observed cases. In this paper, the lack of a preparation phase is considered among the major factors that influence a project’s evolution. The preparation phase must be considered part of the project itself, since a number of important decisions have to be accounted for and addressed accordingly. The time required to contact the stakeholders inside and outside the healthcare organization and to settle contractual agreements usually turns out to exceed the rest of the project’s duration. In the past, in Greece, many of the projects announced for the introduction of an information system to a healthcare organization stopped during the development process to make decisions related to issues such as the unavailability of
procedures, lack of personnel, codifications, incomplete training, and insufficient support, just to mention a few. The time frame for the execution of the project’s works is substantially shorter than the time required for the preparation phase, due to the complexity internal to the organization. The order of the project’s deliverables may have to be altered if the preparation stage is omitted. Some deliverables require either decisions already in place or organizational adaptations prior to the installation of certain items; for example, the pharmacy store has to wait until a coding system and the actual descriptions of the handled materials have been entered into the system’s database. Most projects dealing with the introduction of an I.S. to a healthcare organization have shown that the most expensive part of the project is related to services. Losing control of a project’s schedule may turn out to be devastating for the future and the success of the entire project. It is impossible to keep specialized personnel waiting for either data or instructions. Extended intervals in the project’s duration cause the personnel to sense minimal progress, which is discouraging, and it becomes even harder in the future to trust the project’s management, get motivated, and participate again. Hence, promises must be controlled and given with confidence, meeting the requirements in a timely manner. The introduction of a Hospital Information System during the late years of the last decade in the General Hospital of Athens, “G. Gennimatas”, Greece, provided fruitful conclusions and evidence for the writing of this paper [8]. At that time, the project lasted two years, with rather discouraging operational efficiency and results. To overcome the difficulties, outsourced services were hired from the manufacturer, who provided skilful operators [10]. A decade later, the General Hospital of Nicosia, Nicosia, Cyprus, faced analogous, though smaller-scale, problems related mostly to the organizational readiness of the hospital. The schedule of the project was initially planned to last one year, which was considered very optimistic and finally proved to be so, since the project lasted three years, converging gradually to the initially set quality targets. In the case of the “G. Gennimatas” State Hospital of Athens, the parameters of the developed 4-by-3 model with shafts were identified, leading to the development of a managerial approach, while in the case of the State Hospital of Nicosia the same approach was tested and verified with minor improvements.

V. CONCLUSIONS
The preparatory phase of a project that aims to introduce an information system in a hospital is at least as important as, if not more precious than, the actual
implementation, since it secures both investments and expectations. The various preparations described above aim to assist the IT systems supplier in easily adopting the proposed hospital information system design. In order to achieve the intended purposes, a systematic approach must be applied, based on a method which provides procedural steps exhausting all sources of influence and which includes the principle of feedback at every task. Standardizing the preparation stages of the project is still difficult, and for the time being it is limited to the discrete measures described above with the employment of shafts in the 4-by-3 structure. Following such directions, it is certain that the involved costs, with respect to both time and resources, will be drastically reduced, since the project’s implementing team has every required piece of information available and the procedural set-up has been addressed.
REFERENCES

1. Doolan D, Bates D, Computerized Physician Order Entry Systems in Hospitals: Mandates and Incentives, Health Affairs, vol. 21, no. 4 (2002), 180-188.
2. Kuperman G, Gibson R, Computer Physician Order Entry: Benefits, Costs, and Issues, Annals of Internal Medicine, vol. 139, no. 1 (2003).
3. Upperman J, Staley P, Friend K, Benes J, Dailey J, Neches W, Wiener W, The introduction of computerized physician order entry and change management in a tertiary pediatric hospital, Pediatrics, vol. 116, no. 5 (2005), 634-642.
4. Brigl B, Ammenwerth E, Dujat C, Graber S, Grobe A, Haber A, Jostes C, Winter A, Preparing strategic information management plans for hospitals: a practical guideline. SIM plans for hospitals: a guideline, Int. Journal of Medical Informatics, vol. 74 (2005), 51-65.
5. Giuse D, Kuhn K, Health information systems challenges: the Heidelberg conference and the future, Int. Journal of Medical Informatics, vol. 69 (2003), 105-114.
6. Ash J, Stavri P, Dykstra R, Fournier L, Implementing computerized physician order entry: the importance of special people, Int. Journal of Medical Informatics, vol. 69 (2003), 235-250.
7. Littlejohns P, Wyatt J, Garvican L, Evaluating computerized health information systems: hard lessons still to be learnt, BMJ, vol. 326 (2003), 860-863.
8. Sarivougioukas J, Vagelatos A, Introduction of a Clinical Information System in a Regional General State Hospital of Athens, Greece, Proceedings of the Medical Informatics in Europe Conference (MIE2000), Hannover, Germany (2000).
9. Vagelatos A, Sarivougioukas J, Critical Success Factors for the Introduction of a Clinical Information System, Proceedings of the IX Mediterranean Conference on Medical and Biological Engineering and Computing, Pula, Croatia (2001).
10. Sarivougioukas J, Vagelatos A, IT outsourcing in the Healthcare sector: The case of a state general hospital, SIGCPR 2002 Conference, Kristiansand, Norway, May 2002.

Author: John Sarivougioukas
Institute: “G. Gennimatas” Hospital
Street: Mesogeion 137
City: Athens
Country: Greece
Email: [email protected]
Supervised and Unsupervised Finger Vein Segmentation in Infrared Images Using KNN and NNCA Clustering Algorithms

M. Vlachos and E. Dermatas

Department of Electrical Engineering and Computer Technology, University of Patras, Patras, Greece

Abstract— In this paper, two new methods to segment infrared images of the finger in order to perform finger vein pattern extraction are presented. The first uses the widely known K nearest neighbor (KNN) classifier, a very effective supervised method for clustering data sets. The second uses a novel clustering algorithm named the nearest neighbor clustering algorithm (NNCA), which is unsupervised and has recently been proposed for retinal vessel segmentation. In both cases, two features are used as the feature vector for classification: the multidirectional response of a matched filter and the minimum eigenvalue of the Hessian matrix. The response of the multidirectional filter is essential for robust classification because it distinguishes between vein-like and edge-like structures, which Hessian-based approaches cannot do. As the experimental results show, both algorithms perform well, with NNCA having the advantage of being unsupervised, so it can be used for fully automatic finger vein pattern extraction. It is also worth noting that the proposed vector, composed of only two features, is the simplest feature set proposed in the literature so far, and it achieves a performance comparable with others that use a much larger vector (31 features). NNCA was also quantitatively evaluated on a database containing artificial finger images and achieved the following segmentation rates: 0.88 sensitivity, 0.80 specificity and 0.82 accuracy.

Keywords— KNN, NNCA, Vein pattern, Hessian matrix, matched filter, morphological postprocessing.
I. INTRODUCTION

An application-specific processor for vein pattern extraction, and its application to a biometric identification system, is proposed in [1]. The conventional vein-pattern-recognition algorithm consists of an original image grab part, a preprocessing part, and a recognition part; the last two parts take most of the processing time. The preprocessing part consists of a Gaussian low-pass filter (which works iteratively), a high-pass filter, and a modified median filter. Consequently, the conventional algorithm [1, 2, 4] consists of low-pass spatial filtering for noise removal, high-pass spatial filtering for emphasizing vascular patterns, and thresholding. An improved vein pattern extraction algorithm is proposed in [3], which compensates for the loss of vein patterns in the edge area, gives more enhanced and stabilized vein pattern information, and shows better performance than the existing algorithm. In [5], a direction-based vascular pattern extraction algorithm based on the directional information of vascular patterns is
presented for biometric applications. It applies two different filters: a row vascular pattern extraction filter for abscissa vascular patterns, and a column vascular pattern extraction filter for effective extraction of the ordinate vascular patterns. The combined output of both filters produces the final hand vascular patterns. Unlike the conventional hand vascular pattern extraction algorithm, the directional extraction approach prevents loss of vascular pattern connectivity. In [6-7] a method for personal identification based on finger-vein patterns is presented and evaluated, using line tracking starting from various positions. Local dark lines are identified, and line tracking is executed by moving along the lines pixel by pixel. When a dark line is not detectable, a new tracking operation starts at another position. This procedure executes repeatedly, so dark lines that are tracked many times have a high probability of being veins. An algorithm for finger vein pattern extraction in infrared images is proposed in [8]. The low-contrast images, due to the light scattering effect, are enhanced and the fingerprint lines are removed using 2D discrete wavelet filtering. Kernel filtering produces multiple images by rotating the kernel in six different directions, focused on the expected directions of the vein patterns. The maximum of all images is transformed into a binary image. Further improvement is achieved by a two-level morphological process: a majority filter smoothes the contours and removes some of the misclassified isolated pixels, and a reconstruction procedure removes the remaining misclassified regions. The final image is segmented into two regions, vein and tissue. In [9] a certification system that compares vein images for low-cost, high-speed and high-precision certification is proposed. The equipment for authentication consists of a near-infrared light source and a monochrome CCD producing contrast-enhanced images of the subcutaneous veins. The recognition algorithm uses only phase correlation and template matching. In [10], the theoretical foundation and difficulties of hand vein recognition are introduced first. Then, threshold segmentation and thinning of the hand vein image are studied in depth, and a new threshold segmentation method and an improved conditional thinning method are proposed. Initial work on localizing surface veins via near-infrared (NIR) imaging and structured light ranging is presented in [11]. The eventual goal of that system is to serve as guidance for a fully automatic (i.e., robotic) catheterization device. The proposed system is based
upon near-infrared (NIR) imaging, which has previously been shown to be effective in enhancing the visibility of surface veins. In [12, 13] a Vein Contrast Enhancer (VCE) was constructed to make vein access easier by capturing an infrared image of the veins, enhancing the contrast in software, and projecting the vein image back onto the skin. On the other hand, supervised methods have been used mainly for pixel classification and retinal vessel segmentation. In [14] a vessel segmentation method based on pixel classification was proposed. For each pixel in the image, a feature vector is constructed and a classifier is trained with these feature vectors. Features are extracted only from the green plane of the retinal image. The authors conducted experiments using three classifiers (KNN, linear and quadratic) and concluded that the KNN classifier was superior in all experiments. The feature vector consisted of 31 features, including a Gaussian and its derivatives up to order 2 at different scales. In [15] the authors proposed the use of three features instead of 31, in conjunction with the KNN classifier, to classify the pixels of a retinal image as vessel or non-vessel. They also reported a decrease in processing time, which is expected given the smaller feature set. Finally, in [16] the same authors introduced a novel clustering algorithm named the nearest neighbor clustering algorithm (NNCA), which is unsupervised, and segmented retinal images using the same feature set as in [15].
II. ALGORITHM OVERVIEW

A. Image Acquisition

The original image was acquired under infrared light using an inexpensive CCD camera. The finger was placed between the camera and the light source, which consists of a row of five infrared LEDs with adjustable illumination.
Fig. 1 a. Original image; b. ROI image

Because haemoglobin strongly absorbs infrared light, the veins appear darker in the image than the other human tissues. The goal of our study is therefore to extract these dark regions, corresponding to veins, from the background, corresponding to the other tissues. The original image, acquired as described above, is shown in Fig. 1a.
B. Preprocessing

ROI extraction and intensity normalization. From the original image a region of interest (ROI) is extracted for further processing, in order to isolate the part of the image which contains the maximum extent of useful information and to eliminate the background. The ROI image is shown in Fig. 1b. In many cases, the acquired image may not have the desirable contrast or may be distorted by noise. Thus, a preprocessing step is necessary in order to pass to the next stage an image with the desirable characteristics. A local normalization procedure is proposed which uniformizes the local mean and variance of an image; this is especially useful for correcting non-uniform illumination or shading artifacts. Using two smoothing operators, the local normalization of the acquired image is computed as

g(x, y) = (f(x, y) − m_f(x, y)) / σ_f(x, y),

where f(x, y) is the original image, m_f(x, y) is an estimate of the local mean of f(x, y), σ_f(x, y) is an estimate of the local standard deviation, and g(x, y) is the output image.
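As an illustration, a minimal NumPy/SciPy sketch of this normalization is given below. The Gaussian smoothing scales are illustrative assumptions, since the paper does not specify which two smoothing operators are used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalization(f, sigma_mean=15.0, sigma_var=15.0, eps=1e-6):
    """g = (f - m_f) / sigma_f with locally estimated mean and std.

    The two Gaussian scales are illustrative choices; the paper only
    states that two smoothing operators are used.
    """
    f = f.astype(np.float64)
    m_f = gaussian_filter(f, sigma_mean)                 # local mean estimate
    var_f = gaussian_filter((f - m_f) ** 2, sigma_var)   # local variance estimate
    return (f - m_f) / (np.sqrt(var_f) + eps)            # eps avoids division by zero
```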
C. Feature Extraction

In the case of finger vein pattern extraction, the veins are oriented mainly along the x-axis and their diameter does not vary much, so a fixed diameter can be assumed; this simplifies the problem and removes the need for multiscale analysis. In our study, features which can effectively segment the image into two regions, vein and non-vein, are sought. Two appropriate features are the response of a multidirectional matched filter and the minimum eigenvalue of the Hessian matrix. The first feature, the maximum response of a multidirectional matched filter, is selected in order to distinguish between vein-like and edge-like structures. The second feature is selected to account for the fact that veins are tubular, piecewise-linear structures that can be recognized by extracting centerlines (image ridges). In a following subsection, the selection of these two features is also evaluated based on the estimation of their covariance matrix.

Maximum multidirectional response of a matched filter. A filter which represents a kind of cross-sectional vein profile is designed. It is a symmetrical filter kernel with an odd number of coefficients. The centre element has the minimum (most negative) value, the first adjacent elements have this value incremented by one, the second adjacent elements have it incremented by two, and so on up to the two terminal elements. These terminal elements take the value which ensures that the sum of all elements belonging to the same row of the kernel is zero. The number of negative elements represents the width of the cross-sectional profile. All the rows of the kernel are identical; this attribute orients the filter in a unique
direction. The filter has this shape in order to detect veins, which have darker pixel values than the background. For the detection of finger veins a filter kernel of size 11×11 is used, because the average vein diameter is about six pixels and does not vary much along the finger. This filter kernel is applied to the whole image by convolution, in six different directions from 0 to 180 degrees (0, 30, …, 150), to detect finger veins in these directions. This is accomplished by sequentially rotating the filter kernel by the appropriate number of degrees and convolving it with the preprocessed image. The maximum of all six responses is then computed on a pixel-by-pixel basis and is selected as the first element of the two-dimensional feature vector of each pixel.

Minimum eigenvalue of the Hessian matrix. The second directional derivatives describe the variation of the intensity gradient in the neighborhood of a point. A ridge is located on top of a line structure, where the gradient undergoes large changes, and occurs where there is a local maximum in one direction. Therefore it must have a negative second directional derivative and a zero first directional derivative in the direction across the ridge [17]. In order to obtain useful information from the partial derivatives, a 2×2 Hessian matrix is constructed for each pixel. The Hessian matrix is symmetric, with real eigenvalues and orthogonal eigenvectors which are rotation invariant [18]. The eigenvalues of the Hessian matrix measure convexity and concavity in the corresponding eigendirections. A ridge is a region where λ1 ≈ 0 and λ2 << 0; therefore we are interested in the minimum eigenvalue. Its sign is an indicator of brightness or darkness, and we designate the magnitude of this eigenvalue as the feature λ. So, for every image pixel a two-dimensional feature vector is constructed, whose elements are the maximum multidirectional response of the matched filter and the magnitude of the minimum eigenvalue of the Hessian matrix.
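A compact sketch of the two-feature computation follows. The kernel construction mirrors the verbal description (centre most negative, values rising by one outwards, end taps balancing each row to a zero sum); the centre depth, the Hessian smoothing scale and the use of scipy.ndimage are our assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter, rotate

def matched_kernel(size=11, depth=3):
    """11x11 kernel with identical rows: centre tap -depth, values rising
    by one outwards, and the two end taps chosen so each row sums to
    zero. depth=3 gives 5 negative taps, roughly a ~6-pixel vein."""
    row = (np.abs(np.arange(size) - size // 2) - depth).astype(float)
    row[0] = row[-1] = -row[1:-1].sum() / 2.0   # enforce zero-sum rows
    return np.tile(row, (size, 1))

def vein_features(img, sigma=2.0, angles=range(0, 180, 30)):
    """Per-pixel feature vector: (max multidirectional matched-filter
    response, magnitude of the minimum Hessian eigenvalue)."""
    k = matched_kernel()
    resp = [convolve(img, rotate(k, a, reshape=False, order=1)) for a in angles]
    match_max = np.max(resp, axis=0)                     # feature 1
    # Hessian entries as Gaussian second derivatives (scale is assumed)
    Ixx = gaussian_filter(img, sigma, order=(0, 2))
    Iyy = gaussian_filter(img, sigma, order=(2, 0))
    Ixy = gaussian_filter(img, sigma, order=(1, 1))
    lam_min = 0.5 * (Ixx + Iyy) - np.sqrt(0.25 * (Ixx - Iyy) ** 2 + Ixy ** 2)
    return np.stack([match_max, np.abs(lam_min)], axis=-1)  # feature 2: |lambda|
```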
Next, the covariance of the two features defined above is computed, in order to verify that they tend to increase together. We have two features with n samples each (where n is the number of pixels in the image). Let f_k and f_l denote the two features. Their covariance is defined as

c_{k,l} = (1/(n−1)) · Σ_{i=1}^{n} (f_{k,i} − m_k)·(f_{l,i} − m_l),

where m_k and m_l denote the mean values and s_k² and s_l² the variances. For the two features used here, the covariance computed in matrix form is

c_{k,l} = [1040  945.9; 945.9  911.1].

Some significant conclusions can be drawn from this computation. First of all, we note that the covariance matrix is 2×2 because it is computed for two
features. In the case of M features the covariance matrix would be M×M. Secondly, note that c_{k,k} = s_k². Finally, since the computed covariance between the two features is positive, the two features tend to increase together (a zero covariance would indicate uncorrelated features). Thus, the selection of the two features appears reasonable.

D. Classification

K-nearest neighbor algorithm (KNN). The K-nearest neighbor classifier is one of the simplest and oldest methods for performing general, non-parametric classification [19]. To classify an unknown pixel, choose the class of the nearest example in the training set, as measured by a distance metric. A common extension is to choose the most common class among the K nearest neighbors. A fuzzy clustering, where all pixels are allowed to belong to all clusters with different degrees of membership, is achieved by taking the mean value of the K nearest neighbors for each pixel in the image. Therefore, a hard partition as well as a soft partition can be obtained. For an image pixel x_q to be classified, let x_1, x_2, …, x_K denote the K nearest clustered pixels to x_q, and C(x_i) ∈ {1, 2, …, Nc} (where Nc is the number of clusters) the cluster index of pixel x_i. The hard partition for x_q is

C(x_q) = argmax_{n∈C} Σ_{r=1}^{K} (n == C(x_r)),

and the soft partition is

C(x_q) = (1/K) · Σ_{r=1}^{K} C(x_r).

In this case every pixel is assigned a probability of belonging to each class.
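A brute-force sketch of these two partitions for the binary case (Nc = 2) could look as follows; the KD-tree query is our implementation choice, not the authors':

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_partitions(train_feats, train_labels, feats, K=100):
    """Hard and soft KNN partitions for pixel feature vectors.

    train_feats: (n, 2) labelled pixels with train_labels in {0, 1}
    (non-vein / vein); feats: (m, 2) pixels to classify.
    """
    _, nn = cKDTree(train_feats).query(feats, k=K)  # K nearest labelled pixels
    votes = train_labels[nn]                        # (m, K) neighbour labels
    hard = (votes.sum(axis=1) * 2 > K).astype(int)  # majority vote (Nc = 2)
    soft = votes.mean(axis=1)                       # membership degree in [0, 1]
    return hard, soft
```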
Nearest neighbor clustering algorithm. The NNCA algorithm is a modified version of the KNN classifier and is fully unsupervised. Here we briefly explain this algorithm. The NNCA creates clusters in two stages. In the first stage, a random number N of pixels is selected. From these N pixels, non-overlapping clusters are created, each of maximum size Kinit (the choice of Kinit ensures that more than Nc clusters are generated). Afterwards, an iterative procedure updates the clusters and their memberships by increasing the number of neighbors, until Nc non-overlapping clusters are created. In the second stage the remaining pixels are clustered: for each unclustered pixel q, the K nearest clustered pixels are found, and the cluster to which most of these K clustered pixels belong is deemed the one to which pixel q belongs (hard partition).
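A minimal sketch of this two-stage scheme is given below. The paper does not fully specify the stage-1 update rule, so stage 1 is approximated here with an off-the-shelf agglomerative clustering of the seed pixels; stage 2 follows the paper's majority-vote rule.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial import cKDTree

def nnca_segment(feats, N=1000, Nc=2, K=200, rng=0):
    """Two-stage NNCA sketch. Stage 1 is approximated by average-linkage
    agglomerative clustering of N random seed pixels (the paper grows
    non-overlapping clusters iteratively; its exact rule is not fully
    specified). Stage 2 assigns every pixel to the cluster holding the
    majority of its K nearest seeds, as in the paper.
    """
    rng = np.random.default_rng(rng)
    idx = rng.choice(feats.shape[0], size=min(N, feats.shape[0]), replace=False)
    seeds = feats[idx]

    # Stage 1: cluster the seeds into Nc groups.
    seed_labels = fcluster(linkage(seeds, method="average"),
                           t=Nc, criterion="maxclust") - 1

    # Stage 2: hard partition by majority vote among the K nearest seeds
    # (plain loop kept for clarity).
    _, nn = cKDTree(seeds).query(feats, k=K)
    votes = seed_labels[nn]
    return np.array([np.bincount(v, minlength=Nc).argmax() for v in votes])
```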
E. Postprocessing

Usually, after the binary classification some isolated regions erroneously classified as veins remain. In these cases the binary image is postprocessed by applying morphological filtering and morphological image reconstruction [20].
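A sketch of such a clean-up step is shown below; the 3×3 majority (median) filter and the erosion depth are illustrative choices, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import median_filter, binary_erosion
from skimage.morphology import reconstruction

def postprocess(binary, erosion_iters=3):
    """Majority filtering followed by opening-by-reconstruction [20]:
    small misclassified islands are erased by the erosion and cannot be
    rebuilt by the reconstruction step, while large regions survive."""
    smooth = median_filter(binary.astype(np.uint8), size=3).astype(bool)
    marker = binary_erosion(smooth, iterations=erosion_iters)  # marker <= mask
    return reconstruction(marker, smooth, method="dilation").astype(bool)
```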
III. EXPERIMENTAL RESULTS

In our experiments, infrared images are segmented using the supervised KNN and the unsupervised NNCA algorithm, in conjunction with the extracted two-dimensional feature set. For the KNN classifier, training is done by interactively selecting representative pixels for each class (using a function implemented in Matlab) and assuming that these assignments are valid. Furthermore, we selected K=100 as the number of nearest neighbors. For the NNCA we use the following values: number of randomly selected image pixels N=1000, number of desired clusters in the final segmentation Nc=2, initial maximum size of each cluster Kinit=200, and number of clustered pixels used for classification of the remaining unclustered pixels K=200. The following figures present the results obtained for both KNN and NNCA. Fig. 1b shows the original ROI image, while Fig. 2 shows the results of applying KNN and NNCA to the ROI image. Fig. 2c shows the image of Fig. 2b after postprocessing.
Fig. 2 Finger vein pattern extracted using: a. KNN classifier; b. NNCA; c. NNCA after postprocessing
IV. CONCLUSIONS

In this paper, KNN and NNCA are presented for finger vein pattern extraction. Infrared images are segmented using the supervised KNN and the unsupervised NNCA, in conjunction with a novel feature set which can effectively and robustly distinguish between vein-like and edge-like structures. The first element of the feature vector is the maximum response of a multidirectional matched filter, while the second is the minimum eigenvalue of the Hessian matrix. Experimental results show that both algorithms produce satisfactory results, with KNN performing slightly better, though it has the disadvantage of being supervised and thus requiring training.
REFERENCES

1. G.T. Park, S.K. Im, and H.S. Choi, "A Person Identification Algorithm Utilizing Hand Vein Pattern," Proc. of Korea Signal Processing Conf., Vol. 10, No. 1, 1997, pp. 1107-1110.
2. D.U. Hong, S.K. Im, and H.S. Choi, "Implementation of Real Time System for Personal Identification Algorithm Utilizing Hand Vein Pattern," Proc. of IEEK Fall Conf., 1999, Vol. 22, No. 2, pp. 560-563.
3. S.K. Im, H.M. Park, S.W. Kim, C.K. Chung, and H.S. Choi, "Improved Vein Pattern Extracting Algorithm and its Implementation," Proc. of IEEE ICCE, 2000, pp. 2-3.
4. S.K. Im, H.M. Park, and S.W. Kim, "A Biometric Identification System by Extracting Hand Vein Patterns," Journal of the Korean Physical Society, Vol. 38, No. 3, 2001, pp. 268-272.
5. Sang Kyun Im, Hwan Soo Choi, and Soo-Won Kim, "Direction-Based Vascular Pattern Extraction Algorithm for Hand Vascular Pattern Verification," ETRI Journal, Vol. 25, No. 2, April 2003.
6. Naoto Miura, Akio Nagasaka, Takafumi Miyatake, "Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification," Machine Vision and Applications, Vol. 15, pp. 194-203, 2004.
7. Naoto Miura, Akio Nagasaka, Takafumi Miyatake, "Feature extraction of finger-vein patterns based on iterative line tracking and its application to personal identification," Systems and Computers in Japan, Vol. 35, No. 7, 2004.
8. M. Vlachos and E. Dermatas, "A finger vein pattern extraction algorithm based on filtering in multiple directions," 5th European Symposium on Biomedical Engineering, July 2006.
9. Toshiyuki Tanaka, Naohiko Kubo, "Biometric Authentication by Hand Vein Patterns," SICE Annual Conference, Sapporo, August 2004.
10. Yuhang Ding, Dayan Zhuang and Kejun Wang, "A Study of Hand Vein Recognition Method," Proceedings of the IEEE International Conference on Mechatronics & Automation, Niagara Falls, Canada, July 2005.
11. Vincent Paquit, Jeffery R. Price, Ralph Seulin, Fabrice Meriaudeau, Rubye H. Farahi, Kenneth W. Tobin and Thomas L. Ferrell, "Near-infrared imaging and structured light ranging for automatic catheter insertion."
12. H.D. Zeman, G. Lovhoiden, and C. Vrancken, "The Clinical Evaluation of Vein Contrast Enhancement," Proc. SPIE, Vol. 4615, pp. 61-70, 2002.
13. H.D. Zeman, G. Lovhoiden, and C. Vrancken, "Prototype Vein Contrast Enhancer," Proc. SPIE, Vol. 5318, in press, 2004.
14. N. Niemeijer et al., "Comparative Study of Retinal Vessel Segmentation Methods on a New Publicly Available Database," Proc. SPIE Med. Imaging, Vol. 5370, pp. 648-656, 2004.
15. N.M. Salem and A.K. Nandi, "Segmentation of retinal blood vessels using scale-space features and K-nearest neighbor classifier," Proc. ICASSP06, 2006.
16. N.M. Salem and A.K. Nandi, "Segmentation of retinal blood vessels using a novel clustering algorithm," Proc. ICASSP06, 2006.
17. D. Eberly, Ridges in Image and Data Analysis, Computational Imaging and Vision, Kluwer Academic Publishers, Netherlands, 1996.
18. R. Whitaker and G. Gerig, "Vector-valued diffusion," in B.M. ter Haar Romeny (ed.), Geometry-Driven Diffusion in Computer Vision, Computational Imaging and Vision, chapter 4, pp. 93-135, Kluwer Academic Publishers, Dordrecht, 1994.
19. R. Duda, P. Hart, and D. Stork, Pattern Classification, John Wiley and Sons, New York, 2nd ed., 2001.
20. R. Gonzalez, R. Woods, S. Eddins, Digital Image Processing Using Matlab, Prentice Hall.
21. L. Vincent, "Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms," IEEE Trans. Pattern Anal. Machine Intell., vol. 13, no. 6, pp. 583-598, 1993.
Author: Marios D. Vlachos
Institute: University of Patras
Street: Kato Kastritsi, 26500
City: Patras
Country: Greece
Email: [email protected]
An Echocardiographic Study for the Assessment of the Indices of Arterial and Ventricular Stiffness

C.M. Stanescu1, K. Branidou1, A. Dan1, I. Daha1, C. Baicus1, V. Manoliu2, C. Adam1 and A.Gh. Dan1
1 Colentina University Hospital, Bucharest, Romania
2 Politehnica University of Bucharest, Romania
Abstract— Arterial stiffness represents an important risk factor for cardiovascular diseases. The cardio-ankle vascular index (CAVI) has recently been reported as a new index of arterial stiffness, which is less influenced by blood pressure than pulse wave velocity. We investigated the relationship between CAVI and myocardial ischemia as assessed by supine exercise echocardiography (SEE). In order to study the ejection characteristics of the left ventricle (LV), a computer model describing the LV systole under different preloading conditions was developed, allowing the evaluation of the wall stress at end-systole and of the end-systolic pressure. Using echocardiographic recordings, we investigated the relationship between the vascular and ventricular stiffness indices.

Keywords— Cardiovascular, echocardiography, stiffness, myocardial, ischemia.

I. INTRODUCTION
Cardiovascular and cerebrovascular diseases remain major causes of death in developed countries, and they are not entirely predicted by traditional risk factors such as aging, hypertension, hyperlipidemia, smoking, and diabetes mellitus. Aortic stiffness is a non-traditional risk factor, and increasing evidence has emerged suggesting that arterial stiffness, evaluated by measuring pulse wave velocity (PWV), is a marker of cardiovascular mortality, coronary events, and fatal strokes, both in patients with essential hypertension and in the general population [1], [2]. Arterial stiffness is defined by a reduction in arterial distensibility and may be quantified by the measurement of different parameters. Clinically, the gold standard parameter is the PWV; the problem is that PWV itself essentially depends on blood pressure (BP). Recently, a novel convenient arterial stiffness parameter, the cardio-ankle vascular index (CAVI), was developed by measuring PWV from the starting point of the aorta at the heart to the ankle, as well as BP [3]. CAVI, which represents the stiffness of the aorta, femoral artery and tibial artery, is essentially independent of BP because of the adjustment of BP based on a stiffness parameter β [4]. Shirai et al demonstrated that CAVI in hemodialysis patients who have undergone percutaneous coronary intervention, or in
patients with ischemic changes on the electrocardiogram, was higher than in those without arteriosclerotic disease. Okura et al reported that CAVI was associated with carotid arteriosclerosis in patients with hypertension, but only a relatively small number of patients was studied [5]. Those findings prompted us to investigate the importance of CAVI for the development of coronary artery disease. Characterization of the total LV behaviour is done using a time-varying elastance function whose maximum value relates to the contractility. A computer model, based on the equations stated in [6], allows the study of the time variations of some important quantities, such as the circumferential wall stress and the left ventricular pressure. Increased ventricular stiffness and afterload are major contributors to the pathogenesis of heart failure. Based on the echocardiographic recordings, arterial, diastolic and systolic elastance indices can be expressed, allowing the evaluation of the contractile reserve of the LV.

II. MATERIAL AND METHODS
A. Study population

We recruited 76 consecutive patients who were referred for SEE to our echocardiography laboratory between 2008 and 2009. The mean patient age was 56 ± 9 years; 40 were female and 36 male. All the patients were examined with CAVI measurement, high-resolution B-mode ultrasonography of the carotid arteries and complete rest echocardiography before the SEE. All participants gave written informed consent for the studies performed. The CAVI was measured in the supine position in all of the patients, with monitoring of the phonocardiogram and electrocardiogram. CAVI was determined by the following equation:

CAVI = a · (2ρ/p_p) · ln(p_s/p_d) · PWV² + b    (1)
where ps and pd are the systolic and diastolic blood pressures, respectively; PWV is the pulse wave velocity between the heart and the ankle obtained by measuring the length from the aortic valve to the ankle divided by the time for the pulse wave to propagate from the aortic valve to the ankle,
p_p is p_s − p_d; ρ is the blood density; and a and b are constants chosen to match aortic PWV. CAVI reflects the stiffness of the aorta, femoral and tibial arteries, and it is theoretically not affected by blood pressure. All measurements and calculations were made automatically using a VaSera VS-1000 (Fukuda Denshi, Tokyo, Japan). The blood pressure was measured at the brachial artery. The average coefficient of variation for this measurement has been reported to be less than 4 %, which indicates that CAVI has good reproducibility.
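For illustration, the computation in Eq. (1) can be expressed as follows; the device constants a and b are proprietary, so the unity/zero values here are placeholders, and ρ is a typical blood density.

```python
import numpy as np

MMHG_TO_PA = 133.322

def cavi(ps_mmhg, pd_mmhg, pwv, rho=1050.0, a=1.0, b=0.0):
    """Eq. (1): CAVI = a * (2*rho/pp) * ln(ps/pd) * PWV^2 + b.

    Pressures in mmHg (converted to Pa), pwv in m/s, rho in kg/m^3.
    a and b are device-specific scale constants (placeholders here).
    """
    ps, pd = ps_mmhg * MMHG_TO_PA, pd_mmhg * MMHG_TO_PA
    pp = ps - pd                                   # pulse pressure
    return a * (2.0 * rho / pp) * np.log(ps / pd) * pwv ** 2 + b

print(cavi(125.0, 80.0, 7.0))   # illustrative inputs, CAVI around 8
```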
B. Carotid ultrasonography

Ultrasound images were acquired using a 7.5 MHz linear array transducer. The common carotid artery intima-media thickness (IMT) was determined by high-resolution B-mode ultrasound imaging. The arteries were examined with the head slightly tilted upward, after the subjects had rested in the supine position for at least 10 minutes. The carotid IMT was measured at the leading edges corresponding to the transition zones between the lumen-intima and media-adventitia, over a length of 1 cm proximal to the reference point. Maximum and mean IMTs were defined as the greatest and mean values, respectively, of IMTs measured from 3 contiguous sites at 1 cm intervals. Maximum and mean IMTs are expressed as the mean values of both common carotid artery measurements. A mean IMT value of less than 0.9 mm was considered normal.

C. Echocardiography

Examinations were performed using a Siemens Omnia system with a 3.5 MHz transducer. An ECG was recorded simultaneously and all examinations were stored on videotape for subsequent analysis. All measurements were made during end-expiration from the standard parasternal and apical views, with the patient in the left lateral recumbent position. All ventricular measurements were made from end-diastolic frames, defined as the frame closest to the onset of the QRS complex. Left and right atrial dimensions were measured at the end of ventricular systole, using the mitral valve opening as guidance.

D. Supine exercise echocardiography

Supine exercise echocardiography was performed on all patients, using an Ergoline supine bicycle, after complete rest echocardiography. A specific protocol was used. The patient was prepared for standard stress testing and was instructed how to perform bicycle exercise. He was positioned on the supine ergometer, secured in place, and rest images were recorded with the table inclined to optimize the images. The exercise protocol began at a workload of 25 W and a cadence of 60 rotations per minute, and the workload was increased by 25 W every 2 minutes. Images were monitored throughout exercise, and at peak exercise a full series of images was obtained. After cessation of exercise, wall motion was monitored to document resolution of possibly induced ischemia. The supine exercise echocardiographic test was tagged as positive or negative for myocardial ischemia.

E. Left ventricular model

To study the isovolumic contraction and the ejection characteristics of the left ventricle, a computer model describing the LV systole at different preloading conditions was developed, based on equations stated in [6]. The LV has a spheroidal geometry, and the time-varying elastance for isovolumic contraction is approximated by half a cycle of a sine function. The pressure within the spheroidal LV, with an internal short semiaxis r1, a long semiaxis r2, and a wall of thickness h, is related to the radius and circumferential wall stress σ(t) by:

σ(t) = (p(t)·r1/h) · (1 − r1²/(2·r2²))    (2)

The wall stress at end-systole (σes) has been shown to be directly related to end-systolic dimension and volume. As such, σes is a major determinant of overall left ventricular performance and can be considered the afterload that limits ventricular fiber shortening at end-ejection. The model contains the equations which describe both isovolumic and non-isovolumic contraction. Some of the system parameters used in the model are taken from the literature. Using M-mode echocardiographic recordings, the incremental temporal variations of the radii can be evaluated; since a constant variation of the radii was assumed, the circumferential speed is constant (dr1/dt = −dr2/dt). The value of the LV pressure is approximated by the aortic pressure (the pressure gradient across the aortic valve is assumed negligible) and can be calculated, according to the Windkessel approach [7], by:

p(t) = e^(−t/(Rp·Cart)) · [ p0 + (1/Cart) · ∫₀ᵗ e^(s/(Rp·Cart)) · Q(s) ds ]    (3)

where Rp is the peripheral resistance and Cart is the compliance of the arterial system. The computed value of the end-systolic pressure avoids the invasive measurement of the aortic pressure with a micromanometer-tipped catheter.
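A direct numerical form of Eq. (3), using trapezoidal quadrature, is sketched below; the flow waveform and the parameter values are illustrative, not the authors' model settings.

```python
import numpy as np

def windkessel_pressure(t, q, p0, Rp, Cart):
    """Eq. (3): p(t) = exp(-t/(Rp*Cart)) *
    (p0 + (1/Cart) * int_0^t exp(s/(Rp*Cart)) Q(s) ds)."""
    tau = Rp * Cart
    integrand = np.exp(t / tau) * q
    cumint = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    return np.exp(-t / tau) * (p0 + cumint / Cart)

# illustrative use: half-sine ejection flow over a 0.3 s systole
t = np.linspace(0.0, 0.3, 301)                            # s
q = 350.0 * np.sin(np.pi * t / 0.3)                       # ml/s
p = windkessel_pressure(t, q, p0=80.0, Rp=1.0, Cart=1.5)  # mmHg, mmHg*s/ml, ml/mmHg
```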
The arterial compliance can be approximated as stroke volume (SV) over pulse pressure. In the echocardiographic study, the early mitral filling flow velocity divided by the early mitral annular tissue relaxation velocity (E/E'), reflecting left atrial pressure, can be used to express the left ventricular diastolic elastance (Ed ≈ (E/E')/SV). At the same time, a ventricular-vascular index, based on the arterial elastance index Ea (end-systolic pressure/stroke volume) over the Ees index (peak LV outflow tract velocity over acceleration time), was determined from the echocardiographic recordings.

F. Statistical analysis

As the continuous variables (CAVI and LVEF) had a normal distribution, they were summarized as means ± SD and were statistically analyzed with Student's t test. SPSS 16.0 software (SPSS, Inc., Chicago, IL, USA) was used for the statistical analysis, including the ROC curve with area under the curve and coordinate points in order to establish cut-off points. The significance level (two-sided p) was set at 0.05.

III. RESULTS

Among the investigated subjects, 28 patients (36.8 %) had a positive SEE test for myocardial ischemia (group 1). The remaining 48 patients (63.2 %) had a negative SEE test and were considered free of coronary artery disease (group 2). Rest left ventricular ejection fraction was not significantly different between the two groups (61.14±3.8 vs 62.23±7.6 %, p=0.72). CAVI was significantly higher in group 1 patients compared with group 2 (8.46±0.77 vs 7.82±0.83, p=0.001) (Figure 1).

Fig. 1 Cardio-ankle vascular index (CAVI) in patients with and without myocardial ischemia at supine exercise echocardiography (SEE)

IMT was significantly higher in group 1 patients compared with group 2 (0.79±0.08 vs 0.72±0.12 mm, p=0.008) (Figure 2). A CAVI greater than 8.05 predicted the presence of myocardial ischemia at SEE with a sensitivity of 78 % and a specificity of 73 % (Figure 3).

Fig. 2 Intima-media thickness (IMT) in patients with and without ischemia at supine exercise echocardiography (SEE)

Fig. 3 Receiver operating characteristic (ROC) curve for CAVI

At the multivariate analysis, CAVI (p=0.011) and IMT (p=0.035) were identified as independent predictors of myocardial ischemia at SEE, whereas LVEF was not.

Fig. 4 LV pressure variation (isovolumic contraction)
Figure 4 shows the simulated variation of the LV pressure during isovolumic contraction, for r1 = 2 cm, a volume at zero pressure V0 = 10 ml, and a maximum elastance of 4 mmHg/ml.

Fig. 5 The wall stress σ and LV pressure variations during the ejection phase

Figure 5 shows, for the same parameters, the simulated variations (during the ejection phase) of the circumferential wall stress σ and of the LV pressure. The preload is represented here by the LV initial volume or radius. An increase in the initial short axis, which can be translated to an increase in end-diastolic volume, is followed by a moderate increase in the ejection fraction, and by major increases in the end-systolic pressure and stroke volume. The final value of the LV pressure (ejection phase) allows the evaluation of the LV end-systolic elastance. Ed and Ea/Ees have a close, linear relationship (Ed = 0.69(Ea/Ees) + 0.111). The LV end-systolic elastance (Ees) index can be a good predictor of the contractile reserve of the LV. The total ventricular stiffness index (Ed×Ea/Ees) can be reliably used as a predictor of exercise capacity in patients.

IV. DISCUSSION AND CONCLUSIONS

The present study provides evidence that CAVI is significantly associated with the presence of coronary artery disease. These results are consistent with previous reports: it has been shown that CAVI is useful for discriminating the probability of coronary arteriosclerosis [8]. To our knowledge, no studies have specifically assessed the relationship between CAVI and myocardial ischemia assessed by exercise echocardiography. Moreover, CAVI is independently associated not only with arterial stiffness but also with left ventricular diastolic dysfunction in its early stages [9].

The present study is the first report of the relationship between CAVI and myocardial ischemia as assessed by supine exercise echocardiography. Our study confirmed the known association between IMT and coronary atherosclerosis. We demonstrated the close relationship between CAVI, IMT and myocardial ischemia in the same group of patients. To evaluate whether CAVI might predict the presence of coronary atherosclerosis, we investigated its ability to identify patients with myocardial ischemia as assessed by SEE. We found that a CAVI greater than 8.05 predicted the presence of myocardial ischemia at SEE with a sensitivity of 78 % and a specificity of 73 %. At the multivariate analysis, CAVI (p=0.011), together with IMT (p=0.035), were identified as independent predictors of myocardial ischemia at SEE. Although obtained in a small sample, these results indicate the usefulness of CAVI, together with carotid ultrasound, as further screening tools to identify patients with myocardial ischemia.

REFERENCES

1. Mattace-Raso FU, van der Cammen TJ, Hofman A, van Popele NM, Bos ML, Schalekamp MA, et al. (2006) Arterial stiffness and risk of coronary heart disease and stroke: The Rotterdam Study. Circulation 113:657-663.
2. Willum-Hansen T, Staessen JA, Torp-Pedersen C, et al. (2006) Prognostic value of aortic pulse wave velocity as index of arterial stiffness in the general population. Circulation 113:664-670.
3. Huck CJ, Bronas UG, et al. (2007) Noninvasive measurements of arterial stiffness: Repeatability and interrelationships with endothelial function and arterial morphology measures. Vasc. Health Risk Manag. 3:343-349.
4. Shirai K, Utino J, Otsuka K, Takata M (2006) A novel blood pressure-independent arterial wall stiffness parameter: Cardio-ankle vascular index (CAVI). J. Atheroscler. Thromb. 13:101-107.
5. Okura T, Watanabe S, Kurata M, Manabe S, Koresawa M, Irita J, et al. (2007) Relationship between cardio-ankle vascular index (CAVI) and carotid atherosclerosis in patients with essential hypertension. Hypertens. Res. 30:335-340.
6. Beyar R, Sideman S (1984) Model for left ventricular contraction combining the force length velocity relationship with the time varying elastance theory. Biophys. J. 45:1167-1177.
7. Noordergraaf A (1978) Circulatory System Dynamics. Academic Press, New York.
8. Nakamura K, Tomaru T, Yamamura S, Miyashita Y, Shirai K, Noike H (2008) Cardio-ankle vascular index is a candidate predictor of coronary atherosclerosis. Circ. J. 72:598-604.
9. Mizuguchi Y, Oishi Y, Tanaka H, Miyoshi H, Ishimoto T, Nagase N, et al. (2007) Arterial stiffness is associated with left ventricular diastolic function in patients with cardiovascular risk factors: Early detection with the use of cardio-ankle vascular index and ultrasonic strain imaging. J. Card. Fail. 13:744-751.
Author: Vasile Manoliu
Institute: Politehnica University of Bucharest
Street: Splaiul Independentei
City: Bucharest
Country: Romania
Email: [email protected]
Estimation of Linear Parametric Distortions and Motions in the Frequency Domain

D.S. Alexiadis1 and G.D. Sergiadis1

1 Aristotle University of Thessaloniki, Dept. of Electrical and Computer Engineering, Div. of Telecommunications, Thessaloniki, Greece
Abstract— The image registration and motion estimation problems have been studied extensively in the spatial frequency domain. However, most of the existing frequency-domain methodologies assume that the moving objects' motions can be approximated as purely translational. In this work, we study the linear parametric motion model (affine distortion plus translation) in the spatial frequency domain and we present an algorithm for the estimation of its parameters. The experimental application of the methodology yields promising results.

Keywords— Affine motion, Fourier transform, phase correlation
I. INTRODUCTION

The estimation of the distortion/motion parameters between two images is a fundamental task in image processing applications, including among others image registration and motion estimation in image sequences [1]. The related methods can be coarsely divided into three classes [1, 2]: a) methods that directly use the intensity values [3, 4], b) methods that use extracted features, e.g. edges and corners [5], and c) methods that work in the frequency domain [6-9]. Neurophysiological evidence suggests that the human visual system probably works in the frequency domain [10, 11]. Frequency-domain approaches are mainly based on the Fourier shift theorem, i.e. the fact that simple translations are manifested as phase shifts in the Fourier Transform (FT) domain. Therefore, most of the FT-based methodologies consider pure translational models, assuming that complicated motions can be locally approximated as translational. In this work, we present a frequency-domain approach for the estimation of linear parametric distortions (affine coordinate transformations plus translations) that can be used for image registration and motion analysis. We study the linear parametric motion in the spatial frequency domain and we present a step-by-step algorithm for the estimation of its parameters. The idea is based on the fact that translations affect only the phase of the 2-D FT, while affine distortions affect its amplitude; therefore, the latter can be used separately for the estimation of the affine transformation. Logarithmic sampling of the 2-D FT and the Phase-Correlation (PC) function [7, 12] are used. The experiments verify the theoretical results and the effectiveness of the presented algorithm.
II. THEORETICAL BACKGROUND
A. The motion model in the intensity domain

Let f1(x) = f(x; t) and f2(x) = f(x; t+1) represent two consecutive images, the "anchor" frame and the "target" frame, respectively. The motion in the general case is given by f2(xa) = f1(x), i.e. the point x = (x, y)ᵀ in the "anchor" frame is translated to the position xa = (xa, ya)ᵀ. The linear parametric motion A = {[A], v} is described by the formula [1]:

xa = [A]·x + v,    (1)

where the positive-definite matrix [A] (det([A]) > 0) stands for the affine distortion matrix and v = (vx, vy)ᵀ represents the simple translation vector. The model accurately describes 3-D rotation and translation of a planar object which is orthographically projected onto the image plane [1], or perspectively projected, given that the object's viewing distance is relatively large. Using singular value decomposition, the positive-definite affine matrix is analyzed as follows:

[A] = [cos(φ)  sin(φ); −sin(φ)  cos(φ)] · s1 · [1  0; 0  s2/s1] · [cos(τ)  −sin(τ); sin(τ)  cos(τ)],    (2)

where s1² and s2² are the eigenvalues of [A][A]ᵀ. Based on equation (2), the affine distortion can be considered as a sequence of separate operations: a) rotation through an angle τ (tilt), b) non-uniform scaling in only one direction (slant), c) uniform scaling and d) further rotation through φ (swing).

B. The motion model in the frequency domain

The affine transform leads to a similar transform of the 2-D FT amplitude, while the shift v affects only the phase of the 2-D FT. Mathematically:

f2(x) = f1([A]⁻¹(x − v))  →  F2(u) = det[A] · F1([A]ᵀu) · e^(j·uᵀv),    (3)

where u = (ux, uy)ᵀ is the vector of spatial-frequency coordinates. In correspondence with (1) and (2), the affine transform in the frequency domain is described by:

|F2(ua)| = det[A] · |F1(u)|,    (4)

where

ua = [cos(φ)  sin(φ); −sin(φ)  cos(φ)] · [1/s1  0; 0  1/s2] · [cos(τ)  −sin(τ); sin(τ)  cos(τ)] · u.    (5)

The above equations reveal the importance of studying the linear parametric motion in the frequency domain: the problems of estimating the affine distortion matrix and the translation are separated, since translations do not affect the amplitude of the 2-D FT. An example of a linear coordinates transform in the intensity and in the spatial-frequency domain is given in Fig. 1(a),(b), where it is clear that the amplitude of the 2-D FT undergoes a similar transformation.
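As a quick numerical check of Eq. (2) and the eigenvalue statement below it, the following sketch builds [A] from the four parameters and verifies that s1² and s2² are the eigenvalues of [A][A]ᵀ; the parameter values are those of Fig. 1.

```python
import numpy as np

def affine_matrix(tau, phi, s1, s2):
    """[A] of Eq. (2): tilt rotation, slant diag(1, s2/s1) with uniform
    scaling s1, then swing rotation. Angles in radians."""
    R_swing = np.array([[np.cos(phi),  np.sin(phi)],
                        [-np.sin(phi), np.cos(phi)]])
    R_tilt = np.array([[np.cos(tau), -np.sin(tau)],
                       [np.sin(tau),  np.cos(tau)]])
    return R_swing @ (s1 * np.diag([1.0, s2 / s1])) @ R_tilt

A = affine_matrix(np.deg2rad(-2.0), np.deg2rad(3.0), 1.1, 0.95)
print(np.sort(np.linalg.eigvalsh(A @ A.T)))  # -> [0.95**2, 1.1**2]
```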
Fig. 1: (a) Two consecutive CT slices of a fetus skull; the second slice is affine distorted. The affine distortion parameters are τ = −2° (tilt), s1 = 1.1, s2 = 0.95 and φ = 3° (swing). (b) The corresponding 2-D FTs. (c) The spectral representation N1(ω1, ω2; τi = τ), with τi equal to the actual tilt. (d)-(e) The spectral representations N2(ω1, ω2; φi) for φi = {−3°, 3°}. (f)-(g) The corresponding PC functions. Notice that N23 matches well with N11 and consequently the PC method produces a sharp peak.

III. THE PROPOSED METHODOLOGY

The transformation of the spatial-frequency coordinates in (5) can be rewritten as:

[cos(φ)  −sin(φ); sin(φ)  cos(φ)] · ua = [1/s1  0; 0  1/s2] · [cos(τ)  −sin(τ); sin(τ)  cos(τ)] · u.    (6)

Consider at this point that the rotation angles φ (swing) and τ (tilt) are known. Equation (6) states that the rotated version of the spectrum |F2(u)|, through an angle φ, and the rotated version of |F1(u)|, through an angle τ, are identical up to a non-uniform scaling. Based on this fact, we sample the 2-D spectrum |F2(u)| with respect to two orthogonal directions φ and φ + π/2, in a logarithmic manner, i.e.:

ω1 = log(|ux·cos(φ) − uy·sin(φ)|),    (7a)
ω2 = log(|ux·sin(φ) + uy·cos(φ)|),    (7b)

which leads to a 2-D spectral representation N2(ω1, ω2; φ). Similarly, sampling |F1(u)| with respect to the orthogonal directions τ and τ + π/2, we obtain the spectral representation N1(ω1, ω2; τ). The representations N1(ω1, ω2; τ) and N2(ω1, ω2; φ) are then related to each other by:

N2(ω1, ω2; φ) = det([A]) · N1(ω1 − ds1, ω2 − ds2; τ),    (8)

where

ds1 = log(s1),   ds2 = log(s2).    (9)

Consequently, if the rotation angles φ and τ are a priori known, then the 2-D spectral representations N1(ω1, ω2; τ) and N2(ω1, ω2; φ) are simply translated versions of each other (up to the scale factor det([A])). This is shown in Fig. 1. The translation depends only on the scaling factors s1 and s2. For the estimation of the shift between N1(ω1, ω2; τ) and N2(ω1, ω2; φ) we propose the application of the PC methodology [7, 12], which is invariant to multiplication factors such as det([A]). Namely, the following function is calculated:

PC(d1, d2; τ, φ) = |PC{N1(ω1, ω2; τ), N2(ω1, ω2; φ)}|,    (10)

where PC{·,·} is presented directly below. The shifting parameters ds1 and ds2 are given by the position of the maximum in the PC function.
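A sketch of the logarithmic sampling of Eqs. (7a)-(7b) is given below; for brevity only the positive quadrant of the (fftshift-ed) magnitude spectrum is sampled, out-of-range coordinates are zero-padded, and the grid bounds are illustrative. The shift between two such representations, recovered with the PC function described below, converts to the scaling factors through Eq. (9): s = exp(shift × grid step).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_sampled_spectrum(F_mag, angle, n=128):
    """N(w1, w2) of Eqs. (7a)-(7b): sample the (fftshift-ed) FT magnitude
    along the orthogonal directions angle and angle + 90 deg on a
    logarithmic grid (positive quadrant only, for brevity)."""
    cy, cx = F_mag.shape[0] // 2, F_mag.shape[1] // 2
    r = np.exp(np.linspace(0.0, np.log(min(cy, cx) - 1.0), n))
    w1, w2 = np.meshgrid(r, r, indexing="ij")      # exp(omega1), exp(omega2)
    c, s = np.cos(angle), np.sin(angle)
    x = w1 * c + w2 * s                            # back to (ux, uy)
    y = -w1 * s + w2 * c
    coords = np.vstack([(cy + y).ravel(), (cx + x).ravel()])
    return map_coordinates(F_mag, coords, order=1).reshape(n, n)
```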
The corresponding scaling factors s1, s2 are then calculated from (9).

The phase-correlation (PC) function: For two images g1(x) and g2(x), the PC function [7, 12] is calculated from:

PC{g1(x), g2(x)} = FT⁻¹{ r(u) / |r(u)| },    (11)

where r(u) = FT{g1(x)} · FT*{g2(x)} is the 2-D cross power spectrum of g1(x) and g2(x). If the images are shifted versions of each other, the inverse 2-D FT produces a strong peak at the position corresponding to the shift between the images. Due to the normalization (division by |r(u)|), the PC method is robust against illumination changes.

A proposed similarity measure: The sharpness of the peak in the PC function PC(d1, d2; τ, φ) constitutes a criterion for the similarity between N1(ω1, ω2; τ) and N2(ω1, ω2; φ). Consequently, it is an indirect measure of the closeness of the used rotation angles τ and φ to the actual ones. Furthermore, it is a confidence measure for the estimation of the scaling factors s1 and s2. Thus, the use of the following measure is proposed:

m(τ, φ) = Pmax(τ, φ) / (P(τ, φ) − Pmax(τ, φ)),    (12)

where

Pmax(τ, φ) = max{PC(d1, d2; τ, φ)}²    (13)

and

P(τ, φ) = Σ_{d1,d2} PC(d1, d2; τ, φ)².    (14)

The proposed measure is the energy of the peak divided by the energy in the remaining area of the PC function. For the example of Fig. 1, the calculated similarity measure is depicted in Fig. 2; it is maximized for the actual values of τ and φ.

Fig. 2: The actual parameters are τ = −2° (tilt), s1 = 1.1, s2 = 0.95 and φ = 3° (swing), as in Fig. 1. (a) The similarity measure (12) for (τi, φi) ∈ T × Φ. It is maximized for (τi, φi) = (τ, φ). From the corresponding PC function, the scaling factor estimates are sˆ1 = 1.05, sˆ2 = 0.97. (b) The compensated image f2'(x) = f2([A]⁻¹x).

A. The detailed algorithm

The proposed algorithm for the estimation of the affine distortion parameters is formulated as follows:

ALGORITHM 1, PART A
1. Compute the 2-D FTs of f1(x) and f2(x).
2. For both FTs, apply a 2-D high-pass filter H(u) (in the frequency domain, by multiplication), for example H(u) = 1 − cos(ux/2)·cos(uy/2), where the spatial frequency (ux, uy)ᵀ is considered normalized in the interval [−π, π) × [−π, π). For natural images the energy is mainly concentrated in the low spatial frequencies; therefore, the amplification of the high-frequency components is helpful. Furthermore, compute the logarithm of the 2-D FT amplitudes; let these be denoted |F1(u)| and |F2(u)|. Our experiments showed that the use of the amplitude's logarithm leads to enhanced results.
3. For different angles τi and φi on a grid (τi, φi) ∈ T × Φ:
• Calculate the PC function PC(d1, d2; τi, φi) between the representations N1(ω1, ω2; τi) and N2(ω1, ω2; φi).
• Determine the corresponding similarity measure from (12).

PART B
4. Find the angles τˆ and φˆ for which the similarity measure is maximized.
5. From the position of the maximum in the PC function PC(d1, d2; τˆ, φˆ), estimate the scaling factors s1 and s2, using (9).

As shown in Fig. 2, the application of the algorithm to the images of Fig. 1 leads to the correct estimation of the affine parameters.

B. Algorithm for multiple consecutive frames

Let us now consider Nt consecutive frames f(x; t), each one related to the previous one by a linear coordinates transform which is fixed in time, i.e. x(t) = [A]·x(t−1) + v. In order to render our methodology more robust, one can exploit the information of all Nt consecutive frames. Let N(ω1, ω2; t; τ) and N(ω1, ω2; t+1; φ) denote the spectral representations of f(x; t) and f(x; t+1). The PC function between them is denoted as:

PC(d1, d2; t; τ, φ) = |PC{N(ω1, ω2; t; τ), N(ω1, ω2; t+1; φ)}|.    (15)

Finally, let m(t; τ, φ) stand for the corresponding similarity measure. We define the mean similarity measure over all consecutive frame pairs: m̄(τ, φ) = (1/(Nt − 1)) · Σ_{t=0}^{Nt−2} m(t; τ, φ).
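To make Eqs. (11) and (12) concrete, a minimal NumPy sketch follows; the small eps guarding empty frequency bins is an implementation detail, not part of the original formulation.

```python
import numpy as np

def phase_correlation(g1, g2, eps=1e-12):
    """Eq. (11): inverse FT of the normalized cross power spectrum; the
    argmax of the returned surface gives the shift between g1 and g2."""
    r = np.fft.fft2(g1) * np.conj(np.fft.fft2(g2))
    return np.abs(np.fft.ifft2(r / (np.abs(r) + eps)))

def similarity(pc):
    """Eq. (12): peak energy over the energy of the remaining surface."""
    p_max = pc.max() ** 2
    return p_max / ((pc ** 2).sum() - p_max)
```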
The estimation of the rotation angles τ and φ can be based on the maximization of the mean measure m̄(τ, φ). Having estimated the actual τ and φ, the scaling factors s1 and s2 are calculated for each pair of consecutive frames f(x; t) and f(x; t+1); let those be denoted sˆ1(t) and sˆ2(t). The final estimation of the actual scaling factors can be based on the mean or the median (for the rejection of outlier values) of the sequences sˆ1(t) and sˆ2(t), t ∈ [0, Nt − 2]. The overall algorithm for the estimation of the parametric motion A = {[A], v} is summarized as follows:

EXTENDED ALGORITHM:
1. For each time instant t ∈ [0, Nt − 2], with f1(x) = f(x; t) and f2(x) = f(x; t+1):
• Apply PART A of ALGORITHM 1 and calculate the similarity measures m(t; τi, φj) for (τi, φj) ∈ T × Φ.
2. Calculate the mean similarity measures m̄(τi, φj) for (τi, φj) ∈ T × Φ.
3. Find the pair of rotation angles τi, φj for which the mean similarity measure is maximized. These angles constitute the estimates of the actual tilt and swing values; let these be denoted τˆ and φˆ, respectively.
4. For each time instant t ∈ [0, Nt − 2]:
• From the position of the maximum in the PC function PC(d1, d2; t; τˆ, φˆ), find the scaling factors sˆ1(t) and sˆ2(t), using (9).
5. Find the medians of the sequences sˆ1(t) and sˆ2(t).

Results from the application of the extended algorithm are given in Fig. 3. Despite the existence of the static background, the algorithm correctly determined the dominant motion.

Fig. 3: "Sunset & seagull" sequence: (a) The frames #1, #2, #3 and #4 (t = 0, 1, 2, 3). (b) The mean similarity measure m̄(τ, φ), which is maximized for τˆ = −1 and φˆ = 3, i.e. for angles equal to the actual rotation angles of the "seagull" object, despite the existence of the background. (c) The PC function PC(d1, d2; t; τˆ, φˆ) for t = 0. It contains a strong peak at the appropriate location, which corresponds nearly to the actual scaling factors. (d) The motion-compensated frames with respect to the time instant t = 0, based on the estimated [A]. One can notice that the "seagull" object remains almost unchanged.

IV. CONCLUSION

Studying the linear parametric motion model in the spatial-frequency domain, we presented a frequency-domain algorithm for the registration of two affine-distorted images. The idea extends existing PC techniques, e.g. [7]. The algorithm was also modified in order to handle multiple consecutive frames for the motion estimation problem. The experimental results verify the correctness of our arguments.
REFERENCES

1. Y. Wang, J. Ostermann, Y. Zhang, Video Processing and Communications. Prentice Hall, New Jersey, 2002.
2. J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Systems and experiment: Performance of optical flow techniques," Intern. J. of Comput. Vision, vol. 12, no. 1, pp. 43-47, 1994.
3. B. K. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17, pp. 185-203, 1981.
4. M. J. Black and P. Anandan, "The robust estimation of multiple motions: Parametric and piecewise-smooth flow-fields," Computer Vision and Image Understanding, vol. 63, pp. 75-104, Jan. 1996.
5. J. Shi and C. Tomasi, "Good Features to Track," IEEE Conference on Computer Vision and Pattern Recognition, Seattle, Jun. 1994.
6. D. S. Alexiadis and G. D. Sergiadis, "Estimation of Multiple, Time-Varying Motions using Time-Frequency Representations and Moving-Objects Segmentation," IEEE Trans. Image Processing, vol. 17, pp. 982-990, Jun. 2008.
7. B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation and scale-invariant image registration," IEEE Trans. Image Processing, vol. 5, pp. 1266-1271, 1996.
8. P. Burlina and R. Chellappa, "Analyzing looming motion components from their spatiotemporal spectral signature," IEEE Trans. Pattern Anal. Machine Intell., vol. 18, pp. 1029-1033, Oct. 1996.
9. J. Ben-Arie and Z. Wang, "Pictorial Recognition of Objects Employing Affine Invariance in the Frequency Domain," IEEE Trans. Pattern Anal. Machine Intell., vol. 20, pp. 604-618, Jun. 1998.
10. E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," J. Opt. Soc. Am. A, vol. 2, pp. 284-299, 1985.
11. A. B. Watson and A. J. Ahumada, "Model of human visual-motion sensing," J. Opt. Soc. Am. A, vol. 2, pp. 322-342, 1985.
12. H. Foroosh, J. B. Zerubia and M. Berthod, "Extension of Phase Correlation to Subpixel Registration," IEEE Trans. Image Processing, vol. 11, pp. 188-200, Mar. 2002.
Ex Vivo and In Vivo Regulation of Arginase in Response to Wall Shear Stress

R.F. da Silva1,2, V.C. Olivon3, D. Segers4, R. de Crom4, R. Krams5, and N. Stergiopulos1
1 Laboratory of Hemodynamics and Cardiovascular Technology, Swiss Federal Institute of Technology, Lausanne, Switzerland
2 Department of Fundamental Neurosciences, University of Geneva, Geneva, Switzerland
3 Department of Pharmacology, University of São Paulo, Ribeirão Preto, Brazil
4 Department of Cardiology and Genetics, Erasmus Medical Center, Rotterdam, The Netherlands
5 Department of Bioengineering, Imperial College London, London, United Kingdom
Abstract— Background: Alterations of wall shear stress can predispose the endothelium to the development of atherosclerotic plaques. We evaluated the modulation of arginase by wall shear stress. Material and methods: We perfused isolated carotid arterial segments under either unidirectional high mean shear stress (HSS) or low mean and oscillating shear stress (OSS) for 3 days. Vascular function was analyzed by diameter measurement, and arginase expression and localization by western blot and immunohistochemistry, respectively. These effects were also evaluated in the right carotid artery of apolipoprotein E-deficient (apoE-/-) mice, fed a high-cholesterol diet, which was exposed to HSS, LSS and OSS flow conditions by the placement of a shear stress modifier for 9 weeks. ApoE-/- mice received either the arginase inhibitor nor-Noha (20 mg/kg, 5 days/week) or placebo for 9 weeks. Plaque size and I/M ratio were determined by histology. Results: Our ex vivo perfusion data showed that exposure of carotid segments to both low and oscillatory flow conditions significantly increased arginase II protein expression and activity as compared to the athero-protective high shear stress flow condition. Long-term treatment with nor-Noha effectively decreased arginase activity at the LSS and OSS regions, which in turn was accompanied by a decreased I/M ratio and a smaller atherosclerotic lesion. In the lesion, inhibition of arginase decreased the number of CD68-positive cells at the LSS and OSS zones. Exposure of the carotid artery to OSS induced a more pronounced activation of arginase as compared to HSS. Conclusions: Arginase is modulated by patterns of wall shear stress. Long-term treatment of apoE-/- mice with an arginase inhibitor decreased the carotid I/M ratio and the atherosclerotic lesion at the LSS and OSS regions. Therefore, inhibition of arginase by nor-Noha may emerge as a distinct way to target atherosclerosis disease.

Keywords— Atherosclerosis, hemodynamics, wall shear stress, vulnerable plaque, arginase pathway.

I. INTRODUCTION

Alterations of wall shear stress, the frictional force generated by blood flow, can predispose the endothelium to the development of atherosclerotic plaques. In parallel, growing evidence suggests a key role for vascular arginase II in the pathophysiology of atherosclerosis disease. We are currently investigating the modulation of arginase by wall shear stress in the carotid artery perfused ex vivo. The contribution of arginase to the process of shear stress-induced atherogenesis in vivo will also be discussed in the present paper.

II. MATERIAL AND METHODS

To evaluate the effects of wall shear stress in a well-controlled hemodynamic environment and without neuroendocrine factors, carotid arterial segments were perfused ex vivo [1] under either HSS or OSS for 3 days. After 3 days of flow exposure, vascular function was analyzed by diameter measurement, and arginase expression and localization by western blot and immunohistochemistry, respectively. These effects were also evaluated in an innovative mouse model of shear stress-induced atherogenesis developed by the group of R. Krams [2-4]. Carotid arteries of apolipoprotein E-deficient (apoE-/-) mice fed a high-cholesterol diet were exposed to different hemodynamic conditions by the placement of a shear stress modifier device (referred to as a cast, Fig. 1A) on the mouse carotid for 9 weeks. The vessel diameter and the velocity of blood flow were then measured by Doppler analysis in order to precisely derive the shear stress values. The cast imposes a fixed geometry on the vessel wall, thereby causing a gradual stenosis, resulting in increased shear stress in the vessel segment inside the cast, a decrease in blood flow and consequently a lowered shear stress region upstream of the cast, and a vortex downstream from the cast (oscillatory shear stress) (Fig. 1B).
Fig. 1
In our study design protocol, apoE-/- mice (n=20 per group) were randomly assigned to receive either N-ω-hydroxy-nor-L-arginine (nor-Noha, 20 mg/kg, 5 days/week) or placebo for 9 weeks. Arginase activity in carotid tissue lysates was measured by colorimetric determination of the urea formed from L-arginine. Plaque size and intima/media (I/M) ratio were determined by immunohistochemical analysis after classical eosin-hematoxylin staining. Collagen fibers were stained with a 0.5% picrosirius red solution. Collagen types I and III were visualized by polarized light Olympus microscopy. Lipid deposition was determined by Sudan IV staining and analyzed by light microscopy. Systolic blood pressure (SBP) and heart rate were measured in conscious mice two days before sacrifice using the indirect tail-cuff method (BP2000, Visitech System, Apex, North Carolina, USA). After the mice were deprived of food for at least 12 hours, blood was collected by cardiac puncture into heparin-coated tubes. Plasma was obtained through centrifugation of blood for 15 minutes at 4500 g and 4 °C and stored at −80 °C until each assay was performed. Total cholesterol, triglyceride, LDL, HDL and creatinine levels were determined using an automated clinical chemistry analyzer. Data were analyzed using one-way ANOVA with Bonferroni's post-test or the unpaired t test (GraphPad Prism 5.0; GraphPad Software Inc., San Diego, CA). A value of P ≤ 0.05 was considered significant.
Fig. 2
III. RESULTS

Only the arginase II (ArgII) isoform was detected in isolated porcine carotid endothelial cells and in the carotid artery. Exposure of arteries to OSS increased ArgII expression and activity as compared to HSS in a time-dependent manner (Fig. 2). Inhibition of arginase by nor-Noha improved NO-dependent endothelial function and decreased total vascular ROS formation in arteries submitted to OSS. For the in vivo experiments, arginase activity was significantly increased in the contra-lateral carotid artery of apoE-/- mice as compared to WT. Nor-Noha treatment decreased arginase activity in the carotid of apoE-/- mice. In arteries containing the shear stress modifier, vascular arginase activity was significantly increased at both the low and oscillatory shear stress regions as compared to the HSS region. Long-term treatment with nor-Noha effectively decreased arginase activity at these regions, which in turn was accompanied by a decreased I/M ratio and percentage of atherosclerotic lesion (Fig. 3A and B). In the lesion, inhibition of arginase decreased the percentage of CD68-positive cells at the LSS and OSS zones (Fig. 3C and D), with no significant changes in the percentage of smooth muscle cells or in lipid content.
Fig. 3
IV. DISCUSSION
The ArgI and ArgII isoforms have very distinct subcellular localizations and tissue distributions. In the vascular system, arginase can be either constitutively expressed or activated upon stimuli. Moreover, changes in vascular arginase appear to
be species-dependent. The present study demonstrated that only the ArgII isoform was detectable in carotid artery tissue. Emerging evidence indicates that an increase in arginase expression and/or activity correlates with several risk factors for cardiovascular diseases, including atherosclerosis [5-7]. In particular, the group of Berkowitz elegantly showed that inhibition of ArgII decreased the formation of atherosclerotic plaque in apoE-/- mice [8]. It is well recognized that alterations of wall shear stress play an important role as a local risk factor for the development of atherosclerotic disease [3]. In the present study, we investigated the regulation of arginase in response to wall shear stress in a well-controlled hemodynamic environment. Our results showed that exposure of carotid arteries to HSS values for 3 days significantly increased the expression of ArgII as compared to fresh artery. Oscillatory flow, a hemodynamic condition known to favour plaque development, increased the expression of vascular ArgII as early as 1 day after flow exposure. Interestingly, after 3 days OSS significantly increased the expression and activity of ArgII as compared to HSS. These results indicate that ArgII is stimulated by wall shear stress depending on the duration and type of flow pattern. Vascular arginase competes with eNOS for the same substrate, L-arginine, thus lowering the formation of NO [9]. For the above reasons, and also because of the pronounced increase of ArgII in the endothelium, we investigated whether changes in arginase expression/activity induced by OSS would affect NO-dependent endothelial function. Our results demonstrated that OSS significantly decreased the vasorelaxation capacity in response to bradykinin and that the arginase inhibitor nor-NOHA restored NO-dependent endothelial function. High levels of vascular ROS can react with NO to form peroxynitrite and lower NO bioavailability. In our ex vivo model, perfusion of arteries under OSS increased total vascular ROS formation, as evidenced by DHE staining. Several key mediators of atherothrombosis, such as thrombin [10], inflammatory cytokines [11] and oxidized LDL [5], have been shown to up-regulate the activity of the ArgII isoform in EC. More recently, a vascular increase in arginase activity has been shown to contribute to the mechanism of endothelial dysfunction present in atherosclerotic vessels [12]. Whether arginase mediates the endothelial dysfunction induced by OSS is not yet known. Our results showed that arginase expression and activity are modulated by patterns of wall shear stress in vivo. In arteries containing the shear stress modifier, vascular arginase activity was significantly increased in both the low and the oscillatory shear stress regions as compared to the HSS region. Long-term treatment with nor-NOHA effectively decreased arginase activity
in these regions, which was accompanied by a decreased I/M ratio and percentage of atherosclerotic lesion. In the lesion, inhibition of arginase decreased the percentage of CD68-positive cells in the LSS and OSS zones, with no significant changes in the percentage of smooth muscle cells or lipid content.
V. CONCLUSIONS
Exposure of the carotid artery to oscillatory flow induced a more pronounced activation of arginase as compared to HSS. Inhibition of arginase in arteries exposed to OSS improved NO-dependent endothelial function and decreased the smooth muscle cell proliferation rate, both processes that are important for the focal development of atherosclerotic plaque. Vascular expression and activity of ArgII are regulated by patterns of wall shear stress in vivo, and arginase inhibition decreased plaque size and I/M thickness. Inhibition of arginase emerges as a distinct way to target atherosclerotic disease by acting via NO-dependent and NO-independent pathways, which could respectively improve endothelial function and arterial remodelling. Nevertheless, it is important to note that the commercially available inhibitors of arginase are so far not selective for the vascular ArgII isoform and may therefore induce adverse effects through inhibition of ArgII and/or ArgI in other tissues.
ACKNOWLEDGMENT We would like to thank Elodie Deluz for her excellent technical assistance. This work was supported by the Swiss National Science Foundation (grant numbers 3100A0103823 to N.S., PIOIB117205/1 to RF.S.), by Olga Mayenfisch Stiftung and Novartis Consumer Health Foundation (to RF.S.).
REFERENCES 1. Gambillara, V., et al. 2006. Plaque-prone hemodynamics impair endothelial function in pig carotid arteries. Am J Physiol Heart Circ Physiol. 290: H2320-2328. 2. Cheng, C., et al. 2007. Shear stress-induced changes in atherosclerotic plaque composition are modulated by chemokines. J Clin Invest. 3. Cheng, C., et al. 2006. Atherosclerotic lesion size and vulnerability are determined by patterns of fluid shear stress. Circulation. 113: 2744-2753. 4. Cheng, C., et al. 2005. Shear stress affects the intracellular distribution of eNOS: direct demonstration by a novel in vivo technique. Blood. 106: 3691-3698. 5. Ryoo, S., et al. 2006. Oxidized low-density lipoprotein-dependent endothelial arginase II activation contributes to impaired nitric oxide signaling. Circ Res. 99: 951-960.
6. White, A. R., et al. 2006. Knockdown of arginase I restores NO signaling in the vasculature of old rats. Hypertension. 47: 245-251. 7. Zhang, C., et al. 2004. Upregulation of vascular arginase in hypertension decreases nitric oxide-mediated dilation of coronary arterioles. Hypertension. 44: 935-943. 8. Ryoo, S., et al. 2008. Endothelial arginase II: a novel target for the treatment of atherosclerosis. Circ Res. 102: 923-932. 9. Ash, D. E. 2004. Structure and function of arginases. J Nutr. 134: 2760S-2764S; discussion 2765S-2767S.
10. Ming, X. F., et al. 2004. Thrombin stimulates human endothelial arginase enzymatic activity via RhoA/ROCK pathway: implications for atherosclerotic endothelial dysfunction. Circulation. 110: 3708-3714. 11. Satriano, J. 2004. Arginine pathways and the inflammatory response: interregulation of nitric oxide and polyamines: review article. Amino acids. 26: 321-329. 12. Ryoo, S., et al. 2008. Endothelial arginase II: a novel target for the treatment of atherosclerosis. Circ Res. 102: 923-932.
Process Choreography for Interaction Simulation in Ambient Assisted Living Environments
C. Fernández-Llatas1, J.B. Mocholí1, C. Sánchez1, P. Sala1 and J.C. Naranjo1
1 TSB-ITACA, Universidad Politécnica de Valencia, Spain
Abstract— AAL solutions and Universal Design are increasingly used to empower individuals to manage their needs by using ICTs. Nevertheless, applying design-for-all principles in ICTs is a very difficult task. Products that may seem great ideas at design time can be useless for people with limitations. Systems that can simulate the interaction between the user and AAL services and devices can help designers make fewer design errors and create more useful products. In this paper, a simulator of AAL processes and services is presented. This simulator, based on process choreography principles, is intended to be used in all stages of the design process. To this end, the system allows not only the simulation of single processes and devices and their relationships, but also the connection with real devices to test the interaction between real and simulated processes and devices.
Keywords— Choreography, Orchestration, Workflow, processes, AAL, Simulation
I. INTRODUCTION
With increasing life expectancy, the number of people with age-specific disorders is also growing, thus increasing the number of people who will require assistance and care in the future. Within the Ambient Intelligence paradigm, AAL (Ambient Assisted Living) [1] is the solution on the horizon to empower individuals to manage their needs by using ICT. According to the AAL thesis, technologies should be fully adapted to human needs and cognition, and totally accessible regardless of the specific condition of the user (Universal Design). AAL puts technology in the hands of any person in a way that is transparent to them and specifically developed to manage their needs. Hence, seniors will have all the advantages of AAL services while avoiding the disadvantages of using technology. However, in many cases, the complexity and lack of accessibility and usability of ICT is a major barrier. Several R&D projects currently deal with this challenge, mainstreaming accessibility in goods and services as well as developing up-to-date assistive technologies. The ASK-IT project [2] uses design-for-all principles to develop ICT-based services that allow mobility-impaired people to live and travel more independently. The eAbilities [3]
project aims to develop a framework for current and future actions in research, education and technology transfer in the field of ICT accessibility in home, vehicle and working environments in Europe. The MAPPED project [4] takes into consideration all the accessibility needs of disabled users to provide them with the ability to plan excursions from point to point, at any time, using public transport, their own vehicle, walking, or a wheelchair. Although accessibility and usability are crucial concepts in current society, and the seven principles of Universal Design [5], or Design for All, are well known and applicable to a wide variety of domains, their use in real systems is very limited. These ideas are usually taken into account in research and development activities, but they meet significant reservations in the production and deployment phases, because business stakeholders are still reluctant to apply them in practice. The main cause of that reluctance is the high cost of validating services and devices in accessibility and usability terms. To solve this, it is necessary to develop validation tools and methodologies that support designers along the whole development cycle, for example by allowing them to simulate the interaction of AAL components during the design phase in order to detect conceptual errors in early stages of the development process. In order to simulate the interaction among the system components in the design phase, it is necessary to describe the components' execution flow in a formal way. For this, workflow technology [6] is usually the best choice. Workflow technology allows the specification and simulation of processes by non-programming experts. This allows accessibility and usability experts to define an AAL component process without knowing programming languages. Nevertheless, although workflows can describe process flows, they are not able to simulate the interaction among different processes. Workflows are designed to be orchestrated. Orchestration assumes that the flow is managed by a central component that decides the next step in the process. In an interaction among processes, the flow is decided in a distributed way; the next step is decided by the combination of the flows of the interacting components. That kind of process execution is usually known as process choreography [7].
Choreography assumes that the processes are able to exchange data in order to execute processes in a distributed way. In this paper, a computer-aided supporting tool to simulate AAL component interaction using choreography and orchestration principles is presented. The solution is intended to be used by interaction designers and usability engineers at all development stages. This tool was developed in the framework of the VAALID European project [8].
II. MATERIALS AND METHODS
In order to simulate the interaction between AAL components, we need to describe two levels of simulation:
• The first level is the simulation of the AAL components themselves, i.e. the services and devices that will be executed in the system. This requires a formalism that allows experts to describe the flow of each component to be simulated.
• The second level is the interaction among the component processes. This requires an environment that allows message exchange among the simulated processes.
Workflow technology can deal with the first level of simulation. A widely accepted definition characterizes a workflow as the formal specification of a process: it defines actions, which are performed by humans or by computerized systems, and the set of allowed transitions among them that defines the possible paths a concrete process can follow. More specifically, the Workflow Management Coalition [6] defines a workflow as the automation of a business process, in whole or part, during which documents, information or tasks are passed from one participant to another for action, according to a set of procedural rules. The use of workflows allows experts to formally describe AAL components using easy-to-use graphical tools. Workflows can be executed using workflow engines. A workflow engine is a software system that can understand and perform the actions described in a workflow in the specified order. Several workflow systems are available, depending on the final system needs: jBPM [9], the Java open-source workflow engine from JBoss; the Windows Workflow Foundation [10] from Microsoft; the proprietary Staffware [11]; or YAWL [12], created in the scientific community, inter alia.
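To make the orchestration notion concrete, a toy engine in the spirit of this definition can be sketched as follows (our illustration only; none of the engines listed above work this way internally):

```python
# Toy orchestrated workflow: states, events and allowed transitions.
workflow = {
    "idle":      {"motion_detected": "lights_on"},
    "lights_on": {"timeout": "idle", "user_leaves": "idle"},
}

def run(workflow, state, events):
    """The engine alone decides every next step (orchestration)."""
    for event in events:
        state = workflow[state].get(event, state)  # central decision point
        print(f"event {event!r} -> state {state!r}")
    return state

run(workflow, "idle", ["motion_detected", "timeout"])
```

The essential point is that a single component owns the transition table and decides every next step.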
The second simulation level cannot be executed using workflows. Workflow technology is designed to execute exhaustively described processes where the workflow engine (orchestrator) takes the decisions of passing from one state to another. This paradigm is called orchestration. The simulation of interactions requires that the decision of changing from one state to another be taken in a distributed way. In that case, there is no central module that takes the decision; rather, the transition among states occurs through the combination of the individual flows of each of the system components. This paradigm is called choreography [7]. Multi-agent technology can be used to simulate the interaction among components. A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Intelligent agents (IA) are autonomous entities which observe and act within an environment. There are frameworks like JADE [13] to build MAS, and rich MAS communication protocols like FIPA [14] that allow agents to exchange all the data necessary to interact. However, MAS is not a good solution for simulation due to its inefficiency: multi-agent systems require too many resources, which is a problem in tasks that involve a high number of simulation elements, and in certain situations they take a long time to provide a response. A more efficient alternative is the Enterprise Service Bus (ESB) [15]. ESBs are general-purpose interconnection software systems based on the Service Oriented Architecture (SOA) [16]. In this paradigm, ESBs are intended to interconnect different endpoints. Typically, ESBs are implemented using event-driven messaging in a central engine that acts as the message dispatcher among the choreography components. ESBs promote flexibility in the transport layer and enable loose coupling and easy connection between services. There are powerful ESBs like OpenESB [17] and JBossESB [18] that require containers to be executed, or lightweight ESBs like Mule [19]. ESBs are specifically appropriate for interconnecting existing applications; for that reason, these systems mainly focus on supporting the maximum number of input and output formats. This is not a critical issue when simulating interactions, because the data exchange format can be defined beforehand. Another alternative for creating SOA systems with efficient technologies is OSGi [20]. The OSGi framework is a dynamic modular system written for Java technology that allows building applications from small, reusable and collaborative components. These components can be registered in the OSGi framework and interact with others using the provided mechanisms. Using OSGi primitives it is possible to subscribe/unsubscribe services and define interfaces to interconnect those services. In addition, the OSGi framework provides more and more components to ensure security, provide communication protocols, connect to handheld devices, etc. OSGi is becoming one of the most popular technologies for SOA systems; even JBoss is planning to build its ESB on the OSGi framework. Comparing the models, while an ESB provides a closed bus for data exchange, OSGi provides a framework that allows us
to create a tailored choreographer designed to simulate interactions in a simple way, although it has to be developed from scratch.
III. RESULTS
In our problem, although MAS provide a very rich data exchange format and a very powerful model for representing AAL components as agents, workflows are more efficient for representing the services and devices, and are more readable for interaction experts. For service choreography, ESB systems are more efficient than MAS; however, ESBs are focused on system interoperability and are usually very closed and difficult to tailor. The choice in the VAALID project was therefore to create a tailored process choreographer to simulate interaction using the OSGi framework.
Fig. 1: Choreographer Architecture
Figure 1 shows the general architecture of the VAALID choreographer presented in this paper. The choreographer core is an OSGi module that allows the interaction among other OSGi modules (bundles). The choreographer implements a listener that automatically connects OSGi bundles that are registered in the framework using a specific software interface (IService). Once the choreographer gets a service, the service is started and can be publicly called by other services connected to the choreographer. The choreographer dispatches the messages among the modules using a specific XML message protocol called XMSG, based on a combination of the FIPA and SOAP protocols. The classic FIPA protocol, defined for communication in MAS, allows sharing knowledge using different protocols; XMSG uses FIPA headers to route and characterize the messages. The content of an XMSG message is SOAP, a well-known and widely used protocol for performing service calls. The XMSG protocol allows broadcast, multicast and P2P message calls by using wildcard symbols in the destination address (a minimal sketch of this routing principle follows the module list below). In addition to the choreographer, some general-purpose modules are implemented to perform the interaction simulation:
• A workflow engine, based on jBPM, which executes the workflows defined for the different devices and systems.
• A rule engine, based on Drools, which allows the execution of deductive rules to simulate human decisions or intelligent systems.
• A wrapper to connect to 3D scenes, based on the Instant Reality engine, to show the effects of the interaction simulation.
• A wrapper to connect with real devices via the KNX domotic bus.
• Some wrappers to interconnect the choreographer with other choreographers, with real local or remote objects (written on the Java or .NET platforms), or with 3D simulation environments.
Using these components the choreographer is able to interconnect workflow orchestrations as well as 3D representations and real modules. Workflow orchestration allows the formal representation and simulation of the flow of devices and services in a simple way, defined directly by interaction engineers. The interconnection of the choreographer with real modules allows testing the interaction of individual real systems in combination with simulated modules in simulated environments. This permits the use of the system with real devices and services after they have been implemented. The use of 3D environments to show the interaction simulation makes the evaluation of real interaction problems easier. In addition, a .NET wrapper module has been written in order to test the interconnection capabilities with other programming platforms; in this case, it is possible to connect modules written in Java with Microsoft .NET platform modules. This interoperability allows the simulation of services implemented on different platforms. In practice, these modules are intended to be used from the design phase to the testing phase of the AAL product life cycle. In the design phase, interaction designers can describe the behaviour of the new product using workflows, and its look and feel using 3D tools. The interaction of the product being developed can be simulated in predefined environments with other real or simulated products and with real or simulated users, in order to detect design problems in early stages. Once the product is implemented, the system is still useful: simulating the interaction of the real product in simulated environments allows engineers to detect implementation errors before it is used in real environments.
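The routing principle referenced above can be illustrated in a few lines of Python; the class, addresses and wildcard convention below are our invention and only mimic the registration (IService) and XMSG wildcard dispatch described in the paper:

```python
import fnmatch

class Choreographer:
    """Toy dispatcher: services register under an address and receive every
    message whose destination pattern matches that address."""
    def __init__(self):
        self.services = {}

    def register(self, address, handler):
        # Plays the role of the IService hookup in the OSGi framework.
        self.services[address] = handler

    def send(self, destination, payload):
        # Wildcards give broadcast/multicast as well as point-to-point calls.
        for address, handler in self.services.items():
            if fnmatch.fnmatch(address, destination):
                handler(payload)

bus = Choreographer()
bus.register("device.lamp",  lambda msg: print("lamp received", msg))
bus.register("device.blind", lambda msg: print("blind received", msg))
bus.send("device.*", {"cmd": "off"})   # multicast to every device
```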
IV. CONCLUSION
In this paper a new process choreographer for simulating interactions was presented. The service choreography can be
used to simulate the interaction among AAL services and devices. This solution can empower AAL designers to create better-defined products to be introduced on real markets. The implemented simulator is capable of simulating the interaction of processes defined at different design stages (in the functional specification stage using workflows, or in the testing stage using the real module) and on different platforms (Microsoft .NET and Java), which makes the choreographer useful independently of the final implementation language chosen. The simulator is connected to a workflow engine (jBPM) to provide a formal way to execute workflows; it uses a rule engine (Drools) to allow the execution of business rules that permit the simulation of complex decisions and intelligent systems; it is connected to a domotic bus (KNX) that allows communication with real hardware; and finally it uses a 3D visualization module (Instant Reality) to provide visual feedback to designers using 3D environments. The choreographer is being tested in the VAALID project using simulated modules and real devices in a real living lab.
ACKNOWLEDGMENTS The authors wish to thank the European Commission for the project funding and the VAALID consortium for their support.
REFERENCES
1. European Communities Commission. Ageing well in the information society: an i2010 initiative. Tech. rep., European Commission, 2007.
2. ASK-IT Consortium. ASK-IT Project: Ambient Intelligence System of Agents for Knowledge-based and Integrated Services, http://www.askit.org/, 2003-2008.
3. eAbilities Consortium. A virtual platform to enhance and organise the coordination among centres for accessibility resources and support, http://www.eabilities-eu.org/, 2006-2009.
4. MAPPED Consortium. Mobilisation and Accessibility Planning for People with Disabilities, http://services.txt.it/MAPPED/, 2004-2007.
5. Story M.F., Mueller J.L., Mace R.L. The Universal Design File: Designing for People of All Ages and Abilities. The Center for Universal Design, 1998.
6. WfMC. Workflow Management Coalition Terminology Glossary. WFMC-TC-1011, Document Status Issue 3.0, 1999.
7. Zimmermann O., Doubrovski V., Grundler J., Hogg K. Service-oriented architecture and business process choreography in an order management scenario: rationale, concepts, lessons learned. In OOPSLA '05: Companion to the 20th annual ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications (New York, NY, USA), pp. 301-312, ACM, 2005.
8. VAALID Consortium. VAALID Project: Accessibility and Usability Validation Framework for AAL Interaction Design Process, http://www.vaalid-project.org/default.aspx, 2008-2010.
9. Red Hat JBoss Division. JBoss jBPM White Paper, http://jboss.com/pdf/jbpm whitepaper.pdf
10. Microsoft Corporation. Introducing Microsoft Windows Workflow Foundation, http://msdn2.microsoft.com/en-us/library/Aa480215.aspx
11. TIBCO Software Inc. StaffWare ProcessSuite White Paper, http://about.reuters.com/partnerships/tibco/material/Staffware whitepaper.pdf
12. Aalst W., Hofstede A. YAWL: Yet Another Workflow Language. Information Systems 30:245-275, 2005.
13. Bellifemine F.L., Caire G., Greenwood D. Developing Multi-Agent Systems with JADE. Wiley, 2007.
14. O'Brien P.D., Nicol R.C. FIPA: Towards a Standard for Software Agents. BT Technology Journal 3:51-59, 1998.
15. Chappell D. Enterprise Service Bus. O'Reilly Media, Inc., 2004.
16. Erl T. Service-Oriented Architecture: Concepts, Technology, and Design. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2005.
17. Sun Microsystems. OpenESB: The Open Source ESB for SOA & Integration, https://open-esb.dev.java.net/.
18. JBoss Community. JBoss Enterprise Service Bus, http://www.jboss.org/jbossesb/.
19. MuleSoft Community. Mule Enterprise Service Bus, http://www.mulesoft.org/display/COMMUNITY/Home.
20. OSGi Alliance. OSGi Service Platform, Release 3. IOS Press, Inc., 2003.
Author: Carlos Fernández-Llatas
Institute: ITACA-Universidad Politécnica de Valencia
Street: Camino de Vera S/N
City: Valencia
Country: Spain
Email: cfl[email protected]
Measuring Device for Determination of Forearm Force
P. Hlavoň1, J. Krejsa2, and M. Zezula1
1 Brno University of Technology/Faculty of Mechanical Engineering, Brno, Czech Republic
2 Institute of Thermomechanics AS CR, v. v. i., Academy of Sciences of the Czech Republic
Abstract— The maximum force achievable at the end of a person's forearm depends on the elbow joint angle. A specialized measuring device was designed to identify this relationship. The force is measured during a continuous movement of the forearm from one limit position to the other. This movement is driven by an electric motor, so it is not influenced by the subject. The force at the end of the forearm and the elbow joint angle are captured using electric transducers connected to a Spider8 data acquisition system.
Keywords— Spider8, arm, forearm, elbow.
I. INTRODUCTION
The idea of designing the measuring device arose during the development of an experimental mobile motorized orthosis, which acts against the patient's muscles and must be able to overcome their force. We need to determine this force to design the orthosis correctly. Moreover, the magnitude of this force heavily depends on the elbow joint angle (among other factors, such as the patient's actual condition). If we determine this relationship, we can further optimize the driving mechanism of the orthosis and focus its maximum force on a certain range of elbow joint angles only.
Fig. 1 Basic measuring method
The basic measuring method is illustrated in figure 1. The forearm is set to the desired position (which can be expressed using the angle ϕ) and the force gauge is mounted between the frame and the wristband. The subject exerts a force F and the peak magnitude of the force is captured. This procedure can be repeated for different values of ϕ to get the full relationship.
To get a sufficient amount of data to evaluate the relationship, we need to measure in at least 6 positions. This implies that the method is time-consuming, because the force gauge and its mounting must be manually repositioned between individual measurements. Moreover, the magnitude of the force depends on the actual mood and will of the subject, which can change in time. This results in bad repeatability of the measurement. The problem can be solved by using a single continuous measurement from one limit position to the other instead of performing individual discrete measurements. The actual values of F and ϕ are captured during the motion, therefore we obtain both variables as functions of time. These functions can be further processed to obtain the desired relationship.
II. DESIGN OF MEASURING DEVICE
A. Concept
The movement must not be controlled by the subject; it must be enforced by a drive unit. The duration of the continuous measurement (and the resulting speed of movement) must be chosen correctly. If the speed is too high, dynamic effects can significantly influence the measured value of the force. If the speed is too low, the subject can get tired, which can influence the force at the end of the measurement. These requirements are contradictory; a duration of 5 seconds was chosen as a compromise. Because the typical range of ϕ is between 0 and 150 degrees, this implies a speed of movement of 30 degrees per second. The rapid change of angular position and force requires the use of electric transducers. The direction of the force on the forearm varies as the forearm moves, but the force vector remains perpendicular to the forearm. This implies that either the force gauge must change its orientation during the measurement, or the force must be transferred to a place where it retains its orientation. The latter approach was used, to eliminate potential problems with a mobile connection to the transducer. It is necessary to fix both the arm and the forearm (while enabling rotation in the elbow joint) to ensure good repeatability of the measurement. The main operational dimensions of the fixtures must be adjustable. It is beneficial if the fixtures enable quick mounting and unmounting of the subject's upper limb.
Fig. 2 Kinematic scheme
The previously mentioned requirements resulted in the kinematic design displayed in figure 2. There is a main pulley next to the arm of the subject; the axis of the pulley is coincident with the axis of the elbow joint. The forearm of the subject is fixed to the pulley near the wrist. A cable runs over the pulley and is attached to it at one point, so the pulley acts as a one-turn cable drum. The cable runs over another, smaller pulley. The axis of the smaller pulley is mounted to the base via a force transducer, so the transducer indirectly measures the tension in the cable, which is proportional to the force on the forearm of the subject. The other end of the cable is mounted on a nut that moves on a screw. The screw is directly driven by an electric motor.
B. Realization
Figure 3 shows the CAD model of the measuring device; reference [1] covers more design details.
Fig. 3 CAD model
The device is mounted on a metal plate with T-grooves. The frame, of tripod shape, is built from thin-walled steel profiles. The main pulley is carried in ball bearings mounted on the frame. The arm and forearm fixtures are made of wooden board. All fixtures are open in the direction towards the subject, who sits next to the device. Such a design allows the arm and forearm to be fixed easily: the subject just inserts his/her arm and forearm between the boards. The device is driven by an induction motor. Generally, the rotation velocity of an induction motor is only slightly affected by its load. This keeps the speed of movement steady during the measurement without the need for a closed-loop speed regulator. The range of motion is limited by two limit switches mounted above the opposite ends of the driving screw. The limit switches are triggered by the nut that moves on the screw. A nose on the nut fits into the T-groove in the metal plate and prevents rotation of the nut. All control circuitry is placed in the control panel near the motor.
The circuitry includes the motor reverse switch, start and stop buttons and a braking circuit (the motor is braked by powering its windings with DC). The device is capable of measuring both the left and the right arm. To measure the other arm, the main pulley bearing house has to be repositioned to the opposite side of the frame and the fixtures have to be adjusted. The device can also measure in both directions (forearm moving up or down), depending on the way the cable is looped around the main pulley. A force transducer U9A by HBM (range 0 to 10 kN, full strain-gauge bridge) was used. The angle transducer is custom made (resolution 1 deg, quadrature encoder) and is located above the bearing house. The encoder wheel is mounted directly on the main pulley. Both transducers are connected to the Spider8 data acquisition system [2]. During initial tests it was discovered that it is very difficult to switch on the motor and trigger the capturing of data exactly at the moment when the subject starts to exert the force. This was solved using the Spider8 internal triggering feature (the capturing starts when a measured value crosses a preset threshold). Moreover, the Spider8 provides two status signals (MSR and RDY, measure and ready) on its I/O socket. The status signals were used to start the motor, and the motor control circuitry was modified accordingly. This further simplifies the whole measurement: the subject just sits on the chair next to the device and inserts his/her arm and forearm between the fixtures. Once the subject starts to exert the force, the rise of the force triggers data capturing and the motor is turned on. When the nut reaches the limit switch, the motor is stopped. The data capturing is stopped by timeout.
III. RESULTS
The obtained data are processed after the measurement. The signal from the force transducer is filtered using a low-pass filter to remove noise. The signal from the angle transducer is interpolated using a shape-preserving piecewise cubic Hermite interpolant. Because the change of the angle is monotonic, this results in a virtually increased resolution of the angle transducer. Finally, both courses are matched to get the desired relationship between the force and the elbow joint angle.
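A minimal sketch of this processing chain (sampling rate, filter order, cutoff and crop limits are illustrative choices of ours, not values from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import PchipInterpolator

def force_angle(t, force, angle, fs=100.0, cutoff=5.0):
    """Match filtered F(t) with interpolated phi(t) to obtain F(phi)."""
    # Zero-phase low-pass filtering removes noise from the force signal.
    b, a = butter(4, cutoff / (fs / 2.0))
    f_filt = filtfilt(b, a, force)
    # Interpolate the 1-deg quadrature-encoder signal only at its step edges
    # with a shape-preserving cubic Hermite interpolant; with a monotonic
    # angle course this acts as a virtual resolution increase.
    edges = np.flatnonzero(np.diff(angle)) + 1
    phi = PchipInterpolator(t[edges], angle[edges])(t)
    # Crop the boundary parts where the device accelerates and decelerates.
    keep = (phi > 5.0) & (phi < 145.0)
    return phi[keep], f_filt[keep]
```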
To illustrate the general course of the force/angle relationship, figure 4 shows results from 18 individual measurements, with their arithmetic average shown in bold. These results were obtained from 3 subjects; only the right limb was measured. In all measurements, the forearm was moving up, towards the body of the subject. The individual courses do not span the entire range from 0 to 150 degrees, because the boundary parts (where the device accelerates and decelerates) were cropped. Although the individual courses are imperfect and contain some peaks, they all exhibit a similar trend.
Fig. 4 Force/angle relationship for “up” direction (F [N] versus phi [deg])
Figure 5 shows the results of another 18 measurements, with their average shown in bold. In this case, the forearm was moving down, away from the body of the subject, so different muscles were loaded. This explains why the courses are slightly different. The results were obtained from 3 subjects and only the right limb was measured, as in the previous case. One course lies away from the cloud of the others. This course belongs to an unsuccessful measurement and is an example of an outlier automatically excluded from further processing.
Fig. 5 Force/angle relationship for “down” direction (F [N] versus phi [deg])
IV. CONCLUSION
Figures 4 and 5 show that the maximum force was obtained at ϕ = 80 deg in the “up” direction and at about ϕ = 100 deg in the “down” direction. This corresponds to our expectations and to previous preliminary results obtained by the basic measuring method. The measurement results were utilized in the design of the mobile motorized orthosis. The presented measuring device proved its usability for determination of the relationship between the force at the end of the forearm and the forearm position. The device can be used to compare this relationship between the right and left upper limb, or among different people.
ACKNOWLEDGEMENTS
This paper was written with the support of the research project AV0Z20760514 and the project CZ.1.07/2.2.00/07.0406 “Problem Based Learning”.
REFERENCES
1. Zezula M. Design and realization of measuring device for determination of human arm force depending on elbow joint angle. Brno University of Technology, Faculty of Mechanical Engineering, 2009.
2. Hottinger Baldwin Messtechnik GmbH. Spider8, Spider8-30 and Spider8-01: PC measurement electronics. Operating Manual [2006]. Available from: http://www.hbm.pl/pdf/b0405.pdf
Author: Ing. Pavel Hlavoň, Ph.D.
Institute: BUT, Faculty of Mechanical Engineering
Street: Technická 2896/2
City: Brno
Country: Czech Republic
Email: [email protected]
Subclavian Steal Syndrome – A Computer Model Approach
C. Manopoulos and S. Tsangaris
National Technical University of Athens/School of Mechanical Engineering, Laboratory of Biofluidmechanics & Biomedical Engineering, Greece
Abstract— Subclavian steal syndrome is a constellation of signs and symptoms that arise from retrograde (reversed) flow of blood in the vertebral artery or the internal thoracic artery, due to a proximal stenosis (narrowing) and/or occlusion of the subclavian artery. The present study aims to describe this syndrome in patients with subclavian steal steno-occlusive disease, based on a computer lumped-parameter model. The cerebral circulation is studied from the aortic arch to the arterial branches leaving the circle of Willis. The governing equations of the model are based on the conservation of mass at each arterial node and on the conservation of energy in every loop consisting of artery branches. In addition, the energy loss equation is employed for every artery branch. The system of equations describing the model is solved with MATLAB. The results show that when the short low-resistance path (along the subclavian artery) becomes a high-resistance path (due to narrowing), blood flows around the narrowing via the arteries that supply the brain (left and right vertebral artery, left and right internal carotid artery).
Keywords— Subclavian Steal Syndrome, Subclavian Steal Steno-Occlusive Disease, Arterial Stenoses, Cerebral Circulation, Circle of Willis.
I. INTRODUCTION
The subclavian steal syndrome (SSS) occurs when there is stenosis or occlusion of the subclavian artery proximal to the origin of the vertebral artery. This may cause flow reversal in the ipsilateral vertebral artery, as blood is 'stolen' from the circular vertebro-basilar system to supply the distal territory of the occluded or stenosed artery. Retrograde flow in the vertebral artery, associated with a subclavian or innominate (brachiocephalic) artery stenosis, can be an incidental finding during Doppler ultrasound examination of the cerebral supply. Contorni first described retrograde flow in the vertebral artery in 1960 [1]. Reivich in 1961 first recognized the association between this phenomenon and neurologic symptoms [2]. The same year, Fisher dubbed this combination of retrograde vertebral flow and neurologic symptoms subclavian steal syndrome, suggesting that blood is stolen by the ipsilateral vertebral artery from the contralateral vertebral artery [3]. It was later suggested that such "steal" may cause brainstem ischemia and stroke, either continuously or secondary to arm exercise.
Lord et al. statistically evaluated in vivo the arteriograms and other findings of 42 patients with the subclavian steal syndrome to determine which factors predisposed to vertebrobasilar ischemia [4]. In vitro studies on the syndrome were done by Rodkiewicz et al. in 1992-93 [5, 6]. They used an experimental model of the arterial system to show that, for certain large occlusions in the left or right subclavian or in the brachiocephalic artery, the blood flow reverses its direction in the left or the right vertebral, the right common carotid, or the right internal carotid arteries. It was shown that besides the known single steal syndrome there also exists a double steal syndrome, i.e., blood reverses its flow direction simultaneously in two arteries, both on the right side of the arterial system. This blood is taken from the circle of Willis, which at the same time is significantly supplemented by the increased blood flow through the other arteries leading into the circle of Willis.
II. MATERIALS AND METHODS
A. Cerebral Circulation
The blood supply of the brain is achieved through a network of blood vessels (arteries and veins) constituting the cerebral circulation. The arteries deliver oxygenated blood, glucose and other nutrients to the brain, and the veins carry deoxygenated blood back to the heart, removing carbon dioxide, lactic acid and other metabolic products. Since the brain is very vulnerable to compromises in its blood supply, the cerebral circulatory system has many safeguards. Failure of these safeguards results in cerebrovascular accidents, commonly known as strokes. Blood is supplied to the human brain by two pairs of arteries, the internal carotid arteries (right and left) and the vertebral arteries (right and left), called the supra-aortic arteries. The arteries of the brain originate from the two internal carotid arteries and from the basilar artery, which is formed by the union of the two vertebral arteries (Fig. 1). The intracranial circulation assumes the form of a polygon, called the circle of Willis [7]. The posterior cerebral artery is the anatomical and functional junction between the anterior circulation (carotid system) and the posterior circulation (vertebro-basilar system) of the circle of Willis.
Fig. 1 Heart-to-brain main artery circuit. Depicted are: (1) circle of Willis-CoW, (2) arch of aorta, (3) ascending aorta, (4) descending aorta, (5) right & left main coronary arteries, (6) brachiocephalic trunk or innominate artery-IA, (7) left subclavian artery-SA, (8) right subclavian artery-SA, (9) left [right] common carotid artery-CCA, (10) left [right] external carotid artery-ECA, (11) left [right] internal carotid artery-ICA, (12) left [right] vertebral artery-VA, (13) basilar artery-BA, (14) left [right] posterior cerebral artery-PCA, (15) left [right] posterior communicating artery-PCoA, (16) left [right] middle cerebral artery-MCA, (17) left [right] anterior cerebral artery-ACA, (18) anterior communicating artery-ACoA, (a) right [left] posterior inferior cerebellar artery, (b) anterior spinal artery, (c) right [left] anterior inferior cerebellar artery, (d) pontine arteries, (e) right [left] superior cerebellar artery, (f) right [left] anterior choroidal artery, (g) right [left] ophthalmic artery, (h) anteromedial central (perforating) arteries, (i) Heubner's recurrent artery, (j) right [left] labyrinthine (internal acoustic) artery
The blood supplied to the brain in a given time is defined as the cerebral blood flow (CBF). In an adult, CBF is typically 750 ml/min, or 15% of the cardiac output. This equates to 50 to 54 ml of blood per 100 grams of brain tissue per minute [8]. CBF is tightly regulated to meet the brain's metabolic demands. The arrangement of the brain's arteries into the circle of Willis (Fig. 1) creates redundancies in the cerebral circulation. If one part of the circle becomes blocked or narrowed (stenosed), or one of the arteries supplying the circle is blocked or narrowed, blood flow from the other blood vessels can often preserve the cerebral perfusion well enough to avoid the symptoms of ischemia [9].
B. The Theoretical Model
Figure 2 shows a lumped-parameter model focused on the cerebral circulation. This model assumes symmetric circulation routes via the brain vessels (with non-flexible walls) from the aorta to the right heart atrium. The model shown in Figure 2 is equivalent to the circuit shown in Figure 1. For simplicity, the local resistances of the individual vessels are expressed as lumped parameters.
Fig. 2 Lumped-parameter model of the cerebral circulation showing the extra stenosis resistance (Rs) for the right subclavian artery
Assume the pathological condition where the right subclavian artery demonstrates a stenosis percentage α ≥ 50 %, defined as follows:

$$\alpha \,\hat{=}\, \frac{A_0 - A_s}{A_0}\times 100\,\% \qquad (1)$$

where A0 is the internal cross-section of the healthy proximal part of the right subclavian artery and As is the minimum cross-section of the narrowed part of the right subclavian artery due to the development of atheromatous plaque. In this circuit, for each side, R1 is the total resistance of the heart cardiovascular system, R0 is the total resistance after the aortic arch up to the right atrium, R2 is the total resistance from the end of the subclavian artery up to the right atrium, Rs is the resistance caused by the stenosis of the right subclavian artery, R3 is the total resistance from the end of the common carotid artery up to the right atrium, and R4, R5 and R6 are the total resistances from the end of the posterior, middle and anterior cerebral artery, respectively, up to the right atrium. Considering steady blood flow, and assuming that the pressure gradient between the aorta and the right atrium is 80 mmHg with a mean blood supply of 6.1 l/min (corresponding to the normal total cardiac output), a normal reference blood flow distribution is obtained, as described in reference [6], from the aortic valve up to the circle of Willis. By this assumption, the unknown resistances and the normal distribution of cerebral blood flow of the model are determined, taking into account the blood flow symmetry in the brain and the ratio R6/R5 = 2 according to reference [10]. The resistances of the normal (unoccluded) arteries are calculated by the equation resulting from the Poiseuille formula:

$$R_{i\text{-}j} = \frac{8\mu L_{i\text{-}j}}{\pi r_{i\text{-}j}^{4}} \qquad (2)$$

where μ = 0.0365 poise is the dynamic viscosity of the blood, and L(i-j) and r(i-j) are the length and the radius, respectively, of the artery starting from node i and ending at node j. The length and diameter of each arterial segment have been taken from references [6] and [10]. The solution of the circuit of Figure 2 for the non-pathological condition (α = 0 %) is achieved by calculating all the branch blood flow-rates (Qi-j)n and the unknown resistances. The equations used are based on the continuity of mass at each arterial node and on the conservation of energy in every loop consisting of artery branches. In addition, the energy loss equation is employed for every artery branch. The non-linear system of equations describing the model is solved with MATLAB. After this assessment of the normal blood flow distribution of the model, the circuit of Figure 2 is solved again by calculating all branch flow-rates for stenosis percentages α = 50, 60, 70, 80, 90 and 100 %. The resistance caused by the stenosis in the right subclavian artery is calculated according to the following equation:

$$R_s = \frac{8\pi\mu L_s}{A_s^{2}} + \frac{4.8\,\pi\mu}{A_s^{3/2}} + \frac{\rho Q_s}{2A_s^{2}} \qquad (3)$$

which represents the stenosis as a sudden contraction followed by a sudden expansion [11]. In equation (3), Ls = 10 mm is the length of the stenosis in the right subclavian artery, indicating the size of the development area of the atheromatous plaque, ρ = 1.055 g/cm³ is the blood density, and Qs is the blood flow-rate in the right subclavian artery. As the stenosis percentage increases, the Reynolds number becomes higher, and turbulence and inertia phenomena occur, especially at the blood outlet from the stenosis point. The Reynolds number is calculated as follows:

$$Re = \frac{\rho Q_s}{\mu A_0}\sqrt{\frac{4A_0}{\pi}} \qquad (4)$$
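As an illustration of this solution strategy (mass conservation at the nodes, energy conservation around the loops, and the flow-dependent stenosis resistance of Eq. (3) making the system non-linear), the following sketch solves a strongly reduced two-loop analogue of the circuit with SciPy rather than MATLAB; all geometric values are invented for the example and are not the paper's data:

```python
import numpy as np
from scipy.optimize import fsolve

MU, RHO = 3.65e-3, 1055.0        # blood viscosity [Pa s] and density [kg/m^3]
DP = 80 * 133.322                # aorta-to-right-atrium gradient [Pa]

def poiseuille_R(L, r):          # Eq. (2): resistance of an unoccluded segment
    return 8.0 * MU * L / (np.pi * r**4)

def stenosis_R(Q, Ls, As):       # Eq. (3): flow-dependent, hence non-linear
    return (8*np.pi*MU*Ls/As**2 + 4.8*np.pi*MU/As**1.5
            + RHO*abs(Q)/(2*As**2))

R1     = poiseuille_R(0.05, 5.0e-3)   # shared aortic segment (invented)
R_coll = poiseuille_R(0.30, 1.5e-3)   # collateral vertebral/CoW path (invented)
A0 = np.pi * (2.0e-3)**2              # healthy subclavian cross-section
As = (1.0 - 0.80) * A0                # 80 % stenosis, Eq. (1)

def residuals(q):
    q_sub, q_coll = q
    dp_sten = stenosis_R(q_sub, 0.01, As) * q_sub
    return [DP - R1*(q_sub + q_coll) - dp_sten,   # energy balance, loop 1
            dp_sten - R_coll*q_coll]              # parallel branches, loop 2

q_sub, q_coll = fsolve(residuals, [1e-6, 1e-6])
print(f"subclavian {q_sub*6e7:.0f} ml/min, collateral {q_coll*6e7:.0f} ml/min")
```

In the full model the same residual structure is written for every node and loop of Figure 2, and the sweep over α repeats the solve for each narrowed cross-section As.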
III. RESULTS AND DISCUSSION
The quantitative analysis of the results concerning the blood flow-rates for different degrees of stenosis is shown in Table 1. The numerical order of the nodes in the first column of Table 1 shows the direction of normal blood flow when α = 0 %.
Table 1 Dimensionless blood flow-rates Qi-j/(Qi-j)n for several stenosis percentages α [(Qi-j)n is the normal blood flow of each branch when α=0%]

Branch (i-j)  α=50%    60%      70%      80%      90%      100%
2-6           1.0953   1.1376   1.2139   1.3651   1.6870   2.1290
6-17          1.1333   1.1926   1.2993   1.5108   1.9612   2.5796
2-5           1.0786   1.1151   1.1807   1.3109   1.5882   1.9688
2-3           0.9116   0.8716   0.7996   0.6570   0.3532   -0.0638
3-4           0.9194   0.8817   0.8139   0.6795   0.3931   0.0000
4-9           0.8122   0.7243   0.5663   0.2529   -0.4146  -1.3308
5-9           1.1975   1.2893   1.4544   1.7817   2.4789   3.4359
3-7           0.9059   0.8643   0.7894   0.6408   0.3245   -0.1097
6-0           0.9997   0.9996   0.9994   0.9990   0.9981   0.9969
7-0           1.0003   1.0004   1.0007   1.0012   1.0022   1.0037
4-0           0.9914   0.9874   0.9801   0.9658   0.9353   0.8934
5-0           0.9999   0.9999   0.9998   0.9997   0.9994   0.9990
9-11          1.0033   1.0046   1.0068   1.0113   1.0207   1.0336
7-14          0.8680   0.8096   0.7045   0.4961   0.0524   -0.5567
11-12         1.0144   1.0206   1.0316   1.0535   1.1003   1.1644
11-13         0.9923   0.9886   0.9820   0.9689   0.9411   0.9028
13-10         0.9954   0.9933   0.9895   0.9820   0.9659   0.9439
12-8          0.9953   0.9931   0.9892   0.9814   0.9648   0.9420
17-13         1.1085   1.1621   1.2584   1.4495   1.8564   2.4150
14-12         0.2916   -0.0173  -0.5729  -1.6747  -4.0211  -7.2423
17-18         0.9973   0.9961   0.9939   0.9895   0.9803   0.9670
14-21         1.0027   1.0039   1.0061   1.0104   1.0196   1.0323
14-15         0.6141   0.4433   0.1358   -0.4738  -1.7720  -3.5542
17-16         1.4028   1.5818   1.9037   2.5423   3.9020   5.7687
15-20         1.0075   1.0108   1.0167   1.0286   1.0537   1.0883
16-19         0.9922   0.9887   0.9825   0.9701   0.9437   0.9075
Observing the dimensionless blood flow-rate values in Table 1, the following behavior of the blood flow distribution is noticed for stenosis percentages α > 50 %. As the degree of stenosis in the right subclavian artery increases, the flow-rates to the left side increase in comparison with those to the right side, where the flow-rates decrease. Namely, the flow-rates from the aortic arch up to the CoW in the LSA, LCCA, LICA and LVA increase significantly, while the corresponding ones in the RSA, RCCA, RICA and RVA decrease significantly, with the last three reversing their directions for high degrees of stenosis. Figure 3 presents the dimensionless flow-rates of some vessels along with the increasing stenosis in the RSA.
Fig. 3 Dimensionless blood flow-rates Q/Qn along with stenosis percentages α; Qn is the normal blood flow of each branch when α=0% (curves for LVA, LICA, LCCA, LSA, RSA, RCCA, RICA and RVA; the increment, reduction and reversed areas of blood flow are indicated)
The reversed flow applies to the IA too, since it lies on the right side. A slight blood flow-rate increment is observed in the BA. This increment helps the blood to move (together with the rise of the LICA flow-rate) through the CoW around the stenosis. In this way the blood circulation is maintained. Due to the high values of the resistances R2 and R3, the blood flow-rates in the branches 4-0, 5-0, 6-0 and 7-0 remain almost constant. Slight variations of blood flow occur in the brain as the percentage of stenosis increases. Namely, a small blood flow reduction happens in the LPCA, RPCA, LMCA and LACA, while a small increment takes place in the RMCA and RACA. Finally, in the CoW the blood flow reverses in the RPCoA as soon as α = 60 %. A further stenosis increment, up to 75 %, also makes the blood flow-rate Q14-15 reverse. The flow-rates Q17-13 of the LPCoA and Q17-16 rise significantly, while Q11-12 rises a little. Moreover, the flow-rate of the ACoA is directed from the left to the right (16 to 15) and increases as the degree of stenosis does.
IV. CONCLUSIONS
A simple model has been developed to describe the subclavian steal syndrome when the heart-to-brain main artery circuit is occluded at one location on the right side (e.g., the beginning of the RSA). Steady blood flow in a circuit with non-flexible vessels is considered. It has been shown that when the short low-resistance path (along the subclavian artery) becomes a high-resistance path (due to narrowing), blood flows around the narrowing via the arteries that supply the brain (left and right vertebral artery, left and right internal carotid artery). Three areas of blood flow are distinguished as the RSA is narrowed: the area of blood flow increment, located in the left-side vessels, and the reduction and reversed areas of blood flow, located on the right side. However, further investigation is needed to show the influence on the blood flow of additional occluded locations (e.g., in the IA and/or LSA), and unsteady blood flow should be taken into account in order to achieve more realistic results.
REFERENCES
1. Contorni L (1960) The vertebro-vertebral collateral circulation in obliteration of the subclavian artery at its origin. Minerva Chir 15:268-271. 2. Reivich M, Holling H E, Roberts B, Toole J F (1961) Reversal of blood flow through the vertebral artery and its effect on cerebral circulation. N Engl J Med 265:878-885. 3. Fisher C M (1961) A new vascular syndrome: "The subclavian steal". N Engl J Med 265:912-913. 4. Lord R S A, Adar R, Stein R L (1969) Contribution of the Circle of Willis to the Subelavian Steal Syndrome. Circulation XL:871-878. 5. Rodkiewicz C M, Centkowski J, Zajac S (1992) On the subclavian steal syndrome. In vitro studies. J Biomech Eng 114:527-532. 6. Rodkiewicz C M, Centkowski J, Zajac S (1993) On the subclaviancarotid transportation for the subclavian steal syndrome. In vitro studies. J Biomech Eng 115:205-206. 7. Feneis H, Dauber W (2000) Pocket Atlas of Human Anatomy. Thieme, Stuttgart - New York. 8. Nuffield Department of Anaesthetics at: http://www.nda.ox.ac.uk/wfsa/html/u08/u08_013.htm 9. De Boorder M J, Van der Grond J, Van Dongen A J, Klijn C J M, Kappelle J L, Van Rijk P P, Hendrikse J (2006) SPECT measurements of regional cerebral perfusion and carbondioxide reactivity: Correlation with cerebral collaterals in internal carotid artery occlusive disease. J Neurol 253:1285–1291. 10. Hillen B, Hoogstraten H W, Post L (1986) A mathematical model of the flow in the circle of Willis. J Biomech 19:187-194. 11. May A G, De Weese J A, Rob C G (1963) Hemodynamic effects of arterial stenosis. Surgery 53:513-524.
Mullins Effect in Human Aorta Described with Limiting Extensibility Evolution
L. Horny1, E. Gultova1, H. Chlup1,2, R. Sedlacek1, J. Kronek1, J. Vesely1 and R. Zitny3
1 Laboratory of Biomechanics, Faculty of Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic
2 Institute of Thermomechanics, Academy of Sciences of the Czech Republic, Prague, Czech Republic
3 Department of Process Engineering, Faculty of Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic
Abstract— Cyclic uniaxial tensile tests with samples of human aorta were performed with the aim of obtaining data describing the Mullins effect of arterial tissue. In view of the widely reported anisotropy, samples oriented both longitudinally and circumferentially were tested. Each tested sample underwent cyclic tension up to a particular value of stretch four times; subsequently, the maximum stretch limit was increased and four further cycles were performed. Significant stress softening of the aortic tissue and residual strains were confirmed. An idealization was made in such a way that reloading and unloading curves are coincident. It was hypothesized that the stress softening observed during reloading of previously loaded tissue may be described by an evolution of material parameters. These parameters should be related to an alteration of the internal structure. We propose a model based on changes in the limiting fiber extensibility of the fibrillar component of the aortic wall, primarily represented by collagen. The arterial wall was assumed to be a hyperelastic transversely isotropic material with different responses under primary loading and unloading. The stored energy function was additively split into an isotropic and an anisotropic part. The preferred direction in the continuum, defined in the referential configuration, was assumed to be unchanged by cyclic loading. Every straining level in the cyclic test had its own value of fiber extensibility, related to the strain maximum previously reached. The isotropic matrix response was modeled using a neo-Hookean term with different shear modulus values under primary loading and reloading; however, all reloading values were held the same. The predictions of the model described above were in good agreement with the observations.
Keywords— aorta, damage, limiting fiber extensibility, Mullins effect, stress softening.
I. INTRODUCTION
Significant progress has been made in the area of constitutive modeling of blood vessels in the last decade. Since 2000, when Holzapfel et al. [1] published their model of an artery in which anisotropy arises from helically arranged bundles of collagenous fibrils, this approach has been dominant. Successful applications in computational analyses were reported, for example, by Cacho et al. [2] in a coronary artery bypass surgery simulation and by Holzapfel et al. [3] in artery-stent interaction. This model has recently been modified to account for distributed collagen fibril orientations [4].
Strain energy functions based on exponential terms originate from Y. C. Fung and his research in the 1970s. This mathematical form has been validated many times as able to capture the material nonlinearity of biological tissues. In 2005, Horgan and Saccomandi [5] suggested a strain energy function motivated by the idea of limiting chain extensibility, which had been used successfully in polymer mechanics [6,7]. In [5] this approach was modified to limiting fiber extensibility, suitable for composite materials with progressive large-strain stiffening. Horny et al. used this model to describe the mechanical response of a coronary artery bypass graft [8] and in constitutive modeling of human aorta [9]. The models mentioned above are capable of describing the elastic arterial response. However, it is well known that blood vessels also show inelastic effects [1]. Visco-elastic behavior (damping, creep, relaxation) can be captured using an internal variables approach [10]. Another inelastic phenomenon observable in arteries in vitro is stress softening similar to the Mullins effect well known in polymer physics. This kind of strain-induced softening is named after Leonard Mullins, owing to his extensive research on this topic during the last century [11,12]. The Mullins effect in idealized form is described as follows. When previously non-loaded (so-called virgin) material is loaded to a particular value of deformation, the stress-strain curve is usually called the primary loading curve. During unloading, the stress-strain curve does not coincide with primary loading and stress softening is observed. The subsequent loading curve coincides with the unloading curve of the previous cycle. When the previous maximum deformation is reached, the stress-strain curve returns to the previous stress maximum and primary loading continues. This behavior is depicted in Fig. 1. Besides the Mullins effect, soft tissues exhibit further softening during so-called preconditioning, defined as the deformation process necessary to obtain a repeatable mechanical response under cyclic loading. The Mullins effect in elastomeric materials is also associated with the presence of residual strains and induced anisotropy [13,14]. Many attempts have been made to find a suitable constitutive description of the Mullins effect [13]. However, the best choice is still in question. There are two main theories in use.
Fig. 1 Typical stress-strain curve recorded in cyclic uniaxial tension of the human thoracic aorta (a circumferentially oriented strip is depicted). Each level of maximum stretch (λ = 1.1; 1.2; 1.3; 1.4 and 1.5) was cycled four times. Unloading and reloading curves do not coincide exactly, and significant hysteresis remains after four cycles. However, the bulk of the stress-softening phenomenon occurs between the primary loading and the first unloading.
The first approach is based on Continuum Damage Mechanics, which operates with a damage parameter considered as an internal variable. The method was proposed by Simo [15] and applied, for example, by Guo and Sluys [16] and Gracia et al. [17]. Ogden and Roxburgh [18] proposed a pseudo-elastic model to describe the Mullins effect; this was further modified by Dorfmann and Ogden [19] to capture residual strains. Continuum damage theory and the theory of pseudo-elasticity are similar; however, the first explicitly involves the Clausius-Planck inequality. The above-mentioned models are purely phenomenological. The second main approach is physically motivated, and several considerations regarding the internal structure of the material are made [13]. The leading one is the so-called network alteration theory, see e.g. Marckmann et al. [20]. A similar idea may be traced back to Mullins and Tobin [12], who supposed that rubber is a two-phase continuum in which a stiff phase is transformed into a compliant one depending on the deformation. Modern network alteration theory incorporates knowledge about the macromolecular structure of rubber. The Mullins effect is considered to be a consequence of the breakage of links and the increasing length of chains, Chagnon et al. [21]; thus softening is based on a deformation-dependent network evolution. The softening effect and the pseudo-elastic mechanical response of arteries have been known for a long time [22]. However, only a few attempts have been made to develop new theories; publication activity has grown especially in the last two years. Pena and Doblare [23] used an anisotropic pseudo-elastic approach to reproduce the Mullins effect
observed in uniaxial tension of a vena cava. Damage mechanics was employed by Pena et al. [24] in modeling aortic uniaxial tension. A generalized model based on internal variables applicable to arteries was also proposed by Ehret and Itskov [25]; this model can account for preconditioning behavior. The main goal of this paper is to present the hypothesis that the Mullins effect observed in uniaxial tension of an artery can be captured by methods of pseudo-hyperelasticity using a limiting fiber extensibility that depends on the previous maximum deformation. Two modes of material response are supposed, both governed by a hyperelastic description: the first is primary loading, and the second is unloading/reloading described with a changed limiting fiber extensibility parameter. To account for the changed response of the isotropic matrix, different shear modulus values under primary loading and reloading are also supposed; however, a constant value is held for every unloading/reloading.

II. METHODS
Cyclic uniaxial tensile tests with four samples of human thoracic aorta were performed with an MTS Mini Bionix testing machine (MTS, Eden Prairie, USA). Samples were obtained from cadaveric donors with the approval of the ethics committee of the University Hospital Na Kralovskych Vinohradech in Prague. All experiments were performed within 48 hours after death. Two samples were tested in the direction aligned circumferentially with respect to the natural configuration of the artery, and the other two were aligned longitudinally. The referential geometry was obtained via digital image analysis performed in NIS-Elements software (Nikon Instruments Inc., Melville, USA). Extension and loading force were measured by the testing machine. The cyclic loading was applied as follows: five levels of maximum deformation were prescribed, corresponding to stretches λ = 1.1; 1.2; 1.3; 1.4 and 1.5, where λ is the ratio between the current length l and the referential length L. Each level was cycled four times; for instance, after four cycles limited to λ = 1.1, the maximum deformation was increased to 1.2 and four new cycles were performed. Employing the incompressibility assumption, loading stresses σ were obtained as

σ = λF/S   (1)

In equation (1), F denotes the applied force and S is the cross-sectional area in the reference configuration.
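As a minimal illustration of Eq. (1), the conversion from recorded force to stress might be sketched as follows; the numerical values are hypothetical, and only the formula itself comes from the text.

```python
import numpy as np

lam = np.array([1.0, 1.1, 1.2, 1.3])   # stretches λ = l / L (hypothetical)
F = np.array([0.0, 0.6, 1.8, 4.2])     # measured axial forces [N] (hypothetical)
S = 10.0e-6                            # reference cross-section [m^2] (hypothetical)

# Eq. (1): Cauchy stress under the incompressibility assumption.
sigma = lam * F / S                    # [Pa]
print(sigma / 1e3)                     # stresses in kPa
```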
III. MODEL
Arteries were modeled as an incompressible hyperelastic continuum in which the anisotropy arises from reinforcement by one family of fibers. In such a case the constitutive equations can be written in the form (2):

σi = λi ∂W/∂λi − p,   i = 1, 2, 3   (2)

Here p is the Lagrange multiplier determined from the boundary conditions and W denotes the stored energy function. The model based on limiting fiber extensibility, originally proposed in [5], was incorporated in the form published in [8]; it is expressed in equation (3):

W = (c/2)(I1 − 3) − (μJf/2)·ln[1 − (λ1²cos²β + λ2²sin²β − 1)²/Jf²]   (3)

Here I1 denotes the first invariant of the right Cauchy-Green strain tensor, c and μ are stress-like material parameters, and Jf is the dimensionless limiting fiber extensibility parameter. β denotes the declination of the fibers from the circumferential direction in the natural configuration of the artery. λ1 is the stretch ratio of a strip aligned with the circumferential direction of the natural (tubular) configuration of the artery (λ2 is aligned longitudinally). The model (3) is called a limiting extensibility model due to the existence of a finite value of the fiber stretch λf (λf² = λ1²cos²β + λ2²sin²β) at which the stored energy approaches infinity. Shear strains/stresses were not considered in the model.
Table 1 Material parameters

c [kPa]   μ0 [kPa]   μ12 [kPa]   Jf0 [1]   Jf1 [1]   Jf2 [1]   β [°]
83.4      1877       36.9        0.3415    0.0226    0.0917    50.94
According to (2), σ1 and σ2 can be computed (σ3 = 0 is used to determine p). Eq. (3) generally involves three stretch ratios, reduced to two under the material incompressibility assumption. However, only displacements in the loading direction were recorded (due to the testing machine design). We therefore incorporated an additional boundary condition restricting the transversal stress to zero (the lateral surfaces of the strip are load-free).
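A minimal numerical sketch of this procedure is given below, assuming the primary-loading parameter values of Table 1. The finite-difference derivatives, the root-finding bracket and the function names are implementation choices of the sketch, not part of the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Primary-loading parameters from Table 1 (c, mu in kPa; Jf dimensionless).
c, mu, Jf, beta = 83.4, 1877.0, 0.3415, np.radians(50.94)

def W(l1, l2):
    # Eq. (3) with the incompressibility constraint l3 = 1/(l1*l2).
    I1 = l1**2 + l2**2 + 1.0 / (l1 * l2) ** 2
    I4 = (l1 * np.cos(beta)) ** 2 + (l2 * np.sin(beta)) ** 2
    return 0.5 * c * (I1 - 3.0) - 0.5 * mu * Jf * np.log(
        1.0 - (I4 - 1.0) ** 2 / Jf**2)

def stresses(l1, l2, h=1e-6):
    # Eq. (2) with p eliminated via sigma_3 = 0: sigma_i = l_i * dW/dl_i,
    # derivatives approximated by central differences.
    s1 = l1 * (W(l1 + h, l2) - W(l1 - h, l2)) / (2 * h)
    s2 = l2 * (W(l1, l2 + h) - W(l1, l2 - h)) / (2 * h)
    return s1, s2

def uniaxial_stress(l1):
    # Transverse boundary condition: find l2 such that sigma_2 = 0.
    l2 = brentq(lambda x: stresses(l1, x)[1], 0.5, 1.0)
    return stresses(l1, l2)[0]

print(uniaxial_stress(1.2))   # axial Cauchy stress [kPa]
```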
IV. RESULTS

The primary loading curve and the fourth unloading curve for λ = 1.1 and λ = 1.2 were included in the regression analysis (the idealization assumes unloading = reloading). Material parameters were estimated using a weighted least squares method in Maple (Maplesoft, Waterloo, Canada); they are summarized in Table 1. Parameters c and β were held the same for all data. The shear modulus μ0 was used for the primary loading curve and μ12 for the unloading curves. The limiting extensibility parameter was varied with successive loading as follows: Jf0 – primary loading; Jf1 – cycles limited to λ = 1.1; Jf2 – cycles limited to λ = 1.2. The material parameters used can thus be summarized as: primary loading – c, μ0, Jf0 and β; unloading/reloading limited to λ = 1.1 – c, μ12, Jf1 and β; unloading/reloading limited to λ = 1.2 – c, μ12, Jf2 and β.
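For orientation, a weighted least-squares fit of this kind can be sketched in Python. The exponential model function below is only a convenient placeholder standing in for the full uniaxial response of Eqs. (2)-(3), and the data points, weights and starting values are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical digitized primary-loading data (stretch, stress in kPa).
lam_data = np.array([1.05, 1.15, 1.25, 1.35, 1.45])
sig_data = np.array([12.0, 35.0, 80.0, 170.0, 390.0])

def model(params, lam):
    # Placeholder exponential response; stands in for Eqs. (2)-(3).
    a, b = params
    return a * (np.exp(b * (lam - 1.0)) - 1.0)

def weighted_residuals(params):
    w = 1.0 / np.maximum(sig_data, 1.0)   # one common weighting choice
    return w * (model(params, lam_data) - sig_data)

fit = least_squares(weighted_residuals, x0=[10.0, 5.0])
print(fit.x)                              # estimated (a, b)
```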
Fig. 2 Typical stress-strain curves and model fitting.
V. CONCLUSIONS
It was found that the stored energy density function (3), based on limiting fiber extensibility, is capable of describing the stress-strain curves obtained from cyclic uniaxial tensile tests. Varying two parameters (μ and Jf) was sufficient to describe the Mullins-like stress softening. It was suggested that the changes in these parameters are strain-induced, probably due to an alteration of the internal structure of the material. This is only a pilot study; however, the results suggest that an appropriate description of the Mullins effect in arteries would be obtained if a suitable mathematical expression for the dependence of Jf and μ on the previous maximum stretch (or deformation history) were found.
ACKNOWLEDGMENT
This work has been supported by Czech Ministry of Education project MSM 68 40 77 00 12 and Czech Science Foundation grant GACR 106/08/0557.
REFERENCES
1. Holzapfel GA, Gasser TC, Ogden RW (2000) A new constitutive framework for arterial wall mechanics and a comparative study of material models. J Elast 61:1–48
2. Cacho F, Doblaré M, Holzapfel GA (2007) A procedure to simulate coronary artery bypass graft surgery. Med Bio Eng Comput 45:819–827
3. Holzapfel GA, Stadler M, Gasser TC (2005) Changes in the mechanical environment of stenotic arteries during interaction with stents: Computational assessment of parametric stent design. J Biomech Eng Trans ASME 127:166–180
4. Gasser TC, Ogden RW, Holzapfel GA (2006) Hyperelastic modelling of arterial layers with distributed collagen fiber orientations. J R Soc Interface 3:15–35
5. Horgan CO, Saccomandi G (2005) A new constitutive theory for fiber-reinforced incompressible nonlinearly elastic solids. J Mech Phys Solids 53:1985–2015
6. Gent AN (1996) A new constitutive relation for rubber. Rubber Chem Technol 69:59–61
7. Horgan CO, Saccomandi G (2003) A description of arterial wall mechanics using limiting chain extensibility constitutive models. Biomech Model Mechanobiol 1:251–266
8. Horny L, Chlup H, Zitny R, Konvickova S, Adamek T (2009) Constitutive behavior of coronary artery bypass graft. IFMBE Proc. vol. 25/IV, World Congress on Med. Phys. & Biomed. Eng., Munich, Germany, 2009, pp 181–184
9. Horny L, Chlup H, Zitny R (2008) Strain energy function for arterial walls based on limiting fiber extensibility. IFMBE Proc. vol. 22, 4th European Conference of the International Federation for Medical and Biological Engineering, Antwerp, Belgium, 2008, pp 1910–1913
10. Holzapfel GA, Gasser TC, Stadler M (2002) A structural model for the viscoelastic behavior of arterial walls: Continuum formulation and finite element analysis. Eur J Mech A-Solids 21:441–463
11. Mullins L (1969) Softening of rubber by deformation. Rubber Chem Technol 42:339–361
12. Mullins L, Tobin N (1957) Theoretical model for the elastic behavior of filler-reinforced vulcanized rubbers. Rubber Chem Technol 30:551–571
13. Diani J, Fayolle B, Gilormini P (2009) A review on the Mullins effect. Eur Polym J 45:601–612
14. Diani J, Brieu M, Vacherand JM (2006) A damage directional constitutive model for Mullins effect with permanent set and induced anisotropy. Eur J Mech A-Solids 25:483–496
15. Simo JC (1987) On a fully three-dimensional finite-strain viscoelastic damage model: Formulation and computational aspects. Comput Methods Appl Mech Eng 60:153–173
16. Guo Z, Sluys LJ (2006) Computational modelling of the stress-softening phenomenon of rubber-like materials under cyclic loading. Eur J Mech A-Solids 25:877–896
17. Gracia LA, Pena E, Royo JM, Pelegay JL, Calvo B (2009) A comparison between pseudo-elastic and damage models for modelling the Mullins effect in industrial rubber components. Mech Res Commun 36:769–776
18. Ogden RW, Roxburgh DG (1999) A pseudo-elastic model for the Mullins effect in filled rubber. Proc R Soc London A 455:2861–2877
19. Dorfmann A, Ogden RW (2004) A constitutive model for the Mullins effect with permanent set in particle-reinforced rubber. Int J Solids Struct 41:1855–1878
20. Marckmann G, Verron E, Gornet L, Chagnon G, Charrier P, Fort P (2002) A theory of network alteration for the Mullins effect. J Mech Phys Solids 50:2011–2028
21. Chagnon G, Verron E, Marckmann G, Gornet L (2006) Development of new constitutive equations for the Mullins effect in rubber using the network alteration theory. Int J Solids Struct 43:6817–6831
22. Fung YC, Fronek K, Patitucci P (1979) Pseudoelasticity of arteries and the choice of its mathematical expression. Am J Physiol 237:H620–H631
23. Pena E, Doblare M (2009) An anisotropic pseudo-elastic approach for modelling Mullins effect in fibrous biological materials. Mech Res Commun 36:784–790
24. Pena E, Pena JA, Doblare M (2009) On the Mullins effect and hysteresis of fibered biological materials: A comparison between continuous and discontinuous damage models. Int J Solids Struct 46:1727–1735
25. Ehret AE, Itskov M (2009) Modeling of anisotropic softening phenomena: Application to soft biological tissues. Int J Plast 25:901–919
Author: Lukas Horny
Institute: Fac. of Mech. Eng., Czech Technical University in Prague
Street: Technicka 4
City: Prague
Country: Czech Republic
Email: [email protected]
A distribution of collagen fiber orientations in aortic histological section

L. Horny1, J. Kronek1, H. Chlup1,2, R. Zitny3 and M. Hulan1

1 Laboratory of Biomechanics, Faculty of Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic
2 Institute of Thermomechanics, Academy of Sciences of the Czech Republic, Prague, Czech Republic
3 Department of Process Engineering, Faculty of Mechanical Engineering, Czech Technical University in Prague, Prague, Czech Republic

Abstract— Distributions of collagen fibril and smooth muscle cell (SMC) nuclei orientations were investigated in histological sections obtained from the medial layer of human thoracic aorta. The sections were stained with van Gieson. Digital images of the sections were converted to binary pixel maps with the target component represented in white (logical unity). A selected image was used to elucidate the sensitivity to threshold conditions, and three different binary conversions were performed. The images were subsequently processed by the in-house software BinaryDirections, which uses a rotating line segment algorithm to determine significant directions in digital images. The algorithm operates so that, at each target pixel, a line segment is rotated step by step to explore the neighborhood of the pixel; in each rotation step the number of unity pixels in the neighborhood is determined. The distribution of orientations in the entire image is obtained after normalization, either as an averaged density distribution over all pixels or as a histogram of the most abundant directions in the image. It was found that both the collagen fibril and the SMC nuclei analyses give a significant peak in the distributions. Its value ranges between 45° and 65° (defined as the declination from the longitudinal axis of an artery in the tubular configuration), depending on the method. This implies that the preferred direction in the aortic medial layer is oriented circumferentially rather than longitudinally. This conclusion was almost independent of the threshold setup. The results suggest that the orientations of SMC nuclei and collagen fibrils are mutually correlated, and that the determination of collagen fibril orientations, which may be stained insignificantly, could be supported by SMC nuclei orientations to obtain more realistic models.

Keywords— aorta, anisotropy, collagen, fiber distribution, histology, probability density.

I. INTRODUCTION
Innovative biomedical engineering moves towards patient-specific (bio-)artificial implants and computational simulations. This involves the development of sophisticated constitutive models for biological tissues. These models have to account for the complex structure of the material: biological tissues comprise a large number of different cells, matrix proteins and bonding elements, with many complex interrelations between these constituents. If an arterial wall is considered, three main layers are usually distinguished, Holzapfel et al. [1], each with a different function and composition. Briefly, the innermost layer, the tunica intima, consists of endothelial cells which rest on the basal lamina. There is also a subendothelial layer dominated by collagen and smooth muscle cells (SMC). The tunica intima and the subendothelial layer are very thin and compliant in the physiological state; however, they may become mechanically significant in pathological situations, see Holzapfel et al. [3]. The tunica media, the middle layer of an artery, has a distinctive three-dimensional network structure. It is arranged into so-called musculoelastic fascicles, which consist of SMC, bundles of collagen fibrils and elastin; they are separated by fenestrated elastic laminae and repeat through the medial layer [2]. The directional arrangement of the fibrillar components in the media is helical with a small pitch (almost circumferential orientation) [1,2]. The tunica adventitia is the outermost layer, in which especially fibroblasts, fibrocytes and collagen fibers are present. The collagen forms two helically arranged families of fibers with significantly dispersed individual fibers. The above-mentioned layers are separated by elastic laminae. However, the described structure is valid only in the case of an elastic artery; the histological structure of muscular arteries may differ significantly. For more details see [1] and [2] and references therein. Models of the internal structure are being implemented in constitutive theories. Nowadays, constitutive theories based on continuum mechanics can incorporate information about the fibrillar structure to predict the appropriate anisotropic behavior of a material. The anisotropy may arise either from a finite number of preferred directions or from their continuous distribution (for a finite number of directions see Holzapfel et al. [1,6] or Lanir [4]; for continuous distributions see Gasser et al. [2], Lanir [4] or Driessen et al. [5]). It is also possible to build a model with a composite structure (mixture theory), which is intrinsically multiphase. It is well known that the fibrillar components of biological tissues are usually crimped; models accounting for uncrimping and successive straining within the loading process have also been proposed, see Lanir [4], Freed and Doehring [7] or Cacho et al. [8]. This paper deals with the internal structure of the arterial wall based on observations made during an inspection of histological sections. There are different ways to obtain such information. Analysis of histological sections employing transmitted light microscopy (TLM) is probably the oldest and simplest one. Nevertheless, certain drawbacks
exist. First of all, a section is mechanically loaded during sectioning, and chemical fixation (in formalin) may also change the internal structure. Moreover, this method does not seem suitable for investigating rearrangements of the internal structure under loading, which is of cardinal importance, because load fixation is complicated. Such papers nevertheless exist; Sokolis et al. [9] reported straightening of elastic lamellae under uniaxial tension. Polarized light microscopy (PLM) may be used in the same way because of the birefringence of collagen and SMC, Canham et al. [10], [11]. Both studies revealed approximately circumferential orientations of collagen fibrils and SMC in coronary arteries and the internal mammary artery. Small-angle scattering methods may be used as an alternative to optical microscopy. Small-angle X-ray scattering (SAXS) seems capable of revealing the relation between fiber orientation and external load at the nano-scale, Schmid et al. [12]. Small-angle light scattering (SALS) may be employed for micro-scale rather than nano-scale investigations of collagen organization in a tissue, Sacks et al. [13]. The main goal of this paper is to present findings obtained from the investigation of histological sections of human abdominal aorta using digitized images. The in-house software BinaryDirections, with the implemented rotating line segment (RLS) algorithm, see Horny et al. [14], was used to quantify the organization of collagen fibrils and SMC nuclei. First the RLS algorithm is described; the obtained results follow.

II. METHODS
Histological sections were obtained from the abdominal aorta of a 36-year-old male donor. They were stained with van Gieson and digitized under 10× magnification. Four images were the same as in [14] and were used to confirm previous results, since the RLS algorithm in BinaryDirections was improved in 2009. An additional image from the medial layer was selected to examine whether a correlation between collagen fiber and SMC nuclei orientations exists. This image was also used in the analysis of threshold sensitivity, to elucidate the influence of the binary conversion on the results; the threshold of the RGB filter transforms stained collagen to white (logical unity) pixels and non-collagen components to black (logical zero) pixels.
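As an illustration of such a binary conversion, a minimal sketch follows. The file name and the RGB threshold values are assumptions for demonstration only; as noted above, the actual thresholds depend on the stain and the operator.

```python
import numpy as np
from PIL import Image

# Hypothetical RGB thresholds for van Gieson staining (illustrative values).
R_MIN, G_MAX, B_MAX = 150, 120, 140

img = np.asarray(Image.open("media_section.png").convert("RGB"))  # assumed file
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Collagen (stained red) -> white (logical unity), everything else -> black.
binary = (r > R_MIN) & (g < G_MAX) & (b < B_MAX)
Image.fromarray((binary * 255).astype(np.uint8)).save("binary_collagen.png")
```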
III. RLS ALGORITHM

The exact mathematical formulation of the rotating line segment (RLS) was described in detail in [14]; here only the
key ideas are repeated, with emphasis on the improvements made in 2009. The RLS processes binary pixel maps as follows. At each non-zero (target) pixel of the processed image, a line segment is rotated step by step in order to evaluate the number of non-zero pixels in the neighborhood of the target pixel. The evaluation is based on the so-called matching coefficient C(α), the normalized number of non-zero pixels shared by the line segment and the neighborhood of the target pixel at a given rotation step α. RLS is thus a two-parameter algorithm, depending on the angular step α and a positive integer N, the linear dimension of the neighborhood (a square with side length N). The neighborhood and the line segment are represented as matrices with elements equal to zero or one, depending on the filtered pixel values. The rotation of the line segment is performed by creating a new matrix with different non-zero elements. The number of non-zero (collagen) pixels in a given direction (rotation step) is obtained as the product of the neighborhood matrix and the line segment matrix, where corresponding matrix positions are multiplied and the products summed. This number is then normalized with respect to the dimensions of the selected neighborhood and the line segment, giving the matching coefficient C(α). There are two ways to extract relevant information about the directional frequency of non-zero (collagen) pixels in the neighborhood of a target pixel. First, one can reduce the information to the most abundant angle (rotation step) only and create a histogram over the entire image, obtaining a distribution of dominant directions. If the full information is stored, the average distribution of collagen pixel frequency over the entire image can be computed after normalization to unit surface. If multiple maxima of the matching coefficient occur, they are added to their corresponding orientations weighted by the multiplicative inverse of their multiplicity (newly added). The second improvement implemented in the BinaryDirections software is a penalty function which suppresses the contribution of nearly uniform distributions in the averaging procedure.
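A simplified sketch of the rotating line segment idea is given below; it is not the authors' implementation. It computes C(α) for one target pixel; the normalization shown, the angular step and the neighborhood size are illustrative choices, and the multiple-maxima weighting and penalty function described above are omitted.

```python
import numpy as np

def line_segment_mask(N, alpha):
    """Binary N x N mask of a line segment through the center at angle alpha."""
    mask = np.zeros((N, N), dtype=bool)
    c = N // 2
    for t in np.linspace(-c, c, 2 * N):
        i = int(round(c + t * np.sin(alpha)))
        j = int(round(c + t * np.cos(alpha)))
        if 0 <= i < N and 0 <= j < N:
            mask[i, j] = True
    return mask

def matching_coefficients(image, px, py, N, angles):
    """C(alpha) for one target pixel: normalized count of non-zero pixels
    shared by the rotated segment and the pixel's N x N neighborhood."""
    c = N // 2
    nb = image[px - c:px + c + 1, py - c:py + c + 1].astype(bool)
    out = []
    for a in angles:
        seg = line_segment_mask(2 * c + 1, a)
        out.append((nb & seg).sum() / seg.sum())
    return np.array(out)

# Usage on a synthetic binary map: accumulate either the argmax angle
# (histogram of dominant directions) or the full C(alpha) (averaged density).
rng = np.random.default_rng(0)
img = rng.random((200, 200)) > 0.7
angles = np.deg2rad(np.arange(0, 180, 5))
C = matching_coefficients(img, 100, 100, N=21, angles=angles)
print(np.rad2deg(angles[np.argmax(C)]))
```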
IV. RESULTS

Selected results for the processed images are shown in Figs. 4–6. Fig. 1 shows the original digital image of the histological section used (abdominal aortic media). The image filtered for collagen bundles is shown in Fig. 2; the smooth muscle cell nuclei, easily distinguishable in Fig. 1, are filtered in Fig. 3. Figs. 2 and 3 are overlaid with an orange grid spaced every 100 pixels to highlight the length scales. A distribution of orientations is built up using information from a pixel neighborhood; its dimension N should therefore be chosen with respect to the dimensions of the image texture.
Fig. 1 Additional histological section obtained from the abdominal aortic media; stained with van Gieson. Bundles of collagen fibrils are stained red. Black nuclei of smooth muscle cells are also apparent.
Fig. 2 Binary pixel map of the section from Fig. 1 filtered with respect to collagen (white). Orange grid spacing is 100 pixels; 0.64 μm/pix.

Fig. 3 Binary pixel map of the section from Fig. 1 filtered with respect to SMC nuclei (white). Orange grid spacing is 100 pixels; 0.64 μm/pix.

Fig. 4 Empirical probability density distribution of collagen fiber orientations obtained from the binary map in Fig. 2. Sensitivity to the neighborhood dimension is shown.

Fig. 5 Comparison between empirical probability density distributions of collagen fiber orientations obtained from Fig. 2 with different filter thresholds and the empirical probability density distribution of smooth muscle cell nuclei orientations from Fig. 3.
Fig. 6 Comparison between the empirical probability density distributions of collagen orientations and smooth muscle nuclei orientations for the medial section published in [14].
Fig. 4 shows that the results for the texture in Fig. 2 are only weakly sensitive to N. The density peak is located at approximately 55°. Filtering for the target component is not an unambiguous process and may depend on the individual skills of the operator. To assess this influence, the section (Fig. 1) was binary-converted three times with small perturbations of the collagen threshold. Fig. 5 shows that all images gave similar results, with the peak at ≈55°. Due to the existence of so-called musculoelastic fascicles in the aortic media, it was hypothesized that a correlation between collagen fiber and SMC orientations may exist. In the case of the histological section in Fig. 1 (and its binary conversions mentioned above) this hypothesis was confirmed (Fig. 5); both methods revealed a density maximum between 45° and 65°. Fig. 6 shows that the section previously analyzed in [14] also suggests a correlation between SMC and collagen orientations. However, additional analyses of further sections did not provide this result in every case.
V. CONCLUSIONS
The results suggest that the rotating line segment may be a powerful instrument in analyses of arterial architecture. It was shown that the analyzed histological image from human aortic media (Fig. 1) gives a continuous density distribution with a maximum at approximately 55° for collagen (the angle is defined with respect to the longitudinal axis in the cylindrical configuration). A similar result was obtained in the analysis of smooth muscle cell nuclei orientations (a continuous distribution with one peak located between 45° and 65°). This may not be a rule, but it may serve as confirmation for purely collagen-based analyses. The correlation between SMC nuclei and collagen fibril orientations may be expected especially within the medial layer, where so-called musculoelastic fascicles are present. Prospective mismatches between SMC and collagen distributions may be attributed to the finite thickness of the histological section (5 μm in this study), which may contain more than one fascicle. The results show that the preferred direction is oriented circumferentially rather than longitudinally; however, the density maximum does not coincide exactly with the circumferential direction, as in [6]. The analysis of the histological sections published in [14], repeated with the improved algorithm, gave results similar in character to those reported in [14].
REFERENCES
1. Holzapfel GA, Gasser TC, Ogden RW (2000) A new constitutive framework for arterial wall mechanics and a comparative study of material models. J Elast 61:1–48
2. Gasser TC, Ogden RW, Holzapfel GA (2006) Hyperelastic modelling of arterial layers with distributed collagen fiber orientations. J R Soc Interface 3:15–35
3. Holzapfel GA, Sommer G, Gasser CT, Regitnig P (2005) Determination of layer-specific mechanical properties of human coronary arteries with nonatherosclerotic intimal thickening and related constitutive modeling. Am J Physiol Heart Circ Physiol 289:2048–2058
4. Lanir Y (1983) Constitutive equations for fibrous connective tissues. J Biomech 16:1–12
5. Driessen NJB, Bouten CVC, Baaijens FPT (2005) A structural constitutive model for collagenous cardiovascular tissue incorporating the angular fiber distribution. J Biomech Eng Trans ASME 127:494–503
6. Holzapfel GA, Gasser TC, Stadler M (2002) A structural model for the viscoelastic behavior of arterial walls: Continuum formulation and finite element analysis. Eur J Mech A-Solids 21:441–463
7. Freed AD, Doehring TC (2005) Elastic model for crimped collagen fibrils. J Biomech Eng Trans ASME 127:587–593
8. Cacho F, Elbischger PJ, Rodríguez JF, Doblaré M, Holzapfel GA (2007) A constitutive model for fibrous tissues considering collagen fiber crimp. Int J Non-Linear Mech 42:391–402
9. Sokolis DP, Kefaloyannis EM, Kouloukoussa M, Marinos E, Boudoulas H, Karayannacos PE (2006) A structural basis for the aortic stress-strain relation in uniaxial tension. J Biomech 39:1651–1662
10. Canham PB, Finlay HM, Dixon JG, Boughner DR, Chen A (1989) Measurements from light and polarised light microscopy of human coronary arteries fixed at distending pressure. Cardiovasc Res 23:973–982
11. Canham PB, Finlay HM, Boughner DR (1997) Contrasting structure of the saphenous vein and internal mammary artery used as coronary bypass vessels. Cardiovasc Res 34:557–567
12. Schmid F, Sommer G, Rappolt M, Schulze-Bauer CAJ, Regitnig P, Holzapfel GA, Laggner P, Amenitsch H (2005) In situ tensile testing of human aortas by time-resolved small-angle X-ray scattering. J Synchrot Radiat 12:727–733
13. Sacks MS, Smith DB, Hiester ED (1997) Small angle light scattering device for planar connective tissue microstructural analysis. Ann Biomed Eng 25:678–689
14. Horny L, Hulan M, Zitny R, Chlup H, Konvickova S, Adamek T (2009) Computer-aided analysis of arterial wall architecture. IFMBE Proc. vol. 25/4, World Congress on Med. Phys. & Biomed. Eng., Munich, Germany, 2009, pp 1494–1497
ACKNOWLEDGMENT
This work has been supported by Czech Ministry of Education project MSM 68 40 77 00 12 and Czech Science Foundation grant GACR 106/08/0557.
Author: Lukas Horny
Institute: Fac. of Mech. Eng., Czech Technical University in Prague
Street: Technicka 4
City: Prague
Country: Czech Republic
Email: [email protected]
An Innovative Approach for Right Ventricular Volume Calculation during Right Catheterization

P. Toumpaniaris1, I. Skalkidis1, S. Markatis2, and D. Koutsouris1

1 National Technical University of Athens / School of Electrical and Computer Engineering, Biomedical Engineering Laboratory, Athens, Greece
2 National Technical University of Athens / School of Applied Mathematics and Physical Sciences, Athens, Greece
Abstract— In this paper, an innovative approach is presented for right ventricular volume calculation during right catheterization. Right catheterization is very important in the management of critically ill patients. However, there is concern about the sufficiency of the outcomes from the pulmonary artery catheter (PAC) used for right catheterization. To improve the reliability of this catheterization, the additional parameter that should be used is the volume of the right ventricle chamber, which is proposed to be measured with the aid of ultrasonic technology. To compute the volume, five circular groups of six ultrasonic sensors are mounted along the catheter surface. Each sensor measures the distance to the corresponding perpendicular point on the ventricular wall, forming five hexagons in five different planes. Between any two of them a polyhedron of volume V is created. The sum of the polyhedra formed in the cavity is equivalent to the volume of the right ventricle chamber. As this method depends only on the distances measured by the ultrasonic sensors and the fixed separation distance between the groups of sensors, it is expected to be a reliable method for volume calculation during catheterization.

Keywords— Right Catheterization, Right Ventricular Volume.
I. INTRODUCTION

Heart failure, sepsis and multiple organ failure are the main reasons for the use of right catheterization. The pulmonary artery catheter is generally used in the intensive care unit for the estimation of the hemodynamic condition of critically ill patients. It is a powerful diagnostic and monitoring tool that has been used extensively in the assessment of cardiovascular physiology [1]. The pulmonary artery catheter allows direct, simultaneous measurement of pressures in the right atrium, right ventricle and pulmonary artery, and of the filling pressure ("wedge" pressure) of the left atrium. So far, right heart catheterization evaluates only pressures. Hemodynamic monitoring is essential for the treatment of critically ill patients; however, the existing techniques have many restrictions in their basic technical parameters [2]–[4].
The problem can be solved if this method is enriched with a precise calculation of the right ventricle volume. Right ventricle (RV) volume can be calculated with high accuracy using magnetic resonance imaging (MRI). However, with this measurement method the volume is recorded only at a specific instant. Repeated volume measurements of the right ventricle over a period of a few hours, in a hemodynamic laboratory or in an intensive care unit, could contribute to the efficient management of heart failure patients, since potential changes of the right ventricle's volume can alter the parallel conductance volume and thus affect the left ventricle's mechanical parameters. Unlike the left ventricle, whose shape is close to an ellipsoid and not difficult to evaluate, the right ventricle is mostly a mixture of crescent and conical shapes, which can be regarded as a complex non-geometrical shape, making its evaluation more difficult [5]. There are many studies in the literature on volume measurement of the cardiac ventricles [6]–[8]. Most of them use methods related to measurements from the outer surface, such as MRI and ultrasound on top of the skin. There are a few works in which the volume measurements are taken from the inner space [8], but none known to the authors relates to real-time RV volume measurement using a PAC. This paper introduces a novel method of right ventricle volume measurement using the pulmonary artery catheter.
II. ARCHITECTURE

The PAC is a catheter usually 110 cm long with an external diameter of 7 French (or 2.33 mm). To access the right heart, it enters from a large vein, often the internal jugular or the subclavian vein, and through the superior vena cava enters the right atrium, as shown in Figure 1. Through the tricuspid valve it is inserted into the right ventricle cavity and then curves to enter through the pulmonary valve into the pulmonary artery [9].
Fig. 1 Right Catheterization procedure

Fig. 2 Ultrasound Beam from PAC

The curvature of the catheter in the right ventricle chamber has the form of a circular arc and can be considered the same in any heart, as the location of and distance between the valves is approximately constant. Therefore, if a number of ultrasonic sensors are mounted on the catheter surface with a certain geometry, the angle of the ultrasound beam from the sensor to the ventricular wall will be constant. Each ultrasonic sensor measures the distance from the sensor to a point on the wall of the cavity, as shown in Figure 2. In fact, it measures the time the ultrasound beam takes to reach the cavity wall and return to the point of emission. Since the speed of sound is known, the distance d is easily calculated by the formula:
d = c·t/2
where c is the velocity of sound in the medium (in blood, 1566 m/s) and t is the sound travel time (from transmission of sound to echo reception). We assume five circular cross-sections on the tube of the catheter. Let the total length of the catheter inside the RV be ℓ, which can be considered a piece of a circular arc. We appoint the five sections where the ultrasound transducers (UsT) will be mounted with a separation of ℓ/5 between them along the catheter, so that the first point is just after the tricuspid valve (the valve through which the catheter enters the right ventricle chamber) and the fifth point is just before the pulmonary valve (the valve through which the catheter leaves the ventricle). At each of the five points, six UsTs lie rotationally on the surface of the catheter with 60° separation.
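For illustration, the time-of-flight relation d = ct/2 translates directly into code; the example travel time below is hypothetical.

```python
C_BLOOD = 1566.0  # speed of sound in blood [m/s], as given in the text

def echo_distance(t_seconds):
    """d = c t / 2: one-way distance from the round-trip travel time."""
    return C_BLOOD * t_seconds / 2.0

# Example: a 40 microsecond round trip corresponds to ~31 mm.
print(echo_distance(40e-6) * 1e3, "mm")
```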
III. METHODOLOGY

The catheter inside the RV is considered to have circular shape of length ℓ; K is the center of the circle and ρ the radius, so that ρ = ΚΟ1 = ΚΟ2 = ΚΟ3 = ΚΟ4 = ΚΟ5. The length of the arc is (Ο1Ο5) = ℓ, and (Ο1Ο2) = (Ο2Ο3) = (Ο3Ο4) = (Ο4Ο5) = ℓ/5. The angle φ subtended at K by consecutive sensor planes follows from the arc-length relation ℓ/5 = ρφ. The planes u1, u2, u3, u4, u5 that the UsTs define at each position are perpendicular to the catheter, and all of them pass through the line perpendicular to the plane of the circle that crosses the center K. In every plane of transducers there are six UsTs, cross-sectional in pairs, with a 60° angle between them. In the planes u1, u3, u5 the UsT pair ΑiΔi lies along the radius ΚΟi, while in the planes u2, u4 the UsT pair ΒiΕi is perpendicular to the radius ΚΟi; here Α, Β, Γ, Δ, Ε, Ζ are the points where the ultrasound beams reach the wall of the RV and O is the point where the UsT is, as shown in Figure 3. This holds since the UsTs are twisted by 30° from plane to plane.
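A small sketch of the assumed arc geometry follows; the numerical values of ℓ and ρ are illustrative assumptions, and φ is computed from the arc-length relation ℓ/5 = ρφ stated above.

```python
import numpy as np

ell = 0.10      # catheter length inside the RV [m] (assumed value)
rho = 0.045     # radius of the circular arc [m] (assumed value)

phi = ell / (5 * rho)          # angle between consecutive sensor planes
thetas = np.arange(5) * phi    # angular positions of O1..O5 on the arc
# Centers O1..O5 of the sensor planes, lying in the plane of the arc (y = 0).
centers = rho * np.column_stack([np.cos(thetas),
                                 np.zeros(5),
                                 np.sin(thetas)])
h = rho * np.sin(phi)          # projection height used in the volume formulas
print(np.degrees(phi), h)
```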
Fig. 3 Hexagons created by consecutive UsT beams
The solid volume is defined by two consecutive planes, for example u1 and u2, and has the shape shown in Figure 4.
Fig. 5 Height calculation
Fig. 4 Solid volume between consecutive planes

The volume V12 between the planes u1 and u2 comprises the sum of the following:

V12 = Vpyramid(Ο2Α1Β1Γ1Δ1Ε1Ζ1) + Vtetrahedra(Ο2Ζ2Ζ1Α1 + Ο2Ζ2Α2Α1 + Ο2Α2Α1Β1 + Ο2Α2Β2Β1 + Ο2Β2Β1Γ1 + Ο2Β2Γ2Γ1 + Ο2Γ2Γ1Δ1 + Ο2Γ2Δ2Δ1 + Ο2Δ2Δ1Ε1 + Ο2Δ2Ε2Ε1 + Ο2Ε2Ε1Ζ1 + Ο2Ε2Ζ2Ζ1)

where Vpyramid(Ο2Α1Β1Γ1Δ1Ε1Ζ1) = (1/3)·Area(base)·height = (1/3)·ε·h and

ε = Area(Α1Β1Γ1Δ1Ε1Ζ1) = Area(Ο1Α1Β1) + Area(Ο1Β1Γ1) + Area(Ο1Γ1Δ1) + Area(Ο1Δ1Ε1) + Area(Ο1Ε1Ζ1) + Area(Ο1Ζ1Α1) = (1/2)(Ο1Α1)(Ο1Β1)·sin 60° + … = (1/2)·ra·rb·sin 60° + …

Here ra is the distance measured by UsT Αi and rb the distance measured by UsT Βi. The height h is the length of the projection of Ο2 onto the plane u1, h = Ο2Θ. In the triangle Ο2ΚΘ (right-angled at Θ), Ο2Θ = ΚΟ2·sin φ, so h = ρ·sin φ. Using the above, the volume of the pyramid is calculated.

The volume of the tetrahedron Ο2Α2Β2Β1 is (1/3)·Area(Ο2Α2Β2)·height(Β1), where the area of Ο2Α2Β2 is computed similarly to the base of the pyramid. For height(Β1): Ο1β = Ο1Β1·cos 60° = rB1·cos 60° and βb = (Κβ)·sin φ = (ΚΟ1 + Ο1β)·sin φ = (ρ + rB1·cos 60°)·sin φ.

Thus height(Β1) can be calculated, and the same procedure is followed for Ζ1, Α1, Γ1, Δ1, Ε1. For the points Γ1, Δ1, Ε1 it should be noted that −rΓ1 (etc.) should be used, and for Α1 the beam lies along the radius ΚΟ1, so the cosine factor equals 1. The volumes of the tetrahedra Ο2Ζ2Α2Α1, Ο2Α2Β2Β1, Ο2Β2Γ2Γ1, Ο2Γ2Δ2Δ1, Ο2Δ2Ε2Ε1, Ο2Ε2Ζ2Ζ1 can thereby be calculated. To calculate the tetrahedra Ο2Ζ2Ζ1Α1, Ο2Α2Α1Β1, Ο2Β2Β1Γ1, Ο2Γ2Γ1Δ1, Ο2Δ2Δ1Ε1, Ο2Ε2Ε1Ζ1, we consider a coordinate system Kxyz. The coordinates are

Α1 = (ρ + rA1, 0, 0)
Ζ1 = (ρ + rZ1·cos(−60°), rZ1·sin(−60°), 0)
Ο2 = (ρ·cos φ, 0, ρ·sin φ)
Ζ2 = ((ρ + rZ2·cos(−30°))·cos φ, rZ2·sin(−30°), (ρ + rZ2·cos(−30°))·sin φ)

The volume of Ο2Ζ2Ζ1Α1 is one sixth of the absolute value of the determinant of the matrix whose rows are the four vertex coordinates, each appended with 1:

V(Ο2Ζ2Ζ1Α1) = (1/6)·|det M|, with the rows of M being
[ρ + rA1, 0, 0, 1]
[ρ + rZ1·cos(−60°), rZ1·sin(−60°), 0, 1]
[ρ·cos φ, 0, ρ·sin φ, 1]
[(ρ + rZ2·cos(−30°))·cos φ, rZ2·sin(−30°), (ρ + rZ2·cos(−30°))·sin φ, 1]
So the total volume of the solid created between two consecutive planes can be calculated, and the total volume of the RV is derived by summing the four solid volumes created by the five groups of sensors.
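This summation can be sketched numerically as follows. The vertex coordinates below are randomly generated stand-ins for the wall points Α1…Ζ1, Α2…Ζ2 reconstructed from the measured radii, so only the decomposition logic, not the geometry, is meaningful.

```python
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """V = (1/6)|det[p1-p0, p2-p0, p3-p0]|, the determinant formula above."""
    return abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0

rng = np.random.default_rng(1)
hex1 = rng.normal([0.0, 0.0, 0.0], [20.0, 20.0, 1.0], size=(6, 3))  # A1..Z1
hex2 = rng.normal([0.0, 0.0, 8.0], [20.0, 20.0, 1.0], size=(6, 3))  # A2..Z2
O2 = np.array([0.0, 0.0, 8.0])     # UsT position on plane u2 (illustrative)

V = 0.0
# Pyramid O2-(A1..Z1): fan the hexagonal base into four triangles.
for k in range(1, 5):
    V += tet_volume(O2, hex1[0], hex1[k], hex1[k + 1])
# Twelve side tetrahedra between the two hexagons (cf. V12 above).
for k in range(6):
    kn = (k + 1) % 6
    V += tet_volume(O2, hex2[k], hex1[k], hex1[kn])
    V += tet_volume(O2, hex2[k], hex2[kn], hex1[kn])
print(V)   # volume of one inter-plane solid; sum four of these for the RV
```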
IV. DISCUSSION

The volume measurement of the right ventricle during right catheterization is an important factor for the estimation of
the hemodynamics of critically ill patients; accuracy of measurement is therefore essential. A thorough effort has been devoted to the appropriate layout geometry of the ultrasonic sensors to be built along the pulmonary artery catheter's surface, so that the RV volume can be measured. Because of the complex shape of the right ventricle, it is difficult to define a method that fulfils all the requirements and limitations that arise, as the volume must be derived from the distances taken by ultrasonic sensors mounted along the PAC. From a theoretical standpoint, the particular method seems to meet these requirements to a large extent. As the MRI method is considered the gold standard in ventricular volume measurement, a calibration of the proposed method is suggested: a number of right ventricular volumes should be measured both with MRI and with the proposed method. The mean difference between the two methods can be regarded as an offset, the algebraic value of which should be taken into account in the new method for calibration.
REFERENCES
1. Takala J (2006) The pulmonary artery catheter: the tool versus treatments based on the tool. Critical Care 10:162
2. Harvey S, Harrison DA, Singer M et al. (2005) Assessment of the clinical effectiveness of pulmonary artery catheters in management of patients in intensive care (PAC-Man): a randomised controlled trial. Lancet 366(9484):472–7
3. Shah MR, Hasselblad V, Stevenson LW et al. (2005) Impact of the pulmonary artery catheter in critically ill patients: meta-analysis of randomized clinical trials. JAMA 294(13):1664–70
4. Binanay C, Califf RM, Hasselblad V et al. (2005) Evaluation study of congestive heart failure and pulmonary artery catheterization effectiveness: the ESCAPE trial. JAMA 294(13):1625–33
5. Czegledy F, Aebischer N, Tarnari A et al. (1999) A new mathematical model for right ventricular geometry. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 12(4):1813–1814
6. Fritz D, Rinck D, Unterhinninghofen R et al. (2005) Automatic segmentation of the left ventricle and computation of diagnostic parameters using region growing and a statistical model. SPIE Medical Imaging 1844–1854
7. Fritz D, Rinck D, Dillmann R et al. (2006) Segmentation of the left and right cardiac ventricle using a combined bi-temporal statistical model. Medical Imaging 2006: Visualization, Image-Guided Procedures, and Display. SPIE 6141:605–614
8. Vazquez de Prada JA, Jose A, Chen MH et al. (1996) Intracardiac echocardiography: in vitro and in vivo validation for right ventricular volume and function. The American Heart Journal 131(2):320–328
9. Headley JM (1989) Invasive hemodynamic monitoring: Physiological principles and clinical applications. Baxter Healthcare
Author: P. Toumpaniaris
Institute: NTUA – Biomedical Engineering Laboratory
Street: Iroon Polytechniou 9, Zografou
City: Athens
Country: Greece
Email: [email protected]
Service composition to support ambient assisted living solutions for the elderly

V. Moumtzi1, C. Wills2 and A. Koumpis1

1 ALTEC Software S.A. / Research Programmes Division, Thessaloniki, Greece
2 Kingston University / Faculty of Computing, Information Systems & Mathematics, London, UK
Abstract— Recent studies show that over the last decade electronic systems have been designed, created and continuously developed with the common goal of helping the elderly live a more independent life. Each of the existing electronic systems offers services that cover different needs of ageing people, such as a self-determined life at home, social interaction, etc. In this paper we propose a set of service components that constitute a model for an e-service architecture capable of supporting the autonomy of older people. Furthermore, we argue that by exploiting existing operational electronic systems and integrating them according to the services they offer, we can converge towards an ideal system that enhances the assisted living of elderly people.

Keywords— Ambient Assisted Living (AAL), elderly, e-services, Service Oriented Architecture, autonomy.
I. INTRODUCTION
The older population is on the threshold of a boom [1]. According to U.S. Census Bureau projections, a substantial increase in the number of older people will occur during the 2010 to 2030 period. As a result of this projection, there will be a dramatically increasing need to support the independent living of this globally growing elderly population. The fact that physical and mental routines are becoming more difficult influences their whole lives. Institutions and medical facilities support old people's needs, especially those caused by illness; in parallel, there have been many efforts to support assisted living. Technological solutions seem to be the way out of the expanding costs of health, help and support [2]. Taking into account the increasing number of elderly people in Europe and the identification of its subsequent social and financial consequences, national and European research efforts have focused on such independent living solutions. Apart from e-Health solutions, the field of Ambient Assisted Living (AAL) has been developed, aiming to alleviate the difficulties of everyday life for the elderly or people with disabilities in general. Notification in case of an emergency, sensors monitoring movements and person location, support of cognitive and emotional processes, and comfort services are some of the main features implemented separately by different systems.
The challenge is to take advantage of the many possibilities that existing Ambient Assisted Living technology systems entail, while taking account of the fears that may arise among users when new forms of AAL are introduced, and to integrate them appropriately. The impact of the research will thus be to stimulate ICT innovation in favour of the elderly.

II. THE ELDERLY NEEDS
It is very important to clarify what the elderly require to satisfy their needs and to enjoy a high quality of life. In this respect, needs mean a lack of something. These needs influence behavior as well as cognitive and emotional processes. Often a separation is made between primary needs, e.g. biological/physiological deficiencies such as hunger, and secondary needs, e.g. all desires and wishes that determine a person's satisfaction on a personal, intellectual, (ir)rational, social or cultural level [3]. We consider emergency treatment to be the kernel of any assistive living system. It aims at the early prediction of and recovery from critical conditions that might result in an emergency situation, and at the safe detection and alert activation in emergency situations such as sudden falls, heart attacks, strokes, etc. Autonomy enhancement services denote all services that make it possible to supplant or replace direct care previously administered by medical and social care personnel or relatives with appropriate system support. Comfort services cover all areas that do not fall into the above categories; examples are social contact assistance, logistic assistance, etc. [4] Therefore, the services that systems supporting the needs of the elderly should provide fall into three categories: (a) emergency treatment, (b) autonomy enhancement, (c) comfort.

III. THE CURRENT SOLUTIONS
Substantial advances have been made over recent years in applying technology to meet the needs of older people. In particular,
the field of Ambient Assisted Living (AAL) has been developed. This offers a sense of safety and reassurance to the elders themselves and to their relatives that they will receive the care required in a time of need, without having to resort to intensive care.

A. Emergency treatment solutions

AttentiaNet and Seniority comprise two already completed projects which aim at improving the quality of assistance, and hence the quality of life, of elderly people in Europe by utilizing advanced technologies for telemonitoring and telecommunications. A different approach is followed by MobilAlarm, which enables older people to initiate an alarm call whenever and wherever they need to do so (using GPS, mobile telephony, body-worn alarm devices, service centres, and geographic localisation and alerting software). More recently, a number of solutions have been proposed that utilize various sensor networks, from audio and movement to micro- and nano-sensors, to detect common elderly accidents, like falls. The projects Netcarity, INHOME, EMERGE and OLDES fall under this category. Finally, the R&D project CompanionAble, which is still in its initial stages, will try to synergistically combine the strengths of a mobile robotic companion with the advantages of a stationary smart home, improving the elderly person's interaction with the system as well as the care itself.

B. Autonomy enhancement solutions

Besides AAL systems, a second dimension has been introduced in the field of ICT solutions for the elderly, comprising systems that aim at compensating for cognitive decline. In accordance with mainstream eInclusion targets, this approach's objective is to keep elderly people socially active and more self-reliant for a longer period of time. An example of such projects is VM (Vital Mind), which provides cognitive training using related psychology, a TV set and advanced ICT. The reasoning of VM is to enable elders to exercise actively and autonomously in front of the television, a medium familiar to them. On the other hand, the FP7 HERMES project aims at providing an integrated approach to cognitive care, based on assistive technology that reduces the age-related decline of cognitive capabilities. HERMES offers cognitive training through games, while also supporting users in indoor as well as outdoor environments when necessary. Support for elderly people with cognitive disabilities, especially mild dementia and Alzheimer's disease, is provided by COGKNOW, which aims to develop a cognitive prosthetic device to help elders "navigate through their day".
C. Comfort solutions

T-Seniority is an innovative ICT solution especially fitted to areas of public interest, directly addressing the main themes of the i2010 initiative and especially focusing on ICT for accessibility by the elderly and their social integration, reinforcing both European and existing national initiatives. T-Seniority services are designed to be accessible to the greatest number of people and thus seek to minimize designs that exclude the elderly from the benefits of the Information Society, by thinking beyond conventional media and creating a better overall experience through TV [6]. The VITAL project proposes a combination of advanced information and communication technologies that uses a familiar device like the TV as the main vehicle for the delivery of services to elderly users in home environments.

IV. THE DEVICE LIMITATIONS
It is highly recommended that systems which facilitate the independent living of ageing people be accessible through devices that the elderly can use easily. In the following we list specific requirements that AAL devices should fulfil [5]. The main design requirements directly related to the ability to handle the device are physical ones. Regarding portable devices, apart from decreasing the size and weight of the device itself, an acceptable size of buttons and displays should be kept. More specifically, elderly people require buttons as large as their finger, definitely not smaller. Pressing a button needs to give a sound alert so that the user can feel and hear that the action is done. Easy operation of a device is tightly connected with the number of control elements. For simple inputs, two or three buttons (Yes, No, Quit) with obvious meaning can be used by the elderly without discomfort. Somewhat more complex devices can also be used by the elderly if there is a minimal set of buttons, such as Enter and Escape, which is enough to navigate through menus. Regarding the power supply, wireless devices connected to the host system should have a power-off button, so that in case of danger or malfunction the user is given a chance to stop the device. In addition, battery-powered devices should be auto-recharging, showing a signal (like a light or sound alert) to remind the elderly to recharge the batteries themselves. On the other hand, devices wired to the host system should have a single cable. The designer of a device for the elderly should take into account that the user is not capable of solving installation conflicts (missing drivers, port numbers). The designer should
be careful that the devices do not cause any trouble; the occurrence of trouble discourages the user from using an otherwise good device.

V. THE PROPOSED SOLUTION
Our research aims to propose a well-defined and complete set of services that an integrated home-based ICT solution should provide to meet the needs of elderly people addressed above: emergency treatment, autonomy enhancement and comfort. Firstly, the recommended solution should provide cognitive and physical training for elderly people within the framework and safety of an assisted living environment. The service should be installed in homes and institutions and ensure accident-free, personalized and monitored corporal and mental training of its users. Meanwhile, users should be able to take advantage of the features of an independent living solution. This can be accomplished by home automations that compensate for the disabilities of people with cognitive problems or mild dementia during their daily activities. Finally, an elaborate distributed sensor network would guarantee immediate response in case of an emergency, by calling for help through public telephone lines. In this respect, the proposed solution covers emergency treatment needs. Extensive research on this topic has proven that sensory training is one of the most effective countermeasures against cognitive deterioration and mild dementia. Furthermore, corporal activation is a well-acknowledged factor of healthy living, with significant positive effects on the physical well-being of people. As regards autonomy enhancement, assistance services at home are necessary for this kind of system. Typical AAL solution features should be deployed, like accident detection, support in daily activities and third-party notification in case of emergency, combined with preemptive measures like training. Finally, the suggested ICT solution should be appropriate for various groups of elderly people according to their individual social needs, to guarantee a safe, socially rich independent life. To this end, the elderly should be able to choose among many different options of public or personalized services: communicate with their relatives, friends and colleagues; ask for shopping, repairs, appointments, on-line banking, etc. The ICT solution should offer a flexible combination of general public services and personalized services, demonstrating the versatility of its technological platform according to the user's preferred or available ICT media. To accommodate the delivery of these services, the suggested solution should utilize state-of-the-art hardware
In particular:

- User Interfaces: touch screens or simple screens for interaction with the users. All system functionalities, including home environment management, cognitive training and physical exercising performance monitoring, are displayed and set from the Local User Interface, which is a touch screen.
- Sensors: to monitor movements inside the house. They are deployed as a distributed network of wirelessly connected sensors that identify moving patterns and detect deviations from those patterns, or falls. In these cases they notify the user's relatives or caretakers via the Remote User Interface.
- Facility to connect Instrumented Power Outlets: sensors measuring the voltage and current (power) fed into appliances. These sensors detect electrical appliances that have been forgotten switched on. Together, the two aforementioned sensor components guarantee the safe living of the elderly inside their home environments without the need for exclusive intensive care.
- Facility to connect Actuators: which facilitate acts like opening windows, doors and blinds and are remotely operated with the Local User Interface.
- Processing units: an embedded processor and a general-purpose PC, which are used for coordination and management of the AAL environment, for executing the cognitive training software, and for storing and processing the physical training performance information.
- Use of digital TV as the most widely available and preferred channel for info-marginalized sectors, helping to reach difficult-to-reach audiences, such as "disabled people getting older", who may have less access to other forms of digital technology, improving the current situation and meeting the demands of a growing elderly population.

The concept of the proposed integrated ICT solution comprises a cluster of different kinds of users, needs and devices, whose vertical integration should be replaced by a virtual one. SOA is an architectural style in which software applications are organized as a set of loosely coupled services. The service-orientation paradigm advocates building functionality from services that align with functional processes. It is a process and a path to achieving diverse business goals. SOA is being adopted by many enterprises today because it results in agile IT systems that can be more easily adapted in response to change. The SOA paradigm enables the linking of business and computational resources - mainly organizations, applications and data - on demand. It is seen to be essential for delivering business agility and IT flexibility [8][9].
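To make the service-orientation idea concrete, the following minimal Python sketch shows AAL capabilities exposed as loosely coupled services that are resolved by name at run time. The registry, the service names and the payloads are our illustrative assumptions, not components of the proposed platform.

```python
from typing import Callable, Dict

class ServiceRegistry:
    """Hypothetical registry: capabilities are loosely coupled services
    looked up by name at run time rather than linked at build time."""
    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., dict]] = {}

    def register(self, name: str, service: Callable[..., dict]) -> None:
        self._services[name] = service

    def call(self, name: str, **kwargs) -> dict:
        return self._services[name](**kwargs)

def fall_alarm(location: str) -> dict:
    # Emergency-treatment service: notify a caretaker about a detected fall.
    return {"service": "emergency", "action": f"notify caretaker: fall in {location}"}

def open_blinds(room: str) -> dict:
    # Comfort service: actuate the blinds via the home-automation layer.
    return {"service": "comfort", "action": f"blinds opened in {room}"}

registry = ServiceRegistry()
registry.register("emergency.fall_alarm", fall_alarm)
registry.register("comfort.open_blinds", open_blinds)

# Composition: a higher-level routine built only from service names.
morning_routine = [("comfort.open_blinds", {"room": "bedroom"})]
for name, args in morning_routine:
    print(registry.call(name, **args))
```

Because consumers depend only on service names, a sensor, actuator or data source can be swapped without touching the services that use it, which is the agility argument made above.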
The proposed architecture of the AAL integrated ICT solution is shown in Fig. 1. The central point of user interaction is the application and media layer, which should aggregate the content available via enabling technologies and composite services. The suggested components which comprise the integrated system are the following:

- Integration technologies: data store and history of collaborators, which should be accessed via a web service.
- Reasoning technologies: the technologies which support elderly autonomy.
- Service packaging: a software module that handles the different components.
- Portal: a portal from which the user has access to the system.
- Web: web applications that can transfer the information that the user wants to share and access on the World Wide Web.
- Windows: a software operating environment which is an interface between hardware and user and acts as a host for the computing applications run on the machine.
- Devices and people: people insert service alerts to devices to interact with the ICT system.
- Emergency treatment, autonomy enhancement and comfort: services provided to the final user.
Fig. 1 Scalable Service Oriented Architecture for integrated AAL ICT solutions

VI. CONCLUSIONS

In this paper we tried to list a set of services focused on the needs of elderly people that are necessary for their independent living and, in relation to these, taking advantage of existing ICT technology, we proposed a solution which accommodates independent living and provides useful, familiar and comprehensive content. Our vision is to improve the current situation and meet the demands of a growing elderly population. As demonstrated by the European e-Inclusion initiative, today eInclusion is an issue affecting all European countries. Even though technology aimed at improving people's lives is growing rapidly, it unfortunately remains unknown and unexploited by a significant number of people, especially the elderly. The wider uptake of existing ICT technology as a means to facilitate the assisted living of senior citizens constitutes one of the most significant targets of our solution. To sum up, the proposed solution presented in this paper contributes towards a number of problems that arise all over Europe. It helps close the technological gap affecting the elderly. It offers them the possibility to monitor their health status, it motivates them to follow a personalized training program, thus strengthening their cognitive and physical capabilities, it helps them to use interactive e-services, and finally it reduces the chances of marginalization of people of the Third Age.

REFERENCES

1. He W, Sengupta M, et al (2005) Current Population Reports, Series P23-209, 65+ in the United States: 2005, December 2005
2. Fuchsberger V (2008) Ambient assisted living: elderly people's needs and how to face them. 1st ACM international workshop on Semantic ambient media experiences, Vancouver, British Columbia, Canada, 2008, pp 21-24
3. Dickinson A, Mick J, Gregor S (2006) Designing a portal for older users: A case study of an industrial/academic collaboration. ACM Transactions on Computer-Human Interaction (TOCHI), September 2006, ISSN:1073-0516, 13(3), pp 347-375
4. Nehmer J, et al (2006) Living assistance systems: an ambient intelligence approach. The 28th international conference on Software engineering, Shanghai, China, May 20-28, 2006, ISBN:1-59593-375-1, pp 43-50
5. Smrz J (2009) More than user-friendly I/O devices. The 2nd International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 09-13 June 2009, ISBN:978-1-60558-409-6
6. ICT PSP - 224988 Project T-Seniority (2008-2011), Expanding the benefits of Information Society to Older People through digital TV channels
7. Schoeni et al (2001) Persistent, Consistent, Widespread, and Robust? Another Look at Old-Age Disability. Journals of Gerontology Series B: Psychological Sciences and Social Sciences, Vol. 56B, No. 4, pp S206-S218
8. Harding C (2007) An Open Marketplace for Services. DM Review, vol. 17, September 2007, 20-39
9. Tews R (2007) Beyond IT: The business value of SOA. AIIM E-DOC, vol. 21, 14-17, Sept/Oct 2007
Author: Vasiliki Moumtzi
Institute: ALTEC Software S.A.
Street: 6, M. Kalou str., 546 29
City: Thessaloniki
Country: Greece
Email: [email protected]
Oscillations in subthalamic nucleus measured by multi electrode arrays

J. Stegenga1, T. Heida1

1 MIRA institute, dept EEMCS, BSS-group, University of Twente, Enschede, The Netherlands
Abstract— The subthalamic nucleus (STN) of the basal ganglia is involved in the generation of Parkinsonian symptoms and forms one of the main targets for Deep Brain Stimulation (DBS). Effective frequencies of DBS are around 130 Hz. The effect of such stimuli in the STN is largely unknown, but has been hypothesized to result in neuronal block, interrupting the pathophysiological oscillatory behavior which is observed in the Parkinsonian basal ganglia. Modelling studies suggest that synchronized oscillation at tremor (4-8 Hz) or beta (14-30 Hz) frequencies may occur. To study the synchronicity of the STN in detail, we record action potential activity from rat brain slices using multi electrode arrays (MEAs). These arrays consist of 60 recording sites and thus allow the study of spatio-temporal activity patterns. Here we show the characteristics of spike trains which we recorded in the STN.

Keywords— Parkinson's disease, multi electrode arrays, brain slices, oscillatory activity

I. INTRODUCTION

The origin of the three cardinal symptoms of Parkinson's Disease (tremor, rigidity and bradykinesia) is speculated to lie in the basal ganglia. Specifically, it is in the substantia nigra pars compacta of the basal ganglia that deterioration of dopaminergic neurons occurs. The subsequent loss of dopaminergic innervation to the basal ganglia, the striatum and also the motor cortex ultimately leads to PD symptoms. A number of studies have shown that oscillatory activity in the lower frequency bands (<30 Hz) increases in PD [1-3]. The application of the precursor of dopamine (L-DOPA or levodopa) decreases oscillatory power in the lower frequency bands, and increases power in the gamma frequency bands (50-90 Hz) [1]. It has been suggested that, next to an increase in oscillatory behaviour, an increase in synchronicity between neuronal oscillations may also occur [4, 5]. Once established, such synchronicity may self-perpetuate through mediation by the T-type calcium current. This membrane current causes cells to depolarize after prolonged inhibition, leading to bursts of action potentials when released from inhibition. The possibility of such a mechanism in the basal ganglia has been demonstrated using organotypic culture preparations [4].

Fig. 1 A multi electrode array. The round culture chamber has an inner diameter of 2.4 mm. The 60 electrodes were spaced 200 µm apart, in an 8 by 8 grid (see inset). The electrode shape is conical, with the tip protruding 50-70 μm from the glass surface.
It is our goal to elucidate the extent of oscillatory behaviour in the basal ganglia. We first concentrate on the subthalamic nucleus (STN), since it is the most effective target for suppression of most PD symptoms by Deep Brain Stimulation (DBS). The clinical effects mimic those of ablation, even though it has been shown that DBS increases activity in the target nucleus [6]. Therefore, more complex interactions within the basal ganglia due to DBS are likely to occur. In vivo, the location of the STN prohibits the use of EEG or EMG to record activity, while microelectrode recordings suffer from a limited number of electrodes and thus provide a poor spatio-temporal resolution.
The usage of brain slices as an alternative is therefore appropriate to study the underlying mechanisms of the oscillatory network behaviour in Parkinson's disease and the effect of electrical stimulation. However, such experiments have mostly been conducted using patch clamp techniques, making it impractical to record from more than two neurons simultaneously. We use a multi electrode array with a grid of 60 micro electrodes in combination with brain slices of the STN to observe and induce neuronal spiking patterns. The results presented pertain to the spiking characteristics of single neurons.

Fig. 2 (Left panel) Microscope image of a coronal slice placed on a multi-electrode array. (Right panel) Corresponding plate (62, Paxinos & Watson 2007) with MEA overlay. The mamillothalamic tract (mt) and fornix (f) are also visible in the microscope image. The STN is highlighted, as well as the electrodes which recorded activity.

II. METHODS
A. Slice preparation
Coronal brain slices (300 μm) from 16 to 52 day-old Wistar rats were cut on a Vibratome (Leica VT1000) in an ice-cold cutting medium containing artificial cerebrospinal fluid (aCSF) with an additional 1.25 mM MgSO4 and 1 mM ascorbic acid.
Our aCSF consisted of (in mM): 124 NaCl, 3.3 KCl, 1.2 KH2PO4, 1.3 MgSO4, 2.5 CaCl2, 20 NaHCO3 and 10 glucose. Solutions were aerated with carbogen (95% O2/5% CO2). Rats were anaesthetized using isoflurane before decapitation.

B. Recording setup
Coronal slices were compared with a rat brain atlas (Paxinos & Watson, 6th edition, Elsevier) using a dissecting microscope, and the corresponding plate number (corresponding to the rostro-caudal distance from bregma) was noted. Markers for comparison were 1) the shape and position of the fornix, mamillothalamic tract and third ventricle, and 2) the presence of CA3 of the hippocampus and the optic tract. The STN was identified as a gray structure superior to the cerebral peduncle (or internal capsule) and inferior to the zona incerta and the nigrostriatal bundle. Slices and aCSF were transferred to
3D multi electrode arrays (3D-MEA; Ayanda Biosystems, Lausanne, Switzerland; fig. 1), and signals were amplified, bandpass filtered (10 Hz - 10 kHz) and digitized at a rate of 16 kHz per channel using a measurement system by MCS (MultiChannelSystems GmbH, Reutlingen, Germany). Slices were kept in place by a nylon mesh glued onto a silver ring lowered into the chamber by a micromanipulator, and were perfused with aerated aCSF at a rate of ~3 ml/min. Signals were visualized by a custom-made LabVIEW (National Instruments, Austin, Texas) program, and threshold crossings exceeding 5 times the RMS noise value (typically 2 μV) were stored. Pictures of MEA and slice were taken during recording through a Nikon (Tokyo, Japan) Diaphot-TMD inverted microscope.

Fig. 3 (Left panels) Waveforms of detected action potentials overlaid in gray, mean waveform in black. Peaks are aligned to maximum deflection, vertical axis is in μV. (Middle panels) Raster plot of 300 s recording. Each vertical line represents an action potential. (Right panels) Density histograms of the four time series. Bars denote the occurrences of bins with a certain number of spikes.
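The stored threshold crossings can be reproduced with a few lines of code. The following Python sketch applies the detection rule quoted above (5 times the RMS noise); the short dead time is our own assumption to avoid double counting, and this is an illustration, not the LabVIEW program used in the experiments.

```python
import numpy as np

def detect_spikes(signal, fs=16000, k=5.0, dead_time_s=0.001):
    """Return sample indices where |signal| first exceeds k times the RMS
    noise, mimicking the threshold-crossing rule described in the text."""
    rms = np.sqrt(np.mean(signal ** 2))   # noise estimate (assumes sparse spikes)
    above = np.abs(signal) > k * rms
    crossings = np.where(above[1:] & ~above[:-1])[0] + 1
    # Enforce a dead time so one action potential is not counted twice.
    dead = int(dead_time_s * fs)
    spikes, last = [], -dead
    for idx in crossings:
        if idx - last >= dead:
            spikes.append(idx)
            last = idx
    return np.array(spikes)

# Example: 1 s of synthetic noise with three inserted action potentials.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 2e-6, 16000)      # ~2 uV RMS noise, as in the text
for t in (3000, 8000, 12000):
    trace[t] -= 20e-6                      # negative-going spikes
print(detect_spikes(trace))
```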
C. Data analysis
Offline analysis was done in MATLAB (The Mathworks, Natick, Massachusetts). By semi-automatically comparing pictures with slides, we calculated the coordinates of the MEA electrodes. Only recordings of electrodes within the STN were used for further analysis. Spike trains were classified as random, regular or bursty, based on an algorithm introduced by Kaneoke & Vitek [7]. First, we computed the density histogram. To this end, a spike train was discretized with a binwidth equal to the mean interspike interval, and the number of spikes per bin was noted. The number of spikes per bin appears on the horizontal axis of the density histogram, and the number of times that a bin with this number of spikes was observed is on the vertical axis. We then checked whether this distribution was significantly different from a Poisson distribution with a mean of 1, using a χ-squared test (p<0.05).
When this was not the case (i.e. the spike train was not considered 'random'), we checked whether the distribution was closer to a normal distribution with a mean of 0.6 ('regular'), or whether it was closer to a Poisson distribution with a mean of 0.8 ('bursty').
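A compact sketch of this classification procedure is given below. It reflects our reading of the described algorithm: the reference means (1, 0.6 and 0.8) are taken from the text, while the distance measure and the handling of sparsely populated histogram bins are our simplifications, not the authors' MATLAB code.

```python
import numpy as np
from scipy import stats

def classify_spike_train(spike_times):
    """Rough sketch of the density-histogram classification described above."""
    isi = np.diff(spike_times)
    binwidth = isi.mean()                          # bin width = mean ISI
    edges = np.arange(spike_times[0], spike_times[-1] + binwidth, binwidth)
    counts, _ = np.histogram(spike_times, edges)   # spikes per bin
    occurrences = np.bincount(counts)              # the density histogram
    n_bins, ks = counts.size, np.arange(occurrences.size)

    # Chi-squared test against a Poisson distribution with mean 1 ('random').
    expected = stats.poisson.pmf(ks, 1.0) * n_bins
    keep = expected >= 1.0                         # ignore tiny expected counts
    chi2 = np.sum((occurrences[keep] - expected[keep]) ** 2 / expected[keep])
    p = stats.chi2.sf(chi2, df=max(keep.sum() - 1, 1))
    if p >= 0.05:
        return "random"
    # Otherwise pick the closer of the two reference shapes.
    freq = occurrences / n_bins
    d_regular = np.sum((freq - stats.norm.pdf(ks, loc=0.6, scale=1.0)) ** 2)
    d_bursty = np.sum((freq - stats.poisson.pmf(ks, 0.8)) ** 2)
    return "regular" if d_regular < d_bursty else "bursty"

# A homogeneous Poisson process should typically come out as 'random'.
rng = np.random.default_rng(1)
print(classify_spike_train(np.cumsum(rng.exponential(1.0, 200))))
```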
III. RESULTS

An example of our mapping of the MEA with pictures taken during measurement can be seen in figure 2. Here, a slice is shown that is located approximately 3.48 mm posterior to Bregma (Paxinos & Watson). This corresponds to the rostral part of the STN, which is (among others) innervated by the motor cortex. The highlighted electrode within the STN was calculated to lie 2.3 mm lateral from the midline and 8.6 mm ventral to Bregma. The recording lasted for 15 minutes. Action potentials from the 4 electrodes close to the STN are shown in figure 3. The average firing rates of the four neurons were 0.1, 0.1, 0.06 and 1.2 Hz, respectively. Note that the neuron located within the STN (the 4th neuron, also in figure 3) is the most active. Neurons 2 and 4 were classified as 'bursty', neuron 1 was termed 'random' and neuron 3 was labeled 'regular'. On further inspection, many spikes were part of doubles (two spikes in close succession), and bursts of longer duration were rare.
IV. DISCUSSION

Despite the changes in firing rate, the method used was able to discriminate several spiking patterns. The bursty neuron was found in the part of the STN which is innervated by the motor cortex [8]. More data will be added to facilitate a meaningful comparison between our findings and the literature.

ACKNOWLEDGMENT

The authors would like to thank Geert Ramakers and Irina Stoyanova for help with slice preparation, Richard van Wezel for discussions about experiments, and Daphne Zwartjes for help with MATLAB algorithms. The work was part of the BrainGain Smart Mix programme (workpackage 4.2) of the Netherlands Ministry of Economic Affairs and the Netherlands Ministry of Education, Culture and Science.

REFERENCES

1. Brown P (2003) Oscillatory nature of human basal ganglia activity: relationship to the pathophysiology of Parkinson's disease. Mov Disord 18(4): p. 357-63
2. Bevan MD et al (2002) Move to the rhythm: oscillations in the subthalamic nucleus-external globus pallidus network. Trends Neurosci 25(10): p. 525-31
3. Hamani C et al (2004) The subthalamic nucleus in the context of movement disorders. Brain 127(Pt 1): p. 4-20
4. Plenz D and Kital ST (1999) A basal ganglia pacemaker formed by the subthalamic nucleus and external globus pallidus. Nature 400(6745): p. 677-82
5. Rubin JE and Terman D (2004) High frequency stimulation of the subthalamic nucleus eliminates pathological thalamic rhythmicity in a computational model. J Comput Neurosci 16(3): p. 211-35
6. Kuhn AA et al (2008) High-frequency stimulation of the subthalamic nucleus suppresses oscillatory beta activity in patients with Parkinson's disease in parallel with improvement in motor performance. J Neurosci 28(24): p. 6165-73
7. Kaneoke Y and Vitek JL (1996) Burst and oscillation as disparate neuronal properties. J Neurosci Methods 68(2): p. 211-23
8. Magill PJ et al (2004) Synchronous unit activity and local field potentials evoked in the subthalamic nucleus by cortical stimulation. J Neurophysiol 92(2): p. 700-14

Author: Stegenga, J
Institute: BSS/EEMCS, MIRA, University of Twente
Street: Drienerlolaan 5
City: Enschede
Country: The Netherlands
Email: [email protected]
Suitable polymer pipe for modelling the coronary veins

Romola Laczko1, Tibor Balazs1, Eszter Bognar1

1 Budapest University of Technology and Economics (BME), Department of Materials Science and Engineering, 1111 Budapest, Goldmann square 3, Hungary
Abstract— Recently a new kind of in vitro testing of coronary veins was published, with limitations and open questions regarding the method. Several examinations and in vitro tests call for a suitable material to use in place of coronary veins. The aim of our investigation is to find a polymer pipe which can substitute for coronary veins during experiments. Four prepared coronary sinus parts and six different types of polymer materials were tested, and their mechanical properties were compared. Based on these results, the suitable material for further examinations was selected.

Keywords— coronary veins, tensile test, substitute material
I. INTRODUCTION
Even though several publications are available and experiments have been done on the topic of the mechanical properties of vessels, there are no data for special vessels like coronary veins [1]. This study was necessary because the previously performed vessel examinations tested only saphenous veins and arteries from different areas of the body. The literature on the mechanical properties of blood vessels is quite large; it is already known that arteries can be modelled with a two-layer model [2, 3, 4], and it has also been found that blood vessels have an incremental Young's modulus, whether they are investigated with the laws of elasticity or the laws of viscoelasticity [5]. It is also well known that the wall thickness and isobaric elastic properties of vein grafts increase after a few days, and a rearrangement of the elastic structures occurs [7, 8]. It is not enough to have experimental results for arteries from all over the body, because the structural differences between arteries and veins cause differences in the mechanical properties as well. It is well known that the vein has three layers. The first is a strong outer cover of the vessel and consists of connective tissues, collagen and elastic fibres. The second one is the middle layer, consisting of smooth muscle and elastic fibres, which is thinner in veins. The third one consists of smooth endothelial cells. These three layers have three different mechanical properties and three different behaviours under mechanical loading. Some previous investigations used intravascular pressure to measure the tangential elastic stress and the relative displacement. There are some similar investigations that concentrate on blood flow simulation of vessel systems [9, 10, 11], but these do not focus on the mechanical properties or on the complete modelling of the blood circuit in arteries and veins.
Large coronary veins are placed on the outer surface of the heart, between fat and pectoral muscle, and their elongation is mainly possible via an increase in diameter. This suggests that the mechanical properties of coronary veins in the longitudinal and transversal directions could be different.
II. METHODS

In certain developments, such as left ventricular pacemaker leads or electrophysiology diagnostic catheters that are placed in the coronary vein, the initial data should be the mechanical properties of coronary veins. From the viewpoint of electrophysiology and pacemaker lead fixation mechanisms, it is most important to know the maximal force and stress that may cause coronary vein dissections. This maximal force can be determined by tensile tests. The force required to tear a material and the amount it extends before tearing are the points of interest. Typically, the testing involves taking a small sample with a fixed cross-section area, and then pulling it with a controlled, gradually increasing force until the sample changes shape or breaks. Analysis of force-displacement or strength-relative strain curves can convey much about the material being tested, and it can help in predicting its behaviour.

A. Vein samples testing

Two pig hearts were received from a slaughterhouse and were delivered within one hour after explantation to the Semmelweis University in physiologic solution (Baxter Viaflo, Natrium Clorid 0,9% "Bieffe" infusion). Coronary veins were prepared from these hearts immediately after receiving them from the slaughterhouse and kept in a physiologic solution (0,9%) until testing. The total time between explantation and the testing phase was less than two hours. Several examinations and in vitro tests have been done with saline, and the effects of storing the sample in a physiologic solution on the mechanical properties of veins were also investigated: there were no significant differences in the mechanical properties of vessels stored in a physiologic solution until testing [12]. Four longitudinal samples were prepared from the coronary sinus (CS) parts. The length, width and thickness of the prepared samples were measured immediately after preparation and recorded together with the calculated cross sections (Table 1).
The longitudinal direction was defined as the longitudinal plane of the vein. The tensile tests were performed with a Zwick Z020 tensile test machine.

Table 1 Width, thickness, lengths and cross sections of the rectangular vein samples (longitudinal samples)

Sample      Width al (mm)   Thickness hl (mm)   Length ll (mm)   Cross section A0l (mm²)
1           6.60            0.38                22.53            2.49
2           8.10            0.40                18.53            3.21
3           7.83            0.48                19.07            3.76
4           8.93            0.43                15.77            3.87
Mean ± SD   7.87 ± 0.96     0.42 ± 0.04         18.98 ± 2.77     3.33 ± 0.63

Tearing tests were performed with a fixed lower clamp and a moving upper clamp at a constant stretching speed of about 20 mm/min. The distance between the grips was 5 mm. This distance was used as the starting length (L0) for the relative extension calculations. The maximal force and the corresponding travel distance values from the recorded force-displacement curves were used as initial values for the calculations (Table 2).

Table 2 Travel distance and maximal force values based on the force-displacement curves

Sample      Displacement ∆Ll (mm)   Maximal force Flmax (N)
1           6.25                    5.94
2           5.84                    9.54
3           6.33                    11.68
4           7.37                    11.18
Mean ± SD   6.45 ± 0.65             9.59 ± 2.59
A new type of grip was developed, because the previously applied grips and fixation modes caused the sample to rupture just after fixation. Two different types of surfaces were used in one grip for fixating the coronary vein parts: one side was silicone and the opposite was reticulated metal. This combination of materials was appropriate and the fixation was stable. A 37 °C physiologic solution (Baxter Viaflo, Natrium Clorid 0,9% "Bieffe" infusion) was used (Fig. 1) to model the original surroundings, and after every single testing phase it was drained via the plug and replaced with fresh 37 °C physiologic solution.
Relative extension (1) is the percentage of elongation during the tensile test. It was calculated as the ratio of the displacement to the starting length.
ε = (ΔL / L0) · 100%    (1)
where ε is the relative extension, L0 the starting length, and ΔL the displacement (recorded by the test machine during the test). Typically the cross section at the time of rupture is necessary to define the tensile strength. In the case of veins, only the starting cross section was available, because after tearing the necessary dimensions at the time of rupture could not be measured. To evaluate the maximal stress (tensile strength (2)) that can be tolerated by the veins, the recorded maximal forces and the previously calculated original cross sections were used.
σ = Fmax / A0    (2)
where σ is the tensile strength, Fmax the maximal force (recorded by the test machine during the test), and A0 the original cross section. Each prepared coronary sinus sample was successfully evaluated. The fixation mode proved to be stable. The force-displacement curves were recorded.

Fig. 1 Testing equipment: 1 - direction of tensile force; 2 - upper clamp; 3 - grips; 4 - vein sample; 5 - physiologic solution at 37 °C; 6 - plug; 7 - lower clamp
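As a sanity check on equations (1) and (2), the short Python snippet below reproduces the values of sample 1: the inputs come from Tables 1 and 2, and the outputs match the first row of Table 5. It is a worked example, not part of the authors' toolchain.

```python
# Worked example of equations (1) and (2) for vein sample 1.
L0 = 5.0        # starting length between the grips (mm)
dL = 6.25       # displacement at rupture (mm), Table 2, sample 1
F_max = 5.94    # maximal force (N), Table 2, sample 1
A0 = 2.49       # original cross section (mm^2), Table 1, sample 1

epsilon = dL / L0 * 100.0   # relative extension, eq. (1) -> 125 %
sigma = F_max / A0          # tensile strength, eq. (2)  -> 2.39 N/mm^2 = MPa

print(f"relative extension = {epsilon:.0f} %")
print(f"tensile strength   = {sigma:.2f} MPa")
```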
Based on the recorded curves and the defined equations, the relative extension and tensile strength of the coronary veins were calculated.

B. Tested polymer materials

The first group of examined samples consisted of vascular grafts made of polyester or polytetrafluorethylene. The other group contained polyvinylchloride and silicone pipes. The method was the same as in the case of the coronary veins. The length was 30 mm; the width and thickness of the samples were measured and recorded together with the calculated cross sections (Table 3).

Table 3 Width, thickness and cross sections of the polymer tube samples

Sample          Width al (mm)   Thickness hl (mm)   Cross section A0l (mm²)
1 - PVC         15              0.60                9.0
2 - PVC         16              0.70                11.2
3 - silicone    18              0.50                9.0
4 - PTFE        21              0.45                9.45
5 - polyester   34              0.60                20.4
6 - polyester   38              0.60                22.8

Three longitudinal samples were tested from each type of these materials with the tensile test machine. The tables below give the averages of the three measurements. The grips applied for the polymer samples had a simple metallic surface. The distance between the grips (L0) was 100 mm. The tensile tests were carried out at constant temperature (27 °C) and without using physiologic solution. The usual stretching speed of 10 mm/min was too slow for the polymer pipes, so a stretching speed of 100 mm/min was used for all the tested samples.

Table 4 Travel distance and maximal force values based on the force-displacement curves

Sample   Displacement ∆Ll (mm)   Maximal force Flmax (N)
1        75.46                   199.7
2        79.71                   156.75
3        56.17                   85.467
4        15.50                   130.591
5        13.20                   130.591
6        19.39                   229.44

C. Results

The mechanical properties of the coronary veins were successfully evaluated and calculated with the method and equations described before; they are listed in Table 5.

Fig. 2 Force-displacement curves of coronary vein samples

Table 5 The calculated mechanical properties of coronary veins (longitudinal samples)

Sample      Relative extension εt (%)   Tensile strength σt (MPa)
1           125                         2.39
2           117                         2.97
3           127                         3.11
4           147                         2.89
Mean ± SD   128.91 ± 13.01              2.84 ± 0.31

The mechanical properties of the polymer pipes were also calculated, based on the same equations and the previously detailed method (Table 6).

Table 6 The calculated mechanical properties of polymer tubes

Sample   Relative extension εt (%)   Tensile strength σt (MPa)
1        754.63 ± 100.77             22.18 ± 1.43
2        797.1 ± 69.1                13.99 ± 0.83
3        561.65 ± 61.6               8.487 ± 1.23
4        155.02 ± 41.9               13.8 ± 0.48
5        131.2 ± 48.4                3.11 ± 1.60
6        193.9 ± 28.8                11.33 ± 1.48

Comparing the synthetic samples to the coronary veins, the number 5 polyester graft and the number 3 silicone tube have similar mechanical properties. These materials can be used in special in vitro testing as a substitute material for coronary veins. The companies and physicians performing electrophysiological procedures and resynchronization pacemaker implantations need experiments to develop their methods. A lot of questions can be answered by in vitro testing. These investigations are simpler and cheaper when using a polymer pipe instead of explanted veins.
III. CONCLUSIONS

The results show the typical properties of coronary veins, but more samples need to be tested. The basic idea is appropriate for in vitro testing, and it is also suitable for defining the properties of coronary veins. The experiments show that it is difficult to find a synthetic material behaving similarly to veins,
but it is possible to find some materials which are suitable to replace the coronary sinus vein during in vitro testing. There are possibly differences in the mechanical properties of the longitudinal and transversal samples. In the future, longitudinal and transversal samples should be differentiated and tested in separate groups.
REFERENCES

[1] Balázs T, Bognár E, Zima E, Dobránszky J (2008) Mechanical Properties of Coronary Veins. Gépészet 2008, Proceedings of Sixth Conference on Mechanical Engineering, n7.pdf
[2] Holzapfel GA, Gasser TC, Ogden RW (2000) A new constitutive framework for arterial wall mechanics and a comparative study of material models. Journal of Elasticity 61:1-48
[3] Holzapfel GA, Gasser TC, Stadler M (2002) A structural model for the viscoelastic behaviour of arterial walls: continuum formulation and finite element analysis. European Journal of Mechanics A/Solids 21:441-463
[4] Matsumoto T, Sato M (2002) Analysis of stress and strain distribution in the artery wall consisted of layers with different elastic modulus and opening angle. JSME International Journal Series C, Mechanical Systems, Machine Elements Manufacturing 45:906-912
[5] Xiao L, Aditya P, Ghassam SK (2004) Biaxial incremental homeostatic elastic moduli of coronary artery: two layer model. American Journal of Physiology, Heart and Circulatory Physiology 287:H1663-H1669
[6] Fung YC, Liu SQ (1995) Determination of the Mechanical Properties of the Different Layers of Blood Vessels in vivo. Proceedings of the National Academy of Sciences of the United States of America 92:2169-2173
[7] Monos E, Berczi V, Nádasy Gy (1995) Local controls of veins: biomechanical, metabolic, and humoral aspects. Physiological Reviews
[8] Jenny S, Ghassam SK (2006) A novel strategy for increasing wall thickness of coronary venules prior to retroperfusion. American Journal of Physiology, Heart and Circulatory Physiology 291:H972-H978
[9] Molnár F, Till S, Halász G (2005) Arterial blood flow and blood pressure measurements at a physical model of human arterial system. Embec 2005, 3rd European Medical & Biological Engineering Conference, ISSN 1727-1983
[10] Till S, Halász G (2004) The effect of different artery wall models on arterial blood flow simulation. Proceedings of the first conference on biomechanics, Budapest, Hungary, pp 480-485
[11] Till S, Halász G (2004) Modelling and numerical computation of arterial blood flow. Proceedings of the 4th conference on mechanical engineering, Budapest, Hungary, Volume 2, pp 769-773
[12] Bartels-Stringer M, Terlunen L, Siero H, Russel FG, Smits P, Kramers C (2004) Preserved vascular reactivity of rat renal arteries after cold storage. Cryobiology 48(1):95-8; Boerboom LE, Wooldridge TA, Olinger GN, Rusch NJ (1992) Effects of storage solutions on contraction and relaxation of isolated saphenous veins. J Cardiovasc Pharmacol 20 Suppl 12:S80-4

Author: Romola Laczko
Institute: Budapest University of Technology and Economics (BME)
Street: Bertalan Lajos 7.
City: Budapest
Country: Hungary
Email: [email protected]
Satisfaction Survey of Greek Inpatients with Brain Cancer

G.K. Matis, O.I. Chrysou, N. Lyratzopoulos, K. Kontogiannidis, and T.A. Birbilis

Neurosurgical Department, Democritus University of Thrace Medical School, University General Hospital of Alexandroupolis, Alexandroupolis, Greece

Abstract— Introduction: The evaluation of patients' satisfaction based on structured questionnaires is an essential prerequisite for the assessment of the quality of health care services. The aim of this study was to investigate the satisfaction of brain cancer patients hospitalized in a tertiary care university public hospital in Alexandroupolis, Greece. Materials and methods: This cross-sectional study involved 163 patients who had been hospitalized for at least 24 hours. The patients filled in a satisfaction questionnaire previously approved by the Greek Ministry of Health. Four aspects of satisfaction were investigated (medical, hotel accommodation/organizational facilities, nursing, global). Using Principal Component Analysis, summated scales were formed and tested for internal consistency using Cronbach's alpha coefficient. Spearman's rank correlation coefficient (rs) was also used, and the threshold p-value for statistical significance (2-sided) was set at 0.05. Statistical analysis was performed with the Statistical Package for Social Sciences (SPSS v. 16.0). Results: We found a high degree of global satisfaction (73.31%), yet satisfaction was higher for the medical (88.88%) and nursing (84.26%) services. Satisfaction derived from the accommodation facilities and the general organization was more limited (74.17%). Self-assessment of health status at admission was negatively correlated with medical (rs=-0.157, p=0.045) and nursing (rs=-0.168, p=0.032) satisfaction. Greek citizenship contributed to higher satisfaction scores in the accommodation/organizational facilities dimension (rs=0.158, p=0.044). Finally, age was positively linked to nursing satisfaction (rs=0.181, p=0.02). Conclusion: The present study confirmed the results of previously published Greek surveys assessing general patient populations. However, more studies are needed to confirm these findings in a larger brain cancer population.

Keywords— brain cancer, inpatients, satisfaction.
I. INTRODUCTION

In general, evaluation of patients' satisfaction based on structured questionnaires is considered to be an essential prerequisite for assessing the quality of health care services [1]. Quality can be defined as the dynamic and continuous improvement of health care focusing on appropriateness, availability, continuity, effectiveness, efficacy, efficiency, respect, safety, and timeliness [2-4]. Yet the notion of quality is not identical to that of satisfaction [5]. Quality refers to the customers'-users' perception over a period of time, while satisfaction is related to specific moments of service [6].
Satisfaction depends also on the price of the service, and on the degree of difficulty of obtaining this service [7]. The role of users' expectations (predicted, desired, and adequate service) is of crucial importance too [8]. The consequences of a cancer diagnosis are far-reaching and complex, affecting not only the patient but his/her network of caregivers as well [9]. Among the malignant entities, brain cancer is considered unique in that the organ affected is traditionally viewed as the seat of an individual's literal sense of identity [10]. It is, thus, of major importance to assess the quality of medical services provided to this specific population [11]. The scope of the present study was to measure the level of satisfaction of inpatients diagnosed with brain cancer with the medical, nursing, organization and hotel services of the University General Hospital of Alexandroupolis, Greece, in order to investigate the level of quality of the services provided in this specific hospital.
II. MATERIALS AND METHODS

A. Questionnaire Administered

A satisfaction questionnaire approved by the Greek Ministry of Health and previously validated in Greek patients was employed [10]. The final questionnaire included 36 questions, which followed in chronological order the steps from the time the patient was admitted to the hospital until discharge [12]. The questionnaire included five domains: admission (3 items), medical services (4 items), nursing services (4 items), accommodation services (6 items), and administration services (2 items). The questionnaire also contained socio-demographic variables (13 items). Finally, all respondents were asked about their global satisfaction and their perceived health level at the day of admission and discharge (on a scale of 0-10). The main questions linked directly to satisfaction are presented in Table 1. The responses to closed-end questions were given on a 5-grade Likert scale (very satisfied, satisfied, neither satisfied nor dissatisfied, dissatisfied, very dissatisfied; 5 denoting maximum satisfaction and 1 minimum, with 0 standing for "I don't know/I don't answer") [13].

B. Study Population

The study was conducted among brain cancer patients admitted to the University General Hospital of Alexandroupolis (Alexandroupolis, Thrace, Greece), a tertiary care centre with 673 beds founded in 1939.
No such study has ever been reported in the region of Thrace. All patients who had remained in the hospital for at least 24 hours were eligible for inclusion. Patients with dementia or psychosis (proven by the medications prescribed) were excluded. All study participants signed a consent form.

Table 1 Questions included in the administered questionnaire based on which satisfaction was measured

Question   Description of question
Q17        Emergency department services (physicians)
Q18        Professional efficiency – diagnosis, therapy (physicians)
Q19        Information and instructions provision (physicians)
Q20        Behavior, human relationships (physicians)
Q21        Professional efficiency, responsiveness, care (nurses)
Q22        Behavior, human relationships (nurses)
Q24        Professional efficiency (nurses paid by patients)
Q25        Cleanliness of wards, hospital
Q26        Toilet cleanliness
Q27        Organization – noise, visiting hours
Q28        Food – breakfast, lunch, dinner
Q29        Behavior (food distributing personnel)
Q30        Ability to communicate – television, telephone, salon
Q31        Processing of medical needs – schedule, further examinations
Q32        Administration – admission, payments, secretary
Q33        Global satisfaction
C. Data Collection

A sample of 194 patients who had been discharged between January 2005 and February 2009 was contacted at the day of discharge. The selected patients were administered the satisfaction questionnaire and were asked to complete it on the spot without any interference from the researchers. The average completion time was 9 minutes, and the obtained response rate was 84.02% (163 patients). The method of a telephone or mail survey was ruled out, since Greece has no tradition in such surveys and the expected response rate was too small [10].

D. Statistical Analysis

The scoring scale for each domain was standardized between 0 and 100, with a score of 100 indicating the highest level of satisfaction. The same standardization was performed for global satisfaction (Question 33 - Q33), satisfaction from nursing services (computed by merging Q21 and Q22), and two other dimensions constructed with the aid of Principal Component Analysis (PCA) [14], namely satisfaction from medical services and satisfaction from accommodation/administration. The Kaiser-Meyer-Olkin measure of sampling adequacy gave a value of 0.798 [15]. Data were analyzed with the PCA method and rotated with the Varimax system (Kaiser Normalization), taking into account the internal consistency reliability [14,16].
For a Q to become part of a summated scale, it had to present a correlation > 0.5 [17,18]. In addition, the differences between the correlation coefficients of each Q with the different components (factor loadings) had to be > 0.20 [18,19]. For exploring the possible correlation of the four aforementioned scales with various socio-demographic variables and self-perceived health status, the Spearman rank correlation coefficient was utilized. Descriptive statistics were calculated for the socio-demographic variables. In the univariate analysis, the relationships of these variables, prior admissions, and survey completion logistics variables with the four satisfaction scales were studied. Moreover, the Student's t test, the analysis of variance (ANOVA), and the Kruskal-Wallis test were utilized for continuous variables, and the Chi-square test or Fisher's exact probability test for categorical variables. The threshold p value for statistical significance (2-sided) was set to 0.05. Statistical analysis was performed with the Statistical Package for Social Sciences (SPSS v. 16.0).
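For illustration, two of the core computations named above, the standardization of Likert scores to a 0-100 scale and Cronbach's alpha, can be reproduced outside SPSS as in the sketch below; the data here are randomly generated placeholders, not the study data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def to_0_100(score, lo=1, hi=5):
    """Standardize a Likert score so the scale runs from 0 to 100."""
    return (score - lo) / (hi - lo) * 100.0

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(163, 4)).astype(float)  # mock: 163 patients
print("alpha =", round(cronbach_alpha(likert), 3))
print("Q scaled:", to_0_100(4))                           # 4 of 5 -> 75.0
rho, p = stats.spearmanr(rng.normal(size=163), rng.normal(size=163))
print("Spearman rs =", round(rho, 3), "p =", round(p, 3))
```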
III. RESULTS

A. Patient Characteristics

The mean age of the participants was 58.9 years; 53.4% were men, 73.0% were married, 51.5% came from the city of Alexandroupolis, 33.7% had a university degree, and 44.2% had never been hospitalized in this hospital before.

B. Principal Component Analysis (PCA)

Two components were found, which constructed two summated (multi-item) scales. The two components (C1 and C2) explained 83.479% of the total variance. Q22 and Q29 gave similar factor loadings and were excluded from further analysis. Q17-21 (satisfaction mainly from the medical staff) seemed to relate to C2, with a Cronbach's coefficient of 0.829. Removal of Q17 increased the coefficient's value to 0.947. On the other hand, Q24-28 and Q30-32 (satisfaction from accommodation and administration) seemed to relate to C1, with a Cronbach's coefficient of 0.949. By removing Q24, internal consistency improved (coefficient=0.966). The responses to Q18-21, Q25-28 and Q30-32 (i.e. excluding Q17 and Q24) of the two multi-item scales were grouped, and two new summated scales were constructed: satisfaction from accommodation/administration and satisfaction from medical services. The mean satisfaction from medical services (88.880±1.103) was larger in comparison with the satisfaction computed for accommodation/administration (74.167±1.423). The first one also presented larger minimum values (50% vs. 17.86%). Mean satisfaction from nursing services was slightly smaller compared to the physician one (84.259±1.265), and global satisfaction was even lower (73.313±1.494).
Females, Greek citizens, patients in the 19-35 age group, and those with elementary education and only one prior admission presented the best scores. The highest reported mean medical satisfaction score was also found in married subjects living in semi-urban areas. Divorced patients and patients working in the public sector reported the highest satisfaction scores with respect to the nursing services. High satisfaction derived from accommodation-administration and high global satisfaction were found mainly in unmarried brain cancer patients living in urban centers other than Alexandroupolis and having Pronoia (poor patients) as their insurance provider.

C. Correlations of the Four Summated Scales with Socio-demographic Variables

No variable was found to relate to Q33 (global satisfaction). Age was related only to nursing satisfaction (rs=0.181, p=0.021); older patients were more satisfied. Citizenship related only to accommodation/administration satisfaction (rs=0.158, p=0.044); Greeks were more satisfied. Self-perceived health status at admission related to medical (rs=-0.157, p=0.045) and nursing (rs=-0.168, p=0.032) satisfaction; patients with better self-perceived health were more dissatisfied. No such correlation was found for medical (rs=-0.060, p=0.446) and nursing satisfaction (rs=-0.056, p=0.483) at discharge. Finally, gender, location, insurance provider, level of education, marital status, and number of prior admissions did not seem to relate to any of the satisfaction scales.
IV. DISCUSSION

The majority of the variables presented in this paper have been previously studied in other Greek satisfaction surveys [10,20]. However, to our knowledge, there is no published study implementing all these variables, and by no means is there a report investigating hospitalised brain cancer patients' satisfaction in the region of Thrace, Greece. Completing the questionnaire after hospitalization seems to limit positive bias, since patients are no longer "hostages" of the hospital system [21]. Our data suggest that the four dimensions of satisfaction were affected by age, citizenship, and self-perceived level of health at admission. In our patients global satisfaction was found to be 73.313%, similar to other Greek studies carried out in a general patient population; Polyzos et al (2005) reported a rate of 3.98 (on a scale of 0-5) [22]. No variable was statistically related to this dimension.
Other researchers linked older age and lower education with high satisfaction [10,23,24]. However, satisfaction in patients older than 80 years seems to decline [25]. In contrast to our findings, it has been documented that women [26] and married patients [27] are more satisfied with hospital care. However, Quintana et al (2006) reported higher levels of satisfaction in men and in the single/divorced subgroup [28]. Satisfaction from medical services was 88.88% (in other Greek hospitals it ranged from 80.4% to 92.8%) [10,20,22] and satisfaction from nursing services was 84.259% (in other Greek hospitals it ranged from 52.4% to 90.0%) [10,20,22]. Both of these dimensions were correlated with poor self-perceived health status at admission. Interestingly, patients with prior admissions were not found to be more demanding, in contrast to published research [28] in which the bigger the number of prior admissions, the bigger the satisfaction scores. In addition, other studies linked good self-perceived health status at admission to a higher satisfaction score [23,29]. It should be stressed that patients generally give higher ratings to physicians, since they do not fully understand their services, and that the lack of nursing personnel plays a crucial role in the acquired responses [30]. The accommodation/administrative dimension was graded with 74.167%. Surveys conducted in Greece reported a satisfaction rate for administrative services ranging from 75% to 96.2% [20,22], while Niakas et al (2004) reported a satisfaction score for accommodation services of 75.9% [10]. No association between length of stay or number of prior admissions and satisfaction on domains such as cleanliness was found, even though this is frequently the case in many international studies [23,28]. Certain limitations of this study should be underlined. Firstly, Q18 and Q21 (professional efficiency of physicians and nurses) could be problematic, since patients rarely possess the necessary knowledge to judge this matter. Secondly, all questions were considered as ratio scales; however, other researchers treat them as ordinal data [31] or even as interval scales [32]. Thirdly, the investigation of patient satisfaction is a dynamic and evolving process which demands not only the participation of a large number of patients from different hospitals at various points in time, but also the management of the lack of sensitivity of questionnaires to changes in patients' satisfaction [10,24,33]. Finally, it has been documented that satisfied patients tend to answer more often [34]. Similar studies in American and European countries highlight the importance of health personnel participation in educational activities such as seminars and postgraduate courses [35]. Of major importance seems to be the adoption of patient-centred communication, a notion incorporating patient understanding through his/her own values and special psychosocial environment [36,37].
V. CONCLUSION

These results were in line with similar Greek studies, demonstrating a uniformity in patients' responses throughout the Greek territory, even though our sample was limited to patients diagnosed with brain cancer [10,20,22,38,39].
REFERENCES

1. Andrzejewski N, Lagua RT (1997) Use of a customer satisfaction survey by health care regulators: a tool for total quality management. Public Health Rep 112:206-210
2. Cafferky ME (1997) Managed care & You. Health Information Press, Los Angeles, pp 167-176
3. Donabedian A (2003) An introduction to quality assurance in health care. Oxford University Press, New York, pp 46-57, 139-143
4. Enthoven AC, Vorhaus CB (1997) A vision of quality in health care delivery. Health Affairs 16:44-57
5. Cunningham L (1991) The quality connection in health care. Integrating patient satisfaction and risk management. Jossey-Bass Inc, Publishers, San Francisco, pp 105-144, 148-155
6. Urden LD (2002) Patient Satisfaction Measurement. Current Issues and Implications. Outcomes Manage 6:125-131
7. Bittner MJ, Booms BH, Tetreault MS (1990) The service encounter: diagnosing favorable and unfavorable incidents. J Mark 54:69-82
8. Zeithaml VA, Berry LL, Parasuraman A (1995) The nature and determinants of customer expectations of service. In: Bateson JEG (ed). Managing services marketing. Text and readings. The Dryden Press, Fortworth, pp 7-56, 133-147, 233-248
9. Bar-Tal Y, Barnoy S, Zisser B (2005) Whose informational needs are considered? A comparison between cancer patients and their spouses' perceptions of their own and their partners knowledge and informational needs. Soc Sci Med 60:1459-1465
10. Emanuel LL, Alpert HR, Baldwin DC et al (2000) What terminally ill patients care about: toward a validated construct of patients' perspectives. J Palliat Med 3:419-431
11. Lipsman N, Skanda A, Kimmelman J et al (2007) The attitudes of brain cancer patients and their caregivers towards death and dying: a qualitative study. BMC Palliat Care 6:7
12. Gonzalez N, Quintana JM, Bilbao A et al (2005) Development and validation of an in-patient satisfaction questionnaire. Int J Qual Health Care 17:465-472
13. Likert R (1932) A technique for the measurement of attitudes. Arch Psychol 140:1-55
14. Kline P (1994) An Easy Guide to Factor Analysis. Routledge, London, pp 28-41, 56-79
15. Kaiser HF (1970) A second-generation Little Jiffy. Psychometrika 35:401-415
16. Nunnally JC, Bernstein ICH (1994) Psychometric Theory. 3rd ed. McGraw-Hill, New York, pp 25-58
17. Westaway MS, Rheeder P, Van Zyl DG et al (2003) Interpersonal and organizational dimensions of patient satisfaction: the moderating effects of health status. Int J Qual Health Care 15:337-344
18. Labarere J, Francois P, Bertrand D et al (1999) Outpatient satisfaction: Validation of a French-language questionnaire: Data quality and identification of associated factors. Clin Perform Qual Health Care 7:63-69
19. Cronbach LJ (1951) Coefficient alpha and the internal structure of tests. Psychometrika 16:297-334
20. Niakas D, Gnardellis C (2000) Inpatient satisfaction in a regional hospital of Athens. Iatriki 77:464-470 (in greek)
21. Ford RC, Bach SA, Fottler MD (1997) Methods of measuring patient satisfaction in health care organizations. Health Care Manage Rev 22:74-89
22. Polysos N, Bartsokas D, Pierrakos G et al (2005) Comparative surveys for patient satisfaction between hospitals in Athens. Arch Hell Med 22:284-295 (in greek)
23. Nguyen Thi PL, Briancon S, Empereur F et al (2002) Factors determining inpatient satisfaction with care. Soc Sci Med 54:493-504
24. Campbell SM, Roland MO, Buetow SA (2000) Defining quality of care. Soc Sci Med 51:1611-1625
25. Jaipaul KC, Rosenthal GE (2003) Are older patients more satisfied with hospital care than younger patients? J Gen Intern Med 18:23-30
26. Hargraves JL, Wilson IB, Zaslavsky A et al (2001) Adjusting for patient characteristics when analyzing reports from patients about hospital care. Med Care 39:635-641
27. Hall JA, Dornan MC (1990) Patient sociodemographic characteristics as predictors of satisfaction with medical care: a meta-analysis. Soc Sci Med 30:811-818
28. Quintana JM, González N, Bilbao A et al (2006) Predictors of patient satisfaction with hospital health care. BMC Health Serv Res 6:102-110
29. Jackson JL, Chamberlin J, Kroenke K (2001) Predictors of patient satisfaction. Soc Sci Med 52:609-620
30. Hyrkãs K, Paunonen M, Laippala P (2000) Patient satisfaction and research-related problems (part 1). Problems while using a questionnaire and the possibility to solve them by using different methods of analysis. J Nurs Manage 8:227-236
31. Finkelstein BS, Singh J, Silvers JB et al (1998) Patient hospital characteristics associated with patient assessments of hospital obstetrical care. Med Care 36:AS68-78
32. Krowinski WJ, Steiber SR (1996) Measuring and managing patient satisfaction. American Hospital Publishing, Illinois, pp 40-49
33. Garratt AM (2007) Parent experiences of paediatric care (PEPC) questionnaire: reliability and validity following a national survey. Acta Paediatr 96:246-252
34. Mazor KM, Clauser BE, Field T et al (2002) A Demonstration of the Impact of Response Bias on Results of Patient Satisfaction Surveys. HSR 37:1403-1417
35. Weiner JS, Cole SA (2004) Three principles to improve clinician communication for advance care planning: overcoming emotional, cognitive, and skill barriers. J Palliat Med 7:817-829
36. Van den Brink-Muinen A (2002) The role of gender in healthcare communication. Patient Educ Couns 48:199-200
37. Vieder JN, Krafchick MA, Kovach AC et al (2002) Physician-patient interaction: What do elders want? JAOA 102:73-78
38. Gnardellis C, Niakas D (2005) Factors influencing inpatient satisfaction. An analysis based on the Greek National Health System. Int J Healthcare Technol Manage 6:307-320
39. Labiris G, Niakas D (2005) Patient satisfaction surveys as a marketing tool for Greek NHS hospitals. J Med Mark 5:324-330
Author: Georgios K. Matis
Institute: Democritus University of Thrace
Street: Dragana
City: Alexandroupolis
Country: Greece
Email: [email protected]
End stage renal disease patients' projections using Markov Chain Monte Carlo simulation

A. Rodina, K. Bliznakova, N. Pallikarakis

University of Patras, Department of Medical Physics, BITU, Patras, Greece
Abstract— The continuous increase in the number of end stage renal disease (ESRD) patients remains one of the major problems of this century. In 2007 about 11300 ESRD patients were in need of renal replacement therapy (RRT) in Greece. In the same year, Greece ranked 8th internationally in RRT prevalence per million population (pmp) and 4th in the incidence pmp of ESRD patients, behind only the United States, Japan and Germany. ESRD treatment methods are considered to be among the most expensive procedures for chronic conditions worldwide. Predicting the future number of patients on RRT is essential for health care providers in order to achieve more effective resource management. In this work we present a Markov Chain Monte Carlo simulation for predicting the future number of ESRD patients in Greece for the period 2008-2012. A continuous increase in incidence and prevalence was projected, resulting in a 26% prevalence increase in 2012 compared to 2007.

Keywords— End-stage renal disease, Markov model, Monte Carlo simulation.

I. INTRODUCTION
End stage renal disease (ESRD) remains one of the major problems of the 21st century, despite the progress in medical technologies and disease prevention programs. It is a chronic disease related to complete or almost complete failure of kidney function. During the last decade, Greece has been among the countries with the highest numbers of incident and prevalent ESRD patients [1, 2]. Since 2002, Greece has never been lower than 5th place in incidence and 10th place in prevalence internationally. In 2007, there were about 11300 patients in Greece in need of renal replacement therapy (RRT). There are three major types of RRT available: hemodialysis (HD), peritoneal dialysis (PD) and transplantation (TX). Each therapy affects the patients' quality of life to a different degree. TX and home HD are considered to provide a better quality of life for RRT patients compared to the other therapies [3-5]. An investigation on the costs of the available RRT in Greece, performed in 2002, revealed that organ transplantation, telematic HD and home HD, besides improving patients' quality of life, could significantly reduce treatment costs [6, 7].
essential for the health care authorities and policy makers to reveal the best scenarios for better resource allocation and technology assessment as well as predicting the future demands of the RRT in Greece. We have previously reported on a deterministic approach using Markov model to predict the number of ESRD patients per age group and therapy for Greece for the period 2007-2012 [8]. In this model, the transition probabilities are constant and they were derived from several assumptions due to limited data. In order to evaluate the results of the model and therefore the model itself, a sensitivity analysis should be performed. This analysis will set the variations of the model outputs as a function of the uncertainties of the input model parameters such as transition probabilities and treatment assignment probabilities. This can be achieved in several ways. Probabilistic sensitivity analysis by means of Monte Carlo techniques is among the techniques that enable easy implementation and simultaneously testing the uncertainties of the input parameters. The purpose of this work is to extend the available deterministic Markov chain model to involve Monte Carlo techniques in predicting the future number of ESRD patients in Greece for the period 2008-2012. The next step of this work is to introduce probabilistic sensitivity analysis into created model. II. METHODS
Markov chain Monte Carlo (MCMC) comprises Monte Carlo techniques used to sample from the probability distributions of a constructed Markov chain. Our Markov chain consists of three mutually exclusive treatment states, hemodialysis (HD), peritoneal dialysis (PD) and kidney transplantation (TX), plus death, which is the absorbing state (figure 1). The possible transitions between the treatments are shown by the arrows; the backward bending arrows indicate that patients may remain in the state they were in during the previous cycle.
The incidence for the age groups 45-65 and >65 was modeled by a linear regression model based on data available from 1998-2007. It was observed that during the period 1998 to 2007 the number of incident patients less than 45 years of age fluctuated between 173 and 203. Therefore, the incidence for this age group was represented by the averaged number of incident patients for the period 1998 to 2007.
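For illustration, a minimal sketch of this projection step using a least-squares linear fit; the incident counts below are hypothetical placeholders, not the registry data used in the study:

```python
import numpy as np

# Hypothetical yearly incident counts for one age group, 1998-2007
years = np.arange(1998, 2008)
incidence = np.array([520, 548, 575, 601, 640, 668, 702, 731, 770, 803])

# Fit the linear trend and extrapolate it to 2008-2012
slope, intercept = np.polyfit(years, incidence, 1)
projection = slope * np.arange(2008, 2013) + intercept
print(projection.round())
```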
Fig. 1 The possible transitions of the patients between the states of the Markov chain model
The future number of prevalent patients was calculated by the following equation:

P(t+1) = P(t)·TM + I(t+1)·Id    (1)

where t is the time in yearly cycles, TM is the transition matrix, I is the number of incident patients and Id is the vector of the incident patients' initial assignment probabilities to RRT. Monte Carlo techniques are used to sample the probabilities of a patient's initial therapy assignment and of changing the current therapy or dying. Figure 2 shows the realization of the MCMC model, while figures 3a and 3b show the implementation of the Monte Carlo techniques in detail. The simulation starts by tracing the prevalent patients from the initial year, i.e. 1998. Each patient is individually followed over the period 1998-2012 until the final year is reached or the patient enters the death state. For each year, the algorithm samples the probability that a patient changes the current therapy (figure 3a). All prevalent patients in the starting year are followed in this way. In each subsequent year there are also incident patients who need to start RRT; their initial assignment to a treatment, i.e. HD, PD or TX, is based on sampling from probability distributions obtained from the available data for the period 1998 to 2007 (figure 3b). Each set of probabilities covers the three treatments and sums to 100% [8]; one set of initial assignment probabilities is chosen randomly out of the ten available. The statistical data used in this study were extracted from the European Renal Association - European Dialysis and Transplant Association annual reports covering the period from 1998 to 2007. Patients were classified by age into three groups: <45, 45-65 and >65. The model incorporates 48 possible transitions made up of three 4x4 transition matrices. The transition matrices for each age group were predetermined based on the work of Boletis [9] and Ioannidis et al [10] and in accordance with associated nephrologists.
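As a concrete illustration of the sampling step of figure 3a, the sketch below follows a single patient through yearly cycles. The transition probabilities are hypothetical placeholders, not the age-specific matrices derived from [9, 10]:

```python
import random

# Hypothetical transition probabilities for one age group; rows give the
# current state, columns the next state in the order HD, PD, TX, Death.
STATES = ["HD", "PD", "TX", "Death"]
TM = {
    "HD":    [0.80, 0.05, 0.05, 0.10],
    "PD":    [0.10, 0.75, 0.05, 0.10],
    "TX":    [0.03, 0.01, 0.90, 0.06],
    "Death": [0.00, 0.00, 0.00, 1.00],  # absorbing state
}

def next_state(state):
    """One yearly cycle (the figure 3a step): draw a uniform n and walk
    through the cumulative transition probabilities of the current state."""
    n = random.random()
    cumulative = 0.0
    for s, p in zip(STATES, TM[state]):
        cumulative += p
        if n < cumulative:
            return s
    return "Death"  # guard against floating-point round-off

# Follow one prevalent patient from 1998 until 2012 or death
state, year = "HD", 1998
while year < 2012 and state != "Death":
    state = next_state(state)
    year += 1
print(year, state)
```

Running many such patient traces and counting the year-end states reproduces the prevalence recursion of equation (1) stochastically.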
Fig. 2 MCMC model for patients on RRT. NP is the number of prevalent patients, NI is the number of incident patients and Y is the year
The model adequacy was tested by the data-splitting approach: data from the period 1998-2002 were used to project the year-end prevalence for the period 2003-2007. The highest absolute error value was less than 5%.
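A minimal sketch of this adequacy check, with hypothetical projected and observed prevalence counts standing in for the real 2003-2007 figures:

```python
def max_abs_percent_error(projected, observed):
    """Largest absolute percent error between projected and observed
    year-end prevalence (the data-splitting check described above)."""
    return max(abs(p - o) / o * 100 for p, o in zip(projected, observed))

# Hypothetical prevalence counts for 2003-2007
print(max_abs_percent_error([9300, 9700, 10150, 10600, 11100],
                            [9450, 9800, 10100, 10700, 11300]))
```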
Fig. 3 a) Monte Carlo algorithm for a patient's assignment to treatment through the transition matrix. b) Monte Carlo algorithm for incident patients' assignment to the initial treatment modality. P is the probability of assignment to a treatment

Fig. 4 ESRD patients' incidence projection by age group. 1998-2007: real data, 2008-2012: projection. A linear trend was used to project the incidence for the age groups 45-65 and >65

Fig. 5 ESRD patients' prevalence projection by age group. 1998-2007: real data, 2008-2012: projection

III. RESULTS
The developed MCMC model was used to project the prevalent ESRD patients by therapy and age group for the period from 2008 to 2012. Simulations were performed 10 000 times in order to achieve a statistical error below 1%. The prevalence projections involve simulation of both incident and prevalent patients through the model. The projection of the incidence of Greek patients on RRT is shown in figure 4. The percent increase in incidence in 2012 compared to 2007 is expected to be 31.6%. The highest increase, 43.77%, is observed in the oldest age group (>65); in the other two groups, <45 and 45-65, the increase is expected to be 5% and 12.03% respectively. The average annual growth of the total number of incident patients was calculated to be 5.7%.

The annual average increase in prevalence of the RRT patients for the period 2008-2012 is expected to be 4.71%, resulting in 14222 patients in 2012 (figure 5). The increase in the number of patients in 2012 compared to 2007 is modeled as follows: 8.26% in the age group <45, 13.73% in the group 45-65 and 40.85% in the oldest group (>65). The total increase in patients is expected to be 25.88%. The prognosis of the patients' increase by therapy is depicted in figure 6. The model predicts a 25.63% prevalence increase of patients on HD in 2012 compared to 2007; the increases are expected to be higher for patients on PD and TX, at 34.71% and 26.31% respectively.

Fig. 6 ESRD patients' prevalence projection by therapy. 1998-2007: real data, 2008-2012: projection
IV. DISCUSSION

We developed and validated a Monte Carlo simulation model in conjunction with a Markov chain to project the number of ESRD patients in Greece from 2008 to 2012. Monte Carlo techniques were used to evaluate the previously built deterministic model. The MCMC model will be further exploited in an extensive sensitivity analysis of the variations of the model outputs as a function of the uncertainties of the model inputs, an objective that is easily reached through Monte Carlo techniques. The model projects that in 2012 the number of ESRD patients in Greece will reach 14222, shared between the age groups as follows: 2006 patients in the youngest age group (<45), 4489 patients in the age group 45-65 and 7727 patients in the oldest age group (>65). The number of incident patients in 2012 is expected to reach 2486.

The model uses a linear regression trend to represent the increase of the incidence of patients aged >45. Linear trends in incidence have been reported in other countries as well; this has been explained by population ageing and by medical technology becoming effective in treating co-morbidities and chronic disease [11]. The predicted and experienced average annual growth of about 5-6% for both prevalence and incidence is comparable with other studies [12].

Given the evidence of the constant increase in the number of RRT patients, prediction of future trends is a valuable tool for healthcare decision makers seeking more efficient and reasonable resource allocation and management. At the moment home HD, which is reported to be more cost-effective than in-center HD and PD, is not well established in Greece [13]. Similarly, there is an insufficient number of transplantations, resulting in long waiting lists; the reasons reported for the low number of transplants are the lack of organization of transplant delivery and cultural-religious attitudes [6]. Furthermore, transplantation is known to be the most cost-effective treatment with the most potent effect on patients' quality of life [3-6]. Introducing new technologies and better resource allocation could drastically influence the cost-effectiveness of RRT and the quality of life of Greek patients.

ACKNOWLEDGMENT

The Greek State Scholarship Foundation (I.K.Y.) is acknowledged for the financial support of Anastassia Rodina's PhD.

REFERENCES
1. European Renal Association - European Dialysis and Transplant Association at http://www.era-edta.org/
2. Finnish Registry for Kidney Diseases at http://www.musili.fi/fin/munuaistautirekisteri/finnish_registry_for_kidney_diseases/
3. Roberts DS, Maxwell DR, Gross LT (1980) Cost-effective care of end-stage renal disease: a billion dollar question. 92(2, Part 1):243-248
4. Mallik N (1997) The cost of renal services in Britain. Nephrol Dial Transplant 12(Suppl 1):25-28
5. Hakkarainen P, Kapanen S, Honkanen E et al (2005) Long slow night hemodialysis and quality of life. Hemodial Int 9(1):97
6. Kaitelidou D, Ziroyanis P, Maniadakis N et al (2005) Economic evaluation of hemodialysis: implications for technology assessment in Greece. Int J Technol Assess Health Care 21(1):40-46
7. Stavrianou K, Pallikarakis N (2007) Quality of life of end-stage renal disease patients and study on the implementation of nocturnal home hemodialysis in Greece. Hemodial Int 11(2):204-209
8. Rodina A, Bliznakova K (2009) Prevalence prognosis of the end stage renal disease patients in Greece. IFMBE Proceedings WC 2009, World Congress on Medical Physics and Biomedical Engineering, Vol. 25, Munich, Germany
9. Ioannidis G, Papadaki O, Tsakiris D (2002) Statistical and epidemiological data of the renal replacement therapy in Greece. Report of the Hellenic Renal Registry, Hellenic Nephrology 14:525-548
10. Boletis J (2001) Renal transplantation in Greece. Nephrol Dial Transplant 16(Suppl 6):137-139
11. Apostolou T (2007) Quality of life in the elderly patients on dialysis. Int Urol Nephrol 39(2):679-683
12. Schaubel DE, Morrison HI, Fenton SS (2000) Projecting renal replacement therapy-specific end-stage renal disease prevalence using registry data. Kidney Int 57(Suppl 74):S49-S54
13. Kontodimopoulos N, Niakas D (2006) A 12-year analysis of Malmquist total factor productivity in dialysis facilities. J Med Syst 30:333-342

Corresponding author:
Author: Anastassia Rodina
Institute: University of Patras
Street: 26500 Rio
City: Patras
Country: Greece
Email: [email protected]
Influence of Bioimplant Surface Electrical Potential on Osteoblast Behavior and Bone Tissue Formation

Yu. Dekhtyar1, I. Khlusov2, N. Polyaka1, R. Sammons3, and F. Tyulkin1

1 Riga Technical University, Riga, Latvia
2 Tomsk Branch of the Russian Ilizarov Scientific Centre «Restorative Traumatology and Orthopaedics», Tomsk, Russia
3 University of Birmingham, School of Dentistry, Birmingham, United Kingdom
Abstract— According to the general theory of adhesion, the attachment of bone cells to a bioimplant can be controlled by its surface electrical potential. In this work the potential was adjusted both by deposition of electrical charge and through the surface microgeometry. It was demonstrated that the relief of the surface influences the potential at the nanoscale and can be engineered to promote cell adhesion. Both the deposition of charge and the engineering of the potential influenced human osteoblast attachment and bone tissue formation. The surface areas of the electrical potential exhibited a correlation length affecting the formation of bone tissue.

Keywords— biomaterial, bioimplant, electrical potential, surface, bone cell, bone tissue.
I. INTRODUCTION

Despite the considerable progress in understanding the interaction of human cells with bioimplants positioned in the human organism, biocompatibility problems remain. These are often connected with the inability of the desired human cells to attach to the implant surface. According to the general theory of adhesion [1], the attachment of a cell to a bioimplant can be controlled by the electrostatic force contributing to the interaction between the cell and the implant [1, 2]. Generally, this electrical communication can be engineered through the surface electrical potential of the implant. The potential can be supplied both by external sources and by the surface itself. This article aims to demonstrate both possibilities.
II. METHODS AND MATERIALS

To demonstrate the possibility of engineering the potential from an external source, a hydroxyapatite (HAP) based implant model was selected. This choice was motivated by the availability of a novel non-contact 3D technology for electrical charge deposition invented for HAP [3].
The technologies typically in use [4, 5] supply electrical charge to HAP specimens either by polarization in an electrical field [4] or by irradiation with charged particles [5]. In both cases the opposite surfaces of the HAP based implant acquire charges of opposite sign. This provides an inadequate influence on the osteoinduction of bone tissue: cells induced by one side of the charged implant surface could be uninduced by the other, oppositely charged side. This restricts the implementation of the above technologies for medical applications.

HAP has hydrogen connected to oxygen: Ca5(PO4)3OH. The disposition of the hydrogen with respect to the oxygen influences the HAP surface charge; the HAP surface electrical charge strongly depends on the density of the protons [6] coupled to the oxygen. Shifting the hydrogen therefore gives an opportunity to affect the charge. For instance, external hydrogen gas under high pressure can alter the proton disposition in the HAP surface layer [7], owing to pressure-induced diffusion/migration [8] of the protons. As a result, the HAP surface charge is influenced.

The HAP ceramic specimens, with a diameter of 5 mm and a thickness of 1 mm, were supplied by the EC project PERCERAMICS. The porosity of the specimens was equal to zero and their surface morphology had an irregularity ≤0.5 μm. The specimens were processed with hydrogen gas at 6 MPa for 1 hour.

The alteration of the surface charge was estimated via measurement of the photoelectron emission work function (ϕ). The value of ϕ characterizes the energy that must be supplied to an electron for it to escape from the solid. By the energy conservation law, the emitted electron is influenced by the electrical field of the surface charge, which contributes to ϕ. To measure ϕ, the photoelectron emission current (I) from the specimen was induced by ultraviolet photons in the range 3-6 eV. The value of ϕ was identified
from the dependence of I on the photon energy; in particular, the photon energy at which I = 0 was taken as ϕ. Measurements were performed under vacuum (10^-1 Pa) using a custom-built spectrometer [9].

Attachment and proliferation of the osteoblast cells were explored in the following way. SAOS-2 human osteoblasts (ATCC Cat No. HTB-85™; LGC Standards, Teddington, UK) were cultured in McCoy's 5a medium (Gibco, Paisley, UK) containing 10% foetal calf serum (Sigma, UK), 2.5% Hepes (Gibco, Paisley, UK) and 1% penicillin/streptomycin (Gibco, Paisley, UK) until confluent, harvested using trypsin-EDTA (Gibco, Paisley, UK) and resuspended in the same medium at a concentration of 1 x 10^5 cells/ml. The cells were allowed to recover from the enzyme for 1 h at 37 ºC, and 1 ml of suspension was then added to each specimen in separate wells of a 24-well plate at 37 ºC in an incubator containing 5% CO2, for 30 min [10]. Experiments were terminated by removal of the cell suspension and washing three times in phosphate buffered saline to remove non-attached cells. Attached cells were then fixed for 1 h in 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer, pH 7.3, dehydrated in alcohol and hexamethyldisilizane, and gold-sputter coated for scanning electron microscopy (SEM) using a JEOL LV3500 microscope at an accelerating voltage of 15 kV and a working distance of 13 mm. Images of five non-overlapping fields of view were captured using the SemAfore 4.0 programme (JEOL/Skandinaviska, Sweden) at a magnification of 150x (image area approximately 800 x 600 µm). The individual cells were counted in each image and the average number of cells was determined.

To test proliferation of the cells, the HAP samples were sterilised in 70% ethanol for 15 min and then dried in air for 10 min. Test samples in 24-well plates were covered with 1 ml of a suspension of SAOS-2 cells, prepared as above, at a density of 5 x 10^4 cells/ml. The cells were allowed to proliferate for 7 days. The culture medium was then removed and the samples were washed in PBS, fixed and prepared for SEM as above. As the cells were not confluent and individual cells could be identified, the cells were counted in 5 separate images at 150x magnification, as described above, and the average cell number was estimated. In both microbiological tests the observed specimens were statistically selected on the basis of the Smirnov criterion (significance level 0.05). After the microbiological experiments the implant models were tested in animals.
Different physical-chemical properties of calcium phosphate (CP) materials (degree of crystallinity and porosity, solubility, surface roughness, etc.) give them diverse abilities to promote osteogenesis. A key combination of their structure, thickness and dissolution rate for realizing the osteogenic potential of the mesenchymal stromal cell pool (MSCP) has not been found so far. Calcium phosphate (CP) coatings were deposited on titanium discs by the anodic spark technology using HAP particles (0.03-70 μm). The roughness of all coatings was characterized by a micro relief of 5.13-24.16 μm. The prepared specimens underwent the hydrogenation procedure described above. In situ studies were implemented by the method of ectopic bone formation: treated samples with bone marrow were implanted in mice for 45 days without additional injection of growth factors.

To demonstrate the possibility of engineering the potential through the surface roughness, further specimens were coated with CP by different technologies to reach dissimilar surface microirregularities of 0.7-14.4 µm. The biological experiment with mice was then repeated following the above scheme, and the probability of bone formation was estimated. The surface of the specimens was explored by atomic force microscopy: the morphology and the voltage (Kelvin probe) distribution were measured. For this, at least 3 areas of each specimen were scanned with 3 scans each, and these data were used to calculate average values. In parallel, the values of ϕ were detected as above.
III. RESULTS AND DISCUSSION

The hydrogenation of the HAP ceramic specimens increased their ϕ. This means that the electrical charge became more negative in contrast with the native specimens. The number of attached cells correlated with the increment of ϕ (Fig. 1); the correlation coefficient was 0.67, which is significant at the 0.05 level. The number of attached cells increased 10 times when the increment of ϕ reached +0.18 eV. This result indicates that the osteoblast cells attached more successfully to the surface when the electrical charge was shifted to more negative values. The shift of the charge to negative values also enhanced proliferation of the cells: proliferation increased by a factor of 1.6 when ϕ increased by +0.1 eV. These results demonstrate that deposition of negative charge on the surface enhances both the attachment and the proliferation capacity of the osteoblastic cells.
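The significance check reported here (r = 0.67 at the 0.05 level) can be reproduced with a standard Pearson test; the value pairs below are hypothetical stand-ins for the measured data:

```python
import numpy as np
from scipy import stats

# Hypothetical (increment of phi in eV, attached cells per field) pairs
dphi  = np.array([0.00, 0.02, 0.05, 0.08, 0.10, 0.13, 0.15, 0.18])
cells = np.array([12, 15, 30, 45, 60, 75, 95, 120])

r, p = stats.pearsonr(dphi, cells)
print(f"r = {r:.2f}, p = {p:.4f}")  # the correlation is accepted if p < 0.05
```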
Fig. 1 Correlation of the number of attached cells with ϕ

Results of the experiments with the animals showed that hydrogenation of the CP surfaces increased their ϕ by ~0.1-1 eV. This had an effect on the directions of differentiation of the mesenchymal stromal cell pool (MSCP): for example, connective tissue growth was improved, and the probability of subsequent ossification with growth of membrane reticulated bone increased by ≥20%. The surface microscopy measurements showed that the values of the surface voltage correlated with ϕ (Fig. 2). Together with the above results, this suggested that correlations might be found between the roughness of the surface, the voltage and bone tissue development.

Fig. 2 Correlation of the surface voltage with ϕ

The best relation was found between the probability of bone tissue fabrication and the standard deviation of the surface irregularity (SDSI) at the nanoscale (Fig. 3). At first sight this result could cast doubt on the sought influence of the voltage. However, the standard deviation of the surface voltage over the surface (SDV) correlated with SDSI (Fig. 4). On the other hand, the voltage distribution should be characterized by its screening length (l). To estimate it, the autocorrelation function K(V, x) was calculated for the voltage scans, using the measured voltage data as a function of the scan coordinate (x); the value of l was taken as the value of x at which K(V, x) reaches zero. If the voltage influences bone fabrication, a correlation should arise between l (which is in fact the correlation length) and the probability of producing tissue. Fig. 5 demonstrates such a correlation. Because the correlation length of the surface geometrical irregularity did not show any correlation with tissue development, one may assume that the voltage has the major influence on bone tissue growth. The value of l related to the voltage correlates with SDSI (Fig. 6). This means that l can be engineered through SDSI, which in turn can be achieved by suitable technological processing.
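A minimal sketch of this screening-length estimate, assuming a one-dimensional Kelvin-probe voltage scan; the synthetic scan and the spatial step are stand-ins for the measured data:

```python
import numpy as np

def screening_length(voltage, dx):
    """Estimate the screening (correlation) length l of a voltage scan as
    the first zero crossing of its autocorrelation K(V, x).

    voltage : 1D array of surface potential samples along one scan line
    dx      : spatial step between samples (same length unit as l)
    """
    v = voltage - voltage.mean()                       # K then decays to 0
    k = np.correlate(v, v, mode="full")[v.size - 1:]   # lags 0..N-1
    k /= k[0]                                          # normalize K(0) = 1
    crossings = np.where(k <= 0)[0]                    # first zero crossing
    return crossings[0] * dx if crossings.size else None

# Synthetic stand-in for a Kelvin-probe scan: smoothed white noise
rng = np.random.default_rng(0)
scan = np.convolve(rng.normal(size=2048), np.ones(25) / 25, mode="same")
print(screening_length(scan, dx=4.0))                  # l in units of dx
```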
Fig. 3 Probability of bone tissue fabrication as a function of the standard deviation of the surface irregularity

IV. CONCLUSIONS

1. Deposition of negative charge on the HAP surface enhances the attachment and proliferation capacity of osteoblastic cells as well as the fabrication of bone tissue.
2. The electrical potential of the surface at the nanoscale influences bone formation; the influence of the potential is characterized by its correlation length.
3. The correlation length of the electrical potential of the surface can be set through the standard deviation of its geometrical irregularity at the nanoscale.
ACKNOWLEDGMENT

The authors are deeply indebted to:
the EC project PERCERAMICS NMP3-CT-2003-504937 for delivering the HAP specimens;
the OlainFarm company (Latvia) for providing the autoclave for hydrogenation.
Fig. 4 Correlation of SDV with SDSI

Fig. 5 Correlation of the probability of bone tissue fabrication with l of the voltage

REFERENCES

1. Derjaguin BV, Landau LD (1941) Theory of the stability of strongly charged lyophobic sols and of the adhesion of strongly charged particles in solutions of electrolytes. Acta Physicochimica URSS 14:633-662
2. Ratner B, Hoffman A, Schoen F et al (1996) Biomaterials science. London
3. Dekhtyar Yu, Polyaka N, Sammons R (2008) Electrically charged hydroxyapatite enhances immobilization and proliferation of osteoblasts. IFMBE Proc 20:357-360
4. Ito S, Shinomiya K, Nakamura S, Kobayashi T, Nakamura M, Yamashita K (2006) Effect of electrical polarization of hydroxyapatite ceramics on new bone formation. Journal of the Japanese BioElectrical Research Society 20:23-27
5. Aronov D, Molotskii M, Rosenman G (2007) Charge-induced wettability modification. Appl Phys Lett 90:104104
6. Bystrov V, Bystrova N, Dekhtyar Yu, Filippov S, Karlov A, Katashev A, Meissner C, Paramonova E, Patmalnieks A, Polyaka N, Sapronova A (2006) Size dependent electrical properties of hydroxyapatite nanoparticles. IFMBE Proc 14:3149-3150
7. Dekhtyar Yu, Līvdane A, Poļaka N, Timofeeva A (2006) The method to process insulator layer of the bio prostheses. Patent of the Republic of Latvia LV13522
8. Geguzin YaE (1979) Diffusion zone. Moscow (in Russian)
9. Akmene RJ, Balodis AJ, Dekhtyar YuD, Markelova GN, Matvejevs JV, Rozenfelds LB, Sagalovičs GL, Smirnovs JS, Tolkačovs AA, Upmiņš AI (1993) Exoelectron emission spectrometer complete set for surface local investigation. Poverhnost, Fizika, Himija, Mehanika 8:125-128
10. Lumbikanonda N, Sammons RL (2001) Bone cell attachment to dental implants of different surface characteristics. Int J Oral Maxillofac Implants 16:627-636
Fig. 6 Correlation of l with SDSI

The address of the corresponding author:

Author: Yu. Dekhtyar
Institute: Riga Technical University
Street: 1 Kalku
City: Riga
Country: Latvia
Email: [email protected]
Nano-Sized Drug Carrier for Cancer Therapy: Dose-Toxicity Relationship of PEG-PCL-PEG Polymeric Micelle on ICR Mice

J.L. Jiang1, N.V. Cuong1, S.C. Jwo2 and M.F. Hsieh1

1 Department of Biomedical Engineering, Chung Yuan Christian University, 200, Chung Pei Rd., Chung Li, Taiwan 32023, ROC
2 Division of General Surgery, Chang Gung Memorial Hospital, Keelung, Taiwan 20401, ROC
Abstract— Amphiphilic polymeric nanoparticles have garnered attention in the pharmaceutical and medical fields because of the benefits they offer as carriers of anti-cancer drugs: sustained release, extended circulation and reduced uptake by the reticuloendothelial system in the human body. In the present study we aimed to evaluate the biocompatibility of a biodegradable triblock copolymer (PEG-PCL-PEG) used as a carrier of anti-cancer drugs. The synthesized PEG-PCL-PEG formed nano-sized micellar nanoparticles spontaneously in aqueous solution, with particle sizes in the range of 40.2-85.3 nm. An in-vitro hemolysis test indicated that 2.0 mg/mL of micelles without drug loading caused no damage to red blood cells. An in-vivo study on ICR mice displayed a minor pathological response in the liver and kidney of mice administered 71.43 mg/kg of nanoparticles intravenously. When the dose of nanoparticles was increased to 91.95 mg/kg, both renal and liver toxicity were observed. This finding guides the further design of in-vivo studies loading an anti-cancer drug, such as doxorubicin, in the PEG-PCL-PEG carrier within a safe administration dose.

Keywords— Amphiphilic polymer, Nanoparticles, Hemolysis, In-vivo toxicity, Poly(ε-caprolactone), Polyethylene glycol.

I. INTRODUCTION
Since the discovery of the enhanced permeability and retention effect in 1986 [1], a great variety of nano-sized dosage forms have been developed. Anti-cancer drug-loaded nanoparticles, classified as one kind of nano-sized dosage form, are functional materials that can undergo surface modification for targeting cancer cells at the cell membrane, the cytoplasm or the nucleus. Furthermore, the encapsulation of anti-cancer drugs in nanoparticles can prevent multi-drug resistance and prolong the circulation half-life of drugs in the bloodstream. Amphiphilic copolymers, a kind of polymeric surfactant, can self-assemble into spherical nano-sized micelles. A copolymeric micelle has two regions: the hydrophobic core acts as a drug reservoir, while the hydrophilic corona provides a protective interface between the core and the aqueous environment. The use of block copolymer micelles for the targeted delivery of drugs has proven to be a promising approach for improving the therapeutic efficacy of drugs, prolonging the circulation time and reducing side-effects. A water-insoluble drug, such as doxorubicin, is encapsulated into the core of a polymeric micelle by chemical conjugation or physical entrapment.
The development of a nano-sized dosage form for an anti-cancer drug proceeds in several stages, including in-vitro and in-vivo experiments. Aside from the research and development of the active pharmaceutical agents, the toxicity of the drug carrier, also named the excipient, is considered a key factor towards an efficacious dosage form. For example, toxicity elicited by excipients may lead to complex therapeutic outcomes along with the administration of drugs in the dosage form. Therefore, the in-vivo toxicity of the excipient is a critical prerequisite for the development of nano-sized dosage forms of anti-cancer drugs. Poly(ε-caprolactone) (PCL) is a biodegradable, biocompatible polyester. Moreover, PCL is a highly hydrophobic polyester whose degradation time can be tailored to fulfill targeted therapeutic applications [2, 3]. Ring-opening polymerization of ε-caprolactone (CL) with monomethoxy polyethylene glycol (PEG) as initiator and stannous octoate as catalyst has been intensively investigated [4, 5]. Herein we report the evaluation of the toxicity of micellar PEG-PCL-PEG nanoparticles: a hemolysis test probed the damage to red blood cells by the nanoparticles, and, in an experiment on the inflammatory reactions inducible by the nanoparticles, ICR mice were used and histological sections were taken from the liver and kidney of the mice.

II. MATERIALS AND METHODS
A. Materials

Monomethoxy poly(ethylene glycol) (PEG, Mn = 5000), ε-caprolactone, 1,6-diphenyl-1,3,5-hexatriene (DPH) and dimethylsulfoxide (DMSO) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Stannous 2-ethylhexanoate (stannous octoate, SnOct) was obtained from MP Biomedicals, Inc. The mPEG was purified by re-crystallization from the dichloromethane/diethyl ether system, and ε-caprolactone was dried using CaH2 and distilled under reduced pressure. Triethylamine (TEA), 1,4-dioxane, dichloromethane (DCM), diethyl ether, tetrahydrofuran (THF) and all other chemicals were reagent-grade and were used without further purification.

B. Preparation of PEG-PCL-PEG nanoparticles

The PEG-PCL diblock copolymer, reported elsewhere [6], was synthesized by ring-opening polymerization of CL in the presence of PEG as a macroinitiator with SnOct as
catalyst at 130 °C for 24 h. The final polymer, PEG-PCL-PEG, was synthesized by coupling two PEG-PCL polymers using carbodiimide chemistry. The micelle solution of PEG-PCL-PEG was prepared by dissolving 20 mg of the triblock PEG-PCL-PEG copolymer in 2.0 mL of THF, after which 1.0 mL of double distilled water was added under stirring. The resulting solution was kept at room temperature for 12 h, then transferred to a dialysis bag and dialyzed against double distilled water for 24 h (MWCO: 50,000 Da, Spectrum Laboratories, USA). The particle size and polydispersity were determined by dynamic light scattering (DLS) at 25 °C using a Zetasizer 3000HSA (Malvern Instruments Ltd, UK) with 633 nm illumination at a fixed angle of 90°. The micellar solutions were diluted to a final concentration of 0.2 mg/mL and then filtered through a Millipore 0.2 µm filter prior to measurement. The average diameter was calculated by the CONTIN analytical method.

C. The measurement of the critical association concentration (CAC) of PEG-PCL-PEG micelles

The critical association concentration of the copolymers was determined by UV-VIS spectroscopy using DPH as the fluorescent probe. Samples for UV-VIS measurement were prepared according to the literature [7]; the concentration of the aqueous copolymer solutions ranged from 0.001 wt% to 1 wt% and the concentration of DPH was 4×10^-6 M. Note that the CAC is also referred to as the critical micelle concentration (CMC) by many research groups in this field.

D. Hemolysis test of PEG-PCL-PEG nanoparticles

The experimental procedure is based on the colorimetric detection of Drabkin's solution [8]. 0.7 mL of micellar solution at various concentrations (0.1 mg/mL to 2.0 mg/mL) was incubated with 0.1 mL of rabbit blood at 37 °C for 3 h. Following incubation, the solution was centrifuged at 3800 rpm for 15 min. To determine the supernatant hemoglobin, 0.5 mL of Drabkin's solution was added to 0.5 mL of supernatant and the sample was allowed to stand for 15 min. The amount of cyanmethemoglobin in the supernatant was measured with a spectrophotometer at a wavelength of 540 nm and then compared to a standard curve (hemoglobin concentrations ranging from 0.003 to 1.2 mg/mL). The percent hemolysis was obtained by referring the hemoglobin concentration in the supernatant of each sample to that of an untreated blood sample, yielding the percentage of particle-induced hemolysis. Phosphate buffered solution (PBS) and double distilled water were used as negative and positive controls, respectively.
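The protocol above refers sample readings to a hemoglobin standard curve; a common shorthand, sketched here as an assumption rather than the exact calculation used, normalizes the 540 nm absorbance of each sample by the positive (distilled water) and negative (PBS) controls:

```python
def percent_hemolysis(abs_sample, abs_negative, abs_positive):
    # Normalize the 540 nm reading by the PBS (negative) and distilled
    # water (positive) controls; all values are supernatant absorbances
    return (abs_sample - abs_negative) / (abs_positive - abs_negative) * 100

# Hypothetical absorbance readings
print(percent_hemolysis(abs_sample=0.045, abs_negative=0.020,
                        abs_positive=1.150))  # about 2.2 %
```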
E. In-vivo test: the toxicity of micellar PEG-PCL-PEG nanoparticles

This test was conducted in accordance with the guidelines of the Animal Care and Use Committee of Chung Yuan Christian University. Nine female ICR mice with an average body weight of 27 g were housed in the animal facility, where room temperature and humidity were maintained at 25 °C and RH 60%. The mice were anesthetized with Rompun® solution and the micellar solutions were then injected intravenously at two doses, 71.43 mg/kg and 91.95 mg/kg; PBS was used as placebo. For the multiple dosing tests, 71.43 mg/kg of micellar solution was injected on days 0, 3 and 6 of the test. The body weight of the mice was then monitored for a test period of 14 days post injection. To evaluate the toxicity to the liver and kidney, where most drugs are metabolized and excreted, tissue sections with a thickness of 5 μm were taken after the mice were sacrificed. Tissue specimens were fixed in 10% formalin and embedded in paraffin. Afterward, the sections were stained with hematoxylin and eosin for microscopic examination. To quantitate the pathological tissue induced by the injection of the micellar solutions, the whole kidney and liver sections were divided into 950 and 1400 square grids, respectively, and observed under an upright microscope (Eclipse 50i, Nikon, Japan); grids containing abnormal fibrotic tissue or loosened morphology were counted as adverse reactions due to the injection of micellar nanoparticles.

III. RESULTS AND DISCUSSION
A. Basic properties of the PEG-PCL-PEG triblock copolymers and the micellar nanoparticles

The molecular structure of the triblock copolymer used in this study is displayed in Fig. 1. The polymers were prepared with various feeding ratios of CL/PEG so as to adjust the block length of PCL; in the present work, the molecular weight (Mw) of the PCL block was designed to be 3.6k, 8k and 22k Da, respectively. The properties of the synthesized PEG-PCL-PEG are listed in Table 1. Since the molecular weight of PEG was fixed, the measured Mw reflects that the length of PCL increased from 4.5k Da to 25k Da. With this increased hydrophobicity (PCL), the particle size increased with the length of the PCL segment from 40.2 nm to 85.3 nm, while the CAC of the micellar solutions decreased from 5.6 x 10^-3 mg/mL to 5.1 x 10^-4 mg/mL. From the viewpoint of practical clinical application, the lowest possible CAC for micellar nanoparticles is desirable, since dilution may disintegrate the structure of the micelles, leading to burst release of the drug loaded in the core when the micellar solution is injected into the blood pool. Therefore, we chose the polymer with an Mw of 25k Da (EC220E) for the subsequent experiments.
Fig. 1 The molecular structure of PEG-PCL-PEG synthesized by coupling two PEG-PCL blocks using carbodiimide chemistry

Table 1 The molecular weights of PEG-PCL-PEG measured by gel permeation chromatography and the properties of the nanoparticles

Sample    Mw (g/mole)    Micellar size (nm)    CMC (mg/mL)
EC36E     14,500         40 ± 1                5.6×10^-3
EC80E     19,000         56 ± 3                1.2×10^-3
EC220E    35,000         85 ± 2                5.1×10^-4
B. The hemolytic index of micellar nanoparticles

Figure 2 shows the hemolytic index of EC220E micelle solutions at concentrations from 0.1 mg/mL to 2.0 mg/mL; these concentrations are several orders of magnitude higher than the CMC of EC220E. According to the ASTM F-756 protocol (assessment of hemolytic properties of materials), a material with an index of 2% or less is non-hemolytic. In this study, the micellar solution with a concentration less than 0.1 mg/mL was non-hemolytic. However, a dosage form of this kind of polymer prepared in this concentration range may carry a low drug payload. Hence, we compromised between payload and hemolytic index for the subsequent in-vivo toxicity test.

Fig. 2 Hemolytic indices of micellar PEG-PCL-PEG nanoparticles in the concentration range of 0.1 mg/mL to 2.0 mg/mL. The positive and negative controls were distilled water and phosphate buffered solution, respectively

C. In-vivo toxicity test of micellar nanoparticles

The body weights of ICR mice receiving single (71.43 mg/kg, 91.95 mg/kg) or multiple injections (3 x 71.43 mg/kg) of the nanoparticles increased over the 14-day period and showed no difference from the control, in which mice were injected with the same volume of PBS. Meanwhile, no mouse died during the test period, so the prepared PEG-PCL-PEG nanoparticles can be classified as non-lethal to ICR mice at the tested doses. Figure 3 displays the histological sections of liver and kidney of ICR mice after single and multiple injections of micellar PEG-PCL-PEG nanoparticles. Red arrows indicate Kupffer cells and mesangial cells in the tissue sections; these stand for the activation of an immune response to a foreign body. We found that injection at a dose of 71.43 mg/kg resulted in no obvious immune reaction in liver or kidney, irrespective of single or multiple dosing. However, when the dose increased to 91.95 mg/kg, we found intense immune cell infiltration and a loosened structure of liver and kidney, indicating that a maximum dose had been reached. In addition to this qualitative observation, quantitative analysis of the whole organ sections indicated that 91.95 mg/kg lies in the upper dose region for the micellar PEG-PCL-PEG nanoparticles: of a total of 1400 grids in the whole liver section, 35.15% were marked as inflammation or fibrotic tissue, and 18.21% of the grids in the whole kidney section were classified as abnormal. From this in-vivo experiment, we succeeded in identifying a safe dose of micellar PEG-PCL-PEG for intravenous injection.

IV. CONCLUSIONS

The present study has developed a dosage form of micellar nanoparticles composed of a self-assembled PEG-PCL-PEG solution. This study tested the blank solution, i.e. no drug was loaded into the nanoparticles. A safe in-vitro concentration range of the dosage form with respect to red blood cells was found to be 0.1-0.5 mg/mL. A higher dose of 2 mg/mL, corresponding to 71.43 mg/kg body weight, was still found safe in the in-vivo toxicity test on ICR mice, while a dose elevated to 91.95 mg/kg produced a toxic response. Therefore, we conclude that 71.43 mg/kg is a safe dose for intravenous injection of micellar PEG-PCL-PEG nanoparticles in ICR mice.

ACKNOWLEDGMENT

The authors would like to thank the National Science Council of the Republic of China for financial support under grant number NSC98-2221-E-033-072. The gratitude is extended to the Laboratory Animal Center of National Taiwan University Hospital for providing the ICR mice.
REFERENCES

1. Matsumura Y, Maeda H (1986) A new concept for macromolecular therapeutics in cancer chemotherapy: mechanism of tumoritropic accumulation of proteins and the antitumor agent smancs. Cancer Res:6387-6392
2. Panyam J, Labhasetwar V (2003) Biodegradable nanoparticles for drug and gene delivery to cells and tissue. Adv Drug Deliv Rev:329-347
3. Sinha VR, Bansal K, Kaushik R et al (2004) Poly-epsilon-caprolactone microspheres and nanospheres: an overview. Int J Pharm:1-23
4. Hsieh MF, Lin TY, Gau RJ et al (2005) Biodegradable polymeric nanoparticles bearing stealth PEG shell and lipophilic polyester core. J Chin Inst Chem Engrs:609-615
5. Du ZX, Xu JT, Yang Y et al (2007) Synthesis and characterization of poly(epsilon-caprolactone)-b-poly(ethylene glycol) block copolymers prepared by a salicylaldimine-aluminum complex. J Appl Polym Sci:771-776
6. Hsieh MF, Cuong NV, Chen CH et al (2008) Nano-sized micelles of block copolymers of methoxy poly(ethylene glycol)-poly(epsilon-caprolactone)-graft-2-hydroxyethyl cellulose for doxorubicin delivery. J Nanosci Nanotechnol:2362-2368
7. Hwang MJ, Suh JM, Bae YH et al (2005) Caprolactonic poloxamer analog: PEG-PCL-PEG. Biomacromolecules:885-890
8. Beutler E (1971) Red cell metabolism: a manual of biochemical methods. Grune and Stratton, New York

Author: Ming-Fa Hsieh
Institute: Department of Biomedical Engineering, Chung Yuan Christian University
Street: No. 200, Chung Pei Rd.
City: Chung Li
Country: Taiwan, R.O.C.
Email: [email protected]
Fig. 3 (continued)

Fig. 3 H&E staining of liver sections (a, c) of ICR mice receiving a single injection of PBS and micellar solution (71.43 mg/kg), respectively. Kidney sections are displayed for (b) ICR mice with PBS injection and (d) ICR mice with a 71.43 mg/kg injection of micellar solution. Scale bar = 100 μm
“Internet of Things”, an RFID – IPv6 Scenario in a Healthcare Environment

H. Tsirbas, K. Giokas, and D. Koutsouris

Biomedical Engineering Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece

Abstract— In recent years, increasing costs in health care have directed efforts towards more effective resource management in hospitals. The “Internet of Things” is a very important step towards an increasingly networked hospital inventory, and the use of RFID and IPv6 allows for a low-cost, low-maintenance solution. This combination also helps to develop a transparent information infrastructure that serves the health professional and the patient alike. Apart from the obvious advantages of IPv6, such as the larger address space (128-bit addresses), better header format, provision for extension, resource allocation support (Flow Label) and security features, and of RFID, such as the lack of a line-of-sight requirement, long read range, real-time tracking, multiple tag read/write and database portability, the combination of the two together with our proposed Virtual MAC Address Generator can provide a unique, low-cost, low-power tracking tool in a multi-sensor hospital environment.

Keywords— Internet of things, healthcare, RFID, IPv6.
I. INTRODUCTION

In recent years, almost all countries of the world have been increasing the resources they allocate to health care. In developed countries the market is based primarily on the middle-aged and older population groups. This trend has led to a greater demand for health services and greater competition among healthcare providers [1]. The effective management of a hospital is a key target for developed countries; to achieve it, they need to understand the cost structure of hospitals and the inefficient use of resources. This understanding helps to shape health care policies and budgetary decisions, so that the cost of hospitalization can be controlled more effectively and health care becomes more accessible to all. Although healthcare resources have increased in recent years, a concentrated effort by information technology is still needed to reduce the cost of care in the western world. To achieve reduced costs, the use of intelligent mobile systems will play an important role: intelligent systems will give hospital staff the ability to process personalized information coming from patients. New diseases are appearing and the need for care increases, but the cost, quality and delivery of health care are not improving; the management of health care lags far behind other industries.
In the last two decades the average life expectancy has risen, leading to increased concern, since healthcare needs to be geared towards older people. Mobile sensor technology can provide information in real time, including vital signs and other physiological indicators of the health and fitness of an individual. Such systems could find wide use in hospitals, in monitoring systems at home and in the office, and in health research studies [7]. The Internet is a technology relevant to health care because it can provide telemedicine services to remote areas. The problem of using the Internet in health services is mainly how to integrate its capabilities and interconnectivity into a growing number of very small mobile sensors with lower costs and better energy savings. The application of these principles can be facilitated by the “Internet of Things”.

The “Internet of Things” is a network of billions or trillions of machines communicating with one another, and it is a dominant theme in the evolution of information and communications over the next few decades. The Internet of Things cannot be reduced to one specific technology or application: Radio Frequency Identification (RFID), Near Field Communication (NFC), ZigBee and Bluetooth are among the key technologies currently in its sphere. Its simplest form is already here: there are 1.3 billion radio-frequency identification (RFID) tags and two billion mobile service users worldwide [2]. RFID allows each object to have its own unique identifier (rather than one identity number per product type), which can be read at a distance. This allows automatic, real-time identification and tracking of individual objects. Wireless sensor technologies allow objects to provide information about their environment and context. Smart technologies such as robotics and wearable computing will enable everyday objects to think and communicate. Nanotechnology and energy-scavenging technologies are packing more processing power into less space, so that networked computing can be woven into the fabric of the things around us. These technologies will progressively create an almost invisible infrastructure, with far-reaching capabilities organized into global systems that serve society.

In this paper, “Internet of Things”, an RFID – IPv6 scenario in a healthcare environment, we propose an
RFID - IPv6 networking configuration for use in a healthcare scenario. A network node, the Virtual MAC Address Generator, receives an RFID tag ID of variable length and generates an IP address; the RFID tag ID and the IP address are then mapped to each other and stored in the Virtual MAC Address Generator. By using this networking configuration in healthcare, any RFID-tagged thing or sensor becomes accessible from anywhere and from any service through an IP network.
II. RELATED WORKS

A. RFID

In the ubiquitous network environment, RFID is an automatic identification method which uses radio waves to automatically identify tagged items. It is similar to a bar code but uses a microchip and radio waves. An RFID tag is made up of an RFID chip attached to an antenna. Tags can be read from several meters away and beyond the line of sight of the reader. The most common method of identification is to store a serial number that identifies a person or an object, but an RFID tag can also store some additional information depending on the size of its memory. A tag is attached physically to the object to be identified. The tag is an electrical device designed to receive a specific signal and automatically transmit a specific reply. Tags can be passive, semi-passive or active, based on their power source and the way they are used, and can be read-only, read/write or read/write/re-write, depending on how their data is encoded. Passive RFID tags take their energy from the electromagnetic field emitted by readers. Tags use transmitting frequencies in the kilohertz, megahertz and gigahertz ranges [6].

Table 1 Tags and features

Feature                                 Passive Tag            Active Tag           Semi-Passive Tag
Internal power source                   No                     Yes                  Yes
Signal by backscattering the
reader's carrier wave                   Yes                    No                   Yes
Response                                Weaker                 Stronger             Stronger
Size                                    Small                  Big                  Medium
Cost                                    Less expensive         More expensive       Less expensive
Range                                   10 cm to a few meters  Hundreds of meters   Hundreds of meters
Sensors                                 No                     Yes                  Yes
An RFID reader or interrogator is a hardware device that is used to read the transmitted data from the tag. For reading passive RFID tags, the reader has to supply extra power in
the form of electromagnetic waves. The reader hardware consists of a bipolar antenna and a microprocessor chip. The bipolar antenna is used to transmit the signals and power to the tag; it is also sensitive enough to read the reflected signals from the tag. The microprocessor controls and runs all the reader-related processes. The reader can be a dedicated handheld device or embedded in a mobile device. An RFID tag is represented through an EPC (Electronic Product Code), currently managed by EPCglobal. The EPC comes in various lengths (64, 96 and 256 bits) in order to identify a product. The RFID tag ID cannot perform IP networking by itself, because it is only an object/person identifier.

B. IPv6

Internet Protocol version 6 (IPv6) is the next-generation Internet Protocol version, designated as the successor to IPv4. IPv6 increases the size of the IP address from 32 to 128 bits. IPv6 defines both a stateful and a stateless address autoconfiguration mechanism. Stateless autoconfiguration requires no manual configuration of hosts, minimal (if any) configuration of routers, and no additional servers. The stateless mechanism allows a host to generate its own addresses using a combination of locally available information and information advertised by routers. Routers advertise prefixes that identify the subnet(s) associated with a link, while hosts generate an "interface identifier" that uniquely identifies an interface on a subnet; an address is formed by combining the two. In the absence of routers, a host can only generate link-local addresses; however, link-local addresses are sufficient for communication among nodes attached to the same link [5]. In the stateful autoconfiguration model, hosts obtain interface addresses and/or configuration information and parameters from a server. Servers maintain a database that keeps track of which addresses have been assigned to which hosts. The stateful autoconfiguration protocol allows hosts to obtain addresses, other configuration information, or both from a server. Stateless and stateful autoconfiguration complement each other: for example, a host can use stateless autoconfiguration to configure its own addresses, but use stateful autoconfiguration to obtain other information [5].

C. DHCPv6

The DHCP (Dynamic Host Configuration Protocol) was proposed for establishing static or dynamic addresses. DHCP provides configuration parameters to Internet hosts. The protocol significantly reduces the system administration workload, as network devices can be added to the network with little or no change in their configuration.
DHCP also allows network parameter assignment from a single DHCP server or a group of such servers located across the network. Dynamic host configuration is made possible by the automatic assignment of IP addresses, default gateways, subnet masks and other IP parameters. On connecting to a network, a DHCP-configured node sends a broadcast query to the DHCP server requesting the necessary information. Upon receipt of a valid request, the DHCP server assigns an IP address from its pool of IP addresses together with other TCP/IP configuration parameters such as the default gateway and subnet mask [4]. DHCP allocates IP addresses to network devices in three different modes: dynamic, manual and static. In the static mode, the DHCP server statically binds the IP address to the physical address and maintains a managed static database; in the case of static allocation, the DHCP server assigns a persistent IP address to a client. In the dynamic mode, the DHCP server first checks whether the physical address of the client is in the static database. If the requested physical address exists, the permanent IP address is assigned; if not, an IP address is assigned to the client for a limited time. The address assigned from the IP address pool is temporary: the DHCP server leases the IP address for the duration of the connection, and when the lease expires the client releases the IP address or has to be assigned a new one. In the manual mode, the client's IP address is allocated by the network operator, and DHCP is used only to deliver the assigned address to the client.

III. RFID-IPV6 NETWORK CONFIGURATION

A. Network Topology

The network topology shown in Fig. 1 consists of the Main Network, the DHCP server, the Virtual MAC
Address Generator, the RFID Reader and the RFID Tag. The role of each one is explained below [3]. The role of the main network is to deliver the packets to each destination through routers. The Internet element represents all external users outside of main network. The DHCP server dynamically assigns the IP addresses. The RFID Reader reads the RFID Tag or writes data in the RFID Tag. Also the Virtual Mac Address Generator is a server where the RFID Tag ID is delivered and creates the virtual MAC address. The generator creates the virtual physical address of forty-eight bits. The RFID Tag, the virtual address and the assigned IP at the DHCP server are mapped and stored in the generator. B. Procedures Figure 2 depicts the IP generation sequence diagram which presents how an IP is generated from the RFID tag ID.
III. RFID-IPV6 NETWORK CONFIGURATION A. Network Topology The network topology shown in the above figure consists of the Main Network, the DHCP server, the Virtual Mac
Fig. 1 Network diagram

Fig. 2 IP address generation sequence diagram
The RFID tag is read by an RFID reader. The RFID reader transmits the tag ID to the Virtual MAC Address Generator. The VMAG generates a virtual MAC address using the tag ID and transmits it to the DHCP server. The DHCP server assigns an IP address to the virtual MAC address and sends it back to the generator. Finally, the Virtual MAC Address Generator binds the IP address, the virtual MAC address and the tag ID and stores them.
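This binding procedure can be sketched compactly. The paper does not specify how the forty-eight-bit virtual MAC is derived from the tag ID, so the truncated hash below is purely an assumption for illustration; the dhcp_server argument can be any allocator with a request(mac) method, such as the toy server sketched in the previous section.

```python
import hashlib

def virtual_mac_from_tag(tag_id: str) -> str:
    """Derive a 48-bit virtual MAC address from an EPC tag ID.

    The derivation function is not specified in the paper; a truncated
    hash is used here only as a plausible stand-in.
    """
    digest = hashlib.sha256(tag_id.encode()).digest()[:6]  # 6 bytes = 48 bits
    return ":".join("%02x" % b for b in digest)

class VirtualMacAddressGenerator:
    """Maps tag ID -> virtual MAC -> IP and stores the binding."""

    def __init__(self, dhcp_server):
        self.dhcp = dhcp_server
        self.bindings = {}  # tag ID -> (virtual MAC, assigned IP)

    def register(self, tag_id):
        vmac = virtual_mac_from_tag(tag_id)
        ip = self.dhcp.request(vmac)        # DHCP assigns an IP to the virtual MAC
        self.bindings[tag_id] = (vmac, ip)  # the three identifiers are bound and stored
        return ip

    def lookup_tag_by_ip(self, ip):
        # Used when a packet arrives: destination IP -> mapped RFID tag ID.
        for tag_id, (vmac, bound_ip) in self.bindings.items():
            if bound_ip == ip:
                return tag_id
        return None
```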
IV. RFID-IPV6 NETWORKING SCENARIO IN HEALTHCARE

In the proposed network topology the generated IP address is mapped to the RFID tag ID and the virtual MAC address in the Virtual MAC Address Generator, which has its own unique IP address. Figure 3 shows the RFID-IPv6 networking scenario. This networking scenario is designed to meet the requirements of a basic healthcare network. Figure 3 comprises the main network and the Internet. To the main network, the routers, the DHCP Server, the Virtual MAC Address Generator, the RFID Reader and the local end-users (Health Professionals) are connected. All external end-users are connected through the Internet. In case a local Health Professional wants to read or write an RFID tag, he/she transmits a packet addressed to the IP address of the RFID tag ID. This packet is encapsulated with the IP address of the Virtual MAC Address Generator and transmitted. A packet transmitted through the main network arrives at the generator. The generator decapsulates the received packet and searches for the mapped RFID tag ID by using the destination IP address. If the RFID tag ID exists in the storage (VMAG), the packet is delivered to the RFID Reader and the reader transmits the packet to the RFID tag having this ID.
Fig. 3 RFID-IPV6 Networking Scenario diagram

Also, if the tag is active, it can transmit data to the Health Professional. In this case the RFID reader transmits the packet to the Virtual MAC Address Generator. The generator handles the practical processing for the packet transmission, so it transmits a packet to the IP address of the Health Professional using the destination and source addresses of the previous transmission; it encapsulates the packet with the IP address of the generator. The Health Professional finally decapsulates the received packet. In case an external Health Professional wants to read or write an RFID tag through the Internet, he/she follows the same procedure as above. The only difference is that the packets are transmitted from the end user to the RFID tag through the Internet and the main network, and vice versa.
V. CONCLUSION

In this paper we present a networking scenario in a healthcare environment using an overall "Internet of Things" solution. In practice, the combination of RFID and IPv6 characteristics, along with the proposed Virtual MAC Address Generator, constitutes, among other things, an internal address generation tool for the easier and more effective management of nodes (RFID tags and therefore tagged items). Such a solution has high scalability and lowers the cost of inventory maintenance in a hospital.
REFERENCES
1. Correa F. A., Gil M. J., Redín L. B. (2005) Benefits of Connecting RFID and Lean Principles in Health Care. Business Economics Series 10, Madrid
2. Santucci G. (2009) From Internet of Data to Internet of Things. International Conference on Future Trends of the Internet, Luxembourg
3. Yoon D. G., Lee D. H., Seo C. H., Choi S. G. (2008) RFID Networking Mechanism Using Address Management Agent. Fourth International Conference on Networked Computing and Advanced Information Management, Gyeongju
4. Droms R., Ed. (2003) Dynamic Host Configuration Protocol for IPv6 (DHCPv6). IETF RFC 3315, USA
5. Thomson S., Narten T. (1998) IPv6 Stateless Address Autoconfiguration. IETF RFC 2462, USA
6. The EPCglobal Architecture Framework Version 1.2 at http://www.epcglobalinc.org
7. Opportunities for Use in Wearable Devices for Health Monitoring at http://www.sensorsmag.com/
Author: Haris Tsirbas
Institute: Biomedical Engineering Laboratory
Street: Iroon Polytechniou 9
City: Athens
Country: Greece
Email: [email protected]
Development of software tool for quantitative gait assessment in Parkinsonian patients with and without Mild Cognitive Impairment
L. Iuppariello1, R. Tranfaglia1,3, M. Amboni2,3,4, L. Lista3 and M. Sansone1
1 University Federico II of Naples, Department of Biomedical, Electronic and Telecommunication Engineering, Naples, Italy
2 IDC Hermitage, Naples, Italy
3 University of Naples Parthenope, Naples, Italy
4 University of Naples Federico II, Department of Neurological Sciences
Abstract— The construct of Mild Cognitive Impairment (MCI) is considered a transition state between normal aging and dementia. Originally it was conceptualized as the transitional state between normalcy and Alzheimer's disease (AD); more recently, it has been applied to patients with Parkinson's disease (PD) as a pre-dementia state. The relationship between cognition and gait disturbances has received increasing attention and is well established. In order to perform a quantitative assessment of gait in PD patients with or without MCI, an operating software tool has been developed. It takes advantage of the potentialities of the Qualisys Track Manager software and of Matlab, and allows the quantitative evaluation of the kinematic parameters of the human gait.

Keywords— Motion Analysis, Gait Analysis, Mild Cognitive Impairment (MCI), Parkinson's disease (PD)
I. INTRODUCTION
Analysis of human posture and movement is now a fast-growing biomedical field of great interest from the clinical point of view, because posture and movement are the result of the interaction of three major physiological systems: the nervous system, the musculoskeletal system and the sensory system. Human movement, specifically, is the result of a complex process of signal processing carried out under the control of the central nervous system [1]. The objective of this work is the development of software for the multifactorial analysis of neuromuscular and biomechanical parameters. The kinematic analysis of movement offers a fundamental contribution for evaluation, preventive and therapeutic purposes. Thanks to the possibility of evidencing the motor strategies associated with the onset of pathologies of the Central Nervous System and of the Musculoskeletal Apparatus, it allows one to explore the residual motor abilities and to monitor with objective indicators the results of the interventions put into effect, affording an appraisal of cost-benefit relationships and orienting the choice among the various options through criteria of effectiveness and efficiency [2].
II. MATERIALS AND METHODS
A. The software tool

The quantitative analysis of movement is carried out in the Neuro-Mechanics laboratory of the private hospital "Hermitage" in Naples. The opto-electronic system present in the laboratory, designed to capture passive markers, consists of two main subsets: an acquisition structure (television cameras, illuminators, acquisition cards) and an elaboration software structure. Specifically, the lab has installed a set of infrared ProReflex MCU (Motion Capture Unit) cameras, made with CCD technology, spread over an area of 4.5 x 6 m. Each MCU uses a low-noise, high-speed CCD sensor and a built-in microprocessor that allows it to achieve enhanced performance. The cameras support an infrared system that projects the special IR light that is reflected by the markers and captured by the optics of the cameras. Qualisys Track Manager (QTM) is a Windows-based data acquisition software with an interface that allows the user to perform 2D and 3D motion capture. QTM is designed to provide both the advanced features required by technically advanced users and a simple method of application for the inexperienced user. Markers are placed on the subject whose movement is to be acquired at strategic points, usually near the joints. For the aims of our observation, we modified the standard Davis protocol [3] for the positioning of the markers, placing markers at the following anatomical landmarks: anterior superior iliac spine, greater trochanter, thigh, lateral epicondyle of the femur, lateral malleolus, calcaneus, fifth metatarsal head (Figure 1).
Fig. 1 Markers set

The QTM has been used to acquire and reconstruct the trajectories of the markers. Subsequently the data have been exported in txt format for processing in Matlab. A Matlab routine has been developed that includes various algorithms for the calculation of the main kinematic gait parameters [4]. The txt file is a matrix containing the 3D coordinates of the markers placed on the subject, sampled at 240 Hz. The M-files created in MATLAB open the txt file read-only. From the complete matrix, the columns of the z coordinates of the markers located on the calcaneus and on the metatarsal heads are extracted. The Matlab routine calculates the minima of the trajectory functions and their variations. It has been assumed that each minimum of the calcaneus and metatarsal head trajectories represents the principal moment of support on the ground (Fig. 2). The analysis of the variation of the function gives the stance and swing periods.

Measuring the temporal and spatial distances between the minima of the trajectories, it is possible to recover the classic parameters of the human gait: Gait Cycle, Gait Stride, Stance Phase, Swing Phase, Double Support, Single Support, Step Length, Cadence, Velocity.
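The detection step can also be illustrated outside Matlab. The following Python sketch is only an illustration of the idea (the authors' actual routine is the Matlab one described above); the sampling rate matches the text, while the trajectory and the amplitude threshold are synthetic assumptions.

```python
import numpy as np

FS = 240.0  # sampling frequency of the exported marker data (Hz)

def local_minima(z, threshold):
    """Indices where z has a local minimum below the given threshold."""
    idx = [i for i in range(1, len(z) - 1)
           if z[i] < z[i - 1] and z[i] <= z[i + 1] and z[i] < threshold]
    return np.array(idx)

# Synthetic heel-marker height: one minimum per simulated gait cycle.
t = np.arange(0, 10, 1.0 / FS)
z_heel = 60 + 40 * np.abs(np.sin(np.pi * t / 1.1))  # ~1.1 s gait cycle

strikes = local_minima(z_heel, threshold=65)
# Successive minima of the same marker delimit one gait cycle.
cycle_times = np.diff(strikes) / FS
print("mean gait cycle: %.2f s, cadence: %.0f steps/min"
      % (cycle_times.mean(), 2 * 60 / cycle_times.mean()))
```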
B. The clinical study

The developed tool has been used and tested in several neuro-mechanical studies carried out both on healthy and ill people. In particular, it has been of remarkable support to a clinical study titled "Quantitative gait assessment in Parkinsonian patients with and without Mild Cognitive Impairment (MCI)", which is currently underway and conducted in collaboration with a multidisciplinary team consisting of neurologists, bioengineers and experts in movement disorders. To our knowledge, this is the first study aimed at investigating the relationship between specific gait parameter anomalies in PD patients with MCI and the possible subsequent development of dementia. The target of this work is, at first, the evaluation of gait in PD patients with and without MCI in order to identify any differences in the gait parameters; subsequently, after two years, a re-evaluation of the patients will be carried out, with the objective of finding a possible correlation between early specific anomalies of the gait and the development of dementia. To date, fourteen PD patients have been investigated. All patients were neither demented nor depressed. The clinical assessment included clinical data collection, H&Y stage, UPDRS I, II and IV at on state, UPDRS III both at on and off state and an extensive neuropsychological assessment in order to classify the patients according to MCI criteria [5]. Quantitative gait analysis was performed by means of a motion capture system (Qualisys, Sweden) and was applied during the following conditions both at off and on state: 1) normal gait (Gait-off and Gait-on); 2) motor dual task (carrying a tray with two glasses filled with water while walking, Mot-off and Mot-on); 3) cognitive dual task (serially subtracting 7's, starting from 100, while walking, Cog-off and Cog-on). Statistical analysis was carried out by means of the Mann-Whitney test.
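For two small independent groups such as MCI+ and MCI-, this comparison can be run directly, for example as below (the values are invented placeholders, not the study data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical double-support times (s) for the two groups; not the study data.
mci_plus = [0.38, 0.41, 0.36, 0.44, 0.40, 0.39, 0.42]
mci_minus = [0.30, 0.33, 0.29, 0.35, 0.31, 0.32, 0.28]

stat, p = mannwhitneyu(mci_plus, mci_minus, alternative="two-sided")
print("U = %.1f, p = %.3f" % (stat, p))
```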
Fig. 2 Representative graphs of the z coordinates of the markers of the metatarsal head and calcaneus; the minima of the function are circled in red

III. RESULTS
Based on neuropsychological testing, seven subjects were classified as patients with MCI (MCI+) and seven subjects were classified as patients with normal cognition (MCI-). There were no significant differences in age, disease duration, H&Y stage, UPDRS and MMSE scores
between the two groups. The following gait parameters significantly differed between the two groups: 1) double support time was longer in MCI+ vs. MCI- in Gait-off (p=0.007), Gait-on (p=0.026) and Cog-on (p=0.026); 2) velocity was reduced in MCI+ vs. MCI- in Mot-on (p=0.038) and in Cog-off (p=0.011); 3) cadence was reduced in MCI+ vs. MCI- in Cog-on (p=0.038). In comparison with MCI- PD patients, MCI+ patients display specific gait features: slower velocity due to the reduction of cadence, and impairment of dynamic stability as revealed by the longer double support time.
IV. CONCLUSIONS

In this work we wanted to highlight the growing importance of the multifactorial integrated analysis of movement in the biomedical field. As evidence of this, many facilities and hospitals are being equipped with laboratories designed for this type of analysis. Quantitative assessment of motion originates from the need to break free from the limitations of qualitative assessment which, being based on sensory perceptions, is imprecise and observer-dependent. The tool developed in MATLAB has enabled us to reach an objective understanding of the motor pattern of each patient examined. Measurable and repeatable results have been obtained in this way, demonstrating that the software tool is able to perform efficiently the mission for which it was designed.

ACKNOWLEDGMENT

The authors thank the private hospital "Hermitage" in Naples, which has shared the laboratory and the necessary instrumentation for the study. Moreover, the authors thank the medical staff of the Hermitage, who have supported the bio-engineering study.

REFERENCES
1. Cappello A, Cappozzo A, di Prampero PE. XXII scuola annuale di Bioingegneria della postura e del movimento; Bressanone 2003: 63-65.
2. Sheldon R. Quantification of human motion: gait analysis—benefits and limitations to its application to clinical problems. Journal of Biomechanics 2004; 37: 1869-1880.
3. Davis RB III, Õunpuu S, Tyburski D, Gage JR. A gait analysis data collection and reduction technique. Human Movement Science 1991; 10: 575-587.
4. Perry J. Gait Analysis: Normal and Pathological Function. 1992.
5. Caviness J, Driver-Dunckley E, Connor D, Sabbagh M, Hentz J, Noble B, Evidente V, Shill H, Adler C. Defining Mild Cognitive Impairment in Parkinson's Disease. Movement Disorders 2007; 22(9): 1272-1277.
6. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild Cognitive Impairment: Clinical Characterization and Outcome. Arch Neurol 1999; 56: 303-308.
7. Petersen RC. Mild Cognitive Impairment: Current Research and Clinical Implications. Semin Neurol 2007; 27: 22-31.

Author: Luigi Iuppariello
Institute: Department of Biomedical, Electronic and Telecommunication Engineering, University of Naples Federico II
Street: Via Claudio 21
City: Naples
Country: Italy
Email: [email protected]
Tensile Stress Analysis of the Ceramic Head Endoprosthesis with different Micro Shape Deviations of the Contact Areas
V. Fuis1 and M. Koukal2
1 Centre of Mechatronics – Institute of Thermomechanics AS CR and Institute of Solid Mechanics, Mechatronics and Biomechanics, Faculty of Mechanical Engineering, Brno University of Technology, Brno, Czech Republic
2 Institute of Solid Mechanics, Mechatronics and Biomechanics, Faculty of Mechanical Engineering, Brno University of Technology, Brno, Czech Republic

Abstract— The paper deals with the problems of in vivo destructions of ceramic heads of hip joint endoprostheses, and with assessing the impact of shape deviations of the conical surfaces on the tensile stress in the head. Concerned are shape deviations from the ideal conical surfaces of the stem and the head of the endoprosthesis. The shape deviations may be modelled at the macro-level, which concerns model shape inaccuracies such as deviation from the nominal degree of taper, or at the micro-level, when the stochastic distribution of unevenness on the contact areas is respected. The problem of stress in ceramic heads was solved using the finite element method (system ANSYS) under ISO 7206-5 loading. The results of the solution for the macro shape deviations and the micro shape deviations, obtained from measurements made on the cones of stems and heads, are presented and analysed. Two different groups of micro shape deviations are analysed. The first comprises three variants of the sizes of the measured micro shape deviations (measured, doubled and halved). The second group of analysed shape deviations contains modelled deviations with a linear transformation of the measured deviations along the cone depth.
Keywords— Hip joint endoprosthesis, ceramic head, shape deviations of contact areas, tensile stress

I. INTRODUCTION

The failure of cohesion of ceramic heads of total hip joint prostheses has been stated in a not negligible number of patients in the Czech Republic. The implant's failure of the ceramic head always has traumatic consequences for the patient, since a part of or even the whole endoprosthesis has to be re-operated. Hence, it is desired to reduce the number of implant re-operations to a minimum. Therefore the computational modelling (using FEM) of the stress and of the failure probability (based on the Weibull weakest link theory [1]) of the ceramic head was realised. In this case the influence of the micro shape deviations of the stem and head contact cones was analysed in detail. The values of the deviations and their layout along the cone depth are important for the tensile stress distribution in the ceramic head.

II. METHODS

The computational modelling of stress has been made on a system consisting of a testing steel stem and a ceramic head. The head has been put on the cone of the testing stem and the load of this system has been in compliance with ISO 7206-5 (Fig. 1), which is the standard for determining the static strength of ceramic heads for hip joint endoprostheses. With a view to geometrical inputs, the deviations from ideal shapes can be divided into two groups. The first group is global (macro) deviations, i.e. the deviations from the stem's or head's nominal cone shape (angle α in Fig. 1). The maximum allowed difference of the head's and stem's cones is α = 10′ and this value is assumed in the computational modelling.

Fig. 1 Analysed system under ISO 7206-5 loading, macro shape deviation (head diameter D = 32 mm, head height H = 19 mm; the force F loads the head, and the coordinate of the cone depth [mm] starts at 0)
The second group is local (micro) deviations, which were measured using the IMS UMPIRE device. These micro shape deviations from the ideal cone are shown in Fig. 2. The deviations from the ideal cone shape are represented in the developed section of the cone in Fig. 2a – measured head cone micro deviations (from -3.9 μm to +4.12 μm) – and in Fig. 2b – measured stem cone micro deviations (from -2.27 μm to +2.28 μm).
Fig. 2 Measured micro shape deviations of the stem and head cones

The computational modelling was realised for three values of the micro shape deviations: the measured head and stem deviations (VAR. 1x), the double of the measured deviations (VAR. 2x) and the half of the measured deviations (VAR. 0.5x). The micro shape deviation shown in Fig. 2 is the same for all three variants; only the scale is changed (doubled or halved). The remaining assumed variants of the micro shape deviations are linear transformations of the measured deviations along the cone depth (1-0 or 0-1). VAR. 1-0 means the real deviations on the smallest cone diameter and zero deviations on the largest cone diameter. VAR. 0-1 means zero deviations on the smallest cone diameter and real deviations on the largest cone diameter. These modelled variants (VAR. 0-1 and VAR. 1-0) are used for the stem (Fig. 3) and head (Fig. 4) cone micro deviations.

Fig. 3 Modelled variants of the stem cone shape deviations

Fig. 4 Modelled variants of the head cone shape deviations
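The construction of these variants, scaling the measured profile and weighting it linearly along the cone depth, can be written compactly. The sketch below is only an illustration: the "measured" profile is random stand-in data, not the UMPIRE measurement, and depth 0 is assumed to correspond to the smallest cone diameter.

```python
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 19.0, 50)             # cone depth coordinate (mm)
measured = rng.uniform(-2.3, 2.3, depth.size)  # stand-in micro deviations (um)

variants = {
    "VAR. 1x":   measured,                     # measured deviations
    "VAR. 2x":   2.0 * measured,               # doubled
    "VAR. 0.5x": 0.5 * measured,               # halved
    # Linear transformation along the cone depth (depth 0 = smallest diameter):
    "VAR. 1-0":  measured * (1 - depth / depth.max()),  # real at smallest, zero at largest
    "VAR. 0-1":  measured * (depth / depth.max()),      # zero at smallest, real at largest
}
for name, dev in variants.items():
    print("%-9s range: %+.2f .. %+.2f um" % (name, dev.min(), dev.max()))
```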
The modelled shape deviations (VAR. S/H 1-0 or 0-1) are combined with the measured counterpart. For example, VAR. S 1-0 – stem shape deviations linearly transformed (the smallest diameter has the measured deviations and the largest cone diameter has zero deviation) – is combined with the measured head deviations VAR. 1x (Fig. 2a). The state of stress of the ceramic head of the endoprosthesis can be strongly affected by the process of endoprosthesis implantation, when the surgeon fits the head on the cone of the stem. The fitting of the head on the cone of the stem is a random process in view of the mutual position of the head to the stem (in the sense of the head's slight turning around the axis y, defined by angle β – Fig. 5). Therefore a series of computations was made for each pair and for various positions of the head towards the stem.
The distribution of the maximum values of the tensile stresses in the ceramic head (σmax), in dependence on the value of the head's loading, is shown in Figs. 6, 7 and 8. The values of σmax for various positions of the head to the stem (various values of angle β) form a belt of curves in Figs. 6-8. The width of the curve belt corresponds to the dispersion of σmax in the ceramic head.
Fig. 6 Maximum tensile stress in the head for different values of the micro shape deviations – VAR. 2x – doubled deviations, VAR. 0.5x – halved deviations
Fig. 5 Representation of various head-to-stem positions defined by angle β

The head is modelled as a linear isotropic continuum with the following constants: Eh = 390 GPa and μh = 0.23. The stem is modelled as a linear isotropic (elastic) continuum too, with material parameters Es = 210 GPa and μs = 0.3. The coefficient of friction between the head and the stem is f = 0.15 [2]. The FEM system ANSYS has been used for modelling the stress in the system. From the viewpoint of the bonds of the system, it may be stated that besides the head-stem contact also the shift of the stem's lower part has been prevented. To ensure convergence, the head's rotation on the stem also has to be prevented. The load of the system corresponds to ISO 7206-5: the force acts as a line load on a circle with a radius of 10 mm (Figs. 1 and 5).
In a ceramic head a 3D state of stress sets in under loading. In view of the reliability of the head, which is made of brittle material, the most important are the tensile stresses [3]. For this reason only extreme tensile stresses in the ceramic head will further be analysed.
Fig. 6 shows the influence of the sizes of the head's and stem's micro shape deviations of the contact cone surface. Three variants are assumed: VAR. 1x – measured micro shape deviations, VAR. 2x – doubled sizes of the measured values and VAR. 0.5x – halved sizes of the measured values. The dispersions of the σmax curves of VAR. 1x and VAR. 2x are nearly the same; only the σmax values of VAR. 2x are higher. The micro shape deviations of VAR. 0.5x caused a reduction of the σmax values and of the dispersion too. Fig. 7 shows the maximum tensile stress in the head for a different layout of the micro shape deviations: deviations on the smallest cone diameter are measured, on the largest diameter zero. For comparison the measured variant (VAR. 1x) is shown too. The tensile stress curves for all three displayed variants create very similar belts, so the shape of the micro deviations on the smallest diameter determines the value of the maximum tensile stresses in the head. The remaining regions are not so important (see below). Fig. 8 again shows the maximum tensile stress in the head for a different layout of the micro shape deviations: deviations on the largest cone diameter are measured, on the smallest diameter zero. The measured variant (VAR. 1x) is shown in Fig. 8 too for comparison of the curve belts. The curve belts in Fig. 8 are different from the curves in
Fig. 7 – the variants with modified micro shape deviations (VAR. 0-1) have a wider curve belt. On the other hand, the narrowest belt is for VAR. S 0-1 (yellow colour in Fig. 8) and the highest reduction of the tensile stress in the head is for VAR. H 0-1 (light blue colour in Fig. 8). The width of the VAR. H 0-1 belt is smaller than for VAR. 1x. The computational modelling results show that, from this point of view, the most important location on the cone is the smallest cone diameter. The reduction of the micro shape deviations at this location causes a reduction of the tensile stress in the head.
IV. CONCLUSIONS
By computational modelling (using FEM) it has been proved that the location of the micro shape deviations on the stem's and head's cone can reduce the maximum tensile stresses in the ceramic head (VAR. H 0-1 or VAR. S 0-1). Decreasing the micro shape deviations on the smallest diameter of the stem and head cone caused a tensile stress reduction. The values of the sizes of the micro shape deviations, which are added to the macro shape deviation, significantly influence the tensile stress in the head and the value of the dispersion of the σmax curves (arising from different positions of the head on the stem). The doubled micro deviations (VAR. 2x) increased the tensile stress state and the dispersion of the stress curves too. The halved micro deviations (VAR. 0.5x) caused a stress reduction. The tensile stress in the ceramic head causes brittle fracture, and therefore the reduction of the tensile stress (by relocating the micro deviations (VAR. 0-1) or halving the micro shape deviations (VAR. 0.5x)) decreases the head's failure probability, which is based on the Weibull weakest link theory [1].
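The reliability argument can be made explicit. In Weibull weakest-link theory the failure probability of a component under an inhomogeneous tensile stress field is commonly written as Pf = 1 - exp[-(1/V0) ∫ (σ/σ0)^m dV]; a discretised evaluation over finite-element volumes might look as follows (the material constants below are placeholders, not values from the paper):

```python
import numpy as np

def weibull_failure_probability(sigma, volume, sigma0, m, v0=1.0):
    """Discretised weakest-link estimate over finite elements.

    sigma  : max principal (tensile) stress per element (MPa); compressive
             values contribute nothing and are clipped to zero.
    volume : element volumes (mm^3).
    """
    s = np.clip(np.asarray(sigma, dtype=float), 0.0, None)
    risk = np.sum((s / sigma0) ** m * np.asarray(volume)) / v0
    return 1.0 - np.exp(-risk)

# Placeholder stress field: reducing the tensile peaks lowers the probability.
sigma = np.array([120.0, 80.0, 40.0, 10.0])   # MPa
vol = np.full(4, 2.0)                         # mm^3
print(weibull_failure_probability(sigma, vol, sigma0=400.0, m=10))
print(weibull_failure_probability(0.8 * sigma, vol, sigma0=400.0, m=10))
```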
Fig. 7 Maximum tensile stress in the head for different layouts of the micro shape deviations – VAR. 1x – measured deviations, VAR. S or H 1-0 (S – stem, H – head) – deviations on the smallest cone diameter are measured, on the largest diameter zero – see Figs. 3 and 4

ACKNOWLEDGMENT
This paper was prepared as a part of the projects AV0Z20760514 and CZ.1.07/2.3.00/09.0228 „Complex System for Attracting, Education and Continuing Involment of Talented Individuals to Research Centers of AS CR and FME BUT“, which is cofinanced by the European Social Fund and the state budget of the Czech Republic.
Fig. 8 Maximum tensile stress in the head for different layouts of the micro shape deviations – VAR. 1x – measured deviations, VAR. S or H 0-1 (S – stem, H – head) – deviations on the smallest cone diameter are zero, on the largest diameter measured (real) – see Figs. 3 and 4

REFERENCES
1. McLean A F, Hartsock D L (1991) Engineered materials handbook, Vol. 4, Ceramics and Glasses. ASM International, 676-689
2. Fuis V, Janíček P (2002) Stress and reliability analyses of damaged ceramic femoral heads. Damage & Fracture Mechanics VII. WIT Press, 475-486
3. Fuis V, Návrat T, Hlavoň P, Koukal M, Houfek M (2007) Analysis of contact pressure between the parts of total hip joint endoprosthesis with shape deviations. Journal of Biomechanics, Vol. 40, Suppl. 2

Author: Vladimir Fuis
Institute: Centre of Mechatronics – Institute of Thermomechanics AS CR and Institute of Solid Mechanics, Mechatronics and Biomechanics, Faculty of Mechanical Engineering, Brno University of Technology
Street: Technicka 2, 616 69
City: Brno
Country: Czech Republic
Email: [email protected]
Modelling of Cancer Dynamics and Comparison of Methods for Survival Time Estimation Tomas Zdrazil1 and Jiri Holcik1,2
1 Institute of Biostatistics and Analyses, Masaryk University, Brno, Czech Republic
2 Institute of Measurement Sciences, Slovak Academy of Sciences, Bratislava, Slovakia
Abstract— The paper deals with an epidemiological compartmental model to describe cancer dynamics in a given population. Three methods, including the Kaplan-Meier algorithm and Weibull and exponential distribution fitting, were used for estimation of the mean survival time for the four TNM stages of prostate cancer described by the 1977-2007 data from the National Oncological Register of the Czech Republic. In comparison to the Kaplan-Meier estimation, both distribution fitting methods resulted in an overestimated length of probable survival. However, such overestimation can be corrected using demographic mortality data.

Keywords— Prostate cancer, survival analysis, Kaplan-Meier estimator, right-censored data.
I. INTRODUCTION

There are two basic ways to analyse oncological data. The first deals with modelling the growth or behaviour of a single tumour at the cellular level, while the second describes the global cancer dynamics in a population. One approach to model cancer dynamics in a population assumes that the disease progresses through four stages according to the TNM classification [1]. The number of people in the stages can be described by the state-space variables of a compartmental model. To determine the transition rates between the compartments we need to estimate a mean survival time for the particular stages, which is mostly complicated by the fact that the real data are right-censored. There are several approaches to solve this problem. While the Kaplan-Meier algorithm [2] uses a nonparametric maximum likelihood estimate of the survival function, Cox regression [3] evaluates the effect of several variables upon the time a specified event takes to happen. Another method is based on linearization of a given distribution and estimation of the missing fractiles by linear regression [4].
II. MATERIALS AND METHODS

We assumed the disease progresses through four stages [1], so we chose a model of an epidemiological type, described by the following system of differential equations:

\[
\begin{aligned}
x_1' &= c_0 n - c_1 x_1\\
x_2' &= c_1 x_1 - c_2 x_2\\
x_3' &= c_2 x_2 - c_3 x_3\\
x_4' &= c_3 x_3 - c_4 x_4
\end{aligned}
\tag{1}
\]

where n is the number of healthy men in the population, which is assumed to be constant, x1, x2, x3, x4 are the numbers of patients in the four disease stages 1, 2, 3 and/or 4, respectively, according to the TNM classification, and c1, c2, c3, c4 are parameters referring to transition rates between particular compartments. The disease behaviour and the effectiveness of health care services change with time. That is why we created groups of patients based on the year of diagnosis and calculated parameters for each of them. The data source, the National Oncological Register (NOR) of the Czech Republic, contains a huge amount of data about patients diagnosed with cancer in years 1977-2007. For each patient NOR contains the date of birth, date of diagnosis, stage of the disease at the time of diagnosis and eventually the date of death. Based on these data we first computed c0, which represents the proportion of healthy men diagnosed per year, as a product of the reciprocal value of the mean age at the time of diagnosis and the probability of disease onset during life. In order to compute c1, c2, c3, c4 we had to estimate the mean survival time. In most cases some patients were still alive at the end of the observation time (end of year 2007), thus we obtained only a part of the distribution function of the survival time. This is usually called the right-censored data problem. Percentages of missing data are introduced in Table 1. We used the Kaplan-Meier algorithm, which is a widely used approach to correct the estimation of incomplete data series, and two developed methods based on parameter estimates of a given distribution function by the nonlinear regression method.
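Model (1) can be integrated numerically once the transition rates are known. The sketch below uses scipy with arbitrary illustrative parameter values (the real ci are estimated from the survival times, as described later in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only; the real values are derived from the NOR data.
n = 5.0e6                            # healthy men in the population (constant)
c = [2.0e-5, 0.9, 0.7, 0.5, 1.2]     # c0..c4, per year

def model(t, x):
    x1, x2, x3, x4 = x
    return [c[0] * n - c[1] * x1,
            c[1] * x1 - c[2] * x2,
            c[2] * x2 - c[3] * x3,
            c[3] * x3 - c[4] * x4]

sol = solve_ivp(model, (0, 30), [0, 0, 0, 0], dense_output=True)
print(sol.y[:, -1])  # stage occupancies after 30 years
```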
Table 1 Censored data percentages

Year   Stage 1  Stage 2  Stage 3  Stage 4
1977      0        0       10        0
1978      0        0        0        0
1979      0        0      100        0
1980     20        0        0        0
1981      0        0        0        0
1982     11       29        0        0
1983      0        0        0        0
1984      0        0        0        0
1985      9       14        0        0
1986     17       14        0        0
1987      0        0       10       14
1988     17        0        0        0
1989      0       25        0        8
1990     33       18       33        0
1991     21        0        9       13
1992     35       18        0        5
1993     12       19       27        2
1994     35       21       17        6
1995     54       23       21        5
1996     54       43       24        9
1997     53       49       32       12
1998     59       54       43       14
1999     66       57       50       15
2000     71       68       48       16
2001     84       75       58       19
2002     81       81       65       24
2003     86       86       66       30
2004     88       93       84       40
2005      -       94       89       51
2006      -       97       95       63
2007      -       99       98       85

A. Kaplan-Meier Algorithm

The Kaplan-Meier algorithm is based on a nonparametric maximum likelihood estimate of the survival function. The probability S(t) of having a life time greater than t is estimated by the statistic

\[
\hat{S}(t) = \prod_{t_i < t} \frac{n_i - d_i}{n_i}
\tag{2}
\]

where n_i is the number of persons at risk and d_i stands for the number of deaths at the time t_i.
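Eq. (2) translates directly into code. The sketch below evaluates the product-limit estimator on a small invented sample of (time, event) pairs, where event = 0 marks a right-censored observation:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate S(t) at each distinct event time (eq. 2)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    s, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        n_i = np.sum(times >= t)                    # persons at risk just before t
        d_i = np.sum((times == t) & (events == 1))  # deaths at t
        s *= (n_i - d_i) / n_i
        curve.append((t, s))
    return curve

# Invented survival times in years; event = 0 means censored at that time.
times  = [2, 3, 3, 5, 7, 8, 8, 11]
events = [1, 1, 0, 1, 0, 1, 1, 0]
for t, s in kaplan_meier(times, events):
    print("t = %4.1f  S(t) = %.3f" % (t, s))
```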
B. Distribution Fitting

This method is based on estimation of the parameters of a chosen distribution function. According to standard approaches to survival data modelling, and in relation to the shape of the survival time histograms, we chose the Weibull and exponential distributions and applied nonlinear regression to estimate their parameters.

a) Weibull Distribution

The two-parametric distribution function of the Weibull distribution is given as

\[
F(x) = 1 - e^{-\left(\frac{x}{a}\right)^{b}}
\tag{3}
\]

The choice of this distribution was based on the typical character of the majority of the experimental data. A great part of the histograms, especially for stages 1 and 2, seemed to fit the density of the Weibull distribution for a value of parameter b around 1.5, such as the histogram for stage 2 in 1996. The other part showed the shape of the exponential distribution, which is a special case of the Weibull distribution for b = 1.

Fig. 1 Weibull distribution-shaped histogram

In the case of the two-parametric Weibull distribution we calculated its parameters using constrained optimization based on the condition

\[
\int_{0}^{x_{\alpha}} F'(x)\,dx = \alpha
\tag{4}
\]

That means that the value of F(x) at the maximum survival time x_α is equal to the proportion of patients that died before the end of observation. In other words, the distribution function goes through its α-fractile. This condition results in an equation describing the relationship between parameters a and b:

\[
b = \frac{\ln\left[-\ln(1-\alpha)\right]}{\ln\left(\frac{x_{\alpha}}{a}\right)}
\tag{5}
\]

After parameter estimation the mean of the Weibull distribution was calculated as
\[
E(X) = a\,\Gamma\left(\frac{1}{b}+1\right)
\tag{6}
\]

where Γ represents the Γ function.
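The constrained fit of eqs. (3)-(6) can be sketched as a one-parameter optimisation: for any candidate scale a, eq. (5) fixes the shape b, so only a has to be fitted to the empirical distribution function. The data below are invented, and the rescaled empirical CDF is one plausible fitting target, not necessarily the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

# Invented death times (days) of the non-censored part of one patient group.
x = np.sort(np.array([120.0, 300, 450, 700, 900, 1300, 1800, 2500]))
alpha = 0.8               # proportion of patients dead by the end of observation
x_alpha = x.max()         # maximum observed survival time

# Empirical distribution function at the observed deaths, rescaled so that
# it reaches alpha at x_alpha (the alpha-fractile condition of eq. 4).
ecdf = alpha * np.arange(1, x.size + 1) / x.size

def b_of_a(a):
    # Eq. (5); with alpha > 1 - 1/e this requires a < x_alpha for b > 0.
    return np.log(-np.log(1.0 - alpha)) / np.log(x_alpha / a)

def sse(a):
    F = 1.0 - np.exp(-(x / a) ** b_of_a(a))   # eq. (3)
    return np.sum((F - ecdf) ** 2)

res = minimize_scalar(sse, bounds=(50.0, 0.99 * x_alpha), method="bounded")
a_hat = res.x
b_hat = b_of_a(a_hat)
mean = a_hat * gamma(1.0 / b_hat + 1.0)       # eq. (6)
print("a = %.0f, b = %.2f, mean survival = %.0f days" % (a_hat, b_hat, mean))
```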
b) Exponential Distribution

Because the Weibull distribution did not always describe the experimental data well, we also tested the exponential distribution. The distribution function of the exponential distribution is given as

\[
F(x) = 1 - e^{-\frac{x}{a}}
\tag{7}
\]

Most data, especially for stages 3 and 4, seemed to fit this distribution.

Fig. 2 Exponential distribution-shaped histogram

The mean of the exponential distribution is equal to the value of parameter a:

\[
E(X) = a
\tag{8}
\]

Now let us assign the survival times for stages 1, 2, 3, 4 as t1, t2, t3, and t4. Then the parameters c1, c2, c3, and c4 of model (1) were calculated as

\[
c_4 = \frac{1}{t_4},\qquad
c_3 = \frac{1}{t_3 - t_4},\qquad
c_2 = \frac{1}{t_2 - t_3},\qquad
c_1 = \frac{1}{t_1 - t_2}
\tag{9}
\]

The data used for the analysis came from the National Oncological Register. We applied the three described methods on the C61 diagnosis, the prostate cancer.

Fig. 3 Prevalence of prostate cancer for stages 1, 2, 3, 4
III. RESULTS

As a result of application of the three described methods we obtained time series of mean survival time for stages 1, 2, 3 and/or 4 in years 1977-2007. Because of the variability of the results we used moving average to smooth the data trends. All analyses were calculated by SPSS software.

Fig. 4 Kaplan-Meier mean estimates

Fig. 5 Weibull distribution mean estimates
Fig. 6 Exponential distribution mean estimates

IV. DISCUSSION

In the Czech Republic, the average age of men at the time of prostate cancer diagnosis is about 70 years. According to the mortality tables, the mean survival time for a 70-year-old man is 12 years. That is why we consider the means of survival derived from the distribution fitting overestimated. On the other hand, as stated in [5], the Kaplan-Meier methodology underestimates the true mean for right-censored data, which gets worse as the censoring increases. Fig. 4 confirms this fact. As a result of these inaccuracies, the number of patients predicted by model (1) does not match the real numbers for any of the three used methods. We found that the data in the NOR show a slowly increasing linear trend of survival time for all stages, which corresponds to the assumption that health care services are becoming more successful in time. We suppose that the overestimation of the values by the distribution fitting methods is caused by the fact that neither the Weibull nor the exponential distribution limits the survival time, which can theoretically go to infinity. We believe that this problem can be fixed by adding proper weights, decreasing with time, to both the Weibull and exponential density functions, derived from the Czech mortality tables.

V. CONCLUSION
On the basis of a compartmental model of cancer dynamics we applied three methods for the mean survival time estimation. While the generally accepted Kaplan-Meier algorithm underestimates the mean survival time for right-censored data, the method of parameter estimation for the Weibull and exponential distributions by nonlinear regression clearly overestimates it. Our suggestion to improve the distribution fitting methods is to add proper weights to the density functions of the given distributions, possibly calculated from mortality tables.
REFERENCES
1. http://en.wikipedia.org/wiki/Prostate_cancer_staging
2. Kaplan E. L., Meier P. (1958) Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, Vol. 53, 457-481
3. Cox D. R. (1972) Regression models and life-tables. Journal of the Royal Statistical Society, Series B 34, 187-220
4. Kupka K. (2002) Reliability, Durability, Failure rate and their modeling. Automa
5. Zhong M., Hess K. R. (2009) Mean Survival Time from Right Censored Data. COBRA Preprint Series
Implications of Data Quality Problems within Hospital Administrative Databases
J.A. Freitas1,2, T. Silva-Costa1,2, B. Marques1,2, and A. Costa-Pereira1,2
1 Department of Biostatistics and Medical Informatics, Faculdade de Medicina, Universidade do Porto, Portugal
2 CINTESIS - Center for Research in Health Technologies and Information Systems, Universidade do Porto, Portugal
Abstract— Administrative data, even with some coding irregularities, is easily accessible, inexpensive, and includes episodes from several years for large populations. This routine data can be used to study hospital performance and to screen potential problems in healthcare. In this context, the quality of data is a fundamental issue if we want to achieve reliable results. In fact, the problem of poor data quality cannot be ignored and should more and more be considered. In this paper we present some issues about quality problems, expose some examples of poor data in administrative databases and discuss some potential implications of these problems. Keywords — data quality problems, hospital administrative data, diagnosis-related groups.
I. INTRODUCTION

Data quality has become increasingly important to many organizations as they build data warehouses and focus more on customer relationships. In particular, in the health care field, cost pressures and the need to improve patient care impel efforts to integrate and clean organizational data. Over the past years, the number of medical registries has greatly increased. Their value strongly depends on the quality of the data contained in the registry [1]. With the development of informatics technology, medical databases tend to be more reliable. Issues regarding data quality are now even more relevant as the utilization of these databases increases in magnitude and importance. For health care organizations, data is vital to effective health care and to financial survival. Data regarding effectiveness of treatments, accuracy of diagnosis and practices of health care providers is crucial to organizations that do their best to maintain and improve health care delivery. Hospital inpatient databases, commonly referred to as administrative data, are among the most relevant medical repositories. Administrative data is present in all hospitals in Portugal and in the majority of hospitals around the world. There are many studies, researches and analyses based on administrative data, but there are also many problems in the use of this type of data.
A. Administrative Data

Administrative data was created primarily to monitor utilization, to determine the consumption of resources, or to find out the capacity to supply a service. Administrative data is generated as part of standard hospital discharge coding procedures. In these procedures, certified coding clerks record diagnoses and procedures in standard ICD-9-CM format and abstract information from hospital medical records. Administrative data is normally defined as "large, computerized data files generally compiled in billing for healthcare services such as hospitalization" [2], and should only be used to identify areas for further study [3], whereas it can have significant influence in the improvement of healthcare quality [4]. The core data elements of the administrative data system are admission date, discharge date and status, principal and secondary ICD-9-CM diagnoses, procedures, external causes of injury and some demographic information. This data is often available in compiled research databases from federal agencies, state health departments, health plans, and private data institutions [5]. In Portugal, administrative data contains almost the same variables that exist in other countries. There is a common database in all hospitals of the National Health Service (NHS), containing the variables of administrative data along with some other variables related to Diagnosis-Related Groups (DRG). This database has been used since 1990 [6], with a positive impact on the productivity and technical efficiency of some diagnostic technologies [7]. The use of administrative data for health care quality reporting involves balancing a number of advantages and disadvantages. This type of data is informative about major processes of healthcare; however, it is primarily collected for payment purposes. For quality assessment such data is often inaccurate and offers reduced clinical detail. Death, length-of-stay and readmission usually are consistent descriptors, but the accuracy of diagnosis coding can be inconsistent. Nevertheless, this type of database can be easily obtained, includes millions of episodes and covers large areas.
B. Taxonomy of Data Quality Problems
Fig. 1 Organization model of relational data

As described by Oliveira et al. [8], data quality problems can be classified as (Fig. 1):
⋅ Missing values [MV], when a required attribute is not filled;
⋅ Syntax violation [SV], when an attribute value violates the predefined syntax;
⋅ Domain violation [DV], when an attribute value violates the domain of valid values;
⋅ Incorrect value [IV], when an attribute contains a value which is not the correct one, but the domain of valid values is not violated;
⋅ Violation of business rule [VBR], a problem that can happen at all granularity levels, when a given business domain rule is violated;
⋅ Uniqueness violation [UV], when two (or more) tuples have the same value in a unique value attribute;
⋅ Existence of synonyms [ES], the use of syntactically different values with the same meaning;
⋅ Violation of functional dependency [VFD], when the value of a tuple violates an existing functional dependency among two or more attributes;
⋅ Approximate duplicate tuples [ADT], the same real-world entity is represented (equally or with minor differences) in more than one tuple;
⋅ Inconsistent duplicate tuples [IDT], when the same real-world entity is represented in more than one tuple but with inconsistencies between attribute values;
⋅ Referential integrity violation [RIV], when a value in a foreign key attribute does not exist in the related relation as a primary key value;
⋅ Incorrect reference [IR], when the referential integrity is respected but the foreign key contains a value which is not the correct one;
⋅ Heterogeneity of syntaxes [HS], the existence of different representation syntaxes in related attributes;
⋅ Heterogeneity of measure units [HMU], the use of different measure units in related attributes;
⋅ Heterogeneity of representation [HR], the use of different sets of values to code the same real-world property;
⋅ Existence of homonyms [EH], the use of syntactically equal values with different meanings, among related attributes from multiple data sources.

II. EXAMPLES AND IMPLICATIONS OF DATA QUALITY PROBLEMS IN HOSPITAL ADMINISTRATIVE DATA
Next we present examples and discuss possible implications of some problems in administrative data. The database associated with the Portuguese resource allocation system was the main source of data for these analyses. This database, with 9.098.628 episodes between 2000 and 2007, includes data from inpatient and outpatient discharges from the majority of the public acute care hospitals of the National Health Service (NHS). The access to the data was provided by ACSS, I. P. (Administração Central do Sistema de Saúde, I. P.), the Ministry of Health's Central Administration for the Health System.

Clear Identification of Type of Care (Missing Values)

Missing values occur when a required attribute is not filled, that is, there is an absence of value in a mandatory attribute. The variable Type of care distinguishes between the different types of care the hospital provides, namely indicating whether it is an outpatient or an inpatient episode. In the described database we may find 2.411.772 (26.5%) episodes with missing Type of care. Possible implications: these missing values can have a negative impact in many situations, including in the calculation of the hospital Case-Mix Index, consequently possibly affecting the hospital budget. Even for research, and for many episodes, it is neither easy nor clear how to use other variables to split the data. So, this is a fundamental variable for the majority of data analyses performed using these databases.

Length-of-Stay (Domain Violation)

We found 232 problems in the variable Length-of-stay, including a few cases with negative values. Possible implications: these errors are uncommon but can have implications for the hospital budget (these cases will be excluded). Clearly these errors should also be considered
in any statistical analysis or in any process of calculation of performance or quality indicators.

Incorrect Value for Age

The values of the variable Age can be checked using two other variables: the Birth date of the patient and the Admission date present in the registry. We recalculated this variable and detected a difference of one year of age in 290.479 episodes. We suspect that the problem lies in the rule used by the system in some hospitals to calculate the age of patients. These systems might have used only the year as the basis of calculation, while in our rule we used the entire date (DD-MM-YYYY).
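The age check can be reproduced with a date-aware recomputation. The sketch below contrasts the year-only rule we suspect some hospital systems used with the full-date rule; the column names and rows are illustrative, not the NOR schema or data.

```python
import pandas as pd

# Illustrative episodes; column names are ours, not the national database schema.
df = pd.DataFrame({
    "birth_date": ["1990-06-15", "1940-12-01"],
    "admission_date": ["2007-05-10", "2007-05-10"],
    "recorded_age": [17, 66],
})
for col in ("birth_date", "admission_date"):
    df[col] = pd.to_datetime(df[col])

# Year-only rule (the suspected source of the one-year discrepancies).
df["age_year_only"] = df.admission_date.dt.year - df.birth_date.dt.year

# Full DD-MM-YYYY rule: subtract one year if the birthday has not yet occurred.
before_birthday = (df.admission_date.dt.month < df.birth_date.dt.month) | (
    (df.admission_date.dt.month == df.birth_date.dt.month)
    & (df.admission_date.dt.day < df.birth_date.dt.day)
)
df["age_full_date"] = df["age_year_only"] - before_birthday.astype(int)

# Episodes whose recorded age disagrees with the full-date rule:
print(df[df.recorded_age != df.age_full_date])
```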
825
Duplicated Episodes By analyzing the entire tuple of attributes, we detected 30.932 duplicate cases. This means that patients have different registries in the same date, at the same hospital, and matching all clinical information (e.g., diagnosis and procedures). This is a clear example of inconsistent duplicate tuples. Possible implications: if not considered, these repetitions can adulterate any calculation of indicators or any other statistical analysis. Principal Diagnosis (Referential Integrity Violation) We studied the variable Principal diagnosis and found 565 episodes with diagnosis code without respective reference in the ICD-9-CM lookup table (invalid code). According to the briefly described taxonomy, this situation can be classified under the problem of referential integrity violation. Possible implications: episodes with this problem are classified with DRG 470 (ungroupable), and the hospital will not be reimbursed for them (claims grouped in this DRG will have payment denied). Hospital Department Codification We detected different representations for the variable that represents the hospital department code (the first department for each hospital stay). As an example for this heterogeneity we considered the department of Pneumology, and found presentations such as “Pneum” (2550 cases), “Pneumo” (531 cases) and the code “210” (with 22cases). Possible implications: This heterogeneity of representations can have several implications. In can, for instance, complicate any study comparing medical specialties among hospitals or any study related with departmental transfers. Repeated ICD-9-CM Codes Another problem was found when studying diagnosis and procedures. We found repeated codes inside one episode, i.e., the same ICD-9-CM code was found, for instance, in the principal diagnosis and in one (or more) secondary diagnosis. In this situation we found 99.995 repetitions, representing 1.1% of total episodes. Considering procedures, we found 121.850 repetitions (1.3% of all episodes have repeated procedures). Possible implications: Under these circumstances we need to be careful when using the number of secondary diagnoses as a proxy for severity of illness, or if we want to study the evolution in the number of procedures or secondary diagnosis coded.
IFMBE Proceedings Vol. 29
826
J.A. Freitas et al.
III. CONCLUSIONS

In this paper we presented some problems and some possible implications. Some of these problems can be corrected, and others cannot. Nevertheless, they should always be considered when interpreting results from analyses of administrative databases. Administrative data can contain inaccurate and unstable data, but it is readily available, relatively inexpensive and widely used [2, 9]. In some situations it can be the only source of information to address a clinical question. Despite some existing problems [10-13], administrative data can, for instance, be used in the production of quality indicators, or for providing benchmarks for hospital activity [5, 11, 14, 15]. However, calculating these indicators using only this data may bias quality reports, so this analysis must be done carefully in order to avoid these problems. Health care organizations and consumers are concerned about how health care can be improved and costs reduced through the use of information technology. Improving decision-making with better data can make a significant contribution. In this context it is crucial to understand the data and to take steps toward a continuous increase of data quality.
ACKNOWLEDGMENT

The authors would like to thank the support given by the research project HR-QoD - Quality of data (outliers, inconsistencies and errors) in hospital inpatient databases: methods and implications for data modeling, cleansing and analysis (project PTDC/SAU-ESA/75660/2006).
REFERENCES
1. Arts D, Keizer N, Scheffer G-J. Defining and Improving Data Quality in Medical Registries: A Literature Review, Case Study, and Generic Framework. J Am Med Inform Assoc 2002;9: 600-611.
2. Iezzoni LI. Risk Adjustment for Measuring Health Care Outcomes. Health Administrative Press; Chicago, IL. Foundation of the American College of Executives; 1997.
3. Iezzoni LI. Assessing Quality Using Administrative Data. Ann Intern Med 1997;127: 666-673.
4. Price J, Estrada CA, Thompson D. Administrative Data Versus Corrected Administrative Data. Am J Med Qual 2003;19: 38-44.
5. Zhan C, Miller MR. Administrative data based patient safety research: a critical review. Qual Saf Health Care 2003;12: 58-63.
6. Bentes ME, Urbano JA, Carvalho MdC, Tranquada MS. Using DRGs to Fund Hospitals in Portugal: An Evaluation of the Experience. In: Diagnosis Related Groups in Europe: Uses and perspectives. M. Casas, M. M. Wiley (Eds.). Springer-Verlag, Berlin; 1993.
7. Dismuke C, Sena V. Has DRG Payment Influenced The Technical Efficiency and Productivity Of Diagnostic Technologies In Portuguese Public Hospitals? An Empirical Analysis. In: The V European Workshop on Efficiency and Productivity Analysis. Copenhagen, Denmark; 1997.
8. Oliveira P, Rodrigues F, Henriques P, Galhardas H. A Taxonomy of Data Quality Problems. In: 2nd International Workshop on Data and Information Quality. Porto, Portugal; 2005.
9. Torchiana D, Meyer G. Use of administrative data for clinical quality measurement. J Thorac Cardiovasc Surg 2005;129: 1222-4.
10. Powell AE, Davies HTO, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care 2003;12: 122-128.
11. Williams J, Mann R. Hospital episode statistics: time for clinicians to get involved? Clin Med 2002;2: 34-37.
12. Calle J, Saturno P, Parra P, Rodenas J, Perez M, Eustaquio F, Aguinaga E. Quality of the information contained in the minimum basic data set: results from an evaluation in eight hospitals. European Journal of Epidemiology 2000;16: 1073-80.
13. Sutherland JM, Botz CK. The effect of misclassification errors on case mix measurement. In: Health Policy; 2006.
14. Jarman B, Gault S, Alves B, Hider A, Dolan S, Cook A, Hurwitz B, Iezzoni LI. Explaining differences in English hospital death rates using routinely collected data. BMJ 1999;318: 1515-20.
15. Scott I, Youlden D, Coory M. Are diagnosis specific outcome indicators based on administrative data useful in assessing quality of hospital care? Qual Saf Health Care 2004;13: 32-39.
Author: Alberto Freitas
Institute: Department of Biostatistics and Medical Informatics, Faculdade de Medicina, Universidade do Porto, Portugal
Street: Alameda Hernani Monteiro
City: 4200 Porto
Country: Portugal
Email: [email protected]
Can the EEG Indicate the FiO2 Flow of a Mechanical Ventilator in ICU Patients with Respiratory Failure? E.G. Peranonti1, M.A. Klados2, C.L. Papadelis3, D.G. Kontotasiou2, C. Kourtidou-Papadeli3, and P.D. Bamidis1
1 Greek Aerospace Medical Association and Space Research, Thessaloniki, Greece; 2 Aristotle University of Thessaloniki, School of Medicine, Laboratory of Medical Informatics, P.O. Box 323, 54124 Thessaloniki, Greece; 3 Center for Brain/Mind Sciences (CIMEC), University of Trento, Mattarello (TN), Italy
Abstract— The aim of this paper is to show that the brain activity of patients with acute respiratory failure hospitalized in Intensive Care Units (ICUs) can provide useful medical information, which is directly related to neurological rehabilitation. It also aims to show that entropy and kurtosis, widely used indices of electroencephalographic (EEG) signals, are able to identify EEG changes associated with cerebral hypoxia. EEG signals were recorded from eight adult patients with acute respiratory failure admitted to the ICU. The measurements were recorded in five stages, with FiO2 at 40%, 100%, 60%, 20% and 0% (T-piece), respectively. The total recording time was 50 min (10 min for each stage). The EEG signals were filtered and further cleaned of ocular and muscular artifacts, as well as of the artifacts introduced by external devices, electrode movements and poor electrode contacts. Afterwards, the 10-min EEG signals of each stage were segmented into ten epochs of one-minute fixed length, and kurtosis and Shannon's entropy were calculated for each segment. A one-way ANOVA verified the assumption that there are statistically significant differences between the various stages of our protocol, while Scheffe post-hoc tests revealed the homogeneous subsets formed by the aforementioned stages. The results suggest that the EEG is directly connected with the mechanical ventilator's changes, so in the future clinicians could probably use the EEG as a source of particularly useful and time-critical information, especially during the procedure of weaning from the mechanical ventilator. Keywords— ICU, EEG, Kurtosis, Entropy, Respiratory Failure.
I. INTRODUCTION

Weaning from mechanical ventilation is an essential and universal element in the care of critically ill, intubated patients receiving mechanical ventilation. It covers the entire process of liberating the patient from mechanical support and from the endotracheal tube, including relevant aspects of terminal care. There is uncertainty about the best methods for conducting this process, which generally requires the cooperation of the patient during the phase of recovery from
critical illness. This makes weaning an important clinical issue for patients and clinicians [1]. Vallverdu et al. [2] reported that weaning failure occurred in 61% of chronic obstructive pulmonary disease patients, in 41% of neurological patients and in 38% of hypoxaemic patients. Patients with difficult weaning represent 10% of ICU admissions and consume a significant amount of the overall ICU patient-days and 50% of financial resources [3]. Therefore, strategies to improve the weaning process are needed. The international consensus conference [1] identified six stages within the weaning process:
1. Treatment of acute respiratory failure (ARF).
2. Suspicion that weaning may be possible.
3. Assessment of readiness to wean.
4. Spontaneous breathing trial (SBT).
5. Extubation, and possibly
6. Reintubation.
Weaning from mechanical ventilation usually implies two separate but closely related aspects of care: discontinuation of mechanical ventilation and removal of any artificial airway. The first problem that the clinician faces is how to determine when a patient is ready to resume ventilation on his or her own. Several studies [4,5] have shown that a direct method of assessing readiness to maintain spontaneous breathing is simply to initiate a trial of unassisted breathing. Once a patient is able to sustain spontaneous breathing, a second judgment must be made regarding whether the artificial airway can be removed. This decision is made on the basis of the patient's mental status, airway protective mechanisms, ability to cough and character of secretions. If the patient has an adequate sensorium with intact airway protection mechanisms, and is without excessive secretions, it is reasonable to extubate the trachea [6]. Early prediction of the neurological rehabilitation of the patient would be very important, providing the clinician with useful information on the optimal time to initiate the procedure of weaning from mechanical ventilation. The electroencephalogram (EEG) allows us to continuously monitor and record the brain activity of the patient. Although the
EEG provides particularly useful and time-critical information, it is not widely used [7]. To the best of our knowledge, there is a significant lack of studies related to EEG recordings during the weaning procedure. This work investigates whether the EEG is capable of measuring the brain responses during the weaning procedure. It also shows that kurtosis and Shannon's entropy, widely used EEG parameters, are useful indicators of the neurological recovery of patients with acute respiratory failure who are hospitalized in ICUs.
II. MATERIALS AND METHODS

A. Subjects

Eight adult patients with acute respiratory failure, admitted to the ICU of Saint Paul General Hospital, Thessaloniki, Greece, from April 2007 to December 2007, were enrolled in this study.

B. Protocol

In phase 1 of the study, all patients reached the criteria for disconnection from mechanical ventilation: vital capacity > 10 mL/kg, tidal volume > 4 mL/kg, minute ventilation < 10 L/min, respiratory rate < 38 breaths/min, dynamic compliance > 22 mL/cm H2O, static compliance > 33 mL/cm H2O, PaO2/PAO2 ratio > 0.35, peak negative pressure 20-30 cm H2O, dead space to tidal volume ratio < 0.6, frequency/tidal volume ratio < 105 breaths/min/L. In phase 2, the BISpectral index (BIS) was used in order to evaluate the depth of anaesthesia [8]. Afterwards, a neurologic deficit score was obtained from a specialist neurologist or the intensivist, and the BIS device was then disconnected in order to apply the INVOS system for oxygen saturation measurement, together with ECG and EEG. It has to be mentioned here that the INVOS® system provides a direct, non-invasive measurement of changes in regional brain blood oxygen status: it is a real-time guide to therapeutic intervention, measuring changes in oxygen levels beneath the sensors in order to protect the brain and vital organ areas from hypoxia. In the third phase of the protocol, the FiO2 was initialized at 40% (stage 1) for 10 min, and then increased to 100% (stage 2) for another 10 min. Afterwards we decreased the FiO2 to 60% for 10 min (stage 3) and finally returned to 40% again for 10 min (stage 4). In phase 4, a 2-min recording took place while the patient was in the process of disconnection from mechanical ventilation, while in phase 5 a 10-min recording took place with the patient on a T-piece with oxygen administration (stage 5).
C. Data Acquisition and Analysis

Multichannel EEG measurements took place during the aforementioned protocol. Nineteen scalp electrodes were placed according to the 10-20 International System, with the earlobes used as reference points [9]. The signals were digitized at a rate of 500 Hz and further filtered (band-pass filter at 0.5-40 Hz and notch filter at 50 Hz). Electrooculographic (EOG) recordings took place simultaneously with the EEG: two electrodes were placed above and below the left eye, while another two were placed at the outer canthi of the eyes. From these electrodes, two bipolar signals were obtained, namely the vertical EOG (VEOG), equal to the upper minus the lower electrode values, and the horizontal EOG (HEOG), equal to the left minus the right electrode values. Each EOG signal was further filtered with a band-pass filter at 0.5-5 Hz and with a notch filter at 50 Hz for power-line noise removal. In order to obtain more accurate results, EOG and EMG artifacts were rejected: for EOG artifact rejection, the REGICA technique [10] was applied to the filtered EEG data, while the EMG artifacts were rejected according to [11]. Afterwards, three independent EEG observers marked the segments contaminated by external noise, such as electrode movements or poor electrode contacts. The following linear formula was used in order to fill these corrupted segments:

$EEG(i,x) = EEG(i,a) \cdot \frac{b-x}{b-a} + EEG(i,b) \cdot \frac{x-a}{b-a}$

where $a$ and $b$ are the starting and the ending sample points of the corrupted segment, respectively, $x \in [a,b]$ denotes the sample point under estimation, and $EEG(i,\cdot)$ denotes the EEG waveform at the $i$-th electrode.
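As a concrete instance of this gap-filling step, the following minimal sketch (ours, not the authors' code; PHP is used for consistency with the client implementations shown elsewhere in this volume) applies the linear interpolation above to a marked segment of one channel.

<?php
// Fill the corrupted samples a < x < b of channel $i by linear
// interpolation between the last good sample EEG(i,a) and the first
// good sample EEG(i,b), exactly as in the formula above.
function fill_segment( array &$eeg, int $i, int $a, int $b ): void
{
    for ( $x = $a + 1; $x < $b; $x++ ) {
        $eeg[$i][$x] = $eeg[$i][$a] * ( $b - $x ) / ( $b - $a )
                     + $eeg[$i][$b] * ( $x - $a ) / ( $b - $a );
    }
}
?>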
D. Kurtosis

In probability theory and statistics, kurtosis (from the Greek word kurtos, meaning bulging) is a measure of the "peakedness" of the probability distribution of a real-valued random variable. Higher kurtosis means that more of the variance is due to infrequent extreme deviations, as opposed to frequent, modestly sized deviations. Data sets with high kurtosis tend to have a distinct peak near the mean, decline rather rapidly, and have heavy tails. Data sets with low kurtosis tend to have a flat top near the mean rather than a sharp peak; a uniform distribution would be the extreme case. Kurtosis was computed by the following formula:

$K = m_4 - 3m_2^2$

where $m_n$ is the $n$-th central moment:

$m_n = E\{ (x - \tilde{x})^n \}$

E. Shannon's Entropy

The Shannon entropy was first introduced by Shannon [12]. Entropy is a function of the probability distribution function (PDF) $p(x)$, and is sometimes written as

$H( p(x_1), p(x_2), \ldots, p(x_n) )$

It has to be noted that the entropy of $X$ does not depend on the actual values of $X$; it depends only on $p(x)$. The definition of Shannon's entropy can be written as the expectation value

$H(X) = -E[ \log_b p(X) ]$

The quantity $-\log_b p(x)$ is interpreted as the information content of the outcome $x \in X$, and is also called the Hartley information of $x$. Hence Shannon's entropy is the average amount of information contained in the random variable $X$; it is also the uncertainty removed once the actual outcome of $X$ is revealed. The Shannon entropy [13] is a standard measure of the order state of sequences. It quantifies the probability density function of the distribution of values. The probability density functions of awake electroencephalographic amplitude values are relatively constant between epochs [14].
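To make the two indices concrete, the following sketch computes kurtosis and Shannon's entropy for one one-minute epoch stored as an array of samples. It is our illustration rather than the authors' code: the histogram estimate of the amplitude distribution (and its bin count) is our assumption, since the paper does not specify how $p(x)$ was estimated, and $\tilde{x}$ is taken to be the sample mean.

<?php
// Central moment m_n = E{ (x - mean)^n } of one EEG epoch.
function central_moment( array $epoch, int $n ): float
{
    $mean = array_sum( $epoch ) / count( $epoch );
    $s = 0.0;
    foreach ( $epoch as $v ) {
        $s += pow( $v - $mean, $n );
    }
    return $s / count( $epoch );
}

// K = m4 - 3*m2^2, as defined in section II.D.
function kurtosis( array $epoch ): float
{
    return central_moment( $epoch, 4 ) - 3 * pow( central_moment( $epoch, 2 ), 2 );
}

// Shannon entropy H = -sum p*log2(p) of the amplitude distribution,
// estimated here from a histogram with $bins bins (our assumption).
function shannon_entropy( array $epoch, int $bins = 32 ): float
{
    $min   = min( $epoch );
    $width = ( max( $epoch ) - $min ) / $bins;
    if ( $width == 0.0 ) {
        return 0.0;  // a constant signal carries no uncertainty
    }
    $counts = array_fill( 0, $bins, 0 );
    foreach ( $epoch as $v ) {
        $counts[ min( $bins - 1, (int)( ( $v - $min ) / $width ) ) ]++;
    }
    $h = 0.0;
    foreach ( $counts as $c ) {
        if ( $c > 0 ) {
            $p = $c / count( $epoch );
            $h -= $p * log( $p, 2 );
        }
    }
    return $h;
}
?>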
III. RESULTS

The one-way ANOVA verified the assumption that there is a statistically significant difference between the various stages of our protocol for both entropy (Table 1) and kurtosis (Table 2). We can also see that the variables do not have extreme values and that their distributions are close to normal.

Table 1 Entropy mean values

Stage       Entropy (mean ± SD)
Stage 1     1.6240 ± 0.40483
Stage 2     1.7047 ± 0.32597
Stage 3     1.7842 ± 0.38586
Stage 4     1.7037 ± 0.31878
Stage 5     1.5687 ± 0.26850
Statistics  F(2.548) = 21.699, p < 0.000

Table 2 Kurtosis mean values

Stage       Kurtosis (mean ± SD)
Stage 1     0.6681 ± 2.08985
Stage 2     1.4914 ± 8.47761
Stage 3     0.3226 ± 0.64772
Stage 4     0.2154 ± 0.38658
Stage 5     0.2312 ± 0.39702
Statistics  F(102.087) = 6.305, p < 0.000

Looking at the figures, we can see that the aforementioned statistical differences are clearly depicted by the boxplots.

Fig. 1 The entropy mean values for the five stages of the experimental protocol. Post-hoc analysis results are depicted by asterisks: a double asterisk denotes a statistically significant difference at the 0.01 level, and a single asterisk a difference at the 0.05 level

Fig. 2 The kurtosis mean values for the five stages. Post-hoc results are depicted by the asterisks
IV. CONCLUSIONS AND DISCUSSION

Until recently, weaning has usually been conducted in an empirical manner, and in the international literature there is a significant lack of studies related to EEG recordings during the weaning procedure. Determining the optimal time at which to discontinue mechanical ventilation must not be based simply on clinical impression, because weaning depends on multiple factors: central drive and peripheral nerves; mechanical respiratory loads, ventilatory muscle properties and gas exchange properties; and cardiac tolerance and peripheral oxygen demands. Premature weaning places the patient at risk for reintubation and airway trauma, whereas delayed weaning exposes them to the risk of nosocomial infection and increases hospitalization costs. In our study, as can be seen from the results, the EEG can provide critical information for successful weaning. The results suggest that kurtosis and entropy show statistically significant differences among the different stages of our protocol. This enables us to assume that the EEG can take a crucial role during the procedure of weaning from mechanical ventilation. The post-hoc results for kurtosis did not reveal many differences among the various stages, whereas entropy's post-hoc tests indicate that entropy, in contrast to kurtosis, is modulated more strongly by the different percentages of FiO2; it can thus be assumed that Shannon's entropy should probably be preferred in such cases. It is clearly observable that the minimum value of entropy was obtained for the last stage (T-piece). This finding can be explained if one considers that entropy is an index which measures how chaotic a system is: small entropy values reveal that in the T-piece stage the subject's encephalic neurons were synchronized in order to start breathing on their own. Since entropy seems to be modulated by the mechanical ventilator's oxygen flow, it has to be further checked whether other entropies, such as approximate or permutation entropy, can be used as features in a classification scheme able to classify the mechanical ventilator's oxygen flow more accurately.
REFERENCES
1. Boles J-M, Bion J, Connors A, Herridge M, Marsh B, Melot C, Pearl R, Silverman H, Stanchina M, Vieillard-Baron A, Welte T. Weaning from mechanical ventilation. Eur Respir J 2007;29:1033-1056.
2. Vallverdu I, Calaf N, Subirana M, et al. Clinical characteristics, respiratory functional parameters, and outcome of a two-hour T-piece trial in patients weaning from mechanical ventilation. Am J Respir Crit Care Med 1998;158:1855-1862.
3. Cohen IL, Booth FV. Cost containment and mechanical ventilation in the United States. New Horiz 1994;2:283-290.
4. Esteban A, Frutos F, Tobin MJ, et al. A comparison of four methods of weaning patients from mechanical ventilation. N Engl J Med 1995;332:345-350.
5. Brochard L, Rauss A, Benito S, et al. Comparison of three methods of gradual withdrawal from ventilatory support during weaning from mechanical ventilation. Am J Respir Crit Care Med 1994;150:896-903.
6. Alía I, Esteban A. Weaning from mechanical ventilation. Critical Care 2000;4:72-80. doi:10.1186/cc660
7. Papadelis C, Maglaveras N, Kourtidou-Papadeli C, Bamidis P, Albani M, Chatzinikolaou K, Pappas K. Quantitative multichannel EEG measure predicting the optimal weaning from ventilator in ICU patients with acute respiratory failure. Clinical Neurophysiology 2006;117(4):752-770.
8. Gilbert TT, Wagner MR, Halukurike V, Paz HL, Garland A. Use of bispectral electroencephalogram monitoring to assess neurologic status in unsedated, critically ill patients. Crit Care Med 2001;29:1996-2000.
9. Klados MA, Frantzidis C, Vivas AB, et al. A Framework Combining Delta Event-Related Oscillations (EROs) and Synchronisation Effects (ERD/ERS) to Study Emotional Processing. Computational Intelligence and Neuroscience, vol. 2009, Article ID 549419, 16 pages, 2009. doi:10.1155/2009/549419
10. Klados MA, Papadelis CL, Bamidis PD. REG-ICA: A New Hybrid Method for EOG Artifact Rejection. IEEE ITAB 2009 (in press).
11. De Clercq W, Vergult A, Vanrumste B, Van Paesschen W, Van Huffel S. Canonical Correlation Analysis Applied to Remove Muscle Artifacts From the Electroencephalogram. IEEE Transactions on Biomedical Engineering 2006;53(12):2583-2587.
12. Shannon CE (1948) A mathematical theory of communication. Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, July and October, 1948.
13. Shannon CE, Weaver W (1949, 1998) The Mathematical Theory of Communication. Urbana and Chicago, University of Illinois Press.
14. Schmitt AO, Herzel H. Estimating the entropy of DNA sequences. J Theor Biol 1997;188:369-77.
Author: Perantoni Eleni Institute: Greek Aerospace Medical Association and Space Research, City: Thessaloniki Country: Greece Email: [email protected]
A European Biomedical Engineering Postgraduate Program – From Evaluation to Continuous Improvement V. Griva and N. Pallikarakis BIT Unit, Department of Medical Physics, Faculty of Medicine, University of Patras, Greece
Abstract— The field of Biomedical Engineering is currently undergoing a rapid evolution characterized by an increasing degree of specialization. This in turn imposes new requirements in the field of education, while the changing scene at European level introduces a major challenge for the harmonization and standardization of education with a focus on meeting the emerging needs. A multinational advanced postgraduate program on Biomedical Engineering has been in operation since 1989. The program addresses a multinational audience and draws expertise from a large multinational academic community. As such, it presents an enormous potential for the achievement of excellence through extensive cooperative efforts. An evaluation mechanism for the program's quality has been designed and implemented over the past sixteen years, aiming to create the necessary conditions that would allow the maximization of its potential and provide an appropriate framework for mutual recognition among the participating institutions. It was possible to set standards that comply with the goals specified in the Quality Policy, to prescribe and evaluate procedures against the requirements of the standards, to build a robust monitoring system, and to design the necessary corrective mechanisms. The approach led to a quality system encompassing all relevant aspects, which has matured and evolved into a comprehensive Quality Management System developed with the aid of an informatics tool and a suitable methodology. The system sufficiently meets the requirements of the International Standard ISO 9001:2008, focusing ultimately on the program's effective contribution to integration and harmonization across the European Higher Education Sector. Keywords— Biomedical Engineering, Postgraduate, Quality, Harmonization in higher education.
I. INTRODUCTION

A. Overview of the Course

The recent technological advancements have resulted in improved efficiency and efficacy of medical procedures and have largely affected not only medical practice, but also the overall management strategies in healthcare. Consequently, the associated field of Biomedical Engineering is undergoing a rapid evolution characterized by an increasing degree of specialization. This in turn imposes new challenges for
advanced education in this field, while the changing scene at European level dictates the need for harmonization and standardization of education, with a focus on meeting the emerging needs for appropriately trained young engineers and physicists within this new landscape. For the past 21 years, the European Union has supported an initiative for the development of a multinational advanced Program on Biomedical Engineering (BME), within the ERASMUS and subsequently also within the TEMPUS programs. The Postgraduate Program on Biomedical Engineering is organized at the Faculty of Medicine of the University of Patras, in collaboration with the Faculty of Mechanical Engineering and the Faculty of Electrical and Computer Engineering of the National Technical University of Athens and twenty-five other European universities from fifteen countries [1]. The syllabus, based on the TEMPERE Project recommendations [2], covers the following four subject areas:
• Basic Knowledge & Skills (basic medical & physical sciences in medicine, transferable skills, research methods): 30%
• Conversion courses (0-2, taken from Mechanics and Materials, Electronics, Digital Signal Processing): 0-10%
• Basic Biomedical Engineering Topics: 40-50%
• Advanced Biomedical Engineering Topics: 20%
Students may register at any of the collaborating institutions and follow the whole or part of the Program according to their needs. Additionally, students registered at the University of Patras may carry out their MSc thesis at any collaborating university; the European Credit Transfer System is then used for the completion of their studies and the degree award. This BME Program has succeeded in enrolling more than 500 students to date.

B. Description of the BME Program

There are two categories of students that can be admitted to the Program: students wishing to complete full-time postgraduate studies and obtain an MSc degree in Greece, and students wishing to attend all or part of the Program in the framework of the LLP/Erasmus Program and transfer
their grades and corresponding credits back to their home University. The duration of studies for this Master of Science (MSc) is three to four (3-4) academic semesters. The first and second semesters include lectures, laboratory sessions and written exams. The following one to two semesters are mainly dedicated to the elaboration and the presentation of the MSc thesis. Students are given the option to work on their Thesis in one of the Faculties co-organizing the BME Program or in one of the Collaborating Universities in Europe. After having obtained their MSc, selected students may pursue a PhD in the field.
C. Program Structure

The first year of the Program is divided into two parts. The first semester, from October to January, contains general and medical subjects such as Human Anatomy and Physiology, Biochemistry, Research Methodology, Medical Electronics, and Physics in Medicine. The second semester is dedicated to the core BME subjects, such as Biomechanics, Biomaterials, Medical Imaging, Biomedical Signal Processing, Biosensors, Biomedical Instrumentation, Health Care Telematics, Pattern Recognition, and Clinical Engineering. All lectures are given at the University of Patras in English, by teaching staff from the many collaborating universities all over Europe; the program thus takes advantage of a unique pool of expertise available at the participating universities.

D. Quality Assurance

A Quality Assurance system has been designed and implemented over the past sixteen years, aiming to create the necessary conditions that would permit the maximization of the program's potential and provide an appropriate framework for mutual recognition amongst the participating institutions. This activity has led to the conclusion that an advanced Program on Biomedical Engineering should be regarded as an integral part of the overall process of preparing professionals for the job market. It was therefore necessary to consider all associated issues within a global framework, including prior education, field-specific education and training, as well as continuous professional development. It has also been shown that the concept of a European Program is viable and can contribute effectively to integration and harmonisation in the European Union, and that there is a pressing need to develop the appropriate framework to facilitate student mobility and mutual professional recognition. This should be implemented in accordance with EU harmonisation policies [3].

II. APPROACH

A. The Quality Assurance System

The current Quality Assurance system was based on the generic Quality Model displayed in Figure 1 [3, 4].

Fig. 1 The Generic Quality Model: a feedback loop linking Objectives, Criteria, Monitoring and Data Collection, Evaluation, and Feedback

Specifically, quality objectives relate directly to the key competitive areas, which concern the potential for a faculty with a high degree of specialization in the respective fields; a dynamic, continuously updated syllabus that keeps pace with the most recent developments in the fields addressed by its topics; and a background setting for students that will act as a catalyst in the process of integrating multinational student groups into a harmonized, truly European student community. Fulfillment of the above goals would be ensured through the concurrent development of all the supporting functions and infrastructure. An analytical approach was followed, aiming at specifying the key parameters that affect each process and sub-process, in order to lead up to the specification of the various procedures and of the monitoring, evaluation and feedback mechanisms. The approach involved the identification of the quality parameters, which comprise what the "customer" sees or expects from the course. It was recognized here that the recipients of services in this course are not only the attending students but also the referring universities, which would subsequently be called upon to provide credits/recognition to their students. These quality parameters include measurable quality indices, which determine directly or indirectly the quality outcomes. On the basis of such an analysis, it was possible to subsequently set standards that would comply with the goals specified in the Quality Policy, prescribe and evaluate procedures against the requirements for the satisfaction of standards, build a robust monitoring system and design the necessary corrective mechanisms.
B. Quality Management System

The QS-PRO (Quality System Processes) tool is a generic, broad-range informatics tool that can provide essential assistance in the development of Quality Systems. Based on the generic model described in Figure 1, it has been used to set up the process tree and to specify variables and their interrelationships. The tool allows the propagation of various quality-related effects through the process structure in a bottom-up fashion, by repeatedly linking quality indices to outcomes until arriving at the final outcome at the top. By means of an interactive browser, it is employed to simulate the running processes and to test and evaluate them through cause-effect interaction on intermediate or final outcomes. In view of these considerations, the tool was used for mapping the existing BME Program functions and services in order to meet the ISO 9001:2008 requirements [4]. More specifically, the available routines facilitated the process breakdown and the interconnection of sub-processes and procedures, in terms of their effect on the rest of the process tree elements. At the same time, feedback routes were identified and built into this BME case study, resulting in a complete process model that can be subjected to evaluation mechanisms. Moreover, the system allowed the creation of an 'electronic' quality manual; this documentation describes all processes at the appropriate level of detail, including purpose, scope, responsibilities and authorities, a complete description of the methods used, and the related forms and files kept. The BME process model is shown in Figure 2, where the main quality elements are displayed together with the corresponding documentation.

Fig. 2 The Process Model of the BME program: customer needs, quality policy, process design (training, organizational structure and responsibilities), process implementation, internal audits/measurements, management review, and measurement of customer satisfaction, linked by corrective actions, with the corresponding Quality Manual procedures (P-01-01 Control of documents and data; P-01-02 Control of records; P-01-03 Review of the Quality System; P-02-01 Human Resources; P-02-02 Infrastructure; P-03-01 Evaluation of suppliers/products/materials/services; P-03-02 Purchasing; P-04-01 Planning and implementation of the course; P-04-02 Communication with customers; P-04-03 Design & development of new services and products; P-05-01 Receiving and Managing Customer Complaints; P-05-02 Measurement of Customer satisfaction; P-05-03 Internal Audit; P-05-04 Monitoring, Measurement & Process analysis; P-05-05 Non-conforming product & Corrective actions; P-05-06 Preventive actions)

III. RESULTS

A total of thirteen (13) quality outcomes were identified, relating directly to five main course components, namely Teachers, Students, Syllabus, Infrastructure and Organisation. Quality Assurance and continuous Quality Improvement have been based on the measurement of these outcomes and are expected to result from the concurrent fulfillment of the goals set for each of these components, through paying close attention to predefined criteria, clear definition of procedures, and evaluation and adjustment in a continuous feedback process. These quality outcomes were referenced to a total of 48 measurable quality indices. This process standard has formed the backbone for quality development in the course and has also provided the basis for the course assessment during the past 16 years.

The implementation of the Quality Assurance system led to the evaluation and assessment of all these quality variables over the past sixteen (16) years. The collected dataset offers a valuable source for further exploitation by the statistical quality control tools used for the purposes of Quality Improvement, such as the frequency distribution of teachers' ratings of the quality parameter Stay shown in Figure 3. As can be seen in Figure 3, this is a close-to-normal distribution with most ratings above 3.5, showing that the services offered by the course coordination meet teachers' expectations.

Fig. 3 Frequency distribution (frequency of occurrence over the grade scale) of teachers' ratings of a quality parameter (stay) used in assessing the coordination services of the course

The Run Chart in Figure 4 shows the temporal behavior of one of the content quality indices (usefulness) concerning a particular course component (syllabus) as evaluated by the
students. From the ensuing analysis, all data points seem to be equally distributed above and below the median, without any shifts or trends, indicating that the variation in this assessment is stable [5]. Moreover, the students' rating of the syllabus' usefulness appears to increase, since most recent data points take higher values, showing that students' expectations are generally fulfilled and that only minor improvements are needed to exceed these expectations.
Fig. 4 Run Chart of one of the content quality indices (usefulness) concerning a particular course component (syllabus) as assessed by the students, over the academic years 1993-94 to 2008-09 (median 3.99)
Figure 5 presents the Control Chart of the average values of quality parameters such as topic, teacher, teaching methodology, and examination paper, as assessed by the students for a specific subject from the Basic Knowledge & Skills subject area. As shown in Figure 5, the educational procedure for this subject seems to be under control, with the exception of the academic year 1993-1994 [5]. Taking into consideration the changes in the syllabus and faculty during the past sixteen (16) years for this particular subject, three different lecturers have been used, one of whom provided lectures only during the first academic year. Generally, process variability is the end result of multiple factors, such as measurement, method, technology, the resources used, means and environment, which are interrelated in a complex way. Therefore, it would be inappropriate to account for this change as the main special cause of the subject's process variability.

Fig. 5 Control Chart of the average values of the educational process for the subject BMPCS-9 (individuals chart: UCL = 4.9946, CEN = 3.7826, LCL = 2.5707; moving-range chart: UCL = 1.4885, CEN = 0.4556, LCL = 0)
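The limits shown in Figure 5 are consistent with the standard individuals/moving-range (X-mR) chart construction from statistical quality control (cf. [5]). The sketch below is our illustration of that computation, not code from the system described here; variable names are hypothetical, and PHP is used for consistency with the implementations presented elsewhere in this volume.

<?php
// Individuals / moving-range (X-mR) control limits, as in Figure 5.
// 2.66 and 3.267 are the standard chart constants for subgroups of size 2.
function xmr_limits( array $yearlyAverages ): array
{
    $n    = count( $yearlyAverages );
    $mean = array_sum( $yearlyAverages ) / $n;

    // Moving ranges: absolute differences between consecutive yearly values.
    $mr = array();
    for ( $i = 1; $i < $n; $i++ ) {
        $mr[] = abs( $yearlyAverages[$i] - $yearlyAverages[$i - 1] );
    }
    $mrBar = array_sum( $mr ) / count( $mr );

    return array(
        'individuals' => array( 'CEN' => $mean,
                                'UCL' => $mean + 2.66 * $mrBar,
                                'LCL' => $mean - 2.66 * $mrBar ),
        'movingRange' => array( 'CEN' => $mrBar,
                                'UCL' => 3.267 * $mrBar,
                                'LCL' => 0.0 ),
    );
}
?>

With the sixteen yearly averages of subject BMPCS-9 as input, this construction reproduces the plotted limits (e.g. CEN = 3.7826 and UCL = 4.9946 for the individuals chart).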
IV. CONCLUSIONS

The approach has led to a fully comprehensive quality system encompassing all relevant aspects. The initial implementation of this system indicated a need for addressing certain aspects at a higher level of detail and led to a reassessment of the process standard. The system has proven effective in practice and efficient in providing appropriate tools for assessment and decision support regarding improvements. It is expected that the ISO 9001:2008-based quality system will lead to certification of the BME Program, while at the same time it can also prove valuable in assisting in the establishment of mutual credit recognition arrangements amongst the participating universities.
REFERENCES
1. European Postgraduate Course on Biomedical Engineering, http://bme.med.upatras.gr/ (last accessed 15-3-2010).
2. Curriculum for BME - Towards a European Framework for Education and Training in Medical Physics and Biomedical Engineering, Studies in Health Technology and Informatics, Vol. 82, IOS Press-OHMSA, 2001, pp. 101-113, Z. Kolitsi (Ed.).
3. European Association for Quality Assurance in Higher Education (2005) Standards and Guidelines for Quality Assurance in the European Higher Education Area, ENQA, Helsinki.
4. ISO 9001:2008 Quality management systems - Requirements, International Standardisation Organisation, Geneva.
5. Reid R. D. and Sanders R. N. (2007) Operations Management: An Integrated Approach, Chapter 6: Statistical Quality Control, Wiley and Sons, USA.
SOAP/WSDL-Based Web Services for Biomedicine: Demonstrating the Technique with the CancerResource T. Meinel1,2, M.S. Mueller1, J. Ahmed1, R. Yildiriman2, M. Dunkel1, R. Herwig2, and R. Preissner1
1 Charité - University Medicine Berlin/Institute for Physiology/Structural Bioinformatics Group, Berlin, Germany; 2 Max Planck Institute for Molecular Genetics/Vertebrate Genomics Department/Bioinformatics Group, Berlin, Germany
Abstract— Web services provide programmatic access to data or tools using internet technology. Several standards have been developed; one sophisticated application is the combination of the Web Service Description Language (WSDL) with the SOAP (originally defined as Simple Object Access Protocol) messaging protocol. We describe the fundamental technology of SOAP/WSDL-based web services and provide concepts for integrating data from independent, distant resources. Several informatics layers are described, such as server-client messaging connections, programming languages, libraries, and the modularity of web services. We illustrate the principles underlying this technology, and the types of web services they can be applied to, in several functional stages: from simple data retrievals, over combined data accesses (workflows), up to dynamically rendered images. As an example that is relevant for biomedicine, we highlight the CancerResource as a use case for such diversified applications. The CancerResource is conceptualized to present cancer-relevant drug-target connections to the life sciences, particularly to the field of medical science. In-depth literature data-mining resolved thousands of drug-target connections. CancerResource connects manifold information from different knowledge categories. Targets are genes or proteins, which are well described in public databases like UniProt, Ensembl, and the PDB. Drugs are chemical compounds that specifically act on target genes in cancer tissues; they are collected in databases like DrugBank, SuperDrug, or PubChem. Connectivity Map (C-Map) expression data is an essential source of functional information for CancerResource. A general visualization feature is the organization of genes in pathways; the KEGG database provides cancer-specific pathway maps. For this purpose, CancerResource utilizes web service technologies to dynamically combine C-Map expression data with pathways. The CancerResource web interface allows the user to specify problems and helps to understand noticeable behaviors of genes that are implicated in cancer. The ultimate aim of CancerResource is to provide suggestions towards developing specific drug therapies for cancer patients. Keywords— SOAP/WSDL, web service, interoperability, cancer medicine, drug treatment.
I. WEB SERVICES IN BIOMEDICINE

The increasing impact of web services is one response of technology to the growing complexity of data, knowledge, and information in biomedicine. As recently reviewed [1], the acceptance and employment of this technology require a broad understanding of the basic components of web services. The overall goal of web services is to allow programmatic access to databases and tools that hold or operate on data. The advantages are that one's own data repositories can be used economically, data duplication is avoided, and the problem of updating foreign data is out-sourced to the originators of the data. Such basic ideas imply interoperability for messages between computers, and several initiatives have been established that define and maintain message protocols. The data exchange is enabled by the exhaustive utilization of the internet. Here, the SOAP messaging protocol is based on the eXtensible Markup Language (XML) and is therefore a pervasive standard, independent of the type of computer or programming language used. The organization of data and the separation from message transport standards are clear advantages of SOAP. Biomedicine shares a high amount of biological data. Apart from personalized data, which form an important part of biomedicine and require separate solutions for programmatic access, many general issues in bioinformatics or systems biology are oriented toward medical aspects. Such data are molecular items like genes or compounds, organization features like pathway maps or networks, or functional traits of molecular items as comprised in proteomics or transcriptomics. If data are already publicly available, the preferred method of retrieving them is by programmatic access. The question of discovering web services and obtaining information about them is addressed at three levels. First, several institutions like the EBI, NCBI, or DDBJ provide large collections of web services and tools. Second, sophisticated methods that utilize semantics to detect appropriate methods for specific problems are under development. Finally, and particularly suitable for single data repositories, initiatives have been founded that invite data providers to announce their web services in registries like the BioCatalogue [2], a successor of the EmbraceRegistry [3,4]. Here, the obligatory deposition of test clients gives programmers detailed insights into the methods and construction of particular web services.
II. TECHNOLOGY

Biological data are often structured into scaled hierarchy levels and, at the same time, possess key-value character. WSDL was developed to cope with this complexity. Moreover, the physical connection to a computer's IP address is included, as well as the binding to the call of a specific operation; web services often comprise large collections of operations (synonymously, methods). Characteristics in this context are the granularity, complexity, and modularity of web services. WSDL documents unify all these aspects in one description file, which can be accessed by a specific URL (cf. [5,9,14] for the use case in section III). WSDL documents are organized and written in XML.

The building of web services requires standardization on the client side as well as on the server side. Specific libraries or modules have been developed for most programming languages that are commonly used in bioinformatics. Lightweight packages exist for Perl and Python (SOAP::Lite; SOAPpy), as well as modules that handle nested data structures (XML::Compile; ZSI). The PEAR extension and application repository [6] is a framework for reusable PHP components. PHP was used to implement the tool described in the use case (cf. section III); a corresponding generic example of a PHP/PEAR client is presented in Figure 1, where the setting of GenomeMatrix methods, the request parameters therein, and the PHP/PEAR/SOAP object parameters are given in detail. The timeout setting allows an adjustment for the time delays that arise from time- and CPU-consuming web service procedures, namely XML messaging, conversion of data formats, and server-side database operations. A comparable Perl lightweight client is demonstrated in [1]. Both servers and clients translate the messages between XML and the programming language and vice versa; the respective programming language modules are therefore constructed to cover both functions in one package.

<?php
function f_01( $species ) {
    // NOTE: the opening of this function was lost in the two-column layout
    // of the printed figure; the set-up below is reconstructed by analogy
    // with f_02, and the method name is an illustrative placeholder (the
    // real name of the first workflow method appears only in Figure 2.A).
    set_include_path( '/usr/share/php5/PEAR' ) ;
    require_once "SOAP/Client.php" ;
    $wsdl_url = "http://genomematrix.molgen.mpg.de/cgi-bin/ws/esoaposti/wsdl" ;
    $soapClient = new SOAP_Client( $wsdl_url ) ;
    $method = "GetDatasets" ;   // placeholder, see note above
    $param = array( 'species' => $species ) ;
    $timeout = array( 'timeout' => 100 ) ;
    $result = $soapClient->call( $method , $param , $timeout ) ;

    if ( PEAR::isError( $result ) ) {
        echo "message: '" . $result->message . "'\n" ;
    } else {
        if ( get_class( $result ) == "stdClass" ) {
            return get_object_vars( $result ) ;
        } elseif ( sizeof( $result ) > 0 ) {
            return $result ;
        }
    }
}

function f_02( $species, $dataset_array, $ensembl_ID_array ) {
    set_include_path( '/usr/share/php5/PEAR' ) ;
    require_once "SOAP/Client.php" ;
    $wsdl_url = "http://genomematrix.molgen.mpg.de/cgi-bin/ws/esoaposti/wsdl" ;
    $soapClient = new SOAP_Client( $wsdl_url ) ;
    $method = "GetEnsIdsByExpression" ;
    // the printed figure uses $categoryArray / $elementArray here, which
    // presumably correspond to the $dataset_array / $ensembl_ID_array arguments
    $param = array( 'species' => $species ,
                    'filter_val_lower' => 1.3333 ,
                    'filter_val_upper' => 100 ,
                    'categories' => $dataset_array ,
                    'idlist' => $ensembl_ID_array ) ;
    $timeout = array( 'timeout' => 1000 ) ;
    $result = $soapClient->call( $method , $param , $timeout ) ;

    if ( PEAR::isError( $result ) ) {
        echo "message: '" . $result->message . "'\n" ;
    } else {
        if ( get_class( $result ) == "stdClass" ) {
            return get_object_vars( $result ) ;
        } elseif ( sizeof( $result ) > 0 ) {
            return $result ;
        }
    }
}
?>

Fig. 1 Generic examples of PHP/PEAR client functions to access the GenomeMatrix web service [13]. The two web service methods are subsequently combined in a workflow - the respective PHP code is not shown - and correspond to the methods of the workflow presented in Figure 2.A

III. USE CASE: CANCERRESOURCE

The CancerResource is intended to provide information about drug-target connections; detailed information about the respective drugs and target genes is accessible via the web interface [7]. Knowledge that is obtained by literature text mining or structure modeling requires support from foreign data. For this purpose, general databases are integrated within CancerResource. However, specific information that consists of pre-calculated data or requires specialized tools, such as expression data or image generators, can or must remain on the original repositories. Such data can be utilized if accessibility is enabled by web services. To provide a basic understanding of web service functionality - and of the organization of web services in workflows - we present CancerResource as a use case for web service implementations. Several grades of web service complexity are integrated in CancerResource: from a simple database access to retrieve the protein family association of a target protein (SYSTERS - large-scale protein sequence clustering into protein families across the entire organism space [8,9]; method getClustersByArrayofAccnrs, not shown), over an integrated workflow of two web service methods (two database requests on GenomeMatrix [10], Figure 2.A), to the access of a dynamically operating tool located at the Kyoto Encyclopedia of Genes and Genomes, KEGG [11].
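The simplest of these grades - the SYSTERS protein-family lookup - would look roughly as follows in the client style of Figure 1. This is our sketch: the method name is taken from the text above, while the parameter structure and the example accession are assumptions.

<?php
// Sketch of the simple SYSTERS lookup mentioned above (method name from
// the text; the parameter name 'accnrs' is our assumption).
require_once "SOAP/Client.php" ;

$soapClient = new SOAP_Client( "http://systerstest.molgen.mpg.de/WSDL/systers.wsdl" ) ; // ref. [9]

$result = $soapClient->call( 'getClustersByArrayofAccnrs' ,
                             array( 'accnrs' => array( 'P04637' ) ) ,  // e.g. a target protein accession
                             array( 'timeout' => 100 ) ) ;
?>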
Fig. 2 Workflow and web tool output for expression data in CancerResource. A. Workflow of two GenomeMatrix web service accesses to retrieve C-Map expression data. B. CancerResource web visualization of differentially expressed genes as an array of colored boxes associated with each of the 36 cancer-relevant KEGG pathways. A particular color array of the KEGG pathway 'Cell Cycle' (gene information is available on mouse-over of the boxes) is selected for demonstration. The out-link 'show pathway map' initializes the KEGG web service access (C; workflow) to retrieve a URL for accessing the pathway map image. D. Detail of the dynamically rendered KEGG pathway map hsa04110, 'Cell Cycle', with differentially expressed genes indicated by colored borders around the gene icons in the map (cf. text). Border colors in D correspond to box colors in B
Expression data provide important functional information to help understand cancer. The C-Map [12] describes the transcriptome-wide influence of about two hundred compounds on target genes. Our example in Figure 2 schematically shows the procedures used to retrieve and visualize C-Map data. CancerResource exploits the GenomeMatrix repository, which is intended to open perspectives on entire genomes. Among the huge number of expression data sets in this repository, C-Map expression data are pre-calculated for each human gene from Affymetrix raw data [GEO series GSE5258]. Statistical estimations are available, as well as ratios and log2-ratios of drug influence relative to control experiments (pooled if replicate controls exist). Figure 2.A presents the integrated workflow and describes the names and functionality of the two applied GenomeMatrix web service methods. Out of all of the expression data sets, the CancerResource web tool filters out the C-Map data sets. The web user is then given the option to select a cancer cell line and a compound to display an overview of the effect of the compound treatment on genes. After the second web service operation retrieves differential expression information for all target genes in CancerResource, visualization of the results is initiated by generating an array of colored boxes (Figure 2.B). Each box indicates higher (red) or lower (green) expression of a gene with regard to control tissues. A non-significant ratio interval is indicated by black boxes and is arbitrarily set to 3/4 < ratio < 4/3 by default. This color scheme corresponds to those used by whole-genome surveys like the GenomeMatrix web tool [13] or in clusterings of microarray data sets. The KEGG database for genes and pathways offers web service methods to retrieve particular data from the repository or to manipulate data. An interesting feature is the possibility to introduce colors into KEGG standard pathway maps with the KEGG web service method color_pathway_by_objects [14]. The objective of CancerResource at this point is to mark genes in a pathway according to the differential expression obtained from the C-Map request results. The presented KEGG method, shown in Figure 2.C, requests arrays of both genes and colors and dynamically generates a new map image which includes the posted items; as the web service response, a URL to the image is sent back to the requesting computer. CancerResource integrates this image (example in detail: Figure 2.D) into a complex display of drug-target connections. Colors (red, green, or grey gene-icon border colors; up, down, non-significant) indicate differential gene expression, corresponding to the web display colors of Figure 2.B. Both of the source repositories, KEGG and GenomeMatrix, provide helpful tutorials and descriptions of web service methods on their web pages or in the EmbraceRegistry - features that are strongly recommended by the web service initiatives. Implemented web service methods can be retraced and inspected in detail there.
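As an illustration of the call just described, the sketch below follows the PHP/PEAR client pattern of Figure 1. It is our reconstruction, not code from CancerResource: the parameter names follow the (since retired) KEGG SOAP API, and the pathway, gene identifiers and colors are merely illustrative; treat all of them as assumptions.

<?php
// Sketch of a color_pathway_by_objects request (illustrative values).
require_once "SOAP/Client.php" ;

$soapClient = new SOAP_Client( "http://soap.genome.jp/KEGG.wsdl" ) ;  // WSDL from ref. [14]

$param = array(
    'pathway_id'     => 'path:hsa04110' ,                 // KEGG 'Cell Cycle' map
    'object_id_list' => array( 'hsa:595', 'hsa:1017' ) ,  // e.g. one up-, one down-regulated gene
    'fg_color_list'  => array( 'red', 'green' ) ,         // border colors as in Fig. 2.D
    'bg_color_list'  => array( 'white', 'white' )
) ;

// KEGG renders a new map image and returns its URL, which the
// requesting tool can then embed in its own web display.
$url = $soapClient->call( 'color_pathway_by_objects', $param, array( 'timeout' => 100 ) ) ;
?>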
IV. CONCLUSION

Web services based on SOAP/WSDL technology are a standard application in the life sciences. Their introduction into biomedicine is a helpful alternative to local data repositories for publicly available data; with the help of CancerResource we have elucidated the respective client applications. To provide a generic example, we explained general web service functionality as well as particular details of the accessed data repositories. The implementation and function of the presented web service examples can be followed by visiting CancerResource [7].
REFERENCES
1. Meinel T, Herwig R. (2010) SOAP/WSDL-based web services for biomedicine. In: A Lazakidou (Ed.) Web-Based Applications in Healthcare and Biomedicine, chapter 7, pp 101-116. Springer, New York
2. BioCatalogue at http://www.biocatalogue.org
3. Pettifer S, Thorne D, McDermott P et al. (2009) An active registry for bioinformatics web services. Bioinformatics 25:2090-2091 DOI 10.1093/bioinformatics/btp329
4. EmbraceRegistry at http://www.embraceregistry.net
5. GenomeMatrix WSDL file at http://genomematrix.molgen.mpg.de/cgi-bin/ws/esoaposti/wsdl
6. PEAR - PHP Extension at http://pear.php.net
7. CancerResource at http://bioinformatics.charite.de/care/
8. Meinel T, Krause A, Luz H et al. (2005) The SYSTERS Protein Family Database in 2005. Nucleic Acids Res 33(Database issue):D226-D229 DOI 10.1093/nar/gki030
9. SYSTERS WSDL file at http://systerstest.molgen.mpg.de/WSDL/systers.wsdl
10. Hewelt A, Ben Kahla A, Hennig S et al. (2002) The GenomeMatrix information retrieval system. HGM2002 Poster Abstracts, Human Genome Meeting, Shanghai, China, abstract 23
11. Kanehisa M, Goto S, Hattori M et al. (2006) From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res 34(Database issue):354-357 DOI 10.1093/nar/gkj102
12. Lamb J, Crawford ED, Peck D et al. (2006) The Connectivity Map: using gene-expression signatures to connect small molecules, genes, and disease. Science 313:1929-1935 DOI 10.1126/science.1132939
13. GenomeMatrix at http://genomematrix.molgen.mpg.de
14. KEGG WSDL file at http://soap.genome.jp/KEGG.wsdl
Author: Thomas Meinel Institute: Charité - University Medicine Berlin/Institute for Physiology/Structural Bioinformatics Group Street: Arnimallee 22 City: 14195 Berlin Country: Germany Email: [email protected]
A Web-Based Application for the Evaluation of the Healthcare Management M. Bava1,2, D. Zotti1,2, R. Zangrando1, and M. Delendi1
1 IRCCS "Burlo Garofolo" Institute for Maternal and Child Health, Trieste, Italy; 2 Department of Electronics and Computer Science, Faculty of Engineering, University of Trieste, Trieste, Italy
Abstract— According to the Italian law which regulates executive healthcare contracts, the professional evaluation is mandatory. The goal of the periodic evaluation is to enhance and motivate the professionals involved. In addition, this process should 1. increase the sense of duty towards the patients, 2. help the professional become aware of his/her own professional growth and aspirations, and 3. enhance the awareness of the healthcare executive regarding the company's strategies. To satisfy these requirements, a model divided into two sections has been created for every evaluated subject. In the first part, the chief executive officer (CEO) scores: 1. behavioral characteristics, 2. multidisciplinary collaboration and involvement, 3. organizational skills, 4. professional quality and training, 5. relationships with the citizens. The scores for these fields are decided by the CEO. In the second part the CEO evaluates: 1. quantitative job dimension, 2. technology innovation, 3. scientific and educational activities. The value scores of these fields are decided by the CEO together with the professional under evaluation. A previously established correction coefficient can be used for all the scores. This evaluation system model has been constructed according to quality enhancement approaches (Deming cycle), and web-based software has been developed on a Linux platform using LAMP technology and PHP programming techniques. The program replicates the whole evaluation process, creating different authentication and authorization profiles which give the evaluator the possibility to draw up lists of the professionals to evaluate, to upload documents regarding their activities and goals, to receive individual documents in automatically generated folders, to change the correction coefficients, and to obtain the individual scores year by year. The advantages of using this web-based software include easy data consultation and updating, the implementation of IT security measures, and the easy portability and scalability of the system in different contexts, even beyond healthcare. Keywords— Healthcare executive evaluation, LAMP technology, PHP programming.
duty towards the patients, to become aware of one’s own professional growth and aspirations and to raise the awareness of the healthcare executive regarding the company’s strategies. To satisfy these requirements a theoretical evaluation model together with a “how to” guide was created; this article shows how this guide has been implemented in a computer program by an internal team of the “IT Office” of IRCCS “Burlo Garofolo” pediatric Hospital in collaboration with the University of Trieste. The guide should be free from bureaucratic aspects and be of great assistance during the evaluation period; furthermore it should be used differently on each professional according to on his/her own business role. The guide is divided in two main sections: the first part is “not negotiable”, while the second is “negotiable” with the managers and regards quantitative aspects. Both the negotiable and the not negotiable sections are divided in different subsections. Sub-sections are composed of fields, and to each field is associated a score, while to every sub-section is associated a coefficient that is fixed in the “not negotiable” part and variable in the “negotiable” part. This kind of model was refined following quality systems as JCI (Joint Commission International) and ISO (International Organization for Standardization), developing the Plan Do Check Act model (Deming cycle) to encourage a continuous professional and cultural growth.
II. DESIGN AND DEVELOPMENT

The program is web-based software developed on the LAMP (Linux, Apache, MySQL, PHP) system platform; it implements the whole evaluation path, providing user authentication with different authorization profiles according to individual roles. In particular, users are divided into four categories: the evaluator (typically the CEO), who evaluates all managers and inserts almost all documents; the evaluated manager, who can insert his/her documents and view only his/her own evaluation reports; the administrative staff, who introduce only documents regarding the hospital and cannot view manager/evaluator reports; and finally the administrator, who
has the possibility to operate directly on the software for technical problems and issues. There are many advantages to using this web application: immediate availability of tools and information; a direct, unavoidable exchange between managers and the evaluator; easy consultation and updating of data; easy implementation of security and access control; and, finally, great portability of the product, which can easily be applied in contexts other than public and private healthcare.
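The four profiles can be summarized as a simple capability check, as in the sketch below; this is our illustration with hypothetical action names, not the application's actual access-control code.

<?php
// Capability table for the four user categories described above
// (action names are illustrative).
const PERMISSIONS = array(
    'evaluator' => array( 'list_managers', 'upload_documents',
                          'edit_coefficients', 'view_all_reports' ),
    'manager'   => array( 'upload_own_documents', 'view_own_report' ),
    'staff'     => array( 'upload_hospital_documents' ),
    'admin'     => array( 'maintain_system' ),
);

function can( string $role, string $action ): bool
{
    return in_array( $action, PERMISSIONS[ $role ] ?? array(), true );
}

// e.g. can( 'manager', 'view_all_reports' ) === false
?>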
III. THE PROGRAM STRUCTURE
Fig. 1 Manager home with documents regarding Hospital activity
Fig. 2 Evaluator home with documents regarding Hospital activity
Fig. 3 Staff home with documents regarding Hospital activity
The program presents a personal data sheet, an evaluation report sheet which helps to determine the manager's final score, and a training sheet which follows the manager's activities in parallel with the evaluation report. The main sheet is the evaluation report, whose first part is composed of links to user documents: the "job description" link, which describes the work carried out by the manager (editable by the evaluator and read-only for the manager); the "manager target" link (editable by the evaluator and read-only for the manager); and the "job assignment" link, which describes job motivations, structures, targets, and score/result indicators. The evaluation report sheet is further divided into two parts, in which the whole evaluation is reported by attributing a variable mark from 0 to 3. The first part is "not negotiable", while the second is "negotiable" with the manager as regards quantitative aspects. The "not negotiable" section is divided into five subsections: 1. Characteristic behaviors, 2. Collaboration and interdisciplinary participation, 3. Organization ability, 4. Quality and vocational training, 5. Relationship with the citizens. The "negotiable" section is divided into three subsections: 1. Work quantitative dimension, 2. Technological innovation, 3. Scientific and teaching activities. In the last part of the evaluation report sheet there is a summary table concerning the performances achieved; the average scores of the fields in each subsection, multiplied by the subsection correction coefficients and summed together, give the final score.
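As a minimal sketch of this computation in PHP (subsection names, marks, coefficients and the threshold below are invented for illustration; the actual values are defined by the evaluation model):

<?php
// Final score = sum over subsections of
//   (average of the field marks, each 0-3) * subsection correction coefficient.
$subsections = [
    'behavioral'   => ['coeff' => 1.5, 'marks' => [2, 3, 2]],
    'organization' => ['coeff' => 1.0, 'marks' => [3, 2]],
    'quantity'     => ['coeff' => 2.0, 'marks' => [1, 2, 2, 3]], // negotiable part
];

$final = 0.0;
foreach ($subsections as $s) {
    $avg    = array_sum($s['marks']) / count($s['marks']);
    $final += $avg * $s['coeff'];
}

$threshold = 9.0;                       // hypothetical threshold value
$positive  = ($final >= $threshold);
printf("Final score: %.2f (positive: %s)\n", $final, $positive ? 'yes' : 'no');
?>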
Fig. 4 Evaluation report page visualized by the Manager
Fig. 6 Training report sheet visualized by the Evaluator
Fig. 5 Evaluation report page visualized by the Evaluator
In order to obtain a positive evaluation, the final score must be greater than or equal to the threshold value. There is a tabulated appointment calendar which permits planning a series of meetings for future improvements. Besides the evaluation report sheet there is a training sheet which tracks individual abilities. It is the most complex sheet to implement, for at least two reasons: the reading/writing permissions on this report are flexible (that is, both manager and evaluator can read/write on it, in different positions), and it is composed of many fields with different roles, facilitating the addition of all types of manager competences.
Fig. 7 Training report sheet visualized by the Manager
IV. IMPLEMENTATION
The software was fully created in a Linux environment using the LAMP (Linux, Apache, MySQL, PHP) Open-Source platform for the development and implementation of web applications. For data storage a MySQL database was used, not only because it is Open-Source software but also because it is sufficient to satisfy the project objectives. The graphical interface was developed using both the HTML and PHP languages.
HTML is not a programming language but a markup language: it is used to describe the content (textual and otherwise) of a hypertext document, and it is read and processed by the browser, which generates the pages visualized on the screen. For this program, HTML was used to create the frame on which the graphical interface is based. PHP is a scripting language created with the immediate purpose of designing dynamic web pages and managing the interactions between the interface and the database; in fact, it is possible to operate directly on the DB using a few lines of code to insert, delete and update records. In order to visualize web pages created in PHP, it is not sufficient to have a browser, as for HTML; a web server is also necessary, and for this reason the LAMP platform provides the Apache web server.
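For example, an insert and an update of the kind described can each be done with a few lines of PHP; the table and column names below are invented for illustration, since the program's actual schema is not shown in the paper.

<?php
// Connect to the MySQL database of the LAMP stack (credentials illustrative).
$db = new mysqli('localhost', 'eval_user', 'secret', 'evaluation');

// Insert a new mark using a prepared statement (which also avoids SQL injection).
$managerId = 7; $fieldId = 3; $mark = 2;          // example values
$stmt = $db->prepare('INSERT INTO marks (manager_id, field_id, mark) VALUES (?, ?, ?)');
$stmt->bind_param('iii', $managerId, $fieldId, $mark);
$stmt->execute();

// Update a subsection correction coefficient.
$coeff = 1.5; $subId = 2;
$upd = $db->prepare('UPDATE subsections SET coeff = ? WHERE id = ?');
$upd->bind_param('di', $coeff, $subId);
$upd->execute();
?>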
V. FUNCTIONING AND IT SECURITY ISSUES
Every software application which treats personal data has to be protected. In this case the DB was locked by a password, and an authentication and authorization model was implemented. An initial login form based on the DB permits entering the corresponding username and password (encrypted) in order to gain access to the program. Besides the username and password, the reference table contains the name, surname and date of birth of the user (to avoid ambiguity) and a field "type" which represents the role of the user running the program; in this way, each user visualizes only his personal pages, designed dynamically based on the type of authentication with respect to the DB. These types are: the "evaluator", who has a main page which permits him to see, create and modify the reports of all managers; the "manager", who can visualize only his own page but cannot modify what the evaluator has written, for example, about him; the "staff", who are limited to entering only hospital documents and cannot visualize or modify any report sheet; and the "administrator", who has all the permissions to operate on the entire program in case of problems and can perform changes if necessary. Further development practices have been adopted for the program's IT security, such as the use of the POST method in input forms instead of the GET method, which exposes the form data in the URL.
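A minimal login handler along these lines might look as follows. The users table and its columns are assumptions based on the description above, and the sha1 hash stands in for whichever encryption the authors actually used; this is a sketch, not the program's real code.

<?php
session_start();
$db = new mysqli('localhost', 'eval_user', 'secret', 'evaluation');

// Credentials arrive via POST, so they never appear in the URL.
$user = $_POST['username'] ?? '';
$hash = sha1($_POST['password'] ?? '');   // password stored encrypted in the DB

$stmt = $db->prepare('SELECT type FROM users WHERE username = ? AND password = ?');
$stmt->bind_param('ss', $user, $hash);
$stmt->execute();
$result = $stmt->get_result();

if ($row = $result->fetch_assoc()) {
    $_SESSION['role'] = $row['type'];     // drives the dynamically built pages
    header('Location: home.php');         // hypothetical landing page
} else {
    header('Location: login.php?error=1');
}
exit;
?>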
VI. CONCLUSIONS
The program helps to control and check all the activities performed by the managers in a more efficient and smart way, making it possible to establish a skilled and easy-to-use monitoring instrument; the use of a web interface equipped with adequate security levels permits simpler management for all users, and the system can be reached everywhere at any moment. Besides, the program itself encourages dialogue and the exchange of ideas between the evaluator and the professional, allowing meetings to be scheduled in order to discuss the factors of observation/evaluation and to improve the manager's activities. The program does not depend on an "ad hoc" database, which would cause difficulties in exporting and porting to different platforms; the graphical interface is user-friendly and hence can be used even by users with limited informatics skills. Finally, the software is easy to configure and very scalable, and thus can also be used in other business organizations.
ACKNOWLEDGMENT
We wish to thank Michael Ashu Formengia for his support and collaboration.
A System for Acquiring, Transmitting and Distributed EEG Data Processing
D. Kastaniotis1, G. Maragos2, N. Fragoulis1, and A. Ifantis2
1 University of Patras, Electronics Laboratory, Department of Physics, Patras, Greece
2 Technological Educational Institute of Patras, Control Systems, Digital Signal Processing and Data Acquisition Laboratory, Department of Electrical Engineering, Patras, Greece
Abstract— In this work a remotely controlled system for acquiring and processing EEG signals is presented, which also has capabilities for distributed EEG data processing. The system consists of a PC running specialized software and a specialized data acquisition (DAQ) card. The software environment for conducting remote EEG signal acquisition, analysis and data access is based on the well-known National Instruments LabVIEW. EEG data can be transmitted over the internet, and the associated software provides security in EEG and diagnostic analysis data sharing, preventing access by unauthorized viewers. The EEG data processing is based on the Independent Component Analysis (ICA) technique. This technique allows good separation of signals arising from brain and non-brain sources (artifacts), as well as the distinction of temporally discrete but spatially overlapping brain activities. The presented system is used for sharing confidential EEG data over the internet, providing an automatic decision-making tool for a "second opinion" analysis in a clinical diagnosis procedure.
Keywords— Distributed processing, EEG, ICA, Remote Monitoring, Telemedicine.
I. INTRODUCTION
The demand for remote EEG data transmission mainly comes from doctor specialists' great difficulty in accessing underserved and isolated communities such as islands or mountain villages. In addition, the development of such systems is motivated by the difficulty elderly people have in accessing clinics and hospitals, and by the need to decrease the costs of unnecessary patient transfers from towns to hospitals in the capital cities [1]. To this end many solutions have been developed for data transfer [2]. Most of them use Application Programming Interfaces (APIs), which simplify the programming of the communication part while providing security in data sharing [3]. At the same time, the introduction of the innovative ICA method [4] has produced very significant results in EEG analysis [5]. Using this method, systems able to acquire, transfer and process data (in real time or not) have been developed [6]. In this context, a system that provides distributed processing and shares the recorded EEG data, as well as the analysis results, with a number of subscribers is very important. This sharing feature could play an important role especially in diagnostic procedures, when a second opinion is needed. A second opinion could be produced by inspecting EEG data and analysis results from a different point of view, a process that can be facilitated by using a distributed diagnosis system. Such a system can be implemented using DataSocket technology, which simplifies the development procedure. In this context, in this work an "intelligent" EEG recording and analysis system is described which combines the advantages of remote data access and control of an EEG DAQ system. The system employs EEG analysis features using the well-known ICA method, forming, together with the distributed processing of EEG data, a complete and effective solution.
II. SYSTEM DESCRIPTION
A. System Architecture
The general topology of the proposed system is depicted in Fig. 1. The philosophy of the system is based on remote EEG data access and the distribution of the execution control via DataSocket. Additionally, every client must have the ability to perform its own data processing, while the results should be shared among many subscribers; in this way, variety in the data analysis can be achieved. Moreover, every client must have the capability of making proposals on the acquisition process via a messenger task which is also included in the system software. The use of DataSocket reduces the development time of the communication part, ensures high-level data protection, provides full-duplex communication, and achieves data transfer and processing in real time. Additionally, it allows the DataSocket server to run on a different PC than the one hosting the EEG acquisition application; therefore, DataSocket minimizes the total processing time owing to the distribution of the various processes. The Main Workstation, which is a PC-based DAQ system, is located in the room where the experiment takes place. First of all, it acquires, transmits and processes the EEG data. At the end of the experiment, ICA algorithms are applied to the recorded data at every subscriber. At the same
time, the DAQ system sends processed results to, and receives them from, the subscribers. Also, all the subscribers can exchange ICA algorithm results with each other.
Fig. 1 System Architecture
B. Data Acquisition and Transfer Technology
Our data acquisition and transfer can be separated into hardware and software components.
i. Hardware
For acquiring the signals we used four Adjustable Monopode Bridge electrodes for adults from Alpine Biomed (model 9013E1312, with a screened cable), made of gold-plated fine silver with sintered Ag/AgCl (silver silver chloride). We designed a signal conditioning card including a built-in 3rd order IIR low-pass filter with a 49 Hz cutoff frequency. An additional driven-right-leg (DRL) reference electrode is attached to the right leg for cancelling common-mode noise. The conditioning card is battery powered, and a notch filter in the frequency range 47-53 Hz can be applied to remove mains power line interference [7]. The output of the conditioning card is connected to an NI USB DAQ-9215A data acquisition card.
ii. Software
We developed a single installable application code for every subscriber workstation. This application provides presentation, communication and data processing among all subscribers. The software uses LabVIEW's ability to include MATLAB code by incorporating algorithms for performing ICA written in the MATLAB language. Every subscriber executes a different one of the four ICA algorithms which are implemented. More specifically, the server executes Runica, which is an automated version of the extended Infomax algorithm [8]; client No 1 executes the JadeR algorithm [9]; client No 2 executes FastICA [4]; and finally client No 3 executes the efICA [10] algorithm.
iii. TCP-IP Protocol in Cooperation with LabVIEW's Application Programming Interface (API) DataSocket
The use of the Internet for remote patient monitoring has significantly increased over the past years. The most important feature of the TCP-IP protocol is that it is able to provide real-time data transfer. It can be used in many applications, such as distributed DAQ systems and remote monitoring. Additionally, DataSocket can access data in local files and data on HTTP and FTP servers. It minimizes the development time by providing a unified API for low-level communication protocols, without writing separate code for each protocol.
III. INDEPENDENT COMPONENT ANALYSIS (ICA) FOR EEG DATA
Independent Component Analysis (ICA) was first developed in order to solve the blind source separation problem. More specifically, for a given set of mixed signals x = (x1(t), …, xn(t)), it promises to recover the source signals s = (s1(t), …, sn(t)) by seeking an unmixing matrix W that maximizes the independence of the extracted signals. The method is based on the statistical independence of the source signals and on the Central Limit Theorem. In contrast to similar methods like PCA, ICA uses higher-order statistics and returns a new set of data which are not only uncorrelated but also independent, since independence implies uncorrelatedness while the reverse does not hold. For EEG data analysis the use of ICA imposes two assumptions: 1. the recorded signals are linear mixtures of temporally independent but spatially fixed brain and non-brain activities; 2. the spatial spread of electric current from sources by volume conduction does not involve significant time delays. In single-trial EEG analysis, the rows of the matrix x are signals recorded from different electrodes and the columns are measurements recorded at different time instants. Performing ICA on the set of mixed signals x, we obtain an unmixing matrix W which linearly unmixes the mixed signals into temporally independent and spatially fixed components, following the relation y = Wx. The rows of the output matrix y are the time courses of activation of the independent components. The projection of these independent components onto each scalp sensor is given by the columns of the inverse of the unmixing matrix, W^-1. In this way we can extract the scalp topography of each component and also its physiological origins, and thereby identify artifacts [11], which in turn can be easily removed from the new data set that ICA gives us.
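Restating the relations above in compact form (the mixing matrix A is implicit in the text's formulation, and the approximation holds only up to the usual scaling and permutation indeterminacies of ICA):

x(t) = A s(t)                      (observed mixtures of the unknown sources)
y(t) = W x(t) = W A s(t) ≈ s(t)    (W estimated by maximizing independence)

The scalp projection of the i-th independent component is then read off as the i-th column of W^-1, which is exactly how the component topographies in the experiment below are obtained.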
IV. THE EXPERIMENT DATA COLLECTION PROCEDURE
The experiment data collection procedure took place in the Control Systems, Digital Signal Processing and Data Acquisition Laboratory of the Electrical Engineering Department, T.E.I. of Patras. Continuous EEG signal acquisition was achieved using the PC-based DAQ system. This was realized by connecting four electrodes (P1, P2, F3 and F4) to a volunteer's head, at distances based on the 10-20 system, for ten minutes. The recorded data were presented and analyzed locally on the PC-based DAQ system and also on three long-distance workstations. Every workstation could define its own processing parameters and had the ability to exchange data with the other workstations, including the acquisition workstation. The main workstation acquired the data, while another PC was used to run the DataSocket Server; this improved the performance further and also ensured security. In Fig. 2, a sample of a 4-second recording from the publisher front panel is depicted. The sampling frequency was set to 512 Hz. A 3rd order digital Butterworth bandpass adjustable filter, with a low cutoff frequency of 0.1 Hz and a high cutoff frequency of 49 Hz, was also employed.
Fig. 2 Main workstation front panel
Fig. 3 Subscriber - Client No 1
A. Data Processing and Experiment Results
In this section, data processed using EEGLAB's [12] functions are presented; the functions are implemented in MATLAB and nested in LabVIEW. Our intent is to recover the independent components by performing ICA on the recorded data in order to discriminate brain and non-brain activities [13]. After a ten-minute continuous recording of EEG signals, the data were transferred through the network to the subscribers. At the end of the recording procedure, all of the subscribers perform ICA analysis and exchange their results by sending the unmixing (W) matrices. In Fig. 4, the channel spectra and maps of the recorded signals are displayed.
Fig. 4 Channel spectra and maps
After applying the algorithm to the recorded data at every workstation, we received four temporally independent and four spatially fixed components. Fig. 5 presents the results of applying the runica algorithm on the main workstation, represented using EEGLAB functions: the temporally independent activations and the respective spatially fixed components.
Fig. 5 Temporal independent and spatially fixed components
In these results, a noticeable symmetry in the ICA activity distribution over the scalp can be observed. The power spectra and maps of the independent components are shown in Fig. 6, while Fig. 7 presents their Fast Fourier Transform (FFT).
Fig. 6 Power spectra and maps of the independent components
Fig. 7 The FFT of the independent components
From the last figure, we can notice that the independent components have distinguished the line noise frequency of 50 Hz. This noise is present in only 3 of the 4 independent components: independent component 3 clearly contains a 50 Hz harmonic, while the 50 Hz noise in independent component 4 has been eliminated.
V. CONCLUSIONS
The implementation of such a distributed system is expected to be followed by important developments in a wide field of applications. We mention some of them.
Signal processing and data representation: Despite the fast development of this branch of science, there is still a need to develop new, more efficient techniques or to improve the existing ones.
Real-time applications: Data are ideally transferred as fast as possible, achieving real-time communication. By developing more efficient protocols we could achieve faster data transfer and thus communication closer to real time.
Developing algorithms for clinical decision-making tools: Algorithms need to be developed that have the ability to implement a typical diagnosis. In this way it could become possible to detect abnormalities during the examination and to give timely notification in critical situations such as epileptic crises.
Telemedicine: The combination of the above views results in the development of complete telemedicine applications. For instance, a patient with movement disabilities could be monitored by a specialist while at home. In an emergency situation, where the system detects an abnormality, it could give a timely alarm to a specific person.
REFERENCES
1. E. Karavatselou, G. Economou, C. Chassomeris, V. Danelli-Mylonas, and D. Lymberopoulos, "OTE-TS: A New Value-Added Telematics Service for Telemedicine Applications", IEEE Transactions on Information Technology in Biomedicine, Vol. 5, No. 3, September 2001.
2. I. Lita, I. B. Cioc, I. Popa, "Technologies for Remote Data Acquisition Systems in Environmental Monitoring", Electronics, Communications and Computers Department, University of Pitesti.
3. L. Yan, "China DataSocket Technology and Its Application in Remote Data Transmission Measurement", The Eighth International Conference on Electronic Measurement and Instruments.
4. A. Hyvarinen, E. Oja, "Independent Component Analysis: Algorithms and Applications", Neural Networks, No. 13, pp. 411-430, 2000.
5. M. S. Bartlett, S. Makeig, A. J. Bell, T.-P. Jung, and T. J. Sejnowski, "Independent Component Analysis of EEG Data", Society for Neuroscience Abstracts, vol. 21, 1995.
6. Z. Obrenovic, D. Starcevic, E. Jovanov, V. Radinojevic, "An Implementation of Real Time Monitoring and Analysis in Telemedicine", Information Technology Applications in Biomedicine, Proceedings of the IEEE EMBS International Conference, 2000.
7. C. Frantzidis, C. Bratsas, M. Klados, E. Konstantinidis, C. Lithari, A. Vivas, C. Papadelis, E. Kaldoudi, C. Pappas, P. Bamidis, "On the classification of emotional biosignals evoked while viewing affective pictures: an integrated data mining based approach for healthcare applications", IEEE Transactions on Information Technology in Biomedicine, Volume PP, Issue 99, 2010.
8. A. J. Bell and T. J. Sejnowski, "An Information-Maximization Approach to Blind Separation and Blind Deconvolution", Neural Computation, vol. 7, pp. 1129-1159, 1995.
9. J. Cardoso and A. Souloumiac, "Blind Beamforming for Non-Gaussian Signals", IEE Proceedings-F, vol. 140, No. 6, pp. 362-370, December 1993.
10. Z. Koldovský, P. Tichavský and E. Oja, "Efficient Variant of Algorithm FastICA for Independent Component Analysis Attaining the Cramér-Rao Lower Bound", IEEE Trans. on Neural Networks, vol. 17, no. 5, pp. 1265-1277, September 2006.
11. Ruijiang Li, Jose C. Principe, "Blinking Artifact Removal in Cognitive EEG Data Using ICA", Proceedings of the 28th IEEE-EMBS Annual International Conference, New York City, USA, Aug 30-Sept 3, 2006.
12. A. Delorme, S. Makeig, "EEGLAB: An Open Source Toolbox for Analysis of Single-Trial EEG Dynamics Including Independent Component Analysis", Journal of Neuroscience Methods, 134, 9-21, 2004.
13. T.-P. Jung, S. Makeig, C. Humphries, T.-W. Lee, M. J. McKeown, V. Iragui, and T. J. Sejnowski, "Removing Electroencephalographic Artifacts by Blind Source Separation", Psychophysiology, No. 37, pp. 163-178, 2000.
The Umbrella Database on Fever and Neutropenia in Children – Prototype for Internet-Based Medical Data Management
Matthias Faix1, Daniela Augst1, Hans Jürgen Laws2, Arne Simon1, Fritz Haverkamp1, J. Rentzsch3
1 Children's Hospital Medical Center, University of Bonn, Bonn, Germany
2 Dept. of Pediatric Hematology, Oncology and Immunology, University of Düsseldorf, Düsseldorf, Germany
3 Charite Klinik für Psychiatrie und Psychotherapie, Campus Charite Mitte, Berlin, Germany
Abstract–– Medical data from clinical studies are often collected from the patients' files by the attending physicians or study nurses, entered into a handwritten paper form designed for the study, sent to the principal investigator and later merged into a centralized database. This traditional method of data collection (sending data to a leading centre) requires considerable expenditure of time and personnel, and the underlying data model does not allow the rapid inclusion of new items of interest. In addition, the generation of reports for the users (participants) and the final statistical analysis of the data rely on additional software tools.
Keywords–– Clinical Studies, Data Entry, OLAP, Cube Organization.
I. INTRODUCTION
Tertiary care pediatric oncology centers treat, in total, about 1800 newly diagnosed pediatric cancer patients per year [1]. Thus, the patient population of a single center is too small and heterogeneous to generate answers to specific questions in prospective randomized studies. In order to overcome these obstacles, a basic principle in pediatric oncology in recent decades has been nationwide multicentre cooperation in the collection and analysis of standardized data sets on risk factors, treatment modalities, complications, adverse events and outcomes. This principle has also been implemented in studies which investigated the epidemiology and prevention of infectious complications in pediatric cancer patients [2][3][4]. The Umbrella Internet Database (UID) considers pediatric patients with fever and neutropenia (absolute neutrophil count < 0.5 x 10^9/L). It has been designed by a group of pediatric oncologists and infectious disease specialists from the Netherlands, Switzerland and Germany. The cumulative and comparative investigation of the different local data sets yields a standardized description and a better understanding of the epidemiology of fever and neutropenia in pediatric oncology departments in comparison with the results from other countries. Eventually, the participating units may compare their own patient characteristics, treatment modalities and
outcomes with the cumulative results of other pediatric oncology centers in a non-imperative procedure, facilitating benchmarking for internal quality assurance [2]. Traditionally, medical data from clinical studies are collected from the patients' files by the attending physicians or study nurses, entered into a handwritten paper form designed for the study, sent to the principal investigator and later merged into a centralized database by data management staff [2]. In addition, data monitoring resources are required to check the validity, plausibility and integrity of the primary data sets. This traditional method of data collection requires considerable expenditure of time and specially educated personnel. It may lead to transcription errors or to a loss of important information on its way from the primary user to the principal investigator, and it impedes the contemporary inclusion of additional items of interest. Finally, the generation of reports requested by the participants and the final statistical analysis of the data rely on additional software tools or, again, on laborious hands-on procedures [5]. Taken together, to reach the targets of the UID approach, a feasible (stability, availability, speed) and secure option for data entry into an internet-based database had to be implemented. This database should be available worldwide to any user with internet access. No additional software should be necessary on the client side besides Opera as a browser and Java 1.4.2. One major issue is that many professional medical PC clients do not possess the administration rights to set particular default prerequisites on their computer workplaces. Furthermore, the startup time to get to work should be low. The primary prerequisites of the database were [5][6][7][8]:
• personal data of the patients have to remain confidential but available in the participating centre, to ease the identification of particular datasets
• users from a participating centre must only have access to their own data sets
• cumulative anonymous data analysis is provided by the principal investigator at certain time intervals or on request
• real-time data access and online reports (if requested by the participating institution)
• multi-user access from different locations (workplaces) in one institution
• low costs for maintenance procedures.
II. MATERIALS AND METHODS
Data storage and maintenance is realized in a central volume with regular backup routines to guarantee safety and stability. The age of the patient is automatically calculated from the date of birth in the backend data set with a Julian calendar function of the Oracle® database. Nonetheless, the data management team and the principal investigator cannot decode the encrypted personal data of the patients. In Germany, the patient or his legal guardians have to give their informed consent to data storage and management. The study protocol has been approved by the Ethics committee and Scientific Study Advisory Board of the University of Bonn.
A. Prearrangement
The first strategic decision was to use Oracle® as the tool for storing data and for internal quality assurance procedures. In addition, Oracle® Forms (below: Forms) was chosen to generate the data entry surface for the Internet. At this point it was anticipated that the data to be collected were rather complex and should be validated ("clear cited") as far as possible without external data monitoring, by a two-step administrative process in the participating institution before transmission into the database. This "clear cited" status of the data after validation by a local administrator (in general a pediatric oncologist) aims at lowering the threshold for primary data entry by any authorized user (this may be a study nurse or a medical student) and at lowering the costs of any post-processing efforts. Once established, Forms offers more comfort than other tools (i.e. PHP), with the advantage of "running everywhere", meaning both on Windows clients and on Unix-based clients. In fact the application can now be used on micro devices, e.g. Windows Mobile or the iPhone. Even the possibility of offering access via UMTS is currently being discussed.
B. Basics
Oracle Version 10gR2 was optimized for SUSE 9.2 Linux hosted on machine A at the local electronic data processing centre. The application runs on a preexisting Oracle instance. The machine and the Oracle database share the hardware with other medical applications. As the method for database design we are trying to use ERWIN. The SQL script is manually implemented into the database. A 3-tier environment was designed [9][10][11].
C. Deploying the Information
The principal aim was: any place, any environment [5]. Producing the data in real time on the screen of the user is crucial for the whole project. Reaching this "real time environment" for the collection of and access to medical data, available all over the world at least 6 days a week, turned out to be a challenge.
D. Stations to Be Passed and Specific Obstacles
Depending on the individual firewall settings, only a few clients were able to start up and run the application through a simple connection to the home page of the project. Contact with the clients revealed that other system software, multiple browsers and different Java® Virtual Machines were in use; for example, Firefox® as browser and Linux as system software. On request, the support team of Oracle® delivered the information that only two standard environments guarantee a high probability of working without further interruption or subsequent problems. The first one is Windows® XP with Opera® Version 7.5, connected with the Java runtime environment (JRE) version 1.4.2. Opera® is now a freeware browser and has been implemented as the standard browser for the application. The version needed is offered via a download link on the homepage (www.febrile-neutro.de). In addition, the browser delivers the Java environment (Java 2.0). Opera and Java are in fact available for Mac users as well; we support MS Windows with Opera and JRE, and Macintosh users can use the environment too, but owing to limited support capacity we only recommend the environment described above. The above environment is the supported environment, but not the only environment which can be used. It is recommended to start the first steps of installation with a network and IT administrator at hand. After installation, the use of the Java™ virtual machine of the Microsoft® Explorer is still possible, but the appearance of Java applications might be influenced, since the Java™ runtime environment (JRE) overwrites the default settings of the Microsoft version of the Java project (MS VM). It is therefore recommended to test the connection with the environment found on the client machine first; only if problems occur should the environment be changed to the standard setting described above.
Fig. 1 The result: the UID [15] data entry form (now also available on mobile devices like the iPhone)
The second recommended standard environment is Knoppix® 3.7, starting Firefox™ as browser, including JRE 1.4.2. Starting the application from a Knoppix® CD turns the computer of the peripheral user into a standard Knoppix® PC including Firefox® as standard browser. The PC is started with a setup routine delivered on the CD, with a startup time of about 2 minutes. This concept of a network computer was propagated by Oracle® itself years ago.
E. Dataflow
Initially, the IP ports 80 and 9000 were used for data transfer. Port 9000 is often blocked for security reasons by local firewalls; the result is an error on the client machines ["Missing Java Classes"]. Thus, the port was changed to 443. Some proxies regulate the dataflow on this port: only proper HTTP data are allowed. We were trying to tunnel the information; this would lead to higher stability and higher security as well.
F. Further Obstacles in Practice
Any tier may be affected by technical problems, and the temporary loss of any tier results in a failure of the complete system. This is in fact the most relevant disadvantage of the described approach. Decentralized data gathering is relatively stable; the clients on the outside are responsible for their own stability. Nonetheless, the advantages of a lightweight environment, which is mainly driven by the central machines, exceed the risks of instability. To overcome the risk of a temporary or total breakdown of the machine on which the database is located, Oracle® provides a method to mirror the data at commit time in a RAID-like system established in different physical places. The same is valid for any drop of the second tier (back end). This is the easiest tier to back up, since this machine is generated once and is normally not modified. The entry forms are normally generated once and rarely changed. If all tiers are mirrored, even a drop of one local supporting network does not result in unavailability of the application [11].
G. Forming the Environment
The development of the application is divided into the development of the data dictionary, which is kept on the central database machine, and the development of the user interface, kept on the application server. The heart of the development surely is the data dictionary supplied by the principal investigator or the study protocol. Generally it is the smaller drawback to lose some information (to have a data set which is sufficient but not perfect) than to allow the entry of corrupt data. Foreign keys and constraints are defined, and check conditions are made. The data dictionary is kept self-explaining by using the comment field function of the front end as often as feasible.
H. Data Consistency
In order to keep data consistency in the database we used several techniques to keep the data as valid as possible (see the sketch after this list). Amongst these techniques were:
• clear input (check of the data on byte level and check of the data on field level) [11]
• foreign keys to check the data [12]
• two-phase commit [13]
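As an illustrative sketch of the second of these techniques at the database level (the table and column names are hypothetical, not the actual UID schema, and the PHP/PDO wrapper merely stands in for however the DDL was actually loaded):

<?php
// Foreign keys and check conditions enforce consistency inside the database
// itself, independently of any front-end validation. Names are invented.
$db = new PDO('oci:dbname=//dbhost:1521/uid', 'uid_app', 'secret');

$db->exec('
  CREATE TABLE fn_episode (
    episode_id  NUMBER PRIMARY KEY,
    patient_id  NUMBER NOT NULL REFERENCES patient(patient_id), -- foreign key
    anc         NUMBER CHECK (anc >= 0),                        -- field-level check
    onset_date  DATE NOT NULL
  )');
?>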
I. Reports
One benefit for the participating centers is online access to their complete datasets, including particular reports (as requested). Only "checked" data sets entered by the study centre can be retrieved and analyzed from the central database in a report volume.
III. RESULTS
Our main aims of the project were:
• a standardized internet-based documentation tool for baseline data with high clinical relevance to all attending physicians in the field of pediatric oncology to whom internet access is routinely available
• support of all accredited users with an efficient data management tool to gather a complete set, or special reports, of clinically relevant information about their own patient population and consequently about the epidemiology of febrile neutropenia in the particular unit
• provision of pediatric oncologists and infectious disease specialists with baseline data for secondary study objectives and risk factor analysis
• improvement of the efficacy of routine clinical documentation in the participating centers (are all important items documented in the file?)
• development and evaluation of risk models for severe infectious complications in pediatric oncology, resulting in more targeted treatment strategies for low and medium risk patients.
All our aims were reached. Furthermore, the project is a source for data mining: we use OLAP [14] principles to gain information and to modify the repository of the project. The UID project, with the set of data entered in the repository, is now a template for a series of other projects.
IV. DISCUSSION
Setting up a reusable environment for the internet and getting started requires considerable effort, and some of the obstacles are cost-intensive; furthermore, we use commercial software, which is cost-intensive as well. Nevertheless, the presented method offers a good return on investment in two ways: feasible data can be generated using the project, and further projects can be set up in a cost-efficient way.
V. CONCLUSIONS
An internet-based solution is the state-of-the-art way to collect data in medicine, especially when centers are spread across the world. Generating a prototype for internet-based data collection results in a cost-efficient way to set up a data collection centre on the internet.
ACKNOWLEDGEMENT
The Umbrella Internet Database Project has been supported by a grant from the Kinderkrebsstiftung Duesseldorf [1], 2005, Germany.
REFERENCES
1. Kinderkrebsstiftung at www.kinderkrebsstiftung.de/krebs-bei-kindern/
2. Gaur AH, Flynn PM, Shenep JL. Optimum management of pediatric patients with fever and neutropenia. Indian J Pediatr 2004;71:825-35.
3. Viscoli C, Castagnola E, Giacchino M, et al. Bloodstream infections in children with cancer: a multicentre surveillance study of the Italian Association of Paediatric Haematology and Oncology. Supportive Therapy Group-Infectious Diseases Section. Eur J Cancer 1999;35:770-4.
4. Laws HJ, Ammann RA, Lehrnbecher T. [Diagnostic Procedures and Management of Fever in Pediatric Cancer Patients]. Klin Padiatr 2005;217(Suppl 1):9-16.
5. Rhodes DR, Yu J, Shanker K, Deshpande N, Varambally R, Ghosh D, Barrette T, Pandey A, Chinnaiyan AM. ONCOMINE: A Cancer Microarray Database and Integrated Data-Mining Platform. Neoplasia 2004;6(1):1-6.
6. Lenz R. Information Management in Distributed Healthcare Networks. In: Härder T, Lehner W (eds.). Data Management in a Connected World: Essays Dedicated to Hartmut Wedekind on the Occasion of His 70th Birthday. Springer Verlag, 2005: 315-334.
7. Lenz R, Kuhn KA. Towards a continuous evolution and adaptation of information systems in healthcare. Int J Med Inf 2004;73(1):75-89.
8. Lenz R, Huff S, Geissbühler A. Report of conference track 2: pathways to open architectures. Int J Med Inf 2003;69(2-3):297-299.
9. Ehlers F, Ammenwerth E, Hirsch B. Design and Development of a Monitoring System to Assess the Quality of Hospital Information Systems: Concept and Structure. In: Engelbrecht R, Geissbuhler A, Lovis C, Mihalas G (eds.): Connecting Medical Informatics and Bio-Informatics. Proceedings of Medical Informatics Europe (MIE 2005), Geneva, Aug 08 - Sep 01 2005. Studies in Health Technology and Informatics, Volume 116. Amsterdam: IOS Press. 575-580.
10. de Keizer N, Ammenwerth E. Trends in evaluation research 1982-2002: A study on how the quality of IT evaluation studies develop. In: Engelbrecht R, Geissbuhler A, Lovis C, Mihalas G (eds.): Connecting Medical Informatics and Bio-Informatics. Proceedings of Medical Informatics Europe (MIE 2005), Geneva, Aug 08 - Sep 01 2005. Studies in Health Technology and Informatics, Volume 116. Amsterdam: IOS Press. 581-586.
11. Saboor S, Ammenwerth E, Wurz M, Chimiak-Opoka J. MedFlow: improving modelling and assessment of clinical processes. In: Engelbrecht R, Geissbuhler A, Lovis C, Mihalas G (eds.): Connecting Medical Informatics and Bio-Informatics. Proceedings of Medical Informatics Europe (MIE 2005), Geneva, Aug 08 - Sep 01 2005. Studies in Health Technology and Informatics, Volume 116. Amsterdam: IOS Press. 521-526.
12. Strang J. Programming with curses. O'Reilly & Associates Inc.
13. Codd EF (1970): A Relational Model of Data for Large Shared Data Banks. Communications of the ACM, 13(6): 377-387.
14. Pendse N, Creeth R (1995): The OLAP Report. Business Intelligence.
15. UID at http://www.febrile-neutro.de/
Corresponding author:
Matthias Faix Universitäts Kinderklinik Bonn Adenauer Allee 119 Bonn Germany [email protected]
A research information system (RIS) for breast cancer genetics
B. L. Leskošek1, J. Dimec1, K. Geršak2 and P. Ferk3
1 University of Ljubljana, Faculty of Medicine, Institute for Biostatistics and Medical Informatics (IBMI), Vrazov trg 2, SI-1000 Ljubljana, Slovenia
2 University Medical Centre, Institute of Medical Genetics, Šlajmerjeva 3, SI-1000 Ljubljana, Slovenia
3 University of Maribor, Faculty of Medicine, Slomškov trg 15, SI-2000 Maribor, Slovenia
Abstract— In healthcare, great quantities of patients' data are collected for healthcare and administrative/financial purposes. Some of these data would be very useful for potential research; however, for any serious research, uniform and standardised data are needed, and these are normally scattered among many different electronic and also paper-based information sources. We had a similar problem in our breast cancer genetics study, so a decision was made to develop a simple and lightweight web-based research information system (RIS) which would allow user-friendly data input, search, editing and export with low maintenance costs. For development, our in-house system for automatic application generation (AAGIP) was used. The developed research information system is secure (all data between server and clients are encrypted using the 256-bit SSL protocol, and the database on the server is periodically backed up), highly compatible (it was tested with a number of popular browsers such as Mozilla/Firefox, IE, Opera, Chrome and Safari), fast (it does not use any demanding graphical gadgets), user-friendly, and allows (international) users simple data input (in local languages, using the Unicode UTF-8 standard) and data usage/export for different simple or advanced analyses.
Keywords— research information system, web-based application, distributed data collection, breast cancer genetics, AAGIP
I. INTRODUCTION
In healthcare, great quantities of patients' data are collected for healthcare and administrative/financial purposes. Some of these data would be very useful for potential research with high and quick impact on treatment procedures. However, for any serious research, uniform and standardised data are needed, and these are normally scattered among many different electronic (e.g. Electronic Health Records) and also paper-based information sources. To support narrow-focused and highly specialised research work collecting genetics data on patients with breast cancer, the decision was made to develop a simple and lightweight web-based research information system (RIS) which would allow centres at different geographical locations to input, search, edit and export data, and which would allow easy and low-cost upgrading and adaptation as well.
II. SYSTEM DESCRIPTION
The system that we are reporting on serves as an intermediary between creators of records, typically medical doctors involved with breast cancer patients, and, on the other hand, geneticists and other specialists (pharmacists, informaticians) involved in research work. Figure 1 shows the schematic structure of the data collection system. For the time being, three institutions with several locations are involved: the University of Ljubljana, Faculty of Medicine, Institute for Biostatistics and Medical Informatics (IBMI), Ljubljana, as developer and manager of the information system, and the University Medical Centre, Institute of Medical Genetics, Ljubljana (IMG), the University of Maribor, Faculty of Medicine, Maribor (MFMB) and the Institute of Oncology, Ljubljana (OI) as the main data providers.
Fig. 1 Roles of different project partners
A decision to build a web-based information system, with a database centralised with regard to the collaborating institutions, has some good consequences: the same data format is used by all partners, thus avoiding the information loss normally occurring with format conversions; adaptation to the different character sets used in Europe (for future partners) has to be solved only once; and backup and security management are much easier.
Consequences of using the web as an infrastructure for the RIS development are twofold: good and not so good. Since all of the programmes that define the application's logic reside on the server side or are transferred inside the web pages, there is no need for the installation of client software, and users do not have to deal with any technical details. Partners' computing needs are very modest: almost any PC with a web browser is good enough for data entry. Collecting texts in various character sets and even various scripts is theoretically easy, at least for the data-entry phase, since all popular web browsers support Unicode (UTF-8). On the other hand, one must be aware of the web user interface limitations due to HTTP and HTML constraints. A web browser application has limited access to computer resources such as memory, processing power and disk space. Everything that is needed for the operation has to be part of web pages; therefore, for every user's intervention in the database, a rather extensive transfer of data and of code describing the interface elements takes place. After a detailed analysis of the situation, several basic requirements which the system should fulfil were identified:
1. The system must be operative even through slow lines and by using any hardware platform powerful enough to be connected to the web.
2. The user interface must be simple and intuitively clear.
3. The security policy must cover database access, transport of data, and involuntary damage to data.
4. The institutions that submit records remain their owners and must be able to download them from the system in the standardised format.
5. The upload of records in other well-defined documentary formats (from different existing information systems) must be supported.
6. The system must be simple enough to be deployed in a very short time.
7. All text-processing actions must support all character sets belonging to different European languages (for future use).
With this paper we will try to describe how the above-mentioned requirements were met. In the schema in Figure 2 we can identify three groups of functions: data input and editing, searching, and data output/export. Via the search user interface forms, one can select specific records or record groups that need some attention, whether editing or exporting. The (selected) data can be exported as a tab-separated structured text file or as an XML file. The development of the system was based on the expectation that the users will mostly not reside in broadband-connected computer centres, so one of our important concerns was also to minimise the amount of data transferred through the communication lines during data entry. It was a natural decision to use on the client side only standard, computationally undemanding solutions with simple HTML and javascript functions, without any animations, applets, Flash or similar gadgets. In that manner we were also able to reach a high level of compatibility between our information system and different end-user software (browsers).
Fig. 2 Information system’s scheme (Simple arrows represent user’s actions and broad arrows denote data flow).
The system was developed after a thorough analysis and using in-house software for automatic application generation [1] (AAGIP). The heart of this software is a formalism called the Application Definition Language (ADL), a mixture of SQL's Data Definition Language and HTML-like tags. ADL was used to describe the database, the user interface and the application that uses both. By using the Unicode [2] UTF-8 standard, the RIS can be used internationally without changes (only the user interface needs translating). The RIS enables direct data input/update via active web forms controlled by javascript and AJAX [3] (data validation and different views controlled on the client side). The user interface is simple, intuitive and effective. The security policy covers database access and the transport of data, prevents involuntary damage to data, and is compliant with the ISO/IEC 27000 [4] family of standards. The RIS is also used for data searching and as a central web site for analysis reports. The software has data export functionality in formats suitable for statistical analyses in e.g. R, SPSS or Excel.
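A sketch of the tab-separated export mentioned above (column names and the result set are illustrative; the actual AAGIP-generated code is not shown in the paper):

<?php
// Stream selected records as a tab-separated file usable in R, SPSS or Excel.
header('Content-Type: text/tab-separated-values; charset=utf-8');
header('Content-Disposition: attachment; filename="ris_export.tsv"');

$rows = [   // in the real system these rows would come from the database
    ['id' => 40, 'variant' => 'c.68_69delAG', 'age' => 54],
    ['id' => 41, 'variant' => 'wild type',    'age' => 61],
];

echo implode("\t", array_keys($rows[0])), "\n";   // header line
foreach ($rows as $r) {
    echo implode("\t", $r), "\n";
}
?>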
2. USER INTERFACE
a. Data validation and safety measures
In accordance with our basic principle of minimising network traffic, it is very important to discover as many
errors in the data as possible already on the client side, to prevent unnecessary exchange of erroneous data and error diagnoses. The data entry forms are constructed in a way that minimises the possibility of introducing errors in the first place. To avoid crowding the forms, the data entry fields that belong to the same groups (e.g. identification data) are not visible (except titles) when the form is loaded, as shown in Figure 3 (group A is visible, groups B-F are not). The user can then open (make visible) only the groups of data she/he wishes to edit. The application usage protocols on the server side minimise the possibility of destructive actions, intentional or accidental, on data already entered into the database, and the possibility of data disclosure (e.g. with SQL injection). Data entered into web forms are checked mainly on the user's side, after the submit button is pressed and before the form is transported. For security reasons a second control takes place on the server side before actual inclusion into the database is performed. With this second control, the consequences of possible data interception and intentional corruption are prevented, even though we believe that by using the https protocol and data encryption it is highly unlikely that such corruption would go unnoticed and the data not be rejected by the normal data format checking already performed by the database itself. Data control is performed by javascript snippets which are part of the screen forms. They check for the presence of data in mandatory fields, check characters in limited-character-type (e.g. numeric) fields, and inspect the format of data in fields for which a standard form is foreseen (e.g. dates, number intervals, e-mail addresses, ...). Whenever all possible values for a field are known in advance, the need for data control is diminished by using standard input elements (radio buttons, checkboxes or pull-down lists) or special solutions which make possible the selection of values. If the number of different values is small and more than one value could be selected, checkboxes are used. The main RIS entry form with help window and validation alert is shown in Figure 3. Access to all parts of the information system is limited to registered users only, and data traffic between client(s) and server is encrypted by the 256-bit SSL protocol. Each user is authorised by a personal username and password, and all user actions (form load, data input, update, delete, ...) are logged. While all the entered data in the database are visible to all users of the system, the permissions for editing and other potentially destructive actions are much more limited. Once the data are inserted into the database, all subsequent edit or delete actions on them are granted only to users affiliated to the partnering centre to which the record's creator belongs. The same principle is used for downloading: the partnering centres are able to download only those XML-tagged records that were submitted by their staff.
Fig. 3 Example of a complex user input form with a help window and data validation message opened.
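The server-side second control could be sketched as follows; the field names and rules are hypothetical and merely stand in for the per-field checks described above:

<?php
// Re-validate the submitted form on the server before database insertion,
// mirroring the client-side javascript checks. Field names are illustrative.
$errors = [];

if (!isset($_POST['patient_id']) || !preg_match('/^\d+$/', $_POST['patient_id'])) {
    $errors[] = 'patient_id must be numeric';
}
if (empty($_POST['diagnosis_date']) ||
    !preg_match('/^\d{4}-\d{2}-\d{2}$/', $_POST['diagnosis_date'])) {
    $errors[] = 'diagnosis_date must be YYYY-MM-DD';
}
if (!empty($_POST['email']) && !filter_var($_POST['email'], FILTER_VALIDATE_EMAIL)) {
    $errors[] = 'invalid e-mail address';
}

if ($errors) {
    http_response_code(400);
    echo implode("\n", $errors);   // reject instead of inserting corrupt data
    exit;
}
// ... proceed with a parameterised INSERT here ...
?>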
b. Search and Edit
An integral part of the RIS is its searching and browsing interface. The search interface serves as a tool that helps users with the selection of records that need to be edited; the search fields present in the searching form were selected with this function in mind. The user can make a selection based on an interval of document identifiers (IDs) or by free-text searching through all fields. Figure 4 shows the search interface in which the user has set the search criteria to select records with IDs between 40 and 42. An additional role of the search interface is to provide listings of search hits, together with the possibility to download the selected records (hits) marked up in well-formed XML.
Fig. 4 Search interface with executed example search.
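A hedged sketch of such a search query (the table and column names are invented, since the actual schema is not published):

<?php
// Select records with IDs in a given interval, optionally filtered by a
// free-text term; a prepared statement keeps the query safe from injection.
$db = new PDO('mysql:host=localhost;dbname=ris;charset=utf8', 'ris_user', 'secret');

$stmt = $db->prepare(
    'SELECT id, summary FROM records
      WHERE id BETWEEN :lo AND :hi
        AND CONCAT_WS(" ", summary, notes) LIKE :term');
$stmt->execute([':lo' => 40, ':hi' => 42, ':term' => '%BRCA%']);

foreach ($stmt as $row) {
    echo $row['id'], "\t", $row['summary'], "\n";
}
?>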
c. User's help
Because of time limitations, lack of personnel and the geographic dispersion of users, there was no possibility to organise training workshops. All instructions that users need to perform data entry therefore have to be delivered as online help. For every non-self-explanatory field on every form there is a help file, which can be reached via a hyperlink. An example of a help file is presented in Figure 3.
3. CONCLUSION
With exact planning and by use of AAGIP, we were able to very quickly develop an adaptive, simple and friendly RIS that collects data from different sources, allows data download in a structured (XML) form for later analyses, and allows data search and editing. This example shows that fruitful and effective results with low maintenance costs can be obtained in spite of very fast development. In the future, we are planning to further develop the RIS and test its usability with our international partners.
ACKNOWLEDGMENT

The project was supported by Slovenian Research Agency grant number L3-0431 (C).
REFERENCES
1. Leskošek BL (1999-2009) Automatic application generator in Perl (AAGIP). Internal technical documentation, Ljubljana
2. Unicode Home Page. http://unicode.org/
3. Mahemoff M (2006) Ajax Design Patterns. O'Reilly, USA
4. ISO Information security management systems standards at http://en.wikipedia.org/wiki/ISO/IEC_27000
5. Dalgaard P (2008) Introductory Statistics with R. Springer, USA
6. Rob P, Coronel C (2004) Database Systems: Design, Implementation and Management, Sixth Edition. Course Technology, Boston
7. Shortliffe EH, Cimino JJ (2006) Biomedical Informatics. Computer Applications in Health Care and Biomedicine. Springer, USA
8. Goldstein D, Groen PJ, Ponkshe S, Wine M (2007) Medical Informatics 20/20. Jones and Bartlett Publishers, USA
Corresponding authors:

Author: Brane L. Leskošek
Institute: Institute of Biostatistics and Medical Informatics
Street: Vrazov trg 2
City: SI-1000 Ljubljana
Country: Slovenija
Email: [email protected]

Author: Polonca Ferk
Institute: University of Maribor, Faculty of Medicine
Street: Slomškov trg 15
City: SI-2000 Maribor
Country: Slovenija
Email: [email protected]
WeCare: Wireless Enhanced Healthcare

Hande Ozgur Alemdar and Cem Ersoy
NETLAB, Computer Networks Research Laboratory, Department of Computer Engineering, Bogazici University, Istanbul, Turkey

Abstract— In-home pervasive healthcare systems provide rich contextual information and alerting mechanisms against odd conditions through continuous monitoring. This minimizes the need for caregivers and helps the chronically ill and elderly to live an independent life. In this study, we present a web-based indoor monitoring system, namely WeCare, and show the applicability of multi-modal sensor network technologies to healthcare monitoring.

Keywords— ambient assisted living, home health care, wireless sensor networks, RFID.
I. INTRODUCTION

As the world population ages, providing quality healthcare services to the elderly and chronically ill becomes a challenge [1]. Advances in technology can be the remedy in that case, combining newly developed wireless sensor devices with emerging and existing web technologies. The interdisciplinary concept of Ambient Intelligence (AmI) has brought together scientists from different areas like networking, artificial intelligence and medicine [2]. One of the most exciting applications of this interdisciplinary research is bringing the capability of constant monitoring into our lives, allowing people to live independently [3, 4]. In that context, ambient assisted living applications may help residents and their caregivers by providing continuous medical monitoring, memory enhancement, control of home appliances, medical data access, and emergency communication [5, 6]. There are several survey studies in the literature [7, 8, 9] summarizing the previous research on the subject. When these studies are explored, it is observed that they can be categorized into five broad classes, namely, activities of daily living monitoring, location tracking, medication intake monitoring, medical status monitoring, and fall and movement detection. There are several design requirements for intelligent remote healthcare monitoring systems. To begin with, development of end-to-end healthcare applications is important. Most studies mentioned in the literature, however, only focus on a particular problem such as fall detection or medication intake. These functionalities should be integrated into one application for providing end-to-end,
seamless healthcare monitoring applications. Since the previous research projects focus only on specific problems, the flexibility and ease of extensibility of these systems remain open problems. Integration of new functionalities into these systems is limited and, in some cases, impossible. Most of the projects lack a graphical user interface, which is very important for all the actors in the system. Moreover, unobtrusiveness should be provided for the effective use of the healthcare monitoring system. Unobtrusive devices like video sensors should be used together with the wireless sensors for providing both unobtrusiveness and multi-modal sensing. Multi-modal sensing improves the context-awareness of the system. By considering the design requirements mentioned above, we built an end-to-end healthcare monitoring application, WeCare, which is capable of monitoring activities of daily living, medication intake, medical status, and location, and is able to identify sudden falls. The WeCare system incorporates the functionalities of the five groups of applications that exist in the literature. Besides, with the help of different sensing modalities like RFID, wireless sensors, and video sensors, the WeCare system provides a highly context-aware solution for healthcare monitoring. The flexible alarm definition mechanism helps identify many different alarm conditions that are determined by the users according to their needs. The addition of new types of sensors to the system is also very convenient. The configurations are handled via the simple graphical user interfaces of WeCare.
II. WECARE SYSTEM DESIGN

WeCare is a wireless sensor network based application for remote healthcare monitoring of the elderly. The multi-modal sensing environment of WeCare provides a combination of different sensing modalities like video, RFID, biological and ambient sensing. The wireless sensors provide environmental context information like the ambient light and temperature. The RFID technology is used for location tracking and also as an input for activating the video sensors. In a typical scenario, all sensor data is used to provide the context information which is forwarded to other nomadic and mobile actors such as caregivers, parents, and
healthcare professionals over the Internet and GSM. The system's basic functionalities are as follows:
• Incorporating different sensing modalities such as ambient light, temperature, sound, humidity and acceleration.
• Identifying the presence of the residents, i.e., which residents are at home.
• Identifying the location of the residents, i.e., in which room each resident is.
• Video broadcasting enhanced with location tracking.
• Providing web based remote access for the users.
• Providing e-mail and SMS notifications in emergency situations.
• Easy alarm setting via predefined alarm mechanisms.
• Flexible, personalized alarm definition via GUI.
• Alarm and event logging and archiving mechanism for later evaluation and monitoring.
We built the testbed for WeCare in a 55 m2 laboratory. The testbed home consists of three rooms decorated as a living room, a bedroom and a kitchen. A picture of the testbed environment can be seen in Figure 1.
Fig. 1 WeCare testbed rooms

We have deployed several sensors for temperature, humidity, light and sound monitoring, together with an RFID system and video cameras. The users also carry acceleration sensors on their bodies for activity tracking.

III. WECARE ARCHITECTURE

The system architecture is composed of two main parts: one corresponds to the home environment and the other is at the healthcare control center, allowing remote evaluation. The architecture can be seen in Figure 2. The "GUI and Main Logic (GML)" part of the architecture is responsible for data aggregation, data inference, and data representation. Data is collected via the "Communication Module" and stored in the "Database Server". Deducing useful information from the collected data and making inferences according to the configured alarm definitions are handled by the "Inference Engine". When an alarm situation is observed, the "Alarm Engine" takes the required action, such as sending SMS or e-mail notifications to the related users. The "Web Server" enables remote monitoring via the web-based "GUI". The users can access all the information about their homes at any time and place via the web GUI.

Fig. 2 WeCare architecture

In the "Sensing and Perceptual Intelligence (SPI)" part, the data is collected from the sensors deployed in the house by the "Sensing and Perceptual Intelligence Engine", and successful delivery to the GML module is handled by the "Communication Module". The communication modules in both parts of the architecture use a proprietary "Application Layer Communication Protocol" for data exchange.

IV. WECARE SYSTEM COMPONENTS

A. Graphical User Interface
We developed the graphical user interface using Microsoft Silverlight for its support for animations, vector and 3D graphics, and video in web applications. Besides the rich content, the web application is kept quite simple for user-friendliness. The web page has a navigation panel organized as a tree on the left, and the multimedia content is displayed on the main screen. The object hierarchy starts with a "House" object. Each user in the system is associated with a single house. When the users log in, they are taken directly to the overview page of the home they are associated with. The house object contains several "Room" and "People" objects. The Room objects have several "Sensor" objects and "Camera" objects associated with them, whereas People objects only have Sensor objects associated. Additionally, each person has a default "Location" property which is of type "Room". The location of the people is identified by the RSS of the RFID antennas deployed in the house. In this way, the camera associated with a person can also be identified, and the stream from that camera is shown on the graphical user interface when the person is being monitored. Moreover, all camera streams
are provided on a separate page, enabling the users to instantaneously monitor any room. The sensor readings are grouped under the associated room or people objects. A snapshot of the room is provided together with the instant sensor values, the generated alarms and the live camera view of the room, as shown in Figure 3. The dynamic nature of this page allows updating the values and events on the page without any action from the user. Likewise, the sensors associated with the people are updated continuously and automatically. Besides, since the people objects are moving, the live camera view changes according to the location of the person being monitored. In this way, the person's presence and the duration of his presence in the rooms can also be recorded by the system for later evaluation. This also helps identify the activities of daily living pattern of the people and can be used to make inferences about unusual occurrences.
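The object hierarchy just described maps naturally onto a few typed interfaces. The sketch below only illustrates that hierarchy under the names given in the text; all field types and the camera-selection helper are assumptions, not the Silverlight implementation.

```typescript
// Sketch of the House -> Room/People object hierarchy described above.
// Only the object names come from the paper; the field types are assumptions.
interface Sensor {
  id: string;
  kind: "light" | "temperature" | "sound" | "humidity" | "acceleration";
  value?: number;
}
interface Camera { id: string; streamUrl: string; }

interface Room {
  name: string;
  sensors: Sensor[];
  cameras: Camera[];
}

interface Person {
  name: string;
  sensors: Sensor[];   // e.g. the body-worn acceleration sensor
  location: Room;      // the default "Location" property, of type Room
}

interface House {
  rooms: Room[];
  people: Person[];
}

// The camera shown for a monitored person follows his current location,
// which the system derives from the RSS of the RFID antennas.
function cameraFor(person: Person): Camera | undefined {
  return person.location.cameras[0];
}
```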
Fig. 3 Living room overview screen

A more detailed view of each sensor reading is also possible. When a specific sensor is selected for monitoring, its readings are shown on a dynamic graph so that changes in the sensor values can be identified visually. Alarms can either be configured using predefined alarms, such as "the person has fallen" or "there is a fire in the room", or by building more complex logical expressions over the sensor readings. The "Alarm" objects can be associated with a Room object, a People object, or both. For example, an alarm expression like "the person fell down in the bedroom and the bedroom is silent" can be defined to identify a serious fall in which the person has become unconscious. This flexible alarm setting mechanism also has a time filter for reducing the false alarm rate. The filter determines how long, in seconds, the alarm condition must hold before that alarm is raised and the required actions are taken. This timeout setting helps to eliminate instantaneous sensor reading errors and other environmental conditions that may be interpreted as emergency conditions. Moreover, all alarms can be configured to be active between specific time intervals and also according to the presence of other residents in the house. For example, a fall alarm can be configured to be inactive when someone such as healthcare personnel is present in the house besides the person being monitored. Generated alarms are logged after the emergency action, such as sending an SMS or e-mail, has been taken. The users can view the alarm details on the graphical user interface, as shown in Figure 4. Generated alarms are highlighted in red. When the users have handled the alarm situation, they can change the status of the alarm to "Verified" or, in the case of a false alarm, to "False Alarm". The false alarm feedback is important for future calibration of the sensor devices and alarm conditions.

Fig. 4 Alarm view screen

B. Main Logic
The Main Logic part of the system is composed of the "Inference Engine", the "Alarm Engine", and the "Communication Module". The Communication Module is responsible for the data transfer between the healthcare control center and the house being monitored. The communication is provided over socket-based TCP/IP according to the proprietary application level communication protocol. The Alarm Engine is responsible for sending e-mails or SMS messages according to the alarm definitions provided by the Inference Engine. The Inference Engine is responsible for data handling and manipulation. It behaves like an interface between the GUI and the database server, besides identifying the alarm conditions. The "Inference Engine" constantly checks for alarms by evaluating the alarms' conditions. It maintains the alarm start and end times with the following algorithm: the engine holds two database tables, AlarmOn and AlarmOff, for active and deactivated
alarms respectively, at a given time. AlarmList holds the alarm settings, like the start and end times of the alarm checking and the timeout value for the alarms, which is the wait duration before switching to the "Active" state once the alarm conditions occur. When the handling mechanism identifies that an alarm has occurred, it puts a timestamp on it and waits for the timeout value before making that alarm active. When an alarm becomes active, the alarm handling mechanism notifies the "Alarm Engine" to take the required actions. Moreover, when an active alarm condition is no longer satisfied, the handling mechanism timestamps the end time of that alarm and archives it for later evaluation by the users.

C. Sensing and Perceptual Intelligence

All sensor devices in the house are wirelessly connected to a base station mote which is connected to a computer located in the house. The XSniffer application installed on the base station mote listens to the radio, captures the data packets sent from the other motes and forwards these packets to the USB port where it is connected to the computer. The SerialDump application, in turn, collects the data packets from the port and forwards them to the "Sensing and Perceptual Intelligence Engine". The collected raw data are identified and classified according to the source node and source sensor type, and the values are converted into interpretable values. Finally, the data strings are sent to the communication module for delivery to the healthcare center. Since the data from the acceleration sensors are much more frequent than the other types of sensor data, fall detection is handled in the "Sensing and Perceptual Intelligence Engine (SPIE)". In order to interpret the three-axis acceleration sensor data, a window averaging method is used. Every time new values arrive, the averages of the x, y and z axis acceleration values are calculated and compared with the new values. According to the difference values, the characterization of the motion is determined. With the calibration experiments, we fuzzily classified the movement of the object as "Normal", "Fast" and "Extreme". Likewise, the RFID system works by interrogating the tags every second; therefore, identification of the people's location is also handled by SPIE. With the RSS measurements obtained from the antennas of the RFID reader, the people's locations are approximately estimated by the system. For calibrating the threshold values, we conducted several experiments. To eliminate interference, we kept the output powers of the antennas as low as -30 dBm. These low powers also provide suitable radiation emission for a healthcare application.
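The window-averaging step described above can be sketched as follows. The window size and the two thresholds are invented placeholders - the paper derives its classification fuzzily from calibration experiments - so this is only an illustration of the technique, not the SPIE code.

```typescript
// Sketch of the window-averaging classification described above: each new
// x/y/z sample is compared against the running per-axis window average and
// the largest difference is mapped to "Normal" | "Fast" | "Extreme".
// Window size and thresholds are invented placeholders.
type Motion = "Normal" | "Fast" | "Extreme";

class WindowClassifier {
  private window: number[][] = [];
  constructor(private size = 16, private fast = 0.5, private extreme = 1.5) {}

  classify(sample: [number, number, number]): Motion {
    // difference of the new sample from the per-axis window average
    let diff = 0;
    if (this.window.length > 0) {
      for (let axis = 0; axis < 3; axis++) {
        const avg = this.window.reduce((s, v) => s + v[axis], 0) / this.window.length;
        diff = Math.max(diff, Math.abs(sample[axis] - avg));
      }
    }
    this.window.push(sample);
    if (this.window.length > this.size) this.window.shift();
    return diff > this.extreme ? "Extreme" : diff > this.fast ? "Fast" : "Normal";
  }
}

const c = new WindowClassifier();
console.log(c.classify([0, 0, 1]), c.classify([0.1, 0, 1]), c.classify([2.5, 0, 1]));
// -> Normal Normal Extreme
```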
V. CONCLUSIONS

In this study, we proposed an architecture for a remote healthcare monitoring system that incorporates different sensing modalities and communication technologies. We also developed the WeCare system for evaluating the applicability of the proposed architecture. We have deployed the sensors and the software we developed in a home-like testbed setting, and we conducted several experiments for evaluating the testbed. The results indicate great potential for such multimodal sensor systems to be used in healthcare monitoring applications for the elderly and children. Since indoor environments are convenient for one-hop wireless device communication and broadband Internet access is available almost everywhere, we did not have many difficulties in terms of deployment and communications. Therefore, we concentrated on the organization of the knowledge and on calibrating the sensors, which are of great importance for identifying the alarm conditions in a continuous healthcare monitoring application. Future work on this study will include a self-learning system instead of rule-based alarm settings.
ACKNOWLEDGMENT

This research is supported by the Scientific and Technical Research Council of Turkey (TUBITAK) under grant number 108E207 and also by BAP under grant number 09A101P.
REFERENCES
1. Kinsella K, Phillips DR (2005) Global Aging: The Challenge of Success. Population Bulletin 60
2. Cook DJ, Augusto JC, Jakkula VR (2009) Ambient Intelligence: Technologies, Applications, and Opportunities. Pervasive and Mobile Computing 5:277-298
3. Schmidt A, Van Laerhoven K (2001) How to Build Smart Appliances? IEEE Personal Communications, pp 6-11
4. Choudhury T, Consolvo S, Harrison B, et al. (2008) An embedded Activity Recognition system. IEEE Pervasive Computing, pp 32-41
5. Stanford V (2002) Using Pervasive Computing to Deliver Elder Care. IEEE Pervasive Computing
6. McFadden T, Indulska J (2004) Context-aware environments for independent living. In: 3rd National Conference of Emerging Researchers in Ageing, pp 1-6
7. Chan M, Estève D, Escriba C, Campo E (2008) A review of smart homes - present state and future challenges. Computer Methods and Programs in Biomedicine 91:55-81
8. Sneha S, Varshney U (2009) Enabling ubiquitous patient monitoring: Model, decision protocols, opportunities and challenges. Decision Support Systems 46:606-619
9. Koch S, Hägglund M (2009) Health informatics and the delivery of care to older people. Maturitas
The Functionality Control of Horizontal Agitators for Blood Bags

Z. Vasickova, M. Penhaker, and M. Darebnikova
Department of Measurement and Control, VSB-Technical University of Ostrava, Ostrava, Czech Republic

Abstract— Blood collection from donors and its subsequent processing is an integral part of every hospital. Whole blood itself is only rarely used nowadays, except for autologous transfusions; that is why collected blood is separated into parts, usually red blood cell extract and plasma, eventually platelets. The platelets (thrombocytes) are stored in platelet agitators. The main function of the platelet agitator is to store platelet concentrates in continuous horizontal motion at a specified temperature. The reason for this work was the requirement of the hospital to verify the functionality of their agitators. Sometimes the blood bags were damaged after the agitation, meaning that there were some clusters of blood platelets, and this phenomenon was seen more often in one of the agitators. We tried to confirm or disprove whether this agitator was malfunctioning. We compared the function of the agitators and we also made several tests investigating the effect of surrounding events.

Keywords— transfusion, horizontal agitator, accelerometer, ZSTAR.
I. INTRODUCTION

Blood collection from donors and its subsequent processing is an integral part of every hospital. A typical donation is 450 ml of whole blood in 5-10 minutes. Whole blood itself is only rarely used nowadays, except for autologous transfusions; that is why collected blood is separated into parts, usually red blood cell extract and plasma, eventually platelets. Besides the typical whole blood donation, a separate constituent alone can be donated, e.g. platelets; a so-called "blood separator" is used for that. The platelets (thrombocytes) are stored in platelet agitators. The main function of the platelet agitator is to store platelet concentrates in continuous horizontal motion at a specified temperature. One batch contains 240x10^9 platelets. Besides the platelets, the sack also contains a nourishing solution for oxygen handover, which prevents the forming of clusters. The application of an additional resuspension solution enables conserving the thrombocytes' vitality even when 90% of the plasma is taken out. The application of sodium chloride, adenine and glucose is necessary for vitality, whereas other sugars are also used for cell membrane stabilization and haemolysis prevention. The sack has to have an inert surface, so that the platelets do not settle there. The sack is made of a special plastic
material that works like a semipermeable membrane for oxygen. The thrombocytes are stored at a temperature of 20-24 °C. A hermetic machine which allows temperature check-up is recommended; if such a machine is unavailable, the room should be able to keep the required temperature. The thrombocytes should be stored in a machine with agitation which:
- provides satisfactory agitation in the sack, including gas exchange through the wall of the sack,
- does not deform or crease the sack,
- is able to work at various speeds to prevent cluster forming.
The agitator is intended for storing the platelet sacks and their horizontal agitation. The thrombo concentrate is typically agitated from the production date to the date of its release at the clinical department to the recipient (patient). The agitation is not processing, but just storing of the platelets in the semipermeable membrane with continual agitation. The agitation has an important role: after the blood collection, the thrombo concentrate is put directly into the agitator, where it is agitated until it is required to be released to the recipient (patient).
Fig. 1 The horizontal agitator RL 45-B Tool

The platelets have to be agitated at a stable frequency, usually one oscillation per second. It is very important to keep the agitation so that the platelets do not form clusters,
which are considered to be a failed thrombo concentrate that cannot be used for donation.

Table 1 The properties of the horizontal agitator

Utilization: horizontal agitation
Rate: 60 oscillations per minute (+2; -3)
Voltage: 24 V, 50 Hz
Input power: 15 VA
Capacity: 48 bags - 400 ml
Proportions of grid: 450 mm x 30 mm
Proportions of agitator: 52 mm x 370 mm x 400 mm
Weight of agitator: 15 kg
Temperature of surroundings: +22 °C (+10 °C; -5 °C)
Relative humidity: 45 - 60%
Atmospheric pressure: 100 kPa (4 kPa)
The reasons for the work were the requirements of the hospital to verify the functionality of their agitators. Sometimes the blood bags were damaged after the agitation, meaning that there were some clusters of blood platelets, and this phenomenon was seen more often in one of the agitators. We tried to confirm or disprove whether this agitator was malfunctioning. We compared the function of the agitators and we also made several tests investigating the effect of the surrounding events. [1]
II. METHODS

A. Accelerometers

An accelerometer is a device for measuring acceleration and gravity-induced reaction forces. Single- and multi-axis models are available to detect the magnitude and direction of the acceleration as a vector quantity. Accelerometers can be used to sense inclination, vibration, and shock. The effects of gravity and acceleration are indistinguishable, following Einstein's equivalence principle. As a consequence, the output of an accelerometer has an offset due to local gravity. This means that, perhaps counter-intuitively, an accelerometer at rest on the earth's surface will actually indicate 1 g along the vertical axis. To obtain the acceleration due to motion alone, this offset must be subtracted. Along all horizontal directions, the device yields acceleration directly. Conversely, the device's output will be zero during free fall, where the acceleration exactly follows gravity. Modern accelerometers are often small micro-electromechanical systems (MEMS), and are indeed the simplest
MEMS devices possible, consisting of little more than a cantilever beam with a proof mass (also known as seismic mass). Mechanically, the accelerometer behaves as a mass-damper-spring system; the damping results from the residual gas sealed in the device. As long as the Q-factor is not too low, damping does not result in a lower sensitivity. Under the influence of gravity or acceleration the proof mass deflects from its neutral position. This deflection is measured in an analog or digital manner. Most commonly, the capacitance between a set of fixed beams and a set of beams attached to the proof mass is measured. This method is simple and reliable; it also does not require additional process steps, making it inexpensive. Integrating piezoresistors in the springs to detect spring deformation, and thus deflection, is a good alternative, although a few more process steps are needed. For very high sensitivities quantum tunneling is also used; this requires specific fabrication steps, making it more expensive. Another, far less common, type of MEMS-based accelerometer contains a small heater at the bottom of a very small dome, which heats the air inside the dome to cause it to rise. A thermocouple on the dome determines where the heated air reaches the dome, and the deflection off the center is a measure of the acceleration applied to the sensor. Most micromechanical accelerometers operate in-plane, that is, they are designed to be sensitive only to a direction in the plane of the die. By integrating two devices perpendicularly on a single die, a two-axis accelerometer can be made. By adding an additional out-of-plane device, three axes can be measured. Such a combination always has a much lower misalignment error than three discrete models combined after packaging. Micromechanical accelerometers are available in a wide variety of measuring ranges, reaching up to thousands of g's. The designer must make a compromise between sensitivity and the maximal acceleration that can be measured. [2]

B. ZSTAR Kit

For measuring the data, the ZSTAR kit (Freescale) was chosen. The ZSTAR design provides two small portable boards with the capability to demonstrate and evaluate various accelerometer applications that accommodate the cost-effective, low-power wireless connection: a Sensor Board containing the 3-axis accelerometer, an 8-bit microcontroller and the 2.4 GHz RF chip for wireless communication, and a USB stick.
Fig. 2 ZSTAR Sensor Board, Freescale

The Triple Axis Analog Accelerometer is a low power, low profile capacitive micromachined accelerometer featuring signal conditioning, a 1-pole low pass filter, temperature compensation, self test, 0g-detect which detects linear freefall, and g-Select which allows selection between 2 sensitivities. Zero-g offset and sensitivity are factory set and require no external devices. It includes a sleep mode that makes it ideal for handheld battery powered electronics. The USB stick again carries the 2.4 GHz RF chip for wireless communication and a microcontroller for the USB communication.

Fig. 3 ZSTAR USB Stick, Freescale

Features of the ZSTAR kit: sensing acceleration in 3 axes; wireless communication with sensors through the 2.4 GHz band; the RF protocol supports 16 sensors per one USB stick (receiver); the data rate of a sensor is 30, 60 or 120 Hz; typical wireless range is 20 m, two walls or one floor; auto calibration function of the sensor; USB communication on the receiver part; virtual serial port - interface for the GUI and serial port terminal; 8-bit/16-bit working modes; 3 push buttons on the sensor board. Current consumption: in normal run mode 1.8-3.9 mA, depending on the actual data rate; in sleep mode less than 900 nA. Power consumption depends on the current output values of the sensor; at a standstill, the board transmits only every 10th packet. The Sensor Board is powered by a coin-sized CR2032 3V battery [3].

III. TESTS AND RESULTS

We made several tests for the verification of the functionality of the horizontal agitators for blood bags. Figure 4 shows the measuring chain. We used the ZSTAR kit for measuring the acceleration curves and we simulated several events that could affect the agitation. For scanning the accelerometric data, the acceleration sensors were placed:
- S1 on the grate of the agitator,
- S2 on the top of the agitator,
- S3 on the blood bag in the agitator,
- S4 on the floor.
The surrounding events that were simulated:
- a knock on the side wall of the agitator,
- a gentle bump to the table,
- jumps by a person (120 kg) at a 1 meter distance from the table,
- walking in the room,
- putting a bottle with 400 ml of water on the agitator.
All of the tests were made on the two agitators.

Fig. 4 The measuring chain
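One plausible way to post-process the logged curves, given that a resting accelerometer reports a constant 1 g on the vertical axis (see Section II), is to subtract that static offset and compare the residual per-axis vibration of the two agitators. The sketch below is an assumption about the evaluation, with made-up sample values; it is not the authors' actual processing.

```typescript
// Sketch: remove the static 1 g gravity offset and compare the residual
// vibration (RMS) of one axis between the two agitators.
// The sample data are illustrative, not the measured records.
function vibrationRms(samples: number[], gravityOffset = 1.0): number {
  // subtract the static offset, then take the RMS of what remains
  const residual = samples.map(v => v - gravityOffset);
  return Math.sqrt(residual.reduce((s, v) => s + v * v, 0) / residual.length);
}

const agitator1Z = [1.02, 0.95, 1.08, 0.91, 1.06];
const agitator2Z = [1.01, 0.99, 1.02, 0.98, 1.0];
console.log(vibrationRms(agitator1Z) > vibrationRms(agitator2Z)); // true: more Z vibration
```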
Fig. 5 The measured data (agitator 1) without any event. The X axis has a -0.6 offset for better visibility.
Fig. 6 The measured data (agitator 1) without any event. The X axis has a -0.2 offset for better visibility.
After all the tests we can exclude the surrounding events which we tested: they have no effect on the functionality of the agitators, and they are not visible in the acceleration curves. If we compare the data from the two agitators, we can say that agitator 1 has more vibration in the Z axis than agitator 2. We suppose this has no effect on the blood bags. Both agitators work properly.
IV. CONCLUSIONS

The reasons for this work were the requirements of the hospital to verify the functionality of their agitators. Sometimes the blood bags were damaged after the agitation, meaning that there were some clusters of blood platelets, and this phenomenon was seen more often in one of the agitators. We tried to confirm or disprove whether this agitator was malfunctioning. We made a measuring chain based on accelerometric sensors which monitors the agitation. We compared the function of the agitators and we also made several tests investigating the effect of the surrounding events. We compared the results - the accelerometric records - and we can confirm that both horizontal agitators work properly. Only one of them probably has worn-down bearings; this phenomenon has no connection with the damaged blood bags. The surrounding events have no influence on the agitation either.
ACKNOWLEDGMENT

Grant-aided student, Municipality of Ostrava, Czech Republic. The work and the contribution were supported by the project of the Grant Agency of the Czech Republic - GAČR 102/08/1429 "Safety and security of networked embedded system applications". Also supported by the Ministry of Education of the Czech Republic under Project 1M0567.
REFERENCES
1. Product Manual RL-45B, Tool
2. Analog Devices, Accelerometer Design and Applications [online], [cited 2008-11-11]
3. Freescale, DRM103 Designer Reference Manual, Document Number DRM103, Rev. 0, 06/2008
4. Vašíčková Z, Augustynek M (2009) New method for detection of epileptic seizure. In: 9th International Conference Biomdlore 2009, September 9-11, 2009, Bialystok, Poland, Journal of Vibroengineering, ISSN 1392-8716
5. Cerny M, Penhaker M (2008) Biotelemetry. In: 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, June 16-20, 2008, Riga, Latvia, IFMBE Proceedings Vol. 20, pp 405-408, ISSN 1680-0737, ISBN 978-3-540-69366-6
6. Cerny M, Penhaker M (2008) The Circadian Cycle Monitoring. In: 5th International Summer School and Symposium on Medical Devices and Biosensors, June 01-03, 2008, Hong Kong, China, pp 41-43, ISBN 978-1-4244-2252-4
7. Cerny M, Penhaker M (2008) The HomeCare and Circadian Rhythm. In: Information Technology and Applications in Biomedicine, ITAB 2008, 30-31 May 2008, pp 245-248, DOI 10.1109/ITAB.2008.4570546
Author: Zuzana Vasickova, MSc
Institute: Department of Measurement and Control, VSB-TUO
Street: 17. listopadu
City: Ostrava
Country: Czech Republic
Email: [email protected]
Experimental Hardware Solutions of Biotelemetric System Dalibor Janckulik1, Leona Motalova1, Karel Musil1 and Ondrej Krejcar2 1 dept. of Measurement and Control Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science Ostrava, Czech Republic {dalibor.janckulik, leona.motalova, karel.musil} @vsb.cz 2 Centre for Applied Cybernetics, dept. of Measurement and Control Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science Ostrava, Czech Republic [email protected]
Abstract— This article covers the most important parts of the hardware platform of our biotelemetric system. It describes the use of standard devices from commercial manufacturers, such as embedded PCs, PDAs and the wireless ECG unit BlueECG communicating via Bluetooth. It outlines the major problems and disadvantages of their use and offers possible solutions in the form of purpose-built equipment of our own, whether as small auxiliary hardware or as ECG unit designs optimized for low power consumption and collaboration with mobile devices with limited computing capabilities.

Keywords - ECG; measurement; biotelemetry; mobile; tablet PC

I. EMBEDDED DEVICES
In the case of mobile applications, a possible solution for a mobile device intended for the collection, processing and sending of data to a database is an embedded computer. This is a single-board computer on which all the components necessary for its operation are already built in. During our research we used the NuWa-470 embedded PC board, manufactured by ICP-DAS, a company specialized in industrial solutions in the field of computer engineering. The NuWa-470 is fitted with a PXA255 series processor from Intel. This is a special platform with high integration of the system on one chip, designed to minimize consumption and thus increase suitability for battery-powered applications. The processor is compatible with the ARM architecture and is suitable for simple implementations of WinCE systems and special-purpose Linux distributions. It operates at a frequency of 400 MHz with 64 MB of RAM. These parameters are comparable with older models of PDAs and smartphones. The operating system and all data are stored on a CompactFlash memory card. Thanks to this, the board does not have any moving parts: due to its low-wattage processor there is no need for active cooling with a cooler and fan, and the entire appliance becomes very robust and mechanically resistant. These
computers can run both with and without a graphical interface. An ideal graphical interface is formed by a touch-screen LCD. In the case of mobile applications powered from batteries, it is necessary to take into account the consumption of the LCD, which may exceed the consumption of the embedded PC itself and shorten the battery life to a fraction of the time achieved without a graphical interface. A slight disadvantage of these devices is lower computing power compared to a conventional PC. Although the devices are equipped with operating systems designed so that their response appears to be real-time, they are not able to process large amounts of data using complex algorithms. This means that even on this platform it is not possible to parse the data received from the BlueECG units by Corscience and display them in real time.

II. TABLET
One piece of mobile equipment to be used for the processing and visualization of various biological signals is the TabletPC currently under preparation. It is a unit equipped with a widescreen LCD display with a resolution of 1280x800 pixels, which bears a touch layer for control without the use of common PC peripherals such as the keyboard or mouse used, e.g., with laptops. This allows work in conditions and locations where control with ordinary peripherals becomes difficult or impossible. The resolution of the LCD panel and its 15.4-inch diagonal are quite enough to visualize, for example, a multichannel ECG waveform set. The absence of an articulated or hinge mechanism makes the device mechanically resistant and easier in dimensions and shape. The processor core is an Intel Celeron M series. It is a powerful system, almost comparable to a desktop PC, but optimized for low power consumption and long operation on the integrated battery. This system has the usual interfaces such as USB ports, Ethernet, and wireless Bluetooth and WiFi. This makes it possible to
connect a wide range of end-measuring devices. During testing this system runs on modified versions of the Windows operating system: specifically, WinXP Embedded and the newly released testing platform based on Windows 7, called Windows Embedded Standard 2011. These platforms are not primarily developed as real-time operating systems, but their advantage is software compatibility with the sister operating systems for PCs, and thus natural full support for all device drivers. This leads to a significant reduction in development time for the software applications and for interfaces with devices that communicate through standard application interfaces. An example might be the use of the rigorous protocol for the USB bus in the case of WinCE, which builds on the USB-TMC standard. Its advantage is one eighth of the communication period length over the bus in comparison to ordinary USB, i.e. 125 us. However, this advantage is well balanced by the availability and simplicity of developing drivers for PC hardware and for end devices on ordinary PC USB. The next aim of this system is a gradual integration of special-purpose peripherals to communicate with devices equipped only with non-standard interfaces.

III. "DONGLE PARSER"
As stated in the description of the test system with the embedded single-board computer NuWa, it cannot run the operating system and at the same time perform the parsing of the packets from the BlueECG units used during the research. For this real-time parsing, a processor with much less computing power than the one the NuWa-470 is actually fitted with would suffice; ironically, that power is consumed by ensuring the smooth running of the operating system and all its components. The limiting factor here is the operating system itself. It is not optimized to handle the large number of bit operations needed for parsing the seemingly illogically composed packets into commonly understood data formats, although the operations are almost entirely computationally simple. This problem persists across the imaginary tree of applications, from PDA via embedded PC to a normal PC. The only immediate remedy is to give up the immediate display of data from devices using similar protocols: either use an executive database server for data preparation, which sends the processed data back for display, or display only a limited amount of the measured data - for example, retrieve, process and display 5 seconds of measurement and show the next leg of the measurement only after its processing, e.g. for 30 seconds. But such a view covers a relatively small share of the measured data, with the associated risk of possibly losing data important for diagnostics. For the above-mentioned problems, a relatively simple solution offers itself. It consists in the construction of our own "smart" Bluetooth module which, in addition to its own communication interface, also includes a small single-chip microcomputer (MCU) with no operating system, working according to the algorithm for parsing the mentioned packets - a parser. The program in the MCU would work with the individual bytes of the packet at the lowest hardware level, so that a very simple, small and inexpensive system with negligible energy demands would be sufficient for these operations.
Fig. 1. Measurement chain detail for MCU solution inside the USB BT dongle

Moreover, this process takes place reliably in real time, and the parser could be equipped with a large enough flash memory cache. This could save the processed data at times when the non-real-time operating system is unable to read and store them. This would ensure that no valuable data are lost during a short-term 'freeze' of the system.

IV. PURPOSE-BUILT ECG
During the real tests, battery consumption tests were executed. First, a set of two monocell batteries with a nominal voltage of 2.5 V was tested, without a satisfactory time of usage: they provide only 2 hours of operation. In the second case, a Lithium-Polymer cell with a nominal voltage of 3.7 V was used. In this case additional circuitry is needed to use a USB port for recharging the battery in the device. Figures 2 and 3 present the battery test screens of the 12 channel ECG. Figure 2 shows a voltage of 3 V (discharged battery), where the current is presented by the yellow trace on the oscilloscope screen and its average value is approximately 106 mA. Figure 3 shows the same at a normal charged battery voltage level, where the average current goes down to 81 mA. In the case of Li-Pol battery usage, the operation time of the 12 channel ECG is about 10 hours.
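A back-of-the-envelope check of the reported runtime is straightforward. The measured currents come from the text; the cell capacity is an assumption (a roughly 900-1000 mAh Li-Pol cell would be consistent with the reported ~10 hours).

```typescript
// Rough runtime check. The currents (106 mA near-discharged, 81 mA charged)
// are the measured values from the text; the capacity is an assumption.
function runtimeHours(capacityMah: number, currentMa: number): number {
  return capacityMah / currentMa;
}

const capacity = 950; // mAh, assumed cell capacity
console.log(runtimeHours(capacity, 81).toFixed(1));  // ~11.7 h at the charged-battery current
console.log(runtimeHours(capacity, 106).toFixed(1)); // ~9.0 h near the end of discharge
```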
Fig. 2. Battery test screen of 12 channel ECG. Discharged battery.

Fig. 3. Battery test screen of 12 channel ECG. Charged battery.

The second solution being prepared is the development of our own wireless ECG units with an optimized protocol. Moreover, these could be optimized with respect to consumption. The BlueECG units from Corscience contain an energy-intensive FPGA circuit which realizes a digital FIR filter over the measured data. The necessary filtration of the measured signal is not trivial, since the specific measurement of the ECG signal may contain interfering components of much greater amplitude than the actual measured signal. However, the intention is to shift these computationally and thus also energy-intensive operations away from the small battery-powered installations, the aim being to minimize the ECG unit's physical dimensions and weight, which implies the presence of very small batteries. A good approach, therefore, appears to be porting those calculations to the devices intended for visualization of the measured data and to the calculations used for the parser in the Bluetooth module. If our own ECG units with a simplified protocol were used, more computing resources would be released in the parser for other operations. According to preliminary calculations involving the dependence of current consumption on the sampling speed and on changes in supply voltage, we can design and construct an ECG unit with a current draw ranging from 20 to 25 mA. This value is four times smaller than that of the BlueECG; operating on the same battery, the operation time could thus be increased to more than 50 hours per charge.

V. GETTING DATA FROM THE DEVICE

The BlueECG equipment from Corscience that we used sends data which are unsuitable for direct display. Therefore, the received packet must be split into the data of interest to us and the management part. The control part (surrounded by red in the picture) is used for navigation in the communication and for verification of the status of the measuring device.

This section contains information on which electrodes have contact and are accepted in the measurement (L R F N V6 V5 V4 V3 V2 V1), the battery status of the device (BAT1), and whether the set heart rate limits are exceeded (GAME, HRL).

In the data section (surrounded by blue in the picture) the measured values of each electrode are stored. The hexadecimal values in the packet need to be converted into decimal values suitable for visualization. This operation is time consuming, not only for mobile devices but also for a classical personal computer. The values are arranged in frames, depending on the configuration of the device, such as a 12-electrode or bipolar ECG.
These frames are carried in an envelope which bears the desired value. A value may be composed of one or two bytes. For recognition, LSB (Least Significant Bit) marking is used, where the bit with the lowest value determines whether the next byte belongs to another value or to the previous one. If the LSB is set to 1, the byte is converted to a decimal value together with the next byte. Otherwise, the second byte is completed from the previous value.
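The continuation rule just described can be sketched as follows. The bit packing used here is illustrative only; the actual BlueECG frame layout is proprietary and is not reproduced.

```typescript
// Sketch of the LSB-continuation rule described above: if the LSB of a byte
// is 1, it is combined with the following byte into one value, otherwise it
// stands alone. The value encoding is an assumption for illustration.
function parseFrames(packet: Uint8Array): number[] {
  const values: number[] = [];
  let i = 0;
  while (i < packet.length) {
    if ((packet[i] & 0x01) === 1 && i + 1 < packet.length) {
      // LSB set: the value continues into the next byte
      values.push((packet[i] >> 1) | (packet[i + 1] << 7));
      i += 2;
    } else {
      values.push(packet[i] >> 1); // single-byte value
      i += 1;
    }
  }
  return values;
}

console.log(parseFrames(new Uint8Array([0x05, 0x02, 0x04])));
// 0x05 has LSB 1 -> combined with 0x02; 0x04 stands alone
```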
This procedure is repeated throughout the measurement and drains the device so much that it is no longer able to display the data. Therefore, we decided to separate out the part that expands the packet and converts it to decimal values. This part was moved to the server, which expands the data, saves them and sends them back. That, however, is part of the software project.

VI. CONCLUSIONS
We tested the Corbelt ECG and the 12-electrode BlueECG in applications communicating with embedded computers, standard PCs and PDAs. The measuring devices were previously tested under extreme conditions in the cryogenic chamber of the Teplice nad Becvou spa. Currently, work is underway on the design of the special-purpose tablet with a touch display. For simultaneous use with this tablet, the first prototype of the Bluetooth Dongle parser module, based on a Freescale MCU with USB, was tested. This module will be used for parsing the incoming data from the BlueECG, to eliminate the need for demanding operations in the device dedicated to visualization and to sending the data to a server. Currently in preparation is a circuit solution of a custom module for measuring ECG signals, optimized to minimize power demand by using solutions with a high degree of integration of special low-power electronic circuits.
REFERENCES
1. Janckulik D, Krejcar O, Martinovic J (2008) Personal Telemetric System - Guardian. In: Biodevices 2008, pp 170-173, Insticc Setubal, Funchal, Portugal
2. Krejcar O, Cernohorsky J, Janckulik D (2008) Portable devices in Architecture of Personal Biotelemetric Systems. In: 4th WSEAS International Conference on Cellular and Molecular Biology, Biophysics and Bioengineering, BIO'08, December 15-17, 2008, Puerto De La Cruz, Canary Islands, Spain, pp 60-64
3. Krejcar O, Janckulik D, Motalova L, Kufel J (2009) Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II. In: EuropeComm 2009, LNICST Vol. 16, pp 284-291, R. Mehmood et al. (Eds.), Springer, Heidelberg
4. Krejcar O, Janckulik D, Motalova L (2009) Complex Biomedical System with Mobile Clients. In: The World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 07-12, 2009, Munich, Germany, IFMBE Proceedings Vol. 25/5, O. Dössel, W. C. Schlegel (Eds.), Springer, Heidelberg
5. Krejcar O, Janckulik D, Motalova L, Frischer R (2009) Architecture of Mobile and Desktop Stations for Noninvasive Continuous Blood Pressure Measurement. In: The World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 07-12, 2009, Munich, Germany, IFMBE Proceedings Vol. 25/5, O. Dössel, W. C. Schlegel (Eds.), Springer, Heidelberg
6. Corscience GmbH & Co. KG, www.corscience.de
7. Texas Instruments Inc., www.ti.com
8. Analog Devices Inc., www.analog.com
9. Intel Corporation, www.intel.com
10. Microchip Technology Inc., www.microchip.com
11. ICP DAS USA Inc., www.icpdas-usa.com
ACKNOWLEDGMENT

This research has been carried out with the financial support of the research grants "Centre for Applied Cybernetics", Ministry of Education of the Czech Republic, under Project 1M0567, and "Safety and security of networked embedded system applications", Grant Agency of the Czech Republic (GACR), GA 102/08/1429.
Modern Tools for Design and Implementation of Mobile Biomedical System for Home Care Agencies Dalibor Janckulik1, Leona Motalova1 and Ondrej Krejcar2 1 dept. of Measurement and Control Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science Ostrava, Czech Republic {dalibor.janckulik, leona.motalova} @vsb.cz 2 Centre for Applied Cybernetics, dept. of Measurement and Control Technical University of Ostrava, Faculty of Electrical Engineering and Computer Science Ostrava, Czech Republic [email protected]
Abstract — The work describes the implementation of a prototype of the system currently under development. The system belongs to the group of tele-medical applications oriented towards the e-health sphere of service provision. This system can considerably help to streamline the work of nurses. The system is designed and implemented so that it is simple to extend. The system contains three basic parts: the desktop application intended for planning, the mobile application for the employees in the field, which creates the history from the plan, and the application for synchronization. The mobile equipment serves as a task list from which the employee takes the tasks he has to do and which were planned in advance by the superior employee. The system will be fully extendable with other required functionalities related to its domain.
Keywords — Biomedical, HomeCare, .NET Framework, Linq, SQL Server
I. INTRODUCTION
The project is based on the requirements of the HomeCare Agency Ostrava, Czech Republic, for use by mobile medical health personnel. The developed system is built on the needs gathered from the submitter (HomeCare Agency Ostrava, Czech Republic), as well as on an analysis of the current solution implemented in MS Access 98. Globally, the pilot desktop client and the mobile client are implemented. The data and the database model appear to be final, apart from definite modifications with respect to the original specifications. The data store is functional, but requires a number of adjustments before reaching the final state.

II. ANALYSIS AND PROPOSAL OF THE SYSTEM
The first step before development is a specification with follow-up analysis and a proposal of the solution. These paces are carried out in consultation with the customer or by other methods of requirements collection, and are dealt with in detailed text form (system specification) supplemented with UML diagrams. We follow the recommended processes for analysis and design. The system is described by static and dynamic diagrams.

A. General analysis and proposal

Contextual diagram - a view of the system as a unit and of its communication with external systems. In this case, the external systems are the roles of the users using the system. The system will be delivered in the form in which the users are displayed on the diagram. There is a possibility to adapt the rights of the roles according to needs, so it may happen that during the use of the system the diagram will no longer be valid.
Fig. 1 Data Flow Diagram of developed system.
The system will contain 4 types of users, whose rights are sketched in code after the list:
• Administrator - the main user of the system and the one whose rights cannot be modified. He has all rights in the system.
• Head sister - the rights of this role can be freely modified by the administrator. The basic rights of the head sister are to read from the system and to register in it the information about all dials, data and work plans.
• Nurse - she has the right to read from the system the same information as the head sister, but she can write to the system only the data of the patients.
• Doctor - he has read-only access to the system; he cannot influence the system in any way, nor is he allowed to write anything.
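As a rough illustration (not the project's actual implementation), the role-to-rights mapping above can be encoded in a small permission table; the permission names are hypothetical.

```typescript
// Sketch of the four roles and their rights as listed above; the permission
// names are hypothetical.
type Permission = "readAll" | "writeDials" | "writePlans" | "writePatientData" | "manageRights";

const rolePermissions: Record<string, Permission[]> = {
  administrator: ["readAll", "writeDials", "writePlans", "writePatientData", "manageRights"],
  headSister:    ["readAll", "writeDials", "writePlans", "writePatientData"],
  nurse:         ["readAll", "writePatientData"],
  doctor:        ["readAll"], // read-only access
};

function can(role: string, p: Permission): boolean {
  return rolePermissions[role]?.includes(p) ?? false;
}

console.log(can("doctor", "writePatientData")); // false
console.log(can("nurse", "writePatientData"));  // true
```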
The sequence diagram shown describes the process of user login from the viewpoint of the objects and the messages they exchange. The communication between the individual objects is synchronous, meaning that each object sends a message to another object and waits for its answer; while waiting, it performs no other operations.
Fig. 2. Architecture of developed system
B. The data proposal

The data model considerably differs from the object data model used in the application and also from the model proposed in MS Access. The original model contained 3 tables whose undesirable characteristic was the duplication of useless data. The new model tries to store the data more effectively; the data are therefore divided into more tables, so that a notation which is used several times in the same form is saved just once. The model was also extended with other entities - dials (code lists) of materials, zones, duties etc. The particular tables are tied into groups by relations.

C. The architecture of the system

The resulting system is composed of 3 basic parts which are interconnected. Each of the parts can be assigned to a block of the suggested architecture model - the MVC (Model View Controller) model. The possibilities of the client application are far wider than the possibilities of the PDA application. This results from the limitations of the architecture of the particular platforms and also from the demands on the system. On the lowest layer there is a possibility to divide the system into a hardware and a software part. The hardware comprises the server, the particular client stations, the mobile equipment and the communication interface. The communication can pass over a wired connection (intranet, USB cable) or a wireless connection (Wireless LAN, Bluetooth). One of the demands was offline operation of the system - mobile clients can/will synchronize only in a PAN-type network which is strictly separated from any local network and from the Internet. This solution reduces the possibility of data theft in a purely physical way. The server and the client can be the same personal computer. Any mobile equipment which fulfils the demands for the operation of the system can be used. The software layer can be divided in 2 ways - according to the hardware parts (the parts create the particular compact blocks of the application) or according to the aforementioned MVC model. Division according to the hardware/logical units:
• The server application - services - supplying the data from the database.
• The client application for PC - a standard application - works with the provided data and creates the human-machine interface.
• The client application for PDA - a standard mobile application - works with the available data in the field.
Model is the data layer built on SQL data storage technology; in the described system it is represented by the Linq and ADO.NET technologies. These technologies create the data interface between the database and the data model of
the application itself – the application data model is not identical to the database model. The View is represented by the individual GUI applications, i.e. the client application for the PC and the mobile application for the PDA. They graphically interpret the
data they operate on and hide the whole mechanism of data processing from the user. The Controller is the layer of methods handling the interaction of the user interface according to the currently enabled controls, which follow from the assigned roles/rights and the available data.
Fig. 3. Class diagram of developed system.
D. GUI of the platform
Both GUI applications are designed with respect to the target group of users and therefore try to minimize the number of steps needed to carry out the required work. The PC client application uses a system of hierarchically arranged tabs. At the highest level there are two tabs dividing the application into work with patient and user data and work with the dials. At the next level the tabs are divided into blocks for the individual entities. The last level consists of buttons invoking the particular operations on the given entity. The application serves for the administration of users, patients and dials and for the planning of duties. The mobile client is realized differently, as its controls follow from the different main view of the application. The application serves for displaying duties and entering comments. The controls are arranged so that
they behave like the controls of a classic Windows Mobile application.
III. TESTING OF THE DEVELOPED SYSTEM
The system passed through stress tests. These tests measure the response time of the web service to a user request as a function of the number of simultaneous accesses to the data stored in the database. For this testing, a software tool was created that allows setting the minimal and maximal number of simultaneous users and the step size by which the number of simultaneous accesses grows. The application presents all the gathered data graphically and also in tables. A number of tests were performed for different maximal counts of simultaneous accesses. As Figure 4 shows, for a count of accesses in the range of 500–1000 users, increasing by 50 per step, the response time of the web service rises gradually, and at 1000 simultaneous accesses it reaches
a value of about 2 s. This response is suitable for the demands of the agency, whose requirement was a maximal response time of 5 s; moreover, this is a small agency that will not reach 1000 simultaneous accesses. However, with a growing amount of data the response time will also lengthen.
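The stepping scheme of this stress test can be illustrated with a small harness such as the following C sketch. It is an illustration of the method only: the call to the web service is replaced by a stub, and the 500–1000 range with a step of 50 is taken from the test described above, so nothing here reproduces the authors' actual testing tool.

/* Stress-test harness sketch: step the number of simultaneous clients
   and measure the elapsed time per batch. The request is a stub. */
#include <stdio.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>

static void *client(void *arg)
{
    (void)arg;
    usleep(1000);   /* stub standing in for one web-service request */
    return NULL;
}

int main(void)
{
    enum { MIN_USERS = 500, MAX_USERS = 1000, STEP = 50 };
    static pthread_t threads[MAX_USERS];

    for (int n = MIN_USERS; n <= MAX_USERS; n += STEP) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (int i = 0; i < n; i++)          /* n simultaneous accesses */
            pthread_create(&threads[i], NULL, client, NULL);
        for (int i = 0; i < n; i++)
            pthread_join(threads[i], NULL);

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%4d clients: %.3f s\n", n, s);
    }
    return 0;
}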
Fig. 4. Graph of five iterations (I1–I5) of response time testing (tested on a server and on a standard PC).
Nevertheless, in the future we will further optimize this response time by moving the processing of requests from the application layer to the database layer. The database server on which the system was tested runs on a PC with a 2×4 GHz processor and 6 GB of DDR2 RAM.
IV. CONCLUSIONS
The presented results are the outcome of a diploma thesis that proposes and implements the actual solution. The system can be regarded as a prototype. One of the next steps, after fine-tuning the application, will be the creation of a framework for telemedical applications.
ACKNOWLEDGMENT
This research has been carried out with the financial support of the research grants “Centre for Applied Cybernetics”, Ministry of Education of the Czech Republic, Project 1M0567, and “Safety and security of networked embedded system applications”, Grant Agency of the Czech Republic, GA 102/08/1429.
REFERENCES
1. Krejcar, O., Janckulik, D., Motalova, L., Kufel, J., “Mobile Monitoring Stations and Web Visualization of Biotelemetric System Guardian II”. In EuropeComm 2009, LNICST Vol. 16, pp. 284–291, R. Mehmood et al. (Eds.), Springer, Heidelberg (2009)
2. Krejcar, O., Janckulik, D., Motalova, L., “Complex Biomedical System with Mobile Clients”. In World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 7–12, 2009, Munich, Germany, IFMBE Proceedings, Vol. 25/5, O. Dössel, W. C. Schlegel (Eds.), Springer, Heidelberg (2009)
3. Penhaker, M., Cerny, M., Martinak, L., Spisak, J., Valkova, A., “HomeCare – Smart embedded biotelemetry system”. In World Congress on Medical Physics and Biomedical Engineering, Vol. 14, PTS 1-6, Aug 27–Sep 01, pp. 711–714, Seoul, South Korea (2006)
4. Cerny, M., Penhaker, M., “Biotelemetry”. In 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, IFMBE Proceedings, Vol. 20, Jun 16–20, pp. 405–408, Riga, Latvia (2008)
Application of Embedded System for Sightless with Diabetes
L. Martinak, M. Penhaker
VSB – Technical University of Ostrava, FEI, K450, 17. listopadu 15, 708 33 Ostrava, Czech Republic
Abstract— For sightless or weak-eyed people with diabetes it is impossible to read the information shown on any display. Personal diagnostic systems for glucometry measurement are especially vitally important, yet those who are sightless need the help of another person when measuring. The paper presents the design and application of an electronic embedded system that enables purblind or sightless people to identify the measured data and use them for the further treatment process. The described system can be used in connection with any diagnostic system providing a data interface. Purblind or sightless people will thus be able to perform selfmonitoring (observation of the disease trend and dosage adjustment by the person himself).
Keywords— Selfmonitoring, Sightless, Diabetics, Glucometers
I. INTRODUCTION
Diabetes mellitus is a chronic disease caused by a malfunction in the use of blood sugar as an energy source. In the wake of this malfunction, typical pathological changes may arise in small blood vessels and neural fibres, with a higher sensitivity to specific organ complications such as dysfunction of the eyes, nerves, kidneys and others. The main consequence of the disease is a higher level of blood sugar (hyperglycaemia). There is currently no way to cure diabetes. The treatment is based on compensating the diabetes as well as possible, which means reaching such a balance that the level of blood sugar is as close as possible to that of healthy people. The diet is the basic treatment for diabetes compensation. It is based on regular consumption of energy-defined food and is closely bound to the daily schedule of the patient, mainly in terms of physical load and adequate weight. Diabetes may worsen not only by breaking the basic diet requirements but also through stress events, feverish disease or injury. If compensation by diet is insufficient, tablets must be used; if that is still insufficient, it is necessary to treat with diet and insulin. Prevention and timely treatment of this disease is one of the most serious tasks of both the oculist and the patient. A patient can affect eye complications by his own effort, both in the right and in the wrong way. It is necessary to
prevent the final stage of eye damage and preserve this most important human sense [2]. For purblind or sightless diabetic people it is impossible to read the current data shown on the displays of personal diagnostic systems (mainly glucometers). Those who are sightless need the help of another person when measuring.
II. MATERIALS AND METHODS
Measurement of glycaemia during the day serves for the evaluation of diabetes compensation and at the same time as a clue to treatment adjustments of the patient's activities such as physical load, sleep and stress. A glucometer is an electronic device that uses various principles to measure the patient's glycaemia. This makes it possible to promptly prevent incipient unfavourable changes of the diabetes. The significant thing is that a patient can perform the measurement on his own and therefore react to hypoglycaemia by taking additional food or, in case of hyperglycaemia, by applying insulin. This self-checking of glycaemia is called selfmonitoring. A patient can change the doses of insulin between visits to the doctor and thus watch the trend of glycaemia and therefore the diabetes compensation. A glucometer is usually provided with an inner memory (up to 500 records) where the measured data are stored (including date, time and insulin doses) for further processing on a doctor's PC; a doctor can then produce glycaemia curves and profiles. The accuracy of glucometers is in the range of 5–10 %. Figure 1 shows three examples, from the left: One Touch Profile (Lifescan), Medisence Card (Medisence) and FreeStyle (Therasence).
Fig. 1: Glucometers
The style of the test strips is given by the producer and the type of sensing. Each glycaemia measurement with a glucometer needs a new strip, i.e. the strips are single-use. The measurement consists of two steps that hold for all glucometers: a drop of blood taken from a finger or a forearm is put on the measuring spot, and after a waiting time the result is shown on the glucometer display. The goal is to design a voice output for use by a sightless diabetic that can be connected to a glucometer without any need to modify the glucometer. The voice output contains a speaker announcing the level of glycaemia and, additionally, other data from the glucometer display. The whole device should be portable and battery-powered. The required features are easy manipulation, low price, a minimum of control elements and comprehensible announcements. Some glucometers can be connected to a PC, physically through an output connector (jack) or through the connector for the sensor plug-in (when it is not being used for glycaemia measurement at the moment). Three wires connect the PC and the glucometer: input data, output data and ground. After connecting to the PC and installing the needed software, a user can communicate with the glucometer. The goal is to read the measured data into the PC; tables and charts are then created from the values of glycaemia stored in the memory from previous measurements. There is a variety of software tools, such as Diabass or InTouch. Glucometers communicate over a bidirectional serial line in asynchronous transfer mode, in a standard 8-bit format, no parity, using ASCII code. The voltage levels of the glucometer serial line match either the RS232 voltage levels used by the PC (so a direct glucometer–PC connection is possible) or TTL levels; in the latter case it is necessary to add an RS232/TTL converter, which is usually supplied with the glucometer or can be purchased. Three wires are used – TxD, RxD and ground. Glucometers have communication protocols that define which ASCII characters serve for communication between the glucometer and the PC. The computer is the master in this type of communication. Briefly, the PC sends out ASCII characters defining the requests, and the glucometer responds. Experiments showed that, besides the request–response communication, the One Touch Profile glucometer also sends other ASCII characters independently of the PC, and these match the data shown on the glucometer display. The block schematic describing the use of the serial interface is in Figure 2. The glucometer is connected by three wires to the electronics that process and send data through the serial interface. The electronics sort out the useful information, which is then processed by the
voice converter. The needed values, such as the glycaemia level or other data, are heard from the speaker. In cooperation with sightless people, blood samples were tested with glucometers with a biosensoric type of sensor (Medisence), a coulometric type of sensor (FreeStyle) and a photometric type of sensor (One Touch Profile). According to the sightless testers, the way of blood sampling is not significant, but the blood-sampling methodology must be tried and trained when using any of the types mentioned above.
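For illustration, the PC side of such a serial link could look like the following minimal POSIX sketch. The device path, the 9600 Bd rate and the request string are placeholders only – every glucometer defines its own ASCII command set in the vendor protocol, so this is a sketch of the connection style, not of any particular meter's protocol.

/* Minimal POSIX serial sketch: open the port, configure raw 8-bit,
   no-parity mode, send one placeholder ASCII request, print the reply. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* hypothetical port */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  /* raw 8-bit mode, no parity */
    cfsetispeed(&tio, B9600);         /* baud rate is an assumption */
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "DM?\r", 4);            /* placeholder request string */

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("glucometer: %s\n", buf);
    }
    close(fd);
    return 0;
}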
Fig. 2: Block diagram
The usual way of creating a voice-synthesised signal is to choose basic acoustic elements, process them, store them in memory and then, at the appropriate time, generate the signal by concatenating suitable segments from the stored database. As the basic units one may consider whole sentences, words, syllables or phones. Obviously, the longer the unit, the more speech must be processed and stored; the requirements on the number of recorded speech items are often extreme, even when the number of distinct words is small. A visual example of this situation is an automatic time annunciator that regularly says the precise time every 10 seconds with a typical voice. If all variations were recorded, we would reach 8640 recordings, while only 35 different words are needed for these reports. This example argues for using smaller units when creating synthesised reports. From this point of view the optimal solution would seem to be phonemes as the basic structural units of synthesised speech. Unfortunately, the smaller the structural unit of a synthesised report, the bigger the influence of improper articulation caused by concatenation of the units; this also holds when using words and syllables as structural units. For the implementation of the voice output of glucometers, the following units may be used for speech synthesis: phonemes, syllables or words. Implementation using phonemes or syllables requires excessively complex recording into the memory, and the final comprehensibility is inadequate to this complexity; refer to bibliography [1] and [3]. That is why this way of implementation is not suitable. Implementation using words as the basic units is more suitable, considering the table of words required for the reports. The storing of the words and their playback according to the required reports can be done either by recording through a processor with an A/D converter
into memory (e.g. EEPROM), or by using a voice processor from the firm ISD; refer to [5]. The voice processor includes the whole structure for recording and reproduction of the particular reports in its package, so using it for this problem is very suitable. For the practical implementation of the voice output connected to the glucometer, a scheme with the voice processor ISD 2560 and the microprocessor ATMEL 89C4051 [4] was used, as shown in Fig. 3.
Fig. 3: Scheme of the electronic construction
The voice processor ISD2560 is made by the firm ISD using the patented ChipCorder technology. The basic principle is analogue storage of the signal in its original form as an electric charge in a capacitor – similar to an EEPROM memory. It contains an internal oscillator, a microphone preamplifier, automatic gain control, an input filter, an output filter, an output amplifier, a memory array of 480,000 elements, an address bus, a power block, function control and an output multiplexer. After switching on the voltage source or after a reset, the microprocessor waits for incoming data on the serial bus in 8-bit format at 9600 Bd. Data are only received; if there is no signal for 30 seconds, the microprocessor puts itself into PowerDown mode. Incoming data are recorded in the interrupt routine through the SBUF register as an array of characters. Data are received in frames containing ASCII characters; all the characters received in one frame are written into an array of 40 characters. The array is then divided into single parts from which the needed data are generated. There is a list of all the words the speaker can say, each word accompanied by its length and its address in the memory. By comparing the received characters with the characters of the words on the list, the word matching the word on the glucometer display is found. Consequently, its address is sent to the port, according to the length and location in memory, for the duration of the word. Then the next word is decoded.
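A compact sketch of this word-lookup scheme is given below. The table contents, the addresses and the two hardware stubs are hypothetical stand-ins for the real 89C4051 firmware and its ISD2560 interface; only the lookup-and-play logic itself follows the description above.

/* Word-lookup sketch: match a token from the received frame against a
   table of recorded words and trigger playback of the matching record. */
#include <stdint.h>
#include <string.h>

typedef struct {
    const char *text;     /* word as it appears on the glucometer display */
    uint16_t    address;  /* start address in the voice-processor memory */
    uint16_t    ms;       /* duration of the recording in milliseconds */
} word_entry;

static const word_entry words[] = {   /* hypothetical table */
    { "GLUCOSE", 0x0000, 600 },
    { "MMOL",    0x0040, 450 },
    { "LOW",     0x0080, 400 },
};

/* Platform stubs: drive the ISD address bus and busy-wait. */
extern void isd_play(uint16_t address);
extern void delay_ms(uint16_t ms);

void speak_token(const char *token)
{
    for (size_t i = 0; i < sizeof words / sizeof words[0]; i++) {
        if (strcmp(token, words[i].text) == 0) {
            isd_play(words[i].address);   /* address sent to the port */
            delay_ms(words[i].ms);        /* hold for the word's duration */
            return;                       /* then decode the next word */
        }
    }
}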
III. RESULTS
The final product (Fig. 4) consists of a box and a fixed cable with a 3.5 mm jack connector; this connector connects the voice output to the glucometer.
Fig. 4: Final product
Its dimensions are 65×65×30 mm and it is powered by three AAA battery cells. If the electronic construction no longer receives data, it switches to a stand-by mode with a current consumption of less than 5 µA. In operating mode it consumes about 10 mA, but during playback the consumption may reach up to 50 mA. It does not contain any control elements such as buttons or switches, and it works without any intervention into the glucometer, because it uses the connector intended for connection with a PC. The electronic construction switches off automatically.
IV. CONCLUSIONS
Purblind or sightless people will be able to perform selfmonitoring (observation of the disease trend and dosage adjustments by themselves). The features of the electronic construction were practically tested by a blind diabetic. Apart from problems with dosing the sample into the glucometer (which is a matter of time and practice), the patient's assessment was very positive. Since the electronic construction can, after some adjustments, be applied to glucometers of various types (by different producers), contact with glucometer producers has been established. Further development of the voice output and the establishing of new contacts will continue with the intention of starting serial production.
ACKNOWLEDGMENT
The work and the contribution were supported by the project of the Grant Agency of the Czech Republic – GAČR 102/08/1429 “Safety and security of networked embedded system applications”, and by the Ministry of Education of the Czech Republic under Project 1M0567.
REFERENCES
1. Krejcar, O., Janckulik, D., Motalova, L. (2009) Complex Biomedical System with Mobile Clients. In World Congress on Medical Physics and Biomedical Engineering 2009, WC 2009, September 7–12, 2009, Munich, Germany, IFMBE Proceedings, Vol. 25/5, O. Dössel, W. C. Schlegel (Eds.), Springer, Heidelberg
2. Penhaker, M., Cerny, M., Martinak, L., Spisak, J., Valkova, A. (2006) HomeCare – Smart embedded biotelemetry system. In World Congress on Medical Physics and Biomedical Engineering, Vol. 14, PTS 1-6, Aug 27–Sep 01, pp. 711–714, Seoul, South Korea
3. Horak, J., Unucka, J., Stromsky, J., Marsik, V., Orlik, A. (2006) TRANSCAT DSS architecture and modelling services. Control and Cybernetics, vol. 35, pp. 47–71
4. Krejcar, O., Janckulik, D., Motalova, L., Kufel, J. (2009) Mobile Monitoring Stations and Web Visualization of Biotelemetric System Guardian II. In EuropeComm 2009, LNICST vol. 16, pp. 284–291, R. Mehmood et al. (Eds.), Springer, Heidelberg
5. Cerny, M., Penhaker, M. (2008) Biotelemetry. In 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, IFMBE Proceedings, Vol. 20, Jun 16–20, pp. 405–408, Riga, Latvia
6. Penhaker, M., Cerny, M., Rosulek, M. (2008) Sensitivity Analysis and Application of Transducers. In Proceedings of the 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 01–03, 2008, Hong Kong, China, pp. 85–88, ISBN 978-1-4244-2252-4
7. Vašíčková, Z., Penhaker, M., Augustynek, M. (2009) Using frequency analysis of vibration for detection of epileptic seizure. Global courseware for visualization and processing biosignals. In World Congress 2009, Sept 7–12, Munich, ISBN 978-3-642-03897-6, ISSN 1680-0737
8. Prauzek, M., Penhaker, M. (2009) Methods of comparing ECG reconstruction. In 2nd International Conference on Biomedical Engineering and Informatics, Tianjin: Tianjin University of Technology, pp. 675–678, ISBN 978-1-4244-4133-4, IEEE Catalog number CFP0993D-PRT
9. Prauzek, M., Penhaker, M., Bernabucci, I., Conforto, S. (2009) ECG – precordial leads reconstruction. In Abstract Book of the 9th International Conference on Information Technology and Applications in Biomedicine, Larnaca: University of Cyprus, p. 71, ISBN 978-1-4244-5379-5
10. Černý, M. (2009) Movement Monitoring in the HomeCare System. In IFMBE Proceedings, Ed. Dössel–Schlegel, Berlin: Springer, issue 25, ISBN 978-3-642-03897-6, ISSN 1680-0737

Author: Lukas Martinak
Institute: VSB – Technical University of Ostrava
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
TELEMON – An Embedded Wireless Monitoring and Alert System for Homecare
C. Rotariu1,2, H. Costin1,2, R. Ciobotariu1, F. Adochiei1, I. Amariutei1, and Gladiola Andruseac1
1 Gr. T. Popa University of Medicine and Pharmacy, Iasi, Romania
2 Institute for Computer Science, Romanian Academy, Iasi, Romania
Abstract— A common goal in medical information technology today is the design and implementation of monitoring solutions which provide patients with services that enhance their quality of life. The use of mobile information and communication technologies in home monitoring applications is becoming increasingly useful. Advances in wireless sensor network technology and the overall miniaturization of the associated hardware, low-power integrated circuits and wireless communications have enabled the design of low-cost, miniature, intelligent physiological sensor modules with applications in the medical industry. These modules are capable of measuring, processing and communicating one or more physiological parameters, and can be integrated into a wireless monitoring system. This paper describes the architecture of TELEMON, an embedded medical monitoring and alert system for patients with high medical risk. The system includes continuous monitoring of multiple vital signs, intelligent multi-parameter medical emergency detection, and an Internet/GSM connection to the monitoring centre. Keywords— monitoring, homecare, embedded devices, wireless.
I. INTRODUCTION
Wireless monitoring [1] represents a medical practice that involves remotely monitoring patients who are not at the same location as the health care provider. Generally, a patient has a number of monitoring devices at home, and their results are transmitted to the monitoring centre. For instance, computer-assisted rehabilitation involves unwieldy wires between the sensors and the monitoring device that are not very comfortable for normal activity [2]. We propose a wireless system, based on low-power microcontrollers and RF transceivers, that performs the measurements and transmits the data to the patient's personal server. The personal server, in the form of a Personal Digital Assistant (PDA) running a patient monitor application, receives the information from the wireless sensors, activates alarms when the measured parameters are above the limits, and communicates periodically with the monitoring centre using a WiFi or GSM/GPRS connection. The patient monitor reacts to potential risks and records the physiological information into a local database.
Doctors can receive information that covers a longer time span than a patient's normal stay in a hospital, and this information has great long-term effects on home health care, including reduced expenses. Physicians also have more access to experts, allowing them to obtain information on diseases and provide the best health care available. Moreover, patients can save time and money and gain comfort. During the last few years there has been a significant increase in the number of wearable health monitoring modules, ranging from simple pulse monitors and heart activity monitors to portable digital monitors. Although digital monitors are used only to collect data, they still remain the most used devices. Data processing and analysis are performed offline, making such a device impractical for continual monitoring and early detection of medical disorders. Devices with multiple sensors for rehabilitation had unwieldy wires between the sensors and the monitoring device, which may limit the patient's activity and comfort. Our system allows persons with different diseases and also elderly/lonely people to be monitored from the medical and safety points of view. In this way medical risks and accidents will be diminished. The TELEMON system will act as a pilot project for the implementation of a public e-health service, “everywhere and every time”, in real time, for people in hospitals, at home, at work, during holidays, on the street, etc.
II. MATERIALS AND METHODS
The main objective of TELEMON is the achievement of an integrated system, mainly composed of the following components in a given area: a personal network of wireless transducers (PNWT) on the ill person (Fig. 1), a data multiplexing block and a personal server (PS) in the form of a Personal Digital Assistant (PDA). After local signal processing, according to the specific monitored feature, the processed data are transmitted via the Internet or GSM/GPRS to the database server of the monitoring centre. The PNWT includes medical devices for vital signs (ECG, heart rate, arterial pressure, oxygen saturation SpO2, body temperature), a fall detection module and a respiration module, all these components having radio micro-transmitters, which allow
autonomous movement of the subject. The data processing is performed by the PDA. The results of the data processing are, principally and when necessary, various locally generated alarms transmitted to the central server. Other results of data processing on the server are various medical statistics necessary for the evaluation of the health status of the subject, for the therapeutic plan and for the healthcare entities.
Our wireless personal area network is realized using custom-developed sensor modules for physiological parameter measurement and a low-power microcontroller board (the eZ430-RF2500 board from Texas Instruments). The network is wirelessly connected to a personal server that receives the information from the sensors. The eZ430-RF2500 is a complete MSP430 [3] wireless development tool providing all the hardware and software for the MSP430F2274 microcontroller and the CC2500 2.4 GHz wireless transceiver [4]. Operating in the 2.4 GHz unlicensed industrial, scientific and medical (ISM) band, the CC2500 provides extensive hardware support for packet handling, data buffering, burst transmissions, authentication, clear channel assessment and link quality. The radio transceiver is interfaced to the MSP430 microcontroller using the serial peripheral interface. The 3-lead ECG amplifier (Fig. 2) is a custom-made device [5]. Each channel has a gain of 500, is DC coupled and has a cut-off frequency around 35 Hz. High common-mode rejection (>90 dB), high input impedance (>10 MΩ) and fully floating patient inputs are other features of the ECG amplifier.
Fig. 2 The ECG amplifier (block diagram)
Fig. 1 Personal network of wireless transducers
The medical devices for monitoring the vital parameters are the following: a 3-lead ECG module, an oxygen saturation module (SpO2) that also computes the cardiac rhythm, an arterial pressure module, a body temperature module, a respiration module and a fall detection module. These modules transmit data to a PDA through radio transceivers, operate in the 2.4 GHz band, and have a 5 m/10 m range indoors/outdoors.
For the body temperature measurement we use the TMP275 [6] temperature sensor (Texas Instruments). The TMP275 is a 0.5 °C accurate, two-wire, serial-output temperature sensor available in an SO8 package; it is capable of reading temperatures with a resolution of 0.0625 °C. The TMP275 is directly connected to the eZ430-RF2500 using the I2C bus and requires no external components for operation except pull-up resistors on SCL and SDA. The accuracy in the 35–45 °C interval is below 0.2 °C, and the conversion time for 12 data bits is typically 220 ms.
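As a side note, the described TMP275 readout can be pictured with the minimal Linux i2c-dev sketch below; in TELEMON the sensor hangs off the MSP430, so this PC-style code is only an illustrative assumption. The bus path and the 0x48 slave address are placeholders (the address depends on the A0–A2 pins), while the left-justified 12-bit register format and the 0.0625 °C per LSB scaling follow the description above.

/* TMP275 readout sketch over Linux i2c-dev: read the two-byte
   temperature register and scale the left-justified 12-bit value. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);       /* hypothetical bus */
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {      /* assumed address */
        perror("ioctl"); return 1;
    }

    uint8_t reg = 0x00;                        /* temperature register */
    uint8_t buf[2];
    write(fd, &reg, 1);
    read(fd, buf, 2);

    /* Upper 12 bits hold the result; 1 LSB = 0.0625 degC. */
    int16_t raw = (int16_t)((buf[0] << 8) | buf[1]) >> 4;
    printf("temperature: %.4f degC\n", raw * 0.0625);

    close(fd);
    return 0;
}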
The respiration module uses one of the most common methods to sense breathing – detecting airflow using a nasal thermistor [7]. Although most applications require only breathing detection, some applications and diagnostic procedures require monitoring of the respiratory rhythm. The wireless respiration sensor uses a thermistor for long-time monitoring during normal activity. The sensor is designed using the MSP430F2274 microcontroller with an on-chip 10-bit A/D converter for data acquisition and the CC2500 2.4 GHz wireless transceiver. The thermistor detects changes of breath temperature between the ambient temperature (inhalation) and the lung temperature (exhalation); placed in front of the nose, it detects breathing as a temperature change. The respiration signals are recorded using the MSP430F2274 A/D converter with a 10 Hz sampling frequency. The personal server computes the breathing frequency from the breathing interval as a number of breaths per minute; normal breathing frequency is 12–20 cycles/minute. The pulse oximeter sensor used is the Micro Power Oximeter board from Smiths Medical [8] (Fig. 3). The same sensor can be used for heart-rate detection and SpO2. The probe is placed on a peripheral point of the body such as a fingertip, ear lobe or the nose. The probe includes two light-emitting diodes (LEDs), one in the visible red spectrum (660 nm) and the other in the infrared spectrum (905 nm). The percentage of oxygen in the body is computed by measuring the intensity of each frequency of light after it has been transmitted through the body and then calculating the ratio between the two intensities. The pulse oximeter communicates with the eZ430-RF2500 through an asynchronous serial channel at CMOS low-level voltages. The data provided include %SpO2, pulse rate, signal strength, plethysmogram and status bits, and are sent to the eZ430-RF2500 at a baud rate of 4800 bps, 8 bits, one stop bit and no parity.
Fig. 3 The Micro Power Oximeter and eZ430-RF2500
The Micro Power Oximeter has the following measurement specifications: range 0–99 % functional SpO2 (1 % increments), accuracy ±2 at 70–99 % SpO2 (below 70 % is undefined), pulse range 30–254 BPM (1 BPM increments), accuracy ±2 BPM or ±2 % (whichever is greater). For the blood pressure measurement, a commercially available A&D UA-767PC BPM [9] was used (Fig. 4). The blood pressure monitor (BPM) takes simultaneous blood pressure and pulse rate measurements. It includes a bidirectional serial port communicating at 9600 bps. An eZ430-RF2500 communicates with the BPM on this serial link to start the reading process and receives the patient's blood pressure and heart rate readings. Once the readings are received, the eZ430-RF2500 communicates with the network and transmits them to the personal server.
Fig. 4 The blood pressure module
Our module for the fall detection of humans is based on an accelerometric technique. By using a tri-axial accelerometer, our system can recognize patient movements. Linear accelerations are measured to determine whether motion transitions are intentional. The algorithm for human fall detection uses the ADXL330 accelerometer [10] and the eZ430-RF2500 wireless module. The ADXL330 is a small, thin, low-power, complete three-axis accelerometer with signal-conditioned voltage outputs, all on a single monolithic IC. It measures acceleration with a minimum full-scale range of ±3 g. It can measure the static acceleration of gravity in tilt-sensing applications, as well as dynamic acceleration resulting from motion, shock, or vibration. The microcontroller calculates the acceleration magnitude a using the formula:
a = √(ax² + ay² + az²)
We determine that the subject has fallen if the condition a > 0.4 g holds.
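Transcribed directly into C, the test might look like the sketch below. The sampling loop and the conversion from raw ADXL330 readings to units of g are omitted; only the published magnitude condition is implemented (note that in practice further logic would be combined with it, since the static 1 g of gravity alone already exceeds 0.4 g).

/* Fall-detection sketch: magnitude of the acceleration vector
   compared against the paper's 0.4 g threshold. */
#include <math.h>
#include <stdbool.h>

#define FALL_THRESHOLD_G 0.4   /* threshold taken from the text above */

bool fall_detected(double ax_g, double ay_g, double az_g)
{
    double a = sqrt(ax_g * ax_g + ay_g * ay_g + az_g * az_g);
    return a > FALL_THRESHOLD_G;
}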
III. RESULTS
Fig. 5 shows the personal server, which was implemented by means of a PDA (Fujitsu-Siemens Loox T830). This personal medical monitor is responsible for a number of tasks, providing a transparent interface to the wireless medical sensors, an interface to the patient, and an interface to the central server. The USB interface is realized using a serial-to-USB transceiver (FT232BL) from FTDI [11] and enables the eZ430-RF2500 to remotely send and receive data through the USB connection using the MSP430 application UART. All transmitted data bytes are handled by the FT232BL chip. The interface also contains a voltage regulator providing 3.3 V to the eZ430-RF2500. The software on the personal server receives real-time patient data from the sensors and processes them to detect anomalies.
Fig. 5 The Personal server (block diagram)
The software running on the personal server was written using C# in Visual Studio .NET, version 8. The software displays temporal waveforms (Fig. 6) and computes and displays the vital parameters and the status of each sensor (the battery voltage and the distance from the personal server).
Fig. 6 The Personal server (ECG waveforms)
The distance is represented as a percentage of 100, computed from the RSSI (received signal strength indication, a measure of the power present in a received radio signal). If the patient has a previously entered medical record, information from it (the limits above which the alarm becomes active) is used in the alert detection algorithm, as sketched below. When an anomaly is detected in the patient's vital signs, the personal server software generates an alert in the user interface and transmits the information to the central server.
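A minimal sketch of such a limit check follows; the vital-sign names and the limit values are hypothetical placeholders for the limits read from the patient's medical record, and the notification itself is reduced to a printout.

/* Limit-based alert check sketch: flag a vital sign that leaves its
   configured alarm limits. */
#include <stdio.h>

typedef struct {
    const char *name;
    double low, high;   /* alarm limits from the medical record */
} vital_limit;

static const vital_limit limits[] = {   /* hypothetical values */
    { "heart rate [bpm]", 50.0, 120.0 },
    { "SpO2 [%]",         90.0, 100.0 },
    { "temperature [C]",  35.0,  38.5 },
};

/* Returns 1 and prints an alert when the value is outside its limits. */
static int check_vital(int idx, double value)
{
    if (value < limits[idx].low || value > limits[idx].high) {
        printf("ALERT: %s = %.1f outside [%.1f, %.1f]\n",
               limits[idx].name, value,
               limits[idx].low, limits[idx].high);
        return 1;   /* here the real system would notify the centre */
    }
    return 0;
}

int main(void)
{
    check_vital(0, 135.0);   /* example: tachycardia triggers an alert */
    check_vital(1, 97.0);    /* within limits: no alert */
    return 0;
}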
IV. CONCLUSIONS
This paper presents a prototype of a system that aims to be a secure, scalable embedded system designed for wireless monitoring of patients. The proposed system could also be used as a warning tool for monitoring during normal activity or physical exercise. The integrated system provides continuous medical care for patients through the Internet/GSM network infrastructure. In the monitoring centre, the appropriate infrastructure for monitoring, evaluation and storage of patient data is implemented, allowing monitoring of the patient and advice on treatment by health care experts. The use of mobile infrastructure enables portability and flexibility. Furthermore, through monitoring, the number of required hospital visits can be significantly reduced.
ACKNOWLEDGMENT
This work is supported by a grant from the Romanian Ministry of Education and Research, within the PN_II programme (www.cnmp.ro/Parteneriate), contract No. 11067/2007 (www.bioinginerie.ro/telemon).
REFERENCES
1. Milenkovic, A., Otto, C., Jovanov, E. (2006) Wireless sensor networks for personal health monitoring: Issues and an implementation. Computer Communications (Special issue: Wireless Sensor Networks: Performance, Reliability, Security, and Beyond), Vol. 29, pp. 2521–2533
2. Jovanov, E., Milenkovic, A., Otto, C., Groen, P.C. (2005) A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. Journal of NeuroEngineering and Rehabilitation
3. MSP430 datasheet at http://focus.ti.com/docs/prod/folders/print/msp430f2274.html
4. CC2500 datasheet at http://focus.ti.com/docs/prod/folders/print/cc2500.html
5. Rotariu, C., Costin, H., Arotaritei, D., Dionisie, B. (2008) A Wireless ECG Module for Personal Area Network. Buletinul Institutului Politehnic din Iasi, Vol. 1, pp. 45–54
6. TMP275 digital output temperature sensor at http://focus.ti.com/docs/prod/folders/print/tmp275.html
7. Jovanov, E., Raskovic, D., Hormigo, R. (2001) Thermistor-Based Breathing Sensor for Circadian Rhythm Evaluation. Biomedical Sciences Instrumentation, Vol. 37, pp. 493–497
8. Micro Power Oximeter at http://www.smiths-medical.com/Userfiles/oem/OEM.31392B1
9. UA-767PC blood pressure monitor at http://www.lifesourceonline.com/and_med.nsf/html/UA-767PC
10. ADXL330 accelerometer at http://www.analog.com/en/sensors/inertial-sensors/adxl330/products/product.html
11. FT232BL chip at http://www.ftdichip.com/Products/FT232BM.htm

Author: Cristian Rotariu
Institute: Gr. T. Popa University of Medicine and Pharmacy, Faculty of Medical Bioengineering
Street: Kogalniceanu 9-13
City: Iasi
Country: Romania
Email: [email protected]
Graphical Development System Design for Creating the FPGA-based Applications in Biomedicine
V. Kasik1 and M. Stankus1
1 VSB - Technical University of Ostrava / Department of Measurement and Control, Ostrava, Czech Republic
Abstract— Current specifications of digital design in biomedical applications require a new approach, in which pre-defined parametrizable functional blocks with relationships are used as graphical development system components. Application implementation tools that enable problem evaluation and verification in graphic notation exist as program-oriented frameworks. In this project an open program system for digital design applications, suited to programmable logic devices and application-specific integrated circuits, is presented. The fundamental implementation idea is based on the use of the BlackBox Component Builder framework (BBCB) as the basis for extension of a common kernel. The project tool exploits the flexibility of the VHDL language and the lucidity of schematic design. Keywords— VHDL, BBCB, ASIC, Digital Design, Framework.
I. INTRODUCTION
The availability of powerful FPGA devices is only one part of biomedical digital system design. Successful implementation of extensive designed logic functions requires easy-to-use development tools. The in-circuit design of digital systems can usually be created either as a diagram in a schematic editor or as a textual description in an HDL language. VHDL is very popular due to its standardization and good portability. However, an expressive advantage of schematic design still remains – the lucidity and clarity of the graphical description form, especially in hierarchical designs. An Integrated Development Environment (IDE) with a broad set of logic parts – from simple gates to parametrizable functional blocks – is very useful. Essential parts of such IDEs are tools for evaluation, verification and simulation of the designed logic. The application design tools providing model evaluation and/or verification exist as “program-oriented frameworks” [1]. The mentioned project uses a so-called MVC framework [1,2].
II. FRAMEWORK AS AN IDE TOOL
In their philosophy, frameworks are always oriented to a commonly used problem area in which they already offer a basic compact solution. Generally, that solution has been evaluated as the best and the most reliable. Adaptation of frameworks to specific user application purposes is usually done by deriving specific subclasses from abstract framework classes. The concept used in the framework kernel defines the structure of the resulting system as well as its division into classes and objects, their interoperability, control sequence, etc. In some ways these facts could be regarded as constraint rules; however, they contribute to design purity. Thereby the framework predefines the design parameters, and the designer can then fully concentrate on the specific application properties.
A. MVC Frameworks
MVC frameworks are largely used for the generation of documents, interactive forms, etc., using the Model-View-Controller architecture. The advantage of that architecture is an obvious specification of the roles implemented in separate parts of the code and a less difficult decomposition into three basic structures: Model, View and Controller.
Fig. 1: MVC framework
The Model is used for encapsulating the basic information together with its properties. The View represents the part oriented to interpreting the data (contained in the Model) to the user. In addition, the View carries information about its
own behaviour. Due to the fact that each View usually requires a different type of handling, there is also a specific type of control – the Controller. The typical layout of an MVC framework together with the relations between classes is shown in Fig. 1.
III. FLEXIBLE GRAPHICAL DEVELOPMENT SYSTEM DESIGN
The aim of this design is to provide an extensible program system for digital design applications, suited to programmable logic devices and application-specific integrated circuits. The system is created by extending the kernel of the framework – a development environment supporting the development of control and information systems. The digital design principles of the system use a methodology based on graphic interpretation of the modelled problem and its solution in terms of a generated structural/behavioural model in VHDL. The development tool selected for realizing this project is the BlackBox Component Builder [3] from Oberon microsystems, Inc., Technopark Zürich. BlackBox Component Builder (BBCB) is a tool for creating and developing components in its own framework-oriented environment. The basis of BBCB is the programming language Component Pascal, a component-oriented modification of Pascal. BBCB has only the basic means necessary for quick design and creation of applications, denoted as RAD (Rapid Application Development). The advantage of using this product in the project results from its orientation to the creation of so-called compound documents and the easily extensible basic graphic functions of frameworks. The portability of the created applications, which can run on all platforms supported by the system, is a further considerable benefit. The application design technique in this development environment enables easy maintenance and advancement and keeps the designs lucid and configurable. On the other hand, VHDL-based digital system development environments usually have a set of specific tools for design entry, synthesis, implementation and simulation. These tools are drawn up with the supposition that the designer is familiar with the design methodology and VHDL syntax. The designer should then have a plenitude of library components (standard or user-defined) and the possibility of adjusting their parameters and interconnections in complex project solutions. In many applications we can observe a certain similarity in the graphical representation of designs that could be generalized – digital filters, arithmetic and logic units, communication modules, etc., for example.
Fig. 2: Graphical Development System flowchart
The designed Graphical Development System block diagram is shown in Figure 2. The MVC framework described above is used as the IDE tool for creating the object-oriented graphical user interface (GUI). The design entry is then performed using that GUI together with the application-specific block library for digital systems. While in the Windows environment the objects in the library are OLE objects, in the MVC framework concept they carry the information as the Model. As a result of the graphic design entry in the specific GUI, a netlist (text file) is exported. Up to this point the tools can share many common attributes, independently of the application, which follows from the object-oriented approach. The netlist then passes to a strongly application-specific netlist compiler, which is an executable program created in a common programming language (C++). It is the tool performing the design decomposition, evaluation, testing and compilation into the VHDL description. The synthesis tool in Figure 2 is any commercial product needed to finish the digital design.
IV. NETLIST COMPILER DESIGN
The design of the graphical framework consists of several tasks. The main points relate to the selection of library blocks for digital filter modelling, customization of their parameters, recognition and decomposition of the Simulink M-file, and conversion to a VHDL file. Although it seems easy, these partial phases have many hidden subtleties. The key role in the IDE design is played by the netlist compiler (see Fig. 3), a stand-alone program created in C++. This program must be able to recognize any digital design structure drawn in Simulink. A block library is created alongside it to help with design recognition.
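To make the compilation step concrete, the small C sketch below turns one-line component records into VHDL entity instantiations. The record format and the port names are invented for illustration; the authors' compiler is a considerably larger C++ program that parses the netlist exported from the graphical editor.

/* Netlist-to-VHDL sketch: read records such as "AND2 u1 a b y"
   (hypothetical format) and emit direct entity instantiations. */
#include <stdio.h>

int main(void)
{
    char type[32], name[32], in0[32], in1[32], out[32];

    while (scanf("%31s %31s %31s %31s %31s",
                 type, name, in0, in1, out) == 5) {
        printf("  %s: entity work.%s\n", name, type);
        printf("    port map (i0 => %s, i1 => %s, o => %s);\n",
               in0, in1, out);
    }
    return 0;
}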
The project has three main development stages: the analytical phase, the design phase and the implementation phase. These stages are not yet completely finished; current work primarily concerns block library expansion and netlist compiler development.
Fig. 3: Functions of the Netlist Compiler
The netlist compiler (Fig. 2) integrates the design entry and the implementation tool into a more compact application. The primary target devices for implementing any parallel system design in this project are FPGA programmable logic devices, and the export description format is VHDL. Essential parts of the compiler are tools for evaluation, verification and simulation of the designed logic.
Fig. 4: Physical aspect of parallel structure mapping into an FPGA – example
The framework project results mainly in a universal software tool which can easily be extended by customizing the topical libraries. However, the functionality and flexibility of the design tools depend not only on the library elements but also on the netlist compiler and its algorithms for design decomposition and VHDL description generation. That is why the netlist compiler development takes most of the effort and working time.
V. CONCLUSIONS
An essential advantage of the mentioned systems is their readiness to solve complex problems while retaining lucidity and clarity. Thanks to the fact that the basic kernel of the system is optimized and tested, the created projects have a better probability of being reliable biomedical applications.
ACKNOWLEDGEMENTS
The work and the contribution were supported by the project of the Grant Agency of the Czech Republic GAČR 102/08/1429 “Safety and security of networked embedded system applications”. This work was also supported by the Ministry of Education of the Czech Republic under Project 1M0567 and partially by the faculty internal project “Biomedical engineering systems V”.
REFERENCES
1. Hrudka, G. “Component framework as a fast way to reliable and powerful applications”. In: Proc. AUTOS 2001, Ostrava, Czech Republic
2. Szyperski, C. “Component Software – Beyond Object Oriented Programming”. Addison-Wesley, ACM Press, New York, ISBN 0-201-17888-5
3. Pfister, C. “Component Software: A Case Study using BlackBox Components”. Oberon microsystems, Inc., 1998
4. Demus, D., Godby, J., Gray, G.W., Spiess, V., Vill, V. “Handbook of Liquid Crystals”. Wiley-VCH, 1998
5. Farahani, S. “ZigBee Wireless Networks and Transceivers”. Newnes, 2008
6. Penhaker, M., Cerny, M., Martinak, L., et al. “HomeCare – Smart embedded biotelemetry system”. IFMBE Proceedings, World Congress on Medical Physics and Biomedical Engineering, Aug 27–Sep 01, 2006, Seoul, South Korea, Vol. 14, pp. 711–714, 2007, ISSN 1680-0737, ISBN 978-3-540-36839-7
7. Penhaker, M., Cerny, M. “The Circadian Cycle Monitoring”. 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 01–03, 2008, Hong Kong, China, pp. 41–43, 2008, ISBN 978-1-4244-2252-4
8. Penhaker, M., Cerny, M., Rosulek, M. “Sensitivity Analysis and Application of Transducers”. 5th International Summer School and Symposium on Medical Devices and Biosensors, Jun 01–03, 2008, Hong Kong, China, pp. 85–88, 2008, ISBN 978-1-4244-2252-4
9. Kasik, V., Adam, G.K., Garani, G., Smaras, N., Srovnal, V., Koziorek, J., Kotzian, J. “Design and development of embedded control system for a lime delivery machine”. 10th WSEAS International Conference on Mathematical Methods and Computational Techniques in Electrical Engineering, May 02–04, 2008, Istanbul, Turkey, pp. 186–191, 2008, ISBN 978-960-6766-60-2
10. Kasik, V. “FPGA based security system with remote control functions”. 5th IFAC Workshop on Programmable Devices and Systems, Nov 22–23, 2001, Gliwice, Poland, IFAC Workshop Series, pp. 277–280, 2002, ISBN 0-08-044081-9
11. Havlík, J., Uhlíř, J., Horčík, Z. “Human Body Motions Classifications”. In IFMBE Proceedings EMBEC 2008 [CD-ROM], Berlin: Springer, 2008, ISBN 978-3-540-89207-6
12. Cerny, M., Martinak, L., Penhaker, M., et al. “Design and Implementation of Textile Sensors for Biotelemetry Applications”. In Proceedings of the 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Jun 16–20, 2008, Riga, Latvia, Vol. 20, pp. 194–197, 2008, ISSN 1680-0737, ISBN 978-3-540-69366-6
13. Cerny, M., Penhaker, M. “Biotelemetry”. In Proceedings of the 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Jun 16–20, 2008, Riga, Latvia, Vol. 20, pp. 405–408, 2008, ISSN 1680-0737, ISBN 978-3-540-69366-6
14. Cerny, M., Penhaker, M. “The HomeCare and circadian rhythm”. In Proceedings of the 5th International Conference on Information Technology and Applications in Biomedicine (ITAB), in conjunction with the 2nd International Symposium and Summer School on Biomedical and Health Engineering (IS3BHE), May 30–31, 2008, Shenzhen, Vols. 1 and 2, pp. 110–113, 2008, ISBN 978-1-4244-2254-8
15. Penhaker, M., Cerny, M., Martinak, L., et al. “HomeCare – Smart embedded biotelemetry system”. IFMBE Proceedings, World Congress on Medical Physics and Biomedical Engineering, Aug 27–Sep 01, 2006, Seoul, South Korea, Vol. 14, pp. 711–714, 2007, ISSN 1680-0737, ISBN 978-3-540-36839-7
16. Černý, M. “Movement Monitoring in the HomeCare System”. In IFMBE Proceedings, Ed. Dössel–Schlegel, Berlin: Springer, 2009, issue 25, ISBN 978-3-642-03897-6, ISSN 1680-0737
17. Penhaker, M., Zurek, P., Peterek, M. “Signal processing and visualization of multiparameter biosignals”. 9th International Conference BIOMDLORE 2009, Bialystok, Poland – http://www.biomdlore2009.pb.edu.pl
Low Cost Data Acquisition System for Biomedical Usage
M. Stankus1, M. Penhaker1 and M. Cerny1
1 VSB - Technical University of Ostrava / Department of Measurement and Control, Ostrava, Czech Republic
Abstract— This paper describes the architecture of a data acquisition device for educational purposes in biomedicine. The overall architecture of the device as well as its individual functional blocks are described, including the data conversion part, safety galvanic separation, control part, power supply and communication protocol.
Keywords— ADC, data acquisition, microcontroller, safety, USB.
I. INTRODUCTION
For educational data acquisition in biomedicine there is often a need for a low-cost yet safe device for sampling biomedical readings. The described data acquisition device is a portable system designed for the measurement of slow analogue values in biomedicine, especially the values of a photoplethysmograph and a 1-lead electrocardiograph (ECG). As this device is intended for education, there is no need for especially fast or precise analog-to-digital converters (ADCs) or for a high count of ADCs. As specialized biomedical sensors attached to this device may be in direct contact with the human body, certain parts of the device have to be galvanically separated. The device has to be simple to interface and must provide the ability to modify its functionality by changing the programming of the microcontroller.
II. CONCEPT OF THE DATA ACQUISITION DEVICE
The described data acquisition device can be partitioned into several functional blocks. Together, these blocks have to fulfil the following requirements:
• Accuracy – although this data acquisition device is aimed at education, certain basic performance requirements must be met. The ADCs should have a resolution of at least 12 bits. Sampling rates up to 1 kHz are required.
• Ease of use – as this data acquisition device is aimed at education, manipulation has to be easy and straightforward.
• Low cost – the described data acquisition device is supposed to be produced in large quantities, so low cost of the device is essential.
• Safety – a user manipulating the ADC inputs must not be able to come into contact with possibly dangerous voltages.
The functional blocks forming the whole data acquisition device can be seen in Figure 1.
Fig. 1: Block scheme of the data acquisition device
The central part of the device is a microcontroller implementing a Universal Serial Bus (USB) device interface and a Serial Peripheral Interface (SPI) bus interface used for communication with the ADCs. The microcontroller is powered by a low-dropout regulator (LDO). The whole device is powered from the USB power rail; no extra power supply is necessary. Galvanic separation of the SPI bus and of the ADC power rail is provided by specialized monolithic signal and power isolators. The analogue signal input is physically realized by a 15-pin D-SUB connector together with analogue ground and an external reference voltage for the ADCs.
III. DATA CONVERSION PART OF THE DEVICE
Data conversion is performed by a dual-channel, single-ended successive-approximation ADC with 12-bit resolution. A sampling rate of up to 100 ksps is theoretically possible, but it is assumed that the sampling rate will never exceed 1 ksps. As the sampling rate is selected by the user for the particular application, no anti-aliasing filter is provided; it is the user's responsibility to apply proper anti-alias filtering in accordance with the selected sampling rate. The user has the option to provide a specific reference voltage for the ADCs in the range of 0.25 V to 5 V.
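For a 12-bit converter with a user-supplied reference, the scaling is simply V = code × Vref / 4096. A tiny sketch follows (the function and names are invented for illustration):

/* ADC code-to-voltage sketch for a 12-bit converter with an external
   reference in the 0.25-5 V range. */
#include <stdio.h>
#include <stdint.h>

static double adc_code_to_volts(uint16_t code, double vref)
{
    return (code & 0x0FFF) * vref / 4096.0;   /* 12-bit full scale */
}

int main(void)
{
    printf("%.4f V\n", adc_code_to_volts(2048, 2.5));  /* mid-scale, 2.5 V ref */
    return 0;
}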
IV. POWER SUPPLY AND GALVANIC SEPARATION
The described data acquisition device needs two power supplies. The first power supply generates power for the microcontroller; the second generates power for the ADC. Both power supplies feed the isolation device that provides galvanic separation of the SPI bus. The supply powering the microcontroller is a simple low-dropout regulator generating a 3.3 V voltage from the 5 V USB power rail. The supply powering the ADC is galvanically separated by a DC-to-DC converter generating a 5 V voltage from the 5 V USB power rail. The SPI bus is galvanically separated using a monolithic four-channel isolated coupler providing both isolation and 3.3 V to 5 V logic-level translation. Both the DC-to-DC converter and the monolithic isolation coupler have a safety approval of 2500 V RMS for 1 minute per the UL 1577 regulation.
V. CONTROL PART OF THE DEVICE
The control part of the device is implemented with a Microchip PIC24F 16-bit microcontroller providing a computational performance of 16 MIPS at a clock rate of 32 MHz. The provided computational performance is more than sufficient for a simple data acquisition task. In fact, the microcontroller implements just the USB and SPI stacks, control routines parsing the messages of the communication protocol, and a finite state machine (FSM) driving the behaviour of the device. If there is a need for additional signal processing in the future, the microcontroller has enough resources to meet it when appropriate firmware is provided.
VI. COMMUNICATION PROTOCOL
As the sole purpose of this data acquisition device is to sample analogue data, no signal processing takes place inside the device; it is the user's responsibility to process and evaluate the acquired data. For the sake of simple interfacing, the communication protocol is as plain as possible. The connection of the data acquisition device to the data processing workstation is realized by a USB connection. The data acquisition device implements the Communication Device Class part of the USB specification; this way, the device emulates RS232 connectivity across the USB connection. All messages processed by the data acquisition device are of fixed length. Data sampled on both ADC channels are aggregated into one message; as a consequence, the sampling rates of both analogue channels are equal. The general format of the data packet can be seen in Figure 2. A data packet with a total length of 32 bits is transported as four consecutive octets. The start of a data packet is unambiguously
Fig. 2: Format of data packet
identified by a logical one in the most significant bit (MSB) of the packet's first octet. Bits denoted O3 to O0 are the packet's operation code; their value identifies the type of the packet. The exact specification of packet types is beyond the scope of this paper; examples are the acquired data itself, setting of the sampling rate, and a query for the configured sampling rate. Bits denoted D23 to D0 are data bits containing the packet payload, i.e., the actual transported data, for example acquired values or the requested sampling rate. The nature of the RS232 interface, albeit emulated, does not permit reordering of transported octets. This keeps the communication protocol very simple; no sequence numbering is necessary.
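For illustration, a small sketch of how a workstation-side program might pack and parse these 32-bit packets. The start flag, the 4-bit opcode O3-O0 and the 24-bit payload D23-D0 follow the description above; the exact bit position of the opcode within the first octet, the MSB-first octet order, and the packing of the two 12-bit samples into the payload are assumptions made for this sketch:

```python
import struct

# Sketch of the fixed-length 32-bit packet: bit 31 = start flag (per the
# paper), bits 27..24 = opcode O3..O0 and bits 23..0 = payload D23..D0
# (assumed positions), transported MSB-first as four octets (assumed order).
def build_packet(opcode: int, payload: int) -> bytes:
    assert 0 <= opcode < 16 and 0 <= payload < (1 << 24)
    word = (1 << 31) | (opcode << 24) | payload
    return struct.pack(">I", word)  # four consecutive octets

def parse_packet(octets: bytes):
    (word,) = struct.unpack(">I", octets)
    if not word & (1 << 31):
        raise ValueError("missing start flag in MSB of first octet")
    return (word >> 24) & 0x0F, word & 0xFFFFFF  # (opcode, payload)

def split_samples(payload: int):
    # Both ADC channels aggregated in one packet: two 12-bit samples
    # (the exact packing within D23..D0 is an assumption).
    return (payload >> 12) & 0xFFF, payload & 0xFFF
```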
VII. CONCLUSIONS

Regular biomedical data acquisition devices are usually very expensive. The described architecture of a simple data acquisition device proves that such a device can be cheap and simple without sacrificing safety. As galvanic separation is implemented between the microcontroller and the ADC, the device can easily be modified for an ADC with a greater number of inputs or with different properties. Although the provided communication protocol is flexible enough for most tasks, its functionality can be extended by changing the microcontroller firmware.
ACKNOWLEDGEMENTS

The work and the contribution were supported by the Grant Agency of the Czech Republic (GAČR) under project 102/08/1429 "Safety and security of networked embedded system applications", and by the Ministry of Education of the Czech Republic under project 1M0567.
Embedded Programmable Invasive Blood Pressure Simulator
J. Kijonka, M. Penhaker
VSB - Technical University of Ostrava, FEI, Ostrava, Czech Republic

Abstract— This paper deals with the design and realization of a programmable invasive blood pressure simulator. The parameters of the simulator match those of a blood pressure sensor. The simulator is able to generate a programmable voltage signal whose behavior corresponds to a blood pressure curve; the pressure curves can be monitored on patient monitors. The control unit of the simulator is a microcontroller, and the generated signal behaviors are stored in programmable memory. The display unit of the simulator is a character LCD, and the simulator is controlled by four push buttons. This work can be used for testing and calibrating patient monitors and for educational purposes.
The IBP transducer is a passive resistive element which works as a two-port network. Its low-level output signal [μV] is linearly dependent on the input voltage [V]. Pressure measurement is provided by four tensiometers (strain gauges) connected in a full Wheatstone bridge. This arrangement is insensitive to ambient temperature changes.
Keywords— Blood pressure, Invasive, Simulator, Current Source

I. INTRODUCTION
Blood pressure (BP) is one of the principal vital signs, and there are many methods of measuring it. Indirect (noninvasive) methods are generally used: the noninvasive auscultatory and oscillometric measurements are simpler and quicker than invasive measurements and are less unpleasant for the person, although they may yield lower accuracy in numerical results. Non-invasive measurement methods are more commonly used for routine examination and monitoring. Invasive measurement of BP is the most accurate method of arterial blood pressure measurement. Arterial blood pressure is measured through an arterial line: direct measurement of arterial pressure is obtained by placing a cannula needle in an artery (usually radial, femoral, dorsalis pedis or brachial). The cannula must be connected to a sterile, fluid-filled system, which is connected to an electronic pressure transducer. The advantages of this system are that pressure is constantly monitored and a waveform (a graph of pressure against time) can be displayed. Cannulation for invasive vascular pressure monitoring is infrequently associated with complications such as thrombosis, infection, and bleeding; patients with invasive arterial monitoring require very close supervision. Pressure monitoring systems acquire pressure information for display and processing: the blood pressure curves and the systolic, diastolic and mean pressure values can be monitored on modular patient monitors, which have invasive blood pressure (IBP) modules with several inputs for IBP transducers. [1]
Fig. 1 Invasive blood pressure measurement

For testing, calibration and educational purposes on a patient monitor IBP module, it is necessary to have equipment able to generate a programmable voltage signal corresponding to a blood pressure curve. The design and realization of such a programmable IBP simulator is described below.

II. MATERIALS AND METHODS
A. Requirements specification
• Programmable generation of an output voltage signal corresponding to constant blood pressure, sine-wave blood pressure and arterial blood pressure curves.
• User-defined output signal with adjustable parameters (amplitude, frequency, etc.).
• User interface with character LCD and push buttons.
B. Design method

The design of the invasive blood pressure simulator follows from the general parameters and the inner circuit of the NPC-100 blood pressure sensor.
The IBP simulator I/O parameters match the NPC-100 blood pressure sensor I/O parameters. A schematic diagram of the NPC-100 pressure sensor is shown in Fig. 2; the full Wheatstone bridge contains four strain gauges. General parameters of the NPC-100 are listed in Table 2. The output voltage of the sensor [μV] depends on the input excitation [V]: for example, for an input excitation of 1 VDC and a pressure of 1 mmHg, the output voltage is about 5 μV.
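As a worked example of the quoted sensitivity figure (about 5 μV per volt of excitation per mmHg), a short sketch of the transducer's linear model:

```python
# Sketch of the transducer's linear model, using the ~5 uV/V/mmHg
# sensitivity quoted for the NPC-100 in the text.
SENSITIVITY_UV_PER_V_PER_MMHG = 5.0

def bridge_output_uv(excitation_v: float, pressure_mmhg: float) -> float:
    """Expected differential bridge output in microvolts."""
    return SENSITIVITY_UV_PER_V_PER_MMHG * excitation_v * pressure_mmhg

print(bridge_output_uv(1.0, 1.0))    # ~5 uV, the example from the text
print(bridge_output_uv(5.0, 120.0))  # 120 mmHg at 5 V excitation -> 3000 uV
```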
The desired Vout can be obtained by adjusting the current sources. The current source circuit, shown in Fig. 4, is operational amplifier-based. The advantages of this circuit are:
• Generation of low currents [μA].
• Adjustable current range.
• Current source output linearly dependent on the input voltage Vin.
• The possibility of connecting any desired number of independently adjustable current sources to point A or B of the Wheatstone bridge.
Fig. 2 NPC-100 schematic diagram [2]

The NPC-100 schematic diagram inspires the design of the IBP simulator output circuit. The strain gauges of the NPC-100 pressure sensor can be replaced by resistors of the same values; this yields a Wheatstone bridge with an output voltage Vout of 0 VDC. Changes of the bridge output voltage Vout can then be obtained by current sources connected to points A and B of the bridge (Fig. 3). Current source 1 produces a negative potential between the +Vout and -Vout terminals, whilst current source 2 produces a positive potential between the +Vout and -Vout terminals.
Fig. 3 IBP simulator output circuit
Fig. 4 Current source circuit [3]

In our application, resistor R2 in Fig. 4 is a digital potentiometer, which allows program control of its resistance; changing the R2 value sets the generated output current I.

Current Sources Adjusting

Three current sources must be connected to points A and B of the Wheatstone bridge to generate a specific voltage on the Vout output:
1. A current source corresponding to blood pressure from 0 mmHg to 330 mmHg.
2. A current source corresponding to blood pressure from 0 mmHg to 50 mmHg.
3. A current source corresponding to a blood pressure of -55 mmHg.
Current sources 1 and 2 are connected to point B of the bridge; they produce a positive potential between the +Vout and -Vout terminals. Current source 3 is connected to point A of the bridge; it produces a negative potential between the +Vout and -Vout terminals. This source is adjusted to a constant value corresponding to a blood pressure of -55 mmHg, so its resistor R2 (Fig. 4) can be fixed at a constant value.
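To illustrate how a digital potentiometer could map a desired pressure onto current source 1, here is a sketch under the assumption of a linear pressure-to-wiper-code relationship (the actual transfer function of the analog circuit is not given in the paper):

```python
# Sketch (assumed-linear mapping): choosing an 8-bit wiper code for a
# digital potentiometer such as the AD8400 mentioned later in the paper,
# so that current source 1 tracks a desired pressure.
FULL_SCALE_MMHG = 330.0  # range of current source 1

def wiper_code(pressure_mmhg: float) -> int:
    """Quantize 0..330 mmHg onto the 0..255 codes of an 8-bit potentiometer."""
    if not 0.0 <= pressure_mmhg <= FULL_SCALE_MMHG:
        raise ValueError("pressure outside current source 1 range")
    return round(pressure_mmhg / FULL_SCALE_MMHG * 255)

print(wiper_code(120.0))  # ~93
```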
C. Block Diagram

The block diagram of the IBP simulator is shown in Fig. 5; the simulator output circuit is labeled by yellow blocks. Current sources 1 and 2 are programmably adjustable by digital potentiometers communicating over the serial peripheral interface (SPI). The control unit of the simulator is an ATMEL ATmega16 microcontroller, and the generated signal behaviors are stored in the microcontroller's internal flash memory. The user communication interface consists of an ATM2004 character LCD (four lines of twenty characters) and four push buttons. The output voltage signal Vout from the Wheatstone bridge is fed into a differential instrumentation amplifier INA101HP with a gain of 200, whose output is fed into an operational amplifier OP07CP connected as a summing amplifier. The input voltage signal Vin is fed through a voltage divider into an OP07CP connected as a voltage follower. In both cases the OP07CP output should be within the range of the internal reference of the ATmega16 AD converter (0 V to 5 V or 0 V to 2.56 V). The IBP simulator has two output connectors: one dedicated to connecting the patient monitor, the other a BNC connector with the amplified output signal.
D. User Menu

The user has several options to generate an output signal corresponding to a blood pressure curve:
• Constant pressure: 0 mmHg to 300 mmHg
• Offset: -25 mmHg to 25 mmHg
• Primitive signal generation: sine (amplitude [mmHg] and frequency [Hz] settings), …
• Blood pressure curve generation
Constant pressure generation can be used for patient monitor calibration. Blood pressure curve generation produces normal or deformed pressure curves which can simulate some heart diseases, which is useful for educational purposes.
Fig. 6 Windows of user communication interface

Additional options:
• ADC Vin range setting: low → 0 to 5.8 V with 5.7 mV resolution; high → 0 to 11 V with 10.8 mV resolution
• ADC Vout range setting (Vin = 5 V): low → -150 mmHg to 377 mmHg with 0.5 mmHg resolution; high → -150 mmHg to 850 mmHg with 1 mmHg resolution
• LCD backlight setting (dark to bright)
Some user interface options displayed on the LCD are shown in Fig. 6 (notes: Uvst [V] = Vin [V], Uvys [μV] = Vout [μV], ofs [%] = offset, from -25 mmHg to 25 mmHg). The main menu is in the left LCD window; the right window shows the constant pressure generation option.

Blood pressure curve

The data of each pressure curve are stored in the microcontroller's flash memory. Only one period of each pressure curve is stored, with a sampling rate of 200 Hz and 8-bit data resolution. Pressure curve data can be uploaded by the programming software.
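As an illustration of this storage format, a sketch that precomputes one period of a sine-shaped curve at the stated 200 Hz sampling rate and 8-bit resolution (the sine shape and the scaling of mmHg onto the 0-255 codes are assumptions made for the example):

```python
import math

# Sketch: preparing one period of a pressure curve the way the paper stores
# it (200 Hz sampling, 8-bit samples, one period only).
FS_HZ = 200

def one_period_sine(freq_hz: float, amp_mmhg: float, offset_mmhg: float,
                    full_scale_mmhg: float = 330.0) -> list[int]:
    n = round(FS_HZ / freq_hz)  # samples per period
    samples = []
    for i in range(n):
        p = offset_mmhg + amp_mmhg * math.sin(2 * math.pi * i / n)
        samples.append(max(0, min(255, round(p / full_scale_mmhg * 255))))
    return samples

curve = one_period_sine(freq_hz=1.25, amp_mmhg=20, offset_mmhg=93)  # 160 samples
```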
Fig. 5 Block diagram of IBP simulator
Table 2 IBP simulator and NPC-100 general parameters [2]

Parameter          NPC-100      IBP simulator   Units
Pressure Ranges    -30 to 300   -30 to 300      mmHg
Input Excitation   1-10         1-10            VDC
Input Impedance    1800-3300    3710-3730       Ω
Output Impedance   285-315      300             Ω
Offset setting     -25 to 25    -25 to 25       mmHg
Sensitivity        4.95-5.05    —               μV/V/mmHg
Resolution         —            1.3             mmHg

Fig. 7 Blood pressure curve – data stored in flash memory

III. RESULTS

The final device is described below. The input voltage for the device is 230 VAC. A 5 VDC source supplies the microcontroller and the LCD device; a ±15 VDC source supplies the instrumentation and all operational amplifiers. The digital part consists of the main board with the microcontroller, I/O connectors and the LCD display. The analog part is the Wheatstone bridge controlled by the current sources. The output connector carries the output voltage signal lines from the Wheatstone bridge (+Vout, -Vout) and the amplified voltage signal lines (ADC1, GND).
The resolution of the IBP simulator is determined by the 8-bit AD8400 digital potentiometers; it can be increased by using digital potentiometers with a greater bit depth.

IV. CONCLUSIONS
The developed system was created for testing and calibration of multifunction monitor systems for invasive blood pressure measurement. The calibration system is conceived as general-purpose and allows a wide spectrum of adjustment according to the requirements of the tested monitor. The output parameters and accuracy were tested against certified medical instruments for IBP measurement, with a deviation of 3.7%. The built-up simulator is ready for use in medical applications; it simplifies the otherwise expensive and lengthy calibration before measurement. The simulator is currently used for educational and clinical purposes.
ACKNOWLEDGMENT

The work and the contribution were supported by the Grant Agency of the Czech Republic (GAČR) under project 102/08/1429 "Safety and security of networked embedded system applications", and by the Ministry of Education of the Czech Republic under project 1M0567.
Fig. 8 Overall view of the inner integration of the IBP simulator parts

REFERENCES
1. Blood Pressure at http://en.wikipedia.org/wiki/Blood_pressure
2. V. Kasik, "FPGA based security system with remote control functions", 5th IFAC Workshop on Programmable Devices and Systems, NOV 22-23, 2001, Gliwice, Poland, IFAC Workshop Series, pp. 277-280, 2002, ISBN: 0-08-044081-9
Author: Jan Kijonka
Institute: VŠB - TU Ostrava
Street: 17. listopadu 15
City: Ostrava
Country: Czech Republic
Email: [email protected]
Generating and Transmitting Ambulatory Electronic Medical Prescriptions
M. Nyssen, K. Thomeer, and R. Buyl
Vrije Universiteit Brussel, Brussels, Belgium

Abstract— This development was realized in the context of the introduction of electronic medical prescriptions for the ambulatory sector. As most physicians have a computer (providing access to the patients' health records) and an Internet connection in their practice, the remaining connectivity problem concerns house visits to patients. In the following paper we describe the setting-up of a test pilot enabling the generation of valid electronic prescriptions during ambulatory visits. Three aspects are covered here: the web client that generates the prescription, the mobile workstation (mini-laptop or personal digital assistant) and the integration into a national or transnational prescribing system.

Keywords— E-health, electronic prescription, ambulatory.
I. INTRODUCTION
As electronic prescriptions become a reality, not only in the hospital context but also in ambulatory care, the problem posed by generating and transmitting prescriptions during visits at patients' premises arises.

II. SET-UP

The experiment was initiated in the context of the iBrussels project [3]. As shown in Figure 1, the aim was to make prescribing possible via mini-laptops or appropriate PDAs, with an Internet link via the iBrussels Urbizone wireless network or via a 3G connection. The prescriptions, generated by the physician "on the road", will reach the national electronic medical prescription (EMP) in-transit server: Recip-e. Here, prescriptions are kept as "non-addressed messages" until the patient picks up his prescription in the pharmacy of his choosing.

III. TECHNICAL SOLUTION

Mini-laptops or personal digital assistants readily allow network connectivity via local- or wide-area wireless networks. Moreover, these machines have become powerful and versatile enough to realize the required functions for authentication, digital signature and encryption, as required for a full-fledged prescription system.
Fig. 2 PDA and mini-laptop
The ZINEON (3) PDA can read the eID card, produce digital signatures together with the Belgian eID cards, communicate with wireless networks and print out the paper-based prescription (useful at least during the transition period before the national roll-out is complete).

A. Login with Electronic Identity Card (eID) – Authentication
Fig. 1 Setting for ambulatory prescriptions
The system verifies whether the person who logged in is a registered user. In the future, the prescribing physician's identity and permission to prescribe will be verified via an authoritative source, made accessible via the Belgian eHealth platform. Currently, in this pilot setting, we verify against local user databases. This enables us to open the system to testing users participating in evaluation experiments, teachers and students.
B. Verification Whether a Logged-In Person Is a Valid Prescriber: Authorisation

Authorisation works via the prescribers database: if a person is entitled to prescribe, he is granted access to the prescribing program. A complete eID-based authentication mechanism was developed, based on the Fedict middleware and applets that were developed specifically for this application. After digital signing of the XML-formatted prescription, the identity of the signer and the validity and non-repudiation of the signature's certificate are verified via the Internet before the prescription is released.

C. Medication Database

After logging in and obtaining access to the prescription program, prescribers can fill in the items (up to 10) of the prescription. On an "intermediary" server, a PHP-based web application was developed to enable the creation of prescriptions. The BCFI (Belgisch Centrum voor Farmacotherapeutische Informatie) medication database is used as a reference for the prescription items, which can be selected by searching on the first 2, 3 or 4 characters of their name, resulting in a pick list of corresponding packages, then refined via a single mouse-click.
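The pick-list behaviour described above amounts to a prefix search over medication names. A minimal sketch, with a tiny in-memory list standing in for the BCFI database (the entries are invented placeholders):

```python
# Sketch of the pick-list lookup: prefix search on the first few typed
# characters of a medication name. The list below is a stand-in for the
# BCFI medication database; the names are invented placeholders.
MEDICATIONS = ["Amoxicilline 500 mg caps.", "Amlodipine 5 mg tabl.",
               "Paracetamol 1 g tabl.", "Pantoprazole 40 mg tabl."]

def pick_list(prefix: str) -> list[str]:
    """Return candidate packages whose name starts with the typed prefix."""
    p = prefix.casefold()
    return [m for m in MEDICATIONS if m.casefold().startswith(p)]

print(pick_list("Am"))  # both 'Am...' entries; a third character refines further
```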
The web prescriber then generates both an XML KMEHR message corresponding to the electronic prescription (KMEHR stands for "Kind Messages for Electronic Health Records" and is the Belgian standard for e-health related messages) and a PDF in the legal format, ready for printout.

D. Signing of the Prescription

Electronic signing is done using a Java applet, which is downloaded automatically from the server and then executed locally on the PDA or mini-laptop. To sign, the prescriber has to read in his electronic ID card (in Belgium distributed to all inhabitants) and enter his PIN code, after which the applet sends the signature to the server. The server verifies that this signature corresponds and that the signing certificate is valid via the OCSP protocol. A signed prescription obtains a unique number, the RID (Recip-e ID), which is printed at the top of the paper-based prescription.

E. Prescription in XML Format

The XML prescription, with administrative and medical components in KMEHR XML format, is sent to the Recip-e server, from which the pharmacist chosen by the patient can collect the medication listed in the prescription.
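For illustration only, a sketch of generating a small XML prescription message. The element names are invented placeholders; the real KMEHR standard defines its own schema, which is beyond the scope of this sketch:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: building a minimal XML prescription message.
# The element and attribute names below are invented placeholders, not the
# KMEHR schema mentioned in the text.
def build_prescription_xml(rid: str, prescriber: str, items: list[str]) -> str:
    root = ET.Element("prescription", {"rid": rid})
    ET.SubElement(root, "prescriber").text = prescriber
    meds = ET.SubElement(root, "medications")
    for name in items:
        ET.SubElement(meds, "item").text = name
    return ET.tostring(root, encoding="unicode")

print(build_prescription_xml("RID-0001", "Dr. Example",
                             ["Paracetamol 1 g tabl."]))
```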
Fig. 3 Generation of a prescription

Fig. 4 Resulting prescription in XML format
The XML-formatted prescription is now ready for encryption and transmission to the intermediate data store Recip-e, where it awaits the patient going to a pharmacist to get his medication.
F. Prescription in PDF Format

The PDF format of the prescription is generated. Signed prescriptions obtain an alphanumeric bar-code, corresponding to the unique prescription identifier (RID), as shown in Figure 5. This bar-code will allow for a smooth transition when electronic prescriptions are introduced gradually: pharmacists will immediately recognize prescriptions containing the RID bar-code, which will enable them to retrieve the electronic prescription within about 1 second. Later on, patient identification relying solely on the eID will prevail, as soon as full roll-out is achieved.

G. Server Environment

The whole prescription system runs on a server with the following characteristics:
• Linux OS
• Apache Web Server
• PHP server scripting
• Fedict middleware for eID
• Applets for data input and digital signature
IV. DISCUSSION

In a few months' time, a complete demonstrator could be built, using a combination of "off the shelf" and a few specific components, amongst which a signing applet. Security is obtained via encrypted https transmission and by relying on the high-quality features of the Belgian eID. Some concern was felt concerning the Java code of the middleware, but this was resolved thanks to the collaboration of FEDICT (the Belgian federal ICT agency).
V. CONCLUSION

Using readily available mini-laptops, PDAs and standard browsers, an environment was created that enables physicians to generate prescriptions in ambulatory practice and during house calls. Although an evaluation by a substantial group of users is still required, acceptance of the demonstrator is positive. The look-and-feel can still be improved to suit the ergonomic needs of the physician "on the move".
REFERENCES
1. Code ST., Electronic prescribing: a review of costs and benefits, Top Health Inf Manage. 2003 Jan-Mar; 24(1): 29-38.
2. Papshev D, Peterson AM., Electronic prescribing in ambulatory practice: promises, pitfalls and potential solutions, Am J Manag Care. 2001 Jul; 7(7): 725-36.
3. iBrussels project: Brussels Regional Government, VUB ETRO, 2008-2009

Corresponding author:
Fig. 5 Resulting prescription: familiar "pdf" format
Author: Marc Nyssen
Institute: Vrije Universiteit Brussel
Street: Laarbeeklaan 103
City: B-1090 Brussels (Jette)
Country: Belgium
Email: [email protected]
Estimating Pre-term Birth Using a Hybrid Pattern Classification System
M. Frize1,2 and N. Yu1
1 Systems & Computer Engineering, Carleton University, Ottawa, Canada
2 School of Information Technology & Engineering, University of Ottawa, Ottawa, Canada
Abstract— In this work, pre-term birth (PTB) was estimated with an accuracy as high as that of the invasive and expensive fibronectin test, yet using only data acquired before week 23 of gestation. We were able to estimate PTB for both parous and nulliparous women and expanded the model, previously valid only for the USA, to all of North America.

Keywords— Predicting preterm birth, decision trees, artificial neural networks, performance, parous and nulliparous cases.

I. INTRODUCTION
The work of our research group focuses on developing and improving decision-support systems that can be generalised and applied to various clinical environments. Artificial neural networks (ANNs) make it possible to model complex interactions between variables. The advantage over conventional statistics is that ANNs can be trained to predict outcomes on various data types with multiple input parameters and outcomes; another advantage is that they can estimate outcomes for a single case, whereas statistical tools typically estimate outcomes for a group of cases. In past work, we created models to estimate mortality, ventilation, length of stay (for both adult and neonatal intensive care patients), and a number of complications for the infants [1, 2]. Premature births account for 85 per cent of all newborn deaths (defined as those occurring within the first 28 days of life) in industrialized countries. The large quantity of data collected during pregnancy and up to the birth of an infant, from both mother and baby, is accumulated and stored in several different medical databases: obstetrical, perinatal, and neonatal intensive care. These databases are usually distributed temporally and geographically, lacking an integration infrastructure to allow their seamless access. While many studies have attempted to predict which women are at risk for pre-term birth (PTB), no risk scoring system at this time has performed better than clinical judgment, which, according to our physician partners, is also limited in its predictive value for this outcome. A major obstacle is that most women who deliver prematurely have no obvious risk factors, and over half of PTBs occur in low-risk
pregnancies [3]. The general consensus in the literature is that current statistical tools cannot effectively determine the complex associations between epidemiological, clinical, biochemical, and biophysical variables, because preterm birth is a multifactorial problem. The National Institute of Child Health and Human Development Maternal-Fetal Medicine Unit Network's Preterm Prediction Study [4] found that fetal fibronectin (an invasive, expensive test) is currently the strongest predictor of PTB in women with a history of prior preterm delivery, with a sensitivity of 64.7% (true cases of PTB) [5]. Further work by Iams et al. attempted to use the same technique for a low-risk maternal population prior to 35 weeks gestation [3], but sensitivity was low (15.6%), indicating that the model was not applicable to low-risk mothers. These authors concluded that no screening test for preterm birth in low-risk mothers, other than obstetric history, could be recommended. Current practice does not support the use of cervical length measurement or fetal fibronectin to screen for risk of preterm delivery in a low-risk population, and these tests do not predict risk of PTB early in pregnancy; they become more sensitive only 7-10 days before advanced cervical dilation, leaving little time for decision making in terms of prenatal treatment [6]. No study we are aware of predicts PTB for nulliparous women (first pregnancy). Our previous study of PTB used two different datasets of newborns: a Canadian database (Perinatal Partnership Program of Eastern and Southern Ontario, PPPESO) and a USA database (Pregnancy Risk Assessment Monitoring System, PRAMS). Preliminary results established by our PhD student used a classification-based ANN and a hybrid integrated screening system using a risk-stratification ANN, processing the test data through three passes: (i) maximize sensitivity; (ii) maximize specificity; (iii) classify ambiguous cases using a decision-tree voting algorithm. The resulting 48-variable model was tested using cases of women who had previous children (parous); all cases of women who had not had children (nulliparous) were deleted for that study. The optimal sensitivity achieved was 64%, with a specificity of 84% (true cases of full-term birth); these results matched the performance of the invasive fibronectin test [7]. Catley's best results used data from the PRAMS database collected prior to week 23 of gestation.
This is a major advantage over an invasive technique for screening a general population for risk of PTB. Our physicians indicate that they would like to have an effective risk assessment model that predicts PTB risk before 23 weeks of gestation and is applicable to the entire population, not just a subset of symptomatic patients. Ideally, women at risk of PTB could be accurately identified early in the pregnancy using readily available obstetrical and socio-demographic information, allowing preventive measures to be taken. Such a system could become a screening tool, applicable early enough for potential clinical intervention, and would be inexpensive and effective. Moreover, including nulliparous women would broaden the model to all pregnancies. Developing a system with these characteristics was the motivation for the current study.

II. METHODOLOGY
A. Developing a hybrid DSS

In previous work, we compared and tested the performance of two data mining approaches, using a database of adult intensive care patients to estimate mortality for two types of patients: post-operative and non-post-operative. The aim was to assess which approach would perform better in predicting medical outcomes. The two methods were: (i) ANNs with the weight-elimination algorithm; (ii) a decision tree (DT) used to pare down the variables prior to submitting the remaining ones to an ANN. Both methods eliminate variables that have minimal impact on estimating the outcome; however, the two approaches function quite differently: the first pares down the variables after the experimental run, while the second pares them down prior to the run. The conclusion of that study was that the DT-ANN performed better than the weight-elimination ANN in terms of sensitivity (true classification of deaths), specificity (true classification of survivors), and area under the ROC curve [8]. The DT-ANN produced consistent results, so we applied this hybrid approach to the problem of estimating PTB. Our hope was to reduce the complexity of the classifier while maintaining or exceeding past results. We also wanted to use variables that are not USA-specific and to obtain acceptable results for women who have not had any children (nulliparous), thus creating a system that extends to the general population.

B. The database

The PRAMS database contained over 113,000 cases with over 300 variables relating to demographics, maternal and infant behaviour and medical conditions collected by
the Centers for Disease Control and Prevention (CDC) in Atlanta, USA, between the years 2002-2004. PRAMS is a national database from 32 participating states, using state-specific population-based sampling of maternal and infant prenatal, birth and post-partum experiences, supplementing data from vital records to monitor changes in maternal and child health indicators. The PRAMS database was a combination of birth certificate variables (taken at the time of birth) and variables collected from the PRAMS questionnaire (collected a few months after delivery). Variables specific to the USA (Medicaid and the Women, Infants and Children Program) were eliminated; however, variables pertaining to maternal race, such as women from Hawaii, Alaska, and the Pacific area, are applicable to Canada. The first step was to establish the risk model to predict PTB; the second step was to format the data for processing through our data mining tools; the third was to divide the PRAMS database into Parous (prevalence of 16.0%) and Nulliparous (prevalence of 17.9%) datasets. These two datasets were then split into a training set, a testing set, and validation sets (preserving a similar prevalence for each set), which were then used to create the DT-ANN hybrid. In order to maintain robustness in the performance of a DT classifier, an ensemble method was used. This required the creation of numerous decision trees, a process called ensemble classification: the original training sets were bootstrapped and allocated into bootstrapped training subsets, and the DT classifier creates a tree from each training subset and tests each tree on the test set. A typical tree is created using the best split among all variables. The growing and pruning algorithms used to create a DT test for significant contributing variables for the selected outcome. The DT algorithm (the See5 commercial software) first classifies each class according to conditions, followed by pruning steps toward a global optimum; the pruning portion creates a tree that is able to generalize well and deletes variables with low significance [9]. The features appearing in the pruned DT become the selected subset of features to be used for ANN classification [10]. Once a tree is created using See5, the variables are ranked according to significance, while some variables are not used at all. In order to optimize the DT classification once an initial tree is created, a variable used in the split is removed and a new tree is created from the remaining variables. If the performance of the new tree deteriorates after the removal of the variable, the variable is reintroduced, the next variable is removed, and a new tree is created. These steps continue until the best performing tree is created.
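A compact sketch of the ensemble feature-selection step described above, with scikit-learn's decision trees standing in for the See5 package used by the authors (the number of trees and the pruning strength are illustrative assumptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

# Sketch: bootstrap the training set, grow a pruned tree on each subset,
# and keep the features the trees actually use as the candidate subset
# to feed the ANN stage.
def bootstrap_tree_features(X: np.ndarray, y: np.ndarray, n_trees: int = 25,
                            seed: int = 0) -> set[int]:
    rng = np.random.RandomState(seed)
    used: set[int] = set()
    for _ in range(n_trees):
        Xb, yb = resample(X, y, random_state=rng)                  # bootstrap subset
        tree = DecisionTreeClassifier(ccp_alpha=0.01).fit(Xb, yb)  # pruned tree
        used.update(np.flatnonzero(tree.feature_importances_ > 0)) # features used
    return used
```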
C. Artificial Neural Networks (ANNs)

The performance of a neural network is affected by its architecture (the number of layers, the number of hidden nodes, etc.). Experimentation with various architectures led to the selection of a back-propagation feed-forward ANN. When using an ANN, the values should fall within the range of -1 to 1, which requires data normalization; continuous and categorical variables are stored in binary format. Optimal performance of the network can be determined using the Correct Classification Rate (CCR) and the Average Squared Error (ASE). A drawback of these stopping criteria is their poor handling of highly skewed data. An early stopping criterion used by our research group is the logarithmic-sensitivity (logsens) index, which balances sensitivity (Sn) and specificity (Sp) [11]; a second constraint stops the run when the best performance has been reached and maintained for 500 epochs [12]. Logsens is calculated by the equation below:

logsens = -Sn × log10(1 - Sn × Sp)    (1)
The logsens value tends to slightly favour a higher sensitivity, which is important in this type of problem: our physician partners have stated that predicting PTB is of higher importance than predicting babies that are expected to be born at full term. The area under the ROC curve indicates the discrimination power of the ANN. Logsens has been shown to be a better stopping criterion than the minimum ASE or the CCR in databases with highly skewed outcomes [11]. The sensitivity and specificity were used to evaluate the DT results; sensitivity, specificity, logsens and area under the ROC curve were used to evaluate the performance of the ANN models.

D. Developing the DT-ANN Hybrid models

For these tests, the dataset was divided as follows: 45% for training sets, 22% for testing sets, and 33% for the validation sets. Ten verification sets were created by randomly sampling data from unseen cases. A satisfactory performance on the verification sets is indicative of a classifier that generalizes well on new cases, while poor performance on the verification sets may be suggestive of over-training the network. The verification results are reported as the mean and confidence interval of the ten sets tested. The approach compared real clinical data and data outputs from the DT-ANN system.
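As a quick sanity check, Eq. (1) can be evaluated directly; a minimal sketch whose printed value reproduces the LS reported for the Parous test set of Node 4 in Table 1 (Sn = 66.3%, Sp = 83.9%):

```python
import math

# Direct transcription of Eq. (1): logsens = -Sn * log10(1 - Sn * Sp).
def logsens(sn: float, sp: float) -> float:
    return -sn * math.log10(1.0 - sn * sp)

print(round(logsens(0.663, 0.839), 2))  # 0.23, matching Table 1, test set, Node 4
```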
III. RESULTS AND DISCUSSION
Table 1 shows the best performing classifiers created using the ANN for Parous cases. All the test results have
sensitivities greater than 65.0%. Nodes 4, 8 and 9 had a specificity > 80%. Each classifier had similar ROC values of about 0.80, indicating excellent discrimination. The best performing Nulliparous results (Table 2) were Node 5 and Node 11. Although Node 5 had a test sensitivity of 65.0%, its specificity was 71.3%, and its ROC (0.71) was lower compared to the Parous ROC values. Node 11 had a lower test sensitivity of 61.3%, with a slightly higher specificity (73.6%) and ROC (0.72) compared to Node 5. All classifiers in the Parous and Nulliparous cases had verification results that closely matched the test set results. The positive predictive value (PPV) and the negative predictive value (NPV) are also reported in this work, as they are useful measures for physicians; one has to keep in mind that these values are greatly affected by skewed outcomes. Table 3 shows that the PPVs for all Parous cases were greater than 40%, with a high value of 44.2% (Node 4). This node also had the highest NPV (92.9%). The best Nulliparous PPV was 11.9% lower than that of the best Parous classifier, with an NPV (90.6%) only 2.2% below the Parous NPV.

Table 1. Parous cases best performing Hybrid ANN (Nodes 4, 8 and 9). Train rate is 16.13%, test rate is 16.08% and verification rate is 16.13 ± 0.1391. A = train set; B = test set; C = verification set; Sn = sensitivity; Sp = specificity; LS = logsens; ROC = area under ROC curve.

Set  Metric   Node 4      Node 8      Node 9
A    Sn       66.0        68.5        71.1
     Sp       83.3        83.2        82.1
     LS       0.23        0.25        0.27
     ROC      0.81        0.82        0.83
B    Sn       66.3        65.1        67.9
     Sp       83.9        83.9        81.2
     LS       0.23        0.22        0.24
     ROC      0.80        0.81        0.81
C    Sn       61.4±4.5    61.8±4.5    64.4±4.8
     Sp       83.3±2.2    83.0±2.5    81.1±2.3
     LS       0.19±0.03   0.19±0.03   0.21±0.03
     ROC      0.79±0.07   0.80±0.03   0.80±0.01
IV. CONCLUSION AND FUTURE WORK
Our new Parous results achieved 66.3% test sensitivity and 83.9% specificity (Node 4), matching both our previous results [7] and the fibronectin test. Other nodes gave similar results for Parous cases. All classifiers met the clinical expectations set out by our physician partners. The improvements of our new approach were: (i) it produces a generalized North American model, rather than a US-based one; (ii) previous work reduced the variables from 300 to a
minimum dataset containing 48 variables [7]; this new work achieved equal performance with only 19 variables for Parous cases. The classifier for Nulliparous cases performed less well than the Parous one, as expected. Our former research had identified the variable 'previous preterm birth' as the third best indicator of preterm birth, following 'maternal bleeding' and 'plurality'. This study agrees with these findings, as all decision trees used 'maternal bleeding' and 'plurality' as consistent and highly contributing factors in each DT classifier. The Parous trees also used 'previous preterm birth' and 'previous low birthweight' as strong contributors; however, for the Nulliparous cases, these variables were not available. 'Previous preterm birth' and 'previous low birthweight' rank high on the attribute list of Parous cases and are very strong indicators in the prediction of PTB; the poorer accuracy and discrimination ability of the Nulliparous model can therefore be attributed to this factor.
This work created a non-invasive prediction system that is applicable to a general population early in pregnancy (<23 weeks of gestation) while meeting clinical expectations. Such a system can become an alternative to costly and invasive methods for screening women for PTB. In future work, we plan to test two new databases which we are acquiring for this work. This will enable us to examine which variables in the three databases (PRAMS and the two new ones) are most indicative of a risk of PTB.

Table 2. Nulliparous cases best performing Hybrid ANN (Nodes 5 and 11). Train rate is 17.9%, test rate is 17.40% and verification rate is 17.7 ± 2.5. A = train set; B = test set; C = verification set; Sn = sensitivity; Sp = specificity; LS = logsens; ROC = area under receiver operating characteristic curve.

Set  Metric   Node 5      Node 11
A    Sn       62.8        60.2
     Sp       71.7        74.1
     LS       0.16        0.15
     ROC      0.72        0.72
B    Sn       65.0        61.3
     Sp       71.3        73.6
     LS       0.18        0.16
     ROC      0.71        0.72
C    Sn       65.5±4.8    63.5±5.3
     Sp       71.1±2.7    74.2±2.3
     LS       0.18±0.03   0.18±0.04
     ROC      0.73±0.03   0.74±0.03

Table 3. PPV and NPV for the train set.

         Parous                Nulliparous
Node     4      8      9       5      11
PPV      44.2   43.7   40.9    32.3   30.9
NPV      92.9   92.6   93.0    90.6   90.9

ACKNOWLEDGMENT

We wish to thank the CDC in Atlanta for the use of the PRAMS database and the Natural Sciences and Engineering Research Council for the Discovery Grant supporting this work. Note: all our research projects received ethical clearance.

REFERENCES
1. Ennett, C., M. Frize and E. Charette (2004) Improvement and automation of artificial neural networks to estimate medical outcomes. Medical Engineering & Physics 26: 321-328. DOI: 10.1016/j.medengphy.2003.09.005.
2. Frize, M., R.C. Walker, C. Catley. Ch. 17: Knowledge Management in Perinatal Care. In Health Care Technology Management. Eds. A. Dwidedi and R. Bali, Springer, 2006: pp. 234-261.
3. Iams, J.D., R.L. Goldenberg, B.M. Merber, A.H. Moawad, P.J. Meis, A.F. Das and S. Caritis. The preterm prediction study: can low-risk women destined for spontaneous preterm birth be identified? General Obst. & Gynecol., 2001, vol. 184(4), pp. 652-655.
4. NICHD MFMU at http://www.bsc.gwu.edu/mfmu/
5. Iams, J.D. The Preterm Prediction Study: A model for estimation of risk of spontaneous preterm birth in parous women. NICHD Network. Am J. Obst. & Gynecol., 1997, 176:S51.
6. Tan, H., Wen, S.W., Walker, M., and Kitaw, D. (2004). Early prediction of preterm birth by logistic regression in multiple pregnancy. Report from Dr. Shi Wu Wen, OMNI research group, Dept. of Obst. and Gyne., Fac. of Med., U. of Ottawa, Ottawa, ON.
7. Catley, C., M. Frize, C.R. Walker, and D.C. Petriu (2006) Predicting high-risk preterm birth using artificial neural networks. IEEE Trans. Information Technol. in Biomedicine, special section on mining biomedical data, Vol. 10(3): 540-549.
8. Yu, Nicole (2010) An Integrated Decision Tree-Artificial Neural Network Hybrid to Estimate Clinical Outcomes: ICU Mortality and Preterm Birth. MASc. thesis, Carleton University, Dept. of Systems and Computer Engineering, Ottawa, Canada.
9. R. Duda, P. Hart, and D. Stork (2001) Pattern Classification. New York, NY: John Wiley & Sons, Inc., 2nd ed.
10. M. Dash and H. Liu (1997) Feature Selection for Classification. Intelligent Data Analysis - Elsevier, pp. 131-156.
11. C.M. Ennett, M. Frize, N. Scales (2003) Evaluation of the Logarithmic-Sensitivity Index as a Neural Network Stopping Criterion for Rare Outcomes. Proc. ITAB, Birmingham (UK), April, pp. 207-210.
12. Rybchynski, D. (2005) Design of an Artificial Neural Network Research Framework to Enhance the Development of Clinical Prediction Models. MScE thesis, School of Information Technology and Engineering, University of Ottawa, Ottawa, Canada.
Design and Implementation of a Radio Frequency IDentification (RFID) System for Healthcare Applications
A.C. Polycarpou1, G. Gregoriou1, A. Dimitriou2, A. Bletsas3, I.N. Sahalos2, L. Papaloizou1, and P. Polycarpou1
1 Department of Engineering, University of Nicosia, Nicosia, Cyprus
2 RCLab, Aristotle University of Thessaloniki, Thessaloniki, Greece
3 Department of ECE, Technical University of Crete, Chania, Greece
Abstract— This paper presents the use of RFID technology in the healthcare sector. A highly sophisticated RFID system, which also incorporates advanced Information and Communication Technologies (ICTs), was carefully designed to be implemented as a pilot project at the premises of the Bank of Cyprus Oncology Center (BOCOC) in Cyprus. The RFID system will be used for automatic and error-free patient identification through the use of RFID wristbands and/or tag-equipped plastic cards, for a Real Time Location Service (RTLS) to locate medical equipment (e.g., infusion pumps, walkers, wheelchairs, etc.) on the premises of the hospital, and for inventory control at the pharmacy. The RFID technology used in this pilot project is based on the UHF-band EPC C1 Generation 2 data exchange protocol. A Graphical User Interface for a medical tablet PC was developed which interfaces the RFID hardware (e.g., stationary and handheld readers, RFID printers, etc.) with the everyday routine tasks of the hospital's medical personnel. The application platform developed by the research team is extremely easy for doctors and nurses to use, powerful, effective, and superior to traditional paper-bound processes.

Keywords— RFID, e-health, ICT applications, patient identification, asset localization.
I. INTRODUCTION RFID is an innovative technology that uses radio frequency signals to communicate between the reader and the tag from a distance. By uniquely identifying a tag, without direct contact or line-of-sight, the RFID system usually triggers another process that results in downloading and viewing detailed information about the object or person associated with the specific tag ID. Using this system to automatically and uniquely identify patients in the healthcare sector significantly reduces the chances of making a mistake either by providing wrong medication or following a wrong medical procedure. The US Institute of Medicine estimates that more than 44,000 deaths occur each year due to in-hospital medication errors. In Canada, this number is estimated to be 700. USA Food & Drug Administration (FDA) estimates that medical errors approach 40% in paper-based environments [1], [2]. An RFID system will
significantly reduce the error count and guarantee patient safety and satisfaction. The patient's profile will be automatically updated through the use of handheld or tablet PCs after a nurse or a doctor visits the patient's room, administers a dose of a prescribed medication, or performs a certain medical procedure. The profile will be stored in digital form in a central database that can be accessed only by authorized nursing staff, doctors, and administrators. This system also improves transparency and accountability in case something goes wrong. Inefficiency at the workplace in the healthcare sector is another serious problem: it is estimated that hospital employees spend approximately 25-33% of their time searching for equipment, and that hospitals lose about 10% of their inventory annually [3]. Significant time is also wasted in monitoring and keeping the inventory up-to-date; this translates to unnecessary labor cost and a waste of human resources that could be more productively used elsewhere. All these problems could be alleviated by deploying the proposed RFID system together with the ICT backbone. It is estimated that RFID could save a 200-bed hospital 600,000 US dollars annually, from less shrinkage, fewer rentals, better procurement planning, and improved staff productivity [3]. Although RFID was invented 60 years ago, it was not until the last decade that it found applications in a number of sectors of the global economy such as logistics, container identification, retail business, etc. In the last 2-3 years there has been increasing interest in using RFID in the healthcare sector. Pilot projects are currently under way around the world, including the Klinikum Saarbrücken in Germany (launched in 2005), Hamilton Health Services with McMaster University in Canada (in 2006), and a few more. To the best of our knowledge, RFID has not yet been implemented in any sector of the Cyprus economy, including the healthcare sector. The aim of this project was to introduce RFID technology, together with Information and Communication Technologies (ICTs), in the healthcare sector in order to improve the quality of service to patients and reduce operational costs. The basic objectives of the project include the use of RFID technology for: (a) inventory control and monitoring;
(b) tracking and locating of valuable medical equipment; (c) identification and tracking of patient files; (d) automatic identification of in-hospital patients through the use of RFID wristbands; (e) real-time access/update of the patient's profile and medication records by medical staff. The direct benefits of adopting this technology in the healthcare sector include: (a) reduction of errors and patient mix-ups due to traditional paper-bound processes; (b) real-time access and update of the patient's medical profile; (c) increased productivity and efficiency at the workplace; (d) better healthcare service to patients; (e) fast and error-free identification of specimens and blood samples during laboratory work; (f) item and equipment loss prevention; (g) labor savings; (h) automatic and accurate recording of inventory. A well-defined subsection of the hospital is equipped with a set of static as well as mobile RFID readers, interconnected through a wireless network that serves as a bridge to the hospital database system and backhaul ICT infrastructure. Patients wear low-cost wristband RFID tags, and medical personnel are equipped with simple-to-use handheld terminals able to rapidly receive and decode a patient's ID, wirelessly communicate with the medical record database, and quickly, securely and reliably retrieve the patient's information. In this way, medical staff are able to avoid mistakes, perform the appropriate medical treatment and update each patient's profile accordingly. Furthermore, the implemented network of RFID tags and readers, in combination with the rest of the wired and wireless infrastructure, is able to provide a real-time location service for valuable medical equipment.
II. SYSTEM EQUIPMENT

The Bank of Cyprus Oncology Center (BOCOC), Nicosia, Cyprus, is the hospital where the system will be installed. Due to the limited funding available, it has been decided that only a small portion of the hospital's floor plan be used for the project's purpose, as the RFID equipment is too expensive to cover the whole building. The BOCOC provided the partners with the architectural plans of one of the two wards where the RFID project will be implemented in its pilot form. The pharmacist and the sister nurse provided valuable information for a better design and implementation of the Graphical User Interface. All information needed from the hospital's personnel, including the existing IT network and database structure, was given to the research team in order to better schedule and carry out the necessary tasks. The research team considered the following technologies in order to properly design and implement the RFID system: (a) handheld and stationary readers (HF/VHF/UHF frequency bands, protocols, security, etc.); (b) RFID printers and capabilities; (c) tablet PCs
suitable for medical applications; (d) PDAs, capabilities, and interfacing with USB handheld readers; (e) wireless access points (APs) and network equipment; (f) RFID antennas and low-loss RF coaxial cables; (g) servers and software; (h) RFID tags suitable for healthcare applications. Following a careful consideration of the various technologies available in the market, it was decided that the system would make use of the following: (a) the UHF ETSI (EN 302 208) band of frequencies together with the EPC Class 1 Gen 2 protocol, which is characterized by a better anti-collision scheme compared to previous-generation protocols and, therefore, a higher percentage of tag readability; (b) the most suitable tablet PCs were chosen to host the GUI application along with the middleware and other hardware-interfacing software. The research team decided on tablets that are built specifically for medical applications and, as a result, can be cleaned with disinfectants without damaging the machine. These tablets are lightweight, easy to carry and use, and have a USB port accessory which can be used to interface with the handheld reader, Wi-Fi capabilities, advanced graphics, batteries that last for at least two hours of continuous use, an easy way of charging, etc. The option of PDAs was rejected as they would be impractical due to their small screen size; (c) a state-of-the-art UHF Gen 2 stationary reader with -80 dBm sensitivity and four monostatic RF ports was chosen from a great selection of readers in order to support the application of locating and tracking medical assets inside the hospital ward as well as inventory control. This stationary reader has many advantages over other similar products, including higher sensitivity, which allows tag identification from longer distances. It runs on a Linux operating system and can be easily interfaced with third-party software and custom-made applications; (e) proper antenna selection is also an important issue that can play a key role in the success of the project. A huge variety of antennas is available in the market, but not all of them are suitable for the specific project. Antennas that are circularly polarized and have relatively high gain, low VSWR, fairly wide beam-width, small size, and small back-to-front ratio are considered good candidates for the project under development. The research team managed to identify antennas characterized by all these figures-of-merit in order to eliminate any chance of project failure due to antenna failure; (f) due to the use of a two-way splitter to feed the pair of antennas attached to a single port of the stationary reader, it was deemed necessary that losses due to cables be minimized. A thorough investigation was initiated in order to locate coaxial cables with extremely low losses. Cables with approximately 1.3 dB/10 m attenuation were used, thus enhancing the total link budget and, as a result, the readability of the system; (g) a thorough study of the
performance characteristics of the large selection of tags in the market was undertaken in order to identify the most suitable ones for the purpose of the project. The selection was eventually narrowed down to inlay tags and asset tags. Silicone wristbands suitable for the hospital environment were purchased; (g) Wi-Fi Access Points (APs) were deemed necessary in order to have a stable and robust wireless network at the premises of the hospital. The access points must provide strong coverage everywhere in the hospital ward. A study of APs available in the market showed that two APs were necessary to provide adequate coverage in the space where the pilot project will be launched; (h) after an investigation of the project requirements, a suitable rack-type server was chosen to host the database together with sensitive patient data; (i) in order to automate the process of tagging the drugs at the pharmacy, an RFID printer was needed. Printronix proved to be the brand of choice due to the suitability of its printers for our application.
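To make the role of these loss figures concrete, the short sketch below runs the kind of back-of-the-envelope link-budget arithmetic implied by the equipment choices above; only the quoted ~1.3 dB/10 m cable attenuation and the tag sensitivity cited later in the paper come from the text, while the transmit power, antenna gain, splitter loss and cable length are illustrative assumptions, not project measurements.

# Hedged sketch: an illustrative downlink budget for one reader port
# feeding two antennas through a two-way splitter.
TX_POWER_DBM   = 30.0    # assumed reader output power
ANT_GAIN_DBI   = 6.0     # assumed circularly polarized antenna gain
SPLITTER_DB    = 3.5     # assumed two-way splitter loss (3 dB + excess)
CABLE_DB_PER_M = 0.13    # low-loss cable: ~1.3 dB per 10 m (from text)
CABLE_LEN_M    = 8.0     # assumed run from reader to antenna
TAG_SENS_DBM   = -14.0   # tag sensitivity quoted later in the paper

eirp = TX_POWER_DBM - SPLITTER_DB - CABLE_DB_PER_M * CABLE_LEN_M + ANT_GAIN_DBI
margin = eirp - TAG_SENS_DBM        # allowable one-way path loss in dB
print(f"EIRP ≈ {eirp:.1f} dBm, path-loss budget ≈ {margin:.1f} dB")

Under these assumptions every dB saved in cable or splitter loss adds directly to the path-loss budget, which is why the search for extremely low-loss cables mattered.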
III. DESIGN APPROACH AND RESULTS RFID equipment, software and computer hardware were purchased in order to support the successful implementation of the project. Specifically, we purchased one stationary reader with eight antennas to provide coverage inside four patient rooms, four tablet PCs to be used by the medical personnel, two wireless APs to establish a wireless network in the hospital premises, two handheld USB RFID readers to be used along with the tablet PCs, an RFID printer to automate the RFID labeling of the drugs in the pharmacy, low-loss cables to improve the link budget, a computer server with SQL Server and Windows Server software installed, and RFID tags for assets, drugs and patients. The main activities undertaken for the successful design of the system were the following: (a) a network of antennas was designed in such a way as to maximize the RF coverage inside the patient/inventory rooms and improve readability and error-free identification of tagged objects, patients, and drugs; (b) measurements and computer simulations were conducted in order to optimize antenna position as a function of RF coverage inside a typical room; (c) measurements using spatial and polarization diversity of RFID tags were conducted in order to evaluate the degree of improvement in cases where multiple tags are used per asset; (d) measurements and simulations were conducted in order to evaluate the RF coverage inside a patient's room when using one antenna as opposed to a pair of antennas in conjunction with a two-way power splitter; (e) theoretical calculations were made in order to evaluate the total link budget for different case scenarios; (f) a number of measurements were made in order to determine the optimum position of the two access points (Wi-Fi) in order to
provide sufficient wireless broadband coverage everywhere in the hospital ward; (g) the structure of the GUI was designed so as to develop a user-friendly and powerful application for tablet PCs; (h) a robust and high-speed wireless communication channel was provided that will allow secure and encrypted exchange of information between the server and the tablet PCs without the possibility of someone breaking into the system and stealing sensitive patient data; (i) a well-structured database that houses sensitive patient data, drugs, and medical assets was developed. Figure 1 depicts a block diagram of the designed RFID system incorporating all the aforementioned equipment and services.

[Fig. 1 Block diagram of the RFID system: mobile stations (tablets), Wi-Fi access points, a computer server hosting the database (patients, assets, drugs) with encryption/security, RFID tags, antennas, stationary readers and an RFID printer, connected over the LAN/WLAN to the doctor's office, nurse station, pharmacy and hospital administrator.]

A user-friendly GUI was developed to operate on a tablet PC to be used by medical staff and hospital administrators. The GUI has various capabilities such as adding patients to the system's database, interfacing with the handheld RFID reader attached to a USB port of the tablet, identifying patients equipped with an RFID tag, loading a patient's medical history and profile, assigning tasks to the nurses and monitoring their tasks, allowing doctors to prescribe drugs, allowing pharmacists to monitor the drug flow in and out of the inventory room, and many more. Most of these tasks have already been implemented in the current version of the GUI. However, a number of improvements suggested by the medical staff of the BOCOC are currently being implemented. Figure 2 shows the front view of the GUI with an account for the nurse, an account for the doctor, an application for inventory control of the drug storage room at the ward, an application for locating and tracking medical assets and equipment (e.g., walkers), and an application for the administrator. The development of the central database is also in good progress. This is developed on a stand-alone server dedicated for this purpose. It was decided that this database be made independent of the existing database hosted at the BOCOC
due to licensing problems that appeared during an attempt to interface with it.

[Fig. 2 Front view of the Graphical User Interface (GUI)]

A large portion of the database has already been developed, where drug and patient data are securely hosted and communicated over Wi-Fi to the tablet PCs. During programming, the effectiveness of the database is being tested against the initial target goals, and a number of checks are continuously being made in order to ensure that the tables are indeed structured correctly and the retrieved information is correct and securely stored. The entire design is being checked for errors on a daily basis and adjustments are being made in order to improve access time and reduce the likelihood of obtaining erroneous data and inconsistencies. The development of the wireless RFID network is progressing smoothly and according to schedule. A detailed simulation of the coupled RFID reader antennas in the interior of a patient room, and their interaction with the walls, ceiling and floor, has been performed using a ray-tracing method. This was done in order to optimize the location of the antennas so that the RFID wireless system provides adequate coverage. The judicious choice of the coupled antennas (position and height from the floor) in the interior of the room will greatly improve reading capability. Polarization and spatial diversity using multiple tags was also used in order to improve readability. Multiple simulations have shown that we can achieve more than 90% readability of asset and patient tags by properly choosing the positions of the two RFID antennas and using polarization/spatial diversity. This, of course, mandates the use of low-loss cables and a two-way splitter. The optimized two-antenna arrangement is shown in Figure 3. A 93% coverage is sought, provided that the sensitivity of the tags is -14 dBm.
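A quick calculation shows why the polarization/spatial diversity mentioned above can push readability past 90%; the two single-link read probabilities below are hypothetical, chosen only to illustrate the combining effect under an independence assumption.

# Illustrative only: independent read probabilities for two diverse
# antenna/tag orientations (assumed values, not project measurements).
p1, p2 = 0.75, 0.70
combined = 1 - (1 - p1) * (1 - p2)   # a read succeeds if either link does
print(f"{combined:.3f}")             # 0.925 -> above the 90% target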
[Fig. 3 Optimized two-antenna arrangement]

IV. CONCLUSIONS The designed RFID system was tested in a laboratory environment against the three major aims of the project, which are patient identification, inventory control, and asset localization. The system performed quite well in all aspects; however, a slight possibility of the system failing to read a specific RFID tag, mainly due to poor coverage in the room, remained a fact. Readability above 90% was achieved; however, there is still plenty of room for further research until the perfect RFID system is built. The system will soon be fully operational and training of the personnel will follow. Finally, the system will be thoroughly evaluated in order to make sure that it meets all design requirements.

ACKNOWLEDGMENT This research is funded by the Cyprus Research Promotion Foundation grant ΤΠΕ/ΟΡΙΖΟ/0308(ΒΙΕ)/13.

REFERENCES
1. M. McGee, "Health-Care I.T. has a new face", Information Week, 988:16, 2004
2. A. M. Wicks, J. K. Visich, S. Li, "RFID Applications in Hospital Environments", Hospital Topics, vol. 84, no. 3, 2006
3. M. Glabman, "Room for tracking: RFID technology finds the way", Materials Management in Health Care, May 2004

Author: Anastasis Polycarpou
Institute: University of Nicosia
Street: 46 Makedonitissas Ave
City: Nicosia
Country: Cyprus
Email: [email protected]
Reliability Issues in Regional Health Networks S. Spyrou, P. Bamidis, and N. Maglaveras Lab of Medical Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece {spirou,bamidis,nicmag}@med.auth.gr Abstract— Reliability engineering methods and techniques denote a new area of research and application in the health care domain. The health care environment is becoming more complex, not only because of the complicated clinical and other processes within a health unit, but also because of the clinical, administrative and other processes in the cross-organizational environment of Regional Health Networks. A synopsis of the reliability requirements of the complex systems supporting regional health networks and bibliographic evidence of the use of reliability methods in the healthcare domain is included. A list of obstacles to applying new methods and models is also given. Keywords— Reliability assessment, Regional Health Networks.
I. INTRODUCTION Application of quality assessment methods and techniques in the newly introduced Regional Health Networks is vital to the development of the systems supporting them. A Regional Health Network (RHN) can be considered as an inter-organizational system established to manage the healthcare providers (hospitals, doctors, etc.) in an organizational, regional or even national context. The ultimate goal is to facilitate healthcare information sharing (medical, administrative data, etc.) to ensure the continuity of care for health care consumers (i.e. patients). Information flow in such systems can be implemented with the use of new technological means and the establishment of Health Information Systems. The need to integrate different and geographically disparate components of information systems which support the provided clinical, administrative and health services denotes the importance of assessing quality and reliability parameters. Reliability is the probability that a system will operate without failure for a specified number of natural units or a specified time – known as the mission time [1]. A software reliability model specifies the general form of the failure process and depends on the characteristics of the environment/product under examination and the fault introduction. To define a software reliability model, five activities are needed, which are [1]: (1) define "necessary" reliability, (2) develop operational profiles, (3) prepare for test, (4) execute test, and (5) apply failure data to guide decisions.
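As a minimal illustration of this definition, under the common assumption of a constant failure rate λ the reliability over a mission time t is R(t) = exp(−λt); the numbers below are invented solely for the example.

import math

failure_rate = 1e-4          # assumed failures per hour
mission_time = 720.0         # mission time: 30 days in hours
reliability = math.exp(-failure_rate * mission_time)
print(f"R(720 h) = {reliability:.3f}")   # ≈ 0.930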
In the healthcare environment, where the information to be examined is complicated, not many implementations of reliability methods are found in the literature. Additionally, the newly introduced RHNs, with their complicated information flow among various and geographically disparate components, make these implementations even harder but also very attractive to a researcher. However, a certain number of reliability modeling techniques exists in the non-health-care bibliography, incorporating general Markov models [2], non-homogeneous Markov models taking into account hazard rates and time dependency [3], [4], as well as semi-Markov reliability models considering complex repair policies [5, 6] and others [7, 8]. In the health care area, reliability studies can be found in the following categories: (a) medical errors in engineering systems, (b) human errors in engineering systems, (c) medical device reliability and (d) reliability of information systems. In the following paragraphs, a synopsis of the available reliability models and techniques is included, along with the available implementations or bibliographic reviews in those categories of the health care area.
II. RELIABILITY IN HEALTH CARE DOMAIN A. Information Needed for Analysis Process A definition of the system to be examined is needed for the application of any reliability technique. The system layout along with the failures and their effects should be defined. The system supporting a cross-organizational Regional Health Network is generally composed of the following subsystems/components/assets:
• Data of the system
• Processes
• ICT infrastructure of the system
• Information systems interoperability
• Users and human resources
Reliability analysis in such a complex environment should be applied mainly to the information and the processes that characterize the system.
The most important dimension in the multidimensional RHN environment is the data that are exchanged among the health organizations or among the units within a health organization. The major categories of data in RHN systems are:
• Medical data
• Personal data
• Patient Management Data
• Clinical Protocols
• Data indexes for integrating processes, e.g. Master Patient Index data for the Integrated Electronic Health Record
• Financial / Logistics
• MIS (Management Information System) data / Business Intelligence Data
• Administrative data
• Systems data (including backup, service level agreement data)
The second dimension is the processes that support the services provided by the system. Processes should be determined before implementing the system, at the analysis phase. Reliability assessments can be applied from the design phase of an RHN, taking into account the two parameters of data and processes. The other parameters (ICT infrastructure, IS interoperability and human resources) can be taken into account in later phases of implementing a system supporting an RHN. B. Analytical Methods and Techniques to Assure Reliability of Engineering Systems or Items There are many analytical methods and techniques to assure the reliability of engineering systems. The most common, as mentioned in the IEC 61508 and ISA-S84.01 standards and in [9], are failure mode and effect analysis (FMEA), fault tree analysis (FTA), reliability block diagrams (RBD), Markov techniques and hybrid techniques. An interesting comparison of those techniques is presented by Rouvroye & Bliek in [9]. In this study, the necessary steps to be taken for each of the methods are mentioned, as well as the results obtained by applying each of the methods. Those methods have been applied mostly in the fields of (a) medical errors in engineering systems, (b) human error reliability and (c) medical device reliability. Not many applications exist in the field of reliability of health information systems. FMEA is mentioned in [10], where the Healthcare Failure Mode Effect Analysis (HFMEA®) has become an invaluable patient safety tool at the Department of Veterans Affairs (VA). Much other literature evidence exists ([11], [12], [13], [14]). FTA has been used in the field of human reliability analysis in [15], [16] and in other areas such as clinical alarms [17]. RBD applications can be found in [18] and in a diagnostic scale [19], among others.
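As a toy illustration of the reliability block diagram technique named above (with invented component reliabilities), series blocks multiply while parallel, redundant blocks combine as one minus the product of the unreliabilities:

from math import prod

def series(*rs):
    return prod(rs)                       # all blocks must work

def parallel(*rs):
    return 1 - prod(1 - r for r in rs)    # at least one block works

# Hypothetical example: a record service = database and application in
# series, reached over two redundant network links in parallel.
R_db, R_app, R_link = 0.999, 0.995, 0.98
print(series(R_db, R_app, parallel(R_link, R_link)))   # ≈ 0.9936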
To summarize, most of those techniques have been applied recently in the health domain, especially in the clinical field or in human error reliability. C. Architecture-Based Approaches to Reliability Assessment of Software Systems Goseva-Popstojanova and Trivedi in [5] present the architecture-based approaches to reliability assessment of component-based software systems. The models are used to examine software behaviour early in the design phase of software systems. The classification of architecture-based models proposed in [5] is as follows:
• state-based models: these models use control flow graphs to represent the architecture of the system. The concept is that the transfer of control between modules has a Markov property. The architecture of software can be modelled as a discrete time Markov chain (DTMC), a continuous time Markov chain (CTMC), or a semi-Markov process (SMP).
• path-based models: they are based on the same concept as state-based models, with the exception that failure behaviour is described along a path, and system reliability is computed after considering the possible execution paths of the program.
• additive models, which are focused on estimating the overall application reliability using the components' failure data. These models consider software reliability growth.
Implementations of such models can be found in the literature. Representative approaches are provided in [20], [21], [22], [23], [24] and [25]. The similarity is that the models for reliability predictions are applied at early stages of software development, mostly following a scenario-based reliability analysis. Models of scenario-based component interactions are presented in [20], [23], [26], [27], [28], where the total system reliability is analyzed as a function of its architectural constituent reliabilities. A user-oriented reliability model is also used in [27] to predict reliability from the system behaviour model point of view. To summarize, generally all approaches include the following steps (a minimal computational sketch follows the list):
1. Define the components of the system
2. Define the architecture of the system and the interaction among the components of the system
3. Define the failures of the system and the interfaces
4. Assign the failures to the components that constitute the architecture.
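The sketch below walks through those four steps for a state-based (DTMC) model in the style of [5], with an entirely hypothetical three-component architecture and made-up reliabilities and transition probabilities:

import numpy as np

# Steps 1-2: components A, B, C and their transfer-of-control matrix.
R = np.array([0.99, 0.97, 0.995])   # steps 3-4: per-component reliability
T = np.array([[0.0, 1.0, 0.0],      # A always hands control to B
              [0.1, 0.0, 0.9],      # B loops back to A 10% of the time
              [0.0, 0.0, 0.0]])     # C terminates the execution

# Failure removes probability mass: Q[i, j] = R[i] * T[i, j].
Q = np.diag(R) @ T
N = np.linalg.inv(np.eye(3) - Q)          # expected visit counts
exit_ok = R * (1.0 - T.sum(axis=1))       # successful termination per state
print("System reliability ≈", float(N[0] @ exit_ok))   # ≈ 0.951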
Reliability assessment at the architectural level of component-based software systems is hardly met in the literature [29], [30]. Some reliability parameters were included in the design phase of maintainability processes for the detailed requests for proposals (RFP) for the ERP supporting each regional health authority of the RHN of Greece. The architecture of regional health authorities across countries is ambiguous, and consequently the architectures of the interoperable information systems supporting health units/hospitals are complex. The application of reliability assessment therefore constitutes a difficult task.
III. CONCLUSIONS In this article, a review of reliability considerations, methods and techniques in the health care domain, and especially in the area of complex, multidimensional regional health networks, is included. The application of reliability engineering in health care is new and limited to human error in health care systems and medical device reliability. Common fields of application that could be mentioned are high-risk medications and error-prone procedures, like oral transmission of orders, handwritten orders, and changes in processes/procedures such as changes in the medication process. The first category of methods presented – that is, the analytical methods and techniques to assure reliability of engineering systems – has mostly been applied to clinical data included in health care processes. On the other hand, the second category presented – that is, the architecture-based approaches – has mostly been applied in the design phase of systems and has few implementations in the processes of systems supporting the health care field. Reliability engineering in the area of RHNs should be applied to the new aspects of healthcare systems, including non-clinical processes. The need for interoperability denotes the necessity to describe scenarios and the relevant data representing the information to be exchanged. Consequently, an elaborated definition of the scenarios and their processes and data, along with a detailed definition of failure introduction, guides the choice of the reliability model and the success of its application. Applying reliability models, in order to implement appropriate infrastructures for application protection, data protection and network protection in health care systems, meets many obstacles, some of which are: lack of personnel time to be involved, lack of resources to implement improvement strategies, reluctance to change ready-made system solutions for new, safer but more time-consuming ones, and lack of support from leadership.
REFERENCES [1] J. Musa, Software Reliability Engineering, McGraw-Hill, 1998, pp. 41-94. [2] K. S. Trivedi, Probability and Statistics with Reliability, Queuing and Computer Science Applications, ch. 7, John Wiley & Sons, 2002. [3] A. Platis, "A generalized formulation for the performability indicator," Computers & Mathematics with Applications, vol. 51, pp. 239-246, 2006. [4] N. Limnios, B. Ouhbi, A. Platis, and G. Sapountzoglou, "Nonparametric Estimation of Performance and Performability for Semi-Markov Processes," International Journal of Performability Engineering, vol. 2, pp. 19-27, 2006. [5] K. Goseva-Popstojanova and K. S. Trivedi, "Architecture-based approach to reliability assessment of software systems," Performance Evaluation, vol. 45, pp. 179-204, 2001. [6] N. Limnios and G. Oprisan, Semi-Markov Processes and Reliability. [7] A. Platis, N. Limnios, and M. Le Du, "Asymptotic availability of systems modeled by cyclic non-homogeneous Markov chains," presented at the Annual Reliability and Maintainability Symposium, 1997, Proceedings, pp. 293-297, Philadelphia, PA, USA, 1997. [8] A. N. Platis and E. G. Drosakis, "Coverage Modeling and Optimal Maintenance Frequency of an Automated Restoration Mechanism," IEEE Transactions on Reliability, vol. 58, pp. 470-475, 2009. [9] J. L. Rouvroye and E. G. van den Bliek, "Comparing safety analysis techniques," Reliability Engineering & System Safety, vol. 75, pp. 289-294, 2002. [10] E. Stalhandske, J. DeRosier, R. Wilson, and J. Murphy, "Healthcare FMEA in the Veterans Health Administration," Patient Safety & Quality Healthcare, http://www.psqh.com/home.html, 2009. [11] B. Duwe, B. D. Fuchs, and J. Hansen-Flaschen, "Failure mode and effects analysis application to critical care medicine," Critical Care Clinics, vol. 21, pp. 21-30, 2005. [12] G. Suresh, J. D. Horbar, P. Plsek, J. Gray, W. H. Edwards, P. H. Shiono, R. Ursprung, J. Nickerson, J. F. Lucey, and D. Goldmann, for the investigators of the Vermont Oxford Network, "Voluntary Anonymous Reporting of Medical Errors for Neonatal Intensive Care," Pediatrics, vol. 113, pp. 1609-1618, 2004. [13] T. B. Wetterneck, K. Skibinski, M. Schroeder, T. L. Roberts, and P. Carayon, "Challenges with the Performance of Failure Mode and Effects Analysis in Healthcare Organizations: An IV Medication Administration HFMEA," Human Factors and Ergonomics Society Annual Meeting Proceedings, vol. 48, pp. 1708-1712, 2004. [14] R. Latino, "Optimizing FMEA and RCA efforts in health care," ASHRM Journal, vol. 24, pp. 21-27, 2004. [15] M. Lyons, S. Adams, M. Woloshynowych, and C. Vincent, "Human reliability analysis in healthcare: A review of techniques," The International Journal of Risk and Safety in Medicine, vol. 16, no. 4, pp. 223-237, 2004. [16] J. Wreathall and C. Nemeth, "Assessing risk: the role of probabilistic risk assessment (PRA) in patient safety improvement," Quality and Safety in Health Care, vol. 13, pp. 206-212, 2004. [17] W. A. Hyman and E. Johnson, "Fault Tree Analysis of Clinical Alarms," Journal of Clinical Engineering, vol. 33, pp. 85-94, doi:10.1097/01.JCE.0000305872.86942.66, 2008. [18] K. Witte, K. A. Cameron, J. K. McKeon, and J. M. Berkowitz, "Predicting Risk Behaviors: Development and Validation of a Diagnostic Scale," Journal of Health Communication, vol. 1, pp. 317-342, 1996.
[19] A. Maercker, S. Forstmeier, A. Enzler, G. Krusi, E. Horler, C. Maier, and U. Ehlert, "Adjustment disorders, posttraumatic stress disorder, and depressive disorders in old age: findings from a community survey," Comprehensive Psychiatry, vol. 49, pp. 113-120, 2008.
[20] V. Cortellessa et al., "Model Based Performance Risk Analysis," IEEE Transactions on Software Engineering, vol. 31, pp. 3-20, 2005. [21] S. M. Yacoub and H. H. Ammar, "A methodology for architecture-level reliability risk analysis," IEEE Transactions on Software Engineering, vol. 28, pp. 529-547, 2002. [22] S. Yacoub, B. Cukic, and H. H. Ammar, "A scenario-based reliability analysis approach for component-based software," IEEE Transactions on Reliability, vol. 53, pp. 465-480, 2004. [23] D. B. Petriu and M. Woodside, "Software performance models from system scenarios," Performance Evaluation, vol. 61, pp. 65-89, 2005. [24] K. Goseva-Popstojanova, A. Hassan, A. Guedem, W. A. Abdelmoez, D. E. M. Nassar, H. A. Ammar, and A. A. Mili, "Architectural-level risk analysis using UML," IEEE Transactions on Software Engineering, vol. 29, pp. 946-960, 2003. [25] T. Wang, A. Hassan, A. Guedem, W. Abdelmoez, K. Goseva-Popstojanova, and H. Ammar, "Architectural level risk assessment tool based on UML specifications," in Proceedings of the 25th International Conference on Software Engineering, Portland, Oregon: IEEE Computer Society, 2003.
[26] G. Rodrigues, D. Rosenblum, and S. Uchitel, "Sensitivity analysis for a scenario-based reliability prediction model," in Proceedings of the 2005 Workshop on Architecting Dependable Systems, St. Louis, Missouri: ACM, 2005. [27] G. Rodrigues, D. Rosenblum, and S. Uchitel, "Using Scenarios to Predict the Reliability of Concurrent Component-Based Software Systems," LNCS, Springer Berlin/Heidelberg, vol. 3442, pp. 111-126, 2005. [28] E. Dimitrov and A. Schmietendorf, "UML-Based Performance Engineering Possibilities and Techniques," IEEE Software, vol. 19, pp. 74-83, Jan./Feb. 2002. [29] S. Spyrou, P. D. Bamidis, N. Maglaveras, G. Pangalos, and C. Pappas, "A Methodology for Reliability Analysis in Health Networks," IEEE Transactions on Information Technology in Biomedicine, vol. 12, pp. 377-386, 2008. [30] S. Spyrou, P. Bamidis, V. Kilintzis, I. Lekka, N. Maglaveras, and C. Pappas, "Reliability assessment of home health care services," Medinfo, vol. 12, pp. 275-279, 2007.
Prevention and Management of Risk Conditions of Elderly People through the Home Environment Monitoring L. Pastor-Sanz1, M.M. Fernández-Rodríguez1, M.F. Cabrera-Umpiérrez1, M.T. Arredondo1, and E. Bekiaris2 1
Universidad Politécnica de Madrid, Ciudad Universitaria s/n 28040 Madrid, Spain 2 Hellenic Institute of Transport, L. Posidonos 17, 17455 Athens, Greece
Abstract— During the last decades, life expectancy has increased dramatically, bringing about an ageing population. Additionally, elderly people in some areas of Europe experience geographic and social isolation, which leads to a growth of risk conditions. The usage of Information and Communication Technologies (ICT) to prevent and manage risk situations linked with age-related health problems is oriented towards the development of helpful and non-invasive systems. Within this field, the REMOTE project aims at becoming a reference point in the deployment of predictive and self-taught platforms. By acquiring context and health data, the system will have enough information to perform several daily life supporting tasks without user interaction.
Keywords— Elderly, home environment, Service Oriented Architecture, multi-agent.
I. INTRODUCTION Ageing is a triumph of our times – a product of improved public health, sanitation and development [1]. In 1950, 8 out of every 100 people were aged over 60. By 2050, it is expected that 22 out of every 100 people will be over 60 [1]. Chronic diseases are leading causes of death and disability worldwide. Major chronic diseases are responsible for 85% of all deaths and constitute 70% of the disease burden in the European region. Major chronic diseases share common preventable lifestyle-related risk factors such as tobacco, unhealthy diet, alcohol abuse and reduced physical activity [2]. In addition, risk factors for chronic diseases can be linked to social, economic and environmental determinants of health [3]. The fact that the European population is ageing has a clear influence on the morbidity and mortality from chronic diseases [2]. Figure 1 illustrates the population density of the European countries. It is observed that most parts of Norway, Sweden, Finland, the Baltics, Northern Scotland and Central Spain present population densities under 30 inhabitants per square kilometer. Areas with such a low population density can certainly be classified as predominantly rural. Large areas of Ireland, Central and Southern France, eastern Germany, Northern and North-eastern Poland, the southern part of the Czech Republic, Slovakia, Hungary, Austria, Bulgaria, the western part of Romania, and Greece also present population densities under 60 inhabitants per square kilometer [5].
Fig. 1 Low-density population areas in Europe (persons per square kilometer). Source: European Rural Development, Project Description

Figure 2 shows the percentage of elderly aged above 65 in Europe. A severe ageing of the population can be found in southern France, north-central Spain and northern Italy. In some of these areas, more than 25 percent of the population is aged 65 or older [5]. The REMOTE project aims at defining and establishing a multidisciplinary and integrated approach to R&D of ICT for addressing, in real life contexts, the needs of the three above-mentioned population groups combined: elderly people suffering from chronic conditions, at risk due to geographic and social isolation [4]. REMOTE addresses all types of European social, economic, legal and preference-related environments, as well as
all types of remote areas through its pilots in six countries (Greece, Romania, Israel, Spain, Norway and Germany) with relatively high percentages of population aged 65 years and over [4].
Fig. 2 Percentage of elderly population in Europe (age 65+) Source: European Rural Development, Project Description
II. METHODS A Use Case (UC) is a description of a system's behavior, written from the point of view of a user who has requested the system to do something particular. A Use Case captures the visible sequence of events that a system goes through in response to a single stimulus. This also means that Use Cases only describe those things that a user can see, and not the hidden mechanisms of the system [6]. A Use Case is a collection of possible sequences of interactions, namely scenarios, between the system under discussion and its users or actors, relating to a particular goal [7]. A Use Case is not a methodology, a notation or a register. In fact, it is a powerful tool to preview and analyze the functionality of a system. Use Cases can be used during many stages of a system's development, each associated with different objectives. They can be used at the analysis stage, guiding the conversation during the design process and giving it context and scope. They indicate what to include and exclude, suggest the level of depth needed and when to stop, and they provide variations to validate the
design. They can prevent the occurrence of costly error corrections at later stages of the development cycle. At this initial phase of the REMOTE project development, Use Cases are devoted to the definition of the system requirements. The work to be performed by the Universidad Politécnica de Madrid (UPM) will focus on the home environment. Regarding the architecture, the REMOTE proposal is based on an Ambient Intelligence (AmI) framework that integrates a collection of services, contents and sensors focused on the prevention and management of chronic conditions of the elderly. Prior to this selection, several architectural possibilities were reviewed and analyzed, taking into account the project requirements. Three types of architecture were considered: Open Architecture, Distributed Architecture and Service Oriented Architecture (SOA). In order to implement the different UCs, based on the abstract architecture of the REMOTE project, a Service Oriented Architecture (SOA) was selected. SOA development demands a series of practices in the architecture, such as weak coupling among services, separation of responsibilities and integration. Therefore, it is not a product but a way of approaching the architecture. Starting from a series of basic services providing elementary functionalities, service composition allows the aggregation of services in such a way that a higher level of services is achieved. This should lead to a system better adapted to the user's needs. Consequently, service composition intends to fill the gap between basic functionalities and the requirements proposed by the user. This architecture is not focused on an interface, but on a contract established between the schemes communicating with each other by exchanging messages. The REMOTE project adopts a multi-agent architecture of flexible and efficient agents to ensure a high-performance AmI framework. Unlike SOA agents, the agents' architecture used within this platform should act in a more deliberative way, by developing specific techniques for learning and decision-making and by adopting rational abstract decision-making models. Thus, an analysis of different multi-agent architectures was performed. The Java Agent DEvelopment Framework (JADE) was selected due to its simplicity for developers, the updating support and documentation provided, the available resources, the developer's tools, the security options and the characteristics of its inter-agent communication messages. Once the architecture was defined, the selection of the most suitable SOA lifecycle was needed. An analysis of the existing lifecycle proposals available resulted in the selection of the IBM solution. The Service Oriented Modeling and Architecture (SOMA) methodology is the software
IFMBE Proceedings Vol. 29
Prevention and Management of Risk Conditions of Elderly People through the Home Environment Monitoring
development lifecycle provided by IBM. It consists of four phases: Identification, Specification, Realization and Deployment, each one made up of more specific tasks.
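As an illustration of the service-composition idea described above — elementary services registered behind contract names and aggregated into a higher-level service — here is a minimal Python sketch; every service name is hypothetical and none of this is actual REMOTE or SOMA code.

from typing import Callable, Dict

registry: Dict[str, Callable] = {}       # contract name -> implementation

def service(name: str):
    """Register an elementary service under its contract name."""
    def wrap(fn: Callable):
        registry[name] = fn
        return fn
    return wrap

@service("sensors.read_temperature")
def read_temperature(room: str) -> float:
    return 17.5                          # stub: would query a home sensor

@service("actuators.set_heating")
def set_heating(room: str, on: bool) -> None:
    print(f"heating in {room}: {'on' if on else 'off'}")

def comfort_service(room: str, target: float = 20.0) -> None:
    """Composite service: built only from registered contracts, so the
    implementations stay weakly coupled to the composition."""
    if registry["sensors.read_temperature"](room) < target:
        registry["actuators.set_heating"](room, True)

comfort_service("living room")           # -> heating in living room: on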
III. RESULTS Current home automation systems are based on the execution of instructions given by the user. The REMOTE project intends to take this a step forward. The system, gathering information from the context and the user himself, will be autonomous enough to decide which tasks to carry out. Several Use Cases related to the augmented home autonomy applications have been considered. The "environmental control at home" UC aims at monitoring and controlling several domestic devices, in order to monitor users' activity and ensure their comfort and safety. Function automation will support users with functional limitations in performing everyday domestic duties. The work to be performed will be based on the developments carried out in the Thessaloniki (Greece) and Madrid (Spain) house automation rooms in the framework of the ASK-IT project [8]. The purpose of the REMOTE project is to advance the capacity of the system with respect to performing self-activated actions for the user. As mentioned before, the context data provide useful information obtained from different sensors distributed in the user's home. The detection of risky situations related to daily life activities is an objective of this UC. Any gas leak or an oven left switched on is managed by the environment, which alerts or reminds the user. In order to allow the platform to control the environment, the system will take the initiative, first asking the user, in order to solve any risk situation at home. Because of the above, knowledge of the current position of the user at home is fundamental. Therefore, the definition of a new UC was needed: "At home user localization". Whether for security or comfort reasons, the system needs to know the user's location in order to decide what to do. The system will be able to detect the user's presence in the house and perform some pre-defined actions, like altering the status of certain home appliances according to the user's previous settings. Examples include turning on the heating, the TV, etc. The PERSONA project has developed an indoor localization system that will be used as the starting point for this part of the REMOTE project [9]. This information will be useful for other UCs where knowledge of the specific situation of the user is needed. A compilation of location-related data will provide the tools for making decisions regarding the user's behavior. For example, if the user leaves a room, the system can tell exactly
where he has moved. The environment control defines, taking daylight schedules into account, whether artificial light is needed in the new room, and if so, the system automatically switches it on. The REMOTE project also aims at preventing and managing domestic risk situations; in order to accomplish a complete system, the detection of risk situations linked with the health status of elderly people is essential. By means of the "Fall detection" UC, the system will detect the user's falls inside the house and will alert his relatives, neighbors or the emergency services. It will also provide a communication link with the user to guide him and give him support to manage the situation until help arrives. The PERSONA fall detection system will be used as the basis for the REMOTE developments [9]. As mentioned, in many situations it is necessary to communicate with an external service or contact. Thereby, a platform to perform any external communications is required. A "Home gateway for services" will be developed: a gateway to allow fast, cheap and secure bidirectional communication between the user's home, his reference medical center, his professional and informal carers, and eventually third-party services he subscribes to. The challenges are related to the high volume of data to be managed, cost, the unavailability of high-end communication infrastructure in many homes and the need to ensure personal data protection. The development of these UCs intends to improve the quality of life of elderly people by offering comfort, security and safety at their homes.
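A toy sketch of the context-driven behaviour just described — presence plus a daylight schedule drives the lighting, and a forgotten oven raises an alert — with all thresholds and function names being illustrative assumptions:

from datetime import datetime, time
from typing import Optional

def light_needed(now: datetime, room_occupied: bool) -> bool:
    daylight = time(7, 0) <= now.time() <= time(19, 0)  # assumed schedule
    return room_occupied and not daylight

def oven_alert(minutes_on: int, user_in_kitchen: bool,
               limit: int = 45) -> Optional[str]:
    if minutes_on > limit and not user_in_kitchen:
        return f"Reminder: the oven has been on for {minutes_on} minutes"
    return None

now = datetime(2010, 2, 1, 21, 30)
print(light_needed(now, room_occupied=True))   # True -> switch light on
print(oven_alert(60, user_in_kitchen=False))   # reminder message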
IV. CONCLUSIONS This work takes into account the difficulties that elderly people living in geographic and/or social isolation can experience in their daily lives. The combination with chronic conditions and the coexistence of lifestyle risk factors can bring about risk situations that the REMOTE project tries to alleviate. The development of applications based on Information and Communication Technologies (ICT) can improve the quality of life of the elderly under the mentioned conditions. The home environment applications aim at improving the monitoring of the user at home in a non-invasive manner. This monitoring aims at solving any risk situation aggravated by the geographic isolation factor through the usage of a multi-agent Service Oriented Architecture. The REMOTE project intends to anticipate the user's requests through continuous learning of his behavior and habits. Besides, by means of the environment control, risk situations can be avoided or solved by foreseeing risk signs and reacting in time.
ACKNOWLEDGMENT We would like to thank the REMOTE Project Consortium for their valuable contributions for the realization of this work. This project is part of the AAL Joint Program, partially funded by the European Commission.
REFERENCES [1] HelpAge International website, http://www.helpage.org/News/Mediacentre/Factsandfigures. Last checked February 2010. [2] Diet, nutrition, and the prevention of chronic diseases. Report of a WHO Study Group. Geneva, World Health Organization, 1990 (WHO Technical Report Series, No. 797).
[3] The World Health Report 2002, WHO, Reducing Risks, Promoting Healthy Life. [4] REMOTE project, Annex I – Description of Work, 2009. [5] European Rural Development, Project Description, International Institute for Applied Systems Analysis (IIASA), Gerhard K. Heilig, 2002. [6] UML for Java Programmers, Martin, Robert C., Prentice Hall, 2003, ISBN-13: 9780131428485. [7] Writing Effective Use Cases, Cockburn, A., Addison-Wesley, 2001, ISBN 0201702258. [8] ASK-IT project (Contract number 511298), Annex I – Description of Work, version 10 April 2008, approved by EC on 24 April 2008. [9] PERSONA Project IST-045459, Annex I – Description of Work, 2009. [10] REMOTE Project, D.6.1 REMOTE Service Methodology, 2009.
Multilevel Access Control in Hospital Information Systems V. Baldas1, K. Giokas1 and D. Koutsouris1 1
Biomedical Engineering Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
Abstract— With many hospitals adopting new computer-based health records management systems, an old question about medical records privacy is once again posed. However, with the use of certain cryptographic techniques, we can ensure that each part of a patient's medical record is only readable by the ones that must have access to it. This paper presents a novel medical data access control scheme that is based on the Akl-Taylor hierarchical access control technique. According to the proposed scheme, the individuals that can have access to the patient's record are divided into four separate classes, each with a different authorization level. A specific encryption key is assigned to each class, in such a way that a class with a higher authorization level can derive the key of any class below it. Our medical data encryption technique provides each user with a key that will give him access only to the data he needs to know. In addition, the proposed scheme makes it possible for medical health records to be used by research groups or even medical companies, in a totally anonymous way that does not compromise the privacy of the medical information. Keywords— medical data privacy, access control, electronic medical record, EHR encryption
I. INTRODUCTION Lately, there has been a lot of discussion on computerizing health records and general patient information, due to the great number of new features this approach offers. This is a necessity because of the many advantages Electronic Health Records (EHR) have, like the ones mentioned in [1]. They reduce the costs of operation, they provide easier and faster access to the medical information, and they make the task of information exchange between different parties easier. Also, they are a valuable knowledge management tool [2], and they can be used to support clinical trials and extract important results and statistics anonymously. However, the implementation of electronic versions of medical records has some disadvantages too. One of the main concerns of both patients and medical professionals is the level of privacy that can be achieved with this new type of records. Although, as can be seen in [3] and [4], security standards have been set with the directives of coun-
tries and organizations on how to treat medical data, citizens still do not feel secure enough, and many threats constantly compromise the security of medical information systems [5]. In the hospital environment, there is a great number of professionals who must not have the same level of clearance to access some parts of the available information. For example, the doctors must have higher permissions than the medical students, and the nurses should have higher clearance than the administrative staff. This is required in order to ensure that the confidentiality of the medical records, which is something really important for each patient, will not be compromised.
II. RELATED WORK In [6] it is stated that the purpose of access control in a security system is to limit the actions that a registered system user can perform. Each user belongs to a class, which is a general set of users with the same level of authorization to read and alter data in a system. Between two classes Cχ and Cψ, the relation Cχ ≥ Cψ denotes that the members of class χ have higher clearance than those of class ψ. This means that the Cχ users can access the data that Cψ users can access too, but the reverse is not possible. In the medical field, access control defines the permissions of every medical class (doctors, nurses, administrative employees, etc.) to read patients' data that are stored as part of a general medical database. Several types of access control exist, such as discretionary access control (DAC), mandatory access control (MAC), role-based access control (RBAC), and cryptographic access control (CAC). In the DAC model, the access to a patient's record is absolutely controlled by the patient. While this seems logical in the case of health records, since the record owner is the patient, most researchers state that a DAC model creates a security threat because it is vulnerable to the patients' mismanagement of their own records [7]. The MAC model is generally used on military data, to classify them as SECRET, CONFIDENTIAL, etc. It is clear that this model could find application in the medical scenario too, in order to distinguish the different classes of users in a hospital [1].
Role-based access control is the most commonly used method in the medical scenario. According to this, a specific role is assigned to each user of the system. Then, depending on the authorization level of this specific role, access to an object is forbidden or granted respectively. It is obvious that by defining the medical roles as doctor, nurse, intern, etc., an RBAC model can be fitted to the medical access control policy [8][9]. Finally, CAC is the method of controlling access to objects by using a specific hierarchy of encryption and decryption keys for each user, according to his level of authorization. Examples of the application of this access control method in the medical scenario exist, but these cases are limited. In [10] a CAC-based system is introduced, but this system is for the pre-hospital environment and requires the medical practitioner and the patient to meet in the physical world in order for the access to be provided. In this paper, a novel technique for encrypting the patient's data is proposed. In contrast with the systems proposed in [8], [9] and [10], our method makes it possible to encrypt each part of the medical file in such a way that high-level users can decrypt data that are decryptable by low-level users too, while the reverse operation is not computationally feasible.
III. THE DESCRIPTION OF THE MEDICAL ACCESS CONTROL SYSTEM
A. Categorizing privacy levels A hospital or any other medical entity may hold a lot of information about a specific patient, and many electronic files of his exams. The data associated with each patient should not be accessible in their entirety by every hospital employee, and there has to be a specific policy of access to the medical data. One way to do that is to divide the data into different classes, with different privacy levels. For example, it is indisputable that the information regarding the patient's past medical conditions is more sensitive than his date of birth. In order to present our scheme, a categorization of the patient's medical data is described below. Note that this is not the only way to distribute the data sets; it is only presented as an example. The medical data associated with a patient in general, or with a specific examination file of this patient, are categorized as:
• Non-descriptive demographics (nDD). Personal data that describe the patient in an anonymous way. Examples can be age, ethnicity, height, city of residence, etc.
• Descriptive demographics (DD). Personal data that reveal the identity of the patient. For example, name,
emergency contact name, address, etc. In this area, administration-related data are held too.
• Emergency information (EI). Medical data that can be useful in emergency situations, such as possible allergies of the patient. This is particularly useful to the nurses and the other medical staff.
• Current condition and test results (CC). Medical data regarding the patient's current problem, specific test results, etc. This is regarded as a highly private part, and should only be accessible to the doctors.
• Patient's medical history record (HR). In this part, the most medically sensitive information is stored. This includes past test results, past medical conditions and other hospitalization information.
B. The structure of the classes The classes in our access control policy are specific groups from a medical facility who share similar authorization rights. By providing different combinations of the available data sets to each class, we provide a proper level of security for the patient's personal and medical data. While there is no specific way of categorizing the similarly-authorized groups in a hospital environment or any other medical facility, in the context of this work the following classes are considered:
1. Administrative staff. This group includes all the people in the administration of the hospital; based on the need-to-know policy, they are only able to view data related to the demographics of a patient, along with possible billing information.
2. Nurses. Nurses have full access to the patient's demographics, and in addition they are allowed to read the data regarding the emergency information of a specific patient.
3. Research groups and outer-hospital entities. Our scheme allows the use of absolutely non-descriptive patient records by medical research groups and research companies, in order to provide them with the data they need for analysis. It is noted that these groups have access only to data that cannot be linked with a specific patient and can only be used as a statistical aid.
4. Doctors. Doctors have full access to a patient's file.
The connection between each security class that represents a specific group inside the hospital and the parts of the medical record that this class is authorized to decrypt is presented in Table 1. This means that the class relations are as follows: Cdoctors ≥ Cnurses ≥ Cadministration, and also Cdoctors ≥ Cresearchgroups. According to this scheme, the doctor can ac-
IFMBE Proceedings Vol. 29
Multilevel Access Control in Hospital Information Systems
911
Table 1: Access control matrix for the different medical groups.
Group | Access Granted | Access Denied
Administration | DD | nDD, EI, CC, HR
Nurses | nDD, DD, EI | CC, HR
Research groups | nDD, CC, HR | EI, DD
Doctors | nDD, DD, EI, CC, HR | -
cess all the information that the other three groups can access too, and the nurses are also authorized to read all the information that the administration has access to. C. Key generation based on the Akl-Taylor scheme To enforce the above policy, a cryptographic technique is used that is based on keys distributed among the different nodes. Each class holds one key, and the classes that are higher in the hierarchy can derive the keys of the lower classes. Thus, they can decrypt data that the lower classes can decrypt too, but the inverse decryption is not possible. By transforming the limitations presented in Table 1 into a lattice, we obtain the graph shown in Fig. 1. In a lattice, a higher node has access to all the connected lower ones. For example, the class that has access to the EI data can also access the DD and nDD data, because both DD and nDD are sub-nodes of EI. On the contrary, the entities that have access to the DD data only cannot have access to the HR data, because HR is not a sub-node of DD. In order to derive the keys for each class using this lattice, the Akl-Taylor scheme will be used [11]. The Akl-Taylor scheme is a popular hierarchical key extraction scheme proposed in 1983, and it is still widely used. While it has some limitations regarding update processes, these limitations are not important in our medical-related system. The Akl-Taylor scheme relies on a trusted entity known
as CA, to compute the keys for each class and to distribute them. We also have a set of classes C, and each class ci ∈ C holds one unique cryptographic key, denoted k(ci). The process can be formally described as [12]:
• the CA computes n = p × q, where p and q are large primes
• ∀ ci ∈ C, it computes k(ci) = s^e(ci) mod n
• the value of s is kept secret
• e(ci) is chosen as the product of the distinct primes p(cj) assigned to the classes cj whose data ci is not authorized to access, so that the exponent of a class with higher clearance divides the exponents of all classes below it
• the value of e(ci) is made public for each class
Hence, if the class c_high has higher security clearance than the class c_low, it can easily compute the key k(c_low) by using its own key k(c_high) and the public value of e(c_low). This is possible by computing the value k(c_high)^(e(c_low)/e(c_high)) mod n, which is the value of k(c_low). It was proved by Akl and Taylor that if a class ψ has lower authorization than a class χ, it is computationally infeasible for ψ to extract the value of the class χ key, which is based on the assumption that it is difficult to compute integral roots modulo n [12]. Applying the Akl-Taylor scheme to the medical lattice, and using the first primes allowed for demonstration reasons, we can extract the e(ci) values for every class ci, which the CA makes available in the public domain. The outcome is shown in Fig. 2.

[Fig. 1: The lattice for the proposed cryptographic access control system, with nodes ∅ (top), EI, CC, DD, nDD and HR.]

[Fig. 2: The computation process of the public e(ci) values for every ci: e(∅) = 2, e(EI) = 2×5×13, e(CC) = 2×3×7, e(DD) = 2×3×5×11×13, e(nDD) = 2×3×5×7×13, e(HR) = 2×3×5×7×11.]
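To make the mechanics concrete, the following runnable sketch reproduces the scheme over the Fig. 1 lattice with the small demonstration primes; the modulus, the secret s and the prime-to-class assignment are illustrative stand-ins (a real CA would of course use large primes), and the top class ∅ is pinned to exponent 2 as in Fig. 2.

from math import prod

# Fig. 1 lattice: each class -> the classes it dominates (itself included).
DOM = {
    "∅":   {"∅", "EI", "CC", "DD", "nDD", "HR"},
    "EI":  {"EI", "DD", "nDD"},
    "CC":  {"CC", "nDD", "HR"},
    "DD":  {"DD"}, "nDD": {"nDD"}, "HR": {"HR"},
}
PRIME = {"∅": 2, "EI": 3, "CC": 5, "DD": 7, "nDD": 11, "HR": 13}  # assumed

n = 2357 * 2551      # toy modulus n = p*q; the factors stay with the CA
s = 1234             # CA's secret base

# Public exponents: the product of the primes of all classes NOT
# dominated by c, so c_high >= c_low implies e(c_high) | e(c_low).
e = {c: prod(PRIME[x] for x in PRIME if x not in DOM[c]) for c in PRIME}
e["∅"] = 2                                   # pinned as in Fig. 2
k = {c: pow(s, e[c], n) for c in PRIME}      # class keys, computed by CA

def derive(key_high: int, c_high: str, c_low: str) -> int:
    """A higher class derives a lower key from public e-values only."""
    assert c_low in DOM[c_high] and e[c_low] % e[c_high] == 0
    return pow(key_high, e[c_low] // e[c_high], n)

assert derive(k["EI"], "EI", "DD") == k["DD"]    # nurses reach DD
assert derive(k["CC"], "CC", "HR") == k["HR"]    # research groups reach HR
assert derive(k["∅"], "∅", "nDD") == k["nDD"]    # doctors reach everything

Attempting the reverse direction, e.g. derive(k["DD"], "DD", "EI"), trips the assertion, mirroring the property that upward key derivation is infeasible.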
D. Controlling the access to the patient's data with proper key distribution As was shown in Fig. 2, each of the five patient data privacy levels is encrypted using a different key. By distributing specific keys to specific security classes, we can enforce the access control policy. This is achieved by providing each class with the appropriate key for a specific privacy level, so that its members can extract the information that is accessi-
ble to them, according to the rules described in Table 1. The key that each class receives, along with the other keys that can be extracted based on the Akl-Taylor scheme, is shown in detail in Table 2.

Table 2: Key distribution and extraction.
Group | Key assigned to class | Akl-Taylor based computable keys
Administration | k(DD) | none
Nurses | k(EI) | k(DD), k(nDD)
Research groups | k(CC) | k(nDD), k(HR)
Doctors | k(∅) | k(EI), k(CC), k(DD), k(nDD), k(HR)
It is clear that, since the Doctors have the k(∅) key, they can derive the keys of all the other privacy levels, and thus they can see every piece of encrypted information in the patient's file. Similarly, since the Research Groups are assigned the k(CC) level key, they can also derive the keys of the privacy levels nDD and HR, but not the others. Since the persons of the Nurses class hold k(EI), they can derive the keys that are used to encrypt the DD and the nDD data only. Finally, the administrative employees of the Administration class hold only the k(DD) key, so they cannot derive any other key. Fig. 3 gives a clear depiction of the amount of information each class can decrypt using its individual key and the e(ci) values that the CA has made public.
[Fig. 3: The area in which each class has access, based on the key it holds, is shown in a respective dotted semicircle over the lattice (∅, EI, CC, DD, nDD, HR). Each access area corresponds to the class whose name is shown at the bottom of the semicircle: Administration, Nurses, Research Groups, Doctors.]
IV. CONCLUSION In this paper, a novel medical data access control scheme was presented, based on the Akl-Taylor hierarchical access control technique. The patient's data are divided into five major categories, and each system user, according to his authorization, has access to one or more of these five types of medical information. While the scheme is based on a four-class distinction, with the appropriate changes the proposed method can be altered to fit every possible hierarchy in every hospital. It can also be used to control the access to specific objects inside a smaller hospital entity, like a lab or a specific department.
REFERENCES 1. Alhaqbani Bandar, Fidge Colin. Access Control Requirements for Processing Electronic Health Records. Business Process Management Workshops. 2008:371–382. 2. Montero Miguel, Prado Susana. Electronic Health Record as a Knowledge Management Tool in the Scope of Health. Knowledge Management for Health Care Procedures. 2009:152–166. 3. Agrawal R., Johnson C. Securing electronic health records without impeding the flow of information. International Journal of Medical Informatics. 2007;76:471–479. 4. Bennett C. J., Raab C. D. The Adequacy of Privacy: The European Union Data Protection Directive and the North American Response. Information Society. 1997;13:245–263. 5. Anderson Ross. A Security Policy Model for Clinical Information Systems. In Proceedings of the IEEE Symposium on Security and Privacy: 30–43. IEEE Computer Society Press, 1996. 6. Sandhu Ravi S., Samarati Pierangela. Access Control: Principles and Practice. 1994. 7. Sandhu Ravi S., Coyne Edward J., Feinstein Hal L., Youman Charles E. Role-Based Access Control Models. IEEE Computer. 1996;29:38–47. 8. Slevin Lindi A., Macfie Alex. Role based access control for a medical database. In SEA '07: Proceedings of the 11th IASTED International Conference on Software Engineering and Applications (Anaheim, CA, USA): 226–233. ACTA Press, 2007. 9. Tzelepi Sofia, Pangalos George. A Flexible Role-Based Access Control Model for Multimedia Medical Image Database Systems. Information Security. 2001:335–346. 10. Dillema Feike W., Lupetti Simone. Rendezvous-based access control for medical records in the pre-hospital environment. In HealthNet '07: Proceedings of the 1st ACM SIGMOBILE international workshop on Systems and networking support for healthcare and assisted living environments (New York, NY, USA): 1–6. ACM, 2007. 11. Akl Selim G., Taylor Peter D. Cryptographic solution to a problem of access control in a hierarchy. ACM Trans. Comput. Syst. 1983;1:239–248. 12. Crampton Jason, Martin Keith, Wild Peter. On Key Assignment for Hierarchical Access Control. In CSFW '06: Proceedings of the 19th IEEE workshop on Computer Security Foundations (Washington, DC, USA): 98–111. IEEE Computer Society, 2006.
Author: Vassilis Baldas
Institute: Laboratory of Biomedical Engineering, NTUA
Street: 9 Heroon Polytechneiou Str.
City: Athens
Country: Greece
Email: [email protected]
Managing Urinary Incontinence through Hand-held Real-time Decision Support Aid Constantinos Koutsojannis1, Chrysa Lithari2, Eman Alkholy Nabil2, Giorgos Bakogiannis1 and Ioannis Hatzilygeroudis1 1
University of Patras, Computer Engineering & Informatics Department, 26500 Rion, Patras, Greece 2 University of Patras, Department of Medicine, 26500 Rion, Patras, Greece
Abstract— In this paper, we present an intelligent system for the diagnosis and treatment of Urinary Incontinence (UI) for males as well as females, called e-URIN. e-URIN is an intelligent system for the diagnosis and treatment of urinary incontinence according to the symptoms observed in a patient, usually recorded through clinical examination, as well as specific test results. The proposed user-friendly intelligent system is hosted on a hospital server supporting e-health tools, for use through pocket PCs over a wireless connection, as a decision support system for resident doctors as well as an educational tool for medical students. It is based on expert system knowledge representation provided by urology experts, in combination with rich bibliographic search and study, validated with statistical results from clinical practice. Preliminary experimental results on a real patient hospital database show acceptable performance that can be improved using more than one computational intelligence approach in the future.
Keywords— Incontinence, Decision Support, pocket PC, wireless
I. INTRODUCTION
Computerized decision support systems within health care have been considered since the mid-1950s; however, their development in medical practice has been limited by a number of factors, such as the lack of models to capture medical decision-making processes and the lack of integrated, real-time patient care information systems [1]. In addition, the original expert systems were often seen as a challenge rather than a support to professional decision-making in healthcare. Urinary incontinence is involuntary loss of urine; some experts consider it present only when a patient thinks it a problem [2]. The disorder is greatly underrecognized and underreported; the common estimate of 13 million people affected in the US is low. Incontinence can occur at any age but is more common among the elderly and among women, affecting about 30% of elderly women and 15% of elderly men [2], [3]. Incontinence greatly reduces quality of life by causing embarrassment, stigmatization, isolation, and depression. Many elderly patients are institutionalized because incontinence is a burden to caregivers. In bedbound patients, urine irritates and macerates skin, contributing to sacral pressure ulcer
formation. Elderly people with urgency are at increased risk of falls and fractures [2].

A. TYPES OF INCONTINENCE
Incontinence may manifest as near-constant dribbling or as intermittent voiding with or without awareness of the need to void. Some patients have extreme urgency (irrepressible need to void) with little or no warning and may be unable to inhibit voiding until reaching a bathroom. Incontinence may occur or worsen with maneuvers that increase intra-abdominal pressure. Postvoid dribbling is extremely common and probably a normal variant in men. Identifying the clinical pattern is sometimes useful, but causes often overlap and much of the treatment is the same [3].
a. Urge incontinence (UUI) is an urgent, irrepressible need to void that occurs just before uncontrolled urine leakage (of moderate to large volume); nocturia and nocturnal incontinence are common. Urge incontinence is the most common type of incontinence in the elderly but may affect younger people. It is often precipitated by use of a diuretic and is exacerbated by inability to quickly reach a bathroom.
b. Stress incontinence (SUI) is urine leakage due to abrupt increases in intra-abdominal pressure (e.g., with coughing, sneezing, laughing, bending, or lifting). Leakage volume is usually low to moderate. It is the 2nd most common type of incontinence in women, largely because of complications of childbirth and the development of atrophic urethritis. Stress incontinence is typically more severe in obese people because of pressure from abdominal contents on the top of the bladder.
c. Overflow incontinence (OUI) is dribbling of urine from an overly full bladder. Volume is usually small, but leaks may be constant, resulting in large total losses. Overflow incontinence is the 2nd most common type of incontinence in men.
d. Functional incontinence (FUI) is urine loss due to cognitive or physical impairments (e.g., due to dementia or stroke) or environmental barriers that interfere with control of voiding. Neural and urinary tract mechanisms that maintain continence may be normal.
e. Mixed incontinence (MUI) is any combination of the above types. The most common combinations are urge with stress incontinence and urge or stress with functional incontinence.
Finally, there is a variant of UUI which mimics the symptoms of SUI, and we decided to treat it as a separate case, since the therapy proposed is different from
that of SUI and UUI.
f. Detrusor hyperactivity with impaired bladder contractility (DHIC) is a condition characterized by involuntary detrusor contractions in which patients either are unable to empty their bladder completely or can empty their bladder completely only with straining, due to poor contractility of the detrusor. It is defined as the presence of both detrusor overactivity during the storage phase and underactive detrusor contraction during the evacuation phase [1].

B. CLINICAL DECISION MAKING INTELLIGENT SYSTEMS
Clinical decision-making is a complex task requiring a knowledgeable practitioner and reliable informational inputs, involving the identification and management of patients’ health needs. Most research in health-care decision-making is grounded in either decision analytic theory or information processing models. Analytic models stress achieving optimal decisions systematically and rationally, by pre-specifying the decision alternatives, determining the probability of each alternative occurring, and the utility of the alternatives to the decider [1]. Traditionally, an intelligent system that helps clinicians to diagnose and treat diseases is used to identify a patient-specific clinical situation on the basis of key elements of clinical and laboratory examinations, and consequently to refine a theoretical treatment strategy, a priori established in the guideline for the corresponding clinical situation, by the specific therapeutic history of the patient [2]. Depending on the patient's data, it models patient scenarios which drive decision making and are used to synchronize the management of a patient with guideline recommendations. Guideline-based treatment choice must also be considered under the main difference between the management of acute and chronic disease, which is time. Guideline dependence motivates computer-assisted intelligent Decision Support Systems (DSSs), based on technologies that provide the “most likely” treatment scenario to the patient [4]. So, the creation of an expert system to assist non-expert doctors in making an initial diagnosis is very desirable [5], [6]. In this paper, we present an intelligent system for the diagnosis and treatment of UI for males as well as females, called e-URIN. e-URIN primarily aims to help in the diagnosis and treatment of UI diseases effectively, under the consideration of clinical and special test findings. It can also be used by medical students for training purposes on UI management, and it introduces a computer-assisted environment for the pocket PC that is able to synthesize patient-specific information with treatment guidelines, perform complex evaluations, and present the results to health professionals quickly.
II. MEDICAL KNOWLEDGE MODELING
Appropriate diagnosis of UI requires urology doctors with
long experience in UI management. One of the problems is that there is no widely accepted approach yet [1], [7], [8]. Therefore, in addition to a number of interviews with an expert in the field, we also used patient records and bibliographical sources. Our approach to knowledge modeling included three steps. First, we constructed a model of the basic diagnosis and treatment process; we relied on the expert and the literature at this step (Fig. 1). Then, we specified the parameters that play a role in each entity of the process model; at this step, we relied on the expert and the patient records. Finally, we determined the fuzzy models for the values of the resulting linguistic variables. We had, however, to iterate a number of times on this last step to improve the model (Fig. 2).

A. INPUT-OUTPUT VARIABLES
Based on our expert, we specified a set of parameters that play a role for each of the entities in the process model that represent patient data (Fig. 1). Finally, we resulted in the following parameters for each entity in the process model. According to the model, we distinguish between input, intermediate and final parameters at each sub-process.
Input parameters:
1. Medical History: (a) Alzheimer, (b) Multiple Sclerosis, (c) Myelomeningocele, (d) Epispadias, (e) Obliteration, (f) Bladder trabeculation, (g) Diabetic neuropathy.
2. Incontinence Description: (a) Loss during coughing, (b) sneezing, (c) laughing or physical activity, (d) Involuntary loss and strong desire, (e) Frequent or constant dribbling, (f) Frequency and (g) urgency.
3. Previous Incidents: (a) Stroke, (b) Incontinence surgeries, (c) Sacral cord lesion, (d) Low or supra-sacral cord lesion.
4. Other Medication: (a) (anti)cholinergic, (b) opioids, (c) radiation therapy.
Intermediate input parameters/Special tests: (a) Cystometry: >300 ml causes urgency/contractions, (b) Instantaneous leakage at cough test, (c) Delayed or persisted leakage at cough test, (d) PVR>100, (e) Elevated postvoid PVR (e.g. >50), and exclusionary questions for (f) age, (g) chronic impairment of physical or (h) cognitive function.
Final output parameters:
1. Diagnosis: (a) UI mechanism, (b) SUI, (c) UI diagnosis.
2. Treatment: final treatment according to the current UI and other preexisting diseases: (a) pelvic floor training, (b) biofeedback, (c) electric stimulation, (d) medication and (e) surgery.
B. UI DIAGNOSIS AND TREATMENT
The knowledge base of the expert system includes production rules, which are symbolic (if-then) rules with Boolean or crisp variables. The variables of the conditions (or antecedents) of a rule are inputs and the variable of its conclusion (or consequent) an output of the system. To represent the process model, we organized production rules
in three groups: UI classification rules, UI diagnostic rules and treatment rules, inspired from the model presented in Fig. 1. The current patient data are stored in the Patient Database as facts. Each time that the reasoning process
requires a parameter value, it gets it from the database or the user. In a pure interactive mode, it could be given only by the user.
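As a rough sketch of this lookup behaviour (the parameter names and stored facts below are invented for the example; the actual system keeps CLIPS facts, not a Python dictionary):

patient_db = {"loss_during_coughing": "yes", "pvr_gt_100": "no"}  # stored facts

def get_value(param: str) -> str:
    """Return a parameter value from the patient database if present,
    otherwise ask the user (the pure interactive mode)."""
    if param not in patient_db:
        patient_db[param] = input(f"{param}? (yes/no) ")  # cache the answer as a new fact
    return patient_db[param]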
Fig. 1: Urinary Incontinence Diagnosis Process Model
For each patient dataset that is stored in the Patient Database, the UI diagnosis rules decide to ask for the special test parameter values in order to give the user the final diagnosis. Fig. 3 presents how the rule groups and the facts or the user participate in the reasoning process to simulate the diagnosis process.
Table 1. Urinary Incontinence Symptoms Classification Rules (partial) [table: the symptoms/findings Alzheimer; Sclerosis; Myelomeningocele; Epispadias; Obliteration; Bladder trabeculation; Diabetic neuropathy; Loss during coughing, sneezing, laughing or physical activity; Involuntary loss and strong desire; Frequent or constant dribbling; Chronic impairment of physical or cognitive function are marked '+' against the diagnosis columns UUI, SUI, OUI, DHIC, FUI and MUI; the individual cell markings are not recoverable from the extracted layout]

Each time that the reasoning process requires a value, it gets it from the patient database or from user interaction. A sample of the UI diagnosis rules can be seen in Table 1. Finally, there is a small number of treatment rules which, according to the resulting disease, provide the appropriate treatment strategy.

III. e-URIN ARCHITECTURE
The developed intelligent system has the structure of Fig. 2, which is similar to the typical structure of such systems [7], [8]. The knowledge base of the expert system includes rules, which are symbolic (if-then) rules. The variables of the conditions (or antecedents) of a rule are inputs and the variable of its conclusion (or consequent) an output of the system, as the result of the internal inference engine of the system (Fig. 3). The user uses e-URIN through a handheld device with wireless
connection. Thus, authorized users in the Urology clinic have access to the web-based intelligent platform.
Fig. 2. The general structure of the wireless intelligent decision support system e-URIN
IV. IMPLEMENTATION ISSUES
The user interface has been developed with Macromedia Flash 8.0, and the intelligent system has been developed in the CLIPS 6.1b Expert System Shell. Patient data in the Database are organized using CLIPS templates. The total number of rules is 21. All of them have salience 0, except for two, which have salience 30. The first rules refer to the questions and the subsequent ones to diagnosis and treatment. There are 5 input rules which read the user’s answers. The answers are yes or no, except for 1 rule where the possible
answers are 0, 1 or 2, as the possible cases are 3. Each of the 5 input rules refers to a set of questions relative to a subject. Consequently, we have:
• a rule for incontinence description, with 3 questions
• a rule for other diseases, with 6 questions
• a rule for previous incidents, with 3 questions
• a rule for other medicine or treatment, with 2 questions
• a rule for medical tests, with 3 questions
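As a quick check of the numbers above (counts copied from the list in this section; the totals match the "17 to 21" range quoted below):

base_questions = {
    "incontinence description": 3,
    "other diseases": 6,
    "previous incidents": 3,
    "other medicine or treatment": 2,
    "medical tests": 3,
}
conditional = 4                       # two extra rules, fired only in some cases
base = sum(base_questions.values())   # 17
print(base, base + conditional)       # -> 17 21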
Fig. 3. The reasoning flow in URIN
Besides, there are more input rules, which are fired only if some variables are “yes”. So, there are 4 more questions asked only in some cases, formed in 2 rules. The first is fired if the patient has PVR>100, and the question is about her age. The second is fired if the patient has stress or urge incontinence, and its questions have to do with some more symptoms that may lead to detrusor hyperactivity with impaired bladder contractility, which is a type of urge incontinence that mimics stress incontinence. As a result, the questions asked vary from 17 to 21, except for 2 cases where functional incontinence is diagnosed; in this case the system is halted, as this diagnosis is exclusionary. As for the conflict resolution strategy, the one which fires the last rule added in the agenda (depth) is used. The next rule asks the user to input parameters about specific medical tests:

(defrule cough_test "ask questions"
  (initial-fact)
  =>
  (printout t "QUESTIONS ABOUT MEDICAL TESTS" t)
  (bind ?leakage (ask-question "If she has instantaneous leakage during cough test insert 1. If she has persisted or delayed leakage insert 2. Otherwise insert 0. " 1 2 0))
  (if (eq ?leakage 1)
    then (assert (instant_leakage yes))
    else (if (eq ?leakage 2)
           then (assert (persisted_leakage yes))))
  (bind ?cystometry (ask-question "Cystometry: with fluid >300 ml does she have urgency or contractions (yes/no)? " yes no))
  (assert (cystometry ?cystometry))
  (bind ?pvr (ask-question "Does she have PVR>100 (yes/no)? " yes no))
  (assert (pvr ?pvr))
  (if (eq ?pvr no)
    then (assert (end yes))))

(defrule print_a
  (A yes)
  (end yes)
  =>
  (printout t "*** you have severe symptoms and should consult your own doctor ** You may need an examination, and possibly a blood test. Your doctor may consider referring you for an operation to remove the prostate gland, or may consider putting you on a course of tablets *" crlf))

The next rule asserts Detrusor Underactivity as the final diagnosis. Rule 16: If the patient’s cholinergic_opioids is yes or spinal_cord_injury is yes or diabetic_neuropathy is yes, then the disease is detrusor_underactivity. It has been implemented in CLIPS as follows:

(defrule DETRUSOR_UNDERACTIVITY
  (or (choliner_opioids yes)
      (spinal_cord_injury yes)
      (diab_neuro yes))
  =>
  …
  (assert (detrusor_underactivity yes)))

To implement the reasoning flow, different priorities (salience values) have been used for the different rule groups. The final response of the system on the pocket PC screen is like Fig. 4.
e-URIN
*** Diagnosis: The patient has Detrusor Hyperactivity with Impaired Bladder Contractility. ***

e-URIN
*** Treatment: Anticholinergic medicament, like oxybutynin or tolterodine, for 6 months. ***
*** - a-blockers for the obliteration. ***
*** - 6 months later new urodynamic study. ***
*** - ultrasound examination of bladder volume once a year. ***

Fig. 4. The final text diagnosis and treatment of e-URIN on the pocket PC screen.

V. PRELIMINARY EVALUATION RESULTS
We used e-URIN on 95 patient records from the Hospital Database with different types of UI. The corresponding treatment results were compared to the results of our expert doctor. To evaluate e-URIN, we used three metrics commonly used for this purpose: accuracy, sensitivity and specificity (abbreviated as Acc, Sen and Spec respectively), defined as follows: Acc = (a + d)/(a + b + c + d), Sen = a/(a + b), Spec = d/(c + d), where a is the number of positive cases correctly classified, b is the number of positive cases that are misclassified, d is the number of negative cases correctly classified and c is the number of negative cases that are misclassified. By ‘positive’ we mean that a case belongs to the group of the corresponding initial diagnosis, and by ‘negative’ that it does not. The evaluation results are presented in Table 3, compared with the mean results of a team of four non-expert doctors, and show an acceptable performance, taking 90% as a “gold standard” for a system like this.

Table 3. Evaluation results for initial diagnosis of UI disease patients according to the expert doctor

Metrics        Non-expert doctors   e-URIN
ACCURACY       0.75                 0.91
SENSITIVITY    0.78                 0.95
SPECIFICITY    0.79                 0.93
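For illustration, the three metrics defined above in code; the confusion counts in the example call are invented to show the mechanics and are not the study's raw data:

def metrics(a: int, b: int, c: int, d: int) -> dict:
    """a: positives correctly classified, b: positives misclassified,
    c: negatives misclassified, d: negatives correctly classified."""
    return {
        "accuracy":    (a + d) / (a + b + c + d),
        "sensitivity": a / (a + b),
        "specificity": d / (c + d),
    }

# e.g. 19 of 20 positives and 28 of 30 negatives correct:
print(metrics(a=19, b=1, c=2, d=28))
# -> accuracy 0.94, sensitivity 0.95, specificity ~0.93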
VI. CONCLUSIONS AND RELATED WORK
In this paper, we present the design, implementation and evaluation of e-URIN, an intelligent system that deals with the diagnosis and treatment of Urinary Incontinence diseases and that is used through pocket PCs from the web server of the Urology Clinic of our University Hospital, improving the usability of most similar approaches in the same medical field [5], [6], [7], [8], [9], [10], [11], [12], [13]. All of them are PC-based; most of them [5], [6], [7], [8], [9] and [10] deal only with female UI, even after their improvement, and some others only with the nursing management of UI [11], [12], [13], [14]. None of them is web-based for remote access or designed or modified for pocket PC use. The diagnosis process was modeled based on the expert’s knowledge and existing literature. Input variables were specified based again on the expert’s knowledge and the statistical analysis of the records of 95 patients from a hospital database. Input-output values were determined with the help of the expert, the statistical analysis and bibliographical sources. Experimental results showed that e-URIN did quite better than non-expert urologists, but worse than the expert. A possible reason for that may be the determination of the values of the variables. A fuzzy approach of the same system may give better results [15]. On the other hand, the use of alternative or more advanced representation methods, like hybrid ones [16], [17], may improve the system’s accuracy and provide adaptability to new knowledge.
REFERENCES
1. Banning M. A review of clinical decision making: models and current research. J Clinical Nursing 2008;(17):187-195.
2. European Association of Urology. Guidelines 2007. http://www.uroweb.org/professional-resources/guidelines/.
3. Boyington AR, Wildemuth BM, Dougherty MC, Hall EP. Development of a computer-based system for continence health promotion. Nurs Outlook 2004 Sep-Oct;52(5):241-7.
4. Laurikkala J, Juhola M, Lammi S, Penttinen J, Aukee P. Analysis of the imputed female urinary incontinence data for the evaluation of expert system parameters. Comput Biol Med 2001 Jul;31(4):239-57.
5. Brenner B. Expert system technology: a new aid for the gynaecologist in managing stress urinary incontinence. N Z Med J 1997 Nov 14;110(1055):425.
6. Gorman R. Expert system for management of urinary incontinence in women. Proc Annu Symp Comput Appl Med Care 1995:527-31.
7. Petrucci K, Petrucci P, Canfield K, McCormick KA, Kjerulff K, Parks P. Evaluation of UNIS: Urological Nursing Information Systems. Proc Annu Symp Comput Appl Med Care 1991:43-7.
8. Riss PA, Koelbl H. Development of an expert system for preoperative assessment of female urinary incontinence. Int J Biomed Comput 1988 May-Jun;22(3-4):217-23.
9. Riss PA, Koelbl H, Reinthaller A, Deutinger J. Development and application of simple expert systems in obstetrics and gynecology. J Perinat Med 1988;16(4):283-7.
10. Koutsojannis C, Tsimara M, Nabil E. HIROFILOS: A Medical Expert System for Prostate Diseases. Proceedings Int WSEAS Conf CIMMACS ’08, Cairo, Dec 30-31, 2008:100-106.
11. Koutsojannis C, Hatzilygeroudis I. Fuzzy-Evolutionary Synergism in an Intelligent Medical Diagnosis System. Lect Notes Comp Science 2006 Oct;(4252):1313-22.
12. Beligiannis G, Hatzilygeroudis I, Koutsojannis C, Prentzas J. GA Driven Intelligent System for Medical Diagnosis. KES (1) 2006:968-975.
Author: Constantinos Koutsojannis
Institute: University of Patras
Street: 25600 Rion
City: Patras
Country: Greece
Email: [email protected]
Renal Telemedicine and Telehealth – Where Do We Stand? E. Kaldoudi1, V. Vargemezis2 1
Democritus University of Thrace, School of Medicine, Laboratory of Medical Physics, Alexandroupoli, Greece 2 Democritus University of Thrace, School of Medicine, Division of Nephrology, Alexandroupoli, Greece
Abstract— Chronic renal patients and patients with end stage renal disease are a distinctive patient group with a serious, chronic and irreversible health condition which is mainly treated at home. As such, they are unique candidates for support via telehealth services. During the last 20 years, a number of ICT interventions have been deployed to support renal disease. This paper reviews current trends in home care telematics for patients on peritoneal dialysis and comments on certain design considerations that prohibit the widespread deployment of such services. Whenever pilot studies have been performed, these report user acceptance, increased quality of life and even better health outcomes. Interest in the area is expected to rise as the population with renal disease is increasing. Despite this, the extent of development and maturity in renal telehealth is rather limited when compared to other telehealth applications. This paper argues that this low technology penetration is mainly due to the fact that current approaches are treatment- and disease-centric and do not integrate patient education and tools for overall disease management. Additionally, current renal telehealth services do not follow open, standards-based software development principles and are inadequately evaluated. The paper concludes with specific proposals to alleviate these problems.
Keywords— home care telematics, renal telematics, telemonitoring, patient management.
I. INTRODUCTION
Nowadays, the number of renal disease patients tends to increase, mostly due to the increased incidence of diabetes and hypertension. Chronic kidney disease may lead to several and often severe chronic complications such as arterial hypertension, nephrogenic anemia, renal osteodystrophy, peripheral neuropathy, malnutrition as well as cardiovascular disease, and eventually death. Thus, early detection and treatment can often maintain renal function before chronic kidney disease deteriorates to end stage renal disease and renal failure. However, this is not always possible, and disease progression may eventually lead to kidney failure; indeed, the number of end-stage renal disease patients tends to increase [1]-[3]. It is therefore becoming all the more imperative to take measures for the prevention and better management of end stage renal disease. Indeed, in various categories of renal patients, close monitoring may prove a good measure for early diagnosis, treatment adjustment and rehabilitation.
Specifically, for patients with chronic renal failure, there is a need to follow up any unexpected exacerbation of the renal function in order to prepare for kidney replacement therapy (dialysis or/and transplantation). Especially patients with chronic systemic diseases such as diabetes and hypertension, as well as patients older than 60 years, should be carefully evaluated, not only to estimate the time for introducing renal replacement therapy but also to avoid any undesired exposure to drugs or procedures associated with acute decline in kidney function. For patients on peritoneal dialysis, the success of their treatment method depends on the dialysis scheme, which is designed by the doctor for each individual patient and is determined, among other factors, by physiological parameters such as patient weight, blood pressure and heart rate (and in specific cases ECG and blood glucose), as well as by the type, amount and daily frequency of the peritoneal solution exchanges that are required in order to achieve adequate fluid and solute removal during dialysis. Abnormal alterations of these parameters, if detected on time, may prevent severe side-effects such as oedema and acute dehydration. Proper inspection of the catheter exit site is also important to prevent and/or timely detect peritonitis. For patients on hemodialysis, there is substantial evidence regarding the correlation between the delivered dose of hemodialysis and patient morbidity and final outcome. Since clinical signs and symptoms are not reliable indicators of HD adequacy, the delivered dose should be measured and monitored routinely. Formal kinetic modeling provides a quantitative method for developing a treatment prescription for a specific patient (a worked instance is sketched after this enumeration). Regarding the dialysis session duration, some clinical researchers argue that the hemodialysis treatment time alone, independent of dialysis adequacy indices, can be used as a measure of hemodialysis adequacy. Today there are several HD methods that may easily serve the patients’ special needs for fluid and solute removal. For patients on a wait-list for transplantation, monitoring of vital signs and other overall health conditions ensures that the patient’s condition is always adequate for undergoing transplantation. Finally, for transplanted patients, there is a special need for systematic careful evaluation, both for adequate kidney function and for the avoidance of any inflammatory or other possible factor that may threaten the patient’s health. Ensuring adherence to prescription is also important.
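The paper does not name a specific dose index, but a standard instance of the formal kinetic modeling mentioned for hemodialysis above is Daugirdas' second-generation single-pool Kt/V; a minimal sketch, with invented example values:

from math import log

def sp_ktv(pre_bun: float, post_bun: float, hours: float,
           uf_litres: float, post_weight_kg: float) -> float:
    """Daugirdas second-generation single-pool Kt/V:
    Kt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF/W, with R = post/pre urea."""
    r = post_bun / pre_bun
    return -log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / post_weight_kg

# Invented example: pre-dialysis BUN 80 mg/dL, post 25 mg/dL, a 4-hour session
# removing 2 L of ultrafiltrate at 70 kg post-dialysis weight.
print(round(sp_ktv(80, 25, 4.0, 2.0, 70.0), 2))   # ~1.35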
In addition to the above, regular interaction between the healthcare provider and the patient ensures adherence to treatment specifics and dietary style/prescription, while it psychologically supports the patients and their families.
II. CURRENT STATE OF THE ART
Information and communication technologies (ICT) can be (and have been) employed to support the management of renal patients. During the last 20 years, a number of ICT interventions have been deployed to support renal disease. These are mainly organized in two broad categories: (a) teleconsultations and virtual home visits, and (b) telemetry of related data. Early experiences, mainly in the USA [4] and Australia [5], concentrated on videoconferencing for hemodialysis. Around the year 2000, Europe [6] and Japan also showed interest in this area, and at the same time videoconferencing began to support PD as well [7], [8]. Recently, telemetry of cycler data and other related biometric parameters and vital signs entered the scene, mainly for PD home monitoring [9]. Both leading companies in dialysis equipment, Fresenius Medical Care (Germany, http://www.fmc-ag.com/) and Baxter International Inc. (IL, USA, http://www.baxter.com/), have incorporated telemedicine in some of their peritoneal dialysis cycler models, allowing data transmission via modem as well as live patient-physician interaction. The literature reports very limited clinical use of these cycler-embedded telemedicine applications: in Italy, the employment of Fresenius telematic-enabled cyclers in 2002 and 2003 [10], [11], and in the USA, the employment of Baxter telematic-enabled cyclers in 2008 [12]. To allow ubiquitous monitoring of peritoneal dialysis irrespective of cycler provider, two different services have recently been deployed in Europe. In France, the DIATELIC project [13] puts emphasis on monitoring telemetry data that is entered manually by patients. In Greece, the PERKA service [14] is the only fully web-based service that allows for dynamic service configuration by the medical personnel, to account for unforeseen data monitoring needs. A review of the current state of the field shows that monitoring of the condition of renal patients (either by teleconferencing or via data telemetry) may have positive effects and improve quality of life and health. Indeed, the fact that renal patients are treated mostly outside the hospital while maintaining to some degree their normal activities, combined with the fact that regular monitoring and management of their treatment and overall health condition is clinically meaningful and desirable, makes the renal patient a unique candidate for support via telehealth services. Whenever pilot studies have been performed, these report user acceptance, increased quality of life and even better health outcomes.
Interest in the area is expected to rise as the population with renal disease is increasing. Most importantly, such monitoring services may prove invaluable for patients and healthy citizens at risk of developing end stage renal disease, and for monitoring the health level of patients on a wait-list for transplantation. Despite this, the extent of development and maturity in renal telehealth is rather limited when compared to other telehealth applications.
III. PROBLEMS AND REQUIREMENTS CURRENTLY UNMET
We argue that the major reasons that may lead to this low technology penetration include the following.
Current approaches are treatment-centric, that is, their goal is the monitoring or consulting of either hemodialysis or peritoneal dialysis. However, the health care goal is the management of the renal patient, who may switch between treatments. A patient on PD awaiting transplantation may finally get the graft and should then be closely monitored in a different manner to ensure recovery and reduce rejection probability. Or, a patient on HD may change to PD and vice versa, etc. Current solutions, however, do not provide continuity of monitoring and care for the renal patient irrespective of treatment.
Current approaches are disease-centric rather than personalized and human-centric. They emphasize the dialysis and other medical parameters in order to monitor the disease and treatment process, and create appropriate alarms and decision support to support doctors. However, renal patients are chronic patients that live with their condition for their entire life. They live in their own environment, most of the time outside the hospital, and they usually pursue (or try to pursue) a normal life. Indeed, most of the effort in state-of-the-art advancements in treatment methods aims at promoting a mobile patient in their own environment. It is only natural that the ICT intervention should take into account the patient in their own environment leading their life, in addition to being treated and monitored for renal disease.
Current approaches are ‘data-centric’, in the sense that the emphasis is on transmitting and processing medical data, while the renal patients themselves are often greatly overlooked. Indeed, confronting a chronic, irreversible condition mainly treated at home, renal patients and their families comprise one of the few patient groups that most need support for self-management, continuous education and training, social support and networking.
Moreover, current approaches are clinically oriented, putting emphasis on supporting medical personnel to
manage the health condition of the individual. However, in renal disease a major challenge is the overall management of chronic renal disease (not only of the patient), including planning and management of dialysis centers, organ donation, matching and transplantation, and the overall management of related resources and financial issues.
On the technical side, the great majority of current approaches are closed, proprietary solutions created by a single vendor, not allowing for any interoperability with third parties. From the patient to the center and the data processing, current solutions are developed by a single vendor without standard interfaces for interoperability with other products. Should a health care institute decide to deploy such a service, it would have to stick with the same provider for all desired functionality.
Finally, current approaches, when evaluated, are regarded either as technological interventions or as ‘drugs’ for patients to use in order to improve their health condition. Moreover, evaluation is often treated as an unavoidable project aftermath, rather than as a learning process to improve and appropriately tailor the intervention in question. It is most likely that such systems eventually fail to fulfill expectations and thus fail to become useful and indispensable.
IV. PROGRESS BEYOND THE STATE OF THE ART
In order for information and communication technologies to provide efficient, effective and sustainable support for the renal patient, the following must be taken into consideration.
Thorough field analysis and research should be conducted to identify and model context in the case of renal patient management. Renal patient context encompasses issues from the social and health environment, as well as context related to the patient and the medical personnel. Renal patients, mostly treated as outpatients, interact strongly with their normal social environment, while at the same time they interact with the healthcare environment to address their chronic condition. The patients themselves have their individual characteristics, preferences and overall situation that define their own context. On the other hand, their healthcare providers, being individuals as well as professionals, exhibit their own personalized perspective. Thus, research should target the development of four strongly interlinked ontologies: (a) a patient ontology, (b) a social environment ontology, (c) a healthcare professional ontology, and (d) a healthcare environment ontology. These ontologies can then be used to build context-aware renal telematics services, which may include context-aware patient monitoring, context-aware medical intelligent alarms, context-aware patient
feedback and education, and context-aware health provider decision support.
In order to support personalized self-management, renal telematics should also make provisions for patient education and social networking. Here, the active, participative nature of the web 2.0 paradigm should not be overlooked. Moreover, the enormous penetration of applications such as social networks and virtual worlds gives a unique opportunity to support the networking of renal patients and to promote public awareness on issues pertaining to renal disease prevention and organ donation. Thus, a renal telehealth service should also include access to such supportive functionality and, even more, allow feedback from social environments to reach the health care professional.
Additionally, the healthcare professional and the administrator should have access to advanced tools for monitoring not only the individual but the entire renal patient population and all related resources used for renal disease management. In this respect, renal telematics services should be coupled with population and disease management simulation tools, with bi-directional flow of data. That is, continuous real monitoring data should be the input to decision support tools for overall population management, while the output should be directly used for the management of the individual via renal telemedicine.
From the technical perspective, renal telemedicine systems and services should be designed and developed following service-oriented architectures and abiding by international, preferably open and generic, technology standards. A service-oriented architecture (SOA) offers system design and management principles that support the re-use and sharing of system resources across the healthcare organization. Respective systems should follow the principle of developing and combining core (web) services with generic standard interfaces for communication and data exchange amongst them, so as to allow for seamless integration of third-party applications. Competitive development of similar components by third parties should be promoted. We believe that the existence of a number of competing solutions is to the advantage of the end user as well as of the advancement of the market itself. Moreover, each prospective researcher/developer in the area should not have to ‘reinvent the wheel’ by designing and building yet another integrated telehealth system. Rather, they should concentrate on developing the component that best fits their expertise and use an overall service-oriented architecture to plug in their component and integrate with the overall telehealth application.
Finally, pilot renal telehealth projects should invest in sustainability studies. There is agreement that the evaluation process of home telehealth services is much more
complicated than that of other telehealth applications [15], mainly because of the nature of the stakeholders and the context of home telehealth interventions, and the engagement (or lack of engagement) of patients in the design process. Indeed, the most common reason mentioned is the diverse group of stakeholders. Stakeholders come from different parts of the healthcare system with different value systems, different perceptions of risk and different expectations of the home telehealth application. Costs and benefits may fall unequally between the various groups of stakeholders. The second reason often seen in the literature is the diffused context in which home telehealth is applied. The surrounding context varies (each patient’s home) and, given the fact that home telehealth applications are few and short (in terms of pilot application duration), it is difficult to generate data of sufficient scope and scale for conducting a careful analysis. These obstacles require careful consideration of the evaluation approach to be used, which should follow a holistic, interpretive paradigm. Rather than going into randomized controlled trials and calculating cost-benefit, the main objective of an evaluation approach should be to provide feedback for developing and deploying a meaningful and socially acceptable telehealth intervention. A renal telehealth intervention should be viewed neither as a medical innovation nor as a drug that can be prescribed to patients; instead, it should be viewed as an information system/service coming to serve information transmission and processing needs in a specific complex environment, with a variety of actors in different contexts. Such actors include the service itself, the humans involved (patients, healthcare providers, and administrators) and society in general (the social environment and the healthcare system). For all these actors, the evaluation process should address issues of structure, process and outcome alike [16]. In order to produce a telehealth service that is usable, meaningful, and beneficial to patients, health institutions and, more generally, to society, the technical intervention should be technologically viable, socially acceptable and institutionally feasible, and thus sustainable. Towards this objective, renal telehealth research and development should strive to develop patient-centered services for seamlessly supporting the renal patient across treatment methods, health centers and living environments, as well as different environmental/personal conditions, integrated with social and educational services as well as with overall disease management tools.
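As a rough illustration only (every class name, field and threshold below is an assumption for the example, not something the paper specifies), the four proposed context ontologies could feed a context-aware telemetry alarm along these lines:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientContext:            # fragment of the patient ontology
    treatment: str               # e.g. "PD", "HD", "transplant", "wait-list"
    dry_weight_kg: float

@dataclass
class SocialContext:             # fragment of the social environment ontology
    caregiver_reachable: bool

def weight_alarm(measured_kg: float, patient: PatientContext,
                 social: SocialContext) -> Optional[str]:
    """Fluid-overload alarm whose escalation depends on context."""
    gain = measured_kg - patient.dry_weight_kg
    if patient.treatment == "PD" and gain > 2.0:     # illustrative 2 kg threshold
        if social.caregiver_reachable:
            return "notify treating physician"
        return "notify physician and emergency contact"
    return None

print(weight_alarm(73.5, PatientContext("PD", 71.0), SocialContext(False)))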
REFERENCES
1. Jones CA, McQuillan GM, Kusek JW et al. (1998) Serum creatinine levels in the US population: Third National Health and Nutrition Examination Survey. Am J Kidney Dis 32:992-999 (erratum (2000) 35:178)
2. National Kidney Foundation (2002) KDOQI Clinical practice guidelines for chronic kidney disease: evaluation, classification and stratification. Am J Kidney Dis 39:S1-S000
3. U.S. Renal Data System (2008) USRDS 2008 annual data report: atlas of chronic kidney disease and end-stage renal disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD
4. Moncrief JW (1998) Telemedicine in the care of the end-stage renal disease patient. Adv Ren Replace Ther 5:286-91
5. Mitchell JG, Disney APS (1997) Clinical applications of renal telemedicine. J Telemed Telecare 3:158-162
6. Rumpsfeld M, Arild E, Norum J, Breivik E (2004) Telemedicine in hemodialysis: a university department and two remote satellites linked together as one common workplace. J Telemed Telecare 11:251-255
7. Stroetmann KA, Gruetzmacher P, Stroetmann VN (2000) Improving quality of life for dialysis patients through telecare. J Telemed Telecare 6(S1):80-83
8. Gallar P, Vigil A, Rodriguez I et al (2007) Two-year experience with telemedicine in the follow-up of patients in home peritoneal dialysis. J Telemed Telecare 13:288-292
9. Nakamoto H, Hatta M, Tanaka A et al (2000) Telemedicine system for home automated peritoneal dialysis. Adv Perit Dial 16:191-4
10. Ghio L, Boccola S, Andronio L et al (2002) A case study: telemedicine technology and peritoneal dialysis in children. Telemed J E Health 8:355-359
11. Edefonti A, Boccola S, Picca M et al (2003) Treatment data during pediatric home peritoneal teledialysis. Pediatr Nephrol 18:560-4
12. Chand DH, Bednarz D et al (2008) Daily remote peritoneal dialysis monitoring: an adjunct to enhance patient care. Perit Dial Int 28:533-537
13. Durand P-Y, Chanliau J, Mariot A et al (2001) Cost-benefit assessment of a smart telemedicine system in patients undergoing CAPD: preliminary results. Perit Dial Int 21(S2):S53
14. Kaldoudi E, Passadakis P, Panagoutsos S, Vargemezis V (2007) Homecare telematics for peritoneal dialysis. Journal on Information Technology in Healthcare 5:372-378
15. Barlow J, Bayer S, Curry R (2006) Implementing complex innovations in fluid multi-stakeholder environments: experiences of ‘telecare’. Technovation 26:396-406
16. Kaldoudi E, Chatzopoulou A, Vargemezis V (2009) Adopting the STARE-HI guidelines for the evaluation of home care telehealth applications: an interpretive approach. The Journal on Information Technology in Healthcare 7:293-303

Corresponding author:
Author: E. Kaldoudi
Institute: School of Medicine, DUTH
Street: Dragana
City: Alexandroupoli
Country: Greece
Email: [email protected]
A System for Monitoring Children with Suspected Cardiac Arrhythmias – Technical Optimizations and Evaluation E. Kyriacou1, C. Pattichis2, D. Hoplaros2, A. Kounoudes3, M. Milis3, A. Jossif4 1
Frederick University/Department of Computer Science and Engineering, Lemesos, Cyprus 2 University of Cyprus/Department of Computer Science, Lefkosia, Cyprus 3 Signal GeneriX Ltd, Lemesos, Cyprus 4 “Paedi” Center for Specialized Pediatrics, Lefkosia, Cyprus
Abstract— Children with cardiac arrhythmias constitute one of the most difficult problems in cardiology both in terms of diagnosis and management. In such cases continuous monitoring of ECG vital signs and environmental conditions can significantly improve the identification of a possible arrhythmia. In this study we present a system which enables the continuous monitoring of children with suspected cardiac arrhythmias. The system is able to carry out real-time acquisition and transmission of ECG signals, and facilitate an alarm scheme able to identify possible arrhythmias so as to notify the on-call doctor and the relatives of the child that an event may be happening. In-house monitoring of a child is performed using a sensor network able to record and transmit ECG and the living conditions, while outside the house, monitoring is performed through a GPRS/UMTS enabled device. The transmitted information can be accessed through a web based platform which facilitates a basic electronic patient record module and continuous display of monitoring information of the patient. This paper presents the technical tests, optimization and hardware changes applied in order to improve the system when used for the in-house monitoring of children. Keywords— Mobile health, sensor networks, emergency monitoring, children arrhythmias.
I. INTRODUCTION
Telemedicine has been used for many years in order to improve health care provision or for patient monitoring solutions. Several issues such as computational capability, size of the devices, power efficiency and cost have been limiting the availability of devices and services to a few special cases [1], [2]. Recent advancements in communications and computer systems can help us develop general-purpose systems that are more efficient, much smaller and at lower cost.
In this study, we focus on the continuous monitoring of children with suspected cardiac arrhythmias. Arrhythmia is one of the most difficult problems in Cardiology, both in terms of diagnosis and management. The problem is particularly pronounced in Pediatric Cardiology because of the variety of etiologies and the difficulty that children have in trying to communicate their symptoms. For example, in the case of hypertrophic cardiomyopathy, it is known that children are at higher risk for arrhythmias and sudden death than adults. In most of the cases an ECG tracing is required and is sufficient for an accurate diagnosis, whereas in some cases a more sophisticated modality is required [1], [3]. As an example, a relatively recently recognized rare form of cardiomyopathy, the Isolated Noncompaction of the Left Ventricle (NCLV), poses new challenges. A subset of patients with this disease is especially prone to arrhythmia and sudden death. Current testing with the Holter monitor has proved insufficient because it is limited to 24 or 48 hours of recording, during which the patient may be asymptomatic. Some of these children are at high risk for sudden death and at the same time it is very difficult to decide on the proper treatment, making their ECG monitoring a very important task [1], [4]. In order to monitor these children sufficiently, we need a noninvasive or minimally invasive way to record the ECG for extended periods of time and, at the same time, perform automatic analysis continuously or at frequent intervals.

Fig. 1. Overall system architecture © IEEE 2007 [6]
Fig. 2. UML sequence diagram of the actions performed during the in-house case © IEEE 2007 [6]

Fig. 3. Block diagram of the in-house wireless sensor network © IEEE 2007 [6]
Work presented here concerns the technical changes carried out in order to improve the quality of the system when used in the subject’s house. The software design and initial development have already been presented in [6], [7]. The work is a significant extension of our earlier telemedicine work on real-time ambulatory monitoring systems [5].

II. METHODOLOGY
In general, the problem has been divided into two cases: the first, called the “In-house case”, where the subject is located in his/her house, and the second, called the “Moving patient case”, where the subject might be located anywhere else. Our goal is continuous 24-hour monitoring of the child. An overall architecture diagram of the proposed system can be seen in Fig. 1: on the left hand side the two cases of patient monitoring are displayed, while on the right hand side the doctors and the access to the system are displayed.

A. In-house case

During this case (a UML sequence diagram describing the actions performed in this case can be seen in Fig. 2), a sensor network is installed in the child’s house that will be used in
order to continuously monitor ECG signals from the patient [6] – [10]. Several other environmental parameters like light, temperature, sound and acceleration are also monitored so as to continuously check the living conditions. The ECG (3-lead) signal is recorded by a sensor carried by the child that is part of a wireless sensor network (WSN) installed in the house. The ECG sensor has been specially designed and developed by SignalGenerix Ltd (http://www.signalgenerix.com) [7]. Signal information from the wearable sensor is propagated to a local monitoring station which also acts as a gateway to the rest of the monitoring network. The cardiac pulse is propagated through the WSN to the local monitoring station with an embedded broadcast algorithm. The local monitoring station is responsible to:
• Collect environmental measurements (e.g. temperature, 2D accelerometer, sound, light).
• Sample the ECG signal.
• Store the sensor data locally.
• Analyze the ECG signal in order to detect possible cardiac arrhythmias.
In the case of a detected arrhythmia:
• Send an alarm message to the central monitoring station (located in the hospital), to the supervising doctor and to a relative.
For our case we have chosen to develop a Mote-based sensor network based on Crossbow® equipment [10], [11], [12]. The proposed network that will be used to cover the patient’s house is based on motes like MicaZ™, while the acquisition of ECG data is performed through a custom-
created board connected to the MDA300CA™ acquisition board. Additional environmental data will be collected through the MTS310CA™ sensor board. All collected information will be transmitted to a gateway, MIB510™, connected to a Personal Computer; this is going to be the local monitoring system (see Fig. 3).

B. Moving patient case

The second case is more general and will be used in order to complete the coverage of the system. For this case, the child is monitored using the same ECG recording device (Fig. 3), but the signals are transmitted, through a PDA, directly to the central monitoring system (for the pilot test case an HP iPAQ hw6915 was used). The transmission is performed through the use of 2.5G and 3G mobile communication networks (GPRS/UMTS) [1]. Details for this case, as well as for the ECG acquisition device, have already been presented in [7].

C. Central Monitoring Station

For both cases, data are transmitted to a central monitoring station (see Fig. 1); this station is responsible to:
• Store data sent from the local monitoring station.
• Display data transmitted from the local monitoring stations through a web interface.
• Analyze the ECG signal further (based on an open source arrhythmia detection algorithm [13]) and send a message (SMS, e-mail etc.) to the doctor [7].

III. RESULTS
Several tests were performed in order to verify the correct transmission of data over the system. The tests described here are for the in-house use of the system. Tests
were performed using the ECG simulator with a sensor node connected to the gateway [7]. Initial tests were performed using omni-directional antennas for the sensor nodes (provided with the Crossbow nodes). Tests were performed to ensure the correct functionality of the system using two repeaters and positioning the sensors at various distances from each other, including various obstructions between them. These experiments showed that the greater the distance, the bigger the packet loss (which was expected), but bigger loss was observed when there were more obstructions (for instance, when the two sensors were in different rooms and the doors connecting the rooms were closed) [7]. Following the tests with omni-directional antennas, the idea of using a directional antenna came up, since the antennas we chose were not initially designed for our application. The new directional antenna was designed by SignalGenerix Ltd; it is very small and limits power consumption on the nodes. Power consumption is not our concern here, however, because we chose to test the directional antenna only on the central gateway and the repeaters. It made no sense to test a directional antenna on the patient's node, as the patient will be moving and have no constant direction, so the benefits of a directional antenna would not be harnessed. The antenna was based on the Yagi-Uda architecture. Using fractal geometry, the size of the antenna was reduced to a small printed circuit having a gain of 6.2 dB and a very low cost. For the tests, we used our CardioBee device (the device that collects and sends the ECG from the patient), an ECG simulator, one gateway, the original omni-directional antenna and the new directional printed antenna. We wanted to compare the packet loss rate and the retransmission rate with respect to several variables: distance, obstacles, interference from other applications utilizing the same frequencies as 802.15.4 does in the ISM band, and whether the patient is stationary or moving. We tried to eliminate other factors that could affect our results (for instance, new batteries were used for each test although, in the long run, this did not affect anything). All the tests were done indoors, with no wireless access points turned on. Tests done:
• Base and stationary patient at 3 m distance.
• Base and stationary patient at 5 m distance.
• Base and stationary patient at 10 m distance.
• Base and stationary patient at 5 m distance with a closed door between.
• Base and stationary patient at 10 m distance with a closed door between.
• Base and stationary patient at 10 m distance with two closed doors between.
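For reference, the rates in Table 1 below follow directly from the raw counts plotted in Fig. 4 (10000 packets per test); a small sketch that reproduces them:

SENT = 10_000
counts = {  # test: (lost packets, retransmissions), read off Fig. 4
    "3m omni": (13, 109), "5m omni": (19, 242), "5m omni door": (32, 304),
    "10m omni": (85, 1089), "10m omni door": (108, 1323),
    "10m omni 2 doors": (123, 1893),
    "3m yagi": (0, 1), "5m yagi": (0, 9), "5m yagi door": (0, 12),
    "10m yagi": (0, 75), "10m yagi door": (0, 89), "10m yagi 2 doors": (0, 154),
}
for test, (lost, retries) in counts.items():
    loss_rate = 100 * lost / SENT          # e.g. 123/10000 -> 1.23 %
    retrans_rate = 100 * retries / SENT    # e.g. 1893/10000 -> 18.93 %
    print(f"{test:18s} loss {loss_rate:5.2f}%  retrans {retrans_rate:6.2f}%")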
All the aforementioned tests were repeated with the directional antenna installed on the base. Figure 4 shows the results of all tests for 10000 packets sent (tests were repeated multiple times; the reported values are the averages). The results were as expected, with the worst case being the one with the largest distance and the most obstacles, for both antennas. Table 1 shows the loss rates for the several tests that were carried out and the comparison of the initial to the new setup.

Fig. 4. Results for 10000 packets sent [bar chart of lost packets and retransmissions per test configuration]

Table 1. Loss and Retransmission rate values for the in-house patient case tests

Test                            Loss Rate   Retrans. rate
3m omni                         0.13%       1.09%
5m omni                         0.19%       2.42%
5m omni door                    0.32%       3.04%
10m omni                        0.85%       10.89%
10m omni door                   1.08%       13.23%
10m omni 2 doors                1.23%       18.93%
3m yagi                         0           0.01%
5m yagi                         0           0.09%
5m yagi door                    0           0.12%
10m yagi                        0           0.75%
10m yagi door                   0           0.89%
10m yagi 2 doors                0           1.54%
Omni – moving patient           20.15%      193.91%
Directional – moving patient    4.68%       63.12%

Although the results show that the directional antenna offers a lower loss rate and retransmission rate for a stationary patient, in a real-life scenario the patient is not going to be stationary. We needed to see how the rates would change if the patient was moving. So, we had the base receiving packets while the patient was moving randomly and constantly within a range of 2 to 10 meters from the base (see the last two rows of Table 1). All the tests showed that the directional antennas offer lower loss and retransmission rates, with minimal additional cost and with no extra changes needed in the infrastructure of the system. The positions of the repeaters and the base must always be considered for each house independently, since avoiding obstacles is essential to achieve minimal loss rates, and obstacles differ in every house.
927
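The rates in Table 1 are straightforward ratios over each 10000-packet run. As an illustration only (the paper does not publish its analysis tooling, and the counter names below are hypothetical), such rates could be derived from the gateway's packet counters as follows:

```python
# Illustrative only: computes the two metrics reported in Table 1.
# 'sent', 'received' and 'retransmissions' are hypothetical counters
# that a gateway log could provide; the paper's actual tooling is not shown.

def link_quality(sent: int, received: int, retransmissions: int):
    """Return (loss_rate, retransmission_rate) as percentages."""
    loss_rate = 100.0 * (sent - received) / sent
    # More than one retransmission per packet is possible, so this rate
    # can exceed 100% (cf. the omni moving-patient case: 193.91%).
    retrans_rate = 100.0 * retransmissions / sent
    return loss_rate, retrans_rate

# Example: a run resembling the "10m omni" row of Table 1.
loss, retrans = link_quality(sent=10000, received=9915, retransmissions=1089)
print(f"loss {loss:.2f}%  retransmission {retrans:.2f}%")  # loss 0.85%  retransmission 10.89%
```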
Although the results show that the directional antenna offers lower loss and retransmission rates for a stationary patient, in a real-life scenario the patient is not going to be stationary, so we needed to see how the rates would change with a moving patient. The base received packets while the patient moved randomly and constantly within a range of 2 to 10 meters from the base (see the last two rows of Table 1). All the tests showed that the directional antennas offer lower loss and retransmission rates, with minimal additional cost and with no extra changes needed in the infrastructure of the system. The positions of the repeaters and the base must always be considered for each house independently, since avoiding obstacles is essential to achieve minimal loss rates, and obstacles are different in every house.

IV. CONCLUSIONS
In conclusion, a prototype m-Health system for monitoring children with possible arrhythmias was developed, implemented, and tested. It is anticipated that the proposed system will help the pediatric cardiologist identify arrhythmias and thus prescribe a proper treatment. We have provided the architecture description and all the hardware tests for the system. Future work will cover complete live testing of the system with all actors involved (healthy subjects, patients, doctors, and parents), in order to ensure the flawless functionality and robustness of the system.
REFERENCES
1. Pattichis, C.S. et al. (2002) Wireless Telemedicine Systems: An Overview. IEEE Antennas & Propagation Magazine, Vol. 44(2), pp. 143-153.
2. Nugent, C. et al. (2006) ECG Telecare: Past, Present and Future. In: M-Health: Emerging Mobile Health Systems, ed. by R. Istepanian, S. Laxminarayan, C.S. Pattichis, pp. 375-388, Springer.
3. Moreira, F.C. et al. (2006) Noncompaction of the left ventricle: a new cardiomyopathy is presented to the clinician. Sao Paulo Med. J., Vol. 124(1), pp. 31-35.
4. Stollberger, C. et al. (2004) Left ventricular hypertrabeculation/noncompaction. J. Am. Soc. Echocardiogr., Vol. 17(1), pp. 91-100.
5. Kyriacou, E. et al. (2003) Multi-purpose HealthCare Telemedicine Systems with mobile communication link support. BioMed. Engin. OnLine, Vol. 2(7), at http://www.biomedical-engineering-online.com
6. Kyriacou, E., Pattichis, C.S. et al. (2007) An m-Health Monitoring System for Children with Suspected Arrhythmias. 29th IEEE EMBS Conf., pp. 1794-1797, Lyon, France.
7. Kyriacou, E. et al. (2009) Integrated Platform for Continuous Monitoring of Children with Suspected Cardiac Arrhythmias. ITAB 09, Larnaca, Cyprus, November.
8. Fensli, R. et al. (2005) A Wearable ECG-recording System for Continuous Arrhythmia Monitoring in a Wireless Tele-Home-Care Situation. 18th IEEE Symp. on Comp.-Based Med. Sys.
9. Shnayder, V. et al. (2005) Sensor Networks for Medical Care. Harvard University Technical Report TR-08-05.
10. Proulux, J. et al. (2006) Development and Evaluation of a Bluetooth EKG Monitoring Sensor. 19th IEEE Symp. on Comp.-Based Med. Sys.
11. Bobbie, P.O. et al. (2006) Telemedicine: A Mote-Based Data Acquisition System for Real Time Health Monitoring. Proc. Telehealth 512, pp. 50-52.
12. Crossbow Technology: Wireless Sensor Networks at http://www.xbow.com
13. Open Source Arrhythmia detection software at http://www.eplimited.com/
Author: Efthyvoulos Kyriacou
Institute: Frederick University Cyprus
Street: 18 Mariou Agathaggelou Str, Ag. Georgios Havouzas
City: 3080 Lemesos
Country: Cyprus
Email: [email protected]
Use of Guidelines and Decision Support Systems within EHR Applications in Family Practice – Croatian Experience
D. Kralj1, S. Tonković2, and M. Končar3
1 Ministry of Interior/PD Karlovac, Karlovac, Croatia
2 Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
3 ORACLE Croatia, Zagreb, Croatia
Abstract— The aim of this survey was to evaluate the state of implementation of guidelines and decision support systems within the EHR applications in Croatian family practice. The survey was conducted using electronic and paper-based questionnaires, which were formed in accordance with European best practice. The obtained survey results show a number of improvement areas, which relate both to the existing applications and to doctors' knowledge about the qualities and possibilities of guideline-based decision support systems. What we see as the foundation in this process is the implementation of open standards such as HL7 in combination with open source technologies, which could significantly improve the overall situation. Continuous education of doctors and health authorities about the possibilities and needs for the use of those systems is essential in the adoption process.
Keywords— guidelines, decision support systems, electronic health records, family practice, HL7 and open source.
I. INTRODUCTION

The Croatian primary health care (PHC) information system (IS), launched in late November 2007, currently includes the central part that contains all the essential services and the system registries, the information system of the Croatian Institute for Health Insurance (CIHI), the information system of the Croatian Institute for Public Health (CIPH), and family doctors' (FD) professional solutions [1, 2]. Due to CIHI's implementation policy, Croatia achieved almost 100% use of computers and electronic health record (EHR) applications in 2356 FD offices [3, 7]. In the following stages of development the system will also encompass pharmacies, laboratories, integration with hospital information systems, other PHC practices and patient access as separate but integrated components. The integration of all these system parts is enabled by the use of the HL7v3 and HRN ENV 13606 standards (the latter is the Croatian translated version of the CEN/TC ENV 13606 standard) [1]. Establishment of a central EHR repository, which will be synchronized with the EHR applications in FD offices, is planned. In this way the strategy aims to ensure secondary uses of the data collected in FD offices and efficient health system management. Such system organization sets appropriate conditions on the completeness and accuracy of the EHRs collected in FD offices. By using working guidelines and guideline-based decision support systems (DSS), improvement and evaluation of the doctors' work, as well as the completeness and accuracy of the EHRs, can be assured. After more than two years of running and continuous improvement of the EHR applications, we were compelled to evaluate the acceptance and adoption of working guidelines and DSS implementations within these applications, with the idea of trying to measure the benefits that FD offices experience from system introduction. It is important to emphasize that our choice of survey method is based on European and other good practices in the world, which enables us to efficiently compare the outcomes and suggest further steps.

II. METHODS

A. Conducting of Survey

The survey was conducted in late December 2009. The questionnaire consisted of questions about: (a) general information about the doctor and office (gender, age, experience, type and autonomy of the office, the EHR application); (b) use of working guidelines (clinical, pharmacological, administrative) in the EHR application; (c) existence of visual indicators of the doctors' work quality; and (d) existence of DSSs within the EHR application (diagnosis, prescriptions). It was made in electronic PDF/FDF form with the ability to automatically return to the sender via e-mail, and in classical paper form. The questionnaires in electronic form were offered via dedicated mailing lists to approximately 1100 FDs, while about 70 questionnaires were distributed in paper form at professional meetings and collected on the spot or received by post. Random sample selection depended on the FDs' free will to fill in the questionnaire. A total of 106 complete and correctly filled questionnaire forms were collected (85 in electronic and 21 in paper form).

B. Papers and WEB Contents Analysis

The analysis of available papers and WEB contents was conducted in order to gain insight into the history of
development, the current stage of development and the availability of working guidelines in health care, as well as their impact on the development of DSSs. The obtained survey results were compared with the relevant European statistics.

C. Insight into EHR Applications Possibilities

Croatian FDs can choose among 11 applications offered on the market [2]. The applications' properties are determined by their manufacturers, not by recommendations of the MHSW and other subjects. Because of that, we conducted a short insight into the real possibilities of a few EHR applications which, based on the survey results, were shown to be the most common in Croatian FD offices.
III. RESULTS

By conducting the survey, 106 complete and correctly filled questionnaire forms were collected. Table 1 shows the sample characteristics.

Table 1 Sample characteristics

Age: median 49 years (interquartile range 44-51)
Years of working: median 23 (interquartile range 18-26)
Gender: male 23.6%, female 76.4%
Specialization: yes 66%, no 34%
FD office autonomy: health centre 18.9%, under lease 69.8%, private 11.3%
FD office type: urban 64.1%, rural 32.1%, insular 3.8%

By comparison with data from the well-known official Croatian health statistical publications [5] and data known from some previously analyzed works [3, 4], it can be concluded that the sample is representative enough to draw conclusions from the study. The results obtained in the effective part of the questionnaire are shown in Table 2.

Table 2 Survey results

Item | % | Satisfaction (0-1)
Inbuilt working guidelines | 41.5 | ---
Inbuilt visual working quality indicators | 51.9 | ---
Inbuilt DSS for diagnosis | 8.5 | 0.78
Use of external DSS for diagnosis | 9.4 | ---
Inbuilt DSS for prescribing | 24.5 | 0.71
Use of any kind of DSS | 25.5 | ---

Examining the survey forms, we noted that different users of the same applications categorized their features in different ways. We interpret this again as a reflection of the early stage of the overall system implementation – i.e. we foresee several years of active usage before some best practices start to be dominant and pervasive. Furthermore, in examining the features of the few EHR applications which are, according to the survey results, the most common in Croatian FD offices, we found mainly "lighter" forms of user helping features. We classify them as follows.

A. Clinical Guidelines
Some applications have built in the clinical guidelines recommended by the Ministry of Health and Social Welfare (MHSW), presented in HTML format. In addition to these internal facilities, the FD applications offer external links to Web addresses proven and recommended by clinical experts:
• http://iskra.bfm.hr/hrv/Guidlines.aspx [13]
• http://www.cardionet.hr/cardionetHeart/casopis/SmjerniceSadrzaj.asp [15]
• http://www.plivamed.net [16]
All of the above sources of clinical guidelines are well systematized, but they are written in the form of free text and are not encoded. The content of these guidelines does not cover all of the domain specialties. Access to these contents takes too much of the FDs' time.

B. Pharmacological Guidelines

Similarly to the case of clinical guidelines, in addition to the built-in pharmacological guidelines in HTML format recommended by the MHSW, there are also external links:
• http://www.plivamed.net [16]
• http://www.tg.com.au [17]
The time required to access and the manner of access to the pharmacological guidelines are exactly the same as in the case of the clinical guidelines.

C. Visual Working Quality Indicators

As is evident from Table 2, 51.9% of FDs use EHR applications that contain some visual indicators or gauges for assessment of the FDs' working quality according to the CIHI administrative guidelines, which are implicitly implemented in the EHR applications. For now, only three elements are followed in this way:
• financial index for diagnostic-therapeutic procedures
• financial index for drug prescribing
• the rate of sick leave
Based on these three elements, CIHI monitors the work of FDs, regardless of their visualization in the application [2].

D. DSS for Diagnosis

The considered applications have the following diagnostic helping features:
• monitoring and signaling of chronic diseases (~73% of used applications)
• monitoring and signaling of allergies (~75% of used applications)
• monitoring and presentation of the patient's earlier diagnoses during subsequent diagnosis (~50% of used applications)
We can see that these features are, in principle, very simple but very useful helping elements, especially the monitoring and presentation of earlier diagnoses.

E. External DSS for Diagnosis Processes

Considering the externally available diagnosis support systems, Croatian FDs in most cases use an advanced system for diabetes monitoring named CroDiab NET [14], developed at the University Clinic "Vuk Vrhovec" in Zagreb. The system is based on the world-known guidelines for diabetes and the ICD-10 coding system. It allows managing of the disease registry, automatic generation of clinical discharge summaries (only for clinic use) and automatic creation of medical histories for each patient, based on data collected in the registry. The system can operate as an autonomous application on the FD's computer or as a web application. FDs can enter patients' data into the central registry, get printed clinical discharge summaries from the clinic and print out ambulatory medical histories from the system.

F. DSS for Prescribing

These systems contain a slightly higher number of useful elements than the previously mentioned helping systems. Most commonly used are automated therapy prescription for chronic diseases, control of the prescribed dose, control of the prescribed medication according to the ICD-10 diagnosis, and help for FDs in choosing drugs from CIHI's list. The systems are based on implicit administrative and professional guidelines and in 24.5% of cases are built into the Croatian EHR applications. Monitoring the interactions between certain medications and the patient's allergic reactions to certain drugs would further increase the quality of these systems.
IV. DISCUSSION

The authors of the pilot study "Pilot on e-Health Indicators" (Empirica, 2007) [6], in their analysis of the availability of DSS in European countries, indicate that this term encompasses a wide range of different applications and can denote different things depending on the understanding of the responding FD. This fact further suggests that the interpretation of survey results should be approached very carefully. The modified chart in Figure 1 was taken from the study [6] and shows the availability of DSS in European FD offices alongside the results of our survey.
Fig. 1 Availability of DSS in EU and Croatia (Empirica, 2007) [6]

Table 1 shows that the population of Croatian family/general practitioners is generally middle-aged (the median is 49 years). Doctors employed by health centres (18.9%; not self-employed practitioners with a CIHI contract) have been using computer-based EHR systems for only two years, and their understanding of the DSS is significantly different from the actual definition. Deeper analysis shows that in the EU, DSS for diagnosis make up 59% compared to 32% for prescribing support [6], while according to our results Croatia currently measures support for diagnosis in 9.4% of cases, compared to 24.5% for prescribing support. This is probably a consequence of CIHI's strict drug prescribing policy, especially in the case of chronic diseases [2]. However, if we compare the available helping elements in Croatian EHR applications with the Gartner generation model [8], we see that the applications currently match the second-generation model (the Documentor) from the 90s, while the model for 2009 predicts the emergence of the advanced fourth generation (the Colleague), which suggests large room for improvement. In the past fifteen years a wide range of research projects was conducted under the European research and development programme AIM (Advanced Informatics in Medicine), with subjects of the research including, for instance, problems of definition and implementation of guidelines in
European health care [9], implementation of guidelines in the quality assurance of physicians' work in PHC in the Netherlands [10], and knowledge engineering for drug prescribing guidelines in the UK [11]. Based on the experience gained in these projects, in the last 5-10 years experts have developed some advanced PHC information systems. Thanks to the consistent support for the implementation of clinical and business guidelines, the British National Health Service (NHS) successfully uses over 130 indicators for the evaluation of FDs' work quality and a pay-for-performance policy [12]. As previously mentioned, the Croatian health care system, in addition to domestic sources, usually refers to world-known sources of clinical guidelines such as the National Guideline Clearinghouse (USA) [18] and the NHS Clinical Knowledge Summaries (UK) [19], while for pharmacological guidelines it commonly refers to Therapeutic Guidelines Limited (Australia) [17]. The considered EHR applications also refer to these sources. To create DSSs on the basis of these guidelines, the guidelines need to be localized with respect to the functional model of FD offices and the current classifications of diseases and procedures, their presentation needs to be formalized, and the logic system needs to be adjusted for use within the existing EHR applications [22]. In the case of the EHR applications in Croatian FD offices, the solution should be sought in the further implementation of HL7 open standards [20, 21]. By applying a service-oriented communication architecture and a knowledge base consisting of rules written as Arden Syntax Medical Logic Modules (MLMs), it is possible to create DSSs oriented to the specific health problems of each patient within existing EHR applications. The emergence of development systems based on free and open source enabling technologies has significantly improved the progress and spread of guideline-based DSSs [22].
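As a minimal sketch of this approach (not code from any of the cited systems; the drug, threshold and field names are invented for illustration), a prescribing rule in the spirit of an Arden Syntax MLM could be represented inside an EHR application as follows:

```python
# Illustrative sketch of a guideline rule in the spirit of an Arden Syntax
# Medical Logic Module (MLM): data is read, a logic slot decides, and an
# action slot returns advice. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    diagnoses_icd10: set      # e.g. {"E11"} for type 2 diabetes
    allergies: set            # e.g. {"penicillin"}
    prescribed_drug: str
    daily_dose_mg: float

def prescribing_mlm(p: PatientRecord) -> list:
    """Logic slot: return a list of alerts for the prescribing FD."""
    alerts = []
    # Hypothetical dose ceiling used only for this sketch.
    if p.prescribed_drug == "metformin" and p.daily_dose_mg > 3000:
        alerts.append("Dose above the guideline maximum of 3000 mg/day.")
    if p.prescribed_drug == "metformin" and "E11" not in p.diagnoses_icd10:
        alerts.append("Drug not matched to an ICD-10 diagnosis on record.")
    if p.prescribed_drug in p.allergies:
        alerts.append("Patient has a recorded allergy to this drug.")
    return alerts

record = PatientRecord({"I10"}, set(), "metformin", 3400.0)
for msg in prescribing_mlm(record):
    print("ALERT:", msg)
```

In a service-oriented setup, such rules would live in the knowledge base and be invoked by the EHR application at prescribing time, rather than being hard-coded as in this sketch.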
V. CONCLUSIONS

Good properties of the computerization of Croatian FD offices, as part of primary health care, certainly are the application of the HL7v3 communication standard and the high degree of implementation of EHR applications. We see some encouraging first results in the application of working guidelines and guideline-based DSS, but also significant areas for further improvement. What we also find essential is the permanent education of doctors and health authorities, with the aim of understanding the capabilities of these systems and the needs for their use. Use of these systems ensures the completeness and accuracy of the data in the EHRs, and thus the safety of the patient in the health care system. The solution for efficient and financially cost-effective
implementation should be sought in further adoption of HL7 open standards and in products developed using free and open source enabling technologies.
REFERENCES
1. Končar M, Gvozdanović D (2006) Primary healthcare information system—The cornerstone for the next generation healthcare sector in Republic of Croatia. Int J Med Inform 75:306-14
2. CEZIH PZZ at http://www.cezih.hr
3. Kralj D, Tonković S (2009) Implementation of e-Health Concept in Primary Health Care - Croatian Experiences, ITI2009 Posters Abstracts, 31st Int. Conf. on Information Technology Interfaces, Cavtat, Croatia, 2009, pp 5-6
4. Kern J, Polašek O (2007) Information and Communication Technology in Family Practice in Croatia. European Journal for Biomedical Informatics 1:7-14
5. Baklaić Ž, Dečković-Vukres V, Kuzman M (2008) Croatian Health Service Yearbook 2007. Croatian National Institute of Public Health, Zagreb
6. Dobrev A, Haesner M, Hüsing T et al. (2008) Benchmarking ICT use among GPs in Europe – Final Report. Empirica, Bonn
7. Björnberg A, Garrofé B C, Lindblad S (2009) Euro Health Consumer Index 2009 – Report. Health Consumer Powerhouse, Brussels
8. Handler T J (2004) The 2004 Gartner Computer-Based Patient Record System Generation Model (Research R-21-6592), Gartner Inc.
9. Purves I (1995) Computerised Guidelines in Primary Health Care: Reflections and Implications, AIM Proc. vol. 16, Conf. on Health Telematics, Amsterdam, Netherlands, 1995, pp 57-74
10. Zanstra P E, Beckers W P A (1995) Computerized Assessment with Primary Care Guidelines in the Netherlands, AIM Proc. vol. 16, Conf. on Health Telematics, Amsterdam, Netherlands, 1995, pp 87-95
11. Walton R, Ilic Z (1995) Knowledge engineering for drug prescribing guidelines, AIM Proc. vol. 16, Conf. on Health Telematics, Amsterdam, Netherlands, 1995, pp 75-85
12. Teasdale S, Bates D, Kmetik K et al. (2007) Secondary uses of clinical data in primary care, Inform Prim Care 15:157-66
13. ISKRA at http://iskra.bfm.hr/hrv/Guidlines.aspx
14. CroDiabNET at http://www.idb.hr/crodiabnet.htm
15. CARDIOnet at http://www.cardionet.hr/cardionetHeart/casopis/SmjerniceSadrzaj.asp
16. PLIVAmed.net at http://www.plivamed.net/
17. Therapeutic Guidelines Limited at http://www.tg.com.au
18. National Guideline Clearinghouse at http://www.guideline.gov
19. NHS Clinical Knowledge Summaries at http://www.cks.nhs.uk
20. HL7 Inc. at http://www.hl7.org
21. OpenClinical at http://www.openclinical.org
22. Leong T Y, Kaiser K, Miksch S (2007) Free and Open Source Technologies for Patient-Centric, Guideline-Based Clinical Decision Support: A Survey. Yearb Med Inform 2007:74-86

Corresponding author:
Author: Damir Kralj
Institute: Ministry of Interior, PD Karlovac
Street: 6 Trg hrvatskih redarstvenika
City: Karlovac
Country: Croatia
Email: [email protected]
A New Concept of the Integrated Care Service for Unstable Diabetic Patients
P. Ładyżyński1, P. Foltyński1, J.M. Wójcicki1, K. Migalska-Musiał1, M. Molik1, J. Krzymień2, G. Rosiński2, G. Opolski3, K. Czajkowski4, M. Tracz2, and W. Karnafel2
1 Nałęcz Institute of Biocybernetics and Biomedical Engineering PAS, Warsaw, Poland
2 Department and Clinic of Gastroenterology and Metabolic Diseases WMU, Warsaw, Poland
3 Department and Clinic of Cardiology WMU, Warsaw, Poland
4 II. Department and Clinic of Obstetrics and Gynecology WMU, Warsaw, Poland
Abstract— Diabetes is considered one of the most serious, challenging and expensive health problems in the world. Diabetes causes a number of late complications affecting mainly the vascular and nervous systems and increasing the risk of cardiovascular disease, cerebrovascular disease, kidney failure, blindness, lower limb amputation, impotence, etc. Currently diabetes is an incurable life-long disease that is treated under out-clinic conditions, which requires close cooperation between the patient and a team of healthcare providers. In the Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, several telematic support systems have been designed, developed and clinically implemented during the last 10 years, in particular: a system for monitoring the treatment of pregnant diabetic women (TeleDiaPreT), a system for monitoring newly diagnosed diabetic patients (TeleMed), a system for the screening and monitoring of diabetic retinopathy (DRWeb) and a system for monitoring the treatment of the diabetic foot syndrome (TeleDiaFoS). Based on the gained experience, the idea of designing a Model Center of Diabetes Treatment for unstable diabetic patients was born. In the authors' opinion, a model center for diabetes treatment using modern ICT infrastructure would make it possible to provide original, integrated, high-quality and low-cost health care services for the most challenging group of diabetic patients.
Keywords— Diabetes, Telemedicine, Telecare, Model Center.
I. INTRODUCTION

A. Diabetes

Diabetes is considered one of the most serious, challenging and expensive health problems in the world. The number of diabetic patients has exceeded 200 million and is growing very fast. Diabetes causes a number of late complications affecting mainly the vascular and nervous systems and increasing the risk of cardiovascular disease, cerebrovascular disease, kidney failure, blindness, lower limb amputation, impotence, etc. Currently diabetes is an incurable life-long disease that is treated under out-clinic conditions, which requires close
cooperation between the patient and a team of healthcare providers [1].

B. Home and Mobile Telecare of Diabetic Patients

Over the last 10 years, several distinct systems making extensive use of information and communication technologies (ICT) have been designed and developed in IBBE PAS, aimed at supporting the home telecare of diabetes and some of its complications. Most of these systems have been implemented and tested in the Department and Clinic of Gastroenterology and Metabolic Diseases, Warsaw Medical University, during clinical trials on several groups of type 1 or type 2 diabetic patients. The results of these trials demonstrated that telematic support of diabetes treatment is feasible and acceptable to the patients and the care providers, and that it leads to increased patients' self-confidence and sense of safety and to improved treatment outcomes. All these telecare systems were developed as stand-alone applications focused on specific problems and/or groups of patients, e.g. pregnant diabetic women (TeleDiaPreT system) [2, 3], newly diagnosed diabetic patients (TeleMed system) [4, 5], screening and monitoring of diabetic retinopathy (DRWeb) [6] and patients with diabetic foot syndrome (TeleDiaFoS system) [7, 8]. The aim of this work is to develop an integrated ICT environment for a model center of diabetes treatment.
II. MODEL CENTER FOR DIABETES TREATMENT

Currently, all the systems developed in IBBE PAS have been integrated and new subsystems have been designed to form an integrated ICT environment for a model center of diabetes treatment. This center consists of a centralized patient registration module and a set of specialized modules related to: the patient's education; treatment of difficult diabetes (i.e. short-term applications of home/mobile telecare with semi-continuous connection of the patient with the care provider); treatment of diabetes before conception and during pregnancy (i.e. long-term integration of obstetrical
and diabetological care); monitoring and treatment of cardiovascular complications in diabetes (i.e. integration of cardiological and diabetological care); monitoring and treatment of the diabetic foot syndrome (i.e. control of the wound healing process connected with diabetes treatment); and screening for and monitoring of diabetic retinopathy (i.e. systematic assessment of the patient's retina with the application of digital fundus photography). Figure 1 shows the scheme of the Model Center.

Fig. 1 Scheme of the Model Center for the Diabetes Treatment

The system allows for the remote monitoring of the following exemplary parameters: blood glucose concentration, arterial blood pressure and heart rate, ECG, body mass, intake of carbohydrates, duration and intensity of physical activity, images of foot ulcers, etc. Some of these parameters are monitored using specialized devices, which either have been available on the market (e.g. the blood glucose meter) or have been custom made for the system (e.g. the foot scanning patient's module). Other quantities are estimated by the patient and are input manually into an electronic logbook based on a mobile phone. All these data are transmitted to the central server, where they are stored in an SQL client-server database. The database can be accessed by both groups of users, i.e. the health care providers and the patients. Feedback is provided to the patients during phone teleconsultations.
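A minimal sketch of the data path just described, assuming a simple relational layout: the actual IBBE PAS schema is not given in the paper, the table and column names below are hypothetical, and SQLite stands in for the unspecified SQL client-server database.

```python
# Illustrative sketch of the central-server storage step. Table and
# column names are hypothetical; SQLite replaces the real database.

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("model_center.db")
conn.execute("""CREATE TABLE IF NOT EXISTS measurements (
                  patient_id  INTEGER,
                  parameter   TEXT,   -- e.g. 'blood_glucose', 'body_mass'
                  value       REAL,
                  unit        TEXT,
                  source      TEXT,   -- 'device' or 'manual_logbook'
                  recorded_at TEXT)""")

def store_measurement(patient_id, parameter, value, unit, source):
    """Store one reading sent by a device or entered in the phone logbook."""
    conn.execute("INSERT INTO measurements VALUES (?, ?, ?, ?, ?, ?)",
                 (patient_id, parameter, value, unit, source,
                  datetime.now(timezone.utc).isoformat()))
    conn.commit()

# A glucose reading sent by a meter and a manually logged carbohydrate intake.
store_measurement(42, "blood_glucose", 6.8, "mmol/l", "device")
store_measurement(42, "carbohydrate_intake", 45.0, "g", "manual_logbook")
```

Both user groups (care providers and patients) would then query this store through their respective front ends, with feedback given during the phone teleconsultations mentioned above.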
III. CONCLUSIONS

It is well known that during the treatment of any chronic disease a limited group of so-called difficult patients engages more attention and resources than the remaining majority of the patients. In the authors' opinion, a model center of diabetes treatment using modern ICT infrastructure would make it possible to provide integrated, high-quality and low-cost health care services for the most challenging groups of diabetic patients.
ACKNOWLEDGMENT

This work was funded in the framework of the research developmental projects program of the Ministry of Science and Higher Education for the years 2007-2010.
REFERENCES
1. Wójcicki JM, Ładyżyński P (2003) Toward the improvement of diabetes treatment: recent developments in technical support. Jap J Artif Organs 6:73-87
2. Ładyżyński P, Wojcicki JM (2007) Home telecare during intensive insulin treatment - metabolic control does not improve as much as expected. J Telemed Telecare 13:44-47
3. Wójcicki JM, Ładyżyński P, Krzymień J et al. (2001) What we can really expect from telemedicine in intensive diabetes treatment. Results from 3-years study on type 1 pregnant diabetic women. Diabetes Technol Therap 3:581-589
4. Ładyżyński P, Wójcicki JM, Krzymień J et al. (2003) TeleMed – the telematic system supporting intensive insulin treatment of the newly diagnosed type 1 diabetic patients. First clinical application. IEEE EMBS Proc., 25th Silver Anniversary International Conference of the IEEE EMBS, Cancun, Mexico, 2003, pp. 3657-3660
5. Ładyżyński P, Wójcicki JM, Krzymień J et al. (2006) Mobile telecare system for intensive insulin treatment and patient education. First applications for newly diagnosed type 1 diabetic patients. Int J Artif Organs 29:1074-1081
6. Ładyżyński P, Wójcicki JM, Chihara K (2007) Application of telemedicine technique in screening for diabetic retinopathy. Biocybernetics and Biomedical Engineering 27:253-263
7. Foltyński P, Ładyżyński P, Wójcicki JM et al. (2008) Diabetic foot syndrome. A modern approach of treatment. ISBME Proc., International Symposium on Biomed. Eng., Bangkok, Thailand, 2008, pp. 123-126
8. Ładyżyński P, Wójcicki JM, Foltyński P et al. (2009) Application of the home telecare system in the treatment of diabetic foot syndrome. IFMBE Proc., vol. 23, International Conference on Biomed. Eng., Singapore, 2008, pp. 1049-1052
Author: Piotr Ładyżyński
Institute: Nałęcz Institute of Biocybernetics and Biomedical Engineering PAS
Street: 4 Trojdena
City: 02-109 Warsaw
Country: Poland
Email: [email protected]
SHARE: A Meeting Point for the Promotion of Interoperability and Best Practices in eHealth Sector
M. Ortega-Portillo1, M.M. Fernandez-Rodriguez1, M.F. Cabrera-Umpierrez1, M.T. Arredondo1, and G. Carrozza2
1 Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid, Spain
2 SESM, 80014 Giugliano in Campania, Napoli, Italy
Abstract— Current health services show that in the medical area many devices for diagnosing, controlling and monitoring the human status exist. The healthcare industry is facing acute cost pressures based on proprietary technologies. The open source development model could enable enterprises to efficiently amortize development costs within an ecosystem of partners, customers and competitors, providing valuable service and support for freely available software. Solutions based on interoperability are considered a basic strategy to improve medical care. The SHARE project aims to create a knowledge framework for OSS applied to the eHealth sector, and to promote standardization in this sector by providing different tools and resources for increasing the usage of OSS.
Keywords— Open Source Software, eHealth, Interoperability, Benchmarking.
I. INTRODUCTION

In the health area, the incorporation of new technologies in the field of medicine is increasingly important. Within this trend it is important to distinguish between the different options medical staff have, in order to get as much benefit as possible. This market is growing due to the increasing challenges to healthcare systems, as well as new medical information and communication technologies (ICT). Besides, the acute cost pressure the healthcare industry is experiencing is related to traditional IT infrastructures, which are complex, rigid and costly. The healthcare market potential can be improved by creating solutions based on existing tools and platforms that promote interoperability in this sector [2]. In order to provide a common framework combining all possible contributors to this goal from scientific and technological developments within the European field, the SHARE project provides a common platform. Besides, this project establishes a virtual meeting point for institutions involved in OSS. The whole range of key stakeholders in this field is being involved, namely commercial organizations (SMEs, large firms, etc.), universities, research centres, the developers community, NGOs, end users, etc. In this sense, the SHARE project will contribute to transforming the eHealth research area into a dynamic, innovative and knowledge-driven field. There will be a growing number of opportunities for people with different areas of expertise to work together. Besides, enterprises, software developers and end-users will have the resources needed to share expertise in order to improve industry competitiveness.
II. METHODS

A. Benchmarking

Open Source Software (OSS) is emerging as a preferred alternative to proprietary solutions in many areas. The sector of OSS products and related research activities grows every year, but the different stakeholders still have difficulties in comparing products in terms of security, technical features and usability parameters, among others. The usage of open source software in the health field is a recent activity line that has experienced great growth in the last years. The trend in this field is not only the development of open source applications but also the use of existing ones to generate new projects. The development of open source applications attempts to benefit from the advantages OSS offers. In this context, a benchmarking process provides a method not only to compare tools but also to select the best one for a particular purpose. The benchmarking methodology used is QSOS (Qualification and Selection of Open Source Software). The general QSOS process is constituted by four independent steps: definition, evaluation, qualification and selection.
• In the definition phase, the context and families are described.
• The evaluation step selects the specific value we want to give to each criterion.
• The qualification step allows the selection of software already evaluated, to obtain the result or get a comparison.
• The final step of the methodology is the selection, where the results of the evaluation are obtained.
The benchmarking methodology consists of templates with several criteria classified into two groups: common criteria and specific criteria. Within the health field, a number of families were identified to classify the applications in the health sector. The families selected try to cover the whole health sector, not leaving out any application related to health, and range over many areas, from those related to the management of health organizations to those related to image processing. From a more technical point of view, families associated with the use of ICT have also been defined (Communication, Data Management). In total, eleven families have been identified in this benchmarking phase. Each family has several useful characteristics that, together with the general ones belonging to open source software, enable a complete benchmarking process. The organization of criteria into two groups takes into consideration the common attributes open source applications have (used initially to benchmark the applications as open source software itself) and the particular characteristics health applications have. The common criteria are provided by the QSOS methodology, categorized into four groups: intrinsic durability, industrialized solution, technical adaptability and strategy. Each one can in turn be constituted by several groups, generating a nested structure of criteria. These common criteria can be used to benchmark any open source tool or system, without considering a particular field of operation.
In order to specialize the benchmarking to the health sector, additional criteria were needed. These criteria are selected individually, carrying out a particularized study for each family. The specific criteria define those attributes that only applications belonging to a certain family have. The selection is based on the acknowledgment that the chosen factor gives practical information for comparing applications.

B. SHARE Web Space
Through a Web-based communication platform, the SHARE project aims at enhancing the collaboration among different target users by means of OSS resource cooperation and sharing. Several target users have been identified in order to adapt the platform to their needs. By means of the SHARE Web Space, sharing facilities aimed at promoting knowledge transfer are available. Several tools for knowledge gathering and improvement in the context of a framework can also be found, in order to support collaboration among users. Furthermore, procedures and tools for project management are accessible in the platform. Regarding the benchmarking process previously described, a Benchmarking tool has been developed within this platform. As a public resource, the benchmarking process can be used by non-registered users of the Web Space.
III. RESULTS

A. Benchmarking

The instrument of this open source benchmarking is the generation of XML templates. The development of these files is based on the criteria mentioned above, producing one XML template for each family identified; in our development eleven templates were created. All of them have the common criteria in first place, and the differences between them are set up through the specific criteria. The benchmarking methodology and templates are applied in three different knowledge areas: software, licenses and research activities. Although three benchmarking procedures are performed, with specific templates and criteria for each one, only the technological benchmarking is described in this paper. The benchmarking process is established as an iterative procedure: the more tools are evaluated, the more precise and complete the results that are achieved. Within this development, a total of 25 health tools have been evaluated within the SHARE project. Although the number of evaluated tools was fixed within the SHARE project, it establishes the starting point for the project's aim.
For the templates mentioned above, each criterion has three possible values: 0, 1 or 2. Each value has a unique meaning defining the attributes of the software. The attributes are the criteria already cited, and the values correspond to three situations: the criterion is not supported (0), partially supported (1) or completely supported (2).

B. SHARE Web Space

SHARE has developed a Web-based communication platform that enables OSS resource sharing between the user groups. Specifically, the functionalities offered through the SHARE Virtual Lab are:
• Benchmarking Suite
• Collaboration Lab
• Code Sharing
• Wiki
• Community
• Review and References
Focusing on the benchmarking process, the Benchmarking Suite provides the necessary tools to perform an OSS benchmarking or a software evaluation. The main functionality of the Benchmarking Suite is the benchmarking of evaluated tools provided by the platform. The objective of this tool is to give the user the possibility of comparing two or more applications according to several parameters. Based on QSOS, four steps constitute the tool, after which the results of the evaluation can be presented in several ways. The steps to accomplish an evaluation are described below:
1. Select the field of operation and the family. According to the SHARE project, three areas of interest are considered right now: near real-time and mission-critical applications; nomadic, multimedia, networked applications; and e-health applications. As stated above, several families have been identified for each sector, so the user selects not only the area but also the family.
2. Select the weights the user wants each criterion to have. The template associated with the combination of area and family lets the user decide on the specific value each criterion may have. In this step the user points out the characteristics of the software he is looking for. These values allow the benchmarking in the following steps, relating the weights the user wants for each criterion to the score each compared tool presents.
3. Pick the already evaluated tools to compare. The tools to compare with in the field under study appear in this step. The user has the option to pick just one tool or as many as the application provides.
4. Present the results of the benchmarking. In the last step there are several ways to present the information produced by the benchmarking tool, and the user can select the one that best suits his needs. Two ways of presenting the information are provided: numerical results and graphical results. The numerical results present the weights the user has selected for each criterion and the score each picked tool has; in this case an individual analysis is done, comparing each tool against the user's weights separately (a sketch of this weighted comparison follows the list below). In the graphical presentation the comparison is done all together: the scores are presented in a figure with as many axes as there are global criteria (criteria that may consist of other criteria) in the template. This representation can also be selected in two forms: general results or partial results. The general results present the data from the entire template, while the partial results only draw the group of global criteria selected by the user.
Nonetheless, the evaluated tools available are not always enough; target users may look for others that have not been evaluated. To offer this opportunity, a new functionality, the software evaluation, has been developed within the benchmarking tool. It provides two different tasks:
• Modification of already evaluated tools. Taking the already evaluated tools as a starting point, the user has the possibility of modifying them. Selecting the specific tool, the different criteria used for its evaluation can be changed.
• Insertion of new evaluated tools. Users themselves are responsible for proposing these new tools and evaluating them. The steps followed to carry out these evaluations are shown below.
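As a minimal sketch of the weighted comparison behind steps 2-4 (the criteria names and weights below are invented for illustration; the real templates hold the full nested QSOS criteria), each tool's 0/1/2 scores are combined with the user's weights to rank the candidates:

```python
# Illustrative QSOS-style weighted comparison. Criteria scores use the
# 0/1/2 scale described above; criteria names, weights and tool names
# are hypothetical.

user_weights = {"intrinsic durability": 3, "technical adaptability": 2,
                "DICOM support": 3, "documentation": 1}

evaluated_tools = {
    "tool_A": {"intrinsic durability": 2, "technical adaptability": 1,
               "DICOM support": 2, "documentation": 1},
    "tool_B": {"intrinsic durability": 1, "technical adaptability": 2,
               "DICOM support": 0, "documentation": 2},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine a tool's 0/1/2 criterion scores with the user's weights."""
    return sum(weights[c] * scores.get(c, 0) for c in weights)

# Rank the picked tools, highest weighted score first.
ranking = sorted(evaluated_tools.items(),
                 key=lambda kv: weighted_score(kv[1], user_weights),
                 reverse=True)
for name, scores in ranking:
    print(name, weighted_score(scores, user_weights))
```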
IV. CONCLUSIONS

Knowledge sharing and collaborative frameworks promote interoperability in several sectors. Virtual workspaces allow contribution and collaboration independently of time and space. By means of the different tools provided by this project, the merging of knowledge can find a base in the SHARE project. It promotes interoperability in terms of OSS usage, as well as an analysis of the already existing tools. The project enhances the information flow among different stakeholders, in order to create a heterogeneous and multidisciplinary field of knowledge.
ACKNOWLEDGMENT

We would like to thank the SHARE Project Consortium for their valuable contributions to the realization of this work. This project is partially funded by the European Commission.
REFERENCES
1. SHARE Project. Accessible in: http://www.share-project.eu/
2. Economic impact of FLOSS on innovation and competitiveness of the EU ICT sector. MERIT, 2006.
3. SHARE Project. Accessible in: http://www.share-project.eu/
4. SHARE Annex I – "Description of Work". ICT-2007.3.7 - 224170
5. SHARE Benchmarking tool. Accessible in: http://88.45.132.117:8080/joomla/index2.php?option=com_osbenchmark&view=categoryselection
6. CORDIS. http://cordis.europa.eu/search/
7. SHARE Project ICT – 2007.3.7 Deliverable D2.1 "Benchmarking methods and criteria", November 2008.
Long Term Evolution (LTE) technology in e-Health - a sample application
R. Jagusz1, J. Borkowski1, and K. Penkala2
1 European Communications Engineering ECE Poland Ltd, Szczecin, Poland
2 West Pomeranian University of Technology/Faculty of Electrical Engineering/Department of Systems, Signals and Electronics Engineering, Szczecin, Poland
Abstract— In the paper the architecture of the Long Term Evolution (LTE) network and an innovative access scheme for the cellular system are described. Requirements of biomedical applications that rely on wireless mobile systems are reviewed. An example of a biomedical application using LTE as a link path is described.
Keywords— Long Term Evolution (LTE), Mobile Emergency Center, Station Emergency Center.
I. INTRODUCTION
The variety of available solutions for mobile telecommunication is enormous, and cellular networks have a worldwide range of subscribers. The applicability of Orthogonal Frequency Division Multiple Access (OFDMA) radio technology to the cellular access network in Long Term Evolution (LTE) systems, together with the evolved core network, opens a new era in mobile telephony. Namely, high spectrum efficiency offers a good user experience, while the evolved system architecture enables operators to significantly minimize the CAPEX and OPEX of a network that is gradually becoming more data-dominated. New solutions enabled by LTE may also influence health telematics in the near future. The aim of this paper is to present an example of a biomedical application utilizing the LTE network as a link path between several system units. Furthermore, it contains an investigation of the data rates needed to handle the data exchange process. Finally, coverage estimation results obtained with a dedicated network planning tool are briefly presented.
II. BRIEF DESCRIPTION OF LTE
A. System access schemes

Cellular systems of the second (2G) and third (3G) generations used the following well-known multiple access schemes [1]: Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA) and Wideband Code Division Multiple Access (WCDMA). In OFDMA, multiplexing applies to independent signals that are a subset of one main signal: the signal is split into independent signals modulated by the data and then re-multiplexed onto one carrier [2]. OFDMA is one of the most spectrally efficient radio technologies, thanks to simultaneous transmission on multiple overlapping subcarriers. Although the technology is commonly used in the mature WiMAX standards, its first application to a cellular network is realized in the LTE (3GPP) standards. OFDMA gives a significant gain in capacity and keeps interference at a lower level in comparison to the existing (W)CDMA-based 3G networks. This access scheme is described in detail by the following points [2,3]:
- The subcarrier spacing used in LTE is 15 kHz; therefore, the symbol duration (Ts) is 66.67 μs.
- The Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) can use up to 2048 OFDMA subcarriers.
- Limitations on the subcarriers are needed as a guard band, which prevents interaction with other systems that could cause interference.
- A single LTE cell uses at least 72 and at most 1320 subcarriers.
- The dedicated subcarriers correspond to bandwidths of 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and up to 20 MHz.
- The number of subcarriers in a single cell is set by the operator during the radio network planning phase.
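These figures can be checked directly with the basic OFDM relations. The following small worked example is plain arithmetic on the numbers quoted above, not output from any LTE planning tool:

```python
# Checking the OFDMA numbers quoted above with the basic OFDM relations.

subcarrier_spacing_hz = 15_000                    # 15 kHz, as stated above
symbol_duration_s = 1 / subcarrier_spacing_hz     # Ts = 1 / subcarrier spacing
print(f"Symbol duration Ts = {symbol_duration_s * 1e6:.2f} us")   # 66.67 us

# Occupied bandwidth is (number of subcarriers) x (spacing); the channel
# bandwidth is larger because of the guard bands mentioned above.
for n_subcarriers, channel_mhz in [(72, 1.4), (1320, 20)]:
    occupied_mhz = n_subcarriers * subcarrier_spacing_hz / 1e6
    print(f"{n_subcarriers} subcarriers occupy {occupied_mhz:.2f} MHz "
          f"within a {channel_mhz} MHz channel")
```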
The realization of the uplink access scheme in LTE is based on Single Carrier Frequency Division Multiple Access (SC-FDMA), in order to reduce the Peak to Average Power Ratio (PAPR) of the signal. Applying OFDMA in the uplink would be complicated by the strict battery limitations of the LTE terminal: OFDM signals have a high peak-to-average ratio, need linear amplification, and are therefore not power efficient, which translates directly into a shorter battery life of the User Equipment (UE). That is the main reason for a different solution in the uplink scenario. SC-FDMA is a hybrid scheme that cleverly combines the low PAPR of single-carrier systems with the multipath resistance and flexible subcarrier frequency allocation of OFDMA; it can reduce the PAPR by 6-9 dB compared to a normal OFDMA scheme. Figure 1 shows a comparison of the multiple access schemes used in the downlink (DL) and uplink (UL) with QPSK modulation. LTE defines the limits of the uplink bandwidth: the smallest is 180 kHz and the largest is 20 MHz.

Fig. 1 Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) access schemes [2]

B. LTE system architecture

The improvements in the air interface, which include a new type of access scheme and higher-order modulation algorithms, indicated a need for improvements in the LTE system architecture (Figure 2). Through the improvements and
simplification in the core network for LTE – SAE (System Architecture Evolution), the evolved core is more scalable and more efficient for data-dominated traffic. The majority of the radio-related functionalities are managed within the eNodeB, as SAE is a Radio Network Controller (RNC)-less configuration. SAE is fully IP-based, which implies a requirement in LTE for voice-over-IP transmission – Voice over LTE Generic Access (VoLGA). Moreover, an essential evolution step is the dedication of network elements, together with the corresponding interfaces, to handle either control- or user-plane traffic. This approach allows faster communication between network units and smaller delays in all connection processes, such as feedback from the UE to the eNodeB. There are three main elements of the LTE-SAE network: the Enhanced NodeB (eNodeB), the Mobility Management Entity (MME) and the System Architecture Evolution Gateway (SAE GW) [1].

Fig. 2 LTE system architecture overview - based on [3]

III. SAMPLE APPLICATION
In the case of earthquakes, terrorist attacks or floods, public services need to have emergency systems prepared that can deal with the inevitable issues. Different cases require different solution methods, because every situation has an exceptional background and consequences. Nowadays, when advanced technology is available to the majority of the population, civil security systems have to be developed, and governments need to establish mobile, operable systems. The Long Term Evolution network offers new possibilities: a higher wireless throughput level, high mobility and security rules. The consequences of this achievement are simplicity and very fast data transfer. The idea of SEC-MEC (Station Emergency Center - Mobile Emergency Center) is simple: in places where the wired infrastructure is damaged by an earthquake, a bomb explosion or a flood, reliable wireless communication is crucial for information and logistics. The concept of the LTE application in a telematics system requires explanation. In the potential scenarios, medical parties need to establish temporary field hospitals or medical chambers. Management and logistics are highly important to reduce the latency between the initial diagnosis of the patients and further treatment. The SEC-MEC project is an idea of how the latest technology achievements may be utilized in a telematics system. In the MEC, a mobile device known as the UE (User Equipment) is the element responsible for data collection, communication with the SEC and patient position monitoring. The mobility of handheld devices makes transportation simple and convenient for doctors. In situations when ambulances cannot reach the wounded
941
Fig. 3 Mobile Emergency Center (MEC) – schematic diagram

Fig. 4 Station Emergency Center (SEC) – schematic diagram

The concept of the SEC is based on a station registry responsible for gathering data about the patients. The MEC cannot exist without a station centre responsible for data administration and user access. Drawing on existing databases, the SEC is an entity which not only collects data but also updates them, monitors patient position, and handles the logistics. The Hospital Information System (HIS) and the Electronic Patient Record (EPR) are proposed elements with possible usage in the MEC-SEC application. During the past few years the number of electronic information systems has increased tremendously, and secure usage of personal data by public services can simplify communication and most everyday activities. The SEC concept is basically a database divided into four main elements, described in Figure 4. Data collection from different health departments is not a simple process, although since governments introduced biometric passports some of these processes have been developed. Building central health care electronic systems responsible for data collection and updating the patient history would greatly simplify data processing.

Fig. 5 Communication between MEC and SEC
The authorization requirements are also taken into account, and the process is divided into a few steps concerning the MEC and SEC points. A paramedic who handles the UE needs to authorize his identity before the SEC will send information about the examined patient. To increase the convenience and simplicity of the authorization process, the paramedic scans his fingerprints and sends them to the SEC for identity confirmation. Another authorization step on the MEC side is patient identity confirmation; the method of this process is similar to the paramedic authorization. One data point assigned to the patient profile is his fingerprint. It is possible that, as a result of a bomb explosion, earthquake or fire, a patient has lost consciousness; in this case fingerprint identification and the information feedback from the SEC are crucial. The Station Emergency Center also handles most of the identification and authorization processes. At the beginning of the information gathering process, all data assigned to a certain patient need to be verified. The SEC also needs to be equipped with database elements responsible for gathering the following information in the paramedics' profiles:
- first name,
- last name,
- fingerprints,
- social security number,
- system access attempt information: date and time, GPS position during examination.
The above-mentioned points will prevent unauthorized usage of the system and can reduce the system failure risk factor.
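A minimal sketch of the authorization flow just described (purely illustrative: the registry fields, identifiers and the fingerprint matching test are assumptions, and real biometric matching is far more involved):

```python
# Illustrative sketch of the MEC-SEC authorization steps described above.
# Registry contents and the fingerprint 'match' test are hypothetical.

paramedic_registry = {"P-1001": {"name": "J. Kowalski",
                                 "fingerprint": "fp_paramedic_hash"}}
patient_registry = {"fp_patient_hash": {"ssn": "780512...",
                                        "history": ["hypertension"]}}

def authorize_paramedic(pid, fingerprint, gps, timestamp, log):
    """Step 1: verify the paramedic before the SEC releases patient data."""
    entry = paramedic_registry.get(pid)
    ok = entry is not None and entry["fingerprint"] == fingerprint
    # The SEC stores every access attempt: date/time and GPS position.
    log.append((pid, timestamp, gps, "granted" if ok else "denied"))
    return ok

def identify_patient(fingerprint):
    """Step 2: identify a possibly unconscious patient by fingerprint."""
    return patient_registry.get(fingerprint)   # None if no match

access_log = []
if authorize_paramedic("P-1001", "fp_paramedic_hash",
                       (60.17, 24.94), "2010-05-01T12:00Z", access_log):
    print(identify_patient("fp_patient_hash"))
```

Throughput requirements were also analyzed; Table 1 shows a comparison of this parameter for different functions. LTE progress in video streaming is foreseen [6].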
Table 1 Data capacity/rate estimations
Function Live Video Streaming High Resolution Pictures Vital Signals Ultrasound * ** ***
Data rate estimation
Current systems throughput
LTE systems throughput
0.192÷16 Mbps
0,384÷10 Mbps**
5÷75 Mbps*
2,5÷10 Mbpp***
0,384÷10 Mbps**
5÷75 Mbps*
0,128÷2 Mbps
0,328÷10 Mbps**
5÷75 Mbps*
0,4÷1 Mbps
0,384÷10 Mbps**
5÷75 Mbps*
The MEC-SEC application requires a certain coverage to make communication between the system entities possible. According to the assumed model, the coverage area was simulated in a dedicated LTE tool. The simulations were run for the Helsinki area.
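The dedicated simulation tool is not described further in the paper, but the frequency-band argument can be illustrated with the textbook free-space path-loss relation (distance in km, frequency in MHz). The sketch below is only a first-order check, not a substitute for the authors' coverage simulation.

```python
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    # Free-space path loss: FSPL = 20*log10(d) + 20*log10(f) + 32.44 dB
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

for f_mhz in (900.0, 2100.0):
    print(f"{f_mhz:6.0f} MHz: {fspl_db(1.0, f_mhz):.1f} dB at 1 km")

# 2100 MHz loses about 7.4 dB more than 900 MHz over the same path, but
# UMTS already operates at 2100 MHz, so an LTE deployment in that band
# inherits essentially the same cell footprint.
```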
IV. CONCLUSIONS
After considering the wireless system responsible for data gathering and processing, the Mobile Emergency Centre – Station Emergency Centre application was defined. This application needs to be perceived as a complex global system that cooperates with several units responsible for information processing and storing. The possible utilization of MEC-SEC is highly needed by medical services due to a deficit of similar systems. The highest threat to a wounded person's life is the latency between initial recognition and efficient treatment. The MEC-SEC concept helps medical services to access the patient history and the closest medical centre with the particular departments which can start efficient treatment immediately. The LTE network is the most suitable way to deliver this type of service to paramedics and patients. Also, the possible costs of the MEC-SEC implementation process are considerably lower compared to other communication methods. The devices needed to handle this venture will be accessible to everyone on the open market. This concept will only require software expenditure from health care units to build a virtual system responsible for data gathering, processing and security mechanisms. The entire infrastructure will be built by operators all over the globe. This is the main reason for choosing the LTE network as a link path for applications such as MEC-SEC. Simulations conducted for LTE coverage show the possible benefits of utilizing the 2100 MHz frequency band for this network. The signal strength is similar to UMTS, so the fitting-in process will possibly have a marginal impact on existing systems. Visible benefits are brought by MIMO antenna solutions, which can improve the SNR in NLOS (Non-Line-of-Sight) scenarios.
REFERENCES

1. Holma H, Toskala A (2009) LTE for UMTS – OFDMA and SC-FDMA Based Radio Access. John Wiley & Sons Limited
2. Agilent Technologies (2008) 3GPP Long Term Evolution: System Overview, Product Development, and Test Challenges. At http://cp.literature.agilent.com/litweb/pdf/5989-8139EN.pdf
3. 3GPP (2009) UE Categories and Major Parameters of LTE Release 8. At http://3gpp.org/LTE
4. Alcatel-Lucent (2009) Introduction to Evolved Packet Core – Strategic White Paper. Alcatel-Lucent
5. Richard DW, Zar D (2009) Ultrasound imaging now possible with a smartphone. At http://www.PHYSorg.com
6. Motorola (2009) Opportunity and impact of video on LTE Network. White Paper, Motorola, Inc.

Corresponding authors:

Author: Krzysztof Penkala
Institute: Department of Systems, Signals and Electronics Engineering, Faculty of Electrical Engineering, West Pomeranian University of Technology
Street: Sikorskiego 37
City: 70-313 Szczecin
Country: Poland
Email: [email protected]

Author: Rafał Jagusz
Company: European Communications Engineering ECE Poland Ltd Szczecin
Street: Basztowa 4/3
City: 74-500 Chojna
Country: Poland
Email: [email protected]
EMITEL e-Encyclopaedia of Medical Physics – Project Development and Future

S. Tabakov1, P. Smith2, F. Milano3, S.-E. Strand4, C. Lewis5, and M. Stoeva6

1 King's College London, UK; 2 International Organization for Medical Physics (IOMP); 3 University of Florence, Italy; 4 University of Lund, Sweden; 5 King's College Hospital, UK; 6 AM Studio Plovdiv, Bulgaria
Abstract— The e-Encyclopaedia for Medical Physics with Multilingual Dictionary EMITEL was launched 6 months ago (www.emitel2.eu). This international project attracted more than 250 specialists from 35 countries and established itself as the largest international project in the profession. The paper describes the main phases of EMITEL, its current use and the planned future development.

Keywords— Education, Training, Encyclopedia.
I. INTRODUCTION

The EU pilot project EMITEL (European Medical Imaging Technology e-Encyclopaedia for Lifelong Learning) [1], funded with the help of the EU Leonardo da Vinci programme, developed the first e-Encyclopaedia in the profession. This is an original e-learning tool, which will be used for lifelong learning by a wide spectrum of specialists in Medical Physics and Engineering. The e-Encyclopaedia attracted a large international Network of more than 250 specialists from 35 countries (UK, Sweden, Italy, Bulgaria, Austria, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Latvia, Lithuania, Poland, Portugal, Romania, Slovenia, Spain; Australia, Bangladesh, Canada, China, Croatia, Iran, Libya, Malaysia, Morocco, Russia, Thailand, Turkey, USA). It currently also includes specialists from Japan and Korea. The EMITEL web site www.emitel2.eu currently has more than 4000 unique users per month, and the downloads are close to half a million. The paper describes the main features of EMITEL (both Encyclopaedia and Dictionary) and the plans for its future development.
II. EMITEL PROJECT INITIAL PARTNERS AND PHASES

Although the idea for EMITEL appeared around 2001, the project was developed during 2005, started in 2006 and was completed by the end of 2009. The main project phases were:
- structuring the e-Encyclopaedia and Dictionary;
- developing the e-Encyclopaedia content;
- translating the main terms into many languages;
- developing the software and web shell;
- internal and external refereeing;
- editing the e-Encyclopaedia articles;
- dissemination events (incl. an International Conference).

At this stage the project partnership included the core of the institutional partners from the previous projects (EMERALD and EMIT [2]) – King's College London (Contractor) and King's College Hospital, University of Lund and Lund University Hospital, University of Florence, AM Studio Plovdiv and the International Organization for Medical Physics (IOMP). This was the first EU project of IOMP as an institution and paved the way for its further international projects and funding. EMITEL was funded by the EU programme Leonardo da Vinci and by the project partners. The EMITEL Consortium made an agreement to continue functioning after the end of the project and to provide constant support and updates of the materials. Medical Imaging Technology was specially underlined in the name of the project, as this area expands rapidly. Together with X-ray Diagnostic Radiology, Nuclear Medicine, Magnetic Resonance Imaging and Ultrasound Imaging, the areas of Radiotherapy and Radiation Protection were also included, plus a number of General terms associated with Medical Physics. Special care was taken in EMITEL to cover the aspects of Medical Engineering related to Imaging.
III. EMITEL ENCYCLOPAEDIA AND DICTIONARY

EMITEL used as a background a specially developed database of specific terms (4000+) covering the main areas of the e-Encyclopaedia (listed above). These terms (of 1 to 3 or more words) were translated into 25 languages by specially formed Working Groups. Thus the Dictionary included: English, Swedish, Italian, French, German, Portuguese, Spanish, Bulgarian, Czech, Estonian, Greek, Hungarian, Latvian, Lithuanian, Polish, Romanian, Russian, Slovenian, Bengali, Chinese, Persian, Arabic, Malay, Thai, Turkish.
As the Dictionary uses synchronized tables of terms, cross-translation of terms between any two of the languages is possible through the original web site www.emitel2.eu. The Dictionary database is expandable, to allow the addition of new languages. The Dictionary was coordinated by S Tabakov and its software was made by AM Studio. The same software company developed the whole web database and the search engines for the e-Encyclopaedia. Each term from the Dictionary is covered by an explanatory article (entry) in English. The entries are aimed at MSc level and above. Their length varies on average from 50 to 500 words. The model of the Encyclopaedia was built around a larger number of specific entries, rather than a small number of multi-page articles. This model allows an easy and effective search. Some 3400 articles were developed, with an overall volume of 2100 A4 pages (font 10, single spacing). The articles are not internally hyperlinked, but include lists of related articles. Many of the EMITEL articles also include references, web links and further information. More than 2000 images, graphs, examples and other pieces of additional information were included in the articles to enhance the educational value of this reference material. The articles were grouped in 7 categories – the physics of: X-ray Diagnostic Radiology; Nuclear Medicine; Radiotherapy; Magnetic Resonance Imaging; Ultrasound Imaging; Radiation Protection; and General terms. Each entry included contributions from at least three specialists – author, referee and group coordinator. The web site was built with two search engines – one searching the lists of terms (in all languages) and another searching inside the text of the articles. The latter allows a significant increase of the potential of the e-Encyclopaedia, including searches for related terms, acronyms and synonyms. To use this facility the user has to select Search in Full Text and specify the category/area of the search (as described above). The EMITEL web site uses the ability of current Internet browsers to operate with all languages and combines the Dictionary and the Encyclopaedia. This way each translated term comes with a category/area-specific hyperlink displaying the corresponding article for this term.
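The synchronized term tables can be pictured as one table keyed by a language-independent term identifier, which is what makes translation between any two of the languages a pair of lookups instead of a dedicated dictionary per language pair. The sketch below uses invented example data and field names; the actual EMITEL database schema is not described in the paper.

```python
# Every language column shares the same term ID; each ID also links
# to the English explanatory article for that term.
terms = {
    101: {"en": "absorbed dose", "it": "dose assorbita", "pl": "dawka pochłonięta"},
    102: {"en": "ultrasound",    "it": "ultrasuoni",     "pl": "ultradźwięki"},
}
articles = {101: "article_absorbed_dose.html"}   # hypothetical entry link

def translate(term, src, dst):
    for term_id, row in terms.items():
        if row.get(src) == term:
            return row.get(dst), articles.get(term_id)
    return None, None

print(translate("dose assorbita", "it", "en"))
# -> ('absorbed dose', 'article_absorbed_dose.html')
```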
IV. EMITEL FUTURE DEVELOPMENT

As specified above, the EMITEL Consortium and Network intend to continue their activities related to the support and update of the Dictionary and Encyclopaedia. To allow this, an additional web site was developed to handle the updates of the large web database. This web Content Management System (CMS, also developed by AM Studio) allows not only on-line editing of the materials, but also the addition of new terms/entries and the inclusion of new languages. The CMS was tested rigorously and was found to work flawlessly. This way EMITEL will act as the professional wikipaedia of Medical Physics and Engineering, with the difference that only articles refereed and accepted by the Network will be uploaded. Thus the EMITEL Consortium and Network will have full editorial control over the online material. The Dictionary database is currently being expanded to include Croatian, Japanese, Korean and Finnish. These languages are expected to be included by mid-2010. Alongside the development of the digital content of EMITEL, a draft agreement has been made with a publishing company to allow a paper print of the Encyclopaedia (preferably by WC2012). It is expected that the content of EMITEL will additionally be printed on paper and commercialized.
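The editorial policy sketched above (only articles refereed and accepted by the Network go online) amounts to a small state machine. The following is a minimal sketch with assumed state names; the actual CMS workflow implemented by AM Studio is not documented in the paper.

```python
# Allowed transitions: an entry becomes publicly visible only after
# it has passed refereeing and been accepted.
ALLOWED = {
    "draft":     {"submitted"},
    "submitted": {"accepted", "rejected"},
    "accepted":  {"published"},
}

def advance(entry, new_state):
    if new_state not in ALLOWED.get(entry["state"], set()):
        raise ValueError(f"cannot go from {entry['state']} to {new_state}")
    return {**entry, "state": new_state}

entry = {"term": "absorbed dose", "state": "draft"}
for step in ("submitted", "accepted", "published"):
    entry = advance(entry, step)     # referee / coordinator actions
print(entry["state"])                # -> published
```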
V. ACKNOWLEDGMENT

EMITEL gratefully acknowledges the financial support from the EU Leonardo Programme, the partner institutions and its many contributors from the EMITEL Network. This paper is presented on behalf of the whole EMITEL Consortium (listed on the web site).
REFERENCES

1. EC project 162-504 EMITEL
2. Tabakov S, Roberts C, Jonsson B, Ljungberg M, Lewis C, Strand S, Lamm I, Milano F, Wirestam R, Simmons A, Deane C, Goss D, Noel A, Giraud J (2005) Development of Educational Image Databases and e-books for Medical Physics Training. Journal of Medical Engineering and Physics 27(7), pp 591-599

Author: Slavik Tabakov
Institute: Dept. Medical Engineering and Physics, King's College London
Street: Denmark Hill, London SE5 9RS, UK
Email: [email protected]
BME Education Program Following the Expectations from the Industry, Health Care and Science

P. Augustyniak1,2, R. Tadeusiewicz1,2 and M. Wasilewska-Radwańska1,3

1 AGH University of Science and Technology / Multidisciplinary School of Engineering in Biomedicine, Program Council Member, Krakow, Poland
2 Institute of Automatics, AGH University of Science and Technology, Kraków, Poland
3 European Federation of Organisations for Medical Physics (EFOMP), A Company Limited by Guarantee in England and Wales, Registered Number 6480149, Fairmount House, 230 Tadcaster Road, York, YO24 1ES, UK

Abstract— This paper presents the BME educational program implemented at AGH University of Science and Technology, in the context of the history and current state of biomedical engineering and medical physics education in Poland. Particular attention is paid to program adaptation procedures. This aspect is an often discussed factor increasing the efficiency of the educational process and the attractiveness of the studies. Education variants have been defined according to the development of particular BME branches and forecasted employment requirements. The aspects of measurement and evaluation of teaching quality, the advisability of the methods and the adequacy of the presented topics are also discussed throughout this paper.

Keywords— biomedical engineering, medical physics, BME education, multidisciplinary learning, education quality.
I. INTRODUCTION
The education in medical physics and engineering in Poland started in the 1930s [1] with the creation of the Radium Institute in Warsaw by Maria Sklodowska-Curie. Prof. Cezary Pawlowski, one of the assistants and later collaborators of Mme Curie, organized the first courses on medical physics and biomedical engineering in the Physics Department of the Radium Institute. The first course of medical engineering started at the Faculty of Electrical Engineering of Warsaw University of Technology in the 1950s. Then, at the Faculty of Electrical Engineering, Automatics, Computer Science and Electronics of the AGH University of Science and Technology (former University of Mining and Metallurgy) in Krakow, Prof. Ryszard Tadeusiewicz organized the first courses of biomedical engineering in the 1970s. Until the academic year 2005/2006, education in biomedical engineering was offered as a specialization in other fields of study, e.g. mechanics, automatics & robotics, electronics. The development of new technology in medical diagnostics and therapy caused the need for a new approach to biomedical engineering education [2]. Therefore, a consortium of six technical universities was formed – in alphabetical order: AGH University of Science and Technology (Krakow), Gdansk University of Technology (Gdansk), Silesian University of Technology (Gliwice), Technical University of Lodz (Lodz), Warsaw University of Technology (Warsaw) and Wroclaw University of Technology (Wroclaw). The consortium elaborated the new program of education and then applied to the Ministry of Science and Higher Education for the creation of a new field of studies, "Biomedical Engineering" (BME). In June 2006 the Ministry accepted the application. AGH University of Science and Technology was the first in Poland to enroll students in BME, in the academic year 2006/2007. In 2007/2008 all members of the consortium had their students in BME. In 2009/2010 education in BME was offered by 11 technical universities in Poland. The education in medical physics in Poland [3] started in 1950, with the Technical Physics specialization created at the Warsaw University of Technology by Prof. Cezary Pawlowski and at the AGH University of Science and Technology (former University of Mining and Metallurgy) in Krakow by Prof. Marian Miesowicz. In the 1970s a Medical Physics program was initiated at Warsaw University and at the Jagiellonian University in Krakow. In 1990 the Radiation Physics and Dosimetry specialization was established at the AGH University of Science and Technology in Krakow. In 1991/92 it was transformed into Medical Physics and Dosimetry, in close cooperation with the Collegium Medicum (Faculty of Medicine) of the Jagiellonian University. In the academic year 2009/2010 about 15 universities and technical universities trained students in medical physics.

II. ORGANIZATION OF THE TEACHING PROGRAM
A. General layout

The BME teaching in the Multidisciplinary School of Engineering in Biomedicine at AGH University of Science and Technology is programmed according to legal regulations, including the national standards for academic teaching set by the
Ministry of Science and Higher Education [4] and to the guidelines of the Bologna Process (including the European Credit Transfer System). The current offer consists of (Fig. 1):
• a single 1st-degree (Bachelor/Engineer) 7-semester track,
• five domain-oriented 2nd-degree (Master) 3- or 4-semester tracks,
• a single 3rd-degree (Doctoral) 8-semester track.

Fig. 1 Education tracks scheme for biomedical engineering at MSIB AGH-UST

B. BME-specific solutions

The education in biomedical engineering is a particular challenge in (at least) two aspects [5, 6]:
• a multidisciplinary approach including life, technology and human sciences,
• a broad range of the domain, difficult to reconcile with the need for a precise definition of the professional specialization field.

Three postulates were formulated as a background for the temporal organization of the proposed program:
• the well-established fundamental knowledge comes first and the recently developing domains follow,
• the general knowledge comes first and the lectures corresponding to particular specializations, students' interests and employers' requirements follow,
• the easy-to-understand topics precede the specialized knowledge and the lectures based on recent achievements of medical technologies.

Although the curricula proposed by many universities teach the technological background first, followed by the life-sciences specialization, we postulate starting the teaching from the fundamentals of all basic topics: mathematics, physics, biology, chemistry, medical and information sciences. As a result, all obligatory lectures presenting the well-established canon of knowledge are put forward and, being a reliable background for the subsequent modern technology-oriented lectures, leave room for program adaptivity in semesters 4-7 of the 1st degree. Putting the elective lectures as late as possible shortens the program adaptation delay in case of fast changes in employment conditions (e.g. recession or new opportunities).

C. Program adaptation mechanisms
After a careful review of the needs of prospective employers, of the availability of the existing infrastructure and resources, and after detailed studies of reports from more experienced colleagues, we decided to formulate and put into practice several rules and mechanisms allowing for a broad basic education in all possible BME domains and a fast adaptation of the program to the variability of the unstable local employment market.
Although a single track (for 150 students each year) is proposed in the 1st-degree program, various measures were implemented in order to increase the adaptation range of the proposed tracks. They include:
• supplementary lectures, freely selected from the offer,
• elective lectures, selected by topic, usually based on the students' interests,
• alternative lectures, selected by advancement degree, usually based on the students' skills or ambitions,
• individual study tracks (offered to the best students; under the individual supervision of an associate professor they can modify the curriculum by up to 15%),
• individual study schedules (offered to weak students; under the individual supervision of an associate professor they can adapt the schedule to their particular studying conditions),
• obligatory individual activities: personalized projects, summer stages, diploma projects and international exchange,
• elective student activities: educational meetings with an industry-proposed young engineers' challenge, student scientific societies, participation in volunteer-based programs or events in hospitals and hospices.
Besides providing high flexibility, the wide offer of elective elements involves students as partners in the educational process and trains the responsibility and flexibility required in the workplace of a biomedical engineer. The 2nd-degree program (proposed also for 150 students yearly) requires the students to select one of five offered parallel tracks. Four of them are taught in Polish and oriented towards the main branches of biomedical engineering:
• medical electronics and information technologies,
• biomaterials,
• biomechanics and robotics,
• bionanotechnologies.

Fortunately, in the formula of a multidisciplinary school benefiting from the human resources and infrastructure of five faculties, the support for these tracks is sufficient to comply with the high requirements of teaching quality. The fifth track, named "emerging technologies in health care" and taught entirely in English, is being prepared for the 2011 offer. It aims at prospective international PhD students or workers of global-range medical companies. The tracks are oriented to the particular needs of the prospective employers; however, a common root consisting of 6 mandatory lectures helps to maintain the general BME education within the range required by the Ministry standard. The 2nd-degree tracks are composed of:
• mandatory lectures, common for all tracks – 30%,
• mandatory track-oriented, highly specialized lectures – 40%,
• elective lectures – 30%.

The offer of elective lectures is track-independent, allowing several intra-track combinations that increase the program adaptability. Additionally, the individual study tracks and schedules and the personalized activities enrich the offer for the master studies. The total number of lectures proposed in the 1st and 2nd degrees of BME studies at AGH University of Science and Technology is currently 116. The 3rd-degree (doctoral) program, despite having a single track, is highly individualized and is proposed yearly for only 10 people. All the lectures are given in English, and the curriculum proposes elective lectures as well. However, the main stress in this degree is put on individual research conducted under the supervision of a professor, and on publication and industrialization of the results. Although not formally required, the research is usually performed in close cooperation with other scientific and medical institutions in Poland and abroad.

D. Education control mechanisms

No adaptation can be made responsively without an unbiased measurement and evaluation of results. This rule, well known from control theory, is fully applicable to the teaching-learning process. In this respect students, teachers and employers are all involved as partners in scoring and evaluating the education. Measurements are performed in both qualitative and quantitative ways, including:
• standardized student polls concerning the university staff,
• student polls concerning the teaching process and conditions,
• student scores on final exams, analyzed for specific lectures,
• employer scores on student performances (planned),
• diploma project supervisors' and summer stage supervisors' opinions,
• opinions from reviewers.

All the gathered information, completed by remarks from staff and student international mobility programs, is thoroughly analyzed by the Board of the MSIB. Consequently, ordering a course from a particular faculty is an independent decision of the Board, usually based on the teaching quality. The educational aspects are also discussed at the national level: the MSIB organizes a biannual National Conference on Biomedical Engineering Education (OKIBEdu), a meeting of students, professors and employers from throughout the country [7].

III. RESULTS OF PROPOSED TEACHING APPROACH
The adaptivity of the proposed educational tracks allows for a huge number of possible combinations of lectures, limited only by availability, law and industry requirements. Practice shows that the more advanced the students are, the more justified their selections become. The selection background still varies, but random choice is avoided. Some impatient students claim that in the first semesters they have to learn a very broad range of fundamental sciences, despite their precisely defined interests in biomedical engineering. Such a position is rather typical for beginners and changes
with time and with the successive recognition of the employers' preferences. The track of biomedical engineering has a reputation as one of the harder ones in our University and therefore attracts mainly good candidates. For the 150 candidates accepted in 2009, the average high school finals score was 92.2% (minimum 85.5%). Nevertheless, although the total capacity of our offer was fairly high, we were far from satisfying the 1005 candidates interested in studying (6.7 persons per place). Recently, the first 81 graduates of the 1st-degree studies received their Engineer Diploma. Although almost all of them are candidates for the 2nd degree (Master), a first evaluation of the proposed approach can be reliably made.
IV. CONCLUSIONS

The education program for the biomedical engineering faculty has a long tradition at AGH-UST; its elements have been taught for nearly 40 years. Let us recall the first two textbooks [8, 9] dedicated to particular elements of biomedical engineering knowledge, printed by the AGH-UST publishing house in 1978. However, until 2006 the faculty had not been established by the regulations, and the particular biomedical engineering lectures were given as elements of other engineering faculties (electrical engineering, material sciences, etc.). The proposed program resulted from an analysis of the strengths and weaknesses of that approach. We stress the necessary background of all sciences, developed from the beginning of the study and giving motivation for future work. Flexibility is a principal factor in guaranteeing a high employment ratio for the graduates. We are looking forward to strengthening our cooperation with industry and to involving the employers' representatives in defining future needs. Permanent elements of the BME teaching process enhancement are the anonymous student surveys performed every year, with questions about teaching quality, personal relations between students and teachers, curricula contents and mutual relations between particular lectures, labs and seminars. A very important role in these didactic optimization efforts is played by the above-mentioned biannual National Conference on Biomedical Engineering Education (OKIBEdu). During such meetings of students, professors and employers we can exchange opinions and take into account the points of view of both students and teachers, researchers and entrepreneurs, scientists and practicing doctors and engineers. Especially fruitful are these discussions when they are not confined to the problems of one particular university (e.g. MSIB) but can be developed throughout the whole country.

ACKNOWLEDGMENT

This work was funded by the AGH-University of Science and Technology, grant no. 11.11.120.612.

REFERENCES

1. Palko T, Golnik N, Pawlicki G, Pawlowski Z (2002) Education on Biomedical Engineering in Warsaw University of Technology. Polish J Med Phys&Eng 8(2), pp 121-127
2. Wasilewska-Radwanska M, Palko T (2008) Actual State of Medical Physics and Biomedical Engineering Education in Poland. 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, Riga, Latvia, June 16-20, 2008, abstracts. IFMBE / Riga Technical University, ISBN 978-9984-32-231-5, p 121
3. Wasilewska-Radwanska M, Waligorski M (1995) The Curriculum of Medical Physics in Krakow. In: Roberts C, Tabakov SD, Lewis C (eds) Medical Radiation Physics – European Perspective. King's College London, London, pp 127-140
4. Ministry of Science and Higher Education (2007) Educational Standards for Higher Education, No 49 Biomedical Engineering (in Polish)
5. Monzon JE (2005) The Challenges of Biomedical Engineering Education in Latin America. 27th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp 2403-2405
6. Schwartz MD (1988) Biomedical Engineering Education. In: Webster JG (ed) Encyclopedia of Medical Devices and Instrumentation. Wiley, New York, pp 392-403
7. Augustyniak P (ed) (2008) Proceedings of the First National Conference on Biomedical Engineering Education (OKIBEdu). Acta Bio-Optica et Informatica Medica – Biomedical Engineering, Vol 3/2008
8. Tadeusiewicz R, Kot L, Mikrut Z (1978) Biocybernetics [in Polish: Biocybernetyka]. Lecture notes, AGH publishing house, No 630, Krakow (40%)
9. Tadeusiewicz R (1978) Basic Medical Electronics [in Polish: Podstawy elektroniki medycznej]. Lecture notes, AGH publishing house, No 640, Krakow

Author: M. Wasilewska-Radwanska
Institute: AGH-University of Science and Technology
Street: 30, Mickiewicz Ave
City: 30-059 Krakow
Country: Poland
Email: [email protected]; [email protected]
Quality Assurance in Biomedical Engineering COOP-Educational Training Program: Planning, Implementation and Analysis

A. Alhamwi1, Manal A. Farrag2, and T. Elsarnagawy3

1,3 Applied Medical Sciences Department, King Saud University, Riyadh, Saudi Arabia
2 Department of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
Abstract— The paper sets out selected definitions of terms which are important for the interactions of all participants in a cooperative educational training program (COOP). It describes in detail how the COOP program was developed to closely match the National Qualifications Framework designed by the Saudi National Commission on Academic Accreditation and Assessment (NCAAA). Within this applied research on the COOP program at King Saud University, an infrastructure was developed and applied to students registered for the associate degree program in medical equipment technology. This covers time planning, the skills to be acquired during the training and, most importantly, the departments and medical devices that must be accessible to the student. Analysis, results and statistics are presented and discussed, as well as ways of improvement.

Keywords— Biomedical Engineering Education, Cooperative Education, Field Training, Work-based Learning.
I. INTRODUCTION
The Cooperative Education Program (COOP) for medical equipment technology education aims to consolidate the knowledge of students and to implement it in practical applications, so that they obtain real experience in a real work environment and realistic situations during their training in health care institutions, medical solutions companies and selected hospitals. The goal is to refine their talents and establish distinct links between foundations and principles on the one hand and practical skills in the field of medical equipment technology, in terms of good management, on the other. It also optimizes the investment in medical equipment and its maintenance, and familiarizes the students with creative thinking, the practice of a good work ethic, self-reliance, and cooperation with others. Supervisors and trainees play a critical role in promoting interactions among themselves in the learning process. The Cooperative Education Program has proved to be an effective process that can promote this interaction to the benefit of all parties. When students interact in cooperative groups, they learn to give and receive information, develop new understandings and perspectives, and communicate in a socially acceptable manner. It is through interacting with each other in reciprocal dialogues that they construct new ways of thinking and build their sense of feeling, especially towards their future career [1], [2]. Cooperative learning creates opportunities for students to actively interact with others, negotiate meaning around a task, and appropriate new ways of thinking and working [3], [4]. By establishing a learning environment where students feel safe to test out their ideas, free from the scrutiny of the classroom teacher and the wider class group, they are provided with opportunities to reach out to each other and establish a personal synergy that facilitates engagement, promotion of learning, and group cohesion – all necessary elements for successful cooperative learning [5], [6]. The present study builds on a practical implementation of the Cooperative Education Program at the Community College Applied Medical Sciences Department of King Saud University (KSU), and indicates how, when teachers are fully involved in the Cooperative Education Program, the learning outcome of the trainee is affected, as well as how supervisors change the way they interact with their students. The aim is to determine whether teachers can also be trained to use specific communication skills and improved ways to motivate the transfer of their knowledge to the trainees, in order to facilitate creative thinking and learning during the COOP program, for better results. The paper demonstrates the results of the COOP program in a real work environment. The Cooperative Education Program at the Applied Medical Sciences Department (AMS) of King Saud University in Riyadh is the first to provide medical equipment technology students with the opportunity to apply their academic knowledge and skills in a work-based environment under full control of the university in cooperation with the training institution. This fully matches the definition of the COOP program, which states that the COOP program is the process of relating the student's academic achievements to practical and authentic reality. This is achieved while the student trains at a hospital or at health care companies which are carefully chosen by the academic institution. The COOP program in the Department of Applied Medical Sciences at KSU provides medical equipment technology students with the opportunity to work in a training organization to complete the requirements of their associate degree program. The COOP is a 15-week, 12-credit-hour program that students register for after completing 64 academic credit hours.
II. NCAAA FRAMEWORK
In its approach to supporting the planning, monitoring and improvement of field experience programs, the National Commission for Academic Accreditation and Assessment (NCAAA) has developed two key documents: the Field Experience Specification and the annual Field Experience Report. The Field Experience Specification file is completed during the planning and development phases of the COOP program. It includes the aims and objectives and a summary description of the intended learning outcomes of the field experience in each of the domains of learning. Second, a general description of the field experience activity is stated. Third, the planning and preparation of the field experience is described. Fourth, the criteria for student assessment are described, along with the responsibilities of the supervising faculty and staff. Finally, the arrangements for the evaluation of the field experience activity by students, supervising staff in the field, and supervising staff from the institution are explained. The Field Experience Report is completed at the end of each COOP program cycle [7].

COOP Terminology

Cooperative education is defined as a process of education that formally integrates a student's academic and/or career interests with a productive work experience in a cooperating employer organization [8]. The Training Coordinator is a chosen employee of the department who acts as an informative agent between the COOP program and the department at the university. The Academic Supervisor is a professor who is chosen to continuously supervise students in cooperative education and to evaluate their performance through a previously designed plan, which includes a weekly site visit. The Training Supervisor is an employee of the institution where the student receives his training and is in charge of supervising the student.
III. METHOD

A. Planning and development of the COOP program

The program takes the student through practical training in various identified professional organizations. The organization can be a hospital or a medical equipment company. The planning and development of the program includes:
1. A survey of the needs of the labor market in the specialization and of the stakeholders' capacity to accommodate and train students in accordance with the program.
2. Preparation of the trainees well before the start of training, through lectures that familiarize them with their rights and obligations.
3. Holding meetings with training supervisors to make sure that they absorb the content of the COOP program, and familiarizing them with their duties and obligations.
5. Selection of the sections where students are trained and of how long the students will spend in each section, and a tour of the training institution.
6. A weekly lecture held by the academic supervisors for the trainees, to complete their knowledge about medical equipment they may not cover during the training.

B. Participants

The three different stakeholders involved in making decisions about the COOP program are the employers (training institutions), an administrative member (the training coordinator) and the students.

C. Program follow-up

The training coordinator, together with the department's chairman, explores the chances of training at different institutions through meetings and letters. Thereafter the students are informed and distributed to their training places. A training plan is accurately set according to the needs of each specialization. This ensures that the trainee fills the practical gaps which cannot be filled by the college as an academic institution. One of the duties of the training coordinator is to measure how strictly the proposed training program, put together by both the training institution and the college department, is implemented, through regular meetings and a follow-up form. After the student allocations are made by the academic supervisors, the actual training starts.
Fig. 1 COOP-Organization chart of the Applied Medical Sciences dept.
Figure 1 represents the developed organization chart that is implemented at the Applied Medical Sciences Department of King Saud University. It shows the flow of information and feedback for the applied cooperative training program. The academic supervisors pre-evaluate the learning efficiency of students through daily, weekly and monthly student reports. The training supervisor submits a monthly student assessment report to the academic supervisor, to be reviewed in the department at the college for locating deviations or problems.

D. Student Assessment

The assessment of the student is based on the following elements: behavior, self-reliance, the ability to perform tasks, interest in the work, general appearance, and relationships with others. The student must also submit a final report documenting what he achieved within the weeks of training. The grade distribution for the COOP is 20% from the training organization (the training supervisor), 60% from the academic supervisor and 20% from the external committee evaluating the student's final report.

E. Evaluation and Improvement

The academic supervisors act as a liaison between the training sites and the AMS department. They are responsible for obtaining regular feedback and resolving any work-related issues. On their weekly site visits, they also discuss with the work supervisors their suggestions regarding the program. Areas of improvement are the modification of the evaluation form, the search for new training institutions, increasing the number of academic supervisors, improving the evaluation procedures and increasing the coordination with the training supervisors.

IV. RESULTS

From the establishment of the AMS department at KSU until June 2009, about 106 students were enrolled in the COOP program by more than 12 training institutions, for 15 weeks at 40 hours per week. The program included student work in four different workshops for the maintenance and troubleshooting of medical equipment, for a period of between 3 and 4 weeks within each workshop: the medical imaging system workshop, the electronic medical equipment workshop, the mechanical medical equipment workshop, and the medical laboratory instrumentation workshop. The training at the medical imaging system workshop includes preventive maintenance and maintenance of high voltage transformers, cables and X-ray tubes, and maintenance of mobile X-ray units, Cath-Lab, and ultrasound scanners.
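Returning to the grade distribution given in subsection D above: the final mark is a plain weighted sum, illustrated below with hypothetical marks on a 0-100 scale.

```python
# Weights described above: 20% training supervisor, 60% academic
# supervisor, 20% external committee (final report evaluation).
WEIGHTS = {"training_supervisor": 0.20,
           "academic_supervisor": 0.60,
           "external_committee": 0.20}

def coop_grade(scores):
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {"training_supervisor": 85,
           "academic_supervisor": 78,
           "external_committee": 90}
print(f"{coop_grade(example):.1f}")   # -> 81.8
```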
The training at the electronic medical equipment workshop includes electronic skills development, such as dealing with printed electrical circuits and electronic devices; it also includes various training on the maintenance of ECG, EMG, EEG and defibrillator units, as well as training on the preventive maintenance of all electronic medical equipment, under the supervision of engineers with expertise in the field of biomedical engineering. The training at the mechanical medical equipment workshop includes training on the maintenance of pumps, incubators, ventilators, injection pumps, infusion pumps, anesthesia machines, and sterilization equipment, as well as the preventive maintenance of all mechanical medical equipment. During the training period a number of lectures on selected devices are given. The training at the clinical laboratory instrumentation workshop includes training on the preventive maintenance and maintenance of gas analyzers, blood counters, spectrophotometers, flame photometers, and microscopes. The percentage of students who pass the COOP program is 100%, and the percentage of students who have obtained jobs after finishing the COOP program is 68%. It is worth mentioning that some of the trainees signed job contracts even before they had finished their training period.

V. DISCUSSION

The results of the trainees' evaluation reflected the suitability of the training program to the needs of the training institutions and its ability to develop human resources in the medical equipment technology specialization. Questionnaires showed the need to intensify cooperation between the training institutions and the AMS department. They also showed satisfaction with all points of the training program, especially the academic supervisor's weekly site visit. Questionnaires showed that the students received good information on the specialization, practice what they learn in a real work environment, follow what is new in this area and develop their skills in the maintenance of medical equipment. They also showed a desire to prolong the duration of the training and to continue education to obtain a higher degree in the specialization. A number of factors have contributed to the success of the COOP program in AMS. The effective communication between the employers and the COOP coordinator helps identify, resolve and prevent problems. The uniqueness of the AMS COOP program lies in the close, regular and continuous supervision of students throughout the training period. This is achieved through the academic supervisor's weekly site visits and ongoing communication with the field supervisors.
Of all the students who passed the program, 68% obtained jobs, 5% are continuing their studies to achieve the bachelor degree, and 27% are still seeking a job. Students with moderate academic performance demonstrated exceptional work skills and were hired by their training organizations. COOP education also helps faculty who work as academic supervisors to keep up to date with the rapidly changing medical technology field. Such real-world experiences allow a student to explore career options and better define his role in the biomedical engineering community.
VI. CONCLUSION

This analysis and description of the COOP model at the Applied Medical Sciences Department can be adopted by other colleges and be a model for their training, which is defined by three major entities: the student, the academic supervisor and the training supervisor. The paper described the factors that contributed to the success of the COOP experience. Results so far show that the program has proven successful in strengthening the relationship between employers and higher education institutions, in meeting the local job market demands, and in enhancing the "Saudization" program conducted by the Saudi government.
ACKNOWLEDGMENT

The authors would like to thank His Excellency Professor Abdullah bin Abdurrahman Al Othman, Rector of King Saud University, for his support.
REFERENCES

1. Barnes, D. (1969). Language in the secondary classroom. In D. Barnes, J. Britton, & H. Rosen (Eds.), Language, the learner, and the school (pp. 11-76). Harmondsworth, Middlesex, England: Penguin Books.
2. Mercer, N. (1996). The quality of talk in children's collaborative activity in the classroom. Learning and Instruction, 6, 359-377.
3. King, A. (1999). Discourse patterns for mediating peer learning. In A. O'Donnell & A. King (Eds.), Cognitive perspectives on peer learning (pp. 87-115). Mahwah, NJ: Lawrence Erlbaum Pub.
4. Rogoff, B., & Toma, C. (1997). Shared thinking: Community and institutional variations. Discourse Processes, 23, 471-497.
5. Johnson, D. W., & Johnson, R. T. (1990). Cooperative learning and achievement. In S. Sharan (Ed.), Cooperative learning: Theory and research (pp. 173-202). New York: Praeger.
6. Slavin, R. (1995). Cooperative learning: Theory, research, and practice (2nd ed.). Boston: Allyn and Bacon.
7. Handbook 2, Internal Quality Assurance Arrangements. The National Commission for Academic Accreditation and Assessment, March 2007.
8. National Commission for Cooperative Education at www.co-op.edu
9. Coop Guide for King Fahd University of Petroleum and Minerals (KFUPM) students at http://www.kfupm.edu.sa
10. Educational Training Guideline, Department of Applied Medical Sciences, Riyadh Community College, King Saud University, September 2007.

Author: Ahmad Alhamwi
Institute: Applied Medical Sciences Department, King Saud University
Street: Sixty Street
City: Riyadh
Country: Saudi Arabia
Email: [email protected]
Accreditation of Medical Physics and Medical Engineering Programmes in the UK

S Tabakov1, D Parker2, F Schlindwein3, A Nisbett4

1 King's College London, UK; 2 University of Birmingham, UK; 3 University of Leicester, UK; 4 Royal Surrey County Hospital, Guildford, UK
Abstract— The Quality Assurance of Education in Medical Physics and Medical Engineering in the UK includes two levels of accreditation – internal (based on University guidelines) and external (based on IPEM guidelines). The paper presents a brief overview of this process.

Keywords— Education, Training.
I. INTRODUCTION

The accreditation of Medical Physics and Medical Engineering MSc courses in the UK usually has two levels – internal university accreditation and external professional accreditation. While the internal accreditation follows a route common to all Higher Education Institutions in the UK, the external accreditation is based on the requirements of the UK Institute of Physics and Engineering in Medicine (IPEM). These two accreditation levels assure the quality of the Medical Physics and Medical Engineering MSc education. The paper presents a short overview of these two systems, aiming to support the development of quality systems for professional education and training in other countries.

II. INTERNAL UNIVERSITY ACCREDITATION

The usual university accreditation is an internal process (usually repeated every 5 years) which always involves external assessors. The specially formed accreditation panel includes the Chair (or another representative) of the Faculty Teaching Committee, one or two representatives of other MSc programmes and an external expert in the field (in our case Medical Physics and/or Medical Engineering). The assessment takes about one day. During the internal assessment the Panel examines the Critical Review of the accredited MSc programme (submitted by the MSc Programme Director), the reports of the External Examiner(s) of this programme and selected MSc materials (specifications, handbook, exam scripts, etc.). The Panel also holds short interviews with the Programme Director, Faculty staff and current students.
Usually this accreditation focuses on the methods and systems used to assure the high quality of education (entry requirements and admission; programme resources; faculty assessment; lecturing methods; building of knowledge and skills; student feedback; examination at various levels; exit award criteria; administration; etc.). Each university has its own system for the quality assurance of academic standards, but in general all these systems are quite similar.

III. EXTERNAL (IPEM) ACCREDITATION

The external professional accreditation of Medical Physics and Medical Engineering MSc programmes is a process assuring that the academic content of the MSc provides a good background for the professional training of the students. This is especially important for assuring the quality of the skills of the clinical scientists working for the UK National Health Service (NHS). Successful completion of an IPEM-accredited MSc course (programme) and IPEM-accredited training leads to the IPEM Diploma (additional to the MSc degree), and is imperative for the registration of the graduate as a Clinical Scientist with the UK Health Professions Council (HPC). The external IPEM accreditation follows special guidelines [1] and is conducted by the IPEM Clinical Scientists MSc Course Accreditation Sub-Panel. The panel includes both academic leads and healthcare specialists. The accreditation process includes examination of the MSc programme application and a visit to the applicant. Alongside the assessment of the breadth and depth of the lecture material, the assessors have to examine the level of educational interaction with real healthcare practice. For this reason many MSc programmes work in close partnership with hospital departments of Medical Physics and Medical Engineering. The assessors also have to verify the adherence of the programme content to the IPEM model syllabus, which includes Prescribed topics and Specialist topics (the latter related to the areas of approved practical hospital training). The following Prescribed topics are assessed: Anatomy and Physiology; Radiation, Safety and Quality; Scientific Principles and Research Skills; Professional topics. The assessors have to be satisfied that these are adequately covered (including a minimum number of contact hours).
The following Specialist topics are assessed: Radiotherapy Physics; Radiation Protection; Diagnostic X-ray Physics; Nuclear Medicine; Magnetic Resonance Imaging; Ultrasound Physics; Non-ionising Radiations; Physiological Measurement & Functional Assessment; Information and Communications Technology; Medical Electronics and Instrumentation; Medical Engineering Design; Assistive Technology. An MSc programme has to cover all Prescribed topics well and at least 3 Specialist topics. Some programmes may include combination topics related to both Medical Physics and Medical Engineering. The IPEM panel assessors meet with the Programme Director, Faculty staff and students/trainees. Additionally, they assess lecture notes, laboratory protocols, examination papers, exam scripts and other programme documentation, including a selection of MSc theses. The latter is of special importance for the quality of the MSc programme and its relation to healthcare practice. The accreditation visit takes about a day and focuses specifically on the relevance of the acquired knowledge and skills to clinical practice. The recommendations of the IPEM assessors are discussed by all members of the IPEM MSc accreditation panel and, if the assessors are satisfied with the MSc programme, accreditation for 5 years is issued. New MSc programmes may receive conditional accreditation for 2 years, to allow the build-up of a sufficient number of MSc theses. In addition to this accreditation, the department delivering the practical training (associated with the MSc programme) has to be assessed by another IPEM panel.

IV. CONCLUSION

The UK IPEM system for the assessment of Medical Physics and Medical Engineering MSc courses and clinical training has been developed over several years by many leading UK specialists. It has produced very good results, maintaining the high level of professional education and training, and has been used as an example in other countries developing their professional education and training.
REFERENCES

1. IPEM Training Scheme Prospectus

Author: Slavik Tabakov
Institute: Dept. Medical Engineering and Physics, King's College London
Street: Denmark Hill, London SE5 9RS, UK
Email: [email protected]
Tools based eLearning Platform to Support the Development and Repurposing of Educational Material

T. Stefanut1, M. Marginean1 and D. Gorgan1

1 Technical University of Cluj-Napoca, Computer Science Department, Cluj-Napoca, Romania
Abstract— The recent development of eLearning platforms, presentation technologies and user interaction methods has enabled the development of new, more complex and more efficient teaching materials, especially in the medical domain. However, the process of creating new learning resources is not a trivial task, and most of the time technical knowledge is required for such an action. At the same time, the majority of present eLearning applications are oriented mainly towards the management and presentation of already created didactic materials, rather than towards their creation and development. As a result, until now only a small number of medical specialists have been able to create learning objects, either by repurposing older materials or by creating new ones from scratch. Addressing this important aspect, this paper presents an eLearning platform architecture that provides instructors with flexible tools specialized for the development of didactic medical materials. Through this software, learning resource developers have the ability to control aspects such as information search and retrieval, resource management, information presentation, user interaction methods, learning object repurposing, lesson presentation structure and others. Moreover, the flexibility of the application's functionalities allows multiple interaction and data presentation techniques to be combined in the same lesson, and even on the same resource, in order to enhance the adaptivity and adaptability of the learning materials to the needs of the student.

Keywords— learning resource, eLearning, architecture, repurposing, tools
I. INTRODUCTION

E-learning applications represent today a feasible alternative to the teaching and communication methods used in most scientific domains. Although it is not yet possible to entirely replace the classical pedagogical approaches, based mostly on face-to-face meetings in specific locations, eLearning platforms provide efficient, cost-effective, easily scalable and very flexible methods for distributing didactic materials and for allowing information exchange between the participants in the learning process. The continuous development of these applications has enabled professors to include in their lessons constantly improved representational materials: videos, 3D virtual models, virtual patients, serious games, sounds etc. At the same time, new user interaction methods have been developed and implemented to allow a more efficient and natural approach to the analysis of these new types of multimedia elements. The information exchange between instructors and learners (and between learners) has also been greatly improved through solutions that enable synchronous (text-based chat, voice and video conferences, collaborative sessions, etc.) and asynchronous communication (forums, emails, offline messages, etc.). One of the domains that benefits most from the development of eLearning application capabilities is medicine. Through the new presentation and interaction abilities of eLearning applications, complex information and concepts can be more easily presented by medical specialists and analyzed by students. Nevertheless, as creating new learning resources is an operation that requires technical knowledge, only few medical experts are able to develop eLearning materials that meet the minimum quality requirements and can be used efficiently in real eLearning scenarios. Addressing the need for specialized tools, this paper presents an eLearning application whose main objective is to facilitate the creation and presentation of medical teaching materials by specialists without technical knowledge. The tools developed and included in the underlying platform allow a flexible description of information presentation settings, data sources and types of user interaction. At the same time, the modular implementation approach used for the platform architecture allows independent development of the different tools and an easier integration process for extending the application functionality.
II. RELATED WORK
In the past few years the scientific community, especially in the medical domain, has invested considerable effort and resources in the development of eLearning technologies and materials. Anticipating the potential of this approach in the educational field, many organizations have developed
applications [1] and teaching materials [2]. Most of these solutions focus on learning resource management and display; very few of them implement tools specialized in resource creation [3]. At first, the development efforts were isolated and focused on independent solutions, resulting in separate functionalities and learning materials that could not be shared with other applications or users without major modifications. To overcome these problems in the medical field, projects like mEducator [4] or ReViP [5] focus on the development of standardized ways of information search and retrieval.

The development of new types of user interaction techniques is another major research area in the eLearning domain. Applications like eTrace [6], Web-Trace [7] or Dokeos [8] enable user interaction with teaching materials based on graphical annotation, while other projects [9] are experimenting with voice-based communication.

Although important progress has been recorded in the above mentioned areas, tools specialized in creating learning resources that could benefit from these achievements are very few. Most of the applications used in teaching material development have been created for other purposes (e.g. HTML editors, image processing, 3D modelling, etc.) and require technical skills from their users [10], or are very restrictive and limited in functionality (e.g. online rich text editors).
III. ELEARNING PLATFORM ARCHITECTURE

The platform's architecture was designed with the main goal of creating a flexible environment that allows medical specialists without specific technical knowledge to create teaching materials. The tools included in and managed by the platform constitute an application that provides functionalities for the flexible specification of data sources, information presentation settings, interaction methods etc., and assists the teacher in actions like information search and retrieval, data processing and management, and lesson structure definition. The profile of the application's target user has the following main characteristics:

• is a medical scientist who intends to create medical eLearning content for his/her students
• knows the basic concepts of teaching materials and eLearning environments, including a basic understanding of user interaction types, pedagogical approaches, information presentation methods etc.
• has no (or very little) technical knowledge about technologies like HTML, CSS, XML, JavaScript etc. or concepts like distributed databases, mash-ups, web services etc.
• has medium-level computer operating skills, including internet browsing, basic knowledge about file formats (e.g. image or video formats), file management operations etc.

The process of creating learning materials can consist of repurposing older teaching materials or of creating new learning resources from scratch. The platform's main functionalities are not oriented towards actions like image and video editing, 3D object modelling, sound recording etc., but rather towards using or reusing all these types of elements in more complex learning scenarios and resources that involve different interaction styles, presentation modes, distributed data sources or the visualization of large data processing outcomes from Grid or cluster infrastructures. Nevertheless, the functionalities of the platform can easily be extended towards other kinds of actions through the development and inclusion of new specialized tools (e.g. image editors, 3D modelling, etc.).

One of the major issues in the creation of teaching materials is the description of the visual layout of the elements included in the same resource (e.g. a lesson). Usually, the layout formatting of eLearning materials is defined in the HTML and CSS languages, which can be generated by non-technical specialists using different visual applications (e.g. online rich text editors, wiki systems, forums, etc.). Because these programs have been developed for other purposes and are not specialized in learning resource development, their formatting functionalities are rather limited and very restrictive, and direct editing of the code is often necessary. Addressing these problems, the platform we developed provides specialized layout control and creation mechanisms for the visual representation of learning materials. The layout of a resource allows the integration of multiple tool instances that can be active at the same time and are placed in a flexible grid-like structure with three levels: tools, patterns (which group multiple tools on the same row) and templates (which group all the patterns active at a specific moment in time).

The architecture of the platform is based on two conceptual models:

A. Client-server architecture

This conceptual model was selected for the implementation of the core of the platform due to its main characteristics: scalability, technology independence between server
and client, location-independent user data storage and centralized user and resource management. The information stored at the server level describes the basic components of the eLearning platform: the information about users and their roles in the system, the descriptions of tool settings and specifications, and the files that describe the learning resources created using the platform's functionalities (see Fig. 1). This central part of the architecture is necessary for better control over the resources of the platform and for data integrity and consistency maintenance procedures. It provides all the basic functionalities necessary in an eLearning application (e.g. operations for user, course and resource management etc.) together with the tool integration, instantiation and management mechanisms.

Fig. 1: Application architecture (the platform core on the server, holding user information, tool descriptions and settings, and resource descriptions and configuration files, serving clients over the Internet and connecting to web services, file repositories, distributed databases and cluster/GRID infrastructures)

B. Distributed architecture

The basic functions provided by the core component described above play important roles in data and tool management activities and also in controlling the layout of the teaching resources. The capabilities related to the creation and presentation of learning materials are further extended using tools, each of them providing different functionalities and implementations regarding user interaction types, data formats, information search and retrieval actions etc. Through these additional components, the developers of teaching resources can retrieve remote pieces of information from other systems or databases through different connection scenarios: web services, mash-ups, HTTP, streaming, etc. For specifying the connection settings and parameters, each tool provides the user with a visual interface that minimizes the need for technical knowledge. This approach gives users the ability to easily reuse previously created learning materials when developing new ones, through repurposing procedures and with minimal modifications and effort.
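To make the three-level layout and the stored resource descriptions more concrete, the following is a purely hypothetical sketch of a resource configuration file. The paper does not disclose its actual format; every element and attribute name here is an invented illustration of how tools, patterns and templates could be nested:

<resource title="Internal organs - overview lesson">
  <template id="default">
    <!-- a template groups all patterns active at one moment in time -->
    <pattern row="1">
      <!-- a pattern groups the tools placed on the same row -->
      <tool type="imageViewer" source="http://remote.db.example/liver.jpg"/>
      <tool type="videoPlayer" source="lesson/heart.avi"/>
    </pattern>
    <pattern row="2">
      <tool type="model3D" source="grid://cluster.example/organs"
            interaction="graphicalAnnotation"/>
    </pattern>
  </template>
</resource>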
IV. TOOLS

In the context of our eLearning platform, a tool is a small application that manages one specific section of a learning resource and represents the smallest component that can be used in its structure. Each tool is usually specialized in a specific content type (e.g. images, video, 3D models, virtual patients etc.), in one interaction technique (e.g. graphical annotation, voice interaction etc.) and in one or more data retrieval mechanisms (e.g. web services, mash-ups etc.). Once integrated into the platform, each tool can be instantiated as many times as needed inside each teaching resource, with independent data sources and display settings (e.g. an image viewer tool can be used more than once, with different images, in the same resource).

All the tools included in the platform constitute the application level, which, combined with the specialized functionalities of the platform, assists the teacher in the creation of learning materials. Some important advantages of using a tool-based modular structure for developing teaching materials are:

• development and integration of new functionalities (data types, interaction techniques, data sources etc.) into the platform can be done much more easily
• multiple interaction techniques can be defined, separately or together, over the same section of a learning resource
• tools can be implemented using different technologies such as PHP, Flash, Flex, JavaScript, Java etc., depending on which one is best suited to the purpose of the tool
• the same teaching resource can combine data retrieved from different external sources and through different communication protocols

Each of the tools developed for the platform is responsible for implementing the user interface needed for tool-specific settings input (e.g. data source, interaction type, etc.) and for the correct and complete implementation of the two main working modes: authoring and display. Furthermore, all the tools developed for inclusion in the application must implement a standard API that allows:
• information exchange between the tool and the platform
• behavioral control of the tool by the platform, in order to restrict access rights according to the current user role (e.g. students cannot access the settings interface of the tool)
• changing the working mode of the tool according to the current working mode of the application: authoring, changing display settings or visualizing information
V. CASE STUDY: CREATING A MEDICAL TEACHING RESOURCE
Scenario: A medical expert wishes to create a learning resource with the title "General presentation of internal organs" that allows multiple types of user interaction with different displayed data: mouse interaction to control the visualization of a video file, and annotation-based interaction with a 3D medical virtual representation. Inside this resource he also wants to display an image from a remote database and a very detailed 3D model that can be rendered correctly in real time only on a graphics cluster.

Actions: From the list of available tools displayed by the platform, the teacher inserts four types of tools into the layout of the new resource: one for image display; a tool with 3D representation functionalities and 3D annotation mechanisms; another tool with video playing capabilities that allows user interaction for playback control; and a fourth tool that can display streaming feeds from the internet and can be connected to a graphics cluster. After inserting these tools into the layout, the teacher can modify their settings through the interface each tool provides for these actions (see Fig. 2). When all the modifications are done, the teacher simply saves the resource, which then becomes available in display mode only to authorized users.
Fig. 2: Example of teaching object authoring
VI. CONCLUSION

The eLearning platform described in this paper represents a solution for the creation of new medical teaching materials in the eLearning domain, providing functionalities that assist medical specialists throughout the process of creating teaching resources. Easily extended through specialized tools, the platform gives teachers the ability to include in the same learning resource multiple user interaction types, different content types, information from various external sources etc. Furthermore, the platform allows medical specialists to define their own layout and control its structure in an interactive and visual manner, without requiring any technical knowledge.
ACKNOWLEDGEMENT

The work reported in this paper was supported by:

• ESA PECS Contract no. 98061 GiSHEO - On Demand Grid Services for High Education and Training in Earth Observation - core platform development
• the mEducator project - Multi-type Content Repurposing and Sharing in Medical Education, funded by the European Community under Contract 418006/2009, eContentPlus section - tools development
REFERENCES

1. Reviews and Condensed Profiles of 90+ Commercial Learning Management Systems (2009). http://www.brandonhall.com/publications/lms snap/lms snap.shtml
2. Electronic Virtual Patients (eViP project). http://www.virtualpatients.eu/
3. SLOODLE - Simulation Linked Object Oriented Dynamic Learning Environment. http://www.sloodle.org
4. mEducator Project - Multi-type Content Repurposing and Sharing in Medical Education. http://www.meducator.net/
5. ReViP Project - Repurposing Existing Virtual Patients. http://www.elu.sgul.ac.uk/revip/
6. Gorgan D, Stefanut T, Gavrea B (2007) Pen based graphical annotation in medical education. In: Proc. 20th IEEE International Symposium on Computer-Based Medical Systems, Maribor, Slovenia, pp. 681-686
7. Giordano D, Leonardi R (2007) Web-Trace and the learning of visual discrimination skills. In: Proc. 1st International Workshop on Pen-based Learning Technologies, Catania, Italy. IEEE Computer Society
8. Dokeos eLearning. http://www.dokeos.com/
9. Li W, Zhang Y, Fu Y (2007) Speech emotion recognition in e-learning system based on affective computing. In: Proc. 3rd Conf. on Natural Computation, vol. 5, Haikou, China
10. Watson J, Dickens A, Gilchrist G (2008) The LOC tool: creating a learning object authoring tool for teachers. Association for the Advancement of Computing in Education. http://eprints.soton.ac.uk/52551/01/LOCTool.pdf
A FEASIBLE TEACHING TOOL FOR PHYSIOLOGICAL MEASUREMENT

R. Stojanovic1, D. Karadaglic2, B. Asanin3, O. Chizhova4

1 University of Montenegro, Professor, Podgorica, Montenegro
2 University of Manchester, Senior researcher, Manchester, UK
3 University of Montenegro, Professor, Podgorica, Montenegro
4 The MHU, Docent, Moscow, Russia
Abstract— This paper describes the development of a variety of classical biomedical experimental exercises using an interdisciplinary approach. A number of them have been developed by integrating knowledge of sensors, electronics, microprocessors and MATLAB software. The exercises depicted here are intended to introduce students to the fundamental concepts of biomedical instrumentation, from the sensing requirements to the subsequent data analysis. This not only enhances fundamental knowledge, but also trains students in the application of complex concepts in real-world practice and laboratory research. The emphasis is put on the measurement of vital physiological parameters; a similar concept can be applied to other signals and systems as well. Using the proposed approach, sophisticated and expensive equipment can successfully be replaced by functional low-cost hardware and/or versatile virtual instruments.
Keywords— BME education, physiological measurements, teaching tool, ECG, PPG, MATLAB, virtual instrument.

I. INTRODUCTION
The fusion of practical knowledge from different disciplines is important for contemporary students and researchers in biomedical engineering education. Applying a modular educational layout can result in a very effective learning environment that emulates real-world practice. Through several years of experience in teaching courses such as electronics, measurements and microprocessors, we have found that the knowledge from these areas can be effectively applied in the design of flexible teaching exercises for biomedical engineering education.

The approach presented in this paper differs from existing ones in the following points: instead of classical, relatively expensive acquisition units/boards, it uses a general-purpose low-cost microcontroller (MC), while standard stand-alone monitoring units (special-purpose computers and monitors) are replaced with a MATLAB-based Virtual Instrument (VI), which can be run on any PC-compatible machine [1],[2],[3]. The sensing, amplification and filtering of the measured data are achieved using electronic circuits
based on standard components (transistors, operational amplifiers and digital gates) designed by the students. The acquired analog signals are digitized and processed by a low-cost, low-power microcontroller (MC). Using a parallel, serial or USB protocol, the microcontroller sends the packed data in real time to the server (a PC-compatible machine). A VI designed in MATLAB accepts the data, analyzes them and displays the desired biomedical signals and effects. Students also design the microcontroller circuit and its firmware, as well as the associated functions for biomedical signal processing such as plotting, filtering, transformation in the frequency and time-frequency domains, QRS detection, heart rate variability (HRV) analysis etc.

Thus, these laboratory exercises permit the students first to learn the concepts of the physiological phenomena and measurements, and second to understand the principles of system integration and signal processing of real-time data. They become familiar with the problems that real signals pose and try to solve them using knowledge from the literature or their own ideas.

This paper offers an overview of our work in this area. The emphasis is placed on representative experiments related to electrocardiography (ECG) and photoplethysmography (PPG). Section 2 briefly introduces the applied methodology, while Section 3 illustrates some of the classical exercises performed with the proposed tool. Sections 4 and 5 give the Conclusions and References.

II. METHODOLOGY
The laboratory toolset we propose is quite modular and consists of both hardware and software units (Fig. 1). Its final appearance, during use, is shown in Fig. 2.

A. Software architecture

The overall software consists of firmware and MATLAB code. As mentioned, the firmware mainly supports the acquisition process and the emulation of communication protocols. It is developed in widespread compilers like IAR or CodeVision AVR and then uploaded to the MC's program memory.
In order to avoid repeated reprogramming, the host (PC) controls the client (MC) through simple commands: e.g. 'a' starts one-channel acquisition at a 100 Hz sampling rate, 'b' changes the data length to 1024 points, 's' stops the process, etc.
Fig. 1 Tool architecture

Fig. 2 The tool in use during exercises

The packets arrive continuously at the serial port of the PC, and the "termination" sign (CR/LF) activates the "event" or "callback" function, which performs data reading, processing and display (Fig. 3). Inside the "callback" function it is possible to call both simple and very complex MATLAB functions: signal filtering, statistics, time analysis, frequency (FFT) and time-frequency (TF) domain transformations, QRS detection, plotting etc. The "callback" function must complete prior to the arrival of the next "termination" sign. An example of a MATLAB GUI with serial port initialization and a simple "callback" function is given in Listing 1.

Fig. 3 MC-MATLAB communication protocol

Listing 1 MATLAB code of a simple VI
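Listing 1 itself appears only as an image in the printed proceedings. A minimal sketch of such a virtual instrument is given below; the port name, baud rate and packet format are assumptions, and the callback name is hypothetical:

% Minimal serial-port VI: open the link to the MC, register a callback
% fired on every CR/LF-terminated packet, then start the acquisition.
s = serial('COM1', 'BaudRate', 57600, 'Terminator', 'CR/LF');
s.BytesAvailableFcnMode = 'terminator';
s.BytesAvailableFcn = @(obj, evt) vi_callback(obj);
fopen(s);
fprintf(s, 'a');                 % 'a' = start one-channel acquisition

% --- in a separate file, vi_callback.m ---
function vi_callback(obj)
% Read one ASCII packet, parse it into samples and refresh the display.
% The function must return before the next "termination" sign arrives.
samples = str2num(fscanf(obj));  %#ok<ST2NM>
plot(samples); drawnow;
end

Sending 's' with fprintf(s, 's') and closing the port with fclose(s) would end such a session.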
III. EXAMPLES
During the student exercises, due to its simplicity, easy handling and speed, the ECG signal is taken from the fingertips (see Fig. 2) in a two-lead, ground-free configuration. It can also be taken in the chest version of LEAD 2 or RA-LA-RL. The PPG signal is taken from a finger or an ear.

A. Exercise 1: Detection and processing of the PPG signal

This is a starting exercise in which the students learn about the PPG signal, the optical probes for its detection, and the design principles of a simple VI in MATLAB (Fig. 4). The PPG signal is acquired from a finger. The VI has only one button and operates as a self-recursive function. Inside the "callback" routine two diagrams are plotted: the original PPG signal and its filtered version, obtained by a 2nd-order Butterworth filter with a cut-off frequency of 10 Hz. The content of the callback routine can be changed without interrupting the process: simply change the code and re-save it. The code for this exercise is given in Listing 1.
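Under the stated assumptions, the filtering step itself reduces to a few lines (butter and filtfilt are from the Signal Processing Toolbox; fs and the raw vector ppg are assumed to exist in the workspace):

% 2nd-order low-pass Butterworth at 10 Hz, as used in Exercise 1.
fs = 100;                         % assumed sampling rate in Hz
[b, a] = butter(2, 10/(fs/2));    % cut-off normalized to Nyquist
ppg_filt = filtfilt(b, a, ppg);   % zero-phase filtering
plot([ppg(:) ppg_filt(:)]);       % raw vs. filtered PPG
legend('raw PPG', 'filtered PPG');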
Fig. 4 VI instrument for acquisition, display and filtering of the PPG signal

B. Exercise 2: PTT calculation

This exercise is a continuation of the previous one in terms of the simultaneous acquisition and processing of two signals, ECG and PPG (Fig. 5). Both signals are filtered in order to remove 50 Hz noise. The fourth diagram shows the Fast Fourier Transform (FFT) of the PPG signal. As can be seen, the position of the dominant peak corresponds to the heart rate (HR) in Hz. By displaying the ECG and PPG signals in the same window (third plot) and using the built-in "cursor function", the pulse transit time (PTT) can be measured (Fig. 6). There are mathematical relations between PTT and the systolic and diastolic pressures [4], which opens the possibility of cuffless blood pressure monitoring.

Fig. 5 VI instrument with ECG and PPG signals, with FFT for HR determination

During this exercise the students become familiar with two basic physiological signals, ECG and PPG, and with the vital parameters extracted from them, PTT and HR. The students should also observe the physiological signals in the spectral domain, which is an effective tool in HR and HRV analysis. Here, the VI is more complex, allowing additional options like pause, re-start, loading and saving, holding, filter switching and changing the sampling frequency. The MATLAB tools associated with the "Figure" option (cursor, zooming, cursor measurement, printing, statistics, saving in different formats) should be used effectively.

Fig. 6 Calculating PTT using the "Figure" cursor
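As a rough illustration of the FFT-based HR reading described above, the dominant spectral peak of a PPG segment can be located as follows (a sketch; the vector ppg and sampling rate fs are assumed):

% Estimate heart rate from the dominant peak of the PPG spectrum.
N = length(ppg);
f = (0:N-1) * fs / N;                % frequency axis in Hz
P = abs(fft(ppg(:) - mean(ppg)));    % magnitude spectrum, DC removed
band = (f(:) > 0.7) & (f(:) < 3.5);  % plausible HR band: 42-210 bpm
[~, k] = max(P .* band);             % strongest in-band component
HR_bpm = f(k) * 60;                  % heart rate in beats per minute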
C. Exercise 3: Detection of QRS complexes

Real-time detection of QRS complexes is an important task in the analysis of ECG signals. One of the first and most effective algorithms is the Pan-Tompkins algorithm [5], based on adaptive dual thresholding, in which the decision rule uses a filtered signal and a version of the signal produced by a moving-window integrator. The students are introduced to the steps of the Pan-Tompkins algorithm. As part of their homework they also work on different algorithms, such as ECG beat detection using filter banks, proposed by V. Afonso, W. Tompkins et al. [6]. Initially, they test the efficiency of the existing algorithms offline, on the MIT-BIH database, and after that in real time using the
VI (Fig. 7). Here, the students can illustrate the intermediate steps in the implementation of the algorithms: low- and high-pass filtering, differentiation, moving averaging, adaptive thresholds etc. They also clearly recognize the problems that QRS detection poses for real signals.
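The classical Pan-Tompkins stages [5] can be sketched in a few MATLAB lines. This is a simplified illustration with a fixed rather than adaptive threshold; ecg is assumed to be a column vector sampled at fs Hz, and butter/filtfilt require the Signal Processing Toolbox:

% Simplified Pan-Tompkins pipeline: band-pass, derivative, squaring,
% moving-window integration, then a crude fixed threshold.
fs = 200;                               % assumed sampling rate
[b, a] = butter(2, [5 15]/(fs/2));      % band-pass 5-15 Hz
x   = filtfilt(b, a, ecg);              % suppress drift and noise
d   = [0; diff(x)];                     % derivative: QRS slope
sq  = d.^2;                             % squaring: emphasize QRS energy
w   = round(0.150 * fs);                % 150 ms integration window
mwi = conv(sq, ones(w,1)/w, 'same');    % moving-window integrator
qrs = mwi > 0.3 * max(mwi);             % the real algorithm adapts this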
Fig. 7 QRS detection in real time; the signal is taken from the fingertips

D. Exercise 4: Time-frequency analysis

This is an advanced exercise dealing with time-frequency analysis (Fig. 8). The S transformation, which is considered one of the Wigner transform modifications, was implemented with the following parameters: Hanning window with Nw=64, Tw=1, Ld=0 and distribution order = 1 [7]. An ECG signal with an induced artifact was acquired (upper waveform). The FFT does not clearly distinguish the artifact (middle diagram), whereas the implementation of the S transform does (lower diagram). The system parameters can be customized.

Fig. 8 VI for time-frequency analysis
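A full S-transform implementation is beyond a short listing; as a plain stand-in, a short-time Fourier view with a 64-sample Hanning window gives a comparable, if coarser, time-frequency picture (a sketch; ecg and fs are assumed, and spectrogram comes from the Signal Processing Toolbox):

% Time-frequency view of an ECG segment with an induced artifact.
% spectrogram() is used here in place of the S transform of [7].
nw = 64;                                  % window length, as Nw = 64
spectrogram(ecg, hann(nw), nw/2, 256, fs, 'yaxis');
title('Time-frequency distribution of the ECG segment');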
IV. CONCLUSIONS

The paper describes the development of a variety of classical biomedical experimental exercises using standard tools such as amplifiers, microcontrollers and MATLAB software. They are developed by fusing knowledge from various relevant disciplines. The exercises depicted are intended to introduce students to the fundamental concepts of biomedical experimentation, from the instrumentation and data acquisition requirements to the subsequent data analysis techniques. The detection and processing of physiological signals has been chosen as a case study, while the same principle can be applied to other signals. Additionally, the approach can be successfully used for research purposes, where virtual instruments can serve as a cheap and effective replacement for expensive instruments.

ACKNOWLEDGMENT

The authors are thankful to the Montenegrin Ministry of Science and Technology for funding the work described in the paper.

REFERENCES

1. Olansen J, Ghorbel, Clark W, Bidani A (2000) Using virtual instrumentation to develop a modern biomedical engineering laboratory. Int. J. on Engineering Ed. 16(2):1-11
2. Yao J, Warren S (2005) Stimulating student learning with a novel "in-house" pulse oximeter design. In: Proc. of the 2005 American Society for Engineering Education Annual Conference & Exposition
3. Carmel S, Macy AJ. Physiological signal processing lab. www.biopac.com/Curriculum/pdf/ss39lhite_physiological-signalporcess.pdf
4. Chan KW, Hung K, Zhang YT (2001) Noninvasive and cuffless measurements of blood pressure for telemedicine. In: Proc. 23rd Annual International Conference of the IEEE, vol. 4, pp. 3592-3593
5. Pan J, Tompkins WJ (1985) A real-time QRS detection algorithm. IEEE Trans. Biomed. Eng. BME-32(3):230-236
6. Afonso V, Tompkins W, Nguyen T, Luo S (1999) ECG beat detection using filter banks. IEEE Trans. Biomed. Eng. 46(2):192-202
7. Djurovic I, Stankovic Lj (1999) A virtual instrument for time-frequency analysis. IEEE Trans. on Instrumentation and Measurement 48(6):1086-1092

Author: Radovan Stojanovic
Institute: University of Montenegro
Street: Cetinjski put b.b.
City: Podgorica
Country: Montenegro
Email: [email protected]
Repurposing Serious Games in Health Care Education

A. Protopsaltis, D. Panzoli, I. Dunwell, and S. de Freitas

Serious Games Institute/Coventry University, Coventry, UK
{aprotopsaltis,dpanzoli,idunwell,sdefreitas}@cad.coventry.ac.uk

Abstract— Serious games are one of the most content-rich forms of educational media, often combining high fidelity visual and audio content, novel interaction paradigms, and diverse pedagogic approaches. This article describes exploratory work towards identifying the key issues faced when repurposing serious games in order to enable their use and reuse in the same or different educational contexts. To address these issues, we propose a theoretical framework for the repurposing of serious games in medical education and in education in general. Two case studies based on the Climate Health Impact serious game are presented. These case studies demonstrate the ability to repurpose a serious game into new learning objects, covering two different paradigms of content repurposing: language and pedagogy.
Keywords— Serious Games, Repurposing, Metadata, Health Education, E-Health.
I. INTRODUCTION

Interest amongst educators regarding the use of serious games alongside existing techniques for teaching and learning has been growing steadily over the last decade. Considering the complexity, time, effort and cost of developing a serious game, it is imperative that such a game can be repurposed, enriched and embedded effectively and adaptively into educational practices and curricula. This includes updating or changing serious games to reflect new functionality, different pedagogic objectives, technologies etc. Such repurposing and reuse are therefore desirable activities, reducing organisational resource consumption and opening up new opportunities for learning by maximising the capabilities of existing learning objects. Being able to reuse and repurpose game content thus avoids the need to recreate bespoke content from the ground up, and offers the potential to efficiently adapt serious games and game elements to wider audiences and application areas.

Repurposing refers to the changing of a learning resource initially created for a specific educational context to a new educational context (or contexts) [1], and should be distinguished from reuse, which refers to the use of the same learning resources without any changes [2]. Considerable research work has been done in the field of automatic learning object repurposing [e.g. 3, 4]. Furthermore,
a sub-section of the literature regarding notions of repurposing has focused on multimedia repurposing [e.g. 5, 6]. In contrast to those areas, the area of game repurposing is still in its infancy and there are no exhaustive works addressing the issue. In one such study, Burgos et al. [7] describe the use and repurposing of commercial games. The focus of their work was on the different pedagogic approaches to game repurposing. They described two different approaches: in one, a game is fully integrated into the learning flow, while in the other a game is used as an autonomous learning object disconnected from the learning flow. Bar the work of Burgos et al. [7] and the authors' own work within the mEducator project, our research uncovered few studies regarding the repurposing of serious games content in general, and in health care in particular. Our current mEducator study therefore aims to provide the mechanisms with which to discuss and analyse the repurposing of game-based content, and to present a framework within which serious games repurposing can be facilitated.
II. THE FRAMEWORK

We propose the adoption of the game-based learning design model in conjunction with the additional component of fun, to support motivation and engagement on the part of the learner. The serious game design framework therefore will:

1. Adopt learner-centred approaches for fostering positive attitudes and behaviours (e.g. developing appropriate user modelling and profiling to support personalisation).
2. Integrate learning-outcome-based and usability design criteria.
3. Adopt a participatory design strategy (engaging users and stakeholders within the serious game design process).
4. Use formative evaluation methodologies (e.g. based upon the four dimensional framework).
5. Integrate relevant tools and techniques including pre-prototyping, human factors analysis, scenario creation tools and learning needs analysis underpinned by the four dimensional framework.
6. Integrate fun as a design category through use of narrative, flow and immersion.
III. THE REPURPOSING CONTEXTS

Educational content repurposing is a very popular activity among educators, and it can broadly be divided into ten different categories, as proposed by the mEducator consortium [1, 8]: 1) Actual content, 2) Languages, 3) Cultures, 4) Pedagogical approaches, 5) Educational levels, 6) Disciplines or professions, 7) Delivery content types, 8) Technology, 9) Educational context, 10) Different abilities.
Fig. 1 Hypothesised relationships between repurposing types and common game elements

As shown in Fig. 1, using these 10 definitions of repurposing types, we begin to deconstruct the various elements common to serious games and illustrate the strongest links between repurposing types and game elements. This provides a preliminary guide outlining which elements of a game are likely to be needed to effectively perform a given type of repurposing. For example, cultural repurposing may require that animations and game dynamics be changed to reflect differences in gameplay and gestures between cultures, and language repurposing may also be essential. Together with the serious games framework and the two main extracted elements of repurposing (language and pedagogy), the next section explores the best methods for repurposing game content using the Climate Health Impact game. The case studies aim to present a new approach to game repurposing for the medical education field and to test the framework.

IV. THE CASE STUDIES

The case studies are based on the Climate Health Impact (CHI) game, developed by a UK company, PlayGen. The game is designed to convey to the player how global warming will affect the way diseases spread in the future. The game is about discovering and identifying diseases and then adopting the right policies to save as many people as possible, and as such relies upon decision-making and research skills on the part of the player. The game is tied to the learning objectives of the A-level biology curriculum, and also has embedded overarching learning objectives in the areas of the environment and conservation.

Like many content-based games, CHI favours a separation between the game behaviour (interfaces, game mechanics, AI, etc.) and the content. CHI is a web-based game, built using the FLEX technology from Adobe, which allows the production of rich internet applications (RIAs) by providing the developer with a substantial set of pre-written routines and user interface elements. However, as the following case studies reveal, accessing and modifying the interfaces is far beyond the skills of a teacher or educator who would like to repurpose the game to address different needs, and requires expert technical skills.
Fig. 2 Content is structured using an XML representation

The game includes separate files containing texts and references to assets. These files are formatted in XML (eXtensible Markup Language), as this format allows for the representation of any type of content in a highly structured way (Fig. 2). Each disease is organised as a single record, integrating the texts to be displayed for the disease, references to illustrative pictures and parameters for use by the game engine. This structured representation is not only an asset when designing the game, but also proves to be a mandatory step towards repurposing the content. The next sections discuss the issues related to extracting, using and repurposing the content related to the disease descriptions. The two case studies address two of the ten situations in which serious games can be repurposed.
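The content files themselves are shown only as a screenshot (Fig. 2); a hypothetical sketch of one disease record, with all element names invented for illustration, might look like this:

<disease id="malaria">
  <!-- texts displayed to the player -->
  <name>Malaria</name>
  <description>A mosquito-borne parasitic infection ...</description>
  <!-- references to illustrative assets -->
  <image src="assets/malaria.png"/>
  <!-- parameters read by the game engine -->
  <spreadRate>0.4</spreadRate>
  <climateSensitivity>high</climateSensitivity>
</disease>

A record of this kind is what both case studies below operate on: translation edits its text fields, while the quiz reuses its fields as question material.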
A. Language Repurposing

Repurposing the game to a different language requires the translation of the content into the target language; here, the target language is French. Translating the game, or parts of it, is in theory relatively straightforward, in terms of editing the XML files and translating the texts (descriptions, captions, etc.). No game design knowledge is necessary, as the content is separated from the game engine, and the XML files can simply be passed to a translator for editing. Fig. 3 shows the result of such a translation. However, some tags or titles may be hard-coded in the interface objects, and translating them requires at least web-designer skills.
Fig. 3 Translating the content is as easy as translating the text fields inside the XML document

B. Pedagogical Repurposing

The objective of the second case study was to design a quiz based on the disease descriptions. Most of the mechanisms related to researching the diseases are kept from the original game. Transforming the original game into a quiz is quite different from translating it. Whereas the latter requires changing the content of the game, which is hosted in separate, easily readable and modifiable XML files, the objective here involves modifying the mechanics of the game. In this case, unfortunately, nothing at the design stage had been defined or planned to ease the process. It is here that the serious games design framework could be employed at the design stage of the original game content development, to build in the extractable elements of the game and ease the process of reusing or repurposing content. The question remains, however, whether repurposing support should be built into the game
mechanics design or simply into the game content; our research points to a requirement for considering both aspects of game design (mechanics and content) as critical to the ease of repurposing.

Building a quiz out of the CHI game has therefore involved some programming. In this case, as the game engine is separated from the Flash-based interfaces, the workflow of the game is clearly simplified. Firstly, the engine loads the interfaces. The interfaces can then be integrated into the main window, displayed or hidden depending on the context. Then, the engine loads the XML data from the content files. Once the content is loaded, it is passed to the interfaces, where it is formatted. In practice, the content can represent text to display in the correct text areas, links to images that will be loaded on a panel, or parameters that will be used to animate a virtual avatar, for example.
Fig. 4 Panels or parts of the interfaces can easily be "extracted" as they are separate Adobe Flash files. In this example, the disease information panel on the right of the main game screen has been extracted. The PlayGen logo has however been re-introduced to identify the company as owner of the material

Repurposing the game is therefore aided by the ability to select and extract specific interface components. For the CHI game, the disease information panel has simply been extracted and used as the main window (Fig. 4). From this point, having the content and the panels in which the content has to be displayed, development work is required to build new mechanisms. The direct feedback mechanism has been kept from the original game, so that a mistake is instantaneously notified to the player by means of a red sign (Fig. 5), accompanied by advice to look up the answer on a medical website. Note that the red sign has also been reused from the original game. During the game, the mistakes and successes of the player are recorded, so that a score depending on the player's performance is attributed at the end of the game. The resulting repurposed learning object (the quiz, see Fig. 5) is a standalone application that can be used
with a different instructional approach, such as an exploratory one focused on increasing the learner's zone of proximal development by directing them to web and other external materials and resources in order to solve the challenges and address the issues presented by the quiz. The quiz was developed using the serious game design framework approach, and we have found that, by integrating different learner groups and introducing an element of fun, the repurposed content can be reused in a different pedagogic context and to support a different age range of learners. Further testing of the repurposed content and the serious game design framework is planned in ongoing work as part of the mEducator project.
Due to the technical nature of game content, the educator has to keep in mind that limited knowledge about how games are designed and programmed is still a limiting factor for everyday repurposing of game content. That said, game developers themselves should realise the potential of anticipating the increasing demand from educators who consider serious games content a relevant source of educational material. In concert with educators, the role of researchers is thus to provide game developers with recommendations and frameworks that would enable this turnover at the lowest cost. To that end, future research should focus on developing technical solutions that can bridge the gap between the technical skills required for repurposing and the integration of an external module, serious game, simulation or virtual world into an e-learning system.
ACKNOWLEDGMENTS
Fig. 5 The CHI game once repurposed into a quiz. The interface and the content have been extracted from the game; however, in this case, some programming was necessary to design new mechanisms, like scoring. Top-left: the disease has been successfully found. Top-right: the player picked the wrong disease. Bottom-left: the disease's vector has been correctly found. Bottom-right: the player has scored 84/100, as the end screen shows
V. CONCLUSIONS

This paper has presented a framework for serious game-based learning design and interrogated how game content can be repurposed according to content, language and pedagogic elements. The case studies demonstrate that two new serious game learning objects were created by repurposing the CHI serious game. They make clear that a separation between the content and the behaviour of the game greatly facilitates such repurposing, although at present this approach involves complex programming skills: extracting the content of the game for repurposing is relatively simple, while modifying its behaviour is far from an easy task. In the future, methods of rapid repurposing may be developed based upon extracted high-level criteria, models and frameworks such as the one outlined here, and upon the spread of standards for conformance in the design and implementation of game-based content.
The authors wish to thank Kam Star, from Playgen UK, for letting them use the Climate Health Impact game. The game can be found at http://playgen.com/climate-healthimpact/
REFERENCES

[1] Dovrolis N, et al. (2009) Depicting educational content re-purposing context and inheritance. In: International Conference on Information Technology and Applications in Biomedicine (ITAB), Larnaca, Cyprus
[2] Meyer M, et al. (2006) Requirements and an architecture for a multimedia content re-purposing framework. In: Nejdl W, Tochtermann K (eds) Innovative Approaches for Learning and Knowledge Sharing. Springer-Verlag, Berlin/Heidelberg, pp. 550-556
[3] Zaka B, et al. (2008) Topic-centered aggregation of presentations for learning object repurposing. In: World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education (E-Learn), Las Vegas, USA
[4] Jovanović J, et al. (2005) Ontology of learning object content structure. In: 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands, pp. 322-329
[5] Steiger O, Ebrahimi T, Sanjuan MD (2001) MPEG-based personalized content delivery. In: IEEE International Conference on Image Processing (ICIP), Barcelona, Spain
[6] Hossain SM, Rahman AM, El Saddik A (2004) A framework for repurposing multimedia content. In: CCECE 2004/CCGEI 2004, Niagara Falls
[7] Burgos D, Tattersall C, Koper D (2007) Re-purposing existing generic games and simulations for e-learning. Computers in Human Behavior 23(6):2656-2667
[8] Kaldoudi E, Balasubramaniam C, Bamidis DP (2010) mEducator D.3.1 Content Repurposing: Definition of Repurposing Reasons & Procedures
Geotagged Repurposed Educational Content through mEducator Social Network Enhances Biomedical Engineering Education

S.Th. Konstantinidis1, N. Dovrolis2, Eleni Kaldoudi2 and P.D. Bamidis1

1 Medical Informatics Laboratory, School of Medicine, Aristotle University of Thessaloniki, Greece
2 Medical Physics Laboratory, School of Medicine, Democritus University of Thrace, Greece
Abstract— Complete educational programs in Biomedical Engineering and Medical Informatics are offered throughout European institutions. A common curriculum and level of provided knowledge is being pursued through protocols and guidelines from international and European organizations. The educational material that is used, shared and repurposed across institutions can be an indicator of the differences between educational curricula. We propose a mashup tool that represents the different repurposing types across institutions and can be used as an indicator of differences in knowledge provision. This tool is made possible by the mEducator Social Network, comprised of two distinct and interacting networks: one of persons, and one of learning objects dynamically connected with both persons and other learning objects.
Keywords— WEB2.0, mashup, Medical Informatics Education, content repurposing, learning object.
I. INTRODUCTION
Nowadays, complete educational programs in Biomedical Engineering and Medical Informatics are offered throughout European institutions, at both under- and postgraduate levels. Each institution is responsible for defining the curriculum within its departments according to the national health care system and to local and international scientific needs. Different needs and different access to educational material lead institutions to different curricula and, as a result, to different levels of knowledge for the new scientists in the field. To this extent, each institution provides its students with educational material of different quality, especially state-of-the-art educational material. The inequality in knowledge of new Biomedical Engineers or Medical Informatics scientists across Europe can influence both industry and research activities.

During the last few years, international non-profit organizations have produced some recommendations on education in Biomedical Engineering and Medical Informatics. These recommendations are based upon current curricula and new trends in the scientific field. However, there is no tool that can "measure", or at least provide a few indicators of, the differences in the knowledge provided by institutions in different countries, or even in different cities in the same country. To fill this gap, we propose a mashup tool that provides visual indicators of how state-of-the-art educational material is exchanged, and how it is repurposed and adapted, across institutions. The long-term aim of this system is to re-establish education in the field of Biomedical Engineering and Medical Informatics, to provide a common basis for further discussions and recommendations on educational activities in this field, and to reinforce and enhance education and research across European and international institutions.

The remainder of this paper is structured as follows. In section II we set the scene, presenting efforts towards a common curriculum in Biomedical Engineering and Medical Informatics education, current trends in state-of-the-art educational content sharing, and geotagging as a mashup tool in health. In section III a social network of learning objects is briefly presented, followed by different perspectives on content repurposing. The map as a representation tool for repurposed learning objects and as a curriculum differential indicator is presented in section IV, followed by a discussion of key issues of concern.

II. SETTING THE SCENE
A. Biomedical Engineering and Medical Informatics Education

The rapid development of the European Union and the freedom of movement raise the issue of recognition of professional qualifications. The Bologna Declaration aims to set the basis of comparable criteria for inter-institutional mobility and curriculum development [1]. Based on it, traditional engineering curricula have been harmonized relatively easily across EU institutions. In contrast, bioengineering and biotechnology, due to their rapid development and the small educational units in which not all specialties can be fully offered, urge for a common curriculum [2]. In 2005, in recognition of the need for guidelines for the professional formation and development of the Clinical Engineer, IFMBE (International Federation for Medical and Biological Engineering) and EAMBES published, through the BIOMEDEA project (http://www.biomedea.org/), the "Protocol for the training of clinical Engineers in Europe" [3].
This protocol contains guidelines for institutions on curriculum details at the different stages of the training programs. The basic principles on which it is based are the "experience of those currently working in the field, international developments regarding Clinical Engineering, current and proposed professional structures and benchmarking across the international approaches to the professional formation and development of Clinical Engineers".

To the same end, IMIA (International Medical Informatics Association) published the "Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics - 1st Revision" [4], [5]. The educational needs are described in a three-dimensional framework: professionals in health care, type of specialisation in biomedical health informatics, and stage of career progression. The recommendations are based on reports and current literature in the field, as well as workshop proceedings and formal discussions. Furthermore, models for a common educational curriculum worldwide have been set as a main goal in IMIA's strategic plan [6].

B. State-of-the-art educational content sharing and re-use across the EU

Continuous advances in Bioengineering and Medical Informatics and the lack of a common curriculum lead to an enormous need for state-of-the-art educational material. A number of research projects try to enable educational material sharing across EU institutions [7]. In this context, "mEducator", an EU funded best practice network (funded by the European Commission under the eContentPlus 2008 programme, Contract Nr: ECP 2008 EDU 418006) [8], aims to elaborate on pedagogical, technical, standardization, cultural, social and legal issues towards a standards-based infrastructure that enables the sharing of state-of-the-art digital medical educational content among medical educators and students in European higher academic institutions. Considering the state-of-the-art nature of medical educational content, it is imperative that such content can be repurposed, enriched, and embedded effectively into the respective medical and other related scientific curricula, clinical practice and continuing education, as well as public dissemination and awareness.

The different perspectives of content repurposing across institutions in Bioengineering and Medical Informatics education can be an indicator of the needs and the variation of educational curricula in the field. As described in [9] and extended in [10], there can be a variety of situations in which re-purposing educational content is desired, including the following:
1. Re-purposing in terms of the actual content: add or mutate content, integrate content from different learning objects, re-organize existing content, etc., or a combination of the above.
2. Re-purposing to different languages: especially a mandate in healthcare, as acquired knowledge should finally be communicated to the patient.
3. Re-purposing to different cultures: can be viewed as content localization and includes adaptation to different legislation and local medical regulations, different lab test norms, reference values and units, as well as the different medical requirements of various ethnic groups.
4. Re-purposing for different pedagogical approaches: the pedagogical cultures present in healthcare education range from conventional lecturing to clinical practice and a variety of active learning methodologies. All of these educational approaches would require the same content to be presented in a different way.
5. Re-purposing for different educational levels: content needs to be adapted to match different pre-requisites and, consequently, different learning outcomes for different levels: undergraduate, postgraduate, residency, specialty training, continuing life-long education during medical practice, public education, etc.
6. Re-purposing for different disciplines or professions: healthcare education addresses a multitude of professions, ranging from medical doctors to nurses and lab technicians, to basic life scientists and even healthcare administrators.
7. Re-purposing to different content types: contemporary medical education exhibits a considerable variety of content types; thus a common aim of repurposing is to change a learning object from one type to another, for example a lecture presentation into a didactic problem, or course notes into a presentation, and so on.
8. Re-purposing for different technology: finally, we should account for changes to a digital learning object that affect its technological characteristics, such as digital format, digital size and quality (e.g. for images), metadata description scheme, computer platform, etc.
9. Re-purposing to educational content: re-purposing content created for a different purpose into content used for education.
10. Re-purposing for people with different abilities: this includes re-purposing content for people with special needs, e.g. from written to spoken form, etc.

C. Geotagging as a representation mashup tool in web-based applications

In the last decade, due to the release of many information administration APIs that can be applied on a map, a huge number of web-based applications that represent information on maps have
been launched. Representative examples are "HealthMap", a map that represents global infectious diseases based on media reports [11], and "biomedexperts" (http://www.biomedexperts.com/), a literature-based scientific social network that represents, among other things, the connections of scientists across the world. These two and many other applications use mashup tools in order to represent useful and complex information in a more human-understandable way.

III. CONTENT REPURPOSING IN MEDUCATOR SOCIAL NETWORK
There are numerous social networks that enable collaboration in general and between specific groups. Facebook (http://www.facebook.com), probably the most famous example, and MySpace (http://www.myspace.com) have grown rapidly over the last few years, proving that relationships from real life can be transferred to a virtual network. LinkedIn (http://www.linkedin.com/) and Xing (https://www.xing.com/) emphasize professional linking and new business contacts, while Epernicus (http://www.epernicus.com/) and SciSpace (http://www.scispace.com) are some of the social network platforms dedicated to scientists and researchers.

Kaldoudi et al. [9] propose a social network that can be viewed as two distinct and interacting networks. The first one is a network of persons, including authors, potential authors and final users of learning objects (students, teachers or others, e.g. educational managers, etc.). The second is a network of published learning objects. A WEB2.0 approach allows users to interact through blogs, chat and other mashup applications and to create a scientific network and groups of interest. At a different level, the learning objects (LO) themselves create an equivalent social network through interactions with other learning objects as well as with persons. These interactions are variable and dynamic, and thus create an evolving, user-centric and goal-oriented organization of objects and persons, based on social dynamics. The LO itself can be a resource in an LMS or another repository, a resource on the web etc., with its location stated in its description, or it can be an associated file or files uploaded to the social network itself. A complex and dynamic organization is created based on the user-generated tags that have been declared for each of the LO description fields. Finally, a third type of organization is a hierarchical one, describing the repurposing history of each object. The current deployment of this learning object social network is implemented using the Elgg open source social engine (http://elgg.org/) (Fig. 1).
Fig. 1 Learning Object Social Network

IV. MAP AS A REPRESENTATION TOOL FOR REPURPOSED EDUCATIONAL OBJECTS AND CURRICULUM DIFFERENTIAL INDICATOR
The repurposing history of a learning object can reveal the educational level of the institutions of the users who share, use and repurpose it. To this extent, the differences between curricula, and their gaps or completeness, can easily be figured out, so that guidelines for a common educational curriculum can be correctly adjusted to apply to all EU institutions. A learning object repurposed "for different educational levels", for example from the undergraduate level in institution A to the postgraduate level in institution B, and accompanied by many others within the same repurposing category, can be a serious indication that the postgraduate curriculum of institution B may have the same educational and knowledge level as the undergraduate curriculum of institution A.

It should be noted that the representation of this repurposing history is only an indication of the level of the educational curriculum and should be combined with other indications and proofs in order to come to a certain conclusion. For example, if the repurposing history reveals that the majority of these specific learning objects were also repurposed in terms of "different culture", the indication of a different educational level between the institutions is strengthened. Assuming that this example involves institutions in two different cities or countries, the indication can be valid for those two cities/countries.

In order to visualize this representation we created an automatically annotated map on which the repurposing history of objects can be viewed. In terms of map representation, there is a two-dimensional framework that controls the appearance of learning objects on the map. The first dimension considers whether all the repurposed objects appear on the
graph, or only the repurposing history of one object appears. The second gives the user the opportunity to select a type of repurposing. As an example, the end-user can select to see all learning objects that have been repurposed in terms of language and different disciplines or professions, or a single repurposing path of an object repurposed for different pedagogical approaches. Fig. 2 depicts the map.
Fig. 2 Map as a mashup tool that represents the different repurposing types across institutions.
Map representation was implemented using the Google Maps API (http://code.google.com/apis/maps/) and the Elgg social network API (www.elgg.org).
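To illustrate how the two-dimensional selection framework described above could feed such a map, the sketch below filters recorded repurposing events and emits the marker data a Google Maps front end would plot. The event records and field names are hypothetical illustrations, not taken from the actual mEducator implementation.

# Hypothetical server-side sketch: select repurposing events and emit marker
# data for a Google Maps front end. Record fields are illustrative only.
import json

events = [  # (object id, repurposing type, institution, latitude, longitude)
    ("lo-17", "language",             "Aristotle University",  40.63, 22.95),
    ("lo-17", "pedagogical approach", "Democritus University", 41.14, 24.89),
    ("lo-42", "discipline",           "University of Cyprus",  35.16, 33.38),
]

def select_markers(events, scope_object=None, repurposing_types=None):
    """Dimension 1: all objects, or a single object's history (scope_object).
    Dimension 2: restrict to selected repurposing types."""
    out = []
    for obj_id, rtype, institution, lat, lng in events:
        if scope_object is not None and obj_id != scope_object:
            continue
        if repurposing_types is not None and rtype not in repurposing_types:
            continue
        out.append({"lat": lat, "lng": lng,
                    "title": f"{obj_id}: {rtype} @ {institution}"})
    return out

# e.g. the single repurposing path of one object:
print(json.dumps(select_markers(events, scope_object="lo-17"), indent=2))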
V. CONCLUSIONS
We proposed a mashup tool within a social network of learning objects that, “at the click of a button”, provides information about the sharing and repurposing of educational material between institutions in the same or different countries. We pointed out that the repurposing of learning objects across institutions relates to the level of knowledge they provide. Our geotagging tool was built to represent repurposed learning objects across different cities and countries. However, some considerations should be taken into account when inferring different knowledge levels across institutions. The indicators are only theoretical and should be treated as a starting point, or as additional indicators, within a cross-institutional curriculum analysis. Nevertheless, a representation of this kind could be a useful tool in the hands of curriculum designers and educational guideline creators. It could improve collaboration between European countries and help the recognition of professional qualifications not only across Europe but all over the world. Biomedical Engineering and Medical Informatics education may vary according to different needs and different national health care systems. In spite of this variability, common knowledge and educational curricula can be identified, enhancing education and research across European and international institutions.
ACKNOWLEDGMENT
This work is funded in part under the project “mEducator: Multi-type Content Repurposing and Sharing in Medical Education” (www.meducator.net), supported by the eContentplus 2008 program, Information Society and Media Directorate-General, European Commission (ECP 2008 EDU 418006).
REFERENCES
1. The Bologna Declaration of 19 June 1999, www.bologna-berlin2003.de/pdf/bologna_declaration.pdf
2. Nagel J.H. (2008) Bioengineering and Biotechnology: A European Perspective. In: Career Development in Bioengineering and Biotechnology, pp 1-12
3. International Federation for Medical and Biological Engineering (2005) European Protocol for the Training of Clinical Engineers, http://www.biomedea.org/Documents/European%20CE%20Protocol%20Stuttgart.pdf
4. International Medical Informatics Association, Working Group 1: Health and Medical Informatics Education (2000) Recommendations of the International Medical Informatics Association (IMIA) on Education in Health and Medical Informatics. Methods of Information in Medicine 39:267-277
5. Mantas J., Ammenwerth E., Demiris G., et al., and International Medical Informatics Association, Working Group 1: Health and Medical Informatics Education (2010) Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics - First Revision. Methods of Information in Medicine 49
6. Lorenzi N.M. (2007) Towards IMIA 2015 - the IMIA strategic plan. Yearb Med Inform, pp 1-7
7. Kaldoudi E., Bamidis P.D., Pattichis C. (2009) Multi-type content repurposing and sharing in medical education. In: Proc INTED2009: International Technology, Education and Development Conference, pp 5129-5139
8. Bamidis P.D., Kaldoudi E., Pattichis C. (2009) mEducator: A Best Practice Network for Re-purposing and Sharing Medical Educational Multi-Type Content. In: Proc PRO-VE'09: 10th IFIP Working Conference on Virtual Enterprises, pp 769-776
9. Kaldoudi E., Dovrolis N., Konstantinidis S., Bamidis P. (2009) Social Networking for Learning Object Repurposing in Medical Education. The Journal on Information Technology in Healthcare 7(4):233-243
10. Dovrolis N., Konstantinidis S.Th., Bamidis P.D., Kaldoudi E. (2009) Depicting Educational Content Re-purposing Context and Inheritance. In: Proc ITAB 2009: 9th International Conference on Information Technology and Applications in Biomedicine, Larnaca, Cyprus, November 5-7, 2009
11. Freifeld C.C., Mandl K.D., Reis B.Y., Brownstein J.S. (2008) HealthMap: Global Infectious Disease Monitoring through Automated Classification and Visualization of Internet Media Reports. Journal of the American Medical Informatics Association 15:150-157
Author: Stathis Th. Konstantinidis
Institute: School of Medicine, Aristotle University of Thessaloniki
Street: P.O. Box 323, 54124, Thessaloniki
Country: Greece
Email: [email protected]
MORMED: Towards a multilingual social networking platform facilitating Medicine 2.0
Eleni Kargioti1, Dimitrios Kourtesis1, Dimitris Bibikas1, Iraklis Paraskakis1 and Ulrich Boes2
1 South East European Research Centre (SEERC), The University of Sheffield & CITY College, Thessaloniki, Greece
2 URSIT Ltd. Services in Information Technology, Sofia, Bulgaria
Abstract— The broad adoption of Web 2.0 tools has signalled a new era of “Medicine 2.0” in the field of medical informatics. The support for collaboration within online communities and the sharing of information in social networks offer the opportunity for new communication channels among patients, medical experts, and researchers. This paper introduces MORMED, a novel multilingual social networking and content management platform that exemplifies the Medicine 2.0 paradigm and aims to achieve knowledge commonality by promoting sociality, while also transcending language barriers through automated translation. The MORMED platform will be piloted in a community interested in the treatment of rare diseases (Lupus or Antiphospholipid Syndrome). Keywords— Medicine 2.0, Social Networking, Multilingual Web, Information Management, Rare Diseases.
I. INTRODUCTION
In the early 2000s, eHealth was seen as “an emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies” [1]. According to Eysenbach, eHealth, as a term, encompasses “not only a technical development, but also a state-of-mind, a way of thinking, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology” [1]. By the end of the decade, the broad adoption of Web 2.0 technologies [2,3] led to the emergence of a new term for relevant applications, services and tools: “Medicine 2.0”. Central to Medicine 2.0 is the trend of sharing health-related experiences and data with a “crowd” of patients and professionals, with the aim of harnessing collective wisdom for the benefit of building knowledge. Medicine 2.0 is defined as comprising “web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups” [4]. Web 2.0 applications seem to have contributed towards bringing geographically dispersed groups of people with common interests together. Even in such circumstances, however, the requirement to communicate in a common language is restrictive: people not conversant in that common language can find themselves excluded. This issue may become a burden to the effective dissemination of valuable knowledge among dispersed medical communities interested in topics where information is scarce, such as rare diseases. MORMED attempts to address this problem by developing a multilingual social networking and content management platform. MORMED (Multilingual Organic Information Management in the Medical Domain) is a research project which aspires to address all the above-mentioned dimensions of Medicine 2.0. The platform under development aims to promote online collaboration and the diffusion of knowledge within online communities, while transcending geographical and language barriers. This is achieved by combining a semantically-enhanced social networking and content management platform [5,6] with technologies enabling machine translation and post-editing by human experts, making content available in multiple languages. MORMED will be piloted in a community interested in Lupus or Antiphospholipid Syndrome (Hughes Syndrome), involving researchers, medical doctors, general practitioners, patients and patient support groups. The rest of the paper is organised as follows. Section II briefly discusses how MORMED is positioned with respect to other Web 2.0 applications in the health domain. Section III presents an overview of the envisaged MORMED platform, starting with the motivation behind MORMED, followed by its key features, the targeted user groups, and the proposed architecture. Finally, conclusions and directions for future research during the project are presented in Section IV.
II. WEB 2.0 IN HEALTH
Numerous applications offering health-related services are available nowadays. The nature of these applications varies, from informative websites to elaborate web applications employing sophisticated tools and algorithms, each seeking to offer valuable medical information. [7,8] are examples of informative websites providing timely and
credible content. There also exist discovery engines for support groups [9] and medical professionals [10,11]. Professional medical blogs, wikis [12,13] and social bookmarking sites [14] are excellent cases of services aggregating valuable information resources for professionals and laymen. Finally, Personal Health Application Platforms, such as [15,16], have emerged, which store detailed medical records and share them with professionals and other patients. Moreover, the potential of Web 2.0 tools in medicine is an active research area of medical informatics. Social networking [17], online collaboration [18] and semantic technologies [19] have been employed to address the challenges of medical education and medical content sharing. MORMED approaches the domain from a different perspective and shifts the focus towards sharing experiences, building upon sociality and leveraging semantically-enhanced Web 2.0 tools that support multilingualism. Thus MORMED aims to achieve “just-in-time” diffusion of collective knowledge and to act as an apomediary for patients, professionals and researchers of a specific area, while overcoming the language barrier.
III. THE MORMED SOLUTION
A. Motivation
Even though information on topics of general interest is offered online in abundance, information for restricted and highly focused communities, such as communities interested in the treatment of rare diseases, is not widely available and easy to find. An example of such a community is people interested in the treatment of Lupus, or Antiphospholipid Syndrome. Diseases like Lupus appear all over the world, but are very rare, making relevant information resources highly dispersed and difficult to locate. The stakeholders who suffer from this lack of information are not only the patients, but also general practitioners and experts who could benefit from information about clinical trials, research theories or results, patients’ experiences, etc. The scarcity of information due to the rareness of the particular disease is aggravated by the fact that stakeholders come from different parts of the world and typically communicate in no language other than their native tongue. A certain educational level is required not only to communicate in a foreign language, but also to understand information that makes use of scientific terminology. Communication between the various stakeholders, e.g. between researchers and general practitioners (GPs), or between GPs and patients, is currently cumbersome, or even non-existent, due to the two-dimensional language barrier of national languages and specialised terminology.
Rare diseases such as Lupus could be treated more effectively if experiences and relevant information could be shared rapidly across borders and stakeholder communities. Scientific research could be promoted if experiences and information were accessible in a way that nurtured collaboration and knowledge commonality. All of these requirements give rise to the need for a novel means to support the contribution, effective dissemination and retrieval of content, as well as to promote informal social networks which exchange ideas and experiences, overcoming the language barriers.
B. The Proposed Solution
Contemporary Web 2.0 tools allow for the production, instant sharing, and consumption of content in an intuitive way which lowers the barrier of technological complexity and facilitates the rapid creation of informal communities centred on common themes. People concerned about, suffering from, or working on rare diseases can benefit from these technological developments in order to access valuable information, publish their experiences, and socialise. The intuitiveness and usability of Web 2.0 applications, further enhanced with semantic technologies for “intelligent” information processing, and combined with automated translation services, can open participation to individuals regardless of educational, professional or language background. The use of recommendation techniques can further promote effective discovery of information, since relevant content is recommended to people according to their profiles. Information resources and documents from parallel multilingual web sites are bookmarked and thus reused to complement the published content. Within this context, people will be motivated to publish their experiences and comment on the experiences of others, knowing that the language barrier is transcended and that information is made available to the widest audience possible. MORMED will work towards this direction by developing a platform that is tuned to support communities interested in the treatment of rare diseases. The platform will support efficient online collaboration for content publishing and social networking in a multilingual environment. Semantically enriched Web 2.0 tools are combined with comprehensive automated translation and text summarisation tools to form the MORMED platform.
a) Key features: The MORMED platform aims to promote the creation of online communities and groups with similar interests, where people can contribute any information in an informal way. These groups can be administered and moderated by one or more individuals. The group participants will have
permissions and rights assigned by administrators. Visibility of content and data privacy can also be managed in a group- or individual-based manner, ensuring that no information is exposed to unauthorized access. The users of MORMED will be able to post their experiences, concerns, research results or news items in a straightforward way, thus triggering discussions and comments and promoting sociality. User-friendly tools will support the collaborative authoring of texts. All published content can be annotated with descriptive tags. Suitable tags will be suggested by analysing the content in a semantically-enhanced way. These tags will contribute to building a community-enriched taxonomy of terms describing the specific domain. Thus, users will benefit from targeted search results, grouped views of the published content and many more features. The MORMED platform will employ mechanisms that not only allow users to retrieve the desired information, but also have information relevant to their interests “pushed” automatically (notifications, RSS feeds). Moreover, user profiles will be analysed (favourite content, tags used, etc.) and conclusions will be drawn in order to enhance the quality of recommendations. To overcome the language barrier, MORMED will offer content in multiple languages. Efficient translation tools offering proactive or on-demand translation of the content will be integrated into the platform. In cases where high translation quality is demanded, a human expert will be engaged in the translation process. Platform users will be able to rate the quality of translations and thus trigger post-editing of content items with low ratings.
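A minimal sketch of how such rating-triggered post-editing could be wired is given below. The 1-5 scale, the threshold and the item identifiers are illustrative assumptions, not details of the MORMED platform.

# Illustrative sketch (not MORMED code): translations whose average user
# rating falls below a threshold are queued for human post-editing.
from collections import defaultdict

RATING_THRESHOLD = 3.0          # assumed 1-5 scale; the threshold is a free parameter

ratings: dict[str, list[int]] = defaultdict(list)
post_edit_queue: list[str] = []

def rate_translation(item_id: str, stars: int) -> None:
    ratings[item_id].append(stars)
    avg = sum(ratings[item_id]) / len(ratings[item_id])
    if avg < RATING_THRESHOLD and item_id not in post_edit_queue:
        post_edit_queue.append(item_id)   # hand over to a human post-editor

rate_translation("post-42/el", 2)
rate_translation("post-42/el", 3)
print(post_edit_queue)                    # ['post-42/el']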
b) Use: MORMED will serve as a platform for the dissemination and exchange of multilingual information between both experts and laypeople. Their primary goals will be to share their own experiences, to seek people with similar interests and to discover relevant content in an intuitive way. The users of MORMED are expected to be:
1. Researchers collaborating in research projects. They could reach and communicate with GPs and patients, and post recruitment information for clinical trials, making it available to GPs dealing with patients.
2. Medical doctors and GPs requiring information on rare diseases and accessing international information resources.
3. Patients, also represented by patient support groups such as family and friends, requiring and exchanging information at an international level. The experiences patients contribute are used as case studies to further fuel research activities and improve common practices.
As shown in Fig. 1, the MORMED platform will facilitate communication and information flow in an intra-user-type (horizontal) and inter-user-type (vertical) manner, as far as researchers, medical doctors, GPs, patients and other interested parties are concerned.
Fig. 1: Information flow between various users
c) Architecture: Technically, the project combines a semantic Web 2.0 knowledge management platform, the result of the OrganiK project [6], with advanced automatic translation tools [20] provided by the Language Technology Centre, in order to support information exchange and bridge the language gap. The system architecture approach aims at providing an open platform, based on open standards, which is extensible and scalable, and applicable to different domains.
Fig. 2: MORMED envisaged Architecture
The envisaged architecture of the MORMED platform is illustrated in Fig. 2. The system to be developed will consist
of two main components: the OrganiK platform and LTC Communicator. The OrganiK platform is a next-generation knowledge management system that manages and promotes social structures. OrganiK combines social software applications and semantic technology. This includes social software applications (wiki, blog and microblogging workspaces, collaborative bookmarking, a search engine) enhanced with semantic information processing tools (semantic search, a recommender system, content and user behaviour analysers, a collaborative filtering engine, etc.). LTC Communicator will be the multilingual production system operating in the background. It is a workflow system incorporating translation tools and machine translation, further enhanced by open source text summarization tools.
IV. CONCLUSIONS
In this paper we have introduced MORMED as a platform that addresses the need for social networking, participation, apomediation, collaboration and openness in the spirit of Medicine 2.0. MORMED aims to provide a social platform and content management system that will promote the creation of informal social networks and will allow the effortless contribution of content, as well as its effective dissemination and retrieval. Through MORMED’s support for multilingualism, it is expected that dispersed user groups with similar interests will be brought closer together, thus transcending geographical, educational and language barriers. The platform will be developed as a combination of a semantically-enhanced Web 2.0 knowledge management platform, developed within the OrganiK research project, and language technology enabling machine translation and human post-editing, developed by the Language Technology Centre. It will be customised to address the requirements of a community interested in Lupus or Antiphospholipid Syndrome, which will provide the basis for evaluation. The research challenges in MORMED lie in employing efficient techniques for semi-automated translation of the content and the community-enriched taxonomy, as well as in exploring options for effective information retrieval of multilingual content and targeted information push methods. Moreover, various recommendation algorithms and personalisation techniques will also be evaluated. Finally, the way the users of the platform choose to organise themselves, collaborate and exchange information will be an interesting aspect of the project from a social point of view.
ACKNOWLEDGMENT
Research project MORMED (Multilingual Organic Information Management in the Medical Domain) is funded by the European Commission’s 7th Framework Programme, CIP-ICT PSP, under Grant Agreement 250534 (Multilingual Web).
REFERENCES
1. Eysenbach G. (2001) What is e-health. J Med Internet Res 3(2):e20
2. Giustini D. (2006) How Web 2.0 is changing medicine [editorial]. BMJ 333:1283-1284
3. Kamel Boulos M.N. (2007) The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education. Health Info Libr J 24:2-23
4. Eysenbach G. (2008) Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness. J Med Internet Res 10(3):e22
5. Bibikas D., Kourtesis D., Paraskakis I., Bernardi A., Sauermann L., Apostolou D., Mentzas G., Vasconcelos A.C. (2008) A Socio-technical Approach to Knowledge Management in the Era of Enterprise 2.0: the Case of OrganiK. Scalable Computing: Practice and Experience 9(4):315-327
6. OrganiK Research Project at http://www.organik-project.eu
7. WebMD at http://www.webmd.com
8. Organized Wisdom at http://organizedwisdom.com
9. Daily Strength at http://dailystrength.org
10. HealthCareMagic at http://healthcaremagic.com
11. Vitals.com at http://www.vitals.com
12. Barsky E. (2006) Introducing Web 2.0: weblogs and podcasting for health librarians. J Can Health Libr Assoc 27:33-34
13. FluWiki at http://www.fluwikie.com
14. Connotea at http://www.connotea.org
15. Google Health at http://www.google.com/health
16. Frost J.H., Massagli M.P. (2008) Social Uses of Personal Health Information Within PatientsLikeMe, an Online Patient Community: What Can Happen When Patients Have Access to One Another’s Data. J Med Internet Res 10(3):e15
17. Kaldoudi E., Dovrolis N., Konstantinidis S., Bamidis P. (2009) Social Networking for Learning Object Repurposing in Medical Education. The Journal on Information Technology in Healthcare 7(4):233-243
18. Bamidis P., Kaldoudi E., Pattichis C. (2009) mEducator: A Best Practice Network for Repurposing and Sharing Medical Educational Multi-Type Content. In: Proc PRO-VE 2009, Springer, IFIP Advances in Information and Communication Technology 307, pp 769-776
19. Bratsas C., Kapsas G., Konstantinidis S., Koutsouridis G., Bamidis P. (2009) A Semantic Wiki within Moodle for Greek Medical Education. In: Proc CBMS 2009: 22nd IEEE International Symposium on Computer-Based Medical Systems, Special Track: Technology Enhanced Learning in Medical Education, Albuquerque, New Mexico, USA
20. Nozay P., Rinsche A. (2003) LTC Communicator. Translating and the Computer, at mt-archive.info
Author: Eleni Kargioti
Institute: South East European Research Centre (SEERC), The University of Sheffield and CITY College
Street: 24 Proxenou Koromila Str
City: Thessaloniki
Country: Greece
Email: [email protected]
Design and Development of a Pilot on Line Electronic OSCE Station for Use in Medical Education
E.L. Dafli, P.D. Bamidis, C. Pappas, and N. Dombros
Aristotle University of Thessaloniki/Medical School, Lab of Medical Informatics, Thessaloniki, Greece
Abstract— The aim of this work is to present the technical issues around the design and implementation of an OSCE (Objective Structured Clinical Examination) station in electronic format, to be used in the field of medical education. The presented e-OSCE examination – through the interactive web pages developed – is intended to assess applied knowledge, clinical reasoning and professional attitudes in the context of a basic clinical skill: intravenous cannulation. Candidates are rated on their medical knowledge of the physical and psychological issues involved in the questions. They are also rated on their ability to manage the issues raised in this case and to perform the procedure appropriately and competently with regard for patient safety and comfort. Two programs, complementary to each other, were used for the design of the electronic OSCE: VUE, for the design of the logical path, and OpenLabyrinth, as an activity modeling system. They allowed the schematization and creation of the online interactive educational activity. OpenLabyrinth had to be installed on the local server and configured. Moreover, the application is compatible with the latest technological standards proposed by MedBiquitous, which allows the interoperability, accessibility and reusability of the educational content.
Keywords— e-OSCE, medical education, clinical skills, simulation.
I. INTRODUCTION
Clinical competency is poorly measured by knowledge-based written examinations. In particular, assessment of clinical skills cannot be achieved through traditional methods of evaluation, such as oral or written exams [1], [2]. An alternative proposal for a student evaluation method is the development of OSCE stations. OSCE stands for Objective Structured Clinical Examination and is used to test clinical skill performance and competence [3]. It usually involves a circuit of short (5-15 minute) stations. Each station has a different examiner and simulated patient (actor). The stations can be standardized this way, and complex procedures can be assessed without endangering real patients’ health [4]. However, the design and implementation of the several OSCE stations that are essential for a circuit of this examination is effort- and cost-consuming, since there is need for many actors – as simulated patients – and examiners [5]. An effective response to this issue could be the development of OSCE stations in electronic format (e-OSCEs) that will be available online to medical students. In this paper the technical part of the design and implementation of a pilot e-OSCE station for the medical procedure of intravenous cannulation is presented. For this work, the use of two programs, complementary to each other in the area of education, was essential: VUE (Visual Understanding Environment) [6] and OpenLabyrinth [7]. OpenLabyrinth seemed to be a reasonable choice, as its code is open source and consequently freely available to the educational community. Thus, there was initially the need to install it on the local server as a modeling system that could be used for the implementation of the educational activity.
A. VUE for the Design of the OSCE Station
VUE is a flexible tool for managing and integrating digital resources in support of teaching, learning and research. It provides a flexible visual environment for structuring, presenting, and sharing digital information [8]. This tool provided an intuitive way to develop and visualize a decision tree representing an OSCE case, which can then be imported straight into OpenLabyrinth. Using VUE, the process of selecting and organizing data transformed information into meaningful knowledge. These were the reasons for the selection of this open source software for the design of the main core of the maze of the presented e-OSCE. The scope of the use of VUE was the design of the visualized decision tree, as far as choices and consequences related to a specific procedure (intravenous cannulation) are concerned. The VUE map involved the nodes that represent the choices – wrong or right – and the links between the nodes that represent the way choices are connected to certain consequences. In this way the whole procedure of the electronic examination is visualized and can then be uploaded to OpenLabyrinth, which was utilized for building the final educational activity.
B. OpenLabyrinth for the Interactive Educational Activity
OpenLabyrinth is an open-source online activity modeling system that allows the design and implementation of interactive ‘game-informed’ educational activities. It is licensed under the Academic Free License (AFL) v. 3.0 [9]. OpenLabyrinth is a web application written in Active Server Pages (ASP) code using VBScript. To run, it requires Internet Information Services (IIS), part of Microsoft’s server technologies [10]. OpenLabyrinth also requires a database, joined to the code using an ODBC connector; the most suitable database is MS SQL Server, which was used in our work [11]. Thereupon the installation of OpenLabyrinth on our server was an initial requisite. For that reason, we had to install MSXML, Internet Information Server, Microsoft .NET Framework v.2.0, SQL Express 2005 with advanced services, and XZip. The OpenLabyrinth package was freely available online and was downloaded from the official website. Its components were unzipped, and those from the ‘code’ directory were copied to a new directory in the ‘wwwroot’ directory. The next step was the setting up of logical paths for uploads, imports and exports. That was performed in the OpenLabyrinth directory UTILITIES.ASP. Besides, special permissions had to be set for uploads to the ‘Vue’, ‘export’, ‘import’ and ‘files’ folders: we had to grant all permissions (in advanced settings) except ‘full control’, ‘change permissions’ and ‘take ownership’. After putting the code part of OpenLabyrinth in place, the next step was setting up the database. SQL Server 2005 Express was installed and the appropriate database for OpenLabyrinth was created. Permissions were set for IUSR for all stored procedures, views and tables. After completing the previous steps, OpenLabyrinth was ready to be used for editing our electronic OSCE station.
C. Creating and Editing the Electronic OSCE Station
VUE was used, as mentioned before, for the creation of the design of the labyrinth – the basic unit of organization in the e-OSCE. It was utilized for the schematization of boxes that represented nodes – ideas – and the links between them. Although VUE supports many other features, only the boxes, the text in them, and the links between the nodes could be imported into the system. Everything else was ignored by the parser and had to be added manually (media material, for example). The features of the nodes, such as color and size, were also ignored during import, but they helped to visualize the path of choices and consequences while designing the case. The arrows between boxes had to point in the right direction, as they were parsed in the upload.
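The import step can be pictured with the following sketch, which extracts node texts and directed links from a simplified, made-up XML layout. The actual VUE file schema is richer, so the element and attribute names here (node, link, id, from, to) are assumptions for illustration only.

# Illustrative only: the real VUE file is a richer XML document, so element
# and attribute names here (<node>, <link>, id, from, to) are assumptions.
import xml.etree.ElementTree as ET

SAMPLE = """
<map>
  <node id="n1">Select cannula size</node>
  <node id="n2">Apply tourniquet</node>
  <link from="n1" to="n2"/>
</map>
"""

root = ET.fromstring(SAMPLE)
# As in the paper: only box text and links survive the import; styling is ignored.
nodes = {n.get("id"): {"title": n.text.strip(), "content": n.text.strip()}
         for n in root.iter("node")}
links = [(l.get("from"), l.get("to")) for l in root.iter("link")]  # arrow direction matters

print(nodes)
print(links)   # [('n1', 'n2')]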
After the import of the VUE file, the text in each box was taken as the new node’s title and content. The new labyrinth’s name had to be specified, and then new properties had to be set using the global editor. The following were the principal editing functions that had to be performed: a) the summation and editing of the existing nodes, b) the creation of sections for grouping nodes together, c) the creation of new links and the editing of previously existing ones, d) the creation of feedback rules, e) the uploading of files, f) the creation of a counter for scores, and g) the editing of global properties.
Fig. 1 An instance of a part of the VUE file for the e-OSCE Station
The last function refers to the properties that structure the whole labyrinth of the electronic OSCE. These included the title, the author, the keywords and the description. They were free-text entries and were used to set metadata properties for this particular OSCE station. Besides, the type of labyrinth was set to “game” so that scores could be used. A real timer was then set to be used, and the time delta had to be set in seconds. The next step was the modification of the nodes that represent the main ideas in the educational scenario. The node editor had to be launched, since it has an HTML editor available for use. The node content, i.e. the elements shown to the user, included narrative, images, videos and instructions. It should be noted that to add a media resource, such as an image or video file, it first had to be uploaded to the system. Then wiki-style references had to be inserted into the nodes’ content boxes where the files were to appear. The last things that had to be adjusted were the root node – the starting point of the OSCE – and the score function for the examination. The scores were comprised of a label and a dynamic integer value, which can change dynamically as the user works through the labyrinth. A counter needed to be created globally for the map.
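The following is a minimal sketch of such a counter: a label plus an integer value adjusted by per-node deltas. The node names and score values are invented for illustration and are not taken from the actual station.

# Minimal sketch of a labyrinth score counter: a label plus an integer value
# that changes as the trainee visits nodes. Node deltas are invented examples.
class Counter:
    def __init__(self, label: str, value: int = 0):
        self.label, self.value = label, value

    def apply(self, delta: int) -> None:
        self.value += delta

score = Counter("Cannulation score")
node_deltas = {"clean site": +5, "skip gloves": -10, "insert cannula": +10}

for choice in ["clean site", "insert cannula"]:
    score.apply(node_deltas[choice])

print(score.label, score.value)   # Cannulation score 15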
II. RESULTS
A. Running the e-OSCE
The user/trainee starts and works through the labyrinth created for the online OSCE examination using an interface where a number of different elements are displayed. The educational scenario of this application is based on the sequential steps of performing the medical procedure of intravenous cannulation. It consists of 57 nodes, representing pages of the educational scenario, and 87 connections, corresponding to the options available to the user at each point in the labyrinth. It comprises an e-station of 10 minutes. Clear instructions are given about the procedure and the expected requirements of that particular station before the student “enters” the examination room, as with “real” OSCEs. Every node has a textual message and is accompanied by media material – video, images, audio. Linked options offer the way to traverse the OSCE. Moreover, as the OSCE concerns the evaluation of students’ medical knowledge and clinical skills, scores are available in the labyrinth that go up and down depending on the choices and decisions the trainee makes. Candidates are rated on their medical knowledge of the physical and psychological issues involved in the questions. They are also rated on their ability to manage the issues raised in this case and their ability to perform the procedure appropriately and competently with regard for patient safety and comfort.
Fig. 2 A screenshot of the interface of the OSCE station
As with every educational activity, one of the most useful functions of this online OSCE application is the extensive feedback that the student can be given, based on his choices while playing the labyrinth. This includes a report of the counter values, which choices were made, and whether the nodes were marked as “must visit” or “must avoid”. This is possible because every option selection (or ‘click’ – along with the current score, timer and counter values) is recorded as
the user works through a labyrinth. A user report is available at the end of the process, presenting which path was taken and how much time elapsed between entering and leaving each node. A histogram of these data is also offered to the user. When the user clicks the report link, he gets a number of elements such as: a) metadata, b) the start time and the time taken to complete, c) the total number and type of nodes visited, d) the histogram of time spent per node, and e) the graph of counter values through the session.
Fig. 3 A screenshot of the histogram of time spent per node in the OSCE
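The data behind such a report can be pictured with the following sketch, which derives the path taken and the time spent per node (the data behind the histogram of Fig. 3) from a recorded click log. The log format shown is an assumption for illustration.

# Hedged sketch of the end-of-session report: from a recorded click log
# (node id, timestamp in seconds), derive time spent per node and the path.
clicks = [("start", 0.0), ("clean site", 12.5), ("insert cannula", 47.0), ("end", 55.5)]

path = [node for node, _ in clicks]
time_per_node = {clicks[i][0]: clicks[i + 1][1] - clicks[i][1]
                 for i in range(len(clicks) - 1)}

print(path)            # the route taken through the labyrinth
for node, secs in time_per_node.items():   # crude text histogram of time per node
    print(f"{node:15s} {'#' * int(secs // 5)}")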
III. CONCLUSIONS
This pilot electronic OSCE station potentially offers medical students and trainees the opportunity to practice decision making in a risk-free, safe and protected environment and to be assessed on their performance. They can thus be trained in basic clinical skills away from the clinical area and free from stress, from the first years of medical studies and before exposure to real patients, and they can be evaluated on their progress. Organizing and setting up a real OSCE station is time-, money- and effort-consuming. Besides, there is a limitation on the number of attendees at each examination, since only one student at a time can attend each OSCE station, and this restricts the wide use of the OSCE examination in terms of attendee numbers. A reasonable response to these issues could be the use of OSCE stations in electronic format, available through web services. Moreover, in terms of cost and time effectiveness, extra savings could be achieved through collaboration and content sharing among institutions in medical education. That constitutes the core research area of large European projects like “mEducator” [12],[13].
In our experience, setting up an OSCE station in electronic format is cost- and effort-effective and can provide a realistic simulation of a real OSCE station in the field of clinical skills, and especially of decision making and professional attitudes. The aim of this effort was not to substitute for clinical practice or real OSCE stations, but to extend them through new educational experiences, to be used not only in examinations but in students’ training as well. The next step in this pilot study is the use, and thereby the evaluation, of the e-OSCE system by the users – trainees and examiners – as a method of clinical skills assessment [14].
ACKNOWLEDGMENT
This work was partially funded by the mEducator project, supported by the eContentplus 2008 program, Information Society and Media Directorate-General, European Commission (ECP 2008 EDU 418006).
REFERENCES
1. Eisenberg J.M. (1989) Evaluating internists’ clinical competence. JIM 4(2):139-143, DOI 10.1007/BF02602356
2. Greene P.S., Smothers V., Vandemark T. (2009) Supporting continuing pediatric education and assessment. Health Informatics, Springer New York, pp 197-202, DOI 10.1007/978-0-387-76446-7_14
3. Smyrnakis E., Faitatzidou A., Benos A., Dombros N. (2008) Implementation of the objective structured clinical examination (OSCE) in the assessment of medical students. Archives of Hellenic Medicine 25(4):509-519
4. Harden R.M. (1988) What is an OSCE? Med Teacher 10(1):19-22
5. Cusimano M.D., Cohen R., Tucker W. et al. (1994) A comparative analysis of the costs of administration of an OSCE (Objective Structured Clinical Examination). Acad Med 69(7):519-3
6. Kumar A., Saigal R. (2005) Visual Understanding Environment. In: Proc 5th ACM/IEEE-CS Joint Conference on Digital Libraries, Denver, CO, USA, pp 413-413, ISBN 1-58113-876-8
7. OpenLabyrinth at http://labyrinth.mvm.ed.ac.uk/
8. VUE at http://vue.tufts.edu/
9. OpenLabyrinth User Guide at http://142.51.75.17/documents/userguide.pdf
10. IIS at http://www.iis.net/
11. MS SQL at http://www.microsoft.com/sqlserver/2008/en/us/
12. Bamidis P.D., Kaldoudi E., Pattichis C. (2009) mEducator: A Best Practice Network for Repurposing and Sharing Medical Educational Multi-type Content. In: Proc PRO-VE 2009, Springer, pp 769-776
13. mEducator: Multi-type Content Repurposing and Sharing in Medical Education. Available at http://www.meducator.net/
14. Smothers V., Clarke M., Van Dyck C. (2006) MedBiquitous and journal publishers: scholarly content and online medical communities. Learned Publishing 19(2):125-132, DOI 10.1087/095315106776387075
Author: E.L. Dafli
Institute: Aristotle University of Thessaloniki
Street: Campus
City: Thessaloniki
Country: Greece
Email: [email protected]
Review of the Biomedical Engineering Education Programs in Europe within the Framework of TEMPUS IV, CRH-BME Project
Z. Bliznakov and N. Pallikarakis
BIT Unit, Department of Medical Physics, School of Health Sciences, University of Patras, 26500, Rio, Patras, Greece
Abstract— Biomedical Engineers should be prepared to adapt to existing or forecasted needs. Today, education in Biomedical Engineering (BME) in Europe is mainly influenced by: a) the European policy on higher education, b) research & development (R&D) programs, and c) market demands. There is strong pressure on education, training and lifelong learning programs to continuously adapt their objectives in order to face new requirements and challenges. The main objective of the TEMPUS IV CRH-BME project is to update existing curricula in the field of BME in order to meet recent and future developments in the area, address new emerging interdisciplinary domains that appear as a result of R&D progress, and respond to BME job market demands. The first step is an extensive review of the curricula in the BME education field. The work is carried out through the collection of information from the project partners by means of questionnaires. The present study covered 46 countries in Europe and identified 40 countries with Biomedical Engineering programs. Approximately 150 universities across Europe offer in total 297 BME programs, distributed as follows: 77 undergraduate programs offering a BSc degree, and 220 postgraduate programs, of which 161 offer an MSc degree and 59 a PhD degree. The results of this study reveal that Biomedical Engineering programs have experienced rapid growth after the year 2000, and especially during the last five years. This leads to an increased number of Biomedical Engineers available today on the market and is expected to play an important role in meeting the existing and forecasted needs in the BME field.
Keywords— Biomedical Engineering, Education programs review, TEMPUS.
I. INTRODUCTION
The impressive progress in the creation of medical knowledge, combined with progress in other related scientific domains and fields of technology, has provided during the last four decades an excellent ground for the advancement of the Biomedical Engineering (BME) sector. Successful biomedical research, resulting in the development of new diagnostic and therapeutic methods, techniques and equipment, has led to a radical change in the way health care is delivered today. Given this dynamic situation, Biomedical Engineers should nowadays be prepared to adapt to existing
or forecasted needs, in the form of knowledge, skills and attitudes. Those needs address the demands of the work environment in the broader health-care-related sector. Therefore, there is strong pressure on education, training and lifelong learning courses to continuously adapt their objectives and programs in order to face new requirements and challenges.
A. Biomedical Engineering Education in Europe
Today, education in BME in Europe is mainly influenced by: a) the European policy on higher education, b) research & development (R&D) programs, and c) market demands. The political decision of the EU to create the European Higher Education Area, adopted by 46 European countries to date, aims to lead to comparable degrees, based on two main cycles articulating higher education into undergraduate and graduate studies. The establishment of the European Credit Transfer and Accumulation System (ECTS) has increased the flow of students and teaching staff between universities [1]. The promotion of mutually recognized quality assurance systems has also been considered of primary importance, and accreditation schemes already exist in most EU countries. However, although many countries have officially adopted the principles of higher education reform, they are implementing the necessary changes very slowly. The ECTS is only partially implemented, and in most cases credit units have been allocated to courses and modules of existing programs without estimation of students’ workload or links to the learning outcomes.
B. The Bologna Declaration
The visionary initiative of 29 European countries to commit themselves to the promotion and establishment of a common European Higher Education Area (EHEA) was confirmed with the signing of the Bologna Declaration on 19 June 1999 in Bologna, Italy [2]. The vision, sometimes seen as an unreachable utopia, has through laborious and persistent work been broken down into concrete recommendations, guidelines and action plans adopted by the signatory countries. This transitional process, known as the Bologna
process, involves political decisions and initiatives, and administrative and pedagogical changes, aimed at nourishing the EHEA consciousness without depriving the participating educational centres of their national or local identities. The objectives of the Bologna process are to create a common framework of readable and comparable degrees, to introduce a two-cycle system of undergraduate and postgraduate studies, an ECTS-compatible credit system and quality assurance with comparable criteria and methods, and to promote the free mobility of students, teachers and researchers. The virtual campuses initiative enhances this effort, adding the objectives of increasing virtual mobility as a complement or substitute to physical mobility, integrating this mobility into multilateral curriculum development, increasing high-quality European educational resources, and modernising the European higher education system by integrating information and communication technology into daily education.
II. MATERIALS AND METHODS
A. The TEMPUS CRH-BME Project
The CRH-BME “Curricula Reformation and Harmonisation in the field of Biomedical Engineering” is a Joint Project within the TEMPUS IV program, 95% financed by the Commission of the European Communities [3]. The main objective of the CRH-BME project is to update existing curricula in the field of Biomedical Engineering in order to meet recent and future developments in the area, address new emerging interdisciplinary domains that appear as a result of R&D progress, and respond to BME job market demands. The generic BME programs will assist participating institutions in restructuring their existing programs in full compliance with the Bologna Declaration and the ECTS, especially those that are in the initial stage of their educational system reform. This main objective will be reached by (1) an extensive review of the curricula in the countries of consortium members in the field of BME education, where existing; (2) an investigation of the current and future demands of the medical device industry market; and (3) the preparation of a generic program on graduate and postgraduate education in BME, with core and elective courses. The new programs will focus on present and forecasted needs for competencies and skills of biomedical engineers based on job requirements. Other objectives that will be addressed by the project are:
• Promotion of the development of new study programs in partner countries
• Investigation of the possibilities and support in the establishment of joint degrees
• Provision of a template guidance document for Quality Assurance (QA), to be used for implementation in the field of BME education
• Promotion of international teacher and student exchange
• Creation of links with the medical device industry in Europe
Additionally, an analysis of the relationships between competence, learning outcomes and credits will be performed in order to propose the most efficient ways to reach the goals.
B. Previous Studies
The MELETI Project. Several studies have been published on the post-Bologna developments of education, training and accreditation in the area of Biomedical Engineering. Among them, the MELETI project (Medical Engineering Listed Education & Training Information) was an initiative aiming to establish an electronic documentation point for information on training, education and research activities with a specific focus on the field of Biomedical Engineering at a European level [4]. This web-based service offered information and guidance on education, training and continuous professional development activities in biomedical engineering and its related subspecialties. This study, performed in 2000 by the Institute of Biomedical Technology (INBIT) under an initiative of the International Federation for Medical and Biological Engineering (IFMBE), revealed that 50 universities were delivering programs in the field of BME: 26 undergraduate and 30 postgraduate programs. Only 6 universities were offering more than one program related to the field. 33 institutions were running their program within a national or international inter-university collaboration scheme, of which 15 under the ERASMUS program. 20 universities were applying the ECTS. 29 institutions were applying quality assessment schemes, and 37 reported a follow-up by means of student opinion surveys. 24 universities validated their programs with industry and other employers, and 20 of them used external evaluators.
The BIOMEDEA Project. The objective was to develop and establish consensus on criteria, guidelines and protocols for the harmonization and accreditation of high-quality Medical and Biological Engineering and Science programs, and for the training, certification and continuing education of professionals working in the health care systems, with the goal of ensuring mobility in education and employment as well as the highest standards for patient safety. BIOMEDEA was a mainly European project in which more than 60 universities and other academic institutions participated. The initiative was supported by the International Federation for Medical and Biological Engineering (IFMBE) and the European Alliance for Medical and Biological Engineering [5].
C. The Present CRH-BME Review of the Existing Educational Programs in Europe
The work is carried out through the collection of information from the participants, the preparation of draft reports and documents, and discussion and approval in meetings between the participating institutions. The project working group is responsible for delivering the project outcomes associated with this particular task. The information on the existing Biomedical Engineering programs in the European Union and Partner countries’ institutions is collected, processed, analysed, and presented. For the purposes of the information collection, a draft questionnaire was initially prepared by the Lead Partner, the University of Patras, and distributed to all partners in order to receive their feedback. Taking into account all the partners’ comments, remarks, and suggestions, the questionnaire on the BME programs was thoroughly analysed, reworked and concluded in its final form at the 1st Project General Assembly Meeting. A brief presentation of the BME questionnaire is given below. It consists of 3 main sections. The first section contains general information on all the Biomedical Engineering programs, or any other study programs containing a specialization in BME, in the particular country under review. The second section contains specific information for each BME study program or any other program that contains a specialization in BME. Particularly, the information collected for each study program is:
• Study Program Name, University(ies), Department(s), Coordinator(s), web-site
• Type of education - full-time, part-time, e-learning
• Type of degree - BSc, MSc, PhD
• Duration of studies in semesters
• Elaboration of a thesis
• Average number of registered / entering students per year
• Starting (academic) year of the program
• Teaching language(s)
• Whether the BME study program applies the European Credit Transfer System (ECTS) or any other credit transfer system
• Whether there are any bilateral agreements with other universities
• Whether foreign students attend the program for ECTS exchange
• Whether foreign students register to the program
• Whether there are any tuition fees
The third section contains specific information on all the courses / topics offered in each BME study program or any other program that contains a specialization in BME. Particularly, for every course / topic, information is collected on the
credits and on whether the topic is obligatory, optional or conversion. After discussion among all project partners, it was concluded that the study programs of interest for this review belong to the following areas of Biomedical Engineering:
• Clinical Engineering
• Medical Engineering
• Rehabilitation Engineering
• Cellular Engineering
• Biomedical Signal Processing
• Biomedical Materials
• Biomedical Electronics and Instrumentation
• Biomedical Technology
For the purposes of the project, 46 countries in Europe and the neighbouring area were investigated (Fig. 1).
Fig. 1 Visual representation of the countries’ coverage concerning the BME programs: countries with a partner from the project consortium; countries covered by a partner from the project consortium; countries covered by Internet resources only
The project consortium consists of 23 partners distributed across 20 different countries. Each project partner provided information on the BME programs in its own country, and thus 20 countries were initially covered by the project consortium. Of the remaining 26 countries, 15 were identified as possible to be covered by some of the project partners. Finally, for the 11 other countries that remained, the information concerning the BME programs was obtained from Internet resources only.
III. RESULTS
The present study covered 46 countries in Europe and identified Biomedical Engineering programs in 40 of them. Approximately 150 universities across Europe offer a total of 297 BME programs, distributed as follows: 77 undergraduate programs offering a BSc degree and 220 postgraduate programs, of which 161 offer an MSc degree and 59 a PhD degree. In percentage ratios the numbers are: 26% BSc, 54% MSc, 20% PhD.
More than 90% of BME programs make the elaboration of a thesis obligatory. More than 85% of BME programs offer full-time teaching only, and fewer than 15% have options for part-time teaching or e-learning.
The majority of BSc programs (63%) have a duration of 6 semesters, followed by programs with a duration of 8 semesters (28%). More than half of the MSc programs (53%) have a duration of 4 semesters. Almost one-third of the MSc programs (30%) last only 2 semesters, while a small number (6%) are MSc programs with integrated first- and second-cycle degrees and a duration of 10 semesters. Concerning the PhD programs, more than two-thirds (68%) have a minimum duration of 6 semesters, while the 8- and 10-semester programs account for 22% and 10%, respectively.
English as a teaching language is offered in approximately 60% of the BME programs, while 30% of the BME programs teach only in their country's native language. A single teaching language is used in 57% of the BME programs, two teaching languages are used in 40%, and a small number (3%) use three teaching languages.
Concerning student exchange and mobility, approximately 90% of the BME programs apply the European Credit Transfer and Accumulation System (ECTS). About 75% of the BME programs accept foreign students, and 70% have bilateral agreements with other universities.
Concerning the age of the BME programs, only 15% of the programs existed 20 years ago, while two-thirds of the BME programs have been created after the year 2000. The oldest BME program has run since 1967, while the newest two BME programs were planned to start in 2010.
The number of registering students per academic year varies widely between the BME programs in the different countries: for the BSc programs it is within the range of 10 to 400, for the MSc programs from 5 to 150, and for the PhD programs from 1 to 30.
The majority of the BME programs (58%) require tuition fees from the students. Only 27% of the BME programs are completely free of charge and do not apply any tuition fees. The remaining 15% of the BME programs are special cases, in which fees are or are not required depending on specific circumstances.
Concerning the distribution of the BME programs across the different countries in Europe, Italy has the highest number, with 38 BME programs, followed by the United Kingdom with 26 BME programs. Six other countries (Czech Republic, Turkey, Spain, Finland, Germany, Israel) have a significant number, between 10 and 20 BME programs. The majority of the European countries (70%, or 32 countries) have fewer than 10 BME programs. There are also 6 countries without any BME programs; some of them used to have such programs in the past (Bosnia and Herzegovina, FYROM), but these have been discontinued.
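The reported percentage ratios follow directly from the program counts given above; a minimal Python cross-check (an illustrative sketch added here, using only the counts from this section) reproduces them:

    # Degree-level distribution of the 297 BME programs (counts from the text).
    bsc, msc, phd = 77, 161, 59
    total = bsc + msc + phd       # 297 programs in total
    assert msc + phd == 220       # 220 postgraduate programs, as stated
    for label, n in [("BSc", bsc), ("MSc", msc), ("PhD", phd)]:
        print(f"{label}: {n} programs, {100 * n / total:.0f}%")
    # Prints: BSc: 77 programs, 26% / MSc: 161 programs, 54% / PhD: 59 programs, 20%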
IV. DISCUSSION AND CONCLUSIONS
The biomedical engineer applies engineering to solve problems in biology and medicine. This requirement implies a two-skill approach to practice and training: first, the biomedical engineer must have fundamental engineering knowledge; second, he or she must understand how to apply that knowledge to problems in biology and medicine. The results of this study reveal that Biomedical Engineering programs have experienced rapid growth since the year 2000, and especially during the last five years. This growth results in an increased number of biomedical engineers available on the market today, and it is expected to play an important role in meeting the existing and forecasted needs of the field of Biomedical Engineering.
REFERENCES
1. European Credit Transfer and Accumulation System (ECTS), http://ec.europa.eu/education/lifelong-learning-policy/doc48_en.htm
2. The Bologna Process - Towards the European Higher Education Area, http://ec.europa.eu/education/higher-education/doc1290_en.htm
3. Pallikarakis N, Bliznakov Z (2009) Harmonizing the Curricula of Biomedical Engineering Programs in Europe: The TEMPUS CRH-BME project. International Technology, Education and Development Conference - INTED 2009, 9-11 March 2009, Valencia, Spain
4. MELETI project - Medical Engineering Listed Education & Training Information, http://www.inbit.gr/publications/index.html
5. BIOMEDEA project - Biomedical and Clinical Engineering Education, Accreditation, Training and Certification, http://www.biomedea.org/
Author: Zhivko Bliznakov
Institute: BIT Unit, Department of Medical Physics, School of Health Sciences, University of Patras
Street: University Campus
City: Rio, Patras
Country: Greece
Email: [email protected]
Virtual Experiments: May VCV Impede Circulation More than PCV in (Virtual) Patients in the Lateral Position? T. Golczewski and M. Darowski Institute of Biocybernetics and Biomedical Engineering PAS, Warsaw, Poland Abstract— The position of artificially ventilated patients has to be changed into a lateral one to avoid bedsores. Such a position causes asymmetrical work of the lungs, especially in older patients, when the closing capacity (CC) of the dependent lung is greater than its functional residual capacity (FRC). If CC>FRC, a lung is closed at the end of expirations, and in consequence a certain time is necessary to open it during inspirations. Virtual organs may be useful for the initial testing of a wide range of problems, such as the differences between volume control ventilation (VCV) and pressure control ventilation (PCV) in their influence on the local intrapleural pressure affecting pulmonary blood flow, when CC>FRC for the dependent lung. Methods: A virtual respiratory system elaborated previously was used for simulation of VCV with constant inspiratory airflow and PCV with constant inspiratory pressure, applied to a standard patient in the left lateral position when CC>FRC for the dependent (left) lung and CC<FRC for the independent (right) one.
I. INTRODUCTION
Development of computer technology enables us to create virtual patients (aggregates of complex models) to perform e-experiments for initial tests of scientific hypotheses or of methods of examination and treatment. Fig. 1 presents a platform for the analysis of cardiopulmonary interactions, which is being developed in the Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences (partly in cooperation with the Institute of Clinical Physiology, Section of Rome, CNR, Rome, Italy, e.g. [1]).
Fig. 1 A scheme of models and their connections for simulation of cardiopulmonary interactions during ventilatory (the left arrow) and circulatory (the right arrow) support. Modules: AGT - airway gas transfer, GE - gas exchange, BGT - blood gas transport. F - blood flow rates, Q - airflow rates, V - volumes of respiratory system compartments, P - pressures (alveolar, intrapleural, etc.), SaGj, SvGj, VGj, Ej - arterial and venous saturations, volumes and exchange rates of the j-th gas species
Mechanical ventilation has such disadvantages and complications that the term 'ventilator-induced lung injury' has appeared. Therefore, the search for an artificial ventilation method that provides appropriate tissue oxygen supply with the smallest health hazard is still important. Since the ventilation/perfusion ratio is one of the main determinants of blood oxygen concentration, it should be analyzed which mode of artificial ventilation is better when its influence on pulmonary circulation is taken into account. In this paper, a comparison of the two main modes, volume control ventilation (VCV) and pressure control ventilation (PCV), is presented. In particular, alveolar and intrapleural pressures in the dependent and independent lungs were analyzed in older virtual patients in the lateral position. In the case of older patients, the closing capacity (CC) is close to the functional residual capacity (FRC), and thus in the lateral position CC>FRC for the dependent (lower) lung, whereas CC<FRC for the independent (upper) one.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 983–986, 2010. www.springerlink.com
II. METHODS
A. Virtual Respiratory System
A virtual RS (Fig. 1) previously created in the Institute of Biocybernetics was utilized to compare VCV and PCV. It was presented in detail in [2]; its model of RS mechanics (Fig. 2) was detailed in [3] (also in [4] - free on-line access). It has been utilized in many applications, for example in a system for e-learning of spirometry [5]. From the point of view of the problem analyzed here, the main features of the virtual RS are the separation of the chest wall from the lungs and the division of the lungs into five lobes. This enabled us to 'measure' the intrapleural pressures affecting pulmonary circulation and to 'introduce' a compliant mediastinum, which influences these pressures, between the left and right lungs. Taking into account differences in both their physiological basis and mathematical description, airway resistance was decomposed into the resistances of the upper airways and large bronchi (Ru, R, RL, RR in Fig. 2), the resistances of bronchi of the middle order (Rp in Fig. 2 - such bronchi may collapse if the pleural pressure is greater than the pressure inside them), and the resistances of the smallest bronchi and ducts, which depend on the corresponding lobe volume (Rv in Fig. 2). These smallest bronchi may be closed (Rv is almost infinite) if CC is greater than the current lobe volume. The influence of gravity is simulated with pressure sources (G in Fig. 2) related to quasi-hydrostatic pressure (e.g., in the supine position the G values for all lobes are equal to zero, whereas in the other positions G is negative for independent regions and positive for dependent ones).
Fig. 2 A simplified scheme of the respiratory system mechanics model. G - influence of gravity, Pp - local intrapleural pressures affected by quasi-hydrostatic pressure, Ppl, Ppr - the left and right pleural pressures near the mediastinum, Cm - compliant mediastinum (see the text for other details)
B. Simulation Procedures
Constant inspiratory pressure (P in Fig. 2) or constant inspiratory airflow (F in Fig. 2) applied for 1.6 s is treated as a simulation of a respirator working in PCV or VCV mode, respectively (ventilation frequency equal to 15/min). The standard virtual patient (parameter values as shown in [3]), with CC≈FRC for all lobes in the supine position, was ventilated in this position in such a way as to provide minute ventilation equal to 10 L/min. After the position was changed to the left lateral one, the ventilation was increased to keep the blood oxygen saturation at the same level.
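To illustrate the difference between the two driving modes, a minimal single-compartment sketch in Python is given below. It integrates dV/dt = Q for one inspiration with either a constant flow (VCV) or a constant driving pressure (PCV); the resistance, compliance and PCV pressure values are assumptions for illustration only, and the sketch does not include the five-lobe structure, lobe closing/opening, chest wall or mediastinum of the virtual RS described in [2, 3].

    import numpy as np

    R = 3.0       # airway resistance, cmH2O/(L/s) - assumed value
    C = 0.05      # respiratory system compliance, L/cmH2O - assumed value
    T_INSP = 1.6  # inspiratory time, s (as in the simulations above)
    DT = 0.001    # Euler integration step, s

    def inspiration(mode, target_vt=0.667, p_set=16.0):
        """One inspiration: VCV uses constant flow, PCV uses Q = (P - V/C)/R."""
        v = 0.0
        q_const = target_vt / T_INSP           # constant flow for VCV
        for _ in np.arange(0.0, T_INSP, DT):
            q = q_const if mode == "VCV" else (p_set - v / C) / R
            v += q * DT                        # dV/dt = Q
        return v

    # 10 L/min at 15 breaths/min gives a tidal volume of about 0.667 L.
    print(f"VCV tidal volume: {inspiration('VCV'):.3f} L")
    print(f"PCV tidal volume: {inspiration('PCV'):.3f} L")

In such a single-compartment model the two modes differ only in their pressure and flow profiles; the asymmetry effects reported below arise from the multi-lobe structure, lobe closing and the mediastinum of the full model.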
III. RESULTS
CC is a physiological parameter related to parenchyma properties, and it does not change with a change of position. However, FRC depends on the position (independent regions press dependent ones, which decreases the FRC of the dependent regions). Therefore, if CC≈FRC for all lobes in the supine position, CC>FRC for the left (lower, i.e. dependent) lobes in the left lateral position. As a result, the left lobes were closed at the end of expirations. As shown in Figs. 3 and 4a,b, a certain time was necessary to open these lobes during inspirations. In the case of VCV, both fresh air from the respirator and partly deoxygenated air from the right lobes flowed into the left lobes after their opening (the Pendelluft phenomenon, Fig. 3). The Pendelluft was not observed during PCV. Moreover, overinflation of the right lung was greater during VCV (Fig. 4b) than during PCV (Fig. 4a). Thus, PCV appeared better than VCV.
Fig. 3 Airflow patterns during VCV in the supine (white) and left lateral (black) positions. Vmin = 18 L/min. Dotted curves - the right lung (independent); solid ones - the left lung (dependent). In the left lateral position, the left lung needed time to open
Fig. 4a Lung volume patterns during PCV in the supine (white) and left lateral (black) positions. Vmin = 18 L/min. Dotted curves - the right lung (independent); solid ones - the left lung (dependent). In the left lateral position, the left lung needed time to open (here, the left upper lobe opened first)
Fig. 4b Lung volume patterns during VCV. Note overinflation of the right lung before the left lung opened
Fig. 5 VCV in the left lateral position: white patterns - differences between the right and left pleural pressures near the mediastinum; black patterns - differences between the volumes of the left and right lungs; solid lines - with the mediastinum present, dotted ones - without it
In the lateral position, the left and right pleural pressures near the mediastinum (Ppl and Ppr in Fig. 2) were different. The value of the mediastinum compliance influenced this difference. If the compliance was very high (i.e., in effect there was no mediastinum), there were no differences during either inspiration or expiration (Fig. 5, the white dotted line). If, however, the mediastinum was stiffer, the right pleural pressure was greater than the left one. This difference was greater for VCV. Note, however, that if the mediastinum existed (was stiff), the left lung opened earlier and the mean difference between the left and right lung volumes was smaller (Fig. 5), i.e. both lungs were ventilated more similarly and the Pendelluft was smaller in VCV.
IV. DISCUSSION
As Fig. 4 shows, the left lung was closed at the beginning of inspiration, and the increasing pressure generated by a respirator could open that lung, but this took time. In the case of PCV, when the maximal pressure is generated at the beginning of inspiration, the left lung opened earlier than during VCV (Figs. 4a and b). For that reason, the left lung was ventilated better during PCV. Additionally, there was no Pendelluft during PCV, and thus the left lung was ventilated with fresh air only. Therefore, PCV seems to be the better mode of ventilation of an RS working asymmetrically if the ventilation of both lungs considered together is taken into account. If ventilator-induced lung injury is considered, PCV again seems better than VCV. Indeed, the whole airflow generated by the respirator during VCV was directed by force into the right lung. Therefore, the right lung was overinflated before the left lung opened (Fig. 4b).
Fig. 6 The chest wall changes symmetrically even if the lungs change asymmetrically [6] because the stiff ribs are connected at both ends. Therefore, the resultant force that causes chest volume change is the sum of the forces resulting from the left and right pleural pressures
In the lateral position, the dependent lung is compressed by the independent lung, which may cause the FRC of the dependent lung to decrease below CC; in consequence, this lung may be closed. How much it is compressed, however, depends on the mediastinum compliance. If the mediastinum is very compliant (in effect, does not exist), the whole weight of the independent lung has to be balanced by the dependent lung, and thus its compression is significant. Therefore, more time and a higher pressure are necessary to open it. If the mediastinum is stiffer, it partially balances the weight of the independent lung, and thus the dependent lung is less compressed; in consequence, less time (a lower pressure) is necessary to open it (Fig. 5). Thus, a stiffer mediastinum decreases the asymmetry of ventilation. On the other hand, however, a stiffer mediastinum increases the difference between the right and left pleural pressures. As the chest wall moves symmetrically even if the lungs are inflated asymmetrically (Fig. 6), the volumes of the left and right parts of the thorax increase equally. If the mediastinum is stiff, it partially prevents the independent lung from expanding into the part of the thorax 'reserved' for the dependent lung. For that reason, the volume of the dependent lung (the left one here) increases even though the amount of air in this lung does not change while the lung is closed. As a result, both the alveolar and pleural pressures in the dependent lung have to fall because of air decompression. Since the independent lung is more inflated (by force) during VCV (Fig. 4), the alveolar and pleural pressures fall in the dependent lung and rise in the independent lung more during VCV than during PCV. Inasmuch as the alveolar pressure affects blood flow through the pulmonary capillaries and the pleural pressure influences pulmonary arterial and venous flow, the asymmetry of perfusion should appear greater during VCV than during PCV.
This study has one main limitation. Since the lungs are divided into lobes, only concurrent closing and opening of a part as big as a whole lobe can be simulated. In reality, the closing and opening of particular pieces of the lungs are not concurrent. For that reason, the simulated phenomena related to lung opening are probably more rapid than they really are. Nevertheless, they seem to be qualitatively correct.
V. CONCLUSIONS
The value of the mediastinum compliance seems to affect the degree of asymmetry of both ventilation and perfusion: the smaller the compliance, the smaller the ventilation asymmetry; however, the smaller the compliance, the greater the perfusion asymmetry. Although VCV is commonly assumed to be the better ventilation mode because it always supplies the desired tidal volume, PCV seems to be more reliable than VCV if the RS works asymmetrically and such aspects as the ventilation/perfusion ratio and the alveolar partial pressure of oxygen are taken into account.
ACKNOWLEDGMENTS
The work was supported by grant No. NN518332235 from the Ministry of Science and Higher Education, Poland.
REFERENCES
1. Golczewski T, Zieliński K, Ferrari G, Palko KJ, Darowski M (2010) Influence of ventilation mode on blood oxygenation - investigation with Polish virtual lungs and Italian model of circulation. Biocybernetics and Biomedical Engineering 30(1):17-30
2. Darowski M, Ferrari G (Eds.) Comprehensive models of cardiovascular and respiratory systems: their mechanical support and interactions. Nova Science Publishers, Inc; New York (in press)
3. Golczewski T, Darowski M (2006) Virtual respiratory system for education and research: simulation of expiratory flow limitation for spirometry. Int J Artif Organs 29:961-972
4. Golczewski T, Darowski M (2007) Virtual respiratory system in investigation of CPAP influence on optimal breathing frequency in obstructive lungs disease. Nonlinear Biomedical Physics 1: item 6
5. Tgol.e-spirometry at http://www.virtual-spirometry.eu
6. Groote A, Muylem A, Scilla P, et al. (2004) Ventilation asymmetry after transplantation for emphysema: role of chest wall and mediastinum. Am J Respir Crit Care Med 170:1233-1238
The corresponding author: Tomasz Golczewski
Institute: Institute of Biocybernetics and Biomedical Engineering, PAS
Street: Ks. Trojdena 4
City: Warsaw
Country: Poland
Email: [email protected]
Human Factors Engineering Applied to Risk Management in the Use of Medical Equipment
A.P.S. Silva et al.
[The body of this paper could not be recovered: the source text for pp. 987-990 is garbled beyond reconstruction; only the title, authors and page range survive.]
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 987–990, 2010. www.springerlink.com
Electromagnetic Interferences (EMI) from Active RFId on Critical Care Equipment Ernesto Iadanza, Fabrizio Dori, Roberto Miniati, and Edvige Corrado Department of Electronics and Telecommunications, University of Florence, Florence, Italy Abstract— RFId is quickly becoming a pervasive technology inside hospitals, and EMI on electrical medical equipment is a concern. Patient safety has to be assured by assessing the electromagnetic compatibility (EMC) between Radio Frequency Identification (RFId) equipment and electrical medical devices. This study tests the effects of an active RFId system, used to track patients, on critical care electrical medical equipment. Sixteen devices in five different categories were tested in a children's Intensive Care Unit (ICU), and no performance modifications were observed. Keywords— EMI, RFId, ICU, medical equipment.
I. INTRODUCTION
Radio Frequency Identification (RFId) technology is quickly entering hospitals, often close to the patient himself. Many tasks can be performed using simple passive RFId tags: mother-baby matching with wristbands to avoid mix-ups; patient-drug tracking using RFId-tagged packaging; blood bag tracking; sterile surgical tool tracking, etc. Some studies show that the active technology is particularly suitable for tasks such as the location of patients or assets [1] [8] [10] [11] [12]. Active RFId systems allow tasks not achievable with previous technologies such as barcodes or video cameras. RFId use in healthcare is also receiving much attention with respect to its implications for patient safety [2] [3]. The possible EMI on medical equipment is a concern when the life of the patient depends on the correct functioning of the medical device. Some recent studies showed contrasting results, pointing out the need for further investigations to be done case by case [3] [4]. This work focuses on examining the EMI between an active RFId system and the critical care equipment in a children's ICU.
An active RFId system consists of three main devices: illuminator, receiver, and tag. In addition, there may be a data network and management software. The tag is battery-powered and is normally in stand-by mode; when entering an illuminator's field cone, it wakes up and starts to transmit its ID code, together with the illuminator's ID code, to a receiver. The various systems on the market use many different transmitting frequencies and modes of operation, also depending on the different national regulations.
Electrical medical equipment must comply with the UL/EN/IEC 60601 standard plus some national deviations. In particular, the collateral standard IEC 60601-1-2 applies to the electromagnetic compatibility of medical electrical equipment and medical electrical systems. Nevertheless, many devices still in use only meet older versions of the standard, which required lower immunity test levels over the frequency range 26 MHz to 1 GHz. For this reason, it is useful to test the EMI of RFId on the hospital's actual equipment.
II. MATERIALS AND METHODS
A. RFId System
The tested RFId hardware is an active dual-frequency system, LNX®, by Advanced Microwave Engineering S.r.l. (www.ameol.it, Florence, Italy). The LNX system includes three devices: the illuminator, the tag and the reader [1].
Fig. 1 How the LNX active RFId System works
The major EMI source in the system is the illuminator. In this application, the footprint of its antenna is designed to cover a single ICU room. It consists of a 2.45 GHz PLL oscillator cascaded with an OOK modulator and a medium-power MMIC amplifier. The radiation pattern of the antenna has a 120-degree, -8 dB angular aperture. Circular polarization is employed because the orientation of the tag, which uses a linearly polarized antenna, is unpredictable in many applications.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 991–994, 2010. www.springerlink.com
The signal transmitted by the illuminator provides a programmable ID code and a few more setting commands that are used for programming the operation mode of a tag entering its field pattern. The RF output power of the illuminator can be set from 0 dBm to 20 dBm [5]. For each test we used the maximum power of 20 dBm and an AC adapter.
The RFId tag is a battery-powered dual-frequency device that can be activated and programmed by the illuminator. It comes with a 4-Kbyte memory board and remains in a low-power-consumption stand-by mode until it is activated. It then transmits its own ID code and the illuminator code to a receiver unit, using a band centred at 433 MHz and a maximum output power of 0 dBm.
B. Medical Equipment
The tested critical care devices constitute typical equipment for children's resuscitation. Testing was performed at the Meyer Children's Hospital in Florence, Italy (www.meyer.it). An ICU room, away from the patients area, was set up with a moveable RFId illuminator and some active RFId tags. The medical equipment was operated by healthcare personnel trained to manage it in everyday use. Table 1 lists the 16 devices, tested on two different occasions.
Fig. 2 The intensive care room used for the tests
Table 1 Tested critical care equipment
Device | Manufacturer / Model | N. of devices
Ventilator | Draeger Evita XL | 2
Ventilator | Siemens Servo Ventilator 3000 | 2
Syringe pump | Braun Perfusor Compact | 4
Volumetric infusion pump | Braun Infusomat fmS | 3
Defibrillator / Monitor | Medtronic Lifepak 12 | 3
Multi-parametric monitor | Siemens SC 8000 | 2
Only the two Draeger ventilators were compliant with the latest IEC 60601-1-2:2003 standard, which specifies a general immunity test level to radiated RF noise of 10 V/m. The remaining 14 devices, according to their manuals, were compliant with previous versions of the same standard, which required an immunity level of just 3 V/m. All the tests were performed in a fully operating critical care room without any patients and with just one medical device switched on at a time (see Fig. 2).
For each device, the operating manual was studied to assess EMC and to create the checklists for the tests to be performed.
C. Test Method
The test method was based on the American National Standards Institute recommended practice ANSI C63.18 for assessing the electromagnetic immunity of the medical devices to the RFId illuminator and an active tag [6]. The standard was supplemented with checklists designed for each medical device after analysis of its operational and maintenance documentation. Each electrical medical device was first checked by healthcare staff using its own internal test procedure. If necessary, the devices were connected to the provided simulators. Then the illuminator was turned on. The distance between the illuminator and the device was reduced, according to the ANSI C63.18 standard, in three successive steps: from 2 m to 0.6 m to 0.01 m (the illuminator on top of the device, below the minimal distance for the RF immunity tests imposed by IEC 60601-1-2; see Fig. 3). At each step the device was turned off and then on, the device's internal test procedures were performed, and its performance was evaluated by the healthcare personnel.
Fig. 3 The RFId illuminator placed on top of a ventilator
Each test was repeated with a battery-powered transmitting tag attached to the device body. At the minimal distance, the illuminator was moved into three different positions along the axes x (frontal), y (lateral) and z (above the device).
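One way to enumerate the test configurations implied by this protocol is sketched below in Python; the device list is taken from Table 1 and the distance and position steps follow the text (the code is an illustration added here, not part of the original study).

    from itertools import product

    devices = ["Draeger Evita XL", "Siemens Servo Ventilator 3000",
               "Braun Perfusor Compact", "Braun Infusomat fmS",
               "Medtronic Lifepak 12", "Siemens SC 8000"]
    distances_m = [2.0, 0.6, 0.01]   # successive illuminator distances
    axes = ["x (frontal)", "y (lateral)", "z (above)"]

    tests = []
    for device, tag_attached in product(devices, [False, True]):
        for d in distances_m:
            if d == 0.01:            # at the minimal distance, vary the position
                tests += [(device, d, axis, tag_attached) for axis in axes]
            else:
                tests.append((device, d, None, tag_attached))
    print(len(tests), "test configurations")  # 6 devices x 2 tag states x 5 steps = 60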
III. RESULTS
No malfunctions were spotted on the Draeger Evita XL ventilators in Paw, flow, respiratory frequency or other parameters for any of the tested modes:
1. IPPV (Intermittent Positive Pressure Ventilation);
2. SIMV (Synchronized Intermittent Mandatory Ventilation);
3. MMV (Mandatory Minute Volume Ventilation);
4. CPAP (Continuous Positive Airway Pressure);
5. ASB (Assisted Spontaneous Breathing);
6. BIPAP (Biphasic Positive Airway Pressure);
7. APRV (Airway Pressure Release Ventilation);
8. PPS (Proportional Pressure Support).
No malfunctions occurred in the alarms, which were tested by simulating alert situations. The Siemens ventilators were tested using a rubber test lung for adults, since no children's simulator was available. No malfunctions were recorded for these devices, which conform to the first version of IEC 60601-1-2. None of the tested Braun pumps, set to deliver 5 mL/h, revealed malfunctions during the tests. Correct functioning of the alarms was assessed by simulating an occlusion and then waiting for the alarm beeps and the error message, both of which disappeared as soon as the occlusion was eliminated. No anomalies were found for the Medtronic defibrillators either. Tests were performed using the device's 'User Test' mode with no actual defibrillator shocks. The ECG trace, obtained by connecting the electrodes to a test subject, also showed no errors: the ECG curve revealed no distortions and the heart rate remained constant. The 'signal absence alarm' functioning was verified after removing an electrode; the alarm stopped as soon as the electrode was repositioned. The Siemens multi-parametric monitors, tested by detecting the ECG and pulse oximetry signals, also worked properly during all the performed tests.
IV. CONCLUSIONS
This study determined that the use of an active RFId system with the above characteristics (low power and relatively high operating frequency) does not affect the performance of the critical care equipment of a children's ICU. The heterogeneity and typology of the tested devices, together with the total absence of malfunctions due to EMI, let us suppose that these results can be extended to a generic hospital ward. However, it is advisable that on-site studies like this one be performed with the particular RFId hardware chosen whenever the introduction of an RFId system in a hospital is planned. It is also important to carry out health technology renewal plans privileging the acquisition of devices that are most robust against EMI, since RF systems are quickly becoming a widespread technology inside our hospitals.
ACKNOWLEDGMENT Thanks to Advanced Microwave Engineering (www. ameol.it), Meyer Children’s Hospital (www.meyer.it) and to Professor Guido Biffi Gentili.
REFERENCES
1. E. Iadanza, F. Dori, R. Miniati, R. Bonaiuti, "Patients tracking and identifying inside hospital: A multilayer method to plan an RFId solution", Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE, pp. 1462-1465, 20-25 Aug. 2008
2. B. S. Ashar and A. Ferriter, "Radiofrequency Identification Technology in Health Care: Benefits and Potential Risks", JAMA. 2007;298(19):2305-7
3. R. van der Togt, E. J. van Lieshout, R. Hensbroek, E. Beinat, J. M. Binnekade, P. J. M. Bakker, "Electromagnetic Interference From Radio Frequency Identification Inducing Potentially Hazardous Incidents in Critical Care Medical Equipment", JAMA. 2008;299(24):2884-2890
4. B. Christe, E. Cooney, G. Maggioli, D. Doty, R. Frye, J. Short, "Testing potential interference with RFID usage in the patient care environment", Biomed Instrum Technol. 2008;42(6):479-84
5. G. Biffi Gentili, C. Salvador, "A New Versatile Full Active RFID System", in Proc. RFIDays 2008 - Workshop on Emerging Technologies for Radio-frequency Identification, Roma, 2008, pp. 30-33.
6. Institute of Electrical and Electronics Engineers. American National Standard Recommended Practice for On-site Ad Hoc Test Method for Estimating Radiated Electromagnetic Immunity of Medical Devices to Specific Radio-frequency Transmitters (Standard C63.18). Piscataway, NJ: IEEE; 1997. 7. Franklin, B.J., “Development of a best practice model for the implementation of RFID in a healthcare environment”, in Proc. Bioengineering Conference, 2007. NEBC '07. IEEE 33rd Annual Northeast, Long Island, NY, USA, 2007, pp. 289 – 290. 8. E. Fry, L Lenert, “MASCAL ‐ RFID tracking of patients, staff and equipment to enhance hospital response to mass casualty events”, Amia Symposium, Bethesda (USA), American Medical Informatics Association, 2005. Available: https://www.wiisard.org/papers/command_center/events.pdf. 9. TC Chan, J Killeen, W Griswold, L Lenert, “Information technology and emergency medical care during disasters”. Acad Emerg Med. 2004;11(11):1229–36 10. S. Davis “Tagging along. RFID helps hospitals track assets and people”. Health Facil Manage 2004;17(12):20–4
11. A. M. Wicks, J. K. Visich, S. Li, "Radio Frequency Identification Applications in Hospital Environments", Hospital Topics 2006;84(3):3-9
12. R.S. Sangwan, R.G. Qiu, D. Jessen, "Using RFID tags for tracking patients, charts and medical equipment within an integrated health delivery network", in Networking, Sensing and Control, 2005. Proceedings. 2005 IEEE, pp. 1070-1074
Address of the corresponding author:
Author: Ernesto Iadanza
Institute: Department of Electronics and Telecommunications, University of Florence
City: Florence
Country: Italy
Email: [email protected]
The Clinical Data Recorder: What Shall Be Monitored? L.N. Nascimento and S.J. Calil Departamento de Engenharia Biomédica - Faculdade de Engenharia Elétrica e de Computação Centro de Engenharia Biomédica - Universidade Estadual de Campinas Campinas - SP - Brasil
Abstract— Medical error has been identified as one of the major causes of death in the USA. To reduce the number of preventable deaths, it is necessary to study the causes that lead to medical error and to improve the processes involved in the delivery of healthcare. Some tools and techniques have been adapted to healthcare from other areas. The Clinical Data Recorder (CDR) – an adaptation of aviation's Flight Data Recorder (FDR) – is a promising tool for the assessment of surgical errors because it can provide reliable data on healthcare adverse events. This paper presents some developments already made in the area of medical data recording and offers some suggestions for the improvement of the CDR concept. We also briefly discuss some questions still to be addressed before surgical procedures can be routinely recorded for risk management purposes. Keywords— clinical data recorder, medical error, risk management, surgical error.
I. INTRODUCTION
Ten years ago, a report from the US Institute of Medicine pointed to medical errors as a leading cause of death in the country, with an estimate of between 44,000 and 98,000 preventable deaths each year [1]. Such numbers are evidence that the processes involved in delivering healthcare need to be studied and improved to push healthcare towards safer levels.
Traditionally, analyses of medical errors focused on individuals [2]. However, it is now clear that most medical errors are caused by multiple factors, and some of these factors are attributable to system defects, not only to individual errors [3, 4, 5]. Sleep loss [6] and lack of training (sometimes worsened by a lack of equipment standardization in the health institution) [7] are some of the factors that influence the safety of medical procedures. Some of these factors (e.g. technical skills [4]) can be improved with simple actions, while others cannot be controlled at all because they would require changes in healthcare that society could not undertake (e.g. limiting the flow and choice of incoming patients [8]).
Because aviation and medicine are similar in some aspects [9], some principles and tools of aviation safety such as incident reporting systems [10] and observational analysis
[4, 11, 12] have successfully been applied to healthcare (e.g. the mandatory incident reporting system of the UK's Department of Health [13], and the Veterans Administration near-miss incident reporting system [10]). These tools, however, depend on the clinical team members' recall. Because memory for events is fallible [3], other methods should be employed to accurately capture the rich and complex data surrounding the care of a patient in an operating room [14].
One promising tool for capturing adverse event data in clinical settings is the Clinical Data Recorder (CDR), also based on an aviation tool: the Flight Data Recorder (FDR, also known as "the black box"). The FDR is used to record specific aircraft performance and environmental parameters, and it has for many years been a valuable tool in the analysis of adverse events in aviation. Because the relevant flight parameters have already been determined, there is a list of mandatory parameters to be recorded by FDRs (defined by specific regulations) [15].
There has already been some progress in the development of Clinical Data Recorders and other data recording systems for healthcare, but we are still far from defining a comprehensive set of parameters to be monitored. Some studies have developed systems capable of recording audio and video data from operating rooms [14, 16], but they are mainly focused on the assessment of technical or team skills - sometimes with the aid of observational methods (which do require the active employment of specialized personnel during operations) [4, 14, 17]. The "black box" concept has also been applied in a system for collecting data on alarms from a non-surgical hospital setting, and the results seemed promising [18].
The objective of this paper is to present some collected information about data recording in healthcare and discuss its applicability to the improvement of risk management in operating rooms.
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 995–998, 2010. www.springerlink.com
II. MATERIALS AND METHODS
At the basis of error management is the understanding of both the nature and the extent of error [9].
If the proper data is recorded, it might provide valuable information about the causes of an adverse event or "near miss". In airplanes, a wealth of data is recorded, mainly related to airplane functioning and operation, but some of those parameters are related to the external environment conditions or the crew [15]. Additional scientific studies are still necessary to optimize data collection in the health area; however, following the black box concept, a Clinical Data Recorder should likewise collect as much data as possible. Pertinent data should comprise parameters related to different types of factors [16].
Based on previous studies, we identified some healthcare parameters already recorded for different purposes and divided them into four categories: patient, performance, communication and environment. Patient refers to the physiological data from the patients. Performance parameters are related to technical skills, the proper realization of procedures and the correct operation of medical equipment. Communication parameters are related to interactions between team members during surgery. Environment parameters are related to conditions that surround the medical team and might negatively influence the outcomes of the procedures, either by affecting the technical performance of the team members or by increasing other types of surgical risk.
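As an illustration of how such a four-category parameter set could be organized in software, the following Python sketch defines a hypothetical CDR channel registry; the channel names and sample rates are illustrative assumptions, not specifications taken from the cited studies.

    from dataclasses import dataclass
    from enum import Enum

    class Category(Enum):
        PATIENT = "patient"
        PERFORMANCE = "performance"
        COMMUNICATION = "communication"
        ENVIRONMENT = "environment"

    @dataclass
    class Channel:
        name: str
        category: Category
        sample_rate_hz: float   # nominal acquisition rate

    REGISTRY = [
        Channel("ecg", Category.PATIENT, 500.0),
        Channel("surgical_field_video", Category.PERFORMANCE, 25.0),
        Channel("hand_motion_xyz", Category.PERFORMANCE, 100.0),
        Channel("room_audio", Category.COMMUNICATION, 16000.0),
        Channel("noise_level_db", Category.ENVIRONMENT, 1.0),
        Channel("lighting_lux", Category.ENVIRONMENT, 1.0),
    ]

    # Group channel names by category for a quick overview of coverage.
    by_category = {c.value: [ch.name for ch in REGISTRY if ch.category is c]
                   for c in Category}
    print(by_category)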
III. RESULTS
A. Patient
Physiological data (especially cardiac parameters) have been recorded for years, and some medical devices nowadays have built-in data recording systems. Some projects related to the assessment of surgical skills and risks have adopted the recording of physiological data along with other types of data [14, 16]. This sort of record is paramount to error management because sudden changes in physiological parameter values might indicate the occurrence of an error.
B. Performance
Video: the recording of videos of surgeries provides some of the most useful data for error analysis and requires only a camera. It has been widely adopted, mainly for teaching purposes, but also for the assessment of technical and non-technical skills (e.g. communication and team skills) and errors in controlled studies [4, 14, 16, 17, 19, 20].
Equipment settings: these data might be recorded by video [16], but some devices equipped with electronic display systems have a built-in capability for recording the selected settings. This parameter is important for identifying errors in the operation of devices and might reveal the necessity of training operators on specific equipment.
Hand motion: some studies have applied an electromagnetic tracking device to assess laparoscopic skill and showed good results [19, 20]. If properly displayed, sudden oscillations in this parameter might help in the identification of "near misses" and other incidents.
C. Communication
Conversation recording: the recording of conversations in conjunction with visual data might clarify the exact events that occurred during a surgery. It has been used for assessing non-technical skills (e.g. communication and leadership) [14] and is often used in aviation to elucidate adverse events.
Utterance count: audio recording has also been used to determine communication density during surgery [14, 17]. It might help identify the phases of a surgery when communication between team members is more intense and hence more critical.
D. Environment
Noise and lighting: studies have indicated noise and lighting as factors that influence clinical practice [16], but these data have not been specifically monitored and recorded. Noise level, however, can easily be measured with a decibel meter, and lighting levels can be monitored with photometers.
Equipment functioning: in a recent study, medical equipment alarm data were successfully recorded, and the information gathered from the analyzed data helped to improve the proper use of alarms in that specific setting [18].
IV. DISCUSSION
Despite the development of some promising systems [14, 16, 17, 18], it seems that the application of the FDR concept in the surgery room is still in its infancy. Patient data recording has been evolving fast in the last few years, and some physiological (and other) data can even be remotely monitored [21]. Performance data (except for video) are not yet recorded on a regular basis, and some performance-related parameters (e.g. hand motion) still need to spread among medical researchers. Equipment settings recording is not yet a built-in resource in many medical devices and, because there are no standards for these kinds of data, there could be problems in synchronizing data from different manufacturers' equipment in a single system.
Eye-blink observation and other methods researched by the automobile industry to detect driver fatigue [22] might help in the detection of fatigue in surgeons. Communication data recordings might also aid in the detection of fatigue in surgeons through techniques for detecting fatigue from voice [23]. Except for audio and video recording and the aforementioned study on alarms, environment data are poorly investigated, though some parameters (e.g. lighting, noise, distractions, interruptions, etc.) are considered to influence operation outcomes [16]. A comprehensive environment data collection could include other parameters such as temperature and humidity, which can influence both personnel and equipment. Electrical power quality and oxygen concentration in the operating room should also be monitored, because the former can affect the functioning of medical equipment [24] and the latter can increase the risk of surgical fires [25].
Besides all the technical challenges, there are still some legal and ethical questions to be addressed, such as the use of the collected data as evidence in litigation, the privacy of the medical team during surgery, and the patient's access to medical error data [26]. Another barrier to be overcome is the traditional attitude of focusing error analysis on individuals. It must be replaced by a culture of analyzing each incident as the result of multiple systemic failures; otherwise, errors will continue to be concealed for fear of penalization, and healthcare processes will not be improved [2]. Obviously, care should be taken not to put all the blame upon the system; otherwise, professional responsibility and accountability will be sacrificed, increasing the risks to patients [27].
Finally, the Clinical Data Recorder should not be expected to be a panacea for surgical error, because its sole function should be to record data and, eventually, detect anomalous patterns in the monitored parameters. Proper techniques and trained personnel will still be required to harvest sound information from all the data and to correct flawed processes, making them safer.
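As a sketch of the kind of anomalous-pattern detection mentioned above, the following Python snippet flags samples of a recorded channel that deviate from a rolling mean by more than k rolling standard deviations; the window length, threshold and simulated heart-rate trace are illustrative assumptions only.

    import numpy as np

    def flag_anomalies(signal, window=50, k=4.0):
        """Flag samples deviating from the rolling mean by more than k rolling SDs."""
        signal = np.asarray(signal, dtype=float)
        flags = np.zeros(signal.size, dtype=bool)
        for i in range(window, signal.size):
            ref = signal[i - window:i]
            mu, sigma = ref.mean(), ref.std()
            if sigma > 0 and abs(signal[i] - mu) > k * sigma:
                flags[i] = True
        return flags

    # Example: a steady heart-rate trace with one sudden simulated drop.
    rng = np.random.default_rng(0)
    hr = 120 + rng.normal(0, 1.5, 600)
    hr[400:420] -= 40
    print(np.flatnonzero(flag_anomalies(hr))[:5])  # first flagged samples, near index 400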
V. CONCLUSIONS
In the last decade, awareness of the risks associated with healthcare has greatly increased, and many initiatives have been put in place to understand the nature of medical error, some of them based on practices from other areas. A promising tool now being developed for healthcare is the clinical data recorder, based on aviation's flight data recorder. It has the potential to provide comprehensive and reliable data on adverse events and near misses in surgical settings.
If we can overcome some technical and cultural barriers and address some legal and ethical questions properly, CDRs might help push surgical safety levels further, reducing avoidable harm to patients and saving lives.
ACKNOWLEDGMENT The authors wish to thank CNPq.
REFERENCES
1. Kohn L, Corrigan J, Donaldson M (2000) To err is human: building a safer health system. A report of the Committee on Quality of Healthcare in America, Institute of Medicine. National Academy Press, Washington
2. Leape L, Woods D, Hatlie M et al. (1998) Promoting patient safety by preventing medical error. JAMA 280:1444-1447
3. Gawande A, Zinner M, Studdert D et al. (2003) Analysis of errors reported by surgeons at three teaching hospitals. Surgery 133:614-621 DOI: 10.1067/msy.2003.169
4. Tang B, Hanna G, Bax N et al. (2004) Analysis of technical surgical errors during initial experience of laparoscopic pyloromyotomy by a group of Dutch pediatric surgeons. Surg Endosc 18:1716-1720 DOI: 10.1007/s00464-004-8100-1
5. Rogers S, Gawande A, Kwaan M et al. (2006) Analysis of surgical errors in closed malpractice claims at 4 liability insurers. Surgery 140:25-33 DOI: 10.1016/j.surg.2006.01.008
6. Firth-Cozens J, Cording H (2004) What matters more in patient care? Giving doctors shorter hours of work or a good night's sleep? Br Med J 13:165-166 DOI: 10.1136/qshc.2002.002725
7. ECRI Institute (2008) Healthcare Risk Control (Vol. 4). ECRI Institute, Plymouth Meeting
8. Amalberti R, Auroy Y, Berwick D et al. (2005) Five System Barriers to Achieving Ultrasafe Healthcare. Ann Int Med 142:756-764
9. Helmreich R (2000) On error management: lessons from aviation. Br Med J 320:781-785 DOI: 10.1136/bmj.320.7237.781
10. Barach P, Small S (2000) Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. Br Med J 320:759-763 DOI: 10.1136/bmj.320.7237.759
11. Wiegmann D, ElBardissi A, Dearani J et al. (2007) Disruptions in surgical flow and their relationship to surgical errors: An exploratory investigation. Surgery 142:658-665 DOI: 10.1016/j.surg.2007.07.034
12. Cuschieri A (2005) Reducing errors in the operating room: Surgical proficiency and quality assurance of execution. Surg Endosc 19:1022-1027 DOI: 10.1007/s00464-005-8110-7
13. Bruce S (2009) DH mandates incident reporting. E-Health Insider at http://www.e-healthinsider.com
14. Guerlain S, Adams R, Turrentine F et al. (2005) Assessing team performance in the operating room: Development and use of a "black-box" recorder and other tools for the intraoperative environment. J Am Coll Surg 200:29-37 DOI: 10.1016/j.jamcollsurg.2004.08.029
15. Flight Data Services at http://www.flightdataservices.com
16. Vincent C, Moorthy K, Sarker S et al. (2004) Systems Approaches to Surgical Quality and Safety: From Concept to Measurement. Ann Surg 239:475-482 DOI: 10.1097/01.sla.0000118753.22830.41
17. Moorthy K, Munz Y, Adams S et al. (2005) A Human Factors Analysis of Technical and Team Skills Among Surgical Trainees During Procedural Simulations in a Simulated Operating Theatre. Ann Surg 242:631-639
18. David Y, Hyman W, Woodruff V et al. (2007) Overcoming barriers to success: collecting medical device incident data. Biomed Instrum Technol 41:471,473-475
19. Aggarwal R, Moorthy K, Darzi A (2004) Laparoscopic skills training and assessment. Br J Surg 91:1549-1558
20. Datta V, Mackay S, Mandalia M et al. (2001) The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model. J Am Coll Surg 193:479-485
21. Costin H, Cehan V, Rotariu C et al. (2009) TELEMON - A Complex System for Real Time Telemonitoring of Chronic Patients and Elderly People. 4th European Conference of the International Federation for Medical and Biological Engineering, pp. 1002-1005
22. Autoweb (2006) Mercedes-Benz developing systems to counter driver fatigue. Autoweb at www.autoweb.com.au
23. Greeley H, Friets E, Wilson J et al. (2006) Detecting Fatigue From Voice Using Speech Recognition. 2006 IEEE International Symposium on Signal Processing and Information Technology, pp. 567-571
24. Hanada E, Itoga S, Takano K et al. (2007) Investigations of the Quality of Hospital Electric Power Supply and the Tolerance of Medical Electric Devices to Voltage Dips. J Med Syst 31:219-223
25. Thompson J, Colin W, Snowden T et al. (1998) Fire in the operating room during tracheostomy. Southern Med J 91:243
26. Mavroudis C, Mavroudis CD, Naunheim KS et al. (2005) Should surgical errors always be disclosed to the patient? Ann Thorac Surg 80:399-408 DOI: 10.1016/j.athoracsur.2005.05.023
27. Walton M (2004) Creating a "no blame" culture: have we got the balance right? Br Med J 13:163-164 DOI: 10.1136/qshc.2004.010959
Author: Leonardo Novaes do Nascimento
Institute: Universidade Estadual de Campinas
Street: Centro de Engenharia Biomedica CP 6040
City: Campinas - SP
Country: Brazil
Email: [email protected]
Clinical Engineering and Patient Safety: A Forty-Year Cycle
M. Frize1,2, S. Weyand2 and K. Greenwood1,3
1 Systems and Computer Engineering, Carleton University, Ottawa, Canada
2 School of Information Technology and Engineering, University of Ottawa, Ottawa, Canada
3 The Children's Hospital of Eastern Ontario, Ottawa, Canada
Abstract— In this paper, we examine the issue of patient safety as it was perceived when clinical engineering departments (CEDs) first began to emerge, and discuss how this responsibility changed dramatically in the past two decades. Technology management was of prime importance in the last three decades of the twentieth century and, to this, we now add the issue of reducing adverse medical events and errors. We suggest how some technologies such as electronic health records (EHRs), clinical decision support systems (CDSSs) and physician order entry systems (POESs) can help to reduce their occurrence. Other concepts discussed include the use of human factors engineering, technology planning and management, and error reporting and analysis. Researchers developing technology for clinical applications need to work closely with physicians and clinical engineers if these systems are to be successfully deployed in the future. Keywords— Clinical engineering, patient safety, technical approaches, minimize adverse events, information technology.
I. INTRODUCTION
The late 1960s and early 1970s saw the appearance of Clinical Engineering Departments (CEDs) in the United States and Canada, and of clinical engineering services within medical physics departments in the United Kingdom. Another early model, still present today particularly in the UK and the Nordic countries, is a university-based biomedical program that is also involved in delivering clinical engineering services to affiliated hospitals. The field of clinical engineering expanded rapidly in the 1980s in many industrialized nations. In developing countries, clinical engineering appeared in the late 1980s and early 1990s, except in countries like India and Brazil, where it sprang up in the late 1970s and early 1980s.
Following a statement made by Ralph Nader in the Ladies' Home Journal in 1971, claiming that there were 1200 electrocutions due to micro-shocks in US hospitals every year, a few engineers began to be hired in the USA and Canada to ensure the electrical safety of patients. My own job was created in large part for this reason in a Montreal hospital (Hôpital Notre-Dame) in 1971. As a first priority, a biomedical technologist and I tested every medical device in this 1200-bed hospital. We did find some problems and
fixed them as we found them, sometimes recommending disposal. It soon became clear that the majority of my work was technology management, of which patient safety was one aspect. This did not mean that safety was not of the highest importance, but soon other equipment management tasks became integrated into our job definition. In a 1988 article, I proposed a definition for such departments: "A Clinical Engineering Department should provide safe and effective management of technology used in patient diagnosis or therapy, within health care institutions." [1] A few years later, the American College of Clinical Engineering (ACCE) posted a definition of the clinical engineer on their web site: "A Clinical Engineer is a professional who supports and advances patient care by applying engineering and managerial skills to healthcare technology." [2]
A second factor that helped the field to develop was the rapid proliferation of medical technologies in the 1980s. In a community hospital of 540 beds in Moncton, Canada, the CED was responsible for the management of 320 devices in 1979 and of over 2000 devices by 1989, and we demonstrated that in-house equipment management delivered more comprehensive services at one third the cost of external service companies [3, 4].
A 1988 survey of CEDs in western nations, including the US and Canada, Sweden and Finland, and a few respondents in the European Community, revealed that, although there was some difference in the level of involvement in the various functions, most CEDs were involved in: pre-purchase consultation; drawing up specifications and requirements; analysis of quotations and making recommendations based on criteria established with potential users; corrective and preventive maintenance; incoming inspections when equipment is delivered; and training of users on the safe and effective use of new devices. Some functions were performed by medical technologists and others by clinical engineers. The study found that half of the CEDs did not think that their work was receiving recognition, and some were not consulted prior to equipment purchases in their institution. This study led to the development of a model describing a desirable level of involvement in technology management and the resources needed to accomplish these tasks. [5]
P.D. Bamidis and N. Pallikarakis (Eds.): MEDICON 2010, IFMBE Proceedings 29, pp. 999–1002, 2010. www.springerlink.com
In 1999, a similar survey was conducted by Glouhova involving six regions: North America, the Nordic countries, Western Europe, Southern Europe, Australia, and Latin America (including Brazil, Mexico and Cuba), with results similar to those of the previous survey, except that a higher proportion of CEDs felt recognized. [6] Mullally and Frize focused a new survey on CEDs in developing countries, based on the 1988 survey model, with a few additions regarding equipment donations, which had not been part of the former surveys. [7] In this most recent study, the 1988 model by Frize was applied successfully to a developing-country context; it enabled us to assess the resources needed by CEDs to perform their role, and the manner in which services could be provided effectively in these countries. [7] The rest of the article focuses on patient safety, a function which took centre stage in the early 1970s, and which is again a major focus in the first decade of the 21st century.
II. PATIENT SAFETY THEN AND NOW
A. Patient safety in the early days In the 1970s and for the remainder of the 20th century, patient safety focused mostly on electrical safety, macroshocks and microshocks, which meant regular leakage-current testing of devices used in patient care, checks of the electrical distribution system, removal of extension cords, and so on. Another concern was the effect of electromagnetic interference (EMI) on medical devices. For example, EMI could stop the proper functioning of a pacemaker or an apnea monitor, potentially resulting in the death of the patient. B. Patient safety today Following the Institute of Medicine's (IOM) publication "To Err Is Human: Building a Safer Health System", patient safety took centre stage once again, but in a much broader manner than in previous decades. The IOM report estimated that 44,000 to 98,000 deaths occurred each year in US hospitals due to medical errors, and estimated the national cost of preventable adverse events at between 17 and 29 billion dollars a year. Kohn suggests that about half of the adverse events are due to medical error and are preventable. [8] In Canada, a study showed that adverse events occurred in 7.5% of patients in 2004; of these, 36.9% were considered highly preventable, 5.2% resulted in permanent disability, and 20.8% in death. [9] Studies have since been performed in many other countries with similar results. In response to these studies, a number of measures have been taken to minimize adverse events and medical errors in hospitals. These include the implementation of information technology (IT), the use of human factors engineering, technology planning and management, and error reporting and analysis. The use of IT in health care is still a relatively new area but offers high potential for decreasing medical errors. IT systems include electronic health records (EHR), physician order entry systems (POES), clinical decision-support systems (CDSS), electronic results reporting systems, patient-centred decision-support systems, and telemedicine. The lack of proper technology planning has shown itself time and time again in recent years. Decision makers in healthcare facilities often make purchase decisions based on limited information, only to find out later that important options were omitted or that there were incompatibility issues. Ill-conceived technology solutions often lead to patient safety issues after the device is placed into service. Clinical engineers, as a result of their unique training and skill set, have a singular opportunity to move into a role that matches hospital needs and operational conditions [4]. Studies conducted on adverse events have also identified human factors as a key ingredient in making health care safer. Human factors engineering can systematically reduce adverse events in health care through the use of a user-centered design approach. Using knowledge of the strengths and limitations of human performance, a human-factors-centered approach to health care can ensure that technologies perform as intended. It can also increase the efficiency of non-technological aspects such as work flow, schedules, and staffing levels. The benefits of using a human factors method include increased patient safety, increased efficiency, decreased training requirements, and increased adoption and satisfaction. [10, 11] Error reporting and analysis provides adverse event tracking, which can ultimately lead to solutions. Medical error reporting can also feed error prediction techniques, including fault tree analysis, failure mode and effect analysis, operability studies, and hazard analysis. [12, 13]
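One of these techniques can be made concrete with a small example. The sketch below computes the risk priority number used in failure mode and effect analysis to rank failure modes; the failure modes, scores, and scales shown are invented for illustration, and a real FMEA would be scored by a multidisciplinary team against the institution's own criteria.

```python
# Minimal FMEA sketch: rank hypothetical device failure modes by
# risk priority number (RPN = severity x occurrence x detection),
# each factor scored on a 1-10 scale. All entries are illustrative.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int     # 1 = negligible harm, 10 = catastrophic
    occurrence: int   # 1 = very rare, 10 = very frequent
    detection: int    # 1 = always detected, 10 = undetectable

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Infusion pump free-flow", severity=9, occurrence=2, detection=4),
    FailureMode("Monitor alarm muted", severity=7, occurrence=4, detection=6),
    FailureMode("Lead wire fracture", severity=6, occurrence=3, detection=3),
]

# Highest RPN first: these failure modes deserve mitigation effort first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:3d}  {m.description}")
```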
C. Medical technology management If CEDs are to succeed in medical technology management, they must clearly understand the strategic plan and the financial limitations of their health care facility and develop a clinical equipment plan that supports corporate long-range goals. The initial step is to develop a clinical technology strategic plan (CTSP), using the corporate clinical strategic plan as a building block. For both plans to be successful, it is important that they be fully integrated with one another. The next step, technology assessment, involves the appraisal of the safety, practicality and financial viability of specific technologies and their societal, legal and technical impacts. The data collected from both the CTSP and the completed technology assessments can then be integrated into the corporate clinical equipment plan, which should forecast needs over the next three to five years but requires reassessment on an annual basis to fine-tune the plan's relevance to current priorities. All items identified within the equipment plan should be assessed using a consistent numerical ranking system that includes factors such as patient and staff safety, clinical significance, corporate priorities, technical and material obsolescence, clinician retention, productivity and service delivery, and cost avoidance (see the sketch at the end of this section). [14] Once the list of clinical equipment is finalized, CEDs must then lead the effort, along with the rest of the corporate leadership, to define the annual clinical capital equipment priorities, while remaining aware of their dual roles, "that of manager and that of engineer", at all times. [15] Once the annual priorities are established, the procurement and commissioning cycle begins. CEDs must work closely with end-users, Materiel Management, Information Services and Facilities Management to define device specifications prior to the release of a Request for Proposal (RFP) to the marketplace. After the RFP is released and a successful vendor is selected from the responses, a contract is negotiated with the vendor to finalize issues such as cost, installation, acceptance testing, warranty, service and user training, along with service support, consumables agreements, and service training where needed. The final and longest phase of the technology management process is the asset management stage, which lasts for the entire lifecycle of the device, until it is decommissioned. Maintenance of newly acquired devices is key to ensuring that the life cycle is extended to the maximum, through either internal or external service, depending on the optimum fit for the specific type of device. The CED should be charged with monitoring the ongoing cost of ownership of medical equipment. Reviewing the cost-of-ownership data and weighing them against industry device-lifespan benchmarks gives the CED the ability to make concrete choices about replacement schedules. It is this role within health care facilities that the CED's leadership provides today. Providing objective advice on the best technology solutions for the organisation makes clinical engineers a good fit for the position of Chief Technology Officer. [5]
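As a rough illustration of such a ranking system (the factor names, weights, scores, and items below are hypothetical, not those of any actual equipment plan), a weighted-sum scoring could look like this:

```python
# Hypothetical weighted ranking of equipment-plan items. Factors are
# scored 1-5 by stakeholders; weights reflect institutional priorities.
WEIGHTS = {
    "safety": 0.30,          # patient and staff safety
    "clinical": 0.25,        # clinical significance
    "obsolescence": 0.20,    # technical and material obsolescence
    "corporate": 0.15,       # alignment with corporate priorities
    "cost_avoidance": 0.10,  # productivity / cost avoidance
}

def priority_score(scores: dict[str, int]) -> float:
    """Weighted sum of 1-5 factor scores; higher means replace sooner."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)

items = {
    "Anesthesia machine, OR 3": {"safety": 5, "clinical": 5, "obsolescence": 4,
                                 "corporate": 3, "cost_avoidance": 2},
    "Ultrasound, outpatient":   {"safety": 2, "clinical": 4, "obsolescence": 3,
                                 "corporate": 4, "cost_avoidance": 3},
}

for name, scores in sorted(items.items(), key=lambda kv: priority_score(kv[1]), reverse=True):
    print(f"{priority_score(scores):.2f}  {name}")
```

In practice the factor list and weights would mirror the criteria enumerated above and would be revisited with each annual reassessment of the plan.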
D. Information technology (IT) The implementation of IT in hospitals has been shown to reduce many types of adverse events. For example, electronic health records have been shown to decrease fatigue and stress, which cause cognitive errors and increase negligence. [16, 12] IT can also be used to decrease errors due to a lack of training by increasing the accessibility of training resources. It is important that everyone using medical equipment be trained and periodically re-trained. Training covers operation and function, limitations, malfunctions, hazards, and electrical safety. Training through tele-education and tele-mentoring allows more people to be trained in less time. Additionally, simulators can be used for training by providing hands-on experience without putting patients at risk. [17] Another source of error is misdiagnosis, which can happen when doctors favour a certain diagnosis and fail to consider alternatives. [18, 9] Decision-support systems can aid the doctor by supplying a complete list of possible diagnoses and suggesting alternatives. One of the main conclusions of the report "To Err Is Human" is that the majority of medical errors are the result of systemic factors. The publication emphasized the need to focus on systems-oriented error reduction rather than fault-finding. [19] An example of a systemic-factor error is the lack of access to patient information, which is currently stored in many files and records at different locations, making access difficult. These files contain important information for the safe treatment of a patient, yet it is rarely available where and when needed. Even within the same hospital, it can be difficult to track down a patient's information stored in a different section of the hospital. [18] The implementation of an electronic patient record (EPR) would provide one centralized place to store all pertinent medical information, allowing quick access to all patient data. [20] Another systemic factor is a shortage of staff, beds, and equipment. The use of IT tools has been shown to increase the efficiency of the health care system, allowing faster turnover and an increased number of patients receiving care. IT systems have been shown to improve care delivery processes by 5% to 66%, with most increases between 12% and 20%. For example, one study demonstrated that the use of IT increased the rate of influenza vaccination by 12-18% and of pneumococcal vaccination by 20-33%. [12] Medication-related adverse events can be minimized using a computerized physician order entry (CPOE) system. Medication errors include: incorrect assessment of the drug needed, mistaking different medications with similar names or packaging, illegible handwriting on a prescription, failure to include the leading zero in front of a decimal number, processing the order incorrectly, missing critical patient information, missing critical drug information, miscommunication, and lack of quality control. The CPOE system can
be coupled with an electronic patient record and a clinical decision-support system to provide smart alerts that prevent medication errors. [8, 18] The CPOE system can provide automatic alerts for allergies and inform doctors of sound-alike and look-alike drugs, drug-food interactions, and drug-drug interactions. Use of CPOE systems can also reduce errors due to miscommunication and illegible handwriting. A study conducted by Chaudhry showed that the use of CPOE in two hospitals resulted in a significant decrease in adverse drug events: adverse events were reduced from 28 to 4, and the resultant cost from $35,283 to $26,315, over the period of study. [12]
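A toy sketch of the kind of alert logic described here (not any vendor's implementation; the drug names and interaction table below are invented):

```python
# Toy CPOE alert check: flag allergies and drug-drug interactions for a
# new medication order. The interaction table and patient data are
# illustrative only; real systems rely on curated clinical databases.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "myopathy risk",
}

def order_alerts(new_drug: str, active_drugs: list[str], allergies: list[str]) -> list[str]:
    alerts = []
    if new_drug in allergies:
        alerts.append(f"ALLERGY: patient is allergic to {new_drug}")
    for drug in active_drugs:
        reason = INTERACTIONS.get(frozenset({new_drug, drug}))
        if reason:
            alerts.append(f"INTERACTION: {new_drug} + {drug} ({reason})")
    return alerts

print(order_alerts("aspirin", active_drugs=["warfarin"], allergies=["penicillin"]))
# ['INTERACTION: aspirin + warfarin (increased bleeding risk)']
```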
F. Human factors engineering
The use of human factors engineering can significantly reduce adverse events. Many issues contribute to an inadequate health care system, including the mismanagement of patients, communication problems, staff shortages, confusion over responsibilities, distractions and interruptions, long wait times, and long work hours. These systemic failures need to be addressed in order to improve the work flow. [8] The human factors approach for organizations involves focusing on understanding the needs of all users, the required tasks, the environmental constraints, and people's skills and knowledge, in order to arrive at a better work flow. One of the changes that can be implemented using human factors engineering is a reduction in hand-offs: decreasing the number of hand-offs, or the number of people involved in the treatment of a single patient, can reduce errors. [10, 11] Wilson et al. estimate that 57% of adverse events are due to cognitive failure. [20] The use of human factors engineering has been shown to decrease the amount of training and supervision required. [17] Human factors engineering has also been shown to reduce stress, workload, fatigue and interruptions. Additionally, the use of simple technology such as a checklist can help with workload and stress. [16, 12]

III. CONCLUSION
Patient safety has always been an important responsibility for CEDs, but the breadth of issues covered has changed dramatically over the years, from electrically focused safety in the early days to a concern with adverse events and errors. With the advent of EHRs and CDSSs, a new role emerged for clinical engineers and researchers, working together to integrate IT solutions into patient care that can help reduce and eliminate adverse events. Future work will consist of refining our previous models of clinical engineering effectiveness to include these new aspects related to patient safety and patient care.

ACKNOWLEDGMENT
We wish to thank the Natural Sciences and Engineering Research Council (Canada) for the grant supporting this research.

REFERENCES
1. Frize, M. (1988) The clinical engineer: a full member of the health care team? Med. Biol. Eng. Comput. 26:461-465
2. ACCE (1992), available at: http://www.accenet.org/default.asp?page=about&section=definition
3. Frize, M. (1989) Evaluating the Effectiveness of Clinical Engineering Departments in Canadian Hospitals. Doctoral Thesis, Erasmus Universiteit, The Netherlands
4. David Y, Maltzahn W, Neuman M et al. (2003) Clinical Engineering (Principles and Applications in Engineering). Danvers
5. Frize, M. (1990) Results of an international survey of clinical engineering departments. Med. Biol. Eng. Comput. 28:153-165
6. Glouhova, M., Kolitsi, Z., Pallikarakis, N. (2000) International survey on the practice of clinical engineering: mission, structure, personnel, and resources. J. Clin. Eng. 25(5):205-212
7. Mullally, S. and Frize, M. (2008) Survey of Clinical Engineering Effectiveness in Developing World Hospitals: Equipment Resources, Procurement and Donations. IEEE/EMBS Vancouver: 4499-4502
8. Kohn L, Corrigan J, Donaldson M, Institute of Medicine (U.S.), Committee on Quality of Health Care in America (2000) To Err Is Human: Building a Safer Health System. Washington
9. Baker R, Norton P, Flintoft V et al. (2004) The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ 170(11):1678-1689
10. Easty T, Healthcare Human Factors at http://humanfactors.ca/
11. Cafazzo J, Trbovich PL, Cassano-Piche A, et al. (2009) Human Factors Perspectives on a Systemic Approach to Ensuring a Safer Medication Delivery Process. Healthcare Quarterly 12:70-74
12. Chaudhry B, Wang J, Wu S, et al. (2006) Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and Costs of Medical Care. Ann Intern Med 144(10):742-752
13. Rideout, K. (2006) Identification of Primary Risk Factors of Adverse Medical Events Using Artificial Neural Networks. MASc Thesis, Systems and Computer Eng. Dept., Carleton University, Ottawa, Canada
14. Greenwood K., Yazdanpanah M., Picard A., et al. (2006) The CHEO Long Range Clinical Equipment Plan. Report of The Children's Hospital of Eastern Ontario, Ottawa, ON
15. David Y, Maltzahn W, Neuman M, et al. (2004) Introduction to Medical Technology Management Practices. Clinical Engineering Handbook: 108-113
16. JCR, Joint Commission Resources Inc. (2001) Essential Issues for Leaders Emerging: Challenges in Health Care. JCR, Oakbrook Terrace
17. Issenberg SB, McGaghie WC, Hart IR, et al. (1999) Simulation Technology for Health Care Professional Skills Training and Assessment. JAMA 282(9):861-866
18. Leape L, Lawthers AG, Brennan TA, Johnson WG (1993) Preventing Medical Injury. QRB Qual Rev Bull 19(5):144-149
19. Gilmour J (2006) Patient Safety, Medical Error and Tort Law: an international comparison. Available at http://www.hc-sc.gc.ca/srsr/finance/hprp-prpms/results-resultats/2006-gilmour-eng.php
20. Wilson RM, Harrison BT, Gibberd RW, Hamilton JD (1999) An analysis of the causes of adverse events from the Quality in Australian Health Care Study. MJA 170(9):411-415
Extracorporeal Membrane Oxygenation in the Treatment of Novel Influenza Virus Infection: A Multicentric Hospital-Based Health Technology Assessment in Lombardy Region
P. Lago1, I. Vallone2, and G. Zarola2
1 San Matteo Polyclinic Hospital, Head of Clinical Engineering Department, Pavia, Italy
2 San Matteo Polyclinic Hospital, Clinical Engineering Department, Pavia, Italy
Abstract— ECMO, or Extracorporeal Membrane Oxygenation, is a specialized heart-lung bypass machine used to take over the body's heart and lung function while the body heals from injury or illness. One of the disturbing hallmarks of the novel A/H1N1 flu virus is that it produces severe lung damage resulting in ARDS (Acute Respiratory Distress Syndrome). Normally, patients with ARDS are placed on mechanical ventilation in Intensive Care Units (ICUs) and treated with a variety of pharmacological agents to reduce infection and lung inflammation. With A/H1N1 viral pneumonia, mechanical ventilation often does not result in adequate oxygenation, so with ECMO the burden of pumping and oxygenating the blood is taken from the heart and lungs, and they are given time to heal. San Matteo Polyclinic Hospital was nominated by the Lombardy Region and the Ministry of Health as the national reference centre for the installation of ECMO to deal with severe heart and pulmonary failure caused by the novel A/H1N1 flu virus. The Clinical Engineering Department of the hospital had a fundamental coordinating role in biomedical technology-related issues. It supported high-level management decisions (strategy, management, planning, procurement and maintenance) in response to Lombardy's guidelines on the organization of ECMO machines in the region. Moreover, it had an important role in the assessment, through the use of HTA procedures, of alternative ECMO technologies. Keywords— Hospital-Based Health Technology Assessment, Extracorporeal Membrane Oxygenation, influenza A/H1N1, Acute Respiratory Distress Syndrome.
I. INTRODUCTION In April 2009, the Mexican Ministry of Health reported an increase in severe pneumonia cases in young adults [1]. The same month, the first two cases of human infection with a novel influenza A/H1N1 virus were reported in the United States [2]. This novel swine-origin pandemic began in the northern hemisphere during late spring and early summer and appeared to decrease in intensity within a few weeks [3, 4]. By July 2009, a total of 122 countries had reported 94,512 cases of novel influenza A/H1N1 virus infection, 429 of which were fatal; in the United States, a total of 33,902 cases were reported, 170 of which were fatal [5].
Cases of novel influenza A/H1N1 virus infection have included rapidly progressive lower respiratory tract disease resulting in respiratory failure, development of acute respiratory distress syndrome (ARDS), and prolonged intensive care unit (ICU) admission. In some severe cases, extracorporeal membrane oxygenation (ECMO) was commenced for the treatment of refractory hypoxemia, hypercapnia, or both, which occurred despite mechanical ventilation and rescue ARDS therapies. ECMO is most commonly used in neonatal intensive-care units, for newborns in pulmonary distress, but it is also used for adults who, even with the use of a ventilator, need to be oxygenated until they are able to do the job without assistance. One of the new uses is in adults and children with the A/H1N1 flu. ECMO treatment provides oxygenation until lung function has recovered sufficiently to maintain appropriate O2 saturation. In July 2009, Italy began to register some cases of novel influenza A/H1N1 virus infection. The epidemiological and virological influenza surveillance network (Influnet) was strengthened, and hospitals, above all those specialized in the treatment of infectious diseases, were alerted across the Regions to be ready to handle suspected cases of novel influenza through appropriate containment and treatment measures.
II. MATERIALS AND METHODS A. The Situation in Italy In Italy, the estimated number of new cases of flu syndrome in the third week of January 2010 was 96,000, for a total of 4,293,000 cases since the beginning of surveillance. The overall incidence of flu syndrome is equal to 1.61 cases per thousand assisted. The age group most affected remains the pediatric one (0-14 years old), with an incidence of 3.23 cases per thousand assisted (4.76‰ in the range of younger children 0-4 years old and 2.46‰ in the 5-14 range). There is a slight increase in incidence in the pediatric age groups (especially in children aged 0-4), while among people over 15 years old the incidence is almost stable [6].
Italian Regions have reported to the Ministry 1038 hospitalizations for severe cases of influenza, of which 448 required ventilatory support (0.010%). The percentage of deaths related to influenza A has been updated against the total number of cases estimated by Influnet and is equal to 0.005% of patients [6].

Table 1 Number of deaths from influenza A (H1N1) per Region, January 2010, Italy [6]

Region                   Deaths
Abruzzo                  3
Basilicata               3
Calabria                 15
Campania                 53
Emilia Romagna           13
Friuli Venezia Giulia    5
Lazio                    14
Liguria                  3
Lombardia                13
Marche                   4
Molise                   4
Piemonte                 21
Puglia                   35
Sicilia                  21
Toscana                  5
Umbria                   3
Veneto                   11
P.A. Bolzano             1
P.A. Trento              1
Total deaths             228
The overall number of deaths is 228 [6] (Table 1). This value includes cases for which the regional health authorities have confirmed infection by the new A/H1N1 virus. The Italian Government has faced the spread of the new flu by providing a vaccination strategy and by prompting the Regions and Autonomous Provinces to identify reference centers for patients suffering from acute respiratory failure. In particular, San Matteo Polyclinic Hospital was nominated by the Lombardy Region and the Ministry of Health as the national reference centre for the installation of mobile ECMO to deal with severe heart and pulmonary failure. B. ECMO Technique Extracorporeal Membrane Oxygenation (ECMO) is an adaptation of the conventional cardiopulmonary bypass technique for providing life support, used for long-term support
of respiratory and cardiac function. In most cases patients with ARDS respond favourably to advanced methods of intensive care, which include various forms of mechanical ventilation and positional manoeuvres. For the small number of ARDS patients whose pulmonary gas exchange cannot be improved sufficiently, ECMO can be an additional therapeutic option during the acute phase [7]. ECMO involves connecting the patient's circulation to an external blood pump and artificial lung (oxygenator). A catheter placed in the right side of the heart carries blood to a pump, then to a membrane oxygenator, where exchange of oxygen and carbon dioxide takes place. The blood then passes through tubing back into either the venous or arterial circulation (Fig. 1). An anticoagulant is used to prevent blood clotting in the external system. An ECMO machine, in addition to removing CO2 from and adding O2 to the blood, regulates the blood temperature with a heat exchanger, removes air bubbles via drip chambers, and checks incoming and return pressures.
Fig. 1 Schematic description of an Extracorporeal Membrane Oxygenation system [7]
Moreover, the system is equipped with safety devices and monitors: air bubble detectors can identify microscopic air bubbles in the arterialized blood; arterial line filters between the heat exchanger and the arterial cannula are used to trap air and thrombi; and pressure monitors placed along the circuit measure the pressure of the circulating blood and are used to monitor for a dangerous rise in circuit pressure. The ECMO technique, however, can cause mechanical complications such as oxygenator failure, pump or heat exchanger malfunction, and problems associated with cannula placement or removal, as well as patient-related medical complications such as bleeding, neurological complications, additional organ failure, barotrauma and infection [8].
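To illustrate the kind of threshold logic such circuit monitors implement, a deliberately simplified sketch follows; the alarm limits are invented, and real devices rely on validated, device-specific thresholds and redundant hardware interlocks.

```python
# Simplified ECMO circuit alarm check. Pressure limits (mmHg) and the
# bubble flag are illustrative only.
PRE_OXYGENATOR_MAX = 300   # a rising pre-oxygenator pressure may indicate clotting
RETURN_LINE_MAX = 250
DRAINAGE_MIN = -100        # excessive suction on the venous (drainage) side

def circuit_alarms(pre_oxy: float, return_line: float,
                   drainage: float, bubble_detected: bool) -> list[str]:
    alarms = []
    if bubble_detected:
        alarms.append("AIR: bubble detected in arterialized blood")
    if pre_oxy > PRE_OXYGENATOR_MAX:
        alarms.append(f"PRESSURE: pre-oxygenator {pre_oxy} mmHg above limit")
    if return_line > RETURN_LINE_MAX:
        alarms.append(f"PRESSURE: return line {return_line} mmHg above limit")
    if drainage < DRAINAGE_MIN:
        alarms.append(f"PRESSURE: drainage {drainage} mmHg below limit")
    return alarms

print(circuit_alarms(pre_oxy=320, return_line=180, drainage=-60, bubble_detected=False))
```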
C. Hospital-Based HTA Methodology In recent years, increasing attention has been paid to the adoption of HTA principles and tools to produce evidence for managerial decision making at the hospital level. Hospital-based HTA has been recognized as a possible approach to foster HTA's impact on practice and to sustain rationally based decision-making processes regarding health technologies in health care organizations [9]. San Matteo is among the first hospitals to have organized a Health Technology Assessment (HTA) group, composed of different professionals such as doctors, clinical engineers, healthcare economists and, of course, the General Manager. This group is in charge of assessing the equipment to be introduced into the hospital, especially the most innovative or the most relevant from an organizational point of view. The group uses the procedures set out by the Italian HTA Network and the guidelines of the Ministry of Health for hospital-based HTA. The first step was the analysis of the characteristics, in terms of efficacy, efficiency and costs, of the existing alternative technologies, in order to identify the correct clinical needs. The second step was the analysis of the existing literature and, more precisely, of the HTA reports on each piece of equipment, both the most innovative and the most common, not yet operating in the hospital. This led to a synthesis of the evidence concerning efficacy and, where possible, to an analysis of costs. The evaluation of efficiency was based on the analysis of the technologies and of the existing organizational patterns, as well as of the information in existing guidelines. D. Evaluated Technologies Maquet technologies: The main components of the Emergency-MECC system (Fig. 2) include a centrifugal pump (Rotaflow™, MAQUET Cardiopulmonary AG, Hechingen, Germany) and a diffusion membrane oxygenator (Quadrox™ PLS - Permanent Life Support, MAQUET Cardiopulmonary AG, Hechingen, Germany), both mounted on a specially designed multifunctional holder (total weight 27 kg). A flow meter and a bubble sensor are integrated into the pump unit. The tubing circuit is a pre-connected, heparin-coated (Bioline™, MAQUET Cardiopulmonary AG, Hechingen, Germany), closed-loop extracorporeal circulation system for rapid setup and priming. It includes a shunt line to facilitate arterial blood gas monitoring and to simplify drug and volume administration. The total priming volume is 600 ml of normal saline. The centrifugal pump provides non-pulsatile flow rates of up to 4.5 l/min and is connected to a steering unit by a driveline with an effective length of 150 cm.
Fig. 2 The Emergency-MECC system. Hardware and disposable circuit tubes: (1) steering and control unit including battery pack; (2) belt for hand-held use; (3) driveline; (4) pole for volume resuscitation; (5) multifunctional holder; (6) membrane oxygenator; (7) centrifugal pump; (8) oxygen bottle [10]
In addition to wall connection points for 220 V and oxygen, the system is also provided with an oxygen bottle and battery pack, and thus can operate independently as a stand-alone device for approximately 90 min during patient transfer from intensive care to air or ground ambulances [10]. Decapsmart technology: The Decapsmart (Decapsmart®, Medica Srl, Medolla, Italy) is a venovenous, low-flow extracorporeal device for carbon dioxide (CO2) removal that does not need specialized staff. This device has low invasive properties and does not require surgical cannulation of large vessels. Its management does not require the presence of specialized personnel (perfusionist), and it requires minimal administration of heparin [11]. The bilumen catheter inserted into the femoral vein aspirates blood from the lateral openings. Blood, driven by a first pump, enters the Decap device, where the CO2 is removed. Subsequently, the blood passes through a hemofilter that separates plasma, which, thanks to a second pump, is reinjected upstream of the Decap device. The filtered plasma thus dilutes the blood, improving the efficiency of CO2 extraction and reducing the dose of anticoagulant, while the presence of the hemofilter prevents the formation of bubbles, to the benefit of safety. Finally, the blood purified by the hemofilter is sent back to the patient through the opening located at the extremity of the catheter. Novalung technology: The Novalung Interventional Lung Assist (iLA) device is a membrane ventilator that allows oxygen and carbon dioxide gas exchange to occur by simple diffusion. It has been used in patients with severe acute lung failure due to ARDS, inhalation injury, severe pneumonia, chest injury, foreign body aspiration, and after thoracic surgical interventions [12]. The Novalung system is a pump-less
extracorporeal system to remove carbon dioxide (iLA - Novalung® GmbH, Germany) during the secondary transfer to a higher level of care of patients with severe hypoxemic-hypercarbic respiratory failure (ARDS, ALI and other interstitial diseases), but also in patients who require immediate and emergent management of CO2 levels (severe respiratory insufficiency, intracranial hypertension). This device is attached to the systemic circulation and receives only part of the cardiac output (1-2 L/min) for extracorporeal gas exchange. The iLA consists of a plastic gas exchange module with diffusion membranes made from polymethylpentene (PMP). Gas transfer takes place without direct contact between gas and blood, and the PMP membrane surface in contact with blood is treated with a heparin coating to provide a non-thrombogenic surface. Blood flows over the exterior surface of the device's fibers, while the ventilating gas (commonly O2) flows inside these fibers. In this way the Novalung iLA mimics the native lung. This allows the blood exiting the device to have the normal amounts of oxygen and carbon dioxide that exit the normal lung [13].
III. CONCLUSIONS In the 1990s the ECMO technique was applied successfully in pneumonia and very severe chest trauma, and last year it also proved useful in the treatment of pneumonia caused by the swine influenza virus. These A/H1N1 pneumonias have proven resistant to many drugs and also to nitric oxide; the effects on the patient's health can become very serious, so ECMO is used because it is often effective when other treatment options are not. Since the alarm over swine flu and the serious lung complications it can cause, the Governor of the Lombardy Region announced the purchase of 20 ECMO machines and their activation in the major hospitals of Lombardy against possible emergencies caused by influenza A. Ten ECMO machines were installed permanently in the intensive care units of hospitals, while ten mobile ECMO machines were made available to be installed on ambulances or helicopters with special metal mounting plates. All the machines were bought by the Clinical Engineering Department and installed in the selected hospitals of the region. This Department, using the hospital-based Health Technology Assessment methodology, performed an evaluation of commercially available alternative technologies (Maquet, Decapsmart and Novalung). The evaluation found that the Decapsmart and Novalung systems are more similar, respectively, to a dialysis system and to a kind of artificial lung, while the best results in terms of effectiveness of CO2 removal are provided by the ECMO technique as implemented in the Maquet technology. Thanks to ECMO technology, in Pavia, survival was 100% in 7 patients with A/H1N1 flu. During the pandemic peak, A/H1N1 flu sufferers occupied 20% to 30% of the ICU beds of San Matteo Polyclinic Hospital.
ACKNOWLEDGMENT The authors wish to thank all members of the Clinical Engineering Department of San Matteo Polyclinic Hospital.
REFERENCES
1. World Health Organization (2009) Influenza-like illness in the United States and Mexico. http://www.who.int/csr/don/2009_04_24/en/index.html. Accessed September 10, 2009
2. CDC (2009) Swine influenza A (H1N1) infection in two children. Southern California, March-April 2009. MMWR 58:400-402
3. Dawood FS, Jain S, Finelli L et al (2009) Novel Swine-Origin Influenza A (H1N1) Virus Investigation Team. Emergence of a novel swine-origin influenza A (H1N1) virus in humans. N Engl J Med 360(25):2605-2615
4. Perez-Padilla R, de la Rosa-Zamboni D, Ponce de Leon S et al (2009) Pneumonia and respiratory failure from swine-origin influenza A (H1N1) in Mexico. N Engl J Med 361(1):680-689
5. CDC (2009) Intensive-care patient with severe novel influenza A (H1N1) virus infection. Michigan, June 2009. MMWR 58:1-4
6. Italian Ministry of Health (2010) Influenza A/H1N1. Il punto della situazione alla settimana 3 (18-24 gennaio 2010). Press release n° 22, 28 January 2010
7. http://emedicine.medscape.com/article/1818617-overview (accessed February 2010)
8. Lewandowski K (2000) Extracorporeal membrane oxygenation for severe acute respiratory failure. Crit Care 4:156-168
9. Cicchetti A, Francesconi A, Guizzetti G, Lago P, Maccarini EM, Zambianchi L (2006) Health Technology Assessment International Meeting (3rd: 2006: Adelaide, S. Aust.). Handb Health Technol Assess 3:51
10. Arlt M, Philipp A, Zimmermann M, Voelkel S, Hilker M, Hobbhahn J, Schmid C (2008) First experience with a new miniaturised life support system for mobile percutaneous cardiopulmonary bypass. Resuscitation 77:345-350
11. Ruberto F, Pugliese F, D'Alio A, Perrella S, D'Auria B, Ianni S, Anile M, Venuta F, Coloni GF, Pietropaoli P (2009) Extracorporeal removal of CO2 using a venovenous, low-flow system (Decapsmart) in a lung transplanted patient: a case report. Transplantation Proceedings 41:1412-1414
12. Matheis G (2003) New technologies for respiratory assist. Perfusion 18:245-251
13. Liebold A, Philip A, Kaiser M, Merk J, Schmid XF, Birnbaum DE (2002) Pumpless extracorporeal lung assist using an arterio-venous shunt. Applications and limitations. Minerva Anestesiol 68:387-391
Author: Paolo Lago, Ilaria Vallone, Gianluca Zarola
Institute: San Matteo Polyclinic Hospital, Clinical Engineering Department
Street: Viale Golgi, 19
City: Pavia 27100
Country: Italy
Email: [email protected], [email protected], [email protected]
MRI-Induced Heating on Patients with Implantable Cardioverter-Defibrillators and Pacemakers: Role of Lead Structure
E. Mattei1, G. Calcagnini1, M. Triventi1, F. Censi1 and P. Bartolini1
1 Dept. of Technologies and Health, Italian National Institute of Health, Rome, Italy
Abstract— Magnetic Resonance Imaging (MRI) induced heating on patients with metal implants can pose severe health risks, and careful evaluations are needed for pacemaker (PM) or implantable cardioverter-defibrillator (ICD) leads to be labeled as 'MRI conditional'. Experimental studies in this field have shown great variability in results and revealed that several aspects can affect the amount of heating induced at the lead tip. The structural parameters of the lead are one of these. In this study we performed in-vitro temperature measurements in a human-shaped phantom with a PM/ICD implant, exposed to the RF field of a 1.5 T MRI scanner. The aim of these measurements is to investigate the role of the lead structure in the heating induced at the lead tip. The same implant configurations were tested for different PM/ICD manufacturers and lead types. A total of 26 configurations were tested, considering both right and left pectoral implants. The temperature increases induced by the RF field ranged from <0.1°C to 6.9°C. In the measurements we performed, bipolar leads showed higher heating than unipolar ones (2.2°C versus 0.7°C mean temperature increase), as did active-fix leads compared with passive-fix leads (3.4°C versus 1.4°C mean temperature increase). However, other parameters, such as the number of wires and their arrangement inside the lead, still need further investigation and do not allow general conditions to be defined for immediately extending MRI to patients with metal implants. Keywords— MRI, RF field, Implantable devices, heating.
I. INTRODUCTION In the last decade, advances in device technology were the driving forces behind the study of the interactions between magnetic resonance imaging (MRI) and pacemaker (PM) and implantable cardioverter-defibrillator (ICD) systems, in both in-vivo and in-vitro experiments. The results of these studies demonstrated that the devices in use today may be more resistant to changes in function during an MR examination. Data on 430 patients who underwent clinically driven MRI are now available [1-4]. No deaths have been reported in physician-supervised MR studies in which the patients were carefully monitored, and only minor effects were observed in a few cases (minor changes in pacing threshold, the need for device reprogramming, possibly battery depletion). Despite this evidence, MRI for patients with such pacemakers remains controversial; it is not approved by the US Food and Drug Administration and it is only
being performed in specialized centers. There are indeed a number of aspects that need further investigation and that do not allow, at the moment, general standard conditions to be defined for extending MRI to patients with PMs or ICDs. In particular, the radiofrequency (RF) induced heating in tissues with metal implants is a major concern, since it can pose severe health risks. Experimental measurements of the temperature increase at the lead tip of PMs/ICDs during RF exposure show widely varying results, with values ranging from negligible up to more than 60°C [5, 6]. This reveals that a large number of elements have to be taken into account and properly addressed. Previously published papers demonstrated how the implant geometry and location, as well as the position of the patient inside the RF coil, can significantly affect the amount of induced heating at the implant tip [6]. However, other elements have not been exhaustively considered yet and thus need further investigation. In this paper we focus on some macroscopic characteristics of the leads, in particular the number of electrodes (unipolar and bipolar) and the tip fixing modality (passive and active). We measured in-vitro the temperature increase at the lead tip of PM/ICD implants, placed both in the right and in the left pectoral region, for various leads with the same implant paths. A total of 26 combinations were tested. II. MATERIALS AND METHODS We performed temperature measurements inside a full-size RF coil (length 112 cm, inner diameter 60 cm) with 16 legs forming the classic birdcage configuration. Tuning capacitors are placed on each of the legs, resulting in a low-pass structure. This system is the same as those used in 1.5 T clinical systems. The coil was fed by a quadrature power divider so as to produce a circularly polarized B1 field. The birdcage coil was housed inside a metal cage acting as an RF shield, and the exposure was realized by an RF amplifier delivering over 150 W at 64 MHz. A human-shaped phantom was used to simulate the human trunk and to host the PM/ICD implant. The phantom consists of a male-shaped transparent PVC torso, corresponding to a 70 kg male, with an internal volume of 30 l. It was filled with a saline solution composed of
hydroxyethylcellulose gelling agent (HEC) and NaCl, to meet the general requirements of the ASTM F2182-02a standard for testing MRI-induced heating of implants [8]. The mixture we used produced a conductivity at 64 MHz of about 0.6 S m-1, corresponding to a salinity of about 0.4% by weight, and a permittivity of 79. The amount chosen for the HEC (2% by weight) allowed implants to be placed in the gel, moved and replaced, but at the same time provided a barrier to rapid thermal convection. Fig. 1 shows a picture of the RF coil and the human-shaped phantom.
Fig. 1 RF coil and human-shaped phantom equipped with fluoroptic® probes for temperature measurements.
Temperature measurements were performed using a fluoroptic® thermometer (Luxtron model 3100) with four probes (SMM model). These plastic fiber probes (1 mm diameter) minimize perturbations of the RF fields. The Luxtron system was operated at 8 samples per second, with a resolution of 0.1°C. The background noise was in the range of the Luxtron resolution. For passive-fix leads, the terminal portion of the temperature probe was placed in transversal contact (i.e. the probe perpendicular to the lead-wire axis) with the lead tip: this contact configuration was demonstrated to minimize the measurement error associated with the physical dimensions of the probes [9]. For active-fix leads, the temperature sensor was placed inside the helix tip of the lead. Realistic PM/ICD implant configurations were reproduced inside the phantom. These configurations were derived from RX images of patients with PM implants. The same paths were reproduced for all the leads tested. Differences in lead lengths resulted in minor differences in the paths. Both atrial and ventricular lead paths were considered, and each implant was tested with the PM/ICD chassis placed in the right and in the left pectoral region. The implant was fixed on a 26x18 cm2 PVC grid, which maintained a consistent separation distance between the metallic structures, the phantom surface and the temperature probes.
We tested 8 stimulators (1 single-chamber and 6 double-chamber PMs; 1 double-chamber ICD) from 7 manufacturers, and 12 leads of various lengths, diameters and structures. The specific characteristics of the leads we tested are reported in Table 1.

Table 1 Structural properties of the leads tested in the RF coil

Lead #   Polarity   Fixing type   Length (cm)   Tip surface (mm2)
1        Unipolar   passive       60            14.2
2        Unipolar   passive       60            5.7
3        Unipolar   passive       62            4.2
4        Bipolar    passive       62            9.1
5        Bipolar    active        60            5.3
6        Bipolar    passive       62            3.1
7        Bipolar    active        46            3.5
8        Bipolar    passive       58            3.5
9        Bipolar    passive       52            3.5
10       Bipolar    passive       53            3.8
11       Bipolar    passive       65            5
12       Bipolar    active        65            8

The commercial names and brands of the PMs/ICDs and leads cannot be explicitly mentioned in the paper. Before starting the tests, a calorimetric study was performed in order to set the amplitude of the RF signal so as to produce inside the phantom a whole-body specific absorption rate (SAR) of 1 W kg-1. Each test had a length of 660 s: 60 s of temperature baseline recording (no RF exposure), 300 s of RF exposure, and 300 s of cooling phase. For each test, the induced temperature increase was calculated as the difference between the mean temperature at baseline and the mean temperature in the last 5 seconds of the RF exposure. During all the tests, temperature was measured at the tip of the implant leads and at the tip of a 20 cm-long straight metal wire, always in the same position over the grid, which was kept as a reference. The wire provided us with a means to ensure the repeatability of the measurements we performed. In addition, the forward power delivered by the RF amplifier was constantly monitored by a power meter during the experiments, and no significant changes were observed.
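A minimal sketch of this ΔT computation on a sampled temperature trace (8 samples per second, matching the Luxtron rate; the synthetic heating curve below is illustrative, not measured data):

```python
import numpy as np

FS = 8                      # Luxtron sampling rate, samples per second
BASELINE_S, EXPOSURE_S = 60, 300

def delta_t(trace: np.ndarray) -> float:
    """Mean of the last 5 s of RF exposure minus mean of the 60 s baseline."""
    baseline = trace[:BASELINE_S * FS]
    end_of_exposure = trace[(BASELINE_S + EXPOSURE_S - 5) * FS:
                            (BASELINE_S + EXPOSURE_S) * FS]
    return end_of_exposure.mean() - baseline.mean()

# Synthetic 660 s test: flat baseline, exponential heating during RF
# exposure (60-360 s), then exponential cooling, plus probe noise.
t = np.arange(0, 660, 1 / FS)
temp = 22.0 + np.where(
    (t >= 60) & (t < 360), 2.9 * (1 - np.exp(-(t - 60) / 90)),
    np.where(t >= 360, 2.9 * (1 - np.exp(-300 / 90)) * np.exp(-(t - 360) / 120), 0.0),
)
temp += np.random.default_rng(0).normal(0, 0.05, t.size)

print(f"dT = {delta_t(temp):.1f} °C")   # about 2.8 °C for this synthetic trace
```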
III. RESULTS
Table 2 summarizes the results we obtained in terms of temperature increases measured at the lead tip. We observed widely varying results, ranging from values comparable with the resolution of the fluoroptic® thermometer up to almost 7°C.

Table 2 Temperature increases induced by the RF exposure at the implant lead tip

Implant #   Lead type                  Stimulation site   Implant location   dT (°C)
1           Bipolar passive fixing     Ventricle          right              1.4
1           Bipolar passive fixing     Ventricle          left               1.1
1           Bipolar passive fixing     Atrium             right              0.8
1           Bipolar passive fixing     Atrium             left               0.1
2           Bipolar passive fixing     Ventricle          right              2.9
2           Bipolar passive fixing     Ventricle          left               3.6
2           Unipolar passive fixing    Atrium             right              1.0
2           Unipolar passive fixing    Atrium             left               1.7
3           Bipolar active fixing      Ventricle          right              6.9
3           Bipolar active fixing      Ventricle          left               3.6
3           Bipolar passive fixing     Atrium             right              0.4
3           Bipolar passive fixing     Atrium             left               1.0
4           Unipolar passive fixing    Ventricle          right              0.4
4           Unipolar passive fixing    Ventricle          left               0.4
4           Unipolar passive fixing    Atrium             right              0.3
4           Unipolar passive fixing    Atrium             left               0.3
5           Bipolar active fixing      Ventricle          right              2.7
5           Bipolar active fixing      Ventricle          left               3.9
6           Bipolar passive fixing     Ventricle          right              2.4
6           Bipolar passive fixing     Ventricle          left               2.6
6           Bipolar active fixing      Atrium             right              2.1
6           Bipolar active fixing      Atrium             left               1.2
7           Bipolar passive fixing     Ventricle          right              2.6
7           Bipolar passive fixing     Ventricle          left               3.2
8           Bipolar passive fixing     Atrium             right              0.3
8           Bipolar passive fixing     Atrium             left               1.0

Figure 2 shows the bar plots of the temperature increases grouped by stimulation site and chassis location. A marked difference was observed between the induced heating at the tip of atrial and ventricular leads. In the former case the lead path was shortened by wrapping it around the stimulator chassis. Temperature increases of bipolar leads were generally higher than those of unipolar ones: the mean value (±s.d.) measured for the former was 2.2°C (±1.8°C), compared to 0.7°C (±0.5°C) for the latter. The active-fix leads were also generally associated with higher induced heating than passive-fix ones: 3.4°C (±2.1°C) mean temperature increase versus 1.4°C (±1.0°C). On the other hand, the distribution of lead lengths and tip areas was not large enough to allow the observation of significant differences in the tests we conducted.

Fig. 2 Temperature increases induced by the RF exposure at the lead tip for implants in the ventricle (upper panel) and in the atrium (lower panel), for right and left implants; dT (°C) versus Implant #. Implant # refers to the first column of Table 2.
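As a cross-check, the group means quoted above can be recomputed directly from the dT column of Table 2 (a small sketch; the lists transcribe Table 2 by lead polarity and fixing type):

```python
import statistics as st

# dT values (°C) from Table 2, grouped by lead polarity and fixing type.
bipolar  = [1.4, 1.1, 0.8, 0.1, 2.9, 3.6, 6.9, 3.6, 0.4, 1.0,
            2.7, 3.9, 2.4, 2.6, 2.1, 1.2, 2.6, 3.2, 0.3, 1.0]
unipolar = [1.0, 1.7, 0.4, 0.4, 0.3, 0.3]
active   = [6.9, 3.6, 2.7, 3.9, 2.1, 1.2]
passive  = [1.4, 1.1, 0.8, 0.1, 2.9, 3.6, 1.0, 1.7, 0.4, 1.0,
            0.4, 0.4, 0.3, 0.3, 2.4, 2.6, 2.6, 3.2, 0.3, 1.0]

for name, vals in [("bipolar", bipolar), ("unipolar", unipolar),
                   ("active", active), ("passive", passive)]:
    print(f"{name:8s} mean dT = {st.mean(vals):.1f} °C (n={len(vals)})")
# bipolar 2.2, unipolar 0.7, active 3.4, passive 1.4 - matching the text
```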
IV. DISCUSSION Previous studies have shown that MRI RF-induced heating of metal structures is a very complex phenomenon, which involves a large number of variables. It has already been demonstrated how the implant geometry (i.e. the path of the lead within the thorax and the location of the chassis in the pectoral region) can affect the amount of induced heating at the tip. Little attention has instead been paid, so far, to the structural parameters of the leads.
The temperature measurements we performed on various types of PM/ICD leads revealed widely varying results that cannot be justified by the minor differences in implant geometry or lead path. The position of the lead tips did not change during the tests, and the lead path was substantially the same for equivalent configurations. In addition, the exposure and environmental conditions were monitored to guarantee reproducible results: the temperature increases measured at the tip of the reference wire were comparable across all measurements. The parameters we investigated are the number of electrodes and the type of tip fixing. Bipolar leads are associated with higher heating compared to unipolar ones. The different structure of the lead may produce a different coupling between the RF field and the metallic wires the lead is made of. In addition, the insulation sheath, which is generally thicker in bipolar leads so as to increase the impedance towards the phantom for the current induced along the lead, may contribute to explaining this difference. Temperature increases measured for active-fix implants are higher than for passive implants. Active fixing is obtained by a thin metal helix which goes deep into the heart wall and works as an electrode. In this case, the smaller the metal surface the current can flow out from, the higher the power density and the induced heating. In addition, the temperature probe positioning also differs for the two types of lead: for active implants, the temperature sensor is inserted into the helix and is in contact with the metal electrode for almost its entire length; for passive-fix leads, the temperature probe is placed in transversal contact with the lead tip, causing a measurement error that has to be taken into account [6]. Regardless of the lead structure, we observed that atrial implants are associated with lower temperature increases than ventricular ones. The path from the chassis to the atrium is shorter than to the ventricle, requiring the exceeding portion of the lead to be wrapped around the chassis. This implies a shorter exposed length for the lead, and a consequently smaller induced current. No prevalent heating was instead observed for right versus left implant positioning. This result may appear in contrast with other findings reported in the literature [10], where right implants are generally associated with higher induced heating. In fact, the realistic configurations we reproduced in this study minimize the differences between left and right positioning and make it difficult to highlight this aspect. V. CONCLUSIONS Structural parameters of PM/ICD leads are important elements that can significantly affect the heating induced by the RF field during MRI procedures. In particular, bipolar
leads are related to higher temperature increases than unipolar ones, as are active-tip-fixing implants compared with passive ones. However, other parameters, such as the number of wires and their arrangement inside the lead, still need further investigation and do not allow general conditions to be defined for immediately extending MRI to patients with metal implants. At the moment, when a patient with a PM/ICD is to undergo an MRI examination, preliminary studies on the particular implant characteristics are necessary in order to evaluate the risks and benefits and to plan the treatment so as to minimize the potential health hazards for the patient.
ACKNOWLEDGMENT The authors wish to thank the PM/ICD manufacturers who provided the products tested. This research was partially funded by the Italian Ministry of Health.
REFERENCES
1. Roguin A, Schwitter J, Vahlhaus C, et al. MRI in individuals with cardiovascular implantable electronic devices. Europace 2008;10:336-46
2. Martin ET, Coman JA, Shellock FG, Pulling CC, Fair R, Jenkins K. Magnetic resonance imaging and cardiac pacemaker safety at 1.5-T. J Am Coll Cardiol 2004;43:1315-24
3. Sommer T, Naehle CP, Yang A, et al. Strategy for safe performance of extrathoracic MRI at 1.5T in the presence of cardiac pacemakers in non-pacemaker-dependent patients: a prospective study with 115 examinations. Circulation 2006;114:1285-92
4. Nazarian S, Roguin A, Zviman MM, et al. Clinical utility and safety of a protocol for noncardiac and cardiac MRI of patients with permanent pacemakers and ICDs at 1.5T. Circulation 2006;114:1277-84
5. Achenbach S, Moshage W, Diem B, Bieberle T, Schibgilla V, Bachmann K. Effects of magnetic resonance imaging on cardiac pacemakers and electrodes. Am Heart J 1997;134(3):467-73
6. Mattei E, Triventi M, Calcagnini G, Censi F, Kainz W, Mendoza G, Bassen HI, Bartolini P. Complexity of MRI induced heating on metallic leads: experimental measurements of 374 configurations. Biomed Eng Online 2008;7:11
7. ASTM F2182-02a, "Standard Test Method for Measurement of Radio Frequency Induced Heating Near Passive Implants During Magnetic Resonance Imaging", ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA, 19428-2959 USA
8. Mattei E, Triventi M, Calcagnini G, Censi F, Kainz W, Bassen HI, Bartolini P. Temperature and SAR measurement errors in the evaluation of metallic linear structures heating during MRI using fluoroptic probes. Phys Med Biol 2007;52(6):1633-46
9. Calcagnini G, Triventi M, Censi F, Mattei E, Bartolini P, Kainz W, Bassen HI. In vitro investigation of pacemaker lead heating induced by magnetic resonance imaging: role of implant geometry. J Magn Reson Imaging 2008;28(4):879-86
Author: Eugenio Mattei
Institute: Italian Institute of Health
Street: Viale Regina Elena 299
City: Roma
Country: Italy
Email: [email protected]
Adoption and Sophistication of Clinical Information Systems in Greek Public Hospitals: Results from a National Web-based Survey
S. Kitsiou1, V. Manthou1, M. Vlachopoulou1 and A. Markos1
1 University of Macedonia, Department of Applied Informatics, Thessalonica, Greece
Abstract— Objectives: The objective of this study was to assess the current level of Clinical Information Systems (CIS) adoption and sophistication in Greek public hospitals through a national web-based survey. To do so, a comprehensive measurement instrument that integrates the existing theoretical and empirical literature on CIS adoption in hospitals was developed. Methods: A secured web-based survey of 107 Chief Information Officers in Greek public hospitals (both rural and urban) was conducted, in order to identify the availability of various Clinical Information Systems, their functional sophistication (i.e. computerized activities/processes), the intensity of their use, as well as their level of integration. The clinical domains assessed by the instrument include: (1) Patient Management, (2) Physician Support, (3) Nursing Support, (4) Emergency Department, (5) Operating Rooms, (6) Laboratories, (7) Radiology and (8) Pharmacy. Results: A total of 70 questionnaires were completed online by CIOs (through a dedicated web-survey platform), which represents a response rate of 65.4%. Our findings indicate that Patient Management Information Systems (e.g. Admission-Discharge-Transfer Systems and Outpatient Management Systems) as well as Pharmacy and Laboratory Information Systems have been adopted so far by the vast majority of Greek public hospitals (>68.6%) and are utilized by end-users on a regular basis. Overall findings demonstrate a moderate-to-high level of functional sophistication for these systems but a significantly low level of integration. Adoption of Outpatient (15.7%) and Inpatient Electronic Medical Record Systems (22.9%), Nursing Information Systems (28.6%), Computerized Physician Order Entry Systems (14.3%), as well as Telemedicine systems for diagnostic purposes (14.3%) was found to be significantly low, confirming that Greek public hospitals have so far failed to successfully incorporate and exploit a wide range of CIS/IT to improve the quality, effectiveness and efficiency of patient care services. Keywords— Clinical Information Systems, IT Sophistication, IT Adoption, Greek Hospitals, Survey
I. INTRODUCTION
The apparent need for the adoption and diffusion of Clinical Information Systems (CIS) in healthcare organizations and the positive impact that these systems can have on the quality (e.g. [1-2]), effectiveness and efficiency (e.g. [3-
4]) of care services have been analyzed and depicted over the years in the Healthcare Informatics literature. Nowadays, in many European nations, as well as other countries around the world, there is a growing awareness that strategic investments in innovative CIS as well as other types of Health Information Systems (HIS) and e-services can yield significant improvements and business value not only for healthcare organizations but also for an entire healthcare system. This can be evidenced by the fact that numerous eHealth strategies, research initiatives, implementation projects, and other activities have been initiated across Member States and other countries beyond Europe to promote the introduction of various ICT-enabled solutions at different levels within the healthcare sector [5]. In Greece, since 2002, the Ministry of Health and Social Solidarity - in collaboration with other government bodies, non-profit organizations and beneficiaries - has tried to accelerate the introduction of integrated HIS and e-services in healthcare organizations, mainly through a number of regional, large-scale implementation projects within the “Information Society” ICT Action Plan of 2000-2006 [6]. One of the main aims of these projects was to introduce in public hospitals various clinical information systems (e.g. electronic medical record systems, laboratory information systems, etc.) and to integrate them, based on international standards (e.g. HL7). Furthermore, the growing demands to improve the quality of care and to promote patient safety have prompted many other public hospitals as well (which did not participate in the aforementioned projects), to explore new opportunities for investing in CIS adoption. Nevertheless, despite the aforementioned initiatives and efforts, comprehensive information about the current level of CIS adoption in Greek public hospitals remains unknown due to a lack of empirical evaluation studies in this field. This study aims to address this gap by presenting comprehensive statistical information on the current status of clinical information systems availability, functional capacity, use, and integration, in Greek public hospitals. The results and main findings that are depicted in this paper are only a glimpse of a large-scale survey that was conducted by the first author (as part of his doctoral dissertation), with the aim to evaluate the adoption and sophistication of Health Information Systems and Technologies in Greek
public hospitals, as well as the factors and characteristics that influence their diffusion in the working environment [7].

II. METHODS
A. Development of the Survey Instrument
In order to develop a comprehensive assessment instrument, capable of measuring not only the availability but also the intensity of use and the capabilities of CIS (i.e. supported functions and level of integration), a comprehensive literature review was conducted over a 6-month period to identify theoretical and empirical studies that provide insights into the key dimensions and measurement indicators used for the conceptualization and measurement of CIS adoption in hospitals. Based on the findings from this process (e.g. [8-11]), and in particular the work of Paré and Sicotte [11], a benchmark model was designed and subsequently a survey instrument, incorporating 224 measurement indicators, was developed for the operationalization of the model. In order to further validate the content of the instrument before the implementation of the survey (content validity), a review process with 10 field experts was also conducted. The final instrument used to evaluate the adoption of CIS in Greek public hospitals consisted of the following clinical domains and sub-domains: (1) Patient Management, (2) Patient Care (Physician Support, Nursing Support, Operating Rooms, Emergency Department), and (3) Clinical Support (Laboratory, Radiology, and Pharmacy). Each of these domains/sub-domains had 4 sets of questions that investigated the following core dimensions: the range of computerized activities, the availability of CIS, the extent of CIS use by the end-users, and the level of CIS integration.
B. Variables and Scoring
Questions regarding the dimension of computerized activities consisted of a list of processes and activities that involve the use of clinically oriented computer-based applications, information systems, and/or technologies. A score of "1" was assigned for each activity reported as being computerized and a score of "0" otherwise. The percentage of hospitals possessing these computerized activities in each domain/sub-domain was calculated for the analysis of the results. Questions assessing the dimensions of CIS availability and intensity of use comprised a list of various well-known CIS, which were measured on a 0-7 Likert scale; zero represented "Not Available" and the 1-7 scale denoted availability and the extent of use (1: "Barely Used" to 7: "Extensively Used"). For the statistical analysis, in each domain the percentage of CIS reported by the respondents as available, and subsequently the corresponding mean for their intensity of use, were calculated. Each domain/sub-domain also included a set of questions assessing the level of integration between the available CIS of that particular domain and other hospital domains. The integration dimension was measured on a 1-7 scale (1: "Not at all" to 7: "Fully integrated"). The mean scores of integration were calculated in each domain for the analysis of the results.

C. Survey Implementation and Data Collection
Due to the technical orientation of the instrument, which required sufficient knowledge and expertise in the field of health informatics, a key prerequisite for the selection of the population was that hospitals participating in the survey should have a distinct IT department with an appointed Chief Information Officer (CIO)/IT director. To identify the qualified hospitals and to collect the necessary information about each hospital's CIO (e.g. name, email, phone number) into a database, telephone interviews were conducted with the CEOs of all 132 public hospitals that constitute the National Health System of Greece. This process yielded a total of 107 qualified hospitals. Next, an email invitation to participate in the survey was sent to the CIO of each hospital (n=107), along with a user name, a password and a link to the web-based questionnaire. The data were collected over a period of three and a half months (mid-July to the end of September 2007). In order to increase the response rate, two email reminders were sent: one two weeks after the beginning of the survey and one a week before its end. Data were exported from the survey platform to SPSS version 15 for analysis.
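To make the scoring scheme concrete, the sketch below (hypothetical responses, not the survey data) shows how the availability percentage and the mean intensity of use follow from the 0-7 answers for a single system:

    # Illustrative sketch (hypothetical responses, not the survey data) of the
    # scoring described above. Each hospital answers 0-7 for a given system:
    # 0 = "Not Available"; 1-7 = available, rated from "Barely Used" to
    # "Extensively Used".
    from statistics import mean

    responses = [0, 7, 6, 5, 0]          # five hypothetical hospitals, one CIS

    adopters = [r for r in responses if r > 0]

    availability_pct = 100 * len(adopters) / len(responses)  # share reporting it
    intensity_mean = mean(adopters)                          # mean 1-7 rating among adopters

    print(f"availability = {availability_pct:.1f}%")         # 60.0%
    print(f"mean intensity of use = {intensity_mean:.1f}")   # 6.0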
III. RESULTS
Of the 107 public hospitals, 70 completed the online survey, yielding a response rate of 65.4%. The distribution of the responding hospitals across the Healthcare Authority Regions that make up the Greek National Healthcare System was highly satisfactory, since responses were collected from hospitals in all regions. The weighted response rate across regions was 64.5%.

A. Hospitals' Internal Characteristics
Of the 70 public hospitals that completed the survey, half were medium to large with 250 or more staffed beds, while 31% had 101-250 beds and 19% had fewer than 100 beds. As shown in Table 1, the average number of staffed
beds in the sample was quite high (330.1 beds). However, the average number of internal staff working full-time in the IT department of the responding hospitals was found to be quite low (mean = 3).

Table 1 Profile of Responding Hospitals and CIOs
Hospitals' Internal Characteristics                    Mean/%    Range
Structural Capacity
  Number of staffed beds                                330.1     21-1251
  Number of internal staff in the IT Department         3         1-15
Financial Capacity
  Investments in ICT (Annual IT budget %)               1.2       0.5-3
Education Level of CIOs (%)
  PhD                                                   4.3
  Graduate (Master's Degree)                            18.6
  Undergraduate (Higher Education Institute)            35.7
  Undergraduate (Technological Training Institute)      24.3
  Other (e.g. High School, Certificate)                 17.1
CIOs' Managerial Tenure
  Experience in current position (years)                7.7       1-25
  Experience in current hospital (years)                11.3      1-30
CIOs' IT Tenure
  Working experience in the field of IT (years)         12.1      1-32
Financial capacity, measured by the annual percentage of their total budget that hospitals allocate to the adoption of ICTs, was found to be considerably low. Based on the results (Table 1), Greek public hospitals allocate on average only 1.2% of their total annual budget to the acquisition and implementation of new information systems and communication technologies. However, it should be noted that this percentage is fairly close to the average (1.8%) recorded by the Health Information Network Europe (HINE) organization in a similar survey conducted in 2006 among public and private hospitals in 15 EU countries. With regard to the educational capacity of IT Directors, the vast majority (82.9%) hold a university degree. Nevertheless, a small percentage of hospital IT Departments (17.1%) are managed by people who have only a high school diploma or a training certificate. Findings regarding the managerial tenure of the CIOs indicate considerable variability in both the years of work experience in the current position and in other IT-related positions in the current hospital. In particular, the average experience of the CIOs in their current position is 7.7 years (range 1-25), while their average experience in other IT-related positions within the current hospital is 11.3 years (range 1-30). The overall average IT tenure is 12.1 years (range 1-32). These results indicate that the IT Directors who completed the survey had spent enough time in their institutions and had enough knowledge about these hospitals to be accurate reporters of the level of CIS adoption. Hence, our decision to select IT Directors as key respondents is justified.

B. Patient Management Services
As shown in Table 2, with the exception of bed availability and waiting list management, the vast majority of hospitals (>80%) have computerized most of the basic patient management processes (e.g. inpatient admissions and discharges, transfers, and outpatient appointment scheduling). In 95.7% of the cases these activities are supported by an Admission-Discharge-Transfer (ADT) information system in the inpatient admitting office, while 81.4% of the responding hospitals reported the adoption of specialized outpatient management information systems to electronically support appointment management in outpatient clinics (Table 3). Nevertheless, information technologies such as bar coding systems to track medical records or smart card readers for patient identification have not been adopted yet. For the two systems reported as available by the majority of the hospitals, the mean values representing the frequency with which they are used by end-users were found to be particularly high (6.3 and 6.6 respectively). However, the level of integration between patient management systems and other systems within the same hospital was low (Table 4).

C. Patient Care
Unlike patient management, the majority of the responding hospitals (65.7%) have not so far computerized any of the activities assessed in the four sections of the Patient Care domain (i.e. Physician Support, Nursing Support, Emergency Room, Operating Room), with the exception of patient admissions in the Emergency Room, which, however, is not directly related to the care process (Table 3). Medication order entry by physicians (28.6%), medication administration by nurses (34.3%), and patient diet management (32.9%) were found to be the most frequently computerized activities (Table 3). Availability and use of various CIS within the four sections of the patient care domain was found to be significantly low, with the exception of patient management systems in the Emergency Room. According to the responding hospitals, Nursing Information Systems, Electronic Medical Record (EMR) Systems in inpatient and outpatient clinics, Computerized Physician Order Entry (CPOE) systems, and Telemedicine systems for triage purposes were available in only 14.3% to 22.9% of the hospitals, while other well-known clinical information systems and technologies, such as Clinical Decision Support Systems (CDSS), were scarce. Apparently, the most frequent information technologies utilized in patient care for clinical purposes are Personal Computers (PCs) placed at the nursing
stations in the emergency rooms (45.7%) and clinical departments (67.1%). The limited availability and functional sophistication of the aforementioned systems, shown by the rate of supported activities and by the low frequency with which, in most cases, these systems are used by the clinical personnel (Table 4), provide evidence that approximately 70% of Greek public hospitals conduct the majority of clinical care activities manually, through paper-based records.

D. Clinical Support Services
In clinical support services, out of the 16 activities investigated in relation to laboratories, radiology, and pharmacy, only 8 (50%) were reported as computerized in at least 50% of the sample (Table 3). Most of these were in the pharmacy section. Radiology activities were the least computerized among all sections of the Clinical Support domain. With regard to the laboratories, the most frequently computer-supported activities included: results capturing from analyzers (70%), patient registration and admission (62.9%), and specimen archiving (51.4%). Pharmacy Information Systems (PhIS) were adopted by all of the participating hospitals (100%); however, bar coding systems (e.g. for the preparation, checking and distribution of medications) and extranet links to medication suppliers were scarce, at 11.4% and 7.1% respectively (Table 3). Laboratory Information Systems (LIS) were available in more than half of the hospitals (68.6%). Yet, bar coding systems for the identification of blood specimens were present in less than half of the sample (42.9%). Electronic orders for laboratory tests, as well as results reporting to medical units, were available in only 21.4%. Contrary to the laboratory and pharmacy, in radiology departments the adoption of Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) was particularly low, at 25.7% and 8.6% respectively. Generally, as shown in Table 4, all of the aforementioned systems are used frequently in most hospitals, with the exception of the radiology section, in which the average level of system use was lower than in the other two sections. In particular, the mean level of system use varied between 4.7 and 6.3 in pharmacies, between 4.4 and 5.8 in laboratories, and between 4 and 4.3 in radiology.
IV. CONCLUSIONS

This paper presented the results of a national web-based survey conducted in Greece to evaluate the adoption and sophistication of Clinical Information Systems (CIS) in public hospitals. Overall, the findings reported in this study indicate that the majority of Greek public hospitals (70%) have failed so far to successfully incorporate and exploit a variety of clinical information systems and technologies that support the documentation of daily nursing and physician care activities at the bedside and offer great potential for improving the quality of care, as well as the effectiveness and efficiency of the personnel. On the other hand, patient management, laboratory and pharmacy information systems with a moderate-high level of functional sophistication have been implemented so far by more than half of the hospitals (>68.6%). Nevertheless, based on the findings it becomes apparent that systems integration remains a critical issue and a barrier for all Greek public hospitals, since the vast majority of the aforementioned systems operate in an autonomous mode.
REFERENCES
1. Cordero L, Kuehn L, Kumar RR, Mekhjian HS (2004) Impact of computerized physician order entry on clinical practice in a newborn intensive care unit. Journal of Perinatology 24(2):88-93
2. Mullett CJ, Evans RS, Christenson JC, Dean JM (2001) Development and impact of a computerized pediatric antiinfective decision support program. Pediatrics 108(4):e75
3. Wong DH, Gallegos Y, et al. (2003) Changes in intensive care unit nurse task activity after installation of a third-generation intensive care unit information system. Critical Care Medicine 31(10):2488-2494
4. Kuperman GJ, Jonathan M, et al. (1999) Improving response to critical laboratory results with automation. Journal of the American Medical Informatics Association 6(6):512-522
5. e-Health ERA Report (2007) eHealth priorities and strategies in European countries. eHealth Research Area, European Commission, Information Society and Media
6. Information Society S.A. (2001) Information Society Action Plan 2000-2006. Available at: http://www.infosoc.gr/infosoc/el-GR/epktp/
7. Kitsiou S, Manthou V, Vlachopoulou M (2009) Evidence of hospitals e-profile: Level of adoption and use of information systems in Greek public hospitals. The Economist, Kathimerini Special Editions. Available at http://www.webcitation.org/5nbqiG1Ih
8. Burke DE, Wang BL, Wan TH, Diana ML (2002) Exploring hospitals' adoption of information technology. J Med Systems 26(4):349-355
9. Jaana M, Ward MM, Paré G, Wakefield DS (2005) Clinical information technology in hospitals: A comparison between the state of Iowa and two provinces in Canada. Int J Med Inf 74(9):719-731
10. Menachemi N, Burke D, Clawson A, Brooks R (2005) Information technologies in Florida's rural hospitals: Does system affiliation matter? J Rural Health 21(3):263-268
11. Paré G, Sicotte C (2001) Information technology sophistication in health care: an instrument validation study among Canadian hospitals. Int J Med Inf 63(3):205-223
Author: Kitsiou Spyros
Institute: University of Macedonia
Street: 156 Egnatia Str
City: Thessaloniki
Country: Greece
Email: [email protected]
Table 2 Functional sophistication of CIS in terms of computerized activities/processes

Computerized Activities                          %
Patient Management
  Inpatient admissions                           95.7
  Inpatient discharges                           94.3
  Patient index                                  88.6
  Inpatient pre-admissions                       82.9
  Outpatient appointments management             82.9
  Inpatient transfers                            81.4
  Bed availability management                    48.6
  Waiting list management                        18.6
Patient Care (Physicians)
  Medications order entry                        28.6
  Inpatient medical records                      21.4
  Order entry for blood tests                    18.6
  Classification of diseases                     17.1
  Medical discharge summaries                    12.9
  Outpatient medical records                     11.4
  Order entry for pathology tests                7.1
  Operative reports                              7.1
  Order entry for radiology tests                5.7
  Clinical protocols and guidelines              4.3
  Order entry for surgeries                      2.9
Patient Care (Nursing)
  Medication administration                      34.3
  Patient diet management                        32.9
  Staff scheduling                               27.1
  Historical record keeping                      15.7
  Nursing flowsheets                             8.6
  Patient acuity/condition recording             4.3
  Vital signs recording                          2.9
Patient Care (Emergency Room)
  Registration and admissions                    51.4
  Patient waiting time management                12.9
  Order entry/results reporting                  8.6
  Staff scheduling                               8.6
  Recording of patient's clinical data           7.1
Patient Care (Operating Room)
  Materials management                           18.6
  Case costing                                   17.1
  Operations booking                             15.7
  Staff scheduling                               8.6
  Clinical notes recording                       7.1
  Anesthetic notes recording                     4.3
Clinical Support (Laboratory)
  Results capturing from analyzers               70
  Patients registration & admission              62.9
  Specimen archiving                             51.4
  Blood bank management                          35.7
  Recurring specimen management                  35.7
  Staff workload management                      14.3
Clinical Support (Radiology)
  Patients registration & admission              28.6
  Results capturing                              14.3
  Label generation                               11.4
  Staff workload management                      2.9
Clinical Support (Pharmacy)
  Medication administration                      100
  Medication purchasing                          82.9
  Wards stock management                         72.9
  Drug profile lookup                            67.1
  Registration of medication orders              61.4
  Drug interaction checking                      8.6

Table 3 Availability and Use of Clinical Information Systems and Technologies
Available CIS/T                                            %      Intensity of Use (Mean)
Patient Management
  ADT IS                                                   95.7   6.3
  Outpatient Management IS                                 81.4   6.6
  Bar coding to track medical records                      2.9    2
  Smart cards (readers) for patient identification         0      0
Patient Care (MD)
  Electronic Medical Record (EMR) system for inpatients    22.9   3.6
  Ambulatory EMR system (for outpatients)                  15.7   3.8
  Computerized Physician Order Entry (CPOE) system         14.3   4.5
  Telemedicine system for triage and evaluation purposes   14.3   2
  Telemedicine systems for transmission of diagnostics     11.4   3
  Clinical Decision Support System                         1.4    2
Patient Care (Nursing)
  PCs at the nursing stations                              67.1   3.8
  Nursing IS                                               28.6   4
  Portable computing devices (e.g. laptops)                24.3   2
Patient Care (Emergency Room)
  Patient Management System                                51.4   7
  PCs at the nursing station                               45.7   3.8
  Paging system for doctors                                21.4   5.3
  Portable computing devices                               8.6    2.6
Patient Care (Operating Room)
  Surgery/OR management IS                                 15.7   4.6
  Portable devices for data input                          12.9   2.6
  Dictation system for post-operative reports              2.9    2
  Bar coding system for surgical material                  1.4    6
Clinical Support (Laboratory)
  Laboratory information system (LIS)                      68.6   5.8
  Bar coding system for samples                            42.9   5
  Electronic requisitions for lab tests                    21.4   4.4
  Electronic reporting of test results                     21.4   4.4
Clinical Support (Radiology)
  Radiology information system (RIS)                       25.7   4.2
  Picture Archive and Communication System (PACS)          8.6    4.3
  Bar coding (e.g. for films management)                   2.9    4
Clinical Support (Pharmacy)
  Pharmacy information system (PhIS)                       100    6.3
  Bar coding                                               11.4   4.7
  Extranet links to pharmaceutical suppliers               4.9    5
Table 4 Level of systems integration

Integration of CIS                                                     Mean Score
Integration of patient management systems to other hospital systems   2.6
Integration of physician support systems to other hospital systems    1.7
Integration of nursing support systems to other hospital systems      1.8
Integration of OR systems to other hospital systems                   1.4
Integration of ER systems to other hospital systems                   1.6
Integration of laboratory systems to other hospital systems           1.9
Integration of radiology systems to other hospital systems            1.2
Integration of pharmacy systems to other hospital systems             2.4
Risk Management Process and CE Marking of Software as MD

Fabrizio Dori, Ernesto Iadanza, Roberto Miniati, and Samuele Mattei

Department of Electronics and Telecommunications, University of Florence, Florence, Italy
Abstract— Computers and software are increasingly used for clinical data analysis and for the control of patient treatment. At the same time, many electrical medical devices are used together to extend their functions and performance, in the logic of a system conceived as a single unit. This integration may introduce new hazards, created by the supplementary functional connections and by those linked to the software. An appropriate design approach, including a thorough and integrated risk analysis, must therefore be adopted. A wide legislative and technical-standard context has to be considered to achieve a CE-marked (i.e. "ready-for-the-market") product. In this study, the main international technical standards related both to software and to medical devices are taken into account to define a guideline for the CE marking of software.
Keywords— Risk assessment, software, MD, PEMS, safety.
I. INTRODUCTION
With increasing technological progress, computers are often used for clinical data analysis and for the control of patient treatment, and many electrical medical devices are used together to extend their functions. This integration may introduce new hazards, created by the supplementary functional connections and by those linked to the management software; consequently, the design manager must handle these hazards appropriately. An appropriate design approach, including a thorough and integrated risk analysis, must therefore be adopted. A "guideline" for this approach has been developed by combining the main technical, regulatory and state-of-the-art elements into a single tool able to smooth out the many individual constraints. In this work we consider risk management of software, in order to establish the safety and effectiveness of a medical device containing software: this requires knowledge of what the software is intended to do and a demonstration that the use of the software fulfils those intentions without causing any unacceptable risk.
II. THE METHOD
The acronym PEMS, programmable electrical medical (EM) system, means EM equipment or an EM system containing one or more systems based on one or more central processing units, including their software and interfaces. To place a medical device, including a PEMS, on the market, the product must be made "workmanlike", i.e. in accordance with the essential safety requirements defined by the Community Directive, so as not to pose hazards to the operator, the patient or other staff present. Nowadays software is often an integral part of medical device technology, so the risks arising from new physical and functional connections, as well as those concerning the software system, must be examined in order to achieve a safe PEMS (Fig. 1).
Fig. 1 PEMS safety

The PEMS market was analysed in our research, highlighting the critical items of software product development in compliance with the MD Directive. It must be pointed out that detailed and complete risk management of software hazards is difficult, because hazards are hard to estimate and therefore the acceptability of the risk is hard to evaluate. In this work we propose a procedure for developing medical software "workmanlike", i.e. in keeping with the essential safety requirements defined by the European Community Directive. In the first step of the work, the effectiveness requirements for the CE marking of medical software are identified.
• Mainly, the most important aspect for CE marking is the introduction of new legislative obligations; in particular, medical software is regarded as a medical device.
• Complex systems: the device must be made in keeping with the essential safety requirements, and for a complex system it is better to follow the technical standards.
• The concentration of functionality in the software has made medical devices more flexible, but this also increases the criticality of the software.
• Another aspect is the different methodological approach of the programmer and of those involved with medical devices.
In this phase it is necessary to study in depth the Directives applicable to the product: research on the legislative context, and then on the normative context applicable to medical software, was therefore conducted. This way of proceeding derives from the notion of the "presumption of conformity" given by the New Approach: if a product is realized following the prescriptions of the harmonized technical standards, it is "workmanlike", that is, in keeping with the essential safety requirements. The legislative context is mainly composed of the Community Directive EEC 93/42 and the Italian law 46/97. To these we have to add several technical standards:
• CEI 60601-1, General requirements for basic safety and essential performance;
• CEI 60601-1-xx, General requirements for safety; in particular:
  − part 1, on safety requirements for medical electrical systems, and
  − part 4, on programmable electrical medical systems;
• CEI EN 62304, Medical device software – Software life-cycle processes;
• CEI UNI EN ISO 14971, Application of risk management to medical devices;
• CEI 62.122, Guidelines for acceptance tests and for periodic safety and/or performance checks of medical devices supplied by a particular supply source;
• CEI 62.128, Guidelines for acceptance tests and for periodic safety and/or performance checks of electrical medical systems.
In a second phase, a practical way of using these standards to obtain our methodological procedure was studied. As a first point, only the PEMS clauses of the general standard CEI 60601-1 and CEI UNI EN ISO 14971 were considered together; both have a methodological character, and the main limitation of their approach for the development of a procedure is that their prescriptions are not defined in time. As a second point, it is necessary to integrate these elements with the prescriptions of CEI EN 62304, in which the operations to be carried out for each process are defined in a logical and sequential, but not time-defined, way. These two points of view have to be integrated into a single procedure able to consider how the processes interact with each other in the correct development of medical software. For completeness, a further procedure for correct software maintenance was also defined, with the aim of controlling the product over its whole life cycle.

In the scheme of Fig. 2 we can identify:
− green blocks: PEMS development process activities;
− white blocks: software development process;
− red blocks: risk management process;
− yellow blocks: software problem resolution process;
− azure blocks: software configuration management process.

Looking in more depth at each single element of the logical scheme, we can identify some basic aspects.
• In the green blocks the user needs are defined and transformed into the PEMS requirements; these represent the input of our procedure. At the end, PEMS validation, the last activity of the procedure, must be performed.
• The software development process is a group of activities that allows the PEMS to be developed from the user needs. It is divided into two parts: the decomposition process and the integration process. The decomposition process leads to the software design and consists of:
  − software development planning;
  − software requirements analysis;
  − software architectural design;
  − software detailed design.
The integration process defines the integrations and their verification tests, leading to software release and validation; it consists of the following activities:
  − software unit implementation and verification;
  − software integration and integration testing;
  − software system testing;
  − software release.
• The risk management process is the most important process for the safety of the device; it can be schematized in three main activities:
  − risk analysis;
  − risk evaluation;
  − risk control.
• The software problem resolution process is fundamental for finding new problems and solving them; it is composed of these activities:
  − problem investigation;
  − approval of change requests;
  − change control and verification.
• The software configuration management process is needed to ensure that no further functionality is added without the risk management process taking it into account; it consists of:
  − configuration identification;
  − change control and verification.
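As an illustration only (the activity names paraphrase the text above; the code is not part of CEI EN 62304), the process/activity structure of the procedure can be encoded as a simple checklist that yields a traceable log of completed activities:

    # Illustrative only: the procedure's processes and activities, as listed
    # above, encoded as an ordered checklist. Names paraphrase the text and
    # are not quotations from CEI EN 62304.
    PROCEDURE = {
        "software development (decomposition)": [
            "software development planning",
            "software requirements analysis",
            "software architectural design",
            "software detailed design",
        ],
        "software development (integration)": [
            "software unit implementation and verification",
            "software integration and integration testing",
            "software system testing",
            "software release",
        ],
        "risk management": ["risk analysis", "risk evaluation", "risk control"],
        "problem resolution": [
            "problem investigation",
            "approval of change requests",
            "change control and verification",
        ],
        "configuration management": [
            "configuration identification",
            "change control and verification",
        ],
    }

    # Emitting one line per completed activity gives a traceable record that
    # can be collected into the project's documentation.
    for process, activities in PROCEDURE.items():
        for activity in activities:
            print(f"[{process}] {activity}: done, documented")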
Fig. 2 Software development procedure

Each block contains activities of various processes, and it is very important to observe that each block produces a share of the documents included in the risk management file. This documentation is indispensable for placing the medical software, and thus the entire PEMS, on the market, because the risk management file is required by the technical standards.

We may underline some significant aspects of this procedure:
1. It is very important to carry out the risk management process during the design phase, because a belated risk evaluation may lead to an onerous effort for the implementation of additional risk control measures.
2. New risk control measures imply another risk evaluation, to ensure that their implementation in the PEMS does not create new hazards.
3. Consequently, it is more difficult to maintain the traceability of a hazard in the risk management report if the hazard is evaluated only at the end.
4. In this procedure the software development process is closely connected to the other processes, in order to manage all possible risks. In fact, a problem detected by the software problem resolution process is solved through the risk management process, and its result may lead to a software configuration change, which in turn may introduce new risks.
5. Software development is directly correlated with system (PEMS) development, because from the PEMS
requirements we may define our software requirements, which are the input data for establishing the software product.
6. During the application of the processes it may be necessary to define new risk control measures or added safety functions, which may change the PEMS requirements; for this reason the PEMS requirements are re-checked after the risk management process.
7. Because of the importance of the risk management process, medical software development resumes only when there is no further need to reduce the risk.
8. Part of the risk management file is produced during the procedure: this is fundamental for evaluating conformity to the general standard. This file preserves the possibility of retrieving every point of the processes carried out, the possible risks, and the control measures applied to reduce or eliminate the remaining risk.
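The cyclic character of points 1-7 can be illustrated with a toy risk loop, in which every control measure triggers a re-evaluation because it may itself introduce a new hazard. All names, scores and the acceptability threshold below are hypothetical and are not taken from ISO 14971:

    # Toy sketch of the cyclic risk management described above: controls
    # reduce residual risk but may spawn new hazards, which re-enter the loop.
    ACCEPTABLE = 4  # illustrative threshold (severity x probability)

    def residual_risk(hazard):
        return hazard["severity"] * hazard["probability"]

    def apply_control(hazard):
        """Reduce the hazard's probability; may spawn a new hazard (point 2)."""
        hazard["probability"] = max(1, hazard["probability"] - 1)
        return hazard.pop("spawns", None)

    hazards = [{
        "name": "wrong dose displayed", "severity": 4, "probability": 3,
        "spawns": {"name": "alarm flood", "severity": 2, "probability": 2},
    }]

    while any(residual_risk(h) > ACCEPTABLE for h in hazards):
        for h in [h for h in hazards if residual_risk(h) > ACCEPTABLE]:
            new_hazard = apply_control(h)
            if new_hazard:                  # the control itself created a hazard,
                hazards.append(new_hazard)  # so it re-enters the evaluation loop

    print([(h["name"], residual_risk(h)) for h in hazards])
    # -> [('wrong dose displayed', 4), ('alarm flood', 4)]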
III. CONCLUSIONS
The fundamental aspects that justify and drive the use of this procedure are the following:
• it is simple: its application does not demand great mental effort; it leads the operator step by step through the right processes and activities to develop functional and safe software;
• it is general and versatile: it is useful for developing both PEMS management software and stand-alone software, starting from the software requirements (by changing the PEMS or software requirements it is possible to develop many kinds of software "workmanlike", not only medical software);
• it is objective: with particular reference to the risk management process, it is almost certain that, starting from the system's description and its expected use, two users will identify similar potential causes contributing to a hazardous situation, because this process is repeated many times in the planning stage;
• it allows the manufacturer to define milestones and which integration tests and verification strategies to use to verify the software system: the manufacturer may choose the preferred way of working, thanks to the wide flexibility of the procedure.
The main disadvantage of applying this procedure is that the need for safe software implies a scrupulous risk management phase, with the ultimate aim of contemplating all possible and reasonable hazards. This necessity leads to a cyclic and continuous application of the risk management process after each software development activity, involving a lengthy design. Another impediment is that the last phases of the decomposition process require staff with programming competence, while the phases concerning risk evaluation require staff with risk management knowledge. This need for both competences leads to a comparison between two different methodologies for tackling problems; as a consequence, the programmer learns a methodological approach, while the person who performs the risk analysis increases his programming knowledge.
Address of the corresponding author:

Author: Fabrizio Dori
Institute: Department of Electronics and Telecommunications
City: Florence
Country: Italy
Email: [email protected]
From Laparoscopic Surgery to 3-D Double Console Robot-Assisted Surgery

P. Lago1, C. Lombardi2, and B. Dell'Anna2

1 San Matteo Polyclinic Hospital, Head of Clinical Engineering Department, Pavia, Italy
2 San Matteo Polyclinic Hospital, Clinical Engineering Department, Pavia, Italy
Abstract— The introduction of laparoscopic surgery has led the progress of surgery towards a new area. It allows reduced surgical trauma, better preservation of immune function and better aesthetic results, and contributes to reducing days of hospitalization. But the laparoscopic technique presents some disadvantages: depth perception is lost, tactile feedback is reduced, and the instrument movements and degrees of motion of the surgeon's hands are reduced. Robotic surgery opens the way for new interventional techniques and introduces advantages such as precision, miniaturization, smaller incisions, decreased blood loss, less pain and quicker healing time. Robotic surgery lets the surgeon take his hands and eyes inside the human body without opening it. This is made possible through robotic arms and three-dimensional magnification, respectively. Three-dimensional vision is another advantage of robotic surgery: with laparoscopy, only a two-dimensional image can be observed, and the quality of the images depends on the hand of an assistant. The robot reproduces exactly the movements of the surgeon's hand and allows operating in a more precise and minimally invasive way. Robotic surgery can be defined as the evolution of laparoscopic surgery, which has maintained and emphasized its advantages.

Keywords— Laparoscopic surgery, robotic surgery, Da Vinci robotic system.
I. INTRODUCTION
The evolution of surgery has allowed the surgeon access to the operative field through minimal incisions, referred to as 'minimally invasive surgery'. The endoscopic technique became available nearly 100 years ago [1] as an ocular diagnostic instrument, but it did not excite much interest other than for diagnostic purposes in gynaecology and urology [2]. The era of laparoscopic surgery began in the late 1980s and early 1990s [3], when the widespread acceptance of laparoscopic cholecystectomy by patients and surgeons brought an explosive growth in minimally invasive surgical approaches to common general surgical, urological and gynaecological procedures. The success of laparoscopic surgery is due to a lower incidence of complications and a better post-operative
outcome compared with conventional open surgery. Because laparoscopic surgery reduces the surgical trauma, it can also be associated with less systemic immune impairment [4]. Although laparoscopy was advantageous for patients, the surgeon had to deal with some disadvantages. Depth perception is problematic; in fact, a two-dimensional (2-D) image replaces the surgeon's three-dimensional (3-D) operative field. Furthermore, the surgeon's tactile feedback and touch sensation are transmitted by means of instruments that reduce the feedback. Moreover, the surgeon needs an assistant who plays the role of the surgeon's eyes and holds the camera across the surgical field. The stability of the images depends on the assistant's tremor. Another problem that has to be considered is the reduction of the degrees of freedom available to the surgeon. All these disadvantages can be overcome by the introduction of robot-assisted laparoscopic surgery. The first devices that incorporated robots in medical practice appeared in the late 1980s, when an industrial robot called PUMA (Programmable Universal Machine for Assembly) was used to hold instruments for neurosurgical stereotactic biopsies [5]. In the same years, a robot was used for prosthetic surgery, which eventually led to the development of PROBOT [6], and ROBODOC was then developed for clinical use in orthopaedic surgery [7]. At the same time, robots for optimal camera positioning were introduced (AESOP, Automated Endoscopic System for Optimal Positioning) [8]. ZEUS and Da Vinci (Intuitive Surgical Inc.) are more complex robots with multiple arms, driven by a surgeon remotely from a console. The Da Vinci system, introduced in 1999, is now becoming the only system used in surgical procedures. In Italy, robotic surgery has just celebrated its first 10 years of surgical intervention with the Da Vinci robot (Intuitive Surgical Inc.). Italy holds a relevant share of the European market with its 41 Da Vinci robots installed. The present paper evaluates minimally invasive surgery and assesses its evolution up to the current robot technology. The latest addition to the Da Vinci product line is the newly refined Da Vinci SI Surgical System. San Matteo Polyclinic Hospital in Pavia is the first Italian hospital to have installed, at the end of 2009, the Da Vinci SI Surgical System.
II. LAPAROSCOPIC TECHNOLOGY IN SURGERY
The key element in laparoscopic surgery is the use of a laparoscope. The laparoscope is a rigid tube with two optical channels: one channel carries the light inside the human body and the other sends out the image. The instrument also has a tip lens used to enlarge the image. A fiber optic system conveys through the laparoscope the light from a high-power source external to the patient. Moreover, the laparoscope is connected to a camera that processes the signals from the laparoscope and displays the images on a high-definition monitor. It is also possible to record the images and use them for research purposes. Laparoscopic surgery allows the surgeon to see inside the human body and operate through grasping and cutting tools inserted through small incisions. The working and viewing space is created by insufflating the abdomen with carbon dioxide gas, which can be easily absorbed by tissue and removed by the respiratory system.

A. Surgical Applications
Laparoscopic techniques can be used to perform several interventions. Laparoscopic cholecystectomy (LC) is a routine operation performed in most hospitals with low morbidity [9]. Laparoscopic appendicectomy confers advantages to the patients in terms of fewer wound infections, less pain, faster recovery and earlier return to work, but is more time-consuming and is associated with increased hospital costs compared with open appendicectomy [10]. Other uses of laparoscopic surgery are inguinal hernia repair [11], treatment of incisional hernia [12], resection of diverticular fistulae [13] and other surgical procedures [14, 15, 16]. The laparoscopic technique has been widely used as a valid alternative to traditional open surgery, but it has also shown some limitations and introduced some risks. For these reasons, the use of the laparoscopic technique remains controversial.

B. Advantages and Risks
The success of laparoscopic surgery is due to some advantages compared to open surgery. Potential benefits to patients from laparoscopic techniques include reduced pain, shorter hospitalizations, faster recovery, improved patient outcomes, patient satisfaction, aesthetic results and smaller incisions, referred to as 'minimally invasive surgery'. Although the laparoscopic technique was advantageous to patients, surgeons noticed some disadvantages that may increase the risk to the patient. In
fact, the surgeon can observe only two-dimensional images on a 2-D monitor, and the steadiness of the images depends on the hand of an assistant who navigates the camera across the surgical field; physiologic tremor is not eliminated. Moreover, the surgeon has to stay in poor ergonomic positions and his motions are limited. Another limitation is that the surgeon's tactile feedback is not transmitted by the instruments. These limitations have drawn the surgeon's attention to new minimally invasive surgical technologies, consisting of robotic systems.
III. ROBOT-ASSISTED SURGERY
Recent advances in the field of medical robotics have proposed solutions to the problems of laparoscopic surgery. In fact, the introduction of robot-assisted surgery has improved conventional laparoscopy. Robots can be used for different surgical procedures because they can easily reproduce the surgeon's movements. Several surgical teams use robot-assisted surgery for surgical procedures of the upper gastrointestinal tract [17], others for cardiac surgery [18]. Different surgical robots have been realized and tested since the 1980s. Currently, the Da Vinci system has generated the most headlines with regard to robot-assisted surgery. The Da Vinci Surgical System is produced by Intuitive Surgical and has been distributed all over the world: as of September 2009, 916 robotic systems were installed in the United States and 221 in Europe (source: AB MEDICA's data).

A. Da Vinci Robotic System
The Da Vinci robot is a master–slave system composed of three components [19]. The first is an ergonomic surgeon's console, which lets the surgeon operate from a seated position and control the three or four robotic arms. The surgeon controls the movements with his hands and thumbs, and the movements are exactly reproduced by the robotic arms and the tips of the instruments. The electrosurgical functionality can be activated from the console. The second is a high-definition 3-D vision system, by which a 3-D view of the operative field at 10x magnification is projected. The 3-D vision is obtained from a stereoscopic endoscope inserted into the patient, together with image-processing equipment. The endoscope consists of two parallel cameras, each of which is channelled to one of the surgeon's eyes, and this provides 3-D vision. Moreover, the presence of a touch screen monitor allows 2-D vision for the assistants. The third component is a patient-side cart, on which the robotic arms are mounted. It provides three or four robotic arms, both the laparoscopic arm and the instrument arms, which are controlled from the surgeon's console.
Fig. 1 Dual camera endoscope

The laparoscopic arm holds the high-resolution 3-D endoscope (Fig. 1), while specialized EndoWrist instruments (Intuitive Surgical, Sunnyvale, CA, USA) are mounted on the remaining arms.

B. Advantages
Robotic technology achieves all the benefits given by laparoscopic surgery, such as decreased blood loss, shorter hospital stay and faster return to activity. Other advantages are the EndoWrist instruments, which offer 7 degrees of freedom; the robotic system allows easier movements during laparoscopic procedures and 3-D vision of the operative field; moreover, it allows smooth and precise movements, given by motion scaling and the compensation of physiological tremor. Another important advantage is the possibility for the surgeon to sit in a comfortable ergonomic position during the surgical procedure. An important difference from laparoscopic surgery is the 'intuitive' movement of the instruments. In fact, laparoscopic surgery has a 'fulcrum' effect, in which the instrument tips move in the opposite direction to the hands of the surgeon. The Da Vinci robotic instruments cancel this effect, so that the instrument tips within the patient's body move in the same direction as the surgeon's hands at the console.

C. Disadvantages
The most striking drawback of the Da Vinci system is the cost, which can be about €2,000,000, with annual running costs of about 5–10% of the purchase cost if the costs of EndoWrist instruments and maintenance are included. The cost of user training is included in the purchase cost. A specific surgical disadvantage is the absence of tactile feedback for the surgeon. The development of haptic sensor technology could be introduced into future generations of master–slave systems. It is important to underline that the introduction of virtual reality (VR) simulators into surgical training may reduce future surgeons' reliance on tactile feedback. The introduction of virtual reality may greatly help the training necessary for robotic surgery. Another disadvantage is due to the set-up times for robotic procedures, which are initially quite lengthy. To reduce the set-up time it is necessary to institute a large and well-trained multidisciplinary team.
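As a rough order-of-magnitude check on the figures quoted above (the seven-year service life is our assumption, purely for illustration):

    # Order-of-magnitude estimate only; the service life is an assumption,
    # the purchase cost and running-cost fractions are those quoted above.
    purchase_eur = 2_000_000                   # quoted system cost
    running_low, running_high = 0.05, 0.10     # annual running-cost fraction
    years = 7                                  # assumed service life

    low = purchase_eur * (1 + years * running_low)
    high = purchase_eur * (1 + years * running_high)
    print(f"7-year cost of ownership: EUR {low:,.0f} - {high:,.0f}")
    # -> 7-year cost of ownership: EUR 2,700,000 - 3,400,000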
IV. THE INSTALLATION OF DA VINCI ROBOT IN SAN MATTEO POLYCLINIC HOSPITAL IN PAVIA
The Da Vinci SI robotic system was installed in San Matteo Polyclinic Hospital in Pavia in December 2009, and it is the first installation in Italy with two surgeon consoles. With this configuration, two surgeons can work during the same operation, exchanging control of the arms and of the endoscope. This characteristic also allows optimal training of students, supervised by an expert surgeon. Fig. 2 shows the Da Vinci configuration as installed in San Matteo Polyclinic Hospital in Pavia.
Fig. 2 Da Vinci SI configuration

Remote assistance is another option considered in this installation. This option links the robotic system via the Internet to the assistance centre and lets users, such as the surgical staff or the technicians of the Clinical Engineering Structure of the Hospital, send possible error messages, thus facilitating assistance interventions. This can be useful when the substitution of damaged parts is needed or when software errors occur. The Da Vinci configuration installed in San Matteo Polyclinic Hospital in Pavia also has a touch screen monitor from which it is possible to enlarge some details.

A. Surgical Applications
In San Matteo Hospital, the utilization of the Da Vinci robotic system is being organized among the different surgical specialties. In urology, the robotic system can be used for surgical procedures such as pyeloplasty, nephrectomy and partial nephrectomy, radical cystectomy and urinary diversion. Gynaecology is another medical specialty that can use the robotic system, for hysterectomy, myomectomy and sacrocolpopexy. In cardiac surgery, robot-assisted surgery can be useful for mitral valve replacement and coronary artery bypass grafting. General surgery and trans-oral surgery can use
this system for gastrointestinal laparoscopic procedures, laparoscopic Roux-en-Y gastric bypass and laparoscopic Nissen fundoplication, and for radical tonsillectomy, respectively. Different kinds of surgical procedures can be performed using the Da Vinci surgical system with the different instrumentation that the system provides, so the surgeons must consider which instruments to use for their own specialty. This second step of the system's installation is carried out in collaboration with the technician who supports the users. Finally, the robot can be exploited to create a team of surgeons dedicated to developing protocols of minimally invasive surgery. This can be realized by creating a precise training plan, identifying the members of the surgical team and specifying the criteria for patient eligibility.
V. CONCLUSIONS
Laparoscopic surgery has introduced a new surgical technique that has some advantages compared with open surgery. The introduction of robot-assisted surgery has improved conventional laparoscopy, proposing solutions to some problems introduced by the laparoscopic technique. The Da Vinci SI system is the latest generation of robot currently available on the market. This system has just been introduced in Italy at San Matteo Polyclinic Hospital in Pavia and has the potential to become the best-known robot-assisted system. This is due both to the latest additions to the Da Vinci product line, such as the presence of two surgeon consoles and remote assistance, and to the possibility of using this system for different surgical procedures by changing the instruments used.
REFERENCES
1. Ott D. (1909) First report of laparoscopy in gynecology; historical interest. Rev Med Tcheque 2:27-30
2. Berci G., Forde K.A. (2000) History of endoscopy. Surg Endosc 14:5-15
3. Mouret P. (1996) How I developed laparoscopic cholecystectomy. Ann Acad Med Singapore 25:744-747
4. Gupta A., Watson D.I. (2001) Effect of laparoscopy on immune function. British Journal of Surgery 88:1296-1306
5. Kwoh Y.S., Hou J., Jonckheere E.A., et al. (1988) A robot with improved absolute positioning accuracy for CT-guided stereotactic brain surgery. IEEE Trans Biomed Eng 35:153-161
6. Davies B.L., Hibberd R.D., Coptcoat M.J., Wickham J.E.A. (1989) Surgeon robot prostatectomy – a laboratory evaluation. J Med Eng Technol 13:273-277
7. Satava R.M. (2002) Surgical robots: the early chronicles: a personal historical perspective. Surg Laparosc Endosc Percutan Tech 12:6-16
8. Alessandrini M., De Padova A., Napolitano B., Camillo A., Bruno E. (2008) The AESOP robot system for video-assisted rigid endoscopic laryngosurgery. Eur Arch Otorhinolaryngol 265:1121-1123
9. Steinert R., Nestler G., Sagynaliev E., Muller J., Lippert H., Reymond M-A. (2006) Laparoscopic cholecystectomy and gallbladder cancer. Journal of Surgical Oncology 93:682-689
10. Pedersen A.G., Petersen O.B., Wara P., Ronning H., Qvist N., Laurberg S. (2001) Randomized clinical trial of laparoscopic versus open appendicectomy. British Journal of Surgery 88:200-205
11. Liem M.S.L., van Vroonhoven J.M.V. (1996) Laparoscopic inguinal hernia repair. British Journal of Surgery 83:1197-1204
12. Munro A., Cassar K. (2002) Surgical treatment of incisional hernia. British Journal of Surgery 89:534-545
13. Engledow A.H., Pakzad F., Ward N.J., Arulampalam T., Motson R.W. (2007) Laparoscopic resection of diverticular fistulae: a 10-year experience. Colorectal Disease 9:632-634
14. Read T.E. (2007) Laparoscopic proctectomy for rectal adenocarcinoma. Journal of Surgical Oncology 96:660-664
15. Everett M., Gutman H. (2008) Surgical management of gastrointestinal stromal tumors: analysis of outcome with respect to surgical margins and technique. Journal of Surgical Oncology 98:588-593
16. Bokey E.L., Moore J.W.E., Keating J.P., Zelas P., Chapuis P.H., Newland P.C. (1997) Laparoscopic resection of the colon and rectum for cancer. British Journal of Surgery 84:822-825
17. Ito F., Gould J.C. (2006) Robotic foregut surgery. Int J Med Robotics Comput Assist Surg 2:287-293
18. Woo Y.J. (2006) Robotic cardiac surgery. Int J Med Robotics Comput Assist Surg 2:225-232
19. Murphy D.G., Hall R., Tong R., Goel R., Costello A.J. (2008) Robotic technology in surgery: current status in 2008. ANZ J Surg 78:1076-1081
Author: Paolo Lago, Cesare Lombardi, Beatrice Dell'Anna
Institute: San Matteo Polyclinic Hospital, Clinical Engineering Department
Street: Viale Golgi, 19
City: Pavia 27100
Country: Italy
Email: [email protected], [email protected], [email protected]
Clinical Engineering and Clinical Dosimetry in Patients with Differentiated Thyroid Cancer Undergoing Thyroid Remnant Ablation with Radioiodine-131

M. Medvedec and D. Dodig

Clinical Hospital Centre Zagreb / Department of Nuclear Medicine and Radiation Protection, Zagreb, Croatia

Abstract— Differentiated thyroid cancer (DTC) is a malignant disease with increasing incidence worldwide. Croatia is among the top European countries regarding the incidence rate of DTC, and DTC is among the top fifteen primary cancer sites in the Croatian population. The objective of this work is to present the practical impact of clinical engineering in supporting and advancing the care of patients with DTC by applying engineering and managerial skills to nuclear medicine technology. The final goal of this work was to harmonize radioactive iodine-131 (I-131) ablation of the thyroid remnant with the desired clinical outcome, radiation protection and safety, quality of patient life and the costs of treatment. This study included 269 DTC patients. The first part of the study dealt with dosimetric measurements and analysis in 49 patients after thyroidectomy and administration of diagnostic and therapeutic I-131. Serial in vivo measurements of I-131 activity in the thyroid remnant and whole body were performed with a conventional probe system and a beta–gamma exposure rate meter during the first week after the I-131 administrations. The mass of the thyroid remnant was determined from two orthogonal pinhole gamma camera images assuming an ellipsoidal shape. The radiation absorbed dose was calculated according to the Medical Internal Radiation Dose (MIRD) formalism. In the second part of the study, 220 low-risk DTC patients were post-surgically given only 900 MBq of therapeutic I-131 and were followed up for five years. Clinical engineering efforts, integrated into a novel clinical dosimetry-based approach applied to our DTC patients undergoing I-131 thyroid remnant ablation, decreased the overall patient procedure time and costs twofold and the amount of radioactivity fivefold. The quality of patient life, as well as radiation protection and safety, have been significantly improved. Simultaneously, the high success rate of thyroid remnant ablation with I-131 has been preserved.

Keywords— clinical engineering, thyroid cancer, thyroid remnant ablation, internal dosimetry, radioactive iodine-131.
I. INTRODUCTION
Differentiated thyroid cancer (DTC) is defined as a malignant disease deriving from the follicular epithelium and retaining basic biological characteristics of healthy thyroid tissue, including expression of the sodium iodide symporter,
the key feature for specific iodine uptake. DTC is a clinically uncommon disease, but its incidence is increasing in many countries [1]. However, when appropriate treatment is given, the prognosis of the disease is generally excellent [1,2]. Croatia is among the top European countries regarding the incidence rate of DTC, and DTC is among the top fifteen primary cancer sites in the Croatian population. In 2007, thyroid cancer as a primary site was diagnosed in 376 Croatian women and 77 men (453 cases in total), equaling incidence rates of 16.3 and 3.6 (10.2 in total), respectively [3]. Typical treatment of patients with DTC consists of 1) surgical removal of the entire thyroid and involved lymph nodes, if any, 2) a 3-6 week waiting period of patient preparation for radioactive iodine-131 (I-131) administration, 3) radioactive iodine therapy (RAIT) with 1-5 GBq of I-131, 4) lifelong thyroid hormone supplementation, and 5) follow-up examinations every 6-12 months [1,2]. However, none of these steps is without controversies, starting from the extent of surgical treatment, the need for and duration of patient preparation for I-131 administration, the amount of activity and/or the radiation absorbed dose and dose rate of I-131 sufficient for successful RAIT, the indications and degree of thyroid stimulating hormone (TSH) substitution/suppression therapy, and ending with the frequency, content, costs and effectiveness of follow-up examinations. Whilst both external beam radiotherapy and brachytherapy have shown significant advances in recent decades and are ongoing fields of active research, it is disappointing that, some 60 years after the introduction of radioiodine for the treatment of thyroid diseases, there has been little progress in methods for the administration of radioactivity in radionuclide therapy [1,2,4,5]. Despite the fact that patient-specific dosimetry is often a legal obligation in radionuclide therapy, an estimation of the crucial radiobiological parameters, such as the radiation absorbed dose (Gy) and dose rate (Gy/h) of I-131 actually delivered to the target thyroid tissue, is usually ignored [1,2,4,5,7]. Thyroid remnant ablation can be achieved either by administering an empirical fixed or a dosimetrically determined activity of I-131 [1,2,4,5]. The majority of centers have adopted the fixed activity approach because of technical,
organizational, economic and other difficulties in dosimetric estimations. The amount of I-131 ablation activity to be given is usually based on an empirical decision involving the extent of the disease, the site and manner of its involvement, and other clinical parameters. All standard ablation activities of I-131 that have been used empirically so far are, basically, arbitrary radioiodine quantities, in most cases chosen simply as a rounded number of radioactivity units. If a dosimetric approach is applied, a threshold radiation absorbed dose of 300 Gy is usually believed to be crucial for successful ablation of residual thyroid tissue [1,2,5,7]. However, neither the optimal I-131 activity (MBq) nor the optimal cumulative absorbed dose (Gy) and/or maximal initial absorbed dose rate (Gy/h) required for complete ablation of the thyroid remnant with a single administration of I-131 has been established yet [1,2,5,6,7].

The objective of this work is to present the role and practical impact of clinical engineering in supporting and advancing the care of patients with differentiated thyroid cancer by applying engineering and managerial skills to nuclear medicine technology. The final goal of these investigations, which have been conducted over the last fifteen years in our department, is to harmonize radioiodine therapy with the desired clinical outcome, radiation protection and safety principles, patients' quality of life and the overall costs of treatment.
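For orientation, the dosimetric approach mentioned above amounts to solving the MIRD relation for the administered activity. The following is a minimal illustrative sketch, not the study's method: it assumes mono-exponential kinetics, local deposition of the I-131 beta energy only (gamma contribution neglected), and made-up parameter values (2 g remnant, 2% uptake, 3-day effective half-life); the function name is hypothetical.

```python
import math

MEV_TO_J = 1.602e-13          # conversion factor, MeV -> joule
E_BETA_I131_MEV = 0.192       # mean beta energy emitted per I-131 decay

def activity_for_target_dose(target_dose_gy, remnant_mass_g,
                             fractional_uptake, t_eff_days):
    """Administered I-131 activity (Bq) needed to deliver a target absorbed
    dose (Gy) to a thyroid remnant, assuming mono-exponential kinetics and
    local absorption of the beta energy only."""
    t_eff_s = t_eff_days * 86400.0
    # cumulated activity in the remnant per unit administered activity (s)
    a_tilde_per_bq = fractional_uptake * t_eff_s / math.log(2)
    # absorbed dose per unit administered activity (Gy/Bq)
    dose_per_bq = (a_tilde_per_bq * E_BETA_I131_MEV * MEV_TO_J
                   / (remnant_mass_g / 1000.0))
    return target_dose_gy / dose_per_bq

# e.g. 300 Gy to a 2 g remnant with 2 % uptake and a 3-day effective half-life
print(f"{activity_for_target_dose(300.0, 2.0, 0.02, 3.0) / 1e9:.1f} GBq")
```

With these assumed inputs the prescription comes out in the low-GBq range, which is why fixed empirical activities of 1-5 GBq can over- or under-dose individual remnants depending on their actual uptake, kinetics and mass.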
II. MATERIALS AND METHODS
The study included 269 patients following total or near-total thyroidectomy for DTC. In 49 patients (group A), a median of 75 MBq and 1.9 GBq of I-131 were given for diagnostic and therapeutic purposes, respectively. Serial measurements of I-131 activity in the thyroid remnant and whole body were performed with a conventional probe system (Fig. 1), a beta–gamma exposure rate meter (Fig. 2) and a whole-body counter (Fig. 3) during the first week after I-131 had been given for diagnostic and therapeutic purposes, 4 and 5 weeks post-thyroidectomy, respectively.

The mass of residual thyroid tissue was determined from two orthogonal images obtained by a gamma camera equipped with a pinhole collimator, assuming an ellipsoidal shape (Fig. 4) [8]. Internal radiation absorbed doses were calculated by applying the Medical Internal Radiation Dose (MIRD) formalism [9].

Fig. 1 Measurement of radioactive iodine-131 activity in the neck and thigh using a conventional probe system

Fig. 2 Measurement of neck surface dose rate and one-meter-distance whole-body dose rate of radioactive iodine-131 using a beta–gamma exposure rate meter

Fig. 3 Measurement of whole-body and neck activity of radioactive iodine-131 using a whole-body counter equipped with a specially designed slit collimator

Fig. 4 Scintigraphic imaging of radioactive iodine-131-concentrating tissue in the neck by a gamma camera equipped with a pinhole collimator
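The ellipsoid mass estimate can be written down directly: the anterior image gives length and width, the orthogonal (lateral) image gives depth, and the volume is (π/6)·a·b·c. A minimal sketch, assuming a generic soft-tissue density of about 1.05 g/cm³ and illustrative axis values (neither taken from this study):

```python
import math

def remnant_mass_g(a_cm, b_cm, c_cm, density_g_cm3=1.05):
    """Mass of an ellipsoidal thyroid remnant from its three axes (cm):
    V = (pi/6)*a*b*c, with length and width taken from the anterior pinhole
    image and depth from the orthogonal (lateral) image."""
    volume_cm3 = math.pi / 6.0 * a_cm * b_cm * c_cm
    return density_g_cm3 * volume_cm3

print(f"{remnant_mass_g(2.0, 1.0, 0.8):.2f} g")  # -> 0.88 g
```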
Furthermore, the study included 220 low-risk DTC patients (group B) who had 1) total thyroidectomy for small papillary thyroid cancer without local or distant metastases, classified as pT1-2N0M0 tumours, 2) serum TSH concentration >30 mU/L, measured thyroglobulin (Tg) and negative Tg antibodies off thyroxine (T4) supplementation at ablation and during follow-up, 3) treatment of the thyroid remnant with 0.9 GBq of I-131, 4) post-ablation and follow-up whole-body and head/neck/chest scintigraphy, 5) neck radioiodine uptake (RAIU) measurements at follow-up, and 6) neck ultrasonography at follow-up. The patients of group B were followed up for five years. Ablation success was assessed by taking into account negative scintigraphy and RAIU <0.1% with 185 MBq of I-131, Tg <2 μg/L when off T4, and negative ultrasonography, as sketched below. DTC management analysis was performed on the basis of available billing and statistical data.
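The composite success endpoint is simply the conjunction of the four criteria above. A minimal sketch (function name and example values are illustrative, not study data):

```python
def ablation_successful(scintigraphy_negative, raiu_percent,
                        tg_ug_per_l, ultrasonography_negative):
    """Composite endpoint: negative diagnostic I-131 scintigraphy,
    neck RAIU < 0.1 %, stimulated (off-T4) Tg < 2 ug/L and negative
    neck ultrasonography must all hold."""
    return (scintigraphy_negative
            and raiu_percent < 0.1
            and tg_ug_per_l < 2.0
            and ultrasonography_negative)

print(ablation_successful(True, 0.05, 0.7, True))   # True
print(ablation_successful(True, 0.30, 0.7, True))   # False: RAIU too high
```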
III. RESULTS

The biokinetics of diagnostic and therapeutic I-131 were far from equal, but were still correlated. The uptake and effective time of therapeutic I-131 were both smaller than the corresponding values for diagnostic I-131. Consequently, the therapeutic cumulated activity per unit administered activity of therapeutic I-131 was about half the diagnostically expected value, i.e. thyroid stunning was found to be a real phenomenon. The lower uptake and the shorter effective half-time of therapeutic I-131 contributed to a similar extent to the effect of thyroid stunning. The most important cause of thyroid stunning was found to be the absorbed dose of diagnostic I-131 in the thyroid remnant: stunning already appeared at diagnostic absorbed doses of only a few Gy, and the two parameters were positively correlated in a non-linear manner. Nevertheless, the therapeutic absorbed dose and dose rate of I-131 sufficient for complete thyroid remnant ablation were found to be at least one third lower than the currently recommended values. Furthermore, this study showed that the TSH concentration following total or near-total thyroidectomy usually reaches the recommended value of 30 mU/L within 3 weeks. However, neither the absorbed dose nor the absorbed dose rate of I-131 in the thyroid remnant was found to depend significantly on TSH concentration. A high rate of complete, although delayed, ablation was achieved in 80% of patients and 85% of lesions in group B.

Some managerial aspects of thyroid remnant ablation with I-131 are illustrated in Tables 1 and 2: the most important post-surgery nuclear medicine related procedures are listed with their quantities and costs per patient (Table 1), showing the evolution of the ablation procedure in our institution over the last 15 years (Table 2).
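The stunning ratio quoted above can be obtained by fitting a mono-exponential to the serial uptake measurements of diagnostic and therapeutic I-131 and comparing the resulting cumulated activity per unit administered activity. A minimal sketch of such a log-linear fit, with purely illustrative numbers that are not the study data:

```python
import numpy as np

def kinetics_from_serial_uptake(t_days, uptake_fraction):
    """Log-linear least-squares fit of a mono-exponential to serial remnant
    uptake measurements. Returns the effective half-life (days) and the
    cumulated activity per unit administered activity (days)."""
    t = np.asarray(t_days, dtype=float)
    u = np.asarray(uptake_fraction, dtype=float)
    slope, intercept = np.polyfit(t, np.log(u), 1)
    lam = -slope                    # effective decay constant (1/day)
    u0 = np.exp(intercept)          # back-extrapolated initial uptake
    return np.log(2) / lam, u0 / lam

# purely illustrative numbers, not the study data
t = [1, 2, 4, 7]
_, a_diag = kinetics_from_serial_uptake(t, [0.020, 0.016, 0.010, 0.005])
_, a_ther = kinetics_from_serial_uptake(t, [0.012, 0.008, 0.004, 0.0013])
print(f"stunning ratio (therapeutic/diagnostic): {a_ther / a_diag:.2f}")
```

With these made-up inputs the ratio comes out near 0.4-0.5, i.e. of the same order as the "about half" reported above; both a lower back-extrapolated uptake and a faster effective clearance push the ratio down, which is why the two contributions to stunning could be separated.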
Table 1 Names and costs of the most usual items related to the diagnostics and treatment of differentiated thyroid cancer with radioactive iodine-131 (I-131) in Croatian nuclear medicine departments

Item | Cost [€]
whole-body and head/neck/chest scintigraphy (74 MBq I-131) | 57
whole-body and head/neck/chest scintigraphy (185 MBq I-131) | 65
thyroid scintigraphy (1.85 MBq I-131) | 18
thyroid uptake measurement | 7
bone scintigraphy (740 MBq Tc-99m) | 135
1st hospital day (full board) | 41
≥2nd hospital day (full board) | 37
average wage per day in Croatia | 42
thyroid stimulating hormone (TSH) measurement | 8
thyroglobulin (Tg) measurement | 16
thyroglobulin antibodies (TgAb) measurement | 18
thyroid ablation (0.9 GBq I-131) | 70
thyroid ablation (1.85 GBq I-131) | 111
thyroid ablation (4.44 GBq I-131) | 313
post-ablation whole-body and head/neck/chest scintigraphy | 52
Table 2 Names, average quantities and costs of the most important items routinely applied to post-thyroidectomy thyroid remnant ablation with radioactive iodine-131 (I-131) in patients with differentiated thyroid cancer treated in our nuclear medicine department during different time periods

Item (Quantity × Cost [€]) | ≤1994 | 1995-1999 | ≥2000
1st hospital day | 1×41 | 1×41 | 1×41
≥2nd hospital day | 9×37 | 6×37 | 2×37
bone scintigraphy | 1×135 | 0×135 | 0×135
TSH measurement | 1×8 | 1×8 | 2×8
Tg measurement | 1×16 | 1×16 | 2×16
TgAb measurement | 1×18 | 1×18 | 2×18
pre-ablation I-131 scintigraphy | 1×65 | 1×57 | 0×57
thyroid ablation | 1×313 | 1×111 | 1×70
post-ablation I-131 scintigraphy | 1×52 | 1×52 | 1×52
sick leave/off work days | 40×42 | 30×42 | 20×42
total [€] | 2661 | 1785 | 1161
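The bottom line of Table 2 can be reproduced directly from the listed quantities and unit costs. The short sketch below (item keys abbreviated, figures otherwise straight from Table 2) also recomputes the headline ratios: roughly a 2.3-fold cost reduction and, with 4.44 GBq replaced by 0.9 GBq, roughly a 4.9-fold reduction in administered activity.

```python
# (quantity, unit cost in EUR) per period for each Table 2 item
PERIODS = ("<=1994", "1995-1999", ">=2000")
TABLE2 = {
    "1st hospital day":          ((1, 41),   (1, 41),   (1, 41)),
    ">=2nd hospital day":        ((9, 37),   (6, 37),   (2, 37)),
    "bone scintigraphy":         ((1, 135),  (0, 135),  (0, 135)),
    "TSH measurement":           ((1, 8),    (1, 8),    (2, 8)),
    "Tg measurement":            ((1, 16),   (1, 16),   (2, 16)),
    "TgAb measurement":          ((1, 18),   (1, 18),   (2, 18)),
    "pre-ablation I-131 scan":   ((1, 65),   (1, 57),   (0, 57)),
    "thyroid ablation":          ((1, 313),  (1, 111),  (1, 70)),
    "post-ablation I-131 scan":  ((1, 52),   (1, 52),   (1, 52)),
    "sick leave/off work days":  ((40, 42),  (30, 42),  (20, 42)),
}

totals = [sum(q * c for q, c in (row[i] for row in TABLE2.values()))
          for i in range(len(PERIODS))]
for period, total in zip(PERIODS, totals):
    print(f"{period}: {total} EUR")                 # 2661, 1785, 1161
print(f"cost ratio <=1994 vs >=2000: {totals[0] / totals[-1]:.1f}x")  # ~2.3x
print(f"activity ratio 4.44 GBq vs 0.9 GBq: {4.44 / 0.9:.1f}x")       # ~4.9x
```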
IV. DISCUSSION

Until the mid-1990s, we had been applying a high activity of I-131 (4.6 GBq in total) in each patient during the post-thyroidectomy ablation procedure of the thyroid remnant. Although satisfied with complete ablation being achieved in more than 90% of patients, for the last 15 years we have been conducting comprehensive dosimetric investigations of radioiodine ablation. The purpose of those studies was to bring our clinical practice in line with the three main principles of radiation exposure and protection (justification, optimization, and dose limits) as applicable to medical exposure, but also to maximize patients' quality of life and to minimize the overall costs of treatment [1,2,4,7,10,11].

As a result, particularly in view of the quantified effect of thyroid stunning [1,2,7], we first reduced the pre-ablation diagnostic imaging activity from 185 MBq to 74 MBq of I-131 and prolonged the time interval between diagnostics and treatment with I-131. Finally, we omitted pre-ablation diagnostic scanning altogether, because any amount of diagnostic I-131 activity used for pre-ablation whole-body scanning appears to induce thyroid stunning, even if the therapeutic administration of I-131 occurs within the following 2-3 days [7]. In view of the modifications of the ablation procedure made for thyroid stunning, and of the evidence-based dosimetric and TSH-related data [1,2,5,6,7], we have also been gradually decreasing the ablation activity of I-131 and shortening the duration of the patient's hypothyroidism, stay in the isolation room and restricted social contact.

Since the end of 1999, our approach to radioiodine ablation has been based on a two-activity regimen prescribing 0.9 GBq of I-131 in low-risk patients (Table 2) and 1.5 GBq of I-131 in others, with the exception of the highest-risk patients, who have been treated on a case-by-case basis. Thus, the lowest ablation activity of I-131 used so far, applied for more than ten years in a large group of low-risk post-surgical thyroid cancer patients (>500 of our patients to date), appears justified, optimized and dose-compliant for a high, albeit delayed, complete ablation rate. Namely, the thyroid remnant ablation process itself seems to last significantly longer than usually expected [11]. All this has already caused large, if not dramatic, changes in the overall DTC management in our institution (Table 2).
V. CONCLUSIONS

Clinical engineering efforts integrated into a novel dosimetry-based approach, applied to low-risk patients with differentiated thyroid cancer undergoing thyroid remnant ablation with radioactive iodine-131 in the clinical setting of our institution, decreased the overall procedure time and costs more than two-fold and the amount of administered radioactivity more than five-fold. Patients' quality of life and radiation protection and safety have been significantly improved, while the high success rate of thyroid remnant ablation with radioactive iodine-131 has been simultaneously preserved.
REFERENCES

1. Luster M, Clarke SE, Dietlein M et al.; European Association of Nuclear Medicine (EANM) (2008) Guidelines for radioiodine therapy of differentiated thyroid cancer. Eur J Nucl Med Mol Imaging 35:1941-59
2. Dietlein M, Dressler J, Eschner W et al. (2007) Procedure guidelines for radioiodine therapy of differentiated thyroid cancer (version 3). Nuklearmedizin 46:213-9
3. Croatian National Cancer Registry of the Croatian National Institute of Public Health (HZJZ) at http://www.hzjz.hr/rak/novo.htm
4. Flux G, Bardies M, Chiesa C et al. (2007) Clinical radionuclide therapy dosimetry: the quest for the "Holy Gray". Eur J Nucl Med Mol Imaging 34:1699-700
5. Schlesinger T, Flower MA, McCready VR (1989) Radiation dose assessments in radioiodine (131I) therapy. 1. The necessity for in vivo quantitation and dosimetry in the treatment of carcinoma of the thyroid. Radiother Oncol 14:35-41
6. Medvedec M, Dodig D (2007) Has come the day to do away with thyroid remnant ablation targeting 300 gray (Gy)? J Nucl Med 48:16P Suppl 2
7. Medvedec M (2005) Thyroid stunning in vivo and in vitro. Nucl Med Commun 26:731-5
8. Grosev D, Medvedec M, Loncaric S et al. (1998) Determination of small objects from pinhole scintigrams. Nucl Med Commun 19:679-88
9. Stabin MG (1996) MIRDOSE: personal computer software for internal dose assessment in nuclear medicine. J Nucl Med 37:538-46
10. International Atomic Energy Agency (1996) International basic safety standards for protection against ionizing radiation and for the safety of radiation sources. IAEA, Vienna
11. Kusacic Kuna S, Samardzic T et al. (2009) Thyroid remnant ablation in patients with papillary cancer: a comparison of low, moderate, and high activities of radioiodine. Nucl Med Commun 30:263-9
Author: Medvedec Mario
Institute: Clinical Hospital Centre Zagreb
Street: Kispaticeva 12
City: Zagreb
Country: Croatia
Email: [email protected]