USER CENTRED NETWORKED HEALTH CARE
Studies in Health Technology and Informatics
This book series was started in 1990 to promote research conducted under the auspices of the EC programmes' Advanced Informatics in Medicine (AIM) and Biomedical and Health Research (BHR) bioengineering branch. A driving aspect of international health informatics is that telecommunication technology, rehabilitative technology, intelligent home technology and many other components are moving together and form one integrated world of information and communication media. The complete series has been accepted in Medline. Volumes from 2005 onwards are available online.
Series Editors:
Dr. O. Bodenreider, Dr. J.P. Christensen, Prof. G. de Moor, Prof. A. Famili, Dr. U. Fors, Prof. A. Hasman, Prof. E.J.S. Hovenga, Prof. L. Hunter, Dr. I. Iakovidis, Dr. Z. Kolitsi, Mr. O. Le Dour, Dr. A. Lymberis, Prof. J. Mantas, Prof. M.A. Musen, Prof. P.F. Niederer, Prof. A. Pedotti, Prof. O. Rienhoff, Prof. F.H. Roger France, Dr. N. Rossing, Prof. N. Saranummi, Dr. E.R. Siegel, Prof. T. Solomonides and Dr. P. Wilson
Volume 169
Recently published in this series:
Vol. 168. D.P. Hansen, A.J. Maeder and L.K. Schaper (Eds.), Health Informatics: The Transformative Power of Innovation – Selected Papers from the 19th Australian National Health Informatics Conference (HIC 2011)
Vol. 167. B.K. Wiederhold, S. Bouchard and G. Riva (Eds.), Annual Review of Cybertherapy and Telemedicine 2011 – Advanced Technologies in Behavioral, Social and Neurosciences
Vol. 166. V. Koutkias, J. Niès, S. Jensen, N. Maglaveras and R. Beuscart (Eds.), Patient Safety Informatics – Adverse Drug Events, Human Factors and IT Tools for Patient Medication Safety
Vol. 165. L. Stoicu-Tivadar, B. Blobel, T. Marčun and A. Orel (Eds.), e-Health Across Borders Without Boundaries – E-salus trans confinia sine finibus – Proceedings of the EFMI Special Topic Conference, 14–15 April 2011, Laško, Slovenia
Vol. 164. E.M. Borycki, J.A. Bartle-Clar, M.S. Househ, C.E. Kuziemsky and E.G. Schraa (Eds.), International Perspectives in Health Informatics
Vol. 163. J.D. Westwood, S.W. Westwood, L. Felländer-Tsai, R.S. Haluck, H.M. Hoffman, R.A. Robb, S. Senger and K.G. Vosburgh (Eds.), Medicine Meets Virtual Reality 18 – NextMed
Vol. 162. E. Wingender (Ed.), Biological Petri Nets
Vol. 161. A.C. Smith and A.J. Maeder (Eds.), Global Telehealth – Selected Papers from Global Telehealth 2010 (GT2010) – 15th International Conference of the International Society for Telemedicine and eHealth and 1st National Conference of the Australasian Telehealth Society
ISSN 0926-9630 (print)
ISSN 1879-8365 (online)
User Centred Networked Health Care
Proceedings of MIE 2011
Edited by
Anne Moen
University of Oslo, Norway
Stig Kjær Andersen
Aalborg University, Denmark
Jos Aarts
Erasmus University, Rotterdam, The Netherlands
and
Petter Hurlen
Akershus University Hospital, Norway
Amsterdam • Berlin • Tokyo • Washington, DC
© 2011 European Federation for Medical Informatics.
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.
ISBN 978-1-60750-805-2 (print)
ISBN 978-1-60750-806-9 (online)
Library of Congress Control Number: 2011934890
Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: [email protected]
Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: [email protected]
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS
Preface
This volume of Studies in Health Technology and Informatics contains the proceedings of MIE2011, the 23rd conference of the European Federation for Medical Informatics (EFMI). MIE2011 is hosted by the Forum for Databehandling i Helsesektoren (FDH)1 in collaboration with EFMI. MIE2011 builds on the traditions of 22 preceding MIEs, starting in Cambridge (1978) and, more recently, Geneva (2005), Maastricht (2006), Gothenburg (2008) and Sarajevo (2009). The special theme for MIE2011 is "User centred networked health care", highlighting design for, and experiences of, health professionals and patients working and living in ICT-enabled environments. This ties into the Scandinavian tradition of active user involvement in all aspects of the design and implementation of complex workplace technology. MIE2011 will highlight the broad range of health informatics research and innovation at regional, national, and international levels. Health care is transforming into a networked activity in which strict boundaries between health care facilities and the home are vanishing. Patients demand and require continuity of care. Health care providers become team players, sharing decision-making responsibilities with colleagues and patients. This poses interesting challenges for health informatics, where critical appraisal of strategies for user involvement, deployment and sustainable use of information systems, and new forms of patient-provider collaboration are needed. Ideas presented as "meaningful use" open additional perspectives and exciting opportunities to ensure that users, understood broadly as health providers, patients, their families or consumers at large, are offered solutions according to their needs. Such trends and developments are recognized across the contributions at MIE2011. Related to the special theme, we specifically highlight:
• User-centeredness is discussed in terms of usability and usefulness for the health provider, but also in terms of citizen orientation, empowerment, and opportunities for patient-provider collaboration or self-care enabled by web applications.
• Networked health care may be realized by an integrated, operational EHR in collaborative environments where appropriate information is available at the point of need. Continued development of standards and terminologies is complemented with search strategies to make sense of free-text entries in several languages.
• Coordination and collaboration within an institution or across levels of care is another line of development towards patient-oriented health care, where current developments bring together discussions of interoperability, standards and search strategies in new ways. Such achievements also call for discussions of privacy and security, as well as careful evaluation to systematize and share the experiences and gains.
• The changing environments of care reflect the changing division of labor and call for more permeable boundaries between community health, primary care, and specialized care to support unfolding patient trajectories of care. In this picture, integration and additional perspectives in health informatics, like social care informatics, open opportunities to support the mobility of patients and providers in new and innovative ways.
1 Norwegian Society for Medical Informatics
MIE2011 received approximately 500 submissions for consideration. Selection has been a major challenge for the Scientific Program Committee. The majority of the submissions received three reviews, which were accompanied by suggestions and advice for possible improvements to the contribution. We are indebted to the over 220 colleagues who volunteered time and energy to serve as reviewers. To honor their contribution, they are all listed in this book. The Conference Program and the Proceedings offer a selection of oral presentations submitted as full papers or short communications, as well as workshops, panels and posters. Many of these activities are sponsored by the EFMI Working Groups. It is encouraging for the development of health informatics that many of the submissions are by young researchers. We are pleased that MIE2011 is an arena where they choose to share their ideas and findings with peers. Furthermore, MIE2011 will host an application-oriented track, "partnerships in innovation", where EFMI institutional members and corporate affiliates participate actively. Most topics presented in these MIE2011 proceedings are interdisciplinary in nature and may interest a variety of stakeholders: nurses, physicians and allied health providers, health IT specialists, informaticians, engineers, academics, and representatives from industry and consultancy. This European conference gathers participants from most parts of the world, reflected by the nationalities of the more than 1150 contributing authors representing Europe, Asia, Africa, and South and North America. We hope you will enjoy the program of keynotes, presentations of accepted papers, workshops, panels and posters, and will participate in the exchange of ideas and experiences. The proceedings are an integral part of MIE2011. The printed version of the MIE2011 proceedings includes the PubMed-indexed full papers accepted for presentation. The MIE2011 CD includes the printed proceedings (i.e., the full papers), short communications, posters, workshops and panels, as well as demonstrations and the "partnership in innovation" synopses. To facilitate wider access to the material presented during MIE2011, the proceedings with the full papers will be available from the IOS Press online book platform. The additional material on the CD will be handled as an EFMI publication. We are grateful to the colleagues who agreed to serve as members of the SPC core: Drs. Truls Østbye, Elske Ammenwerth, Ronald Cornet, Rolf Engelbrecht, Sabine Koch, Silvana Quaglini, Pieter Toussaint, and Rune Fensli. Dr. Alexander Horsch chaired the subcommittee for workshop and panel selection. Dr. Sabine Koch chairs the award committee for the Peter L. Reichertz Prize, which will be awarded to the best paper by a young scientist; Dr. Ronald Cornet chairs the committee selecting the recipient of the Rolf Hansen Prize for the best paper on a clinical information system; and Dr. Elske Ammenwerth chairs the committee selecting the recipient of the prize for the best poster. Acknowledgement: The editorial team is grateful to Ms. Shazia Mushtaq for her careful and extensive work editing the submissions and preparing the proceedings.
Oslo, Aalborg, Rotterdam, June 2011 Anne Moen, Stig Kjær Andersen, Jos Aarts, Petter Hurlen (editors)
Reviewers for MIE2011 The following persons contributed to the selection of papers: Aarts, Jos Abidi, Syed Sibte Raza Adams, Samantha Ahmadian, Leila Alonso, Albert Alpay, Laurence Alsafadi, Yasser Ammenwerth, Elske Andersen, Stig Kjær Anguita, Alberto Atalag, Koray Aydin, Serap Balkanyi, Laszlo Bamidis, Panagiotis Bastos, Laudelino Ben Said, Mohamed Bernad, Elena Bertelsen, Pernille Beuscart-Zephir, Marie-Catherine Bichel-Findlay, Jen Bodenreider, Olivier Borycki, Elizabeth Bouamrane, Matt-Mouley Boye, Niels Bratan, Tanja Breil, Bernhard Breu, Ruth Burgert, Oliver Bygholm, Ann Bø, Marte Rime Capozzi, Davide Ceusters, Werner Cheshire, Paul Chronaki, Catherine Chute, Christopher Cimino, James Cornet, Ronald Costa, Carlos Courteille, Olivier Creswick, Nerida Cummings, Elizabeth
Daskalakis, Stylianos Day, Karen de Bruijn, Berry de Clercq, Etienne de Keizer, Nicolette de la Calle, Guillermo de Lusignan, Simon Dexheimer, Judith Dias, Andre Dinesen, Birthe Eccher, Claudio Effken, Judith Eisenstein, Eric Elberg, Pia Faxvaag, Arild Fernandez-Breis, Jesualdo Tomas Ferrazzi, Fulvia Fioriglio, Gianluigi Focsa, Mircea Forkert, Nils Daniel Fox, Scott Fritz, Fleur Garde, Sebastian Georg, Gersende Georgiou, Andrew Giacomini, Mauro Gibaud, bernard Giuliani, Francesco Gong, Yang Grabar, Natalia Grandison, Tyrone György, Surján Hackl, Werner O Hägglund, Maria Hains, Isla Hanmer, Lyn Hartvigsen, Gunnar Hartz, Tobias Haskell, Robert Hasman, Arie Heimly, Vigdis
Heinzl, Harald Héja, Gergely Horsch, Alexander Hsu, William Huang, Chun-His Hurlen, Petter Huser, Vojtech Häyrinen, Kristiina Hörbst, Alexander Ingenerf, Josef James, Andrew Jao, Chiang Joubert, Michel Juhola, Martti Kalra, Dipak Karanikolas, Nikitas Karopka, Thomas Kastania, Anastasia Katt, Basel Kaufman, David Kim, G Kindler, Hauke Koch, Sabine Kohl, Christian Kondoh, Hiroshi Korpela, Mikko Koutkias, Vassilis Kristiansen, Lill Kurzynski, Marek König, Sergio Layzell, Brian Leonardi, Giorgio Lopez, Diego Lovis, Christian Lungeanu, Diana Lærum, Hallvard Magrabi, Farah Mantas, John Mayorov, Oleg Mazzoleni, M.Cristina Menke, James Mensah, Edward Merabti, Tayeb Mihalas, George Moehr, Jochen Moen, Anne Mohd Yusof, Maryati Mohyuddin, Mohyuddin Montani, Stefania
Mulvenna, Maurice Murray, Peter Musgrove, Marcela Muttitt, Sarah Mykkänen, Juha Møller, Marcel Neumuth, Thomas Neveol, Aurelie Niggemann, Joerg Nilsson, Gunnar Nishibori, Masahiro O’Connor, Martin Oemig, Frank Oliveira, Jose Luis Orman, Häkan Otero, Paula Ozkaynak, Mustafa Panzarasa, Silvia Parry, Dave Pasche, Emilie Pearl, Adrian Peek, Niels Pelayo, Sylvia Peleg, Mor Petrovecki, Mladen Portet, Francois Power, Michael Prinz, Michael Protti, Denis Punys, Vytenis Quaglini, Silvana Quantin, Catherine Rasmussen, Anne Razavi, Amir-Reza Reichert, Assa Richards, Janise Rigby, Michael Roderer, Nancy Rodrigues, Jean Marie Rognoni, Carla Rosenbeck, Kirstine Rubrichi, Stefania Röhrig, Rainer Saboor, Samrend Sacchi, Lucia Saka, Osman Santos, Raquel Sara, Antony Saxena, Kshitij
Schabetsberger, Thomas Schmidt, Rainer Scholl, Jeremiah Scott, Philip Sedlmayr, Martin Seroussi, Brigitte Shahsavar, Nosrat Shifrin, Michael Showell, Chris Sintchenko, Vitali Smalheiser, Neil Smith, Catherine Spyns, Peter Staemmler, Martin Stausberg, Jørgen Stenzhorn, Holger Stern, Milton Stoicu-Tivadar, Lacramioara Subramaniam, Kailash Supek, Selma Svatek, Vojtech Takeda, Hiroshi
Takian, Amir Teodoro, Douglas Thiel, Rainer Toussaint, Pieter Toyoda, Shuichi Tucker, Allan Tudor, Mrs Anca Tusch, Guenter Vagnoni, Matthew Valdez, Rupa van Engen-Verheul, Mariette Warren, Jim Weber, Patrick Westbrook, Johanna Wolf, Klaus-Hendrik Zai, Adrian Zhang, Songmao Zrimec, Tatjana Zvarova, Jana Østbye, Truls Øyri, Karl
Contents Preface Anne Moen, Stig Kjær Andersen, Jos Aarts and Petter Hurlen Reviewers for MIE2011
Citizen-Centred e-Health A Unified Approach for Social-Medical Discovery Haggai Roitman, Yossi Mesika, Yevgenia Tsimerman and Sivan Yogev Information Provision for Adolescents with Cancer Anna Shillabeer Electronic Symptom Reporting by Patients: A Literature Review Monika A. Johansen, Eva Henriksen, Gro Berntsen and Alexander Horsch Increasing Physical Activity Through Health-Enabling Technologies: The Project “Being Strong Without Violence” Corinna Scharnweber, Wolfram Ludwig, Michael Marschollek, Wolfgang Pein, Peter Schack, Reiner Schubert and Reinhold Haux Review of Mobile Terminal-Based Tools for Diabetes Diet Management Eunji Lee, Naoe Tatara, Eirik Årsand and Gunnar Hartvigsen Interaction Between COPD Patients and Healthcare Professionals in a Cross-Sector Tele-Rehabilitation Programme Birthe Dinesen, Stig Kjaer Andersen, Ole Hejlesen and Egon Toft Enhancing Self-Efficacy for Self-Management in People with Cystic Fibrosis Elizabeth Cummings, Jenny Hauser, Helen Cameron-Tucker, Petya Fitzpatrick, Melanie Jessup, E. Haydn Walters, David Reid and Paul Turner Evaluation of a Hyperlinked Consumer Health Dictionary for Reading EHR Notes Laura Slaughter, Karl Øyri and Erik Fosse A Pilot Assessment of Why Patients Choose Not to Participate in Self-Monitoring Oral Anticoagulant Therapy Morten Algy Bonderup, Stine Veje Hangaard, Pernille Heyckendorff Lilholt, Mette Dencker Johansen and Ole K. Hejlesen Mobile Peer Support in Diabetes Taridzo Chomutare, Eirik Årsand and Gunnar Hartvigsen Evolution of Health Web Certification Through the HONcode Experience Célia Boyer, Vincent Baujard and Antoine Geissbuhler Personal Health Data: Patient Consent in Information Age Dragana Martinovic, Victor Ralevich and Milan Petkovic Emotions and Personal Health Information Management: Some Implications for Design Enrico Maria Piras and Alberto Zanutto Socio-Technical Challenges in Designing a Web-Based Communication Platform Miria Grisot, Maja van der Velden and Polyxeni Vassilakopoulou
Results of the 10th HON Survey on Health and Medical Internet Use Natalia Pletneva, Sarah Cruchet, Maria-Ana Simonet, Maki Kajiwara and Célia Boyer Social Connectedness Through ICT and the Influence on Wellbeing: The Case of the CareRabbit Sanne R. Blom, Magda M. Boere-Boonekamp and Robert A. Stegwee Technological Choices for Mobile Clinical Applications Frederic Ehrler, David Issom and Christian Lovis Modified Rand Method to Derive Quality Indicators: A Case Study in Cardiac Rehabilitation Mariëtte van Engen-Verheul, Hareld Kemps, Roderik Kraaijenhagen, Nicolette de Keizer and Niels Peek A Cloud-Based Semantic Wiki for User Training in Healthcare Process Management D. Papakonstantinou, M. Poulymenopoulou, F. Malamateniou and G. Vassilacopoulos Reference Architecture of Application Services for Personal Wellbeing Information Management Mika Tuomainen and Juha Mykkänen Development of a Web-Based Decision Support System for Insulin Self-Titration A.C.R. Simon, F. Holleman, J.B. Hoekstra, P.A. de Clercq, B.A. Lemkes, J. Hermanides and N. Peek TreC – A REST-Based Regional PHR Claudio Eccher, Enrico Maria Piras and Marco Stenico
Decision Support, Knowledge Management, Guidelines Next Generation Neonatal Health Informatics with Artemis Carolyn McGregor, Christina Catley, Andrew James and James Padbury Limitations in Physicians’ Knowledge when Assessing Dementia Diseases – An Evaluation Study of a Decision-Support System Helena Lindgren A Generic System for Critiquing Physicians’ Prescriptions: Usability, Satisfaction and Lessons Learnt Jean-Baptiste Lamy, Vahid Ebrahiminia, Brigitte Seroussi, Jacques Bouaud, Christian Simon, Madeleine Favre, Hector Falcoff and Alain Venot An OCL-Compliant GELLO Engine Jing Mei, Haifeng Liu, Guotong Xie, Shengping Liu and Baoyao Zhou Improvement of Inter-Services Communication Through a CDSS Dedicated to Myocardial Perfusion Scintigraphy Julie Nies, Gersende Georg, Marc Faraggi, Isabelle Colombet and Pierre Durieux Prognostic Data-Driven Clinical Decision Support – Formulation and Implications Ruty Rinott, Boaz Carmeli, Carmel Kent, Daphna Landau, Yonatan Maman, Yoav Rubin and Noam Slonim
Knowledge-Based Surveillance for Preventing Postoperative Surgical Site Infection Arash Shaban-Nejad, Gregory W. Rose, Anya Okhmatovskaia, Alexandre Riazanov, Christopher J.O. Baker, Robyn Tamblyn, Alan J. Forster and David L. Buckeridge Factors Known to Influence Acceptance of Clinical Decision Support Systems E. Kilsdonk, L.W.P. Peute, S.L. Knijnenburg and M.W.M. Jaspers Cross-Frontier Information Provision in the ALIAS European Project Frédérique Laforest, Atisha Garin-Michaud, Thierry Durand, Emmanuel Eyraud and Edouard Barthuet Event-Driven Architecture for Health Event Detection from Multiple Sources Kerstin Denecke, Göran Kirchner, Peter Dolog, Pavel Smrz, Jens Linge, Gerhard Backfried and Johannes Dreesman Towards an Interoperable Information Infrastructure Providing Decision Support for Genomic Medicine Matthias Samwald, Holger Stenzhorn, Michel Dumontier, M. Scott Marshall, Joanne Luciano and Klaus-Peter Adlassnig Identifying Patients for Clinical Trials Using Fuzzy Ternary Logic Expressions on HL7 Messages Raphael W. Majeed and Rainer Röhrig Towards a Metadata Registry for Evaluating Augmented Medical Interventions Anne-Sophie Silvent, Alexandre Moreau-Gaudry and Philippe Cinquin A Comparison of Internal Versus External Risk-Adjustment for Monitoring Clinical Outcomes Antonie Koetsier, Nicolette de Keizer and Niels Peek Interoperability Driven Integration of Biomedical Data Sources Douglas Teodoro, Rémy Choquet, Daniel Schober, Giovanni Mels, Emilie Pasche, Patrick Ruch and Christian Lovis Creating Knowledge Archive in the Internet Medical Consultant for Decision Support at the Point of Care Draško Nakić and Suzana Loškovska Architecture of a Decision Support System to Improve Clinicians’ Interpretation of Abnormal Liver Function Tests Raphaël Chevrier, David Jaques and Christian Lovis
Education – Professional Development Push and Pull Models to Manage Patient Consent and Licensing of Multimedia Resources in Digital Repositories for Case-Based Reasoning Andrzej A. Kononowicz, Nabil Zary, David Davies, Jörn Heid, Luke Woodham and Inga Hege Next Steps in Evaluation and Evidence – from Generic to Context-Related Michael Rigby, Jytte Brender, Marie-Catherine Beuscart-Zephir, Hannele Hyppönen, Pirkko Nykänen, Jan Talmon, Nicolette de Keizer and Elske Ammenwerth Virtual Ward Round Michael Storck and Frank Ückert
Professional Development of Health Informatics in Northern Ireland Paul McCullagh, Gerry McAllister, Paul Hanna, Dewar Finlay and Paul Comac How Important is Theory in Health Informatics? A Survey of UK Academics Philip Scott, James Briggs, Jeremy Wyatt and Andrew Georgiou Better Quality in Healthcare Through Gamified Simulation Based Skill Training Application Weronika Tancredi, Mikael Wintell and Lars Lindsköld Implementation of a Web-Based Interactive Virtual Patient Case Stimulation as a Training and Assessment Tool for Medical Students A. Oliven, R. Nave, D. Gilad and A. Barch Online CME Usage Patterns M. Cristina Mazzoleni, Carla Rognoni, Enrico Finozzi, Ines Giorgi, Marco Pagani and Marcello Imbriani How Do Nursing Students Perceive the Notion of EHR? An Empirical Investigation Parisis Gallos, Stelios Daskalakis, Maria Katharaki, Joseph Liaskos and John Mantas Recording and Podcasting of Lectures for Students of Medical School Pierre Brunet, Marc Cuggia and Pierre Le Beux
Electronic Health Record, Workflow, Intra- and Interorganizational Collaboration Developing an Electronic Health Record for Intractable Diseases in Japan Eizen Kimura, Shinji Kobayashi, Yasuhiro Kanatani, Ken Ishihara, Tsuneyo Mimori, Ryousuke Takahashi, Tsutomu Chiba and Hiroyuki Yoshihara Three Key Concerns for a Successful EPR Deployment and Usage Rebecka Janols, Bengt Göransson and Bengt Sandblad Implementation of an Open Source Provider and Organization Registry Service Markus Birkle, Benjamin Schneider, Tobias Beck, Thomas Deuster, Markus Fischer, Florian Flatow, Robert Heinrich, Christian Kapp, Jasmin Riemer, Michael Simon and Björn Bergh Implementation and Experimentation of TEDIS: An Information System Dedicated to Patients with Pervasive Developmental Disorders Mohamed Ben Said, Laurence Robel, Erwan Vion, Antoine El Ghazali, Bernard Golse, Jean Philippe Jais and Paul Landais Traceability of Patient Records Usage: Barriers and Opportunities for Improving User Interface Design and Data Management Ricardo Cruz-Correia, Luís Lapão and Pedro Pereira Rodrigues Important Ingredients for Health Adaptive Information Systems Yalini Senathirajah and Suzanne Bakken Everyday Ethical Dilemmas Arising With Electronic Record Use in Primary Care Ellen Balka and Marianne Tolar The Shift in Workarounds Upon Implementation of Computerized Physician Order Entry Heleen van der Sijs, Irene Rootjes and Jos Aarts
Task Analysis and Interoperable Application Services for Service Event Management Juha Mykkänen, Hannu Virkanen, Pirkko Kortekangas, Saara Savolainen and Timo Itälä Organs Transplantation – How to Improve the Process? Viriato Ferraz, Gerardo Oliveira, Pedro Vieira-Marques and Ricardo Cruz-Correia A Reference Architecture for Integrated EHR in Colombia Edgar de la Cruz, Diego M. Lopez, Gustavo Uribe, Carolina Gonzalez and Bernd Blobel Integration Services to Enable Regional Shared Electronic Health Records Ilídio C. Oliveira and João P.S. Cunha Towards Smart Environments Using Smart Objects Martin Sedlmayr, Hans-Ulrich Prokosch and Ulli Münch Interoperability in Hospital Information Systems: A Return-On-Investment Study Comparing CPOE with and Without Laboratory Integration Rodolphe Meyer and Christian Lovis Building the Technical Infrastructure to Support a Study on Drug Safety in a General Hospital Melanie Kirchner, Thomas Bürkle, Andrius Patapovas, Anja Mathews, Reinhold Sojer, Fabian Müller, Harald Dormann, Renke Maas and Hans-Ulrich Prokosch Implementing Change in a Diverse and Politicized Landscape Espen Skorve Characteristics of German Hospitals Adopting Health IT Systems – Results from an Empirical Study Jan-David Liebe, Nicole Egbert, Andreas Frey and Ursula Hübner Nursing Information System: A Relevant Substitute of the Paper Nursing Record Margreet B. Michel-Verkerke GP Connector: A Tool to Enable Access for General Practitioners to a Standards-Based Personal and Electronic Health Record in the Rhine-Neckar Region Oliver Heinze, Holger Schmuhl and Björn Bergh Proposal of an End-To-End Emergency Medical System Samir El-Masri and Basema Saddik The General Practitioner in the Giant’s Web Vigdis Heimly When Information Sharing is not Enough Berit Brattheim, Arild Faxvaag and Pieter Toussaint Information and Communication Needs of Healthcare Workers in the Perioperative Domain Børge Lillebo, Andreas Seim and Arild Faxvaag Clinical Situations and Information Needs of Physicians During Treatment of Diabetes Mellitus Patients: A Triangulation Study Gudrun Hübner-Bloder, Georg Duftschmid, Michael Kohler, Christoph Rinner, Samrend Saboor and Elske Ammenwerth
A Constructivist Approach? Using Formative Evaluation to Inform the Electronic Prescription Service Implementation in Primary Care, England Jasmine Harvey, Anthony Avery, Justin Waring, Ralph Hibberd and Nicholas Barber Can Cloud Computing Benefit Health Services? – A SWOT Analysis Mu-Hsing Kuo, Andre Kushniruk and Elizabeth Borycki
Evaluation Medical Providers’ Dental Information Needs: A Baseline Survey Amit Acharya, Andrea Mahnke, Po-Huang Chyou, Carla Rottscheit and Justin B. Starren What Makes an Information System More Preferable for Clinicians? A Qualitative Comparison of Two Systems Habibollah Pirnejad, Zahra Niazkhani, Jos Aarts and Roland Bal Does PACS Facilitate Work Practice Innovation in the Intensive Care Unit? Isla M. Hains, Nerida Creswick and Johanna I. Westbrook Innovation in Intensive Care Nursing Work Practices with PACS Nerida Creswick, Isla M. Hains and Johanna I. Westbrook Evaluation of Telephone Triage and Advice Services: A Systematic Review on Methods, Metrics and Results Sara Carrasqueiro, Mónica Oliveira and Pedro Encarnação Human Factors Based Recommendations for the Design of Medication Related Clinical Decision Support Systems (CDSS) Sylvia Pelayo, Romaric Marcilly, Stéphanie Bernonville, Nicolas Leroy and Marie-Catherine Beuscart-Zephir Making a Web Based Ulcer Record Work by Aligning Architecture, Legislation and Users – A Formative Evaluation Study Anne G. Ekeland, Eva Skipenes, Beate Nyheim and Ellen K. Christiansen Assessing the Role of a Site Visit in Adopting Activity Driven Methods Irmeli Luukkonen, Kaija Saranto and Mikko Korpela A Multi-Method Study of Factors Associated with Hospital Information System Success in South Africa Lyn A. Hanmer, Sedick Isaacs and J. Dewald Roode Assessing Biocomputational Modelling in Transforming Clinical Guidelines for Osteoporosis Management Rainer Thiel, Marco Viceconti and Karl Stroetmann Technical Data Evaluation of a Palliative Care Web-Based Documentation System Tobias Hartz, René Brüntrup and Frank Ückert
Imaging and Biosignals Extracting Gait Parameters from Raw Electronic Walkway Data André Dias, Lukas Gorzelniak, Angela Döring, Gunnar Hartvigsen and Alexander Horsch Safe Storage and Multi-Modal Search for Medical Images Jukka Kommeri, Marko Niinimäki and Henning Müller
Respiration Tracking Using the Wii Remote Game-Controller J. Guirao Aguilar, J.G. Bellika, L. Fernandez Luque and V. Traver Salcedo A Nomenclature for the Analysis of Continuous Sensor and Other Data in the Context of Health-Enabling Technologies Matthias Gietzelt, Klaus-Hendrik Wolf and Reinhold Haux Image-Based Classification of Parkinsonian Syndromes Using T2’-Atlases Nils Daniel Forkert, Alexander Schmidt-Richberg, Brigitte Holst, Alexander Münchau, Heinz Handels and Kai Boelmans Cell Edge Detection in JPEG2000 Wavelet Domain – Analysis on Sigmoid Function Edge Model Vytenis Punys and Ramunas Maknickas
Information Modeling, Storage and Retrieval Using Multimodal Mining to Drive Clinical Guidelines Development Emilie Pasche, Julien Gobeill, Douglas Teodoro, Dina Vishnyakova, Arnaud Gaudinat, Patrick Ruch and Christian Lovis Defining and Reconstructing Clinical Processes Based on IHE and BPMN 2.0 Melanie Strasser, Franz Pfeifer, Emmanuel Helm, Andreas Schuler and Josef Altmann Facilitating Access to Laboratory Guidelines by Modeling Their Contents and Designing a Computerized User Interface Mobin Yasini, Catherine Duclos, Jean-Baptiste Lamy and Alain Venot Evaluation of Multi-Terminology Super-Concepts for Information Retrieval Nicolas Griffon, Lina F. Soualmia, Aurélie Névéol, Philippe Massari, Benoit Thirion, Badisse Dahamna and Stefan J. Darmoni Framework Model and Principles for Trusted Information Sharing in Pervasive Health Pekka Ruotsalainen, Bernd Blobel, Pirkko Nykänen, Antto Seppälä and Hannu Sorvari Populating the i2b2 Database with Heterogeneous EMR Data: A Semantic Network Approach Sebastian Mate, Thomas Bürkle, Felix Köpcke, Bernhard Breil, Bernd Wullich, Martin Dugas, Hans-Ulrich Prokosch and Thomas Ganslandt A Novel Way of Standardized and Automized Retrieval of Timing Information Along Clinical Pathways Eva Gattnar, Okan Ekinci, Vesselin Detschew Computing the Compliance of Physician Drug Orders with Guidelines Using an OWL2 Reasoner and Standard Drug Resources Joseph Noussa Yao, Brigitte Séroussi and Jacques Bouaud Automatic Definition of the Oncologic EHR Data Elements from NCIT in OWL Marc Cuggia, Annabel Bourdé, Bruno Turlin, Sebastien Vincendeau, Valerie Bertaud, Catherine Bohec and Régis Duvauferrier Developing a Model for the Adequate Description of Electronic Communication in Hospitals Samrend Saboor and Elske Ammenwerth
Contextualization in Automatic Extraction of Drugs from Hospital Patient Records Svetla Boytcheva, Dimitar Tcharaktchiev and Galia Angelova Revisiting the Area Under the ROC Berry de Bruijn Service Delivery for e-Health Applications Martin Staemmler A KPI Framework for Process-Based Benchmarking of Hospital Information Systems Franziska Jahn and Alfred Winter
Natural Language Processing, Data Mining Medical Knowledge Evolution Query Constraining Aspects Ann-Marie Eklund Optimal Asymmetrical SVM Using Pattern Search. A Health Care Application Gilles Cohen and Rodolphe Meyer Factuality Levels of Diagnoses in Swedish Clinical Text Sumithra Velupillai, Hercules Dalianis and Maria Kvist Network Analysis of Possible Anaphylaxis Cases Reported to the US Vaccine Adverse Event Reporting System after H1N1 Influenza Vaccine Taxiarchis Botsis and Robert Ball Using Pharmacogenetics Knowledge to Increase Accuracy of Alerts for Adverse Drug Events Yossi Mesika, Byung Chul Lee, Yevgenia Tsimerman, Haggai Roitman and Heon Kyu Park Schizophrenia Prediction with the Adaboost Algorithm Jan Hrdlicka and Jiri Klema Applying One-vs-One and One-vs-All Classifiers in k-Nearest Neighbour Method and Support Vector Machines to an Otoneurological Multi-Class Problem Kirsi Varpa, Henry Joutsijoki, Kati Iltanen and Martti Juhola Roogle: An Information Retrieval Engine for Clinical Data Warehouse Marc Cuggia, Nicolas Garcelon, Boris Campillo-Gimenez, Thomas Bernicot, Jean-François Laurent, Etienne Garin, André Happe and Régis Duvauferrier Truecasing Clinical Narratives Markus Kreuzthaler and Stefan Schulz Checking Coding Completeness by Mining Discharge Summaries Stefan Schulz, Thorsten Seddig, Susanne Hanser, Albrecht Zaiβ and Philipp Daumke
Privacy and Security Healthcare Professionals’ Experiences with EHR-System Access Control Mechanisms Arild Faxvaag, Trond S. Johansen, Vigdis Heimly, Line Melby and Anders Grimsmo
Personal Health Information on Display: Balancing Needs, Usability and Legislative Requirements Erlend Andreas Gjære, Inger Anne Tøndel, Maria B. Line, Herbjørn Andresen and Pieter Toussaint Watermarking – A New Way to Bring Evidence in Case of Telemedicine Litigation Gouenou Coatrieux, Catherine Quantin, François-André Allaert, Bertrand Auverlot and Christian Roux Sharing Sensitive Personal Health Information Through Facebook: The Unintended Consequences Mowafa Househ End-to-End Security for Personal Telehealth Paul Koster, Muhammad Asim and Milan Petkovic
Public Health, Catastrophes, Outbreaks The Epidemiologic Surveillance of Dengue-Fever in French Guiana: When Achievements Trigger Higher Goals Claude Flamand, Philippe Quenel, Vanessa Ardillon, Luisiane Carvalho, Sandra Bringay and Maguelonne Teisseire Prescribing History to Identify Candidates for Chronic Condition Medication Adherence Promotion Jim Warren, Debra Warren, Hong Yul Yang, Thusitha Mabotuwana, John Kennelly, Tim Kenealy and Jeff Harrison Challenges for Signal Generation from Medical Social Media Data Johannes Dreesman and Kerstin Denecke Providing Trust and Interoperability to Federate Distributed Biobanks Martin Lablans, Sebastian Bartholomäus and Frank Ückert Web 2.0 in Healthcare: State-of-the-Art in the German Health Insurance Landscape Mirko Kuehne, Nadine Blinn, Christoph Rosenkranz and Markus Nuettgens Improving the Transparency of Health Information Found on the Internet Through the Honcode: A Comparative Study Sabine Laversin, Vincent Baujard, Arnaud Gaudinat, Maria-Ana Simonet and Célia Boyer
Telemedicine and Mobile Health Data Privacy Preservation in Telemedicine: The PAIRSE Project Ebrahim Nageba, Bruno Defude, Franck Morvan, Chirine Ghedira and Jocelyne Fayn Relevance and Usability of a Computerized Patient Simulator for Continuous Medical Education of Isolated Care Professionals in Sub-Saharan Africa Georges Bediang, Cheick Oumar Bagayoko, Marc-André Raetzo and Antoine Geissbuhler Applications of Medical Intelligence in Remote Monitoring István Vassányi, György Kozmann, András Bánhalmi, Balázs Végső, István Kósa, Tibor Dulai, Zsolt Tarjányi, Gergely Tuboly, Péter Cserti and Balázs Pintér
Virtual TeleRehab: A Case Study Lena Pareto, Britt Johansson, Sally Zeller, Katharina S. Sunnerhagen, Martin Rydmark and Jurgen Broeren Patient Empowerment by Increasing Information Accessibility in a Telecare System Vasile Topac and Vasile Stoicu-Tivadar
Terminology, Ontologies and Standardization A Standard Based Approach for Biomedical Knowledge Representation Ariel Farkash, Hani Neuvirth, Yaara Goldschmidt, Costanza Conti, Federica Rizzi, Stefano Bianchi, Erika Salvi, Daniele Cusi and Amnon Shabo Ontology-Based Framework for Electronic Health Records Interoperability Carolina González, Bernd G.M.E. Blobel and Diego M. López Ontology-Based Knowledge Management for Personalized Adverse Drug Events Detection Feng Cao, Xingzhi Sun, Xiaoyuan Wang, Bo Li, Jing Li and Yue Pan A Formal Analysis of HL7 Version 2.x Frank Oemig and Bernd Blobel Simplifying HL7 Version 3 Messages Robert Worden and Philip Scott Creating an Ontology Driven Rules Base for an Expert System for Medical Diagnosis Valérie Bertaud Gounot, Valéry Donfack, Jérémy Lasbleiz, Annabel Bourde and Régis Duvauferrier A Methodology and Supply Chain Management Inspired Reference Ontology for Modeling Healthcare Teams Craig E. Kuziemsky and Sara Yazdi Supporting openEHR Java Desktop Application Developers Hajar Kashfi and Olof Torgersson Large Scale Healthcare Data Integration and Analysis Using the Semantic Web John Timm, Sondra Renly and Ariel Farkash ACGT: Advancing Clinico-Genomic Trials on Cancer – Four Years of Experience Luis Martin, Alberto Anguita, Norbert Graf, Manolis Tsiknakis, Mathias Brochhausen, Stefan Rüping, Anca Bucur, Stelios Sfakianakis, Thierry Sengstag, Francesca Buffa and Holger Stenzhorn Architectural Approach for Providing Relations in Biomedical Terminologies and Ontologies Mathias Brochhausen and Bernd Blobel Integration of Classifications and Terminologies in Metadata Registries Based on ISO/IEC 11179 Sylvie Mn Ngouongo and Jürgen Stausberg Development of a New International Classification of Health Interventions Based on an Ontology Framework Béatrice Trombert Paviot, Richard Madden, Lori Moskal, Albrecht Zaiss, Cédric Bousquet, Anand Kumar, Pierre Lewalle and Jean Marie Rodrigues
The Revision of the Korean Classifications of Health Interventions Based on the Proposed ICHI Semantic Model and Lessons Learned Boyoung Jung, Chaeyoung Jung, Jean Marie Rodrigues, Cédric Bousquet, Anand Kumar, Pierre Lewalle, Béatrice Trombert Paviot, Hoonshik Yang and Sukil Kim Web-Based Collaboration for Terminology Application: ICNP C-Space Claudia C. Bartz and Derek Hoy Mapping Medical Records of Gastrectomy Patients to SNOMED CT Eun-Young So and Hyeoun-Ae Park Terminology for the Description of the Diagnostic Studies in the Field of EBM Natalia Grabar, Ludovic Trinquart and Isabelle Colombet Representing Knowledge, Data and Concepts for EHRS Using DCM William Goossen Ontology-Based Automatic Generation of Computerized Cognitive Exercises Giorgio Leonardi, Silvia Panzarasa and Silvana Quaglini Creating a Magnetic Resonance Imaging Ontology Jérémy Lasbleiz, Hervé Saint-Jalmes, Régis Duvauferrier and Anita Burgun Validation of the openEHR Archetype Library by Using OWL Reasoning Marcos Menárguez-Tortosa and Jesualdo Tomás Fernández-Breis Grouping Pharmacovigilance Terms with Semantic Distance Marie Dupuch, Magnus Lerch, Anne Jamet, Marie-Christine Jaulent, Reinhard Fescharek and Natalia Grabar The Archetype-Enabled EHR System ZK-ARCHE – Integrating the ISO/EN 13606 Standard and IHE XDS Profile Michael Kohler, Christoph Rinner, Gudrun Hübner-Bloder, Samrend Saboor, Elske Ammenwerth and Georg Duftschmid Using a Logical Information Model-Driven Design Process in Healthcare Yu Chye Cheong, Linda Bird, Nwe Ni Tun and Colleen Brooks SNOMED CT Implementation: Implications of Choosing Clinical Findings or Observable Entities Anne Randorff Rasmussen and Kirstine Rosenbeck What is the Coverage of SNOMED CT® on Scientific Medical Corpora? Dimitrios Kokkinakis Assisting the Translation of the CORE Subset of SNOMED CT into French Hocine Abdoune, Tayeb Merabti, Stéfan J. Darmoni and Michel Joubert Recording Associated Disorders Using SNOMED CT Ronald Cornet and Nicolette F. de Keizer SNOMED CT’s RF2: Is the Future Bright? Werner Ceusters Serious Adverse Event Reporting in a Medical Device Information System Fabrizio Pecoraro and Daniela Luzi Metadata – An International Standard for Clinical Knowledge Resources Gunnar O. Klein Comparing Existing National and International Classification Systems of Surgical Procedures with the CEN/ISO 1828 Ontology Framework Standard Jean M. Rodrigues, Ann Casey, Cédric Bousquet, Anand Kumar, Pierre Lewalle and Béatrice Trombert Paviot
Model Driven Development of Clinical Information Systems Using openEHR Koray Atalag, Hong Yul Yang, Ewan Tempero and Jim Warren
Translational Research A Metadata-Based Patient Register for Cooperative Clinical Research: A Case Study in Acute Myeloid Leukemia Anja S. Fischer and Ulrich Mansmann De-Identifying an EHR Database – Anonymity, Correctness and Readability of the Medical Record Kostas Pantazos, Soren Lauesen and Soren Lippert Service Oriented Data Integration for a Biomedical Research Network Matthias Ganzinger, Tino Noack, Sven Diederichs, Thomas Longerich and Petra Knaup Single Source Information Systems Can Improve Data Completeness in Clinical Studies: An Example from Nuclear Medicine Susanne Herzberg and Martin Dugas Reporting Qualitative Research in Health Informatics: REQ–HI Recommendations Zahra Niazkhani, Habibollah Pirnejad, Jos Aarts, Samantha Adams and Roland Bal Cell Seeding of Tissue Engineering Scaffolds Studied by Monte Carlo Simulations Andreea Robu, Adrian Neagu and Lacramioara Stoicu-Tivadar The ONCO-I2b2 Project: Integrating Biobank Information and Clinical Data to Support Translational Research in Oncology Daniele Segagni, Valentina Tibollo, Arianna Dagliati, Leonardo Perinati, Alberto Zambelli, Silvia Priori and Riccardo Bellazzi IT Infrastructure Components to Support Clinical Care and Translational Research Projects in a Comprehensive Cancer Center Hans-Ulrich Prokosch, Markus Ries, Alexander Beyer, Martin Schwenk, Christof Seggewies, Felix Köpcke, Sebastian Mate, Marcus Martin, Barbara Bärthlein, Matthias W. Beckmann, Michael Stürzle, Roland Croner, Bernd Wullich, Thomas Ganslandt and Thomas Bürkle Using a Robotic Arm to Assess the Variability of Motion Sensors Lukas Gorzelniak, André Dias, Hubert Soyer, Alois Knoll and Alexander Horsch The Single Source Architecture x4T to Connect Medical Documentation and Clinical Research Philipp Dziuballe, Christian Forster, Bernhard Breil, Volker Thiemann, Fleur Fritz, Jens Lechtenbörger, Gottfried Vossen and Martin Dugas Information Technology Solutions to Support Translational Research on Inherited Cardiomyopathies Riccardo Bellazzi, Cristiana Larizza,, Matteo Gabetta, Giuseppe Milani, Mauro Bucalo, Francesca Mulas, Angelo Nuzzo, Valentina Favalli and Eloisa Arbustini
Usability, HCI, Cognitive Issues Emerging Approaches to Usability Evaluation of Health Information Systems: Towards In-Situ Analysis of Complex Healthcare Systems and Environments Andre W. Kushniruk, Elizabeth M. Borycki, Shigeki Kuwata and Joseph Kannry Contextualization of Automatic Alerts During Electronic Prescription: Researchers’ and Users’ Opinions on Useful Context Factors Elske Ammenwerth, Werner O. Hackl, Daniel Riedmann and Martin Jung Reducing Clinicians’ Cognitive Workload by System Redesign; A Pre-Post Think Aloud Usability Study L.W.P. Peute, N.F. de Keizer, E.P.A. van der Zwan and M.W.M. Jaspers Impact of Alert Specifications on Clinicians’ Adherence M.M. Langemeijer, L.W. Peute and M.W.M. Jaspers Medication Decision-Making on Hospital Ward-Rounds Melissa Baysari, Johanna Westbrook and Richard Day A Qualitative Analysis of Prescription Activity and Alert Usage in a Computerized Physician Order Entry System Rolf Wipfli, Mireille Betrancourt, Alberto Guardia and Christian Lovis Combining Usability Testing with Eye-Tracking Technology: Evaluation of a Visualization Support for Antibiotic Use in Intensive Care Aboozar Eghdam, Johanna Forsman, Magnus Falkenhav, Mats Lind and Sabine Koch Design of a Mobile, Safety-Critical In-Patient Glucose Management System Bernhard Höll, Stephan Spat, Johannes Plank, Lukas Schaupp, Katharina Neubauer, Peter Beck, Franco Chiarugi, Vasilis Kontogiannis, Thomas R. Pieber and Andreas Holzinger Facilitating the Iterative Design of Informatics Tools to Advance the Science of Autism David R. Kaufman, Patrick Cronin, Leon Rozenblit, David Voccola, Amanda Horton, Alisabeth Shine and Stephen B. Johnson Evaluation of Computer Usage in Healthcare Among Private Practitioners of NCT Delhi P. Ganeshkumar, Arun Kumar Sharma and O.P. Rajoura Contextual Inquiry Method for User-Centred Clinical IT System Design Johanna Viitanen A Method to Measure the Reduction of CO2 Emissions in E-Health Applications Paola Di Giacomo and Peter Håkansson
EFMI Invited Session: Health Informatics Research Management Medical Informatic Research Management in Academia – The Danish Setting Stig Kjær Andersen Research Management in Healthcare Informatics – Experiences from Norway Arild Faxvaag, Pieter Toussaint and Trond S. Johansen Research Management: The case of RN4CAST Dimitrios Zikos and John Mantas
eMeasures: A Standard Format for Health Quality Measures Catherine Chronaki, Charles Jaffe and Bob Dolin Clinical Information Systems: Cornerstone for an Efficient Hospital Management Christian Lovis Patient Centered Integrated Clinical Resource Management Jacob Hofdijk Subject Index Author Index
Citizen-Centred e-Health
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-3
A Unified Approach for Social-Medical Discovery
Haggai ROITMAN 1, Yossi MESIKA, Yevgenia TSIMERMAN, Sivan YOGEV
IBM Research, Haifa 31095, Israel
Abstract. In this paper we describe a novel social-medical discovery solution, based on an idea of social and medical data unification. Built on foundations of exploratory search technologies, the proposed discovery solution is better tailored for the social-medical discovery task. We then describe its implementation within the IBM Medics system and discuss a sample use case which demonstrates several new social-medical discovery opportunities.
Keywords. social-medical discovery, entity-relationship graph, IBM Medics
1. Introduction
In recent years, social media (Web 2.0) has become one of the main driving forces on the web. Unlike traditional semantic-web technologies, which mainly focus on efficient, interoperable data exchange among computers, social media technologies focus on online collaboration and knowledge sharing among people. Nowadays, the healthcare domain exhibits a similar shift towards the adoption of social web technologies [1,2]. New Health 2.0 services empower patients to take a more active part in managing their health and wellbeing [1,3], and offer patients a new set of tools for sharing personal social and medical data, and for sharing experiences or expertise about various health-related topics through social collaboration between patients, physicians, and various healthcare service providers [3,4]. In this line of services, online services such as Google Health and Microsoft HealthVault now allow patients to share their personal health records (PHR), in contrast to traditional EMR systems that prohibit patients from accessing their own medical records. Depending on patients' privacy preferences, PHR data may be publicly (or partially) shared, offering new discovery opportunities. For example, personalized medical content recommendations may be delivered to patients based on their PHR data [5]. Furthermore, several online social-medical community services such as Patients-Like-Me and Cure-Together allow patients to discover other patients who share similar medical characteristics, such as similar disorders or symptoms. For example, by joining a medical community on Patients-Like-Me, patients may get additional medical (and even mental) support, leveraging the community's power to discover new possible treatment plans, clinical trials, expert physicians, etc. [4]. Finally, online services such as Med-Help and Drugs.com provide access to rich medical knowledge gathered from various medical knowledge resources (e.g., HCLS Linked Open Drug Data (LODD)).
1 Corresponding Author. Haggai Roitman, IBM Research, Haifa 31095, Israel; email: [email protected].
Such resources can be used by users who seek drug-related information, wish to find expert advice, or want to find evidence on various health-related topics.
Despite the increasing amounts of social data fused together with rich medical data, there still remains a great challenge of how to fully utilize this new combination for purposes of efficient social-medical discovery. Existing social media discovery solutions use relatively simple data models that record relationships between people and their associations with unstructured (text) documents [6]. Therefore, existing social data models are not well suited for handling medical data, which is usually structured in nature, semantically rich (e.g., defined over medical terminologies such as SNOMED-CT, UMLS or ICD-10), standards-based (e.g., HL7 RIM), and so on. On the other hand, existing social-medical solutions utilize only simplified data models and provide limited discovery capabilities that merely exploit the social-medical dataspace. For example, social community services such as Patients-Like-Me currently provide very simple query interfaces for exploring their social-medical community data, spanning from simple keyword search to very limited category-based search over several medical facets such as symptoms or demographic data. As another example, personalized medical recommendation systems that utilize PHR data commonly ignore social data. Furthermore, patients, and even more expert users such as physicians, usually find it hard to explore data whose structure, terminology and query language they are not familiar with [7]; hence, a more exploratory solution is desired, one which can gradually guide patients within the social-medical dataspace. Such data exploration should be backed up with as much evidence as possible, yet be very intuitive even for non-expert users, as patients usually are. Aiming to fill these gaps, in this paper we describe a novel social-medical discovery solution, based on an idea of social and medical data unification. Built on foundations of exploratory search technologies, the proposed discovery solution is better tailored for the social-medical discovery task. In the rest of this paper we describe our solution, its fundamentals and its discovery capabilities. We then describe its implementation within the IBM Medics patient empowerment system.
2. Methods
We now present a novel model for social-medical discovery. Built on foundations of conceptual modeling, social data and medical data are fused together using a uniform representation in the form of a rich entity-relationship (ER) data graph. In turn, social discovery can be augmented with medical discovery and vice versa. This makes it possible to explore new facts about social and medical entities through various paths within the ER graph. For example, we may discover similar patients not only based on direct patient similarity, but also based on their relationships with other similar social or medical entities, e.g., similar medications, allergies, family bonds, or treating physicians. Social and medical facts known to exist are modeled by entities and their relationships. Such facts can be gathered by observing and collecting data from various data sources, such as the ones mentioned in Section 1. Each fact is accompanied by an evidence link that traces its source origin; e.g., a fact about an adverse drug reaction between two drugs may be linked to its FDA alert page or to knowledge from DrugBank. Social entities include, among others, patients, physicians, and even "virtual entities" such as various health service providers (e.g., hospitals).
Medical entities include, among others, medications, allergies, immunizations, symptoms, and genetic variations. Each entity may have a rich set of attributes describing its properties. For example, a patient is represented as a single entity in the graph together with socio-demographic attributes such as gender, age, and location. As another example, each medication is represented as an entity with attributes such as its generic or brand name (code), substance name, etc. Both social and medical entities may have relationships with other social or medical entities. For example, a patient entity may be related to some consumed medication; a medication entity may be related to some drug-interacting medication entity; a patient entity may be related to his or her treating physician entity; and so on. Such a discovery model supports various types of exploratory queries over the social-medical data graph. This includes rich keyword-based queries that can also be mixed with more structured query predicates, allowing users to express very complex information needs. For example, users can submit a query like "Hemophilia AND Patient.age:[40 TO *)" to discover all patients whose age is above 40 and who are related to Hemophilia-related topics. Furthermore, the discovery model supports rich faceted-search and data-lineage capabilities that allow interactive exploration of the social-medical dataspace. For that, we implemented an extended faceted-search model that can index and retrieve both text and structured data and supports OLAP-like complex faceted search over rich entity-relationship data. Using the faceted-search user interface, users may start their search from some initial information need. The search result includes a list of social or medical entities or both, uniformly ranked by their relevance to the user's query. Each entity is further accompanied by relationship links that allow the user to flexibly explore the sub-graph induced from that entity. In addition, facets over the entities in the result set (e.g., patient age or gender distributions) further allow the user to quickly filter entities according to facets of interest and explore the graph projected by those facets. Such social-medical data exploration may be highly useful for patients who wish to explore possible treatment plans by following the medication links of some patient returned as a result of their query, or for physicians and researchers who wish to discover new and interesting patterns in the social-medical dataspace. We discuss more example usages in the next section.
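To make the model concrete, the following sketch illustrates one way such a unified social-medical dataspace could be represented and queried. It is a minimal illustration only, not the authors' implementation: all class names, attribute names and the tiny example data are assumptions made for this sketch. It mirrors the ideas above — typed entities with attributes, relationship facts carrying evidence links, a mixed keyword/structured query in the spirit of "Hemophilia AND Patient.age:[40 TO *)", simple facet counts, and relationship links for sub-graph exploration.

from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Entity:
    # A social or medical entity (patient, medication, symptom, ...).
    id: str
    etype: str                   # e.g. "Patient", "Medication", "Symptom"
    attrs: dict = field(default_factory=dict)
    text: str = ""               # free text used for keyword matching

@dataclass
class Relation:
    # A fact linking two entities, carrying an evidence link to its origin.
    source: str
    target: str
    rtype: str                   # e.g. "consumes", "treated_by", "interacts_with"
    evidence: str = ""           # e.g. URL of an FDA alert or a DrugBank entry

class SocialMedicalGraph:
    def __init__(self):
        self.entities = {}       # id -> Entity
        self.relations = []      # list of Relation facts

    def add(self, entity):
        self.entities[entity.id] = entity

    def relate(self, source, target, rtype, evidence=""):
        self.relations.append(Relation(source, target, rtype, evidence))

    def search(self, keyword, etype=None, min_age=None):
        # Keyword match plus optional structured predicates, roughly
        # mirroring a query such as Hemophilia AND Patient.age:[40 TO *).
        hits = []
        for e in self.entities.values():
            if keyword.lower() not in e.text.lower():
                continue
            if etype is not None and e.etype != etype:
                continue
            if min_age is not None and e.attrs.get("age", -1) < min_age:
                continue
            hits.append(e)
        return hits

    def facets(self, hits, attr):
        # Facet counts (e.g. gender distribution) over a result set.
        return Counter(e.attrs.get(attr) for e in hits)

    def neighbours(self, entity_id, rtype=None):
        # Relationship links used to expand an entity's sub-graph.
        return [self.entities[r.target] for r in self.relations
                if r.source == entity_id and (rtype is None or r.rtype == rtype)]

# Illustrative usage with made-up data:
g = SocialMedicalGraph()
g.add(Entity("p1", "Patient", {"age": 52, "gender": "F"}, "Hemophilia A, frequent bleeding episodes"))
g.add(Entity("m1", "Medication", {"substance": "factor VIII"}, "recombinant factor VIII concentrate"))
g.relate("p1", "m1", "consumes", evidence="http://example.org/phr/p1")

older_patients = g.search("hemophilia", etype="Patient", min_age=40)
print(g.facets(older_patients, "gender"))                 # Counter({'F': 1})
print([m.attrs for m in g.neighbours("p1", "consumes")])  # [{'substance': 'factor VIII'}]

In a full system the keyword match would be served by a text index and the facets by an OLAP-style faceted-search engine rather than by linear scans; the point of the sketch is only the shape of the unified data model and of a mixed keyword/structured query.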
3. Results
We have implemented the proposed social-medical discovery solution within the IBM Medics system. IBM Medics is a novel clinical decision support system (CDSS) developed in collaboration between three IBM labs and the GIL hospital in Korea. IBM Medics empowers patients and helps to increase patient safety by assisting patients and their medical providers with daily medical decision-making. One of the main services in IBM Medics is the social-medical discovery (SMD) service. SMD serves various queries, submitted by patients, physicians, and researchers, which explore its social-medical dataspace. The IBM Medics social-medical dataspace is formed by integrating social and medical data stored within IBM Medics sub-systems with data gathered from public social-medical sources.
Figure 1. IBM Medics social-medical discovery (SMD) user interface.
Figure 2. Example social-medical sub-graph for the query “Hemophilia".
Figure 1 depicts SMD's main user interface with an example discovery use case. In this example, two patients were returned in response to an initial query, "Hemophilia", submitted by the user, who then followed the "Related patients" link to discover relevant patients. We can also observe that SMD provides several facets related to those patients, and that for each patient the user may further follow several relationship links to explore that patient's social-medical sub-graph.
Finally, to illustrate additional discovery options, Figure 2 depicts a possible social-medical sub-graph that may be explored by a patient searching for Hemophilia-related information. By following the links to Hemophilia-related patients (e.g., based on information gathered from Patients-Like-Me or Google Health), a patient searcher can discover new possible treatments, e.g., other medications consumed by those patients, or physicians who treat those patients and whom the patient may contact for her own benefit. Furthermore, the searcher may discover Hemophilia-related symptoms (e.g., hematuria) gathered from WebMD.com, or related genetic variations gathered from PubMed. Using the searcher's own medical profile, she can also discover possible lineage paths between her genetic profile (e.g., from 23andMe) and Hemophilia. The patient can also detect whether a new medication, which she has just discovered through some related patient, has a potential interaction (e.g., based on knowledge gathered from DrugBank) with any of the medications she currently consumes.
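The multi-hop exploration sketched in this use case can be thought of as simple traversals over the same entity-relationship graph. The toy example below is again an illustrative assumption rather than the Medics implementation; node names such as patient_42 and drug_X, and the interaction shown, are entirely hypothetical. It follows a disorder to related patients and to the medications they consume, and flags candidates that interact with the searcher's own medications.

# Hypothetical toy graph: adjacency lists keyed by (entity, relation type).
edges = {
    ("Hemophilia", "has_patient"): ["patient_42", "patient_77"],
    ("patient_42", "consumes"): ["factor VIII"],
    ("patient_77", "consumes"): ["desmopressin"],
    ("desmopressin", "interacts_with"): ["drug_X"],
}

def follow(node, rtype):
    # One hop along a given relationship type; empty list if none recorded.
    return edges.get((node, rtype), [])

def candidate_treatments(disorder, my_medications):
    # Disorder -> related patients -> medications they consume, flagging
    # potential interactions with the searcher's own medications.
    findings = []
    for patient in follow(disorder, "has_patient"):
        for med in follow(patient, "consumes"):
            conflict = any(m in follow(med, "interacts_with") for m in my_medications)
            findings.append({"via_patient": patient,
                             "medication": med,
                             "possible_interaction": conflict})
    return findings

print(candidate_treatments("Hemophilia", my_medications=["drug_X"]))
# [{'via_patient': 'patient_42', 'medication': 'factor VIII', 'possible_interaction': False},
#  {'via_patient': 'patient_77', 'medication': 'desmopressin', 'possible_interaction': True}]

Each hop corresponds to one relationship type in the graph, so the same pattern extends to symptoms, genetic variations, or treating physicians simply by adding further relation types.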
4. Conclusions
Though many Health 2.0 services have already emerged, there is still a strong requirement for discovery solutions that are better tailored to this domain. In this paper, we suggested a novel unified social-medical discovery solution, implemented within the IBM Medics patient empowerment system, which serves as a major step towards this goal and brings new discovery opportunities to the Health 2.0 domain. As future work, we wish to leverage the new discovery model and develop new online patient-similarity search methods that fully utilize the power of social-medical discovery. In addition, through user studies, we plan to perform a usability analysis among IBM Medics users and examine other potentially useful discovery use cases. Finally, we are now working on a new privacy model that is better tailored to the social-medical discovery domain, allowing patients finer-grained (PHR-section-level) control over the social-medical data they share and that may be discovered by others.
References
[1] Eysenbach G. Medicine 2.0: social networking, collaboration, participation, apomediation, and openness. J Med Internet Res, 10(3):e22+, 2008.
[2] Hughes B, Joshi I, Wareham J. Health 2.0 and Medicine 2.0: tensions and controversies in the field. J Med Internet Res, 10(3):e23+, August 2008.
[3] Brubaker JR, Bren D, Lustig C, Hayes GR. PatientsLikeMe: empowerment and representation in a patient-centered social network. Workshop on Research in Healthcare, CSCW, 2010.
[4] Wicks P, Massagli M, Frost J, Brownstein C, Okun S, Vaughan T, Bradley R, Heywood J. Sharing health data for better outcomes on PatientsLikeMe. J Med Internet Res, 12(2), June 2010.
[5] Roitman H, Messika Y, Tsimerman Y, Maman Y. Increasing patient safety using explanation-driven personalized content recommendation. IHI, November 2010.
[6] Carrington PJ, Scott J, Wasserman S. Models and Methods in Social Network Analysis (Structural Analysis in the Social Sciences). Cambridge University Press, February 2005.
[7] Cline RJW. Consumer health information seeking on the Internet: the state of the art. Health Education Research, 16(6):671-692, 2001.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-8
Information Provision for Adolescents with Cancer
Anna SHILLABEER a,1
a Senior Consultant, IT Advisory, Ernst and Young, Adelaide, Australia
1 Corresponding author
Abstract. Recent research has provided a detailed insight into what information cancer patients as a generic group require, and we now understand that this requirement changes during the disease episode. This paper focuses on the information needs of adolescent cancer patients, because little research has been done in this area and, unlike every other group of cancer patients, very little improvement in information provision and clinical outcomes has occurred for this small but important group over the past 20 years. Adolescents have specialised needs and have for too long been grouped either with young children or with adults. This paper describes our current knowledge regarding their special needs and outlines future directions to facilitate equality in information provision for this group. Keywords. Information provision, adolescent, cancer.
1. Introduction
Researchers have made significant advances in our understanding of what information patients require when they are diagnosed with a life-changing illness such as cancer. We also understand that this requirement changes during the disease episode, and that while some patients require access to all possible sources of information, others require very little and are happy to allow their treating physician full control over the management of their condition and decisions regarding treatment options. Less knowledge has been published regarding the specific needs of young adults with a life-changing illness such as cancer. This represents a significant omission and an area deserving attention, due to the impact of such a diagnosis during the formative years and the deep psychosocial impact of cancer on a young adult's feeling of connection with their peers and the wider world in which they live. This paper focuses on the information needs of young cancer patients for a number of reasons: there is little work focussing on the specific needs of this group; young people are often afforded a lower status than adults and excluded from decision-making processes [1, 2, 16]; the cancer episode is prolonged, so the impact of various solutions can be monitored longitudinally; and teens are already concerned about changes to their bodies and thought processes, and a diagnosis such as cancer can exacerbate this and have a deep and long-lasting impact on their mental and physical wellbeing far beyond the effect of the illness [2]. Specifically targeted information provision is vital to ensuring they are able to take a mature role in their illness, can overcome feelings of isolation and loss of control, and can still
identify themselves as an individual not a ‘freak of nature’ [1, 4]. This paper will provide an overview of recent research in the area of information provision for adolescents and will outline future directions to facilitate the provision of a personalised information management focus for all healthcare providers.
2. Impact of Information on the Patient Experience
Work published around the world outlines a number of positive impacts on the adult patient experience that can be directly related to the quantity and quality of information presented. The following are commonly stated:
• Development of a closer and more trusting relationship with the primary physician and other healthcare workers [9].
• Improved participation in decision making [8, 9, 10].
• Reduced fear and higher compliance with treatments and investigations [10].
• Increased ability to cope and develop long-term strategies [1].
• Overall patient empowerment [13].
Work focusing on children and adolescents shows that these groups have similar outcomes resulting from the provision of high-quality, well-timed and appropriately targeted information [1]. The difference is highlighted when comparing the mode of delivery. While most adults are satisfied with hospital leaflets and talking to their physician [10], teens use information as a means of connecting with their physician and prefer a more interactive mode [9]. A significant number do not feel that hospital-produced information is appropriate for them, as it either uses complex medical terminology that they do not understand, and hence alienates them further from the discussion processes, or it is written for children and is too simplistic to answer their needs [9]. There are a number of reasons for the observed dichotomy:
• For both research and clinical purposes patients are most often divided into two groups: children under approximately 10-16 years, and adults, who encompass all others [for example 14, 3]. It is not unusual for adolescents to represent less than 5% of a research cohort and for the group to have an average age of 60 or above [11]. This leads to the needs of adolescents being included in the results for adults; as they are outliers, their outcomes may become obfuscated, unreported and not considered suitably significant for further research investment.
• Cancer treatment for adolescents almost always occurs in either a children's or an adult's hospital and is not specifically designed for the needs of young people. The information is therefore tailored for the primary patient body, and again teens do not form a large enough single group to warrant investment in new information provision.
• There are few who specialise in working with adolescents in the cancer field and hence the body of expert knowledge is limited. It can therefore be difficult to gather, understand and incorporate appropriate input.
Whilst there are some moves towards addressing these issues, including the provision of an expert in adolescent oncology in the U.K. [5] and a new initiative in Adelaide, Australia by the State Government to develop an adolescent treatment facility as part of the new Royal Adelaide Hospital development [7], these initiatives are often focussed on treatments and environments for adolescents, which are vitally
important considerations, but there is little suggestion that information provision will be a core factor in the plan. The issues documented here are therefore likely to persist. The continued inappropriateness of information provision in the adolescent cancer environment will contribute to the teen's feelings of isolation and will reduce their ability to relate to their healthcare professionals and participate in decision making. This has been stated earlier to have a measurable impact on the outcomes for adolescents with cancer and potentially contributes to the alarming statistic that there has been little improvement in cancer outcomes for adolescents in the past 20 years despite up to 50% improvements for all other groups [3, 6]. The primary message from this work is that adolescent cancer patients form a distinct group with unique characteristics and should not be absorbed into the requirements for information provision of other groups [15]. It is also evident that clinical outcomes and overall feelings of wellbeing can be directly affected by the information a patient receives [8, 9, 10], thus providing a significant motivation for researchers and for interest and investment by healthcare providers. It is suggested that not only will patients recover faster and be more compliant and active in their treatment, but this could create a flow-on effect in terms of financial savings to the organisation through shorter treatment plans, reduced side effects and earlier interventions.
3. Information Media
If we accept that the adolescent cancer patient group is worthy of separate consideration, we must then determine their specific needs and understand how we might best serve those needs. Whilst there is little evidence of any significant research in this niche area, there has been some foundational work done on the health information needs of adolescents in general. That research provides a clear indication of what is required in terms of the preferred sources of information and modes of delivery. Whilst generalised to the provision of healthcare information to all adolescents, there are significant parallels with information provision needs in adolescents with cancer that suggest this research is likely applicable to this context, albeit not conclusively tested.
3.1. Information Sources
• Health professionals are seen as the primary source of information, especially in a cancer diagnosis, but the quality of interactions, and hence information gathering from this source, is directly affected by the quality of the relationship and the terminology used. This critical relationship is, however, complicated by its association with the appropriateness of information received, thus potentially presenting a spiralling information gap if not managed well [1, 2, 8, 16]. Two of the most crucial factors in an adolescent's ability to form a relationship with their doctor are the need for privacy and confidentiality and the difficulty of developing trust at the same pace as the need for information; it is suggested that only 30% of doctors in general have taken the time to actively address these issues [16].
• On a psychosocial level, parents and friends are vital in maintaining consistency in a patient's life and reduce the potential to become isolated or consumed by the cancer diagnosis [6]. This group has been a traditional source of information for adolescents and many know of no other way of gaining new information. However, many adolescents express problems in confiding in friends and parents, especially regarding matters of sexual and mental health, both of which are significant areas of concern during a cancer episode [17].
• Fellow patients are almost universally stated to be a highly important source of information, although they must be matched by age, interests and diagnosis [1, 8]. It can be a great source of reassurance when a patient further through the treatment protocol is enjoying life and can talk about what to expect and how to manage new experiences. Unfortunately, doctors do not always support the patient receiving unqualified, anecdotal information; this presents a barrier to non-traditional forms of information provision and suggests a need for professional mediation for all sources [10].
• Printed materials are readily available but, as discussed earlier, are often generic and do not adequately meet the needs of adolescent patients, who prefer a more conversational information gathering process [1, 3].
• For many, an online medium is seen as less confronting and judgemental than a clinical space, confidential through anonymity, more freely available in terms of time and location, and not constrained by the formality of clinical language. These are seen as core requirements of a successful information medium for this group [16, 17, 18]. There are a small number of organisations that provide a digital presence, including Canteen in Australia [3], the Teenage Cancer Trust in the U.K. [6] and the kidshealth website in the U.S. [12], but these do not aim to provide qualified clinical information and have more of an emotional support and fundraising focus. One dedicated online health information source for adolescents, the Teenage Health Freak website, was reported to have had over 52,000 hits per day between 2000 and 2007 [17], but does not address cancer. Given that utilisation of some non-traditional information sources increased by almost 50% between 2008 and 2009 alone [18] and that 75% of adolescents have used the Internet to source health information [17], digital media should be considered core if information provision is to meet the needs of young cancer patients. Whilst not all adolescents have access to this medium, for most the need for auxiliary active, ongoing, incremental information gathering could be easily satisfied [8, 17].
The minimum requirements of any information provision medium for adolescents are privacy and confidentiality, access at any time and place, teen-specific language, anonymity, no requirement for parental consent, non-judgemental qualified advice and low cost [16, 17]. All of these criteria can be met through the utilisation of online technologies including email, blogs, websites and social networking. Whilst it is important for adolescents to communicate with people 'just like them', it is important to incorporate medical professionals to ensure high quality advice is given; hence mediated sites, which have been shown to attract high traffic, should be the primary focus [17].
4. Future Directions
The overview of research presented in this paper has demonstrated that even after many years of work in information provision for adolescents with cancer, few improvements
have been realised. Whilst clinical and environmental changes can take years, modifications in the presentation of information could be relatively fast and enable an immediate impact to be felt. Given the importance of information to the generalised experience of patients as described herein, it should be seen as a moral obligation to apply a similar research focus to adolescent cancer patients and allow young people access to at least the quality of health information we as adults receive. Future work should focus on confirming the specific information needs and modes of delivery for adolescents with cancer and on actively engaging them, through empowerment, in the development of a targeted information model. The result should be a flexible system using familiar digital and other technologies that will enable adolescents with cancer to actively participate in gathering information at a time and in a form that best suits their individual needs. The focus for this work shall therefore be on testing the suitability of a range of digital media options that better reflect the aesthetic and information provision preferences of young people without changing the consistency of qualified information content provided to all patients.
References
[1] Beresford B, Sloper P. The information needs of chronically ill or physically disabled children and adolescents. Social Policy Research Unit, University of York, York (1999).
[2] Rolinson JS. Health information for the teenage years. Information Research 3(3) (1998).
[3] CanTeen. http://www.canteen.org.au/ Dec 2010.
[4] James' Story. http://www.canteen.org.au/default.asp?articleid=2511&menuid=86 Jan 2011.
[5] Teenage Cancer Trust. http://www.teenagecancertrust.org/what-we-do/health-professionals/professor/
[6] Bleyer A. The adolescent and young adult gap in cancer care and outcomes. Current Problems in Pediatric and Adolescent Healthcare, Volume 35, Issue 5 (2003), 182-217.
[7] Adelaide Now. http://adelaidenow.com.au/news/in-depth/big-shoes-to-fill-as-lance-departs. Jan 2011.
[8] Ankem K. Types of Information Needs Among Cancer Patients. LIBRES 15.2 (2005).
[9] Better Together: Scotland's Patient Experience Programme: Building on Children and Young People's Experiences (2009). http://www.scotland.gov.uk/Publications/2009/06/12150703/2 Dec 2010.
[10] Adler J, Paelecke-Habermann Y, Jahn P, Landenberger M, Leplow B, Vordermark D. Patient information in radiation oncology: a cross-sectional pilot study using the EORTC QLQ-INFO26 module. Radiation Oncology 4:40 (2009).
[11] Isenring E, Cross G, Kellett E, Koczwara B, Daniels L. Nutritional status and information needs of medical oncology patients receiving treatment at an Australian public hospital. Nutrition and Cancer 62(2) (2009), 220-228.
[12] Kidshealth. http://kidshealth.org/teen/diseases_conditions/cancer/deal_with_cancer.html Jan 2011.
[13] Vordermark D. Patient Information and Decision Aids in Oncology: Need for Communication Between Patients and Physicians. Journal of Clinical Oncology 28(29) (2010).
[14] Oakley C, Powell S. Cancer Directorate Patient Involvement Work 2005/2006. NHS, England, 2006. http://www.stgeorges.nhs.uk/docs/about/EHR/CancerPPISummary300806.pdf Jan 2011.
[15] Albitron K, Bleyer WA. The Management of Cancer in the Older Adolescent. European Journal of Cancer, Volume 39, Issue 18 (2003), 2584-2599.
[16] McPherson A. Adolescents in primary care. BMJ, Volume 330, February (2005), 465-467.
[17] Harvey K, Churchill D, Crawford P, Brown B, Mullany L, Macfarlane A, McPherson A. Health communication and adolescents: what do their emails tell us? Family Practice, June (2008), 304-311.
[18] Lenhart A, Ling R, Campbell S, Purcell K. Teens and Mobile Phones (2010). http://pewinternet.org/reports/2010/Teens-and-Mobile-Phones.aspx. April 2011.
[19] Christie D. Adolescent development. BMJ, Volume 330, February (2005), 301-304.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-13
Electronic Symptom Reporting by Patients: a Literature Review Monika A. JOHANSENa,1, Eva HENRIKSENa, Gro BERNTSENa, Alexander HORSCHb,c a Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, Tromsø, Norway b Research group Telemedicine, Department of Clinical Medicine, University of Tromsø, Tromsø, Norway c Technische Universität München, München, Germany
Abstract. A literature review has been conducted to gain an overview of which technologies and patient groups have previously been employed in scientific studies of patients reporting symptoms electronically. This paper presents preliminary results from the review, based on the abstracts of relevant publications. The Medline database search identified 974 publications. Of these, 235 (24%) met the inclusion/exclusion criteria. The number of studies has increased markedly over the past two decades. Many of the studies are small with regard to sample size, but sample sizes have increased over time. Cancer and lung diseases are the largest diagnosis groups. Cancer symptom reporting seems to take place inside healthcare institutions, while lung disease and musculoskeletal disease reporting mainly takes place at home via the Internet. Keywords. electronic symptom reporting, physician-patient relations, consumer participation, data collection, review
1. Introduction
The traditional patient and provider roles are changing. A new approach is emerging, focusing on a patient-provider information technology partnership to promote more patient-centred healthcare [1] and personal health information management systems [2]. Consequently, there is a need to build new and better computerized tools to support the patient as an active partner in healthcare, while at the same time taking into consideration the challenges and constraints patients and providers have to deal with. Healthcare providers often find it demanding to determine the patient's main problem or concern [3]. The way patients present their problems, and the sequence, importance and severity of symptoms, influence their professional interpretation [4]. Likewise, studies of consultation interviews show that physicians elicit only around 50% of the medical information considered important [5]. The facts that patients have increasing difficulties with correctly remembering symptom levels beyond the past several days [6], and that older patients do not report most of their symptoms to health professionals [7], worsen this situation. In contrast, we find that people report a higher number of and/or more serious symptoms when using computer-mediated communication
Corresponding Author: Monika A. Johansen, E-mail:
[email protected]. Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, N-9038 Tromsø, Norway.
compared to face-to-face encounters or phone conversations [8], p. 28-29. This supports the viability of patient-centric symptom reporting tools for reporting and grading symptoms electronically in pre-clinical and clinical settings and, if possible, at the time when the symptoms are present. However, this is a new and unexplored area and there is a need to assemble the knowledge that already exists. The main purpose of this study is, based on a review of abstracts, to establish an overview of the clinical settings for which such tools might be useful and of the technologies that have previously been examined in scientific studies. This knowledge will be valuable for everybody planning to conduct research in the field and for the development of future symptom reporting tools.
2. Methods
2.1. Inclusion and Exclusion Criteria
The inclusion criteria were: I1) Original studies; I2) Patients or parents reporting symptoms or health information electronically, either to healthcare personnel, or to an organization/institution, or a public system that processes and/or interprets the data for healthcare purposes and provides feedback. The focus is on systems that can be established within the healthcare system, including e-diaries and personal health records accessible by health providers; I3) The information reported must represent patient symptoms at present or within the last few days. The exclusion criteria were: E1) Retrospective questionnaires, prevalence surveys, screening, and tests of medicines; E2) All electronic communication that requires patient and healthcare personnel to be present at the same time, as for instance video conferencing; E3) Automatic biometric measurements, since these are defined as reporting of signs, not symptoms.
2.2. Search and Assessment Strategy
The Medline database was searched, limited to publications from 1990 to 1st September 2010, human medicine, in the English language. The search was built up around four search files (What – Who – Why – How), with the logical function OR within the files, and AND between the files. Already known eligible publications were reviewed to identify possible MeSH terms and relevant search words. The What-file included 22 search terms for symptoms and synonyms. The Who-file searched for "patient*" and "parent*" plus 18 relevant MeSH terms. The Why-file included 35 search terms for "self-report*", "pre-report*", and synonyms. Finally, the How-file included 38 search terms for the possible technology involved. The search strategies were pilot tested and modified several times to ensure that they identified eligible publications. The first and second authors independently reviewed and rated all abstracts as "potentially relevant" or "not relevant", and subsequently merged their results. In all cases where the reviewers had disagreed on the perceived eligibility of a publication, the two reviewers discussed the abstract to reach consensus. Finally, the second author reviewed the included abstracts a second time to extract more specific information characterizing these publications. We used the diagnosis categories of the International Classification of Primary Care (ICPC) as a basis for classification of clinical conditions. Where we found more than five papers within a category, main and subgroup figures are presented explicitly.
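As an illustration of the four-file search structure described above (OR within each file, AND between files), the snippet below assembles such a Boolean query string; the term lists are shortened, hypothetical examples and not the actual search strategy.

```python
# Shortened, illustrative term lists for the What / Who / Why / How search files.
what = ["symptom*", "complaint*"]
who = ["patient*", "parent*"]
why = ["self-report*", "pre-report*"]
how = ["internet", "e-diary", "mobile phone"]

def or_block(terms):
    """Join the terms of one search file with OR."""
    return "(" + " OR ".join(terms) + ")"

# The four files are combined with AND, as in the described search strategy.
query = " AND ".join(or_block(block) for block in (what, who, why, how))
print(query)
# (symptom* OR complaint*) AND (patient* OR parent*) AND (self-report* OR pre-report*) AND (internet OR e-diary OR mobile phone)
```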
Figure 1. Number of included publications over the years
3. Results
3.1. Literature Search Results
The search in Medline identified 1006 references, including 32 duplicates. Of the remaining 974 articles, 235 met all inclusion and exclusion criteria. Initially we had agreed on inclusion for 190 and exclusion for 628 (total 818) papers, while for 156 papers agreement was reached only after a consensus discussion (45 inclusions, 111 exclusions). Considering the publication year of these 235 articles, the number of papers has increased markedly over the past two decades (Figure 1). Authors from the United States published 151 papers, the UK 24, Norway 9, Australia 8, Germany 7 and The Netherlands 6. The remaining 30 papers were spread across 16 countries, with four or fewer from each. Fifty-six of the abstracts did not report how many patients were involved. The other 179 ranged from five to 10999 recruited patients, on average 235 (median 77), in total 42038 patients. Thirty of the studies involved 20 or fewer patients, 109 involved 100 or fewer, and only six involved more than 1000. The average number of patients involved increased from 72.5 for the studies conducted during 1990-1999, to 93 for those in 2000-2004, and further to 281 for 2005 to Sept. 2010. The exact number of healthcare providers involved is in general not reported in the abstracts.
3.2. Technologies and Clinical Settings
The systems employed in the studies have been categorized within three different scenarios: 1) An inside scenario, where the patient is present inside the healthcare institution, using a local computer, stand-alone or connected to the network, to input symptoms prior to the encounter. The most typical technology in use is the tablet-PC with touch-screen. 2) An outside stationary scenario, using a computer and the Internet at a location distant from the healthcare provider, usually at home, to report symptoms. Applications used are mainly e-diaries or more general web applications, and in some cases e-mail. 3) An outside mobile scenario, using a handheld device with mobile communication technologies to report symptoms. This includes "ordinary" mobile phones, smartphones and PDAs, where the use ranges from simple text messages (SMS) to advanced applications.
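A minimal sketch of how a study's reporting setup could be mapped onto the three scenarios described above; the field names and technology labels are assumptions made for illustration only, not categories taken from the reviewed abstracts.

```python
# Hypothetical mapping of where and with which technology symptoms are reported
# onto the three scenarios described in the text.
def classify_scenario(location, technology):
    """Map a (location, technology) pair onto one of the three reporting scenarios."""
    if location == "healthcare_institution":
        return "inside"                      # e.g. tablet-PC with touch-screen before the encounter
    if technology in {"mobile_phone", "smartphone", "PDA", "SMS"}:
        return "outside_mobile"              # handheld device with mobile communication
    return "outside_stationary"              # e.g. web application or e-diary used at home

print(classify_scenario("home", "web_application"))               # outside_stationary
print(classify_scenario("home", "smartphone"))                    # outside_mobile
print(classify_scenario("healthcare_institution", "tablet_pc"))   # inside
```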
Table 1. Number of articles related to technology and conditions, categorized by use of ICPC (subgroup counts are included in the main group). Patients may be inside the healthcare institution, or outside using a stationary PC or a mobile solution; technologies or conditions not evident in the abstract are counted as unspecified. Article totals per condition: Cancer 50; Lung diseases 48 (Asthma 36, COPD 7); Cardiovascular 17; Psychiatry 18; Diabetes 12; Musculoskeletal 15 (Rheumatologic 9); Gastrointestinal 8; Neurological 8; HIV/AIDS 6; Unspecified/others 53; Total 235.
While the scenarios overlap only when observing one patient over time, the technologies in use overlap to a large extent. Smartphones can, for instance, be used for more or less the same applications as stationary computers. Table 1 presents the scenarios and the most common technologies involved, related to the diseases or conditions for which symptoms are reported electronically. Cancer is not a separate category in ICPC, but did represent a large and distinct body of literature and is therefore presented separately. In addition to including abstracts where it was unclear to which medical condition they belong, the "Unspec./others" row includes the A-group of ICPC (common/unspecified), four papers dealing with lifestyle services, and four dealing with follow-up procedures after surgical interventions. For the specified scenarios, cancer and lung diseases are clearly the largest groups. The largest part of the cancer symptom reporting seems to take place inside healthcare institutions. Lung disease and musculoskeletal disease reporting mainly takes place at home via the Internet. For the other disease categories the unspecified group is too large to indicate any trends. Web, e-diary and more advanced mobile applications are used more than e-mail and SMS, technologies where the user interface has limited functionality.
4. Discussion and Conclusion
Over the last two decades, the number of studies on electronic symptom reporting has increased markedly. This may indicate the start of a new paradigm. However, most studies are small in terms of sample size (median 77), and many are best characterized as feasibility studies. Sample sizes have increased over time, though, which may reflect a general improvement in study quality. Cancer and lung diseases are the largest diagnosis groups. Cancer symptom reporting seems to take place inside the healthcare institutions, while lung disease and musculoskeletal disease reporting mainly takes place at home via the Internet.
Tablet-PCs with touch-screen applications, advanced smartphone and PDA applications, web and e-diary applications represent the main technologies examined in these studies. Electronic symptom reporting appears to be used in situations where it is challenging to identify all aspects of a situation in a short clinical encounter, in other words in diseases that are complex to diagnose, in long-term diseases, and in taboo and sensitive situations. We also find examples of electronic symptom reporting for the opposite, the easier cases, where electronic symptom reporting might substitute a face-to-face consultation. Examples here are follow-ups after surgery. It seems to be easier to create a meaningful electronic dialogue when the system is focused on a specified diagnosis or clinical problem, as opposed to an open approach where the patient's health problem is unspecified or unknown when the symptom reporting starts. Many of the studies focus more on technologies than on health effects, and most of the studies seem to be underpowered to document clinical outcomes or specific benefits for patients or healthcare personnel, as revealed by an earlier review of new technologies [9]. On the other hand, electronic symptom reporting empowers the patient as an active partner in healthcare. Patients support the electronic reporting of symptoms to their doctor before each encounter [10] and believe it will improve the level of care and effectiveness during the encounter [11]. Further, electronic symptom reporting has been demonstrated to reveal patients' preferences and information needs prior to the consultation [12]. This is an early report of a larger and more thorough review. The next step is the extension of the search to more databases and the full-text review of all relevant papers, in order to reveal more information regarding usefulness and to classify types of health outcomes. The preliminary results presented in this paper are considered highly encouraging for these ongoing efforts.
References
[1] Kaplan B, Brennan PF. Consumer informatics supporting patients as co-producers of quality. J Am Med Inform Assoc, (2001);8(4): 309-316.
[2] Randeree E, Whetstone M. Personal health records: patients in control. In Wilson EV, Editor, Patient-Centered E-Health. IGI Global: Hershey, PA. (2009); p. 47-59.
[3] Haugli L, Finset A. Lege-pasient-forholdet ved funksjonelle lidelser [The physician-patient relationship in functional disorders]. Article only in Norwegian, Tidsskrift for Den norske legeforening, (2002);122(11): 1123-1125.
[4] Johansen MA. Data from focus-group interview with 5 GPs, Norwegian Centre for Integrated Care and Telemedicine, University Hospital North Norway HF: Tromsø. (2009).
[5] Roter DL, Hall JA. Physicians' interviewing styles and medical information obtained from patients. J Gen Intern Med, (1987);2(5): 325-329.
[6] Broderick JE, et al. The accuracy of pain and fatigue items across different reporting periods. Pain, (2008);139(1): 146-157.
[7] Brody EM, Kleban MH. Physical and mental health symptoms of older people: who do they tell? J Am Geriatr Soc, (1981);29(10): 442-449.
[8] Johnsen JAK, Gammon D. Connecting with Ourselves and Others Online: Psychological Aspects of Online Health Communication. In Wilson EV, Editor, Patient-Centered E-Health. IGI Global: Hershey, PA. (2009).
[9] Roine R, Ohinmaa A, Hailey D. Assessing telemedicine: a systematic review of the literature. CMAJ, (2001);165(6): 765-771.
[10] Sciamanna CN, Diaz J, Myne P. Patient attitudes toward using computers to improve health services delivery. BMC Health Serv Res, (2002);2(1): 19.
[11] Benoit A, et al. Using electronic questionnaires to collect patient reported history. AMIA Annu Symp Proc, (2007); p. 871.
[12] Buzaglo JS, et al. An Internet method to assess cancer patient information needs and enhance doctor-patient communication: a pilot study. J Cancer Educ, (2007);22(4): 233-240.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-18
Increasing Physical Activity through Health-Enabling Technologies: the Project “Being Strong Without Violence” Corinna SCHARNWEBER1,a, Wolfram LUDWIGa, Michael MARSCHOLLEKa, Wolfgang PEINb, Peter SCHACKc, Reiner SCHUBERTd, Reinhold HAUXa a Peter L. Reichertz Institute for Medical Informatics, University of Braunschweig Institute of Technology and Hannover Medical School, Germany b General Education Secondary School Sophienstreet, and c Elementary and General Education Secondary School Pestalozzistreet; Braunschweig, Germany d Braunschweig Health Planning Department, Germany
Abstract. Due to the increasing prevalence of adiposity in children, numerous schools are introducing prevention programmes. Among these is "Gewaltlos Starksein" ("Being strong without violence"), a project of Hauptschule Sophienstraße Braunschweig, Germany (a general education secondary school for grades 5-10). This study aims to discover possible increases in activity through "Gewaltlos Starksein", in which health-enabling technologies play a major role. A prospective intervention study with a span of 1.5 years was designed to measure this increase in activity. Partners in this study were Hauptschule Sophienstraße as the intervention group and Grund- und Hauptschule Pestalozzistraße as the control group. Data collection was performed using a multi-sensor device and questionnaires. Confirmatory data analysis of average metabolic equivalents (METs) yielded no significant results. Exploratory analysis showed interesting results, especially concerning the number of steps during leisure time. Descriptive analysis of the questionnaires showed that all children enjoy physical activity. There were differences in sports team participation, open-air games and club affiliation. The study could not prove that the intervention "Gewaltlos Starksein" improves physical activity in children. However, the increased leisure activity step count indicates that "Gewaltlos Starksein" has positive effects on children's behaviour. This should be investigated in a further study in cooperation with psychologists. Keywords. Health-Enabling Technologies, Evaluation Study, Activity, Children
1. Introduction
The prevalence of overweight and adiposity in children and adolescents has increased steadily over the last 25 years [1, 2]. Overweight and adiposity during infancy can lead to the manifestation of numerous diseases that can reach into adulthood [3, 4, 5]. Studies have shown a disparity between energy intake and energy expenditure in overweight children and adolescents [6]. Balancing this disparity requires measures that motivate children and adolescents to increase physical activities and to change
Corresponding Author: Peter L. Reichertz Institute for Medical Informatics, University of BraunschweigInstitute of Technology, Germany, Muehlenpfordtstr. 23, 38106 Braunschweig, Germany; E-mail:
[email protected].
their eating habits. In order to support and increase physical activity in children and adolescents in the "westliches Ringgebiet", a quarter of Braunschweig, the action alliance "Das Westliche Ringgebiet - ein Stadtteil in Bewegung Steh auf...Mach mit...Lauf los!!!" was initiated on March 1, 2009 [7]. The objective of this alliance is to convey knowledge and competence regarding health and activity issues to families and children. In order to realize this objective, a cooperation was initiated that includes 22 sub-projects for a sustainable development of physical activities [7]. One of these sub-projects is the intervention "Gewaltlos Starksein" ("Being strong without violence") at Hauptschule Sophienstraße (a general education school for grades 5-10). The introduction of additional mandatory athletic and charitable ("We help others" shopping for seniors) working groups is intended to positively influence pupils' activity patterns, their confidence and their self-assurance [8]. Accompanying the intervention "Gewaltlos Starksein", the study at hand presents an evaluation with an emphasis on changes in the intensity and forms of everyday activities in children aged 10 to 14 years at Hauptschule Sophienstraße. Within the prospective intervention study, health-enabling technologies [9, 10] play a major role. Consequently, this evaluation study investigates the efficiency and effectiveness of the intervention "Gewaltlos Starksein" in relation to children's activity patterns.
2. Method
For the evaluation of the intervention "Gewaltlos Starksein", the study was designed as a prospective non-randomized comparative intervention study. The evaluation investigates and compares the activity patterns of children in Hauptschule (HS) Sophienstraße (intervention group) and Grund- und Hauptschule (GHS) Pestalozzistraße (control group). All study subjects were enrolled in their respective school on August 6, 2009. The study has a time-span of 18 months (from August 18, 2009 until January 31, 2011). Within this timeframe, five measurement campaigns with a duration of two weeks each are performed (see Table 1). The campaigns are performed with 32 children aged between 10 and 14 years. In the run-up to the study, the children's parents were informed about the course and content of the study through an information letter and an informative meeting. Since participation in the study was not compulsory for the children, the informative meeting was followed by a registration phase until August 14, 2009. At this date, a total of 50 children had submitted letters of agreement for participation in the study. The final participants were then drawn by lot, with 16 children from each school being selected. The group size of 16 was chosen because of the minimum class size of 14 children, plus 2 more to cope with participant drop-outs. The 16 children from HS Sophienstraße participated in the intervention in addition to their normal classes. The 16 children from GHS Pestalozzistraße form the control group. The evaluation is conceived as a sensor-based study and is performed in parallel in both schools. Thus, each of the 32 children participated in all five measurement campaigns. Each measurement phase starts with a self-assessment of activity levels by the participants. The standardized questionnaire "MoMo Activity questionnaire for adolescents from 6 to 17 years" [11] is used for this assessment. Following the assessment, parameters for the configuration of the SenseWear Pro 2 CE sensor used in the study are collected. This sensor wristband is then to be worn for one week. The wristband may only be removed for activities such as bathing, showering or swimming
in order to avoid contact with water. On the seventh day of the measurement campaign the sensors are collected, read out, and the data storage is cleared. On day eight, the process is repeated for another week of measurements. Afterwards, the participants receive a report of the collected data. This includes a one-on-one interview in which the children are informed about the relevance and impact of the data collected by their sensor. At the end of the interview each child receives an easily understandable print-out for their personal records.
Table 1. Time-span and measurement campaigns for the study
3. Results
An analysis of the two sensor parameters, metabolic equivalent (METs) and step count, shows whether the intervention changes the children's activity patterns. Awareness of and commitment to physical activities are assessed through analysis of the "MoMo Activity questionnaire" (cf. [11]).
4. MET – Metabolic Equivalent
A confirmatory data analysis is used to determine changes in METs over the course of the study. From the beginning of the study, the null hypothesis H0 is assumed: there is no change in METs over the duration of the study between the intervention group and the control group. Over the course of the study, the null hypothesis is intended to be replaced by the alternative hypothesis H1, which states the existence of a change in METs between the intervention group and the control group over the duration of the study. To be able to discard hypothesis H0, a Mann-Whitney U test was performed. For the study, the following values were derived: n1=9, n2=9, U=30, α=0.05, Uα=18. Using these values in formula 1, the rejection condition of the Mann-Whitney U test, U ≤ Uα, we arrive at U = 30 > Uα = 18. The condition in formula 1 is therefore not fulfilled, and hypothesis H0 cannot be discarded. Within this study there is no significant change in METs between the intervention and control group.
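The decision rule above can be reproduced with a few lines of code; the two samples below are made-up placeholders (the study's MET values are not reported in the paper), and the critical value Uα = 18 is the one stated by the authors.

```python
import numpy as np
from scipy.stats import rankdata

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: the smaller of U1 and U2, computed from rank sums."""
    ranks = rankdata(np.concatenate([a, b]))   # average ranks are used for ties
    r1 = ranks[: len(a)].sum()
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# Placeholder MET samples for 9 intervention and 9 control children (illustrative only).
intervention = np.array([1.6, 1.8, 1.5, 1.7, 1.9, 1.4, 1.6, 1.8, 1.7])
control      = np.array([1.5, 1.7, 1.6, 1.8, 1.5, 1.9, 1.4, 1.6, 1.7])

U_CRIT = 18   # critical value for n1 = n2 = 9 reported in the paper
u = mann_whitney_u(intervention, control)
print("U =", u, "-> reject H0" if u <= U_CRIT else "-> H0 cannot be discarded")
```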
5. Leisure Time Step Count
The exploratory data analysis of the collected sensor data yielded interesting results, especially regarding the step count during leisure time. This analysis assesses possible positive effects of the intervention on the children's behaviour during leisure time. "Leisure time steps" are steps that the children made outside of school phases, i.e. in the afternoon or on weekends. This analysis uses the number of steps counted by the
sensors within the measurement phases. Increases or decreases in the number of steps between two measurement campaigns are determined by computing the delta (Δ) of the step count. Sorted by the magnitude of the changes, it can be determined whether the step count of the children within the intervention group increased. The analysis of changes in the leisure time step count showed that out of 16 children in the intervention group, 31% had a positive and 25% had a negative change in the number of leisure time steps. Within the control group, out of 14 children, 14% had a positive and 50% had a negative change in the number of leisure time steps. Over the course of the study, a number of participants left the study and in some cases medical conditions resulted in unusable datasets. Within the intervention group, these unusable datasets accounted for 44% of all sets, compared to 36% within the control group. A mean value analysis of leisure time steps for measurement campaigns M1 to M5 produced the mean values shown in Figure 1. Within the intervention group the step count increased slightly, by 1733, and within the control group it decreased by 15267.
Figure 1. Mean values of "Leisure Time Steps" for measurement campaigns M1 to M5
This allows us to form the hypothesis that “Gewaltlos Starksein” has a positive effect on the children’s leisure time activities.
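The delta computation described above amounts to a per-child difference in leisure time steps between two measurement campaigns, followed by a classification of the change; the sketch below illustrates it with invented numbers, since the individual step counts are not given in the paper.

```python
# Invented leisure-time step counts per child for two measurement campaigns (M1, M5).
steps_m1 = {"child_01": 42000, "child_02": 51000, "child_03": 38000}
steps_m5 = {"child_01": 45000, "child_02": 49000, "child_03": 38000}

# Delta of the step count between campaigns, and classification of the change.
deltas = {c: steps_m5[c] - steps_m1[c] for c in steps_m1}
positive = sum(1 for d in deltas.values() if d > 0)
negative = sum(1 for d in deltas.values() if d < 0)

print(deltas)   # {'child_01': 3000, 'child_02': -2000, 'child_03': 0}
print(f"positive: {positive/len(deltas):.0%}, negative: {negative/len(deltas):.0%}")
```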
6. MoMo - Activity Questionnaire
The MoMo Activity questionnaire for adolescents from 6 to 17 years is a self-assessment of children's activities and their attitude towards physical activity and sports. In order to assess the changes in the children's self-assessment and their attitude towards activity and sports, the questionnaire is used before each measurement campaign. The questionnaire encompasses 51 questions and statements that have to be answered and assessed by the children. A descriptive analysis of the questionnaire has shown that all children show great interest in sports and physical activity. Differences became visible for questions about participation in team sports, open-air games and club affiliations.
7. Discussion
The concept of this evaluation study is based on the measurement of activity using sensors. Thus, the evaluation of the intervention is focused solely on changes in children's physical activity. Psychological changes through "Gewaltlos Starksein" could not be proven. For this reason, a separate evaluation by experts is advised. The design of this study is aligned with other national and international studies on adiposity prevention for children and adolescents. In particular, the studies CyberMarathon [12], CHILT - Children's Health Interventional Trial [13], IDEFICS - the European project on diet- and lifestyle-related disorders in children [14] and Planet
Health - An Interdisciplinary Curriculum for Teaching Middle School Nutrition and Physical Activity [15] were used as input for the study design. The selection of the intervention and control groups was not randomized. The intervention group was determined through the project specification "Gewaltlos Starksein" [8], and the control group was selected according to pre-determined parameters such as type of school, geographic area, class size, and school organisation. With the exception of the postponement of measurement campaign M5 until November 2010, the study could be performed as planned. The study has shown that the intervention "Gewaltlos Starksein" has no significant influence on changes in children's activity patterns. In the exploratory analysis of the sensor parameters, the number of leisure time steps in particular proved to be of interest. Although no significant increase could be demonstrated, the step count within the intervention group remained essentially constant while the step count within the control group decreased. This allows the conclusion that the intervention has a positive effect on the participating children. However, proving this conclusion will require a separate study in cooperation with psychologists.
References
[1] Kurth BM, Schaffrath Rosario A. Prevalence of overweight and adiposity in children and adolescents - results of the children's and adolescents' health survey (KiGGS) [in German]. Bundesgesundheitsblatt - Gesundheitsforschung - Gesundheitsschutz 2007;50:736-743.
[2] Kronmeyer-Hausschild K, Wabitsch M. Current views on prevalence and epidemiology of overweight and adiposity in children and adolescents in Germany [in German]. [Internet] Working Group on Adiposity in Children and Adolescents of the German Society of Pediatrics and Adolescent Medicine. [cited 2011 Jan 30]. Available from: www.a-g-a.de
[3] Ebbeling C, Pawlaw D, Ludwig D. Childhood obesity: public-health crisis, common sense cure. Lancet. 2002;360:473-482.
[4] Reinehr T. Complications of adiposity in infancy and adolescence [in German]. [Internet]. 2005 [cited 2011 Jan 30]. Available from: www.a-g-a.de
[5] Burke V. Obesity in childhood and cardiovascular risk. Clin Exp Pharmacol Physiol 2006 Sep;33(9):831-7.
[6] Warschburger P, Petermann F, Fromme C. Adiposity: Training with children and adolescents [in German]. 2nd ed. Beltz Psychology Publisher Union; 2005. ISBN-10: 3621274898.
[7] Rake H, Schubert R. Das Westliche Ringgebiet - ein Stadtteil in Bewegung Steh auf...Mach mit...Lauf los!!! Application for implementation of the action alliance Healthy Lifestyles and Living Environments to the Federal Ministry of Health [in German]; 2009.
[8] Pein W. Being Strong without Violence - Health and Social Competence through Confidence and Self-assertion [in German]. [Internet]. 2009 [cited 2011 Jan 30]. Available from: http://www.bs4u.net/wegweiser/index.cfm?fuseaction=portal.page&page=15799
[9] Bardram JE. Pervasive Healthcare as a Scientific Discipline. Methods Inf Med. 2008;47(3):178-185.
[10] Koch S, Marschollek M, Wolf KH, Plischke M, Haux R. On Health-enabling and Ambient-assistive Technologies. Methods Inf Med. 2009;48(1):29-37.
[11] Bös K, Worth A, Heel J, et al. Test manual of the motor activity module for the infancy and adolescence health survey of the Robert Koch Institute [in German]. Haltung und Bewegung. 2004;24.
[12] Plischke M, Marschollek M, Wolf KH, Haux R, Tegtbur U. CyberMarathon - increasing physical activity using health-enabling technologies. Stud Health Technol Inform. 2008;136:449-54.
[13] Graf C. CHILT - Children's Health InterventionaL Trial. [Internet]. 2008 [cited 2011 Jan 30]. Available from: http://www.chilt.de
[14] Ahrens W. IDEFICS - Identification and prevention of Dietary- and lifestyle-induced health EFfects In Children and infantS. [Internet]. 2006 [cited 2011 Jan 30]. Available from: http://www.ideficsstudy.eu/Idefics/index
[15] Carter J, Wiecha J, Peterson K, Gortmaker SL. Planet Health - An Interdisciplinary Curriculum for Teaching Middle School Nutrition and Physical Activity. Human Kinetics; 2001.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-23
Review of Mobile Terminal-Based Tools for Diabetes Diet Management Eunji LEEa,1, Naoe TATARAa,b, Eirik ÅRSAND b,a, Gunnar HARTVIGSENa,b a Department of Computer Science, University of Tromsø, Tromsø, Norway b Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, Tromsø, Norway
Abstract. Changing dietary habits is one of the most challenging tasks of diabetes self-management. Mobile terminals are increasingly used as platforms for tools to support diet management and health promotion. We present literature describing mobile terminal-based support tools for management of diabetes focused on diet. We also propose a summary of key success factors for designing such tools and discuss recommendations for future research. Keywords. Diabetes, Nutrition, Diet, Self-help, Self-management, Mobile phone
1. Introduction
Medical recommendations in both Type 1 and Type 2 diabetes management involve nutrition, physical activity, and medications if necessary. Of these three elements, patients regard following nutrition recommendations as especially challenging, partly due to their lack of knowledge, understanding or skills concerning diet management. Mobile terminals are considered to have high potential as a platform for supporting tools for people with diabetes, due to their portability and the emerging technologies embedded in them [1]. For such tools to be useful for diet management, they should be designed so that users can easily and quickly find necessary information and eventually achieve healthy dietary habits [2]. In this paper, we present findings from reviewing academic literature that describes mobile terminal-based tools supporting diet management in diabetes. The aim is to improve knowledge about how a tool for diabetes diet management should be designed to promote health.
2. Methods
PubMed, the ACM digital library and IEEE Xplore were searched for relevant literature using the following combination of keywords: {(food OR nutrition OR diet) AND (cell phone OR mobile phone OR personal digital assistant (PDA) OR handheld)}. After removal of duplicates, only the papers including the keyword 'Diabetes' were selected. The search was conducted in September 2010.
Corresponding author: Eunji Lee, Department of Computer Science, University of Tromsø, 9037 Tromsø, Norway; E-mail:
[email protected]
The following exclusion criteria were applied: (i) papers not written in English; (ii) papers for which the full text was not available; and (iii) review articles. Finally, the relevance of each publication was examined by reading the abstract and the whole text if needed. The following data were extracted from the final selected papers: study design, type of mobile terminal used, targeted population, main purpose of the tool used or developed, significant features of the tool regarding diet management, and the findings of each study.
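A small sketch of the screening pipeline described in this section (duplicate removal, keyword filtering and exclusion criteria); the record structure and example entries are hypothetical and only illustrate the kind of filtering performed, not the actual search results.

```python
# Hypothetical search results, each record with an identifier, keywords and metadata.
records = [
    {"id": "pm-1", "keywords": {"Diabetes", "Diet"}, "language": "English", "full_text": True},
    {"id": "pm-1", "keywords": {"Diabetes", "Diet"}, "language": "English", "full_text": True},  # duplicate
    {"id": "acm-2", "keywords": {"Nutrition"}, "language": "English", "full_text": True},
    {"id": "ieee-3", "keywords": {"Diabetes"}, "language": "German", "full_text": True},
]

# 1) remove duplicates by identifier, 2) keep records mentioning 'Diabetes',
# 3) apply exclusion criteria (non-English papers, missing full text).
unique = {r["id"]: r for r in records}.values()
selected = [r for r in unique
            if "Diabetes" in r["keywords"]
            and r["language"] == "English"
            and r["full_text"]]

print([r["id"] for r in selected])   # ['pm-1']
```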
3. Results
After removal of duplicates, 27 papers were found, of which five met the exclusion criteria. Based on the abstracts, 16 papers were selected as relevant to diet/nutrition. One of these focused on insulin therapy and another was found to be irrelevant to diabetes, leaving 14 papers for inclusion in this review.
3.1. Study Design, Terminal Type, and Target Population
Ten papers [3-12] describe the design and development of management tools for people with diabetes. Of these, seven [3,5-10] describe results from evaluation of tools by potential users regarding usability, feasibility and general acceptance; two [11,12] report results from technical evaluation of tools; the last paper [4] describes the design and development of a tool from a technical perspective. Three of the papers [7,9,11] state that the design requirements were obtained by involving people with diabetes as potential users. Evaluations by potential users were conducted through field testing, namely evaluation by use of a tool in the users' real-life setting for a certain period [3,5,7-9], and through laboratory testing [6,10]. Clinical outcomes such as HbA1c were also examined in four studies [3,5,13,14]. In three studies described by the four other papers [13-16], the effectiveness, acceptance and feasibility of commercially available tools based on mobile terminals were investigated in the context of clinical interventions. Six studies [3,6,7,9,11,12] involved mobile phones as the terminal; the others involved PDAs. Windows Mobile-based phones with a touch-sensitive screen were mostly used [6,7,9,11]. The commercially available applications were all PDA-based. The year of publication and of each study indicates a clear shift from PDAs to smartphone-type mobile phones. Six studies described in seven papers [7,8,10,13-16] target people with Type 2 diabetes, and two studies [3,11] target young people with Type 1 diabetes. The others do not specify the target population, but one study [9] limited participation to people aged over 18.
3.2. The Purpose of the Tool and Special Features
In six studies [3,4,7,10,11,13], a tool was used or developed for overall diabetes management, with recording of blood glucose values, physical activities and other relevant data in addition to food intake. In the seven studies described in the eight other papers, a tool dedicated to dietary management was used or developed. Several tools are designed for use as part of a telemedicine intervention, where healthcare professionals support patients remotely by viewing and analyzing the stored data [3,6,10,11]. The tools described in four studies [4,5,13-16] give patients nutrition
information for a selected food item and/or results of automatic analysis of recorded foods in terms of nutrient and calorie intake; some provide feedback based on the patient's personal information, such as calorie balance or nutrition balance over meals [4,15,16]. One tool focuses on the glycaemic index (GI) of food items, showing a GI value with an indicator, low, medium or high, for assisting in food choices [14].
Recording of food or drink items uses various methods. The most common is to identify items from a database [4,5,8,10,13-16]. Not all the papers specify the number of items in the database, but one includes more than 4300 items [15,16] whereas another includes 423 items [8]. Portion size can be adjusted in some of these tools [8,10,15,16], and two tools present photographs of food or drink items that can be used as a reference [8,10]. Other methods of recording include free-text input [11] and photographing using a camera on a mobile phone [6,12]. The tool described in paper [12] is designed to recognize a food item by semi-automatic analysis of the photo together with contextual information. Meal types, such as breakfast, lunch, or dinner, are also used as data for recording [4,8,10,14-16], and the time of meal intake can also be recorded in two tools [8,11]. The tool used in two of the studies [6,7] has only six buttons for the user to select a meal, snack, or drink with high or low carbohydrate content, enabling simple and quick recording in only a few operations. After data entry, this tool shows cumulative totals of foods or drinks recorded by category, together with feedback according to personal goals, and smileys when goals are achieved [7]. One study [9] involves tools designed and developed purely for educational purposes, utilizing three types of games incorporating several education theories and customizable functions so that patients can play and learn about diet management.
3.3. Summary of Findings
In four of the studies [3,5,13,14] where clinical outcomes were evaluated, it was observed that HbA1c decreased among the participants in the intervention group who completed the study. However, in the study described in [5], a decrease in HbA1c was only observed among the group of participants whose history of diabetes was shorter than that of the other group. In the study described in [7], the participants improved their nutrition habits, especially their intake of vegetables and fruits. In most of the identified studies, the tools used were generally well accepted by participants in terms of ease of use [5,6,10,13-15], usefulness, problem-solving capabilities, learning and motivational effects in dietary management [5-7,9,13-15], and feasibility for patient interventions due to the high accuracy and reliability of recorded data [8,15]. It is noteworthy that no drop-outs from the studies due to difficulties in using the tools are reported in the selected papers. However, in the studies described in [9,15], considerable time was devoted to instructions for use, and the 12 elderly participants without experience in using PDAs or with problems in motor skills remained in the study but gave up on using the PDA [15].
In some studies, consequences such as drop-outs from the study, decreased use, low use, or negative opinions of the tools were observed. These were attributed partly to burdensome or tiresome daily registration [7,13-15], to apparent improvement in glycaemic control [13], to saturation of effects on diet management [7], or to misunderstanding, underestimation of the importance of self-management or treatment regimens, or limited understanding [15]. Despite the generally positive opinions of the tools, some difficulties in behaviour change are reported in terms of nutrition habits [5] and adherence in self-monitoring of diet [16]. Sevick et al. found that adherence to diet self-monitoring is not associated
with sociodemographic characteristics, but rather with the level of adherence in the early phase of the intervention [16]. Concerning tool features, customization or modification based on personal data or users' skills is considered important and beneficial [6,9,14]. Timely, automatic and personalized feedback should be incorporated in a motivating and easily interpretable manner [6,13-15]. A database showing nutrient and calorie content is considered powerful if it contains a sufficient variety and number of food and drink items that are familiar to users [6,14]. Simple categorization for recording nutrition habits is well accepted and appreciated for routine use [6], but some participants consider such categorization too coarse [7]. Photographs of food and drink items are considered useful for adjusting portion sizes, especially if they include a scale or familiar cutlery as a size reference [10]. Photographing food and drink items for recording and later consultation is considered practical for occasional use, but not for routine use [6]. Educational games are considered most suited to the young population and to short-term use. Thus, ease of use and the ability to quickly launch and complete functions are important [9].
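To make the tool features discussed above concrete, the sketch below illustrates, in Python, the kind of food-item lookup many of the reviewed tools describe: a nutrient database, an adjustable portion size, and a glycaemic index label (low/medium/high). It is purely illustrative; the item names, nutrient values and database structure are invented and are not taken from any of the reviewed applications, although the GI cut-offs used (55 and 69) are commonly cited.

```python
# Illustrative sketch only: a toy food-item lookup with portion scaling and a
# GI indicator. All item names and nutrient values below are invented.

FOOD_DB = {
    "wholegrain bread": {"kcal_per_100g": 250, "carbs_g_per_100g": 41, "gi": 51},
    "apple":            {"kcal_per_100g": 52,  "carbs_g_per_100g": 14, "gi": 36},
    "white rice":       {"kcal_per_100g": 130, "carbs_g_per_100g": 28, "gi": 73},
}

def gi_category(gi: int) -> str:
    """Label a glycaemic index value as low, medium or high (common cut-offs)."""
    if gi <= 55:
        return "low"
    if gi <= 69:
        return "medium"
    return "high"

def record_intake(item: str, grams: float) -> dict:
    """Return an intake record for one food item, scaled to the portion size."""
    info = FOOD_DB[item]
    factor = grams / 100.0
    return {
        "item": item,
        "portion_g": grams,
        "kcal": round(info["kcal_per_100g"] * factor, 1),
        "carbs_g": round(info["carbs_g_per_100g"] * factor, 1),
        "gi_category": gi_category(info["gi"]),
    }

print(record_intake("white rice", 180))
# {'item': 'white rice', 'portion_g': 180, 'kcal': 234.0, 'carbs_g': 50.4, 'gi_category': 'high'}
```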
4. Discussion

The identified publications show that mobile terminal-based tools have been generally well accepted and have been shown to be effective for diet management or glycaemic control to a certain degree. For successful diet management, people with diabetes need a good understanding of their diet regimen. In order to make a diet management tool feasible and useful, it should enable recording of food intake in an easy but sufficiently accurate manner. It should also provide immediate analytical feedback based on personal data in an easily interpretable way, preferably alongside other recorded data, such as blood glucose values and physical activity, so that patients can reflect on their total behaviour. The tool should also include educational materials, with a database of food and drink items familiar to patients. For accurate recording of food quantities, a visual reference, such as photographs taken with a familiar object to indicate size, is considered useful. From this review, key features for achieving both ease of use and accuracy in recording could not be clearly identified because of the mixed feedback from the participants, the time and effort required for user instructions, and the study designs, which in several studies do not compare the different tools. Food recognition by photographing may have a high impact once the technology enables reliable identification. Another challenge is how to design a tool that supports adherence in self-monitoring over a substantial period – long enough to achieve healthy effects. It might not be necessary for a tool to be used permanently if its use leads to better diet management, but it often needs to be used at least periodically to maintain awareness of the importance of a healthy dietary regimen. As described in [6], simple and quick registration with immediate feedback would be suitable for routine use, but at the same time a tool should be designed so that it does not become tiresome or boring. Key features that encourage a wide variety of patients to stay continuously engaged in using a tool should be investigated in future research, borrowing knowledge from the fields of persuasive technology, human-computer interaction, and psychology. The market for advanced mobile phones, e.g. smartphones, is growing rapidly and a great number of mobile applications are available on the market today. Further
research is required to examine such applications in order to identify key features for the design of effective and useful support tools for diet management for people with diabetes – and for other conditions that may benefit from diet management.

Acknowledgements: This work was supported by the Centre for Research-based Innovation, Tromsø Telemedicine Laboratory (TTL), Norwegian Research Council Grant No. 174934.
References
[1] Tatara N, et al. A Review of Mobile Terminal-Based Applications for Self-Management of Patients with Diabetes. In: eHealth, Telemedicine, and Social Medicine, 2009. eTELEMED '09. International Conference on. 2009. p. 166-175.
[2] Nagelkerk J, et al. Perceived barriers and effective strategies to diabetes self-management. Journal of Advanced Nursing. 2006;54(2):151-158.
[3] Farmer A, et al. A real-time, mobile phone-based telemedicine system to support young adults with type 1 diabetes. Inform Prim Care. 2005;13(3):171-177.
[4] Kyung-Soon Park, et al. PDA based Point-of-care Personal Diabetes Management System. In: Engineering in Medicine and Biology Society, 2005. IEEE-EMBS 2005. 27th Annual International Conference of the. 2005. p. 3749-3752.
[5] Tsang MW, et al. Improvement in diabetes control with a monitoring system based on a hand-held, touch-screen electronic diary. J Telemed Telecare. 2001 Feb;7(1):47-50.
[6] Årsand E, et al. Designing mobile dietary management support technologies for people with diabetes. J Telemed Telecare. 2008 Oct;14(7):329-332.
[7] Årsand E, et al. Mobile phone-based self-management tools for type 2 diabetes: the few touch application. J Diabetes Sci Technol. 2010 Mar;4(2):328-336.
[8] Fukuo W, et al. Development of a Hand-Held Personal Digital Assistant-Based Food Diary with Food Photographs for Japanese Subjects. Journal of the American Dietetic Association. 2009 Jul;109(7):1232-1236.
[9] DeShazo J, et al. Designing and remotely testing mobile diabetes video games. J Telemed Telecare. 2010 Aug 2;:jtt.2010.091012.
[10] Tani S, et al. Development of a Health Management Support System for Patients with Diabetes Mellitus at Home. J Med Syst. 2009;34(3):223-228.
[11] Mougiakakou S, et al. Mobile technology to empower people with Diabetes Mellitus: Design and development of a mobile application. In: Information Technology and Applications in Biomedicine, 2009. ITAB 2009. 9th International Conference on. 2009. p. 1-4.
[12] Shroff G, et al. Wearable context-aware food recognition for calorie monitoring. In: Wearable Computers, 2008. ISWC 2008. 12th IEEE International Symposium on. 2008. p. 119-120.
[13] Forjuoh SN, et al. Improving diabetes self-care with a PDA in ambulatory care. Telemed J E Health. 2008 Apr;14(3):273-279.
[14] Ma Y, et al. PDA-assisted low glycemic index dietary intervention for type II diabetes: a pilot study. Eur J Clin Nutr. 2006;60(10):1235-1243.
[15] Sevick MA, et al. Design, feasibility, and acceptability of an intervention using personal digital assistant-based self-monitoring in managing type 2 diabetes. Contemp Clin Trials. 2008 May;29(3):396-409.
[16] Sevick MA, et al. Factors associated with probability of personal digital assistant-based dietary self-monitoring in those with type 2 diabetes. J Behav Med. 2010;33(4):315-325.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-28
Interaction Between COPD Patients and Healthcare Professionals in a Cross-Sector Tele-Rehabilitation Programme Birthe DINESENa,1 , Stig Kjaer ANDERSEN a, Ole HEJLESEN a, Egon TOFT a a Department of Health Science and Technology, Aalborg University, Denmark
Abstract. This paper explores how technology affects the interaction between chronic obstructive pulmonary disease (COPD) patients and healthcare professionals in a cross-sector tele-rehabilitation programme. The qualitative analysis has shown that a community of rehabilitation can be created despite the presence of long-distance technology. In the tele-rehabilitation programme, the interaction between the COPD patients at home and the healthcare professionals at the clinic has evolved with dialogue as the basis for mutual learning processes and new relationships. Managed properly, rehabilitation at a distance can be both effective and satisfying. Keywords. Tele-rehabilitation, COPD patients, healthcare professionals, wireless technology, preventive integrated care
1. Introduction

In 2005, three million people died of chronic obstructive pulmonary disease (COPD), equivalent to 5% of all deaths globally that year [1]. Patients with severe and very severe COPD have a readmission rate of 63% during a mean follow-up of 1.1 years, with physical inactivity as the most significant predictor of readmission [2]. COPD patients with serious symptoms experience significant limitations in their everyday life, and the effect of medical treatment is limited. Many COPD patients must live with a reduced level of function, inactivity, frustration and social isolation. The issue, then, is to develop the most effective means of delivering and coordinating multidisciplinary care for COPD patients [3]. Reviews of non-telecommunications-based disease management programs for patients with COPD show these programs to be heterogeneous in terms of interventions, outcome measures and study design [4,5]. There is a need for more research on disease management programs for COPD patients that cut across both primary and secondary care [6,7,8]. In the research and innovation project "Telehomecare, chronic patients and the integrated healthcare system" (the TELEKAT project), we have taken up the challenge of combining rehabilitation activities and the use of new technology in order to develop a cross-sector tele-rehabilitation programme for COPD patients. The patients in focus are those with severe or very severe COPD. The aim of this paper is to explore how technology
Corresponding Author: Birthe Dinesen, Assistant Professor, Aalborg University, Department of Health Science and Technology, Fredrik Bajers Vej 7 D1, DK-9220 Aalborg, Denmark, E-mail:
[email protected], tel. +45 20515944
affects the interaction between COPD patients and healthcare professionals in a telerehabilitation programme. Through user-driven innovation, the TELEKAT project has focused on developing a programme of tele-rehabilitation that can be carried out in the patient’s own home in collaboration with various healthcare professionals. Rehabilitation, instead of being carried out at a clinic, can thus become a part of the patient’s everyday life in the home environment. A telehealth monitor box is installed in the patient’s home for four months. Based on wireless technology the telehealth monitor can collect and transmit data about the patient’s blood pressure, pulse, weight, oxygen level and lung function to a web-based portal or to the patient’s electronic health care record. Healthcare professionals, e.g., general practitioners (GP), district nurses, nurses, doctors and physiotherapists at the health care centre or hospital, can assess the patient’s data, monitor the patient’s disease and training inputs and provide advice to the patient. Patients and relatives can also view the data on the web portal and decide with whom they want to share their data (see figure 1).
Figure 1. The TELEKAT programme for telerehabilitation
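As a rough illustration of the data flow described above, the following Python sketch packages one set of home measurements for transmission to a portal. The field names, patient identifier and JSON-over-HTTPS format are assumptions made for illustration only; they do not describe the actual TELEKAT implementation.

```python
# Illustrative sketch only: how a home telehealth measurement (blood pressure,
# pulse, weight, oxygen saturation, lung function) might be packaged before
# being sent to a web portal or EHR. All field names are hypothetical.
import json
from datetime import datetime, timezone

def build_measurement(patient_id: str, systolic: int, diastolic: int,
                      pulse: int, weight_kg: float, spo2: int, fev1_l: float) -> dict:
    return {
        "patient_id": patient_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "blood_pressure": {"systolic": systolic, "diastolic": diastolic},
        "pulse": pulse,
        "weight_kg": weight_kg,
        "spo2_percent": spo2,
        "fev1_litres": fev1_l,
    }

payload = json.dumps(build_measurement("patient-042", 136, 80, 78, 72.4, 94, 1.1))
# In a real deployment this payload would be transmitted securely (e.g. an
# HTTPS POST) to the portal endpoint; that part is omitted here.
print(payload)
```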
2. Theoretical Framework

This study is based on the notion of "communities of practice", as inspired by the learning theorist Etienne Wenger [9]. Wenger defines "communities of practice" as groups of people who share a concern or a passion for something they do and who learn how to do it better as they interact regularly. Wenger sees learning as a social practice centred on knowledge-sharing. The learning process is thus more than an individual cognitive process. Learning takes place in interaction with others with whom one has a common interest. Hence, one becomes part of a social learning process. Through the communities of practice, the participants realize that they gain more knowledge and understanding of the common interest. Over time and in sustained interaction, the participants develop a shared practice and repertoire of resources: they exchange experiences, stories, tools, and ways of addressing recurring problems. Participants will be involved in a set of relationships over time.
3. Methods

The case study method [10] was chosen as the overall research strategy and provides an explorative, in-depth study. A randomised study (n=111) has also been conducted. One group of COPD patients, the intervention group (n=57), received home monitoring using tele-rehabilitation technology. A second group, the control group (n=48), followed the traditional rehabilitation programme. Clinical and economic data from the randomised study are not reported in this paper. Data collection techniques included documentary materials, participant observation inspired by Delamont [11] (163 hours in total) and qualitative interviews inspired by Kvale [12] with healthcare professionals: 6 GPs, 6 nurses and doctors at the hospital, 6 nurses at the healthcare centre and 8 district nurses. Of the 57 COPD patients in the intervention group, 22 were interviewed three times while doing home monitoring. All the transcribed interviews were coded with NVivo 8.0 software and analysed in steps inspired by Kvale [12]. The research process was carried out in dialogue with research colleagues. In order to optimize the generalization of case studies, the reference literature [13] recommends analytical generalization. In the TELEKAT project, analytical generalization has been applied by using a theoretical framework and triangulation of data collection. Ethical approval was obtained from the local Ethics Committees.
4. Findings

The findings on how technology influences the interaction between COPD patients and healthcare professionals in a telerehabilitation programme are presented in terms of themes and examples in Table 1.

Table 1. Themes of how technology influences the interaction between COPD patients and healthcare professionals

Theme: Mutual learning process
Example: Healthcare professionals state that they learn more about COPD patients and rehabilitation in their everyday life. COPD patients state that they were able to integrate and maintain changes of lifestyle in their everyday life.

Theme: "Community of rehabilitation"
Example: Healthcare professionals and COPD patients have developed a joint commitment and perception of telerehabilitation.

Theme: From authority to dialogue
Example: Dialogue between hospital and patient (and family) breaches the healthcare professionals' knowledge monopoly. Patients express the view that they have developed dialogue with the healthcare professionals on a more equal basis.

Theme: Technology as network creator
Example: The design of the web portal makes it possible for the healthcare professionals (e.g. the doctor at the hospital and the patient's GP) and the patients to access the same data.

Theme: Technology as a pedagogical tool
Example: Measured values that were accessible and visualised through graphics provide the patients with an overview of the development of their own symptoms.

Theme: Cared for and feeling secure
Example: The COPD patients state that they feel cared for and secure in their interaction with the healthcare professionals.
Table 2 presents the baseline characteristics of the participants in the randomized study.
Table 2. Characteristics of interviewed COPD patients at baseline. The values shown are the mean or median.

                               Telerehabilitation Group (n=57)    Control Group (n=48)
Variable                       Male        Female                 Male        Female
Number                         23          34                     22          26
Age (years)                    69.6        67.2                   70.6        59.9
FEV1 (liters)                  1.10        0.75                   1.16        0.74
Weight (kg)                    79.61       67.53                  79.56       60.67
BMI (kg/m2)                    25.74       25.31                  26.8        22.76
Oxygen saturation (on air)     93.33       93.63                  94.11       94.42
Blood pressure                 137/79      136/82                 136/80      132/77
Pulse                          77          85                     80          80
MRC dyspnoea score             3.5         3.64                   3.6         4.00
5. Discussion Based on a qualitative analysis, the interaction between COPD patients and healthcare professionals in the tele-rehabilitation programme can be characterized in terms of Wenger’s “community of rehabilitation”, linking COPD patients and healthcare professionals across sectors (see table 1). The characteristics of the interviewed COPD patients in the tele-rehabilitation group are representative compared to the control group (see table 2) in the TELEKAT study. The COPD patients expressed the view that their relationships with the healthcare professionals had developed from one of being subordinated to professional authority to a relationship of dialogue where the focus was on mutual learning. Observations showed that COPD patients tended to become more active as they participated in the programme. The rehabilitation process thus became a learning process. It was more than an individual cognitive process centered on the patient, since the learning was also distributed amongst healthcare professionals, the family and network of the COPD patients. Observations showed that the telerehabilitation programme created a bridge between the healthcare professionals’ domain and the patients’ home domain. Moreover, the programme also challenged the traditional authority relationships and ways of interacting between professional healthcare practitioners and patients. This change in interaction between healthcare professionals and patients from an authoritarian relationship to a more egalitarian relationship based on dialogue between the parties has been seen in another study on home hospitalisation [7]. Equal dialogue not only enhances patients’ personal capacities to handle the consequences of living with the limitations of severe COPD. It also enhances the ability of healthcare professionals to provide treatment and to help patients regain some of their lost potential for staying active and avoiding rehospitalisation. Healthcare professionals and patients expressed the view that the design and function of the web portal promoted networking between the parties. Observations and the qualitative analysis showed that being able to see the measured and visualized values on the screen motivated the patients to involve themselves more deeply in their rehabilitation activities, giving them a better understanding of their own disease. This tendency has been seen in studies of home monitoring of other chronic diseases [14]. In the TELEKAT study, however, the patients articulated the view that they felt well cared for and were secure in the knowledge that the healthcare professionals were there for them “at the end of
the line”. This made them feel more secure in carrying out the rehabilitation activities in their homes, despite the fact that no one was there to supervise them on the spot.
6. Conclusion The qualitative analysis has shown that a community of rehabilitation can be created despite the presence of long-distance technology. In the tele-rehabilitation programme, the interaction between the COPD patients at home and the healthcare professionals at the clinic has evolved with dialogue as the basis for mutual learning processes and new relationships. Managed properly, rehabilitation at a distance can be both effective and satisfying. Acknowledgements. The TELEKAT project is funded by the Program for User-driven Innovation, the Danish Enterprise and Construction Authority, Center for Healthcare Technology, Aalborg University, and by various clinical and industrial partners in Denmark. For further details, see www.telekat.eu.
References
[1] World Health Organization. Chronic obstructive pulmonary disease (COPD). Fact sheet N°315; 2009. Available at: http://www.who.int/mediacentre/factsheets/fs315/en/ Accessed 8th December 2010.
[2] Garcia-Aymerich J, Farrero E, Felez MA, Izquierdo J, Marrades RM, Anto JM. Risk factors of readmissions to hospital for a COPD exacerbation: a prospective study. Thorax 2003;58:100-105.
[3] Niesink A, Trappenburg JCA, Weert-van Oene GH, Lammers JWJ, Verheij TJM, Schrijvers AJP. Systematic review of the effects of chronic disease management on quality-of-life in people with chronic obstructive pulmonary disease. Respiratory Medicine 2007;101:2233-2239.
[4] Lemmens KMM, Nieboer AP, Huijsman R. A systematic review of integrated care use of disease management interventions in asthma and COPD. Respiratory Medicine 2009;103:670-691.
[5] Peytremann-Bridevaux I, Staeger P, Brideaux PO, Ghali WA, Burn B. Effectiveness of Chronic Obstructive Pulmonary Disease-Management Programs: Systematic Review and Meta-analysis. American Journal of Medicine 2008;121(5):433-443.
[6] Paré G, Jaana M, Sicotte C. Systematic Review of Home Telemonitoring for Chronic Diseases: The Evidence Base. Journal of the American Medical Informatics Association 2007;14(3):269-277.
[7] Dinesen B. Implementation of Telehomecare Technology – impact on chronically ill patients, healthcare professionals and the healthcare system. Aalborg, Aalborg University; 2007.
[8] Polisena J, Tran K, Cimon K, Hutton B, McGill S, Palmer K, Scott RE. Home telehealth for chronic obstructive pulmonary disease: a systematic review and meta-analysis. Journal of Telemedicine and Telecare 2010;16(3):120-127.
[9] Wenger E. Communities of practice. Learning, meaning, and identity. Cambridge: Cambridge University Press; 1998.
[10] Yin R. Case Study Research: Design and Methods. London, Sage Publications Inc; 2009.
[11] Delamont S. Ethnography and participant observation. In: Seale C, Gobo G, Gubrium JF, Silverman D, editors. Qualitative Research Practice. London: Sage Publications; 2007. pp. 205-17.
[12] Kvale S, Brinkmann S. Interviews: Learning the Craft of Qualitative Research Interviewing. Los Angeles, SAGE Publications; 2009.
[13] Flyvbjerg B. Five misunderstandings about case-study research. In: Seale C, Gobo G, Gubrium JF, Silverman D, editors. Qualitative Research Practice. London: Sage Publications; 2007. pp. 390-404.
[14] Dinesen B, Andersen PER. Qualitative evaluation of a diabetes advisory system, DiasNet. Journal of Telemedicine and Telecare 2006;12(2):71-74.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-33
Enhancing Self-Efficacy for Self-Management in People with Cystic Fibrosis
Elizabeth CUMMINGS a, Jenny HAUSER b, Helen CAMERON-TUCKER c, Petya FITZPATRICK c, Melanie JESSUP d, E Haydn WALTERS c, David REID b,c, Paul TURNER a
a eHealth Services Research Group, University of Tasmania
b Tasmanian Adult Cystic Fibrosis Unit, Royal Hobart Hospital, Tasmania
c Menzies Research Institute, University of Tasmania
d Griffith University, Queensland, Australia
Abstract: This paper reports on a research trial designed to evaluate the benefits of a health mentoring programme supported with a web and mobile phone based self-monitoring application for enhancing self-efficacy for self-management skills and quality of life for people with CF. This randomised, single-blind controlled trial evaluated two strategies designed to improve self-management behaviour and quality of life. Task-specific self-efficacy was fostered through mentorship and self-monitoring via a mobile phone application. Trial participants were randomised into one of three groups: Control, Mentor-only and Mentor plus mobile phone. Analysis and discussion focus on the experiences of participants through a methodology utilising descriptive statistics and semi-structured interviews. The results highlight the challenges of stimulating self-management behaviours particularly in adolescents and in the evaluation of the role of mobile applications in supporting them. Keywords: chronic disease, self-management, information technology, m-health
1. Introduction Managing and maintaining health care support for the chronically ill poses numerous challenges for conventional models of health care delivery [1]. These challenges are particularly evident where the chronically ill are primarily children or young adults, as in the case of cystic fibrosis (CF) [2]. In response, new models of care have emerged including some that aim to support more patient involvement through mentoring and self-management [3]. Some evidence suggests that these types of interventions can be as effective as the introduction of new medications [4], although it is acknowledged that there are limitations to the techniques that have so far been utilised to evaluate interventions of this type [5, 6]. At the same time, there has been an increasing diffusion of web based and mobile information and communication technologies (ICTs) to assist in improving care delivery [7]. These systems have strong potential for supporting home based medical care [8] and there are also a number of studies reporting positive outcomes achieved in patients with chronic illness through encouraging self–management supported by technology [9]. It is however, evident that these types of interventions are also highly complex and require sophistication in the approaches utilised to implement them [10] and to evaluate their impacts [11, 12].
This paper reports on a research trial designed to evaluate the benefits of a health mentoring programme supported with a web and mobile phone based self-monitoring application for enhancing self-efficacy for self-management skills and quality of life for people with CF. The paper focuses on the mobile phone application, its usage and participants’ perception of its value in assisting them to self-manage.
2. Methods

This randomised, single-blind controlled trial evaluated two strategies designed to improve self-management behaviours and quality of life in adolescents and adults with CF. Task-specific self-efficacy was fostered through mentorship and self-monitoring via a mobile phone and web-based application. Participants were recruited from within the CF community across Tasmania. All potential participants received a letter outlining the study and requesting volunteers. Respondents willing to participate then attended their regular CF clinic to formally consent and to allow baseline measurements and randomisation to be conducted. A total of nineteen participants were recruited through the paediatric and adult CF clinics. The study was approved by the Tasmanian statewide ethics committee (H0008370) and all participants and their parents or guardians (if aged less than 18 years) provided written informed consent. Trial eligibility criteria were as follows. Inclusion: formal diagnosis of Cystic Fibrosis (genotype or positive sweat test); able to provide informed consent; and a landline telephone (to allow for mentoring). Exclusion: diagnosis of other active lung disease; awaiting organ transplantation; or severe lung disease (FEV1 <35%). Participants were randomised to one of three groups:
• Intervention 1: access to a self-efficacy program under the guidance of a mentor.
• Intervention 2: access to the same self-efficacy program as Intervention 1, plus the provision of a mobile phone and web-based application allowing participants to monitor their daily symptoms and quality of life.
• Control: participants in this arm received the normal level of CF care.
Participants took part in the trial for a period of six months, with a further follow-up six months post completion. Data collection involved both quantitative and qualitative assessments. The primary quantitative outcome measures, collected at baseline and at 3, 6, and 12 months, were: the SF-36 version 2 for subjective health status; the Stanford Self-Efficacy for Managing Chronic Disease 6-Item Scale; the CFQR (QoL); and respiratory function tests, namely forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC). The data collected through these outcome measures were non-normally distributed, and non-parametric statistics were used to analyse the data1. The primary qualitative data collected were semi-structured interviews with all participants and mentors at completion of the project. These interviews were audio recorded and aimed to explore the experiences of the participants. Mentoring was conducted via telephone by trained health professionals, and the ICT application facilitated electronic self-reporting of daily symptoms and access to feedback providing longitudinal views of self-reported data for comparison. The digital
Quantitative data analysis subject of forthcoming paper.
symptom monitoring and feedback application posed a set of questions forming a daily symptom diary, with an additional randomly generated question to improve data quality. On average it took a patient 1½ minutes to complete the diary. A data packet (via SMS) was sent to a messaging server and on to a database for consolidation and interpretation. Participants were able to request and receive reports on their diary data via SMS. This supported viewing of a graphical representation of longitudinal electronic diary summary data blocked into periods of seven days. It was anticipated that providing participants in Intervention 2 with this longitudinal data would support them in reflecting on changes in their condition and on their development of self-management and self-efficacy skills. Mentors were able to log in to the project website and review their mentee's diary entries and, if necessary, make telephone contact with participants to discuss clinical status or revise action plans and goals. The digital symptom monitoring and feedback diary application was designed to function at very low cost for the CF community, thereby ensuring sustainable availability of the application beyond the lifetime of the trial. The application and technology platform consisted of three core components:
• A suitable mobile phone application that captured and rendered clinical and non-clinical information from patients.
• A mobile phone server application to capture and send clinical information to each patient involved in Intervention 2.
• A database with web interface to store all phone data, action plans and progress notes and provide graphical representation of longitudinal electronic diary data.
Critically, the approach underpinning the development, use and evaluation of this application acknowledged the clinical and information systems challenges faced in trying to understand the impact and outcome of this intervention on individual patients. While statistics on discrete variables highlight evidence of change at a cohort level, they explain little about the 'lived experience' of the individual's self-perceived health status [13]. Similarly, from a technology perspective, focusing on technological acceptance or usability often fails to reveal detail on the interplay of personal factors that make up an individual's technology experiences, attitudes and responses [14]. As a consequence, a range of quantitative and qualitative data was collected and analysed.
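The following sketch illustrates, under stated assumptions, the two steps described above: parsing an incoming diary packet and grouping entries into seven-day blocks for the summary reports. The packet format and field names are invented for illustration; the trial application's actual message format is not documented in this paper.

```python
# Illustrative sketch only: parse a hypothetical SMS diary packet and count
# entries in consecutive seven-day blocks, mimicking the weekly summaries.
from collections import defaultdict
from datetime import date

def parse_packet(raw: str) -> dict:
    """Parse e.g. 'id=8004;date=2010-03-02;cough=2;sputum=1;breathless=3'."""
    fields = dict(pair.split("=", 1) for pair in raw.strip().split(";"))
    fields["date"] = date.fromisoformat(fields["date"])
    return fields

def weekly_blocks(entries: list[dict], start: date) -> dict[int, int]:
    """Count diary entries in consecutive seven-day blocks from the start date."""
    counts: dict[int, int] = defaultdict(int)
    for entry in entries:
        block = (entry["date"] - start).days // 7
        counts[block] += 1
    return dict(counts)

entries = [parse_packet("id=8004;date=2010-03-02;cough=2;sputum=1;breathless=3"),
           parse_packet("id=8004;date=2010-03-05;cough=1;sputum=1;breathless=2")]
print(weekly_blocks(entries, start=date(2010, 3, 1)))   # {0: 2}
```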
3. Results

A total of 20 participants were enrolled in the trial. One participant, randomised to Intervention 2, withdrew, and their data were removed from the dataset. There was no significant difference in sex or age distribution between the groups (see Table 1).

Table 1. Participants by Sex and Age

              Control    Intervention 1    Intervention 2
Male          3          4                 3
Female        4          3                 2
Median Age    19         20                20
Age Range     17-47      16-33             14-45
Eighteen (95%) of the participants had mobile phones that they used frequently. Seventeen (89%) perceived themselves to be either expert or proficient users of mobile phones, with two identifying themselves as beginners. The most frequently
reported uses for mobile phones were SMS [N=14], phone calls [N=12] and camera [N=6]. While phone usage patterns are changing rapidly, these figures appear consistent with patterns of 'normal' teenage/young adult phone usage [15]. All participants in Intervention 2 used the symptom monitoring and feedback diary. Noticeably, one mobile phone user was particularly enthusiastic and completed over 100 diary entries. Table 2 shows the number of diary entries for each participant and their ages. Active participation lasted six months, so the maximum possible number of diaries was 183.

Table 2. Symptom Monitoring Diary Use by Age over the entire active study period

Id No             8001    8002    8004    8005    8006    Total
Total Diaries     62      26      132     47      32      301
First Quarter     39      22      66      22      27      176
Second Quarter    23      4       68      25      5       125
Age               14      17      45      21      23
In relation to the Stanford Self-Efficacy for Managing Chronic Disease 6-Item scoring system, the control group experienced a decrease in median score over the 6-month trial period, but the median score for this group increased at 12 months. In both Intervention 1 and Intervention 2 there was a significant increase in median self-efficacy score following the active intervention, and this was sustained at the 12-month point. This is consistent with other research [12]. At the end of the 6-month active intervention period, it was clear from the interviews that participants using the mobile application found it convenient and felt that it fitted their lifestyles. Several also felt the application assisted in thinking about their symptoms. There was a feeling that having a new mobile phone was 'kinda cool' and that it opened up potential social interactions with people beyond the trial. Across both intervention groups, participants expressed a strong sense that they had engaged in the trial to 'help the researchers' rather than to help themselves. This clearly highlights one challenge faced in engaging patients in self-management through clinical trials [16].
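The paper states that the outcome data were non-normally distributed and analysed with non-parametric statistics, without naming the tests used. As one hedged illustration of such an analysis, the sketch below applies a Wilcoxon signed-rank test to paired, entirely invented self-efficacy scores; it should not be read as the study's actual analysis.

```python
# Illustrative sketch only: a common non-parametric paired comparison of
# self-efficacy scores at baseline vs. after the active intervention.
# The scores below are invented; the study's real tests are not specified here.
from scipy.stats import wilcoxon

baseline = [5.2, 6.0, 4.8, 7.1, 5.5, 6.3]
month_6  = [6.1, 6.8, 5.5, 7.4, 6.0, 7.0]

stat, p_value = wilcoxon(baseline, month_6)
print(f"Wilcoxon statistic={stat}, p={p_value:.3f}")
```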
4. Discussion

This trial implemented and evaluated a mobile phone-supported mentorship system for people with CF, aimed at aiding the enhancement of self-efficacy. Participants were experienced in ICT use and all participants in Intervention 2 used the device, although usage dropped off over time. The application was generally considered to be useful and allowed CF individuals to focus on changes in symptoms. Self-efficacy increased in subjects in both intervention groups, but it is unclear from the results whether the application provided additional benefits beyond supporting the mentoring intervention. This trial demonstrates that use of a mobile application is feasible with a geographically dispersed CF population and that most people with CF are confident with use of mobile platforms. Although self-efficacy improved in Intervention 2, the extent of the mobile application's role is unclear, as similar improvements in self-efficacy were observed in Intervention 1. Answering these questions requires a larger study that is adequately powered to detect differences between treatment arms and to further delineate the contribution of the application to enhancing self-efficacy. To this end, insights from this study have been incorporated into the design and implementation of a larger community-centric CF project in which health mentors in regional centres
are being trained and supported by web and mobile phone based applications that allow improved access to education for both healthcare workers and people with CF. The mobile and web applications have been refined to further support self-monitoring. In summary, this paper examined participants in an RCT of a mentoring system, with or without an ICT application, to support the acquisition of self-efficacy skills in adolescents and adults with CF. The study confirms that people with CF are confident and prepared to use ICT devices for self-monitoring and appreciate the value of such an exercise. The results confirm the feasibility of interventions to improve self-efficacy, but the contribution of the ICT component remains to be determined in a larger study.

Acknowledgments: The authors would like to thank the Tasmanian CF population, the volunteer mentors, the Tasmanian Adult CF Unit and the wider Pathways Home for Respiratory Illness Project team members.
References
[1] Grol R. Improving the quality of medical care: Building bridges among professional pride, payer profit, and patient satisfaction. JAMA 2001;286:2578-86.
[2] Downs JA, Roberts CM, Blackmorel AM, Le Souef PN, Jenkins SC. Benefits of an education programme on the self-management of aerosol and airway clearance treatments for children with cystic fibrosis. Chronic Respiratory Disease 2006;3(1):19-27.
[3] Coleman MT, Newton KS. Supporting self-management in patients with chronic illness. Am Fam Physician 2005;72:1503-10.
[4] Lorig K, Ritter P, Stewart A, et al. Chronic disease self-management program: 2-year health status and health care utilization outcomes. Med Care 2001;39(11):1217-23.
[5] Grossman J, Mackenzie FJ. The randomized controlled trial: gold standard, or merely standard? Perspect Biol Med 2005;48(4):516-34.
[6] Bluhm R. From Hierarchy to Network: a richer view of evidence for evidence-based medicine. Perspect Biol Med 2005;48(4):535-47.
[7] Emery D, Hayes BJ, Cowan AM. Telecare delivery of health and social care information. Health Informatics Journal 2002;8(1):29-33.
[8] Pollard JK, Fry ME, Rohman S, Santarelli C, Theodorou A, Mohoboob N. Wireless and Web-based medical monitoring in the home. Med Inform and Internet 2002;27(3):219-27.
[9] Celler BG, Lovell NH, Basilakis J. Using information technology to improve the management of chronic disease. Medical Journal of Australia 2003;179(5):242-6.
[10] Gustafson DH, Wyatt JC. Evaluation of ehealth systems and services - We need to move beyond hits and testimonials. BMJ 2004;328(7449):1150.
[11] Harrison M, Koppel R, Bar-Lev S. Unintended Consequences of Information Technologies in Health Care - An Interactive Sociotechnical Analysis. J Am Med Inform Assoc 2007;14(5):542-9.
[12] Cummings E, Turner P. Patients at the Centre: Methodological considerations for evaluating evidence from Health interventions involving Patients Use of Web-based Information Systems. The Open Medical Informatics Journal 2010;4:188-194. Bentham Open.
[13] Olsson J, Terris D, Elg M, Lundberg J, Lindblad S. The one-person randomized controlled trial. Qual Manag Health Care 2005;14(4):206-16.
[14] Stoop AP, Berg M. Integrating quantitative and qualitative methods in patient care information systems evaluation: guidance for the organizational decision maker. Methods Inf Med 2003;4:458-62.
[15] Lenhart A. Teens and Mobile Phones Over the Past Five Years: Pew Internet Looks Back (2009). <www.pewinternet.org>
[16] Felt U, Bister M, Strassnig M, Wagner U. Refusing the information paradigm: informed consent, medical research, and patient participation. Health: An Interdisciplinary Journal 2009;13:87-106.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-38
Evaluation of a Hyperlinked Consumer Health Dictionary for Reading EHR Notes Laura SLAUGHTERa,b,1, Karl ØYRI a, Erik FOSSE a The Intervention Centre, Oslo University Hospital (OUH) and Dept. of Clinical Medicine, Oslo, Norway b Dept. of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway a
Abstract. In this paper, we report on a pilot study conducted to test the usefulness and understandability of definitions in a Consumer Health Dictionary (IVS-CHD). Our two main goals for this study were to evaluate functionality of the dictionary when embedded in electronic health records (EHR) and determine the methodology for our larger-scale project to iteratively develop the IVS-CHD. The hyperlinked IVS-CHD was made available to thoracic surgery patients reading their own EHR. We asked patients to rate definitions on two 5-level Likert items measuring perceived usefulness and understandability. We also captured the terms that patients wanted defined, but that were not included in the IVS-CHD. Preliminary results indicate the types of problems that must be avoided when creating definitions, for example, that patients prefer detailed explanations that include medical outcomes, and that do not use "unfamiliar" terms they must also look up. We also have gained insight into the types of terms that patients want defined from their EHR notes, especially certain abbreviations. Patients further commented on the experience of reading EHR notes directly from the same system used by healthcare personnel and the help strategy of linking the contents to a hyperlinked dictionary. Keywords. Consumer Health Information, Dictionary, Electronic Health Records
1. Introduction Health records are internal working documentation used by healthcare professionals, and are also official legal documents. Until recently, patients' ability to understand and use the contents of these records has not been a huge concern. Yet now, with more countries passing legislation giving patients legal right to access their records and increasing availability of personal health records systems, many researchers are working on ways to help patients understand their health record content [1-3]. The Intervention Centre (IVS), a multi-disciplinary research centre at Oslo University Hospital (Rikshospitalet) in Oslo, Norway, has developed a consumer health dictionary (IVS-CHD), which is accessible through hyperlinks and embedded in the electronic health records (EHR) used in the hospital. Patients reading their records see, for example, their surgical notes with hyperlinked terms throughout the text. When mousing over a hyperlinked term, definitions are displayed in a pop-up box. At the bottom of the pop-up box, other links can be found that take the reader to further 1
Corresponding Author: Laura Slaughter
information, opening a new web browser window with the contents. The IVS-CHD resources include the patient version of a Norwegian catalog of pharmaceuticals with detailed drug descriptions [4] and an encyclopedia of medical information written for patients [5]. The encyclopedia contains textual information in addition to diagrams, Flash programs, videos, and animations. The third source was the Norwegian Medical Dictionary (Norsk Medisinsk Ordbok) published by Kunnskapsforlaget [6]. Our primary task is the evaluation of IVS-CHD definitions for use by patients and of the general usefulness of the embedded consumer health dictionary tool. We report on a study that serves as preparation for further work to iteratively improve the patient-friendly definitions of medical terms. The concerns we address are:
• Is our consumer health dictionary seen as a useful explanatory tool that will help patients understand their own record content?
• What makes a good definition for patients? Patients are not one homogeneous group. How do we write good definitions for everyone?
• When patients read their records, what do they really want help with? Which words do they want to look up, and why?
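To illustrate the kind of term linking the IVS-CHD performs, the sketch below matches dictionary terms in note text and wraps them in hyperlinks carrying a consumer-oriented definition that a client could show as a pop-up. The mark-up, the matching rules and the two sample Norwegian definitions are simplifications invented for illustration; they are not the system's actual implementation.

```python
# Illustrative sketch only: wrap dictionary terms found in EHR note text in
# hyperlinks whose title attribute carries a consumer-oriented definition.
import re
import html

DICTIONARY = {
    "dilatasjon": "utvidelse av et hulrom eller kar",
    "torakotomi": "kirurgisk åpning av brystkassen",
}

def link_terms(note_text: str) -> str:
    """Return HTML in which every known term is wrapped in a definition link."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, DICTIONARY)) + r")\b",
                         re.IGNORECASE)
    def replace(match: re.Match) -> str:
        term = match.group(0)
        definition = DICTIONARY[term.lower()]
        return f'<a href="#" title="{html.escape(definition)}">{term}</a>'
    return pattern.sub(replace, html.escape(note_text))

print(link_terms("Det ble utført torakotomi uten komplikasjoner."))
```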
2. Methodology

We evaluated the IVS-CHD with patients from the thoracic surgery department at Oslo University Hospital (Rikshospitalet). These patients are referred to the hospital from all regions of Norway and may live in either an urban or a rural area of the country. Through our interactions with these patients, we iteratively developed the methodology that will be used in the larger-scale research project.

2.1. Participants

The five participants in this study were outpatients at the thoracic surgery unit. All the patients were male and between the ages of 58 and 68, with diagnostic codes related to cardiac transplant, carotid artery stenosis, or myocardial infarction. Their occupational backgrounds were diverse. They all came for tests and preparatory work in advance of an upcoming scheduled surgery. Patients were asked to participate as they became available, over a two-week period in December of 2010. Patients had to fulfill the following selection criteria before they were asked to participate: (1) the patient must have had at least one prior surgery at the hospital so that there would be a previous history of notes for the patient to read, (2) the patient must be a native speaker of Norwegian, (3) the patient must be able to read/write in Norwegian, and (4) the patient must have normal cognitive functions (i.e. no stroke patients or known cognitive impairments).

2.2. The IVS-CHD Consumer Health Dictionary

As stated above, the definitions displayed when patients mouse over EHR text come from several sources, which are merged in the IVS-CHD. Surgeons affiliated with IVS wrote some of the definitions; they were instructed only to create a definition that would be understandable to patients. We cannot be certain of the rules used for forming definitions in the drug handbook [4] or the encyclopedia (NEL) [5]. For the Norwegian medical dictionary [6], the editor has written that "definitions should not contain words
that cannot be found elsewhere in the dictionary, and they should be built up hierarchically so that the concept group the word belongs to (medicine, disease, muscle clip, etc.) is the first thing explained, and only after that the specifics for the word" [7]. Definitions in the Norwegian medical dictionary [6] are often preceded by explanatory synonyms.

2.3. Procedures

The total time allocated for each patient was 45 minutes, which was the maximum possible due to constraints such as staffing time and convenience for the patients. The patients were tested in a private room with two researchers and a nurse present. They used a laptop to read their own records directly within the hospital's EHR. The purpose of the study was explained to all patients, and they signed a consent form prior to completing the study tasks.

Task 1, Rate Definitions: The patient selects a part of the record from the doctor's notes, nursing notes, surgical notes, or discharge summary; the entire record is available, so the patient chooses what is of interest to them. All terms in the EHR text having definitions in the IVS-CHD are displayed as standard blue hyperlinks. When a patient clicks on a hyperlink, we automatically record that the term has been accessed. After reading a definition, the patient then rates the definition on two 5-level Likert items: 1) the usefulness of the definition (not useful/useful), and 2) how easy the definition is to understand (difficult/easy). In addition to the ratings, the patient's comments about the definition are recorded. There can be more than one definition available for a term (since sources are combined, e.g. one definition written by surgeons and one from the Norwegian medical dictionary); the patient must give ratings and comments on each definition.

Task 2, Complete Brief Questionnaire: The patient answers the following questions: (1) Do they wish to read their EHR notes on paper, on screen, or with no preference? (2) What can be done to make the records easier to read? (3) How can the EHR be improved to make it easier for patients to understand?

Task 3, Underline Difficult Medical Terms: Patients read a printed copy of the discharge summary from their last thoracic surgical procedure at the hospital. Terms are not underlined as they were on-screen with the IVS-CHD functionality. The patients are then asked to read the summary and underline for themselves the terms that they feel need to be defined. We do this in order to find out what terms need to be defined that are not yet in the dictionary.
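A minimal sketch of the data captured in Task 1 is given below: for each accessed definition, the two five-level Likert ratings and a free-text comment are recorded, and simple summaries can then be computed. The data structure is an assumption made for illustration, not the study's actual instrument or database.

```python
# Illustrative sketch only: one possible record for the Task 1 ratings.
from dataclasses import dataclass
from statistics import median

@dataclass
class DefinitionRating:
    term: str
    source: str             # e.g. "surgeon-written" or "medical dictionary"
    usefulness: int         # 1 (not useful) .. 5 (useful)
    understandability: int  # 1 (difficult) .. 5 (easy)
    comment: str = ""

ratings = [
    DefinitionRating("TIA", "NEL", 5, 5, "clear, explains the outcome"),
    DefinitionRating("TIA", "medical dictionary", 2, 3, "too terse"),
]

print("median usefulness:", median(r.usefulness for r in ratings))
```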
3. Results

Five patients participated, and together they rated a total of 25 definitions. We were able to capture some aspects of the types of definitions thoracic surgery patients prefer and of the terms that they need defined, though the small sample size is a limitation of the current study.
3.1. Definitions

The definitions from the published medical dictionaries did not fare any better than those written by the surgeons from our hospital. Problems and desiderata are described in Table 1.

Table 1. Lessons learned from problematic definitions when defining medical terms for thoracic surgery patients.
(1) Do not use unfamiliar medical terms within the definition that also need to be looked up. E.g. vertebral artery* - an artery that arises from the subclavian artery supplying the brain with blood (translation of: arteria vertebralis - arterie som avgår fra arteria subclavia og forsyner hjernen med blod). *Written by surgeons.
(2) Make sure the definition is complete. Definitions need to fit the context of the patient's situation and must therefore include the information necessary for understanding what happened during a procedure. E.g. thoracotomy [6] - surgical opening of the chest (translation of: torakotomi - kirurgisk åpning av brystkassen) should contain additional information about the approach: sternotomy, posterolateral, or anterolateral.
(3) Avoid single-word definitions. E.g. dilation [5] - expansion (translation of: dilatasjon - utvidelse).
(4) Avoid circular definitions and definitions based on the same term in a different grammatical form; instead, go straight to the needed clarification. E.g. palpatory [6] - has to do with palpation (translation of: palpatorisk - som har å gjøre med palpasjon).
(5) When possible, write definitions that explain effects. Explanation is crucial to patients, who prefer outcome information. E.g. a definition rated highly by a patient: TIA [5] - "transient ischemic attack", transient decreased blood flow to part of the brain with transient loss of bodily or mental functions; the condition clears within 24 hours (translation of: TIA - "transitorisk iskemisk atakk", forbigående nedsatt blodstrøm til en del av hjernen og med forbigående tap av kropps- eller mentalfunksjoner, tilstanden normaliserer seg i løpet av 24 timer). E.g. a definition given a low rating by a patient: TIA [6] - transient ischemic attack, transient bouts of oxygen deprivation in parts of the brain (translation of: TIA - transitorisk ischemisk attakk, forbigående anfall av oksygenmangel i deler av hjernen).
3.2. Terms to Define

Below, in Table 2, we present an example list of terms that patients accessed in the IVS-CHD while reading their notes related to thoracic surgery, and also the terms in the records that they want included in the dictionary in the future.

Table 2. Examples of terms patients want in the IVS-CHD to help with understanding their EHR notes.
Examples of terms accessed in the IVS-CHD: opiates (opiater); carotid stenosis (carotisstenoser); abdominal aortic aneurysm (abdominalt aorta aneurisme); doppler (shortened version of doppler heart monitor); BT (abbreviation of blood pressure in Norwegian); SPO2 (abbreviation of oxygen saturation level); intercurrent (interkurrente); cardiopulmonary (kardiopulmonale); central obesity (sentral adipositas); abdomen (abdomen).
Terms to Include in the IVS-CHD poststenotic (post stenotisk) coartation (coorctasjon- misspelling coartation)
of
Patients want to be able to see expanded versions of all the shortened expressions and abbreviations in their EHR. The IVS-CHD currently has limited ability to identify abbreviations and acronyms. It identifies acronyms with several meanings, but does not have context-sensitivity built-in and therefore displays all possible meanings to the patient. The thoracic surgery patients did not seem disturbed by this and were able to identify the correct definition themselves.
3.3. General Comments and Usefulness of the IVS-CHD for Reading EHR Records We recorded the comments concerning reading of EHR surgical notes and the use of the IVS-CHD. Overall, patients regarded the IVS-CHD positively, and thought it would be useful for themselves as well as for non-specialist healthcare personnel. One of the patients said he was expecting a "translation" and would prefer to receive a different patient-oriented version of his discharge summary. Another clearly voiced that "I’m not interested in learning", meaning that he did not want to learn the anatomy, procedure, etc. connected to his own surgical procedure. It was the outcome and future treatment plans that he wanted. Another said that he wanted the definitions to be personalized with examples from his own records. Lastly, one of the patients stressed that terms having the potential to be misunderstood should be defined in the IVS-CHD, such as "negative" test result. This patient said, "at first I thought it meant that the test result was bad", but then was relieved to know that having a negative result is actually a very good thing. This statement confirms a finding in Keselman et al. [2]
4. Discussion Difficulties that patients experience with medical terminology have been studied extensively. The primary tools that have been developed to help alleviate these problems are consumer health vocabularies (CHV) [e.g. 8,9]. CHV's can be used within information systems in a variety of ways, but they are primarily intended to automatically replace "unfriendly" professional terms with terms that are considered more appropriate for patients, thereby changing the text to a simpler version [1]. Our approach is to provide easily accessible definitions to patients reading their EHR rather than "translating" the text to a patient-friendly version. This is similar to SciReader [10], which is another tool to read medical content with instantaneous definitions. In this pilot study, we have taken a step forward to evaluate this type of proposed aid to understanding. Understanding what terms patients want defined and how to write useful consumer-oriented definitions is a problem to be addressed. Future studies will focus on patients' needs for dictionary resources versus translated versions of EHR content.
References
[1] Leroy, G., Eryilmaz, E., Laroya, B.T. Health information text characteristics. AMIA Annu Proc. (2006), 479-83.
[2] Keselman, A., Slaughter, L., Smith, C.A., Kim, H., Divita, G., Browne, et al. Towards consumer-friendly PHRs: patients' experience with reviewing their health records. AMIA Annu Proc. (2007), 399-403.
[3] Deléger, L., Zweigenbaum, P. Paraphrase acquisition from comparable medical corpora of specialized and lay texts. AMIA Annu Proc. (2008), 146-150.
[4] Felleskatalogen, http://www.felleskatalogen.no/
[5] Norsk Elektronisk Legehåndbok (NEL), http://legehandboka.no/
[6] Nylenna, M. (ed.), Medisinsk ordbok, 7th ed., Kunnskapsforlaget, Oslo, Norway, 2009.
[7] Nylenna, M. Hvordan lages en medisinsk ordbok? Tidsskr Nor Legeforen 22 (2009), 2401-2400.
[8] Messai, R., Simonet, M., Bricon-Souf, N., Mousseau, M. Characterizing consumer health terminology in the breast cancer field. Stud Health Technol Inform (2010), 160(Pt 2), 991-4.
[9] Hong, Y., Ehlers, K., Gilles, R., Patrick, T., Zhang, J. A usability study of patient-friendly terminology in an EMR system. Stud Health Technol Inform (2010), 160(Pt 1), 136-140.
[10] Gradie, P.R., Litster, M., Thomas, R., Vyas, J., Schiller, M.R. SciReader enables reading of medical content with instantaneous definitions. BMC Medical Informatics and Decision Making (2011), 11(4).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-43
A Pilot Assessment of Why Patients Choose Not to Participate in Self-Monitoring Oral Anticoagulant Therapy
Morten Algy BONDERUP a, Stine Veje HANGAARD a, Pernille Heyckendorff LILHOLT a, Mette Dencker JOHANSEN a, Ole K HEJLESEN a,b,1
a Department of Health Science and Technology, Aalborg University, Denmark
b Department of Health and Nursing Science, University of Agder, Norway
Abstract. Patients suffering from heart diseases often face lifelong oral anticoagulant therapy. Traditionally, the patient's general practitioner takes care of the treatment. An alternative management scheme is a self-monitoring setup where the patient monitors and manages the oral treatment himself. Despite international evidence of reduced thrombosis risk and death rate among patients enrolled in self-monitoring, a majority of eligible patients deselect this opportunity. Little is known about the causes of this. This study is a pilot assessment of why patients located in the North Denmark Region choose not to participate. The study is based on qualitative interviews with two nurses working in a medical practice and two patients participating in conventional anticoagulant therapy. The results of this study seem to suggest that at least some patients feel they lack information on which to base their decision between self-monitoring and conventional management, and that the knowledge among the health personnel at the medical clinics should be increased. Keywords. anticoagulant therapy, self-monitoring, self-management, INR
1. Introduction
Patients suffering from heart diseases often face lifelong oral anticoagulant therapy (OAT) in the form of vitamin K-antagonists like warfarin because of their increased risk of thrombosis [1]. Traditionally, OAT has been clinic-based: the patient's general practitioner (GP) monitors the treatment effect, measured by the International Normalized Ratio (INR) of prothrombin time, and adjusts the oral vitamin K-antagonist dosage to maintain INR within the therapeutic range. In recent years, a self-monitoring OAT alternative has been introduced where the patient performs the monitoring using a portable INR measuring device and adjusts the dosage according to a dosing algorithm scheme [1]; a setup inspired by the one used in diabetes. The self-monitoring system is widely used in Germany [2] and Switzerland [3]. International studies have shown that self-monitoring setups are associated with a reduction in the risk of thrombosis and death [1] and result in higher self-efficacy and improved treatment-related quality of life [4]. 1
Corresponding Author: Ole K Hejlesen, Dept of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7 D1, DK-9220 Aalborg, Denmark; E-mail:
[email protected].
Nevertheless, there are many patients who deselect participation in self-management OAT [3]. In Denmark more than 80,000 patients receive OAT, but only 7% of them perform self-monitoring [5]. Considering the obvious benefits for the individual patient associated with self-monitoring OAT, the low rate of uptake of this treatment appears paradoxical, but little is known about patients' motives for deselecting self-monitoring OAT. The present paper presents a pilot study assessing why some patients located in the North Denmark Region choose not to participate in self-monitoring OAT.
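For readers unfamiliar with what a "dosing algorithm scheme" of the kind mentioned in the Introduction might look like, the sketch below shows a purely illustrative rule-based weekly dose adjustment driven by a single INR measurement. The therapeutic range, step sizes and example values are invented placeholders; this is not clinical guidance and not the algorithm used in the cited studies.

```python
def adjust_weekly_dose(current_weekly_dose_mg: float, inr: float,
                       target_low: float = 2.0, target_high: float = 3.0) -> float:
    """Illustrative rule-based adjustment of a weekly vitamin K-antagonist dose.
    Thresholds and step sizes are arbitrary examples, NOT medical advice."""
    if inr < target_low:
        return current_weekly_dose_mg * 1.10   # below range: increase ~10%
    if inr > target_high:
        return current_weekly_dose_mg * 0.90   # above range: decrease ~10%
    return current_weekly_dose_mg              # within range: keep current dose

# Example: a self-monitoring patient measures INR 1.7 with a portable device.
print(adjust_weekly_dose(35.0, 1.7))  # -> 38.5 (illustrative numbers only)
```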
2. Methods
2.1 Design
We designed a qualitative interview study with health care professionals (doctors or nurses) responsible for clinic-based OAT and patients on clinic-based OAT who had deselected self-monitoring OAT.
2.2 Respondents
The Thrombosis Centre of Aalborg Hospital (a specialized hospital unit responsible for patients on self-monitoring OAT and patients with severe coagulation disorders) selected a GP clinic from which to recruit patients and health care professional respondents. The participating GP health care professionals were selected to match the following criteria: 1) employment at a GP clinic in the North Denmark Region, 2) employment as a doctor or a nurse responsible for patients in clinic-based OAT at the GP clinic, and 3) previous referral of patients for self-monitoring OAT at the Thrombosis Centre, Aalborg Hospital. The GP health care professionals selected patients from their clinic to participate in our study. The participating patients were selected to match the following criteria: 1) age ≥ 18 years, 2) clinic-based OAT, 3) eligible for and presented with the opportunity of self-monitoring OAT, and 4) deselection of self-monitoring OAT within the previous three months. All participants received oral and written information about the study and gave their consent before the interview. All participants were informed that participation was voluntary and that they could withdraw their consent at any time.
2.3 Interviews
The health care professionals were interviewed in a focus group interview, whereas the patients were interviewed individually. All interviews were semi-structured. The health care professional focus group interview took place in the GP clinic. The patient interviews were intended to take place in the patients' homes, but for practical reasons one was conducted via telephone instead. The interviews were based on an interview guide. Topics covered by the health care professional focus group interview were referral criteria for self-monitoring OAT; patient information regarding self-monitoring OAT; patients' reasons for deselecting self-monitoring OAT; and suggestions for improvements to the self-monitoring OAT setup and information in order to enhance selection. Topics covered by the patient individual interviews
were indication for OAT; information regarding self-monitoring OAT; reasons for deselecting self-monitoring OAT; and suggestions for improvements to the self-monitoring OAT setup and information in order to enhance selection. The interview guide was available to the health care professionals before the interview, but this was not the case for the patient interviews. One researcher (PHL) conducted all interviews, supported by researchers MAB and SVH. All interviews were sound recorded and later transcribed, and the transcripts were reviewed for errors by a researcher other than the one who did the transcription.
2.4 Data Analysis
Emergent themes were identified using meaning condensation in the transcripts, and a coding frame was formed. The coding frame was applied to the transcripts. The dominant themes relevant to the aim of the current paper were found through an iterative process of transcript coding, analysis of coded text, and discussions between the authors. In this analysis of coded text, the main steps were self-understanding, critical common-sense understanding and theoretical understanding. Transcription, coding, and theme identification were done using the open source OpenCode qualitative data analysis program (version 2.1, Department of Public Health and Clinical Medicine, Umeå University, Sweden). None of the investigators had any financial interests in the study.
3. Results
We interviewed two female nurses from one GP clinic and two patients treated in the same GP clinic (one female, 74 years old; the other male, 66 years old).
3.1. Health Care Professional Interviews
We identified four main themes in the health care professional focus group interview: assumptions regarding patients' attitudes to the inconvenience of going to the GP clinic for OAT management; assumptions regarding patients' views on security and safety in clinic-based vs. self-monitoring OAT; guidelines for patients eligible for referral to the Thrombosis Center, Aalborg Hospital; and knowledge about self-monitoring OAT as directed by the Thrombosis Center, Aalborg Hospital. The nurses presumed that patients consider it convenient to have the doctor or nurse manage the treatment, as visits to the clinic for OAT management are combined with visits for other reasons and thus induce no additional visits. The nurses also presumed that patients feel confident and safe in clinic-based OAT. The nurses expressed their lack of knowledge about the criteria for patients to be eligible for self-monitoring OAT, and they agreed that they use their own judgment, which may differ and is not always consistent. The nurses did not seem to offer self-monitoring OAT to patients who fulfil well-established objective criteria for self-monitoring. They also expressed their lack of knowledge about the Thrombosis Centre in Aalborg and about self-monitoring OAT in general.
3.2. Patient Interviews
We identified four main themes in the patient interviews: experience of the seriousness and relevance of the information; feeling of security and safety; views of the responsibilities of patients and the health care system; and experienced and anticipated ability and convenience related to each of the two possible OAT regimens. The patients expressed that they had received only limited information about self-monitoring when it was offered to them at the clinic; in particular, they did not note any serious, encouraging information about possible benefits for them, or specific information about practical issues such as measurement frequency and reimbursement/funding of equipment. They did not consider self-monitoring OAT a serious, relevant offer, and therefore they felt that they had an insecure basis for choosing self-monitoring OAT as an alternative to clinic-based OAT. Both patients were satisfied with their current clinic-based OAT management, and they did not consider it inconvenient to have to come to the clinic, because the clinic was located close to their homes and they had frequent visits to the clinic for reasons other than OAT. One of the patients expressed a distinct feeling of security and safety due to the personal contact with the nurse or the doctor during the OAT management visits, and although she expressed confidence in currently being able to perform self-monitoring OAT, she was afraid that her physical and cognitive ability would prevent her from continuing self-monitoring OAT for more than a year. One of the patients also expressed the view that the task of OAT management is a responsibility of the health care system and not something that can be put on a single individual.
4. Discussion
The results of the interviews showed that there may be several causes of patients' deselection of self-monitoring OAT, despite the obvious advantages of the setup over the clinic-based OAT setup [1, 4]. One major reason for deselection of self-monitoring OAT, suggested by the nurses and confirmed by the patients, is the convenience, as experienced by the patients, of clinic-based OAT: no additional visits for OAT management are induced, and the geographical proximity of the clinic allows frequent visits without inconvenience. This is also reported by Fritschi and co-workers as a reason for discontinuing self-monitoring OAT among patients who first started on self-monitoring OAT but switched to clinic-based OAT [3]. The nurses' lack of knowledge about whom to refer to the treatment, which benefits can be expected, and how the self-monitoring setup is organized by the Thrombosis Center may be reflected in the patients' experiences of vague information and no clear recommendation of self-monitoring OAT over clinic-based OAT. Lack of clear criteria from which to refer the patient is confirmed to be a major obstacle for self-monitoring OAT, as reported in a Cochrane meta-analysis [1], but criteria have been published and are widely accepted [6]. Such criteria, and a detailed description of the procedure, should be made available for, and used by, the employees at the GP clinics. They should contain guidelines for which patients should be offered self-monitoring, when and how self-monitoring is offered, and which information is to be given when self-monitoring is offered.
Assuming that the patients' apparent lack of knowledge is a result of the observed lack of knowledge at the GP clinic, patients' lack of knowledge can be addressed by increasing the knowledge level of the GP nurses and by offering detailed referral guidelines to the GP clinic employees. Clearly, the data material in our pilot study is not large enough to justify any conclusions. However, if the results indicated in our pilot study are confirmed in larger studies, the initiatives mentioned above are clearly indicated to increase the number of patients who are referred to self-monitoring. Increasing and supporting health care professionals' knowledge about the organization of self-monitoring OAT and patients' eligibility for it seem to be major steps. In addition to the need for larger studies of why patients choose not to participate in self-monitoring oral anticoagulant therapy, there might also be a need for broader studies of the technology of anticoagulant therapy. Such studies may not only be used to improve the technology, but may also contribute valuable information that may be relevant when trying to decrease the number of patients who, despite being qualified for self-monitoring oral anticoagulant therapy, choose not to participate.
References
[1] Garcia-Alamino JM, Ward AM, Alonso-Coello P, et al., Self-monitoring and self-management of oral anticoagulation, Cochrane Database Syst. Rev. (2010).
[2] Sawicki PT, A structured teaching and self-management program for patients receiving oral anticoagulation: a randomized controlled trial. Working Group for the Study of Patient Self-Management of Oral Anticoagulation, JAMA 281 (1999), 145-150.
[3] Fritschi J, Raddatz-Muller P, Schmid P, Wuillemin WA, Patient self-management of long-term oral anticoagulation in Switzerland, Swiss Med. Wkly. 137 (2007), 252-258.
[4] McCahon D, Murray ET, Murray K, Holder RL, Fitzmaurice DA, Does self-management of oral anticoagulation therapy improve quality of life and anxiety? Fam. Pract. 28 (2011), 134-140.
[5] Jensen B. / Hjerteforeningen, Banebrydende medicin undervejs (2009). Downloaded from http://www.hjerteforeningen.dk/index.php?pageid=334&newsid=79 in December 2010.
[6] Fitzmaurice DA, Gardiner C, Kitchen S, Mackie I, Murray ET, Machin SJ; British Society of Haematology Taskforce for Haemostasis and Thrombosis, An evidence-based review and guidelines for patient self-testing and management of oral anticoagulation, Br. J. Haematol. 131 (2005), 156-165.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-48
Mobile Peer Support in Diabetes
Taridzo CHOMUTARE a,b,1, Eirik ÅRSAND a,b, Gunnar HARTVIGSEN a,b
a Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, Norway
b Department of Computer Science, University of Tromsø, Norway
Abstract. As in other domains, there has been unprecedented growth in diabetes-related social media in the past decade. Although there is not yet enough evidence for the clinical benefits of patient-to-patient dialogue using emergent social media, patient empowerment through easier access to information has been proven to foster healthy lifestyles, and to delay or even prevent the progression of secondary illnesses. In the design of diabetes-related social media, we need access to personal health data for modelling the core disease-related characteristics of the user. We discuss design aspects of mobile peer support, including acquisition of personal health data, and design artefacts for a healthcare recommender system. We also explore mentoring models as a tool for managing the transient relationships among peers with diabetes. Intermediate results suggest that acquiring health data for modelling patients' health status is feasible for implementing a personalized and mobile peer-support system. Keywords. Social media, personalization, mHealth, diabetes self-management
1. Introduction To improve self-management for a large population with lifestyle-related diseases such as diabetes, it is ideal that users can always access health information [1] in a ubiquitous manner, quickly and easily. Mobile phones are now highly pervasive and becoming more powerful with various new technologies, increasing their potential as universal devices for chronic disease self-management. However, the information and support services must be personalized and tailored to avoid cognitive overload, and to enhance user experience. Several techniques have been developed for filtering and personalizing Internet information. It has been shown repeatedly that user models can be used in recommender systems [9] to personalize web information, as in popular shopping websites. We are increasingly getting used to seeing recommender systems in use, for example, in e-business applications such as online shops (e.g. Amazon) and entertainment systems (e.g. Pandora Internet Radio). There are many trade-offs that need to be made, for instance regarding performance versus precision, and recommendations versus predictions (which are more demanding). Knowing that even straightforward matching of genre and topic sometimes yields unexpected recommendations, we must be particularly aware of trust and privacy issues in the context of health-related systems. One criticism of much of the literature is that too 1
Corresponding author: Taridzo Chomutare, Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, 9038 Tromsø, Norway; E-mail:
[email protected]
little attention has been paid to the practicalities of user profile management with regard to privacy concerns in healthcare, especially given integration with social media [10]. Recent research confirms results from earlier studies indicating that users' disclosure of personal data depends on how sensitive they perceive the data to be, and how much they trust or stand to benefit from disclosure and use of the system [6]. This paper discusses the design of mobile peer support in a self-help system for people with diabetes. Two main issues are addressed. The first issue relates to improvement of the user profile management model from the European Telecommunications Standards Institute (ETSI) (technical specification TS 102 747), through consideration of the health status use case. This concerted effort to model core disease-oriented properties of the user represents a paradigm shift. The new paradigm emphasizes the relevance of recommendations to the health status of the patient, and not necessarily recommendations that reflect the user's conscious interests. Secondly, the ideas for fostering social engagement with family and friends put forward by Morris et al. [11] are here extended to establishing temporary, conditional friendships with strangers. New ideas about methods for exploring ad hoc social networks involving short-term relationships with peers are investigated. Mentoring models [5] are explored as potentially useful tools for motivating and sustaining participation for both the patient with a specific health challenge (protégé) and her/his mentor. The "Few Touch Application" (FTA) [3], originally developed as a self-help system for Type 2 diabetes, forms the background for the presented work.
2. Methods
The presented system is designed to create a dynamic user model that can reason about a lifelong patient, and is used in personalizing health information and interaction with peers, inspired by the "Patients-Like-Me" [12] concept. The research methods for designing the FTA platform have been multidisciplinary, with an engineering approach in the design of the mobile application, user profile and recommender frameworks. Focus group meetings were used to facilitate participatory design methods: brainstorming, paper prototyping, interviews, questionnaires, and usability testing. A scheduled clinical trial will provide evidence of the impact that peer interactions in social networking websites have on clinical outcomes. The FTA platform was designed to collect the following personal information:
• personal aims for food habits and physical activity [3]
• blood glucose values using a wireless glucometer system [14]
• dietary habits information [4]
• physical activity using both a step counter [2] and manual registration
• weight monitoring using a wireless weight scale (planned FTA add-on)
This information forms part of the critical health indicators for both Type 1 and Type 2 diabetes, and comprises more or less mandatory parameters that users monitor on a daily basis. The health data are modelled and encapsulated in the user profile, which is applied to Internet social media content using a recommender system.
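As a minimal sketch of how such collected parameters could be encapsulated in a user profile, the following dataclass groups FTA-style measurements into a single health-status fragment. The class and field names are assumptions made for illustration only, not the actual FTA data model.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DiabetesProfile:
    """Illustrative health-status fragment of a user profile (assumed fields)."""
    user_alias: str
    blood_glucose_mmol_l: list[float] = field(default_factory=list)  # wireless glucometer
    steps_per_day: list[int] = field(default_factory=list)           # step counter
    weight_kg: Optional[float] = None                                 # wireless scale (planned)
    dietary_entries: list[str] = field(default_factory=list)         # manual registrations
    personal_aims: dict[str, str] = field(default_factory=dict)      # food/activity goals

# Example usage: sensor readings and manual entries accumulate in the profile.
profile = DiabetesProfile(user_alias="peer42")
profile.blood_glucose_mmol_l.append(6.4)
profile.steps_per_day.append(7800)
profile.personal_aims["activity"] = "30 min walking per day"
```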
3. Results
3.1. User Profile Management and Peer Support
The first task involved in managing a user profile is often creating the profile, followed by making updates. User profiles have traditionally contained information about the user (alias, age, gender, language, etc.), preferences (layout, navigation, etc.), usage behaviour (clicked links, user-created tags, content rating, etc.), and more recently social data (information about friends, their usage behaviour, folksonomy, etc.), context of use (at work/home, driving/shopping, weather, in a meeting, etc.), and presence information (available for chat, away, do not disturb, etc.). This study adds validated diabetes health data (blood glucose, HbA1c, weight, etc.). These data are a fragment of the health status, the new dimension in managing user profiles for healthcare use. These diabetes-related data are pivotal in the construction of recommendations regarding relevant peers and communities.
3.2. The Mobility Aspect and Recommender Systems
A salient aspect to consider when designing mobile applications is the rapid change in the context of use. Traditional recommender systems do not consider the complex contexts of the user environment. Context modelling techniques are still immature, and their use in recommender systems is still relatively undeveloped. In this work, we consider a few high-level contexts, mentioned in the preceding section.
3.3. Healthcare Recommender System
Figure 1 illustrates the design of the healthcare recommender system, where the recommender engine is based on a hybrid algorithm comprising both collaborative filtering and content-based approaches. In the figure, the context of use is related to the user model and fed to the recommender engine, where this knowledge about the user is mapped onto Internet social media content and user profiles. The output is personalized information, presented as recommendations of vital content and predictions about potentially interesting peers or communities.
Figure 1. Recommender framework for the mobile peer-support system.
The presentation of recommendations regarding potentially relevant content is based on aggregated ratings by the community. To increase the usefulness of recommended content, users are requested to rate and tag content, and these new ratings are fed back to the recommender system using learning algorithms. The discreet and easy rating mechanism, suitable for small screens, uses the Facebook style of "Like". Quality-assuring user-generated content and processing it into actionable knowledge is vital. Generally, current health-related social media is not proactive about quality-assuring user contributions. One drawback of much of the media, albeit in the spirit of patient safety, is the reliance on manual moderation of user-generated content, with only a few options for automation tools. As user-generated content increases, manual moderation will become increasingly impractical, but natural language processing research offers a promising alternative.
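The following is a minimal sketch of the hybrid idea behind the framework of Figure 1: a content-based score from the similarity between health profiles is blended with a collaborative signal from community "Like" ratings. The weighting, the similarity measure and all names are illustrative assumptions, not the deployed algorithm.

```python
import math

def profile_similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity over shared numeric profile features (content-based part)."""
    keys = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(user_profile: dict[str, float],
                 item_profile: dict[str, float],
                 like_count: int, view_count: int,
                 alpha: float = 0.7) -> float:
    """Blend profile similarity with a community 'Like' ratio (collaborative part)."""
    likes = like_count / view_count if view_count else 0.0
    return alpha * profile_similarity(user_profile, item_profile) + (1 - alpha) * likes

# Example: rank a community post for a user whose HbA1c and activity level are known.
user = {"hba1c": 7.2, "steps_per_day": 6000.0}
post = {"hba1c": 7.5, "steps_per_day": 5500.0}
print(round(hybrid_score(user, post, like_count=42, view_count=300), 3))
```

In practice the "Like" feedback loop described above would update the collaborative term as new ratings arrive, while the content-based term follows the health-status fragment of the user profile.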
4. Discussion
Defining a sufficiently comprehensive representation of a patient is not an easy task, because many variables affect the person's total health status. Innovative representations must be extensible and able to abstract all the relevant health aspects. Intermediate results confirm the feasibility of acquiring and modelling diabetes-related health data as input to a mobile peer-support system. One important outcome emerging from this work is that automatically acquiring health data significantly reduces threats to data validity. Automatic data acquisition using sensors overcomes two challenges faced in healthcare social media. The first challenge is that of data validity, where users register symptoms, medications or outcomes manually [7; 12]. Some healthcare social networks allow users to manually register and see each other's health data, but the value of such data is degraded by legitimate questions regarding its validity. The other challenge that is potentially addressed by automatic data acquisition is that of motivating users to provide their health data manually and frequently over substantial periods of time. Using recommender systems is one of the more practical ways of implementing intelligent web applications. In this work, decoupling the health status and context of use from the user model has scaling advantages; the respective modelling complexities are transparent to the management of the core profile. These design artefacts for applying diabetes data to recommender systems can be generalized to chronic health information systems, making the artefacts appealing and relevant to a wider audience. Recent research has gone as far as proposing full integration of recommender systems with Personal Health Records (PHR) [8; 13]. This approach is promising, but it is still largely immature and suffers from inherent information redundancy and severe security risks. The concepts explored in this work for managing pseudonymised, transient and conditional relationships are, however, rather challenging. Mentoring models [5] are promising as instruments for managing such relationships. Mentoring relationships are informal and allow a user to mentor, or be mentored by, another user who has demonstrated consistent control over a particular health aspect such as weight management. Mentoring relationships may be suitable for maintaining high morale in the community and for sustaining the motivation to succeed with a specific health issue. Further work is needed to elaborate social [11] and psychological theories in order to design algorithms for managing relationship dynamics and motivation.
5. Conclusion
Coping with the substantial demands for lifestyle changes among diabetes patients requires the right information and sound motivational tools. Given that patients possess appropriate tools for cooperation, sharing everyday experiences with similarly profiled patients may be more effective for enhancing self-management and increasing self-efficacy than relying on generic information found in books and on the Internet. Although this paper was not designed specifically to evaluate factors related to the relevance or usefulness of peer recommendations, the new diabetes-related dimensions that are addressed add to a growing body of literature on social media and its application in the healthcare domain. The authors foresee great opportunities in using mobile phones as a means for peer support to enhance the quality of life of people with diabetes and other chronic diseases. Acknowledgements. This work was supported in part by the Research Programme for Telemedicine, Helse Nord RHF, Norway, and the Centre for Research-based Innovation, Tromsø Telemedicine Lab (TTL), Norwegian Research Council Grant No. 174934.
References
[1] Alpay, L., Verhoef, J., Toussaint, P. and Zwetsloot-Schonk, B., What makes an "informed patient"? The impact of contextualization on the search for health information on the Internet, Stud Health Technol Inform 124 (2006), 913-919.
[2] Arsand, E., Olsen, O.A., Varmedal, R., Mortensen, W. and Hartvigsen, G., A system for monitoring physical activity data among people with type 2 diabetes, Stud Health Technol Inform 136 (2008), 113-118.
[3] Arsand, E., Tatara, N., Ostengen, G. and Hartvigsen, G., Mobile phone-based self-management tools for type 2 diabetes: the few touch application, J Diabetes Sci Technol 4 (2010), 328-336.
[4] Arsand, E., Tufano, J.T., Ralston, J.D. and Hjortdahl, P., Designing mobile dietary management support technologies for people with diabetes, J Telemed Telecare 14 (2008), 329-332.
[5] Buell, C., Models of Mentoring in Communication, Communication Education 53 (2004), 1.
[6] Cunningham, S.J., Masoodian, M. and Adams, A., Privacy issues for online personal photograph collections, J. Theor. Appl. Electron. Commer. Res. 5 (2010), 26-40.
[7] Domingo, M.C., Managing Healthcare through Social Networks, Computer 43 (2010), 20-25.
[8] Hammer, S., Kim, J. and Andre, E., MED-StyleR: METABO diabetes-lifestyle recommender, in: Proceedings of the fourth ACM conference on Recommender systems, Barcelona, Spain, 2010, pp. 285-288.
[9] Heitmann, B., Kim, J.G., Passant, A., Hayes, C. and Kim, H.-G., An architecture for privacy-enabled user profile portability on the web of data, in: Proceedings of the 1st International Workshop on Information Heterogeneity and Fusion in Recommender Systems, ACM, Barcelona, Spain, 2010, pp. 16-23.
[10] Kristoffersen, S., Privacy Management in a Mobile Setting, in: Proceedings of the 2009 Computation World: Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns, IEEE Computer Society, 2009.
[11] Morris, M.E., Markopoulos, P., De Ruyter, B., Mackay, W., Lundell, J., Dishongh, T. and Needham, B., Fostering Social Engagement and Self-Efficacy in Later Life: Studies with Ubiquitous Computing, in: Awareness Systems, Springer London, 2009, pp. 335-349.
[12] Wicks, P., Massagli, M., Frost, J., Brownstein, C., Okun, S., Vaughan, T., Bradley, R. and Heywood, J., Sharing health data for better outcomes on PatientsLikeMe, J Med Internet Res 12 (2010), e19.
[13] Wiesner, M. and Pfeifer, D., Adapting recommender systems to the requirements of personal health record systems, in: Proceedings of the 1st ACM International Health Informatics Symposium, ACM, Arlington, Virginia, USA, 2010, pp. 410-414.
[14] Årsand, E., Andersson, N., Hartvigsen, G., No-Touch Wireless Transfer of Blood Glucose Sensor Data, COGnitive systems with Interactive Sensors (2007).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-53
Evolution of Health Web certification through the HONcode experience
Célia BOYER a, Vincent BAUJARD a, Antoine GEISSBUHLER a,b
a Health On the Net Foundation
b Service Cyber Santé et Télémédecine, Hôpitaux Universitaires de Genève, Geneva, Switzerland
Abstract. Today, the Web is a medium with increasing pervasiveness around the world. Its use is constantly growing, and the medical field is no exception. With this large amount of information, the problem is no longer finding information but assessing the credibility of the publishers as well as the relevance and accuracy of the documents retrieved from the web. This problem is particularly relevant in the medical area, which has a direct impact on the wellbeing of citizens, and in the Web 2.0 context, where information publishing is easier than ever. To address the quality of the medical Internet, the HONcode certification proposed by the Health On the Net Foundation (HON) is certainly the most successful initiative. The aims of this paper are to present certification activity through the HONcode experience and to show that certification is more complex than a simple code of conduct. We therefore first present the HONcode, its application and its current evolution. Following that, we give some quantitative results and describe how the end user can access the certified information. Keywords. HONcode certification, Trustworthiness, Transparency, Health Web, Internet
1. Introduction
In recent years the ease of publishing on the Internet has been further increased by the advent of the Web 2.0 phenomenon. Thus, despite the wealth of content available, the question is not just about finding information but also whether the information provided is credible. The problem is particularly acute in the medical information domain, which has a direct impact on the health of the public [1]. In response to the lack of transparency of health information, many theoretical and practical initiatives have marked the short history of the Web. The most significant trends that have been applied to the quality of information on the Web (medical or not) are: the selection of webpages (e.g. Yahoo), self-regulation (e.g. Discern [2]), the popularity of webpages (e.g. PageRank [3]), the certification of websites (e.g. URAC [4], HONcode [5]), education of the user (e.g. OMNI [6]) and the collaboration of users.
2. Material and Methods
The implementation of the HONcode (see Table 1) [7] (third-party certification), initiated in 1995, began in 1996; Discern (self-evaluation) followed in 1998, WebMedica in 1998 (certification only for Spanish), Hi-Ethics (third-party certification) in 2000, the eHealth Code of Ethics (self-evaluation) in 2001, URAC in 2001 (very detailed but expensive), the European Guidelines in 2002 (equivalent to the HONcode principles of HON, which participated in their development) and AFGIS in 2003 (dedicated to German sites). While some initiatives have disappeared and others do not have many candidates, the HONcode has been translated into 35 languages, had over 7400 sites certified in 102 countries by the end of 2010, and was selected in 2007 by France to be the official certification body for French health websites. HONcode certification [7] is a voluntary act on the part of the site applicant; the first step is submitting the application form on the HON website. A pre-assessment is proposed to the webmaster to identify the missing principles. Once the certification request is submitted, HON experts evaluate the website. Each ethical principle that is not complied with, and that should be addressed in the content of the webpages, is indicated. Once the changes have been made, a unique seal of certification is issued. All HONcode sites are certified for 1 year and are reviewed annually. If a website no longer respects the HONcode, the webmaster receives a warning and, if the required changes are not made, the site may lose its certification. In addition, if a user considers that a web site does not respect one or more of the HONcode principles while displaying the HONcode seal, he/she can report the violation using the complaint system accessible via the HONcode certificate linked to that website. The complaint is treated within 2 weeks by members of the HONcode team. If it is justified, the webmaster of the site is asked to make modifications. As can be seen, the certification process is interactive and provides a constructive contact between HON and the webmaster. Indeed, the aim is to bring sites up to a certain level of transparency. In keeping with this aim, some additions have been made to address the peculiarities of Web 2.0 [7]: collaborative platforms should respect, in addition to the current guidelines, the ones added specifically for Web 2.0. In view of the dynamics of the Web, the certification is in continuous expansion. Initiatives based on algorithmic recognition of criteria, whether rule-based or using automatic learning, have been presented to give an indication of the ethics of health Web pages. While the supervised learning model of Aphinyanaphongs [8] is based on static examples of good and bad pages and is therefore domain-dependent, the HON approach is more generic since it is based on the model of the HONcode [9]. This last approach offers good results, with 78% accuracy over all principles, and its integration into HON's daily activity is in progress. Text retrieval was also used in the creation of WRAPIN [10], a tool that helps determine the reliability of documents by checking the ideas they contain against established benchmarks, eventually enabling users to determine the relevance of a given document from a page of search results. Currently, visual information retrieval is being developed, which is especially important for doctors working with images, such as radiologists. The retrieval of images can be done in two ways.
The first is done via simple text-based queries associated with an image. The second is performed via requests based on matching exact database fields. The development and use of the second approach for the medical field is the subject of further research. One ongoing research effort in the field is the 4-year EU project "KHRESMOI" [11], started in September 2010, which aims to create a biomedical search engine targeted
to the needs of lay populations, medical doctors and specifically radiologists. KHRESMOI will achieve effective automated extraction from biomedical documents, including improvements using crowdsourcing and active learning; automated estimation of the level of trust and of target user literacy; automated analysis and indexing of medical images in 2D, 3D and 4D; linking of unstructured and semi-structured information extracted from texts and images to structured information in knowledge databases; support for cross-language search; and an adaptive user interface to assist in formulating queries. The sources for information retrieval are: books, journals, web sites, images, and semantic data. It will also utilize language resources to allow translation and make the results available to the whole EU population. The expected impact on target users is the fast availability of required trustworthy information.
Figure 1. Dynamic HONcode logo following the current status of the HONcode certification process.
Table 1. Presentation of the HONcode Principles (summarized)
1. Authoritative: indicate the qualifications of the authors
2. Complementarity: information should support, not replace, the doctor-patient relationship; the mission and the audience are explicated
3. Privacy: respect the privacy and confidentiality of personal data submitted to the site by the visitor
4. Attribution: cite the source(s) of published information, and date medical and health pages
5. Justifiability: the site must back up claims relating to benefits and performance
6. Transparency: accessible presentation, accurate email contact
7. Financial disclosure: identify funding sources
8. Advertising policy: clearly distinguish advertising from editorial content
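As an illustration of the kind of automated recognition of HONcode criteria mentioned in Section 2 (rule-based or machine-learned detection of principles such as those in Table 1), the sketch below trains a simple text classifier to flag pages that appear to state a funding/financial-disclosure policy. The training snippets and labels are invented toy data, and this is not HON's production system; it only shows the general supervised-learning idea.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples: 1 = page text states a funding disclosure, 0 = it does not.
pages = [
    "This site is funded by the university hospital and accepts no advertising.",
    "Our funding sources are listed on the about page; sponsors are disclosed.",
    "Buy our miracle supplement today, limited offer!",
    "Symptoms of influenza include fever, cough and fatigue.",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams, fed into a logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(pages, labels)

# Probability that a new page satisfies the 'financial disclosure' principle.
new_page = ["The portal is financed by the Ministry of Health; funding is disclosed."]
print(clf.predict_proba(new_page)[0][1])
```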
3. Results of the Certification and Access to the Final User
Currently the database represents more than 10 million pages indexed in Google. 52% of the certified sites are in English and about 11% in French, followed by sites in Spanish, Italian, German and Portuguese. For each evaluated site, the following information is collected: 1/ the HONcode principles respected, 2/ text extracts corresponding to the 8 principles, 3/ the URLs of these text extracts, 4/ MeSH keywords [12] selected from the site, and 5/ a more general site label. In early 1996, a simple seal was introduced, allowing users to distinguish a certified site from a non-certified one. However, the HONcode seal quickly became an additional safeguard for the Internet by requiring the sites to link the seal to the unique HONcode certificate on the HON site. The idea is to limit misuse of the HONcode seal, as the final verification is done on the HON site. The new basic principle is that custody by HON ultimately enables control of the display of the HONcode seal depending on the status of the web site's certification (the unique image generated for a given site is hosted at the HON web site; Figure 1). Google is the search engine most used on the Internet; it can become the perfect tool for the promotion and awareness of the quality of medical information on the Internet when a user installs the HON Toolbar. The HON Toolbar [13] is the most
integrated way to access HONcode-certified sites. It is composed of 3 features: 1/ identification of HONcode membership in real time while browsing the Web; 2/ the search tool, HONcodeHunt, exclusively dedicated to certified HONcode sites and accessible from the search bar of the browser; and 3/ the highlighting of certified sites in popular search tools such as Google, Yahoo, MedlinePlus and Wikipedia. Thirteen years after the HONcode implementation, HON has looked for a way to evaluate the impact of the certification. To measure this impact, a comparative and longitudinal study was conducted in 2008 in collaboration with the French National Health Authority (HAS). The first study compared the HONcode compliance of websites certified for 6 to 8 months (A) with the compliance of non-certified French health websites that had never asked for certification and were taken as a control sample (B). The second compared the compliance with the eight HONcode principles of health websites before (T0) and 6-8 months after certification (A). A second analysis was made to observe website conformity to HONcode Principles 1, 4, 5 and 8 (see Table 1). Certified websites were ordered by their publisher's type, to allow the building of a comparable sample of control websites. The use of various sources reduced distortions in the study of non-certified control websites. Only 0.6% of health websites not asking for HONcode certification (control group B) respect the eight HONcode ethical standards, vs. 89% of certified websites (A). Regarding principles 1, 4, 5 and 8, 1.2% of B respect these principles vs. 92% of A.
4. Conclusion and Perspectives
We have aimed to show the many facets of the HONcode through its history, its evolution, its implementation and its use. During the past 15 years, HON has sought to promote trustworthy medical information on the Web on a global scale. To meet the quantitative requirements of the Web, human expertise is assisted by many automated systems for a systematic, reliable and faster evaluation of websites. It is very important to expand distribution channels to reach as many potential users as possible. Thus the realization of collaborations with major players such as the National Library of Medicine (USA) or Google, to share our information, our philosophy and our vision, is essential. The approach led by HON is comprehensive and covers more than 35 languages around the world. At the same time, HON aims to respond to local needs, the variety of languages, cultural differences and different regulations. The creation of local branches in different parts of the world, such as those initiated in Africa, Italy and Spain, should enable us to think locally and act globally to improve the quality of medical information on the Internet. France is the pioneer in quality eHealth, having legislated on the issue of the quality of health sites. A similar approach in other European countries would be welcomed to continue promoting the quality of medical information on the Internet for the benefit of Internet users.
References
[1] Fox, S., E-patients with a Disability or Chronic Disease, Pew Internet & American Life Project, 2007.
[2] Charnock, D., The DISCERN Handbook, Radcliffe Medical Press, 1998.
[3] Borges, H.M., Cervi, P.T., Álvarez de Arcaya, G., Guardado, R., Rabaza, J., Sosa, Rate of compliance with the HON code of conduct versus number of inbound links as quality markers of pediatric web sites, in: Proceedings of the Sixth World Congress on the Internet in Medicine, Udine, Italy, 29 November - 2 December 2001.
[4] URAC: http://www.urac.org/MMandQualityChasm.asp, Nov 2008.
[5] Boyer, C., Baujard, V. and Scherrer, J.R., HONcode: a standard to improve the quality of medical/health information on the internet and HON's 5th survey on the use of internet for medical and health purposes, in: 6th Internet World Congress for Biomedical Sciences (INABIS 2000), 1999.
[6] OMNI: omni.ac.uk, Dec 2008.
[7] HONcode Guidelines: http://www.hon.ch/HONcode/Guidelines/guidelines.html, May 2010.
[8] Aphinyanaphongs, Y., Aliferis, C., Text categorization models for identifying unproven cancer treatments on the web, Stud Health Technol Inform, 2007.
[9] Gaudinat, A., Grabar, N., Boyer, C., Automatic retrieval of web pages with standards of ethics and trustworthiness within a medical portal: What a name page tells us, 11th Conference on Artificial Intelligence in Medicine (AIME 07), 7-11 July 2007, Amsterdam, The Netherlands.
[10] Joubert, M., Gaudinat, A., Boyer, C., Geissbuhler, A., Fieschi, M., HON Foundation Council Members, WRAPIN: a Tool for Patient Empowerment within EHR, Stud Health Technol Inform 129 (2007), 147-151.
[11] Hanbury, A., Boyer, C., Gschwandtner, M., Müller, H., KHRESMOI: towards a multi-lingual search and access system for biomedical information, Medetel 2011.
[12] National Library of Medicine, Bethesda, Maryland, Medical Subject Headings, 2001. http://www.nlm.nih.gov/mesh/meshhome.html
[13] HON Toolbar: http://www.hon.ch/HONcode/Plugin/Plugins.html
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-58
Personal Health Data: Patient Consent in Information Age
Dragana MARTINOVIC a,1, Victor RALEVICH b, Milan PETKOVIC c
a University of Windsor, Windsor, Ontario, Canada
b Sheridan Institute, Oakville, Ontario, Canada
c Technology University, Eindhoven, Netherlands
Abstract. In this paper we report on findings related to the treatment of patient consent in various circumstances and geographic domains; explore the transfer of health data between custodians and geo-political entities; and emphasize the importance of educating the general public about issues related to handling health data. A specific set of questions about consent/legislation and related issues in Canada, the USA and the EU is addressed in an attempt to answer them systematically. This comparison identifies similarities and differences along a set of dimensions. Keywords. Patient consent, data transfer, EHR.
1. Introduction
Both the literature and everyday experience confirm that the Internet has affected practices in healthcare. Especially chronic patients, and those who have recently been confronted with a health crisis, are keen on using the Internet to search for information about treatments, risks, alternative cures, medical institutions, healthy lifestyles, and more. Recent statistics indicate that 75%-80% of Internet users in the US seek out health-related information [1]. Patients across the world see health information technology as beneficial for scheduling visits, communicating with doctors, receiving results of diagnostic tests, and sending the results from home monitoring instruments to doctors' offices by email [2]. Among Canadian physicians, the use of handheld personal computing devices to check prescriptions, track patients, check dosages or use decision support tools is on the rise [3]. Wireless access and mobile devices, with their portability, immediacy of service, and convenience, may provide for ubiquitous and participatory healthcare. Mobile phones may be used for tracking infectious diseases, health education and promotion, sending warnings and alerts (e.g., to remind patients to take medications or book appointments), and for obtaining educated opinions from medical experts [4]. Recent European Commission reports [5] state that 66% of European physicians use computers for consultations; among general practices, 80% electronically store administrative patient data, 92% electronically store medical data on diagnoses and medication, while 35% electronically store radiological images. In 1
Corresponding Author: Dragana Martinovic, Faculty of Education, University of Windsor, 401 Sunset Ave., Windsor, Ontario, N9B 3P4, Canada; E-mail:
[email protected]
the Netherlands, for example, 71% of physicians provide "e-prescribing." However, it is of concern that, with this increased reliance on computer applications and the digitalization of health records, it becomes simpler to collect, store, and search electronic health data, thereby endangering patients' privacy. In Canada, both patients and physicians are unaware that personally identifiable prescription data travel from pharmacy computers via commercial networks to pharmaceutical companies ([6], [7]). Another issue arises from the increase in health services provided to non-residents. In this domain, the US is the biggest net exporter of medical services (supplier of medical services to non-residents travelling for medical reasons), while residents of Canada acquire over $300 million of medical services abroad. While health-related travel is on the rise, there are not enough data on the security threats of such trade.
2. Method
Methodologically, this paper gives a comparative overview of patient consent approaches in three geographic areas: Canada, the US and the EU; and raises concerns about technological solutions when patient health data cross legislative borders. In order to introduce a framework for the privacy protection of health-related sensitive personal data, it is necessary to first specify: what is individually identifiable personal health information, and how can personal health-related information be protected? What is individually identifiable personal health information? Usually interpreted in very broad terms, individually identifiable personal information includes any demographic, contact, behavioral, and performance data in any media format [8]. Health data of any individual are most often accompanied by other types of data, i.e., demographics, historical data, family records, etc. Therefore, in cases when health data are compromised, the privacy of other personal information is also affected. How can personal health-related information be protected? This is primarily done through a rigorous legislative process. Because health data need to be comprehensive and easily related to other types of personal data, it is not feasible to protect the privacy of the individual just by reducing the amount of stored sensitive information to the necessary minimum and raising awareness of the threat of identity theft and other compromising actions. Errors and security breaches take their toll too. In 2009, the Privacy Commissioner of Ontario, Canada, issued her sixth order under PHIPA after records containing personal health information were found scattered on an Ottawa street outside a medical centre [9]. A recent security breach in Ontario involved the loss of a memory key "containing the health information of almost 84,000 patients who attended H1N1 flu vaccination clinics in the Durham Region" [10]. This prompted the Ontario Information and Privacy Commissioner to ask that the health sector remove any personal health information from mobile devices unless it is encrypted [11]. While the governments take such cases seriously, the question remains: how can personal health information be protected when it is in transfer between parties under different legislation, between healthcare institutions of different kinds, or between countries? What is the status of patient consent in the information age?
3. Result: E-Health Legislation and Related Technologies - Patient Consent
In the e-health legislation of Canada, the US and the EU, the expectation of the individual as well as his/her consent for the use of personal health records are taken into account. In most cases, the consent determines access to personal health information necessary for consultations and transfers for consulting clinicians, referring clinicians, transferring facilities, receiving facilities and consumers. The patient consent requirement is further derived from different regulations such as the US Health Insurance Portability and Accountability Act [12], Directive 95/46/EC of the European Parliament and of the Council, and the Ontario Personal Health Information Protection Act [13]. According to these Acts, an individual exercises different levels of control over the collection, access, use and disclosure of his/her health information. This means that in some cases the patient's consent must be obtained before his/her health information may be accessed, used or shared.
3.1. Summary of Main Points Related to Patient Consent Based on Three Privacy Rules
3.1.1. HIPAA (US)
• Written patient consent must be obtained by the covered entities for the use and disclosure of identifiable health information for purposes other than those permitted by the privacy rule.
• The individual has the right to request restrictions on the use and disclosure of personal health information, and to request that communication be confidential.
• Each covered entity must provide a notice of its privacy practices to the patient.
3.1.2. Directive 95/46/EC (EU)
• Consent is any freely given, specific and informed indication of the data subject's wishes by which s/he agrees to the processing of personal data.
• The EU Directive prescribes that the processing of personal data must be carried out with the consent of the data subject or be necessary under other conditions.
3.1.3. PHIPA (Ontario, Canada)
• Consent to the collection, use or disclosure of personal health information about an individual may be express or implied.
• The individual may withdraw the consent and has the right to access his/her own health records.
• Personal health information may change custodian if the individual is informed before his/her records are transferred or, if that is not reasonably possible, as soon as possible after the transfer.
4. Discussion: Rigidity of the System and Various Threats As an attempt to make a seamless paradigm shift, many concepts in a networked environment are designed to mimic old ways of doing things. For example, there is a trend to replace the paper-based patient consent forms with their exact electronic versions (e.g. IHE Basic Patient Privacy Consent profile). In that way, consent forms remain static, natural language-based, and standard (as opposed to being customized).
It can be concluded that, in such cases, the digitization of health records, including the consent forms, does not take full advantage of the technological medium. In Canada, for instance, the process of digitizing consent forms is in its initial stage, as electronic consent forms have to be printed and mailed or submitted in person to the healthcare provider. Alternatively, the patient is presented with the consent form at the time of registration or consultation in a healthcare institution. However, Canada Health Infoway has announced the introduction of EHRs where consent will be part of the record and will be interpreted differently in different jurisdictions, based on provincial health-related legislation. That is because each province may have specific consent policies, although they are mostly opt-out systems with various degrees of granularity [14]. European countries also use different consent models. For example, the Netherlands has an opt-out system and France an opt-in system, while some countries want to support more detailed patient consent that can specify exceptions. There are many benefits to digitizing health information; for example, the process is made more efficient (as it allows for fast data storage and retrieval), convenient and cost-effective, and the consumer's preferences can be immediately translated into machine-readable policies (e.g. an XACML policy), which provide fine-grained access control to the consumer's health information [15]. As such, technology needs to support different consent models in a structured document, as specified in the Privacy Consent Directive and the CDA R2 implementation guide for consent directive specifications of HL7. However, besides the stated benefits, many problems intrinsic to the privacy and security of electronic data management have emerged. The ease with which patients' sensitive health information is accessible in EHR systems has raised concerns about breaches of data confidentiality and patient privacy, especially in cases when:
• health information is used outside the professional medical domain (e.g., wellness/personal health services, etc.);
• personal medical information is being used in a manner that acts against the interest of the individual (for example, discrimination due to health status);
• health information is accessed wrongfully;
• personal medical information is used for commercial purposes; and, in general, if
• health information gets into the wrong hands [16].
Other concerns are related to the reliability of electronic data [17], the lack of awareness in the general public about the risks they encounter with electronic health data transfers, and the increased danger of identity theft. This is particularly critical when sensitive information has to cross a border. Therefore, despite technical advances, there are social and cultural issues that limit, and make problematic, the use, management, and handling of personal health data across and within geographical borders.
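To illustrate what "machine-readable" consent could enable, the sketch below evaluates a simplified consent directive against an access request. It is a plain-Python stand-in for an XACML-style policy, with invented field names and rules; real deployments would use the HL7 and XACML artefacts referred to above rather than this toy model.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentDirective:
    """Simplified, illustrative consent record (not an HL7 or XACML structure)."""
    patient_id: str
    model: str = "opt-out"                                        # "opt-in" or "opt-out"
    denied_purposes: set[str] = field(default_factory=set)        # e.g. {"marketing"}
    permitted_recipients: set[str] = field(default_factory=set)   # used for opt-in

def is_access_permitted(directive: ConsentDirective, recipient: str, purpose: str) -> bool:
    """Decide whether a recipient may access the record for a given purpose."""
    if purpose in directive.denied_purposes:
        return False
    if directive.model == "opt-in":
        return recipient in directive.permitted_recipients
    return True  # opt-out: permitted unless explicitly restricted

# Example: an opt-out patient who has withheld consent for commercial uses.
d = ConsentDirective(patient_id="p-001", denied_purposes={"marketing"})
print(is_access_permitted(d, "referring_clinician", "treatment"))  # True
print(is_access_permitted(d, "pharma_company", "marketing"))       # False
```

The contrast between the opt-in and opt-out branches mirrors the difference between the French and Dutch consent models mentioned above, and shows why a structured, machine-readable form is needed for policies to be enforced automatically across jurisdictions.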
5. Conclusions

Here we argue that a better understanding of the issues around the management of personal health data is necessary, that patient consent should be properly addressed in all cases, and that awareness of the different technological and legal approaches to dealing with patients' privacy preferences should be raised. To achieve these goals, we set out to develop a map of existing e-health legislation in Canada, the US, and the EU, and of related standards, as well as a comparative model for obtaining patient consent electronically within and across borders, with the bottlenecks in both cases recorded. This model should incorporate the existing mechanisms to identify and authenticate the person giving the consent, as well as to verify that consent has been given. Such a comparative model should identify technical needs with respect to transferability from one health service to another and from one geographic domain to another, and develop recommendations based on the 'privacy by design' [18] concept. This concept emphasizes that privacy cannot be assured solely by compliance with regulatory frameworks, but must be embedded in organizations' operational designs. An international approach to this analysis is essential because, in the domain of the transfer of health data across electronic networks (such as the Internet), there is a knowledge gap across all sectors and a lack of synchronization between geographic domains (e.g., the EU and North America). The empirical and comparative approach would bridge this gap.
References
[1] The Pew Internet & American Life Project, www.pewinternet.org
[2] HarrisInteractive (2007). http://www.harrisinteractive.com/news/newsletters/healthnews/HI_HealthCareNews2007Vol7_Iss03.pdf
[3] Adatia, F., & Bedard, P.L. (2003). "Palm reading": Handheld software for physicians. Canadian Medical Association Journal (CMAJ), 168(6), 727-734.
[4] Morris, K. (2009). Mobile phones connecting efforts to tackle infectious disease. The Lancet Infectious Diseases, 9(5), 274.
[5] European Commission (2008) Report. http://europa.eu, ref.=IP/08/641
[6] OPC of Canada (2009). http://www.priv.gc.ca/cf-dc/2010/2010_001_0323_e.cfm
[7] Zoutman, D.E., Ford, B.D., & Bassili, A.R. (2004). The confidentiality of patient and physician information in pharmacy prescription records: Commentary. Canadian Medical Association Journal (CMAJ), 170(5), 815-816.
[8] Martinovic, D., & Ralevich, V. (2007). Privacy issues in educational systems. Int. J. Internet Technology and Secured Transactions, 1(1/2), 132-150.
[9] IPC (2009). http://www.ipc.on.ca/site_documents/ar-09-PHIPA-e.pdf
[10] News Release (December 24, 2009). http://www.ipc.on.ca/english/About-Us/Whats-New/Whats-New-Summary/?id=132
[11] IPC (2010). http://www.ipc.on.ca/english/Privacy/Stop-Think-Protect/
[12] HIPAA Privacy Rule. http://www.hhs.gov/ocr/privacy/hipaa/understanding/summary/index.html
[13] PHIPA. http://www.e-laws.gov.on.ca/html/statutes/english/elaws_statutes_04p03_e.htm#BK25
[14] Canada Health Infoway (2007). White Paper. http://www2.infoway-inforoute.ca/Documents/Information%20Governance%20Paper%20Final_20070328_EN.pdf
[15] Mwangi, E.W. (2008). Patient consent policies in XACML. Unpublished Master's Thesis. Technical University of Eindhoven.
[16] Rooney, T., & Aitken, J. (2002). Consent and electronic health records. http://www.health.gov.au/internet/hconnect/publishing.nsf/content/e250bd83358d3a56ca257128007b7ec9/$file/cons_dp.pdf
[17] Petkovic, M. (2009). Remote patient monitoring: Information reliability challenges, 9th Telsiks International Conference, IEEE Press, 295-301.
[18] Cavoukian, A. (2009a). Privacy by design. http://www.privacybydesign.ca/publications.htm#toc
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-63
Emotions and Personal Health Information Management: Some Implications for Design
Enrico Maria PIRAS a,1, Alberto ZANUTTO b
a e-Health unit, Fondazione Bruno Kessler, Italy
b Facoltà di Sociologia, Università di Trento, Italy
Abstract. This work reflects on the translation of a paper-based information system into an electronic one, taking account of the emotional dimension of material artifacts. A qualitative analysis carried out through semi-structured interviews enabled us to describe laypeople's healthcare practices, and specifically the use of "pediatric booklets", which are paper health diaries designed to provide parents with a repository of the most relevant clinical data about their children. Our analysis reveals that parents' use of the booklet does not depend only on the clinical relevance of the information contained in it. Its success rather depends on practices that reshape the booklet's original meaning. In particular, parents use booklets as containers for other clinical records, and they consider them more as objects of affection and symbols of their caring for their children than as clinical tools with instrumental value in themselves. In the discussion we consider the risks of dematerializing health information tools while underestimating the relevance of the emotional side.2 Keywords. Personal Health Record, emotions, healthcare practices, booklets, in-situ interviews, affordance
1. Introduction

The inclusion of patients in monitoring and care processes is considered one of the most urgent needs of Western healthcare systems. The active role of laypeople in this field is perceived as a step towards more democratic and participatory forms of illness management [1], as a way to cope with a general shift from treatment and cure to management and care [2], and as a strategy to reduce the growing costs of healthcare. In this regard, providing laypeople with information and communication technologies (ICTs) is regarded as the best way to deliver more efficient forms of care. This perspective is reflected in the popularity of new labels such as consumer health informatics [3] or personal health information management [4], and also in the growing body of literature on Personal Health Records (PHR),
1 Corresponding Author: Enrico Maria Piras, c/o e-Health unit, Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy. Telephone: 00390461314126, E-mail:
[email protected]. 2 The present article is an entirely collaborative effort by two authors. If, however, for academic reasons individual responsibility is to be assigned, Enrico Maria Piras wrote Introduction and Results, Alberto Zanutto wrote Methods and Discussion.
these being patient-controlled ICTs that enable laypeople to be part of a digital environment where information flows seamlessly among a network of caregivers [5]. This scenario presents laypeople as able and willing to manage ICTs just as healthcare providers usually do in their work practices. This, however, is more wishful thinking than an easily achievable result, because the pathway that leads from paper-based systems to electronic ones is tortuous, full of pitfalls, and rarely produces the results expected. While some studies have been conducted to understand how people manage paper-based health information in their homes [6; 7], to date little attention has been paid to the emotional implications of substituting paper-based systems with electronic ones. This paper reports qualitative empirical research and reflects on the translation of a paper-based health information system into a service available via the Internet. In particular, it considers the interplay between the emotional and functional dimensions through observation of the practices used to manage health documents in the household, focusing especially on patient booklets.
2. Methods

This study is part of a broader project of research and innovation aimed at prototyping and testing a regional PHR, this being a personally-controlled health information system designed to allow citizens to access, manage, share, and supplement clinical data. We analyzed laypeople's current practices of document management in order to support the system's technical development (here by "practice" we mean the relatively stable and socially recognized ways in which heterogeneity is ordered into a coherent set [8]). In particular, we focused on the management of "pediatric booklets" – paper records provided by the local healthcare authority to help parents keep track of the medical histories of their children – so as to explore the challenges of developing web-based versions of these documents. A first overall sample of 32 households, chosen on theoretical grounds, was subjected to empirical study. Then, for the purpose of the detailed examination reported here, we selected 16 of them which were still managing a pediatric booklet. Ten of the 16 informants interviewed were female; seven of them had only one child, seven had two, and the last two had three children. As regards occupation, the interviewees comprised 5 teachers, 6 office workers, 1 professional, 3 independent professionals, and 1 PhD student. The interviews took place in the homes of the informants. This gave the researchers access to the pediatric booklets and enabled direct observation of the ways in which their management intersected with other dimensions of health information management and the activities of everyday life. The analysis was carried out by means of grounded theory [9].
3. Results

The analysis of the interviews revealed that home health record-keeping is a very complex activity aimed at enabling laypeople to provide healthcare personnel with the information that they need. The classification of documents into distinct thematic areas is invisible work [10] required of the patient by the healthcare system in order to save time for healthcare personnel and to grant the patients themselves their doctors' full attention [7]. Here we focus on some specific practices related to the use of booklets,
and pediatric booklets in particular. A pediatric booklet is a paper file (20 × 15 cm) consisting of about 70 pages. Few of these pages give information to the parents, and even fewer allow the parents to take notes on their children. The other pages are supposed to be filled out by the pediatrician or any doctor who attends to the child, so as to provide an updated clinical history of the latter until s/he turns 14. While the parents are asked to keep the booklet and bring it with them to any encounter with doctors, it is primarily a clinical tool. The cases analyzed exhibited some shared features in how this paper-based information system was used. Firstly, the more the child grew, the fewer data were recorded in the booklet, proof that it is mostly the first phase of development that is of concern to pediatricians. Secondly, most of the parents stated that pediatricians filled out the booklet only occasionally, explaining that they preferred to use their own information systems (electronic or paper-based), and that the contingencies of the examination did not always let them work on the booklet. Thirdly, the pediatric booklet appears to have been of little or no use for the other doctors to whom it was shown (e.g. in accident and emergency). Finally, none of the interviewees reported using the booklet to write information about the child's health, claiming that they were instead only its keepers and that it belonged to the pediatrician. Despite this discontinuous and fragmented use, all the parents kept the pediatric booklet constantly to hand and took it with them for any contact with health facilities, and whenever they travelled with their children (e.g. on holiday). This seeming paradox is explained by the fact that, invariably, the booklet was used by parents as the place in which to keep all other health records. During the interviews we observed pediatric booklets filled with discharge letters, prescriptions, vaccination certificates, and any other diagnostic reports produced by medical facilities, with the exception of voluminous documentation such as radiological images. It was not rare to find booklets containing printed web pages to show to the doctor, leaflets picked up in a pharmacy, business cards of doctors, and, more generally, any other sort of paper document relating to the child's health. Another finding was that when the children turned 14, the booklet was not shown to the general practitioner to be evaluated. Rather, it was kept at home in the same spaces where the parents retained cherished objects concerning their children (toys, first communion favours, school reports) in order to preserve them and eventually give them to their sons and daughters when they left home. For the early years, the booklet was usually regarded as precious because the parents and pediatrician both paid close attention to the healthy development of the children. In these years, therefore, parents accumulated a number of health records concerning both routine contacts (e.g. vaccinations) and sporadic or exceptional ones (e.g. emergency room, hospitalization) with healthcare facilities. At this stage of the child's life, parents took the booklet with them to encounters with doctors, so that it became the easiest place to store documents arising from the meeting. After a few such interactions, the booklet became almost "naturally" the depository of health documentation.
Overall, the paper-based booklet was a tool with limited clinical utility in the strict sense but of great importance in terms of the reassurance that it gave to parents. For the latter, the booklet represented the history of the child's development punctuated by 'minor incidents'. Because of the way in which the booklet was structured, it was not suited to accompanying a child with a complex pathological condition; when such a condition occurred, the booklet was abandoned and specific files or folders were used.
4. Discussion and Implications for System Design

The analysis of the practices described above suggests that use of the pediatric booklet by the parents turns it into a new artifact: a booklet-container-symbol used not only for practical purposes but also to testify to the parents' own care-giving role. Initially, it is used by pediatricians to monitor various parameters associated with growth. Thereafter, the material affordances [11] of the booklet are exploited by the parents to appropriate it. The appropriation – the process of adaptation and adoption of the artifact [12] – depends on the materiality of the other objects with which it is put in connection by the users, such as clinical documents produced by healthcare facilities. In other words, the widespread use of the booklet can be interpreted as deriving both from the intrinsic characteristics of the object (e.g. forms to be filled out) and from its sharing the same materiality as the other objects, so that it constitutes an "ecology of use" with them. The practices centred on the booklet also suggest that its widespread use is bound up with its re-symbolization, so that it becomes not only a clinical tool but also the symbol of the parents' care and attention for their children. This process culminates with the final loss of all relevance as a medical instrument (when children turn 14) and its new collocation alongside other objects of affection, a clear sign that it now has a purely emotional and symbolic value. The study of the practices related to this object raises substantial concerns about its possible translation into a digital information system. Although the information contained in its pages could easily be engineered, this would not guarantee a systematic use that substitutes for the one observed. Achieving this result would require providing the users with a "system" (physical and functional) able to act as a frame of reference for all health information regarding the child. The electronic system should maintain the features of a "booklet-container" of other records and of a "booklet-affective symbol", in the absence of which it would risk becoming an impoverished artifact. These two dimensions point to two distinct challenges for the design of electronic systems for health care. The first requires a response in terms of digitization, access and standardization of health information, bearing in mind the importance of their integration. Although this is a primary objective at the current stage of development of information systems, it is far from being reached. Even more complex is the second challenge: responding to the loss of the emotional dimension that results when the practices of using physical objects are replaced with electronic systems. Responding simultaneously to both these challenges does not seem straightforward. Health document management combines the retrieval of clinical records and their cataloguing in the booklet with the latter's use as a container of service information (e.g. a specialist's visiting card), of requests to be made to the doctor (e.g. a note on a new therapy), and of documents useful for obtaining health services (e.g. prescriptions, health insurance cards). The emotions that parents associate with the paper-based booklet derive precisely from these continuous forms of use, which should, albeit in different forms, be "jointly" reproducible in an electronic device.
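Purely as a design sketch of the two requirements just discussed (a container for heterogeneous records and an affective archive), the hypothetical data model below lets a single child-centred record hold both structured clinical entries and informal keepsakes. The class and field names are our own illustrative assumptions and do not describe any existing PHR.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ClinicalEntry:
    """Structured item normally written by the pediatrician."""
    recorded_on: date
    kind: str          # e.g. "growth_curve", "vaccination", "discharge_letter"
    summary: str

@dataclass
class Keepsake:
    """Informal item the parents choose to keep alongside clinical data."""
    added_on: date
    description: str   # e.g. "photo of first day at nursery", "pharmacy leaflet"

@dataclass
class DigitalBooklet:
    """One container per child: clinical tool and affective archive at once."""
    child_name: str
    clinical_entries: List[ClinicalEntry] = field(default_factory=list)
    keepsakes: List[Keepsake] = field(default_factory=list)

    def timeline(self):
        """Interleave both kinds of items chronologically, so the booklet
        can still be 'read' as the story of the child's growth."""
        items = [(e.recorded_on, e.summary) for e in self.clinical_entries]
        items += [(k.added_on, k.description) for k in self.keepsakes]
        return sorted(items)
```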
Currently, the design of healthcare information systems concentrates largely on the development of interoperable tools that afford easy access to information, for example with the support of multi-platform systems. This choice, however, is based on the idea that people give priority to “information.” Our analysis instead suggests that the affective value of the pediatric booklet as a “material object” resides in its ability to convey meanings and feelings generated by the large number of social practices in
which the tool is used. For this reason, the design of systems like this one should incorporate these "emotional experiences of use." This would seem to explain, for instance, why some objects with scant interoperability but which are "technologically dedicated" seem better suited to emphasising the affective dimension that surrounds the use of any device. These "dedicated" objects safeguard a certain "materiality" of the original use. Study of the paper booklet confirms its limited clinical relevance but, at the same time, its importance in affectively reassuring parents as a comprehensive device "dedicated" to the health practices performed for their children. Its engineered version must therefore also be able, on the one hand, to safeguard the functionalities that make it a support for storytelling about the healthcare experience, in which the children's medical reports, analyses, and growth curves are collected, and, on the other, to accompany memories, digital and otherwise [13], relative to other aspects of childhood and preadolescence such as school, sport and hobbies. The research confirms that the paper-based booklet is, in this sense, a valuable tool in the construction of the parents' identity as attentive to their children. This aspect should be set as the basis of the designers' work.

Acknowledgements: The study presented in this paper is part of a broader research and innovation project, TreC, aimed at prototyping and testing a regional PHR. The TreC project (Cartella Clinica del Cittadino – Citizen's Clinical Record) is funded by the Autonomous Province of Trento (Italy) and managed by Fondazione Bruno Kessler.
References
[1] Porter, R. The greatest benefit to mankind: A medical history of humanity from antiquity to the present, Harper Collins, London, 1997.
[2] Gerhardt, U. Ideas about illness: An intellectual and political history of medical sociology, Macmillan, London, 1989.
[3] Eysenbach, G. Consumer health informatics, British Medical Journal, 320 (2000), 1713-1716.
[4] Moen, A. Personal health information management, in P.W. Jones and J. Teevan (eds.), Personal Information Management, University of Washington Press, Seattle, 2007, 221-234.
[5] Tang, P.C., Ash, J.S., Bates, D.W., Overhage, J.M., Sands, D.Z. Personal Health Records: Definitions, Benefits, and Strategies for Overcoming Barriers to Adoption, Journal of the American Medical Informatics Association, 13 (2006), 121-126.
[6] Moen, A., Brennan, P.F. Health@Home: The Work of Health Information Management in the Household (HIMH): Implications for Consumer Health Informatics (CHI) Innovations, Journal of the American Medical Informatics Association, 12 (2005), 648-656.
[7] Piras, E.M., Zanutto, A. Prescriptions, x-rays and grocery lists. Designing a Personal Health Record to support (the invisible work of) health information management in the household, Computer Supported Cooperative Work, 19 (2010), 585-613.
[8] Gherardi, S. Organizational Knowledge: The Texture of Workplace Learning, Blackwell, Oxford, 2006.
[9] Glaser, B.G., Strauss, A. The discovery of grounded theory: strategies for qualitative research, Aldine Publishing Company, Chicago, 1967.
[10] Star, S.L., Strauss, A. Layers of silence, arenas of voice: The ecology of visible and invisible work, Computer Supported Cooperative Work, 8 (1999), 9-30.
[11] Gibson, J.J. The ecological approach to visual perception, Houghton Mifflin, Boston, 1979.
[12] Dourish, P. The appropriation of interactive technologies: Some lessons from placeless documents, Computer Supported Cooperative Work, 12 (2003), 465-490.
[13] Stevens, M.M., Abowd, G.D., Truong, K.N., Vollmer, F. Getting into the Living Memory Box: Family archives & holistic design, Personal and Ubiquitous Computing, 7 (2003), 210-216.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-68
Socio-Technical Challenges in Designing a Web-Based Communication Platform
Miria GRISOT a,1, Maja VAN DER VELDEN a, Polyxeni VASSILAKOPOULOU b
a University of Oslo, Department of Informatics
b National Technical University of Athens, 15780, Greece
Abstract. This paper takes a socio-technical perspective to analyze the ongoing practices of making an eHealth infrastructure, namely a web-based communication platform, which aims to improve healthcare delivery in Norway. The platform is planned to support interaction between patients and healthcare providers, patient access to personal health information, and dissemination of health knowledge to the public. The analysis is based on the ‘scales of infrastructure’ concept found in Information Systems research, which shows the complexity of the design, development and implementation process across three scales of activities for achieving durability: institutionalization, organizing work, and technology enactment. The case analysis brings the non-linearity of the ongoing practices to the foreground, enabling a more in-depth understanding of the relationship between technology design and infrastructural work. Keywords. co-construction, eHealth, flexibility, durability, scales of infrastructure
1. Introduction

Recently there has been an increased focus on the development of web-based eHealth solutions for on-line patient-provider communication. In Scandinavia, examples of such technology are the national Danish portal sundhed.dk, the national Swedish portal 1177.se, and the hospital-based minTRSSIDe portal at Sunnaas Hospital in Norway. The main purpose of these web-based solutions is to offer patients health information of high quality, a secure communication channel with health providers, and on-line access to a variety of services: booking of exams and visits, prescription renewal, direct access to one's own medical record. The underlying vision is directed towards fostering patient empowerment by making patients more informed and proactive. However, health organizations face significant challenges in providing effective eHealth services. Challenges are related, for instance, to developing solutions that comply with privacy and security regulations [1], defining successful strategies for patient enrollment [2], and facing structural barriers [3]. Responses to these challenges shape design, development and implementation strategies of eHealth solutions. In this paper we are concerned with how decisions taken during the design, development and implementation process affect the durability of web-based eHealth solutions, in our specific case a patient portal.
1 Corresponding author: Miria Grisot, Postboks 1080, Blindern, 0316 Oslo, NORWAY – e-mail: {miriag,majava}@ifi.uio.no
We understand durability from a socio-technical perspective [4][5]: we argue that durability is not only a matter of monitoring system performance and utilization over time, but the critical process of co-constructing long-term use, in which participants are involved in a complex web of institutional, technical and organizational practices. In order to analyze the complexity of these socio-technical activities we use the concept of scales of infrastructure [6]. This concept has been developed to make sense of the everyday practices of participants involved in developing e-infrastructures. Scales of infrastructure analytically differentiate participants' actions with a specific focus on the temporal dimension, the "long term" [7]. The three scales are specified as: institutionalizing, organizing work, and enacting technology. Institutionalizing indicates actions aiming to achieve institutional persistence and permanence; organizing work indicates actions of articulating project work as it complexifies over time; enacting technology indicates the everyday actions of making technology work in practice by both developers and users. We take this lens to develop a socio-technical analysis of the activities illustrated by our case study, and contribute to understanding the complexity of the processes of designing, developing, and implementing a durable patient-provider web-based communication platform. The paper is structured as follows: first the case description and methodological approach are presented, then the case is analyzed according to the three different scales of infrastructure. Finally, we bring the theme of socio-technical flexibility into the discussion and conclude by specifying our preliminary (as the study is still ongoing) contribution to the current medical informatics literature on web-based platforms for patient-provider communication.
2. Method and Case Description: MyHealthRecord

The case reported in this paper is based on an ongoing (at the time of writing) study of the design, development, and implementation of MyHealthRecord (hereafter MHR). MHR is a patient portal developed since 2005 by the IT department of a major Norwegian hospital and specifically tailored to the needs of selected patient groups and clinical units. MHR is designed to be a highly adaptable, configurable and scalable platform (selected functionalities and content are available to specific groups), and a secure, private, and trusted environment for communication between patients and health professionals. The research is designed as a case study [8] focusing on the shaping of MHR as a technological object along social, technical and organizational dimensions. The research design was planned so as to perform data collection regularly over a one-year period (September 2010-2011), following the main activities in the MHR project. The empirical material derives from qualitative data gathering: interviews with the project management as the primary method, and review of documents and presentations and observation of workshops with the users as the secondary methods. All interviews were recorded and fully transcribed. We adopted an interpretive approach for the analysis of the data [9][10], going through transcripts, notes and documents in order to identify relevant themes. Relevance was determined by the use of the analytical concept of 'scales of infrastructure' in its three dimensions of practice (institutionalizing, organizing work, enacting technologies). The three scales were used as a sensitizing concept guiding our interpretation, revealing the complexity of coexisting practices, and serving as a basis to discuss the relevance of a flexible approach to durable platforms.
3. Results

Our analysis of the case focuses on how the participants' practices are directed towards constructing a solution for long-term use. At the same time, the analysis brings to the forefront how the concern for durability translates into practices related to designing and developing a solution that is socio-technically flexible: technically and organizationally scalable and extremely adaptable to users' needs. The analysis is organized according to the three scales of institutionalizing, organizing work, and enacting technology respectively.

3.1. Practices of Institutionalizing

A critical part of the work of the participants in the MHR project is their reflection on and definition of the role of MHR, and how this relates to the ongoing discussion in the Norwegian health policy scenario on patients' active use of the Internet, their right to have access to medical records, and the need to develop a national patient portal. This discussion is partially driven by the positive experiences reported from the neighbouring countries Denmark and Sweden. MHR was originally developed with the idea of offering a portal solution for online access to patient records. One of the managers says: "Access to record was definitely part of MHR from the beginning and one of the very first sketches we did showed the record access. Not only access but also possibility to control others' access to your record". MHR is also based on the idea that record access is not enough. The same manager continues: "And it was also from the beginning thought not as just another door into the hospital where to get some information, but it should be a meeting point where also the hospital personnel should meet half ground, and the patient should be able to set the premises to decide how this meeting takes place". Setting such a vision for the platform is instrumental for its longevity: a new personal and secure communication channel between patients and health providers is the basis for improving existing services as well as developing new ones over time. Moreover, patient representatives and patient associations have been strategically involved in designing services together with clinical personnel. Directing it even more towards delivering a long-lasting solution, MHR is envisioned as a portal for "a life time". The same manager states: "it should adapt to different users, users' needs and ideally also throughout a life time and taking into account that a person is not sick most of his life, so when one is not sick, MHR should be about health maintenance and prevention, more than disease and treatments". Thus, in practice, MHR strategically locates itself within the health policy debate, but in addition proposes a platform that will support patient-health provider communication stretching both in time (a lifetime) and in space (independently of how many providers are involved in the delivery of care). This ambition translates into presenting MHR concretely as 'record access', but also, more visionarily, as an interaction tool which is patient-centred and supports transparency (in relation to access to data), accountability, and continuity of care.

3.2. Practices of Organizing Work

Another important 'scale' of the MHR infrastructure activities relates to the internal organization of the project as such. The project organization of MHR has been
arranged to 'survive' in the context of the many other IT-related initiatives of the Norwegian healthcare system. One of the managers explains: "We did organize this as independently as possible from everything else; we wanted the whole process to be influenced by other processes as little as possible. And that meant doing this 'guerilla' tactic: few people involved and designing the system as independent from other systems as possible. Because that is what we see with other projects, if you have a project going on for over three years the environment you work in is going to change drastically in three years, like merging with other hospitals or new management". Organizing work with this "guerrilla tactic" allows the platform to be flexible and responsive. The team is able to respond swiftly to evolving needs without having to go through cumbersome management procedures and without compromising key MHR characteristics to accommodate other projects' requirements. This type of work organization addresses the adaptability problem, the aim to "assure that the emerging system will remain adaptable at 'the edge of chaos' while it grows" [11].

3.3. Practices of Enacting Technology

A third scale concerns the everyday practices of making technology work. This scale focuses on how project participants work with the users during design, development and implementation in order to stay as close to actual work practices as possible: MHR needs to be configured to fit existing work practices. At the same time, users involved in MHR adoption use MHR activities as an occasion for reorganizing and rethinking their own routines, forms, and information practices: they are required to actively participate in the tailoring process. Discussing users' involvement, a manager says: "It is so difficult to attain involvement of clinical departments (…). For each clinical department we need at least one, preferably more, champion! Champions that really want to do it and think it is a splendid idea. Champions that can talk to their patients and to their colleagues and tell them to go for it. We are not in a position (and we should not be) to push this directly to the patients". The commitment required on the part of the users is a critical factor in the long-term use of the solution. The way participation and commitment are constructed in practice is by promoting both short-term and long-term benefits from MHR use. Short-term benefits are, for instance, given by the opportunity to digitize simple paper-based procedures, such as the requirement for certain patients to fill out questionnaires before coming to visits. Long-term benefits are related, for instance, to the secondary use of data in the long term. Furthermore, we also see how MHR develops out of user requirements in a very specific and gradual way. Both the technology and the practices of infrastructuring co-evolve and become gradually more complex over time.
4. Discussion

The three scales of infrastructure, which we presented in our empirical case, make sense of the infrastructural work in the process of designing, developing, and implementing MHR. Project participants co-construct MHR by enacting different practices at the same time. The use of the "scales" concept for analyzing our case study enables us to base our understanding on a co-construction approach rather than the linear models of interests and events proposed in the literature [12], and to identify the concurrency of different concerns that trigger different coexisting practices. A further
finding emerging from our data, which we understand as emerging from the co-construction process, is the centrality of flexibility. First, in the institutionalizing scale we see how MHR project management relates the new platform to evolving Norwegian health policy by keeping a flexible image and identity and articulating its merits and impact in relation to broader objectives. Secondly, the "work organizing scale" helps reveal how the project itself is put together and kept going as participants reconcile independence with interdependency, local contingencies with universal aspirations, and everyday task coordination with visionary work. This is achieved by "guerrilla tactics" aiming again for flexibility and thus allowing responsiveness and dedication. Finally, the "enacting technology scale" exposes the way user enrollment and commitment are constructed in practice by promoting both short-term and long-term benefits from MHR use, but also how a flexible technical design renders MHR adaptable and configurable to the various situations of use. Within this more complex co-construction view we gain a more in-depth understanding of the role of flexibility for the long-term use (durability) of the system. This 'project-wide' flexibility is enabled by the ongoing co-shaping of technology design and infrastructural work, making it possible to carry on despite priority shifts, project contingencies and unanticipated requests.

Acknowledgements: This work was supported by NFR Verdikt projects n. 176856, n. 193172, and FMO project n. EL0086.
References
[1] Kluge EHW. Secure e-health: managing risks to patient health data, International Journal of Medical Informatics 76 (2007), 402–406.
[2] Kaplan B, Brennan PF. Consumer informatics supporting patients as co-producers of quality, J Am Med Inform Assoc 8 (2001), 309–316.
[3] Kaye R, Ehud K, Shalev V, Idar D, Chinitz D. Barriers and success factors in health information technology: A practitioner's perspective, Journal of Management & Marketing in Healthcare 3 (2010), 163-175.
[4] Berg M. Patient care information systems and health care work: a socio-technical approach, International Journal of Medical Informatics 55 (1999), 87-101.
[5] Aarts J, Callen J, Coiera E, Westbrook J. Information technology in health care: Socio-technical approaches, International Journal of Medical Informatics 79 (2010), 389-390.
[6] Ribes D, Finholt TA. The Long Now of Technology Infrastructure: Articulating Tensions in Development, Journal of the Association for Information Systems 10 (2009), 375-398.
[7] Edwards P. Infrastructure and modernity: force, time, and social organization in the history of sociotechnical systems, in Misa TJ, Brey P, Feenberg A, eds. Modernity and Technology, MIT Press, Cambridge MA, 2003.
[8] Yin R. Case Study Research Design and Methods, Sage Publications, Thousand Oaks CA, 2003.
[9] Klein HK, Myers MD. A set of principles for conducting and evaluating interpretive field studies in information systems, MIS Quarterly 23 (1999), 67-94.
[10] Walsham G. Doing Interpretive Research, European Journal of Information Systems 15 (2006), 320-330.
[11] Hanseth O, Lyytinen K. Design theory for dynamic complexity in information infrastructures: the case of building internet, Journal of Information Technology 25 (2010), 1-19.
[12] Wakefield DS, Mehr D, Keplinger L, et al. Issues and questions to consider in implementing secure electronic patient-provider web portal communications systems, International Journal of Medical Informatics 79 (2010), 469-477.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-73
Results of the 10th HON Survey on Health and Medical Internet Use
Natalia PLETNEVA a, Sarah CRUCHET a, Maria-Ana SIMONET a, Maki KAJIWARA a, Célia BOYER a
a Health on the Net Foundation, Geneva, Switzerland
Abstract. The Internet is increasingly being used as a means to search for and communicate health information. As the mission of the Health on the Net Foundation (HON) is to guide healthcare consumers and professionals to trustworthy online information, we have been interested in seeing how attitudes towards Internet use for health purposes have evolved since 1996. This article presents the results of the 10th HON survey, conducted in July-August 2010 (in English and French). It was hosted on the HON site with links from Facebook and Twitter and from HONcode-certified web sites. There were 524 participants, coming mainly from France (28%), the UK (18%) and the USA (18%). 65% of participants represented the "general public", while the remaining 35% were professionals. Information quality remains the main barrier users encounter while looking for health information online; at the same time, 79% believe they critically assess online content. Both patients and physicians consider the Internet to be helpful in facilitating their communication during consultations, although professionals are more sceptical than the general public. These results justify the continuing efforts of HON to raise public awareness regarding online health information and the related ethical, quality and transparency issues, and to educate and guide users towards trustworthy health information. Keywords. Survey, health information, Internet usage, Internet
1. Introduction

Since its inception, the Internet has been used for health purposes, and the trend is growing steadily. In 2009, 61% of the US population looked for health or medical information online [1]. Another US source states that the percentage increased from 27% to 76% between 1998 and 2010 [2]. Both users' scepticism and the demand for high-quality information are growing. In the USA, among those looking for health information online, the number of people dissatisfied with their search results (from 6% to 9% in the last five years) or with the reliability of information (from 5% to 8% in the last five years) has been increasing [2]. The Internet influences the doctor-patient relationship. Doctors remain the most significant source of information for patients. In France, in 2010, patients preferred asking doctors rather than the Internet (89% vs. 64%) [3]. An international study (2008) revealed that 88% turn to their physicians to validate online information, but the same number (88%) turn to other sources to validate information from their doctors [4]. As the mission of the Health on the Net Foundation (HON) is to guide the growing community of healthcare consumers and providers on the World Wide Web to sound, trustworthy medical information and expertise, we have been interested in seeing the
trend in the attitude towards Internet use for health purposes since 1996. In this article, the results of the 10th survey are presented.
2. Method

HON surveys use non-probabilistic sampling and cannot ensure that participants are representative of the entire medical and health information-user community on the Internet. However, taking into account the Internet use experience of participants, we believe they represent the most empowered and actively engaged part of the global Internet population seeking health information. The survey was hosted on the HON web site in English and French between July and August 2010. It was open to anyone accessing the HON web page or its Facebook and Twitter accounts, and was also promoted through HONcode-certified web sites. The participants included the general public (including patients) and healthcare professionals. The survey consisted of five parts: four parts were identical for both groups, while one part had a separate version for each group [5]. The 2005 survey had the same structure and questions similar to those of the 2010 survey [6]. Some questions required an answer on a scale from -4 to +4. For such questions we summed up the results into three groups: "disagree"/"rarely" (-4, -3, -2), "neither agree nor disagree/rarely" (-1, 0, +1) and "agree"/"often" (+2, +3, +4). If two of the three groups of results were distributed almost equally (e.g. disagree 12%, neither 43% and agree 45%), we report "would rather agree (12% disagree)", and vice versa. We mention a difference between the 2005 and 2010 results only where it exceeded 10%.
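A minimal sketch of the grouping rule described above, assuming hypothetical raw answers, is shown below; it simply restates the stated cut-offs (-4 to -2, -1 to +1, +2 to +4) in code and is not the tool used by HON.

```python
from collections import Counter

def bin_response(score: int) -> str:
    """Map a -4..+4 answer onto the three groups used in the survey analysis."""
    if score <= -2:
        return "disagree/rarely"
    if score <= 1:            # covers -1, 0, +1
        return "neither"
    return "agree/often"

def group_shares(scores):
    """Return the percentage of answers falling into each group."""
    counts = Counter(bin_response(s) for s in scores)
    total = len(scores)
    return {group: round(100 * n / total) for group, n in counts.items()}

# Hypothetical answers to one survey question.
print(group_shares([-4, -3, 0, 1, 2, 3, 3, 4, -1, 2]))
```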
3. Results and Discussion

3.1. Who is Searching and for Whom? When, Where and What is Being Searched?

Over 500 people (524) participated in the survey. 65% filled in the questionnaire in English and 35% in French. 65% were individuals, patients, or members of patients' associations (later referred to as "citizens"/"patients") and 35% were health and medical professionals (later referred to as "professionals"/"doctors"). Overall, respondents from 60 countries around the world filled in the HON questionnaire, with most participants coming from France (28%), the USA (18%) and the UK (18%). Compared with the 2005 survey, there were more female participants (65% vs. 50% in 2005), which echoes other studies [7][8]. Most of the participants were aged 20-59, the most active group being those aged 30-39 (30%). In the US, most online health information seekers are aged 18-49 [8]. For those aged between 33 and 44, getting health information is the primary Internet activity [9]. The geographical coverage of the studies and the different methodologies used to collect answers may explain the difference; generally, however, the tendency is the same. Most respondents (79%) had been using the Internet for 7 years or more (44% in 2005). 96% of users spend time checking and writing emails and 93% browsing the web. 60% read newsletters (28% in 2005) and 51% take part in online communities (23% in 2005). This shows the growing popularity of web 2.0 services. The Internet is being used to retrieve information, but also to communicate with peers [10].
In 79% of cases a web search is the starting point to clarify medical information obtained from physicians, the Internet, etc. The frequency of search engine use has increased from 86% in 2005 to 94% in 2010. Web sites about specific health topics came second (73%), and links from health web sites third (66%). The importance of web sites suggested by a healthcare provider increased from 31% to 43%. Specialised search tools such as HONselect have lost popularity (29% in 2010 vs. 52% in 2005). The majority of users (61%) visit two to five web sites and 25% visit up to 10. 44% of users search for health information more than three times a week, and 25% do it two to three times. We found no correlation between time spent searching for health information and consultation time with a healthcare provider. Of all online health information sources, the most popular are medical journals or publishers (85%), hospitals (77%), universities and governmental agencies (76%) and non-commercial medical organizations (74%). Over the last five years the importance of hospitals as a source of online health information has increased from 60% to 77%. Respondents mostly search for disease descriptions (69%) and medical literature (62%). Other topics include clinical trials (28%), patient communities (24%), alternative medicine (22%), support groups (19%), weight loss (17%) and others (26%). Regarding medications, citizens mostly search for drug side effects (60%), safety (54%) and efficacy (52%). Over the last five years there were fewer searches on drug interactions (from 59% to 47%). Generic drugs and information regarding herbal or alternative treatments are frequently searched for by 37% of citizens. Patients who participated in the survey rarely buy prescription (only 10% declared they did) or over-the-counter (12%) drugs via the Internet.

3.2. Difficulties of Online Health Search

We asked participants about the difficulties they face when searching for online health information. For each barrier a scale of -4 to +4 was proposed. Access to reliable medical information was considered important by English- (96%) and French-speaking (76%) respondents; however, its quality remains the main barrier users encounter while looking for health information online (80%). Inadequate tools and applications, lack of time and lack of support were considered less important. Internet training is no longer considered an obstacle by 47% of respondents (in 2005 this was still an obstacle for most participants, whereas for 34% of them it was not a barrier). The following factors are considered among the most valuable for improving the quality of online health information and services:
• Trustworthiness/credibility – 96%
• Accuracy and availability of information – 95%
• Ease of finding information/Navigation – 93%.
Information transfer rate (74%), privacy (73%), accessibility in terms of language and physical impairment (69%), and the scientific complexity of information (59%) play a less important role. Commercialisation/advertising and sponsorship are not considered quality-enhancing factors (from 31% to 42% in 5 years), nor are spam (44%) and pay-to-view/pay-for-use information or services (42%). Most citizens (78%) prefer to have the option of seeking complex medical information, especially the French-speaking ones (91%). 57% consider consumer web sites to be often superficial.
What domains do users trust? Not surprisingly, the .edu (70%), .gov (69%) and .org (65%) domains remain the most credible. The .com domain was considered neither credible nor non-credible by 52% of respondents. National domains have gained more trust among French-speaking participants (64%) than among English-speaking ones (19%). This may be potentially dangerous because .fr domains can be used by fictitious organizations or ones that are not based in France, and this can mislead users who consider the .fr domain to be as trustworthy as, for example, .gov. Most respondents think quality should be ensured by associations representing non-profit organizations, both international (72%) and national (71%), and by NGOs (69%). Over the last 5 years, the importance of NGOs has increased significantly, from 46% to 69%. 79% believe they critically assess online health information and 83% state that they verify whether a web site is trustworthy by checking the source of information (88%), its motivation (68%), the URL (commercial or not) (66%) and the sources of funding (55%). However, only 13% of users think their family and friends verify the trustworthiness of web sites, while most of them remain undecided. 49% state they are not anxious when conducting a web search, and 75% do not consider themselves to be cyberchondriacs. The majority (74%) of respondents said they were aware that the ranking of search results could be manipulated by commercial interests. The HONcode seal was the most recognized trust mark among participants of the survey (50%). There was, however, a significant difference between English-speaking and French-speaking respondents regarding the popularity of the HONcode: 41% of English-speaking participants knew the HONcode seal, along with Good Housekeeping (36%), whereas 67% of French-speaking participants knew it because of the collaboration with the French National Authority for Health. 76% think that hospital web sites should always be certified; 66% also consider it appropriate for physicians' web sites and 46% for web sites selling software.

3.3. Doctor-Patient Relationships, Perspectives from Both Sides

Both citizens and professionals were asked whether they discuss Internet search results with their doctor. 53% of citizens declared that they did. As for professionals, 62% said they engaged in such communication (75% of English-speaking and 47% of French-speaking). We could not reach certainty on some questions. Both professionals and patients rather agree that such discussion increases adherence to a physician's advice (22% and 11% disagree respectively) and to instructions on taking prescribed pharmaceuticals (12% and 15% disagree respectively). The most controversial issues turned out to be (1) whether discussing online health information fosters patient mistrust and (2) whether it encourages patients to challenge a physician's authority. With regard to the first issue, patients rather think it does not (17% think it does), whereas physicians rather think it does (21% think it does not). Regarding the second issue, patients remain undecided, whereas 14% of doctors think it does not. Comparing all these findings with those of 2005, we see that both doctors and patients had become more critical by 2010. 80% of citizens continue to think that a healthcare provider should suggest trustworthy sources of online health information. 72% of professionals agree it would be helpful for them to provide patients with such information (in 2005, only 59%).
Most physicians would use a trustworthy online service that allows them to suggest web sites to their patients, especially if it is free for the patient (87%). However, so far 78% of patients say healthcare providers have never given them such information.
4. Conclusions

The survey findings demonstrate that the target audience is becoming more critical and less satisfied with the quality of online health information. Their worries are well founded, as there is a huge amount of misleading information online. Most respondents recognise this problem and believe they critically assess online health information. Although more than 500 answers do not represent all points of view, we believe that the growing scepticism on the part of physicians and patients justifies continuous efforts from HON and webmasters to increase public awareness of quality issues. First, we need to create more awareness among Internet users of reliable tools for "healthy" online surfing. Secondly, we have to educate both the general public and health professionals. Along the same lines, the UK Nuffield Council on Bioethics urges physicians to guide patients searching for health information on the Internet [11]. Medical students and practicing doctors should have such courses as part of their curriculum. We believe that a similar course should be created for Internet users and adjusted to their background. And thirdly, patients and doctors need a communication tool which is easy to use, saves time during consultations, decreases professionals' workload, and ensures access to trustworthy information on the web.
References
[1] Fox S., Jones S. The social life of health information. Pew Research Center's Internet & American Life Project. [Online] June 2009. [Cited: 15 December 2010.] http://www.pewinternet.org/~/media//Files/Reports/2009/PIP_Health_2009.pdf
[2] Taylor H. HI-Harris-Poll-Cyberchondriacs. Harris Interactive. [Online] 04 August 2010. [Cited: 01 December 2010.] http://www.harrisinteractive.com/vault/HI-Harris-Poll-Cyberchondriacs-2010-0804.pdf
[3] Vers une meilleure intégration d'Internet à la relation médecins-patients. Conseil National de l'Ordre des Médecins. [Online] 06 May 2010. [Cited: 23 November 2010.] http://www.conseil-national.medecin.fr/article/vers-une-meilleure-integration-d%E2%80%99internet-la-relation-medecins-patients-982
[4] Health Engagement Barometer. Edelman. [Online] 2009. [Cited: 22 November 2010.] http://static.edelman.com/wwwedelman/healthengagement/docs/Edel_HealthBarometer_R13c.pdf
[5] Health On the Net Foundation. Survey 2010: Evolution of Internet use for health purposes. [Online] August 2010. [Cited: 05 September 2010.] http://services.hon.ch/cgi-bin/Survey/Survey2010/quest_oct.pl
[6] Analysis of 9th HON Survey of Health and Medical Internet Users, Winter 2004-2005. Health On the Net Foundation. [Online] 2005. [Cited: 23 November 2010.] http://www.hon.ch/Survey/Survey2005/res.html
[7] Anker AE, Reinhart AM, Feeley TH. Health information seeking: a review of measures and methods. Patient Educ Couns 82(3) (2011), 346-354.
[8] Fox S. Health Topics. Pew Internet & American Life Project. [Online] 1 February 2011. [Cited: 16 April 2011.] http://www.pewinternet.org/~/media//Files/Reports/2011/PIP_HealthTopics.pdf
[9] Jones S., Fox S. Generations Online in 2009. Pew Internet & American Life Project. [Online] 28 January 2009. [Cited: 5 April 2011.] http://www.pewinternet.org/~/media//Files/Reports/2009/PIP_Generations_2009.pdf
[10] Fox S. Peer-to-peer healthcare. Pew Internet & American Life Project. [Online] 28 February 2011. [Cited: 15 April 2011.] http://www.pewinternet.org/~/media//Files/Reports/2011/Pew_P2PHealthcare_2011.pdf
[11] Nuffield Council on Bioethics. Medical profiling and online medicine: the ethics of 'personalised healthcare' in a consumer age. [Online] October 2010. [Cited: 14 December 2010.]
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-78
Social connectedness through ICT and the influence on wellbeing: the case of the CareRabbit
Sanne R. BLOM, Magda M. BOERE-BOONEKAMP, Robert A. STEGWEE1
Department of Health Technology and Services Research, University of Twente, The Netherlands
Abstract. The CareRabbit has been introduced as a technological innovation in the care for children, enabling family and friends to stay in touch while the child is hospitalized. This study addresses the influence of this innovation on the wellbeing of the children, and uses the validated KINDL questionnaire, eliciting information from children and parents at the end of hospitalization. A baseline and an experimental measurement are compared. The children in the CareRabbit group scored slightly higher on the KINDL questionnaire than children in the control group. For young children (age 4-7) the difference was large. Initial findings indicate that the CareRabbit has a positive influence on wellbeing, although the sample size and measured differences limit the support for this conclusion. The measured difference suggests that the CareRabbit may be more valuable for younger children.
Keywords: Innovation, Information and Communication Technology, Evaluation, Wellbeing
1. Introduction
The CareRabbit (ZorgKonijn) is an e-health device that can be used to play messages (e.g. text, mp3) sent to it through the Internet. The device2 itself, depicted in Figure 1, is a 23 cm high white rabbit with rotating ears and lights in its belly.
Figure 1. The CareRabbit e-health device.
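To make the message flow concrete, the sketch below shows how a text message might be submitted to such an Internet-connected device. It is purely illustrative: the endpoint, parameters and token are hypothetical and do not correspond to the actual CareRabbit/Nabaztag service API.

```python
# Hypothetical sketch: submitting a text message to an Internet-connected
# companion device such as the CareRabbit. The endpoint and parameters are
# invented for illustration and are NOT the real CareRabbit/Nabaztag API.
import requests

def send_message(device_serial: str, api_token: str, text: str) -> bool:
    """Post a text message that the device will play for the child."""
    response = requests.post(
        "https://example.org/carerabbit/api/messages",  # placeholder URL
        data={"serial": device_serial, "token": api_token, "text": text},
        timeout=10,
    )
    return response.status_code == 200

if __name__ == "__main__":
    ok = send_message("SN-0001", "secret-token", "Get well soon! Love, Grandma")
    print("message accepted" if ok else "message rejected")
```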
The device is deployed in children’s departments in hospitals. Its aim is to make children feel comfortable and make their stay more pleasant, by keeping in touch with friends and family.
1 Corresponding author.
2 The device is called a Nabaztag, marketed independently by Mindscape France.
One of the questions in developing the CareRabbit further was formulated by IBM as follows: what is the value of the CareRabbit for its users and how can it best be used in hospitals? This paper addresses the value of the CareRabbit in terms of the wellbeing of the hospitalized children. Rather than measuring the perception of children and their parents regarding the technology used, we decided to employ a validated instrument to actually measure the wellbeing of the child.
The e-health device fits in a ‘Family-centred care’ (FCC) approach of hospitals. FCC means that during a hospital admission, care is planned by the health staff around the whole family, not just the individual child, with key concepts like ‘partnership in caring’ and ‘encouragement of family-to-family/peer support’. The aim of this approach is to minimize the impact of the child’s admission on all the family members and the child’s emotional trauma, and to assist in recovery [1, 2].
Several studies show that social connectedness is important for a person’s wellbeing and health. Sadlo [3] states: “the experience of social connectedness makes a more important contribution to an individual’s subjective wellbeing, than the mode of communication”. Family members and friends in particular can give us a feeling of belonging, understanding and being cared for. Having a social network for support can buffer against stress, develop social skills [4], and lead to higher levels of life satisfaction and self-esteem [5]. Even the frequency of contact with family and friends has been positively related to wellbeing [6]. Research shows that people use technology-based modes of communication as a supplement to their face-to-face communication, not as a replacement [3]. The CareRabbit will most likely be used to supplement hospital visits.
This research was part of a larger project directed at the implementation of the CareRabbit in Dutch hospitals, in itself a complex, multidisciplinary problem with a practice-oriented design. The project’s aim is to develop a business model for this specific innovation, in order to gain insight into the factors that influence the implementation of an e-health innovation in healthcare. A crucial part of this business model is the value of the CareRabbit services to the children and their families. This paper addresses that value in terms of the wellbeing of the children only.
2. Methods
The research was carried out as part of the pilot studies with the CareRabbit in the paediatric departments of two hospitals: the Martini Hospital in Groningen and the MST in Enschede. The methods used were:
• Desk research on the influence of connectedness through an electronic device on the wellbeing of people, and a description of the target group;
• A baseline measurement of wellbeing among children who were in the hospital but had not used the CareRabbit;
• A measurement of wellbeing among children who had used the CareRabbit for at least two days.
With this information we obtain a fair indication of the effect of the CareRabbit and of the responses of the children. However, the validity of the outcomes is limited, since this was not a randomized trial under controlled circumstances.
Wellbeing was measured with the KINDL questionnaire (www.kindl.org), a validated instrument consisting of 24 questions [7]. KINDL was chosen because it can be used with children aged four years or older, it has a limited number of questions in six categories (Physical wellbeing, Emotional wellbeing, Self-esteem, Family, Friends,
and School), and it is available in Dutch. For each category of children’s questionnaires (4-7, 8-11, 12-16), a matching questionnaire exists for parents to indicate their perception of their child’s wellbeing. The questionnaire for the youngest children (4-7) is easier to fill in, even though help from an adult is desirable. The scores of KINDL can be normalized to a 0-100 point scale to make outcomes comparable. Missing values (mainly in the category School, since it was a holiday) are not included, and scores on negatively formulated questions (“double negatives”) are reversed.
At each hospital, start-up sessions were organized with childcare workers, the head of the department, IT staff and nurses. The project, pilot studies and research were explained; CareRabbits were tested; and instructions were given on how to use the device and website. The control phase was executed first, for two months in both hospitals: initially the researcher handed out the questionnaires, but eventually the childcare workers gave KINDL to eligible children in the hospital. After the control phase was completed, the CareRabbit phase started: childcare workers handed out the devices, instructed parents and handed out questionnaires. Children that filled out one of the questionnaires received a small mascot or “gelukspoppetje” (good-luck charm) and a card to thank them for their participation and wish them health and luck.
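As an illustration of how such scoring works, the sketch below normalizes raw Likert-style answers to a 0-100 scale, reversing negatively formulated items and skipping missing values. The item numbering and the set of reversed items are invented for the example; the official KINDL manual defines the actual scoring rules.

```python
# Illustrative sketch of Likert-scale scoring as described in the text:
# negatively formulated items are reversed, missing values are skipped,
# and the result is normalized to a 0-100 scale. Which items are reversed
# is an assumption here; the KINDL manual defines the real scoring.
from typing import Dict, Iterable, Optional

LIKERT_MIN, LIKERT_MAX = 1, 5  # 5-point response scale

def normalized_score(answers: Dict[int, Optional[int]],
                     reversed_items: Iterable[int]) -> float:
    reversed_items = set(reversed_items)
    values = []
    for item, answer in answers.items():
        if answer is None:          # missing value: not included
            continue
        if item in reversed_items:  # reverse the "double negative" items
            answer = LIKERT_MIN + LIKERT_MAX - answer
        values.append(answer)
    mean = sum(values) / len(values)
    return 100 * (mean - LIKERT_MIN) / (LIKERT_MAX - LIKERT_MIN)

# Example: four answered items, one missing, item 2 negatively formulated.
print(normalized_score({1: 4, 2: 2, 3: None, 4: 5, 5: 3}, reversed_items=[2]))
```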
3. Results
During the pilots 27 children used the CareRabbit and 32 children participated in the control group (27 of their parents participated). Of the CareRabbit group 12 parents and 11 children (34%) filled out a questionnaire. At the MST 23 children used a CareRabbit, at the Martini Hospital 4 children used one. Table 1 shows the background characteristics of each group of participants.

Table 1. Distribution of background characteristics of the CareRabbit group (Yes) and the control group (No)

                         Parents                Children
Age                      4-7       8-16         4-7       8-11       12-16
CareRabbit               Yes  No   Yes   No     Yes  No   Yes  No    Yes   No
#Participants             6    6    21   21      6    6    15   11     6   15
#Questionnaires           4    6     8   21      4    6     6   11     2   15
Boys                      2    3     1    3      2    3     1    6     0    7
Girls                     2    3     7   18      2    3     5    5     2    8
Average age              4,3  6,0  10,8  11,7   4,3  6,0   9,3  10,2  15,5  13,9
# brothers/sisters         -    -     -    -    1,3  1,7   1,7  1,3   1,0   2,5
Earlier admissions = 0    1    1     2   11      1    3     4    7     0    8
Earlier admissions = 1    1    4     1    4      1    1     0    2     0    2
Earlier admissions > 1    2    1     5    6      2    2     2    2     2    5
Based on the information from the childcare workers, three children who were asked to participate in the control group refused. Four children who were offered a CareRabbit declined, because they found their current facilities sufficient; one boy of 15 said he considered himself too old for the CareRabbit. However, two girls aged 15 and 16 used the CareRabbit at the MST and were enthusiastic about it. This means that, depending on the specific child, the CareRabbit may be of value for children older than 14. The childcare workers explained that the low number of completed questionnaires was caused partly by questionnaires not being handed out, and sometimes because (young)
children were too excited about returning home and therefore could not concentrate on a questionnaire. The control period and the CareRabbit period were approximately equal in length (10 weeks); however, the CareRabbit period took place during the summer holidays, when most planned hospital admissions are postponed and fewer children were admitted since the paediatric department was closed. The average age of the control group was slightly higher, and more boys participated than in the CareRabbit group. This is consistent with the perceived age of the target group of the CareRabbit. Childcare workers reported that boys older than 12 were often not offered the CareRabbit or did not want to use it. Moreover, more children of the control group had been admitted to the hospital before; this might indicate that more children of the control group have a long-term illness or serious condition.
From here on, only the results from the questionnaires are analyzed, which for the CareRabbit group yields N = 12. The results of our measurements show, among others, that the difference between the CareRabbit group and the control group in their normalized KINDL scores is 2,5 points (66,8 versus 64,3), indicating that the CareRabbit group scored slightly higher on wellbeing than the control group; this difference is not significant. For the youngest children (age 4-7) the difference is larger: 12,3 points for the parents and 17,5 points for the children. The difference for the parents’ and children’s scores together is 14,52; this difference is significant with 97% certainty (t = 2,87, df = 7). For the children separately the difference of 17,6 points is 90% certain; for the parents the difference of 12,3 points is 70% certain. This might indicate that the CareRabbit has a positive influence on wellbeing for younger children. However, since the circumstances were not controlled, other factors might have had an influence too. The differences on the separate categories are minimal, except for School, where the CareRabbit group scores 0,4 points higher, and Family, where the control group scores 0,3 points higher. These differences are not significant.
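For readers who want to reproduce this kind of between-group comparison, the sketch below runs an independent two-sample t-test on normalized scores using SciPy. The score lists are invented placeholders, not the study data; group sizes and scores would come from the actual questionnaires.

```python
# Illustrative between-group comparison of normalized KINDL scores, analogous
# to the t-test reported in the text (t, df and the certainty level follow
# from the data). The score lists below are placeholders, not the study data.
from scipy import stats

carerabbit_scores = [78.1, 70.8, 82.3, 75.0, 68.8]   # hypothetical group A
control_scores    = [61.5, 58.3, 66.7, 63.5]          # hypothetical group B

t_stat, p_value = stats.ttest_ind(carerabbit_scores, control_scores)
df = len(carerabbit_scores) + len(control_scores) - 2
mean_diff = (sum(carerabbit_scores) / len(carerabbit_scores)
             - sum(control_scores) / len(control_scores))

print(f"difference = {mean_diff:.1f} points")
print(f"t = {t_stat:.2f}, df = {df}, p = {p_value:.3f} "
      f"(certainty = {100 * (1 - p_value):.0f}%)")
```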
4. Discussion and conclusion
The number of questionnaires filled out in the CareRabbit group is low (n = 12), and it is difficult to generalize conclusions based on this information. Besides that, other factors might have been of influence as well (e.g. personal circumstances), since the two measurements were not done within the same group (i.e. a control measurement and a CareRabbit measurement with the same person), but between groups. However, all children and parents that actually used the CareRabbit valued it. The children of the CareRabbit group scored slightly higher on the KINDL questionnaire than the children of the control group (66,8 versus 64,3). This difference is not significant, but it might indicate that the CareRabbit has an influence on wellbeing. For young children (age 4-7) the difference was larger (14,9), suggesting that the CareRabbit may be more valuable for younger children.
Not much information is known about children that did not want to use the CareRabbit. As far as known, three children (boys, older than 14) refused the CareRabbit, but it is not clear what their precise motivation was or what they would have thought of it had they tried it. Furthermore, it is not known what children that were not offered a CareRabbit by the childcare workers (for example, because they expected that it would not be appreciated by older children >16) would have thought of it. In general, the approach of the childcare workers might influence the results, as they handed out most CareRabbits and questionnaires and their attitude is bound to be
different (e.g. enthusiasm, opinion on whether it was valuable for boys or older children, way of supplying). However, it is probably impossible to neutralize this effect in this type of research project.
Almost all questionnaires were filled out at the end of the stay. Some of the questionnaires of the control group were filled out one or two days earlier, but never on the day of admission itself. A future improvement on this research would be to give both the control group and the experimental group a questionnaire at admission and at discharge. In this way, the effect of the hospital stay itself and of improved health on wellbeing would be excluded, and the effect of the CareRabbit could be measured more thoroughly. Future research directions regarding the CareRabbit should include a follow-up study, taking into account this improved research design, with a larger number of participants. In addition, first steps have been taken toward the elaboration of a business model, which takes into account the demonstrated value to the stakeholders. Based on the experience and feedback during the pilot study, the applicability to other patient groups in healthcare, such as elderly people, might also be investigated, to discover whether the use of ICT in an alternative and inviting shape and form can improve their social connectedness and wellbeing.
Given that this study was not limited in its approach but by practical issues, the small sample size has not deterred us from seeking publication at this stage. The results are positive and also show statistical significance. Furthermore, all stakeholders (most importantly, the children, parents, and hospital staff) valued the CareRabbit. Therefore the study can be seen as a pilot study with positive results, and it thus invites researchers to do further research on social connectedness through the use of ICT. From a theoretical point of view, we have shown the applicability of a validated clinical instrument (KINDL) to assess the health and wellness outcome of the use of an e-health device. In combination with more traditional information systems approaches to assessing the value of e-health applications, such as perceived usefulness and ease of use, this provides a richer methodological basis for future e-health research.
Acknowledgements: The CareRabbit is a corporate social responsibility project within IBM The Netherlands, and we thank IBM The Netherlands and Juriën Taams for support of this research.
References
[1] Kuhlthau KA, Bloom S, Van Cleave J, et al. Evidence for family-centered care for children with special health care needs: a systematic review, Academic Pediatrics 11 (2011), 136-143.
[2] Mikkelsen G, Frederiksen K. Family-centred care of children in hospital - a concept analysis. Journal of Advanced Nursing 67 (2011), 1152-1162.
[3] Sadlo M. Effects of Communication Mode on Connectedness and Subjective Well-Being. Thesis, Australian Centre of Quality of Life, 2005.
[4] Cohen S, Sherrod DR, Clark MS. Social skills and the stress-protective role of social support, Journal of Personality and Social Psychology 50 (1986), 963-973.
[5] Takahashi K, Tamura J, Tokoro M. Patterns of social relationships and psychological well-being among the elderly, International Journal of Behavioural Development 21 (1997), 417-430.
[6] Nezlek JB, Richardson DS, Green LR, Schatten-Jones EC. Psychological wellbeing and day-to-day social interaction among older adults, Personal Relationships 9 (2002), 57-71.
[7] Raat H, Verrips E, Ravens-Sieberer U, Landgraf JM, Essink-Bot ML. Paediatric health profile measures: Does it make a difference? The example of the KINDL and CHQ-CF87, Quality of Life Research 11 (2002), 647.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-83
Technological Choices for Mobile Clinical Applications
Frederic EHRLER a,1, David ISSOM a, Christian LOVIS a
a University Hospitals of Geneva, Division of Medical Information Sciences
Abstract. The rise of cheaper and more powerful mobile devices makes them a new and attractive platform for clinical applications. The interaction paradigm and portability of these devices facilitate bedside human-machine interaction. Better access to information and decision support anywhere in the hospital improves the efficiency and the safety of care processes. In this study, we attempt to determine the most appropriate operating system (OS) and software development kit (SDK) to support the development of clinical applications on mobile devices. The Android platform is a Linux-based, open source platform that has many advantages. Two main SDKs are available on this platform: the native Android SDK and the Adobe Flex SDK. Both of them have interesting features, but the latter has been preferred due to its portability at comparable performance and ease of development.
Keywords: EPR, Android, Mobile Health
1. Introduction
Providing care providers with real-time, mobile and easy collaborative interactions with the hospital’s information system is an important challenge. It is a critical element in improving the efficiency and the safety of care processes [1]. Until recently, these interactions have been limited by devices and interaction models [2]. The new mobile devices represent an important step towards a solution. The development of clinical applications on these devices is not the usual problem of moving an application to a new operating system, because of two elements: the pervasive presence of these devices and the disruptive new interaction paradigm introduced by multi-touch screens. Providing mobile services to physicians requires wise technological choices regarding the platform and the development environment [3]. In the following sections, we first introduce the context in which we started our development research. Then, we present the selection criteria employed to evaluate the candidate technologies. After that, we describe the application we developed to assess the functionality of the candidate SDKs. Finally, we present the advantages and drawbacks of the OSs and SDKs that we assessed for our development, and the technology we chose to adopt in the end.
1 Corresponding Author: Frederic Ehrler, University Hospitals of Geneva, Division of Medical Information Sciences, Rue Gabrielle-Perret-Gentil 4, CH-1211 Geneva 14, Switzerland; Email: [email protected].
1.1. Background
The Geneva University Hospitals (HUG) is a consortium federating the public hospitals in the Canton of Geneva, Switzerland. It provides primary, secondary, tertiary and outpatient care for the whole region, with 45,000 inpatients and 850,000 outpatient visits a year [4]. The Clinical Information System (CIS) of the HUG is mostly an in-house developed system. It is a service-oriented and component-based architecture with a message-based middleware. It is written in Java with J2EE and open frameworks. All exchanges are in SOAP or HTTP/XML [5] [6]. All component building blocks of the CIS, including the ones discussed in this paper, are built in such a way that they comply as much as possible with standards, such as IHE (Integrating the Healthcare Enterprise) profiles, so that they are not dependent on any local legacy system. This includes technical, semantic and human-machine interfaces, such as using a terminology server for the language of the interfaces.
2. Method
In order to define the most appropriate technology to develop mobile clinical applications, we defined several criteria organized along three axes:
• Hardware: market trends, cost, performance and user acceptance of the mobile devices; strength of the mobile platform with regard to security, reliability, and privacy.
• Human: availability of competent developers on the labor market and existence of a developer community.
• Software: complexity of the development environment, cost, user friendliness and reusability of existing and new developments.
It is important to take into account the price of the physical devices supporting the OS. Indeed, when each care provider of the hospital is equipped with a mobile device, a small difference in price becomes really significant. The performance, including the power autonomy of the device, is obviously central. Indeed, the good course of the healing process often relies on real-time access to the relevant information. The information must obviously remain secure, as it concerns the private life of the patient. In addition, we have to consider how quickly developers can master the environment and how easily the work already done inside the CIS can be adapted to the new tools. The choice of widely used languages, such as ActionScript or Java, would definitely facilitate adoption and development, as numerous developers are already familiar with these languages. The existence of a professional development environment, the existence of open source projects in this field, and a sufficient developer community, which has already addressed the most obvious questions, also facilitate the developments.
In order to evaluate the features and the ease of development with the different SDKs, we defined a prototype mobile application, a sort of test use case, aiming to simplify the care process. With the help of this application, health professionals simply enter the information concerning the patient during the visit instead of recording all the information on laptops. The application is composed of a succession of screens where the user selects the unit, the room, and finally the patient being currently examined. On the last screen, the care provider can enter the vital signs of the selected patient.
Figure 1. Communication between mobile applications and existing CIS
2.1. Communication Architecture
Regarding the architecture, it was mandatory to design a model that would not create a dependency on any legacy system. Thus, we defined a gateway server providing centralized access for the mobile application to any required information to or from the CIS. Integrating any mobile application would then only require integrating this bridge. It also clearly separates the services that are available remotely from the ones proposed as usual Web services. The gateway server is responsible for formatting the data properly before sending it to the appropriate application on the device. Once the mobile device receives the data, its embedded software is responsible for displaying the data through its interface and allowing interaction with the user. Figure 1 shows the link between our mobile application and the current CIS. The services of the existing CIS are externalized through a component named CIS gateway. When a mobile application requires data from the CIS, it communicates with the mobile gateway, which transmits the request to the CIS gateway. The service directory is then queried to identify the appropriate service from which to retrieve the required information. The information then returns through the same channel. All data transiting through the channel is formatted in XML.
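To illustrate the kind of exchange described above, the sketch below shows a mobile client posting a vital-signs entry as XML to a mobile gateway over HTTP. The URL, element names and payload structure are assumptions for illustration; they are not the actual HUG gateway interfaces or message formats.

```python
# Illustrative mobile-gateway exchange: the client sends an XML request and
# receives XML back, as in the architecture described above. The gateway URL
# and the element names are assumptions, not the actual HUG interfaces.
import xml.etree.ElementTree as ET
import requests

GATEWAY_URL = "https://mobile-gateway.example.org/cis"  # placeholder

def submit_vital_signs(patient_id: str, pulse: int, temperature: float) -> str:
    request = ET.Element("vitalSignsEntry")
    ET.SubElement(request, "patientId").text = patient_id
    ET.SubElement(request, "pulse").text = str(pulse)
    ET.SubElement(request, "temperature").text = str(temperature)
    payload = ET.tostring(request, encoding="utf-8")

    # The mobile gateway forwards the request to the CIS gateway, which looks
    # up the appropriate service in the service directory.
    response = requests.post(GATEWAY_URL, data=payload,
                             headers={"Content-Type": "application/xml"},
                             timeout=10)
    response.raise_for_status()
    return ET.fromstring(response.content).findtext("status", default="unknown")

if __name__ == "__main__":
    print(submit_vital_signs("P-12345", pulse=72, temperature=36.8))
```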
3. Results
3.1. Choice of the OS
The choice of the OS is challenging. There are numerous OSs for mobile devices on the market, some of them with marginal shares. In order to simplify the work, it was decided to address only the four that are currently seen as major players, as shown in Table 1. The Apple iPhone is an interesting product, as it is widespread among users [7]. Unfortunately, the development policy of Apple is very restrictive. In addition, the development environment is unique to the OS, thus requiring very specific and devoted skills and education for the development team. Finally, there is a very limited choice of devices, as only the devices provided by Apple are available on the market.
Table 1. Comparison of the principal existing OSs to develop on mobile devices (market shares in Western Europe, November 2010)

OS        Developer    Language      Market shares
iOS       Apple        Objective-C   46.4%
Symbian   Nokia        C++           21.77%
Android   Google       Java+XML      15.65%
RIM       Blackberry   Java          10.16%
Choosing between Android, Symbian and RIM was trickier. They all possess a significant market share, rely on well-established languages and possess efficient development environments. However, only Android offers altogether a huge choice of devices, ranging from very small smartphones to large tablets, a widespread development environment, a large open source community, and a very transparent development policy.
3.2. Choice of the SDK
One might think it straightforward to adopt the Android SDK to develop on the Android platform. However, it is worth taking into consideration Adobe, a major actor of the IT world that offers development tools for mobile devices running Android. Adobe provides an SDK named Adobe Flex that has the valuable advantage of generating programs that can be supported by several platforms without any change. We made a quick survey (Table 2) of the Adobe Flex and Android SDK characteristics to clarify their benefits and limitations.
Some restrictions related to the Flex Hero SDK have been identified. As this SDK is an additional layer over the native SDK, there can be a loss of functionality. Fortunately, the Flex SDK can handle the main functions required to interact with the mobile device, such as positioning, multi-touch, inclination, etc. The only identified limitation is the impossibility of creating Android widgets, but this is not required for our application purpose. The additional layer of the Flex SDK can also induce a reduction in performance. However, we did not observe this in our tests, and we did not find objective and serious studies confirming or refuting it.
Regarding the Integrated Development Environment (IDE), the two languages possess a dedicated tool that helps developers generate accurate code. For the Android SDK, the Eclipse IDE is perfectly adapted, as the code is standard Java. With the addition of a plug-in, the Eclipse IDE can manage the installed SDK, the documentation, and some drivers to connect the mobile device to the computer. The plug-in offers automated compilation as well as an emulator, which allows testing the application locally instead of loading it onto the mobile device. For the Flex SDK, a new version of the development environment, Flex Builder, has recently been released by Adobe to program mobile applications. This IDE, based on Eclipse, offers facilities to code in ActionScript and MXML. As with the Android SDK, there is an emulator that facilitates the development significantly.

Table 2. Comparison of the principal existing SDKs to develop on the Android platform

Features             Flex Hero SDK            Android SDK
Version              Flex 4.5 Hero            Froyo 2.2
IDE                  Flex Builder Burrito     Eclipse
Language             ActionScript 3 + MXML    Java+XML
Execution platform   Adobe Compatible         Android
3.3. Comparing Platforms
In order to improve our comparison, we developed our sample application on the two platforms. In Figure 2, it can be seen that there are no strong differences in the human-machine interaction experience between the two interfaces. Both can display and manipulate lists, radio buttons, text inputs and other graphical components.
Figure 2. Android SDK and Adobe Flex screens to enter vital signs of the patient.
4. Conclusion
Our constraints, needs and projects led us to prefer the Android OS, due to its compatibility with the largest number of devices and its open source policy. The selection of the SDK was more difficult, as both the Android SDK and the Flex SDK met most needs in terms of features for the development of a mobile application on the Android OS. The Flex SDK was finally chosen based on its portability to other platforms at comparable performance and ease of development.
References
[1] Prgomet M, Georgiou A, Westbrook JI. The Impact of Mobile Handheld Technology on Hospital Physicians' Work Practices and Patient Care: A Systematic Review. JAMIA 16 (2009), 792-801.
[2] Kubben P. Neurosurgical apps for iPhone, iPod Touch, iPad and Android. Surg Neurol Int 22 (2010), 89.
[3] Fischer S, et al. Handheld Computing in Medicine. JAMIA (2003), 139-149.
[4] Tschopp M, et al. Computer-based physician order entry: implementation of clinical pathways. Studies in Health Technology and Informatics (2009), 673-7.
[5] Borst F, et al. Happy birthday DIOGENE: a hospital information system born 20 years ago. International Journal of Medical Informatics 54 (1999), 157-167.
[6] Geissbuhler A, et al. Experience with an XML/HTTP-based federative approach to develop a hospital-wide clinical information system. Stud Health Technol Inform 84 (2001), 735-9.
[7] Payne D, Godlee F. The BMJ is on the iPad. BMJ 19 (2011).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-88
Modified Rand Method to Derive Quality Indicators: a Case Study in Cardiac Rehabilitation
Mariëtte VAN ENGEN-VERHEUL a,1, Hareld KEMPS a,b, Roderik KRAAIJENHAGEN c, Nicolette DE KEIZER a, Niels PEEK a
a Dept. of Medical Informatics, University of Amsterdam, Amsterdam, The Netherlands
b Dept. of Cardiology, Máxima Medical Centre, Veldhoven, The Netherlands
c NDDO Institute for Prevention and Early Diagnostics, Amsterdam, The Netherlands
Abstract. Quality indicators (QIs) are increasingly used to summarize quality of care and to give professionals feedback on their performance. We have previously developed a continuous multifaceted guideline implementation strategy that integrates computerized decision support with feedback on QIs and benchmarking. This paper focuses on the development of QIs, and presents results of a case study in the field of cardiac rehabilitation. We present a modified Rand method that combines results from a literature search and guideline review with the knowledge of an expert and a patient panel in an extensive rating and consensus procedure. All sources contributed to the final set of 18 QIs for cardiac rehabilitation.
Keywords. Quality Indicators, Health Care; Cardiac Rehabilitation
1. Introduction
Improving quality and outcomes of care is a central theme in current health care policy. Clinical practice guidelines are considered essential instruments to improve the quality of care, as their potential benefits are improved patient outcomes, reduced practice variation, and reduced costs. Despite wide promulgation, however, professionals often do not follow guideline recommendations. A frequently used classification of barriers to guideline implementation is a division into individual (‘internal’) and environmental (‘external’) barriers [1]. Internal barriers relate to professionals’ knowledge of and attitude towards guidelines. To improve these, computerized decision support (CDS) is known to be effective, because it can provide guideline-based recommendations at the time and place where clinical decisions are made [2]. However, medicine is largely practiced as part of a team and embedded within complex organizations. Professionals may also encounter external barriers which hamper their ability to execute guidelines. These stem from environmental factors related to the team, organisation or health system they work in. It is therefore important to apply an implementation strategy with supplementary components directed at both internal and external barriers [1].
1 Corresponding author: M.M. van Engen-Verheul, Dept. of Medical Informatics, University of Amsterdam, PO Box 22700, 1100 DD Amsterdam, The Netherlands; E-mail: [email protected].
Feedback on health care performance and outcomes has been shown to be an effective quality improvement method to overcome external barriers, and can be used in addition to CDS [3]. It prompts professionals to change their behaviour if they receive feedback that their practice does not meet benchmark values (e.g., national target values or average performance within a peer group). Feedback reports contain results on quality indicators (QIs), i.e. quantitative measures to monitor and evaluate the quality of particular health care processes that affect patient outcomes [3]. QIs help professionals and their managers to identify suboptimal care and opportunities to improve quality and outcomes of care. Several methods exist for developing QIs, each with strengths and limitations. The first goal of this paper is to present a comprehensive method, which combines strengths from multiple methods, to develop a QI set. The second goal is to apply our method and present a QI set developed during a case study in the field of cardiac rehabilitation (CR).
2. Methods
To develop a QI set, a procedure developed by the Rand Corporation [4] is often used. Like other QI development methods, this procedure combines scientific evidence and expert opinion using a consensus technique. Preliminary QIs extracted from the literature are anonymously rated by an expert panel. In a subsequent round the panel meets to discuss, rerate and gain consensus. Criticisms of the Rand procedure include the lack of transparency in applying the definition of appropriate care, and the weak reliability of the rating and consensus procedures. The lack of patient involvement and the fact that clinical practice guidelines are not consulted are also mentioned [5]. To overcome these criticisms, we have modified the Rand procedure with successful elements of rating and consensus procedures from other QI development methods. First, we defined appropriate care based on specific judgement criteria from the Organisation for Economic Co-operation and Development (OECD) [6]. Secondly, we increased the reliability of the rating procedure by using a 5-point Likert scale for each criterion, as is often used in the Delphi technique [7]. Thirdly, we structured the consensus procedure during the discussion meeting of the expert panel by applying the Nominal Group Technique (NGT) [8]. Finally, we extended the number of consulted sources for QIs, adding a patient panel and a review of clinical practice guidelines in CR.
Case study – We applied our modified Rand method to the field of CR. CR is a multidisciplinary therapy to support heart patients in recovering from a cardiac incident or intervention, and aims to improve their overall physical, mental and social functioning. Consistent with international guidelines, the Dutch guidelines for CR state that patients should be offered an individualized rehabilitation programme based on a needs assessment procedure. The guidelines mention all items which need to be collected during this procedure. An EPR with CDS facilities, based on the guidelines, was developed to overcome internal barriers and evaluated in a cluster randomized trial. It was shown that CDS considerably improved guideline adherence. However, the trial also revealed persisting barriers to implementation of the guidelines at organisational levels [9]. To overcome these external barriers as well, we developed a multifaceted guideline implementation strategy, which expands our CDS intervention with a benchmark-feedback loop including feedback reports on QIs [10].
3. Results
The modified Rand method (see Figure 1) consists of consultation of four sources (experts, patients, literature and guidelines) to collect information for QIs. This information is translated into a draft QI set, which is rated on paper by the expert panel. Finally, the expert panel meets to discuss and gain consensus on the final QIs. The steps in Figure 1 are now described in more detail, followed by their application in the case study.
Expert and patient panel – A questionnaire about quality characteristics is sent to consult both an expert panel and a patient panel. The expert panel should include professionals from all disciplines involved in the field of interest. They are asked to mention characteristics of an excellent care service and what they would need to know about another clinic to assess its quality. The patient panel is asked to describe positive and negative experiences during their treatment. From the answers provided by experts and patients, quality characteristics of the health services are abstracted.
Literature Search – Search terms concerning the field of interest (e.g., CR) are combined with MeSH terms and keywords referring to quality assurance, process and outcome assessment or quality indicators. From all included articles, QIs and outcome measures related to high quality of the health service are abstracted.
Review of Guidelines – The prevailing guidelines in the field of interest are reviewed to identify procedural and structural properties of high quality health services. Guidelines do not often describe a desirable level of outcomes of care, but they do mention minimum procedures, standards and facilities that services should include.
Case study: We invited 40 Dutch experts to our expert panel, of whom 38 agreed to participate. The experts included professionals from all disciplines involved in CR (cardiologists, rehabilitation and sport physicians, company doctors, nurse practitioners, physiotherapists, psychologists, social workers, dieticians, and CR managers). We also asked 30 patients of four CR clinics to take part in the patient panel, of whom 15 participated. Overall, 92 different quality characteristics of CR were mentioned. The PubMed search identified 314 articles in which 15 QIs and 24 different outcome measures of CR services were mentioned. The most frequently used outcome measures related to exercise therapy and quality of life. Few outcome measures related to patient satisfaction and professional performance. Furthermore, the CR guidelines in the Netherlands were reviewed, from which we extracted 34 procedural quality characteristics and three structural properties of CR services.
Translation of Results – The results of the four sources are translated into a draft QI set using the OECD framework on QIs [6]. This framework describes how concepts of health care should be measured, by grouping them into dimensions and formulating them according to criteria of importance, scientific soundness and feasibility.
Rating on Paper – The draft QI set is presented to the expert panel. They rate all QIs on a Likert scale from 1 (total disagreement) to 5 (total agreement) based on three criteria: (i) the QI has a clear relationship with one or more patient outcomes; (ii) the QI can be a departure point for improvement actions; (iii) information regarding the QI is easy to record [6]. For each QI, the mean score per criterion is computed, together with the spread of the response levels of the individual experts. The rated QI set is ranked and shortened by mean score.
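As a concrete illustration of the rating step, the sketch below averages 5-point Likert ratings per indicator across the three criteria and all raters, then ranks the draft set by mean overall score. The indicator names and ratings are invented for the example; they are not the actual panel data.

```python
# Illustrative ranking of draft quality indicators by mean Likert score,
# as in the paper's rating step. The indicators and ratings below are
# invented placeholders, not the actual expert-panel data.
from statistics import mean

# ratings[indicator] = one tuple per expert, each tuple holding the scores
# on the three criteria (1-5 Likert scale).
ratings = {
    "Lifestyle assessed during needs assessment": [(5, 5, 4), (4, 5, 4), (5, 4, 5)],
    "Patients improve cognitive functioning":     [(3, 3, 2), (3, 2, 3), (4, 3, 3)],
    "Patients receive a discharge letter":        [(4, 4, 5), (5, 4, 4), (4, 5, 4)],
}

ranked = sorted(
    ((mean(score for expert in scores for score in expert), qi)
     for qi, scores in ratings.items()),
    reverse=True,
)

for rank, (overall, qi) in enumerate(ranked, start=1):
    print(f"{rank}. {qi}: mean overall score {overall:.2f}")
```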
Figure 1. Overview of the modified Rand method and the results of our case study (in Italics).
Case study: Based on the four sources we assembled a draft set of 81 QIs for CR. The draft set was structured into four clusters reflecting the chronological phases of CR (referral, needs assessment, evaluation, and follow-up) and one cluster concerning the organization of care. In each cluster the QIs were classified as relating to either process, structure, or outcomes of care. Twenty-two experts rated the draft QI set. The highest ranked QI (patient’s lifestyle is assessed during needs assessment for CR) had a mean overall score of 4.47. The lowest ranked QI (CR patients improve their cognitive functioning) had a score of 2.94.
Group Discussion – The NGT is used to lead the expert panel towards consensus through rounds of debate, discussion and an anonymous voting process [8]. Input for the discussion is the ranked QI set; the experts discuss the set and select the final QIs.
Case study: We presented the QIs with their ranks, structured into clusters, to the expert panel. The panel voted for the QIs they preferred in an anonymous voting procedure. The results were shown on a screen and discussed. After hearing different opinions, the panel voted again in the light of the discussion to gain consensus. The final QI set and the original sources of the QIs are presented in Table 1.
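A minimal sketch of such an anonymous voting round, assuming each expert simply names the QIs they prefer, is shown below; the ballots and QI names are invented for the example.

```python
# Illustrative tally of one anonymous voting round in the group discussion
# (Nominal Group Technique). Ballots and QI names are invented placeholders.
from collections import Counter

ballots = [
    {"Lifestyle assessed", "Discharge letter", "Exercise capacity improves"},
    {"Lifestyle assessed", "Exercise capacity improves"},
    {"Discharge letter", "Quality of life improves", "Lifestyle assessed"},
]

tally = Counter(qi for ballot in ballots for qi in ballot)

# Results are shown on screen and discussed, after which the panel votes again.
for qi, votes in tally.most_common():
    print(f"{qi}: {votes} vote(s)")
```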
4. Discussion
In the current study we modified the Rand method to develop QIs for measuring and reporting on quality of care. In our method, results from a literature and guideline search are combined with the knowledge of an expert and a patient panel in an extensive rating and consensus procedure. We applied our method to the field of CR, where the final QI set showed that the four sources are complementary. We believe that using all sources results in a well-founded QI set covering all aspects of the health service of interest. Notably, the expert panel mentioned only a few QIs related to outcomes of care. Furthermore, many QIs mentioned by the patient panel did not make it to the final QI set because they were opinion-based (e.g., friendly treatment). Our experience with the multidisciplinary expert panel during the group discussion was positive. Because of the early involvement and the reflection of all disciplines in CR, the panel showed great commitment to the QI development process. We believe this will ease implementation and acceptability of the final QI set in daily practice. However, actual benefits (quality improvement) and costs (registration time) can only be assessed afterwards.
Table 1. Final QI set for CR (E = Expert panel, P = Patient panel, L = Literature and G = Guidelines).

Nr   Type        Quality indicator
1    Outcome     Patients improve their exercise capacity during rehabilitation
2    Outcome     Patients improve their quality of life during rehabilitation
3    Outcome     Amount of time needed to start resumption of work
4    Outcome     Patients quit smoking
5    Outcome     Patients meet the physical activity norms
6    Process     Average time between hospital discharge and start of rehabilitation
7    Process     Complete data collection during needs assessment for rehabilitation
8    Process     Patients are offered a rehabilitation programme tailored to their needs
9    Process     Patients finish their rehabilitation programme
10   Process     Rehabilitation goals are evaluated afterwards
11   Process     Cardiovascular risk profile is evaluated afterwards
12   Process     Patients receive a discharge letter
13   Process     Cardiologists receive a report after the rehabilitation
14   Structure   Rehabilitation professionals work with a multidisciplinary patient record
15   Structure   Specialized education for patients with chronic heart failure
16   Structure   Long-term patient outcomes are assessed
17   Structure   Clinics perform internal evaluations and quality improvement
18   Structure   Patients participate in patient satisfaction research
To improve the data collection needed to report on QIs, the QI database should ideally be linked to an already existing data collection system such as an EPR. The next step in our research project is to implement the QI set in all clinics that already use an EPR for CR with CDS functionalities. During a multicenter randomized clinical trial, the clinics will also receive feedback on the developed QI set, in combination with educational meetings, to overcome both internal and external barriers to guideline implementation. We expect that our modified Rand method to develop QIs can also be applied in other medical domains to further improve quality and outcomes of care.
Acknowledgements. The authors would like to thank the Committee for Cardiovascular Prevention and Rehabilitation of the Netherlands Society of Cardiology and the National Multidisciplinary Assembly on Cardiac Rehabilitation for their contribution to the development of QIs for CR.
References
[1] Cabana MD, Rand CS, Powe NR et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458-65.
[2] Garg AX, Adhikari NK, McDonald H et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223-38.
[3] Jamtvedt G, Young JM, Kristoffersen DT et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2006;(2):CD000259.
[4] Brook RH, Chassin MR, Fink A et al. A method for the detailed assessment of the appropriateness of medical technologies. Int J Technol Assess Health Care 1986;2:53-63.
[5] Hicks N. Some observations on attempts to measure appropriateness of care. BMJ 1994;309(6956):730-3.
[6] Kelley E, Hurst J. Health care quality indicators project; Conceptual framework paper. OECD; 2006.
[7] Moscovice I, Armstrong P, Shortell S et al. Health services research for decision-makers: the use of the Delphi technique to determine health priorities. J Health Polit Policy Law 1977;2:388-410.
[8] Dunham RB. NGT: a users' guide. University of Wisconsin School of Business; 1998.
[9] Goud R, de Keizer NF, ter Riet G et al. Effect of guideline based CDS on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ 2009;338:1440-9.
[10] Van Engen-Verheul M, de Keizer N, Hellemans I et al. Design of a continuous multifaceted guideline-implementation strategy based on CDS. Stud Health Technol Inform 2010;160:836-40.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-93
A Cloud-Based Semantic Wiki for User Training in Healthcare Process Management
D. PAPAKONSTANTINOU a,1, M. POULYMENOPOULOU a, F. MALAMATENIOU a, and G. VASSILACOPOULOS a
a Department of Digital Systems, University of Piraeus, Piraeus 185 34, Greece
Abstract. Successful healthcare process design requires active participation of users who are familiar with the cooperative and collaborative nature of healthcare delivery, expressed in terms of healthcare processes. Hence, reusable, flexible, agile and adaptable training material is needed, with the objective of enabling users to instill their knowledge and expertise in healthcare process management and (re)configuration activities. To this end, social software, such as a wiki, could be used, as it supports cooperation and collaboration anytime and anywhere, combined with semantic web technology that enables structuring pieces of information for easy retrieval, reuse and exchange between different systems and tools. In this paper a semantic wiki is presented as a means for developing training material for healthcare providers regarding healthcare process management. The semantic wiki should act as a collective online memory containing training material that is accessible to authorized users, thus enhancing the training process with collaboration and cooperation capabilities. It is proposed that the wiki be stored in a secure virtual private cloud that is accessible from anywhere, albeit in an excessively open environment, while meeting the requirements of redundancy, high performance and auto-scaling.
Keywords. Semantic wiki; healthcare processes; user training; cloud computing.
1. Introduction
The drive in healthcare to reduce cost and improve quality requires enhanced cooperation and collaboration among disparate healthcare units. Hence, considerable attention has been paid to designing process models of healthcare delivery and to developing healthcare information systems that support intra- and inter-organizational healthcare processes, focusing on reducing (or eliminating) medical errors and improving quality of care [1,2]. In many circumstances, a lack of patient care coordination and teamwork is identified. Well-defined healthcare processes and interoperable health IT will enable virtual care teams to cooperate in the care of patients across organizational boundaries [1,2]. Thus, one important consideration in healthcare process management is to enable and foster active user participation, since users are required to think of their activities as constituents of healthcare processes and, hence, to instil their knowledge and expertise in the definition and automation of healthcare processes while paying due regard to culture.
1 Corresponding Author
The development and management of value-added healthcare processes requires extensive and continuing education of healthcare professionals. Properly designed user training material should enable users to understand process-oriented healthcare delivery, visualize intra- and inter-organizational healthcare processes, assimilate the logic underlying existing processes and identify areas where redesigning existing processes is required in order to adapt to today’s dynamic healthcare environment [1,2]. The knowledge that inter-organizational healthcare processes contain (e.g., flow of activities, resources involved, physical location, coordination requirements) and the data content must be made explicit through training, so that users understand the requirements of the environment and participate collaboratively in its development.
This paper focuses on the objective of empowering user-to-analyst interaction, being particularly concerned with designing and developing relevant training material for healthcare professionals. In particular, to enable users to understand healthcare process modeling concepts, we use a semantic wiki as a collaborative tool that highlights the relevant knowledge expressed by a domain-specific ontology and is used to develop and provide the training material. The training system architecture is based on a virtual private cloud environment to allow authorized healthcare professionals to modify the training material in order to adapt healthcare processes to changing requirements, share healthcare process definitions and access them anytime and from anywhere. Further, the proposed cloud-based semantic wiki possesses several advantages, including access control (who has access to the information stored on the cloud and under what conditions), redundancy (effective recovery in case of machine/data failure), high performance and auto-scaling (capacity additions/removals in a cloud infrastructure based on actual usage and without human intervention) [3,4].
2. Motivating Scenario
Healthcare delivery is, nowadays, characterized by the need for increased cooperation and collaboration among functional units. Hence, considerable attention has been paid to designing new healthcare processes or redesigning existing ones according to current requirements [1,2,5]. This requires active user participation so that users’ knowledge and expertise are incorporated into healthcare process definitions. In turn, this requires effective user-to-analyst interaction, which can be facilitated by a user awareness activity on healthcare process management concepts; this calls for suitable and adaptable training content to be made available to users anytime and from anywhere.
To illustrate the main principles of the training approach proposed, consider a healthcare process scenario concerned with drug prescriptions (ePrescribing service). The benefits accrued from the implementation of an ePrescribing service are many: for example, the service puts eligibility, insurance and formulary information at the physician’s fingertips at the time of prescribing. This enables physicians to select medications that are on formulary and are covered by the patient’s drug insurance. It also informs physicians of lower cost alternatives such as generic drugs. In addition, physicians can access a timely and clinically sound view of a patient’s medication history at the point of care, decreasing the risk of preventable medication errors [6]. This scenario shows an example implementation of a cloud-based ePrescribing service: a physician uses an ePrescribing application which is interfaced to a PHR
system, reads the summary record of his/her current patient and selects one or more drugs from the Insurance Organization’s formulary, based on information regarding eligibility status and the ID numbers of the medication list covered. Upon selection of one or more drugs by the physician, the ePrescribing application performs validation checks (e.g. regarding drug interactions, patient allergies and medication history) to either clear the prescription or return alert information to the physician. In case of a clear prescription, the prescription is stored, as pending, in the medication profile area of the insurance organization’s designated data center. A pharmacist connects to the insurance organization’s cloud infrastructure, selects the patient’s prescription and executes it. Thus, the patient, or a delegated person thereof, collects the prescribed drugs from a pharmacy of his/her choice [6]. Figure 1 shows an ontology for the ePrescribing process which has been constructed in order to be used by the proposed semantic wiki. Design or redesign of a healthcare process model can be performed by manipulating already defined objects, providing flexibility, agility and reusability of the training material designed.
Figure 1. A training ontology for the e-prescription process
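To show what such an ontology might look like in machine-readable form, the sketch below builds a small fragment with rdflib: a few classes and relations suggested by the scenario (physician, patient, prescription, drug, pharmacist). The class and property names are assumptions for illustration; they are not the actual ontology shown in Figure 1.

```python
# Minimal RDF-style sketch of an ePrescribing training ontology fragment,
# built with rdflib. Class and property names are illustrative assumptions,
# not the actual ontology of Figure 1.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EPX = Namespace("http://example.org/eprescribing#")  # placeholder namespace
g = Graph()
g.bind("epx", EPX)

# Classes suggested by the motivating scenario.
for cls in ("Physician", "Pharmacist", "Patient", "Prescription", "Drug"):
    g.add((EPX[cls], RDF.type, RDFS.Class))

# Relations between the classes.
g.add((EPX.prescribes, RDF.type, RDF.Property))
g.add((EPX.prescribes, RDFS.domain, EPX.Physician))
g.add((EPX.prescribes, RDFS.range, EPX.Prescription))

g.add((EPX.contains, RDF.type, RDF.Property))
g.add((EPX.contains, RDFS.domain, EPX.Prescription))
g.add((EPX.contains, RDFS.range, EPX.Drug))

g.add((EPX.dispenses, RDF.type, RDF.Property))
g.add((EPX.dispenses, RDFS.domain, EPX.Pharmacist))
g.add((EPX.dispenses, RDFS.range, EPX.Prescription))

g.add((EPX.Prescription, RDFS.comment,
       Literal("Validated against drug interactions, allergies and history.")))

print(g.serialize(format="turtle"))
```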
3. A Semantic Wiki Architecture on a Cloud Infrastructure
The need to provide training material on healthcare process management concepts requires a collaborative and cooperative training environment, so that users not only acquire knowledge about the training objects but also learn the relations between them. Wikis allow for collaborative knowledge and can be helpful in learning models [7,8]. In particular, semantic wikis allow users to add semantic annotations to the wiki content, offering better navigation and information retrieval [3,7]. Hence, nowadays, semantic wikis constitute a popular semantically enhanced collaborative knowledge management system, mostly because they tend to make semantic technologies accessible to non-expert users and because they make the inherent structure of a wiki accessible to machines beyond mere navigation [7]. Users can query the annotations directly or create views from such queries, navigate the wiki using the annotated relations and introduce background knowledge to the system.
In this paper, a prototype knowledge acquisition system is presented to support training in healthcare process management. It consists of the following modules:
• An LMS system called JoomlaLMS, which supports interfaces with Web 2.0 technologies and external applications.
• The Semantic MediaWiki (SMW), as a tool to acquire and share knowledge.
• The OntologyEditor extension of SMW, to develop ontologies and ensure consistency of the knowledge base by a set of knowledge repair algorithms.
• The Halo extension of SMW, to facilitate the authoring, retrieval, navigation and organization of semantic data in SMW.
• The Halo Access Control extension of SMW, to protect content, allowing easy administration of access rights and user groups [8].
The proposed training system has been designed to be available on demand (i.e. when and where needed). Hence, it has been implemented on the Amazon cloud infrastructure, which provides a flexible, scalable and low-cost cloud computing platform. Amazon’s web service Elastic Compute Cloud (EC2) was used to host the required software. Users of the semantic wiki collaborate using the same shared datastore for storing and retrieving the semantic annotations. Amazon Simple Storage Service (S3) is used to provide a highly durable storage infrastructure, while S3 security enables determining how, when, and to whom the information stored on the cloud is exposed, also using proven cryptographic methods to authenticate users [9,10]. For networking and security issues, Amazon Virtual Private Cloud (Amazon VPC) was used, which integrates with EC2 and functions as a secure bridge, enabling the healthcare organization that provides the training material to connect its existing infrastructure to a set of isolated Amazon web services and compute resources via a Virtual Private Network (VPN) connection, using industry-standard encrypted IPsec VPN connections [10,11]. In this context, it is ensured that only users with specific IPsec VPN connections can access the training material included in the semantic wiki developed.
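As a rough illustration of provisioning such an environment programmatically, the sketch below uses the present-day boto3 SDK (which postdates the paper and its 2010-era AWS tooling) to create a VPC, a subnet, an EC2 instance for the wiki stack and an S3 bucket for its datastore. The AMI, region, instance type and bucket name are placeholders, not values used in the actual deployment.

```python
# Illustrative provisioning of a private cloud environment for the semantic
# wiki using boto3 (a modern AWS SDK; the original work used 2010-era AWS
# tooling). AMI, region, instance type and bucket name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

# Isolated network for the training system (Amazon VPC).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

# Compute instance hosting JoomlaLMS and Semantic MediaWiki (Amazon EC2).
instances = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with the wiki stack
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["SubnetId"],
)

# Durable storage for the wiki's shared semantic datastore (Amazon S3).
s3.create_bucket(
    Bucket="semantic-wiki-datastore-example",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

print("instance:", instances["Instances"][0]["InstanceId"])
```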
4. Results The approach proposed in this paper is concerned with capturing the knowledge found in healthcare processes and in structuring this knowledge in terms of an ontology that contains all relative concepts, instances of concepts and relations between them. The semantic wiki relates the basic entities defined in the ontology with the corresponding text. Thus, the training material user is enabled to search through the semantic wiki for an ontology construct, understand its meaning and usage with the help of the supportive text and navigate to associated ontology constructs. In this way, an in-depth understanding of each healthcare process is ensured. Semantic wikis address core problems of traditional wikis: consistency of content (same information on many pages), accessing knowledge (finding and comparing knowledge from different pages) and reusing knowledge. With regard to the creator of the training material, the main advantage of the proposed model is content reusability. From the trainee’s point of view, the main advantages are semantic search, knowledge or conceptual navigation and knowledge dissemination and ease of use without further education and training. The cloud solution has significant advantages to healthcare organizations such as cost saving, accelerated time to delivery, offloaded maintenance and management to
the cloud, elastic resources, redundancy and scalability. More importantly, due to the information sharing capability, healthcare professionals can share standardized and best-practice medical protocols, thus improving the quality of care provided. In addition, the virtual private cloud approach ensures that training content will always be available to authorized users.
5. Concluding Remarks Healthcare is an increasingly collaborative enterprise involving a variety of activities (administrative, paramedical, nursing and medical) that are interconnected into healthcare processes in a manifold manner and are performed within and outside healthcare organizations. This paper takes the stance that a process-oriented view of healthcare delivery contributes to cost containment and quality improvement, that healthcare processes should be designed (or redesigned) through active user participation, and that healthcare professionals need an effective training aid that facilitates their participation. Thus, a prototype approach to structuring training content in healthcare process management is proposed. The approach is based on a semantic wiki implemented in a cloud environment: it defines a general ontology, refines it by adding all required ontology constructs, implements the semantic wiki infrastructure and deploys the semantic wiki in a virtual private cloud. Given the encouraging results of the approach described, we intend to evaluate it extensively using more complex healthcare processes.
References
[1] Makris A., Papakonstantinou D., Malamateniou F., Vassilacopoulos G. Using ontology-based knowledge networks for user training in managing healthcare processes, International Journal of Technology Management, 47 (2009), Nos 1/2/3, 5-21.
[2] Wieringa R.J., Blanken H.M., Fokkinga M.M. and Grefen P.W.P.J. Aligning Application Architecture to the Business Context, Lecture Notes in Computer Science, Springer-Verlag, 2681 (2003), 209-225.
[3] Oren E. A semantic wiki approach for integrated data access for different workflow meta-models, Digital Enterprise Research Institute, 2006.
[4] Fitzgerald J. and Chalk D. Cloud Technology: Clear Benefits: The Emerging Role of Cloud Computing in Healthcare, DELL Services, 2010.
[5] Lenz R. and Kuhn K.A. Towards a continuous evolution and adaptation of information systems in healthcare, International Journal of Medical Informatics, 73(1) (2004), 75-89.
[6] Puustjärvi J. and Puustjärvi L. Improving the Quality of Medication by Semantic Web Technologies, Proceedings of the 12th Finnish Artificial Intelligence Conference (STeP), 2006, Helsinki, Finland.
[7] Landefeld R. and Sack H. Collaborating web-publishing with a semantic wiki, Studies in Computational Intelligence, 221 (2009), 129-140.
[8] Bratsas C., Kapsas G., Konstantinidis S., Koutsouridis G. and Bamidis P. A Semantic Wiki within Moodle for Greek Medical Education, Proceedings of CBMS 2009: The 22nd IEEE International Symposium on Computer-Based Medical Systems, 2009, New Mexico, USA.
[9] Buyya R., Yeo C., Venugopal S., Broberg J., Brandic I. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems, 25 (2009), 599-616.
[10] Baron J., Schneider R. Storage Options in the AWS Cloud, Amazon Web Services, 2010. Available from URL http://media.amazonwebservices.com/AWS_Storage_Options.pdf
[11] Amazon Web Services, Overview of Amazon Web Services, 2010. Available from URL http://media.amazonwebservices.com/AWS_Overview.pdf
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-98
Reference Architecture of Application Services for Personal Wellbeing Information Management
Mika TUOMAINENa,1, Juha MYKKÄNENa
a University of Eastern Finland, School of Computing, HIS R&D Unit, Kuopio, Finland
Abstract. Personal information management has been proposed as an important enabler for individual empowerment concerning citizens' wellbeing and health information. In the MyWellbeing project in Finland, a strictly citizen-driven concept of "Coper" and related architectural and functional guidelines have been specified. We present a reference architecture and a set of identified application services to support personal wellbeing information management. In addition, the related standards and developments are discussed. Keywords. Citizen empowerment, service-oriented architecture, standards, personal health records, interoperability
1. Introduction According to many political agendas, the individual's, citizen's or consumer's personal needs must be at the centre of the development of high-quality health and wellness-related information services [1,2]. In healthcare, the transition of the health care system from a provider-centric to a patient-centric or consumer view has been seen as both necessary and inevitable [3]. This requires empowering individuals to better manage their own wellbeing and health care. Personal Information Management (PIM) solutions have been suggested to promote citizen or patient empowerment [4,5]. The MyWellbeing (OmaHyvinvointi) project was a national-level R&D initiative in Finland which focused on the citizen as the center of the services ecosystem and developed conceptual and concrete tools and solutions for personal empowerment. The project focused on a holistic concept of a "Coper" to explore and define the features of an aid for personal wellbeing. The Coper is designed to help citizens cope with the services they use and to manage them. It also promotes coordination with and between service providers. In addition, virtual communities and social networks provide information and support, and aid citizens in decision-making [e.g. 6]. Platform and service provider interchangeability and use through multiple channels such as internet portals or mobile phones are required characteristics of the Coper. The Coper is not an implemented product as such, but many of its features are supported by existing applications such as personal health records (PHR applications), electronic government services and personal time and content management tools. 1
University of Eastern Finland, HIS R&D Unit, POB 1627, FI-70211 Kuopio, Finland, E-mail: [email protected]
The basic idea of the Coper in healthcare closely resembles that of Personal Health Record (PHR) Systems [e.g. 7,8]. There are, however, many different content, use and implementation models related to PHRs, mainly due to different business models [9]. PHRs or other self-managed digital information collections are also mostly absent in collaborative health information system typologies [e.g. 10]. Furthermore, to be able to support information management in different wellbeing-related services, a reference architecture which could be populated by specific services and components was needed.
2. Materials and Methods The objective of this work was to specify a reference architecture for the Coper and to identify components for personal wellbeing management solutions. A service-oriented architecture (SOA) approach which facilitates reuse and integration of application services [11,12] was used. In addition, a classification of services was pursued. The work is based on literature and standards survey, experience from citizen eService development projects, existing products, and results from nine workshops of the project participants. The workshop participants included two EPR vendor companies, vendor companies for community, citizen and knowledge services, message delivery operator, five research institutes, and four health service provider organizations. The literature survey covered articles on personal information management, studies and comparisons of PHRs, and standards such as [8]. The survey confirmed that many PHR solutions share main features of the Coper. Experience from efforts such as guidelines for eBooking [11] stressed the need to identify benefits for both service providers and consumers. Four out of nine workshops in the project focused on service implementation and specification. The services were further prioritized, identifying combinations which could be realized using the existing offerings. The work was also harmonized with the information architecture for the Coper, and the prototypes for the case groups: persons retiring from work life and families having a baby.
3. Results Instead of taking an enterprise standpoint, the project used the analysis of citizen needs and activities as the starting point for the solutions and the architecture. The architecture is based on a dual model of services. The citizen has the right to receive a copy of documents from wellbeing services. Information is traditionally stored in the providers' professional systems, but the customer's copy is under the control of the individual and can be used to combine information from various services. Such combination, if performed by service providers, is often difficult due to legal and privacy constraints. In our design, the wellbeing services offered to the citizen are reflected in the identified software services. The SOA services are classified according to functional, platform, information or interactivity requirements (see Figure 1). The classification is generic, and all identified Coper services and functionalities can be located in one of the classes. In addition to the core functionalities of the Coper, the personal information repository and user interfaces are basic building blocks of Coper realizations. Based on the results of the workshops and surveys, a total of 62 identified services were classified (see Figure 2). Core functionalities of the Coper include basic entry, management and organization of data and documents. Many basic functionalities
follow those of PHR systems [8]. The personal information repository holds the documents received from different sources and personal data, in both structured and unstructured formats. Viewing, sorting and searching functions are supported by web-based, mobile or desktop user interfaces. Added value presentation services for different user devices and presentation personalization can also be provided.
Figure 1. Classification of services supporting personal information management.
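A minimal sketch of how this classification could be represented in software is given below; the category labels follow the text, while the registry structure and the concrete service names are only illustrative.

```python
# Illustrative registry of Coper application services grouped by category.
# The category labels follow the classification in the text; the structure
# and the example service names are assumptions.
REGISTRY = {
    "core functionality": ["entry and organization of data and documents"],
    "personal information repository": ["document and personal data storage"],
    "user interfaces": ["web", "mobile", "desktop"],
    "platform services": ["secure messaging", "identification and access control"],
    "information source services": ["provider system import", "document scanning"],
    "added value services": ["reminders", "peer communities", "eBooking"],
}

def register(service, category):
    """Add a service to one of the generic categories."""
    REGISTRY.setdefault(category, []).append(service)

register("prescription renewal", "added value services")
print(sum(len(v) for v in REGISTRY.values()), "services registered")
```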
Various platform services support communication, information management or user management. Communication platform includes secure communication and messaging services, in addition to support services such as technical service directories. Information management platform consists of services for data management such as synchronizing information repositories or translations between different presentation formats. User management platform services include identification, authorization and access control mechanisms supporting also access logs and digital signatures.
Figure 2. Specific personal wellbeing information management services in different service categories.
Information source services are the primary communication channel for importing external data into the Coper, in addition to user-entered data. Connections to the systems of the service providers (including the national eArchive in Finland), document scanning services, as well as connections to different types of personal measurement instruments are supported by these services through dedicated data import interfaces. In addition, there are various added value services which provide functionality related to personal preferences, to communication with the selected service providers or communities, or to combining personal information with knowledge repositories. Personal added value services include and link personal tools such as calendars, reminders, diaries or personal trend indicators. Community added value services offer peer-to-peer and other communication and information sharing channels for selected communities. Knowledge services link personal information to external knowledge, interpretation or risk analysis, individual decision support or patient instructions. Finally, provider collaboration services enable transactions and information sharing with the service providers, including eBooking, service directories, prescription renewal and communication with professionals during the patient journey [13].
4. Discussion and Conclusions In contrast to many provider-driven initiatives, only a small part of the Coper services focuses on the traditional eServices of health service providers. Many services were identified in ready-made products or completed projects. Prototype implementations of information management and sharing for maternity, as well as document scanning services for persons retiring, were implemented in the project. In addition, detailed interfaces were specified for data import from health service provider systems to the Coper and for citizen-oriented decision support. It is not reasonable to expect any given system to contain and integrate all the services, although individual implementations of all services can already be identified. In addition, personal preferences hardly require all the services to be present. The identification of functionalities as SOA services enables stepwise development and individually-driven combination of various services. In addition, service categories promote a uniform architecture, which is needed to ensure the interoperability of various components. There are readily available standards for many parts of the architecture for the integration of services. Many standards are based on solutions developed for health professionals, but generic standards can also be utilized. Especially relevant are standards for the structure and semantics of health information, which are in a key position for linking personal information to knowledge or provider collaboration services. In our project, the most relevant of these were the national HL7 CDA implementation guides, the HL7 Continuity of Care Document specifications and the IHE Exchange of Personal Health Record Content profile. Open standards are also available for device connectivity (including the Continua and ISO/IEEE 11073 specifications), for provider collaboration such as eBooking, and for community and web user interfaces. The service classification is generic and can be used to group electronic services in general. For example, the services and interface standards for personal health systems presented in [14] and [15] can be positioned using the framework. The architecture is used and refined in relation to an eService ecosystem architecture produced in services ecosystem research in Finland (as part of the Mind and Body programme) and
promoted for a national programme for citizen eServices. Several services have been further specified and refined by the participating organizations. In addition to this architecture for the services, infrastructure decisions such as the use of integration platforms, as well as rules for the information architecture including metadata are among the key decisions to be agreed upon within a given ecosystem of services. From citizen perspective, personal wellbeing information management de-couples the individual from health or wellness service provision and avoids several obstacles related to service providers' view. The concept can be extended to cover personal health records, many different domains (healthcare, insurance, social services etc.), interactive eServices and community and knowledge links. The presented classification of individual-oriented application services serves as a step towards open and extensible ecosystem of electronic wellbeing services. Prioritization of services to be implemented and shared, however, must be based on the needs of consumers and aspects which can be implemented by service producers with an acceptable threshold in a sustainable way. Acknowledgements. The authors would like to thank all the members of the MyWellbeing project, the participants of the workshops and the members of the Mind and Body programme.
References
[1] Ministerial declaration of eHealth 2003 conference, Brussels, 22 May 2003.
[2] Detmer D, Steen E. Learning from abroad: Lessons and questions on personal health records for national policy. AARP Public Policy Institute, Research Report 2006-10, March 2006.
[3] Castro D. Explaining international IT application leadership: Health IT. The Information Technology & Innovation Foundation, September 2009.
[4] Angst CM, Agarwal R. Patients Take Control: Individual Empowerment with Personal Health Records. Working Paper No. RHS-06-013, 2004.
[5] Pratt W, Unruh K, Civan A, Skeels M. Personal Health Information Management. Commun ACM 2006;49(1):51-55.
[6] Eysenbach G. Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness. J Med Internet Res 2008;10(3):e22.
[7] Iakovidis I. Towards personal health record: current situation, obstacles and trends in implementation of electronic healthcare record in Europe. Int J Med Inf 1998;52(1-3):105-115.
[8] HL7 Personal Health Record Systems Functional Model, Release 1, Draft Standard for Trial Use, HL7 EHR Technical Committee, November 2007.
[9] Rocca M, Ritter J, de Faria Leao B, Reynolds M. ISO/HL7 Personal Health Record (PHR) Survey Results. Health Level Seven and International Standards Organization, 17 September 2008.
[10] Balka E, Björn P, Wagner I. Steps Toward a Typology for Health Informatics. Proceedings of CSCW'08, Nov 2008, San Diego, California, pp. 515-524.
[11] Mykkänen J, Tuomainen M, Kortekangas P, Niska A. Task Analysis and Application Services for Cross-Organizational Scheduling and Citizen eBooking. Proc. of MIE2009, IOS, 2009, pp. 332-336.
[12] HSSP Service Specification Development Framework, version 1.3, Healthcare Services Specification Project, Health Level Seven and Object Management Group, 2007.
[13] Richards T. Who is at the helm on patient journeys? BMJ 2007;335:76.
[14] Mikalsen M, Hanke S, Fuxreiter T, Walderhaug S, Wienhofen L. Interoperability Services in the MPOWER Ambient Assisted Living Platform. Proceedings of MIE 2009, IOS, 2009, pp. 366-370.
[15] Kaufman JH, Adams J, Bakalar R, Mounib E. Healthcare 2015 and Personal Health Records: A Standards Framework. Proceedings of IHIC 2008, Oct 2008, Crete, Greece, pp. 19-28.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-103
Development of a Web-Based Decision Support System for Insulin Self-Titration
A.C.R. SIMONa,b,1, F. HOLLEMANb, J.B. HOEKSTRAb, P.A. De CLERCQc, B.A. LEMKESb, J. HERMANIDESb, N. PEEKa
a Department of Medical Informatics, b Department of Internal Medicine, Academic Medical Center, Amsterdam, The Netherlands
c MEDECS BV, Eindhoven, The Netherlands
Abstract. Insulin is the most potent agent for the treatment of diabetes mellitus. However insulin treatment requires frequent evaluation of blood glucose levels and adjustment of the insulin dose. This process is called titration. To guide patients with type 2 diabetes using once-daily long-acting insulin, we have developed a web-based decision support system for insulin self-titration. The purpose of this paper is to provide an overview of the phases of development and the final design of the system. We reviewed the literature, consulted an expert panel, and conducted interviews with patients to elicit system requirements. This revealed four important aspects: the insulin titration algorithm, the handling of hypoglycemic events, telemedicine functionalities, and visiting frequency monitoring. We used these requirements to develop a fully functional system. Keywords. Clinical decision support systems, telemedicine, self care, diabetes mellitus, insulin
1. Introduction The prevalence of diabetes is increasing rapidly worldwide. The total number of people with diabetes is projected to rise from 171 million in 2000 to 366 million in 2030 [1]. Diabetes, a chronic metabolic disorder, is hallmarked by increased blood glucose levels. Serious long-term effects of high blood glucose levels are blindness, kidney failure and cardiovascular disease. The main therapies for treating diabetes are dietary adjustments, oral glucose-lowering drugs, and insulin. Insulin is the most potent agent in the therapeutic arsenal, but it requires frequent evaluation of blood glucose levels and adjustment of the insulin dose, a process which is called titration. Current clinical pathways for supporting type 2 diabetes patients in their titration of insulin involve either frequent clinical visits or routine visits supplemented by frequent telephone calls or e-mail contact. Both options involve exchanging information about blood glucose results and providing advice on adjusting treatment, but they are also very time-consuming. Delivery of care between visits improves how fast the patient reaches good glycemic control and reduces the risk of exposure to a high glycemic burden for prolonged periods of time. As internet access in patients' homes will continue to increase over the coming years, a web-based system to support self-management in insulin titration has the 1
Corresponding Author.
potential to reach a large number of patients at low cost. There already exist systems to guide patients with type 1 diabetes in calculating the optimal pre-meal short-acting insulin dose [2;3]. Existing systems for patients with type 2 diabetes mostly focus on providing weight management, physical activity and diet [4]. However, patients with type 2 diabetes initiating once daily insulin form a specific and relatively large patient group that also requires intensive titration of the insulin dose. They do not benefit from the systems for patients with type 1 diabetes because adjustment of long-acting insulin requires a different strategy than adjustment of short-acting insulin. For this reason, we decided to develop a web-based system that facilitates the clinical process of providing insulin dosing advice to patients with type 2 diabetes using any once-daily long-acting insulin, the Patient Assisting Net-Based Diabetes Insulin Titration (PANDIT) system. Patients should be able to access the PANDIT system on a frequent basis to receive insulin dosing advice. In addition, the system should support communication between patients and caregivers, and provide caregivers the possibility to overrule the system’s advice when they deem that this is necessary for safety reasons. PANDIT should also recognize potentially unsafe situations such as hypoglycemic events. The purpose of this paper is to provide an overview of the phases of development and the final design of the PANDIT system.
2. Methods The first step of developing the PANDIT system was to elicit the system's requirements by reviewing the literature, by consulting an expert panel, and by interviewing patients. We searched the literature using the following MeSH terms: "clinical decision support systems" OR "telemedicine" AND "diabetes mellitus" AND "insulin" to provide us with an up-to-date overview of studies focusing on insulin titration and decision support for patients with diabetes. As currently no single and uniform care standard for the titration of insulin exists, we organized several meetings with an expert panel consisting of physicians, diabetes nurses and medical informatics specialists to generate decision rules for web-based titration of insulin. The rules were assessed by considering specific predefined consultation scenarios. We performed semi-structured interviews with five experienced patients and five patients who recently started with once-daily insulin, to investigate patients' habits and behaviors when performing self-measurements of fasting plasma glucose values, injecting insulin and adjusting the insulin dose. We also asked them if and how they could benefit from a web-based system that generates insulin dosing advice. A researcher (AS) translated the gathered information into a system requirements document. The document was repeatedly reviewed by a clinical diabetologist (FH) and a medical informatics specialist (NP), who are both part of the research team. Based on the specified requirements, a fully functional system was developed. Because generation of insulin dosing advice is the main feature of the system, it was decided to use the GASTON framework [5]. GASTON is a state-of-the-art framework for building decision support systems, and consists of (i) an ontology-based knowledge representation language, (ii) a graphical modeling tool for encoding clinical decision rules and decision support algorithms, and (iii) an execution engine for reasoning and generation of advice. The graphical user interface (GUI) of the system was developed using Microsoft Silverlight. Patient data are stored in a Microsoft SQL database. SSL (Secure Sockets Layer) is used to provide encrypted data exchange over the Internet.
3. Results 3.1. Requirements Specifications The elicitation of the requirements for the web-based titration system revealed four important aspects: the insulin titration algorithm, the handling of hypoglycemic events, telemedicine functionalities and visiting frequency monitoring. Insulin titration algorithm – The effect of diabetes treatment is evaluated by the widely accepted marker HbA1c (glycosylated hemoglobin), which reflects the blood glucose levels over a period of six to eight weeks. We used an existing treat-to-target titration algorithm for once-daily basal insulin in type 2 diabetes patients that has already been proven to be effective in lowering HbA1c when used systematically [6]. Discussions with the expert panel revealed that it would be advisable to adapt the existing treatment algorithm in such a way that it incorporates the variables weight and age: e.g. a patient with a higher weight requires more insulin, and for patients older than 70 years it was deemed safer to titrate less aggressively (Table 1). The expert panel also stressed the necessity of choosing a glycemic target that would preserve the benefits of intensive therapy but minimize the risk of severe hypoglycemia. If a patient experiences frequent hypoglycemic events, the caregiver should be able to tailor the target value of the PANDIT system to the individual patient.
Table 1. Insulin titration algorithm
Lowest Fasting Plasma Glucose of the Preceding Three to Six Days | Insulin Dose Adjustment for People < 70 Yrs | Insulin Dose Adjustment for People > 70 Yrs
< 4 mmol/l | -0,02 IU/kg (min* -2 IU) | -0,02 IU/kg (min -2 IU)
4,0 – 5,5 mmol/l | Stable dose | Stable dose
5,6 – 9,9 mmol/l | +0,02 IU/kg (min +2 IU) | +0,02 IU/kg (min +2 IU)
> 10 mmol/l | +0,04 IU/kg (min +4 IU) | +0,02 IU/kg (min +2 IU)
* min = minimum
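Read as executable logic, the algorithm of Table 1 amounts to the sketch below; the function and variable names are ours, and the handling of values falling between the table's ranges is an assumption not specified in the table.

```python
def dose_adjustment(lowest_fpg_mmol_l, weight_kg, age_years):
    """Insulin dose change (IU) following the treat-to-target rule of Table 1;
    'min' in the table is read as a minimum step size."""
    if lowest_fpg_mmol_l < 4.0:
        return min(-0.02 * weight_kg, -2.0)      # decrease by at least 2 IU
    if lowest_fpg_mmol_l <= 5.5:
        return 0.0                               # stable dose
    if lowest_fpg_mmol_l < 10.0:
        return max(0.02 * weight_kg, 2.0)        # increase by at least 2 IU
    if age_years > 70:                           # > 10 mmol/l, older patients
        return max(0.02 * weight_kg, 2.0)
    return max(0.04 * weight_kg, 4.0)            # > 10 mmol/l, younger patients

print(dose_adjustment(11.2, weight_kg=85, age_years=58))  # -> 4.0
```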
Safety: handling of hypoglycemic events - Achieving lower blood glucose levels carries an increased risk for hypoglycemia. The literature review showed that hypoglycemia and fear of hypoglycemia are considered by patients and clinicians the main barrier to attaining good glycemic control [7]. Therefore, the expert panel set up decision rules with the purpose of preventing patients from experiencing hypoglycemic events. Firstly, the expert panel stated that increases of the basal insulin should be based on the lowest of three recent fasting plasma glucose (FPG) measurements, collected in the preceding three to six days. Patients sometimes measure a single fasting blood glucose value that is disproportionately high or low due to measurement errors and general variations in lifestyle (e.g. exercise, food intake). Using these values could cause overdosing of insulin and lead to hypoglycemia. Titration on the lowest FPG value from multiple measurements will prevent the system from using an erroneous measurement. Secondly, the expert panel concluded that if the patient reached the target value, the titration should be based on the lowest of six recent FPG measurements, collected in the preceding six to twelve days. Most importantly, the system should also include a procedure for handling hypoglycemic events. As currently no uniform care standards for handling hypoglycemic events exist, we used both the literature and expert opinion to set up this procedure. The expert panel stated that every hypoglycemic episode should be graded in terms of severity and safety risks, and if the episode is considered a reason for more intensively guided treatment, the patient should be redirected to the caregiver.
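The safety rules above can be summarized in a short sketch: the reference value for titration is the lowest of the three (or, once the target has been reached, six) most recent FPG measurements, and reported hypoglycemic episodes are checked before any automatic advice is given; the function names and the episode structure are assumptions.

```python
def reference_fpg(recent_fpg, target_reached):
    """Lowest of the last three FPG values, or of the last six once the
    glycemic target has been reached (values ordered oldest to newest)."""
    window = 6 if target_reached else 3
    return min(recent_fpg[-window:])

def redirect_to_caregiver(hypo_episodes):
    """True if any reported episode is graded as a reason for more
    intensively guided treatment (the grading scheme is not specified here)."""
    return any(e["severe"] for e in hypo_episodes)

print(reference_fpg([7.1, 6.4, 8.0, 5.9], target_reached=False))  # -> 5.9
```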
Telemedicine functionalities - Several patients emphasized the importance of the involvement of the caregiver. The expert panel also stated that caregivers should be able to maintain their responsibility while delegating the task of providing insulin advice to the system. Telemedicine functionalities should therefore enable caregivers to access their patients' records of blood glucose values. Furthermore, caregivers should be warned by the PANDIT system when patients experience hypoglycemic events. In such cases caregivers should have the opportunity to overrule the system's algorithm and directly provide insulin dosing advice through the PANDIT interface for a certain period of time. Visiting frequency monitoring - The expert panel stated that patients should be encouraged to use the PANDIT system frequently in order to achieve good glycemic control. It was decided that patients would be reminded automatically by e-mail or SMS to use the system if they had not entered FPG values for more than three days. According to the patient interviews, not all patients would appreciate receiving such reminders from the system; consequently this feature should be optional.
3.2. System Architecture and Implemented Functionalities The PANDIT system consists of three different components: a decision support system, a GUI and a database. The system architecture is shown in Figure 1.
Fig 1. System architecture of the PANDIT system
The PANDIT system uses an interface resembling a plasma glucose diary to facilitate the collection of fasting blood glucose values2. Upon each consultation of the system, the patient updates the diary with recently measured FPG values and the amounts of insulin used, and reports whether he or she has recently experienced hypoglycemic events. Subsequently, the GUI performs a store operation on the database and sends a request to GASTON. GASTON runs the applicable query on the database and executes the PANDIT algorithm. If necessary, GASTON executes the additional hypoglycemia algorithm. Finally, the advice is transmitted to the GUI. The GUI stores the advice in the database and displays the advice to the user. The diary also facilitates adding annotations to the blood glucose values. These annotations could have added value if the titration is redirected to the caregiver. In addition to the automated reminders sent by the PANDIT system, the system includes an e-mail functionality enabling the patient to directly contact the caregiver whenever the patient considers it necessary.
2 A video presenting the main features of the system is available at www.pandit-online.nl/demo
4. Discussion In the current study we developed a web-based insulin titration system with telemedicine functionalities, the PANDIT system. PANDIT distinguishes itself from existing web-based self-titration systems as it focuses on patients with type 2 diabetes using once-daily long-acting insulin. In addition, while most existing systems do not involve the caregiver, the PANDIT system is specifically developed to be embedded in routine care. Earlier studies have already shown that frequent insulin dose adjustments which are set by clinicians according to a predefined algorithm can lead to substantial decreases in HbA1c. Even greater reductions are achieved when such an algorithm is applied by patients themselves [6]. A web-based algorithm, like the PANDIT system, spares patients from calculating a new insulin dose themselves. The challenge of providing automated insulin dosing advice at a patient's home is enabling caregivers to maintain their responsibility in this process. A strength of the study is that we involved both patients and care professionals in clarifying the system requirements. We therefore believe that the results of our study are useful for the development of future telemedicine tools for diabetes patients. A web-based titration system such as the PANDIT system could be extended to patient groups using multiple injection therapy. Additionally, future studies should aim to develop web-based systems for diabetes patients addressing multiple treatment targets and facilitating integrated care involving a diabetes nurse, a dietician and other health care providers. A limitation of our study is that dietary and lifestyle aspects of diabetes were not considered for implementation in the system. In the near future we will perform a pilot study and a randomized controlled study to investigate the efficacy of the PANDIT system. During the pilot we will also perform a usability test with patients.
References
[1] Wild S, Roglic G, Green A, Sicree R, King H. Global prevalence of diabetes: estimates for the year 2000 and projections for 2030, Diabetes Care 27 (2004), 1047-53.
[2] Hejlesen OK, Andreassen S, Frandsen NE, et al. Using a double blind controlled clinical trial to evaluate the function of a Diabetes Advisory System: a feasible approach?, Computer Methods and Programs in Biomedicine 56 (1998), 165-173.
[3] Rossi MC, Nicolucci A, Di Bartolo P, et al. Diabetes Interactive Diary: a new telemedicine system enabling flexible diet and insulin therapy while improving quality of life: an open-label, international, multicenter, randomized study, Diabetes Care 33 (2010), 109-115.
[4] Brown LL, Lustria ML, Rankins J. A review of web-assisted interventions for diabetes management: maximizing the potential for improving health outcomes, Journal of Diabetes Science and Technology 1 (2007), 892-902.
[5] De Clercq PA, Hasman A, Blom JA, Korsten HH. Design and implementation of a framework to support the development of clinical guidelines, International Journal of Medical Informatics 64 (2001), 285-318.
[6] Davies M, Storms F, Shutler S, Bianchi-Biscay M, Gomis R, ATLANTUS Study Group. Improvement of glycemic control in subjects with poorly controlled type 2 diabetes: comparison of two treatment algorithms using insulin glargine, Diabetes Care 28 (2005), 1282-1286.
[7] Cryer PE. Hypoglycemia: the limiting factor in the glycaemic management of Type I and Type II diabetes, Diabetologia 45 (2002), 937-948.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-108
TreC - a REST-based Regional PHR
Claudio ECCHERa,1, Enrico Maria PIRASa, Marco STENICOa
a Fondazione Bruno Kessler, Trento, Italy
Abstract. The Personal Health Record (PHR) is progressively becoming a fundamental tool allowing people to control their health. User needs, however, call for a PHR solution that offers great flexibility in managing heterogeneous health data, composing data into higher level concepts and interfacing the PHR with different devices to collect and visualize data. We propose to adopt REST as the core of a regional PHR architecture and present a PHR based on this architecture, implemented and tested in our Province. Keywords. PHR, REST architecture
1. Introduction In recent years there has been a growing interest in Personal Health Records (PHR), electronic tools aimed at laypeople to support them in accessing, managing, and sharing their personal health information [1]. This interest is demonstrated by the increasing number of articles about this technology. The vast majority of the literature deals with issues like acceptability (by both institutions and laypeople), business models and fields of application, or presents long-term scenarios of a revolutionized healthcare sector. On the other hand, the ongoing debate shows little concern about which technical solutions are to be adopted in order to achieve the expected results. Reflections on these issues, though, are not mere details to be left to technologists; they require specific attention. In this work we present the pathway that led us to choose a REST-based architecture for TreC, a region-wide PHR for the citizens of the Province of Trento (Northern Italy, 400 000 inhabitants) [2], in order to satisfy the requirements that emerged from sociological research on future users. In the next section we will present the research project and the requirements analysis carried out in the preliminary phase. In the subsequent section we will discuss the design implications that emerged, which suggested the use of a REST-style architecture, and present that architecture. Finally, we will briefly describe the implementation process and the work planned for the future.
2. The Personal Health Record PHR can be roughly described as the laypeople’s counterpart of clinical Electronic Health (or Patient) Records (EHR and EPR). The latter are typically designed around the needs of the healthcare personnel (physicians, nurses, and clerical staff) or the 1
Corresponding author.
organizations they work for. These professionals have patterns of actions that can be studied and, to some extent, formalized, which makes it possible to build technologies that fit into their workflow. This is not the case with citizens, whose activities do not fit into clearly defined schemas, and consequently a formal data model cannot be defined. The impossibility to represent a "typical citizen workflow on health related activities" is not the only problem. To take care of personal health and wellbeing, in fact, a citizen might need to go beyond strict medical data and to record data related to lifestyle (e.g. cigarettes smoked, miles run). These differences between EPRs and PHRs require carrying out the requirement analysis in different ways: while in the first case a detailed study of the workflow and formal rules of conduct is necessary, in the second case this is simply impossible. What is relevant, though, is to provide a broad conceptualization of how health information is managed by laypeople, in search of macro-concepts to be later transformed into system requirements.
2.1. Requirement Analysis We conducted 42 semi-structured interviews focused on three major topics: 1) how paper-based health records are sorted out and shared among relevant actors in the care process; 2) when and where this happens; and 3) if people produce medical documentation (e.g. personal diaries to keep track of health parameters). The analysis (see [3] for details) led to the following considerations. Everyday observations are defined by individuals and may be meaningful only for the person in the particular context in which they are collected or used. Moreover, to collect and visualize information people can use a number of different devices and applications that constitute a digital ecosystem able to satisfy the users' needs in different contexts. The PHR, in substance, must allow the collection and aggregation of data without constraining them to be structured in a predefined data model. The main requirements of a PHR are summarized in the following:
•
The PHR must allow accessing relevant information produced elsewhere: e.g., laboratory test and diagnostic imaging reports from the hospital, GP’s drug prescriptions, etc. The PHR must allow collecting and managing heterogeneous data: health data (pressure, blood tests) as well as lifestyle data (diet, physical activity), etc. The PHR must allow a flexible organization of low-level data in higher level concepts with different meaning and importance in different contexts: e.g., the weight associated to diet (lifestyle information) or to heart failure (health data for disease control). The PHR must allow to interface heterogeneous devices and applications to collect and visualize data when, where and how the user prefers/needs but also for receiving messages, alerts, etc.
2.2. An architectural solution for the PHR The choice of the PHR architecture was mainly determined by the last two requirements. The PHR is based on the Representational State Transfer (REST) paradigm, initially described in the context of HTTP but not limited to that protocol [4], which offers greater flexibility and control with respect to other Web Service architectures [5]. REST-style architectures consist of clients, which initiate requests,
and servers, which process requests and return responses. Requests and responses are built around the transfer of representations of resources, i.e. any coherent and meaningful concept. The representation of a resource is typically a document capturing its current or intended state. For our purposes, the REST architecture presents several advantages:
• The resource representation format is independent from the database. REST Web services do not use a single format for representing resources, but can provide a variety of MIME document types (JSON, XML). This allows different devices to access ecosystem resources.
• Clients and servers are separated by a uniform interface with a small set of general methods to manipulate resources2, which represent primitive concepts that can be composed by clients in higher level domain concepts.
• REST applications maximize the use of the pre-existing, well-defined interface and other capabilities provided by the chosen network protocol, and minimize the addition of new application-specific features on top of it.
• In the last few years, there has been a growing interest in REST-based architectures to develop Web 2.0 applications.
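A minimal client-side sketch of this style of interaction is shown below: a client requests the JSON representation of a resource over HTTP and reads its fields; the URL and the field names are hypothetical and only loosely follow the data model of Table 1.

```python
# Illustrative REST client: fetch the JSON representation of one resource.
# The URL and payload layout are hypothetical.
import json
import urllib.request

req = urllib.request.Request(
    "https://phr.example.org/subjects/S-001/resources/R-002",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    resource = json.load(resp)

print(resource["name"], resource["value"], resource["unit"])
```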
3. The TreC System Architecture The TreC System architecture, depicted in Figure 1, can be divided into three blocks: the TreC REST core, the Data Model (DM) and the database, and the TreC clients.
Figure 1. System architecture of the TreC PHR implemented in the Province of Trento.
2 For example, an HTTP-based RESTful web service implements the methods GET, PUT, POST and DELETE.
3.1. The TreC REST Core The core of TreC is a RESTful service (i.e. a service respecting the REST principles) which manages resources and transfers their representations to/from the clients. The idea is that a medical concept is a resource, and its transferred representation is structured in an XML or JSON document. One document can contain several resources, depending on the request of the specific client. The REST layer is responsible for storing the incoming resource representation in a relational database of clinical concepts and for sending the representation of clinical concepts in response to a client request, according to the TreC data model (see next section). Moreover, the REST layer interfaces the PHR with other systems, providing REST-based access to the external resources.
3.2. The TreC Data Model To allow the maximum flexibility in structuring heterogeneous data at the application level, the DM is characterized by the definition of low-level concepts only, each corresponding to a resource in the REST paradigm, which do not depend on the particular context or the application used to collect them. Essentially, the DM is based on the Entity-Attribute-Value (EAV) model, where each entity corresponds to a parameter with a value and several attributes (see Table 1). Primitive data are stored in a relational database; the REST-based access to the DB is mediated by the REST layer. Structured information generated by third parties (e.g., hospital reports) is managed by the REST layer as a resource whose representation is a complex file that can contain different kinds of information: plain text, coded parameters, images, etc.
3.3. TreC REST Clients An essential component of TreC is the set of client applications satisfying specific needs. A client application must know how to re-aggregate the primitive data exposed by the data model into complex concepts. To this end, the clients are thick clients able to build custom data structures to represent context- and situation-specific concepts. Clients can also implement complex logic for analyzing and monitoring data according to the specific application domain, independently from the PHR core and other applications, allowing health parameters to be managed even in the case of poor or absent client-server connectivity.
Table 1. Simplified example of the data model structure. The most important attributes are: the ID of the subject the measure refers to, the ID of the resource, name, value, unit of measure, parameter code in some medical terminology, and registration time. Parameters for which a code in some terminology does not exist are marked with an internal code. The resource URI is composed of the server address, subject ID and resource ID. Values belonging to the same measurement procedure (e.g., the blood pressure) can be identified by the same registration time.
Subj ID | ID | Name | Code | Value | Unit | Time
S-001 | R-001 | Diastolic Pressure | SNOMED-CT: 271650006 | 90 | mm[Hg] | 20/01/2011 12:25:00
S-001 | R-002 | Systolic Pressure | SNOMED-CT: 271649006 | 140 | mm[Hg] | 20/01/2011 12:25:00
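To illustrate how a thick client could re-aggregate the low-level entries of Table 1 into a higher-level concept, the sketch below combines the two pressure resources into a single blood-pressure reading; the dictionary layout and the function name are ours.

```python
# Illustrative composition of EAV entries (Table 1) into one clinical concept.
ROWS = [
    {"subj": "S-001", "id": "R-001", "name": "Diastolic Pressure",
     "value": 90, "unit": "mm[Hg]", "time": "20/01/2011 12:25:00"},
    {"subj": "S-001", "id": "R-002", "name": "Systolic Pressure",
     "value": 140, "unit": "mm[Hg]", "time": "20/01/2011 12:25:00"},
]

def blood_pressure(entries):
    """Group same-time systolic/diastolic values into one measurement."""
    by_time = {}
    for e in entries:
        by_time.setdefault(e["time"], {})[e["name"]] = e["value"]
    return [{"time": t,
             "systolic": v.get("Systolic Pressure"),
             "diastolic": v.get("Diastolic Pressure")}
            for t, v in by_time.items()]

print(blood_pressure(ROWS))
# -> [{'time': '20/01/2011 12:25:00', 'systolic': 140, 'diastolic': 90}]
```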
At the same time, they can easily inter-operate with other clients exchanging the low level data. As shown in Figure 1, the client applications can be of different nature: applications running on mobile devices for implementing small local service, as well as web-based applications accessible through a browser. In this view, the PHR is an ecosystem of different applications, running on a range of devices. The responsibility of implementing client applications is left to client developers in response to specific user needs.
4. TreC Implementation and Future Work In the second half of 2010 we released a first version of the system for extensive testing in a real-life setting, according to the philosophy of the living lab approach [6]. 450 people are currently using the system. On a 2-month basis they are asked to respond to an online survey about general satisfaction, patterns of use and other specific issues. The released system is constituted by the REST core, the database and the Web portal, based on Liferay, which allows access to a set of client applications implemented as widgets. At the moment the system offers three main functionalities. The first is the possibility to manage online clinical information produced by the institutions (e.g. lab test results, admission and discharge letters, specialist consultancy reports). These documents can be visualized but also annotated and classified. The second is a structured personal health diary where individuals can keep track of clinical parameters and medications, and reconstruct their past clinical history. The third function allows parents to manage their kids' personal pages. In 2011 we aim at testing TreC as a communication tool between individuals and physicians to support the personal management of three chronic conditions (pediatric asthma, diabetes and chronic heart failure) and to add a set of mobile applications for collecting and monitoring health data at home.
References
[1] Tang P.C., Ash J.S., Bates D.W., Overhage J.M. and Sands D.Z. Personal Health Records: Definitions, Benefits, and Strategies for Overcoming Barriers to Adoption, JAMIA, 13 (2006), 121–126.
[2] Piras E.M., Purin B., Stenico M. and Forti S. Prototyping a Personal Health Record Taking Social and Usability Perspectives into Account, Electronic Healthcare, Springer, 2010, 35–42.
[3] Piras E.M. and Zanutto A. Prescriptions, X-rays and Grocery Lists. Designing a Personal Health Record to Support (The Invisible Work Of) Health Information Management in the Household, Comput. Supported Coop. Work, 19(6) (2010), 585–613.
[4] Fielding R. Architectural Styles and the Design of Network-based Software Architectures, PhD dissertation, University of California, Irvine, 2000, available at http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm, last accessed January 2011.
[5] Pautasso C., Zimmermann O. and Leymann F. RESTful Web Services vs. Big Web Services: Making the Right Architectural Decision, Proceedings of the 17th International World Wide Web Conference (WWW2008), 2008.
[6] Følstad A., Brandtzæg P.B., Gulliksen J., Börjeson M. and Näkki P. Towards a Manifesto for Living Lab Co-creation, Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part II - INTERACT '09, Uppsala, Sweden, 2009, 979–980.
Decision Support, Knowledge Management, Guidelines
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-115
Next Generation Neonatal Health Informatics with Artemis
Carolyn MCGREGORa,1, Christina CATLEYa, Andrew JAMESb, James PADBURYc
a University of Ontario Institute of Technology, Oshawa, ON, Canada
b The Hospital for Sick Children, University of Toronto, Toronto, ON, Canada
c Women & Infant's Hospital of Rhode Island, The Warren Alpert Medical School of Brown University, Providence, RI, USA
Abstract. This paper describes the deployment of a platform to enable processing of currently uncharted high frequency, high fidelity, synchronous data from medical devices. Such a platform would support the next generation of informatics solutions for neonatal intensive care. We present Artemis, a platform for real-time enactment of clinical knowledge as it relates to multidimensional data analysis and clinical research. Through specific deployment examples at two different neonatal intensive care units, we demonstrate that Artemis supports: 1) instantiation of clinical rules; 2) multidimensional analysis; 3) distribution of services for critical care via cloud computing; and 4) accomplishing 1 through 3 using current technology without a negative impact on patient care. Keywords. neonatal intensive care, multidimensional data, real-time analysis, clinical rules, cloud computing
1. Introduction Neonatal Intensive Care Units (NICUs) deploy state of the art medical devices to monitor and support premature babies; however, neonatologists are unable to process the vast quantities of both manually charted data and data collected from medical monitoring equipment. While there has been a sustained effort to move from paper to electronic charting in critical care, including NICUs, these initiatives have not improved the representation of information that can be derived from that charted data or the translation of that information to knowledge for earlier condition onset warnings. Recent research is building a strong case for the benefits of real-time data analysis, with clinical events such as late onset neonatal sepsis (LONS) [1, 2] exhibiting early warning signs in physiological data before the clinical impact is sufficient to exhibit current clinical detection indicators. However, that research takes a condition specific, patient specific or physiological data stream type specific approach. The translation of that knowledge is another ‘black box’ clinical decision support system (CDSS) medical device at the bedside. Patients can develop multiple conditions concurrently or over time and each condition can have a set of behaviours with a pattern of occurrence. An infrastructure that can process currently uncharted higher frequency physiological data 1
Corresponding Author. 2000 Simcoe Street North, Oshawa, Ontario, Canada, L1H 7K4; E-mail: [email protected]
and support the earlier onset detection of multiple conditions has the potential to provide greater knowledge at the bedside than is available today and represents the next generation of informatics solutions for critical care. The provision of this knowledge requires a multidimensional approach as there are multiple conditions and multiple streams of data for which multiple behaviours can exist. In addition, new approaches are needed to enable processing and integration of both real-time synchronous medical device data and asynchronous clinical data to aid in clinical decision-making and improve outcomes for newborn infants. We present the Artemis framework, a platform for real-time enactment of clinical knowledge as it relates to multidimensional data analysis and clinical research. First implemented at The Hospital for Sick Children, Toronto, in August 2009, Artemis has been running continuously since that time. We discuss, with examples, Artemis’ multidimensional approach. Our goal is to provide a comprehensive description of the Artemis platform to date including the introduction of a cloud computing approach to enable distribution and support outsourced service of critical care.
2. Materials and Method Artemis, shown in Figure 1, provides a flexible platform for the real-time analysis of time series physiological data streams extracted from a range of monitors to detect clinically significant conditions that may adversely affect health outcomes. The Data Acquisition component enables the provision of real-time synchronous medical device data and asynchronous Clinical Information Management System (CIMS) data. This data is then forwarded for analysis within the Online Analysis component, which operates in real-time. For this real-time component, Artemis employs IBM's InfoSphere Streams, a novel streaming middleware system that processes data in real-time and then enables data storage within the Data Persistency component. It is capable of processing and then storing the raw data and derived data from multiple infants at the rate they are generated [3]. Stream processing is supported by IBM's Stream Processing Application Declarative Engine (SPADE) language, which is the programming language for IBM's InfoSphere Streams middleware. For the Knowledge Extraction component, Artemis utilizes a newly proposed temporal data mining approach [4]. This component supports the discovery of condition onset behaviours in physiological data streams and associated clinical data. New knowledge, once tested through rigorous clinical research techniques, is transferred for use within the Online Analysis through the Redeployment component, which translates the knowledge to a SPADE representation. First, this paper tests whether the Artemis platform can enable the instantiation of clinical rules. Second, it demonstrates how this platform can support multidimensional analysis. Third, we propose that this platform can be provided not only through an in-house installation but also through cloud computing providing a service of critical care [5]. This is particularly of interest for remote hospitals whose infrastructure for information technology technical support is much more limited than that of larger urban centre healthcare organizations. In this way, raw physiological streams and related clinical data can be transmitted securely over the Internet, with de-identified patient identifiers, for processing at the cloud-computing site. Finally, we show that the current technology is capable of supporting the platform without a negative impact on patient care. The Hospitals' Research Ethics Boards approved this research.
Figure 1. Artemis Framework (modified from [3])
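The SPADE code that runs inside the Online Analysis component is not reproduced in this paper. Purely as an illustration of the kind of real-time rule such a component enacts, the short Python sketch below evaluates a sliding-window condition over a single physiological stream; the stream, window length and threshold are assumptions made for the example and are not values from the deployed system.

    from collections import deque

    # Illustrative sketch only: a sliding-window rule over one physiological
    # stream, in the spirit of the rules deployed in Online Analysis.
    # Window length, threshold and stream are assumed for illustration.
    class SlidingWindowRule:
        def __init__(self, window_size=30, spo2_threshold=85):
            self.window = deque(maxlen=window_size)   # most recent SpO2 readings
            self.spo2_threshold = spo2_threshold      # % saturation (assumed)

        def on_reading(self, spo2_value):
            """Process one incoming reading; return True when the rule fires."""
            self.window.append(spo2_value)
            # Fire only when the window is full and every reading is below the
            # threshold, i.e. a sustained desaturation rather than a noisy sample.
            return (len(self.window) == self.window.maxlen and
                    all(v < self.spo2_threshold for v in self.window))

    rule = SlidingWindowRule()
    for reading in [97, 96, 84, 83] + [82] * 30:
        if rule.on_reading(reading):
            print("sustained desaturation detected")
            break

In the production system such conditions are expressed in SPADE and run continuously against every enrolled patient's streams; the sketch only conveys the shape of the logic.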
3. Results

Our first implementation of Artemis is located at The Hospital for Sick Children (SickKids), Toronto. Real-time synchronous data is being acquired from the Philips Intellivue MP70 Neonatal monitors. Asynchronous data is being acquired from CIMS. Clinical protocols require that electrocardiogram derived HR (ECG-HR), transcutaneous oxygen saturation (SpO2), respiration rate (RR) and impedance respiratory rate (IRW) data streams are constantly collected. When available, we also receive the systolic, diastolic and mean blood pressure. We have deployed SPADE code within the Online Analysis component supporting our research into early detection of LONS. Data Persistence occurs at SickKids and data is replicated daily to the University of Ontario Institute of Technology (UOIT), where Knowledge Extraction research supports our clinical research into earlier onset detection of LONS. Artemis has collected data on 174 patients, representing 4.1 patient years of data; all raw and derived data has been stored for retrospective research. Currently supporting eight concurrent patients and collecting approximately 1250 readings a second, Artemis at SickKids is deployed on three laptops: one for data acquisition, one for online analysis and one for stream persistence. An incremental backup of the data is made to a persistence storage mirror at UOIT and used by the Knowledge Extraction component. In April 2010 a second Artemis instance began collecting data from the Women and Infants Hospital in Rhode Island (WIHRI), USA. WIHRI has successfully used a cloud-based deployment, where spot readings taken each minute are collected from the bedside SpaceLabs devices and fed in raw form, through a secure internet tunnel, to the Data Acquisition component (implemented in Mirth) of the Artemis platform running at UOIT. In this setting all components of the platform are housed in the Health Informatics Research Laboratory at UOIT. Knowledge Extraction research supports our clinical research into earlier onset detection of LONS on lower frequency physiological data. To date WIHRI has enrolled 203 patients, representing 10.6 patient years of data. We have implemented a third installation of Artemis that contains only the Data Persistency, Knowledge Extraction and Redeployment components. Using the Knowledge Extraction component, we are performing retrospective data mining on a
dataset of nearly two years of 30 second spot reading data, obtained from 1151 patients, to further inform our refinement of a clinical rule for earlier detection of LONS which can ultimately be deployed by the Redeployment component. We have successfully instantiated clinical rules through their implementation in SPADE for deployment by the Online Analysis component for LONS [3], apnoea [6], and hypoglycemia [7]. The three different implementations demonstrate that the platform can support multiple dimensions, shown in Figure 2, including: multiple locations, care providers, patients, conditions, data streams, and data stream behaviours. By care providers we mean that the platform can provide different temporal data summaries to different providers. For example, with apnoea, the neonatal nurse responds to alarms for extended respiratory pauses and falling SpO2 and HR levels indicative of potential apnoea events. Our goal is not to generate further alarms for discrete events, but rather to create integrated temporal summaries of events from multiple data streams. For instance, a single mild apnoea event may not be clinically relevant; however, clusters of such events could be indicative of LONS, and this information should be available to the neonatologist.
Figure 2. Multidimensional approach
Our current implementations at SickKids and WIHRI have no impact on care at the bedside. We are recording when behaviours are noted in the physiological streams and comparing them with current clinical observation and treatment practices. Due to the volume of data collected, the hospital's Information Services group initially expressed concerns about network traffic; however, we have found that Artemis consumes less than 0.5% of network bandwidth. We have demonstrated that such a platform is capable of keeping up with the data collected at the speed at which it is received. The two other Artemis environments are currently running at UOIT, spread across four servers.
4. Discussion The Artemis platform uses currently available technology to support next generation health informatics, through online analysis and knowledge extraction of currently uncharted higher frequency data. In addition to the three implementations presented here, new implementations of the Artemis platform are in the planning stages for another NICU in Canada, as well as two NICUs in China and one in Australia. Artemis provides clinical decision support in a flexible and transparent manner. Flexibility results from the ability to receive any asynchronous physiological data,
support the generation of multiple clinical rule representations as autonomous or interrelated SPADE graphs for Online Analysis, and perform multiple clinical research studies within Knowledge Extraction for clinical event analysis. The use of SPADE to represent the clinical rule enables transparency of the representation of the knowledge processing. This is in direct contrast to many CDSSs based on complex mathematical processing, such as artificial neural networks, which from the clinicians’ viewpoint operate as black boxes. While a growing number of studies indicate that properly designed and effectively used CDSSs have the ability to improve quality of patient care [8], black box approaches raise concerns about the possible negative effects of CDSSs, including: potential de-skilling effects if system users do not understand how results were generated; a lack of flexibility and overly prescriptive outcomes; promoting overreliance on software; and difficulty in evaluating outcomes [9]. Artemis is not a black box solution; rather it provides a means to instantiate clinical knowledge into the information processing pathway. From a clinical policy perspective, a number of international regulatory bodies are mandating that CDSSs require regulatory approval [10]. Canada has recently introduced new regulations classifying patient management software as a medical device that must be regulated [11]. The impact this will have on the clinician’s ability to perform updates to CDSSs based on new evidence-based medicine is not yet clear. Acknowledgements: This research is funded by the Canada Research Chairs program, Canadian Foundation for Innovation, an NSERC Discovery Grant, and an IBM First of a Kind award.
References
[1] Flower AA, Moorman JR, Lake DE, Delos JB. Periodic heart rate decelerations in premature infants, Experimental Biology and Medicine 235 (2010), 531-8.
[2] Griffin MP, Lake DE, O'Shea TM, Moorman JR. Heart rate characteristics and clinical signs in neonatal sepsis, Pediatric Research 61 (2007), 222-227.
[3] Blount M, Ebling M, Eklund J, James AG, McGregor C, Percival N, et al. Real-time analysis for intensive care: development and deployment of the Artemis analytic system, IEEE Eng Med Biol Mag 29 (2010), 110-8.
[4] McGregor C. System, method and computer program for multidimensional temporal data mining. Patent # 089705-0009; Canada, Gatineau Quebec (2010).
[5] McGregor C, Eklund JM. Next generation remote critical care through service-oriented architectures: challenges and opportunities, Service Oriented Computing & Applications 4 (2010), 33-43.
[6] Catley C, Smith K, McGregor C, James A, Eklund JM. A Framework to model and translate clinical rules to support complex real-time analysis of physiological and clinical data, Proc. 1st ACM International Health Informatics Symposium (2010), 307-315.
[7] Kamaleswaran R. A SOA method for the integration of heterogeneous data models for decision support, Master's thesis (in progress), University of Ontario Institute of Technology (2011).
[8] Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B. Clinical decision support capabilities of commercially-available clinical information systems, JAMIA 16 (2009), 637-44.
[9] Open Clinical, DSS Success Factors (2005). Accessed January 2011 from: http://www.openclinical.org/dssSuccessFactors.html
[10] Berner ES. Legal and regulatory issues related to the use of clinical software. In Greenes RA, ed. Clinical Decision Support, The Road Ahead. Elsevier Inc., 2007.
[11] Health Canada, Software Regulated as a Class I or Class II Medical Device (2010). Accessed January 2011 from: http://www.hc-sc.gc.ca/dhp-mps/md-im/activit/announce-annonce/md_notice_software_im_avis_logicels-eng.php
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-120
Limitations in Physicians’ Knowledge when Assessing Dementia Diseases – an Evaluation Study of a Decision-Support System a
Helena LINDGRENa1 Department of Computing Science, Umeå University, Sweden
Abstract. There is a need to provide tools for the medical professional at the point of care in the assessment of a suspected dementia disease. Early diagnosis is important in order to provide appropriate care so that the disease does not cause unnecessary suffering for the patient and relatives. DMSS (Dementia Management and Support System) is a clinical decision-support system that provides support in the diagnosis of dementia diseases and is in use in controlled clinical evaluation settings in four countries. This paper reports the results of evaluations done in these use environments during a period of two years. Data on 218 patient cases were collected by 21 physicians during their use of the system in clinical practice. In 50 of the cases the use of the system was also observed, and the physicians were interviewed in 88 cases. The collected data and the inferences made by the system were analyzed. To summarize the results, DMSS gave appropriate support considering the patient case, the available information and the user's skills and knowledge in the domain. However, the results also highlighted the need for extended and personalized support for the less skilled physician in the assessment of basic information about patients. Keywords. Clinical decision support system, dementia, evaluation, diagnosis
1. Introduction

In the larger perspective of developing sustainable knowledge-based systems in the health domain, results from iterative user evaluations are ideally fed into new versions of the system [1]. Due to the safety-critical nature of health and medical decision-support systems, the integration of prototypes of such systems at their earlier stages is commonly troublesome (e.g., [2]). As a consequence, the ecological validity of the support provided cannot be properly assessed, which is particularly important when developing systems that also support continuing medical education of individual users. In order to overcome this constraint on development, efforts have been made to develop methods to integrate early prototypes in clinical practice using an action research and participatory design approach; Hertzum and colleagues provide one example in [3]. Another example is DMSS (Dementia Management and Support System), the focus of our work [4]. DMSS is a stand-alone prototype of a clinical
Corresponding Author: Helena Lindgren, Department of Computing Science, Umeå University, SE-90187 Umeå, Sweden; E-mail:
[email protected]
decision-support system currently used in controlled clinical evaluation settings. The core information needed for assessing types of dementia is typically not collected in electronic patient health records, if such records even exist, which is one reason to introduce DMSS. What has been seen as beneficial in earlier qualitative studies is the learning potential, visible in changes in the user's assessment procedure, and the checklist-like support the system gives when working towards a diagnosis [5]. The main purpose of the evaluation study presented in this paper was to investigate how the system is used in real clinical settings by users who were previously unfamiliar with the system and who were novices or only moderately familiar with diagnosing dementia diseases. The work supplements earlier case studies [5, 6] and aims at providing a quantitative evaluation of the outcome of use (i.e., to what extent do the diagnosis suggestions provided by the system deviate from what the physician asserts?), together with interpretations of the reasons for such noncompliance in the cases where it occurs, which can be used for improving the system.
2. Methods

The patient data from 218 patient cases were collected by 21 physicians, employed at 12 different health care organizations in four different countries, during a period of two years. Three of the 21 physicians were considered experts, since they worked in specialist care for dementia patients. The other participants were considered novices or moderately knowledgeable in the dementia domain, corresponding to typical levels of knowledge among primary care physicians. A range of different specialties was represented in the group; however, the participants shared a common clinical practice situation in which they need to diagnose dementia more or less frequently. The types of clinics ranged from small family practices with no computers, apart from a laptop with DMSS installed, to fully equipped hospitals, where the patients could be either inpatients or outpatients depending on the local organization and the patient's needs. The data were entered into DMSS either as part of the patient encounter or after the patient encounter. The collected patient data were anonymized, and the individual physicians were also coded and anonymized in the data sample. Evaluations were done using the set of clinical practice guidelines and consensus guidelines underlying DMSS as the baseline for what diagnosis could be regarded as correct in the case of conflicting views on a patient case [7, 8, 9, 10]. The physician's assessment of specific diagnoses was recorded in the database, and in 50 patient cases the physician was also observed using the system. In addition, the physician was interviewed about his or her reasons for the assessment in these 50 cases and in an additional 38 cases. DMSS interprets a case as atypical when the patient data are ambiguous when analyzed using the clinical guidelines (Figure 1). In these cases DMSS shows degrees of support for different diagnoses instead of suggesting one particular diagnosis. In such cases, the diagnoses with the highest confidence were used in the comparison with the physician's assessment. For instance, if DMSS assesses the reliability of Diagnosis 1 as higher (e.g. "probable") than that of Diagnosis 2 ("possible") and the physician has asserted Diagnosis 1, then their assessments comply, while if the physician has asserted Diagnosis 2, this is interpreted as a conflict. The patient cases where the assessments did not comply, and the cases where an insufficient amount of information was entered, were subjected to further analysis in order to find reasons for noncompliance and lack of information. These cases and the
conflicting cases were also re-analyzed using DMSS in order to investigate where the user had stopped filling in information and which information underlies the conflicting views on a case.
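Procedurally, the comparison described above can be summarised in a few lines. The sketch below is written in Python purely for this description and is not how DMSS is implemented; only the ordering of the confidence labels ("probable" above "possible") is taken from the text, everything else is assumed.

    # Illustrative sketch of the agreement rule: agreement when the physician
    # asserts a diagnosis the system ranks highest, conflict otherwise.
    CONFIDENCE_ORDER = {"possible": 1, "probable": 2}

    def classify_case(system_support, physician_diagnosis):
        """system_support maps each diagnosis to a confidence label."""
        best = max(CONFIDENCE_ORDER[label] for label in system_support.values())
        top = {d for d, label in system_support.items() if CONFIDENCE_ORDER[label] == best}
        return "agreement" if physician_diagnosis in top else "conflict"

    support = {"Diagnosis 1": "probable", "Diagnosis 2": "possible"}
    print(classify_case(support, "Diagnosis 1"))  # agreement
    print(classify_case(support, "Diagnosis 2"))  # conflict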
3. Results

A brief overview of the patients in the sample is as follows. Out of 218 patients, 125 received a specific dementia diagnosis that the system and physician agreed upon, and 26 did not have a cognitive disease according to both the physician and the system. In 15 cases the system and the physician agreed on a diagnosis of mild cognitive impairment (MCI). This means that in 166 out of 218 cases (76.1%) it was possible to reach an agreed diagnosis (or exclusion of diagnosis) based on the collected information. The distribution of types of dementia among the 125 cases with a specific dementia diagnosis was the following: Alzheimer's disease (AD) 72%, vascular dementia (VaD) 6.4%, combined AD and VaD 2%, Lewy body dementia (DLB) 8.0%, frontotemporal dementia (FTD) 5.6%, dementia due to alcohol abuse 2.4% and dementia due to Parkinson's disease (PDD) 1.6%. In addition to these 166 cases, there were cases in which the system and the user agreed that it was not possible to come to a diagnostic conclusion based on the insufficient information available. The views on the results of using the system and the reasons for incomplete information in these cases were assessed by interviews. The physicians agreed with the system that more information was needed, which led to additional examinations; these, however, were out of scope for our evaluation. In total, the physician and the system agreed on a view on diagnosis in 185 of 218 patient cases (84.9%). In an additional 17 cases the information was incomplete; in these cases, however, the physicians were not available for interviews to evaluate the level of agreement.
Figure 1. Part of an overview of DMSS analyses in an atypical patient case.
In 16 cases (7.3%) there was a conflict between the physician's assessment and the system's analysis of the collected data. Four of these cases were identified as caused by the system's failure to assess a correct diagnosis, mainly due to insufficient handling of the type of dementia caused by excessive alcohol consumption. When the system had been adjusted, a re-analysis of the four cases generated satisfactory results, thus increasing the proportion of agreement to 189 cases (86.7%). In 10 of the 12 remaining conflicting cases, the pattern was that the physician assessed
Alzheimer's disease based on a set of data in which the physician had asserted necessary symptoms, such as episodic memory dysfunction, as absent. The remaining cases showed neither clear agreement nor clear disagreement, and the responsible physicians were not available for interviews. These cases were characterized by scattered and incomplete data collection, and nine of them were collected by two of the physicians with minor experience in dementia diagnosis. One of the physicians had not asserted a diagnosis in four of the cases, and another physician had asserted a diagnosis in five cases but had not entered enough information for the system to come to any conclusion. In these cases the feedback provided by the system either highlighted data that needed to be entered to establish a diagnosis, or indicated that the collected data were ambiguous and did not comply with the implemented clinical guidelines.
4. Discussion

In 10 of the contradicting cases, the system can be viewed as correct based on the collected data and the clinical guidelines. This would imply an increase in the "correctness" of DMSS. However, from an interaction design perspective the noncompliance is not satisfactory. If the physician is correct that the memory deficit is not present, then the patient receives an incorrect diagnosis from the physician. If the physician is wrong about the memory function, then suitable interventions aimed at reducing the consequences of cognitive dysfunction may not be provided. In both cases, the physician needs to become better educated in assessing cognitive disorders and their interventions. Reasons why the contradictory information was entered may be stressful work situations, lack of knowledge about dementia diagnosis, or simply that the interaction design of the system does not provide enough support to complete the task in a satisfactory way. Regarding the feedback provided by the system in these cases, the user is given an overview of the support, and of the lack of support or contradicting data, for each potential dementia diagnosis. This feedback is given in order to provide the user with explanations and a chance to reflect upon their own assessment. The reasons for the missing information may have been the same as in the cases where the physicians described that they did not have all the necessary information, or that entering the information was not possible due to a stressful situation. Another reason may be lack of knowledge about the phenomenon to assess, as observed in earlier studies [6]. There was a set of symptoms that seemed to cause more confusion than others in the assessments. We have already mentioned episodic memory dysfunction, which inexperienced physicians seem to find difficult to distinguish. In addition, assessing whether the patient has been exposed to toxic substances (e.g., drugs) and characterizing the cognitive decline caused difficulties. The physician must judge whether the onset of the cognitive decline is rapid or insidious, and whether the decline is progressing. This is difficult, especially when there may be multiple diagnoses, with a sudden rapid decrease in functioning due to vascular incidents along with a slower progression due to Alzheimer's disease. Evaluating severity levels in different cognitive functions in order to distinguish between normal ageing, MCI and dementia is also difficult, but necessary. In addition, at least two of the physicians did not seem to know the importance of entering information about an ongoing Parkinson's disease so that this information can be evaluated together with other information. In an earlier study it was
observed that, although it was known both from the health record and by the user that a related disease was present, this was not entered into the system, and although the patient showed typical symptoms, the physician assessed these as absent [5]. This led to an agreement between the physician (who did not take into account information that should have been considered) and the system, since the system draws conclusions based on the entered data, which may not be correct. Therefore, in the 185 cases in which the physician agreed with the system's analysis, there may be agreement but not necessarily on the correct diagnostic conclusion, due to inaccurate data entry. This emphasizes the importance of providing the user with support also in the basic tasks of data collection and interpretation, and of integrating DMSS locally with general health information systems.
5. Conclusions

The work presented in this paper shows how a CDSS supporting dementia diagnosis complies with assessments made by physicians, and reasons for noncompliance are detected and discussed. The results show that the system performs well, with agreement in 84.9% and disagreement in 7.3% of the cases. In the remaining cases (7.8%) the information was incomplete and the physician's view was unknown. In the majority of cases the disagreement was due to a possible misconception among physicians of the symptoms necessary for diagnosing Alzheimer's disease. Therefore, future work will focus on developing the system's support for assessing core symptoms, since a correct diagnosis depends on a correct assessment of basic cognitive functions. The cases will be further analyzed with automated methods in order to find patterns of behavior in the participating physicians that can be responded to when incorporated in a web-based adaptive support system.
References
[1] Kaplan B. Evaluating informatics applications – clinical decision support systems literature review. Int J Med Inf. 2001;64:15-37.
[2] Kaplan B. Evaluating informatics applications – some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inf. 2001;64:39-56.
[3] Hertzum M, Simonsen J. Positive effects of electronic patient records on three clinical activities. Int J Med Inf. 2008;77(12):809-817.
[4] Lindgren H, Eriksson S. Sociotechnical Integration of Decision Support in the Dementia Domain. Stud Health Technol Inform. 2010;157:79-84.
[5] Lindgren H. Towards personalized decision support in the dementia domain based on clinical practice guidelines. UMUAI 2011; DOI: 10.1007/s11257-010-9090-4
[6] Lindgren H. Decision Support System Supporting Clinical Reasoning Process – an Evaluation Study in Dementia Care. Stud Health Technol Inform. 2008;136:315-320.
[7] American Psychiatric Association. Diagnostic and statistical manual of mental disorders, fourth edition, text revision (DSM-IV-TR). American Psychiatric Association; 1994.
[8] McKeith IG, et al. Diagnosis and management of dementia with Lewy bodies: third report of the DLB consortium. Neurology 2005;65(12):1863-1872.
[9] Neary D, Snowden JS, Gustafson L, et al. Frontotemporal lobar degeneration. A consensus on clinical diagnostic criteria. Neurology 1998;51:1546-1554.
[10] Petersen RC, Stevens JC, Ganguli M, Tangalos EG, Cummings JL, DeKosky ST. Practice parameter: Early detection of dementia: Mild cognitive impairment (an evidence-based review). Neurology 2001;56:1133-1142.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-125
A Generic System for Critiquing Physicians' Prescriptions: Usability, Satisfaction and Lessons Learnt Jean-Baptiste LAMYa,1, Vahid EBRAHIMINIAa, Brigitte SEROUSSIa, Jacques BOUAUDb, Christian SIMONc, Madeleine FAVREd,e, Hector FALCOFFd,e, Alain VENOT a a Laboratoire d'Informatique Médicale et Bioinformatique (LIM&BIO), UFR SMBH, Université Paris 13, Bobigny, France b AP-HP, DSI, STIM, Paris, France; INSERM, UMRS 872, eq. 20, Paris, France c Silk Informatique, 40 bis avenue du général Patton, Angers, France d Université Paris Descartes, Faculté de Médecine, Département de Médecine Générale, Paris, France e Société de Formation Thérapeutique du Généraliste (SFTG), Paris, France
Abstract. Clinical decision support systems have been developed to help physicians to take clinical guidelines into account during consultations. The ASTI critiquing module is one such system; it provides the physician with automatic criticisms when a drug prescription does not follow the guidelines. It was initially developed for hypertension and type 2 diabetes, but is designed to be generic enough for application to all chronic diseases. We present here the results of usability and satisfaction evaluations for the ASTI critiquing module, obtained with GPs for a newly implemented guideline concerning dyslipaemia, and we discuss the lessons learnt and the difficulties encountered when building a generic DSS for critiquing physicians' prescriptions. Keywords. Evidence-based guidelines, Dyslipaemia, Drug prescription, Decision support, Evaluation
1. Introduction Clinical guidelines (CG) provide physicians with recommendations, but paper guidelines are difficult to use effectively during medical consultations [1]. This difficulty has led to the development of decision support systems (DSS) based on CG [2]. The ASTI project aims to develop a DSS to help physicians to take into account the treatment recommendations expressed in CG for chronic diseases [3]. ASTI includes a critiquing module that is automatically activated when the physician writes a drug prescription, and which issues an alert if the prescription does not follow the CG. The critiquing module was initially developed for hypertension and type 2 diabetes. However, unlike many DSS which focus on a single CG, ASTI is designed to be generic enough to cover all chronic diseases. To ensure the generic aspect of the 1
Corresponding Author: Jean-Baptiste Lamy, E-mail:
[email protected]. LIMBIO, UFR SMBH, Université Paris 13, 74 rue Marcel Cachin, 93017 Bobigny cedex, France.
system, a new CG concerning dyslipaemia [4] was implemented, and evaluated in the current study. The critiquing module [5] and the validation of its knowledge bases [6] have been presented elsewhere. The CG recommendations are modeled in the critiquing module’s knowledge base, and are then automatically translated into critiquing rules of the form “if physician prescribed treatment X to a patient with clinical condition P, then show criticism C”. An inference engine applies these rules, and has been integrated into éO généraliste, an electronic patient record (EPR) for general practitioners (GPs). Drug prescriptions, laboratory test results and some clinical conditions are automatically extracted from the EPR and used by the critiquing module. Other clinical conditions required by the critiquing module are entered manually by physicians on a special form integrated in the EPR and displayed when a patient is included in the ASTI study at the beginning of the consultation. We present here the results of the usability and satisfaction evaluations of the ASTI critiquing module for dyslipaemia, and we discuss the lessons learnt and the difficulties we encountered in the construction of this generic DSS.
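The rule base itself is not listed in this paper; purely to illustrate the rule form quoted above ("if physician prescribed treatment X to a patient with clinical condition P, then show criticism C"), the following Python sketch matches such rules against a prescription. The condition, drug and criticism text are invented placeholders, not entries from the ASTI knowledge base, and the real module translates its knowledge base into rules executed by an inference engine inside the EPR.

    # Illustrative sketch of the critiquing-rule form; rule contents are
    # invented placeholders, not entries from the ASTI knowledge base.
    RULES = [
        {
            "drug": "drug_X",
            "condition": "condition_P",
            "criticism": "Drug X is not recommended for patients with condition P.",
        },
    ]

    def critique(prescribed_drug, patient_conditions):
        """Return the criticisms triggered by a prescription, if any."""
        return [
            rule["criticism"]
            for rule in RULES
            if rule["drug"] == prescribed_drug and rule["condition"] in patient_conditions
        ]

    for alert in critique("drug_X", {"condition_P"}):
        print(alert)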
2. Methods A knowledge base has been designed and tested for the CG relating to dyslipaemia, as previously described [5, 6]. It includes 28 decision criteria (patient's clinical conditions, laboratory test results, etc.), 15 drug treatments and 17 recommendations, resulting in 73 critiquing rules. We evaluated the critiquing module for dyslipaemia in the laboratory with 33 GPs. The GPs were éO users who volunteered to participate. Two evaluations were performed with the ASTI critiquing module for dyslipaemia. Usability was evaluated using five simulated cases. These cases were derived, by an expert, from real cases, and were selected to cover the various aspects of the CG. The GPs were first briefly introduced to the use of the ASTI critiquing module. They were then asked, for each case, to code the data for the patient into the EPR and to enter two prescriptions: the usual prescription the doctor would write and a prescription that he or she did not consider to satisfy the CG. For each prescription, the physicians were asked to indicate whether they expected an alert, whether an alert was raised, whether the alert (or the absence of it) was justified, and whether the explanations and proposals accompanying the alert were appropriate. Additional textual comments were possible. Satisfaction was evaluated just after the GPs had used the system. This evaluation was based on seven sentences. For each sentence, the GP had to tick one of four boxes, indicating strong agreement with the sentence, weak agreement, weak disagreement or strong disagreement. The evaluation was followed by a focus group, during which GPs were asked about the system, the way they used it and their feelings about it.
3. Results

The usability evaluation involved 299 prescriptions (fewer than 2×5×33, because some GPs did not reply to all questions; e.g. they rarely tried a second prescription if the first one was already criticized), divided as shown in Figure 1. The system's specificity was 94±1.4% (95% confidence interval) and its sensitivity 84±2.1%. The 136 true positives include both prescriptions criticized as expected and prescriptions criticized unexpectedly where the GP then
agreed with the criticism; for 114 (84±2.1%) of the true positives, the GP considered the system’s explanations and treatment proposals as appropriate. In 80±2.3% of cases, the system raised an appropriate criticism or was silent with good reason.
Figure 1. Results of the usability evaluation.
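For readers who want to check how such figures are derived, the sketch below computes sensitivity and specificity from confusion-matrix counts. The exact breakdown of the 299 prescriptions is not given above (only the 136 true positives are reported), so the remaining counts are placeholders chosen to be consistent with the reported totals rather than the study's actual figures.

    # Standard confusion-matrix metrics; counts other than the 136 true
    # positives are placeholders consistent with the reported 299 prescriptions,
    # 84% sensitivity and 94% specificity, not the actual study breakdown.
    def sensitivity(tp, fn):
        return tp / (tp + fn)

    def specificity(tn, fp):
        return tn / (tn + fp)

    tp, fn = 136, 26   # prescriptions that should have been criticized
    tn, fp = 129, 8    # prescriptions that should not have been criticized
    print(f"sensitivity = {sensitivity(tp, fn):.0%}")   # 84%
    print(f"specificity = {specificity(tn, fp):.0%}")   # 94%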
The results of the satisfaction evaluation are shown in Table 1. The physicians were interested in receiving automatic criticisms about their prescriptions and they found the ASTI critiquing module easy to use. However, they also felt that the use of the module interfered with doctor-patient relations. Further discussions with physicians showed that this problem was related to the time required to code the clinical context for the patient in the form displayed during the first consultation with the patient.

Table 1. Evaluation of satisfaction, expressed in percent (%)

Question | Strong agreement | Weak agreement | Weak disagreement | Strong disagreement
I would like to receive automatic criticisms or suggestions relating to my prescriptions | 39 | 58 | 3 | 0
The ASTI critiquing module is easy to use | 3 | 88 | 9 | 0
The response time of the ASTI critiquing module is satisfactory | 73 | 24 | 3 | 0
The ASTI critiquing module is ergonomic | 18 | 73 | 9 | 0
The ASTI critiquing module can be effectively integrated into my daily practice | 28 | 66 | 6 | 0
The ASTI critiquing module interferes little with my relationship with the patient | 0 | 27 | 67 | 6
Extending the ASTI critiquing module to other guidelines would be a major step forward | 33 | 64 | 0 | 0
4. Discussion and conclusion

In this study, we evaluated the usability of and satisfaction with the ASTI critiquing module for dyslipaemia, a condition different from the hypertension and type 2 diabetes initially used for designing the system. Many other chronic diseases (e.g. asthma, cystic fibrosis) also have complex drug treatments evolving over long periods of time, with the optimal treatment depending on several factors (lab test results, clinical conditions, etc.). The system's specificity was high, and the GPs expressed an interest in critiquing systems. Similar evaluation designs, based on simulated cases, have already been used for evaluating DSS [7, 8]. It would be interesting to carry out further evaluations of the critiquing system in real practice. Performing the evaluation with volunteer GPs is a possible source of bias, since volunteers are usually enthusiastic, but this can hardly be avoided. The first lesson learnt from this study is that it is possible to design a generic DSS supporting several CG for chronic diseases, despite the considerable heterogeneity of
the various CG, which follow different treatment strategies (e.g. the CG for type 2 diabetes follows a "waterfall"-like linear strategy, depending on the stage of the disease, whereas the CG for dyslipaemia follows a "star"-like non-linear strategy depending on the type of dyslipaemia) and are often based on implicit knowledge (e.g. the CG does not generally mention that drug doses can be lowered to reduce adverse effects). Currently, five CG have been implemented: hypertension, type 2 diabetes, dyslipaemia, tobacco addiction and atrial fibrillation [5]. A few other DSS frameworks, such as Asbru [9] and others [10], have achieved a similar level of genericity. Another lesson is that an automatic DSS, like this critiquing module, requires tight integration with the EPR used by the physician. However, as the various EPR include essentially the same patient data, it is possible to integrate a DSS into many different EPR. Semantic interoperability is easy to achieve, because a DSS usually has a limited number of decision criteria (e.g. 28 for dyslipaemia). During the ASTI project, the critiquing module was integrated into another EPR, ALMA Pro, produced by the ALMA association. The difficulties encountered during the integration process were organizational and financial rather than scientific or technical. A third lesson is that physicians are interested in receiving automatic criticism on their prescriptions. This finding is consistent with other studies showing that physicians prefer automatic "background" DSS over "on-demand" DSS [11, 12]. However, we also learnt that displaying the CG textual excerpts applying to the patient (as in older versions of the critiquing module, but not the one used during the evaluation) is not sufficient for the critiquing of physicians' prescriptions. Indeed, CG give recommendations such as "when the patient has clinical context C, drug W should be prescribed". However, they do not explain to the physician why the drug X, Y or Z he prescribed is not appropriate. For a given patient, many prescribing errors are possible and should receive different criticisms: e.g. drug X may be contraindicated due to another disease that the patient has, drug Y may be indicated only as a second-line treatment, and drug Z may already have been prescribed a year ago without success. The major difficulty encountered is the coding of the patient's clinical conditions. The existing terminologies were developed for the coding of patient data in the EPR, but are not always relevant for coding decision criteria from CG. For instance, we were unable to code "family antecedent of myocardial infarction in the father before the age of 55" and "type 2 diabetes discovered at an advanced stage". Moreover, physicians do not usually code clinical elements in patient records. Instead, they tend to write them in free text, which is not usable by DSS as-is. While it might be possible to convince some physicians to code the principal diseases and antecedents of the patient, they are unlikely to code systematically complex decision criteria, such as those cited above. This problem has also been encountered in the ASTI guiding module [13], and is considered one of the ten "DSS grand challenges" [14]. By contrast, the coding of laboratory results and drug prescriptions is less problematic: test result criteria are generally simple in CG, and drug databases can be used for coding drug prescriptions.
Other difficulties relate to the CG themselves: they do not always provide clear recommendations, instead sometimes providing only “food for thought”, which is not sufficient for critiquing. The various strength levels of recommendations are useful but not always mentioned in CG. In some situations, two CG may be contradictory. For example, the French CG for hypertension and for dyslipaemia give different formulas for determining cardiovascular risk level. Formalizing CG during their development may help to resolve these problems [15].
In conclusion, we have shown that the ASTI critiquing module, initially developed for hypertension and type 2 diabetes, is generic enough for application to dyslipaemia with good results. We have also shown that this module is of interest to physicians. The main difficulty is the coding of the patient's clinical conditions, but several approaches could be applied to this problem. First, graphical user interfaces or automated text processing tools could be designed to help physicians with data entry. Second, the coding of some clinical conditions could be done after the possible criticism rather than before, with the physician explaining the reasons for his decision to the system. Finally, rather than executing the CG entirely, as the critiquing module does, other DSS might present the CG to physicians in a more usable form than plain text, possibly through graphical approaches. Acknowledgments: We thank the HAS (Haute Autorité de Santé, the French health authority) and the CNAM (Caisse Nationale d'Assurance Maladie, the French health insurance fund for employees) for funding the ASTI project.
References
[1] Dufour J, Bouvenot J, Ambrosi P, Fieschi D, Fieschi M. Textual Guidelines versus Computable Guidelines: A Comparative Study in the Framework of the PRESGUID Project in Order to Appreciate the Impact of Guideline Format on Physician Compliance. In: Proc AMIA Symp. Washington, DC; 2006. p. 219–223.
[2] Isern D, Moreno A. Computer-based execution of clinical guidelines: A review. Int J Med Inf. 2008;77(12):787–808.
[3] Séroussi B, Bouaud J, Dreau H, et al. ASTI: a guideline-based drug-ordering system for primary care. In: Medinfo. vol. 10. the Netherlands; 2001. p. 528–32.
[4] AFSSAPS. Prise en charge thérapeutique du patient dyslipidémique; 2005. Available at http://www.afssaps.fr/content/download/3967/39194/version/6/file/dysreco.pdf
[5] Lamy JB, Ebrahiminia V, Riou C, et al. How to translate therapeutic recommendations in clinical practice guidelines into rules for critiquing physician prescriptions? Methods and application to five guidelines. BMC Medical Informatics and Decision Making. 2010;10:31.
[6] Lamy JB, Ellini A, Ebrahiminia V, Zucker JD, Falcoff H, Venot A. Use of the C4.5 machine learning algorithm to test a clinical guideline-based decision support system. In: Proceedings of Medical Informatics Europe (MIE2008). Göteborg, Sweden; 2008. p. 223-228.
[7] Ramnarayan P, Roberts G, Coren M, et al. Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: a quasi-experimental study. BMC Medical Informatics and Decision Making. 2006;6:22.
[8] Bury J, Hurt C, Roy A, et al. LISA: a web-based decision-support system for trial management of childhood acute lymphoblastic leukaemia. Br J Haematol. 2005;129(6):746–54.
[9] Young O, Shahar Y, Liel Y, et al. Runtime application of Hybrid-Asbru clinical guidelines. J Biomed Inform. 2007;40(5):507–26.
[10] Peleg M, Tu S, Bury J, et al. Comparing computer-interpretable guideline models: a case-study approach. J Am Med Inform Assoc. 2003;10(1):52–68.
[11] Van Wyk J, van Wijk M, Sturkenboom M, Mosseveld M, Moorman P, van der Lei J. Electronic alerts versus on-demand decision support to improve dyslipidemia treatment: a cluster randomized controlled trial. Circulation. 2008;117(3):371–378.
[12] Ash J, Sittig D, Dykstra R, et al. Identifying best practices for clinical decision support and knowledge management in the field. In: Proceeding of MEDINFO. Cape Town, South Africa; 2010. p. 806–810.
[13] Séroussi B, Bouaud J, Sauquet D, et al. Why GPs do not follow computerized guidelines: an attempt of explanation involving usability with ASTI guiding mode. Stud Health Technol Inform. 2010;160(Pt 2):1236–40.
[14] Sittig DF, Wright A, Osheroff JA, et al. Grand challenges in clinical decision support. J Biomed Inform. 2008;41(2):387–92.
[15] Goud R, Hasman A, Strijbis AM, Peek N. A parallel guideline development and formalization strategy to improve the quality of clinical practice guidelines. Int J Med Inf. 2009;78(8):513–520.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-130
An OCL-compliant GELLO Engine Jing MEIa1, Haifeng LIUa, Guotong XIEa, Shengping LIUa, Baoyao ZHOUa a Information and Knowledge Department, IBM Research, Beijing, China
Abstract. GELLO, an expression language for clinical decision support, has been an approved HL7/ANSI normative standard for years. Unfortunately, few GELLO engines are available in practice, and the limited tooling seems to hamper the widespread adoption of GELLO. The objective of this paper is to validate the feasibility of implementing an OCL-compliant GELLO engine. Experimental results show that our GELLO engine runs successfully in a clinical guideline-based decision support system for chronic disease management. Keywords. GELLO, virtual medical record (vMR), clinical decision support (CDS), clinical guideline
1. Introduction

GELLO, an object-oriented query and expression language for clinical decision support (CDS) [1], was published in 2005 and approved as an HL7/ANSI normative standard. Syntactically, the GELLO language is based on the Object Constraint Language (OCL) and applies to an object-oriented data model. The underlying data model for GELLO was called a "virtual medical record" (vMR), and recently the HL7 CDS Work Group embarked on the development of an HL7 vMR standard based on a multi-national, multi-institutional analysis of CDS data needs [2]. Two options have appeared in the literature for GELLO implementation. One is to translate GELLO expressions into another language by means of a compiler producing executable code. The other is to build a GELLO engine that evaluates native GELLO expressions. Obviously, the former option requires ad-hoc translations for a variety of target languages, while the latter is more promising as a generic solution. In this respect, we aimed at implementing a native GELLO engine, and we observed two alternative approaches. One is to regard GELLO as an independent language and implement a standalone GELLO parser; the GELLO authoring tool [8] developed by Medical Objects is one such representative. Alternatively, we may make GELLO fully compliant with OCL and leverage a range of well-developed OCL tools, such as the Eclipse MDT (Model Development Tools) OCL [9], to implement a GELLO engine. Previous studies on GELLO implementation mainly focused on the relationship between GELLO and OCL, which prompted the HL7 CDS Technical Committee to approve the full compliance of GELLO with OCL [5]. Since then, little related work has been devoted to OCL-compliant GELLO implementation. In this paper, we introduce our implementation of an OCL-compliant GELLO engine, which has been used in a clinical guideline-based decision support system for
Corresponding author: Jing Mei, Diamond building A, Zhongguancun Software Part 19, Dongbeiwang West Road 8, Haidian District, Beijing 100193, China; E-mail:
[email protected].
chronic disease management. As a case study, we also present the experimental results of deploying our system in a clinical setting, in the daily care routine of diabetes patients. Finally, strengths, limitations and future directions of our work are discussed.
2. Method

The objective of our GELLO engine is to evaluate GELLO expressions against clinical data in order to reach a clinical conclusion (diagnosis result, therapy plan, etc.), where GELLO expressions are OCL expressions referring to the vMR data model. To this end, there are four steps:
1. implement the vMR model;
2. define the GELLO expressions;
3. feed the clinical data;
4. execute the evaluation.
As shown in Figure 1, we develop a three-layer model of the GELLO engine to accomplish the tasks above. We set up a model layer which provides the vMR model implementation in the first step. Specifically, it includes a core module of function implementations for manipulating HL7 data types, where an ontology reasoner is employed to compute the implication relationship of concept descriptors. In the second step, as GELLO expressions normally contain terminologies, such as SNOMED CT for observation codes, we develop a configurable ontology repository in the configuration layer to load the relevant terminologies. In the third step, in order to accommodate clinical data in non-vMR form, we develop a schema registry in the configuration layer to help the engine understand such data. Finally, we build a service layer, which takes clinical data and GELLO expressions as input and produces evaluation results as output. Here, an OCL engine is borrowed to parse the input GELLO expressions, and a vMR transformer is developed to transform the input clinical data into vMR form.
Figure 1. Three layers for GELLO engine implementation: the service layer (vMR transformer, OCL engine), which takes the GELLO expression and the clinical data as input; the configuration layer (schema registry, ontology repository); and the model layer (vMR model implementation, function implementation, ontology reasoner, vMR model).
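As a highly simplified picture of the flow through these layers, the Python sketch below first transforms incoming clinical data into a vMR-like structure and then evaluates an expression against it. It is a toy written for this description, not the authors' EMF/Java implementation, and all names and structures in it are invented.

    # Toy sketch of the service-layer flow (transform, then evaluate); the
    # dictionary-based "vMR" and the predicate-style "expression" are invented
    # stand-ins for the real EMF-generated vMR classes and the OCL engine.
    def transform_to_vmr(clinical_data):
        """Stand-in for the vMR transformer."""
        return {"laboratoryObservation": clinical_data.get("labs", [])}

    def evaluate(expression, vmr):
        """Stand-in for the OCL engine: here the expression is a Python predicate."""
        return expression(vmr)

    clinical_data = {"labs": [{"code": "1558-6", "value_mg_dl": 120}]}
    vmr = transform_to_vmr(clinical_data)

    # "Is any fasting glucose above 7 mmol/L (i.e. 7 x 18 = 126 mg/dL)?"
    expression = lambda v: any(o["value_mg_dl"] > 7 * 18
                               for o in v["laboratoryObservation"])
    print(evaluate(expression, vmr))  # False, as in the paper's first example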
Samples in Table 1 illustrate the input for executing our GELLO engine. The left column is a fragment of clinical data in CDA (Clinical Document Architecture [7]), which describes a blood glucose observation with a value of 120 mg/dL in the first row
and a diagnosis of diabetes mellitus type 2 in the second row. In the right column, we first define a GELLO expression to evaluate whether the patient's blood glucose level is higher than the threshold of 7 mmol/L, followed by another GELLO expression in the second row to evaluate whether a patient has diabetes mellitus as implied by his/her problem observations. We remark that data type functions such as the comparison of physical quantities (PQ) and the implication of concept descriptors (CD) are provided by our GELLO engine. The engine therefore concludes that the first evaluation result is false while the second is true, based on the facts that 1 mmol/L = 18 mg/dL (where both mmol/L and mg/dL are standard units for measuring blood glucose) and that diabetes mellitus type 2 is subsumed by diabetes mellitus in the SNOMED CT ontology.

Table 1. Samples of input to GELLO engine
Row 1, clinical data (in CDA): a blood glucose observation with a value of 120 mg/dL (fragment)
<statusCode code="completed"/> <effectiveTime value="201004071530"/>
Row 1, GELLO expression: is the blood glucose level higher than the 7 mmol/L threshold?
package vMR
context Patient
def: BG : CD = self.factory.CD('1558-6', 'LOINC', 'Glucose p fast SerPl-mCnc')
def: threshold : PQ = self.factory.PQ('7', 'mmol/L')
def: obs : Sequence(LaboratoryObservation) = self.isAssociatedWith.laboratoryObservation
obs -> exists(testCode.equal(BG) and value.oclAsType(PQ).greaterThan(threshold))
Row 2, clinical data (in CDA): a diagnosis of diabetes mellitus type 2 (fragment)
<statusCode code="completed"/> <effectiveTime value="201004071630"/>
Row 2, GELLO expression: does the patient have diabetes mellitus, as implied by the problem observations?
package vMR
context Patient
def: DM : CD = self.factory.CD('73211009', 'SNOMED CT', 'diabetes mellitus')
def: obs : Sequence(ProblemObservation) = self.isAssociatedWith.problemObservation
obs -> exists(problemCode.imply(DM))
2.1. Model Layer

This layer is responsible for implementing the underlying vMR model for GELLO. We first take the latest HL7 vMR domain analysis model (version 2010-03-22 [6]) as input, and leverage EMF (Eclipse Modeling Framework, a modeling framework and code generation facility for building tools and other applications based on a structured data model) to generate the vMR model code packages automatically. Next, to provide support for the HL7 data type functions, we code the implementation ourselves. Taking the first GELLO expression in Table 1 as an example, greaterThan is a function of PQ which requires comparing measurements with different units. Because 1 mmol/L = 18 mg/dL, the evaluation of "120 mg/dL is greater than 7 mmol/L" returns false. In addition, to provide the mechanism for creating instances of classes through the Factory method in GELLO [5], we define a Factory class and implement its instantiation functions. As shown in Table 1, "def: threshold: PQ = self.factory.PQ('7', 'mmol/L')" is such an example, which creates an instance of the PQ class, namely "threshold".
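The unit-aware comparison can be made concrete with a small sketch. The following Python fragment mimics the behaviour described for greaterThan on PQ; it is not the authors' data-type implementation, and the only conversion it encodes is the 1 mmol/L = 18 mg/dL relation stated above.

    # Sketch of a unit-aware physical quantity (PQ) comparison; illustrative only.
    CONVERSIONS_TO_MG_DL = {"mg/dL": 1.0, "mmol/L": 18.0}

    class PQ:
        def __init__(self, value, unit):
            self.value = float(value)
            self.unit = unit

        def _in_mg_dl(self):
            return self.value * CONVERSIONS_TO_MG_DL[self.unit]

        def greater_than(self, other):
            return self._in_mg_dl() > other._in_mg_dl()

    observed = PQ("120", "mg/dL")
    threshold = PQ("7", "mmol/L")   # 7 mmol/L corresponds to 126 mg/dL
    print(observed.greater_than(threshold))  # False, as in the first Table 1 example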
It is remarkable that, using an ontology reasoner [4], we implement a unique HL7 data type function: the implication of concept descriptors (CD), aka CD.imply (we do not use CD.implies because "implies" is an OCL reserved keyword). Recalling the example in Table 1, the evaluation of whether diabetes mellitus type 2 implies diabetes mellitus returns true (in terms of the SNOMED CT ontology).

2.2. Configuration Layer

In order to adapt to various deployment environments across clinical institutions, we provide two configurable components in the configuration layer. One is the schema registry. If the clinical data input to the service layer is not in vMR form, the registry is configured to assist the engine in understanding the schema of the input clinical data and transforming it into vMR. The other is the ontology repository. For the terminologies appearing in GELLO expressions, the engine is able to reason on them with access to the repository of the referenced ontology.

2.3. Service Layer

Clinical data and GELLO expressions are input to this layer, and evaluation results are its output. As mentioned above, if the input clinical data is in a format other than vMR, a vMR transformer performs the transformation, using the schema of the input clinical data registered via the configuration layer. Meanwhile, since GELLO is fully compliant with OCL, we utilize an OCL engine (the Eclipse MDT OCL [9]) to parse the input GELLO expressions, where all functional computations are passed to the underlying vMR model implementation. Note that our methodology is vendor-independent and other OCL tools could be applicable as well.
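Behind CD.imply is a subsumption test in the referenced terminology. The sketch below imitates that behaviour over a toy is-a hierarchy in Python; it is not the authors' reasoner [4], and the two-concept hierarchy is only the fragment needed for the Table 1 example.

    # Toy subsumption check standing in for CD.imply; the is-a edges below are a
    # two-concept fragment written for illustration, not a real SNOMED CT load.
    PARENTS = {
        "diabetes mellitus type 2": {"diabetes mellitus"},
        "diabetes mellitus": set(),
    }

    def implies(child, ancestor):
        """True if child equals the ancestor or is subsumed by it (transitively)."""
        if child == ancestor:
            return True
        return any(implies(parent, ancestor) for parent in PARENTS.get(child, ()))

    print(implies("diabetes mellitus type 2", "diabetes mellitus"))  # True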
3. Results

We have implemented our GELLO engine in a clinical guideline-based decision support system for chronic disease management. The prototype system has been successfully deployed at Peking University People's Hospital (one of the largest health providers in China) for managing diabetes patients. Specifically, we computerized a diabetes guideline (defined by consulting an expert and the literature) as a clinical decision process which represents decision conditions with GELLO expressions and assists clinicians in three respects. The first is to raise health alerts for patients, such as a continuously high blood glucose alert. The second is to provide prescription advice during oral glucose control therapy and insulin therapy. The third is to make referral suggestions for transferring patients from primary to secondary care, and vice versa. In total, 36 GELLO expressions are defined, two of which are presented in Table 1. Besides PQ and CD, another 13 HL7 data types are used in these expressions, such as TS (Point In Time) and IVL
(Interval of Point In Time). Almost 80% of the 15 HL7 data types' functions are implemented, including minus, plus, imply, equal and others. When deploying our system to the local clinical setting, where the clinical data is represented in the form of CDA, we register the corresponding CDA schema and develop the vMR transformer to transform CDA into vMR using XSLT transformation. Furthermore, as SNOMED CT is used in our GELLO expressions
to represent clinical concepts, we build a SNOMED CT ontology repository and leverage the ontology reasoner [4] for subsumption reasoning. At runtime, our GELLO engine is plugged into a process engine, namely FileNet P8, to execute the diabetes guideline-based process in which decisions are made by evaluating GELLO expressions. All 36 GELLO expressions are correctly evaluated against the real clinical data.
4. Discussion

Compared with a standalone GELLO implementation such as the GELLO authoring tool [8] developed by Medical Objects, an OCL-compliant GELLO engine can undoubtedly profit from well-developed OCL tools. This paper validates the feasibility of implementing an OCL-compliant GELLO engine with minimal effort. In particular, if the HL7 vMR model (currently a draft) is updated, our model-driven development facilitates code re-generation and, moreover, the manually written part (the function implementation) will not be overridden. In this respect, we pave a way for GELLO tooling, so as to promote the widespread adoption of GELLO. Moreover, the novel features of our GELLO engine include support for the HL7 data type functions and the GELLO factory initialization functions. To the best of our knowledge, this paper is the first to bring an ontology reasoner into a GELLO engine, making the implication of concept descriptors sound and complete. However, as pointed out in [3], overlapping and semantically non-compatible terminologies are in concurrent use, which is a significant challenge for scalable clinical decision support. So far, we provide support for ontology reasoning within one single terminology, and our ongoing work addresses the problem across terminologies. Another imperfection is that we currently do not provide full support for all HL7 data type functions. The justification is that some HL7 data type functions, such as the promotion and demotion type conversions, are rarely used in practical GELLO expressions. More importantly, we notice that such functions can be replaced by OCL operations; e.g., the oclAsType() operation is a good candidate for HL7 data type conversions. Finally, as GELLO expressions could also serve as queries to fetch vMR data, we also consider enriching our GELLO engine into a vMR query engine in the future.
References
[1] Sordo M, Ogunyemi O, Boxwala AA, Greenes RA. GELLO: An Object-Oriented Query and Expression Language for Clinical Decision Support, AMIA Annu Symp Proc. 2003: 1012.
[2] Kawamoto K, Del Fiol G, Strasberg HR, Hulse N, Curtis C, Cimino JJ, Rocha BH, et al. Multi-National, Multi-Institutional Analysis of Clinical Decision Support Data Needs to Inform Development of the HL7 Virtual Medical Record Standard, AMIA Annu Symp Proc. 2010: 377-381.
[3] Kawamoto K, Del Fiol G, Lobach DF, Jenders RA. Standards for Scalable Clinical Decision Support: Need, Current and Emerging Standards, Gaps, and Proposal for Progress, the Open Medical Informatics Journal, 2010 (4): 235-244.
[4] Mei J, Liu S, Xie G, Kalyanpur A, Fokoue A, Ni Y, Li H, Pan Y. A Practical Approach for Scalable Conjunctive Query Answering on Acyclic EL+ Knowledge Base, ISWC Proc. 2009: 408-423.
[5] HL7 GELLO standard, available from http://www.hl7.org/v3ballot/html/infrastructure/gello/gello.htm
[6] HL7 vMR wiki, available from http://wiki.hl7.org/index.php?title=Virtual_Medical_Record_(vMR)
[7] HL7 CDA standard, available from http://www.hl7book.net/index.php?title=CDA
[8] Medical-Objects GELLO wiki, available from http://wiki.medical-objects.com.au/index.php/GELLO
[9] Eclipse Modeling OCL, available from http://www.eclipse.org/modeling/mdt/?project=ocl
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-135
Improvement of Inter-Services Communication through a CDSS Dedicated to Myocardial Perfusion Scintigraphy Julie NIESa1, Gersende GEORGb, Marc FARAGGIc, Isabelle COLOMBETd, Pierre DURIEUX d a MEDASYS, Espace technologique de St Aubin, Gif-Sur-Yvette Cedex, France b French National Authority for Health (HAS) Saint-Denis La Plaine, France; c Department of nuclear medicine and dMedical Informatics Department at Georges Pompidou European Hospital, Paris, France
Abstract. This study addresses the question of communication between medical wards and the nuclear medicine department for the realization of myocardial perfusion scintigraphy. It analyses the effects of a reminder for completing the content of an order form. It shows that the CDSS impacted ordering practices and can be seen as a system that helps structure the information and improve the quality of orders. Keywords: medical ward – technical service communication, organization, CDSS, CPOE
1. Introduction
Clinical Decision Support Systems (CDSS) have demonstrated their efficacy in improving clinical practices and patient outcomes [1-3], particularly in the form of on-screen computer reminders [4]. However, previous experimental works, set up in different domains, show the absence of a learning effect associated with the reminder effect. For example, Weingarten et al. evaluated telephone reminders to encourage rapid discharge of patients with chest pain without increasing the risk of post-discharge complications. Using an alternating-time series design, they showed that the degree of medical compliance with guidelines decreased back to its pre-intervention level [5]. Similar effects were reported by Durieux et al. with a CDSS dedicated to venous thrombosis prevention: each time the system was inactive, medical practices came back to their initial, pre-intervention level [6]. The present work consists of implementing an on-screen computer reminder to help in ordering Myocardial Perfusion Scintigraphy (MPS). It was performed at the Georges Pompidou European Hospital (HEGP), a university teaching hospital in Paris, France. Since its opening in 2000, the hospital has had an entirely computerized Hospital Information System (HIS) with a patient-centered Electronic Health Record (EHR), DxCare® [7]. The EHR allows the computerized prescription of drugs, imaging and laboratory tests by means of a Computerized Physician Order Entry (CPOE) system.
Corresponding Author.
All MPS orders are made by physicians through the CPOE. Physicians of the nuclear medicine department answer requests and schedule examinations on the basis of information transmitted by the prescriber through the CPOE, in a free-text field associated with orders (hereafter called the "comment"). This information should describe patient characteristics and the aim of the examination. However, numerous orders are transmitted to the nuclear medicine department with no comment. This lack of information on the clinical context leads to the cancellation of many scheduled examinations. Some studies have demonstrated that a lack of information sharing can lead to misunderstanding [8], and that some common representation is required to communicate about a shared task [9]. Indeed, the information contained in the comment (i.e., the objective of the examination) is a precondition for the nuclear medicine department to perform the examination. The implementation of a CDSS attached to MPS ordering was requested by the nuclear medicine department to improve the transmission of the specific patient data needed to schedule the examination. This study analyses the effects of the CDSS on the content of MPS orders, by checking for the existence of a comment associated with orders and searching the comment for information useful for MPS realization.
2. Methods
2.1. Intervention
Myocardial scintigraphy consists of creating functional images of the myocardium showing where the blood is flowing, by following over time the distribution of tracers injected into the blood stream. The MPS may or may not be performed during a stress test (i.e., exercise), measuring to what extent myocardial perfusion and oxygen consumption adapt to exercise. This examination is therefore performed to search for myocardial ischemia and its functional consequences in patients for whom this primary diagnosis is suspected or, in case of documented and already treated coronary artery disease, for patient and therapy monitoring: evaluation of residual myocardial ischemia under medical therapy and search for post-infarction myocardial viability. Therefore, some knowledge of the clinical context and diagnostic objective is needed to anticipate the conditions of tracer administration (i.e., at rest, during a muscular effort or during a pharmacological stress) and thus to appropriately schedule the examination and prepare patients. All physicians of the hospital could order MPS. The aim of our work was to characterize the missing information in the orders which could help the nuclear medicine physicians identify the objective of the examination. The content of the reminder and a dedicated questionnaire were designed with the physicians performing the MPS. The reminder proposed one or several MPS types to the prescriber according to the patient characteristics, and aimed to prevent undesirable or fatal events which could occur in case of a medical contraindication for the stress test. It also reminded the prescriber of the specific data to be transmitted to the nuclear medicine department. During MPS ordering, a dedicated questionnaire appeared once per patient stay. This questionnaire helped the prescriber to complete the clinical data required by the patient-specific reminder: 1) coronary disease history, myocardial infarctions and/or revascularization interventions; 2) coronary risk factors when needed, in case of a primary diagnosis objective; 3) contra-indications for the stress test.
The reminder was displayed to the prescriber, proposing a pre-formatted text to be pasted in the comment attached to the order (Figure 1). The memo proposed by the CDSS is a well-structured summary of all the data contained in the questionnaire. An explanation justifying the proposed CDSS memo is also provided to improve the adherence of the prescriber. The memo is not automatically integrated in the order window. The prescriber has different options to complete the order: 1) copy/paste the CDSS memo into the comment area of the order window, 2) modify the CDSS memo, or 3) write his or her own comment.
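For illustration only, a minimal sketch of how such a memo could be assembled from the questionnaire answers is shown below; the field names, the rule for suggesting an MPS type and the wording are hypothetical and do not reproduce the actual DxCare® reminder.

```python
# Purely illustrative sketch: build a pre-formatted memo from the questionnaire
# answers and suggest an MPS type. Field names, the contraindication rule and
# the wording are hypothetical and do not reproduce the actual DxCare reminder.
def build_memo(answers):
    """answers: dict with keys 'coronary_history', 'risk_factors',
    'stress_test_contraindications' (all hypothetical)."""
    if answers["stress_test_contraindications"]:
        mps_type = "MPS at rest or under pharmacological stress"
    else:
        mps_type = "MPS with exercise stress test"

    lines = [
        f"Proposed examination: {mps_type}",
        f"Coronary disease history: {answers['coronary_history'] or 'none reported'}",
        f"Coronary risk factors: {', '.join(answers['risk_factors']) or 'none reported'}",
        "Contra-indications for stress test: "
        + (", ".join(answers["stress_test_contraindications"]) or "none"),
    ]
    return "\n".join(lines)

# The prescriber may paste the memo as-is, edit it, or write a free comment.
print(build_memo({
    "coronary_history": "myocardial infarction in 2003, stented LAD",
    "risk_factors": ["diabetes", "smoking"],
    "stress_test_contraindications": [],
}))
```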
Figure 1: Example of a Myocardial-Scintigraphy-CDSS display: a framed memo and a text justifying the proposed decision support. The reminder content has been translated from French.
2.2. Quantitative Evaluation
During the study period (31 months, from January 2005 to July 2007), the CDSS was activated during two periods (A1 and A2) and not activated during two control periods (C1 and C2). The length of each period was:
• C1 – 23 weeks, from 1st January 2005 to 13th June 2005
• A1 – 43 weeks, from 14th June 2005 to 13th April 2006
• C2 – 23 weeks, from 14th April 2006 to 27th September 2006
• A2 – 43 weeks, from 28th September 2006 to 31st July 2007.
In the HIS, it was not possible to directly link the imaging orders with their realization. Thus, we could not verify whether the reminder had an impact on the number of MPS cancelled. We therefore analyzed the content of the comments, which should be directly affected by the CDSS display. We analyzed the alternating series with 'the number of comments influenced by the CDSS memos' as primary outcome and 'the number of empty comments' as secondary outcome. We evaluated the number of MPS orders and the presence (or not) of associated comments according to the distinct experiment periods. Comments were blindly classified by two authors (JN and GG) into 4 categories: 'Identical', 'Modified', 'Different', and 'Empty' (see Table 1 for category descriptions). Divergences were resolved by consensus. Comments classified as 'Identical' or 'Modified' correspond to comments influenced by the CDSS.
2.3. Qualitative Evaluation
We also performed a comparative study of the comments' content for every period. We used software dedicated to the statistical analysis of texts, Tropes™. We focused on the concepts used by the CDSS and appearing in the C2 period.
3. Results
3.1. Quantitative Results
Comments typed as 'Identical' and 'Modified' show that the CDSS recommendation has been followed in the A1 and A2 periods, 288 (36.9%) and 314 (39.2%) times, respectively. The percentage of empty comments decreased during and after the first activated period (Table 1).
Table 1: Description of the comments epidemiology according to experiment periods: n (%) [95%CI]. 95%CI: 95% Confidence Intervals for proportions were computed using exact binomial distribution.
Type of comment | C1 (N=859) | A1 (N=779) | C2 (N=323) | A2 (N=801)
Identical (copied and pasted from the CDSS memos) | N/A | 75 (9.6%) [7.6%-11.9%] | N/A | 57 (7.1%) [5.4%-9.1%]
Modified (partly copied and pasted from the CDSS memos with additional information; totally written by the prescriber containing information from the CDSS memos, with or without complementary information) | N/A | 213 (27.3%) [24.2%-30.6%] | N/A | 257 (32.1%) [28.8%-35.4%]
Different (with no link with the CDSS memos) | 739 (86.0%) [83.5%-88.2%] | 414 (53.2%) [49.5%-56.6%] | 314 (97.2%) [94.7%-98.7%] | 455 (56.8%) [53.3%-60.2%]
Empty | 120 (14.0%) [11.7%-16.4%] | 77 (9.9%) [7.8%-12.1%] | 9 (2.8%) [1.3%-5.2%] | 32 (4.0%) [2.7%-5.5%]
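As an aside, the exact binomial (Clopper-Pearson) confidence intervals reported in Table 1 can be reproduced with a few lines of code; the use of Python and statsmodels is our illustration and not part of the original study.

```python
# Illustrative only: reproduce the exact binomial (Clopper-Pearson) 95%
# confidence intervals of Table 1 with statsmodels.
from statsmodels.stats.proportion import proportion_confint

# (label, count, total) values taken from Table 1
cells = [
    ("Identical, A1", 75, 779),
    ("Modified,  A1", 213, 779),
    ("Empty,     C2", 9, 323),
]

for label, count, total in cells:
    low, high = proportion_confint(count, total, alpha=0.05, method="beta")  # "beta" = Clopper-Pearson
    print(f"{label}: {count}/{total} = {count / total:.1%} [{low:.1%}-{high:.1%}]")
```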
3.2. Qualitative Results
Tropes™ analysis demonstrated that some concepts are present in every study period, such as the goal of the examination, e.g., 'search for ischemia' or 'search for viability'. However, some concepts which did not exist in C1 appeared in C2, e.g., 'contraindications' (179 occurrences), 'asthma' (11 occurrences), and 'aneurysm' (10 occurrences). Other concepts are more represented in C2: for example, 6 occurrences of concepts representing beta-blocking drugs were retrieved in C1, versus 37 in C2. All these concepts were used in the memos proposed in A1. We can thus infer a form of learning, or sensitization to the information to be communicated to the nuclear medicine department.
4. Discussion and Conclusion
Our study suggests that the CDSS could have impacted the MPS orders. The quantitative analysis showed that the percentage of empty comments decreased after the first activated period and that the contents of comments were directly influenced by the CDSS display. The qualitative analysis showed that the prescribers still used the CDSS concepts during CDSS inactivation periods. The CDSS could therefore be seen as a system that helps structure the information and improve the quality of comments. In previous studies [5, 6], the support compensated for an error or omission: as long as the system was active, the reminder was efficient, but all effects stopped when the system was disabled. In our experiment, the support was used to structure reasoning which is always performed by the prescriber but which is not reported along with the order. Our study has some limitations. 1) The CDSS, which was stopped in April 2006, was reintroduced in September 2006 upon request from the nuclear medicine department physicians, who had noticed a difference in the comments' content. Consequently, the study periods have different lengths. The reintroduction of the CDSS confirmed their observation. 2) Patient profiles were not analyzed, but we have no reason to believe that patient profiles changed over time. In further work, we will specifically analyze the content of the 'Modified' comments from the A1 and A2 periods in order to determine the information which is not displayed by the CDSS but considered important to be communicated by the prescriber.
References
[1] Garg AX, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 293 (2005), 1223-38.
[2] Mollon B, et al. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak 9 (2009), 11.
[3] Pearson SA, et al. Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature. BMC Health Serv Res 9 (2009), 154.
[4] Shojania KG, et al. The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev 3 (2009), CD001096.
[5] Weingarten SR, Riedinger S, et al. Practice guidelines and reminders to reduce duration of hospital stay for patients with chest pain. Ann Intern Med 120 (1994), 257-63.
[6] Durieux P, et al. A clinical decision support system for prevention of venous thromboembolism: effect on physician behavior. JAMA 283 (2000), 2816-21.
[7] Degoulet P, et al. The HEGP component-based clinical information system. Int J Med Inform 69 (2003), 115-26.
[8] Beuscart-Zephir MC, Pelayo S, Anceaux F, Maxwell D, Guerlinger S. Cognitive analysis of physicians and nurses cooperation in the medication ordering and administration process. Int J Med Inform 76 (2007), S65-77. Epub 2006 Jul 7.
[9] Cooke NJ, Salas E, Cannon-Bowers JA, Stout RJ. Measuring Team Knowledge. Human Factors 42 (2000), 151-73.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-140
Prognostic Data-Driven Clinical Decision Support - Formulation and Implications Ruty RINOTTa,1 Boaz CARMELI a Carmel KENTa, Daphna LANDAU a Yonatan MAMANa Yoav RUBIN a, Noam SLONIMa a IBM Haifa Research Labs, 165 Aba Hushi st., Haifa 31905, Israel
Abstract. Existing Clinical Decision Support Systems (CDSSs) typically rely on rule-based algorithms and focus on tasks like guidelines adherence and drug prescribing and monitoring. However, the increasing dominance of Electronic Health Record technologies and personalized medicine suggest great potential for prognostic data-driven CDSS. A major goal for such systems would be to accurately predict the outcome of patients' candidate treatments by statistical analysis of the clinical data stored at a Health Care Organization. We formally define the concepts involved in the development of such a system, highlight an inherent difficulty arising from bias in treatment allocation, and propose a general strategy to address this difficulty. Experiments over hypertension clinical data demonstrate the validity of our approach. Keywords. Clinical Decision Support, Data Driven, Machine Learning, Prognostic
1. Introduction
The need for Clinical Decision Support Systems (CDSSs) is increasing rapidly [1]. Most existing systems are rule-based systems focused on guideline adherence, drug prescribing and monitoring, etc. [2]. The increasing pace at which Health Care Organizations (HCOs) adopt Electronic Health Record (EHR) technologies and the increasing recognition of the importance of personalized medicine suggest great potential for another type of CDSS, aiming to predict the outcomes of treatments considered for an individual patient via statistical and machine learning algorithms. We suggest a formal general description for such a prognostic data-driven CDSS (pdd-CDSS) and highlight an inherent difficulty associated with the development of such a system, related to the inherent bias in HCOs' clinical data. We then propose a general strategy to address this difficulty and demonstrate our approach over clinical data of hypertension patients.
2. Methods 2.1. Defining Relevant Concepts We consider a patient who is at stage k of disease d. The pdd-CDSS should assist the physician by predicting the expected outcome of relevant candidate treatments for this 1
Corresponding Author.
individual patient, through mining the HCO's clinical data. Let T be a random variable with values {t1, …, tNt}, representing distinct candidate treatments. Let O be a random variable with values {o1, …, oNo}, representing distinct outcomes. We assume that the HCO maintains data about Nf clinical features, denoted by the random variables {f1, …, fNf}. The sample population for the pdd-CDSS consists of Np patients who have already been at stage k of disease d and whose received treatment and resulting outcome are recorded in the HCO's database. These Np patients can thus be divided into mutually exclusive and exhaustive treatment groups, according to their treatment value, T, denoted {gt1, …, gtNt}.2 The data mined by the pdd-CDSS can thus be represented by a matrix M, where M(i,j) indicates the value of the i-th patient according to the j-th feature. The treatment and the outcome variables can be represented via two additional column vectors. Finally, we denote a new patient by the index i*, and the data associated with her is represented via an additional row in M, while T(i*) and O(i*) are obviously unknown. All these notations are depicted in Fig. 1a.
2.2. Treatment Groups are Inherently Biased
Our first observation is that, from a statistical perspective, different treatment groups often represent different populations, reminiscent of an observational study [3]. As an extreme example, let us assume that gender, denoted for example by fj, affects treatment success. We further assume that in the HCO's data, for all patients in gt1, fj=M, while for all patients in gt2, fj=F, e.g., due to the HCO's guidelines. Next, we consider a new female patient. Since there are no examples in the data of female patients who received treatment t1, and assuming gender affects the treatment success, machine learning and statistical analysis algorithms will not be able to properly predict the outcome of applying t1 to this new patient based on the HCO's records. In practice, we do not expect the distinction between the treatment groups to be that obvious. However, any bias in baseline covariates between treatment groups will affect prediction ability and must be considered in the design of a pdd-CDSS. Next, we propose one strategy to address this issue.
Figure 1. (a) Notations. (b) A flow chart for the proposed pdd-CDSS.
2 For simplicity, if a patient received more than one treatment during the same stage of the disease, her assignment to a treatment group is done based on the most recent treatment she received.
2.3. A Valid Flow for pdd-CDSS
In the example above, while we could not predict the outcome of applying t1 to the new patient, we could have predicted the outcome of applying t2 to that patient. Thus, if the "customary"3 treatment can be determined for a new patient, the outcome of that treatment may be reliably predicted. This suggests a strategy of limiting outcome prediction to "customary" treatments. However, identifying the "customary" treatment for a new patient might be far from trivial, involving complex considerations. Here, we propose to first exploit the bias in treatment allocation to predict the HCO's "customary" treatment. If a treatment group is clearly identified, it implies that the patients in that treatment group are relatively similar to the new patient, in particular in the context of the covariates that distinguish the different treatment groups. Hence, outcome prediction can be reliably performed in that treatment group. Thus we propose to decompose outcome prediction for a new patient into two separate tasks (cf. Fig. 1b):
• Treatment prediction: predict T(i*), i.e., the HCO's "customary" treatment for the new patient, using all Np patients as training data.
• Outcome prediction for the predicted treatment: predict the outcome only for the predicted treatment; namely, predict O(i*) given that the treatment is T(i*), using only patients who underwent T(i*) as training data.
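The two-step flow can be sketched as follows; this is a simplified illustration under our own assumptions (scikit-learn, a plain distance-weighted kNN and generic identifiers), whereas the actual implementation described in Section 3 additionally uses MI-weighted distance measures.

```python
# Illustrative sketch of the two-step pdd-CDSS flow (our assumptions, not the
# authors' code): predict the "customary" treatment first, then predict the
# outcome using only patients from the predicted treatment group.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def predict_for_new_patient(M, T, O, x_new, k=10):
    """M: (Np, Nf) feature matrix; T: treatments; O: outcomes; x_new: (Nf,) new patient."""
    x_new = np.asarray(x_new).reshape(1, -1)

    # Step 1: treatment prediction, trained on all Np patients.
    treatment_clf = KNeighborsClassifier(n_neighbors=k, weights="distance")
    treatment_clf.fit(M, T)
    t_star = treatment_clf.predict(x_new)[0]

    # Step 2: outcome prediction, restricted to the predicted treatment group.
    in_group = (T == t_star)
    outcome_clf = KNeighborsClassifier(n_neighbors=k, weights="distance")
    outcome_clf.fit(M[in_group], O[in_group])
    o_star = outcome_clf.predict(x_new)[0]
    return t_star, o_star
```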
3. Results
We demonstrate our methodology over clinical data collected for hypertension patients as part of the Hypergenes project4. We identified three major possible treatments in the data: non-drug therapy (t1); angiotensin II receptor blockers (t2); and beta blockers (t3). We focused on patients who suffer from Stage-1 hypertension and for which: (a) the treatment group is known and the date on which this treatment was assigned is known5; (b) systolic and diastolic blood pressure (BP) were measured when the treatment was selected and at an additional later time point. This led to a dataset of Np=1771 patients with respect to 181 clinical features. A decrease in BP to below hypertension levels (diastolic < 90, systolic < 140) was denoted as outcome o1, while failing to do so was denoted by o2. In treatment groups gt1, gt2, gt3, we had 750, 475, and 63 patients, respectively, for which 39%, 51%, and 37% had a resulting outcome o1, respectively.
3.1. Prediction Algorithms
For both classification tasks (Section 2.3) we used a k-Nearest Neighbor (kNN) classifier [4]. Given a new patient, the algorithm finds her k NNs in the training data and predicts her label via a weighted majority of their labels. In the treatment-prediction task the training data consisted of all 1771 patients and the label was the given treatment. For the outcome-prediction task, we first predict the patient's "customary" treatment, and then use the patients within this treatment group as the training data, and their outcome as the label.
3 Importantly, this "customary" treatment is not necessarily optimal for the new patient. Rather, it solely reflects decisions made in the past in this HCO for patients with somewhat similar characteristics.
4 For more details, see http://www.hypergenes.eu/.
5 For simplicity, we assume that this is also the date when the treatment course started.
An inherent challenge in k-NN classification is to define the distance measure used to determine the NNs. Ideally, this measure should be adapted to the classification task, e.g., by assigning different weights to features based on their prediction power. In our context, while some features might contribute to treatment prediction, others might contribute to outcome prediction. Further, different features may affect the success of different treatments. For example, initial weight may significantly affect the success of a life-style change treatment while having a smaller effect on the success of drug therapy. This suggests that 4 different distance measures should be learned from our data: one for treatment prediction, and 3 for outcome prediction - one within each treatment group.
3.2. Information Based Distance
A natural way to quantify the dependency of a feature with a label is via the Mutual Information (MI) associated with their joint probability [5]. This measure is especially attractive in our context as it applies similarly to continuous and categorical random variables; it allows capturing any type of dependency, including non-linear relations; and there is much literature on correcting MI estimates for sample size effects, a dominant problem in real-world clinical data. Here, we used the technique in [6] to estimate the MI between each feature and the relevant label. As expected, the MI value associated with a feature changes along with the task. For example, for the feature "Patient age" we observed high MI for the treatment-prediction task (0.17 bits), while nearly zero MI in all outcome-prediction tasks. In Table 1 we present the MI estimates in each prediction task for the three features with the highest observed MI. In all prediction tasks we used the obtained MI values to determine the distance measure [7], discarding features with MI < 0.01 bits, and weighting the remaining features by their relative MI value. This led to 4 different similarity measures, where in each prediction task the most informative features contribute the most to the similarity estimates.
Table 1. Features MI (in bits) under different tasks for the three features with the highest MI per task.
Treatment Prediction | gt1 Outcome Prediction | gt2 Outcome Prediction | gt3 Outcome Prediction
Diastolic BP at decision (0.34) | Average systolic BP prior to decision (0.34) | Systolic BP at decision (0.08) | Systolic BP at decision (0.13)
Systolic BP at decision (0.23) | Average LDL cholesterol prior to decision (0.24) | Diastolic BP at decision (0.04) | Height (0.03)
Age (0.17) | Average systolic BP prior to decision (0.19) | LDL cholesterol at decision (0.02) | Alcohol consumption at decision (0.01)
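The MI-based feature weighting can be illustrated with the following sketch; the MI estimator (scikit-learn rather than the estimator of [6]), the conversion to bits and the Euclidean form of the weighted distance are our assumptions.

```python
# Illustrative sketch (our assumptions, not the authors' code): estimate the MI
# between each feature and the task label, discard features with MI < 0.01 bits,
# and weight the remaining features by their relative MI in the k-NN distance.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_weighted_distance(M, y, min_mi_bits=0.01):
    mi_bits = mutual_info_classif(M, y) / np.log(2)   # estimates returned in nats
    weights = np.where(mi_bits < min_mi_bits, 0.0, mi_bits)
    if weights.sum() > 0:
        weights = weights / weights.sum()             # relative MI weights

    def distance(a, b):
        return np.sqrt(np.sum(weights * (np.asarray(a) - np.asarray(b)) ** 2))

    return weights, distance
```

Such a callable distance could, for example, be passed as the metric of a brute-force k-NN classifier; learning it separately for treatment prediction and for outcome prediction within each treatment group yields the four measures mentioned above.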
3.3. Prediction Results Using the distance measure learned for the treatment-prediction task and k=10 we predicted the treatment for all 1771 patients. The prediction accuracy was 85% suggesting a significant statistical bias between the treatment groups, exploited by the k-NN classifier. Next, we focused on patients for whom the predicted treatment was correct and relatively certain, i.e., there was a relatively clear majority for the correct label amongst the patient's k-NN. This resulted with 632, 361 and 10 patients in gt1, gt2, and gt3, respectively. For each of these patients we predicted the outcome using the similarity measure learned within the relevant treatment-group and k=10, ending up with an average prediction accuracy of 66%.
4. Discussion
The increasing scale and complexity of recorded clinical features that affect treatment choice highlight the need for CDSSs [1]. Here we formally defined a pdd-CDSS that utilizes an HCO's clinical data to predict patient outcomes for candidate treatments. In recent years, several decision support tools have been developed that rely on mining clinical trials' results [8]. However, the increasing pace at which HCOs adopt EHR technologies suggests great potential in mining HCOs' clinical data, along with non-obvious challenges. Here we discussed how treatment allocation bias hampers the ability to predict outcomes for all candidate treatments. We suggested a framework to identify such biases and pinpoint the treatments for which prediction can be made reliably. Treatment group bias has been discussed in papers that evaluate non-randomized clinical trials and observational studies [3], and various methods have been proposed to try and correct for this bias [9]. In contrast, here we do not aim to correct for treatment bias, but to narrow outcome prediction to cases where this bias is less harmful. Considering bias-correction tools within the pdd-CDSS framework suggested here is left for future research. The pdd-CDSS framework raises additional challenges. First, such systems cannot be detached from external knowledge sources such as published guidelines. Integrating guideline-based CDSSs with pdd-CDSSs can add important information such as contraindications and sharpen the recommendations created by such systems. In parallel, much work remains in developing prediction algorithms that properly handle the heterogeneity in clinical data, consider dependencies between features, and more. Finally, there remains the challenge of prediction evaluation. Obviously it is impossible to measure outcomes of treatments not delivered. Thus, alternative methods to evaluate the accuracy of prediction algorithms must be formulated.
References
[1] Kawamoto K, Lobach DF, Willard HF, Ginsburg GS. A national clinical decision support infrastructure to enable the widespread and consistent practice of genomic and personalized medicine, BMC Medical Informatics and Decision Making, 9:17 (2009), 1-14.
[2] Kuperman GJ, Bobb A, Payne TH, et al. Medication-related Clinical Decision Support in Computerized Provider Order Entry Systems: A Review, Journal of the American Medical Informatics Association, 14:1 (2007), 29-40.
[3] Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why Observational Studies Should be Among the Tools Used in Comparative Effectiveness Research, Health Affairs, 29 (2010), 1818-1825.
[4] Cover TM, Hart PE. Nearest neighbor pattern classification, IEEE Transactions on Information Theory, 13:1 (1967), 21-27.
[5] Cover TM, Thomas JA. Elements of Information Theory, Wiley, New York, (2006).
[6] Slonim N, Atwal GS, Tkacik G, Bialek W. Information based clustering, PNAS, 102:51 (2005), 8297-8302.
[7] Garcia-Laencina PJ, Sancho-Gomez J, Figueiras-Vidal A, et al. K nearest neighbours with mutual information for simultaneous classification and missing data imputation, Neurocomputing, 72 (2009), 83–93.
[8] Thome SD, Loprinzi CL, Heldebrant MP. Determination of Potential Adjuvant Systemic Therapy Benefits for Patients With Resected Cutaneous Melanomas, Mayo Clinic Proceedings, 77:9 (2002), 913-917.
[9] Rubin D. Estimating causal effects of treatments in randomized and nonrandomized studies, J. Ed. Psychol. 66 (1974), 688–701.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-145
Knowledge-based Surveillance for Preventing Postoperative Surgical Site Infection Arash SHABAN-NEJADa,1, Gregory W. ROSE b, Anya OKHMATOVSKAIAa, Alexandre RIAZANOVc, Christopher J.O. BAKERc, Robyn TAMBLYNa, Alan J. FORSTERb, David L. BUCKERIDGEa a McGill Clinical & Health Informatics, Department of Epidemiology and Biostatistics, McGill University, Montreal, Quebec, H3A 1A3Canada b Department of Medicine, University of Ottawa, Ottawa, ON, Canada c Department of Computer Science & Applied Statistics, University of New Brunswick, Saint John, New Brunswick, E2L 4L5,Canada
Abstract. At least one out of every twenty people admitted to a Canadian hospital will acquire an infection. These hospital-acquired infections (HAIs) take a profound individual and system-wide toll, resulting in thousands of deaths and hundreds of millions of dollars in additional expenses each year. Surveillance for HAIs is essential to develop and evaluate prevention and control efforts. In nearly all healthcare institutions, however, surveillance for HAIs is a manual process, requiring highly trained infection control practitioners to consult multiple information systems and paper charts. The amount of effort required for discovery and integration of relevant data from multiple sources limits the current effectiveness of HAIs surveillance. In this research, we apply knowledge modeling and semantic technologies to facilitate the integration of disparate data and enable automatic reasoning with these integrated data to identify events of clinical interest. In this paper, we focus on Surgical Site Infections (SSIs), which account for a relatively large fraction of all hospital acquired infections. Keywords. Bio-ontologies, Surgical Site infections, Hospital acquired infection, Knowledge management
1. Introduction
A Surgical Site Infection (SSI) is commonly described as a type of Hospital Acquired (or Healthcare-Associated) Infection (HAI), related to a surgical procedure and occurring at the site of a surgical incision. SSIs are divided into three categories: superficial incisional (occurs in skin and subcutaneous fat), deep incisional (occurs in fascia and muscle), and organ/space. SSIs have a major impact on morbidity and mortality, and result in substantially increased medical costs [1]. Several risk factors for SSIs are known [2], including patient-associated factors (e.g., nutritional impairment, immunocompromised state, old age, diabetes mellitus, smoking, obesity) and surgical operation-associated factors (e.g., length of the operation, improper sterilization and decontamination practices). Currently, identification and diagnosis of
Corresponding author: Arash Shaban-Nejad, Department of Epidemiology and Biostatistics, McGill University, 1140 Pine Avenue West, Montreal, Quebec, H3A 1A3Canada; Tel: +1 (514) 934-1934 ext. 32970; Fax:+1 (514) 843-1551; E-mail: [email protected].
SSIs relies mainly on direct observation of physical signs and symptoms of infection in an incisional wound, and a case cannot be confirmed solely by analyzing data given in laboratory reports. An accurate surveillance method often requires close collaboration between several healthcare professionals, including physicians, surgeons, microbiology lab technicians, nurses, epidemiologists, and infection prevention and control professionals (IPCPs). To facilitate knowledge-based decision making, the availability of a reference vocabulary is crucial. Despite several modifications and improvements to existing terminologies made by the Centers for Disease Control and Prevention (CDC) in the last decade, e.g., specifying the location of infections related to surgical operations and clarifying the criteria to identify the exact anatomic location of deep infections [3, 4], inconsistencies, discrepancies, and confusion in the application of the criteria in different medical/clinical practices still exist, and there is a need for further improvement and clarification of the current nomenclature [5, 6]. To develop a common understanding of the infection control domain and achieve data interoperability in the area of hospital-acquired infections, we present the HAI Ontology as part of the HAIKU (Hospital Acquired Infections – Knowledge in Use) project (Figure 1).
Figure 1. Overview of the Hospital-Acquired Infections: Knowledge in Use (HAIKU) project.
2. Method
HAIKU has five specific aims: i) development of the HAI Ontology, which will rely, where possible, on existing knowledge from published ontologies and controlled vocabularies; ii) mapping of local terminologies onto concepts in the HAI Ontology, by specifying how the ontology primitives can be used to express clusters of data using local terminologies in the form of RDF graph fragments; iii) development of ontology-powered, rule-based and statistical methods for case detection, which will be deployed as web services and operate on mapped HAI data; iv) evaluation of the accuracy of case detection methods using the results of patient chart review as a gold standard; and v) provision of a uniform semantic interface to the data via querying and browsing, to facilitate research in the form of discovery and hypothesis testing.
For the experimental part of our work, we are using data from the McGill University Health Centre (MUHC) and the Ottawa Hospital (TOH) that are already assembled in research data warehouses at each site. The data warehouses draw from multiple source information systems, including laboratory (microbiology and clinical chemistry), pharmacy, operating room, and patient demographics and movement. In addition, at both sites we have already identified, through exhaustive chart review, patients who have experienced SSIs. We use clinical data and chart review results for these patients to develop and validate detection methods. For surveillance of SSIs, demographic and operational data about selected patients undergoing one or more operative procedures during a specific observation time period are collected. The use of two clinical sites is critical to the proposed research so that we can evaluate the transferability of methods. Several databases (e.g., those containing information on hospital morbidity and discharge abstracts), existing bio-ontologies (e.g., SNOMED, MeSH, ICD9, HL7, FMA, ChEBI2, Infectious Disease Ontology (IDO)3), and textual resources have been used to design and implement the integrated HAI Ontology (Figure 2). We use OWL 2.04 as the formal representation language. To validate the ontology, we have used OWL reasoners such as RACER [7] and Pellet [8].
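As an illustration of this validation step, a minimal sketch using the owlready2 Python library is given below; the file name, the class identifiers and the choice of owlready2 are our assumptions and do not reproduce the authors' actual tooling.

```python
# Illustrative sketch (our assumptions, not the authors' tooling): load the HAI
# Ontology, classify it with the Pellet reasoner, and test whether a "Stitch
# abscess" may be classified as a surgical site infection.
import types
from owlready2 import get_ontology, default_world, sync_reasoner_pellet, Thing

# Hypothetical file name; released versions of the ontology are on the HAIKU wiki.
onto = get_ontology("file://HAI_Ontology.owl").load()

with onto:
    # Hypothetical class identifiers: a test class equivalent to the intersection
    # of StitchAbscess and SurgicalSiteInfection; per the CDC guideline this
    # intersection should be unsatisfiable.
    Test = types.new_class("StitchAbscessAsSSI", (Thing,))
    Test.equivalent_to = [onto.StitchAbscess & onto.SurgicalSiteInfection]
    sync_reasoner_pellet()  # classify the ontology with Pellet

# Unsatisfiable classes become equivalent to owl:Nothing after classification.
unsatisfiable = set(default_world.inconsistent_classes())
print("Stitch abscess allowed as an SSI:", Test not in unsatisfiable)
```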
Figure 2. Partial view of the major components of the HAI ontology for surveillance of SSIs (visualized using OntoGraph5).
2 Chemical Entities of Biological Interest (ChEBI): http://www.ebi.ac.uk/chebi/
3 http://infectiousdiseaseontology.org/page/Main_Page
4 http://www.w3.org/TR/owl2-overview/
5 http://protegewiki.stanford.edu/wiki/OntoGraf
3. Results
A partial view of the major components of the HAI ontology is presented in Figure 2. We evaluate the ontology using OWL reasoners to check for consistency, satisfiability, expected or unexpected inferred relationships, and subsumption. For example, we can define a query that checks whether the concept "Postoperative mediastinitis" is satisfiable over our defined axioms, or whether our conceptualization allows for the existence of "Stitch abscess" as a surgical site infection (based on the CDC guideline, it is not permitted by our ontology). To evaluate case detection methods, we rely on results from the MUHC and TOH's ongoing chart review process for HAI. We assess the ontology based on its ability to meet the initial design requirements, e.g. by defining different queries over the defined axioms. To do this, we have defined a set of potential application scenarios to address specific tasks, divided into three categories (Table 1).
Table 1. Application scenarios to perform specific tasks using the HAI ontology.
Category | Potential Use | Potential Users
(I) HAI case identification | Case enumeration; care/product evaluation; intervention (e.g., outbreak analysis) and outcome analysis | IPCP; public health; medical staff; health care workers (HCW); manufacturers; patients/lawyers/risk management; researchers
(II) Risk/causative factor identification/evaluation | To evaluate outcomes singly or across multiple HAIs; to look for modifiable risk factors; to look for interactions between risk factors; to evaluate the strength of association or attributable risk | IPCP; medical staff; HCW; manufacturers; researchers
(III) Diagnostic factor identification/evaluation | Calculation of diagnostic accuracy of factors for surveillance or clinical purposes; creation of models or algorithms for case detection; identification of new detection methods; evaluation of interactions among/between identification factors and risk factors | IPCP; medical staff; HCW; manufacturers; researchers
For each category we define a set of queries. For example, for Category I, the following queries can be answered:
• What are common patient-associated risk factors for both SSIs and Catheter Associated Urinary Tract Infections (CAUTIs)?
• What effect has installation of new alcohol hand gel dispensers had on the Serratia SSI incidence in Cardiac Surgery Intensive Care Unit (CSICU)?
• Are patients with SSI at risk of developing severe sepsis?
• How many SSIs have been associated with our new brand of implantable ventricular assist device (VAD)?
For Category II we can define queries such as:
• Potentially discontinuable medications associated with development of SSI
• Interaction between hypoxia (decreased oxygen supply) and hypoalbuminemia (reduced serum albumin concentration) in development of SSI
Moreover, the following queries can be asked for Category III:
• Which combination of laboratory and radiographic findings best identifies cases of prosthetic hip infections following hip replacement surgery?
• Given that C-reactive protein elevation is sensitive and specific in patients with HAIs, what other potential acute phase reactants may be used for diagnosis?
• What effect does the use of drugs with anti-inflammatory side effects have on the sensitivity of CT (computed tomography) findings of post-operative abscess formation?
4. Discussion and Conclusions
The HAI Ontology, as part of the HAIKU framework, is compatible with the definitions, recommendations, and specific criteria that are specified by the Centers for Disease Control and Prevention (CDC) [3] for identifying and preventing SSIs. Different releases of the ontology are freely accessible through the HAIKU wiki6 and can be used for HAI case identification, diagnostic identification/evaluation, and risk/causative factor identification/evaluation. One of the major challenges in our research is the integration process and dealing with several mismatches, at both the language level and the model level, between different knowledge sources. At this point the integration has mostly been performed semi-automatically, under human supervision and control. Our future work will focus on improving the integration process and the population of the ontology using the local terminologies, as well as utilizing the rule-based and statistical methods for case detection. Moreover, we will leverage the SADI [9] framework as a medium for ontology-based query answering over HAI-related data, especially in the form of relational databases, possibly in combination with analytical resources, such as dynamic computation of various index values and scores.
Acknowledgements. HAIKU is funded by the Canadian Institutes of Health Research (CIHR) and the Natural Sciences & Engineering Research Council of Canada (NSERC).
References
[1] Urban, J.A. Cost Analysis of Surgical Site Infections. Surg Infect 7(s1) (2006), s19-s22.
[2] Cheadle, W.G. Risk Factors for Surgical Site Infection. Surg Infect 7(s1) (2006), s7-s11.
[3] Mangram, A.J., Horan, T.C., Pearson, M.L., Silver, L.C., Jarvis, W.R. Guideline for prevention of surgical site infection, 1999. Hospital Infection Control Practices Advisory Committee. Infect Control Hosp Epidemiol. 20(4) (1999), 250-78; quiz 279-80.
[4] Siegel, J.D., Rhinehart, E., Jackson, M., Chiarello, L.; Health Care Infection Control Practices Advisory Committee. 2007 Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Health Care Settings. Am J Infect Control. 2007 Dec;35(10 Suppl 2):S65-164.
[5] Lee, J.T. Nomenclature nightmare. Surg Infect 4 (2003), 293–296.
[6] Lee, J.T. Precision, accuracy, clarity, meaning. Surg Infect (Larchmt) 8(1) (2007), 1-4.
[7] Haarslev, V., Möller, R. RACER System Description. In Proceedings of the First International Joint Conference on Automated Reasoning (IJCAR 2001), Italy, 2001, p. 701–706.
[8] Pellet: OWL 2 Reasoner for Java: http://clarkparsia.com/pellet
[9] Wilkinson, M., Vandervalk, B., McCarthy, L. SADI Semantic Web Services – 'cause you can't always GET what you want! In Proceedings of APSCC 2009: 13-18.
6 http://surveillance.mcgill.ca/wiki/HAIKU
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-150
Factors Known to Influence Acceptance of Clinical Decision Support Systems E. KILSDONK a, L. W. P. PEUTE a, S.L.KNIJNENBURG a, M. W. M. JASPERS a a Department of Medical Informatics, Academic Medical Center – University of Amsterdam, The Netherlands
Abstract. Clinical Decision Support Systems (CDSS) have been shown to improve clinical performance and patient outcomes, but the failure rate of such systems is still over 50 percent. To contribute to a wider understanding of issues surrounding CDSS acceptance, we performed a systematic review of studies that evaluated CDSS implementations in clinical care to determine the factors that are associated with acceptance of CDSS by physicians. The factors that were found were categorized according to the HOT-fit framework. The mapping of factors concerning CDSS acceptance onto the HOT-fit framework revealed gaps in each domain of the framework and showed that research has mainly focused on human and technology factors, with a lack of research on organizational factors. A potential area of research could thus be studying the organizational factors that may influence CDSS acceptance. Keywords. Clinical Decision Support System, Physician Acceptance
1. Introduction
Even though the evidence of Clinical Decision Support Systems (CDSSs) improving clinical performance and patient outcomes is convincing, the failure rate in introducing CDSSs in clinical practice is still over 50 percent [1]. Introducing a CDSS seems fraught with obstacles, among which are low ease of system use [2], negative end-user attitudes towards the system, and negative impact on clinical workflows [1]. But studies that evaluate CDSS implementation in clinical care continue to provide insight into these and other factors influencing acceptance of CDSSs. By systematically reviewing the status quo of what is known about factors contributing to CDSS acceptance, this study aims to contribute to a wider understanding of issues surrounding CDSS implementations in clinical care and, in doing so, illustrate the gaps in current research on CDSS acceptance.
2. Methods
2.1. Systematic Literature Research
A literature search was conducted to determine the factors that are associated with acceptance of CDSS. PubMed, Web of Science, The Cochrane Library and IEEE
Xplore were systematically searched. The combinations of search terms applied can be found at [3]. All abstracts resulting from these search queries were reviewed by the first author. The second and third authors each reviewed half of the set of abstracts. Studies were included if they assessed factors contributing to or impeding acceptance of CDSS or physicians' attitudes towards CDSS. A second screening of the resulting set of included papers was done on the basis of full-text review by the first, second and third authors. All papers that were finally included were textually analyzed for their description of factors influencing CDSS acceptance among physicians. Each of these factors was categorized according to the HOT-fit framework [4] by the first author.
2.2. The HOT-fit Evaluation Framework
Building on the knowledge base of evaluation studies of Health Information Systems (HIS), Yusof et al. proposed a framework to evaluate HIS while incorporating the concept of fit between Human, Organization and Technology (HOT-fit). In the HOT-fit framework these three domains are subdivided into eight interrelated dimensions: 1) System Quality, Information Quality and Service Quality fall under the Technology domain, 2) System Use and User Satisfaction under the Human domain, and 3) Structure and Environment under the Organization domain. The eighth dimension is Net Benefits. While human, organization and technology are the essential components of information systems, factors concerning the impact of HIS are categorized under the Net Benefits dimension. In the framework, the concept of fit is concerned with the alignment between and compatibility of the human, technology and organization. The studies included in this systematic literature review were analyzed for their description and evaluation of factors related to user acceptance of a CDSS. Factors found were subsequently mapped onto the HOT-fit framework. This mapping provided an overview of those domains and dimensions in which factors have been evaluated with regard to physicians' acceptance of CDSSs in clinical practice. Factors concerning the domains or dimensions which have not been studied provide insight into the gaps in the literature on CDSS acceptance and provide directions for further research.
3. Results
The literature search generated a total of 321 articles. After removing the duplicates and reviewing the abstracts, 70 articles were selected for full-text review. In the end 29 articles were found eligible for inclusion. The factors studied in these publications were mapped onto the Technology, Human, Organization and Net Benefits domains as proposed by the HOT-fit framework. A total of 240 factors were found, including 116 technological factors, 79 human factors, 37 organizational factors and eight pertaining to the dimension of Net Benefits. References to all included papers and the resulting mapping of the factors can be found in [3].
[Figure 1 shows the HOT-fit domains and dimensions with the number of articles found per dimension, and the fit/influence relations between them: Technology - 46 (System Quality - 20, Information Quality - 18, Service Quality - 9); Human - 25 (System Use - 22, User Satisfaction - 3); Organization - 17 (Structure - 14, Environment - 3); Net Benefits - 7.]
Figure 1. Number of articles on user acceptance of CDSS found per domain.
3.1. Overview of the Status Quo
Figure 1 presents an overview of the articles that report on factors concerning CDSS acceptance in relation to the specific HOT-fit domains. Factors that have often been studied are foremost found in the dimensions of System Quality, Information Quality and System Use. Forty-eight factors in 20 articles were found to be associated with System Quality. The most frequently reported factors mapped under System Quality having a positive impact on user acceptance are Ease of system use and System flexibility, reported on 9 and 10 times, respectively. To increase the ease of system use, screen design should provide clear directions to the user for how to navigate through the system [5]. The system should be flexible so that it lets the physician explore and keep his or her autonomy [6]. Fifty-one factors revealed in a total of 17 articles concerned the dimension of Information Quality. The factor most often noted to contribute to CDSS acceptance by physicians is the relevance of the data and messages delivered by the system: these should be suited to the particular clinical situation at hand [5]. Seventeen factors studied in a total of 9 articles concerned Service Quality. Easy access to computers and technical support are often named as facilitating factors for CDSS use. Most of the factors studied in the CDSS acceptance literature concern the dimension System Use, with 76 factors found in 22 articles. Overall, physicians' expectations and beliefs concerning a particular CDSS are the most often noted factors influencing its acceptance. Physicians are more willing to use a CDSS when they believe they are in control of the system and that using it is worth the effort. Physicians are less willing to use the system when they believe it will harm the physician-patient relationship or when it reduces their decision-making power. Physicians' computer skills are also important for their acceptance of a CDSS. Only three factors in three articles were mentioned that concerned User Satisfaction. All these publications reported that the perceived usefulness of the system influences acceptance of a CDSS. A total of 34 factors in 14 articles were categorized under the dimension Structure in the domain of Organization. It appears that the structure/organization of the clinical
process is an important factor in physician acceptance of CDSS and that clinician involvement in introducing CDSS has a positive impact on acceptance [1]. Three factors in three studies were found that concerned the dimension of Environment. Two studies noted that social pressure influences acceptance, and in one expert panel the need for adequate budgeting was noted [2]. There are eight factors mentioned in six articles concerning the dimension of Net Benefits. Most important here is that the physician sees a direct benefit [1].
3.2. Gaps Analysis
Table 1. Gaps analysis of factors not mentioned/assessed in the CDSS acceptance literature
Domain - dimension | HOT-fit factors not mentioned in CDSS acceptance literature
Technology - System Quality | Data currency, database content, security, resource utilization
Technology - Information Quality | Importance, legibility, conciseness
Technology - Service Quality | Assurance, empathy
Human - System Use | Amount/duration, use by whom, actual vs. reported use, nature of use, level of use, recurring use, percentage used, voluntariness of use
Human - User Satisfaction | Satisfaction with specific functions, overall satisfaction, enjoyment, software satisfaction, decision making satisfaction
Organization - Structure | Nature, autonomy, communication, champion, mediator, teamwork
Organization - Environment | Government, politics, localization, competition, population served, external communication
Net Benefits | Effectiveness, error reduction, communication, clinical outcomes
The literature review and consequent mapping of factors onto the HOT-fit framework showed that many factors suggested by HOT-fit to be of potential influence on the successful implementation and acceptance of HIS in clinical practice have not yet been studied in the CDSS acceptance literature, and these provide avenues for further research. Table 1 gives an overview of those factors which have not yet been researched in the CDSS acceptance literature. Also, many of the factors which are enumerated by the HOT-fit framework are only mentioned in the CDSS acceptance literature, without being assessed or their impact measured. Three such factors fall under the Organization-Structure domain: leadership, management and strategy have been suggested as potentially impacting CDSS acceptance. Most of these factors relate to recommendations or suggestions made in the articles, such as to include management and administrative staff in the planning of CDSS introduction to assist with arising workflow issues [6]. Accuracy of system data, which falls under the dimension of Information Quality, was mentioned twice in expert panels and once as an expert opinion as influencing CDSS acceptance. Provision of irrelevant or erroneous information might have a negative impact on user acceptance of CDSS [2]. In the domain of Environment, adequate budgeting was also mentioned once in an expert panel [2].
4. Discussion
This systematic literature review revealed that research conducted on the acceptance of CDSSs has mainly focused on factors concerning the Technology and
Human domains, reflected in the high numbers of factors found, 116 and 79, respectively. More specifically, a large proportion of the factors revealed concerned System Quality, Information Quality and System Use. While these factors can have an impact on the acceptance of CDSS, conclusive evidence on their effect on physician acceptance of CDSS is not yet available. For example, it was reported that perceived usefulness of a CDSS can have a positive impact on its acceptance, but in other studies this association was not found [7]. The mapping of factors concerning CDSS acceptance in clinical care onto the HOT-fit framework mainly showed a lack of research on factors concerning the Organization domain. Experts have suggested that factors like leadership, management and strategy might impact CDSS acceptance as well. There are other issues, like teamwork and communication, that have never been studied for their impact on CDSS acceptance but that might likewise be of influence. Other organizational issues not yet subjected to research concern, for example, type, size, hierarchy, politics, culture, and autonomy [2]. These organizational issues are a potential area for further research on CDSS acceptance. The results of this systematic literature review nevertheless revealed that the organizational structure of the clinical process has a great impact on physician acceptance of CDSS. How to fit the CDSS seamlessly into the particular clinical process at hand needs to be the first objective of a CDSS implementation and acceptance study. The HOT-fit framework has proven helpful in categorizing the factors revealed by the research literature on CDSS acceptance. The framework was useful for revealing gaps in the research on factors in each of its dimensions concerning CDSS acceptance, with organizational issues as the main underexposed domain of study.
References
[1] Trivedi, M.H., Kern, J.K., Marcee, A., Grannemann, B., Kleiber, B., Bettinger, T., et al. Development and implementation of computerized clinical guidelines: barriers and solutions, Methods Inf Med 41(5) (2002), 435-42.
[2] Varonen, H., Kortteisto, T., Kaila, M., EBMeDS Study Group. What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians, Fam Pract 25(3) (2008), 162-7.
[3] Kilsdonk, E., Peute, L.W.P., Knijnenburg, S.L., Jaspers, M.W.M. Technical Report 2011-01, Department of Medical Informatics, University of Amsterdam. Available at http://kik.amc.uva.nl/KIK/reports/TR2011-02.pdf
[4] Yusof, M.M., Kuljis, J., Papazafeiropoulou, A., Stergioulas, L.K. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit), Int J Med Inform 77(6) (2008), 386-98.
[5] Sheehan, B., Kaufman, D., Stetson, P., Currie, L.M. Cognitive analysis of decision support for antibiotic prescribing at the point of ordering in a neonatal intensive care unit, AMIA Annu Symp Proc 14 (2009), 584-8.
[6] Trivedi, M.H., Daly, E.J., Kern, J.K., Grannemann, B.D., Sunderajan, P., Claassen, C.A. Barriers to implementation of a computerized decision support system for depression: an observational report on lessons learned in "real world" clinical settings, BMC Med Inform Decis Mak 9 (2009), 6.
[7] Marcy, T.W., Skelly, J., Shiffman, R.N., Flynn, B.S. Facilitating adherence to the tobacco use treatment guideline with computer-mediated decision support systems: physician and clinic office manager perspectives, Prev Med 41(2) (2005), 479-87.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-155
Cross-Frontier Information Provision in the ALIAS European Project
Frédérique LAFOREST a,1, Atisha GARIN-MICHAUD a,c, Thierry DURAND b, Emmanuel EYRAUD b, Edouard BARTHUET c
a Université de Lyon, CNRS, INSA-Lyon, LIRIS UMR5205, F-69621, France
b GCS SISRA, Centre Léon Bérard, Lyon, France
c Sword, F-69771 St Didier au Mont D'Or
Abstract. The ALIAS project addresses the inadequacy of medical services and information provision in the Alpine space, where telemedicine services are not widely exploited and linguistic barriers represent an obstacle. The touristic vocation of the Alpine space makes its healthcare structures periodically inadequate to meet a widened demand for services, whereas a larger capacity of those structures during the rest of the year is unnecessary due to the low density of local residents. ALIAS aims at linking together a number of hospitals, creating the ALIAS Virtual Hospital Network for sharing medical information and adopting telemedicine services to improve the efficiency of hospitals in Alpine Space areas. This article focuses on the clinical information provision service provided in ALIAS and on the translation service associated with it.
Keywords. eHealth, hospital network, clinical information provision, information translation
1. Introduction
Mountain territories have specific characteristics that have an impact on the health strategies that should be set up to improve quality of care in such regions. Among them we can cite the scarcity of local residents and their particular ageing rate, the huge influx of tourists during short periods, and the difficulties of transporting people, which become more acute in winter. This is compounded by the prevalence of some specific conditions, such as chronic obstructive pulmonary disease and trauma, and also by the shortage of local health practitioners. Information and Communication Technologies and eHealth should bring answers to such characteristics. More and more hospitals, regions or countries have an electronic patient record management system (EPRS) for their citizens. Practitioners from a region then get access to their regional system. But when the patient requires care outside the region, the local practitioner cannot access the corresponding EPRS. The Alpine Hospitals Networking for Improved Access to Telemedicine Services (ALIAS) project [1, 2] started in August 2009 for 3 years. It is a pilot project involving the Alpine territories of six European countries in the experimentation of a new model of cooperation among hospitals, aimed at issuing services to residents as well as
Corresponding author.
citizens requesting healthcare assistance in the involved areas. Using eHealth technologies, the ALIAS project links together participating hospitals in order to facilitate the cooperation among healthcare professionals and to foster the exchange of knowledge. It improves the ability to diagnose and to deliver therapy in a cross-border context. After a brief presentation of the ALIAS project services and virtual hospital network, the article focuses on the information provision service and its associated translation service. We will show how we have built, in a very short period, a system that allows communication without imposing standardization of regional EPRS.
2. The ALIAS project
The ALIAS project originates from telemedicine and electronic health record initiatives that are already underway in the territories involved, and complies with the different data security, privacy and protection regulations of the countries participating in the initiative. From a technical point of view, ALIAS is a shared platform which allows hospitals to connect with the ALIAS central service (ACS) to access information and to share professional expertise and knowledge. The ALIAS project intends to deliver and pilot two telemedicine-related services:
• teleconsulting, through which citizens can access specialist medical counsel, enhancing the professional profile of the involved healthcare centres thanks to online collaboration with the wards of the best hospitals and centres of excellence in the network, working jointly on complex clinical cases, with the goal of improving the quality of treatments;
• clinical information provision, which aims at improving the accessibility and quality of hospital services and clinical practice. This is achieved through a better use, across territories, of already existing information. Accessing patients' clinical information from any hospital of the network has the main objective of improving citizens' wellbeing.
The ALIAS Virtual Hospital Network (VHN) is composed of hospitals and specialized diagnostic centers. Initially formed by eight hospitals, it will expand to include new nodes in the future. ALIAS services will be piloted within the VHN. ALIAS pilot sites are the Varese Hospital in Lombardy, the Tolmezzo Hospital in Friuli Venezia Giulia, the Garmisch-Partenkirchen Hospital in Oberbayern, the Grenoble Hospital in Rhône-Alpes, the Bolnisnica Golnik Hospital and the Splosna Bolnisnica Izola Hospital in Slovenia, the Landeskrankenhaus Villach Hospital in Carinthia and the Hôpitaux Universitaires de Genève in Région Lémanique.
3. Clinical Information Provision in ALIAS
3.1. Interconnecting Existing Electronic Health Record Management Systems
In the following, we focus on the clinical information provision service. A typical scenario is the case of a patient from region R looking for care in region S. For example, a citizen of Lombardy on holiday in a Rhône-Alpes ski resort gets a serious
heart problem that requires immediate care. The region S physician who takes charge of this patient needs clinical information on the patient, but it is stored in the EPRS of region R. Today such a provision of information is not possible, as the physician of region S is not known to the region R EPRS. The ALIAS clinical information provision service makes this scenario possible. We have designed and implemented the ACS, which allows building a circle of trust among the virtual hospital network partners [3]. The ACS defines principles to authenticate the physician and then grant him access to the entire circle of trust, to identify the patient, to check that the patient's consent has been obtained, to provide the patient's EPR to the physician and to propose a partial translation of opened documents into the physician's language. All these steps have been defined in accordance with the legal framework of all participating countries.
Participating hospitals all have their own EPRS. Each EPRS has been structured in a specific way, so that the content and organization of clinical information is very different from one EPRS to another. The common point of all these EPRS is the use of PDF documents. Most of them also use the IHE PDQ and XDS standards for document exchange. A few EPRS do not offer an automatic way to provide documents. Depending on the regional EPRS capabilities, the provision of clinical information is of two kinds. Some queries are treated automatically and answered instantaneously by the regional EPRS concerned. Queries to regional EPRS that cannot be treated automatically result in a kind of secured e-mail to a corresponding physician in the regional partnering hospital, who will select and/or build documents to send in reply. The set and structure of the documents returned is not standardized within the partnership, so that no intrusion is made into the local EPRS. This choice has the double advantage of being rapidly operational and of not interfering with local legal, social and political aspects of the EPR. The patient's consent is obviously required before any document provision. It is formalized by a written and signed document provided in the two languages of the patient and the physician. This consent form has been specifically written for ALIAS.
3.2. Multilingualism
Information exchanged with the clinical information provision service comes from the different partners' EPRS and is written in the language of the country. Images are quite international, but the practitioner who receives a full-text document may not know its language. Moreover, some drug brand names are local to a country, as authorizations for drugs are granted at the national level. To enhance information exchange, a computer-based service is required to help understand documents written in a foreign country. In the medical domain, precise translation of information is essential: mistakes in translation could endanger patients' health and are thus not acceptable. The full-text automatic translation tools one can find in the literature or on the Web do not ensure precise translation of full text [4, 5]. Everybody knows, for example, the Google Translate [6] online service, which can be used for approximate translation of short texts, but obviously not for medical documents. Privacy concerns are also a strong barrier to the use of existing online services. Moreover, medical documents are official traces of procedures and results; they cannot be modified without consequences, and the original document must remain the reference.
We have thus designed a translation service that has the following characteristics:
• A precise but incomplete translation of documents: only precisely identified terms are translated. As documents come from heterogeneous existing information systems, no hypothesis is made on the document structure. The format is most often PDF, but text and RTF documents are also managed.
• The original document is presented to the reader with translation annotations. No modifications are made to the document itself. Bullets are placed on the document at the positions where terms have been identified; they can be opened to obtain the translation of the term concerned.
• Diseases and drugs are the targets of translation. The most important information in the documents is the list of drugs taken by the patient and his/her known diseases. As standard classifications and databases exist for diseases and drugs, these terms can be identified precisely in full-text documents.
• Translation is optional: the practitioner clicks a dedicated button to activate it.
Translation of diseases is based on the ICD-9 and ICD-10 International Classifications of Diseases [7]. It is available in all the languages used by the ALIAS partners, i.e. French, German, Italian and Slovenian. "Translation" of drugs is more complicated; translation is not even the right term in this case: the objective is to provide equivalences between national drugs. National databanks provide drugs with their brand name and their active components as a list of codes standardized by the Anatomical Therapeutic Chemical (ATC) classification [8]. Level 5 of the ATC encodes chemical substances. In our service, drugs are considered equivalent if they have exactly the same list of ATC codes at level 5. Annotations provide the level 5 and level 4 (therapeutic class) labels for each molecule in the drug. Figure 1 shows a snapshot of an annotated Italian document. An annotation is opened and provides the French translation of a drug. This translation gives the ATC code and label at level 5, the ATC label at level 4, and the names of the equivalent drugs found in the French national databank.
Figure 1. Snapshot of the French annotation of an Italian document.
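The drug-equivalence rule described above can be illustrated with a short sketch. The following Python fragment only shows the matching logic (exact coincidence of the level-5 ATC code sets); the drug records and databank entries are hypothetical examples, not taken from the ALIAS partners' actual national databanks.

```python
# Hypothetical illustration of the ALIAS drug-equivalence rule:
# two national drug entries are considered equivalent if their sets
# of level-5 ATC codes (chemical substances) are exactly the same.

from dataclasses import dataclass, field

@dataclass
class DrugEntry:
    brand_name: str                                             # national brand name
    atc_level5: frozenset = field(default_factory=frozenset)    # e.g. {"B01AC06"}

def equivalent(a: DrugEntry, b: DrugEntry) -> bool:
    """Exact match of the level-5 ATC code sets."""
    return a.atc_level5 == b.atc_level5

def find_equivalents(drug: DrugEntry, national_databank: list) -> list:
    """Return the names of all drugs in another national databank equivalent to `drug`."""
    return [d.brand_name for d in national_databank if equivalent(drug, d)]

# Example entries: an Italian aspirin product and two French candidates.
italian_drug = DrugEntry("Cardioaspirin", frozenset({"B01AC06"}))
french_databank = [
    DrugEntry("Kardegic", frozenset({"B01AC06"})),
    DrugEntry("DuoPlavin", frozenset({"B01AC06", "B01AC04"})),  # combination product: not equivalent
]

print(find_equivalents(italian_drug, french_databank))  # ['Kardegic']
```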
We have developed a disease translation module and a drug equivalence module. They form a Web Service that is available within the ACS circle of trust. The translation of a document is made on request. The process for document translation is the following. The region of the document determines the ICD language and the adequate drug databank to use. Using the GATE [9] morpho-syntactic text analysis tool, diseases and drugs are identified in the document. Using our translation modules, annotations of diseases and drugs are created. The user interface of our service places bullets on the document; a click on a bullet makes the corresponding annotation appear. GATE is a general architecture for text engineering. It requires as input dictionaries in the form of OWL ontologies. The transformation of ICD, ATC and
national drug databanks into the required format has been done. This transformation results in very large ontologies, which require GATE's enhanced ontology management module called Gazetteer LKB.
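The overall annotation step can be pictured with the following much-simplified sketch. In ALIAS the identification of disease and drug mentions is performed by GATE with large ontology-backed gazetteers; the fragment below replaces that machinery with a plain dictionary lookup over hypothetical term lists, purely to show how identified spans can be turned into translation annotations attached to the original text without modifying it.

```python
# Simplified stand-in for the GATE-based term identification in ALIAS:
# identified disease terms are turned into annotations (offset, term,
# translation label) while the original document text stays untouched.

import re
from typing import NamedTuple

class Annotation(NamedTuple):
    start: int          # character offset in the original text
    end: int
    term: str           # term as found in the document
    translation: str    # label shown when the bullet is opened

# Hypothetical Italian -> French dictionary derived from ICD labels.
DICTIONARY = {
    "infarto miocardico": "infarctus du myocarde (ICD-10 I21)",
    "ipertensione": "hypertension (ICD-10 I10)",
}

def annotate(text: str, dictionary: dict) -> list:
    """Scan the text for dictionary terms and create an annotation for each hit."""
    annotations = []
    for term, translation in dictionary.items():
        for match in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            annotations.append(Annotation(match.start(), match.end(),
                                          match.group(0), translation))
    return sorted(annotations)

doc = "Paziente con infarto miocardico acuto e ipertensione nota."
for a in annotate(doc, DICTIONARY):
    print(f"[{a.start}-{a.end}] {a.term} -> {a.translation}")
```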
4. Conclusion
Today, three partners have connected their local systems to the ACS. In June 2011, the other five participating hospitals will also be connected. A piloting and assessment phase will then be launched for one year to evaluate the practical application of the VHN. In the ALIAS project, we are building a rapidly operational solution; the diversity of the platforms operating today is therefore taken as a given. Information provision is not standardized: each platform provides the information as it is created at the source. Nevertheless, the ALIAS platform ensures a uniform way to access information. The additional translation service improves the accessibility of the information contained in the exchanged documents. Improvements of the translation service are expected by the end of 2011. The objective is to enhance this service by identifying contextual information that would improve document understanding. The piloting phase of ALIAS will provide substantial feedback, both on the service and on its user interactivity.
Acknowledgments. This work is supported by the European Alpine Space Programme, under the project ref 4-2-2-IT. We thank all the partners included in this project: the General Directorate for Health of Lombardy in Italy, the Healthcare Regional Agency of Friuli Venezia Giulia in Italy, the Garmisch-Partenkirchen Hospital in Germany, the French Rhône-Alpes Healthcare Information System, the French Université de Lyon, LIRIS CNRS UMR5205, INSA-Lyon, the General Hospital Izola in Slovenia, the University Clinic of Pulmonary and Allergic Diseases of Golnik in Slovenia, the Regional Hospital Villach in Austria, the Geneva University Hospitals in Switzerland and the Republic and Canton of Geneva, Department of Economy and Health, in Switzerland. We also thank the public healthcare authorities working as observers: the Bavarian Health Ministry, the Carinthia Government, the Austrian Ministry of Research and the Rhône-Alpes Regional Council.
References
[1] ALIAS partnership, The ALIAS project web site, available at http://www.aliasproject.eu
[2] Laforest F, Sassi S, Scuturici V, et al. ALIAS: Alpine Hospital Networking for Improved Access to Telemedicine Services. In: MedInfo, Cape Town, South Africa, 2010.
[3] Eyraud E, Durand T, Barthuet E, Bochet R, Laforest F. Building a Circle of Trust for the Virtual Hospital Network of the ALIAS Project. Poster at Medical Informatics Europe 2011.
[4] Pazienza M, Pennacchiotti M, Zanzotto F. Terminology extraction: an analysis of linguistic and statistical approaches. Knowledge Mining, Studies in Fuzziness and Soft Computing, Springer Berlin/Heidelberg, 185 (2005), 255-279.
[5] Claveau V. Translation of biomedical terms by inferring rewriting rules. In: Prince V, Roche M, eds. Information Retrieval in Biomedicine: Natural Language Processing for Knowledge Integration. IGI Global, 2009.
[6] Google Translate, available at http://www.google.com/language_tools
[7] World Health Organization, The International Classification of Diseases, available at http://www.who.int/classifications/icd
[8] World Health Organization, The ATC structure and principles, available at http://www.whocc.no/atc/
[9] Cunningham H, Roberts I, Funk A. Developing Language Processing Components with GATE Version 5 (a User Guide), 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-160
Event-Driven Architecture for Health Event Detection from Multiple Sources
Kerstin DENECKE a,1, Göran KIRCHNER b, Peter DOLOG c, Pavel SMRZ d, Jens LINGE e, Gerhard BACKFRIED f, Johannes DREESMAN g
a L3S Research Center, Hannover, Germany
b Robert Koch Institut, Berlin, Germany
c Aalborg University, Aalborg, Denmark
d Brno University of Technology, Brno, Czech Republic
e Joint Research Centre, Ispra, Italy
f SAIL Labs Technology, Vienna, Austria
g Niedersächsisches Landesgesundheitsamt, Hannover, Germany
Abstract. Early detection of potential health threats is crucial for taking action in time. It is unclear in which information source an event is reported first, and information from various sources can be complementary. Thus, it is important to search for information in a very broad range of sources. Furthermore, real-time processing is necessary to deal with the huge amounts of incoming data in time. Event-driven architectures are designed to address such challenges. This will be shown in this paper by presenting the architecture of a public health surveillance system that follows this style. Starting from concrete user requirements and scenarios, we introduce the architecture with its components for content collection, data analysis and integration. The system will allow for the monitoring of events in real time as well as retrospectively.
Keywords. Epidemic Intelligence, Text Mining, Disease Surveillance, Event-driven architecture
1. Introduction
Various factors such as globalization, climate change, or behavioral changes contribute to the continuous emergence of public health hazards. A health hazard can be described as a sudden, unexpected event, incident or circumstance confronting public health officials with a situation threatening the health of people and society with substantial consequences, e.g., the outbreak of an infectious disease like swine flu or measles. Early detection of disease activity, followed by an appropriate assessment of its risk and a corresponding reaction, can help reduce and manage the risk produced by health hazards [1]. Surveillance systems aim at supporting health officials in obtaining information on potential health hazards as early as possible. A main requirement of such systems is the processing of incoming data in real time. Event-driven architectures are designed to support this kind of processing. This is an architectural style that orchestrates behavior around the production, detection and consumption of events [3]. An event in this context is some message, token, count or pattern that can be identified within an
Corresponding author
ongoing stream of monitored inputs, such as network traffic, specific error conditions or signals, thresholds crossed, counts accumulated, etc. In this paper, we provide an overview of the characteristics of event-driven architectures for disease surveillance and present an architecture that follows this style. After providing an overview of related work in Section 2, one contribution of this paper is the presentation of requirements for improved event-based surveillance systems. Then, the suggested event-driven system architecture for a disease surveillance system is described. The paper finishes with lessons learned from user feedback sessions and with conclusions on future work.
2. Related Work
Epidemic intelligence is the science of collecting, filtering, verifying and analyzing information related to potential health threats [1]. It includes traditional approaches, such as public health surveillance, where data from hospitals and laboratories are monitored. Further, it comprises systems that scan the Internet for relevant events (e.g. news wires, media sources or websites, Twitter) [9, 10]. For example, the Medical Information System (MedISys, [2]) is a fully automatic public health surveillance system to monitor reporting on human and animal infectious diseases and other public health threats. The system retrieves news articles from the Internet and classifies them according to pre-defined multilingual categories. It identifies entities like organizations, persons and locations. Using the Pattern-based Understanding and Learning System (PULS, http://puls.cs.helsinki.fi/medical, [2]), event information is extracted and clustered.
The main objective of event-driven architectures is to facilitate immediate information dissemination and reactive business process execution [11, 12]. In contrast to other architectural styles, they are characterized by actuality (events are monitored in real time), efficiency (huge amounts of data are processed), robustness (components can be added and replaced easily) and their flexibility and adaptability (new types of events can be integrated easily) [3]. So far, mainly business applications have taken advantage of the event-driven style. As summarized by Li [4], other application domains require real-time processing too and could thus benefit from the same paradigm. In this paper, the focus is on the application domain of disease surveillance. Systems following the event-driven architectural style have not yet been described for this domain.
3. Requirements and Scenarios
In a workshop and in discussions with representatives of health organizations like the World Health Organization or the European Centre for Disease Prevention and Control, we collected requirements for an improved system for disease surveillance. They concern four main issues:
1. Content collection: The system should monitor a broad range of sources, in a multitude of languages, broadcast or produced around the globe. Complementary results from existing systems need to be accessible through a single user interface. Besides monitoring diseases and their mentions, it is of interest to monitor symptoms and their mentions as well as behavioral changes.
2. Result filtering: Users don't want to be overwhelmed with information. Thus, results need to be carefully filtered according to various filter criteria such as relevance, novelty, source of information, etc.
3. Result presentation: Event information should be presented in a structured, appropriate and user-friendly way and allow accessing the original information sources for event verification processes. The user would like to interact with results, e.g. by narrowing the result set or by redefining his interest.
4. User feedback and interaction: Interactions of interest include a) specification of essential signal and event information (e.g., disease name, location), b) selecting result presentation formats and storing event information and c) providing feedback for future result adaptations.
A potential use case is the user notification scenario where a system regularly provides information on new upcoming health threats identified in various information sources. The input is therefore a specified user interest (called signal definition). The system outputs signals matching this definition or those that might be of potential interest (see Fig. 1 for some example output).
Figure 1. Screenshot of the result page: One signal has been generated and related information is given.
4. Architecture
Our architecture consists of various technical components that realize the individual processing steps. The components interact via web services. Collected (textual) content and processing results are stored in a database and transferred as RSS feeds. Figure 2 shows the information flow between the single components. For simplicity, the database accesses are not shown in the diagram. Knowing the user interest specified in the signal definition, the system continuously monitors the incoming text and data streams for relevant events. Once patterns of interest are identified, appropriate services for pattern analysis and interpretation are triggered and an alert is produced. The single components are described in the following.
The Content Collection Component continuously collects data from various sources including TV, radio, online news, blogs and Twitter. TV and radio data are collected via satellite and transcribed by the SAIL Media Mining System (http://www.sail-technology.com/products/commercial-products/media-mining-indexer.html).
Medical blog data includes blogs listed in MedWorm (http://www.medworm.com/) and manually selected blogs collected through the corresponding APIs (e.g., Twitter, MedWorm). Data collected by other surveillance systems can be integrated easily when the data is made accessible via RSS (e.g., MedISys data [2]). The Document Analysis Component filters and pre-processes the collected (textual) data before making it available to the event detection component. Pre-processing includes filtering of irrelevant data and recognition of mentions of disease names and symptoms, locations, time, etc. The latter is realized by OpenCalais (http://www.opencalais.com). The documents are analysed linguistically by Minipar (http://webdocs.cs.ualberta.ca/~lindek/minipar.htm). As a result, a set of documents annotated with named entities and linguistic structures is produced. These tagged documents are indexed and made available through MG4J (Managing Gigabytes for Java, http://mg4j.dsi.unimi.it/), which is a free full-text search engine for large document sets.
Figure 2. Information flow
The Event Detection and Signal Generation Component exploits the tagged documents to identify patterns of interest and to produce signals. It works in two modes: The unsupervised event detection (introduced in [6]) groups documents into clusters by a retrospective event detection algorithm and intends to identify signals that might be of potential user interest, not particularly matching the signal definition. These clusters are interpreted as signals and exploited by the recommendation component (see below). The supervised event detection considers the signal definition entered by the user. Given this information, it first retrieves data from the data repository that is relevant for the specified information need. The system then identifies segments (e.g., sentences, paragraphs) in the relevant documents by means of a supervised machine learning algorithm (see [8] for details). This information is then exploited by standard statistical algorithms for biosurveillance (e.g. CUSUM, Farrington [7]) to produce signals as alerts for health officials (signal generation). The Recommendation Component gets as input the document clusters or the calculated signals and either selects those that are of interest for the user according to his profile or ranks the signals appropriately. This component requires the user profile that consists of information on a specified signal definition as well as user feedback from previous searches and user interactions. The ranking of signals in the result
presentation is adapted to the user interest, and irrelevant signals can be filtered out. The produced signals are presented in the user interface.
The user interface allows a user to specify his interest in terms of a signal definition. It collects information on disease names, symptoms and locations to be considered by the surveillance system. Further, the generated signals and related information on indicators and the information source are presented to the user. Users are enabled to browse through the results. Various visualization methods are applied to present the results in an easily understandable way (e.g., as word clouds, in maps or graphs).
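As a concrete illustration of the signal generation step described above, the following sketch applies a textbook one-sided CUSUM detector to a daily count series, roughly the kind of indicator the supervised event detection feeds into the statistical layer. The baseline, reference value, threshold and count data are invented for the example; for actual surveillance algorithms the paper points to established implementations such as the R surveillance package [7].

```python
# Illustrative CUSUM detector over daily counts of documents matching a
# signal definition (e.g. mentions of "measles" in a given region).
# The parameters k (reference value) and h (decision threshold) are example
# values only, not those used by the described surveillance system.

def cusum_alerts(counts, baseline, k=0.5, h=4.0):
    """Return the day indices on which the upper CUSUM statistic exceeds h."""
    s, alerts = 0.0, []
    for day, count in enumerate(counts):
        s = max(0.0, s + (count - baseline) - k)  # accumulate excess over baseline
        if s > h:
            alerts.append(day)
            s = 0.0  # reset after raising a signal
    return alerts

# Hypothetical daily document counts; the baseline would be estimated from history.
daily_counts = [2, 1, 3, 2, 2, 6, 9, 12, 3, 2]
print(cusum_alerts(daily_counts, baseline=2.0))  # -> [6, 7]: signals on days 6 and 7
```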
5. Conclusion
In this paper, the architecture for an event-driven disease surveillance system has been introduced. The system allows monitoring a broad range of sources, including indicator data from traditional surveillance. The first versions of the single components are currently being integrated and will be tested by epidemiologists with real-world data in the near future. From the user feedback collected so far, we learned that it is necessary to carefully select the social media sources in order not to generate too many false alarms. In future work we will focus on testing and improving the algorithms.
Acknowledgements: This research is part of the M-Eco project, funded in part by the European Commission under 247829.
References
[1] Paquet C, Coulombier D, Kaier R, Ciotti M. Epidemic intelligence: a new framework for strengthening disease surveillance in Europe. Euro Surveill 2006;11(12).
[2] Steinberger R, Fuart F, van der Goot E, Best C, von Etter P, Yangarber R. Text Mining from the Web for Medical Intelligence. In: Perrotta D, Piskorski J, Soulié-Fogelman F, Steinberger R, eds. Mining Massive Data Sets for Security, IOS Press, The Netherlands, 2008.
[3] Bruns R, Dunkel J. Event-Driven Architecture: Softwarearchitektur für ereignisgesteuerte Geschäftsprozesse. Springer, Berlin; 1st edition, 29 May 2010.
[4] Li C-S. Real-time event driven architecture for activity monitoring and early warning. Emerging Information Technology Conference, 2005.
[5] Faensen D, Claus H, Benzler J, et al. SurvNet@RKI – a multistate electronic reporting system for communicable diseases. Euro Surveill 2006;11(4):100-3.
[6] Fisichella M, Stewart A, Denecke K, Nejdl W. Unsupervised Public Health Event Detection for Epidemic Intelligence. CIKM'10, October 25-29, 2010, Toronto, Ontario, Canada.
[7] Höhle M. surveillance: An R package for the surveillance of infectious diseases. Computational Statistics 2007;22(4):571-582.
[8] Stewart A, Denecke K. Using ProMED Mail and MedWorm Blogs for Cross-Domain Pattern Analysis in Epidemic Intelligence. In: Safran C, Reti S, Marin HF, eds. Studies in Health Technology and Informatics: MEDINFO 2010, IOS Press, Amsterdam, 2010, pp. 473-481.
[9] Corley CD, Cook DJ, Mikler AR, Singh KP. Using Web and Social Media for Influenza Surveillance. Book chapter in Advances in Computational Biology, Springer, 2010.
[10] Linge JP, Steinberger R, Weber TP, et al. Internet surveillance systems for early alerting of health threats. Editorial. Eurosurveillance, Volume 14, Issue 13, 02 April 2009.
[11] Michelson BM. Event-Driven Architecture Overview, Patricia Seybold Group, February 2, 2006.
[12] Chandy KM. Event-Driven Applications: Costs, Benefits and Design Approaches, California Institute of Technology, 2006.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-165
Towards an Interoperable Information Infrastructure Providing Decision Support for Genomic Medicine
Matthias SAMWALD a,b,1, Holger STENZHORN c, Michel DUMONTIER d, M. Scott MARSHALL e,f, Joanne LUCIANO g,h, and Klaus-Peter ADLASSNIG i
a Section for Medical Expert and Knowledge-Based Systems, Center for Medical Statistics, Informatics, and Intelligent Systems, Medical University of Vienna, Austria
b Institute of Software Technology and Interactive Systems, Technical University of Vienna, Austria
c Department of Pediatric Oncology and Hematology, Saarland University Hospital, Germany
d Department of Biology, Institute of Biochemistry, School of Computer Science, Carleton University, Canada
e Informatics Institute, University of Amsterdam, The Netherlands
f Department of Medical Statistics and Bioinformatics, Leiden University Medical Center, The Netherlands
g Rensselaer Polytechnic Institute, USA
h Predictive Medicine, Inc., USA
i Medexter Healthcare GmbH, Austria
Abstract. Genetic dispositions play a major role in individual disease risk and treatment response. Genomic medicine, in which medical decisions are refined by genetic information of particular patients, is becoming increasingly important. Here we describe our work and future visions around the creation of a distributed infrastructure for pharmacogenetic data and medical decision support, based on industry standards such as the Web Ontology Language (OWL) and the Arden Syntax. Keywords. genomic medicine, decision support, interoperability, ontology, Arden Syntax
1. Introduction
There is growing consensus in the medical and pharmaceutical community that further progress in the development of new therapies will necessitate a fundamental change in medical practice: away from broadly defined disease concepts and therapeutic regimes, and towards a fine-tuned evidence-based, personalized medicine. Genomic medicine is an important component of personalized medicine, and refers to a system in which medical decisions are refined by combining medical history with current physiological indicators against a genetic background for a particular patient [1]. Since genetics plays
Corresponding author.
a major role in determining the response to a broad range of therapeutic treatments, the appropriate use of this pharmacogenetic information for guiding treatment decisions has the potential to improve the efficacy of treatments and reduce the incidence of adverse drug events. While nearly one fourth of all outpatients in the US received one or more drugs for which pharmacogenetic knowledge is available [2], it is still not common that pharmacogenetic findings are used in medical practice. Doctors are usually not specifically trained in genomic medicine, the cost-benefit trade-off of genetic testing is often unclear, and there is not enough time to incorporate potentially complex pharmacogenetic reasoning in routine medical decision making. Therefore, the development of decision support systems capable of handling pharmacogenetic data is clearly essential to the realization of personalized medicine. These systems need to provide accurate and timely reminders and decision support tailored to each individual patient, drug and therapeutic regime. However, creators of decision support systems for genomic medicine face the challenge of working with highly heterogeneous information concerning the relationship between genetics and drug responses based on limited trials. They need to deal with distributed, incomplete and possibly contradictory information. Here, we describe our ongoing work and future visions of employing information technologies to address this problem, working towards 1) seamless integration of relevant pharmacogenetic data in a distributed setting, 2) the exploitation of clinically relevant pharmacogenetic knowledge in clinical decision support and 3) the design and dissemination of clinical decision support systems that improve the quality of health care delivery.
2. Methods
2.1. Data Sources
Several relevant data sources have already become available in an open, interlinked format, or will be made available soon. Together with other participants of the Health Care and Life Science Interest Group [3] of the World Wide Web Consortium (W3C, [4]), we worked on making several relevant datasets accessible in RDF/OWL [5] format. The extraction and conversion of additional relevant datasets such as the Pharmacogenomics Knowledge Base (PharmGKB [6]), DrugBank [7], Online Mendelian Inheritance in Man (OMIM [8]), dbSNP [9] or SNPedia [10] is currently ongoing. In addition to manually curated data, natural language processing has been successfully used to identify pharmacogenomic information, such as gene-drug-disease relationships [11] or descriptions of new molecular diagnostics [12].
Organisations dedicated to reviewing current evidence and publishing recommendations about pharmacogenetics have emerged. For example, the Clinical Pharmacogenetics Implementation Consortium (CPIC) was recently initiated in the context of the PharmGKB. The CPIC members create, curate, review, and update written summaries and recommendations for implementing specific pharmacogenetic practices. Levels of evidence and strength of recommendations are documented. Another example of such an organisation is the Evaluation of Genomic Applications in
Practice and Prevention initiative (EGAPP [13]). The text-based recommendations provided by such initiatives can be formalized as rules for clinical decision support.
2.2. Enabling Data Integration and Semantic Interoperability
Ontologies help improve interoperability and data consistency. Several ontologies relevant to pharmacogenetics have become available in recent years. The Translational Medicine Ontology (TMO, [14]) provides a foundation upon which chemical, genomic and proteomic data can be harmonized and linked to diseases, treatments and electronic health records. The Suggested Ontology for Pharmacogenomics (SO-PHARM, [15]) was the first to demonstrate how pharmacogenomic knowledge can be captured based on the Open Biomedical Ontologies (OBO) resources. The Sequence Ontology aims to describe the features and attributes of biological sequences [16]. It holds terms and relations of value for describing genetic variation, including single nucleotide polymorphisms (SNPs), at the sequence level.
Our work is guided by international standardisation efforts, and we also participate in standardisation activities. The most important standardisation organizations in this context are Health Level 7 (HL7 [17]) and the World Wide Web Consortium (W3C), which develops standards for large-scale, distributed data integration and access. A number of developments in the pharmaceutical domain should help to drive the practice of applying standards for interoperable information systems. The European FP7 Innovative Medicines Initiative (IMI) grants, with matching sponsorship from pharmaceutical companies, have created several projects which need interoperable information systems in order to share results and information across IMI projects covering domains such as drug discovery, electronic patient records, clinical trials, quantitative modeling, and tissue banking. Participants of the IMI projects include many academic and pharmaceutical partners, as well as participants in the EU Biobanking and Biomolecular Resources Research Infrastructure (BBMRI), which aims to improve access to biological resources required for health-related research and development.
2.3. Creating Decision Support Systems
Rule-based systems are useful for creating pharmacogenetic decision support systems [18]. We are exploring the use of standards-based rule frameworks such as the Arden Syntax [19] for this task. Arden Syntax is an HL7 standard that specifies various aspects of medical logic representation, including mechanisms for triggering rules based on certain conditions, retrieval of data from medical information systems and generation of conclusions from input data. Since current findings about the relationships between genetic variability, diseases and treatment responses are often vague and contradictory, the use of classical rule engines can be augmented by fuzzy and probabilistic reasoning and consistency checking. This is being addressed by recently created systems such as Fuzzy Arden Syntax [20] or the probabilistic OWL reasoner Pronto [21]. To ensure that these developments have a real impact on clinical practice, they will be complemented by extensive collaboration with clinical practitioners and international stakeholders. Key factors for successful deployment of decision support systems have been described in the literature [22]. Based on these findings, the systems we envision need to be directly connected to hospital information systems, seamlessly
integrated into existing workflows and able to handle information from electronic patient records and clinical laboratories.
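To make the intended style of decision support more tangible, the sketch below encodes one widely cited pharmacogenetic rule (reduced activation of clopidogrel in CYP2C19 poor metabolizers) as a simple rule function. It is written in plain Python rather than Arden Syntax, and the data fields, diplotype encoding and patient record are hypothetical; it is not a rule taken from the Vienna system, only an illustration of how a genotype, a prescription and an alert can be tied together in a medical-logic-module style.

```python
# Minimal sketch of a pharmacogenetic decision rule in the spirit of an
# Arden-style medical logic module: evoke on a new prescription, read the
# patient's genotype, and conclude with an alert if a known gene-drug
# interaction applies. Field names and the patient record are hypothetical.

POOR_METABOLIZER_DIPLOTYPES = {("*2", "*2"), ("*2", "*3"), ("*3", "*3")}

def clopidogrel_cyp2c19_rule(patient: dict, prescribed_drug: str):
    """Return an alert text, or None if the rule does not fire."""
    if prescribed_drug.lower() != "clopidogrel":
        return None                      # evoke only for the relevant drug
    diplotype = patient.get("cyp2c19_diplotype")
    if diplotype is None:
        return None                      # no genetic test result available
    if tuple(sorted(diplotype)) in POOR_METABOLIZER_DIPLOTYPES:
        return ("CYP2C19 poor metabolizer: reduced activation of clopidogrel "
                "is expected; consider an alternative antiplatelet therapy.")
    return None

patient = {"id": "example-001", "cyp2c19_diplotype": ("*2", "*2")}
print(clopidogrel_cyp2c19_rule(patient, "Clopidogrel"))
```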
3. Preliminary Results and Discussion
The Medical University of Vienna together with the Vienna General Hospital are currently finalizing the establishment of an informatics platform for integrating clinical data with genomic data, as well as providing clinical decision support based on the Arden Syntax (Fig. 1).
Figure 1. The Arden Syntax-based decision support infrastructure at the Vienna General Hospital. The Medical University of Vienna is working in close collaboration with the Vienna General Hospital to implement a new hospital information system that is used for patient care, clinical research and documentation. The clinical decision support system is part of the new hospital information system. Clinical and genomic data of patients can be transferred to the service-enabled clinical decision support server.
Another relevant development is the European integrated project p-medicine, which started recently. It focuses on the transformation from reactive to preventive medicine and on a novel systems approach to integrated diagnosis, treatment and prevention in individuals. Within the project, an open, standards-compliant and modular framework of tools and services is being developed to enable efficient, secure sharing and handling of personalized data and in-silico models. Important aspects are privacy, non-discrimination, and access policies to maximize patient protection and benefit. The tools are being validated within concrete, advanced clinical research settings: pilot cancer trials have been selected on the basis of clear research objectives to emphasize the need for multilevel data integration. One specific task in p-medicine is to provide capabilities to communicate directly with existing clinical trial and hospital information systems via push and synchronization services. These services are being implemented based on existing standards, such as HL7, SNOMED CT, the International Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10), the specifications of the Clinical Data Interchange Standards Consortium (CDISC) and Logical Observation Identifiers Names and Codes (LOINC), to overcome the inherent heterogeneity of those systems.
We expect the research programme outlined in this paper to have several implications for clinical practice, such as improving the translation of basic pharmacogenomic findings into clinical practice, increasing the deployment of automated clinical reminders based on patient characteristics and, ultimately, improving the quality of treatments.
References
[1] Shastry, B.S., "Genetic diversity and new therapeutic concepts", Journal of Human Genetics, vol. 50, 2005, pp. 321-328.
[2] Frueh, F.W., Amur, S., Mummaneni, P., Epstein, R.S., Aubert, R.E., DeLuca, T.M., Verbrugge, R.R., Burckart, G.J., and Lesko, L.J., "Pharmacogenomic Biomarker Information in Drug Labels Approved by the United States Food and Drug Administration: Prevalence of Related Drug Use", Pharmacotherapy, vol. 28, 2008, pp. 992-998.
[3] "Semantic Web Health Care and Life Sciences (HCLS) Interest Group". Available: http://www.w3.org/2001/sw/hcls/.
[4] "World Wide Web Consortium (W3C)". Available: http://www.w3.org/.
[5] "OWL Web Ontology Language Overview". Available: http://www.w3.org/TR/owl-features/.
[6] Hernandez-Boussard, T., Whirl-Carrillo, M., Hebert, J.M., Gong, L., Owen, R., Gong, M., et al., "The pharmacogenetics and pharmacogenomics knowledge base: accentuating the knowledge", Nucleic Acids Research, vol. 36, Jan. 2008, pp. D913-D918.
[7] Wishart, D.S., "DrugBank and its relevance to pharmacogenomics", Pharmacogenomics, vol. 9, Aug. 2008, pp. 1155-1162.
[8] "OMIM Home". Available: http://www.ncbi.nlm.nih.gov/omim.
[9] "dbSNP Home Page". Available: http://www.ncbi.nlm.nih.gov/projects/SNP/.
[10] "SNPedia". Available: http://www.SNPedia.com/.
[11] Coulet, A., Shah, N.H., Garten, Y., Musen, M., and Altman, R.B., "Using text to build semantic networks for pharmacogenomics", Journal of Biomedical Informatics, vol. 43, Dec. 2010, pp. 1009-1019.
[12] Gwinn, M., Grossniklaus, D.A., Yu, W., Melillo, S., Wulf, A., Flome, J., Dotson, W.D., and Khoury, M.J., "Horizon scanning for new genomic tests", Genetics in Medicine: Official Journal of the American College of Medical Genetics, Jan. 2011.
[13] Teutsch, S.M., Bradley, L.A., Palomaki, G.E., Haddow, J.E., Piper, M., Calonge, N., Dotson, W.D., Douglas, M.P., and Berg, A.O., "The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) initiative: methods of the EGAPP Working Group", vol. 11, Jan. 2009, pp. 3-14.
[14] Dumontier, M., Andersson, B., Batchelor, C., Denney, C., Domarew, C., Jentzsch, et al., "The Translational Medicine Ontology: Driving personalized medicine by bridging the gap from bedside to bench", Proceedings of the 13th Annual Bio-Ontologies Meeting, 2010.
[15] Coulet, A., Smaïl-Tabbone, M., Napoli, A., and Devignes, M.-D., "Suggested Ontology for Pharmacogenomics (SO-Pharm): Modular Construction and Preliminary Testing", 2006.
[16] Eilbeck, K., Lewis, S.E., Mungall, C.J., Yandell, M., Stein, L., Durbin, R., and Ashburner, M., "The Sequence Ontology: a tool for the unification of genome annotations", Genome Biology, vol. 6, 2005, p. R44.
[17] "Health Level Seven International - Homepage". Available: http://www.hl7.org/.
[18] Overby, C.L., Tarczy-Hornoch, P., Hoath, J.I., Kalet, I.J., and Veenstra, D.L., "Feasibility of incorporating genomic knowledge into electronic medical records for pharmacogenomic clinical decision support", vol. 11, pp. S10-S10.
[19] "Arden Syntax". Available: http://www.hl7.org/implement/standards/ardensyntax.cfm.
[20] Vetterlein, T., Mandl, H., and Adlassnig, K.-P., "Fuzzy Arden Syntax: A fuzzy programming language for medicine", Artificial Intelligence in Medicine, vol. 49, May 2010, pp. 1-10.
[21] "Pronto—A Probabilistic Reasoner for OWL DL and Pellet". Available: http://pellet.owldl.com/pronto.
[22] Kawamoto, K., "Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success", BMJ, vol. 330, 2005, p. 765.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-170
Identifying Patients for Clinical Trials Using Fuzzy Ternary Logic Expressions on HL7 Messages
Raphael W. MAJEED a, Rainer RÖHRIG a,1
a Department of Anesthesia and Intensive Care Medicine, Justus-Liebig University Giessen, Germany
Abstract. Identifying eligible patients is one of the most critical parts of any clinical trial. The process of recruiting patients for the third phase of a clinical trial is usually done manually, by informing relevant physicians or putting notes on bulletin boards. While most of the necessary information is already available in electronic hospital information systems, the required data still has to be looked up individually. Most university hospitals make use of a dedicated communication server to distribute information from independent information systems, e.g. laboratory information systems, electronic health records, surgery planning systems. Thus, a theoretical model is developed to formally describe inclusion and exclusion criteria for each clinical trial using a fuzzy ternary logic expression. These expressions will then be used to process HL7 messages from a communication server in order to identify eligible patients.
Keywords. Clinical trials, patient recruitment, HL7, communication server, fuzzy logic, ternary logic, data warehouse
1. Introduction
Conducting clinical trials comes hand in hand with immense effort and high costs. Delays or even the failure of a study lead to consequences of an ethical and economic nature. Most clinical trials require precisely defined collectives, described by eligibility criteria for inclusion and exclusion. Failures of clinical trials are usually due to falling below the necessary patient numbers [1]. On the one hand, study centers often underestimate the number of patients actually matching the required eligibility criteria, while on the other hand many patients fail to be enrolled because medical personnel are unaware of the trial. A government-funded research project at five universities therefore aims to investigate how the recruiting process for clinical trials can be electronically assisted by hospital information systems (HIS) and clinical information systems (CIS). As a matter of course, implementing patient recruitment functionality in a HIS strongly depends on the deployed software. The prevalence of the communication standard HL7 (version 2.x) and the nearly nationwide use of communication servers suggest the possibility of creating a generic solution to identify possibly eligible patients for clinical trials. Previous attempts perform database queries [2] or require users to interactively import or enter patient information. The aim of this paper is the development of a solution suitable
Corresponding author: [email protected]
to identify eligible patients for clinical trials by listening to communication server messages. The envisaged automatic inclusion of information from a clinical integration server presents a novel approach.
2. Methods
An automated recommendation of eligible patients for clinical trials requires first of all a formalization and electronic description of eligibility criteria. Since we aim at routine use of the recommendation system to be developed, realistic requirements play an important role in the development process. Therefore, our strategy consists of firstly deducing application-oriented requirements for a formal description of eligibility criteria. Subsequently, a computer-processible description language is to be developed which conforms to our requirements. Finally, a feasibility study will be conducted to evaluate the suitability of our approach for routine usage.
2.1. Requirements for Formally Describing Eligibility Criteria
A key requirement for a formal description language is its ability to describe the targeted scenario. Hence, the web site ClinicalTrials.gov is used to find all trials currently in phase III (i.e. recruiting patients) that are enrolled at all five German universities participating in the government-funded project. The formal description is required to represent most of the eligibility criteria found in these trials.
Not all eligibility criteria are satisfiable with equal precision. Discussions with local experts in medical informatics on the decidability of patient information yielded four distinct criteria groups: If a criterion is based on data from master patient records or laboratory results (a), it is completely decidable once the information arrives. If, however, a criterion is based on the existence of a diagnosis (b), it is implied that the patient was previously examined for the diagnosis and the diagnosis was confirmed; the same also holds for medical procedures and prescribed medication. More difficult are eligibility criteria based on the nonexistence of certain diagnoses, procedures or medication (c). It is forbidden to conclude from a nonexistent diagnosis that the patient does not have the diagnosis – he just might not have been examined for it yet. Finally, some eligibility criteria might completely resist automatic or electronic verification (d), like conditions concerning the patient's history, the future or intimate information. Since electronic verification of the described criteria groups (a)-(d) occurs with different precision, a formal description language for eligibility criteria is primarily required to satisfy criteria of groups (a), (b) and (c). The applicability of the defined groups is to be determined by having two experts assign these groups to all criteria independently, with a third expert to resolve conflicts.
2.2. Approach
The concept of interfacing with a communication server follows a primarily passive approach, because no additional action by the medical personnel or the patient is needed. In order to rely solely on HL7 messages for deciding patient eligibility, it has to be investigated whether all required information can be delivered unsolicitedly by the communication server and by what means missing information can be acquired from different sources.
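To give an impression of what listening to communication server messages involves, the fragment below parses a hand-written HL7 v2.x ORU observation message with plain string splitting and extracts a laboratory value against which a group (a) criterion (e.g. "Hemoglobin > 10g/dl") could be evaluated. The message content shown is invented for illustration; a production system would rely on a proper HL7 parsing library and the site-specific message profiles.

```python
# Illustrative extraction of a laboratory value from an HL7 v2.x ORU message
# received via the communication server. Splitting on the standard field
# separator '|' is enough for this sketch; real messages would be handled by
# an HL7 parsing library and local message profiles.

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|LAB|HOSPITAL|TRIALS|HOSPITAL|201104011230||ORU^R01|42|P|2.5",
    "PID|1||1234567^^^HOSPITAL||Doe^Jane||19800101|F",
    "OBX|1|NM|718-7^Hemoglobin^LN||9.4|g/dl|12-16|L|||F",
])

def extract_observations(message: str) -> dict:
    """Map observation names (text part of OBX-3) to (numeric value, unit)."""
    observations = {}
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] != "OBX":
            continue
        name = fields[3].split("^")[1]           # e.g. 'Hemoglobin'
        observations[name] = (float(fields[5]), fields[6])
    return observations

obs = extract_observations(SAMPLE_ORU)
value, unit = obs["Hemoglobin"]
print(value > 10.0)   # group (a) criterion "Hemoglobin > 10 g/dl" -> False
```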
2.3. Feasibility Study
To determine whether the developed formal description model is suitable for routine usage, an evaluation of the model for all trials described in Section 2.1 needs to be performed. Additionally, a simple implementation of a software prototype will serve as an indication of whether interfacing with a communication server is possible without modifications to the server and whether eligible patients can be identified using the description model.
3. Results
Our search for clinical trials on the web site ClinicalTrials.gov meeting the previously described criteria yielded 11 relevant trials. All of the previously described criteria groups (a)-(d) were present in these trials. Table 1 shows example criteria for groups (a)-(d). The results of the two experts assigning those groups to all criteria (Cohen's kappa 0.7) are shown in Table 2. Assignment conflicts were resolved by a third expert.

Table 1. Examples illustrating the four semantic groups used to categorize eligibility criteria.
(a) Completely decidable facts (master patient record, laboratory tests, numeric scores):
  - "Age 3 Months to 30 Years"
  - "Hemoglobin > 10g/dl"
  - "No overt renal disease" (creatinine < limit)
  - "Performance status ECOG ≥ 3"
(b) Partially decidable facts (positively formulated diagnoses, procedures or medication):
  - "Medulloblastoma, cerebral PNET or Ependymoma"
  - "More than 4 weeks since prior radiotherapy"
  - "treatment with peginterferon alfa-2A"
(c) Undecidable facts (negated diagnoses, medication, procedures):
  - "no experimental drugs"
  - "no chronic renal disease"
(d) No automatic data processing (information about past events/history, information about the future):
  - "Refractory or relapsed disease"
  - "subject unlikely to comply with protocol"
  - "no allergy or intolerance to study medication"
  - "no pre-existing illness preventing treatment"
  - "no refusal to use effective contraception"
  - "available for long term follow up through treating center"
Table 2. Results of applying criteria groups to clinical trial eligibility criteria by two experts (Cohen's kappa 0.7), with a third expert to resolve conflicts.

Trial         Total Criteria   Group (a)   Group (b)   Group (c)   Group (d)
NCT00749723   28               11          7           5           5
NCT00876031   13               1           5           4           3
NCT01011738   6                5           1           0           0
NCT00733343   29               2           9           15          3
NCT01077232   8                1           1           4           2
NCT00526318   10               2           4           3           1
NCT00554502   24               6           3           13          2
NCT01155193   8                2           3           0           3
NCT00290667   32               5           2           22          3
NCT01127750   10               1           1           8           0
NCT00410631   12               3           5           3           1
3.1. Eligibility Criteria as Propositional Calculus Expressions
All reviewed clinical trials from ClinicalTrials.gov describe eligibility criteria in a similar way. The patient is eligible for a clinical trial if and only if all items of a bulleted list of
inclusion criteria evaluate to true and none of the exclusion criteria evaluates to true. Some bulleted items contain alternative conditions of which one suffices to satisfy the criterion. Since all of the reviewed trial descriptions follow this scheme, a formal description language shall reflect this property. The presence of the logic terms and, or and not suggests a formalization in propositional calculus. In propositional calculus, Boolean formulas displaying the previously described properties are said to be in conjunctive normal form (CNF). Therefore, CNF formulas are chosen for formally describing eligibility criteria. The availability of advanced algorithms, like solvers for the satisfiability problem (SAT) [3], provides an additional advantage.
3.2. Fuzzy Ternary Logic
Evaluating previously constructed CNF formulas using clinical trial data within the scope of the planned feasibility study led to the conclusion that a classical Boolean interpretation of the CNF formulas is insufficient for the recruitment process. Since most information is missing at the beginning of the recruitment process and only the logic values true and false are allowed, logic expressions containing missing data simply evaluate to false, resulting in an exclusion/rejection of the patient. In general, the nonexistence of a value does not permit the conclusion that its logic value is false. Allowing a third logic value, unknown, leads to three-valued logic, also known as ternary logic. Ternary logic still allows the basic logic operations and, or, not, which are also used by CNF formulas. A ternary CNF formula may now produce the value unknown.
Evaluation of the CNF formulas using eligibility criteria also revealed a second problem: While inclusion and exclusion criteria are usually sharp in the sense of being either true or false, medical parameters vary over time and in precision. Since the final decision whether a patient is eligible or not is always made by a physician, it is desirable that patients slightly outside of the eligible range are also identified, to allow a physician to perform a more precise examination of critical values. If, for example, a patient's creatinine value exceeds the permissible range slightly, a physician might still examine the patient to determine whether his condition might change. This leads to fuzzy logic, which enables logic expressions to assume values between 0 (false) and 1 (true). The operations and, or and not are then defined as min(A,B), max(A,B) and 1-A, respectively. It is possible to combine both ternary logic and fuzzy logic to overcome the previously described restrictions.
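A minimal sketch of how such fuzzy ternary CNF formulas might be evaluated is given below. The value unknown is represented by None, the fuzzy operators follow the min/max/1-A definitions quoted above, and the example clause set and membership values are invented; the way fuzzy and ternary semantics are combined here (Kleene-style) is one possible choice, not necessarily the one used in the project's description language.

```python
# Sketch of a fuzzy ternary evaluator for eligibility criteria in CNF.
# A formula is a conjunction (list) of clauses; each clause is a disjunction
# (list) of literals; a literal is a criterion name, optionally negated.
# Criterion values are fuzzy degrees in [0, 1]; None encodes 'unknown'.

UNKNOWN = None

def f_not(x):
    return UNKNOWN if x is UNKNOWN else 1.0 - x

def f_or(values):
    known = [v for v in values if v is not UNKNOWN]
    if len(known) == len(values) or (known and max(known) == 1.0):
        return max(known)          # fully known, or already certainly true
    return UNKNOWN

def f_and(values):
    known = [v for v in values if v is not UNKNOWN]
    if len(known) == len(values) or (known and min(known) == 0.0):
        return min(known)          # fully known, or already certainly false
    return UNKNOWN

def evaluate(cnf, facts):
    """Evaluate a CNF formula against a dict of (possibly missing) criterion values."""
    clause_values = []
    for clause in cnf:
        literals = [f_not(facts.get(name)) if negated else facts.get(name)
                    for (name, negated) in clause]
        clause_values.append(f_or(literals))
    return f_and(clause_values)

# Hypothetical trial: (age in range) AND (medulloblastoma OR cerebral PNET)
#                     AND NOT (experimental drugs)
cnf = [[("age_in_range", False)],
       [("medulloblastoma", False), ("cerebral_pnet", False)],
       [("experimental_drugs", True)]]

facts = {"age_in_range": 1.0, "medulloblastoma": 0.8}   # drug status still unknown
print(evaluate(cnf, facts))   # -> None (unknown): patient worth a manual review
```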
4. Discussion
The chosen approach of describing the eligibility criteria of clinical trials with fuzzy ternary logic formulas suffices for the purpose of assisting patient recruitment. The results in Table 2 and the prototype implementation suggest that most eligibility criteria can be adequately described using fuzzy ternary logic CNF formulas. One might ask why logic formulas were chosen over more powerful approaches such as the Arden Syntax. The advantage of logic formulas lies in their low complexity, in terms of formal language and complexity theory, compared to full programming languages. While most problems are decidable or even solvable for logic formulas, the contrary applies to Turing-complete languages, for which, e.g., the halting problem is known to be unsolvable [4]. In contrast to previous formalizations (recently
reviewed by Weng et al. [5]), the developed model focuses on decidability of patient information and is especially suited for live recruitment using communication servers.

4.1. Storage and Caching of Medical Facts
The communication server is able to provide information about diagnoses, procedures, laboratory results and medication. Still, many eligibility criteria require information from the patient's clinical history. Using the routine clinical information systems to acquire this information is, on the one hand, not always practical for quality and performance reasons and, on the other hand, results in vendor-specific implementations. To overcome this limitation, a clinical data warehouse (e.g. the i2b2 project) might be used to store incoming HL7 messages from the communication server.

4.2. Runtime Restrictions for Deployment to Routine Usage
Since our goal is to use the presented technique in a 1200-bed hospital with more than 40,000 patient encounters per year, careful attention must be paid to processing and resource limitations. After full deployment, every diagnosis, procedure, prescribed or documented medication and laboratory result will be forwarded to our clinical trial patient identification software. The software is therefore required to process around twenty thousand HL7 messages per day, at peak times around 30 messages per second. According to the developed model, the recruitment process of one study for one patient terminates if and only if the generated expression evaluates to either true or false. Consequently, a recruitment process might be active concurrently for every patient and study, summing up to several thousand recruitment processes at any time. Thus, a concept is needed to swap out and store recruitment processes, also to enable the software to stop and resume at a later time. As a clinical data warehouse is already needed for storing and caching medical facts, it might also serve to store the state of recruitment processes. Additionally, the number of concurrently active recruitment processes might be reduced by declaring certain facts as trigger facts that explicitly start a recruitment process, while other facts are only stored for later evaluation. Since the study results indicated the general feasibility of the presented concept, the prototype is currently being extended to full functionality. Acknowledgements: Funded by the German Ministry for Education and Research (BMBF), Project 01 EZ 0941 X
References
[1] Campbell MK, Snowdon C, Francis D, et al. Recruitment to randomised trials: strategies for trial enrolment and participation study. The STEPS study. Health Technol Assess 2007;11(48).
[2] Thadani SR, Weng C, Bigger JT, Ennever JF, Wajngurt D. Electronic screening improves efficiency in clinical trial recruitment. Journal of the American Medical Informatics Association 2009;16(6):869-73.
[3] Schuler R. An algorithm for the satisfiability problem of formulas in conjunctive normal form. Journal of Algorithms 2005;54(1):40-4.
[4] Turing AM. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 1937;2(1):230.
[5] Weng C, Tu SW, Sim I, Richesson R. Formal representation of eligibility criteria: A literature review. Journal of Biomedical Informatics 2010;43(3):451-67.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-175
Towards a Metadata Registry for Evaluating Augmented Medical Interventions
Anne-Sophie SILVENT a,1, Alexandre MOREAU-GAUDRY a,b, Philippe CINQUIN a,b
a INSERM / CHU de Grenoble / UJF-Grenoble 1 / CIT803, Grenoble, F-38041, France
b UJF-Grenoble 1 / CNRS / TIMC-IMAG UMR 5525, Grenoble, F-38041, France
Abstract. Quality evaluation in the field of Augmented Surgery is strategic for public health policies. It implies being able to effectively evaluate quality in terms of Expected Medical Benefit (EMB). The notion of EMB is complex and not standardized in this field. To define and evaluate EMB, it is necessary to discover and structure the knowledge of the domain targeted by the device. This paper presents the first parts of this work. Focused on navigated knee surgery, it led to two main results: the formalization of a new kind of metadata and, thanks to it, the identification of a new criterion for evaluating EMB. These encouraging results seem to offer new perspectives for the evaluation of devices from the field of augmented surgery. Keywords. Augmented Medical Intervention, Computer Assisted Medical Intervention, Expected Medical Benefit, metadata registries.
1. Introduction
In this paper, we present a new approach for evaluating the quality of innovative medical devices in surgery. This approach is based on the discovery of new criteria for evaluating them and of ways to collect data relevant to the calculation of these criteria. A Computer Assisted Medical Intervention (CAMI) aims at assisting the clinician or the surgeon in performing his clinical or surgical act. For more than 25 years, medical devices from this field have been developed and are now used daily. Nevertheless, innovative medical devices from this field, as from other fields, may suffer from difficulties in quickly establishing what their real added value is. Nowadays, it is becoming essential to obtain proof of this added value. Indeed, such proof is more and more expected by the different actors who are involved, directly or not, in the medical procedure, i.e. not only the patient or the surgeon, but also the company developing the device or the relevant bodies that take part in public health policies (such as HAS in France). Most of the time, these evaluations are based on drug evaluation methods, but the clinical effect of a drug is not directly dependent on the skill of the prescriber [1]. Therefore, we focus on only one aspect in this paper: the modelling of surgical procedures, which will allow us to determine relevant concepts for evaluating the use of the medical device in practice. This is the part of EMB which relates to experience with
Corresponding author: [email protected]
users and is evaluated a posteriori [2]. What may be relevant for an evaluation during the surgery? Following [3], the EMB for an innovative medical device should take into account each step of the surgical intervention, instead of focusing only on the results of the global intervention. The way surgical processes are modelled affects the choice of data acquisition, which can in turn influence the modelling process. Different kinds of data and ways of modelling can be used: for example, video and/or live observations in [4], knowledge in [5]. In [4], the authors combine sensor-based data with an ontology in order to detect the use of surgical instruments; in [5], [6], a Unified Modelling Language (UML) diagram for multimodal neurosurgical procedures is presented. Different data sources and modelling tools have been used to try to overcome difficulties inherent in surgical data and knowledge: data may often be uncertain, imprecise, ambiguous and incomplete, and knowledge is also difficult to standardize, as it is commonly based on experience and often implicit. Our work uses original metadata about the surgery, which integrate a first level of knowledge modelling with data of high informative potential. These metadata are acquired in an objective way during the surgery.
2. Materials & Method
In this work, we define an Augmented Intervention in Surgery as a set of surgical steps that requires the modelling of the surgeon's expertise (mathematical, biomechanical, biochemical modelling, etc.) to guide him towards a precisely defined target, in order to maximize the benefit/risk ratio. We have also restricted the "Augmented Intervention in Surgery" knowledge domain to the particular case of Anterior Cruciate Ligament (ACL) navigated surgery. The ACL's function is to limit rotation and forward motion of the tibia with respect to the femur. A damaged ACL causes an unstable knee with higher laxity. The surgery therefore consists in replacing the damaged ligament by a graft in order to restore optimal functionality. The medical device used is an image-less acquisition system based on knee modelling (see [7]).

2.1. Materials
Navigation systems allow metadata about the surgery to be recorded. Metadata is structured information that describes, explains, locates, or makes it easier to retrieve, use, or manage an information resource [8]. The metadata we have now comes from log files used by developers to trace the course of surgery and to provide explanations for an abnormal procedure. Two levels of information are identified. Firstly, the temporal level: roughly fifty states are identified, each state being characterized by a succession of time-stamped elementary acts. Secondly, the parameters level: clinical parameter values come from the modelling. The first level is implicit; the surgeon has no feedback on temporal information, as it corresponds to the algorithm modelling. The second level is used by surgeons during the surgery: the clinical parameters help them (or not) to take a decision about the surgical strategy. All the metadata is recorded on CD-ROM during surgery. In this work, and following the regulations for biomedical research in France, 79 data log files have been analyzed. Each file corresponds to a navigated surgery. The navigated surgeries were performed by several surgeons in several hospitals [9].
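As a purely hypothetical illustration of how such log files can be turned into the two information levels described above, consider the following Python fragment; the log line format, state names and parameter syntax are invented here, since the real navigation-system logs are proprietary.

    # Hypothetical example: splitting developer log lines into time-stamped elementary
    # acts grouped by state (temporal level) and clinical parameter values (second level).
    import re
    from collections import defaultdict

    LINE = re.compile(r"^(?P<t>\d+)\s+(?P<state>\w+)\s+(?P<act>.+)$")

    def parse_log(lines):
        """Return ({state: [(timestamp, elementary act), ...]}, {parameter: value})."""
        temporal = defaultdict(list)
        parameters = {}
        for line in lines:
            m = LINE.match(line.strip())
            if not m:
                continue
            t, state, act = int(m["t"]), m["state"], m["act"]
            if act.startswith("param "):                   # e.g. "param laxity_pre=6.0"
                key, value = act[6:].split("=")
                parameters[key] = float(value)
            else:
                temporal[state].append((t, act))
        return temporal, parameters

    demo = ["120 Calibration probe_touched",
            "360 Acquisition point_recorded",
            "410 Acquisition param laxity_pre=6.0"]
    states, params = parse_log(demo)
    print(sorted(states), params)   # -> ['Acquisition', 'Calibration'] {'laxity_pre': 6.0}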
2.2. Method
At first, we had to understand, characterize and organize the set of available concepts for knowledge discovery. Such an organization is essential to have data readily available for analysis. It will take on its full meaning in the future formalization of the EMB knowledge domain (see discussion). Following this first step, a specific visual tool was developed to help understand the surgical process. It provides a synthetic view of the metadata and enables hypotheses to be produced. Moreover, an explicit visualization helps discussion with surgeons. This allows the expert to be involved in the modelling process in several ways: formalization of his knowledge, display of another point of view that can produce information, and assessment of the relevancy of the model. Finally, inferential analysis allows first relations between the newly discovered information and clinical parameters to be characterized.
3. Results
In order to represent the different states in a relevant way, we abstracted the fifty states into seven ordered "super states", each defined as a set of successive states: Installation (of patient and device), Calibration (of tools), Acquisition (of anatomical points), Modelling (of the anatomy of the patient), Pre Navigation Measures (clinical parameters), Navigation (therapeutic action with guidance) and Post Navigation Measures. These "super states" have been identified from the implicit representation of the surgical process implemented in the device (see Figure 1). The visualization enabled the notion of "return" to be highlighted: during the intervention, the surgeon can come back from the current super-state to a previous one. For instance, the surgeon frequently returns to the Calibration step to calibrate the tools again. Note that this visualization provides an explicit summary of the surgical procedure for the surgeon. Two types of information are shown: the approximate duration of the super-states and the chain of states followed in practice. For instance, in Figure 1, during the Acquisition super-state, the surgeon has to return to the Calibration super-state around time 3000. This leads to the definition of the notion of a linear surgical procedure: a procedure with no return between super-states. The example procedure is thus clearly non-linear (50 returns during one procedure). We still tolerate one return, which can result from an erroneous manipulation, for instance. With this definition, 53 surgeries among 79 are linear. Laxities are a relevant parameter to characterize clinical results: their reduction after navigation reflects a positive result. No significant difference is observed for the reduction of the laxities between linear and non-linear procedures (Wilcoxon test, p=0.3). A significant difference is shown for the laxities before navigation between linear and non-linear procedures (Wilcoxon test, p=0.0014). The laxity reduction thus seems to be the same whatever the type of procedure. Nevertheless, pre-navigation laxities are higher in linear procedures than in non-linear procedures. This new information might generate new knowledge: a surgery with small laxities will need many returns between the different super-states, i.e. such a surgery would need to use all the capabilities of the medical device in order to be very accurate.
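The linearity criterion itself is simple to operationalize; the following Python sketch (illustrative only, with assumed super-state names and an invented example sequence) counts returns and applies the one-return tolerance described above.

    # Illustrative sketch: classify a procedure as linear from its super-state sequence.
    SUPER_STATES = ["Installation", "Calibration", "Acquisition", "Modelling",
                    "Pre Navigation Measures", "Navigation", "Post Navigation Measures"]
    RANK = {name: i for i, name in enumerate(SUPER_STATES)}

    def count_returns(sequence):
        """A 'return' is any transition to a super-state earlier in the expected order."""
        return sum(1 for prev, curr in zip(sequence, sequence[1:])
                   if RANK[curr] < RANK[prev])

    def is_linear(sequence, tolerated_returns=1):
        return count_returns(sequence) <= tolerated_returns

    example = ["Installation", "Calibration", "Acquisition", "Calibration",  # one return
               "Acquisition", "Modelling", "Pre Navigation Measures",
               "Navigation", "Post Navigation Measures"]
    print(count_returns(example), is_linear(example))   # -> 1 True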
4. Discussion & Conclusion
Our final aim consists in evaluating the EMB of a medical device in the field of Augmented Surgery. This work is in progress. Nevertheless, to be able to perform such an evaluation, it is necessary to obtain relevant data, information and knowledge devoted to this evaluation. The adoption of a formalized metadata model goes in this direction, since it allows a set of descriptions of items, their relationships, their types and possible values to be defined. Though an extremely rich source of information, log files have not yet been considered in an objective evaluation of the Expected Medical Benefit. Our next challenge is to formalize these metadata in partnership with several companies in the field of CAMI, in order to pool data and achieve interoperability. At first, the log files for all navigation applications of our partners will be explicitly modeled according to the same pattern of super-states presented in this paper. Discussions with these manufacturers and with clinicians and researchers have been held regularly for a year. Many difficulties related to legislative, regulatory and industrial property aspects are met in this project, in addition to the conceptual, technical and practical ones. The project is very ambitious and it began modestly with close partners. A consortium agreement will be signed soon by all partners to ensure the confidentiality of the medical devices' software. Because the devices do not all have the same maturity (CE and FDA markings for some, no marking for others), the collection project is now beginning concretely with an industrial partner whose medical device (CE and FDA marked) is used in clinical routine at the European level. For now, this collection relies on the transmission of metadata on physical media (CD-ROM). The implementation of the conceptual model will be based on common specifications according to the international standard ISO/IEC 11179 [10] for metadata registries. They will be written in an XML Schema formalism (which may later allow them to be brought to OWL in order to have a knowledge model).
Figure 1: Discovery of a new criterion for evaluation
For us, the realization of such a database of metadata is the adequate means to obtain the relevant data needed to assess the actual Expected Medical Benefit of a medical
device from the field of augmented surgery. With a similar goal, work on metadata registries such as eCRFs is in progress [11]. Our added value lies in the particularly singular metadata available to us. In the longer term, we would like to integrate data and information from routine clinical assessments into the registry data, and in particular the outcomes, which should cover both near-term and longer-term health, addressing a period long enough to encompass the ultimate results of care [12]. Moreover, the new Directive 2007/47/EC, effective since March 2010, specifies that this evaluation must be actively updated with data obtained from post-marketing surveillance. This project will also allow us to establish relevant post-market monitoring. Acknowledgments: Data analyzed in this work come from the clinical trial "Evaluation médico-économique de la navigation chirurgicale dans le traitement des insuffisances du ligament croisé antérieur du genou". This clinical trial was funded by the French DGOS and carried out thanks to the contribution of the CIC of Grenoble (Pr JL Bosson, Dr S. David-Tchouda) and of the Grenoble Orthopaedic Surgery and Sports Traumatology Academic Clinic (Dr S. Plaweski).
References
[1] Konstam MA, Pina I, Lindenfeld J, Packer M. A device is not a drug. J Cardiac Failure 2003;9:155-7.
[2] Banihachemi JJ, Moreau-Gaudry A, Simonet M, Saragaglia D, Merloz P, Bosson JL, et al. Towards a Structure of the Knowledge Domain for Augmented Surgery with an Ontological Approach. Paper presented at: Journées Francophones sur les Ontologies; December 2008; Lyon, France.
[3] Stindel E. Analyse morpho-fonctionnelle de l'appareil locomoteur pour la chirurgie assistée par ordinateur [Habilitation à Diriger des Recherches]. Toulouse, France: University Paul Sabatier; 2007.
[4] Neumuth T, et al. Validation of knowledge acquisition for surgical process models. J Am Med Inform Assoc 2009;16(1):72-80.
[5] Jannin P, et al. Model of surgical procedures for multimodal image-guided neurosurgery. Comput Aided Surg 2003;8(2):98-106.
[6] Jannin P, Morandi X. Surgical models for computer-assisted neurosurgery. NeuroImage 2007;37:783-791.
[7] Plaweski S, Juillard R. Reconstruction du ligament croisé antérieur assistée par ordinateur : techniques et résultats. e-mémoires de l'Académie Nationale de Chirurgie 2008;7(3):78-87.
[8] Understanding Metadata. National Information Standards Organization Press; 2004. http://www.niso.org/publications/press/UnderstandingMetadata.pdf
[9] Plaweski S. Medico-economic evaluation of surgical navigation in the treatment of deficiencies of anterior cruciate ligament of the knee. Proceedings of the 10th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery (CAOS 2010); June 16-19, 2010; Paris, France.
[10] ISO/IEC 11179-3:2010 Information Technology - Metadata Registries (MDR) Part 3: Registry Metamodel and Basic Attributes. Final Committee Draft, 2010-03-30.
[11] Stausberg J, Löbe M, Verplancke P, Drepper J, Herre H, Löffler M. Foundations of a metadata repository for databases of registers and trials. In: Adlassnig KP, et al., editors. Proceedings of the XXII International Conference of the European Federation for Medical Informatics (MIE 2009); 2009; Sarajevo, Bosnia-Herzegovina.
[12] Porter ME. What Is Value in Health Care? New England Journal of Medicine 2010;363(26):2477-2481.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-180
A Comparison of Internal Versus External Risk-Adjustment for Monitoring Clinical Outcomes
Antonie KOETSIER a,1, Nicolette DE KEIZER a, Niels PEEK a
a Dept. of Medical Informatics, University of Amsterdam, Amsterdam, The Netherlands
Abstract. Internal and external prognostic models can be used to calculate severity of illness adjusted mortality risks. However, it is unclear what the consequences are of using an external model instead of an internal model when monitoring an institution's clinical performance. Theoretically, using an internal prognostic model is preferable, while external models are often more widely available. In this simulation study we explored the effect of using internal versus external models on the degree and types of warning signals given by RA-EWMA control charts in the detection of increasing mortality in the ICU. Increases in mortality were correctly detected in 60% of cases (after 24 months) with the internal model, regardless of prior ICU performance. When using the external risk adjustment model, such increases were only detected for the average and poor performing ICUs. When the mortality rate was held constant, using the external model resulted in many incorrect warning signals. We conclude that the use of internal risk-adjustment models is preferable for monitoring clinical performance. Keywords. Risk adjustment, Health Care Quality Assurance, Computer simulation
1. Introduction
When monitoring clinical performance with outcome data, the severity of illness of the patients fluctuates over time, and so will the corresponding outcomes. These fluctuations could falsely imply varying clinical performance. To correct for this, case-mix correction for the patient population is necessary. When clinical performance is monitored with mortality data, an internal or external prognostic model can be used to calculate severity of illness adjusted mortality risks. An internal model (i.e. internal risk-adjustment) is based on the historical performance of the centre where monitoring takes place and thus requires sufficient historical data of that specific institution to be available. In practice, external models (i.e. external risk-adjustment), which can be based on the mean historical performance of all (external) centres where monitoring occurred, are easier to create due to the large amount of data available when combining data from multiple centres. Risk adjusted (RA) control charts continuously monitor the rate of occurrence of an event (for example the mortality rate) over time; they incorporate the number of deaths and the corresponding mortality risks and generate a warning signal when there is
Corresponding Author: A Koetsier, Dept. of Medical Informatics, University of Amsterdam, PO Box 22700, 1100 DD Amsterdam, The Netherlands; E-mail: [email protected].
enough evidence for an increasing or decreasing shift in mortality. Theoretically, using an internal model with an RA control chart will generate more reliable warning signals than an external model. Several studies on monitoring institutions' clinical performance used RA control charts based on external models [1;2], while other studies used internal models [3;4]. This paper presents a simulation study to compare the use of internal and external risk-adjustment models in clinical outcomes monitoring. We explore the differences in the degree and type of warning signals given for shifts in mortality by the RA control chart when either type of model is used. Furthermore, we determined the ability of the RA control chart to detect true shifts in mortality (sensitivity) and to avoid signalling false ones (specificity). We assessed this by simulating fictitious well, average and poor performing Intensive Care Units (ICUs) with a simulated increasing or constant patient mortality rate, using a prognostic model based on either the centre's own historical data (internal model) or a multicentre average (external model).
2. Methods
We used data from 76 ICUs participating in the Dutch National Intensive Care Evaluation (NICE) registry in 2009, consisting of more than 72,000 records. Each record in the NICE registry [5] consists of severity of illness data on the first 24 hours of one ICU admission, quantified by, among others, the APACHE IV score and predicted mortality risk [6]. We constructed fictitious ICUs representing institutions performing well (adjustment factor of 0.50), average, and poorly (adjustment factor of 2.00). The corresponding fictitious admissions were generated by randomly drawing only series of predicted APACHE IV mortality risks, with replacement, from the NICE registry database. For the average performing ICU, these risks were equal to the APACHE IV mortality risks, while for the other two fictitious ICUs, the mortality risks were multiplied by the corresponding adjustment factor on the odds scale. Each fictitious ICU admission was supplemented with a binary outcome representing survival or non-survival of the patient in question. This was done, for each of the three fictitious ICUs, in two scenarios. In the first scenario the overall mortality rate was held unchanged; in the second scenario the overall mortality rate was increased to an "unexpected" higher level after the twelfth month in the simulated series, by multiplying the predicted mortality risk by a factor of 1.50 on the odds scale. So, for instance, when this scenario was applied to the poor performing ICU, all predicted mortality risks after the twelfth month were multiplied by a factor of 2.00 x 1.50. Survival outcomes were generated using a random number generator (Bernoulli experiment) with the adjusted mortality risk as input parameter. We simulated series of 60 months with an average of 50 fictitious admissions per month. This process was repeated 10,000 times. In total, 120,000 datasets were created. For risk adjustment, either external or internal prognostic models were used. The external prognostic model was the original APACHE IV model [6]. Internal prognostic models were obtained, for each of the three ICUs, by fitting a logistic regression model to the first 12 months of each simulated dataset, using the logit-transformed predicted mortality risk as the only covariate [7]. We used the RA Exponentially Weighted Moving Average (EWMA) control chart [8;9]. The RA EWMA control chart is a useful tool to monitor ICU mortality data and
is able to detect slowly changing mortality ratios [8]. It compares the weighted mean of the observed mortality rate (recent observations are given exponentially more weight) with the weighted mean of the expected mortality. The upper and lower control limits were set at approximately three (3.3) sigma from the weighted mean of the expected mortality (for a normal distribution, more than 99% of observations lie within this range). If the weighted mean of the observed mortality rate is above or below the control limits, a warning signal is given, indicating an upward or downward shift in the mortality rate. The originally drawn mortality risks and the simulated observed mortality were the input for the RA EWMA control chart. In both scenarios, we analyzed the percentage of the 10,000 runs in which the RA EWMA control chart issued a warning signal for an upward or downward shift in mortality after 12, 18, 24, 30, 36, 42, 48, 54 and 60 months (the first three months were excluded from the analysis because the generated warning signals were not yet reliable). We also recorded the percentage of runs in which no warning signal was given. Ideally, when the mortality rate is artificially increased, the corresponding warning signal should be given immediately; no warning signal should be given by the control chart if the mortality rate is held constant. Warning signals for decreases in mortality are always wrong in our simulation. The sensitivity of the RA EWMA control chart when using either model after 60 months was calculated by dividing the number of true warning signals given for an upward shift in mortality by the number of runs where the mortality rate was actually increased. The specificity was calculated by dividing the number of absent warning signals when mortality was not increased by the number of runs where the mortality was not increased.
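The following Python fragment is a minimal, illustrative sketch of the two core simulation steps and of an EWMA-style comparison of observed versus expected mortality. The smoothing weight, the control-limit formula and the example figures are assumptions made for demonstration; the published RA EWMA chart [8;9] uses risk-adjusted limits rather than the simplified ones shown here.

    # Illustrative sketch only: odds-scale risk adjustment, Bernoulli outcomes and a
    # simplified EWMA comparison of observed vs. expected mortality.
    import random
    import math

    def adjust_on_odds_scale(p, factor):
        """Multiply a probability's odds by `factor` and return the new probability."""
        odds = p / (1.0 - p)
        return (odds * factor) / (1.0 + odds * factor)

    def simulate_outcomes(base_risks, performance_factor, shift_factor=1.0):
        """Adjusted risks and Bernoulli death indicators for one fictitious ICU."""
        adjusted = [adjust_on_odds_scale(p, performance_factor * shift_factor)
                    for p in base_risks]
        deaths = [1 if random.random() < p else 0 for p in adjusted]
        return adjusted, deaths

    def ewma_signals(expected, observed, lam=0.01, sigmas=3.3):
        """EWMA of observed vs. expected mortality; flag points outside the limits."""
        z_obs = z_exp = expected[0]
        signals = []
        for p, y in zip(expected, observed):
            z_obs = lam * y + (1 - lam) * z_obs
            z_exp = lam * p + (1 - lam) * z_exp
            halfwidth = sigmas * math.sqrt(lam / (2 - lam) * p * (1 - p))
            if z_obs > z_exp + halfwidth:
                signals.append("up")
            elif z_obs < z_exp - halfwidth:
                signals.append("down")
            else:
                signals.append(None)
        return signals

    # Example: a poorly performing ICU (factor 2.00) whose mortality shifts by 1.50
    # on the odds scale halfway through the simulated series of admissions.
    random.seed(1)
    baseline = [random.uniform(0.02, 0.4) for _ in range(600)]        # drawn risks
    expected, _ = simulate_outcomes(baseline, 2.00)                    # expected level
    _, before = simulate_outcomes(baseline[:300], 2.00)                # constant period
    _, after = simulate_outcomes(baseline[300:], 2.00, 1.50)           # shifted period
    print(ewma_signals(expected, before + after)[-5:])                 # inspect the tail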
3. Results
Figure 1 shows the results of the scenario where the internal model was used and the mortality rate was artificially increased. For all three fictitious ICUs the RA EWMA control chart gave warning signals for an upward shift in mortality rate. After 24 months (including the 12 months where the mortality rate was constant), the percentage of warning signals for an upward shift in mortality rate was 55%, 62% and 68% for the well, average and poor performing ICU, respectively, whereas after 60 months it was 89%, 92% and 94%. Warning signals for a downward shift in mortality rate were given in 2.5-3.0% of runs, and in 9%, 5% and 3% of runs no signals were given.
Figure 1. Percentage of warning signals: mortality rate was artificially increased and use of internal model
Figure 2 shows the results of the scenario where the external model was used and the mortality rate was artificially increased. The RA EWMA control chart only gave warning signals for an upward shift in mortality rate for the average and poor performing ICU. After 24 months (including the 12 months where the mortality rate was constant) the percentages of warning signals for an upward shift were 68% and 100%, respectively. For the well performing ICU, 97% of the warning signals incorrectly indicated a downward shift in mortality (after 24 months) and no warning signals were given for an upward shift in mortality. Warning signals were absent in 0.8%, 0.2% and 0% of the cases, respectively.
Figure 2. Percentage of warning signals: mortality rate was artificially increased and use of external model
In the scenario with a constant mortality rate (results not shown) and use of the internal model, warning signals (upward and downward shifts combined) were given by the RA EWMA control chart in 33-34% of runs after 60 months for all three ICUs. With the use of the external model, very few warning signals were given after 60 months (11% in total) for the average performing ICU. For the well performing ICU, a signal for a downward shift was given 100% of the time after 24 months, whereas for the poor performing ICU a signal for an upward shift in mortality rate was given 100% of the time after 24 months. The sensitivity and specificity of the RA EWMA control chart using the internal model were 0.91 and 0.67, whereas for the external model they were 0.39 and 0.28.
4. Discussion
In this study we compared the use of internal and external risk-adjustment models when monitoring institutional clinical performance over time with mortality outcomes data. We simulated ICU data and compared the numbers and types of warning signals given by RA EWMA control charts when using the two different types of models. Increases in mortality were correctly detected on average in 60% of cases with the RA EWMA control chart using the internal model, regardless of prior ICU performance. When using the external risk adjustment model, such increases were only detected for the average and poor performing ICUs. When the mortality rate was held constant, using the external model resulted in many incorrect warning signals. For ICUs monitoring their clinical performance it is important to realize the impact of risk-adjustment and of the warning signals given by monitoring tools. Warning
signals falsely indicating an increase in adverse clinical outcomes may lead to unnecessary investigations of the care process. Conversely, warning signals falsely implying a decrease in adverse outcomes will give the illusion of good performance. Using internal prognostic models will give fewer incorrect signals, as is also reflected by the higher specificity. However, development of an internal model requires sufficient (historical) data. An external model should be used with caution: when a well performing ICU is monitored, warning signals for an increasing shift in mortality rate are rarely given by the RA EWMA control chart; instead, (incorrect) warning signals are given for a downward shift in mortality rate. The strength of our study is that we simulated the data and therefore know whether, and what type of, warning signal the RA EWMA control chart should give in each scenario. Additionally, we simulated large amounts of risk data from a large national database by using the APACHE IV mortality risks, resulting in data closely representing reality [7]. A limitation of our study is that we did not gradually increase the mortality rate but immediately increased it by a factor of 1.50, and we assumed the absence of population drift. A second limitation is that we simulated only one ICU size, with an average of 50 admissions per month. A final limitation is that we used only one type of RA control chart. However, we believe that the results would only be slightly different, and thus our conclusions would hold. We conclude that mortality data should be adjusted with an internal risk model when monitoring an ICU's own performance, whether the mortality rate is unexpectedly increasing or constant. This will result in the fewest incorrect warning signals that would prompt unnecessary investigation. This holds regardless of the ICU's initial performance.
References
[1] Axelrod DA, Kalbfleisch JD, Sun RJ, et al. Innovations in the assessment of transplant center performance: implications for quality improvement. Am J Transplant 2009 April;9(4 Pt 2):959-69.
[2] Baghurst PA, Norton L, Slater A. The application of risk-adjusted control charts using the Paediatric Index of Mortality 2 for monitoring paediatric intensive care performance in Australia and New Zealand. Intensive Care Med 2008 July;34(7):1281-8.
[3] Cockings JG, Cook DA, Iqbal RK. Process monitoring in intensive care with the use of cumulative expected minus observed mortality and risk-adjusted P charts. Crit Care 2006 February;10(1):R28.
[4] Novick RJ, Fox SA, Stitt LW, Forbes TL, Steiner S. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes. J Thorac Cardiovasc Surg 2006 August;132(2):386-91.
[5] Website: http://www.stichting-nice.nl
[6] Zimmerman JE, Kramer AA, McNair DS, Malila FM. Acute Physiology and Chronic Health Evaluation (APACHE) IV: hospital mortality assessment for today's critically ill patients. Crit Care Med 2006 May;34(5):1297-310.
[7] Brinkman S, Bakhshi-Raiez F, Abu-Hanna A, et al. External validation of Acute Physiology and Chronic Health Evaluation IV in Dutch intensive care units and comparison with Acute Physiology and Chronic Health Evaluation II and Simplified Acute Physiology Score II. J Crit Care 2010 September 23.
[8] Cook DA, Duke G, Hart GK, Pilcher D, Mullany D. Review of the application of risk-adjusted charts to analyse mortality outcomes in critical care. Crit Care Resusc 2008 September;10(3):239-51.
[9] Grigg O, Spiegelhalter D. A simple risk-adjusted exponentially weighted moving average. Journal of the American Statistical Association 2007;102:140-52.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-185
Interoperability Driven Integration of Biomedical Data Sources
Douglas TEODORO a,1, Rémy CHOQUET c, Daniel SCHOBER d, Giovanni MELS e, Emilie PASCHE a, Patrick RUCH b, Christian LOVIS a
a SIMED, University Hospitals of Geneva and b HEG, University of Applied Sciences, Geneva, Switzerland; c INSERM, Université Pierre et Marie Curie, Paris, France; d Freiburg University Medical Center, Germany; e AGFA Healthcare, Ghent, Belgium
Abstract. In this paper, we introduce a data integration methodology that promotes technical, syntactic and semantic interoperability for operational healthcare data sources. ETL processes provide access to different operational databases at the technical level. Furthermore, data instances have their syntax aligned according to biomedical terminologies using natural language processing. Finally, semantic web technologies are used to ensure common meaning and to provide ubiquitous access to the data. The system's performance and solvability assessments were carried out using clinical questions against seven healthcare institutions distributed across Europe. The architecture managed to provide interoperability within the limited heterogeneous grid of hospitals. Preliminary scalability test results are provided. Keywords. Data Integration, Interoperability, Semantic Integration, Ontology
1. Introduction
The last ten years have been marked by the greatest increase of biomedical information in human history. Electronic health records cover a growing part of these data, ranging from clinical findings to genetic structures. However, secondary data usage to improve healthcare quality and patient safety is very limited. Several integration systems have been proposed to handle issues related to the lack of technical standards and semantics among different data sources [1-3]. These systems provide methods to cope with data location and accessibility but do not necessarily manage data content and its semantics. Recently, with the advent of semantic web technologies, new data integration approaches using ontologies were proposed [4,5]. This paper introduces a three-layer ontology-driven data integration framework [5] that provides interoperability to heterogeneous storage systems. The methodology does not restrict data sources to an enforced common schema, and the integration is done on-demand. The system, called virtual Clinical Data Repository (vCDR), is being deployed and evaluated in a network of seven European hospitals in the DebugIT (Detecting and Eliminating Bacteria Using Information Technology) project [6]. The vCDR is used by decision support systems for data mining and monitoring tasks, especially at population
1 Douglas Teodoro, University Hospitals of Geneva - Division of Medical Information Sciences, Rue Gabrielle-Perret-Gentil 4, 1211 Geneva, Switzerland; E-mail: [email protected]
level. Nevertheless, its pseudo-anonymized data allows unique identifiers to be linked back to actual patient information by authorized actors.
2. Methods
The vCDR architecture provides a homogeneous real-time view of the data sources, featuring a common access mode, standard syntax and unified computer-interpretable semantics. In the healthcare field, for cross-border integration, the data warehouse approach [1] is not a viable solution: data providers are not allowed to store patient data outside of their intranet domain for ethical reasons. Furthermore, view integration [2] cannot be applied because operational databases (ODB) have to be protected from on-the-fly accesses to preserve system stability. To address these constraints, the vCDR is based on a hybrid ontology-driven integration approach [5], where multiple semantically flat data description ontologies (DDO) are mapped to a common, semantically defined DebugIT Core Ontology (DCO) and its extending operational ontologies (OO) [7]. As shown in Fig. 1, the system focuses on three levels of conceptual interoperability [8]: technical (network protocol, database), syntactic (terminology) and semantic (knowledge formalization).
Figure 1. Three levels of interoperability in the integration platform - Technical (left) illustrating ODB standardization via SPARQL protocol and RDF storage; syntactic (center) illustrating the unification of site dependent values with terminologies and DCO instances; and semantic (right) illustrating how a description logics rooted formal ontology allows for DDO content unification and verification.
2.1. Technical Interoperability
Clinical Information System (CIS) ODBs include different database management systems and access protocols. To provide a homogeneous access layer, an intermediate storage is introduced between the CIS and the query point (Fig. 1, left). The connection between the CIS and this local mirror, the so-called local CDR (lCDR), is fulfilled by periodic Extract-Transform-Load (ETL) processes, which retrieve the content from the CIS, perform model transformations and load the data into the lCDR. An lCDR comprises an RDF-like storage, usually backed by a relational database (RDB), featuring the SPARQL communication protocol [7]. Numerous relational-data-to-RDF middleware approaches have been proposed in the literature [9,10]. Despite not addressing the data integration problem, D2R [11] was chosen because it relies on the underlying RDB indexes to formulate the query plan, which gives better performance and scalability compared to approaches that use native triple stores.
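As a concrete illustration of this access layer, the following Python fragment queries one lCDR's SPARQL endpoint with the SPARQLWrapper library. This is a hedged sketch only: the endpoint URL, the ddo: prefix and the predicate names are invented placeholders, not identifiers from the DebugIT project.

    # Hypothetical client for a single lCDR SPARQL endpoint.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "http://lcdr.example-hospital.org/sparql"   # placeholder endpoint

    QUERY = """
    PREFIX ddo: <http://example.org/ddo#>
    SELECT ?test ?pathogen ?result
    WHERE {
        ?test a ddo:SusceptibilityTest ;
              ddo:pathogenLabel ?pathogen ;
              ddo:resultLabel   ?result .
    }
    LIMIT 10
    """

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["pathogen"]["value"], row["result"]["value"])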
2.2. Syntactic Interoperability
The content of the DebugIT data sources is expressed in several languages and usually as free text. Thus, spelling mistakes and abbreviations such as Staphyloccocus aureus and S. aureus are commonly found. In order to bring syntactic alignment to the lCDRs, their contents were transformed into a common syntax defined by biomedical terminologies (SNOMED CT, WHO-ATC, NEWT, etc.). These terminologies are mapped to DCO (terminology-to-DCO) using the SKOS ontology and Notation3 rules. Specialized text mining algorithms were developed to perform term normalization [12,13] depending on the instance type. For example, for pathogen instances, the algorithm first tries to match the NEWT terminology against species, then against genus only. For antibiotics, it first tries to match the complete drug name against the WHO-ATC terminology, then the truncated 5-letter name. Finally, instances with small enumerated lists as value ranges were mapped manually.

2.3. Semantic Interoperability
To bridge the gap between operational data and formal representations of concepts, the lCDR information model is formally defined using the OWL language [7] to create a site-specific DDO. Moreover, shared representations of the domain concepts are derived to cover the clinical domain (DCO) and additional domains (OO) such as units, maths, hypothesis generation, etc. Finally, links between the formal data source representations and the domain concepts are made through ontological mappings implemented via the SKOS ontology using the Notation3 format (DDO-to-DCO). The SPARQL query language allows graphs to be built ("construct" clause) with DCO concepts using DDO terms in the "where" clause. Thus, a Global-as-View (GaV) approach (global ontology as view on the local ontologies) can be applied in order to mediate data over the SPARQL endpoints of the lCDRs. For example, the query "What is the resistance to <antibiotic> of <pathogen> during <period> at <location>?" is translated as

    CONSTRUCT {
      ?antibiogram a dco:AntimicrobialSusceptibilityTest ;
          biotop:hasAgent ?antibiotic ;
          biotop:hasParticipant ?bacteria ;
          biotop:hasOutcome ?outcome ;
          dco:hasDate ?date .
    }
    WHERE {
      { DDO_SOURCE_1 }
      { DDO_SOURCE_2 }
      { DDO_SOURCE_N }
    }
with each DDO_SOURCE clause representing an lCDR query based on DDO terms. It is during the query translation process provided by the "construct" algorithm that DDO concepts are annotated with DCO classes and properties. Binding variables are further converted using the terminology-to-DCO mappings provided by the syntactic alignment layer. Once this is done, the results are fully represented in terms of a formal ontology and their semantics are hence readily exploitable by computers.
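As a purely illustrative sketch of this conversion step, the fragment below rewrites returned binding values through a small terminology-to-DCO lookup table; the identifiers, the normalization rule and the fallback behaviour are invented here, whereas in DebugIT the mappings are expressed in SKOS/Notation3 and normalization is done by the text-mining tools mentioned above [12,13].

    # Hypothetical terminology-to-DCO lookup applied to result bindings.
    TERM_TO_DCO = {
        "staphylococcus aureus": "newt:1280",      # placeholder identifiers
        "s. aureus":             "newt:1280",
        "ciprofloxacin":         "atc:J01MA02",
    }

    def to_dco(local_value):
        key = " ".join(local_value.lower().split())   # trim and collapse whitespace
        try:
            return TERM_TO_DCO[key]
        except KeyError:
            return None                               # unmapped values need curation

    bindings = [{"bacteria": "S. aureus", "antibiotic": "Ciprofloxacin"}]
    unified = [{k: to_dco(v) or v for k, v in row.items()} for row in bindings]
    print(unified)   # -> [{'bacteria': 'newt:1280', 'antibiotic': 'atc:J01MA02'}]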
3. Results
Seven healthcare institutions collaborated to evaluate the approach. They shared pseudo-anonymized historical episodes of care information, aggregated on unique identifiers of pathogens, thus avoiding patient-centric views. In order to assess the system integration capability, i.e. which sites are able to answer clinical queries, and
performance, i.e. how long it takes to retrieve a result set, in real-life use cases, the query "What is the evolution of resistance to <antibiotic> during <period> at <location>?" was exercised against the vCDR. Fig. 2 shows the result of the above query for Pseudomonas aeruginosa and ciprofloxacin in the last 48 months up to June 2009 in the different hospitals. The system was able to obtain results from five out of seven institutions. The aggregated "DebugIT antibiogram" trend is shown in blue. Two of the lCDRs were not able to answer the query due to its constraints (antibiotic, bacteria and period).
Figure 2: P. aeruginosa vs. ciprofloxacin resistance rate - Results shown here are not clinically relevant but rather useful to exercise the vCDR and were intentionally unlabelled to conform to hospital requirements.
To evaluate the performance of the SPARQL queries against the lCDRs, we executed the aforementioned query for Klebsiella pneumoniae matching any antibiotic in order to increase the result set. The results presented in Table 1 show that network time is responsible for 41% to 49% of the retrieval time for the sets containing more than 1000 tuples. Indeed, due to their early stage of development, most SPARQL engines lack aggregation functions such as group by and count, which increases the retrieval time.

Table 1: vCDR performance - The total time is the sum of the SPARQL engine time plus the network time. IZIP does not contain microbiology test results and TEILAM and GAMA have only a limited sample set.

Source    #Tuples Retrieved    SPARQL time (s)    Network time (s)    #Tuples/sec
HUG       74150                5.72               3.91                7704
INSERM    330360               20.38              14.22               9550
LIU       9905                 1.70               1.23                3371
UKLFR     155315               6.34               6.19                12394
Finally, we compared the performance of the HUG SPARQL query presented in Table 1 with an equivalent SQL query using direct access to HUG's RDB. The SQL query executed in 3.52 s in total, which reduced the query time by 63%.
4. Concluding Remarks
The proposed vCDR architecture provides a three-level integration framework. It is important to note that the approach deals with interoperability at each layer; currently, data integration cannot be fully achieved with only the third layer of the proposed methodology, particularly in the case of operational databases. The absence of a global data model facilitates the seamless integration of new sources and ensures scalability. New data sources are only required to have a SPARQL endpoint formally described by a DDO and normalized instances. The domain ontology is not affected by the introduction of a new source; instead, new terminology- and DDO-to-DCO mappings need to be created to represent each added source. The syntactic alignment
has proven to be a very complex process. Linguistic and data type variances make it very difficult to find a common syntax; hence the need for advanced natural language processing normalizers such as SNOCat [12]. The problem becomes even worse if intrinsic differences in defining "normal" values and thresholds are taken into account. For example, the measure for pathogen sensitivity to antibiotics is computed differently from country to country. The presence of a local expert is of utmost importance in these cases. So far, semantic integration has been used extensively without source model transparency. The final solution is a semantic mediator that allows users and query builders to select ontologically constrained idioms for query building; a proof-of-concept implementation is at an early stage. A previous version of a mediated vCDR was already described [14]; it reported that, besides the efficiency of the system in accomplishing the integration task, the constraint of a common unique schema proved very restrictive for the project's needs. In this paper, an ontology-driven integration framework has been described. The architecture provides interoperability at the technical, syntactic and semantic levels for heterogeneous clinical data sources. The system was assessed in a limited grid of seven EU healthcare centers. Despite an increase in response time compared to traditional methods, the vCDR was able to retrieve results for a pre-defined set of queries in a satisfactory time for the project. The next step is the finalization of the semantic mediator, contributing to increased end-user compliance. Moreover, we plan to extend the syntactic aligner into a flexible framework able to directly serve terminology servers and ontology lookup services such as those maintained by epSOS, ECDC or the EBI. Acknowledgements: This research is supported by the EU-IST-FP7 DebugIT project # 712139.
References
[1] Shah SP, et al. Atlas - a data warehouse for integrative bioinformatics. BMC Bioinformatics 2005;6:34.
[2] Davidson SB, Overton GC, Tannen V, Wong L. BioKleisli: A Digital Library for Biomedical Researchers. International Journal on Digital Libraries 1997;1:36-53.
[3] Stevens R, Baker P, Bechhofer S, et al. TAMBIS: Transparent Access to Multiple Bioinformatics Information Sources. Bioinformatics 2000;16(2):184-186.
[4] Shironoshita EP, Jean-Mary YR, Bradley RM, Kabuka MR. semCDI: a query formulation for semantic data integration in caBIG. J Am Med Inform Assoc 2008;15:559-568.
[5] Cruz I, Xiao H. Ontology driven data integration in heterogeneous networks. Complex Systems in Knowledge-based Environments: Theory, Models and Applications. 2009:75-98.
[6] Lovis C, Colaert D, Stroetmann VN. DebugIT for patient safety - improving the treatment with antibiotics through multimedia data mining of heterogeneous clinical data. Stud Health Technol Inform 2008;136:641.
[7] Schober D, Boeker M, Bullenkamp J, et al. The DebugIT Core Ontology: semantic integration of antibiotics resistance patterns. Proceedings of MEDINFO 2010; Cape Town; 2010.
[8] Tolk A. What Comes After the Semantic Web - PADS Implications for the Dynamic Web. Proceedings of the 20th Workshop on Principles of Advanced and Distributed Simulation (PADS'06); 2006.
[9] Broekstra J, Kampman A, Van Harmelen F. Sesame: A generic architecture for storing and querying RDF and RDF schema. Proceedings of The Semantic Web - ISWC 2002; 2002. p. 54-68.
[10] Erling O, Mikhailov I. Virtuoso: RDF Support in a Native RDBMS. Semantic Web Information Management; 2010. p. 501-519.
[11] Bizer C, Cyganiak R. D2RQ Lessons Learned. W3C Workshop on RDF Access to Relational Databases; 2007.
[12] Ruch P, Gobeill J, Tbahriti I, et al. Automatic Assignment of SNOMED Categories: Preliminary and Qualitative Evaluations. First Semantic-Mining Conference on SNOMED CT - SMCS; 2006.
[13] Daumke P, Enders F, Simon K, et al. Semantic Annotation of Clinical Text - the Averbis Annotation Editor. Proceedings of the GMDS 2010; Mannheim, Germany.
[14] Teodoro D, Choquet R, Pasche E, et al. Biomedical Data Management: a Proposal Framework. Stud Health Technol Inform 2009;150:175-9.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-190
Creating Knowledge Archive in the Internet Medical Consultant for Decision Support at the Point of Care
Draško NAKIĆ 1, Suzana LOŠKOVSKA
a Ss. Cyril and Methodius University, Faculty of Electrical Engineering and Information Technologies, Karpos 2 bb, Skopje, Macedonia
Abstract. The Internet Medical Consultant (IMC) is a knowledge-sharing system for physicians. The system's main purpose is to collect and store the communication between its users and to provide easy retrieval of the stored information. The system provides access to human-generated knowledge at the point of care. Having that kind of knowledge at hand can be very helpful for physicians when they make decisions. This paper describes the process of knowledge capturing, and of creating and searching the knowledge archive, for the final utilisation of that knowledge at the point of care. Keywords. medical consultation, medical text archiving, medical text retrieval, knowledge-sharing, knowledge-utilisation, point of care decision making.
1. Introduction
Consultation systems for physicians [12, 13, 14] generally share some common deficiencies: a large number of answers for the user, no option for attaching files, no insight into who answers the question, no knowledge archive, and no knowledge download or offline access to knowledge content for use at the point of care. The Internet Medical Consultant system [8], presented in Figure 1, aims to overcome these problems. The main process is the demand for consultation, and the basic mode of knowledge sharing is the concept of messaging. All consultation traffic is recorded, and the conveyed knowledge is archived in a way that provides easy retrieval. The part of the system that delivers the knowledge at the point of care, the Mobile Application Layer (MAL), comprises two modules [9]: a web-based Mobile Web Application (MWA) and the Mobile Device Application (MDA) that resides on the device itself. The MWA carries the main load of the end user's asynchronous communication, whereas the MDA is mainly intended for offline usage and acts as offline support for the MWA.
1 Corresponding Author: E-mail: [email protected], [email protected]
2. The Asynchronous Model of Communication
Our focus will be on the asynchronous type of communication. There are two types of asynchronous consultations: individual and group. Both types of consultations rely on the exchange of basic portions of communication (BPC), which consist of text complemented with links to web pages.
Figure 1. Logical and functional diagram of the IMC system (left), asynchronous consulting (top, right) and structure of the BPC (bottom, right)
The consultation question is the generator of new knowledge in the system. It comprises receiver information (a heading, a body, which itself has problem description and question fields, and files) and system information (specialty, subspecialty, emergency level, expiry date and time, receivers list and type) [8]. In the case of an individual consultation (Figure 2), the initiator forms a list of receivers to send the CQ to. The user can leave some slots in the receivers list free for the system to locate the most compatible experts. The initiator can enable forwarding of the CQ. The threads of communication in an individual consultation are not visible amongst the users who receive the same CQ. A thread of communication is a set of BPCs interchanged between the initiator of the CQ and a receiver. With a group consultation, the CQ is opened to all receivers, i.e. all the users who receive it discuss it in a shared place visible to everyone. A receiver can invite another user to participate in the discussion, very similarly to CQ forwarding in the individual consultation.
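One possible in-memory representation of such a consultation question is sketched below in Python; the field names, types and example values are assumptions for illustration, not the IMC implementation.

    # Hypothetical data structure for a consultation question (CQ).
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class ConsultationQuestion:
        # receiver information
        heading: str
        problem_description: str
        question: str
        files: List[str] = field(default_factory=list)           # attachment paths/URLs
        # system information
        specialty: Optional[str] = None
        subspecialty: Optional[str] = None
        emergency_level: int = 0
        expiry: Optional[datetime] = None
        receivers: List[Optional[str]] = field(default_factory=list)  # None = slot for the system
        consultation_type: str = "individual"                    # or "group"
        forwarding_allowed: bool = False

    cq = ConsultationQuestion(
        heading="Persistent fever after knee arthroplasty",
        problem_description="62-year-old patient, fever 10 days post-op ...",
        question="Which additional tests would you order?",
        specialty="Orthopaedics",
        emergency_level=2,
        receivers=["dr.a@example.org", None, None],               # two slots left to the system
    )
    print(cq.consultation_type, len(cq.receivers))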
3. Archiving the Knowledge from the Consultation Traffic
Consultations are the basic logical entities in the IMC system. As a result, a document that includes textual and non-textual content is generated for each consultation. The textual content can be: the text typed by the user, attached textual files, referenced web pages or textual web documents. In the current implementation of the IMC, only typed textual content is considered for building the archive. Figure 3 illustrates the process of forming a textual archive document (TAD) from an individual consultation. Forming a
TAD from a group consultation is completely analogous, with respect to the format of the consultations. There are two types of archives in the IMC system: the system archive, accessible to all users, where all consultations are stored; and the personal archive, a private user archive, where the user stores selected consultations that are frequently accessed. At the beginning, the archive is represented by a predefined two-level hierarchical structure where clusters are set according to the value domains of the specialty and subspecialty fields of the CQ. For TADs whose CQs lack values in those fields, classification is done by calculating the TAD-to-cluster similarity measure for each cluster and determining the closest one, based on the suggestions in (2) and Zhu et al. [11]. If a TAD is below the similarity threshold for all clusters, an additional cluster is formed for this TAD and it becomes its initial pivot. When a cluster becomes large enough, it is divided into sub-clusters, conforming to the hierarchical principle of clustering [2, 5].
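A minimal sketch of this threshold-based placement step is given below (illustrative only; the similarity function, the threshold value and the cluster representation are placeholders rather than the system's actual TAD-to-cluster measure).

    # Illustrative sketch: assign a new TAD to the closest cluster or open a new one.
    def cosine(a, b):
        """Cosine similarity of two sparse term-weight dicts."""
        common = set(a) & set(b)
        dot = sum(a[t] * b[t] for t in common)
        norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
        return dot / norm if norm else 0.0

    def place_tad(tad_vector, clusters, threshold=0.3):
        """clusters: list of dicts with a 'pivot' vector and a list of 'members'."""
        best, best_sim = None, 0.0
        for cluster in clusters:
            sim = cosine(tad_vector, cluster["pivot"])
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(tad_vector)
        else:                                   # below threshold everywhere: new cluster
            clusters.append({"pivot": tad_vector, "members": [tad_vector]})
        return clusters

    clusters = [{"pivot": {"knee": 0.9, "ligament": 0.7}, "members": []}]
    place_tad({"asthma": 0.8, "inhaler": 0.6}, clusters)
    print(len(clusters))    # -> 2 (a new cluster was opened)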
Figure 2. The BPC flow in individual consultation (left) and group consultation (right)
Figure 3. Transformation of an individual consultation into TAD. The heading field contains the heading of the CQ and the body of the CQ (problem description + question)
4. Information Retrieval Our search engine supports free-text querying: the user can type free text describing the problem (the patient’s condition) and then pose the actual question (Figure 4, left) in separate text fields. The retrieved documents are ranked by their similarity to the query and are available in full content. TADs whose heading and body, considered separately, contain at least half of the MeSH concepts from the problem field and the question field are retrieved as relevant. The query-to-TAD similarity function Sim, based on the phrase-based vector space model of Mao & Chu [7] and drawing on the concept similarity measure of Li et al. [6] and the indexing paradigm presented in [4], is then used for ranking the relevant TADs. This function is shown in equations (1) and (2); it is based on our algorithm, which incorporates NC-values, calculated according to [1], into the phrase-based vector space model: (1)
(2)
where CQ_h∪q extracts the heading and the question, and CQ_p the problem description, from the CQ that the TAD was created for. B extracts the body of the TAD. P and Q extract the problem description and the question of the query, respectively. sim_p is the cosine between the phrase vector representations of textual contents x and y, and ⟨x,y⟩_p is their dot product. The meaning of equation (2) can be investigated in detail by referring to paper [1] and the work of Mao & Chu [7].
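The two retrieval stages can be sketched as follows. This is a much-simplified illustration: the real Sim function of equations (1) and (2) uses phrase-based vectors weighted by NC-values [1,7], which are replaced here by a plain bag-of-words cosine, and the mapping of MeSH concepts to heading and body is one possible reading of the filtering rule described above.

from collections import Counter
from math import sqrt

# Simplified two-stage retrieval sketch: MeSH-based filtering, then
# similarity ranking with a plain cosine over term-frequency vectors.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(tads, problem_mesh, question_mesh, problem_text, question_text):
    query_vec = Counter((problem_text + " " + question_text).lower().split())
    relevant = []
    for tad in tads:
        # Filter (one possible reading of the rule in the text):
        # at least half of the problem MeSH concepts in the TAD body and
        # at least half of the question MeSH concepts in the TAD heading.
        body_hits = sum(1 for c in problem_mesh if c in tad["body_mesh"])
        head_hits = sum(1 for c in question_mesh if c in tad["heading_mesh"])
        if 2 * body_hits >= len(problem_mesh) and 2 * head_hits >= len(question_mesh):
            tad_vec = Counter((tad["heading"] + " " + tad["body"]).lower().split())
            relevant.append((cosine(query_vec, tad_vec), tad))
    # Rank the relevant TADs by their similarity to the query, highest first.
    return [t for _, t in sorted(relevant, key=lambda x: x[0], reverse=True)]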
Figure 4. Searching the knowledge archive
The user can select which clusters to search or perform a complete search over all clusters. Clusters are represented by the specialty or subspecialty they are built around, if available, and by the most important MeSH descriptors used in that cluster.
5. Using the Knowledge at Point of Care The main purpose of the IMC system is to provide assistance to physicians in making decisions through a cycle of knowledge generation and knowledge utilization shown in Figure 5. The mobile web application and the on-device application are responsible for finding the desired portion of knowledge and organizing it for better utilization at the point of care, using ideas from Bardram [3] and Siegemund & Floerkemeier [10]. When downloading to the personal archive or to a mobile device, the user can select which parts of an inbox consultation or a retrieved document he/she finds useful to reference. This includes all pieces of information that comprise the consultation: attached files, links to web pages and referenced web files. Moreover, the user can extract parts of the text from large textual content and store them for quick reference (Figure 5).
6. Conclusion Human knowledge is the most precious knowledge of all, especially when it comes to recognizing and interpreting complex problems and pointing to knowledge resources. By archiving the knowledge generated in the consultation process, using a complex consultation structure with textual and non-textual content, and
enabling users to search, store and organize it for use at the point of care, an improvement in physicians’ decision-making process can be achieved.
Figure 5. The cycle of knowledge generation and utilisation, and transforming the demand for knowledge into organised knowledge document ready for use at point of care
References
[1] Ananiadou S, Frantzi K, Mima H. The C-value/NC-value Method of Automatic Recognition for Multi-Word Terms. In: Nikolaou C, Stephanidis C, editors. ECDL 98. Proceedings of the Second European Conference on Research and Advanced Technology for Digital Libraries; 1998 Sep. London: Springer-Verlag; 1998. P. 585-604.
[2] Andreasen T, Bulskov H, Knappe R. Similarity for Conceptual Querying. In: Andreasen T, Motro A, Christiansen H, Larsen HL, editors. FQAS 02. Proceedings of the 5th International Conference on Flexible Query Answering; 2002 Oct. London: Springer-Verlag; 2002. P. 100-111.
[3] Bardram EJ. Activity-based computing for medical work in hospitals. TOCHI. Jun 2009; 16(2). doi:10.1145/1534903.1534907
[4] Hliaoutakis A, Zervanou K, Petrakis GME. Automatic Document Indexing in Large Medical Collections. HIKM 06. Proceedings of the international workshop on Healthcare information and knowledge management; 2006 Nov 5-11; Arlington, Virginia, USA. New York: ACM Press; 2006. P. 18.
[5] Holub M. A New Approach to Conceptual Document Indexing: Building a Hierarchical System of Concepts Based on Document Clusters. ISICT 03. Proceedings of the 1st international symposium on Information and communication technologies; 2003 Sep. Dublin: Trinity College; 2003. P. 310-315.
[6] Li Y, Bandar AZ, McLean D. An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources. IEEE Trans. on Knowledge and Data Engineering. 2003 Jul; 15(4): 871-882.
[7] Mao W, Chu WW. The phrase-based vector space model for automatic retrieval of free-text medical documents. Data & Knowledge Engineering. Apr 2007; 61(1): 76-92.
[8] Nakic D, Loskovska S. Internet Medical Consultant - A knowledge-sharing system. ITI 09. Proceedings of the Information Technology Interfaces 2009 31st Int. Conference; 2009 Jun 22-25; Cavtat, Croatia. Dubrovnik; 2009. P. 79-86.
[9] Nakic D, Loskovska S. Knowledge Sharing Mobile Application Layer for the Internet Medical Consultant. ITI 10. Proceedings of the Information Technology Interfaces 2010 32nd Int. Conference; 2010 Jun 21-24; Cavtat, Croatia. Dubrovnik; 2010. P. 243-248.
[10] Siegemund F, Floerkemeier C, Vogt H. The value of handhelds in smart environments. Personal and Ubiquitous Computing. 2005 Mar; 9(2): 69-80.
[11] Zhu S, Zeng J, Mamitsuka H. Enhancing MEDLINE document clustering by incorporating MeSH semantic similarity. Bioinformatics. 2009 Aug; 25(15): 194-195.
[12] Available from: www.mayoclinic.com
[13] Available from: www.doctorinternet.co.uk
[14] Available from: www.docsboard.com
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-195
Architecture of a Decision Support System to Improve Clinicians’ Interpretation of Abnormal Liver Function Tests
Raphaël CHEVRIER a,1, David JAQUES a, Christian LOVIS a
a Division of Medical Information Sciences, Geneva University Hospitals
Abstract. The objective of this work was to create a self-working computerized clinical decision support system (CDSS) able to analyze liver function tests (LFTs) in order to provide diagnostic suggestions and helpful care support to clinicians. We developed an expert system that processes exclusively para-clinical information to provide diagnostic propositions. Drugs are a major issue in dealing with abnormal LFTs, therefore we created a drug-disease causality assessment tool to include drugs in the differential diagnosis. Along with the results, the CDSS will guide clinicians in the care process offering them case-specific support in the form of guidelines, order sets and references to recent articles. The CDSS will be implemented in Geneva University Hospitals clinical information system (CIS) during year 2011. For the time being, preliminary tests have been conducted on case reports chosen randomly on Pubmed. Considered as medical challenges, case reports were nevertheless processed correctly by the program to the extent that 18 cases out of 20 were diagnosed accurately. Keywords. Clinical decision support system (CDSS), Expert system, Liver diseases, Liver function tests abnormalities, Laboratory results interpretation.
1. Introduction Clinical Decision Support Systems (CDSS) are an important challenge to address in the development of clinical information systems (CIS). CDSS were introduced more than 35 years ago, holding great promise for the future. Initial expectations have been tempered by their real impact on clinical practice and their influence on outcomes [1,2]. Substantial work has been done to understand the reasons for that mixed success. According to publications, the main problems are: 1) poor integration of the DSS into the clinical practice workflow [3,4]; 2) excessive user input requirements [5]; 3) the lengthening of care processes [6]; 4) the reliance of the CDSS on the existing CIS [7]; 5) human factors such as users’ reticence and frustration [8,9,10]; 6) the need for continuous user training and feedback [11]; and 7) considerable system maintenance requirements [12]. Most of the work concerning CDSS addresses computerized physician order entry (CPOE), guidelines implementation, and complex diagnosis or signal processing [13,14,15]. Little has been published specifically on DSS for laboratory results. As an example, the following search on Medline: "liver function
Corresponding Author: Raphaël Chevrier; E-mail : [email protected]. University Hospitals of Geneva, Division of Medical Information Sciences, Rue Gabrielle-Perret-Gentil 4, 1211 Switzerland.
tests"[MeSH Terms] AND "decision support systems, clinical"[MeSH Terms] returns only five papers, none of them handling exclusively laboratory results. The objective of this work was to create a computerized CDSS aiming to ease and improve doctors’ diagnostic process in case of liver function tests (LFTs) abnormalities. Keeping in mind obstacles encountered by predecessors, we designed the system to optimize fieldwork integration. Most importantly, we wanted the DSS to perform independently in order to meet the requirements of a self-working system. To reach this goal, it became mandatory to use only structured information that was 100% available in the CIS. Clinical information, not systematically available under a structured computable form was therefore excluded. Our DSS focuses on liver diseases. The prevalence of liver diseases is difficult to ascertain, since universal definitions are lacking and few population-based registers exist. However, from a laboratory focused perspective, researchers found substantial and consistent data. According to the ALFIE study [16], 21.7% of a normal asymptomatic population (Scotland) presented at least one abnormal liver function test (ALFT) during a median follow-up period of 3.7 years. In that group of patients with ALFT, 5.2% eventually developed a liver disease. LFTs are very popular tests within hospital practice as well as general practice. Their interpretation frequently addresses abnormal results that do not reflect underlying liver disease as a majority. Supporting interpretation is therefore required to aim best practice and cost-effective care. This conclusion is shared by ALFIE study’s authors as well as Steinke et al., who wrote in 2002: “electronic diagnostic algorithms are sensitive enough to identify liver disease using para-clinical data” [17].
2. Background The University Hospitals of Geneva (HUG) constitute the major public care-providing consortium and teaching hospitals in Switzerland, covering primary, secondary, tertiary and ambulatory care. HUG uses an in-house developed CIS that integrates commercial systems and covers all clinics and care. The system is Java-based and service-oriented, with a component-based architecture and message-oriented middleware. It has full paperless CPOE coverage; it supports workflows, clinical pathways and complex decision support. For the time being, however, DSS is limited to drug interactions and dosage. Once fully tested, validated and implemented, our CDSS will therefore represent an important step towards the introduction of intelligent and interactive tools in the HUG information system.
3. Chosen Approach Several computerized DSS techniques have been applied to liver disease diagnosis [16,18-22]. Elaborate options, such as case-based reasoning (CBR), artificial neural networks (ANN) and hybrid approaches, have had encouraging results but still face some limitations. We listed obstacles above, but it is important to note that acting on their opposites has proved to have a favorable effect on CDSS introduction and efficiency [1,7,15]. Practitioners’ confidence in a CDSS, or their understanding of its functioning, is a key point which influences users’ acceptance. For these reasons, and after discussions with domain experts (gastroenterologists), we chose to use an expert
system as the artificial intelligence (AI) technique to process laboratory values. The reason is that expert systems process data using rules that are intelligible to clinicians. We created these rules after a review of articles, textbooks, evidence-based guidelines and experts’ opinions. The whole system has been designed to run automatically and iteratively on the same patient without user interaction. It uses only laboratory values and order entry information, which are fully structured and 100% available in the CIS. The DSS consists of three parts: 1) the expert system algorithm; 2) the drug-induced liver injury (DILI) causality assessment tool; and 3) the interactive interface to guide, facilitate and improve clinicians’ work. While running, the first step of the program will provide a set of possible diagnoses if a substantial ALFT is detected. If not, the program stops and requires no interaction. Step two initiates an automatic review of the patient’s drugs from available CPOE information. Matching of substances with DILI and diseases is then performed to assess possible causality relationships between drugs and the suggested diseases. Step three consists of DSS reporting. Firstly, the interactive user interface will display the DSS results as a list of possible diseases. Secondly, a “drug-to-disease” causality relationship will be suggested to the doctor, if one exists. Finally, case-specific care support such as guidelines, order sets, graphs and links to a knowledge database will be provided.
Figure 1.
The expert system decisional strategy is based on rules. It analyzes every patient’s LFTs in the hospital to detect any abnormality, using eleven different enzymes or biological parameters to do so. Progression through the decisional algorithm is illustrated in Figures 1 and 2. The first question that the system must answer is whether or not cholestatic enzymes are elevated, in order to separate cholestatic (Figure 1) from non-cholestatic diseases (Figure 2). Then, depending on patient-specific values, rules will lead to one or more diagnoses, which will be suggested to the user as possible causes of abnormalities. For the time being, we have introduced 24 disease entities into the model, most of which are liver diseases. Values are not handled at each junction by arbitrary cutoffs, as the figures might suggest. Instead, transition functions will be used for better interpretation of borderline values.
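What such a transition function might look like is sketched below. The logistic shape, the reference limit and the steepness are illustrative assumptions; the paper does not specify the exact form of the functions used by the system.

from math import exp

# Illustrative transition function for a rule antecedent such as "ALT is
# elevated". Instead of a hard cutoff (elevated if value > limit), a smooth
# function returns a degree of elevation between 0 and 1, so that borderline
# values still contribute to the downstream rules. Limit and steepness are
# assumed values, not the system's parameters.

def elevated(value: float, upper_limit: float, steepness: float = 5.0) -> float:
    """Degree (0..1) to which a laboratory value counts as elevated."""
    ratio = value / upper_limit
    return 1.0 / (1.0 + exp(-steepness * (ratio - 1.0)))

# Example, with an assumed ALT upper reference limit of 40 U/l:
#   elevated(38, 40)  -> about 0.44  (borderline, partially counted)
#   elevated(80, 40)  -> about 0.99  (clearly elevated)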
Figure 2.
The diagnoses provided by the algorithm will be compared to patterns of DILI and ultimately to the drugs the patient is taking, in order to reveal possible causality relationships between drugs and ALFT. This comparison process is depicted in Figure 3. Over a hundred hepatotoxic drugs were grouped by the pattern of DILI they are known to cause (one drug may appear in several groups). Each pattern is linked to a group of disease entities that provoke the same enzymatic alteration. Any ALFT can thus be correlated with specific drugs, and the relationship suggestions given to clinicians will be more accurate.
Figure 3.
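The comparison of Figure 3 can be sketched as a simple matching step. All drug names, pattern labels and disease groupings below are illustrative examples only, not the system’s actual data or rules.

# Simplified sketch of the drug-disease causality assessment of Figure 3.
# Drugs are grouped by the DILI pattern they are known to cause, and each
# pattern is linked to disease entities producing the same enzymatic
# alteration. The content of the table is illustrative.

DILI_PATTERNS = {
    "cholestatic":    {"drugs": {"amoxicillin-clavulanate", "chlorpromazine"},
                       "diseases": {"cholestasis", "biliary obstruction"}},
    "hepatocellular": {"drugs": {"isoniazid", "paracetamol"},
                       "diseases": {"acute hepatitis", "toxic liver injury"}},
}

def suggest_drug_causes(suggested_diagnoses, patient_drugs):
    """Return (drug, pattern, diagnosis) triples linking the patient's drugs
    to the diagnoses proposed by the expert system."""
    links = []
    for pattern, entry in DILI_PATTERNS.items():
        matching_diagnoses = entry["diseases"] & set(suggested_diagnoses)
        matching_drugs = entry["drugs"] & set(patient_drugs)
        for drug in sorted(matching_drugs):
            for dx in sorted(matching_diagnoses):
                links.append((drug, pattern, dx))
    return links

# Example: a patient on amoxicillin-clavulanate with a suggested cholestasis
# yields [("amoxicillin-clavulanate", "cholestatic", "cholestasis")].
print(suggest_drug_causes({"cholestasis"}, {"amoxicillin-clavulanate", "metformin"}))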
4. Discussion The full CDSS is to be implemented and evaluated in 2011. Firstly, the decisional algorithm will be tested on a set of existing values from the hospital database. In this retrospective evaluation, we will be able to compare the expert system’s sensitivity and specificity, in terms of diagnostic accuracy, with the definitive diagnoses established by doctors during patients’ hospitalization. Secondly, a field evaluation will take place, covering user satisfaction, intrinsic system performance and influence on outcomes.
So far, preliminary results on the algorithm’s precision are promising. We tested it with patients’ values extracted from case reports randomly selected on PubMed. In 18 cases out of 20, the system was able to find the precise cause of the ALFT or to guide the clinician towards the right set of diagnoses.
References
[1] Garg et al. Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes. JAMA 293 (2005), 1223-1238.
[2] Hunt DL, et al. Effects of Computer-Based Clinical Decision Support Systems on Physician Performance and Patient Outcomes: A Systematic Review. JAMA 280 (1998).
[3] Ash JS, et al. Perceptions of house officers who use physician order entry. AMIA Symp (1999), 471-5.
[4] Graeber S. Application of clinical workstations: functionality and usability. Clin Perform Qual Health Care (1997), 71-5.
[5] Martin-Baranera M, et al. Assessing physician’s expectations and attitudes toward hospital information systems: the IMASIS experience. MD Comput (1999), 73-6.
[6] Varonen H, et al. What may help or hinder the implementation of computerized decision support systems (CDSSs): a focus group study with physicians. Family Practice 25 (2008), 162-167.
[7] Holbrook A, et al. What factors determine the success of clinical decision support systems? AMIA Annu Symp Proc. (2003), 862.
[8] Sittig DF, et al. Evaluating physician satisfaction regarding user interactions with an electronic medical record system. Proceedings/AMIA Annual Symposium (1999), 400-4.
[9] Payne TH. The transition to automated practitioner order entry in a teaching hospital: the VA Puget Sound experience. Proc AMIA Symp (1999), 589-93.
[10] Anderson JD. Increasing the acceptance of clinical information systems. MD Comput (1999), 62-5.
[11] Trivedi MH, et al. Development and Implementation of Computerized Clinical Guidelines: Barriers and Solutions. Methods Inf Med 5 (2002).
[12] Trivedi MH, et al. Barriers to implementation of a computerized decision support system for depression: an observational report on lessons learned in "real world" clinical settings. BMC Medical Informatics and Decision Making 9 (2009).
[13] Kaplan B. Evaluating informatics applications CDSS literature review. International Journal of Medical Informatics 64 (2001), 15-37.
[14] Pearson S-A, et al. Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990-2007). BMC Health Services Research (2009).
[15] Kawamoto K, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ (2005).
[16] Donnan PT, et al. Development of a decision support tool for primary care management of patients with abnormal liver function tests without clinically apparent liver disease: a record-linkage population cohort study and decision analysis (ALFIE). Health Technol Assess (2009).
[17] Steinke DT, Weston TL, Morris AD, MacDonald TM, Dillon JF. The epidemiology of liver disease in Tayside database: a population-based record-linkage study. J Biomed Inform. 35 (2002), 186-93.
[18] Kim YS, et al. Screening test data analysis for liver disease prediction model using growth curve. Biomedicine & Pharmacotherapy 57 (2003), 482-488.
[19] Comak E, et al. A new medical decision making system: least square support vector machine (LSSVM) with fuzzy weighting pre-processing. Expert Systems with Applications 32 (2007), 409-14.
[20] Lin RH. An intelligent model for liver disease diagnosis. Artificial Intelligence in Medicine (2009).
[21] Lin RH, Chuang C-L. A hybrid diagnosis model for determining the types of the liver disease. Computers in Biology and Medicine 40 (2010), 665-670.
[22] Nakano H, et al. Application of neural network to the interpretation of laboratory data for the diagnosis of two forms of chronic active hepatitis. International Hepatology Communications 5 (1996), 160-165.
*The complete bibliography (over 200 articles, including case-reports) is available on demand.
Education – Professional Development
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-203
Push and Pull Models to Manage Patient Consent and Licensing of Multimedia Resources in Digital Repositories for Case-Based Reasoning
Andrzej A. KONONOWICZ a,1, Nabil ZARY b, David DAVIES c, Jörn HEID d, Luke WOODHAM e, Inga HEGE f
a Jagiellonian University Medical College, Kraków, Poland
b Karolinska Institutet, Stockholm, Sweden
c Warwick University, United Kingdom
d Centre for Virtual Patients, University of Heidelberg, Heilbronn University, Germany
e E-learning Unit, St. George’s, University of London, United Kingdom
f Medical Education Unit, University of Munich (LMU), Germany
Abstract. Patient consents for distribution of multimedia constitute a significant element of medical case-based repositories in medicine. A technical challenge is posed by the right of patients to withdraw permission to disseminate their images or videos. A technical mechanism for spreading information about changes in multimedia usage licenses is sought. The authors gained their experience by developing and managing a large (>340 cases) repository of virtual patients within the European project eViP. The solution for dissemination of license status should reuse and extend existing metadata standards in medical education. Two methods: PUSH and PULL are described differing in the moment of update and the division of responsibilities between parties in the learning object exchange process. The authors recommend usage of the PUSH scenario because it is better adapted to legal requirements in many countries. It needs to be stressed that the solution is based on mutual trust of the exchange partners and therefore is most appropriate for use in educational alliances and consortia. It is hoped that the proposed models for exchanging consents and licensing information will become a crucial part of the technical frameworks for building case-based repositories. Keywords. case-based learning, repositories, virtual patients, patient consent, metadata
1. Introduction Patient-related images and videos are precious assets for eLearning resources in medicine. Virtual Patients (VPs), defined as “interactive computer simulations of real-life clinical scenarios” [1], are perfect examples of case-based reusable learning objects, often including a wide range of multimedia content recorded in a medical context. Their primary function is to facilitate and assess the development of clinical reasoning
Corresponding Author: Andrzej A. Kononowicz, Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Łazarza 16, Kraków, Poland; [email protected].
skills [2]. National and international bodies promote the setup of Internet repositories for exchange of VPs (e.g. MedEdPortal, eViP project [3]). The concept of reusing the educational content will work only under the condition that the resources are easily accessible for the community. On the other hand, no matter how appealing the prospect of building a common case-based repository of educational materials might be, the presented content is very sensitive [4]. It is therefore a moral (and legal) obligation of the authors and users of the resources to protect the interests of people (patients and their relatives, students, actors) depicted in the educational materials. The hard task of finding the right balance between the openness for exchange and the protection of sensitive materials from malpractices in clinical non-academic context has been recognised and preliminarily evaluated by projects such as CHERRI [5] or OER [6]. Both studies revealed many open questions and dilemmas of ethical, legal, medical and technical nature that need to be addressed very carefully in future projects. The issue of informed consents to use multimedia material in medicine is more complex than in other disciplines since the presented multimedia in the case of real patients involves personal and very sensitive information. It can be regarded as a special kind of licensing agreement between the VP authors and the patient, allowing the publishing of personal material like photographs in a defined context and under certain conditions. Undoubtedly, it is important to give patients (i.e. licensees) the possibility of withdrawing or changing their consent to publish the materials at any time. This step poses a great technical challenge in the case of Internet repositories. In this paper we promote two models to address the challenge of managing patient consents or licenses in this context.
2. Methods and Challenges In 2010, at the end of the EC co-funded project eViP, a freely accessible repository of 340 virtual patients (VPs) was established [7]. The VPs are available in the repository under the Creative Commons license "Attribution-NonCommercial-ShareAlike" (CC BY-NC-SA) [8]. These VPs, including their multimedia material, have been mainly repurposed from VPs contributed by project partners [9,10]. As part of the initiative, the regulations concerning copyright and data protection in each of the six European partner countries were collected [4]. This provided the basis for the development of the eViP license agreement and also informs the development of the withdrawal workflows, with additional metadata to meet the requirements. Given the issues under consideration, it was determined to be necessary for all VPs containing multimedia material to be able to trace back information and send out notifications about changes to the consent or licensing status. The causes of such status changes may include: (1) withdrawal or modification of a given consent for usage of multimedia material (e.g. by a patient or any other person displayed on an image or in a video); (2) a change of licensing conditions for the VP or the material within the VP; (3) improved/changed versions of single multimedia resources available in the VP packages. If it is unclear whether a consent or license allows a certain type of usage, contact details of the consent holder are required.
3. Results Based on the previous discussion, our proposal is to extend VP systems that are equipped with a function for exchanging content (e.g. CAMPUS, CASUS, OpenLabyrinth or Web-SP) to include an additional software module for consent and licensing management. In the sections below two possible workflows to track back multimedia material are presented. 3.1. The PUSH model When adding a multimedia resource containing features that require a consent or licensing agreement (e.g. in ophthalmology or dermatology), the author is requested to add additional metadata to it specifying an institutionally unique id of the consent or license to use it. The consent itself, depending on the local legal regulations, may be stored in a scanned form in the VP system, in the patient’s health record or in any other kind of documentation system, and should be available locally for reference on request. When exporting content including sensitive multimedia material the system should request the contact details of the recipient of the package who will take responsibility for it. Obtaining an exported package should be conditional upon accepting the agreement to respect the patient’s potential wish to withdraw the image as far as the technical aspects permit. The contact information could be a plain e-mail address or, in future, a web interface address for automatic notifications. The receivers of the VP packages are permitted to redistribute the material further provided that they take similar precautions for protecting the image in line with the original system. This would result in a network of VP systems containing sensitive data that could broadcast information about the request for withdrawal of a multimedia resource. The withdrawal request would be sent from the originating VP system (PUSH model). To assure the reception of the withdrawal message a confirm withdrawal message should be implemented. The above described protocol is summarized in Figure 1a presenting stub methods of an interface of the consent management module.
Figure 1. Consent status exchange protocols: a) PUSH and b) PULL model
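As a complement to the stub methods of Figure 1a, the following is an illustrative sketch of how a consent management module for the PUSH model might be organised. Class and method names, and the use of plain callables in place of e-mail or web-service notifications, are assumptions; they are not the interface shown in Figure 1a.

# Illustrative PUSH-model sketch: the originating system records who received
# material covered by a consent and broadcasts withdrawal requests to them.

class ConsentManager:
    def __init__(self):
        self.recipients = {}        # consent id -> list of notification callables

    def export_package(self, consent_id, recipient_notify):
        """Record a recipient of material covered by `consent_id`.

        Export is conditional on the recipient accepting the agreement to
        honour later withdrawal requests (checked before calling this)."""
        self.recipients.setdefault(consent_id, []).append(recipient_notify)

    def withdraw(self, consent_id):
        """Broadcast a withdrawal request to every known recipient (PUSH)."""
        pending = list(self.recipients.get(consent_id, []))
        confirmed = [notify for notify in pending if notify(consent_id) is True]
        # Recipients that do not confirm the withdrawal message would be
        # blocked from access to new sensitive material (see Discussion).
        return confirmed

# Example: a partner system that always confirms withdrawal.
manager = ConsentManager()
manager.export_package("consent-42", lambda cid: True)
assert manager.withdraw("consent-42") != []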
3.2. The PULL model An alternative approach to the PUSH model is to have a central trusted repository of sensitive multimedia resources with aggregated information about consents. This system would maintain a list of consent revocations (similar to the X.509 Certificate Revocation List, CRL [11]) updated by patients. Following this model, every author wishing to repurpose a VP with sensitive multimedia resources should check (i.e. PULL) the availability of the resource for repurposing against the revocation list. The check on the legality of existing images in VP systems using multimedia from this repository could be made on a regular, periodic basis (e.g. daily or weekly). The protocol described above is summarized in Figure 1b.
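A corresponding sketch of the PULL check is given below. The format of the revocation list, the fetch mechanism and the polling interval are assumptions; in practice the list would be retrieved from the central repository over the network.

# Illustrative PULL-model sketch: a VP system periodically fetches the consent
# revocation list and disables local copies of revoked resources.

def periodic_revocation_check(fetch_revocation_list, local_resources):
    """Remove local multimedia whose consent appears on the revocation list.

    `fetch_revocation_list` returns the set of revoked consent ids (e.g. the
    parsed result of a request to the central repository); `local_resources`
    maps consent ids to locally stored multimedia objects."""
    revoked = set(fetch_revocation_list())
    withdrawn = []
    for consent_id in list(local_resources):
        if consent_id in revoked:
            withdrawn.append(local_resources.pop(consent_id))
    return withdrawn          # would be run e.g. daily or weekly by a scheduler

# Example with an in-memory stand-in for the central list:
resources = {"consent-42": "fundus-photo.png", "consent-7": "video.mp4"}
print(periodic_revocation_check(lambda: {"consent-42"}, resources))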
4. Discussion The proposed protocol is based on trust and therefore does not give formal guarantee of multimedia withdrawal. The risk of ignoring a withdrawal request could be minimised by permitting the export of sensitive data only to trusted partners – e.g. members of an educational alliance or consortium. Users from outside the consortium would have access to a download option without sensitive (consent relevant) data. We would favour the PUSH model, because the PULL solution offers a slower update mechanism, gives less control over what happens to the multimedia material, and requires the construction of an additional service infrastructure for maintaining the consent revocations lists. The PUSH model is conformant to data protection regulations in most countries, because the consent itself is safely stored on the original system and cannot be accessed by unauthorised persons. The recipients will be notified about the status changes as soon as the original system receives the information. VP systems which do not confirm the withdrawal message (Figure 1a step 5) will be blocked from access to new multimedia. The patient should be informed that the withdrawal request will be followed with due diligence in all possible cases, but that there are situations in which it will not be possible to withdraw all instances of the resource (e.g. in the case of printed materials). The proposed models require that each multimedia resource is addressed separately by specific consent and licensing metadata. The VP standard ANSI/MEDBIQ VP.10.1-2010 (MVP) [12] is designed to describe the structure and content of a VP [9,10], but is less well suited to describing the licensing model. The Learning Object Metadata (LOM) [13] or its extension Healthcare Learning Object Metadata (HealthCare LOM) [14] used in MVP packages is relatively unsophisticated in the way it describes consents and copyright issues. The (Healthcare) LOM field "copyrightAndOtherRestrictions" has several limitations: (1) this field is valid for the overall VP, there is no metadata available on a multimedia material level, or indeed for individual multimedia resources which may have been obtained from multiple sources; (2) there is a lack of clarity about the purpose of the field and differences in the way it is interpreted. Some interpretations consider that the field describes whether copyright applies to the object, while others consider it to describe whether the copyright issues of that VP have been cleared. This distinction has a significant impact on whether a resource can be freely shared; (3) the licence field is freetext and uses no standardised vocabulary; (4) there is no field dealing with consent issues or limitations expressed within a consent, which for example might allow only usage for a certain target group
(e.g. medical students). For these reasons we propose as a next step to extend the current set of metadata by new fields describing explicitly the consent status of individual multimedia resources, and to publish best practice guidelines on how to encode information about consents in VP packages. As mentioned in the introduction, we perceive some common traits between consents and licensing, and indeed the terms under which consent is provided influence the license under which a VP can be distributed. Therefore we suggest applying this workflow not only to consents but also to licensing management in general.
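To make the proposal more tangible, the following is a purely hypothetical illustration of what per-resource consent and licensing metadata might contain. None of these field names exist in LOM, Healthcare LOM or the MVP specification; they only illustrate the kind of information discussed above.

# Hypothetical per-resource metadata sketch (field names are invented).
resource_metadata = {
    "resource_id": "img-0815.jpg",
    "license": "CC BY-NC-SA",                    # from a standardised vocabulary
    "consent": {
        "consent_id": "INST-2011-00042",          # institutionally unique id
        "status": "granted",                      # granted | restricted | withdrawn
        "allowed_audience": ["medical students"], # limitations expressed in the consent
        "holder_contact": "consent-office@example.org",
    },
}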
5. Conclusions This paper addressed the research question of how consent and licensing of multimedia in virtual patients can be managed in an effective way, thus empowering patients to withdraw permission to use their multimedia resources. Two different protocols, the PUSH and PULL models, have been presented, with the use of the PUSH model recommended by the authors. Due to space restrictions, the presented algorithms show only the crucial steps of the consent management protocol. More detailed descriptions of aspects of the protocols (e.g. updating responsibility lists) will be presented in due course. We believe that this work is also relevant for e-learning disciplines other than medicine, since licensing and consent issues play an important role in other fields as well.
References
[1] Ellaway R, Candler C, Greene P, Smothers V. An Architectural Model for MedBiquitous Virtual Patients. Technical report. Baltimore: MedBiquitous; 2006.
[2] Cook D, Triola MM. Virtual Patients: a Critical Literature Review and Proposed Next Steps. Med Edu. 2009; 43(4):303-311.
[3] eViP Project [Internet]. 2011. http://www.virtualpatients.eu
[4] Campbell G, Miller A, Balasubramaniam C. The Role of Intellectual Property in Creating, Sharing and Repurposing Virtual Patients. Med Teach. 2009; 31(8):709-712.
[5] Ellaway R, Cameron H, Ross M, Laurie G, Maxwell M, Pratt T. Clinical Recordings for Academic Non-clinical Setting. CHERRI Project Report. UK: JISC Project; 2006 Mar.
[6] Williams J, Hardy S, Quentin-Baxter M. Proposing a Consent Commons in Open Education. Balancing the desire for openness with the rights of people to refuse or withdraw from participation. In: Open ED 2010 Proceedings; 2010 Sep 15; Barcelona, Spain. Available from: http://hdl.handle.net/10609/4864
[7] eViP Project Virtual Patient Referatory [Internet]. 2011. http://www.virtualpatients.eu/referatory
[8] Creative Commons Initiative [Internet]. 2011. http://creativecommons.org
[9] Kononowicz AA, Heid J, Donkers J, Hege I, Woodham L, Zary N. Development and Validation of Strategies to Test for Interoperability of Virtual Patients. Stud Health Technol Inform. 2009; 150:185-189.
[10] Zary N, Hege I, Heid J, Woodham L, Donkers J, Kononowicz AA. Enabling Interoperability, Accessibility and Reusability of Virtual Patients Across Europe – Design and Implementation. Stud Health Technol Inform. 2009; 150:826-830.
[11] RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile [Internet]. 2008 May. http://tools.ietf.org/html/rfc5280
[12] MedBiquitous Virtual Patient Standard ANSI/MEDBIQ VP.10.1-2010 (MVP) [Internet]. 2010. http://www.medbiq.org/working_groups/virtual_patient/VirtualPatientPlayerSpecification.pdf
[13] IEEE WG12: Learning Object Metadata [Internet]. 2005. http://ltsc.ieee.org/wg12
[14] MedBiquitous Healthcare LOM [Internet]. 2010. http://www.medbiq.org/working_groups/learning_objects/Healthcare_LOM_Overview.html
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-208
Next Steps in Evaluation and Evidence – from Generic to Context-Related
Michael RIGBY a, Jytte BRENDER b, Marie-Catherine BEUSCART-ZEPHIR c, Hannele HYPPÖNEN d, Pirkko NYKÄNEN e, Jan TALMON f, Nicolette de KEIZER g, Elske AMMENWERTH h
a School of Public Policy and Professional Practice, Keele University, UK
b Department of Health Science and Technology, and Virtual Center for Health Informatics, Aalborg University, Denmark
c INSERM-CIC-IT EVALAB, University of Lille Nord de France, Lille, France
d National Institute for Health and Welfare, Helsinki, Finland
e Department of Computer Sciences, eHealth Research, University of Tampere, Finland
f School for Public Health & Primary Care: Caphri, Maastricht University, Netherlands
g Dept. of Medical Informatics, Academic Medical Center, Amsterdam, Netherlands
h University for Health Sciences, Medical Informatics and Technology UMIT, Hall in Tyrol, Austria
Abstract. Introduction: E-health systems are increasingly important and widespread, but their selection and implementation are still frequently based on belief, rather than scientific evidence, and adverse effects are not systematically addressed. Progress is being made in promoting generic evaluation methodologies as a source of scientific evidence, but effort is now needed to consider methods for special situations. Method: Review of five evaluation contexts - national e-health plans, telemedicine, Health Informatics 3.0, usability and economics. Conclusion: Identification of requirements for approaches to be developed in these five settings. Keywords: Health Informatics; evaluation; evidence; policy; usability; economics
1. Introduction Health information systems harnessing computing, software, automated sensing, and telecommunication technologies offer considerable opportunities to improve health and health care, but are seldom fully exploited. There is an increasing political urge to adopt Information Technology (IT) solutions in health, assuming automatic benefits. European policies call for progress, including national plans and road maps [1]. However, progress in investment and in uptake has been slow, and there is both professional apathy in many cases and public concern about loss of sensitivity or confidentiality. Other areas of health care science and investment are rightly evidence-based. Pressure is rightly increasing for policy decisions in health to be evidence-based as well, as shown in [2,3]. But health informatics, as both a discipline and a supplier community, and proponents of Information and Communication Technology (ICT) based health information systems, still fight shy of this. Too often decisions are made on the basis of hope, expectation of economic benefits, or promise of systems not yet developed, and these expectations often do not materialize. Conversely, errors or poor
design can lead to rejection, poor levels of benefits, impediment to professional practice and care delivery, or patient harm or death, as reported in [4,5,6,7], while robust evidence of benefits is sparse [8]. Key to improving this situation is development of a robust scientific evidence base, the main source of which will be rigorous scientific evaluation. Recent activities of the EFMI and IMIA working groups on evaluation and health technology assessment go a considerable way to developing the appropriate principles and tools, an important one of which is the reporting guideline STAtement on Reporting of Evaluation studies in Health Informatics (STARE-HI), adopted by IMIA, and endorsed by EFMI and the AMIA special interest group on evaluation [9]. Other EFMI and IMIA initiatives include a web-based database of health IT evaluation studies, a website with examples of health IT that harmed patients, and guidelines for planning, performing and reporting health IT evaluations (GEP-HI) - see [10]. 1.1. Strengths and Weaknesses of Genericism All these activities have concentrated on a generic approach applicable to any type of application. This is in principle legitimate, as core aspects of evaluation, such as defining objectives and ensuring analysis of stakeholder interests, apply to all systems. However, there can also be weaknesses in this approach. Certain types of application, or particular dimensions of use, can require specific approaches and tools. The plan of the EFMI and IMIA working groups is to examine the issues of developing and supporting such focused interest. This paper seeks to identify the issues needing consideration in the special situations of National E-Health Plans, Telemedicine, Health Informatics 3.0 (Web 3.0) systems, Usability Studies and Economic Analysis.
2. Expert Review of Special Issues In order to present an informed view of the next stage of evaluation study development, the next sections give an overview of core issues, and in some cases current work. Each has been drafted from an expert viewpoint, but (for reasons of space and time) is an overview rather than a formal systematic review, and only key references are cited. 2.1. National E-Health Plans Most EU Member States are now drafting e-health plans, for intrinsic reasons and to comply with European goals [1]. However, the plan should not be an end in itself, and both the outcomes and the effects should be assessed. Most States are becoming aware that there is an urgent need for (continuous) evaluation activities, both to better control policy progress and to learn from challenges and experiences, but documented comprehensive frameworks defining the necessary evidence to manage different stages of national plan implementation are still few. A sound model comes from Finland, where the legislation of 2007 stipulated that a National electronic Health Information System (KanTa) [11] is to be built. The Social Affairs and Health Committee of the Parliament required action to monitor and assess the implementation of national e-health services with a view to providing timely support to the different actors involved. An evaluation planning project (KaTRI), launched in November 2008 as a joint venture between the Ministry of Social Affairs and Health and the National Institute for Health
and Welfare (THL), drafted a framework to support implementation of the plan and monitor its progress and outcome [12,13]. From literature in 2009, Australia, Canada and the UK were found to have documented national e-Health evaluation frameworks, which were used as a reference to draft the GEP-HI-compatible Finnish framework. The Finnish framework was used as a case in an MIE 2009 workshop to reflect on core issues and challenges in large-scale evaluation for supporting system development, implementation and impact assessment. The discussions were then used in refinement of the methodology-based definition of core concepts and variables to be monitored. A refined methodology was published at Medinfo 2010 [13], and some results gained by measuring the baseline situation in eTelemed 2011 [14]. In Medinfo 2010 a question was raised on an international dataset on monitoring National eHealth solutions, and a workshop is being planned in collaboration with Danish, Swedish and Austrian experts on further elaboration of this idea. These are early steps along a promising path. 2.2. Telemedicine (including Pervasive and Ubiquitous Systems) Telemedicine is an important and developing area of health informatics applications. However, it has some aspects which make it distinct from other health information systems. These differences fall naturally from the nature of telemedicine systems, namely that they cross boundaries and distance, and that there are two sets of users, who may not know one another, and may have little affinity. The remote user may be a partner health professional, such as when a second opinion or guidance on a diagnosis is sought; they may be a different type of health professional, as when a nurse-led casualty service is supported by a parent trauma unit; or (increasingly) the remote user may be the patient, either being monitored passively or as an active care participant. Organisationally telemedicine is different, too – it is seldom deployed within one organisation, but usually crosses organisational and geographical boundaries, and increasingly international borders, making the identification of a user community and corporate responsibility almost impossible [15]. Further, the behavioural issues are different, particularly when patients are the remote users. These affect both patients as users, who may comply partially or poorly, and clinicians whose practice may change. Some of the issues of responsibility and evaluation have been highlighted [16,17]. They are also recognized internationally as ISO TC215 is developing a technical specification for quality criteria for telehealth. Already one important proposal to modify the STARE-HI principles to apply to telemedicine has been published [18]. 2.3. Health Informatics 3.0 and other Virtual Systems Web 3.0 and thus Health Informatics 3.0 offer new paradigms of health information system, based on semantic tagging – of things, data values, encounters, or actions. Tagging and accumulation may include, for example, diseases and their occurrence, clinical orders and their means of implementation, prescribed pharmaceuticals, or implanted devices. As the data can be delivered to another person or organization with an interest in the data subject, the systems are largely virtual. Furthermore there is not a single physical system, and the use of cloud computing gives another dimension to virtuality. 
Health Informatics 3.0 should be able to integrate multiple information sources such as clinical data, laboratory data, and model clinical pathways. This comes at the time when the desire to move health care from hospital and organisation-centric health care to home-based care is a policy priority.
In promising a new pattern of systems and activity, Health Informatics 3.0 thus brings a new paradigm of risks, from errors within the virtual system to changes of user perception and behavior. However, the evaluation of Health Informatics 3.0 systems raises new challenges. The concepts of ‘stakeholder’ and ‘user community’ become nebulous, as there is no physically defined user population, and like telemedicine is also not bounded by organizational or geographical boundaries. The concept of active user is different from that of unanticipated recipient. The virtue of the new virtual systems is that they break previous constraints, but in being outside traditional controls, the evaluation issues and methods need to be rethought too. 2.4. Usability Studies Specifically for health technologies the EU requires usability evaluation of all medical devices. These obligations may be extended to any “software […] intended [...] to be used specifically for diagnosis and/or therapeutic purposes” [directive 2007/47/EC]. The recommended way to fulfil this is to adopt a user-centred design cycle of the product and to document both the methods applied and the outcomes of the usability studies. Alternatively, it is possible to perform a summative usability evaluation once the product is ready to go to market, to check there is no major risk of usage errors. Fundamentally, usability studies are integrated in the (re-)design cycle of products and therefore serve mainly formative evaluation purposes. Their main objective is to find the usability flaws of the product and of its user interface and to propose solutions to fix the problems. Several iterations and discussions / negotiations with the vendors / designers of the product might be necessary to get a usable and safe product. Guidelines such as GEP-HI, organized in sequential phases, may have some limitations in supporting such constructive evaluation. Moreover, Health IT evaluation studies aim at establishing evidence regarding the impact of systems in use, while usability studies aim at optimizing the application. Finally, usability studies benefit from a number of standards (e.g. ISO 13407) structuring their methodological approach. Therefore, evaluation guidelines may require adaptation to integrate more closely with usability. Nonetheless, the use of guidelines such as GEP-HI looks promising in informing usability studies. A recent usability study applied GEP-HI phases 1 and 2 and could identify the following benefits: (i) clarification of the information need, which led to a slightly extended scenario for usability tests; (ii) strong involvement of the designers and vendors of the product; (iii) early clarification and consensus about usability goals; (iv) shortened iterations and re-engineering cycles, and (v) better contract basis. 2.5. Economic Analyses Economics are crucial in responsible health management. However, the economics of health informatics applications has been under-addressed. Many advocates expect electronic systems to reduce costs, but though they eliminate some processes, or improve quality, they much less frequently reduce spending. Quality improvements do not yield cash, and increases in throughput may actually increase spending. The situation is further complicated by the fact that significant studies showing positive return on investment in e-health have computed societal gains [19] – this is laudable, but societal gains do not pay running costs nor reimburse the system operator. 
Costing methods should identify implementation costs (including training and process re-engineering), operating costs (including maintenance), and savings, but also
wider impacts including quality improvement and risk reduction, and additional costs such as increased activity. But in the same way that service industries such as banking and civil aviation recognise that effective informatics systems are key to core business success, the health sector needs means of identifying and funding net enterprise value.
3. Discussion Time is overdue for health informatics to move to being a science-based health technology, implemented according to robust evidence. The EFMI and IMIA groups have made good progress with generic evaluation methods to enable the generation of such evidence. The next important steps needed are the production of targeted methods for specific aspects.
References
[1] Commission of the European Communities. Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions: e-Health - making health care - COM (2004) 356; Luxembourg, 2004.
[2] Ham C, Hunter DJ, Robinson R. Evidence-based Policy; British Medical Journal, 310, 71, 1995.
[3] Gray JAM. Evidence-based Healthcare: How to make Health Policy and Management Decisions. 2nd ed. Edinburgh: Churchill Livingstone. 444 pages, 2001.
[4] Ammenwerth E, Shaw NT. Bad health informatics can kill - is evaluation the answer? Methods Inf Med. 2005;44(1):1-3.
[5] Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physicians order entry system. Pediatrics 2005; 116(6): 1506-12.
[6] Koppel R, Metlay JP, Cohen A, Abaluck B, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005 Mar 9; 293(10):1197-203.
[7] Bad Health Informatics web site – accessed from http://iig.umit.at/efmi/ (accessed 2 February 2011)
[8] Black AD, Car J, Pagliari C, Anandan C, Cresswell K, et al. The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview. PLoS Med 8(1) 2011: e1000387. doi:10.1371/journal.pmed.1000387.
[9] Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykänen P, Rigby M. STARE-HI - Statement on reporting of evaluation studies in Health Informatics. Int J Med Inform 2009; 78(1): 1-9.
[10] EFMI Evaluation Group web site - http://iig.umit.at/efmi (accessed 2 February 2011)
[11] www.kanta.fi/web/en/frontpage (accessed 3 February 2011).
[12] Hyppönen H, Doupi P, Hämäläinen H, Ruotsalainen P. Planning for National Health Information System evaluation [Finnish with English summary]. THL Report 33/2009, Helsinki, Finland, 2009.
[13] Hyppönen H, Doupi P, Hämäläinen P, Komulainen J, et al. Towards a National Health Information System Evaluation. In: Safran C, et al., Eds. MEDINFO 2010, IOS Press, 2010, pp. 1216-1220.
[14] Hyppönen H, Viitanen J, Reponen J, Doupi P, et al. Large-scale eHealth Systems: Providing Information to Support Evidence-based Management. In: eTELEMED 2011: The Third International Conference on eHealth, Telemedicine, and Social Medicine. February 23-28, 2011, Gosier, Guadeloupe, France (accepted).
[15] Rigby M. The Management and Policy Challenges of the Globalisation Effect of Informatics and Telemedicine; Health Policy, 46, 97-103, 1999.
[16] Wyatt J. Evaluating the Impact of Telemedicine on Health Professionals and Patients; in Rigby M, Roberts R, Thick M (eds.). Taking Health Telematics into the 21st Century; Radcliffe Medical Press, Abingdon, 2000, pp 61-76.
[17] Wallace P. Telemedicine in Primary Care: Evaluating the Effects on Health Practice and Health Practitioners, in ibid, pp 83-90.
[18] Kaldoudi E, Chatzopoulou A, Vargemezis V. Adapting the STARE-HI Guidelines for the Evaluation of Home Care Telehealth Applications: An Interpretive Approach; Journal on Information Technology in Healthcare, 2009; 7(5): 293-303.
[19] Study on Economic Impact of eHealth: Developing an Evidence-based Context-adaptive Method of Evaluation for e-Health; empirica, Bonn, 2005.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-213
Virtual Ward Round
Michael STORCK a,1, Frank ÜCKERT a
a Institute of Medical Informatics, University of Münster, Germany
Abstract. “Virtual Ward Round” is a web-based blended learning tool. The program simulates hospital ward rounds. Within a virtual environment, students make diagnoses and order treatments. Tutors can prepare cases easily, ensuring realistic cases directly linked to the corresponding classes. The program “Virtual Ward Round” will hopefully be an enrichment to curriculum-based teaching. Keywords. Education, Blended e-learning, Application, Ward round
1. Introduction Currently, there are a few well-developed e-learning tools available, for example the INMEDEA simulator [1] and CASUS Online [2,3]. Many of these tools are especially appropriate for simulating the workflow within a doctor’s office, since one patient is treated in one specific episode of her/his disease. In the present paper, we describe the development of a new blended learning tool called “Virtual Ward Round”. “Virtual Ward Round” simulates a ward round on a hospital ward. The primary goal of the application is not to train medical students in the treatment of specific diseases. “Virtual Ward Round” aims to transfer knowledge on typical medical processes in different hospital settings, encouraged by the related tutors. In order to guarantee learning success, the usage of this tool may be combined with courses or seminars.
2. Method The Institute of Medical Education at the University of Münster conceptualized the idea of “Virtual Ward Round” and developed its professional content in collaboration with the Clinic and Polyclinic for General and Visceral Surgery at the University Hospital of Münster. In 2010 the Institute of Medical Informatics at the University of Münster implemented the Internet portal “MeinCampus” 2, which unites several web applications. The development framework “Zend Framework” [4] is the basis of this portal and facilitates PHP [5,6] web development in an object-oriented manner. The application data is saved in a MySQL [6] database. Access to the portal is realized by a single sign-on solution which allows the user to access all enabled modules.
1 Corresponding Author.
“Virtual Ward Round” was developed as a complementary module of this Internet portal with appropriate technical characteristics. The application is available on the Internet and easily accessible by tutors and students. It is set up to be multilingual and allows the student to operate the program with an English or German user interface. Access control is realized by a simple role model, which ensures that each user has to sign on with her or his personal credentials.
3. Results

The administrator is authorized to create new wards within the application and to define the patients’ characteristics and the metadata for medical conditions in the corresponding ward. This determines the tutor’s input options as well as the presentation of the program for the student. Hence, the pool of medical examinations and diagnoses available to the student is dynamic in each ward. The system includes a pool of standard patients differentiated by age and gender. For these patients, a range of standard findings and medical examinations is stored.

3.1. Tutor’s View

The tutor’s primary interest is to pass on knowledge to his or her students, applying a sufficient set of self-selected medical cases. A major feature of “Virtual Ward Round” is the tutor’s opportunity to add new cases easily to the application. The anamnesis and findings input forms are provided and can be adjusted by the system administrator to fit the needs of the tutor.
Figure 1. Tutor’s view, creating a ward round.
The program allows the stepwise creation of a new case without the risk of losing any information. The tutor defines the case’s anamnesis information and specific medical findings. The available findings for each case are complemented by the pool of those defined within the profiles of the standard cases, to avoid unwanted hints for students. Moreover, the tutor specifies which examinations are required, adequate and not adequate for the created case. Based on this information, the system evaluates the student’s performance after a ward round is finished. The tutor combines the cases into ward rounds, as shown in Figure 1. The cases can be integrated into multiple ward rounds and their order is easily modifiable within these ward rounds. The created ward rounds remain modifiable by the tutor, even after their completion and use within the system. During an adjustment session, the ward round under revision is invisible to the students.

3.2. Student’s View

Another important feature of “Virtual Ward Round” is the active engagement of students in a decision-making process. Simulating physicians’ work on a hospital ward, the student makes diagnoses and gives treatment recommendations for hospital patients.
Figure 2. Student’s view, starting a prepared ward round.
After selecting a prepared ward round, the student is guided to the user interface shown in Figure 2, in which the student can check the patient records, including the anamnesis and medical findings, or enter the patient rooms directly. The student has a large range of examinations available. An appropriate examination needs to be selected in consideration of the patient’s records and the time consumed by the examinations. Each examination is linked to specific findings. If the student needs to review a patient’s records again, a time penalty is
imposed and registered in the student’s evaluation. The program’s workflow encourages students to memorize patient records and to consider them as the foundation of decisions on diagnoses and treatment recommendations. At the end of the ward round, each student is evaluated on the elapsed time and the appropriateness of the ordered examinations.
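The paper does not spell out the scoring rule, but the two evaluation criteria just described (elapsed time and the appropriateness of the ordered examinations, combined with the tutor's required / adequate / not adequate classification from Section 3.1) can be pictured with a minimal sketch in Python. All weights, penalties and field names below are illustrative assumptions, not the actual implementation of "Virtual Ward Round".

# Hypothetical scoring sketch for one completed case of a virtual ward round.
# Weights, the time budget, the review penalty and all field names are
# illustrative assumptions only, not the behaviour of the actual application.
from dataclasses import dataclass

@dataclass
class CaseDefinition:
    required: set        # examinations the tutor marked as required
    adequate: set        # acceptable but not essential examinations
    not_adequate: set    # examinations explicitly inappropriate for this case

@dataclass
class StudentRun:
    ordered: list            # examinations the student ordered
    elapsed_minutes: float   # simulated time consumed during the case
    record_reviews: int      # how often the patient record was re-opened

def score_case(case: CaseDefinition, run: StudentRun,
               time_budget: float = 30.0, review_penalty: float = 2.0) -> float:
    ordered = set(run.ordered)
    hit = len(ordered & case.required) / max(len(case.required), 1)
    wrong = len(ordered & case.not_adequate)
    # Each re-opened record adds a fixed time penalty; full time marks inside
    # the budget, linear decay beyond it.
    effective_time = run.elapsed_minutes + review_penalty * run.record_reviews
    time_score = max(0.0, 1.0 - max(0.0, effective_time - time_budget) / time_budget)
    # Combine appropriateness and time, deducting a fraction per inappropriate order.
    return max(0.0, 0.7 * hit + 0.3 * time_score - 0.1 * wrong)

case = CaseDefinition(required={"abdominal ultrasound", "blood count"},
                      adequate={"ECG"}, not_adequate={"cranial CT"})
run = StudentRun(ordered=["blood count", "ECG", "cranial CT"],
                 elapsed_minutes=34.0, record_reviews=1)
print(f"case score: {score_case(case, run):.2f}")

The point of the sketch is only to show how the tutor's case definition and the student's run feed into one score; the real application may weight the criteria differently.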
4. Discussion

With “Virtual Ward Round”, tutors receive a useful tool to engage students in an active decision-making process on medical treatment and to make them familiar with common medical processes in a hospital setting. The setting of the program was chosen in contrast to previous e-learning systems, which mainly simulate the work processes of doctors’ offices [1]. The system gives the opportunity to educate all students with a standardized set of medical cases. In addition, the application has a user-friendly design and is readily accessible via the Internet. The content of the program is certainly relevant to the teaching and learning objectives of the medical curriculum. Other applications have pre-integrated cases, and the creation of new cases is long-winded and demanding. Although a set of well-developed pre-integrated cases is an advantage of these applications, these cases are not hand-tailored to specific learning objectives.

4.1. Benefits for the Tutor

Applying “Virtual Ward Round”, the tutor gets the chance to integrate self-created cases easily into the e-learning system. Thus, the tutor is able to teach cases well known to him or her, based on his or her own personal preferences. Previously established e-learning systems realize this feature only partially, since it is difficult to integrate a new case into the system [3]. For this reason, the tutor’s only option is to search for a similar case within the incorporated cases of the system. In “Virtual Ward Round” the tutor is able to design ward rounds with an individual selection of cases to improve the student’s memorization of patient records. This is a clear advantage compared to other systems, which only teach one case at a time [1,2]. The application is a positive contribution to the retention of students’ knowledge and can improve the tutor’s teaching performance [7-9].

4.2. Benefits for the Student

“Virtual Ward Round” offers a wide range of advantages for students. The program introduces medical students to the hospital setting by giving them the opportunity to act from a physician’s perspective. The program offers a suitable environment to get familiar with the work procedures and methods of hospital settings. Like the INMEDEA simulator, “Virtual Ward Round” “allows students to navigate freely within the system and to determine […] their next diagnostic steps [by themselves] at all times” [1]. This diverse approach to teaching improves the motivation to study and makes it easier to reach the educational goals [10]. Generally, it has been shown that e-learning improves the efficiency of gaining knowledge and skills [11] and their retention [7-9]. The acceptance of a case-based computerized learning program (CASUS) was evaluated by A.B. Simonsohn and M.R.
Fischer in 2004 [10]. In the summer semester of 2001, the evaluation showed that 60% of the students classified the learning tool as useful, nearly 50% of the students stated that the teaching was “fun” and 42% felt more motivated [10].

4.3. Perspective

The implementation of the program has reached an early beta stage and the first testable prototype is nearly completed. The blended e-learning tool will be tested in the winter semester 2011/2012 in collaboration with the Clinic and Polyclinic for General and Visceral Surgery at the University Hospital of Münster. The evaluation of the program will consider ease of use, user acceptance, and the effect on learning efficiency and retention of knowledge. In the case of a positive evaluation, the application will be expanded to other medical areas and will become part of the faculty’s curriculum.
References
[1] Horstmann M, Renninger M, Hennenlotter J, Horstmann CC, Stenzl A. Blended e-learning in a web-based virtual hospital: a useful tool for undergraduate education in urology. Education for Health. 2009;22(2).
[2] Franke C, Holzum A, Böhner H, Baehring T, Ohmann C. Computer-based case-study teaching in surgery. Chirurg. 2002;73:487-491.
[3] Kolb S, Reichert J, Hege I, et al. European dissemination of a web- and case-based learning system for occupational medicine: NetWoRM Europe. Int Arch Occup Environ Health. 2007:553-557.
[4] Weier O’Phinney M. Zend Framework: The Official Programmer’s Reference Guide. Apress; 2010.
[5] Lecky-Thompson E, Eide-Goodman H, Nowicki SD, Cove A. Professional PHP5 (Programmer to Programmer). John Wiley & Sons; 2004.
[6] Gilmore JW. Beginning PHP 5 and MySQL: From Novice to Professional. Apress; 2009.
[7] Ruiz JG, Mintzer MJ, Rosanne M. The impact of e-learning in medical education. Academic Medicine. 2006;81(3):207-212.
[8] Clark D. Psychological myths in e-learning. Medical Teacher. 2002;24(6):598-604.
[9] Greenhalgh T. Computer assisted learning in undergraduate medical education. BMJ. 2001;322:40-44.
[10] Simonsohn AB, Fischer MR. Evaluation of a case-based computerized learning program (CASUS) for medical students during their clinical years. Dtsch Med Wochenschr. 2004:552-556.
[11] Lyon HC, Ed D, Healy JC, et al. PlanAlyzer, an interactive computer-assisted program to teach clinical problem solving in diagnosing anemia and coronary artery disease. Academic Medicine. 1992;67:821-828.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-218
Professional Development of Health Informatics in Northern Ireland
Paul MCCULLAGH a,1, Gerry MCALLISTER a, Paul HANNA a, Dewar FINLAY a, Paul COMAC b
a School of Computing and Mathematics, University of Ulster
b Northern Ireland Health and Social Care ICT Training Group, Ireland
1 Corresponding author: Dr Paul McCullagh, School of Computing and Mathematics, University of Ulster, Jordanstown campus, Co. Antrim, BT37 0QB. E-mail: [email protected]
Abstract. This paper addresses the assessment and verification of health informatics professional competencies. Postgraduate provision in Health Informatics was targeted at informatics professionals working full-time in the National Health Service, in Northern Ireland, United Kingdom. Many informatics health service positions do not require a formal informatics background, and as we strive for professionalism, a recognized qualification provides important underpinning. The course, delivered from a computing perspective, builds upon work-based achievement and provides insight into emerging technologies associated with the ‘connected health’ paradigm. The curriculum was designed with collaboration from the Northern Ireland Health and Social Care ICT Training Group. Material was delivered by blended learning using a virtual learning environment and face-to-face sessions. Professional accreditation was of high importance. The aim was to provide concurrent qualifications: a postgraduate certificate, awarded by the University of Ulster and a professional certificate validated and accredited by a professional body comprising experienced health informatics professionals. Providing both qualifications puts significant demands upon part-time students, and a balance must be achieved for successful completion. Keywords. Health informatics, professionalism, education, accreditation
1. Introduction

The United Kingdom’s National Health Service (NHS) faces major challenges: rising expectations; changing demographics; the continuing development of the information society; advances in treatments; the changing nature of disease; and changing expectations of the health workplace. These drivers are ubiquitous and influence healthcare delivery throughout the developed world. Darzi [1] recommended: “A clear focus on improving the quality of NHS education and training.” In Northern Ireland (NI) over the past 4 years, significant investment was set aside to remotely monitor up to 5000 patients with long term conditions such as heart failure, pulmonary disease and diabetes. This approach has become known as the ‘connected health’ paradigm, and it challenges the mechanisms for updating the skills of professionals in this form of healthcare delivery [2]. Whilst these challenges are of course multidisciplinary, we believe that Health Informatics (HI) can provide a significant
contribution, both in managing the strategy and equipping the workforce with skills. HI lies at the intersection of informatics and the health and social care disciplines. It equips healthcare professionals with better information handling and interpretation skills. It may be defined as: “The knowledge, skills and tools which enable information to be collected, managed, used and shared to support the delivery of healthcare and to promote health” [3]. In 2007, the University of Ulster’s School of Computing and Mathematics, in collaboration with Northern Ireland’s Health and Social Care (HSC) Information and Communications Technology (ICT) Training Group, designed a course to provide education and training for the ICT ‘specialists’, with professional accreditation built in [4]. The case for enhancing the learning agenda has been strengthened by the HSC ICT Strategy [5]. A survey of over 1000 healthcare professional staff in NI [6] indicated a positive perception of ICT but revealed significant gaps in levels of awareness, attitudes, knowledge and skills. Only 44% of respondents had any formal ICT training and there was strong support for continued ICT education. Recommendations supported a two-tier approach, with multi-professional training for all staff and additional training specific to the needs of specialisms. A further recommendation was to “maximise online learning strategies for delivery content” [6]. The need for specific technical skills in HI has been recognised by the International Medical Informatics Association (IMIA) Work Group 1 (HI education) [7]. IMIA recommends that the ‘informatics based approach’ should focus on data, information and knowledge, as appropriate to the skills of an informatician, and treat health care problems cooperatively with physicians and other healthcare professionals. We have developed specialised HI education and professional development with a specific CS underpinning [8]. Content should be practical and relevant to the profession, and should permit a student to concurrently satisfy the learning objectives associated with a Professional Certificate in HI and support the National Occupational Standards for Health Informatics. Addressing professional development requires additional competencies to be recorded, evaluated, assessed and accredited. This is by its nature a personalised component, dependent on the discipline and career path of the individual healthcare professional; it is achieved by compiling a portfolio of evidence, documented during the course, the material for which is acquired through various sources including pedagogic instruction, inter-professional group work and work based learning.
2. Methods: Planning and Mode of Delivery

Postgraduate and professional provisions were planned in close collaboration with the Health and Social Care Services ICT Training Group [9]. The content is delivered by blended learning, providing a learning opportunity for all; participation by some would otherwise be impractical, due to the demands of the workplace and the geographical distribution of students [10]. The CampusOne Virtual Learning Environment (VLE) [11] is employed for delivery, and is complemented by face-to-face meetings. The course provides education for those with a first degree or appropriate experience (Accreditation for Prior Experiential Learning, APEL). It utilises best practice in software engineering methodologies and technology localised towards health, with particular relevance to the health system in NI, which is organized autonomously within the general structure of the United Kingdom’s National Health Service. Delivery utilizes research expertise in areas such as requirements analysis, signal and image
processing and data mining. It also benefits from applied research and knowledge transfer projects in areas such as smart homes, ambient assisted living, medication management, and self-management of chronic disease. The course comprises four taught modules (each worth 15 credit points at level 7), leading to a postgraduate certificate: ‘Electronic healthcare’, ‘Information management in health and social care’, ‘Analysing and presenting data and information’, and ‘Emerging healthcare technologies’. Postgraduate Diploma and MSc routes will be available in the future.
Figure 1. Assessment of professional (left) and academic (right) elements.
Figure 1 illustrates how the professional element is addressed in parallel with the academic course. The academic course (right branch) is 100% assessed by coursework, which should be passed at a level of greater than 50%. It assesses learning outcomes (LOs) in the form of knowledge and understanding (K), intellectual qualities (I), professional and practical skills (P) and transferable skills (T). The professional element (left branch) assesses professional competencies associated with the academic learning outcomes, but independent of them. Indeed, much external material (e.g. work based learning) is required for completion. The competencies are based on the UK’s professional certificate in health informatics [12], and are correlated with Connecting for Health recommendations [13], but have evolved separately following ongoing dialogue between the Northern Ireland Health and Social Care ICT Training Group and University staff and hence do not map directly. The competencies are assessed by an external evaluator and accredited by an independent NI panel of healthcare professionals. In order for a student to receive the professional certificate, all competencies must be achieved.
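As a concrete reading of the two parallel award rules just described, the sketch below (Python, with invented module marks and competency statuses) checks the academic rule, every module passed at more than 50%, and the professional rule, every competency accepted by the external verifier. The data layout is an assumption made purely for illustration; it is not the University of Ulster's actual assessment system.

# Illustrative check of the two award rules; marks and competency statuses are invented.
def postgraduate_certificate_awarded(module_marks, pass_mark=50.0):
    """Academic rule: all four taught modules passed at a level greater than 50%."""
    return len(module_marks) == 4 and all(mark > pass_mark for mark in module_marks.values())

def professional_certificate_awarded(competencies):
    """Professional rule: every competency accepted by the external verifier."""
    return all(status == "accepted" for status in competencies.values())

marks = {"Electronic healthcare": 62,
         "Information management in health and social care": 55,
         "Analysing and presenting data and information": 71,
         "Emerging healthcare technologies": 58}
portfolio = {"competency A": "accepted", "competency B": "accepted", "competency C": "resubmitted"}

print(postgraduate_certificate_awarded(marks))      # True: every module above 50%
print(professional_certificate_awarded(portfolio))  # False: one competency not yet accepted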
3. Results

The target is for 20 students per year, with a programme lifespan of at least five years, providing NI with 100 HI specialists. In year 1 of the programme, delivery started with
one academic module (15 CATS) per semester, which also typically incorporates one professional certificate module. This provided good retention statistics (16/20). However, for year 2 entry (Sept 2009), the picture was not as promising. This can be attributed to delivering 2 academic modules (30 CATS) per semester plus the professional certificate, which caused 7/19 students to defer, and two students to drop out. The dilemma is to provide such a course in a reasonable time frame but to incorporate the professional training and accreditation. When interviewed, students indicated that they were not fully prepared for the demands of part-time study and portfolio building, purely in terms of workload. Few cited the level or relevance of the course as the main problem. This prompted a revision of the competencies in July 2010 [12], to provide more focus, to update content and to provide quantification of workload and limits of expected content. A major issue with portfolio development is containing the overall workload required of the student. Modules are assessed by an internal assessor (the course tutor), who provides feedback and support, and an external verifier, who accepts/rejects the portfolio. If the external verifier accepts the portfolio then accreditation is recommended by a board of HI professionals in NI. Any rejected competency can be updated and resubmitted. Experience with the 2010 intake (17 students) has been positive regarding the updates to the competencies. Students have not been deterred as in the previous year. Engagement with all the modules (‘Electronic Healthcare’, ‘Analysing and Presenting Data’, ‘Emerging Healthcare Technology’, and ‘Information Management in Health and Social Care’) has been good. Flexibility with professional assessment has allowed students to submit portfolios for early verification (Jan 2011) or to wait until the end of semester 2 (May 2011).
4. Discussion and Conclusion

As the healthcare industry contains many vocational professions, there has been a view in society that these workers do not achieve the same rewards as in the commercial/private sector. Enhanced training and education and the move to professional recognition for ICT expertise can address this issue [14]. In addition, it should enhance the quality of health-care delivery, particularly in connected health. Keogh [15], addressing the demand for information across the NHS and social care, stated: “Good informatics services are vital to delivering the health and social care services we hope for, and the only way of knowing how well we have delivered. By focusing on high quality informatics services, we will improve patient experience and enable NHS staff to make better use of information to improve the quality of care.” The professional development provision discussed in this paper is intended for an HI specialist who requires an understanding of core computing concepts, supplemented by knowledge of specialist health informatics systems, clinical terminology, health nomenclature, health standards, ethics and governance, and the role of decision support in healthcare. Due to the complexities of HI, the specialist should have core HI skills and an emphasis on the local health system organization. Based on three years of intake, the students have found the course demanding, and it could be that two modules (30 CATS) per semester with both academic and professional assessment requires too much time commitment for a part-time student. Discussions with students indicate that the demands of the professional portfolio represent a significant workload. Although there is intentional overlap in topics between the postgraduate and professional threads of the
pedagogy, the necessity to address and document all learning outcomes in a comprehensive manner is a significant burden. The workload is exacerbated by an HI landscape that is fast changing. Of course there is normally no ‘template’ answer, as the portfolio has a significant element of personalization. Solving this dilemma of concurrently delivering a postgraduate challenge with professional verification will ultimately determine the success of the initiative. One major benefit of the course has been the establishment of an HI ‘community’ comprising many roles (consultants, junior doctors, network managers, administrators) within the NI HI sector, partially due to the teamwork elements of the course. While this initiative has been aimed at the HI specialist, we are now exploring the possibility of extending provision to the wider (non-specialist) informatics community, with the provision of short courses, which can achieve a continuing professional development accreditation component and bear concurrent academic credit (at UK level 4).

Acknowledgements: The authors wish to acknowledge the contribution of the tutors, students, external verifier and members of the Northern Ireland Health Informatics Assessment Panel; advice from Dr Peter Murray and Dr Jean Roberts in course design; and support funding from the Higher Education Academy: Information and Computer Science.
References
[1] Darzi. NHS Next Stage Review: High Quality Care For All. 30 Jun 2008. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_085825, accessed Feb 2011.
[2] McGimpsey M. Ministerial Speech, European Centre for Connected Health. http://www.eucch.org/index.htm, accessed Feb 2011.
[3] Department of Health. Making Information Count. 2002. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_4073078, accessed Feb 2011.
[4] McCullagh PJ, Finlay DD. Health Informatics education: balancing academic achievement and professional development. Swiss Medical Informatics 70, 2010, 18-21, Verlag-Johannes Petri.
[5] Department of Health, Social Services and Public Safety (2005). HPSS ICT Strategy 2003-2010.
[6] Sinclair M, McGlade K, Comac P, Kelly B, Brown H, Hatamleh R, Stockdale J. Knowledge, Skill and Attitude of NI FHSSPS Healthcare Professionals towards Information and Communication Technology: Report of a Northern Ireland Survey (2007).
[7] Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics. First Revision. Methods Inf Med. 2010 Jan 7;49(2):105-120.
[8] McCullagh PJ, Murray P. Health Informatics Education in the ICS Curriculum - The Need for Benchmarking. In: Steele H, Hackett J, eds. Proceedings of the 7th Annual Conference of the Subject Centre for Information and Computer Sciences, University of Ulster, Dublin, 2006: 34-39.
[9] HSC ICT Training Group, http://www.beeches-mc.co.uk/, accessed Feb 2011.
[10] Alexander S, Kernohan WG, McCullagh P. Self Directed and Lifelong Learning. In: Global Health Informatics Education, Studies in Health Technology and Informatics 109 (Hovenga & Mantas), 2004.
[11] CampusONE, http://learning.ulster.ac.uk/webct/entryPageIns.dowebct, accessed Feb 2011.
[12] Rigby M. Northern Ireland Review of Professional Competencies in the NHS Professional Certificate in Health Informatics, 2010.
[13] Connecting for Health. Learning to Manage Health Information: a theme for clinical education: Making a difference. Crown Copyright, 2009.
[14] UK Council for Health Informatics Professions, http://www.ukchip.org/, accessed Feb 2011.
[15] Keogh B. Health Informatics Review, Department of Health, July 2008.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-223
How Important is Theory in Health Informatics? A Survey of UK Academics
Philip SCOTT a,1, James BRIGGS a, Jeremy WYATT b, Andrew GEORGIOU c
a Centre for Healthcare Modelling and Informatics, University of Portsmouth, UK
b Institute of Digital Healthcare, University of Warwick, UK
c Centre for Health Systems and Safety Research, University of New South Wales, Australia
1 Corresponding Author: Dr. Philip J. Scott; E-mail: [email protected]. Centre for Healthcare Modelling and Informatics, University of Portsmouth, Buckingham Building, Portsmouth PO1 3HE.
Abstract. The disciplinary status of health informatics remains unclear. Is it an art or a science? Does it have a body of theory? A survey was devised for UK academics that teach or research health informatics. Forty-six responses were received, twenty-five from the target group (representing between a quarter and a third of the population of interest). Health informatics is not perceived to have a well-known and clearly definable body of theory, but there is a clear demand for a more theoretical basis for the discipline. Journals and conferences were rated as the best sources of theory and seven key textbooks were identified. Keywords. Medical informatics; health informatics; theoretical models; review
1. Introduction

The purpose of this paper is to report a survey of UK academic opinion about theory in health informatics. We first consider the status of the field and the nature of theory. Is health informatics truly a scientific discipline in which it is right to expect theory to exist? One argument is that it is both art and science: its applied features are the art and its more fundamental characteristics are the science of medical informatics [1]. Another view is that health informatics should be viewed as a scientific discipline only if it has specific principles that are enduring, evidence-based, easily applied and original. That would imply that if health informatics solely shows how to use principles from other fields then it must be accepted as an application area, not a scientific discipline [2]. This uncertainty is not surprising given that the name and definition of the field of health informatics remains unresolved [3-4].

1.1. The Nature and Significance of Theory and its Relevance in Health Informatics

What is theory? One dictionary definition is that theory is either a conjecture or “an explanation or system of anything; an exposition of the abstract principles of a science or art” [5]. It may comprise synthesis, abstraction and interpretation of research findings or a purely hypothetical explanation of observed phenomena. It may take the form of a predictive model, a proposed causal relationship or a conceptual framework.
Theory may be derived either philosophically or empirically and its application may be either explicit or tacit [6]. A theory is capable of application in multiple scenarios. Why does theory matter? Firstly, formulating a proposed general principle allows it to be empirically tested, then accepted, qualified or rejected. Secondly, Kuhn [7] proposed that a sign of maturity of a scientific field is its acquisition of a “paradigm”, defined as the theory, methods and standards of the given domain of knowledge. Does theory matter in health informatics? Theory in health informatics might be expected to provide helpful frameworks for evaluation, design or implementation. The absence or presence of theory in health informatics, and its relative maturity, is arguably significant to an assessment of the disciplinary maturity and professional credibility of the field. Health informatics authors have called for more explicit epistemology and theory in the design and reporting of its research [8-10]. The authors wanted to determine how theory and disciplinary maturity are perceived within the health informatics academic community, assess the level of interest in this topic and sample informed opinion about relevant sources, to guide the design of some form of systematic review. A survey instrument was devised to answer the questions: Is the planned review worth doing? If so, where should it look?
2. Methods 2.1. Survey Instrument As the aim of the survey was specific, a new twenty-item instrument was devised for the purpose. The instrument was developed and tested for usability and face validity within the research group of PS and JB. The first ten questions asked about the nature and importance of theory in health informatics and the professional maturity of the discipline. The remaining questions asked participants to rate the relative value of particular sources for the proposed review, to identify the main textbooks in health informatics and comment whether theory is given enough attention in the literature. Questions 1-17 offered a five-point Likert-type scale (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree) with a “No opinion” option and a free-text box to allow qualifying comments to be made. Questions 18 and 19 asked for free-text lists. Question 20 offered a dichotomous yes/no choice with the option of qualifying comments. The demographics requested were sector (academic institution type, NHS, private healthcare or information technology), job role, gender and whether UK-based. Standard systematic review methods are designed to examine the body of evidence on particular interventions [11]. The review contemplated here has a rather different purpose: to identify certain types of conclusion (theoretical inferences) within a general field, not substantive conclusions about specific interventions. As this aim might require adaptation of the standard review approach, the survey offered participants repeated opportunities to make qualifying comments to elucidate this. 2.2. Target Population The population of primary interest was the UK health informatics academic community, defined as professional teachers or researchers in the field and their research students. This population was selected to offer a well-informed community of interest on the subject. The UK focus was chosen merely to circumscribe a definite sampling frame
and facilitate survey administration. There were understood to be 20-25 UK academic institutions that teach or research health informatics. Given that the research groups are relatively small, 75-100 seems a plausible estimate of the target population. This is consistent with the figure of approximately 70 academically affiliated members voluntarily enrolled in the UK Faculty of Health Informatics (Bruce Elliott, Faculty coordinator, personal communication, 25 August 2009). 2.3. Administration The survey was constructed as a web site and advertised to selected UK email lists and web sites relevant to health informatics educators and practitioners. The survey was open during August–September 2009. Given its public nature, participation was not restricted to the target population but was open to anyone with an opinion to offer. 2.4. Analysis Answers were scored as ‘Strongly Agree’=1, ‘Agree’=2, ‘Neutral’=3, ‘Disagree’=4, ‘Strongly Disagree’=5. Although data from Likert scales is commonly treated as interval rather than ordinal, it has long been debated whether this is statistically correct [12]. For the purposes of this paper, the data is treated as interval given that the semantic ranges of the questions intuitively offer a symmetrical continuum and the very broad nature of the overall study question does not demand more of the data than it legitimately offers. This approach allows summary statistics and confidence intervals to be computed, though with the caution that applies to small samples. The mean score, standard deviation and 95% confidence interval were calculated for each question. The free text comments were qualitatively analyzed by grouping them into common themes and noting particular extremes of opinion. The sample size was anticipated to be too small to use factor analysis to test construct validity, so this was assessed subjectively from the summary statistics and the pattern of scores. Formal calculations of reliability and sample size were not judged to be necessary given the modest precision level required for the purposes of the study.
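For readers who wish to reproduce this kind of summary, the per-question computation reduces to a mean, a sample standard deviation and a t-based 95% confidence interval over the 1-5 codes. The Python sketch below shows one way to do it; the example responses are invented, and the use of a t interval is an assumption that matches the stated caution about small samples.

import numpy as np
from scipy import stats

def summarise_likert(scores):
    """Mean, sample SD and 95% CI for one question, treating the 1-5 codes as interval data."""
    x = np.asarray(scores, dtype=float)
    n = len(x)
    mean = x.mean()
    sd = x.std(ddof=1)                      # sample standard deviation
    sem = sd / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% interval
    return mean, sd, (mean - t_crit * sem, mean + t_crit * sem)

# Invented answers to one question ('Strongly Agree'=1 ... 'Strongly Disagree'=5),
# with any "No opinion" responses already excluded.
example = [1, 2, 2, 3, 2, 1, 2, 4, 2, 3]
m, s, ci = summarise_likert(example)
print(f"mean={m:.2f}  sd={s:.2f}  95% CI=({ci[0]:.2f}, {ci[1]:.2f})")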
3. Results

Forty-six responses were received: 33 male and 13 female; 39 UK-based and 7 not. In total, 25 participants were from the target UK academic population, suggesting a response rate in the order of 25-33%. Of the others, 12 were NHS staff, mostly in management roles, 4 were in IT roles and 4 were non-UK academics (plus one “other”). As the population of interest was the academic community, the subgroups used for data analysis are academics and practitioners. The supplementary tables [13] present a summary of the responses. We found agreement that theory is important in health informatics teaching and research, uncertainty as to whether a distinct body of theory exists or if health informatics is usually evidence-based and mild disagreement that health informatics is a mature academic discipline or that theory is irrelevant. The emergent themes derived from the free text comments in the first section were: the ambiguity of the term “theory”; the need for a multi-disciplinary approach; the foundational importance of theory both in education and research; a repeated failure to learn from experience in healthcare IT implementation; and the need for theory about
actual clinical usage of informatics. Some participants stressed that the field is almost entirely about application of knowledge from more fundamental disciplines. Several commented that social and organizational theory has a longer history and a more extensive body of published research than health informatics, whereas others were sceptical about the validity of theory in those areas. The recurring themes in the section on sources for theory were: an unsatisfactory lack of theoretical content in evaluation studies and textbooks; a view that governmental strategy is a dubious source of theory; and minimal support for professional standards as a relevant source. The other sources for theory suggested were: community mailing lists/forums, blogs and social networking sites and unpublished views of experts. Seven sources were cited more than once; ten other textbooks were only cited once. Citations [13] have been consolidated to the most recent known edition where older versions were listed. A surprising omission from the responses to question 18 was the IMIA Yearbook, which selects some of the “best” literature of the preceding year [14]. In answer to question 20, whether theory is given sufficient attention in health informatics literature, 77% of academics thought not, as against 60% of practitioners. Several participants said that they would have liked a “not sure” option on this question.
4. Discussion

This survey has satisfied its aims and shown clear evidence of the demand from an informed group of practitioners and academics for a more theoretical basis for health informatics as a discipline. It has also given useful indications of the likely sources of such theories and suggested their relative relevance. The survey has several important limitations. The sample was subject to self-selection bias in that those uninterested in health informatics theory were less likely to participate. The data is subject to UK realm bias (though the survey was in fact open to anyone) and the survey was conducted within a fairly short time frame. The instrument failed to make allowance for staff with more than one role, for example dual clinical and academic posts, so may have somewhat misstated the subgroup allocation. Academics showed an unsurprisingly stronger agreement with the importance of theory in both teaching and research. However, both groups disagreed that theory was irrelevant for practitioners or that theories from other domains were sufficient, with academics disagreeing more strongly. None of the sources suggested by the survey was strongly supported as a key source of health informatics theory. The best rated sources were journals and conference proceedings, but even they did not attain strong agreement as a key source. Most of the confidence intervals included or were close to a neutral rating. This is consistent with the view that theory receives insufficient attention in the literature.

4.1. Theory, Maxim or Speculation?

Health informatics is arguably more likely to produce theories offering qualitative credibility (like reconstructions of literary texts [15] or historical events [16], rather than the precise formulae of the physical sciences). Health informatics today is perhaps susceptible to the same criticism that Francis Bacon made of medicine in 1603. Bacon criticized both the untheoretical empiricists who (like Hippocrates) only produced “a few maxims” and speculative rationalists (like Galen) who “spin webs out of
themselves”. Bacon commended the synthesis: empirically based theory [17]. The proposed review will seek to determine whether, in Bacon’s terms, health informatics theory comprises any more than empirical maxims and rationalist speculations.
5. Conclusion

Health informatics is not perceived to have a well-known and clearly definable body of theory but there is a clear demand for this. The authors are designing a review of health informatics theory, drawing upon a recent meta-narrative framework [18], that aims to provide both quantitative and qualitative analysis and develop a comparative typology.
References
[1] van Bemmel JH. Medical informatics, art or science? Methods Inf Med. 1996;35(3):157-72.
[2] Wyatt J. Medical informatics, artefacts or science? Methods Inf Med. 1996;35(3):197-200.
[3] Bernstam EV, Smith JW, Johnson TR. What is biomedical informatics? J Biomed Inform. 2010;43(1):104-10.
[4] Hersh W. A stimulus to define informatics and health information technology. BMC Medical Informatics and Decision Making. 2009;9(1):24.
[5] Chambers. The Chambers Dictionary. 9th ed. Edinburgh: Chambers; 2003.
[6] Atkinson E. In defence of ideas, or why 'what works' is not enough. Br J Soc Education. 2000;21(3):317-30.
[7] Kuhn T. The structure of scientific revolutions. Chicago: University of Chicago Press; 1962.
[8] Brennan PF. Standing in the shadows of theory. J Am Med Inform Assoc. 2008;15(2):263-4.
[9] Kaplan B. Evaluating informatics applications--some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform. 2001;64(1):39-56.
[10] Scott PJ, Briggs JS. A pragmatist argument for mixed methodology in medical informatics. Journal of Mixed Methods Research. 2009;3(3):223-241.
[11] Higgins J, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions 5.0.2 [updated September 2009]. The Cochrane Collaboration; 2009.
[12] Carifio J, Perla R. Resolving the 50-year debate around using and misusing Likert scales. Medical Education. 2008;42(12):1150-2.
[13] Scott P, Briggs J, Wyatt J, Georgiou A. Supplementary tables. 2011 [cited 1 February 2011]. Available from: http://userweb.port.ac.uk/~scottp/MIE2011/Supplementary%20Tables.pdf
[14] Ammenwerth E, Wolff AC, Knaup P, Ulmer H, Skonetzki S, van Bemmel JH, et al. Developing and evaluating criteria to help reviewers of biomedical informatics manuscripts. J Am Med Inform Assoc. 2003;10(5):512-4.
[15] Metzger B. A textual commentary on the Greek New Testament. 2nd ed. Stuttgart: German Bible Society; 2001.
[16] Gaddis J. The landscape of history: how historians map the past. Oxford: OUP; 2002.
[17] Rusnock A. Hippocrates, Bacon and medical meteorology at the Royal Society, 1700-1750. In: Cantor D, editor. Reinventing Hippocrates. Aldershot: Ashgate; 2002.
[18] Greenhalgh T, Potts HW, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q. 2009;87(4):729-88.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-228
Better Quality in Healthcare Through Gamified Simulation Based Skill Training Application
Weronika TANCREDI a, Mikael WINTELL b, Lars LINDSKÖLD b,c
a Chalmers University of Technology, Gothenburg, Sweden
b HSA, Region Västra Götaland, Gothenburg, Sweden
c Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
Abstract. Although screening of the abdominal aortic diameter helps to identify men with abdominal aortic aneurysm and saves lives, there is a need to coordinate and synchronize the screening personnel's way of working. This article describes the design of a game-based skill training application that could give the screening personnel an additional opportunity to refine the measurement of the abdominal aortic diameter in ultrasound images. The design work follows the steps of the Goal Directed design process. Consequently, the design activities are divided into six phases: Research, Modelling, Requirements Definition, Framework Definition, Refinement and Development Support. The design process described in this paper finishes with usability testing of an interactive prototype. The evaluation of the design was conducted with end users by studying their subjective ratings and performance on given tasks. The overall results of the usability testing show that the interactive prototype of the skill training application is not yet fully usable. Consequently, further improvement of the interface design is needed. The identified usability issues and the collected qualitative and quantitative material about the interaction between test participants and the interface can guide the next design iteration and lead to a more usable design.
Keywords. Skill training application, game based training, interaction design, Goal Directed Design, screening for abdominal aortic aneurysm, usability evaluation
1. Introduction

Abdominal aortic aneurysm (AAA) is a widening of the aorta in the abdominal area to over 30 mm in diameter [1]. The aneurysm weakens the wall of the aorta and can lead to rupture of the aorta and death [1]. Ruptured AAA causes the death of about 1-2% of men over 60 years old [2, 3]. Detecting the condition early helps to prevent rupture and death [3]. In the Region Västra Götaland (VGR), screening of the abdominal aortic diameter was introduced in 2008 and is offered to all men at the age of 65 [4]. This systematic measurement of the aortic diameter is expected to identify men with AAA and save lives. However, a study performed at Sahlgrenska University Hospital shows that measurements of the aortic diameter differ between screening performers by around 5-7% [5]. Patients
with identified AAA are monitored regularly, and only when the aortic diameter is wider than 50 mm is surgery considered. It is therefore essential that the measurement of aneurysmal aortas is as accurate as possible. At the same time, it is known that measuring a wide aortic diameter can be more challenging and contributes to higher variation in measurement [5]. Consequently, it was decided that there is a need to coordinate and synchronize the screening personnel's way of working by providing an additional opportunity to train and improve the measurement of the abdominal aortic diameter. Games have been shown to be an effective solution for improving work-related skills [6]. The positive emotions, excitement and stress relief that occur while playing games maximize learning [7]. As the results achieved in a game are monitored by the system, game-based training gives trainees and stakeholders the possibility to observe and assess the skills that were taught [8]. The purpose of this study was to explore whether a game-based online training application could be a feasible and acceptable solution helping the screening personnel to practise measuring the abdominal aortic diameter in ultrasound images. The goal of this study was to design a usable game-based online skill training application helping the screening personnel to practise measuring the abdominal aortic diameter in ultrasound images. The Goal Directed Design process was applied.
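A short worked check (Python) makes the clinical stake of that variability concrete: applying the reported 5-7% inter-observer difference directly to the 30 mm definition threshold and the 50 mm surgical threshold gives the ranges printed below. This simple percentage calculation is our own illustration, not an analysis taken from the cited study [5].

# Rough effect of a 5-7% inter-observer variability at the two thresholds above.
for diameter_mm in (30, 50):            # AAA definition and surgical-consideration thresholds
    for rel_var in (0.05, 0.07):
        spread = rel_var * diameter_mm
        print(f"{diameter_mm} mm aorta, {rel_var:.0%} variability: +/- {spread:.1f} mm "
              f"(range {diameter_mm - spread:.1f} to {diameter_mm + spread:.1f} mm)")

At the 50 mm threshold the spread is roughly 2.5-3.5 mm, enough to move a borderline measurement across the decision boundary, which is the motivation for the training application.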
2. Method

According to the Goal Directed Design, the final behaviour and appearance of a product is based on the goals of users, the needs of stakeholders and the limitations of technology [9]. The Goal Directed Design process is a user-centred methodology providing guidance throughout the product development process and consists of six phases: Research, Modelling, Requirements, Framework Definition, Refining and Supporting. Those steps were followed in this study. During the Research phase, six biomedical analysts, two nurses and two physicians from five hospitals in the VGR were observed and interviewed. Additionally, two senior physicians from Sahlgrenska University Hospital and a system administrator from Bild- och Funktionsregistret in VGR were identified as stakeholders and subject matter experts and interviewed. When analysing the interview data, two methods for qualitative analysis were applied: identifying recurring patterns and categorizing data [10].
3. Results

3.1. Research Phase

Stakeholders described the current learning style of the personnel conducting aorta screening, the desired usage of the training application and the needed features. All aorta screening activities at the Sahlgrenska hospital are conducted by two employees, and learning occurs through direct observation, asking questions and active participation. The interviews with two senior physicians indicate that the general aim of this group is a systematic, continuous quality improvement of the aorta screening program in the VGR. The primary expectation is that the application will make the diagnostics safer and more accurate and that the average error of measurement will be reduced. In addition,
the stakeholders assume that the program will coordinate and synchronize the personnel's way of working. To satisfy the needs of the stakeholders, the application should act as a portfolio and contain video sequences of the longitudinal section of the aorta from one hundred patients with a registered widening of the aorta. The program is expected to be run by the screening personnel once or twice a year; it should give users feedback on the choice of image and on the measurement accuracy. Viewing one's own development should be possible, as well as comparison with others. According to the stakeholders, the players should strive to reach the gold standard, that is, to place the digital callipers where the selected, experienced radiologist would. Information about the screening personnel's activities, attitudes, aptitudes, motivations and skills was extracted from the interview material. It was established that all interview subjects have experience in aorta screening and feel confident about measuring the aortic diameter. They pointed out that it is difficult to determine the walls of the aorta, for instance when the patient is obese or there is much plaque in the aorta's walls. The majority of the personnel stated that they are positive towards skill training with an online game-based training application. Interview subjects emphasized that they like working together, as it gives them the possibility to ask a colleague whenever needed. Some of the personnel train diagnostic skills by watching ultrasound video sequences. Identifying aneurysms and measuring accurately is the goal of the personnel. All personnel use the medical record system and the patient administration system daily.

3.2. Modeling Phase

The identified user goals and characteristics were the basis for a textual portrait. The persona is a third-person narrative describing Erica, a 45-year-old biomedical analyst. The work setting is presented together with the contacts and relations she has with others at work. The user description shows Erica's goals and competence as well as her attitude toward further skill training. Observations and interviews with the personnel provided material for the creation of the workflow model. Measuring the aortic diameter starts with finding the right segment of the aorta; then the walls of the vessel are sought. Further, the examination performer checks for plaque in the walls of the vessel and analyses curves and structures around the vessel. Eventually, the widest part of the vessel is identified and callipers are placed in the image. The result of the measurement is documented and, if the diameter is greater than 30 mm, Sahlgrenska University Hospital is informed. If any of the steps appears to be problematic, a colleague or a physician is consulted.

3.3. Requirements Definition Phase

The persona expected that the application would function in a similar way to an ultrasound machine, for instance that lifting and relocating the digital callipers would be possible in the image, as well as searching for the appropriate image by rewinding the sequence back and forth. Even a function that is not included in ultrasound machines was expected, namely viewing feedback on the angle of measurement. Once the persona's expectations were listed, a context scenario was created. The context scenario described in very general terms how the persona interacts with the training application.
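The gold-standard requirement described above, place the digital callipers where the selected experienced radiologist would, can be read as a simple comparison between the player's calliper placement and a stored reference placement for the same frame. The Python sketch below is only a schematic reading of that requirement; the coordinate format, the tolerance and the returned feedback fields are all assumptions, not the application's actual logic.

import math

def measured_diameter(point_a, point_b):
    """Diameter implied by two calliper points (x, y), in millimetres."""
    return math.dist(point_a, point_b)

def score_measurement(player_calipers, expert_calipers, tolerance_mm=2.0):
    """Compare the player's measurement with the stored expert reference ('gold standard')."""
    player_d = measured_diameter(*player_calipers)
    expert_d = measured_diameter(*expert_calipers)
    error = player_d - expert_d

    def angle(p, q):
        # Orientation of the measurement line, so feedback on the angle can be given.
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    angle_diff = abs(angle(*player_calipers) - angle(*expert_calipers))
    return {"error_mm": round(error, 1),
            "angle_difference_deg": round(angle_diff, 1),
            "within_tolerance": abs(error) <= tolerance_mm}

# Invented example: the expert measured 42.0 mm; the player measured slightly wide and tilted.
expert = ((10.0, 20.0), (10.0, 62.0))
player = ((10.5, 19.0), (12.0, 64.0))
print(score_measurement(player, expert))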
3.4. Framework Definition Phase

Video sequences, games and results were identified as data and functional elements. Then, the necessary operations on these elements were listed, for instance entering and exiting games or viewing sequences and game results. The possible operations on the functional elements were then listed in chronological order, arranged into screens and described in the key path scenario, which defines how the persona manipulates elements of the interface to achieve her goals.

3.5. Refinement Phase

Heuristic evaluation, testing of paper prototypes with end users and a cognitive walkthrough helped to refine the wireframes. During the last iteration of the design process, the interactive prototype was tested for usability by six biomedical analysts. The test participants tested the application by performing five tasks: creating a new account, reading the rules of the game, playing the game, viewing their own results and checking how experienced nurses measure. The test was conducted twice. Performance and satisfaction data were collected, for instance task success rates, the number of times participants needed help in each test and System Usability Scale (SUS) scores showing user satisfaction. The results of the second testing session show that only four of the six test participants successfully completed tasks two and three without any help. During the first test, the participants requested help more frequently; during the second testing session, help was required as the test participants worked on tasks two, three and four. After the second test the participants rated satisfaction higher (a SUS score of 59 of 100) than after the first test (55 of 100).
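The SUS figures quoted above come from the standard ten-item questionnaire. For reference, the usual scoring rule (odd items contribute the rating minus one, even items contribute five minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score) is shown below in Python; the example ratings are invented and are not data from this study.

def sus_score(ratings):
    """Standard System Usability Scale score from ten item ratings on a 1-5 scale."""
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly ten item ratings")
    total = 0
    for item, rating in enumerate(ratings, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)  # odd items positive, even negative
    return total * 2.5                                            # scale the sum to 0-100

# Invented ratings for one participant (1 = strongly disagree ... 5 = strongly agree).
print(sus_score([4, 2, 4, 3, 3, 2, 4, 3, 3, 2]))   # 65.0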
Figure 1. Wireframes conveyed the design idea and were shown e.g. to the developer. The play section.
The content of the interactive prototype is divided into four categories: introduction, play, results and discuss images. The introduction category contains the rules of the game and the measurement principles. Critical information for game playing, such as textual instructions for the user and the player's outcome, is displayed on the left side of the play section (Figure 1). The central part of the page is devoted to the video player
playing sequences recorded by the personnel during the screening. The user can control the video sequence by manipulating a slider and pressing the pause, play or stop buttons. The results option displays the player's outcome in the form of an accuracy curve. It shows the user's progress over time and provides a means of comparison between the results of different groups of users. The fourth section of the global navigation gives the user the possibility to discuss images.

3.6. Development Support Phase

Wireframes and scenarios were presented to the developer. The developer, together with the designer, drew a flow diagram as a support for the programming work.
4. Discussion

The involvement of end users was very valuable and ensured that the design process focused on adapting the training application to the actual work practice of the screening personnel. Key path scenarios helped to picture and imagine different design solutions. In conclusion, although all the expectations of the persona concerning, for example, the similarity of the game functionality to the functionality of the ultrasound machine were addressed (section 3.3), the usability study conducted at the end of this design process has shown that the interactive prototype is not yet fully usable; manipulating the slider and watching the video sequence at the same time appeared to be difficult. The material collected during the usability evaluation could be used in the next design iteration and lead to a more usable design.
References
[1] Bergqvist D. Bukaortaaneurysm. 2007 (online). Available at: http://www.internetmedicin.se/dyn_main.asp?page=1523 [accessed 24 November 2010].
[2] Johansson G, Swedenborg J. Ruptured abdominal aortic aneurysms: a study of incidence and mortality. British Journal of Surgery 73 (1986), pp. 101-103.
[3] Pleumeekers HJ, Hoes AW, Mulder P, et al. Differences in observer variability of ultrasound measurements of the proximal and distal abdominal aorta. Journal of Medical Screening 5 (1998), pp. 104-108.
[4] Wiel-Hagberg E. Livsavgörande screening erbjöds 65-åriga män. Borås Tidning. 2008 (online). Available at: http://www.bt.se/nyheter/boras/livsavgorande-screening-erbjuds-65-ariga-man%28884359%29.gm [accessed 25 April 2010].
[5] Dijnér T. Mätvariabilitet vid screening av abdominala aortaaneurysmer med ultraljud. Sahlgrenska University Hospital, Gothenburg, 2010 (unpublished document).
[6] Sitzmann T. Game on? The effectiveness of game use in the workplace depends on context and design. Training & Development 64 (2010), pp. 20-20.
[7] Yaman D. Why games work? 2001 (online). Available at: http://www.learningware.com/LearningCenter/WhitePaper1.html [accessed 26 November 2010].
[8] Bergeron B. Developing Serious Games. Course Technology, Boston, 2006.
[9] Cooper A, Reimann R, Cronin D. About Face: The Essentials of Interaction Design. 3rd ed. Wiley Publishing, 2007.
[10] Sharp H, Rogers Y, Preece J. Interaction Design: Beyond Human-Computer Interaction. 2nd ed. John Wiley & Sons, Barcelona, 2006.
[11] Danforth L. Gamification and libraries. Library Journal, February 2011, pp. 84-84.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-233
Implementation of a Web-Based Interactive Virtual Patient Case Simulation as a Training and Assessment Tool for Medical Students A. OLIVENa,1, R. NAVEa, D. GILADb, A. BARCHc a Faculty of Medicine, Technion, Haifa, Israel, b Dalia Statistics, Haifa, Israel, c Jazzis Computing & Programming, Haifa, Israel
Abstract. Objective Structured Clinical Examinations (OSCE) are resource intensive, not practical as teaching tools, and their reliability depends on evaluators. Computer-based case simulations ("virtual patients", VP) have been advocated as useful and reliable tools for teaching clinical skills and evaluating competence. We have developed an internet-based VP system designed both for practice and assessment of medical students. The system uses interactive dialogue with natural language processing, and is designed for history taking, evaluation of physical examination, including recognition of visual findings and heart and lung sounds, and ordering lab and imaging tests. The system includes a practice modality that provides feedback, and a computerized OSCE. The reliability of our system was assessed over the last three years by comparing the clinical competence of medical students in similar VP and human OSCEs. A total of 262 students were evaluated with both exam modalities. The correlation between the two exams' scores was highly significant (p<0.001). Cronbach's alpha for the computerized exam was 0.82-0.89 over the 3 years, and was substantially higher than that of the conventional OSCE each year. We conclude that a computerized VP OSCE is a reliable examination tool, with the advantage of also providing a training modality. Keywords. Virtual patient, OSCE, medical education.
1. Introduction Clinical medical education is based on a combination of lectures, written literature and "bed-side" teaching during clinical rotations. The latter is obviously the cardinal and most important part of medical education. Unfortunately, bed-side teaching is associated with many practical limitations. It requires a large corps of experienced and well-trained instructors, is time consuming, and the number of adequate patients available for teaching and the spectrum of diseases are insufficient. Similarly, medical exams based on real patients are known to have unacceptably low reliability, and have largely been replaced by OSCE (objective structured clinical examination) simulations by actors or medical staff. However, OSCEs are very resource intensive [1], and judgments made by individual evaluators jeopardize both reliability and validity [2]. 1
Corresponding author: A. Oliven.
The development of computerized systems has enabled a new level of tuition and evaluation that can be placed between the book and written exams on the one hand, and bedside teaching and assessment on the other. Over recent years, an increasing number of computer-based simulations of patients have been proposed for both training and assessment in medical education, often referred to as "virtual patients" (VP). Although the use of VPs has not yet entered the mainstream of medical education and is only now starting to be integrated into the regular curriculum in a few universities, there is no doubt about the potential of VPs to fill significant gaps in the current tuition of medical students. VPs are excellent teaching tools for developing clinical reasoning and decision-making skills and improving clinical competency [3,4]. Clinical reasoning is a process that matures through deliberate practice with multiple and varied clinical cases. VPs are ideally suited to this task, as the potential variations in VP design are practically limitless [5]. VPs can incorporate images, sound, videos, lab tests and imaging results, both for early medical students [4] and for later years' courses [3]. Although VPs cannot replace interpersonal communication with real patients, students tend to react similarly to real and simulated patients [6]. Web-based VP software has the important advantage of providing an interactive "game", enabling practice of patient management in a fast sequence, whenever and wherever convenient for the student, with real-time and accurate feedback. The same system can be used for exams, providing statistically analyzed results within seconds after the exam. The most advanced VP programs have replaced the multiple-choice system with natural language processing, designed to enable the student to ask any medically related question via an interactive dialogue with text entry [7,8]. However, due to practical and logistic problems, it appears that despite the obvious theoretical advantages of VPs, few if any universities have published their experience with a VP system that was implemented into the curriculum and used for regular exams. Over the last years we have developed and implemented a web-based VP system that simulates and replaces the OSCE with a broader scope, and is designed for both training and exams. This VP system is currently used by students at the beginning of their clinical years to practice clinical reasoning skills and enlarge the spectrum of diseases evaluated, in parallel with encounters with real patients. The same system was also used over the last 3 years for the final exam of the introductory course, in parallel with a "traditional" OSCE performed by human assistants, independently but at the same time. The current paper describes our OSCE-based VP system and compares the results of the human and virtual OSCE.
2. Methods The web-based VP system described here was developed by our group, and was designed for medical students in their clinical years, to resemble a realistic patient encounter, covering medical history, physical examination (H&P) and laboratory and imaging tests. The primary goal was to teach the student how to evaluate patients with specific, common clinical conditions. Accordingly, similar to the methodology used in OSCE, the student is presented with the main complaint and vital signs of a VP, and should know the mandatory questions that need to be asked, the specific signs that need to be looked for during physical examination, and the tests that need to be ordered, based on the answers and findings. The student conducts an open dialog with the VP, writing medically relevant questions in free wording, and the VP "understands" the question
and replies with a pre-programmed reply. The ability to conduct a free dialog is based on natural language processing, with a lexicon of keywords. Obviously, the number of questions the VP can answer is much larger than the limited number of questions considered mandatory. The student asks, in writing, for findings in the physical examination, and may get a written reply (as for palpation findings), a picture (for findings that can be seen), or audio-video clips of heart and lung sounds. Based on the H&P, the student has to order initial laboratory and imaging tests from a computerized order list similar to the lists used in hospitals and other medical agencies. Our VP system was designed for both exercise and exams. Accordingly, the student can practice solving medical problems and get a grade during exercises conducted at home. Mandatory items are based primarily on the ability of the student to apply the relevant differential diagnosis of the main complaint to the H&P and the initial tests. During exercise, the student can pause at any time and ask to see all the relevant questions/findings/tests considered mandatory, and the lists, ordered in groups of asked or not-asked (or incorrectly identified) items, provide practical feedback (not shown during exams). A large number of training VP cases is available on the VP website for all students and instructors. A few cases were practiced in small groups with the instructors, but the students were encouraged to practice all cases repeatedly on their own, in the same way as preparing for exams from books. A major shortcoming of VPs is the difficulty of preparing new cases, particularly when natural language recognition is used. To facilitate this task, complex visual modalities were avoided, and multiple mechanisms were introduced to enable simple creation of new cases, particularly when a basic prototype of a specific complaint or finding had already been created. Before implementation into the teaching and exam curriculum, 25 clinical prototypes (like chest pain, anemia, etc.) with several case scenarios for each were prepared (for example, for chronic diarrhea: a patient with Crohn's disease, a patient with celiac disease, etc.). In addition, a "learning modality" for the VP was created, to add words and items asked by the students that the VP did not recognize. The exam is conducted with the same system in the computer center of the faculty. Several cases are presented sequentially, with a time limit for each case and a limit on the number of tests that can be ordered. The test results, including analysis of the difficulty of each item, the ability of each item to discriminate between "better" and "poorer" students (point-biserial correlation), and the reliability of the whole exam (Cronbach's alpha), are produced automatically at the end of the exam. For 2 years before implementation, the VP-OSCE exams were offered to the students on a voluntary basis. Experience gained in these exams was used to improve the system to the level that enabled implementation. Over the next 3 years, on the day of the exam, the students were divided randomly into two groups: one underwent a conventional OSCE with 5 positions/clinical cases, and the other was examined with the VP system on an equal number of different cases. During the VP exam, instructors were available to help the students with questions the VP did not "understand".
No explanations were permitted, and the instructors only provided adequate wording if a question was relevant but inadequately worded or misspelled. The human and VP OSCEs ended at the same time, and the groups were then switched, each taking the other OSCE. Hence, on the day of the exam every student evaluated 10 different clinical cases, half performed with actors and half with VPs. The grade in each OSCE modality was calculated in the same way, as the percentage of mandatory items the student knew.
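The paper does not give implementation details of the keyword lexicon or the scoring; purely as an illustration of the idea, a keyword-based matcher and the mandatory-item grading could be sketched as follows (all entries, names and replies below are hypothetical, not the authors' system):

```python
# Hypothetical sketch: match a free-text student question against a keyword lexicon,
# return a pre-programmed reply, and grade by percentage of mandatory items covered.

LEXICON = [
    {"keywords": {"chest", "pain", "radiate"}, "reply": "The pain radiates to the left arm.",
     "mandatory_item": "pain_radiation"},
    {"keywords": {"smoke", "smoking"}, "reply": "I smoke one pack a day.",
     "mandatory_item": "smoking_history"},
    {"keywords": {"fever"}, "reply": "No fever.", "mandatory_item": None},
]
MANDATORY_ITEMS = {"pain_radiation", "smoking_history", "ecg_ordered"}

def answer(question: str, covered: set) -> str:
    """Return the stored reply whose keywords best overlap the free-text question."""
    words = set(question.lower().replace("?", "").split())
    best, best_overlap = None, 0
    for entry in LEXICON:
        overlap = len(words & entry["keywords"])
        if overlap > best_overlap:
            best, best_overlap = entry, overlap
    if best is None:
        return "I do not understand the question."   # an instructor may help rephrase
    if best["mandatory_item"]:
        covered.add(best["mandatory_item"])           # credit the mandatory checklist item
    return best["reply"]

def grade(covered: set) -> float:
    """Grade = percentage of mandatory items the student elicited."""
    return 100.0 * len(covered & MANDATORY_ITEMS) / len(MANDATORY_ITEMS)

covered = set()
print(answer("Does the pain radiate anywhere?", covered))
print(answer("Do you smoke?", covered))
print(f"Grade: {grade(covered):.0f}%")   # 2 of 3 mandatory items covered
```

A production system would of course need a far richer lexicon, spelling tolerance and the "learning modality" described above for unrecognized wording.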
The OSCEs performed over the last 3 years are the subject of the current paper, and were considered a randomized, crossover, prospective non-inferiority study. To compare the human and VP OSCEs, we correlated the individual grades the students achieved with both types of OSCE (Pearson correlation). Cronbach's alpha was calculated separately for every OSCE each year.
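For readers who wish to reproduce the two statistics, a small self-contained example (with invented scores, not the study data) is:

```python
# Illustrative computation of the Pearson correlation and Cronbach's alpha used in the paper.
import numpy as np
from scipy.stats import pearsonr

# Pearson correlation between each student's human-OSCE and VP-OSCE grade (made-up values)
human_grades = np.array([78, 85, 90, 72, 81], dtype=float)
vp_grades    = np.array([75, 88, 86, 70, 79], dtype=float)
r, p = pearsonr(human_grades, vp_grades)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Rows = students, columns = exam items (e.g. the five cases of one OSCE); invented data
scores = np.array([[4, 5, 3, 4, 5],
                   [2, 3, 2, 3, 2],
                   [5, 4, 5, 4, 4],
                   [3, 3, 2, 3, 3]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```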
3. Results. Comparison of the grades and reliability of the human and VP OSCEs is given in Table 1. The mean grades tended to be higher in the conventional human OSCE, but this difference was not statistically significant. The Cronbach's alpha values were substantially higher every year in the VP OSCE. The relationships between the individual grades obtained by all students over the 3 years are shown in Figure 1.

Table 1. Comparison of grades and Cronbach's α obtained in the human and VP OSCEs, and the correlations between grades in both exam modalities. * p<0.01

            Human OSCE                          VP OSCE
Year    n   Grades, mean±SD   α Cronbach        Grades, mean±SD   α Cronbach     R
2008    87  78.2±7.3          0.65              78.9±8.2          0.82           0.68*
2009    76  83.3±8.0          0.74              78.9±9.6          0.89           0.71*
2010    99  85.1±6.9          0.71              80.0±8.8          0.85           0.65*
2008-10 262 82.3±7.9          –                 79.3±8.9          –              0.70*
Figure 1. Relationships between grades of the conventional, human OSCE (x-axis) and the VP-OSCE (y-axis), over the 3 years (2008-2010).
4. Discussion The results of our study indicate that our OSCE-based VP system is well suited to replace the conventional human OSCE. The correlation between the results of the 2 modes of OSCE was similar to the correlation found when the grade of half of a
conventional OSCE is compared to the grade of the second half. Also, the reliability of the VP OSCE was consistently higher than that of the conventional OSCE. Most importantly, the VP system can be used as a practice tool both for training and for preparation for this type of examination. A large number of VP systems have been developed for multiple purposes [9]. The presented system was designed to train and assess students in patient evaluation, based on clinical reasoning and competence. We believe that several lessons can be learned from our experience. First, it is imperative that a VP system be well designed and assessed in pilot trials before trying to implement it in a regular course. This includes both the development of high-fidelity, fast-response web-based software and the creation of multiple cases by an experienced physician. Second, we believe that the most important value of our VP system is its ability to enable students to practice patient evaluation wherever and whenever convenient, in addition to, and in a clinically more relevant fashion than, learning from books. VPs should not replace bedside teaching but rather complement it by providing additional cases similar to real patients, and even more by enabling virtual encounters with clinical scenarios not previously encountered. Finally, we believe that in order to motivate students to use the VP training system in their free time and without tutors, it has to be user-friendly and provide real-time feedback, but most of all it has to be directly related to an exam. Even more than with other assessment modalities, the students need to be well familiar with the VP system in order to succeed in a VP-based exam. The only way to master this exam modality is by practicing many training cases, thereby fulfilling the goal of the VP system. Not surprisingly, almost all of our students' visits to the VP site were recorded during the period of preparation for the final OSCE examination.
References
[1] Van der Vleuten CPM, Van Luyk SJ, Van Ballegooijen AMJ, Swanson DB. Training and experience of examiners. Med Educ 23 (1989), 290-296.
[2] Weatherall DJ. Examining undergraduate examiners. Lancet 338 (1991), 37-39.
[3] Round J, Conradi E, Poulton T. Improving assessment with virtual patients. Med Teach 31 (2009), 759-763.
[4] Cook DA, Triola MM. Virtual patients: a critical literature review and proposed next steps. Med Educ 43 (2009), 303-311.
[5] Gesundheit N, Brutlag P, Youngblood P, Gunning WT, Zary N, Fors U. The use of virtual patients to assess the clinical skills and reasoning of medical students: initial insights on student acceptance. Med Teach 31 (2009), 739-742.
[6] Sanson-Fisher RW, Poole AD. Simulated patients and the assessment of medical students' interpersonal skills. Med Educ 14 (1980), 249-253.
[7] Bergin RA, Fors UGH. Interactive simulated patient - an advanced tool for student-activated learning in medicine and healthcare. Computers & Education 40 (2003), 361-376.
[8] Courteille O, Bergin R, Stockeld D, Ponzer S, Fors U. The use of a virtual patient case in an OSCE-based exam - A pilot study. Med Teach 30 (2008), 66-76.
[9] Huwendiek S, De Leng BA, Zary N, Fischer MR, Ruiz JG, Ellaway R. Towards a typology of virtual patients. Med Teach 31 (2009), 743-748.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-238
Online CME Usage Patterns M. Cristina MAZZOLENIa,1, Carla ROGNONIb, Enrico FINOZZIa, Ines GIORGIa, Marco PAGANIc, Marcello IMBRIANIa a Salvatore Maugeri Foundation IRCCS, Pavia, Italy b Dept. of Computer Engineering and Systems Science, University of Pavia, Pavia, Italy c CBIM – Consorzio di Bioingegneria e Informatica Medica, Pavia, Italy
Abstract. The paper reports the findings of the analysis of a sample of 829 online Continuing Medical Education (CME) enrolments, aimed at inspecting users' preferences and behaviours. The contents of the analyzed course are provided as online SCORM (Sharable Content Object Reference Model) resources together with the corresponding downloadable Pdf versions, allowing different usage patterns (online only, Pdf only, online AND Pdf, mixed online OR Pdf). The results point out that there is no specific preference for one of the four patterns and that most of the users access both navigable modules and Pdf documents. Demographic characteristics and initial knowledge level do not influence the choice of a specific usage pattern, which probably depends on internal or context factors. From the point of view of knowledge acquisition, the four patterns are equivalent. As regards users' behaviour, the analysis has pointed out two issues: 1) the tendency to conclude the course in a short time and to reach good test scores, but not excellence; 2) learning activity tracing data were not available for all the enrolments. Cues for discussion are proposed. Keywords. e-learning, CME, evaluation, occupational medicine
1. Introduction Online CME is an educational modality that can offer advantages to healthcare personnel, including easy use without time and space constraints, and low costs. Its educational effectiveness is comparable with traditional CME [1]. Web-based CME is growing worldwide in a variety of forms, among which the most popular one follows the easy model [2] based on the download of a document and questionnaires to fill in. Since 2008, the Salvatore Maugeri Foundation has developed and provided e-learning courses in the field of Occupational Medicine [3]. The model used is twofold: contents are provided as navigable SCORM resources and as downloadable Pdf documents, letting the user make his/her choice. Each activity performed during course attendance is recorded, allowing the inspection of users' behaviour. The offer of educational systems is largely documented in the scientific literature, while evaluation studies on how e-learning is used by healthcare personnel are few. The present paper deals with the analysis of usage data of a course for occupational physicians on mechanical vibrations. The inspection is aimed at understanding which kinds of resources users prefer, whether there are differences among resource utilization 1
Corresponding Author: M. Cristina Mazzoleni, Fondazione Maugeri IRCCS, Via Maugeri 4- 27100 Pavia Italy. [email protected]
patterns in terms of knowledge acquisition, and finally how users have attended and concluded the course. Online CME course evaluation is recommended [4] per se; moreover, knowledge about users' behaviour and preferences might be useful to orient new developments and to exploit the potential of e-learning for CME to the fullest.
2. Methods The CME system is based on the Moodle e-learning platform (www.moodle.org). The analyzed course covers T1: anatomy, epidemiology, physiopathology; T2: symptoms, diagnosis; T3: prevention, regulatory aspects, and is structured according to the following educational model:
• An initial test (IT), not selective, mandatory to access the educational resources
• Three SCORM modules for free navigation (Md1, Md2, Md3) related to T1, T2, T3 respectively
• Pdf downloadable versions of T1, T2, T3 (Pdf1, Pdf2, Pdf3)
• Two hyper-flowcharts of guideline-based decision processes (GL1 and GL2)
• Two case-based exercise/tests (CBT1 and CBT2) based on the hyper-flowcharts (4 attempts allowed each, minimum score to pass the test: 75/100)
• One final test (FT) (4 attempts, minimum score to pass the test: 75/100).
A tutor was available for asynchronous communication. To obtain CME credits, the user has to reach a positive result in the FT and in each case-based exercise/test. Starting from the analysis of the Postgres database connected with the platform, a set of ad hoc queries has been developed in order to retrieve data fitting our information needs, such as: demographic data of the user (age, gender); date and time of test completion, with score and number of performed attempts for FT, CBT1 and CBT2; initial knowledge level, in 3 classes according to the result of IT: low (0<=IT<50), insufficient (50<=IT<75), good (75<=IT<=100); number of accesses to each resource; status of the course: passed, not passed, not concluded. To inspect users' preferences for online or Pdf resources, four complete learning patterns (all topics and GLs) have been defined, as sketched below: CMod (complete modules only): Md1+Md2+Md3+GL1+GL2; CPdf (complete Pdf versions only): Pdf1+Pdf2+Pdf3+GL1+GL2; CMP (complete all modules and all Pdf versions): (Md1 AND Pdf1)+(Md2 AND Pdf2)+(Md3 AND Pdf3)+GL1+GL2; CMix (complete mixed modules or Pdf versions): (Md1 XOR2 Pdf1)+(Md2 XOR Pdf2)+(Md3 XOR Pdf3)+GL1+GL2.
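Expressed programmatically, these pattern definitions amount to simple set conditions over the resources each enrolment accessed. The following sketch is purely illustrative (a hypothetical in-memory representation, not the actual queries run against the Moodle/Postgres database):

```python
# Illustrative classification of an enrolment into CMod/CPdf/CMP/CMix or NFC,
# given the set of resources the user accessed (representation is hypothetical).
TOPICS = [("Md1", "Pdf1"), ("Md2", "Pdf2"), ("Md3", "Pdf3")]
GUIDELINES = {"GL1", "GL2"}

def classify(accessed: set) -> str:
    if not GUIDELINES <= accessed:
        return "NFC"                        # gaps in guideline usage -> not full completion
    per_topic = [(md in accessed, pdf in accessed) for md, pdf in TOPICS]
    if any(not (md or pdf) for md, pdf in per_topic):
        return "NFC"                        # at least one topic never accessed
    if all(md and pdf for md, pdf in per_topic):
        return "CMP"                        # every topic seen both online and as Pdf
    if all(md and not pdf for md, pdf in per_topic):
        return "CMod"                       # online modules only
    if all(pdf and not md for md, pdf in per_topic):
        return "CPdf"                       # Pdf versions only
    return "CMix"                           # complete, mixing modules and Pdf per topic

print(classify({"Md1", "Md2", "Md3", "GL1", "GL2"}))                             # CMod
print(classify({"Md1", "Pdf2", "Pdf3", "GL1", "GL2"}))                           # CMix
print(classify({"Md1", "Pdf1", "Md2", "Pdf2", "Md3", "Pdf3", "GL1", "GL2"}))     # CMP
```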
3. Results with Discussion The inspected sample comprises 829 enrolments. The course was concluded by 711 users (retention rate 85.8%, success rate 95.5%; 260 females with mean age 44 and mean IT score 60.1, 451 males with mean age 50 and mean IT score 56.7). Table 1 shows the distribution of users taking into account the four complete learning patterns (Full Completion - FC). Analogous figures are provided for the complementary group of users (Not Full Completion – NFC) for which activity tracing
2 XOR = exclusive OR, excluding combinations that refer to CMod and CPdf
data showed gaps in the course resource (modules/Pdf versions/GLs) utilization.

Table 1. Learning pattern distribution (percentages refer to the FC group, and to each pattern)

                     FC (447)                                              NFC
                     CMod         CPdf         CMP          CMix
                     ------ 43% ------         ------ 57% ------
Users total          102 (23%)    90 (20%)     121 (27%)    134 (30%)      264
Passed               97 (23%)     85 (20%)     115 (27%)    127 (30%)      255
Non passed           5 (22%)      5 (22%)      6 (26%)      7 (30%)        9
Male                 61 (21%)     61 (21%)     84 (30%)     81 (28%)       156
Female               41 (26%)     29 (18%)     37 (23%)     53 (33%)       108
Mean male age        50           46.8         50.2         49.3           50.4
Mean female age      46.3         41.8         43.9         44.7           43.8
Low IT               40 (23%)     39 (23%)     45 (26%)     47 (27%)       87
Insuff IT            39 (20%)     33 (17%)     57 (29%)     69 (35%)       100
Good IT              23 (30%)     18 (23%)     19 (24%)     18 (23%)       77
Global mean score    266.6        269.4        265.8        260.7          272.8
As regards the four FC patterns, the usage of both online modules and Pdf (CMP+CMix) seems to be preferred by most (57%) of the FC users, and only 90 users used the system simply to download and then study text documents. The distributions of initial knowledge level, gender and age are not statistically different across the four FC groups. The Pdf version is probably used more by users who do not have easy access to the web and/or who prefer to keep durable material, accessible at any moment, in addition or as an alternative to online contents. Comparing the global mean scores (FT+CBT1+CBT2 mean scores) of the four FC patterns, no significant difference was found in terms of knowledge acquisition. The number of incomplete patterns was surprisingly high (264/711, 37%). In addition, the Kruskal-Wallis test applied to the global mean scores of the FC and NFC groups returned p<0.01. An embarrassing result: it seems that the less you study, the better the score. Suspicion about the honesty of some users is natural, but it is hard, and probably unjust, to accept it as the only explanation. For this reason, further inspections have been carried out, defining the completion degree of the incomplete learning process as partial completion including guidelines (PxGL) and excluding guidelines (Px), where x is the number of topics (0, 1, 2 or 3) accessed via online module or Pdf.
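The group comparison mentioned above can be reproduced with standard statistical libraries; a toy example of the Kruskal-Wallis test (invented score vectors, not the study data) is:

```python
# Toy illustration of the Kruskal-Wallis comparison of global mean scores
# between FC and NFC users; the values below are invented.
from scipy.stats import kruskal

fc_scores  = [255, 262, 270, 248, 266, 259]   # e.g. FT+CBT1+CBT2 totals, maximum 300
nfc_scores = [275, 281, 268, 279, 272, 284]

h, p = kruskal(fc_scores, nfc_scores)
print(f"H = {h:.2f}, p = {p:.4f}")   # p < 0.05 would indicate a significant group difference
```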
Figure 1. Distribution of initial knowledge versus completion degree
The P0 and P3 classes comprise 138 and 81 users respectively; 29 users have accessed at least 2 topics and the others have accessed at least one topic or one guideline. Such a large number of P0 users might be explained by a high initial knowledge level:
the users already knew the topics and did not need to learn. Figure 1 shows initial knowledge versus completion degree of the learning process. Differently from what was expected, 84 out of 138 users show an initial knowledge level that is not good. This fact opens two questions: "Is the IT result representative of their initial knowledge?" and "If yes, could they have accessed resources without leaving traces of their activity?". As regards IT, it was clearly stated that IT was mandatory to access the resources but not selective; hence users might have answered with low attention, inducing an underestimation. For the second question, the platform can document only that tracing data are missing, not that learning activities have not been carried out. P0 users could have attended the course with a colleague using just one account, and it is known that interaction with peers can lead to better learning [5].
Figure 2. Elapsed time to conclusion, for FC and NFC groups (A) and global score (maximum 300) for all those users who have positively concluded the course performing no more than two attempts (B)
Leaving aside the issue of the reasons for NFC, in the following some results are reported focusing on the gross attendance duration, from IT to the last test performed. Figure 2A shows the elapsed time to conclusion for the FC and NFC groups. Globally, out of the 679 users who passed the course, 429 (63%) concluded it within 24 hours; the mean elapsed time is 4.2 hours, which is shorter than the expected time (5 hours). Is this due to users' task orientation or hurry to conclude? Splitting the sample into the FC group (237 users) and the NFC group (192 users), the mean elapsed times are 5.8 and 2.22 hours respectively. The 95% distribution of elapsed time is not affected by initial knowledge level. For the FC group, Table 2 reports the intensity of usage and test results versus elapsed time. The number of accesses reported here cannot quantify the number of readings of Pdf materials or the comprehensiveness of content inspection within modules/GLs.

Table 2. Intensity of usage and test results versus elapsed time (* indicates statistically significant difference)

         Mean number of accesses             Mean n. of attempts        Mean score
Hours    GL1+GL2*   T1*    T2*    T3*        FT     CBT1*   CBT2        FT      CBT1    CBT2
≤24      2.76       2.3    2.6    2.1        1.45   2.12    1.54        89.1    89.1    91.2
>24      3.3        3.3    3.5    2.6        1.53   2.33    1.52        88.8    88.4    90.3
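The gross attendance duration analysed here is simply the span between the initial test and the last test of each enrolment; a sketch of that computation (hypothetical column names, not the real database schema) is:

```python
# Sketch of the elapsed-time analysis: duration from the initial test (IT) to the last
# test per user, then split at 24 hours. Column names and timestamps are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2, 2],
    "activity":  ["IT", "CBT1", "FT", "IT", "CBT1", "FT"],
    "timestamp": pd.to_datetime([
        "2010-03-01 09:00", "2010-03-01 11:30", "2010-03-01 13:15",
        "2010-03-05 20:00", "2010-03-07 18:00", "2010-03-08 21:40"]),
})

elapsed = (events.groupby("user_id")["timestamp"]
                 .agg(lambda t: (t.max() - t.min()).total_seconds() / 3600.0)
                 .rename("elapsed_hours"))
print(elapsed)
print((elapsed <= 24).map({True: "<=24 h", False: ">24 h"}).value_counts())
```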
Prolonged elapsed time affects the number of accesses to the contents, but not the results in terms of test scores, which are suboptimal. Figure 2B shows the global score (maximum 300) for all those users who positively concluded the course performing no more than two of the four allowed attempts per test: there is margin for an improvement of knowledge acquisition. A threshold of 75% of correct answers is widely used to evaluate results, both in traditional and in e-learning CME. But
online CME allows multiple attempts, and often offers a tutor. So, why not drive the attitude towards excellence by moving the traditional threshold to 100% for e-learning CME? The observations reported here are supported by the preliminary results obtained by analyzing three other analogous courses for occupational physicians.
4. Conclusions The performed analysis has proved to be fruitful in terms of suggestions for future course developments and cues to reinforce the effectiveness of, and trust in, online CME. There is no specific preference for online SCORM resources or downloadable Pdf documents, and the majority of the users access both of them. The availability of contents in various formats hence seems to be appreciated by the users (retention rate = 85.8%) and to meet their needs, which probably depend on both personal attitudes and context of usage. Demographic characteristics and initial knowledge level do not influence the choice of a specific usage pattern. From the point of view of knowledge acquisition, the inspected learning patterns are equivalent. Both formats should be provided by an e-learning course for CME. The analysis has revealed that for some users (37%) activity tracing data showed gaps in course resource utilization. Thanks to e-learning platforms' functionalities, the learning process of online CME is much more provable than that of traditional courses. Apparent incompleteness of course attendance might induce doubts about the reliability of online CME and could label e-learning as just an easy means to collect credits. This is to be avoided. Easy technical solutions (e.g. activity locking) should be used as preventive action, together with increasing users' awareness, in order to preserve the undeniable benefits of online CME in terms of effective and democratic knowledge diffusion. The study points out the tendency, reported also in [6], to finalize the course in a short span of time and to reach good test scores, but not necessarily the best possible. E-learning for CME, differently from traditional CME, offers the possibility of multiple exposures to the same learning activities, of testing and re-testing the acquired knowledge, and the support of a tutor. During attendance, users should be stimulated to take full advantage of e-learning opportunities and pursue excellence in the results.
References
[1] Davis J, Chryssafidou E, Zamora J, Davies D, Khan K, Coomarasamy A. Computer-based teaching is as good as face to face lecture-based teaching of evidence based medicine: a randomised controlled trial. BMC Med Educ, 7:23, 2007.
[2] Harris JM Jr, Sklar BM, Amend RW, Novalis-Marine C. The growth, characteristics, and future of online CME, J Contin Educ Health Prof, 30(1):3-10, 2010.
[3] Mazzoleni MC, Rognoni C, Finozzi E, et al. Usage and effectiveness of e-learning courses for continuous medical education, Stud Health Technol Inform, 150:921-5, 2009.
[4] Shortt SE, Guillemette JM, Duncan AM, Kirby F. Defining quality criteria for online continuing medical education modules using modified nominal group technique, J Contin Educ Health Prof, 30(4):246-50, 2010.
[5] Philpott J, Batty H. Learning best together: social constructivism and global partnerships in medical education, Med Educ, 43(9):923-4, 2009.
[6] Mazzoleni MC, Rognoni C, Finozzi E, et al. Earnings in e-learning: knowledge, CME credits or both? Hints from analysis of attendance dynamics and users' behaviour, Stud Health Technol Inform, 160(Pt 1):576-80, 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-243
How do Nursing Students Perceive the Notion of EHR? An Empirical Investigation Parisis GALLOSa,1, Stelios DASKALAKISa, Maria KATHARAKIb, Joseph LIASKOSa, John MANTASa a Health Informatics Laboratory, Faculty of Nursing b Faculty of Economics National and Kapodistrian University of Athens, Greece
Abstract. This paper describes an empirical study aiming to assess nursing students’ perceptions on Electronic Health Record (EHR) concepts and their potential future attitude towards use. Based on the theoretical ground of Rogers’ Innovation Diffusion Theory and other research works, a formulated model was empirically validated among ninety nursing undergraduates. Data analysis was based on partial least squares path modeling. Results highlighted the very strong significant effect of relative advantage and observability as well as the significant effect of perceived ease of use to attitude towards using EHR systems. The study findings are discussed along with limitations and future work in the current field. Keywords. Empirical study, Electronic Health Record, partial least squares
1. Introduction The use of Information and Communication Technologies (ICT) is important to healthcare [1], with Electronic Health Record (EHR) implementations falling within such a scope, as they carry a series of advantages [2]. According to the ISO Technical Report (ISO-TR20514), as cited in Hovenga, Garde and Heard [1], the EHR is "a repository of information regarding the health status of a subject of care (patient or consumer), in computer processable form". Despite the fact that a variety of studies pinpoint a shift towards EHR implementations in several countries over the last years [1,3,4], results on EHR adoption and use are not always encouraging, as outlined in relevant studies [3,5,6]. Past research highlights the importance of nurses in the context of adoption and use of such systems [1,6], with issues raised by Woodruff and Selway [2] both in favor of and against the EHR education of nursing students. In that context, the research aim of this study is twofold: primarily, to investigate the understanding and perceptions of the Electronic Health Record (EHR) at a pre-professional nursing level, in terms of its potential innovativeness as a remarkably new change in the delivery of healthcare services; secondly, to draw useful conclusions on aspects to be further revised and refined towards a more concrete nursing informatics educational curriculum. This research work assumes that it is of great interest to investigate students' perceptions of key domains of Health Informatics, such as the 1
Corresponding author: Parisis Gallos, Health Informatics Laboratory, Faculty of Nursing, National and Kapodistrian University of Athens, 123 Papadiamantopoulou str., Goudi, 115 27, Athens, Greece; E-mail: [email protected]
notion of the EHR, both in terms of current appreciation of its advantages compared with past practices and in terms of future attitude towards use, as part of their future profession as nursing healthcare professionals. Related work in the current field includes a series of studies [4-7] that investigate EHR adoption and use in healthcare settings for a variety of stakeholders, such as physicians [5,7] and nurses [6], but also from a citizens' point of view [4]. The pre- and post-adoption of an EHR system by nurses has been studied [6], whereas other studies performed research on nursing students' attitudes towards different technology-related issues [8,9]. The rest of the paper is organized as follows: Section 2, Methods, outlines the formulation of the theoretical framework and the respective hypotheses. Subsequently, Section 3, Results, presents the study findings, whereas Section 4, Discussion, presents the results in terms of the significance of the formulated hypotheses. Finally, Section 5, Conclusions, provides a discussion of the findings along with study limitations and future work in the current field.
2. Methods 2.1. Research Model The theoretical research framework utilized in the current study was influenced by Rogers' Innovation Diffusion Theory (IDT) [10] along with the research works of Peslak [11] and Gibson [7]. Past empirical research in healthcare has made use of IDT, applying it either as is (core adaptation) or as a theoretical synthesis of constructs from various models. In other studies IDT has been applied in the context of the Electronic Medical Record (EMR) [7], or in relation to healthcare systems in general [12-14]. In this study, based on the aforementioned research works [7,10,11], specific modifications were performed. In particular, the dimension of complexity was formulated from an 'ease of use' perspective and the dimension of adoption was empirically substituted with attitude towards use, in order to reflect the pre-adoption nature of the study. Overall, the following hypotheses were utilized in the proposed model: relative advantage positively affects attitude towards use (H1+), compatibility positively affects attitude towards use (H2+), perceived ease of use positively affects attitude towards use (H3+), trialability positively affects attitude towards use (H4+) and observability positively affects attitude towards use (H5+). 2.2. Procedure and Measures Based on the formulated research model, a corresponding study questionnaire with the following elements was constructed: relative advantage (coded as RA), compatibility (coded as COMP), ease of use (coded as EOU), trialability (coded as TRIA), attitude towards use (coded as ATT) and observability (coded as OB). Each questionnaire item was adapted from previous research works [7,11,13,15,16]; thus the most standardized and validated measures possible were utilized. Questions were translated into Greek and refinements were made in order to reflect the study context and language, where applicable. All items followed a 7-point Likert scale from strongly disagree to strongly agree, along with a section for "do not know/do not answer". The questionnaire was
anonymous and was distributed to third-year nursing undergraduates of the Faculty of Nursing at the University of Athens, Greece. Data analysis was performed using SPSS [17] for the demographics and partial least squares path modeling with SmartPLS 2.0 M3 [18].
3. Results A total of 90 valid questionnaires were completed (18 male and 72 female participants). Respondents had a background in a series of Health Informatics classes, relating to three specific modules. Apart from the compulsory attendance of the module 'Health Informatics', 73.3%, 82.2% and 98.9% of the sample had attended the optional modules 'Introduction to Informatics', 'Hospital Information Systems' and 'Biomedical Informatics' respectively. Thus, it was assumed that they had received a sound theoretical background in EHR practices.
Figure 1. Results of the structural model
With regard to the partial least squares analysis, the assessment comprised the investigation of the measurement and the structural model. Concerning the measurement model, individual item loadings, internal consistency, convergent validity and discriminant validity were investigated. Specifically, individual item loadings produced reliable results (>0.7) [19], except for the value of the first question of OB, which nevertheless exceeded 0.6 and was therefore retained in the model [20]. For internal consistency, the values of composite reliability exceeded 0.7 and were thus considered reliable [21]. Furthermore, convergent validity was assessed based on the Fornell and Larcker [22] cut-off value of 0.5 for the Average Variance Extracted (AVE); all values were greater than 0.5 and thus considered reliable [22]. Lastly, discriminant validity was assessed based on the square root of the AVE for each construct [21,22] and produced reliable results. Subsequently, the structural model was investigated by applying a bootstrapping technique (with 1000 resamples) and three statistical significance levels: p<0.05 (*), p<0.01 (**) and p<0.001 (***), based on a two-tailed test. Results are shown in Figure 1. Dotted lines indicate the hypotheses that were not confirmed.
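The measurement-model criteria reported above can be computed directly from the standardized loadings; the following sketch uses invented loadings and correlations purely to illustrate the formulas, not the study's data:

```python
# Illustrative computation of composite reliability, AVE and the Fornell-Larcker
# criterion from standardized item loadings; all numbers below are invented.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2                       # error variances of standardized items
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()                      # Average Variance Extracted

constructs = {                                   # invented standardized loadings
    "RA":  [0.82, 0.79, 0.85],
    "OB":  [0.64, 0.78, 0.81],
    "ATT": [0.88, 0.84, 0.80],
}
for name, lam in constructs.items():
    print(f"{name}: CR = {composite_reliability(lam):.2f} (>0.7?), "
          f"AVE = {ave(lam):.2f} (>0.5?)")

# Fornell-Larcker: sqrt(AVE) of each construct should exceed its correlations
# with the other constructs (inter-construct correlation matrix also invented).
corr = np.array([[1.00, 0.55, 0.62],
                 [0.55, 1.00, 0.58],
                 [0.62, 0.58, 1.00]])
sqrt_ave = np.array([np.sqrt(ave(l)) for l in constructs.values()])
off_diag_max = (corr - np.eye(3)).max(axis=1)
print("Discriminant validity satisfied:", bool(np.all(sqrt_ave > off_diag_max)))
```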
4. Discussion The results of the structural model highlight the very strong significant effects of relative advantage (H1 at p<0.001) and observability (H5 at p<0.001) on attitude towards using an EHR, along with the significant effect of perceived ease of use (H3 at p<0.05) on the dimension of attitude towards use. Such results underline the positive perceptions and the appreciation of students with regard to the added value of the EHR, as compared to previous practices. The positive attitude of students towards an EHR may also be explained by their experience and exposure to technology from a young age, making them non-reluctant to adopt technology-oriented ideas. In addition, the key role of the dimension of relative advantage is in line with Gibson's [7] findings in real healthcare settings. On the other hand, the non-significant effect of trialability and compatibility may be interpreted in the context of the students' limited experience with actual EHR system use, while it may also pinpoint the need for a more pragmatic interrelation of EHR philosophy and applicability within nursing science at the educational level. Finally, the R2 value was found to be equal to 0.6823, thus explaining 68.23% of the variance in the construct of attitude towards use of EHR.
5. Conclusions This study attempted to assess the perception of nursing students with regard to EHR initiatives. An empirical investigation was conducted and certain outcomes were produced and presented. However, the current study does not come without limitations. Specifically, the nursing students who constituted the study sample did not have actual, real-world experience of an EHR implementation; consequently, the study findings are restricted to a limited theoretical and contextual scope. The study limitations outline the future work in the current field. In particular, a more detailed analysis may be conducted in real healthcare environments that have deployed and operate EHR systems, with a clear separation of healthcare professional groups that have actual experience with such computerized environments. These types of investigations will further assist the understanding of the factors that affect the adoption and use of EHR in healthcare settings. Despite the aforementioned limitations, the current study attempts to shed light on health informatics aspects in relation to nursing science. In that context, it was decided to assess the perceptions of future stakeholders on such concepts, since positive opinions and beliefs may lead to an overall positivism towards EHR and potentially to future actual acceptance and use. The study results reveal the positive perceptions of nursing students towards EHR, and their appreciation of the added value EHR may provide to healthcare services. Assessments of this kind may be applied at a pre-professional level in order to record the dominant trends and opinions for a variety of Health Informatics conceptual domains in general. Acknowledgements. The authors would like to thank the nursing students for their participation in the study.
References
[1] Hovenga E, Garde S, Heard S. Nursing constraint models for electronic health records: A vision for domain knowledge governance, International Journal of Medical Informatics 74 (2005), 886-898.
[2] Woodruff K, Selway J. Are Electronic Health Records a Barrier to Nurse Practitioner Student Learning?, The Journal for Nurse Practitioners 6 (2010), 279-280.
[3] Nøhr C, Andersen KS, Vingtoft S, Bernstein K, Bruun-Rasmussen M. Development, implementation and diffusion of EHR systems in Denmark, International Journal of Medical Informatics 74 (2005), 229-234.
[4] Hoerbst A, Kohl CD, Knaup P, Ammenwerth E. Attitudes and behaviors related to the introduction of electronic health records among Austrian and German citizens, International Journal of Medical Informatics 79 (2010), 81-89.
[5] Holden JR. Physicians' beliefs about using EMR and CPOE: In pursuit of a contextualized understanding of health IT use behavior, International Journal of Medical Informatics 79 (2010), 71-80.
[6] Laramee AS, Bosek MS, Shaner-McRae H, Powers-Phaneuf T. Nurses' attitude toward the Electronic Health Record still uncertain after 6 months, Heart & Lung: The Journal of Acute and Critical Care 39 (2010), 357-358.
[7] Gibson SG, Seeman ED. Predicting acceptance of electronic medical records: What factors matter most? Proceedings of the 2009 Southeast Decision Sciences Institute Conference, 18-20 February, Charleston, SC, USA, (2009), 164-170.
[8] Daskalakis S, Katharaki M, Liaskos J, Mantas J. Behavioral security: Investigating the attitude of nursing students toward security concepts and practices, 264-285, In Varlamis I, Chryssanthou A, Apostolakis I (eds.), Certification and Security in Health-Related Web Applications: Concepts and Solutions, IGI Global Publishing, Hershey, PA, 2011.
[9] Katharaki M, Daskalakis S, Mantas J. Investigating the potential of e-Learning in healthcare postgraduate curricula: a structural equation model, Studies in Health Technology and Informatics 160 (2010), 572-575.
[10] Rogers E. The Diffusion of Innovations, 4th Edition, Free Press, New York, 1995.
[11] Peslak A, Ceccucci W, Sendall P. An empirical study of social networking behavior using diffusion of innovation theory, Proceedings of the Conference on Information Systems Applied Research (CONISAR 2010), 28-31 October, Nashville, TN, USA, (2010), v3 n1526.
[12] Tung FC, Chang SC, Chou CM. An extension of trust and TAM model with IDT in the adoption of the electronic logistics information system in HIS in the medical industry, International Journal of Medical Informatics 77 (2008), 324-335.
[13] Lee T. Nurses' adoption of technology: Application of Rogers' innovation-diffusion model, Applied Nursing Research 17 (2004), 231-238.
[14] Yang HJ, Lay YL, Tsai CH. An implementation and usability evaluation of automatic cash-payment system for hospital, Journal of Scientific & Industrial Research 65 (2006), 485-494.
[15] Ilie V, Van Slyke C, Green G, Lou H. Gender difference in perception and use of communication technologies: A diffusion of innovation approach, Information Resources Management Journal 18 (2005), 13-31.
[16] Zhang L, Wen H, Li D, Fu Z, Cui S. E-learning adoption intention and its key influence factors based on innovation adoption theory, Mathematical and Computer Modelling 51 (2010), 1428-1432.
[17] SPSS for Windows, Rel 16.0.0, 2007, SPSS Inc., Chicago, Ill.
[18] Ringle CM, Wende S, Will A. SmartPLS 2.0 (M3) beta, Available from: http://www.smartpls.de, Hamburg, 2005.
[19] Chin WW. The partial least squares approach for structural equation modeling, 295-336, In Marcoulides GA (ed.), Modern Methods for Business Research, Lawrence Erlbaum Associates, Mahwah, NJ, 1998.
[20] Shepherd MM, Tesch DB, Hsu JSC. Environmental traits that support a learning organization: The impact on information system development projects, Comparative Technology Transfer and Society 4 (2006), 196-218.
[21] Henseler J, Ringle CM, Sinkovics RR. The use of partial least squares path modeling in international marketing, 277-319, In Sinkovics RR, Ghauri PN (eds.), New Challenges to International Marketing (Advances in International Marketing, Volume 20), Emerald Group Publishing Limited, Bingley, 2009.
[22] Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error, Journal of Marketing Research 18 (1981), 39-50.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-248
Recording and Podcasting of Lectures for Students of Medical School a
Pierre BRUNETa,1, Marc CUGGIA a,b, Pierre LE BEUX a,b Laboratoire d’Informatique Médicale, Faculté de Médecine, 35043 Rennes, France b INSERM U936, Faculté de Médecine, 35043 Rennes, France
Abstract. Information and communication technology (ICT) is becoming an important channel for knowledge transmission, especially in the field of medicine. Podcasting (mobile broadcast content) has recently emerged as an efficient tool for distributing information to professionals, especially for e-learning content. The goal of this work is to implement software and hardware tools for collecting medical lectures at the source by direct recording (halls and classrooms) and to provide automatic delivery of these resources to students on different types of devices (computer, smartphone or video game console). We describe the overall architecture and the methods used by medical students to master this technology in their daily activities. We highlight the benefits and the limits of podcast technologies for medical education. Keywords. E-Learning, Medical education, Podcast, Podcasting, Virtual university
1. Introduction Historically, the Medical School of Rennes University has greatly expanded the dematerialization of educational resources, through its free web content on education servers [1] and the Virtual Radiology University in 1996 [2]. Podcasting (mobile broadcasting of audio and audio-video) has recently emerged as an important information technology tool aimed at information professionals, for communication, continuing education and professional support for research and health training [3-5]. The chosen system had to offer a simple solution for collecting recordings from the teachers, requiring only cheap technical support from the institution by promoting the use of the material and human resources available. It was then extended with a social possibility to exchange comments (blogs) between teachers and students, which is more interactive and attractive than a traditional email system while being visible to everybody. The system would also allow the transfer and consultation of the new medical educational resources on new mobile devices such as tablets and smartphones.
1
Corresponding author: Pierre Brunet, Laboratoire d’Informatique Médicale, Faculté de Médecine, 2 rue du Professeur Léon Bernard 3543 Rennes Cedex. E-mail : [email protected]
2. Method To support the development of this work, we used the teachers' portable personal computers or workstations and the platforms currently available (Mac OS X, Linux and Windows XP). For the acquisition of audio-video data, we used software solutions "embedded" primarily in commercial products such as Podcast Producer (Apple), Camtasia Studio (TechSmith), ProfCast (Humble Daisy) and Inwicast (Inwicast). The audio-video material from these lectures was then collected and made available for download through the proposed open-source software tools, with the opportunity to comment on these resources. These software solutions are suitable for the integration of podcasting via RSS (multimedia) syndication, delivering medical teaching information and resources on an automatic and voluntary basis (by subscribing to the feeds).
3. Result The audio-video and audio recordings are directly available from the teacher's portable computer and are associated with the course materials, such as PowerPoint (Mac OS, Linux and Windows) or Keynote (Mac OS X) presentations, using the software solutions currently available such as Camtasia Studio (Windows and Macintosh), Podcast Producer and ProfCast (Mac OS X). These audio-video files are an exact copy of the course and the teacher can then check them to correct any errors. Audio clips are saved in the standard MP3 format suitable for most digital music players owned by students. The audio-video is recorded in the standard MP4 format suitable for playback on most portable video players held by students. This same format is ideal for workstations and mobile devices.
Figure 1. A Physiology resource with its chapters
The resulting files are then transferred to a storage space on a workstation to be re-encoded in other formats specific to certain mobile platforms such as smartphones, personal digital assistants (PDAs), the new Internet tablets, games consoles, laptops, televisions, etc. The same audio and audio-video can also be "enriched" by the inclusion of the lectured presentation, providing navigation and access facilities to specific parts of the course recording.
Figure 2. The podcast of PCEM2 (2nd year) from the iTunes application
Once the files (audio and audio-video) are validated by the teacher, they are made available to students through a website (a blog under the WordPress engine) for direct visualization of the resource or for download, associated with a short abstract presented by the teacher, to which the students can respond specifically after authentication. To make a simple keyword search available in the application responsible for the collection of podcasts, the different podcast feeds (carrying the multimedia educational resources) are declared and referenced. Direct access to these resources enables automatic download depending on the status of the university students. There is indeed a podcast feed specific to each academic year, for a better dissemination of these resources, but also for direct playback through the podcast aggregator application itself (the application has a built-in video player). Medico-pedagogical resources obtained directly from their in-class teaching source are made available on major mobile players such as smartphones or other audio-video players. They are converted to standard formats like MP3 and MP4 compatible with the main devices on the market.
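Podcast delivery of this kind rests on an RSS 2.0 feed whose items carry enclosure elements pointing to the MP3/MP4 files, which aggregators download automatically on subscription. A minimal illustrative per-year feed (placeholder URLs, titles and file sizes, not the actual Rennes feeds) could be generated as follows:

```python
# Minimal sketch of a podcast feed: RSS 2.0 items with <enclosure> elements, one feed
# per academic year. All URLs, titles and sizes are placeholders for illustration only.
import xml.etree.ElementTree as ET

lectures = [
    {"title": "Physiology - Lecture 1", "url": "https://example.org/podcast/physio1.mp4",
     "length": "152000000", "type": "video/mp4"},
    {"title": "Pharmacology - Lecture 2", "url": "https://example.org/podcast/pharma2.mp3",
     "length": "48000000", "type": "audio/mpeg"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "PCEM2 lectures (example feed)"
ET.SubElement(channel, "link").text = "https://example.org/podcast/pcem2"
ET.SubElement(channel, "description").text = "Recorded lectures for 2nd-year students"

for lec in lectures:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = lec["title"]
    # The enclosure is what podcast aggregators download automatically on subscription.
    ET.SubElement(item, "enclosure",
                  url=lec["url"], length=lec["length"], type=lec["type"])

print(ET.tostring(rss, encoding="unicode"))
```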
Figure 3. Mobile access to the medical resource from a smartphone
This mobile access is used to supplement the students' note taking and thus enhance the process of class attendance: it is very easy to review or re-listen to all or part of the course at any place and at any hour of the day, to improve one's own notes or the « group notes » frequently used by students.
4. Discussion and Conclusion The concept of automated dissemination of classical educational resources (in text and/or slide presentation format) or multimedia (audio and/or audio-video) seems to have caught the attention of medical students [6-11]. Indeed, a recent survey of 108 students at the Medical School of Rennes on the evaluation of the digital resources website (www.umvf.org) and the podcasting system confirmed that students know this concept (94%), incorporate it in their studies (73%) and find that it brings an important additional improvement to note taking (51%), while regretting that it is not widely offered across the different years of medical training at the medical school [12]. A new, more comprehensive study detailing the usability of this new delivery method will be carried out with a larger number of students during the academic year 2010/2011. The system for collecting teaching resources directly in the classroom, to make new resources available at the Medical School of Rennes, meets the key objectives we set ourselves by providing the teachers with a facility for recording live lectures. A new feature allowing the teacher to re-listen to, review and validate his teaching before release to the students will also be provided during the academic year 2011/2012. The students fully embraced this concept, adopting it as additional support to their own notes while awaiting the arrival of new mobile devices like Internet tablets perfectly suited to this form of release. Under the UMVF Podcast program, over 250 hours of teaching courses (Physiology, Physio-Pharmacology, Pharmacology, Bacteriology, Oncology, Medical Informatics, etc.) are currently available in several standard formats for mobile education from the UMVF website and from various websites of the Medical School of Rennes. Finally, this study could be strengthened by another mode of dissemination of resources, such as streaming, for a more efficient use of the memory resources of mobile devices and the best use of the wireless networks (Wi-Fi) deployed on the university campus.
References
[1] Pouliquen B, Le Duff F, Delamarre D, Cuggia M, Mougin F, Le Beux P. Medical pedagogical resources management, Studies in Health Technology and Informatics, vol. 95, 2003, p. 486-491.
[2] Séka LP, Duvauferrier R, Fresnel A, Le Beux P. A virtual university Web system for a medical school. Stud Health Technol Inform. 1998;52 Pt 2:772-6.
[3] Boulos MN, Maramba I, Wheeler S. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education. BMC Med Educ. 2006 Aug 15;6:41.
[4] Alikhan A, Kaur RR, Feldman SR. Podcasting in dermatology education. J Dermatolog Treat. 2010 Mar;21(2):73-9.
[5] Vogt M, Schaffner B, Ribar A, Chavez R. The impact of podcasting on the learning and satisfaction of undergraduate nursing students. Nurse Educ Pract. 2010 Jan;10(1):38-42. Epub 2009 Sep 24.
[6] O'Neill E, Power A, Stevens N, Humphreys H. Effectiveness of podcasts as an adjunct learning strategy in teaching clinical microbiology among medical students. J Hosp Infect. 2010 May;75(1):83-4. Epub 2010 Mar 15.
[7] Trelease RB. Diffusion of innovations: smartphones and wireless anatomy learning resources. Anat Sci Educ. 2008 Nov;1(6):233-9.
[8] Kho A, Henderson LE, Dressler DD, Kripalani S. Use of handheld computers in medical education. A systematic review. J Gen Intern Med. 2006 May;21(5):531-7.
[9] Wilson P, Petticrew M, Booth A. After the gold rush? A systematic and critical review of general medical podcasts. J R Soc Med. 2009 Feb;102(2):69-74.
[10] McKinney AA, Page K. Podcasts and videostreaming: Useful tools to facilitate learning of pathophysiology in undergraduate nurse education? Nurse Educ Pract. 2009 Nov;9(6):372-6. Epub 2009 Jan 4.
[11] Jham BC, Duraes GV, Strassler HE, Sensi LG. Joining the podcast revolution. J Dent Educ. 2008 Mar;72(3):278-81.
[12] Hassane A. Evaluation des ressources numériques de l'UMVF/UNF3S. Rapport de stage, Master Modélisation et Traitement de l'Information Biomédicale et Hospitalière; 2010 Jun; pp. 37.
Electronic Health Record, Workflow, Intra- and Interorganizational Collaboration
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-255
Developing an Electronic Health Record for Intractable Diseases in Japan Eizen KIMURAa,1, Shinji KOBAYASHI b, Yasuhiro KANATANIc, Ken ISHIHARAa, Tsuneyo MIMORId, Ryousuke TAKAHASHIe, Tsutomu CHIBAf, Hiroyuki YOSHIHARAg a Department Medical Informatics of Ehime University Hospital, b Department of Bioregulatory Medicine, Ehime University Graduate School of Medicine, c National Institute of Public Health, d Department of Rheumatology and Clinical Immunology, Graduate School of Medicine, Kyoto University, e Kyoto University Hospital Neurology, f Department of Gastroenterology and Hepatology Kyoto University, g Dept. Medical Informatics of Kyoto University Hospital, Japan
Abstract. Because intractable diseases result from unidentifiable causes and are very difficult to treat, they require a lifelong epidemiology database. Japan does not use global unique identifiers, such as social security numbers, so we conducted a feasibility study regarding an electronic health record (EHR). An EHR can be used as a lifelong database and reduce conventional administrative work. However, it will be necessary to develop additional tools to overcome issues specific to Japan before an EHR can be implemented. Keywords. Archetype, EHR, intractable disease, ISO/CEN 13606, openEHR
1. Introduction The Japanese Ministry of Health, Labor, and Welfare (MHLW) defined specified intractable diseases as ‘Tokutei Shikkan’ and established ‘The Specified Disease Treatment Research Program’ in 1972. ‘Tokutei Shikkan’ refers to intractable diseases that result from unidentifiable causes and are very difficult to treat; no treatment procedure has been established [1]. Typically, these diseases require long-term care and medicine, which involves great financial and mental stress for the patient. This project selected 56 diseases from specified intractable diseases as part of a research program about publicly funded assistance for medical expenses. The selected diseases were so rare that it was necessary to conduct a nationwide investigation. In Japan, when a physician examines a patient, s/he fills out a clinical research form and gives it to the patient with an application for a subsidy. Patients take these documents to a public health center to request a subsidy. The form contains demographic and insurance information for administrative processing and some 1
Corresponding Author: Associate Prof. Eizen Kimura, BM, PhD, Medical School of Ehime University, Situkawa Toon City Ehime Japan TEL.: +081 089 960 5695; E-mail: [email protected].
treatment information for clinical research. Public health centers gather these applications and submit them to the governor of the prefecture. The governor decides which patients are eligible for a subsidy and sends them claimant certifications. This research group collected forms from the governors of the prefectures to conduct this epidemiological study. In 1999, the MHLW decided to enter the forms in an Intractable Disease Database, but, to date, the form has not been digitized, so data is entered manually. This has resulted in low-quality data, and digitization of the clinical form is urgently required. Theories about intractable diseases may be modified by long-term research, so we need to build an EHR that includes clinical information on intractable diseases and retains the data over a lifetime. The EHR must also be designed to match a standard clinical information model to meet the needs of administrative procedures and clinical research. In 2009, we implemented the clinical research form using the standard clinical model of ISO/CEN 13606 [2]. The Japanese government has presented a plan for social system reform to establish a social security number (SSN), but the review sessions have made no progress. We had to work around two constraints: globally unique patient identifiers are not available, and collecting personal data in a central repository is restricted by the existing administrative procedures and legal requirements. This paper describes the challenges related to building an EHR in Japan under these limitations.
2. Methods
2.1. EHR Standard Adoption
Intractable diseases are difficult to distinguish from diseases with a similar clinical profile and are prone to conceptual transition because of developments in medical research. Intractable diseases may also alter their clinical patterns over the long term, so it is important to accumulate life-long health records and to standardize clinical information models to enable cross-sectional studies between diseases. We needed to achieve these goals within a short time, so we defined selection criteria to standardize the process. We ensured that: 1) the specification was open and the license had few restrictions, 2) clinical models were registered in a centralized repository, ensuring sustainability of standardization and maximizing reusability by restricting derivation of models, and 3) the standard had a long history and had been used enough to be stable and should not require major modifications in the future. We used openEHR [3], a reference implementation of ISO/CEN 13606 that has been used for more than 10 years in Europe.
2.2. Template of the Clinical Research Form
Beginning in 2009, we organized items from the conventional clinical research form using mind mapping and grouped relevant disease-specific themes into clusters [4]. Although many physicians participate in developing openEHR archetypes, the number of archetypes is not sufficient to compose templates for intractable diseases. We developed intractable-disease-specific archetypes from the previously described deliverables and composed a form template in combination with archetypes registered at the Clinical Knowledge Manager (CKM) [5]. We developed templates for six diseases: ulcerative colitis, Crohn's disease, fulminant hepatitis, primary biliary cirrhosis, severe acute pancreatitis, and myasthenia gravis, because these diseases have a large number of
cases and many pathognomonic symptoms. The standard domain-specific language used in openEHR is the Archetype Definition Language (ADL), but we converted the template ADL files to XML files (described with XML Schema) to maximize reusability.
2.3. Workflow
We spent extra time ensuring that computerizing the clinical research form would not affect current administrative procedures, and that it would enhance convenience at healthcare and public health centers and enable a harmonious introduction of the EHR (see Fig. 1). We set up the EHR at the data center contracted by the National Institute of Public Health (Fig. 1-a). The EHR has two interfaces: one is used by healthcare workers to submit the form to the EHR; the other is used by the governor of the prefecture to retrieve the form. We distributed the clinical research form entry system to participating hospitals (Fig. 1-b). For hospitals that had already introduced an electronic medical record (EMR), we supplied a tool to capture essential patient information, insurance, and disease name from the EMR and to auto-fill these data into the form. The forms entered by physicians were transferred to the EHR via the Internet through a secure messaging system (Fig. 1-c). We allocated a unique document identifier (UDID) to each form. This UDID was generated by combining the unique healthcare facility ID and a document ID; the Social Insurance Agency assigns an individual facility ID to each healthcare facility. We removed personal information from the form before transferring it via the Internet. The form was also printed on a conventional paper form with a Quick Response (QR) code, which includes the patient's demographic, insurance, and healthcare information. We intentionally implemented the new clinical research form in the format of the conventional paper form so that public health centers without our system could accept the form and process it conventionally. Patients bring the paper form to the public health center in their region to apply for the subsidy (Fig. 1-d). The forms are collected at the public health centers and then sent to the governor of the prefecture. The person in charge at the prefecture downloads the content of the clinical research form from the EHR, merges it with the data in the QR code to generate a complete form for the subsidy application (Fig. 1-e), and registers the patient in the intractable disease database on the Wide-area Information-exchange System for Health and Welfare Administration (WISH) network (Fig. 1-f). The governor of the prefecture judges whether the application satisfies the accreditation criteria for subsidy set by the MHLW and, for qualifying patients, issues a claimant certification (Fig. 1-g). We assumed the subsequent workflow, until the subsidy is paid, would remain the same.
2.4. Clinical Research Form Entry System
The clinical research form entry system was built on an HTML5-capable web browser to support drag-and-drop of data files exported from the EMR. We used a template engine to map the information entered in the form in the web browser into the XML template file of the clinical research form and to compile the submission to the EHR. The system also generates a QR code, which embeds the patient's name, gender, address, place of birth, and UDID. That is, sensitive information is only stored in the QR code and no personal information is submitted to the EHR. Finally, it uploads the form data to the EHR through a secure messaging system.
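The paper does not publish the entry-system code, so the following Java sketch is only illustrative: class, method and field names, the facility ID and the patient details are assumptions. It shows the two ideas described above, a UDID built from the facility ID plus a document ID, and a QR payload that keeps demographic data out of what is submitted to the EHR.

```java
// Illustrative sketch only; names and identifiers below are assumptions, not the project's code.
import java.nio.charset.StandardCharsets;

public class FormSubmissionSketch {

    /** UDID = facility ID + document ID, as described in section 2.3. */
    static String buildUdid(String facilityId, long documentId) {
        return facilityId + "-" + String.format("%010d", documentId);
    }

    /** Payload embedded in the printed QR code; never sent to the EHR. */
    static byte[] buildQrPayload(String name, String gender, String address,
                                 String placeOfBirth, String udid) {
        String payload = String.join("|", name, gender, address, placeOfBirth, udid);
        return payload.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String udid = buildUdid("JP-EHIME-0001", 1234);             // hypothetical facility ID
        byte[] qr = buildQrPayload("Taro Yamada", "M", "Toon City", // hypothetical patient data
                                   "Ehime", udid);
        System.out.println("UDID: " + udid + ", QR payload bytes: " + qr.length);
        // The XML form sent to the EHR would carry only the UDID and the clinical content.
    }
}
```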
2.5. Registration Tool
Currently, the people in charge at the prefecture manually enter clinical research forms into the intractable disease database on the WISH network. To automate this, we developed a registration tool. The registration tool extracts the UDID from the QR code and retrieves the corresponding form from the EHR using a web service (Fig. 1-e). Because the intractable disease database only has a legacy CSV import interface, we converted the XML-formatted form to a CSV file using XSLT. The tool also extracts some patient demographic data and insurance information from the QR code and merges them with the CSV file. The governor exports the CSV files to encrypted memory and uses them to register patients in the intractable disease database (Fig. 1-f).
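A minimal sketch of the conversion step described above, assuming placeholder file names and a hypothetical stylesheet form-to-csv.xsl (the real stylesheet and element names are not published). It only illustrates XML-to-CSV conversion with the JDK's built-in XSLT transformer.

```java
// Hedged sketch: file names and stylesheet are placeholders, not the registration tool's actual artifacts.
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class FormToCsv {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer =
                factory.newTransformer(new StreamSource(new File("form-to-csv.xsl")));
        // Input: the XML clinical research form retrieved from the EHR by UDID;
        // output: a CSV row for the legacy intractable disease database import.
        transformer.transform(new StreamSource(new File("clinical-form.xml")),
                              new StreamResult(new File("import-row.csv")));
    }
}
```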
Figure 1. Research process related to establishing an EHR in Japan (2009).
3. Results and Discussion
We had to model 49 archetypes (15 new archetypes and 34 archetypes specialized from CKM) to define the templates for the clinical research forms. Because we moved private information into the QR code, the EHR database does not hold any private information. However, public health centers and the governor can obtain a complete clinical research form by uniting the data from the EHR and the QR code. Embedding the QR code in a traditional clinical research form avoids large-scale administrative reorganization, and even improves operating effectiveness and the quality of data. This study demonstrated the feasibility of building an EHR without fundamental changes to administrative procedures and health regulations.
Japan does not use a global personal identifier system, such as social security numbers, because no consensus has been reached that global personal identifiers are constitutional. Thus, patient information may overlap across prefectures. We assigned a UDID to every clinical research form so that we can implement computer-assisted name identification and minimize overlaps in the future. Currently, the major data quality issue is caused by human error, such as input errors at registration. After solving this issue, we will investigate the magnitude of the statistical influence of patient information overlap on epidemiological research. Converting ADL to XML enables us to search the EHR semantically by combining an archetype ID and a node ID using general technology, such as XQuery (a sketch is given at the end of this section). However, it is difficult to foresee the structure of the XML file converted from ADL. Implementing data mapping to legacy applications requires a trial-and-error approach and increases the complexity of the overall development process. We will need to develop a tool that enables efficient mapping from XML-converted ADL to the formats of legacy applications. The template editor can generate an input form from a template automatically, but we did not use this feature. We achieved localization through word-to-word translation, but we must redesign the form layout by moving field controls based on grammatical context, and we must implement additional entry constraints. Japanese has multiple ways to write words: Kanji (Chinese characters), Hiragana (a Japanese phonetic syllabary), Katakana (also a phonetic syllabary, used mainly for foreign words), and the Latin alphabet. We will need to solve these Japanese language-specific issues to allow a smoother adoption of ISO/CEN 13606 in Japan.
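A small sketch of the semantic search idea mentioned above. The paper refers to XQuery; the sketch below uses the JDK's XPath support instead, and the archetype ID, node ID and attribute names are illustrative assumptions rather than the project's actual XML layout.

```java
// Hedged sketch: attribute names, archetype ID and node ID are invented examples.
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import java.io.File;

public class ArchetypePathQuery {
    public static void main(String[] args) throws Exception {
        Document form = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("clinical-form.xml"));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Hypothetical addressing: an element carrying an archetype_id and a child with a node id.
        String expr = "//*[@archetype_id='openEHR-EHR-OBSERVATION.example.v1']"
                    + "//*[@node_id='at0004']/text()";
        String value = xpath.evaluate(expr, form);
        System.out.println("Value at archetype/node path: " + value);
    }
}
```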
4. Conclusions
The clinical research form has two functions: it is used in subsidy applications and it supplies data for epidemiological research. The current version of the form is primarily designed for the subsidy application; as a clinical research form it contains a lot of vague information and is difficult to use for research. Moreover, Japan's health system has a decentralized design; it has complex administrative procedures and is unlikely to undergo fundamental reform. This study demonstrated that our approach can be used to build an EHR that keeps personal information private, even under these difficult conditions. Acknowledgement. This work was supported by the Research on Measures for Intractable Diseases Project of the MHLW (H20-IntractableDisease-Generic-039).
References
[1] Nakatani H, Kondo T. Characteristics of a medical care program for specific diseases in Japan in an era of changing cost-sharing. Health Policy. 2003 Jun;64(3):377-89.
[2] ISO, Committee CTHIT. ISO/CEN 13606. Health informatics – Electronic healthcare record communication – Parts 1-5. 2008.
[3] openEHR Foundation. openEHR. Accessed Jan 2009; Available from: http://www.openehr.org.
[4] Kimura E, Kobayashi S, Kuroda T, et al. Lessons learned from modeling archetypes for intractable disease surveys. Japan Journal of Medical Informatics. 2011; In Publishing.
[5] Ocean Informatics. The openEHR Clinical Knowledge Manager. Accessed April 2010; Available from: http://www.openehr.org/knowledge/.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-260
Three Key Concerns for a Successful EPR Deployment and Usage
Rebecka JANOLSa,1, Bengt GÖRANSSONa, and Bengt SANDBLADa
a Department of Information Technology, Uppsala University, Sweden
Abstract. The health care environment is unique because of its large and complex organisation with a traditional hierarchic structure that is governed by laws and regulations. This paper examines how a large Swedish health care organisation works with usability issues regarding Electronic Patient Record (EPR) deployment and usage. EPR systems have a great impact on the work environment, and clinical work routines will not be performed in the same way as before. This paper analyses how the EPR management and the core business understand their EPR responsibilities and work with usability aspects at different levels in the organisation. The paper reveals that there is a conflict about responsibility between EPR management and core business management. The reason for the confusion is a contradictory understanding of what an EPR system is: an IT system, or a tool for the core business to perform better health care work. As a result, care staff's experience regarding the EPR system's usability is not listened to within the organisation. Three key concerns for a successful EPR deployment and usage are identified and further analysed: education, evaluation, and support & improvement ideas. Keywords. usability, organisational change, health care, electronic patient record system
1. Introduction
Today many health care organisations are deploying computer-based systems such as the Electronic Patient Record (EPR). The rationale behind the EPR is to save money and achieve effective support for the care staff. According to Ann-Britt Krog [1] there are three common assumptions about EPR systems: 1) better overview, 2) less hazard and 3) less time consumption. Krog's thesis is based on a qualitative study at a Danish hospital and concludes that the system gives the care staff better insight into other care professions' work, through increased accessibility and communication. Krog's study and the results of our study show that the three mentioned assumptions about the benefits of EPR systems have not been met in practice. The care staff had strong opinions about the lack of usability and poor efficiency, and felt that the systems were not able to fully support their specific organisational needs. Almost all care staff appreciated the accessibility and reliability of an EPR and understood that it is impossible to go back to paper-based patient records [2, 3], but they thought that they spent more time than before with the computers and had less time for patients [4]. These kinds of usability problems are common in both Swedish and international health care organisations. In this paper we examine how a Swedish health care organisation with 1
Corresponding Author.
10,000 employees works with usability issues from the often neglected perspective of the deployment phase [5]. The studied health care organisation, with a university hospital, a smaller hospital and several primary care centres, has since 2004 deployed the same module-based EPR system. In the paper we use a broader definition of usability [6] and focus on the organisational perspective, not the more traditional usability problems concerned with the software, user interface and system usage. Nancy Lorenzi [6] argues that the challenge of introducing IT in complex organisations, such as health care organisations, is mainly behavioural rather than technical. This paper focuses on two stakeholder groups: EPR management and the core business. The EPR management was responsible for managing, deploying and supporting the EPR system, and the core business is divided into two subgroups: managers and care staff. Three key concerns that are crucial for successful deployment and user acceptance have been identified and further analysed: education, evaluation, and support & improvement ideas.
2. Methods
The data gathering was conducted together with EPR managers responsible for deployment and support, core business management and care staff during a 2.5-year research project. During this period three modules were deployed: patient administration (PAS), referrals and drugs. The data gathering during the project focused both on the deployment processes and on the everyday working situation. Several research methods were used, including field studies, validated questionnaires [8], interviews and observations. During the deployments we interviewed educators and participants and took part in organised activities such as education sessions and meetings. We also conducted semi-structured interviews with physicians, nurses, enrolled nurses and staff at the EPR management organisation. The questions focused on their responsibility, experience and attitude towards deployment processes and EPR systems. All interviews were recorded and partly transcribed.
3. Result and Discussion
3.1. Who is Responsible for the EPR System's Usability?
The health care organisation was separated into two parts: the core business, with clinical managers and care staff, and an EPR management organisation. The EPR management's responsibilities were to support the care staff, be responsible for the EPR deployment and maintain a close relationship with the company that developed and supplied the EPR system. The EPR management was aware that an EPR deployment affects the organisation's core business and working routines, but they considered it the clinical managers' responsibility to handle these aspects. The clinical managers consider the EPR system to be a technical tool and not an integral part of the health care process, and therefore the EPR management's responsibility to deploy and support. This uncertainty about responsibilities has also been seen in other organisations. Cajander et al. [7] have analysed how managers at a Swedish public authority work with usability issues. They conclude that "the manager in the organisation did not have a common view about who is responsible for usability issue and how this responsibility works in their organisation" [7]. This uncertainty and confusion over responsibilities
had a major effect on the care staff. A common opinion among the care staff was that the EPR system needed major improvements in order to fully support them. The care staff had told both the EPR management and the clinical managers this for several years, but nothing had happened. They experienced that the EPR management and the clinical managers mainly considered usability problems to be caused by the care staff and not by the system. Both management organisations argued that some of the major problems would be solved if the care staff participated in deployment activities and used the system in the right way [4], and that time would heal some of the usability problems. The care staff reacted to that by not participating in deployment activities and not delivering improvement suggestions, but the usability issues did not disappear. Kjeldskov et al. [9] have examined whether the usability problems that nurses experience change when they transform from novice to expert users. They identified three different usability problems experienced by nurses: 1) complexity of information, 2) poor relation to work activities and 3) lack of support for mobility. They conclude that time does not heal the usability problems; the usability problems must be addressed in some other way. Kjeldskov's study indicates that the usability problems in our study most likely would not disappear with time. EPR management and clinical managers need to address the problems in different areas. Below we discuss three key concerns that are crucial for decreasing usability problems and increasing user acceptance when deploying an EPR system: education, evaluation and support & improvement ideas. Clarifying the responsibilities for these concerns is highly important in order to succeed with the deployment.
3.2. EPR Education
The EPR management, which was responsible for the EPR deployment, considered it important that deployment activities be close to the core business. Therefore they educated "normal" care staff within the organisation. Their task was to plan education sessions that fit their unit's needs and to instruct and support colleagues at their own unit. The EPR management organisation prepared the educators with an extensive introduction to the system so that the educators could customise it to their core business' needs. Some of the educators had earlier experience of EPR deployments and others had none. The educators said that they felt insecure about this responsibility and found it hard to customise the education, and therefore they gave their colleagues the same extensive education that they had received themselves. The educators experienced that during the education sessions everything worked well, but after the education, in the clinical work, a lot of problems regarding working routines and new, unknown terminology occurred. One member of the care staff said: "it's not computer nor health care words, I don't recognise the meaning of the words". To handle the care staff's uncertainty they made custom manuals that described the workflow in the system. In the questionnaires we asked the care staff whether they asked colleagues, used support services or consulted manuals when they needed assistance. The results indicated that the care staff did not use the manuals as much as the EPR management thought, and that they rather (80%) asked a colleague and/or called the support service. A key concern is to have educators with the knowledge and support necessary to customise the education sessions so that they support the core business needs.
One member of the care staff expressed that EPR systems should be tailored to the users' needs, not the other way around. Therefore the education should focus on performing clinical work routines and on how the systems can support them.
3.3. Evaluation
The different health care units within the organisation deployed the EPR modules either one by one or the whole system at once. These deployments were rarely formally evaluated, even though they often started with a smaller pilot deployment at one or several units to see how the system worked in the new context. Evaluating deployments from a user perspective would highlight problems and provide recommendations for improving the different steps in the deployment process. In the studied organisation, the EPR management and the clinical managers said that lack of time and resources were the reasons for not evaluating the deployments and EPR usage. During our study two types of evaluations with different foci were made. One focused on the care staff's work environment. That evaluation excluded questions about the IT and EPR systems, which indicates that the core business did not consider IT and the EPR system to be parameters that influence the care staff's work environment. This is surprising, because our interviews and observations showed that the EPR system and other IT systems had a huge impact on the care staff's work environment. The EPR management made the second evaluation, an extensive questionnaire about the care staff's experience of the different modules and what kinds of problems they experienced. The evaluation indicated that the care staff thought that the system was non-intuitive and had low usability. This is important knowledge for the EPR management, but it did not give them any deeper understanding of the reasons for the problems or how to solve them. A key concern is not just to perform evaluations; it is also to ask the right questions so that the results can be useful and a solid ground for improvements in both the EPR management's and the clinical managers' deployment routines. A clarification of what the problems really are about can help the health care organisation establish whose responsibility it really is.
3.4. Support and Ideas for Improvement
During the deployment it is crucial that the care staff get the support they need to feel safe and secure about the new system. The studied organisation had an EPR support organisation that operates at three different levels: local, department and central. All EPR support persons had a clinical background, mainly as nurses or medical secretaries, with a special interest in IT. The local support person was responsible for supporting colleagues and forwarding ideas for improvement. The local support called the department service or the central support if further support was needed. The interviewed care staff had many ideas about how to change the EPR so that it would support the clinical routines better. The interviews and questionnaires revealed that the care staff rarely informed the local support person about their problems and ideas for improvement. In the questionnaires, 50% did not know who to contact if they had ideas and wishes about how to improve the system, 60% answered that they very rarely contacted the local support, and 12.5% answered that they did not know whom to contact when they had problems. The interviews and observations confirmed that the care staff were not aware of the existence of the local support and their responsibility. There was also confusion among the local support about their responsibilities. In the interviews the care staff said that they asked a trusted colleague if they needed help or support.
In some cases this was the same person as the formal local support, but often it was "their own" informal local support person. We believe having support at all units is a good idea, but our questionnaires and interviews show that the care staff did not know what to do or who to contact when they had problems. Some had made
complaints and offered improvement ideas, but nothing had changed for years, so they did not think it mattered whether they reported problems or not. A key concern is to have a transparent support and improvement chain.
4. Conclusion
In this study we have examined how a large Swedish county with several health care units works with usability problems in the EPR deployment process. The study shows that there is confusion about the responsibility for usability issues within the organisation. The confusion arises because some of the stakeholders consider the EPR system to be an IT system, not an integral part of the health care process, while others consider it to be a core business system and therefore the core business's responsibility. The confusion and uncertainty about responsibility lead to an unsustainable work situation for the care staff, who need an effective EPR system to perform high-quality work. In order to achieve a successful deployment and a durable working environment for the care staff, it is important that the responsibilities for education, evaluation and support & improvement ideas are clear. Both the EPR management and the core business need to know and understand their mandate and responsibility in order to achieve an improved work environment. The organisation needs to continuously seek and implement improvements in both the work routines and the EPR system that aim to support the care staff in their health care activities. The support system also needs to be more transparent in order to give the care staff feedback on the status of their complaints.
References
[1] Krog, A.-B.: Forhandlinger Om Patienten: Den Elektroniske Patientjournal Som Kommunikationsmedie. Syddansk Universitet, Det Humanistiske Fakultet, p. 204 (2009)
[2] Moody, L.E., Slocumb, E., Berg, B., and Jackson, D.: Electronic Health Records Documentation in Nursing: Nurses' Perceptions, Attitudes, and Preferences. CIN: Computers, Informatics, Nursing 22, 6, 337 (2004)
[3] Meijden, M.J.v.d., Tange, H., Troost, J., and Hasman, A.: Development and Implementation of an EPR: How to Encourage the User. International Journal of Medical Informatics 64, 173-185 (2001)
[4] Janols, R., Göransson, B., Borälv, E., and Sandblad, B.: Physicians' Concept of Time Usage, a Key Concern in EPR Deployment. In: World Computer Congress, Springer, Brisbane, Australia (2010)
[5] Berg, M.: Implementing Information Systems in Health Care Organizations: Myths and Challenges. International Journal of Medical Informatics 64, 2-3, 143-156 (2001)
[6] Lorenzi, N. and Riley, R.T.: Managing Change: An Overview. Journal of the American Medical Informatics Association 7, 2, 116-124 (2000)
[7] Cajander, Å., Gulliksen, J., and Boivie, I.: Management Perspectives on Usability in a Public Authority: A Case Study. ACM, New York, NY, USA (2006)
[8] Kavathatzopoulos, I.: Usability Index. In: I.A. Toomingas, A. Lantz, and T. Berns (eds.) Work with Computing Systems: Computing Systems for Human Benefits from the 8th International Conference on Working with Computing Systems (2007)
[9] Kjeldskov, J., Skov, M.B., and Stage, J.: A Longitudinal Study of Usability in Health Care: Does Time Heal? International Journal of Medical Informatics (2008)
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-265
Implementation of an Open Source Provider and Organization Registry Service Markus BIRKLEa,1, Benjamin SCHNEIDERa, Tobias BECKb, Thomas DEUSTERb, Markus FISCHERb, Florian FLATOWb, Robert HEINRICHb, Christian KAPPb, Jasmin RIEMERb, Michael SIMONb, Björn BERGHa a Center for Information Technology and Medical Engineering, University Hospital Heidelberg, Speyerer Straße 4, 69115 Heidelberg, Germany b Institute of Computer Science, University of Heidelberg, Im Neuenheimer Feld 326, 69120 Heidelberg, Germany
Abstract. Healthcare Information Exchange Networks (HIEN) enable the exchange of medical information between different institutions. One of the biggest problems in running a HIEN is the unique identification of the care providers. The provider and organisation registry service (PORS) has to provide a unique identifier for care providers. The concept and the implementation of PORS are described in this article. Finally, the PORS implementation is compared with the Integrating the Healthcare Enterprise (IHE) profile for a Healthcare Provider Directory (HPD). Keywords. provider and organization registry, eHealth, IHE, open source, regional health networks
1. Introduction
Healthcare Information Exchange Networks (HIEN) enable the exchange of medical information between different institutions. One of the biggest problems in running a HIEN is the unique identification of the care providers, such as hospitals, health care professionals and so on. Heinze et al. illustrated that problem with the example of a personal electronic health record (PEHR) which is implemented by the University Hospital Heidelberg (UHH) to improve the information exchange between the UHH and other hospitals, primary care givers and the patients themselves [1]. Today a unique identification for physicians, the so-called "lebenslange Arztnummer" (lifelong physician number), and for organizations, the so-called "Betriebsstättennummer" (permanent establishment number), are available in Germany. But they are only used for billing purposes and there is no central electronic registry for these values. The solution to address this issue is the development of a service called the Provider and Organisation Registry Service (PORS). Heinze et al. developed an IHE-compliant concept of how a PORS should be implemented [2]. This approach can solve the problem in German HIEN, but it is also generic enough to be used in other countries because they implemented similar 1
Corresponding author: Dipl.-Inform. Med. Markus Birkle, Center for Information Technology and Medical Engineering, University Hospital Heidelberg, Speyerer Straße 4, Heidelberg, Germany; E-mail: [email protected]
identification concepts for care providers. Based on the concept of Heinze et al., a prototype was developed by students within the scope of a software master class [3]. The result successfully provided a proof of concept, whilst it also showed that important additional functionality still needed to be added. In order to provide a full-feature version of the PORS that could also be deployed in a productive environment, a new project was initiated in cooperation with the faculty of computer science at the University of Heidelberg. The knowledge and the experience gathered during the development of the prototype led to an extension of the original concept. Furthermore, it was decided to release the resulting application as Open Source. In 2010 the IHE (Integrating the Healthcare Enterprise2) published a draft of an IHE IT Infrastructure Technical Framework Supplement for public comment [4]. This Supplement describes an IHE Profile for a Healthcare Provider Directory (HPD). The HPD profile supports the management of healthcare providers' public information in a directory structure and defines an interface to query the stored data. The HPD profile suggests implementing this service with a Lightweight Directory Access Protocol (LDAP) server. The extended concept and implementation of the PORS are described in this article. Finally, the result is compared with the HPD LDAP server implementation concept.
2. Method
Based on the concept of Heinze et al. [2] and the already implemented prototype [3], the PORS was developed within an Information Systems Engineering internship of the faculty of computer science at the University of Heidelberg. The results will be released as an Open Source software project via the Open eHealth Foundation under the Apache Software Licence 2. In preparation for defining the software architecture, the HL7 v2 standard was analyzed in order to determine whether the provided HL7 v2 messages would fit the PORS requirements. To adequately address the given requirements, the messages were enhanced, and based on this the system and database architecture were developed. The technical implementation is based on Java. For persistence purposes the Hibernate 3 framework is used. The data storage is realized with a PostgreSQL 4 database. The graphical user interface (GUI) for manual administration is based on Java Server Faces.
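As an illustration of this technology stack, the sketch below shows how a provider record could be mapped with JPA annotations on top of Hibernate. The entity name, fields and table mapping are assumptions; the published paper does not include the PORS data model.

```java
// Illustrative entity sketch only; names and columns are assumed, not the PORS schema.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "provider")
public class Provider {

    @Id
    @GeneratedValue
    private Long id;                            // internal primary key

    @Column(name = "lanr")
    private String lifelongPhysicianNumber;     // "lebenslange Arztnummer"

    @Column(nullable = false)
    private String lastName;

    @Column(nullable = false)
    private String firstName;

    private String specialisation;

    // Getters and setters omitted for brevity.
}
```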
3. Result 3.1. Adapted HL7 v2 Messages The messages already provided by the HL7 v2 standard have to be extended for the PORS. The functionality add, update and de- or reactivation of an entry is carried out by a HL7 v2 Master File Notification (MFN) message. Queries can be triggered by 2
http://www.ihe.net/ http://www.hibernate.org/ 4 http://www.postgresql.org/ 3
Conformance Based Master File Queries (QBP) and the result is provided by a Segment Pattern Response (RSP). These messages are described below.
3.1.1. Master File Notification (MFN)
With a Master File Notification (MFN) message, shown in Figure 1, a PORS entry can be added, updated and de- or reactivated. To add an entry, the MFE-1 field must contain the value MAD. To update an entry, the MFE-1 field must contain the value MUP. To deactivate an entry, the MFE-1 field must contain the value MDC. In the case of entry deactivation the provision of a deactivation reason is mandatory; the reason can be provided in the STF-38 field. The success of this transaction is reported to the sending system by an HL7 v2 Master File Acknowledgement (MFK) message.
MSH|^~\&|SAP-ISH^sapr3t^002|UKHD^0999|PORS||201012151600||MFN^M02^MFN_M02|1234|P|2.5.1
MFI|PRO^Provider^UKHD0001||UPD|||AL
MFE|MAD|||0000001234|PL
STF|0000001234|179999900|Beckenbauer^Franz^JR^Dr|Internist|M|19501101|||999^UKHD^^1.2.276.0.76.3.1.78^Universitätsklinikum Heidelberg|00496221566736^00496221562000^[email protected]|Musterstraße 14^^Musterhausen^^12345^DE|20091211|||||||||||||||||||||||||
Figure 1. Exemplary Master File Notification Message
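The following minimal Java sketch (not the PORS code) only illustrates how the MFE-1 action code of an incoming MFN message could be read and dispatched; a production system would use a proper HL7 v2 parser and handle encoding characters, repetitions and escaping.

```java
// Minimal, hedged sketch of MFE-1 dispatching for MAD / MUP / MDC.
public class MfnActionDispatcher {

    static String mfe1(String hl7Message) {
        for (String segment : hl7Message.split("\r|\n")) {
            if (segment.startsWith("MFE|")) {
                return segment.split("\\|")[1];   // MFE-1 = record-level event code
            }
        }
        throw new IllegalArgumentException("No MFE segment found");
    }

    public static void main(String[] args) {
        String message = "MSH|^~\\&|SAP-ISH|UKHD|PORS||201012151600||MFN^M02^MFN_M02|1|P|2.5.1\r"
                       + "MFE|MAD|||0000001234|PL\r";
        switch (mfe1(message)) {
            case "MAD": System.out.println("add provider entry");        break;
            case "MUP": System.out.println("update provider entry");     break;
            case "MDC": System.out.println("deactivate provider entry"); break;
            default:    System.out.println("unsupported action");
        }
    }
}
```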
3.1.2. Conformance Based Master File Query (QBP)
With a Conformance Based Master File Query (QBP), shown in Figure 2, the PORS can be queried. Two query types are available: the Z80 PORS Query combines the given query criteria with a logical "AND", and the Z81 PORS Query combines the given query criteria with a logical "OR". In this way any complex type of query can be created. The query result is returned in a Segment Pattern Response (RSP) message.
MSH|^~\&|SAP-ISH^sapr3t^002|UKHD^0999|PORS||201012151600||QBP^Z80^QBP_Q11|1234|P|2.5.1
QPD|Z80^PORS Query^HL70471|1234|PRO^Provider^UKHD0001||||Beckenbauer
RCP|I||R
Figure 2. Exemplary Conformance Based Master File Query Message
3.1.3. Segment Pattern Response (RSP)
With a Segment Pattern Response (RSP), shown in Figure 3, the PORS answers a query message. The response contains the complete STF segment, similar to the MFN message.
MSH|^~\&|PORS||SAP-ISH^sapr3t^002|UKHD^0999|201012151600||RSP^Z80^QBP_Q11|1234|P|2.5.1
MSA|CA|1234
QAK|1234|OK|Z80^PORS Query^HL70471|1|1|0
QPD|Z80^PORS Query^HL70471|1234|PRO^Provider^UKHD0001||||Beckenbauer
MFE|MAD|||0000001234|PL
STF|0000001234|179999900|Beckenbauer^Franz^JR^Dr|Internist|M|19501101|||999^UKHD^^1.2.276.0.76.3.1.78^Universitätsklinikum Heidelberg|00496221566736^00496221562000^[email protected]|Musterstraße 14^^Musterhausen^^12345^DE|20091211|||||||||||||||||||||||||
Figure 3. Exemplary Segment Pattern Response Message
3.2. System Architecture
The PORS architecture is component-based and service-oriented. A short description of the main components follows.
3.2.1. Core / Controller
The core and the controller provide the main functionality of PORS. The controller takes over the internal message handling between the different components and handles the requests from the message interface and the graphical administration frontend. It translates each request into an internal task format. The core component is based on a multi-thread engine and enables parallel processing of multiple tasks. In the future, a multi-processor parallelization approach could address problems that may occur during the processing of huge amounts of simultaneous requests.
3.2.2. Communication Interface
PORS provides several interfaces. An HL7 message interface handles generic HL7 messages, and an HTTP and a SOAP interface handle appropriately encapsulated HL7 messages. The SOAP interface can also be used to connect another graphical user interface or an external software component. Last but not least, a graphical administration interface was implemented; all PORS functionalities and administrative adjustments can be performed through it.
3.2.3. Authentication and Administration Module
PORS implements a user- and role-based security management concept. Each PORS transaction (e.g. execution of an MFN message) requires a successful user login and the role privileges associated with the transaction. There is also an LDAP interface implemented to handle user logins and role privileges via a global LDAP server.
3.2.4. Persistence and Database Module
The persistence layer and the search engine of PORS are implemented using the Hibernate framework. Data transfer objects are used for the internal data exchange. The service uses a PostgreSQL database. PostgreSQL provides the possibility to enhance the functionality on the database layer by adding C program code. This enhancement functionality is used to implement the Jaro-Winkler algorithm [5] for the duplicate recognition functionality directly on the database (see the sketch below).
3.3. Comparison of PORS and HPD
The comparison of the PORS implementation with the IHE HPD shows some fundamental differences between the concepts. The IHE HPD profile expects the implementation of an LDAP infrastructure. The LDAP protocol provides out of the box well-standardized interfaces to add, update, delete and query entries within an LDAP server. Duplicate recognition mechanisms are not supported by LDAP out of the box.
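The sketch below illustrates the duplicate-candidate scoring step with the Jaro-Winkler measure. PORS itself runs the algorithm in C inside PostgreSQL; this Java version instead assumes the Apache Commons Text library purely for illustration, and the 0.9 threshold is an arbitrary example value.

```java
// Hedged sketch: uses Apache Commons Text (not part of PORS) to show the scoring idea.
import org.apache.commons.text.similarity.JaroWinklerSimilarity;

public class DuplicateCheckSketch {

    private static final JaroWinklerSimilarity JW = new JaroWinklerSimilarity();
    private static final double THRESHOLD = 0.9;   // example value, not from the paper

    static boolean probableDuplicate(String lastName1, String firstName1,
                                     String lastName2, String firstName2) {
        double last = JW.apply(lastName1.toLowerCase(), lastName2.toLowerCase());
        double first = JW.apply(firstName1.toLowerCase(), firstName2.toLowerCase());
        return last >= THRESHOLD && first >= THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(probableDuplicate("Beckenbauer", "Franz",
                                             "Bekenbauer", "Frans"));   // likely a duplicate
    }
}
```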
4. Discussion
The PORS implementation has proven to be a quite complex task, as the software design has to be selected very carefully. Still, first tests affirmed that the chosen software
architecture is adequate and the overall system is stable and capable of providing the required functionality. Because communication within an IHE-based HIEN infrastructure is based on HL7 messages, the HPD solution implementing an LDAP server is not ideal. Also, the missing duplicate recognition functionality is a problem for data quality: an HL7-to-LDAP adapter and a duplicate recognition script would have to be written. Another difficulty is the management of local IDs, because the LDAP schema has to be updated when adding a new ID; in big LDAP databases this may take a long time. The Java-based PORS implementation with an HL7 standard-based message interface has solved all these problems. Within the PORS core engine we have the full capabilities of an object-oriented programming language, and problems like duplicate recognition can be solved directly. Acknowledgements: PORS was implemented within a cooperation project between the Center for Information Technology and Medical Engineering of the Heidelberg University Hospital, the Software Engineering Group and the Database Systems Research Group at the University of Heidelberg. Within the master course Applied Computer Science the students could choose the Information Systems Engineering internship. During this internship, whose duration is about one semester, the students have to handle a complete software development project. In the winter semester 2010/2011 the software to develop was the PORS. Special appreciation is given to the professors Prof. Dr. Barbara Paech and Prof. Dr. Michael Gertz, who headed this course. Many thanks also to Dipl.-Math. Florian Flatow and Dipl.-Inf. Robert Heinrich for the perfect supervision of the project. And last but not least, special thanks to the students Tobias Beck, Thomas Deuster, Markus Fischer, Christian Kapp, Jasmin Riemer and Michael Simon for their extensive contribution to the project.
References
[1] Heinze, O., Brandner, A., Bergh, B.: Establishing a Personal Electronic Health Record in the Rhine-Neckar Region. In: Proceedings of MIE 2009 – The XXIInd International Congress of the European Federation for Medical Informatics, Stud Health Technol Inform (2009), 150-119.
[2] Heinze, O., Ihls, A., Bergh, B.: Development of an Open Source Provider and Organization Registry Service for Regional Health Networks. Third International Conference on Health Informatics (HealthInf 2010), Spain, 2010.
[3] Doods, J., Porzelt, J., Stricker, L., Berz, C., Chruscz, K., Duda, et al.: Technische Umsetzung eines regionalen Provider and Organization Registry Service. GMDS Jahrestagung, Germany, 2010.
[4] Kande, S., Jain, N., Briks-Fader, T., Witting, K.: IHE IT Infrastructure Technical Framework Supplement, Healthcare Provider Directory (HPD), Draft for Public Comment. IHE International, 2010.
[5] Winkler, W.W.: Overview of Record Linkage and Current Research Directions. Research Report Series, Statistics 2006-2, 2006.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-270
Implementation and Experimentation of TEDIS: An Information System Dedicated to Patients with Pervasive Developmental Disorders Mohamed BEN SAIDa,b,1, Laurence ROBELc, Erwan VIONc, Antoine EL GHAZALIc, Bernard GOLSEc, Jean Philippe JAISa,b, Paul LANDAISa, b a Department of Biostatistics and Medical Informatics, APHP - Necker Enfants Malades Hospital, Paris, France b UPRES EA 4067- Paris Descartes University, Paris, France c Department of Pedo- Psychiatry APHP - Necker Enfants Malades Hospital, Paris, France
Abstract. This article describes the implementation and experimentation of TEDIS, an information system dedicated to patients with Pervasive Developmental Disorders. The experiment included 30 prospective records of patients aged from 3.2 to 7.5 years, with an average of 6.3 years. Preliminary patient data analysis highlighted the need to improve the data collection process by making relevant data systematically and accurately documented. Despite the small study sample size, the data analysis also showed the value of such an information system, making evident improvements in patient care and resource allocation after medical and clinical expert assessment. Keywords: Autism, Pervasive Developmental Disorder, Information System, Internet, Dynamic Web Server
1. Introduction
TEDIS, a French acronym, refers to an information system dedicated to patients with Pervasive Developmental Disorders. It was introduced in a prior publication [1]. Pervasive Developmental Disorder (PDD) represents a broad range of disorders characterized by the association of difficulties in communicating (verbal and non-verbal communication), impaired social interaction, and restricted, repetitive and stereotyped patterns of behaviour [2]. It may or may not be associated with mental delay. The frequent association with other neurological and organic disorders suggests a multifactorial aetiology of PDD [3,4,5]. Diagnosis is based upon a precise behavioural and communication analysis of children about three years old. The treatment consists of life-long care: a series of early and individually adapted measures in the domains of education, behaviour and psychology. Treatment compliance may significantly
Corresponding author: Dr. M. BEN SAID, Service de Biostatistique et d’Informatique Médicale, Hôpital Necker Enfants Malades, 149 rue de Sèvres, 75015 Paris, France; email: [email protected]
improve relational capabilities and social interaction, with some degree of autonomy and the possibility of language acquisition and non-verbal communication [6]. Estimates of the prevalence of autism and PDD point towards increasing rates [3,4,7,8]. The need for a database system to automatically process patients' medical and clinical information and support the multidisciplinary efforts in characterizing PDD remains a challenge [9,10,11]. The TEDIS production database system focuses on prospective patient assessment in the child-psychiatry department and integrates conclusions from the genetics, neurology, ophthalmology, ORL, radiology and biochemistry departments. Longitudinal follow-up of PDD patients will help evaluate clinical evolution and adjust medical, educational and social therapies. TEDIS was designed from the beginning to easily integrate PDD expert assessments from multiple centres at the regional and national levels. The expected dataset growth will allow evaluating the significance of correlations between PDD phenotypes and genetic and/or biological disorders, and will support research and decision making. Over the last five years, about 250 children between 3 and 6 years old who consulted in the Necker child-psychiatry department were diagnosed as affected with PDD. In a prior phase, PDD patient record information was collated and represented in paper forms identical to TEDIS' screen layouts [1]. In the present work we present the TEDIS implementation and discuss both technological and organizational aspects.
2. Material and Methods
TEDIS application development aimed to favour user control and ease of access to patient information for consultation, editing, updating and monitoring. Consistent, structured and explicit navigation, with few application-specific modes, facilitated direct use of the application by physicians and health care professionals. Authorized users first connect to the appropriate child psychiatry department and select one of the major actions/menus proposed by the system: "adding a new patient record, consulting an existing patient file, updating a patient file, monitoring patient data sets". Creating a new patient record was organized in three steps: creating the patient identification, then the initial state and medical histories, and finally the current assessment in child psychiatry and cross-domain disciplines. Medical diagnoses and therapy recommendations were completed at the cross-domain medical staff meeting. Experimentation included 30 prospective PDD patient records. They were selected among patients of interest in the context of case reporting during the cross-domain medical staff meetings. Each patient had at least past clinical information (initial state and medical history) and a current medical assessment, medical diagnoses and suggested therapy recommendations. Patient data were collected by medical experts using paper forms, updated during the staff meetings, and then entered into the system. We wanted to build a system to be used by physicians in clinical settings over decades. The supporting software had to be free of charge or of reduced cost. The information system to build had to be easy to maintain, modular, scalable and perennial. A structured and modular design allowed progressive implementation and functional enhancement [12,13]. The production database is part of an information system with an n-tier architecture. Secure connections [14] between the end user and the web server reinforce patient data privacy and confidentiality.
3. Results
Clinical data represented 80% of the data in the TEDIS patient record. Quality control data, such as "Did the patient undergo a specialized exam? Were preclinical exam results available? etc.", represented approximately 15% of the data, while administrative data represented about 5% of the data in the TEDIS patient record. Patients' clinical anomalies are summarized in Table 1. They concern a subset of 30 patient records, 29 boys and one girl, aged from 3.2 to 7.5 years with an average of 6.3 years. In 20 cases the main diagnosis was "Pervasive Developmental Disorder" (ICD-10 codes « F840-9 »). These were corroborated in 15 cases with secondary diagnoses of « Mental Delay » (ICD-10 codes « F70-9 »). In 12 cases patients had appropriate diagnoses prior to the current assessment, and in 3 cases the diagnosis was not appropriate. Table 1 confirms the heterogeneity of the affection [9], while Table 2 confirms the improvement, after the cross-domain experts' assessment, in PDD patients' schooling and social measures as well as in institutional care and medical coverage qualification.
4. Discussion
Involving medical experts to directly document and control patient data quality guided the design and implementation of the TEDIS information system. Despite the small sample size, the preliminary results motivated physicians and health professionals, and an awareness of behavioural change was observed with the experimentation of TEDIS. Besides clinical data, recommendations were made to systematically and explicitly document administrative and clinical quality control data in the free-text patient records, to ease the extraction of such information to feed the TEDIS database. We also had to consider technology evolution and make well-reasoned decisions. Both software tools used in TEDIS, Java™ and the MySQL™ database system, were owned by Oracle™ in 2010. Java™ and MySQL™ will be supported and kept available for free to the community of developers, with some uncertainty as to when they might become chargeable. MariaDB [15] and PostgreSQL [16] represent alternatives to MySQL™, while Java™ will remain widely supported by a large community of developers. We considered developing a rich client application using JavaFX™ [17], a consistent language independent of the browser, for coherence controls at the client side. After some experience, the conclusion was not to use it, essentially because of deployment issues. We kept a lightweight Internet client logic.
Table 1. 30 patients had an initial assessment before visiting Necker hospital. Clinical anomalies are heterogeneously observed among patients with PDD.
Clinical assessment at child psychiatry dept at Necker hospital | Observed anomalies | Total patients
Psychopathology assessment (average of 2.7 specialized tests) | 30 | 30
Motor and Speech tests | 10 | 13
Genetics | 5 | 14
Neurology | 2 | 9
Electroencephalogram | 4 | 13
Radiology | 7 | 11
Hearing |  | 6
Table 2. Improvement and more adapted care recommendations after cross-domain expert assessment.
 | Schooling measures | Ambulatory care | Institutional care | Social care | Qualified affected Long Duration Disease
Health care provided prior to the assessment at Necker Hospital | 20 | 26 | 12 | 16 | 2
Health care recommendations after clinical assessment at Necker Hospital | 21 | 17 |  | 19 | 17
5. Conclusion
Building an information system to manage patient data in a specific domain is a long, challenging process of data modelling, application design, implementation, deployment, evolution and maintenance. The whole process takes place in close collaboration with the medical experts and professionals involved in pervasive developmental disorders, to ensure data quality and make it easy for the professionals to systematically and continuously provide data to the information system. The system design must be open to extension to multiple clinical settings and promote collaborative national and international research, with respect for patient data privacy and interoperability standards. Such systems will contribute to a better knowledge of the aetiology and epidemiology and will support decision making and research. In this context, TEDIS contributes to building an information system in the domain of Pervasive Developmental Disorders. It is at an early stage of a growing and promising process for the medical, clinical and research communities. Acknowledgments. The medical experts and health professionals, particularly at the Child-Psychiatry Department at Necker Hospital, are warmly thanked for their support and feedback, as well as J.P. Necker for the technical assistance. This work was supported by University Paris Descartes, Necker Hospital – APHP – Paris, France.
References
[1] Ben Saïd M, Robel L, Vion E, Golse B, Jais JP, Landais P. TEDIS: an information system dedicated to patients with pervasive developmental disorders. Stud Health Technol Inform. 2010;160(Pt 1):198-202.
[2] Wilson HS, Skodol A. Special report: DSM-IV: overview and examination of major changes. Arch Psych Nurs. 1994 Dec.
[3] Lenoir P, Bodier C, Desombre H, Malvy J, Abert B, Ould Taleb M, Sauvage D. Prevalence of pervasive developmental disorders: a review. L'Encéphale (2009) 35, 36-42.
[4] Fombonne E. Epidemiology of Pervasive Developmental Disorder. Pediatr Res 2009 Jun;65(6):591-8.
[5] http://www.ccne-ethique.fr/docs/CCNE-AVISN102_AUTISME.pdf. Last visited on Apr 30th 2011.
[6] Politique de prise en charge des personnes atteintes d'autisme et de troubles envahissants du développement (TED). Circulaire interministérielle n° 2005-124 du 8-3-2005 (NOR : SANA0530104C). http://www.education.gouv.fr/bo/2005/15/default.htm. Last visited on Apr 30th 2011.
[7] Dodds L, Spencer A, Shea S, Fell D, Armson BA, Allen AC. Validity of autism diagnoses using administrative health data. Chronic Dis Can. 2009;29(3):102-7.
[8] Direct health care costs for children with pervasive developmental disorder: 1666-2002. Adm Policy Ment Health & Ment Health Serv Res (2007) 34:213-220.
[9] Amaral DG. The promise and the pitfalls of autism research: an introductory note for new autism researchers. Brain Res. 2011 Mar 22;1380:3-9.
[10] Johnson SB, Whitney G, McAuliffe M, Wang H, McCreedy E, Rozenblit L, Evans CC. Using global unique identifiers to link autism collections. J Am Med Inform Assoc 2010;17:689-695.
[11] McConachie H, Barry R, Spencer A, Parker L, Le Couteur A, Colver A. Daslne: the challenge of developing a regional database for autism spectrum disorder. Arch Dis Child 2009;94:38-41.
[12] Nikolaidou M, Anagnostopoulos. A systematic approach for configuring web-based information systems. Distributed and Parallel Databases, Volume 17, Issue 3, March 2005. Kluwer Academic Publishers, Hingham, MA, USA.
[13] Ntarmos N, Triantafillou P, Weikum G. Statistical structures for Internet-scale data management. The VLDB Journal (2009) 18:1279-1312. Springer.
[14] Ben Saïd M, le Mignot L, Mugnier C, Richard JB, le Bihan-Benjamin C, Jais JP, Guillon D, Simonet A, Simonet M, Landais P. Multi-source information system via the Internet for end-stage renal disease: scalability and data quality. Stud Health Technol Inform. 2005;116:994-9.
[15] http://montyprogram.com/. Last visited on Apr 30th 2011.
[16] http://www.postgresql.org/. Last visited on Apr 30th 2011.
[17] http://download.oracle.com/javafx/tutorials.html. Last visited on Apr 30th 2011.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-275
Traceability of Patient Records Usage: Barriers and Opportunities for Improving User Interface Design and Data Management
Ricardo CRUZ-CORREIA a,1, Luís LAPÃO a,b, Pedro Pereira RODRIGUES a
a CINTESIS – Centre for Research in Health Technologies and Information Systems, Faculdade de Medicina da Universidade do Porto, Portugal
b Instituto de Higiene e Medicina Tropical – Universidade Nova de Lisboa
Abstract. Although IT governance practices (like ITIL, which recommends the use of audit logs for proper service level management) are being introduced in many hospitals to cope with increasing information quality and safety requirements, the maturity level of hospital IT departments is still not sufficient to make frequent use of audit logs. This paper aims to address the issues related to the existence of audit trails (AT) in patient records, to describe the situation in hospitals, and to produce recommendations. Representatives from four hospitals were interviewed regarding the use of AT in their hospital IS. Very few AT are known to exist in these hospitals (on average 1 per hospital, out of an estimated 21 existing IS). CIOs should be much more concerned with the existence and maintenance of AT. Recommendations include server clock synchronization and the use of advanced log visualization tools. Keywords. Traceability, patient records, IT governance, AT
1. Introduction

The practice of medicine has been described as being dominated by how well information is collected, processed, retrieved, and communicated [2]. An important challenge is to guarantee optimal conditions for health professionals to access clinical data while hospital Information Systems (IS) are still being developed [12]. Although great advances have been made over the years, on-demand access to clinical information is still inadequate in many settings, contributing to duplication of effort, excess costs, adverse events, and reduced efficiency [7]. Shapiro et al. [21] found that, although emergency department doctors believe their patients would benefit from longitudinal records, they only try to obtain such data in 10% of the cases. Furthermore, Hripcsak et al. described access rates to WebCIS in the emergency department [9], which indicated that data generated before the current emergency visit are accessed often, but by no means in a majority of cases (5% to 20% of the encounters), even when the user was notified of the availability of such data. Cruz-Correia et al. [6] have shown that some clinical reports are still used one year after creation, regardless of the context in which they were created, although significant differences existed in reports created during distinct encounter types.
1 Corresponding author: E-mail: [email protected]
The usage of patients' past information (data from previous encounters) varied according to the healthcare setting and the content.

Audit Trails (AT) are records with retention requirements that show who has accessed what in an IS, when, and what operations were involved [3]. Thus, the most common AT function is access management [19]. Nonetheless, other relevant functions exist, such as the monitoring of employee behaviour and of computer failures. An AT can be used and analysed for many different purposes, but the impact is certainly higher when it is used as evidence in lawsuits concerning healthcare practices, or in the internal validation of new practices or IS. The best way to assess accountability in lawsuits would be to backtrack the records to their state at the point of care; thus, a trail is mandatory [1]. From the clinical point of view, relevance has been given to AT, for example, in the validation of clinical decision support systems in pediatric critical care [16] or in the integration of multi-centric clinical trial data management [4].

Creating an AT is an effort involving the implementation of several audit controls [19], possibly integrated among multiple IS (e.g. RIS, EHR and PACS). RIS are one of the groups of systems where this integration has been better audited [5]. The resulting audit files, possibly distributed in the network, are what in computer science is commonly called the "log files" of an IS. However, log files are most of the time only loosely defined, serving debugging purposes. The notion of an AT extends that of a log file, in the sense that it can also be used to monitor the creation of electronic health records, the import and export of protected health information from and to external entities, and the modification, viewing and deletion of information, making it possible to rely on the integrity of the records in health information exchanges and on the data fed into personal health records [19]. Accordingly, EuroRec approaches this issue in the ISO/HL7 21731 (RIM) standard, as does the IHE initiative [10].

Although IT governance practices (like ITIL, which recommends the use of logs for proper service level management) are being introduced in many hospitals to cope with increasing information quality and safety requirements, the maturity level of hospital IT departments is still not sufficient to make frequent use of logs [15]. The application of data mining and machine learning techniques to medical knowledge discovery tasks is now a growing research area. These techniques vary widely and are based on data-driven conceptualizations, model-based definitions or a combination of data-based knowledge with human-expert knowledge [18]. Process mining is a recent technology that aims to derive process models from observed user behaviour with IS [13]. The main goal of process mining is to extract knowledge from the event logs recorded by an IS. These logs can be used to uncover process, control, data, organizational, and social structures [22]. Grilo et al. [8] consider that hospital IT departments play a critical role in developing and implementing adequate IS strategies, supported by a flexible and robust healthcare information management framework that leads to interoperable platforms. We therefore interviewed the relevant CIOs to build a picture of current log usage.
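As a concrete, purely illustrative sketch of what one entry of such an audit trail could contain, the class below combines the elements named in this paper: who accessed what, when, and which operation was involved, together with session details that are taken up again in the recommendations later in the paper. The class and field names are assumptions made for the example, not taken from any of the systems studied here.

```java
import java.time.Instant;

/**
 * Illustrative sketch of a single audit-trail entry: who accessed what,
 * when, and which operation was involved, plus session details.
 * Field names are assumptions, not drawn from any hospital IS described
 * in this paper.
 */
public final class AuditEntry {

    public enum Operation { SEARCH, VIEW, CREATE, UPDATE, DELETE, LOGIN, LOGOUT }

    private final Instant timestampUtc;    // unambiguous UTC instant, millisecond precision
    private final String userId;           // individual account, never shared
    private final String patientId;
    private final Operation operation;
    private final String recordSection;    // e.g. "radiology report"
    private final String terminalIp;       // helps re-create access scenarios
    private final String sessionEndReason; // "user", "timeout", ... (null while session open)

    public AuditEntry(String userId, String patientId, Operation operation,
                      String recordSection, String terminalIp, String sessionEndReason) {
        this.timestampUtc = Instant.now();
        this.userId = userId;
        this.patientId = patientId;
        this.operation = operation;
        this.recordSection = recordSection;
        this.terminalIp = terminalIp;
        this.sessionEndReason = sessionEndReason;
    }

    /** One line per entry, suitable for an append-only audit file. */
    @Override
    public String toString() {
        return String.join("|", timestampUtc.toString(), userId, patientId,
                operation.name(), recordSection, terminalIp,
                sessionEndReason == null ? "" : sessionEndReason);
    }
}
```

Storing the timestamp as a UTC instant (rendered in ISO 8601) is one way to keep entries from different servers comparable, which matters once several IS feed a single audit trail.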
This paper aims to address the issues related to the existence of AT/logs in patient records, to describe the situation in Portuguese hospitals, and to produce recommendations.
2. Current Scenario in Portuguese Hospitals

Although some Hospital Information Systems (HIS) have AT to record actions performed by users, it is not known how widespread this functionality is. Aiming to describe the current scenario in Portuguese hospitals, interviews were conducted with representatives of 4 IT departments. These representatives were the director of the department (n=3) or someone directly responsible for maintaining the HIS (n=1). The interviews were performed by telephone during January 2011. The main questions and the answers given are presented in Table 1.

Table 1. Questions put to IT department representatives of hospitals about AT in their IS

Question: Number of IS that have AT?
Answers: One IS per institution. Actual answers were: E/R, LIS and PACS (twice). There are applications that, although they have the ability to store viewing logs, have this functionality disabled (e.g. the Pathology Lab IS).

Question: How frequently does anybody ask to use this data?
Answers: All representatives said very rarely or never. One representative argued that very few people knew it was even possible to have this information.

Question: What were the reasons to access AT?
Answers: Only one representative answered that the AT were used (very rarely) to audit whether doctors had seen the radiology reports in the ER before discharging the patient.

Question: What are the potential main benefits of recording these logs?
Answers: It allows the institution to reason about the usefulness of each software component and calculate its cost/benefit; it helps to do health services research; it may legally support medical decisions; it may dissuade inappropriate access to patient data by users.

Question: What are the main problems in recording those logs?
Answers: Too much data to maintain (one answer); it makes the systems slower (one answer).

Question: Do they have direct access to them, or do they need to ask the software providers?
Answers: Direct access, by accessing the database tables (all answered the same).
These representatives were only certain of one IS with AT (on average there are 21 different IS in these hospitals [20]). At least one more IS with AT is known by the authors to exist in 3 of the 4 hospitals, although the representatives did not know about it. In only one of these hospitals was the AT feature mentioned in the requirements for new IS. It was obvious from the interviews that there was little concern about confirming that AT existed and were being maintained, about including this feature in the requirements of new IS, or about asking for upgrades that include AT in existing IS. In one of the hospitals there was even a LIS whose ability to maintain AT had been disabled by the IT department staff. Regarding the frequency of access to the AT, all representatives mentioned that they accessed or were asked to access the AT very rarely or never. They did not feel it was their duty to control who is accessing patient data. It was also mentioned that very few people outside the IT departments knew it was even possible to collect these data, and that if other people knew, maybe they would ask for them. Concerning the reasons to access AT, one representative said that he, together with the head of the radiology department, used them to audit which doctors were discharging patients from the Emergency Room without viewing the reports. The other three representatives did not remember ever using them. Four potential benefits were stated in the interviews. One representative intended to use AT to maintain or remove implemented IS functionalities based on the amount of usage they had. Another mentioned that further health services research could be performed (e.g. on workflows) based on these AT. Checking what patient data was available and was seen by health professionals at the time of clinical decisions, for legal or ethical reasons, was also stated.
Finally, it was mentioned that the announcement of AT could dissuade inappropriate access to patient data. The two problems associated with AT were the amount of disk space they take and the fact that they would make the IS work more slowly. Regarding access to the AT data, all representatives said they would need to access the IS database tables directly if they wanted to audit the accesses of a particular user or to a particular piece of patient data.
3. Discussion

One of the causes of the representatives' lack of awareness of the potential of using audit logs to improve both the quality and the safety of hospital processes is certainly the shortage of professionals in the IS departments [14].

One of the most important pieces of an AT is the time when each operation is performed. For it to be comprehensible, the values must have sufficient detail (seconds or milliseconds) and be unambiguous. The difficult part to guarantee is unambiguity, due to differences in stored time formats, to unsynchronized clocks and to changes in summer time (i.e. daylight saving time). We recommend that dates are stored as complete dates [11] and that all servers have their clocks synchronized with each other using known standards [17] and with an official time server.

Regarding session information, it can be useful that, besides the username and the session start and end dates, the terminal IP number used to access information and the way the session was terminated (e.g. by the user, by timeout or otherwise) are also stored. The IP number can be used to re-create the scenarios of information access more effectively, and to detect irregular behaviours such as the same user being logged in on two different terminals at the same time. The method used to terminate the session can be used to audit different user habits regarding leaving the session open for other users to use the IS.

Apart from session start and end, the AT should also record patient searches, data changes and the visualization of patient data. Although the recording of data inserts and updates is more common, recording patient searches and the visualization of patient data can help attain some of the most important potentialities of the AT, namely health services research and legal support of medical decisions. A final requirement, seldom observed, is that access to the AT itself, including the audit control functions and the audit files, should always be controlled to ensure the integrity of the records. The implications of poor security of AT are vast and harmful. Furthermore, health information managers must confirm that the auditing functions are turned on and fully functional. In fact, health information managers would be quite surprised to learn that AT systems are not active or that their resulting audit files are kept only for a rather short period of time. De-identifying users when doing health services research should be mandatory.

Finally, we recommend the existence of an audit visualization tool that makes it easy to present (1) all actions performed by a user, (2) all actions performed on the record of a patient, and (3) the navigation flows of groups of users, by presenting them as graphs and using graph theory to analyse them.

As future work, we aim to analyse the few existing databases of AT separately to detect the factors associated with the viewing of patient data. We intend to implement a prototype based on these factors to calculate information relevance in real time. Finally, we will evaluate the accuracy and potential effectiveness of the prototype by analysing the accuracy of its estimations and changes in user behaviour in a real hospital scenario.
Acknowledgements: This work was supported by the Portuguese FCT, through the research project "Optimizing Information Systems for healthcare: improving Graphical User Interface and Storage Management through Machine Learning techniques on user logs data" [PTDC/EIAEIA/099920/2008].
References
[1] Bakker AR. The need to know the history of the use of digital patient data, in particular the EHR, IJMI 76 (2007), 438-441.
[2] Barnett O. Computers in medicine, JAMA 263 (1990), 2631-2633.
[3] Brodnik MS, et al. Fundamentals of Law for Health Informatics and Information Management, AHIMA, Chicago, USA, 2009.
[4] Chen D, et al. Turning Access into a web-enabled secure information system for clinical trials, Clinical Trials 6 (2009).
[5] Chen X, et al. HIPAA's compliant auditing system for medical imaging system, Conference Proceedings of the IEEE Engineering in Medicine and Biology Society 1 (2005), 562-563.
[6] Cruz-Correia RJ, et al. Determinants of frequency and longevity of hospital encounters' data, BMC Med Inform Decis Mak 10 (2010).
[7] Feied CF, et al. Clinical Information Systems: Instant Ubiquitous Clinical Data for Error Reduction and Improved Clinical Outcomes, Acad Emerg Med 11 (2004), 1162.
[8] Grilo A, Jardim-Goncalves R. Challenges for the Development of Interoperable Information Systems in Healthcare Organizations.
[9] Hripcsak G, et al. Emergency Department Access to a Longitudinal Medical Record, JAMIA 14 (2007), 235-238.
[10] IHE - Integrating the Healthcare Enterprise, IT Technical Framework Profiles, 2005.
[11] ISO, Data Elements and Interchange Formats - Information Exchange - Representation of Dates and Times, 2004.
[12] Kuhn K, et al. Expanding the scope of health information systems - from hospitals to regional networks, to national infrastructures, and beyond, Methods Inf Med 46 (2007), 500-502.
[13] Lang M, et al. Process Mining for Clinical Workflows: Challenges and Current Limitations, in: MIE 2008, Göteborg, Sweden, 2008, pp. 229-234.
[14] Lapão L. Survey on the status of the hospital information systems in Portugal, Methods Inf Med 46 (2007), 493-499.
[15] Lapão L. Organizational Challenges and Barriers to Implementing "IT Governance" in a Hospital, EJIS 14 (2011).
[16] Mack EH, et al. Clinical decision support systems in the pediatric intensive care unit, Pediatr Crit Care Med (2009), 1.
[17] Mills D. Internet time synchronization: the network time protocol, IEEE Transactions on Communications 39 (2002), 1482-1493.
[18] Mitchell T. Machine Learning, McGraw Hill, Burr Ridge, IL, 1997.
[19] Nunn S. Managing audit trails, Journal of AHIMA 80 (2009), 44-45.
[20] Ribeiro L, et al. Information systems heterogeneity and interoperability inside hospitals - a survey, in: HealthInf 2010, Valencia, Spain, 2010, pp. 337-343.
[21] Shapiro J, et al. Approaches to patient health information exchange and their impact on emergency medicine, Ann Emerg Med 48 (2006), 426-432.
[22] Van der Aalst W, et al. Workflow mining: discovering process models from event logs, IEEE Transactions on Knowledge and Data Engineering 16 (2004), 1128-1142.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-280
Important Ingredients for Health Adaptive Information Systems a
Yalini SENATHIRAJAHa,1, Suzanne BAKKEN a,b Columbia University Department of Biomedical Informatics b Columbia University School of Nursing
Abstract: Healthcare information systems frequently do not truly meet clinician needs, due to the complexity, variability, and rapid change in medical contexts. Recently the internet world has been transformed by approaches commonly termed ‘Web 2.0’. This paper proposes a Web 2.0 model for a healthcare adaptive architecture. The vision includes creating modular, user-composable systems which aim to make all necessary information from multiple internal and external sources available via a platform, for the user to use, arrange, recombine, author, and share at will, using rich interfaces where advisable. Clinicians can create a set of 'widgets' and ‘views’ which can transform data, reflect their domain knowledge and cater to their needs, using simple drag and drop interfaces without the intervention of programmers. We have built an example system, MedWISE, embodying the user-facing parts of the model. This approach to HIS is expected to have several advantages, including greater suitability to user needs (reflecting clinician rather than programmer concepts and priorities), incorporation of multiple information sources, agile reconfiguration to meet emerging situations and new treatment deployment, capture of user domain expertise and tacit knowledge, efficiencies due to workflow and human-computer interaction improvements, and greater user acceptance. Keywords. healthcare Web 2.0, collaboration, human-computer interaction, user configurability, architecture of participation, MedWISE.
1. Introduction

Current healthcare information systems suffer from numerous difficult problems, in part due to the high variability of medical information needs, the difficulty of addressing rapidly changing or emergent conditions, the highly collaborative, social, and high-stakes nature of the work, limitations of human cognition and of current human-computer interaction paradigms, and other factors. These contribute to the lack of acceptance of EHRs by clinicians [1, 2, 3]. Therefore we created a new model for healthcare information systems, embodied in MedWISE, a widget-based EHR platform we have built, based on 'Web 2.0' philosophies and technical approaches, which include an emphasis on user control, participation, and communication. This paper puts forth a vision of how these might solve problems in healthcare, with implications at multiple levels, from small individual user cognitive effects to large sociotechnical changes in the way healthcare information systems are designed and built. We present a model for discussion, and a clinical practice scenario2.
1 Corresponding Author.
The discussion covers implications and areas for further inquiry. Our belief is that only a scientific approach (i.e. testing such systems in controlled conditions) will resolve some of the potentially controversial issues. MedWISE embodies some parts of the model.
2. Web 2.0 Model for a Healthcare Adaptive Architecture

Figure 1 presents a model schematic showing information flow within a worldwide system and the authoring and distribution process. A description of the model components follows.
Figure 1. Model diagram.
2.1. Model Description

Essential components of our Web 2.0 model for adaptation in health care include a standards-based platform supporting the following functions:
• A back-end service-oriented architecture (SOA) that permits applications to incorporate information from diverse within-hospital, external clinical and other data stores and applications (e.g. HL7 and medical RSS feeds, notes, alerts).
• A display/interaction layer that allows users, individually and collaboratively, to recombine this information in unanticipated ways via drag/drop assembly using a palette of data and formatting options. This includes programming, data visualization and transformation, and rich interfaces. This must be as effortless and transparent as possible, for example by saving objects from the viewing history as a template.
• Self-describing widgets that enable annotation and incorporation into larger collections, with facility for the output of one process to become the input of another (a sketch of one possible widget description is given after this list).
• Means to store, share, and aggregate widgets and views that support future use, repurposing, and sharing with others at multiple levels, e.g., specialty, institution, national, international. Machine learning applied to aggregations of widgets and views would enable dynamic suggestions of elements for viewing or authoring (much as Amazon suggests books). Eventually this aggregation would form a large collection embodying clinicians' tacit or explicit expertise and institutional knowledge, allowing for insights based on group wisdom.
• Explicit communication and collaboration features for collegial communication and group authoring.

2 More detailed information on MedWISE and Web 2.0 is available at http://www.ehr2.org, in a paper by Cheung et al [4], and at http://www.ncbi.nlm.nih.gov/pubmed?term=senathirajah. http://tinyurl.com/ehr2scenario contains an illustrative scenario.
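The following sketch shows one possible way the "self-describing widget" listed above could be expressed. The interface, its method names and the example data-element names are assumptions made for illustration; they are not taken from the MedWISE implementation. The point is only that a widget declares its inputs and outputs, so that one widget's output can feed another, and carries metadata that supports annotation, sharing and aggregation.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative sketch of a self-describing widget contract (not the
 * MedWISE API). A widget declares the data elements it consumes and
 * produces, so widgets can be chained, and exposes metadata (author,
 * specialty, annotations) so collections of widgets can be stored,
 * shared and aggregated.
 */
public interface SelfDescribingWidget {

    /** Stable identifier used when the widget is shared or aggregated. */
    String id();

    /** Author, specialty, free-text annotations, institutional context, etc. */
    Map<String, String> metadata();

    /** Names of the data elements this widget consumes, e.g. "serum-creatinine". */
    List<String> inputs();

    /** Names of the data elements this widget produces for downstream widgets. */
    List<String> outputs();

    /** Transform the supplied input values into this widget's output values. */
    Map<String, Object> evaluate(Map<String, Object> inputValues);
}
```

Under this kind of contract, a drag-and-drop assembly surface only needs to match one widget's declared outputs to another's declared inputs, which is what would let clinicians rather than programmers compose views.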
3. Discussion

3.1. Advantages

Increased information availability: integration of more information sources and meaningful display tools, at the point of decision, via self-updating feeds.

Capture of tacit knowledge: expert-created 'views' would appeal to colleagues, facilitating adoption (prestige bias) [5], and could inform novices [6], capturing on the fly what specific clinicians considered important. Views encompassing institutional policies could capture knowledge which is currently often tacit.

Information display & visualization: flexible placement of elements allows grouping those which should be viewed together. Juxtaposition can foster insight [7-8] and constitute decision support, serving as a reminder. Custom alerts (geared to the individual patient), delivered to various devices, could assist with error avoidance. Displays involving direct perception rather than calculation can assist safety. User-composable tools may facilitate more reflective understanding of the relationship of data to patient care and, ultimately, better design.

HCI and workflow advantages: distributed cognition theory states that people use tools to decrease the cognitive load (internal mental processing) in their tasks, freeing up finite cognitive resources (perception, attention, memory) for the main task. An example is writing something down instead of keeping it in memory. Mitigation of the keyhole effect3 is another result, since the user can assemble highly relevant elements together in the scarce screen space [9]. This may substitute for the usual actions of workers in high-reliability, high-stress environments, who often work around computers [10] using easily manipulable supplementary objects (e.g. paper) [10, 11, 12].

Workflow facilitation: team members could contribute widgets pertaining to their own roles, and highly local needs could be accommodated (e.g., a mashup of admission data and dialysis chair locations for amputee dialysis patients could meet fire regulations mandating staff assistance for evacuation). Reduced and accumulated work results from decreased searching and material re-use. Sharing created interfaces reduces total work, since subsequent users need not do the same search and retrieval, resulting in substantial time savings over a population of users and patient records. Complete views also serve as a reminder system [13, 14].

Sociotechnical change, communication and collaboration: hospital workers prefer synchronous communication [15], leading to an interrupt-driven environment with high cognitive costs in terms of memory failure and errors [15]. Care interfaces with Web 2.0 rapid communication tools may help minimize this cost. Shareable user-customized displays also provide 'common ground' for consults or handoffs.
3 'Keyhole effect' refers to the problem in computer system design that the user must access a vast array of data elements through a small screen, as if viewing a large room through a keyhole.
Rapid reconfigurability: since system change does not require programmers, emerging needs such as public health emergencies or the deployment of new treatments could be addressed more rapidly. Public health elements are easily incorporated.

3.2. Disadvantages

Foremost is the concern that excess variability or omissions could lead to errors (e.g. diagnosis momentum [16]). We believe a situation of equipoise with respect to current systems exists. Currently, aside from attested notes, there is no real monitoring of which user views what data (this is highly variable [17], and lost as a search sequence)4. Our study did not find substantial diagnosis momentum errors. Decision support to enforce viewing of complete element sets can be built in. There are perhaps three levels of concern: 1) widgets which merely rearrange pre-existing data close to their usual form, 2) widgets which reformat, perform calculations, or otherwise fallibly change the data representation, and 3) widgets which implement more sophisticated decision support. The latter is the most concerning and would require careful oversight, e.g. a vetting system involving clinical experts, administration, and IT. Default configurations and conventions (e.g. for format, layout, display) can be used to foster the consistency important for usability.

Will sufficient numbers of users want to adopt and use the system to create their own interfaces? HIS users have high levels of education, numeracy, algorithmic thinking, and computer savvy, and great interest in improving care effectiveness. Younger users already create and share content online. Our studies show great user enthusiasm, skill and engagement in using MedWISE features to solve problems and fit different contexts and needs. It is not necessary that a majority embrace it; in fact we expect that a small proportion of users will create many complex tools, which their colleagues will adopt, along with widespread use of the simpler functions.
4. Conclusion

Further laboratory study and controlled deployments are needed to resolve issues of concern. Implementation of such systems is likely to open up new avenues for research in HCI and efficiency, tacit knowledge, technology acceptance/adoption, new evaluation methods, and data mining on a vast library of user-created tools. The different mode of software creation has implications for tool development in other complex, critical environments in which user expertise is paramount. We believe this approach can facilitate evolutionary development of HIS and improve task-technology fit. This would allow users to make software reflect and fit their mental models, domain knowledge, collaborations, and emerging needs, reversing the need for the user to fit herself to the software instead of the software fitting the domain and user. It makes domain knowledge paramount. We hope this may allow the full potential of computing to leverage human creativity and knowledge in health care to better meet the full requirements of this most complex and critical of domains.
4 In MedWISE the usual CIS is simultaneously available, so there is no functional penalty for incorporating the Web 2.0 interface.
Acknowledgments: Dr. Senathirajah was funded by Irving Institute for Clinical and Translational Research grant #UL1RR024156, and NLM/RWJ ST15 LM007079-15. We thank Drs. David Kaufman and Herbert Chase.
References
[1] Wilson EV, Lankton NK. Modeling patients' acceptance of provider-delivered e-health. J Am Med Inform Assoc. 2004 Jul-Aug;11(4):241-8.
[2] Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Quarterly. 2003;27(3):425-78.
[3] Gadd CS, Baskaran P, Lobach DF. Identification of design features to enhance utilization and acceptance of systems for Internet-based decision support at the point of care. Proc American Medical Informatics Association Symp. 1998:91-5.
[4] Cheung KH, Yip KY, Townsend JP, Scotch M. HCLS 2.0/3.0: health care and life sciences data mashup using Web 2.0/3.0. Journal of Biomedical Informatics. 2008;41(5):694-705.
[5] Henrich J. Cultural transmission and the diffusion of innovations: adoption dynamics indicate that biased cultural transmission is the predominate force in behavioral change. American Anthropologist. 2001;103:992-1013.
[6] Patel V, Groen GJ. The general and specific nature of medical expertise: a critical look. In: Ericsson KA, Smith J, editors. Toward a General Theory of Expertise: Prospects and Limits. Cambridge, UK: Cambridge University Press; 1991. p. 93-125.
[7] Kerne A, Koh E, Smith S, Choi H, Graeber R, Webb A. Promoting emergence in information discovery by representing collections with composition. Proceedings of the 6th ACM SIGCHI Conference on Creativity and Cognition; 2007 June.
[8] Kerne A, Koh E, Dworaczyk B, Mistrot JM, Choi H. combinFormation: a mixed-initiative system for representing collections as compositions of image and text surrogates. 2006.
[9] Woods D. Toward a theoretical base for representation design in the computer medium: ecological perception and aiding human cognition. In: Flach JPH, Caird J, Vicente KJ, editors. Global Perspectives on the Ecology of Human-Machine Systems. Hillsdale, NJ: Lawrence Erlbaum; 1995. p. 157-88.
[10] Xiao Y. Artifacts and collaborative work in healthcare: methodological, theoretical, and technological implications of the tangible. Journal of Biomedical Informatics. 2005 February;38(1):26-33.
[11] Bentley R, Hughes JA, Randall D, et al. Ethnographically-informed systems design for air traffic control. 1992 ACM Conference on Computer-Supported Cooperative Work; 1992; Toronto, Ontario, Canada: Association for Computing Machinery.
[12] Cook RI, Woods DD. Adapting to new technology in the operating room. Hum Factors. 1996 Dec;38(4):593-613.
[13] Currie LM, Mellino LV, Cimino JJ, Li J, Bakken S. Requirements specification for automated fall and injury risk assessment. Stud Health Technol Inform. 2006;122:134-8.
[14] Currie LM, Graham M, Allen M, Bakken S, Patel V, Cimino JJ. Clinical information needs in context: an observational study of clinicians while using a clinical information system. AMIA Annu Symp Proc. 2003:190-4.
[15] Coiera E, Tombs V. Communication behaviours in a hospital setting: an observational study. British Medical Journal. 1998 February 28;316:673-6.
[16] Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine. 2003;78:775-80.
[17] Chen E. Knowledge discovery in clinical information system log files and the implications for wireless handheld clinical applications [Doctoral dissertation]. New York: Columbia University; 2004.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-285
Everyday Ethical Dilemmas Arising With Electronic Record Use in Primary Care
Ellen BALKA a,1, Marianne TOLAR a
a Simon Fraser University, Burnaby BC, CANADA
Abstract. The introduction of electronic medical record systems (EMRs) into primary care settings alters work practices, introduces new challenges, and new roles. In the process of integrating an EMR into a primary care setting, clinic staff faced ethical challenges in their everyday work practices resulting from workarounds undertaken to compensate for a poor fit between system design and work practices, issues related to system access, and governance gaps. Examples of these issues are presented, and implications for system design are discussed. Keywords. Electronic Health Records, Electronic Patient Records, Healthcare Policy Issues, System Implementation and Management, Security and Privacy
1. Introduction

In recent years, primary care practitioners, governments and health advocates have increasingly turned towards computerization of primary care settings in an effort to improve continuity of care [1], service delivery through enhanced chronic disease management [2], and to support population health initiatives and improve monitoring of practice service delivery and quality [3, 4]. Health researchers, payers and policy makers have sought access to data collected and organized through electronic medical records (EMRs) in primary care settings in an effort to answer varied questions about topics such as the relationship between primary care interventions and health outcomes, access to services, achievement of health management targets for the practice, etc. The introduction of EMRs into primary care settings has altered work practices in clinical settings [5, 6], and in some cases has led to new roles (e.g., for technical support staff or consultants required to maintain a practice's computer system). Amidst the emergence of new work practices which have accompanied EMR use, primary care clinics often face new ethical dilemmas. In this paper, we provide an overview of some of the ethical dilemmas that arose for staff working in a community based primary health care centre, in relation to the clinic's use of an EMR. We learned of these dilemmas through ongoing work with the clinic over several years, during which time we adopted an interventionist research role in relation to the clinic [5, 6, 7, 8, 9]. Members of our team conducted in-depth ethnographic observations at the clinic, while we provided support to the clinic (e.g., documenting EMR meetings [9]; providing technical support [8, 9]). In this paper, we identify three types of ethical issues which emerged in relation to the use of the EMR: issues related to a poor fit between the EMR and work practices; issues related to system access; and issues related to gaps in governance.
1 Corresponding Author: Prof. Ellen Balka, School of Communication, Simon Fraser University, 8888 University Drive, Burnaby BC, V5A 1S6 Canada; E-mail: [email protected].
After a brief overview of our study site and data collection methods, examples of each type of issue are presented and discussed. We conclude the paper with a discussion of implications for design and related actions.
2. The Research Site: Computerization of an EMR in a Community Health Centre

We followed a community health centre (CHC) in its use of an EMR system, which was introduced to support multiple goals, including prevention and management of chronic diseases. In the CHC, clinical staff (six physicians, one clinical pharmacist, and one nurse practitioner) work together with medical office assistants (MOAs), a medical office administrator, and the executive director. The clinic also employs and trains students for clinical and administrative tasks. The EMR system has been in use since its implementation in 2004, which we have written about elsewhere [5]. The system supports storage and retrieval of administrative and clinical information (diagnoses, medication, lab results, etc.), documentation of patient encounters, and appointment scheduling and billing. Research reported here draws on the clinic's recent efforts to utilize advanced features of their EMR system, particularly those involving secondary use of data collected through the EMR.

2.1. Methods of Data Collection

Our research has followed a socio-technical approach [10], and used ethnographic research methods. We began working with the clinic in 2003, before they had implemented an EMR. Our work with the clinic has continued through a succession of grants, the most recent of which had a specific action research (AR) focus. Designed to bridge the gap between theory, research, and practice [11], AR generates research about a social system while trying to change it [12]. As a planned part of the AR orientation of the project, a researcher assumed responsibility for IT support in a broad sense, while simultaneously conducting participant observation and informal interviews. A concluding round of formal interviews with clinical and administrative staff was conducted. The analysis in this paper draws on data collected by researchers between August 2008 and December 2010. One researcher spent an average of nine days a month at the CHC for nine months. A second researcher averaged four days a month at the CHC over eighteen months. Research staff were integrated into ongoing clinic activities, and interacted constantly with all clinic staff. The focal point of the researchers' observations and interviews was health care practitioners' and support staff's use of electronic records, and the issues and challenges clinic staff faced as they worked to incorporate secondary use of clinical data into the clinic's work practices. Research reported here was approved by a university research ethics board, operating within the guidelines of Canada's Tri-Council research ethics guidelines. All study participants—support staff and practitioners at the CHC—were aware of the study and consented in writing to participate in the study, where participation consisted of being observed using the electronic record system outside of clinical encounters, and being interviewed about use of the electronic record system.
3. Results: Ethical Issues Arising in EMR Use

In identifying ethical issues, we rely on an understanding of ethics as a situation where "two or more valid ethical requirements or legitimate interests conflict and consensus does not exist as to how it should be resolved" [13]. Ethical issues which arose in our primary care study site can be grouped into three areas, which are discussed below.

3.1. Ethical Issues Arising as a Result of Work Practice Changes

That new computer systems play a significant role in changing work practices is a well documented phenomenon [14, 15]. In the clinic in which our research was carried out, we observed several instances in which the EMR system, if used as intended, altered work practices in a manner which caused problems for staff. In order to maintain a workflow which worked for staff, staff collaboratively developed workarounds, which solved the workflow issues but also created ethical dilemmas. For example, the EMR system was designed so that only doctors could add information to a patient's record. However, in our clinical setting, MOAs normally collected some information (such as a patient's height and weight and some vital signs) before patients were seen by a doctor. In a paper-based system, they could simply record the information in a chart. However, with the EMR, access permissions could not be altered on the computer to allow the MOAs to enter such information into the chart. As all staff in the clinic were in agreement that it was desirable for the MOAs to collect and record this information, a workaround was developed. A computer id for a fictitious doctor was created, for use by all of the MOAs, who could, using this id, enter the information they had always collected into the patient's chart. However, because the id was shared (reflecting licensing of the product, which was billed by the number of user ids created), it would not be possible to accurately audit which of the MOAs had entered information, which was identified as a problem and presented an ethical dilemma for some.

3.2. Ethical Issues Related to Access

Numerous ethical issues arose in relation to who should have access to what information, in what format, and under what circumstances. Some of these issues related to the emergence of new roles related to EMR use, while other issues related to either work practices or technical affordances, or both. The most challenging issue our research team faced had to do with the emergence of new roles related to our explicit action research focus. Part of the challenge related to a conflict between the provincial medical association's view of privacy (that no one other than a patient's physician should see any information about a patient) and the realities associated with using an EMR in a clinical setting, e.g., the software allowed MOAs to see some patient information,2 and the resolution of technical problems often required technical support staff to come into incidental contact with patient records. As the EMR system stabilized and the clinic began pursuing advanced features (such as practice searches, which allowed the clinic to identify all patients with a particular attribute, such as diabetes), the fact that we were engaged in research (though not about patients or using patient data) caused one staff member to suggest that researcher involvement with practice searches (undertaken to support the practice) contravened the medical association's privacy policy. Together, we decided the researcher should not assist with practice searches.
2 Interacting with patient data had always been part of the MOAs' practice and was viewed as acceptable.
As researchers, we understood the need to protect patients from unauthorized use of patient data, yet at the same time we remained perplexed about what made it all right for a technician to see data, but not a researcher. As researchers, our activities were scrutinized from a different vantage point than the same activities undertaken by support staff. This conflict—related to our explicit role as action researchers—brought issues about who should have access to patient data under what circumstances into clearer view, and underscored a need to develop policy to govern new roles (be they technicians, consultants or action researchers) which emerge in relation to implementation of EMRs. Several of the ethical issues related to access we observed also existed in a paper-record-based world and were brought back into view in relation to privacy concerns concerning the EMR. For example, in one instance, a computer terminal froze during a clinical encounter in a consultation room. The clinician, frustrated, left the consultation room to seek help, and, because the system was non-responsive, remained signed onto the system (leaving the potential for the patient who remained in the room to access the system, if it began working again). In contrast to leaving a paper file in a room, this left access to the entire clinic's patient population potentially exposed. In another instance, a doctor working from home was surprised to find that as she went to access the clinic records, no password or user id was required. It was eventually determined that this had occurred because on a prior occasion she had checked the "save user id and password" option, which effectively created open access to the clinic's records for anyone using that computer. Again, this would have been similar to the security risks associated with taking a paper-based file home, with the exception that an unauthorized user of a doctor's computer would have potentially had access to all clinic records.

3.3. Ethical Issue Related to Governance Gaps

Some ethical issues which arose related to gaps in governance. For example, the clinic would like to support e-mail contact with patients, but remains concerned that an absence of governance instruments pertaining to the status of e-mail communication between clinicians and patients will leave the clinic legally exposed in the event of a problem. The province's failure to develop governance measures for anticipated uses of computers in clinical settings means the clinic must either curtail services (e.g., not offer e-mail communication) or leave themselves exposed to legal risk.
4. Discussion

The ethical issues which arose in the practice setting we observed suggest that there is an ongoing need to monitor systems in order to ensure that a poor fit between system design and use does not result in workarounds that compromise ethical standards. Workarounds should not interrupt the ability to audit systems accurately, which can be partly addressed by accommodating further customization of permissions for different types of users, accommodating more varied workflows. In addition, licensing software by log-ons (rather than, for example, by the number of full-time-equivalent staff) is also likely to result in practices which can diminish audit accuracy. Privacy and security remain a concern for EMR users. Care providers must trust the security of systems they are required to upload data to. Several instances from our fieldwork suggest there is a need to maintain ongoing oversight of electronic systems, in order to identify both technical and user errors which can compromise privacy as well as care.
In some instances, ethical issues related to EMRs existed prior to the advent of the EMR, but the potential for harm was amplified through use of EMRs. In other cases, the use of EMRs brought existing issues (e.g., viewing patient data incidentally while supporting operational needs vs. while supporting research needs) into clearer view. Privacy and ethical guidelines should address the variety of issues practitioners face. To support clinicians interested in pursuing new forms of interaction and care made possible with the advent of EMRs, strategies for developing anticipatory governance tools (e.g., to address the medico-legal status of e-mail communication between providers and patients) need to be developed. In addition, governance structures must be responsive to emerging roles (e.g., technical support personnel and researchers who may not be able to carry out their work tasks without coming into incidental contact with patient data) or changing access needs (e.g., accountability in relation to medical data entered by MOAs), and physicians and other care providers need to know where to turn for policy guidance about emerging practice issues.

Acknowledgements: This research was funded by the Canadian Institutes of Health Research and Canada Health Infoway, through the Knowledge to Action funding program.
References
[1] Reid RJ, Wagner EH. Strengthening primary care with better transfer of information, CMAJ 179 (2008), 987-988.
[2] Muttitt SC, Alvarez RC. Chronic disease management: IT's time for transformational change! Healthcare Papers 7 (2007), 43-47.
[3] Sullivan-Tayor P, Webster G, Mukhi S, Sanchez M. Development of electronic medical record content standards to collect pan-Canadian primary health care indicator data, Stud Health Technol Inform 143 (2009), 167-173.
[4] de Lusignan S. Developing primary care informatics, Informatics in Primary Care 16 (2008), 1-2.
[5] Boulus N. A journey into the hidden lives of Electronic Medical Records (EMRs): Action research in the making, PhD thesis, School of Communication, Simon Fraser University, Burnaby, BC, 2010.
[6] Boulus N, Bjorn P. A cross-case analysis of technology-in-use practices: EPR-adaptation in Canada and Norway, Int J Med Inform 79 (2010), e97-e108.
[7] Bjørn P, Boulus N. Dissenting in reflective conversations: Critical components of doing action research, Action Research Journal, published online before print March 31, 2011. Available from http://arj.sagepub.com/content/early/2011/04/05/1476750310396949, accessed on April 20th, 2011.
[8] Tolar M, Balka E. Infrastructure in the making: The case of an EMR system in a general practice setting, CD ROM Proceedings of AHIC 2010, April 28-30, Kitchener, Ontario, Canada.
[9] Tolar M, Balka E. Beyond individual patient care: Enhanced use of EMR data in a primary care setting, CD ROM Proceedings of ITCH 2011, February 24-27, 2011, Victoria, BC, Canada.
[10] Aarts J, Callen J, Coiera E, Westbrook J. Information technology in health care: Socio-technical approaches, Int J Med Inform 79 (2010), 389-390.
[11] Holter IM, Schwartz-Barcott D. Action research: What is it? How has it been used and how can it be used in nursing? J Adv Nurs 128 (1993), 298-304.
[12] Hart E, Bond M. Action Research for Health and Social Care: A Guide to Practice, Open University Press, Buckingham, UK, 1995.
[13] Geva A. A typology of moral problems in business: A framework for ethical management. Journal of Business Ethics 69 (2006), 133-147.
[14] Hanseth O, Monteiro E. Changing irreversible networks. Institutionalization and infrastructure, Proceedings of the Sixth European Conference on Information Systems, Aix-en-Provence, France, June 4-6 1998. Available from http://www.idi.ntnu.no/~ericm/ecis.html, accessed on April 20th, 2011.
[15] Hartswood MJ, Procter RN, Rouchy P, Rouncefield M, Slack R, Voss A. Working IT out in medical practice: IT systems design and development as co-realisation, Methods Inf Med 42 (2003), 392-397.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-290
The Shift in Workarounds Upon Implementation of Computerized Physician Order Entry
Heleen VAN DER SIJS a,1, Irene ROOTJES b, Jos AARTS c
a Department of Hospital Pharmacy, Erasmus University Medical Center, Rotterdam
b Department of Hospital Pharmacy, Westfries Gasthuis, Hoorn
c Institute of Health Policy and Management, Erasmus University Rotterdam, The Netherlands
Abstract. Workarounds are working processes that deviate from formal rules or intended work methods to smooth workflow and circumvent problems without eliminating them. Former research focused on workarounds in the medication use process after implementation of computerized physician order entry (CPOE). This study on 2 wards of a general hospital shows that workarounds exist in both paper-based and electronic prescribing systems. After CPOE implementation, some workarounds present in the paper-based system had disappeared, others remained, and new ones had emerged. Keywords. Computerized physician order entry, workarounds, qualitative study
1. Introduction

The implementation of computerized physician order entry (CPOE) is often mentioned as an important measure to reduce medication errors [1]. In recent years, however, studies showed that CPOE may introduce new errors compromising patient safety [2]. The absence of a fit between working processes and the CPOE, and the emergence of workarounds, are mentioned as causes of these new errors [3]. Studies on workarounds in the medication process have focused only on CPOE, whereas workarounds may also exist in the paper-based medication process. The aim of this study is to describe the bottlenecks and workarounds in a hospital by comparing a paper-based and an electronic prescribing system, to gain insight into the emerging, remaining and disappearing workarounds.
2. Background

The medication use process in hospital consists of several phases: drug prescribing by physicians, dispensing by the pharmacy, administration by nurses, and monitoring by nurses and physicians. Although the phases of the medication use process are generally described as consecutive steps performed by specific (groups of) health care providers, they often deviate from this formal process to enable a smooth workflow.
1 Corresponding Author.
Examples of these deviations are that drugs are administered in acute situations before being formally prescribed, and that nurses perform 'invisible work': checking and communicating beyond their formal tasks. Clinical workflow is defined as the flow of care-related tasks as seen in the management of a patient trajectory: the allocation of multiple tasks of a provider or of co-working providers in the processes of care and the way they collaborate [4]. Although the workflow of individuals should be smooth, the total workflow of the collaborating health care providers should be the main objective, as this determines the outcome for the patient. Workarounds are working processes that deviate from formal rules or intended work methods to smooth workflow and circumvent problems without eliminating them [5, 6]. Workarounds emerge when rules are stretched by negotiating to enact a deviation from the rule [5]. Workarounds may positively affect the working process by providing a temporary solution. On the other hand, workarounds may introduce inefficiencies that increase the burden on clinicians and may even threaten patient safety. The Westfries Gasthuis is a 506-bed general hospital in Hoorn, the Netherlands, which is implementing an electronic patient record (EPR) with a CPOE (Chipsoft, Amsterdam, the Netherlands). Although the EPR implementation was successful, the introduction of CPOE was stopped after implementation on 3 wards. Since 2007, the hospital has used both paper-based and electronic prescribing.
3. Methods
Documents concerning the medication process were analyzed to gain insight into the formal processes. Interviews were conducted to prepare the ward and respondent selection for the further qualitative study (observations and semi-structured interviews). The study took place in April and May 2010 on the oncology and cardiology wards of the Westfries Gasthuis, which use paper-based (Kardex) and electronic prescribing, respectively. On each ward, two observations (during the daily patient round and the 4 pm drug administration round) were performed. Furthermore, semi-structured interviews were held with a resident, a medical specialist, a senior nurse and a second nurse on each ward, and with a hospital pharmacist. Interviews were transcribed verbatim and checked for correctness by the respondents. Notes from observations were processed immediately. Results were entered into a data matrix for qualitative analysis.
4. Results
4.1. Paper-based system
The formal tasks of nurses in the Westfries Gasthuis consist of keeping the nursing record and Kardex (drug administration form) up to date, filling medication carts, and administering drugs to patients. Only physicians are formally responsible for drug prescribing, including adjusting and stopping medication orders. Although the formal model does not include nurses in drug prescribing, they generally check the handwritten medication orders for correctness and completeness during the patient rounds, in order to prevent pharmacy phone calls. If drugs are prescribed later during
the day or night, these orders are often surrounded by miscommunication and delay. Physicians should inform nurses, and nurses should handle the orders directly, but both steps are regularly forgotten. Urgent verbal orders are written down by nurses and are not always authorized by physicians. Nurses thus fulfill an informal role in drug prescribing in the paper-based system. The risk of errors increases when drugs are prescribed after patient rounds, outside the regular workflow. Nurses are responsible for cart filling and admitted that they sometimes stick the medication order labels (MO labels) on the wrong patient chart, which may result in incorrect drug administrations. To prevent erroneous drug administrations, nurses therefore mark or circle important parts on the labels. Nurses should await the definitive MO labels that have been verified by the pharmacy, but they often administer drugs before receiving them. Although physicians determine whether orders have to be adjusted or stopped, nurses often draw physicians’ attention to the required adjustments. The physician has to sign the stop order that is sent to the pharmacy; the nurse stamps STOP next to the order on the Kardex card. However, stop orders often do not reach the pharmacy, nor do orders with handwritten adjustments. The pharmacy consequently does not have a correct medication overview and cannot completely fulfill its medication vigilance role.
4.2. Electronic system
On the cardiology ward of the Westfries Gasthuis, physicians electronically prescribe drugs on a computer on wheels (COW) during patient rounds. Nurses advise physicians during order entry to adjust administration times, but are not authorized to enter or adjust electronic orders themselves. Therefore, nurses manually adjust administration times on the printed MO labels. Nurses admit that they are glad no longer to be asked to prescribe drugs, which fell outside their responsibility but was possible and often done in the flexible paper-based system. On the other hand, they said they now had to ask physicians to enter orders for drugs already administered in urgent situations. Physicians mention that electronic prescribing takes more time unless standard order sets are used, but gives a better medication overview and prevents phone calls from the pharmacy because orders are now legible and complete. Sorting out and sticking the MO labels on the right Kardex cards is an error-prone process, worsened by the fact that stickers are printed per administration time instead of per patient. If medication orders are adjusted after consultation with a supervisor or another medical specialist, nurses are confronted with a new order and verify the unknown order with the physician. Physicians agree with these additional checks. As the order printer is next to the medication cart, orders are noticed easily and physicians no longer check whether nurses have received them. Nurses admit to sometimes attaching MO labels to the wrong Kardex cards, and the labels of electronically prescribed medication orders contain encircled and marked items to attract attention. Administration times are adjusted manually without adjustment in the CPOE, but other order adjustments are often made electronically. Stop orders look very similar to start orders, which resulted in errors and in the renewed introduction of the STOP stamp on the Kardex card used by nurses. However, the medication overview in the pharmacy is largely up to date.
As the pharmacy still verifies all orders by printing definitive MO labels to be stuck on
the Kardex card, pieces of information sometimes crossed each other if orders were started and stopped within a short timeframe.
5. Discussion
The introduction of CPOE shifted tasks from nurses to physicians. In the flexible paper-based system, nurses played an important informal role beyond their formal responsibility. However, the rigid electronic system, in which formal professional roles are inscribed, protected them from performing these unauthorized tasks [7]. Nurses perceived the clear separation of responsibilities as an important advantage, but missed the possibility to adjust administration times. Physicians perceived the better medication overview and the faster process towards a definitive medication order as advantages, although order entry took somewhat more time. Both the paper-based and the electronic prescribing system contained workarounds, probably due to the inherently flexible care process, and both provoked medication errors that arose when drugs were prescribed outside the normal routine of the patient round. Without extra communication, handwritten orders remained unnoticed in the paper-based system, as did unprinted orders in the electronic system in case of printer problems. Problems with medication cart filling and drug administration did not change upon CPOE implementation: MO labels were stuck on the wrong patient cards, and important information on the labels was marked and circled. However, monitoring became easier in the electronic system because order stops and adjustments were registered better, resulting in a better overview on the Kardex card and in the pharmacy. The old STOP stamp was reintroduced in the electronic system as a workaround to prevent medication errors due to look-alike start and stop orders. The implementation of CPOE did result in a shift in workarounds: several bottlenecks were solved, new bottlenecks and workarounds emerged, and several old bottlenecks and workarounds persisted in both systems. The workarounds in the paper-based system arose from a flexible interpretation of formal rules and roles that was negotiated between different health care providers. In the electronic system, formal roles were rigidly embedded and controlled by access rights. As nurses were not authorized to adjust orders in the electronic system, they had to negotiate with physicians to adjust them, or simply perform adjustments themselves on the printed MO labels. Authorization for nurses to adjust drug administration times, without further prescribing rights, could be a solution for this bottleneck and workaround. However, in a hospital where nurses were allowed to adjust drug administration times in the CPOE, they made simple handwritten adjustments on the Kardex card instead of electronic adjustments [8,9]. The introduction of CPOE reduced the number of information sources and transcribing errors, but the existence of a paper-based drug administration record next to the electronic prescribing system still resulted in problems. The old workaround with the STOP stamp was therefore reintroduced in the electronic system. The implementation of an electronic drug administration record is suggested as a solution for this bottleneck and workaround, as it reduces the number of information sources to one single system. The use of COWs enabled the nurses to perform their ‘invisible work’ of checking orders during patient rounds. Communication problems therefore arose only if orders were entered outside the normal routine. In a hospital lacking COWs, all orders had to
be entered after the patient round on computers in the physicians’ room and, consequently, communication problems were more prominent [10]. This study was performed in one general hospital, on a cardiology and an oncology ward, with four observations and nine interviews. However, respondent selection was carefully prepared by prior document analysis and interviews, and was based on theoretical sampling. Only non-surgical wards were included to prevent selection bias due to differences in medical specialty and ward organization. Both oncology and cardiology belong to internal medicine and had a comparable ward organization before CPOE implementation on the cardiology ward, except for chemotherapy prescribing, which did not play a role in the interviews and observations. The workarounds found in the electronic system were consistent with those reported in the literature. The strength of this study is that the bottlenecks and workarounds in the paper-based and the electronic system were compared.
6. Conclusion
Workarounds exist in both paper-based and electronic prescribing systems. Upon CPOE implementation, several problems are solved, but new ones and corresponding workarounds may emerge. Several workarounds remain present as long as the underlying bottlenecks are unsolved. The flexibility of the paper-based system and the rigidity of the electronic system both provoke workarounds. The existence of a paper-based drug administration record next to the electronic prescribing system results in problems that can probably be solved by an electronic drug administration record. Bedside order entry on COWs by physicians in the presence of nurses is recommended.
References
[1] Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma’Luf N, Boyle D, Leape L. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999;6:313-21.
[2] Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005;293(10):1197-1203.
[3] Aarts J, Koppel R. Implementation of computerized physician order entry in seven countries. Health Aff 2009;28:404-14.
[4] Niazkhani Z, Pirnejad H, Berg M, Aarts J. The impact of computerized physician order entry (CPOE) systems on inpatient clinical workflow: a literature review. Int J Med Inform 2009;16:539-49.
[5] Azad B, King N. Enacting computer workaround practices within a medication dispensing system. Eur J Inform Sys 2008;17:264-78.
[6] Merriam-Webster’s Collegiate Dictionary, 11th Edition. Merriam-Webster Inc., Springfield, Massachusetts, USA, 2004.
[7] Goorman E, Berg M. Modelling nursing activities: electronic patient records and their discontents. Nurs Inq 2000;7(1):3-9.
[8] Van der Sijs H, Lammers L, van den Tweel A, Aarts J, Berg M, Vulto A, van Gelder T. Time-dependent drug-drug interaction alerts in care provider order entry: software may inhibit medication error reductions. J Am Med Inform Assoc 2009;16(6):864-8.
[9] Pirnejad H, Niazkhani Z, van der Sijs H, Berg M, Bal R. Evaluation of the impact of a computerized physician order entry system on nurse-physician communication: a mixed method study. Meth Inform Med 2009;48(4):350-60.
[10] Niazkhani Z, Pirnejad H, van der Sijs H, de Bont A, Aarts J. Computerized provider order entry system - does it support the inter-professional medication process? Lessons from a Dutch academic hospital. Meth Inform Med 2010;49:20-7.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-295
Task Analysis and Interoperable Application Services for Service Event Management
Juha MYKKÄNEN a,1, Hannu VIRKANEN a, Pirkko KORTEKANGAS b, Saara SAVOLAINEN a, Timo ITÄLÄ c
a University of Eastern Finland, School of Computing, HIS R&D, Kuopio, Finland
b Medbit Oy, Turku, Finland
c Aalto University School of Science and Technology, SoberIT, Espoo, Finland
Abstract. In addition to the information specifications for electronic health records, functional and behavioral capabilities need to be agreed upon to achieve interoperability. In this paper, we present results from the task analysis and specification of software services to support the management of service events. The work has been performed to support the management of the nationally shared EPR in Finland. The results support the specification of information sharing and composition in relation to healthcare workflows and activities. The specification of a functional reference model and of software services for the management of service events and encounters promotes the integration of the shared EHR and the adaptability of systems for migration towards interoperable electronic health records in healthcare networks. Keywords. Health information systems, SOA, Interoperability, Activity analysis, Service events, Encounters, Electronic Health Record
1. Introduction
The requirement to collect and share service encounter data is the foundation of an electronic health record [1]. The challenge is to integrate these data from multiple source systems with different syntax and semantics. Encounters are used for organizing 1) healthcare acts which pertain to a patient's health history [2] and 2) guidelines for multistep clinical processes [3]. EHR systems should cover functionality for managing encounters and episodes of care [4]. If relationships between such functionality and the documentation of care cannot be established, the risks of a disconnect between domain knowledge and databases, as well as of access control failures, are increased [5,6]. Since 2006, the national health IT infrastructure projects in Finland have been producing support for information sharing and preservation for the entire health sector [7]. The main approach of the projects has been to provide national IT services for ePrescription and the structured EPR. These services are used through the local health information systems of the service providers. The national ePrescription has been in the rollout phase since 2010. All national EPR services and specifications for local systems will become mandatory for all public and private service providers in 2015.
1 University of Eastern Finland, HIS R&D Unit, POB 1627, FI-70211 Kuopio, Finland, E-mail: [email protected]
Numerous specifications have been produced to support the introduction of the national IT services. These specifications include the shared EPR content definitions and their refinements using HL7 CDA. In addition, architecture, requirements and use cases have been specified. In this paper, we focus on the functional capabilities of the shared EHR which need to be agreed upon to achieve interoperability, concentrating on the management of encounters using a service event concept. One of the main challenges in the national EPR development efforts has been the difficulty of providing uniform and comprehensive composition rules. These rules are needed for integrating patient information from various systems and sources and from disparate local workflows in a consistent way. By 2009, some of these aspects had been specified as part of several different documents, such as national use case specifications and interoperability standard implementation guides using the HL7 CDA R2 standard. The requirements and specifications stating the concepts and functionality related to the management of EPR information in clinical and administrative activities, however, were inconsistent and dispersed. One of the central concepts to support an interoperable EPR and the care continuum was the service event, which has been defined as "organizing or performing one healthcare service". Management and organization tasks, in addition to clinical tasks, are central in the management of service events. The concept is also central in patient consent management, reporting and billing, as well as in achieving clarity and integrity of EPR use for professionals. Examples of service events include ambulatory visits and episodes of care, along with related examinations, procedures and encounters. These definitions, however, required clarification and a harmonized interpretation. There was a need for a shared specification of the activities of care processes and their temporal constraints in order to achieve a shared terminology for healthcare processes. Furthermore, support for a uniform and interoperable implementation of these concepts was necessary for the information systems used in the management of care processes and documentation.
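To make the service event notion concrete, a minimal Java sketch of such a domain object is given below. The class, field and lifecycle-state names are illustrative assumptions made for this sketch; they are not taken from the Finnish national specifications.

import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of a service event: "organizing or performing one healthcare service". */
public class ServiceEvent {

    /** Hypothetical lifecycle states; the national specification may partition the lifecycle differently. */
    public enum State { PLANNED, ONGOING, CONCLUDED, CANCELLED }

    private final String serviceEventId;        // identifier, e.g. an OID-based id (assumed)
    private final String patientId;
    private final String providerOrganization;  // organization responsible for the service
    private State state = State.PLANNED;
    private LocalDateTime start;
    private LocalDateTime end;
    // References to the CDA documents (entries, summaries) composed under this event
    private final List<String> documentIds = new ArrayList<>();

    public ServiceEvent(String serviceEventId, String patientId, String providerOrganization) {
        this.serviceEventId = serviceEventId;
        this.patientId = patientId;
        this.providerOrganization = providerOrganization;
    }

    public void begin(LocalDateTime at)    { this.state = State.ONGOING;   this.start = at; }
    public void conclude(LocalDateTime at) { this.state = State.CONCLUDED; this.end = at; }
    public void attachDocument(String documentId) { documentIds.add(documentId); }

    public State getState() { return state; }
    public List<String> getDocumentIds() { return new ArrayList<>(documentIds); }
}

One benefit of such a unit is that composition rules, consent decisions and billing can all refer to a single, well-defined object rather than to scattered local workflow data.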
2. Materials and Methods
The SOLEA project is a national R&D project focusing on the use of service-oriented enterprise architecture to support the management and interoperability of complex information systems in healthcare and other industries. The refinements required to support service events in relation to the national EPR specifications were selected as one of the main working items of the project in late 2009. The goals of this activity included the clarification of the definitions of service events and a coherent specification of their lifecycle. In addition, refinements to the interoperability specifications between various applications or services were pursued, supported by related rules and information and concept models. Furthermore, reusable application services were to be specified to support the management of care process documentation and consent management. These services can be combined depending on local requirements, as long as they have open interfaces. To support these goals, it was deemed necessary to analyze the activities related to the information management of the EPR. These activities are performed by the users of local or regional health information systems. An activity analysis approach was selected which had previously been used for analyzing tasks related to the national guidelines for citizen eBooking and electronic scheduling [8].
The goals, requirements and process descriptions related to service events were analyzed using a four-level process and activity modeling approach [9], based on various use case and requirements specifications of the national EPR project KanTA. Clinical document and concept specifications of the national IT services and EPR were also reviewed. This document analysis produced the initial list of activities and tasks related to the management of service events. The requirements, tasks, activities and solutions were also discussed and refined in eight workshops. The results incorporate input from a group of 29 experts in addition to the authors. The constructive results reported in this paper include the identification and analysis of tasks in service event management. These tasks and other requirements are mapped to functional application services following service-oriented architecture (SOA) specification and design techniques [10,11,12,2]. This paradigm promotes flexibility, modularity and reuse of the solutions. It also makes it possible to specify open interfaces to support gradual and incremental migration paths in different local settings. We focused on the action and activity levels of process modeling [9] to identify which tasks and services should be implemented or automated in which systems: national services, patient administration systems, local EHR systems, or specialized departmental or clinical information systems in different settings.
3. Results
3.1. Task Analysis of Service Event Management Activities
Table 1. Information processing tasks in the management of service events (Pr = provider, Pa = patient, EHR = local EHR application, EAr = national EPR archive service, P = always participating, a = task could be partially automated or streamlined if the corresponding human or system actor participated in the task)
1. Make a referral
2. Receive and process a referral
3. Insert a patient in the queue
4. Schedule an encounter based on a referral
5. Schedule an encounter without a referral
6. Cancel a scheduled encounter
7. Unanticipated registration and admission
8. Define information needs
9. Define justification of information transfer across organizations
10. Document query
11. Document retrieval
12. Assess and use information in care
13. Make a new information entry
14. Make an information delivery consent or denial
15. Transfer patient to another ward / unit
16. Discharge patient / conclude a visit
17. Make a study or consultation request
18. Receive and fulfill a study or consultation request
Actor participation markers per column: Pr: P P P P P P P P P P P P P a P P P P; Pa: a a a a a P; EHR: P P P P P P P a P P P a P a P P P P; EAr: P a a a a a a a a P P P P a P a a
The task analysis in Table 1 provides a generalized functional model for tasks which require the management of service events. The analysis offers a reference model to define the actors, the information and tools used, and the constraints applied for each task. The tasks in Table 1 are present in many healthcare workflows and high-level processes. Tasks which could be streamlined or increasingly automated are also
identified. All tasks were further refined into 22 fine-grained information processing functions. These functions formed the basis for specifying the operations of service event management in information systems and SOA services. All tasks were analyzed in detail to specify inputs, outputs, participants, exceptions, automation potential and interoperability needs. The lifecycle of service events was also refined using these tasks.
3.2. SOA Services for Service Event Management
One of the strengths of the SOA approach is flexibility in positioning tasks in legacy applications or in new application services. The activities and tasks were specified in detail to support this and to identify interoperability needs. This supported the specification of the needed application services. Furthermore, the solutions were refined in terms of implementation scope: whether each function would be deployed on the local (provider organization specific core or specialized application), regional (cross-organizational on the regional level) or national (nationally centralized) level. These aspects, which guide the implementations, are summarized in Table 2. The functionality of each service was further refined with service specification templates [12,11].
Table 2. Identified SOA services and application roles for service event management, in relation to potential deployment domains (N = national service, R = regional or local shared service, C = included in local core EHR / ADT applications, D = included in departmental / specialized systems, I = decision of implementation on this level has been made in at least one of the involved organizations, * = plans for implementation on this level have been made in at least one of the involved organizations)
SOA services and application roles:
- Centralized EPR archive
- Document composition service
- Document transmission service
- Entry composition service
- Event information repository
- Document retrieval service
- Information query service
- Content producer
- Content consumer
- Controller of the patient administrative process
- Integration infrastructure for service events
- Context manager
Deployment markers per column: N: I * I I; R: * * * * * *; C: I I I I I I I I I; D: I * * * * I I I * I * *
In addition to these functional and architectural results, user storyboards and example scenarios, conceptual and ontological analysis models, and specifications of shared information models related to service event management were produced. Furthermore, architectural diagrams for various migration phases were specified. The main results were published in a 107-page report in Finnish.
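To illustrate how two of the identified application services could be expressed as programming interfaces, a minimal Java sketch follows. The operation names and parameters are assumptions made for this sketch and do not reproduce the published Finnish interface specifications.

import java.util.List;

/** Sketch of an event information repository role (see Table 2); operations are assumed, not normative. */
interface EventInformationRepository {
    /** Register a new service event and return its identifier. */
    String createServiceEvent(String patientId, String providerOrganization);
    /** Update the lifecycle state of an existing service event. */
    void updateState(String serviceEventId, String newState);
}

/** Sketch of a document retrieval service role; consent and access control checks are implied but omitted. */
interface DocumentRetrievalService {
    /** List identifiers of the CDA documents attached to a service event. */
    List<String> listDocuments(String serviceEventId, String requesterRole);
    /** Fetch one CDA document as XML, subject to consent and access control. */
    String retrieveDocument(String documentId, String requesterRole);
}

Small, open interfaces of this kind can be deployed nationally, regionally or inside a local EHR or departmental system, which is the flexibility indicated by the N/R/C/D markers of Table 2.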
4. Discussion and Conclusions
The results of this work were quickly taken up by other national projects. The National Insurance Institution Kela, which is responsible for the national IT services, used the results in an HL7 interface specification project for service event integration and in the specification project for the architecture of lab and imaging integration. In addition, the national Viila and TAPAS projects used the results for the harmonization of architectures.
The resulting specifications of the service event work of SOLEA were evaluated using a survey which was sent to the participants of the workshops and to users of the results. The eight respondents considered summaries such as those in Table 2 and the detailed task specifications to be very useful for system integrations and implementations. Consistent information classification and organization is crucial for documentation management supporting seamless care in distributed healthcare networks. A key success factor is the support for information processing tasks in everyday healthcare workflows through EPR and administrative systems. This can be supported by analyzing healthcare acts and tasks and by using a combination of application services. Many of these tasks can also be automated, at least partially. Similar or identical concepts and services have been identified and specified in projects and initiatives such as LuMiR [13] and Canada Health Infoway [2]. The uniform vocabulary, consistent service and interface specifications, and shared reference models for tasks and functions provide a coherent basis for flexible application systems. This, in turn, supports the improvement and reorganization of care processes in networked care. Acknowledgements. The authors thank the national agency for technology and innovation TEKES as well as the participants of the SOLEA project and the service event management working group.
References
[1] Schloeffel P, Jeselon P. Standards Requirements for the Electronic Health Record & Discharge/Referral Plans. ISO/TC 215 Ad Hoc Group, Final Report, July 26, 2002.
[2] EHRS Blueprint - an interoperable EHR Framework, version 1.0. Canada Health Infoway, 2003.
[3] Latoszek-Berendsen A, Tange H, van den Herik HJ, Hasman A. From clinical practice guidelines to computer-interpretable guidelines: a literature overview. Methods Inf Med 2010;49(6):550-70.
[4] Electronic Health Record System Functional Model, Release 1.1. ISO/HL7 10781:2009.
[5] O’Connor MJ, Shankar RD, Parrish DB, Das AK. Knowledge-level querying of temporal patterns in clinical research systems. Stud Health Technol Inform 2007;129(Pt 1):311-5.
[6] Lovis C, Spahni S, Cassoni-Schoellhammer N, Geissbuhler A. Comprehensive management of the access to a component-based healthcare information system. Stud Health Technol Inform 2006;124:251-6.
[7] Saranummi N, Ensio A, Laine M, Nykänen P, Itkonen P. National health IT services in Finland. Methods Inf Med 2007;46(4):463-9.
[8] Mykkänen J, Tuomainen M, Kortekangas P, Niska A. Task analysis and application services for cross-organizational scheduling and citizen eBooking. Stud Health Technol Inform 2009;150:332-6.
[9] Mykkänen J, Paakkanen E, Toivanen M. Four-level process modeling in healthcare SOA analysis and design. Stud Health Technol Inform 2009;146:276-80.
[10] Arsanjani A, Zhang L-J, Ellis M, Allam A, Channabasavaiah K. S3: a service-oriented reference architecture. IT Professional 2007;9(3):10-17.
[11] Mykkänen J, Riekkinen A, Sormunen M, Karhunen H, Laitinen P. Designing web services in health information systems: from process to application level. Int J Med Inform 2007;76(2-3):89-95.
[12] HSSP Service Specification Development Framework, version 1.3. Healthcare Services Specification Project, 2007.
[13] Contenti M, Mercurio G, Ricci FL, Serbanati LD. LuMiR: A Region-wide Virtual Healthcare Record. In: International HL7 Interoperability Conference IHIC 2008, 80-83.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-300
Organs Transplantation – How to Improve the Process?
Viriato FERRAZ a,1, Gerardo OLIVEIRA a, Pedro VIERA-MARQUES a, Ricardo CRUZ-CORREIA a
a CINTESIS - Center for Research in Health Technologies and Information Systems, Faculdade de Medicina da Universidade do Porto, Portugal
Abstract. The transplantation of cadaveric organs must be performed within a short period of time in order to achieve satisfactory results. In Hospital S. João (HSJ), a large Portuguese hospital, 65 and 61 potential donors were identified during 2008 and 2009 respectively, but 12 and 19 of them were not validated as such in time. The number of validated donors could increase if the information workflow between donor hospitals and coordinator offices became more efficient. The goal of this work is to design and implement a multi-agent software platform to assist the information workflow between donor hospitals and coordinator offices. Through several meetings with the HSJ coordinator office, a set of basic data was characterized that would allow coordinator offices to identify possible organ donors early. This preliminary characterization provided the necessary grounds for the development of an agent-based software application allowing the storage and management of potential donors’ information and optimizing the information workflow. The characterization of the information workflow and of the current communication processes allowed the development of a multi-agent web platform, providing a way to assist the information workflow between coordinator hospitals and their attached hospital networks. The platform also improves direct communication between coordinator offices about the most relevant facts. By using this tool or a similar one, the information workflow between donor hospitals and coordinator offices can become more efficient, optimizing the pre-transplantation tasks and consequently increasing the number of successful transplants in our country. Keywords: Organs Transplantation, Agents
1. Introduction
Organ transplantation can be defined as the removal of an organ from a donor body and its introduction into the body of the receiver. In recent years the organ transplantation process has improved greatly, due to significant scientific and technical advances, and today it is considered to be a medical procedure with a high success rate [1]. Despite these improvements, the number of organs available for donation is much lower than the number needed to respond to the growing waiting lists for donation [2]. One of the first steps in a transplantation process is the timely identification of the donor's brain death and of his or her donor potential [3]. A physician working in an intensive care unit where a potential donor is admitted is responsible for communicating this fact to the coordinator office, which is responsible for validating the donor, identifying his or her donor
Corresponding Author: E-mail: [email protected]
potential, and taking care of all the preliminary steps needed for a transplant within a short period of time. Communication between health care units and clinicians is the basis of any health system, and in the specific case of organ transplantation it is where the most significant portion of time is lost. At an international level, due to the relevance of the organ transplantation issue, many studies have been performed across many countries; they point to a significant similarity between their donation networks and the Portuguese network [3,4,5,6]. Many authors believe that, despite the current optimization of the organ transplantation process, there are still points where it can be improved. One of the most frequently mentioned issues is timely and optimized communication between health care professionals and units [3,7,8,9,10]. A study performed in Spain in 1996 to assess the donation potential in Spain involved 25 hospitals and a retrospective review of 843 patients who had died in intensive care units. In 59% of these cases the process ended without any donation. This study reached the conclusion that many of these losses were due to a lack of timely identification of the potential donor [4]. Although Spain is considered to be one of the countries with the highest success rates in this type of medical procedure, there is room to optimize the process in the referral of potential donors, in their assessment and maintenance, and in the special case of rejection of donation by the family [2,8]. The workflow and the information flow of HSJ related to the organ transplantation process, including average task times calculated by the HSJ coordinator office, is presented in Figure 1, together with the main areas that the developed platform should support. This workflow model was described with the support of the coordinator office located in the same institution. In this health care institution, in 2009, 61 potential donors were identified: 42 were validated in time, allowing successful harvests, while 19 were not validated in time due to a lack of more effective and consistent communication between donor units and the coordinator office [11]. The number of inpatients followed by the health care units’ clinicians is high, making it hard to perform a timely assessment of all the signs that a specific patient will be in brain death and consequently will be a potential donor. On many occasions these signs are missed and the opportunity of a timely communication with the coordinator office is lost, so that all coordinator office activities have to be carried out in a race against time, reducing the probability of success [2,8,12]. With the growing complexity of computing systems, new development models are appearing, namely multi-agent systems [13]. The use of agents in the health care environment, and more specifically in the organ transplantation process, is not a new concept, and has proven to be useful to improve the information workflow between health care units [14,15]. This work aims to design and implement a multi-agent software platform to represent the information workflow model between donor hospitals and coordinator offices, allowing increased control over the evaluation of inpatients and consequently optimizing the information workflow.
Figure 1 - Hospital de São João Information Workflow Supported by the Platform
Table 1 presents the daily record base information needed by coordinator offices about each patient.
Table 1 - Digital inpatient record base information (Data Area: Data Identification (Data Type))
- Neurologic: Glasgow (Number); Neurocritic Injuries (Yes/No)
- Respiratory: SpO2 (Number); Ph (Number); Fio2 (Number); Ratio (Number); Thoracic Drains (Yes/No); Day Rx (Yes/No)
- Cardiovascular: NA (Normal/High); Dopa (Normal/High); Eco Cardiogram (Yes/No)
- Digestive: Eco Abdominal (Free Text); Abdominal Drains (Yes/No)
- Renal: Eco Renal (Free Text); Oliguria (Yes/No); Proteinuria (Yes/No); Urine Sediment (Number)
- Cutaneous: Tattoos (Yes/No)
- Hematologic: Transfusions (Yes/No)
- Infection: Infection (Yes/No); Focus (Free Text)
2. Methods
Through several meetings with the HSJ coordinator office, a base data structure was characterized that allows coordinator offices to work and to identify possible organ donors early. This characterization was the basis for the development of a software application using Java, MySQL and JADE that allows the storage and management of potential donors’ information, optimizing the information workflow. The use of agents through the JADE framework should also provide coordinator offices with information about signalized patients in the hospitals of the coordination network, and should allow direct communication between coordinator offices about relevant facts. The system will take advantage of the daily patient monitoring records already existing in paper format. The system will mimic those paper records in digital format, allowing intensive care unit doctors to maintain their essential work methods while having the possibility at any moment to signalize patients for follow-up by their coordinator office. This will enable closer and more timely monitoring of the patient by the coordinator office, making it possible to start procedures earlier.
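Since the paper names Java, MySQL and JADE as the implementation technologies, a minimal JADE agent sketch is given below for the ward-side role that periodically checks for signalized inpatients and informs the coordinator-office agent. The class name, the receiver agent's local name, the polling interval and the message content are assumptions made for illustration only; the database lookup is left as a stub.

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.TickerBehaviour;
import jade.lang.acl.ACLMessage;
import java.util.Collections;
import java.util.List;

/** Sketch of an ICU-side agent that signals potential donors to the coordinator office (GCCOT). */
public class UciUserAgent extends Agent {

    @Override
    protected void setup() {
        // Poll the local inpatient records every five minutes (interval is an assumption)
        addBehaviour(new TickerBehaviour(this, 5 * 60 * 1000) {
            @Override
            protected void onTick() {
                for (String patientId : findNewlySignalizedPatients()) {
                    ACLMessage inform = new ACLMessage(ACLMessage.INFORM);
                    inform.addReceiver(new AID("gccot-receiver", AID.ISLOCALNAME)); // assumed agent name
                    inform.setContent("SIGNALIZED:" + patientId);
                    myAgent.send(inform);
                }
            }
        });
    }

    /** Placeholder for the lookup of newly signalized inpatients in the local database (e.g. MySQL via JDBC). */
    private List<String> findNewlySignalizedPatients() {
        return Collections.emptyList();
    }
}

A corresponding receiver agent at the coordinator office would consume these INFORM messages (for instance with a CyclicBehaviour) and update the office's view of the donation network.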
3. Results
This characterization allowed the development of a multi-agent web system, providing a way to optimize the studied information workflow. The web platform works with the simulated digital records of all inpatients. A set of agents was developed, representing each coordinator office and each of the health care units belonging to the respective coordinator office network. Agents from different institutions communicate, providing updated information about inpatients in external intensive care unit hospitals and allowing the connection of the distinct hospitals of each donation network. Table 2 presents the main developed agents with their main functions.
Table 2 - User UCI, GCCOT Receiver and GCCOT Sender agents
Agents: UCI User; GCCOT Receiver Agent
Tasks: Signalize Patient; Receive Patients Information; Receive Information from other GCCOT
Methods:
- Accessing Signalized Inpatients(): accesses the database and identifies signalized inpatients, retrieving their information.
- External Inpatients Processing(): saves inpatient information sent by external intensive care units in the coordination network.
- Signalization Process(): presents information about the coordination network's signalized inpatients.
- Receiving External Messages(): receives information from other GCCOT agents.
Figure 2 - Developed Platform Interface
304
V. Ferraz et al. / Organs Transplantation – How to Improve the Process?
Figure 2 represents the developed platform interface. This work assesses the usefulness of agents for taking charge of information workflow tasks between hospital units. In the best-case scenario, all the preliminary transplant steps performed by the coordinator office to validate the potential donor and to identify his or her donor potential could be started earlier, optimizing the process. The HSJ coordinator office believes that this platform can optimize the information workflow.
4. Discussion
We believe that by using this tool or a similar one, the information workflow between donor hospitals and coordinator offices can become more efficient, optimizing the pre-transplantation tasks. With the help of the HSJ coordinator office, the information workflow was simulated using the developed platform, and it was estimated that the time saved might be between three and five hours. Authors Contribution and Acknowledgements. The system study, conceptualization and development were done by Viriato Ferraz and Gerardo Oliveira from the HSJ coordinator office. Pedro Marques and Ricardo Correia contributed with numerous project development suggestions and also with the paper review. This work is funded by FEDER funds (Programa Operacional Factores de Competitividade – COMPETE) and by National funds (FCT – Fundação para a Ciência e a Tecnologia) through project SAHIB - Enhancing multi-institutional health data availability through multi-agent systems [PTDC/EIA-EIA/105352/2008].
References
[1] Kazemeyni, B. Chime, et al. (2004). "Worldwide Cadaveric Organ Donation Systems (Transplant Organ Procurement)." Urology Journal 1.
[2] Miranda, B., J. Vilardell, et al. (2003). "Optimizing Cadaveric Organ Procurement: the Catalan and Spanish Experience." American Journal of Transplantation 3.
[3] Studer, S. M. and J. B. Orens (2006). "Cadaveric Donor Selection and Management".
[4] Miranda, B., M. F. Lucas, et al. (1999). "Organ Donation in Spain." Nephrol Dial Transplant 14.
[5] Sells, R. A. (1999). "Three ways to improve the supply of cadaveric organs for transplantation." Journal of the Royal Society of Medicine 92.
[6] Pszenny, A., J. Czerwinski, et al. (2008). "Organ Donation and Transplantation in Poland in 2007." Ann Transplant 13.
[7] Orloff, M. S., A. I. Reed, et al. (1994). "Nonheartbeating cadaveric organ donation." Annals of Surgery 220.
[8] Goffin, E., J. P. Devogelaer, et al. (2001). "Expanding the organ donor pool: The Spanish Model." Kidney International 59.
[9] Mascia, L., I. Mastromauro, et al. (2009). "Management to Optimize Organ Procurement in Brain Death Donors." Minerva Anestesiol 75.
[10] Sanchez-Fructuoso, A., D. P. Sanchez, et al. (2004). "Non-heart beating donors." Nephrol Dial Transplant 19.
[11] GCCOT (2009). Gabinete Coordenador de Colheita de Órgãos e Transplantação do H. S. João.
[12] Sousa, J. P. d. A. e. (2009). Uma Experiência Hospitalar. Tempo Medicina.
[13] Cruz-Correia R, Vieira-Marques P, et al. (2005). "Integration of hospital data using agent technologies - A case study". AI Communications 18.
[14] Cortés, U., A. López-Navidad, et al. (2000). "Carrel: An Agent Mediated Institution for the Exchange of Human Tissues among Hospitals for Transplantation."
[15] Ribas, A. M. and A. V. Mateu (2010). "Intelligent Technologies for Advanced Knowledge Acquisition." Retrieved 25-09-2010, from http://deim.urv.cat/~itaka/CMS/
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-305
A Reference Architecture for Integrated EHR in Colombia
Edgar de la CRUZ a, Diego M. LOPEZ a,b,1, Gustavo URIBE a, Carolina GONZALEZ a,b, Bernd BLOBEL b
a Electronics and Telecommunication Faculty, University of Cauca, Colombia
b eHealth Competence Center, University Hospital Regensburg, Germany
Abstract. The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created for the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged. Keywords: Integrated EHR, interoperability, system architecture, CDA
1. Introduction
One of the main challenges that health systems around the globe are facing today is to support communication and cooperation between the different actors participating in patient care. Communication and cooperation must be supported by appropriately designed and implemented information and communication technology (ICT). The solutions have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies [1]. The development of such a complex integrated system, especially in national projects, has to start by analyzing and designing the overall structure and behavior of the system, characterized through its components, their functions and interrelations (system architecture), described at the required level of detail. A formal system architecture meeting the aforementioned quality requirements prevents project failure, delays and budget overruns. Although national legislation on the EHR does not exist in Colombia, current legislation establishes a series of characteristics of health records which can only be met by a national integrated EHR: comprehensiveness in registering every patient's health status, sequentiality of information, scientific rationality, full accessibility independent of time and place, and opportunity of use [2].
1 Corresponding Author: Diego M. Lopez, PhD, Full Professor, Telematics Department, University of Cauca, Calle 5 No 4-70, Claustro de Santo Domingo, Popayán, Colombia; e-mail: [email protected]
The objective of this paper is to
propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of EHR architectures and standards, and responding to the needs of the Colombian Health System.
2. Materials and Methods
The reference EHR architecture is described based on the Generic Component Model (GCM) [3] and the Health Information Systems Development Framework (HIS-DF) [4]. The GCM provides an architectural framework created for the purpose of analyzing any kind of system, including EHR system architectures. The analysis framework consists of three interrelated dimensions or perspectives: the domain perspective, the architectural perspective, and the development process perspective. HIS-DF supports the development process perspective based on the ISO 10746 and RUP standards.
Figure 1. The Architecture Development Process
Figure 1 presents the architecture development process (which specializes the GCM and HIS-DF) used to design the proposed reference EHR architecture. The process starts with the EHR system definition (step 1), followed by the analysis of the different EHR system clinical domains involved, also considering policy, security, administrative and other related domains (step 2), and the definition of standards for health knowledge representation (step 3). Because the Colombian government is looking for an integrated EHR solution in the short term (an initial 2-year phase), the proposal is constrained, in this phase, to the use of HL7 standards (especially CDAs and Templates) and existing terminologies for clinical information representation. Nevertheless, the proposal is to consider, in a second phase of the national EHR project (the following 4 years), the use of biomedical ontologies for knowledge representation. Finally, in step 4, the different views on the architecture are described based on ISO 10746 and the RUP, as proposed in the HIS-DF framework. The framework establishes five system perspectives: modeling of the EHR-S business processes (enterprise viewpoint); their reflection in information models (information viewpoint) based on the RIM and local and/or international terminologies, e.g., CIE-10, SNOMED CT; functional aggregation of the system's components and services (computational viewpoint);
and the platform-specific models of the engineering view and the technology view, describing how the system is implemented and deployed. The results presented in the remaining sections of the paper are restricted to the engineering and technological viewpoints. The main decision in the engineering viewpoint is the architectural model to be implemented. Jalal-Karim and Balachandran [5] identified three basic architectural models for shared EHR: a single centralized repository, a decentralized or distributed repository, and a hybrid model combining both approaches. The reference architecture presented in the following section is based on the hybrid architectural model, adapting and adding new components.
3. The Reference Architecture for Integrated EHR in Colombia
The platform-dependent architecture for shared electronic health records in Colombia is based on the hybrid architectural model, combining a centralized Patient Record Index service with distributed CDA repositories. Figure 2 shows the shared electronic health record system architecture from the engineering perspective.
Figure 2. The Reference Architecture for Shared EHR in Colombia
The architectural components and their relationships (interfaces) are:
• Proxy: This service is the interface between the different health care service providers (in Colombia known as IPS - Instituciones Prestadoras de Salud) and the other entities of the architecture. Because IPS are the institutions that by law have custody of every patient health record, the information of a single patient is distributed among the different IPS. The Proxy service enables IPS interoperability by:
  - Receiving the required clinical data sets (clinical information) from the IPS and transferring them to the CDA Service, which creates the CDA document based on pre-established CDA implementation guides.
  - Transferring the newly created CDA to the CDA Repository service.
• Patient Document Index (PDI): Indexes and displays a patient's health records chronologically. Links to each document are displayed, pointing to the documents stored in remote repositories. Each time a new record (document) is generated, the PDI service is updated. There is one PDI for each patient. PDIs are commonly managed by health insurance companies (called EPS - Entidades Promotoras de Salud in Colombia), which are responsible for citizens' social security.
• ONS (Object Name Service): A service used to return the location of the corresponding PDI, based on the received patient identification. The service is based on the EPCglobal standard, which is derived from the Domain Name Service (DNS) standard used in the Internet.
• CDA Service: Service used to create, validate, and return a CDA document, using the data sets received from the Proxy.
• CDA Repository: Service to store the clinical documents generated by the CDA Service component.
• EIS (Entity Identification Service): Unifies and manages the identification of entities (patients, institutions, users) based on the HL7 Entity Identification Service Functional Model (EIS FM).
• PASS (Privacy, Access and Security Services): Services for privacy and access control based on the ongoing HL7-SOA specifications.
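To illustrate how the components above could interact in code, a minimal Java sketch of three of the interfaces and of the Proxy publication flow is given below. The method names, parameters and the map-based clinical data set are assumptions made for this sketch, not the project's published interface definitions.

import java.util.List;
import java.util.Map;

/** Builds and validates an HL7 CDA document from a clinical data set, following an implementation guide. */
interface CdaService {
    String composeCda(Map<String, String> clinicalDataSet, String implementationGuideId);
}

/** Stores and returns CDA documents; in the distributed model each IPS would run its own repository. */
interface CdaRepository {
    String store(String cdaXml);          // returns the document identifier (e.g. an OID, assumed)
    String retrieve(String documentId);
}

/** Chronological index of a patient's documents, typically managed by the patient's EPS. */
interface PatientDocumentIndex {
    void register(String patientId, String documentId, String repositoryEndpoint);
    List<String> listDocuments(String patientId);
}

/** Sketch of the Proxy flow: compose the CDA, store it, and update the patient's index. */
class Proxy {
    private final CdaService cdaService;
    private final CdaRepository repository;
    private final PatientDocumentIndex index;
    private final String repositoryEndpoint;

    Proxy(CdaService c, CdaRepository r, PatientDocumentIndex i, String endpoint) {
        this.cdaService = c;
        this.repository = r;
        this.index = i;
        this.repositoryEndpoint = endpoint;
    }

    void publishEncounter(String patientId, Map<String, String> clinicalDataSet, String guideId) {
        String cda = cdaService.composeCda(clinicalDataSet, guideId);
        String documentId = repository.store(cda);
        index.register(patientId, documentId, repositoryEndpoint);
    }
}

In a real deployment the Proxy would first resolve the location of the patient's PDI through the ONS and would rely on the EIS and PASS services for identification and access control; those steps are omitted here for brevity.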
4. Discussion
Recently the Colombian government approved a law reforming the healthcare system, in which the implementation of a national integrated EHR is a main pillar. The integrated EHR is intended to provide organizational changes such as efficiency, improved patient care, public health services, scientific rationality and opportunity of care. However, the impact evaluation of the integrated EHR can only be provided after the project is launched (budget is allocated) and the system is implemented and deployed. In the meantime, the proposed architecture is being implemented and deployed (technological viewpoint) in the context of a pilot tele-dentistry service in Colombia (the ImagenMantis project) [6]. The pilot connects a primary dental service (at the University of Cauca) to a specialized dental clinic at CES University in Medellin. The necessary exchange of EHR documents is supported by a pilot implementation of the architecture. Project deliverables are available in [6]. Deliverables include an analysis of current dental standards for interoperability (including ADA standards), CDA implementation guides developed for general dentistry and intraoral diagnosis photography, and a technical description of the implementation process of the architecture’s components (Figure 1) based on open source platforms. The communication infrastructure and repositories are supported by a GRID platform operating across the country (MantisGRID, supported on the RENATA IPv6 network). In contrast to other telemedicine projects in the country, the main advantage of the pilot tele-consultation service is that institutions are not obliged to migrate or to use a different EHR or clinical information system. Both dental care establishments (the primary dental service and the specialized clinic) run the tele-consultation service by interoperating the proprietary dental EHR systems they are used to working with. So far, 240 CDAs, including general dentistry CDAs and intraoral dental photograph CDAs, have been exchanged. The exchange of electronic dental records (especially the intraoral dental photograph CDA) has improved the availability time of specialist diagnosis. Related architectural approaches are found in the literature. Stolba and Schanner [7] present a case study for integrating 27 hospitals in Lower Austria based on the XDS (Cross-Enterprise Document Sharing) profile. The disadvantage of this approach is that the registry where CDA metadata is stored is not distributed, so if the registry fails, the complete system is down. The Spine [8] is the architecture adopted in the UK. It is based on a centralized repository that allows patients and healthcare professionals to access clinical document summaries based on CDA. The project ran over budget due to the lack of an architecture behind it. The Canada Health Infoway Infrastructure [9] proposes
an infrastructure for health systems which is based on a service-oriented architecture (SOA) and on standards for data and services exchange. The main limitation of the proposed architecture is its dependence on the CDA standard, preventing the co-existence of different EHR models. This issue is currently being tackled by the proposal of a semantic interoperability service based on ontology mapping [10]. The scalability of the architecture is also a concern. However, as the architecture is based on a distributed model, it can scale up to a SOA architecture.
5. Conclusion
The Colombian government has identified the need for a national integrated EHR system. The paper proposes a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of EHR architectures and standards. The proposed architecture has the following advantages:
• It provides metadata indexing of clinical documents for each patient, increasing system reliability; in case of a failure, the overall system is not compromised.
• It provides implementation flexibility, supporting different technologies for CDA repositories. Different EHR systems (from different IPS) can also be integrated without needing to develop their own HL7 CDA interfaces.
• Scalability, because different EHR systems (from different IPS) can be easily integrated without affecting system performance. Migration to a SOA architecture is also envisaged.
• It is completely based on open source technologies, platforms and web standards.
Acknowledgments: This work was funded by COLCIENCIAS (contract IF004-09), the QUIPU Program under US National Institutes of Health Fogarty grant D43TW008438, and the University of Cauca in Colombia.
References
[1] Blobel B. Ontology driven health information systems architectures enable pHealth for empowered patients. Int J Med Inform 2011;80(2):17-25.
[2] Colombia. Ministerio de la Protección Social. Resolución 1995 de 1999.
[3] Blobel B. Architectural approach to eHealth for enabling paradigm changes in health. Methods Inf Med 2010;49(2):123-34.
[4] Lopez DM, Blobel BG. A development framework for semantically interoperable health information systems. Int J Med Inform 2009;78(2):83-103.
[5] Jalal-Karim A, Balachandran W. The optimal network model's performance for sharing Electronic Health Record. Multitopic Conference - INMIC, IEEE International, 2009, p. 149-154.
[6] ImagenMantis Project. [Internet]. 2011 [updated April 20, 2011]. Available from: http://mantisgrid.eia.edu.co
[7] Stolba N, Schanner A. eHealth Integrator - Clinical Data Integration in Lower Austria. 3rd International Conference on Computational Intelligence in Medicine and Healthcare, Plymouth, England, 2007.
[8] NHS National Programme for Information Technology (IT). [Internet]. Available from: http://www.connectingforhealth.nhs.uk
[9] EHR-S Blueprint. [Internet]. 2011 [updated Jan 30, 2011]. Available from: http://www.infoway-inforoute.ca
[10] Gonzalez S, Blobel B, Lopez D. Ontology-based Framework for Electronic Health Records Interoperability. In this series, 2011.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-310
Integration Services to Enable Regional Shared Electronic Health Records
Ilídio C. OLIVEIRA a,b,1 and João P. S. CUNHA a,b
a IEETA – Inst. of Electronics and Telematics Engineering of Aveiro, Aveiro, Portugal
b Dep. of Electronics, Telecomm. and Informatics, University of Aveiro, Portugal
Abstract. eHealth is expected to integrate a comprehensive set of patient data sources into a coherent continuum, but implementations vary and Portugal still lacks electronic patient data sharing. In this work, we present a clinical information hub that aggregates multi-institution patient data and bridges the information silos. This integration platform enables a coherent object model, service-oriented application development and a trust framework. It has been instantiated in the Rede Telemática da Saúde (www.RTSaude.org) to support a regional Electronic Health Record approach, fed dynamically from production systems at eight partner institutions and providing access to more than 11,000,000 care episodes relating to over 350,000 citizens. The network has obtained the necessary clearance from the Portuguese data protection agency. Keywords. Regional health information networks; systems integration; Electronic Health Record; Service-oriented architectures.
1. Introduction
Modern health care systems are expected to coordinate service points and information flows to provide patient-centric care, in which a comprehensive set of clinical data is available to the relevant care teams, regardless of where it was generated. Such secure and efficient use of clinical data is challenged by the fragmentation of existing ICT implementations [1]. Several approaches have been proposed to address the limitations of the so-called "information silos" in healthcare [2], most notably decentralized clinical message exchange networks (e.g. using HL7 and IHE standards). In this work we advocate an architectural approach to leverage the integration of regional clinical information systems [3]. The information bus provides a layer of unification by dynamically federating distributed, heterogeneous information sources, and exposes an object model accessed through a services programming interface. Information consumers (such as web portals) are thus shielded from the complexity of accessing multiple sources. The proposed architectural approach to clinical data integration is instantiated in the Rede Telemática da Saúde (RTS) system (www.rtsaude.org), a platform deployed in the region of Aveiro (Portugal) to enable primary and secondary care to share patient data and best practices. RTS is in pilot use and provides access to more than 11,000,000 care episodes by integrating 10 information sources at 8 different care organizations.
Corresponding author: Ilídio Oliveira, IEETA, Campus Universitário Santiago, 3810-193 Aveiro, Portugal; E-mail: [email protected].
2. Methods
2.1. Virtual Regional Electronic Health Record
RTS enables access to a virtual, regional Electronic Health Record (R-EHR). The term virtual denotes that (1) users perceive a unified record and (2) no materialized integration repository is kept in any central storage. The system fulfils each user request by keeping an index of the information available for each patient, supporting later on-demand retrieval from the relevant source systems, where the clinical information is actually kept (similar, for example, to [4]). No information is replicated centrally, with the exception of a Minimal Data Set (MDS) that enables the immediate lookup of patients (Table 1). After finding the desired patient in the RTS index, the information bus visits the relevant information sources and collects the fragments needed to supply the Common Information (CI) layer (Table 1). Information in this layer follows a shared information model (hence the term Common), as specified by a consensus process with the domain professionals. The most important information entities at this level are structured summaries for each care episode (Table 1); for example, RTS defines a uniform discharge letter structure, which is fed from the discharge information in the source systems. The MDS and CI layers set the scope of information available in the RTS information hub and accessible through the RTS portal (though ad hoc integrations have been developed to extend the portal, for example to access the actual imaging modalities from the PACS systems when browsing an episode summary in RTS); a rough illustrative rendering of these layers follows Table 1.
Table 1: Information layers in RTS (information layer – RTS R-EHR scope).
Minimal Data Set (MDS) layer – Part I: Essential patient demographics (health system ID, name, birth date, gender, contacts, primary care GP and organization). Part II: Minimal clinical summary: list of active health conditions and alerts (e.g. allergies); list of known contact cases (date, type, service point and health care professional in charge).
Common Information (CI) layer – Episodic data: normalized data set for different care contacts (primary and secondary care); prescriptions available for each encounter; vaccination chart.
Source information systems layer – RTS does not interfere with source information systems' content or implementation (they keep all their functions and liabilities).
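Purely as an illustration of the layering in Table 1 (the actual RTS object model is not reproduced in this paper, and all class and field names below are hypothetical), the centrally kept MDS and the on-demand CI layer could be rendered roughly as follows in Java:

// A rough, illustrative Java rendering of the layers in Table 1.
// Field and class names are invented for this sketch.
import java.util.*;
import java.util.function.Function;

class MinimalDataSet {                       // MDS: the only data kept centrally
    String healthSystemId, name, gender, primaryCareGp, organization;
    Date birthDate;
    List<String> activeConditionsAndAlerts = new ArrayList<>();   // e.g. allergies
    List<String> knownContactCases = new ArrayList<>();           // date, type, service point, professional
}

class EpisodeSummary {                       // CI layer entity, fetched on demand
    String episodeId, careSetting;           // primary or secondary care
    List<String> prescriptions = new ArrayList<>();
    String dischargeLetter;                  // uniform structure fed from source systems
}

class VirtualRecordAssembler {
    // CI data is never materialized centrally: each request walks the sources
    // known for the patient and merges the returned fragments.
    List<EpisodeSummary> assemble(List<Function<String, List<EpisodeSummary>>> sources,
                                  String healthSystemId) {
        List<EpisodeSummary> episodes = new ArrayList<>();
        for (Function<String, List<EpisodeSummary>> source : sources) {
            episodes.addAll(source.apply(healthSystemId));
        }
        return episodes;
    }
}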
2.2. Regional Health Information Hub with Services
The foundation of the R-EHR is an information bus with health domain semantics, called HIETA (Figure 1). HIETA data discovery processes look for patient demographics in the partners' systems and build the RTS Patient Index (no clinical data is replicated). The first time a patient is accessed, his MDS is built and inserted into the RTS Catalogue, and it is autonomously updated afterwards (by pulling changes from the sources). Clinical details are fetched on demand and not materialized. To achieve these functions, the HIETA integration engine coordinates complementary modules (Figure 1): Wrappers extract and align data from the source systems; the Catalogue provides metadata on the information systems and the entry points to scattered patient data;
the Master Patient Index provides a reliable source of patient identification in the region, applying domain logic to perform basic data cleaning; Directory and Authority services provide a reliable source of identification of health care professionals. Data Integration Services is the core coordinator module, proactively triggering the data discovery and distributed integration plans; it acts as a central information broker, receiving the user queries and obtaining the required data by invoking the relevant Wrappers. The effort to integrate a new source in RTS is essentially the effort to develop a specific Wrapper that exposes the selected local data according to the RTS technical interfaces and object model, by extending the base RTS Wrapper implemented in Java (a minimal illustrative sketch of such a wrapper is given after Section 2.3).
2.3. Semantic Information Integration
The RTS information bus exposes a shared object model for applications, by aligning the fragments coming from the source information systems. The semantic alignment is performed at each source by the Wrapper. To this end, the Wrapper needs to map concepts and perform local data transformations. Semantic mismatches may occur [5], especially if the source and common information models diverge substantially. In RTS, a compromise approach was adopted in the CI layer design, ensuring that the mapping process was feasible within the context of the existing information sources, while keeping an architectural style that could enable the evolution of the RTS information model towards a well-accepted, open standard [6]. Presently, the RTS object model is a compromise between the CEN ENV 12967 reference information model (to which we have previously contributed) and the dominant production systems. As the majority of the source information systems integrated in RTS are supplied by the public central administration of health IT, existing commonalities with respect to data sets and value sets were adopted in RTS in line with the existing implementations, facilitating the matching of semantic concepts. In some cases, conciliation was not possible and different terminologies coexist: for example, RTS classifies encounters using ICPC-2 for primary care episodes (it is widely used by GPs in Portugal) and ICD-9 for secondary care. Although formal relationships can be defined between ICPC-2 and ICD-10 [7], RTS still uses ICD-9, as it is the de facto standard in our hospitals (and no validated ICD-9/ICD-10 mapping is available in Portugal).
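The following Java fragment is a hypothetical sketch of such a wrapper, combining the extraction role described in Section 2.2 with the semantic alignment of Section 2.3. It is not the actual RTS base class; all identifiers are invented for illustration.

// Hypothetical sketch of integrating a new source by extending a base wrapper.
import java.util.*;

abstract class BaseRtsWrapper {
    // Site-specific part: how to read local data and how to align it.
    abstract List<Map<String, String>> readLocalEpisodes(String localPatientId);
    abstract Map<String, String> alignToCommonModel(Map<String, String> localRow);

    // Framework part: called by the integration engine when a query arrives.
    final List<Map<String, String>> episodesFor(String localPatientId) {
        List<Map<String, String>> aligned = new ArrayList<>();
        for (Map<String, String> row : readLocalEpisodes(localPatientId)) {
            aligned.add(alignToCommonModel(row));
        }
        return aligned;
    }
}

class HospitalLabWrapper extends BaseRtsWrapper {
    @Override List<Map<String, String>> readLocalEpisodes(String localPatientId) {
        // A real wrapper would query the local LIS database or its export interface.
        return Collections.emptyList();
    }

    @Override Map<String, String> alignToCommonModel(Map<String, String> localRow) {
        // Semantic alignment happens at the source: rename fields and carry the
        // coding system used in the CI layer (here ICD-9 is kept as-is).
        Map<String, String> common = new HashMap<>();
        common.put("episodeType", "secondary-care");
        common.put("diagnosisCode", localRow.getOrDefault("icd9", "unknown"));
        common.put("report", localRow.getOrDefault("result_text", ""));
        return common;
    }
}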
Figure 1: The RTS logical architecture.
2.4. Security Model
Trust is a major requirement in health information sharing and has been addressed in RTS along several dimensions:
Infrastructure security. RTS is deployed on the private national Health Information Network (isolated from the Internet). Communications between distributed RTS modules go over secure channels (SSL) and the invocation of the services API implements the WS-Security specification.
Professionals' authentication and authorization. Role-based authentication is implemented in RTS and only professionals bound to appropriate deontological codes are allowed to access patient clinical data. RTS introduces a "circle of trust" between the partner institutions, by means of cross-certification agreements. The health care professional's home institution infrastructure issues a short-lived certificate, which also holds the professional's role at the home institution for role-based authorization (see [8] for details).
Auditing and traceability. All user actions are logged for auditing purposes. Not only is this information available to the partner institutions in RTS, the patient is also able to monitor the use of his data. Using the Citizen portal, the patient can keep watch on the accesses made to his regional record and request clarifications, if appropriate.
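As a rough illustration of the authorization and auditing behaviour just described (the actual certificate handling follows [8] and is not reproduced here; all names below are hypothetical), a role check combined with an audit trail could look like this:

// Illustrative sketch only: role-based access decision plus audit logging.
import java.time.Instant;
import java.util.*;

class ProfessionalAssertion {              // information carried by the short-lived certificate
    String professionalId, homeInstitution, role;   // e.g. role = "physician"
    Instant notAfter;
    boolean isValidNow() { return Instant.now().isBefore(notAfter); }
}

class AccessControl {
    private static final Set<String> CLINICAL_ROLES = Set.of("physician", "nurse");
    private final List<String> auditLog = new ArrayList<>();

    boolean mayReadClinicalData(ProfessionalAssertion a, String patientId) {
        boolean allowed = a.isValidNow() && CLINICAL_ROLES.contains(a.role);
        // Every attempt is recorded; patients can later review accesses to their record.
        auditLog.add(String.format("%s %s@%s %s patient=%s",
                Instant.now(), a.professionalId, a.homeInstitution,
                allowed ? "READ" : "DENIED", patientId));
        return allowed;
    }
}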
3. Results
The RTS has been deployed in the region of Aveiro, on the central coast of Portugal, connecting two hospitals and six primary care centers. Each care organization contributes to the virtual EHR with different sources. The two hospitals use the same patient management solution (called SONHO), but run independent instances. Each hospital runs different laboratory and radiology information systems that were integrated in RTS (four systems supplied by four different vendors). The primary care centers run independent instances of the SINUS/SAM basic EHR system. Professionals interact with the RTS using a common web browser. After authentication, the professional queries for a given patient using standard accession keys (especially the National Health Service number). The first view over the existing RTS clinical record is a list of episodes in a grid, providing essential facts such as a timeline and data provenance (Figure 2). In this interface, the health professional can get a quick overview of the known care episodes that occurred region-wide. The user may then open each entry to access the encounter details, for example a discharge letter, a radiology report, or lab results. An episode entry may provide a direct link to the source information system's web site (with single sign-on support where available).
Figure 2: RTS-Professionals portal user interface. The patient is identified (1) and the known care episodes listed (4). An episode can cluster sub-episodes (2). Data is being pumped from several institutions (3). On the right: an episode summary, in this case, a sample discharge letter normalized in RTS.
The RTS Citizen's portal is accessible after authentication using the national citizen card (a smart card supporting certificate-based authentication). The patient then has access to a calendar-style view of events automatically pulled from the existing information systems at the partner institutions (his regional health agenda).
4. Discussion
The RTS system provides coherent and secure access to a longitudinal EHR fed from multiple care organizations. Unlike clinical messaging systems, RTS adopts a services approach, providing an information bus and a programming interface to facilitate the development of clinical applications. The discovery-based processes allow new systems to join (or leave) at any time. Professionals can browse a comprehensive region-wide EHR, and patients are allowed partial access to their information in RTS. RTS is a seminal effort in Portugal to deliver a dynamic interoperability platform for multi-institution patient data access; it is in pilot use in the region of Aveiro and is considered a case study for the now starting project for a national federated EHR, called "Registo de Saúde Electrónico" (RSE), promoted by the Ministry of Health. Future prospects for the RTS system include the introduction of additional information sources, especially the integration of the national database of electronic prescriptions as a provider to the RTS information hub. Given the ability of RTS to build digests of the distributed patient EHR, it has been selected to pilot cross-border interoperability use cases in the scope of the epSOS European project, which runs until 2013. Acknowledgements: We would like to thank HIP, SRS-A, HDA and the technical teams at UA/IEETA for the fruitful collaboration. RTS was partially funded by the "AveiroDigital/PortugalDigital" initiative of the Portuguese Government.
References
[1] Stroetmann VN, Kalra D, Lewalle P, Rector A, Rodrigues JM, Stroetmann KA, et al. Semantic Interoperability for Better Health and Safer Healthcare, in: SemanticHEALTH Report, European Commission, 2009.
[2] Cruz-Correia R, Vieira-Marques P, Ferreira A, Almeida F, Wyatt J, Costa-Pereira A. Reviewing the integration of patient data: how systems are evolving in practice to meet patient needs, BMC Medical Informatics and Decision Making 7 (2007), 14.
[3] Grimson J, Grimson W, Hasselbring W. The SI challenge in health care, Commun. ACM 43 (2000), 48-55.
[4] Harno K, Ruotsalainen P. Sharable EHR systems in Finland, Studies in Health Technology and Informatics 121 (2006), 364-370.
[5] Qamar R, Rector A. Semantic Issues in Integrating Data from Different Models to Achieve Data Interoperability, Medinfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics, Pts 1 and 2, 129 (2007), 674-678.
[6] Kalra D. Electronic health record standards, Yearbook of Medical Informatics (2006), 136-144.
[7] Cardillo E, Eccher C, Serafini L, Tamilin A. Logical Analysis of Mappings between Medical Classification Systems, in: Artificial Intelligence: Methodology, Systems, and Applications, D. Dochev, M. Pistore, and P. Traverso, eds., Springer Berlin / Heidelberg, 2008, pp. 311-321.
[8] Gomes H, Cunha JP, Zuquete A. Authentication architecture for eHealth professionals, in: On the Move to Meaningful Internet Systems 2007: CoopIS, DOA, ODBASE, GADA, and IS, Proceedings, R. Meersman and Z. Tari, eds., Springer-Verlag Berlin, Berlin, 2007, pp. 1583-1600.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-315
Towards Smart Environments Using Smart Objects
Martin SEDLMAYR a,1, Hans-Ulrich PROKOSCH a, Ulli MÜNCH b
a Medical Informatics, Friedrich-Alexander-University Erlangen-Nuremberg, Germany
b Fraunhofer IIS-SCS, Fürth, Germany
Abstract. Barcodes, RFID, WLAN, Bluetooth and many more technologies are used in hospitals. They are the technological bases for different applications such as patient monitoring, asset management and facility management. However, most of these applications exist side by side with hardly any integration and even interoperability is not guaranteed. Introducing the concept of smart objects inspired by the Internet of Things can improve the situation by separating the capabilities and functions of an object from the implementing technology such as RFID or WLAN. By aligning technological and business developments smart objects have the power to transform a hospital from an agglomeration of technologies into a smart environment. Keywords. Smart Objects, Logistics, Internet of Things
1. Introduction
For many years, barcodes have been used, e.g. on wristbands, packages and documents, to identify patients, drugs, lab samples and charts. Today, radio-based technologies such as RFID and sensor networks are already in use and under development in the healthcare domain [1, 2]. Wireless LAN is built into many mobile computers and Bluetooth is often used for point-of-care devices [3]. Hospitals are therefore confronted today with a plethora of approaches for implementing business scenarios such as asset management with RFID and WLAN, cooling chain monitoring using sensor networks and RFID, and patient tracking using virtually any existing technology. The problem is that most approaches are not necessarily compatible or seamlessly integrated into legacy systems. Thus, the full power of innovative identification and locating systems is not utilized. A conglomerate of technologies, products and concepts exists, but hardly any attempts have been made to integrate those into a (virtual) single platform. Introducing the concept of smart objects, inspired by the Internet of Things, will improve the situation. Smart objects use smart tags attached to an object or person to identify or locate it, or to enable further functions of sensing and acting. Smart object networks should be interoperable by design, allowing the user to purchase
1
Corresponding Author: Martin Sedlmayr, Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstrasse 12, 91054 Erlangen, Germany; E-mail: [email protected]
or replace a smart tag according to functional requirements, not according to what else is already installed. Of course, this will require technological as well as organisational advances. In the remainder of this contribution, smart objects and related technical implementations are described first. Then, we depict the vision of turning hospitals into smart environments based upon smart object technologies, before concluding with future research challenges.
2. Smart Objects
By combining a tag with sensors and actuators, thus enabling it to participate in its surroundings, we obtain what we call a smart tag. By pairing smart tags with real-life objects such as mobile medical devices, blood bags and patients, the objects become "smart", i.e. smart objects. Smart objects typically meet the following requirements [4] (a schematic rendering of these requirements as a programming interface is sketched after the technology overview below):
• An identity, which must be unique in its context or system
• Sensors to gather the actual status of the object and actuators to interact with the environment
• The ability to determine the current position
• Memory to store master data, to record past events and gathered data
• The ability to communicate with other objects
• The intelligence to act autonomously to reach a given goal
As can be seen, these properties can be met by using Automatic Identification and Data Communication (AIDC) technologies. The main technologies for building smart tags already exist and are in use in hospitals:
• Barcodes are the oldest and most widely used AIDC technology. Barcode labels are very cheap, can hold up to a few kilobytes of information and can be attached to virtually any product, such as blood bags and lab probes [5].
• Radio-frequency Identification (RFID) [6] is often considered a radio-based successor of barcodes. In contrast to other radio-based technologies, RFID tags are extremely robust, and solutions exist for deep-frozen bio probes, blood bags in a centrifuge or extreme heat while e.g. sterilizing surgical instruments. Recent developments in RFID labelling include printable tags [7].
• Wireless Sensor Networks [8] consist of active tags that not only communicate data upon request, like RFID, but exchange data in a network via multiple hops. Computing power on the tags allows for local processing of sensor data, and sometimes tags even work collaboratively to solve complex problems, including measurement, detection, classification, and tracking in the physical world.
• Wireless LAN (WLAN) is well known from mobile computing, and often infrastructures already exist in hospitals and can be reused. WLAN tags may be used similarly to sensor network tags but require more energy, thus limiting long-term use [9].
• Ultrasound tags can be used as a replacement for radio-based tagging of patients and devices at room level [10]. A major advantage is the absence of electromagnetic interference.
• Infrared waves allow for short-range communication similar to ultrasound [10].
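As announced above, the requirement list can be made concrete with a small, technology-neutral programming interface. The sketch below is ours and purely illustrative; it is not taken from any of the cited systems.

// Illustrative, technology-neutral interface capturing the smart-object
// requirements: identity, sensing/acting, position, memory, communication,
// autonomy. All names are hypothetical.
import java.util.*;

interface SmartObject {
    String id();                                   // unique within its context or system
    Map<String, Double> readSensors();             // e.g. temperature, acceleration
    void actuate(String command);                  // interact with the environment
    Optional<String> currentPosition();            // e.g. room-level location
    void remember(String event);                   // master data and event history
    void send(String peerId, String message);      // communicate with other objects
    void pursueGoal(String goal);                  // act autonomously towards a goal
}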
An integration platform (also called middleware) serves as a façade and bridges the gap between the technology and the legacy systems. It is an abstraction layer which controls the interaction between these technologies and existing enterprise infrastructures, supports intra-corporate and cross-company integration, and aims at reducing integration costs significantly. Besides simple data exchange, typical integration functions comprise the filtering, aggregation and (pre-)processing of sensor data. This is why such platforms often include a business rule engine and solutions for complex event processing [11, 12]. Both can be used to detect relevant events or alarms [13]. The combination of AIDC technologies and integration platforms aims at identifying objects, knowing where they are and collecting data without further manual effort [6]. This allows synchronizing the real world with its virtual counterpart and therefore enables better process and decision support.
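To make the event-processing role of such a platform concrete, the following sketch shows a hypothetical cooling-chain rule that turns raw sensor readings into an alarm. It stands in for what a business rule engine or complex event processing component would do and does not represent any specific product.

// Minimal, illustrative sensor-event rule (hypothetical cooling-chain scenario).
import java.util.function.Consumer;

class SensorEvent {
    String objectId; String type; double value; long timestamp;
    SensorEvent(String id, String type, double v, long t) {
        this.objectId = id; this.type = type; this.value = v; this.timestamp = t;
    }
}

class CoolingChainRule {
    private final double maxTemperature;
    private final Consumer<String> alarmSink;      // e.g. forward to the HIS as an alert
    CoolingChainRule(double maxTemperature, Consumer<String> alarmSink) {
        this.maxTemperature = maxTemperature; this.alarmSink = alarmSink;
    }
    void onEvent(SensorEvent e) {
        // Filter: only temperature readings above the threshold raise an alarm.
        if ("temperature".equals(e.type) && e.value > maxTemperature) {
            alarmSink.accept("Cooling chain violated for " + e.objectId + ": " + e.value + " °C");
        }
    }
}

class Demo {
    public static void main(String[] args) {
        CoolingChainRule rule = new CoolingChainRule(8.0, System.out::println);
        rule.onEvent(new SensorEvent("bloodbag-42", "temperature", 11.5, System.currentTimeMillis()));
    }
}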
3. Building Smart Environments
But making objects smart is only a first step. Knowing the identity of an object ("what is") should be complemented by "where is", "when is", and "in what condition" [14]. While existing wireless AIDC technologies meet many smart object requirements, the concept of smart objects goes beyond the application of a single technology, as there will be no one-size-fits-all solution based on RFID or WLAN. Sensor networks are much more expensive than RFID, as they require local intelligence for processing complex protocols, but meshed nodes can form energy-efficient communication networks for pro-active services where RFID fails. The absence of a fixed infrastructure, such as the gates and readers required by RFID, is another advantage in dynamic environments. WLAN, on the other hand, consumes more energy but has a much larger communication bandwidth and is built into many new mobile devices. Besides technological differences, hospitals have already invested in products and infrastructures of disparate technologies. This is why many different types of smart objects will exist, depending on the objects to which a smart label is attached, the functional requirements and the extent to which the smart object has to participate in its surroundings [15]. Consequently, developments also come from various application areas, each focusing on different aspects of sensors, communication, interaction and use cases such as Ambient Assisted Living, Near Field Communication, and even Building Automation [16-18]. Overcoming these mostly segregated developments is the main vision of the Internet of Things [19], which is based on the idea that every object (person, machine, system) participates and communicates in a universal network. Concepts beyond the classical Internet are required to integrate networking with service development and advanced concepts of network and service capabilities, trust and security, as well as new architectures for distributed, adaptive and "intelligent" systems [20]. This is where the concept of simply connected objects ends and the Internet of Things begins [19]. In the end, aligning the technology with organizational and work processes enables smart environments. We believe the Internet of Things also has advantages in the healthcare setting, enabling the building of smart environments supporting many medical and logistics scenarios. There should be a common vision of what hospitals of the future could look like to help coordinate the individual development efforts (Figure 1).
Figure 1. Aligning disparate developments by a holistic view on a hospital as smart environment
Many different technologies, each with its own capabilities and restrictions, are available. There is also an enormous speed of innovation and evolution, making technology more affordable, smaller, faster and more energy efficient. Yet, there is still a lack of integration at the technical as well as the business level.
4. Conclusion
Introducing these technologies into the healthcare sector by tagging people, material and buildings for identification, tracking and monitoring will increase transparency and will allow better process and information logistics. This is where the concept of smart objects will help to overcome technological barriers, by separating the implementation of the infrastructure from the development of smart-object-based business services. Smart object networks should be interoperable by design, allowing the user to purchase or replace a smart tag according to functional requirements, not according to what else is already installed. Advancing the smart object concept requires, on the one hand, work on the technological solution: making the hardware smaller, cheaper and more energy efficient; making the protocols more reliable and secure; designing services that last. On the other hand, integration and interoperability are still neglected. To unleash the full potential, smart object networks require the interchange of data and services across technologies and locations. In the end, however, new developments will only persist if a real benefit is perceived. Business models along the whole value chain have to be found so that costs can be minimized or split among stakeholders, and benefits exist. However, this will require organisational changes beyond the short-sighted replacement of one technology with another [14].
References
[1] Vilamovska AM, Hatziandreu E, Schindler R, et al. Scoping and identifying areas for RFID deployment in healthcare delivery. Study on the requirements and options for RFID application in healthcare. Brussels: RAND Europe; 2008.
[2] Sedlmayr M, Becker A, Muench U, Meier F, Prokosch HU, Ganslandt T. Towards a smart object network for clinical services. AMIA Annu Symp Proc. 2009 Jan 1;2009:578-82.
[3] Campbell RJ, Durigon L. Wireless communication in health care: who will win the right to send data boldly where no data has gone before? Health Care Manag (Frederick). 2003 Jul-Sep;22(3):233-40.
[4] Pflaum A. Theft prevention system based on networked active Tags (sensor networks). RFID Smart Labels; 22 Feb. 2007; Boston, USA, 2007.
[5] Li BN, Dong MC, Vai M. From Codabar to ISBT 128: Implementing Barcode Technology in Blood Bank Automation System. Engineering in Medicine and Biology Society, 2005. IEEE-EMBS 2005. 27th Annual International Conference of the. 2005:542-5.
[6] Want R. An introduction to RFID technology. IEEE Pervasive Computing. 2006 Jan 1.
[7] Harrop P. Printed Electronics – The Big Picture. Ink Maker. 2007 Jan 1.
[8] Akyildiz I, Su W, Sankarasubramaniam Y, Cayirci E. Wireless sensor networks: a survey. Computer Networks: The International Journal of Computer and Telecommunications Networking. 2002 Mar 1;38(4).
[9] Alemdar H, Ersoy C. Wireless sensor networks for healthcare: A survey. Computer Networks: The International Journal of Computer and Telecommunications Networking. 2010 Oct 1;54(15).
[10] Shahid B, Kannan A, Lovell N, Redmond S. Ultrasound user-identification for wireless sensor networks. Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE. 2010:5756-9.
[11] Black J, Segmuller W, Cohen N, et al. Pervasive computing in health care: Smart spaces and enterprise information systems. MobiSys 2004 Workshop on Context Awareness; Jan 1; Boston, Massachusetts, 2004.
[12] Cook D, Augusto J, Jakkula V. Review: Ambient intelligence: Technologies, applications, and opportunities. Pervasive and Mobile Computing. 2009 Aug 1;5(4).
[13] Lempert S, Pflaum A, editors. Über die Notwendigkeit einer Integrationsplattform für unterschiedliche Smart Object Technologien. 9. Fachgespräch Sensornetze 2010: GI/ITG Fachgruppe Kommunikation und Verteilte Systeme; 2010 16-17.12.2010; Würzburg: Universität Würzburg, Institut für Informatik.
[14] Michael K, Roussos G, Huang G, et al. Planetary-Scale RFID Services in an Age of Uberveillance. Proceedings of the IEEE. 2010;98(9):1663-71.
[15] Fujinami K. Interaction Design Issues in Smart Home Environments. Future Information Technology (FutureTech), 2010 5th International Conference on. 2010:1-8.
[16] Frehill P, Chambers D, Rotariu C. Using Zigbee to integrate medical devices. Conf Proc IEEE Eng Med Biol Soc. 2007;2007:6718-21.
[17] Florentino GHP, Paz de Araujo CA, Bezerra HU, et al. Hospital automation system RFID-based: technology embedded in smart devices (cards, tags and bracelets). Conf Proc IEEE Eng Med Biol Soc. 2008 Jan 1;2008:1455-8.
[18] Kumar S, Swanson E, Tran T. RFID in the healthcare supply chain: usage and application. Int J Health Care Qual Assur. 2009 Jan 1;22(1):67-81.
[19] Zouganeli E, Svinnset IE, editors. Connected objects and the Internet of things: A paradigm shift. Photonics in Switching, 2009. PS '09. International Conference on; 2009 15-19 Sept. 2009.
[20] National Intelligence Council, editor. Appendix F – Background: The Internet of Things. Disruptive Civil Technologies: Six Technologies with Potential Impacts on US Interests out to 2025; 2008.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-320
Interoperability in Hospital Information Systems: a Return-On-Investment Study Comparing CPOE with and without Laboratory Integration
Rodolphe MEYER a,1 and Christian LOVIS b
a Division of Medico-economic Analysis, Geneva University Hospitals, Switzerland
b Division of Clinical Informatics, Geneva University Hospitals, Switzerland
Abstract: Despite its many advantages, using a computerized patient record is still considered a time-consuming activity for care providers. In numerous situations, time is wasted because of the lack of interoperability between systems. In this study, we aim to assess the time gains that nursing teams could achieve with a tightly integrated computerized order entry system. Using a time-motion method, we compared expected versus actual time spent managing laboratory orders for two different computerized systems: one integrated, the other not integrated. Our results tend to show that nurses complete their task on average five times faster than they expect (p<0.001). We also showed that a system tightly integrated with the laboratory information system provides a threefold speed gain for nurses compared to a non-integrated CPOE (p<0.001). We evaluated the economic benefit of this gain, therefore arguing for strong interoperability of systems, in addition to patient safety benefits. Keywords: Computerized physician order entry, electronic prescribing, nurse, time consumption.
1. Introduction
Computerized physician order entry (CPOE) and e-prescribing systems constitute a major component of electronic health records (EHR) and are identified as a strategic goal for most modern hospitals [1]. Many studies have shown strong evidence of reductions in medication errors and adverse drug events following CPOE implementation [2]. However, adoption of EHRs and CPOEs in hospitals has been slow due to many factors, including the high purchase, implementation and maintenance costs of such systems, the immaturity of software products, the lack of integration between different components of a hospital information system (HIS), medical staff resistance and the emergence of new mortality and morbidity causes [1][3-4]. The actual return on investment of EHRs has been debated since their inception and represents another implementation barrier. Recent studies have shown that these systems may be worth every dollar invested if implemented correctly with a high level of integration [5,6]. Such conclusions are now
Corresponding author: Rodolphe Meyer, Geneva Universities Hospitals, DAME/DTSD, Rue Gabrielle Perret-Gentil 4, CH-1211 Geneva 14 – E-mail: [email protected]
acknowledged by the US administration with the ARRA HITECH stimulus act of 2009. Regarding CPOE, studies show a gap between medical teams' perceived value of the technology and their intent to adopt it. Most teams are reluctant to cross the gap on the premise that CPOE is a time-inefficient or time-consuming task [1]. Different methods have been tried to objectively evaluate the impact of CPOE systems on physicians' and nurses' workflow in terms of time gains or losses [7-9]. Studies focused on CPOE concluded that, if carefully implemented, it will not greatly disrupt nursing staff workflow [10-11].
2. Research Question and Methods
Issues around the perceived time consumption generated by e-prescribing systems represent an important barrier to the rapid adoption of CPOE by nursing teams. In this paper we aim to measure the difference between the perceived time spent on e-prescribing and real time measurements. We also computed the time gains for nurses derived from the implementation of a second-generation CPOE with a higher level of integration with the Hospital Information System (HIS). The setting for the study was the department of geriatrics and rehabilitation of the Geneva University Hospitals. A computerized patient record (CPR), mostly developed in-house, is used in all facilities and runs on more than 7'500 desktops and more than 1'000 laptops. The CPR uses a Java component-based architecture with a message-oriented middleware. It includes an in-house CPOE system, called "Presco", a widely deployed multi-purpose order entry platform for drugs, radiology, laboratory and nursing care, to name a few. Presco has built-in decision support, such as information on drugs and specific rules according to specialty (for example pediatric orders); it supports order sets and various kinds of alerts and reminders, up to complex clinical pathways. It is tightly integrated with the CPR, thus rules can be based on laboratory or other clinical information. The study takes advantage of the fact that we still use a specific user interface for laboratory requests (Request, used by nurses to rewrite orders given by physicians in the CPOE). The direct link between the CPOE and the laboratory system is available, but the study was performed during the rollout of the direct link, thus allowing us to measure the time spared by building it. We used time-motion methods to evaluate the time nurses spent on computers either transcribing order entries from Presco to Request or directly validating order entries in Presco, using the direct link. In addition, and for each of the two ways, nurses were first asked to subjectively estimate the time they would spend doing the job. Time-motion methods are considered the gold standard, as they capture the nurses' tasks continuously during a set time interval (here the computer activity related to e-prescribing) [11]. The time-motion measures were performed by the same person during fall 2008 in a geriatric ward, on 101 prescribing tasks in 33 units involving 66 nurses. Statistical analysis was performed using PASW® Statistics 17. We did not assume a normal distribution, since with our number of observations (100) the data do not follow a Gaussian distribution for all parameters, and we used Wilcoxon signed-rank tests to compare means.
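To illustrate the kind of paired comparison underlying this analysis, the following self-contained sketch computes mean gains and speed-up factors from estimated versus measured times. The numbers in it are invented for illustration and are not the study data; significance in the study itself was assessed with Wilcoxon signed-rank tests, not reproduced here.

// Illustrative paired comparison of estimated vs. measured task times.
import java.util.Arrays;

class TimeMotionComparison {
    static double mean(double[] xs) { return Arrays.stream(xs).average().orElse(Double.NaN); }

    public static void main(String[] args) {
        double[] estimated  = {240, 300, 180, 260};   // Test, seconds per task (fictitious)
        double[] integrated = {50, 55, 48, 60};       // Tpre, seconds per task (fictitious)

        double gain = mean(estimated) - mean(integrated);
        double speedup = mean(estimated) / mean(integrated);
        System.out.printf("mean gain: %.0f s, speed-up factor: %.1f%n", gain, speedup);
    }
}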
3. Results
3.1. Order Entry Values
Nbpr represents the number of prescribed exams entered in Presco for one patient. Chkb represents the number of checkboxes needed to validate the exams using the direct link. The difference between Chkb and Nbpr is explained by the use of order sets that group several orders in one entry. Labs represents the number of different laboratories that will perform the lab analysis. This number reflects the quantity of stickers printed to identify samples that will be carried to the various laboratories. Nonc represents the number of lab exams that cannot be prescribed using the CPOE. This number reflects the quantity of exams that cannot be e-prescribed and need paperwork to be sent. 68% of the patients in our sample were not affected; 19 patients had one Nonc exam, 8 patients had two, and 5 patients had three or four Nonc exams.
3.2. Time Observed and Measured
Test represents the estimated time that nurses expect to spend on the task. This estimation is purely subjective, depending on the expected complexity of the order entry task. Treq represents the observed time spent by nurses to complete their order entry in the Request environment. This measure is objective. Tpre represents the observed time spent by nurses to complete the job using the direct link, thus only validating the order. This measure is objective.
3.3. Potential Gain
Comparing the means of Test versus Treq to complete an order entry in the Request environment shows a gain of 104 seconds over what was expected by nurses (3 minutes instead of 4 minutes 44 seconds). Comparing the means of Test versus Tpre to complete an order entry in the Presco environment shows a gain of 233 seconds over what was expected by nurses (less than one minute instead of 4 minutes 44 seconds). Comparing the means of Treq versus Tpre to complete an order entry using the direct link shows a gain of 129 seconds compared to the retranscription of orders in Request (less than one minute instead of 3 minutes) (p < 0.001).
3.4. Time Observed and Measured Related to the Number of Exams
Test/N represents the estimated time that nurses expect to spend to keyboard their order entry (Test) divided by the number of exams to enter (N). This estimation is purely subjective, depending on the expected complexity of the order entry task and experience doing so. For our sample we have an average of 55 seconds of estimated time consumption for each patient's order entry, with a median of 30 seconds per order entry. The maximum estimation is 6 minutes (360 sec.) and the minimum estimation is 9 seconds average time per order entry. Treq/N represents the measured time that nurses spent to keyboard their order entry (Treq) in the Request environment divided by the number of exams to enter (Nbpr). This measure is objective. In our sample, we have an average of 37 seconds of time consumption for each patient's order entry, with a median of 21 seconds per order entry. The maximum measure is 4 minutes and 10
seconds (250 sec.) but concerns only two nurses. The minimum measure is 7 seconds average time per order entry. Tpre/N represents the measured time that nurses spent to validate the order that was sent automatically using the direct link (Tpre) divided by the number of exams to enter (Nbpr). This measure is objective. In our sample, we have an average of 11 seconds of time consumption for each patient's order entry, with a median of 5.6 seconds per order entry. The maximum measure is 52 seconds per order entry but concerns only two nurses. Ten nurses have an average entry time per order of more than 45 seconds. The minimum measure is around 2 seconds average time per order entry.
3.5. Potential Gain Related to the Number of Exams
Comparing the means of Test/N versus Treq/N to complete an order entry in the Request environment, considering the number of exams, shows an average gain of 18 seconds per order entry over what was expected by nurses. Comparing the means of Test/N versus Tpre/N to complete an order entry shows an average gain of 44 seconds per exam over what was expected by nurses (almost five times faster than expected). Comparing the measured time in the Request environment (Treq/N) versus the measured time to complete an order entry in the Presco environment (Tpre/N) shows a gain of almost 26 seconds over what was achieved by nurses in the old system (more than three times faster than in the former system) (p < 0.001).
3.6. Influence of the Number of Exams on the Time Estimated and Observed
Plotting Test and Treq against Nbpr confirms a positive correlation between these variables. This relationship diminishes when plotting Tpre against Nbpr. The number of order entries to key into the system does not influence time consumption to the same extent as in the older system or as in the nurses' estimation. 100% of the times needed in Presco to complete order sets range between 45 and 60 seconds. The time gain from the Presco system, plotted against the number of exams, shows a high time gain when the number of exams increases in volume.
4. Discussion and Conclusion
There are several limitations to this study. The level of nurses' technophilia has not been evaluated, but it certainly plays a role in the observed results. Even if we can assume a fairly even distribution (only one or two out-of-range times) in the dataset, this will certainly be a variable to measure and integrate in future work. Several other parameters associated with technophilia would be interesting to collect (age, country of origin, level of academic education, training courses followed, seniority in the hospital, etc.). Another limitation of the study lies in the type of care provided in the selected ward. Measurements should be performed in other hospital wards with a high level of complex and varied order entries at different hours (intensive care units, emergency departments, etc.), along with acute and short-term care wards. The time period and the number of observed orders should also be increased. An automated process monitoring this activity could also be integrated into the HIS to study nurses' behavior regarding future developments of the CPOE and to avoid the Hawthorne effect (nurses could change their behavior when they know that they are being measured). In this study we wanted
324
R. Meyer and C. Lovis / Interoperability in Hospital Information Systems
to answer two different questions. First, we wanted to verify the hypothesis that nurses tend to overestimate the time spent using the CPOE system. We showed that they work five times faster than their own estimation with a directly and highly integrated CPOE system (Presco). This assessment also applies to an older system where they have to keyboard their orders from one system (Request) to another (Presco). With this legacy system, they still overestimate their time consumption by 33%. Secondly, we wanted to document the return on investment of developing a tight integration between the CPOE and the laboratory information system. Such integration is often available, but while the cost of interoperability is frequently evaluated, the economic benefit of interoperability is little documented. Our data confirmed that building tight interoperability allows the service to be used three times as fast, saving an average of 26 seconds per order entry. Another interesting point is that the number of exams to be processed by nurses in Presco is not a major determinant of the total time spent. For a number of exams ranging from 1 to 26, the time spent will be between 45 and 60 seconds in the new system (90 to 480 seconds without interoperability). This is a direct consequence of the effort put into the ergonomics of the system and of the broad interaction between computer scientists and physicians to design order sets well tailored to daily medical activities. Finally, one can hope that the time gains evidenced in this research could be employed in numerous ways aiming to enhance the quality of care delivered in the hospital. Considering the aspect of analytic accounting, the average salary of the nurses involved in this study was 0.73 dollars per minute of working time. Building interoperability saved 2600 seconds (43 minutes) of working time for the 100 order sets studied. About 20'000 lab orders are generated by physicians each month using the CPOE, each order containing at least one analysis but usually many more; this represents a potential working-time saving equivalent to approximately $6,325 per month.
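For transparency, the monthly figure quoted above can be reconstructed with a back-of-the-envelope calculation (assuming, as a simplification, that each of the roughly 20'000 monthly lab orders saves the measured 26 seconds):

\[
20\,000\ \tfrac{\text{orders}}{\text{month}} \times 26\ \tfrac{\text{s}}{\text{order}} = 520\,000\ \tfrac{\text{s}}{\text{month}} \approx 8\,667\ \tfrac{\text{min}}{\text{month}}, \qquad
8\,667\ \tfrac{\text{min}}{\text{month}} \times 0.73\ \tfrac{\$}{\text{min}} \approx \$6\,327\ \text{per month},
\]

which is consistent with the approximately $6,325 of monthly working-time savings stated above.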
References
[1] Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc 2005:12(1):8-12.
[2] Bates DW, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999:6(4):313-21.
[3] Schade CP, Sullivan FM, de Lusignan S, Madeley J. e-Prescribing, efficiency, quality: lessons from the computerization of UK family practice. J Am Med Inform Assoc 2006:13(5):470–5.
[4] Middleton B, Hammond WE, Brennan PF, Cooper GF. Accelerating U.S. EHR adoption: how to get there from here. J Am Med Inform Assoc 2005:12(1):13–9.
[5] Meyer R, Omnes L, Degoulet P. Impact of Health Care Information Technology on Hospital Productivity Growth: a Survey in 17 Acute University Hospitals. Medinfo 2007:12(1):203-7.
[6] Meyer R, Degoulet P. Assessing the Capital Efficiency of Healthcare Information Technologies Investments: An Econometric Perspective. IMIA Yearbook 2008:3(1):114-127.
[7] Hollingworth W, et al. The impact of e-prescribing on prescriber and staff time in ambulatory care clinics: a time motion study. J Am Med Inform Assoc 2007:14:722-30.
[8] Pizziferri L, et al. Primary care physician time utilization before and after implementation of an electronic health record: a time-motion study. J Biomed Inform 2005:38(3):176-88.
[9] Bosman RJ. Impact of computerized information systems on workload in operating room and intensive care unit. Best Practice & Research Clinical Anaesthesiology 2009:23:15-26.
[10] Mador RL, Shaw NT. The impact of a Critical Care Information System (CCIS) on time spent charting and in direct patient care by staff in the ICU: A review of the literature. Int J Med Inform 2009: In Press.
[11] Finkler SA, Knickman JR, Hendrickson G, Lipkin M, Thompson WG. A comparison of work-sampling and time-and-motion techniques for studies in health services research. Health Serv Res 1994:29(5):623-6.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-325
Building the Technical Infrastructure to Support a Study on Drug Safety in a General Hospital
Melanie KIRCHNER a,1, Thomas BÜRKLE a, Andrius PATAPOVAS a, Anja MATHEWS b,c, Reinhold SOJER a, Fabian MÜLLER c, Harald DORMANN b, Renke MAAS c, Hans-Ulrich PROKOSCH a
a Chair of Medical Informatics, University Erlangen-Nuremberg, Germany
b Klinikum Fürth, Germany
c Institute of Experimental and Clinical Pharmacology, University Erlangen-Nuremberg, Germany
Abstract. We describe the reorganization steps and the technical infrastructure required to support a multidisciplinary research project aimed at improving the safety of drug therapy in the emergency department (ED) of a community hospital. Assessment of drug safety required consolidation of data from various sources in a single-source approach. We solved this by transferring digital data from the hospital information system (HIS) and attached clinical systems into a pseudonymized study database (secuTrial), which is also used as a web-based data capture tool to rate drug-associated risk situations and was extended with a mechanism for the dynamic upload of further data. Paper-based documentation in the ED was digitized using digital pen technology. Keywords. Drug Safety Research, Technical Infrastructure, Digital Pen, Electronic Data Capture, Clinical Trial
1. Introduction
Medication errors and adverse drug effects may cause up to 10% of hospital admissions and are under discussion as one of the most frequent causes of death in the western world [1, 2]. Up to 50% may be preventable [3]. Here we present an Electronic Data Processing (EDP) infrastructure which was developed to support a drug safety research project by integrating heterogeneous data sources attributable to drug risks into a single source. The research project is currently conducted in the emergency department (ED) of a German municipal hospital (Klinikum Fürth, Germany), aiming at the evaluation of measures to diminish the number of adverse events and risk situations associated with drug therapy. The goal of the research project (termed "study" in this paper) is to evaluate the efficiency and feasibility of measures to improve drug safety in the ED. This is done by comparing the number of drug-associated risk situations in each study phase, before and after different interventions were accomplished (see Fig. 1). All cases are assessed
Corresponding author: Melanie Kirchner, Department for Medical Informatics, University of Erlangen, Krankenhausstr. 12, 91054 Erlangen, Germany. E-mail: [email protected]
in parallel by specialists in emergency medicine, clinical pharmacologists and pharmacists located in Fürth and at the University of Erlangen. This requires rapid access to and clear presentation of clinical data, which in clinical reality is very difficult to provide. Information is usually scattered across various data formats, ranging from handwritten medication notes to ECG data available in a manufacturer-specific format. We therefore had to devise a new EDP system approach that supports data drawn from multicentre activities and builds on single-source data collection in the hospital information system (HIS) and data reuse for study purposes. This is combined with organisational changes and additional data integration tasks in the shape of a study data system (secuTrial®) and digital pen (ePen®) based data acquisition.
Figure 1. Study workflow
2. Methods
Compared to inpatient departments of German hospitals, ED documentation is still often paper-based due to rapid patient turnover and the information demand at the bedside. Although Klinikum Fürth runs a commercial HIS (ORBIS), workflow and documentation analysis showed that many of the data elements required for the study resided on various paper-based forms used for each ED patient. Data analysis was used to identify all elements required for the study and their origin. Workflow analysis showed where and when which type of data was generated. Following the analysis, a reorganization process was started for the ED. It comprised the development of new and more comprehensive paper forms, so that all relevant patient data fit on just two forms. In addition, the reorganization concerned methods to increase the amount of digitally recorded data, resulting in the establishment of an ePen-based [4] documentation method for the redesigned forms. All reorganization steps were closely monitored to prevent unintended effects, such as delays in patient care, and to avoid extra effort or documentation workload for the ED staff. Furthermore, a cooperative study coordination centre and its EDP infrastructure were established. As the reorganization was completed before the first study phase started, altering the documentation processes did not affect study results.
3. Results
Within the Klinikum Fürth ED, the paper-based patient record served as the primary document for diagnosis, treatment and follow-up. Study-relevant information on drugs was recorded on up to five different paper forms.
We identified 350 potentially relevant variables for the study. A patient, for example, may receive up to 30 different drugs before, during and after the ED stay, for which relevant dosing data should be recorded. Any drug can induce several drug-associated risk situations. Screening for these situations requires a large amount of clinical data: personal details (e.g. age, sex), administrative information (e.g. admission date), medical data (such as diagnoses, findings, lab results) and order information (drug orders etc.). However, drug order data recorded on paper in the Fürth ED, for example, proved incomplete for study purposes, because standardized documentation and coding into the Anatomical Therapeutic Chemical Classification (ATC) were missing. To rate the probability of medication errors or adverse effects, further information such as the Naranjo score [5] or the Common Toxicity Criteria (CTC) score [6] had to be recorded. Some of the required items were already stored digitally in the HIS or connected systems. To enable cooperation with the clinical pharmacologists at Erlangen University, secure external access was required for the study database, which should also be able to deliver data for import into statistics packages. For the study a new digital infrastructure was established, which is shown in Fig. 2. It spans all sites involved in the study: Klinikum Fürth, the Institute of Experimental and Clinical Pharmacology and the Chair of Medical Informatics at Erlangen University. The first decision was to establish a study database for the collection of all study-relevant data. At Erlangen University a commercial GCP (Good Clinical Practice) certified [7] study data documentation system (secuTrial®) had already been established for other studies. It was therefore decided to collect the final pseudonymized patient data set in a new instance of secuTrial®, which offers a protected web interface for data retrieval and data recording in multicentre studies. All systems dealing with non-pseudonymized patient data are safely located within the internal private network of Klinikum Fürth to guarantee data privacy. The clinical network architecture includes the HIS, delivering e.g. demographic and lab data, other attached clinical systems storing e.g. ECG findings, and the ePen® server. A collector application collects data from these different sources, feeds it into a comprehensive pseudonymization process using the special web-based PID-Generator application [8] and transfers it to the study database over the SSL protocol.
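A hedged sketch of the collector's control flow (gather records inside the hospital network, replace identifying data with a pseudonym obtained from a PID service, and submit the result to the study database over an encrypted channel) is given below. Interfaces, names and the pseudonymization call are placeholders, not the actual Fürth/Erlangen implementation.

// Illustrative sketch of the collector step: export, pseudonymize, submit.
import java.util.*;

interface ClinicalSource { List<Map<String, String>> export(); }       // HIS, lab, ECG, ePen server
interface PidService { String pseudonymFor(String patientIdentity); }  // stand-in for the PID generator [8]
interface StudyDatabaseClient { void submit(Map<String, String> pseudonymizedRecord); } // SSL/TLS transport

class Collector {
    private final List<ClinicalSource> sources;
    private final PidService pidService;
    private final StudyDatabaseClient studyDb;

    Collector(List<ClinicalSource> sources, PidService pidService, StudyDatabaseClient studyDb) {
        this.sources = sources; this.pidService = pidService; this.studyDb = studyDb;
    }

    void run() {
        for (ClinicalSource source : sources) {
            for (Map<String, String> record : source.export()) {
                Map<String, String> out = new HashMap<>(record);
                // Identifying fields never leave the hospital network in clear text.
                String pid = pidService.pseudonymFor(out.remove("patientName") + "|" + out.remove("birthDate"));
                out.put("pseudonym", pid);
                studyDb.submit(out);
            }
        }
    }
}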
Figure 2. System architecture
Storing all available finding data would have led to an overload of the study database, whereas only a few values were relevant for drug-associated risk situations. To avoid this situation, a technical procedure based upon the Greasemonkey web browser
extension [9] was developed for the secuTrial® frontend. Thus a researcher may dynamically upload lab data or ECG measurements on demand to the secuTrial® database. As mentioned above, much study-relevant patient information in the Fürth ED was and is recorded on paper. Reorganization resulted in an improved set of two paper forms containing virtually all data items required for diagnosis and therapy in the ED. Direct recording of all data in the HIS proved too time-consuming and not feasible, because of physicians' preferences and space restrictions that prevented providing computer terminals in all locations. Instead we decided to establish an ePen®-based digital acquisition process for the newly redesigned ED forms. The digital pen contains a camera and identifies a unique pattern printed on each form. Data is allocated to the correct patient by scanning a barcode from a patient label. The digitized image of the paper form is made available within the HIS via a link to the ePen® server. Handwritten data is transferred to the digital ED form when medical staff plug the pen into a docking station connected to a clinical workstation. The person uploading the data is prompted to confirm structured data elements, which are subjected to an OCR process. Linkage to controlled vocabularies (such as the MMI PHARMINDEX drug database) ensures that, for example, the ATC code is stored together with the correct drug and substance name. Usage of the ePen is now an integral part of the nursing staff's daily work processes.
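The confirmation step after pen upload can be illustrated with the following hypothetical fragment: OCR output for a drug field is resolved against a drug vocabulary so that the ATC code is stored with the entry, and unrecognized names are sent back for manual correction. The lookup map is only a stand-in for the MMI PHARMINDEX integration, and the class and method names are invented for this sketch.

// Illustrative sketch of OCR confirmation with ATC-code resolution.
import java.util.*;

class PenFormProcessor {
    // Example entries only; a real implementation would query the drug database.
    private static final Map<String, String> ATC_BY_DRUG = Map.of(
            "metamizole", "N02BB02",
            "ramipril",   "C09AA05");

    // Returns the structured record for the digital ED form, or empty if the
    // OCR result is unknown and the uploading user must correct it manually.
    Optional<Map<String, String>> confirmDrugEntry(String ocrDrugName, String dose) {
        String key = ocrDrugName.trim().toLowerCase(Locale.ROOT);
        String atc = ATC_BY_DRUG.get(key);
        if (atc == null) return Optional.empty();
        Map<String, String> entry = new HashMap<>();
        entry.put("drug", key); entry.put("atc", atc); entry.put("dose", dose);
        return Optional.of(entry);
    }
}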
4. Discussion

Different approaches are possible to establish such an EDP infrastructure for research. A study system should comprise a centralized database, easy data submission, computer- and platform-independent access, remote application maintenance, accessibility inside the clinical network and appropriate data privacy. Either individual, self-programmed software [9] or established commercial software products such as secuTrial can be employed. For FDA- or EMA-regulated trials it may be mandatory that the study database system is GCP certified. Multicentre studies impose additional requirements on the EDP environment in terms of multicentre access to study data, e.g. via the World Wide Web. German regulations require that any study database is either anonymized or pseudonymized [8]. We have presented an approach using a combination of commercial applications (HIS, study database, ePen) and some software adaptations (e.g. collector, browser extension) to fulfil the requirements named above.

In contrast to clinical trials, very little is predefined in daily clinical routine. The number and type of variables obtained per patient vary greatly, and neither the vocabulary nor the place where information is stored has been standardized so far. Thus we had to combine the large amount of direct clinical data required for the assessment of each drug-associated risk situation with research data [10, 11]. As an alternative, individual self-programmed software might seem the preferable solution, but even though Pavlovic writes that Cliniporator [12] was “easy to implement”, the time needed to set up a customized solution would have exceeded our time schedule by far, especially as the question of safe multicentre documentation had to be considered. Using our approach, however, one should be aware that available commercial solutions rarely meet specific trial needs without modification. We note that, due to the limited flexibility of secuTrial and the dynamic study design, the data
structure and electronic case report forms (eCRF) used in this study are complex and partially redundant. We found that enabling additional “on demand” data upload using the Greasemonkey web browser extension helped considerably to overcome these restrictions. This was achieved in-house, with no additional expenditure for customization or consultancy services [13]. To apply our approach to several concurrent studies, further instances of secuTrial are needed. The data transfer infrastructure may be reusable in large part, even though study-specific customization will be indispensable.

Furthermore, a study in a busy hospital department should strive to minimize negative impact upon the clinical environment. Additional workload and supplemental study forms or activities can jeopardize a study if they are not covered by sufficient extra resources. In an emergency department, delays in patient treatment may even result in serious patient injury or death. Therefore single-source documentation with minimal influence upon proven treatment workflows is mandatory. We found that maintaining paper-based documentation, but on improved forms combined with digital pen technology, simplified data acquisition and documentation without changing daily work processes, which was highly appreciated, especially among nursing staff.

In summary, we conclude that an approach combining single-source HIS data with ePen recordings plus on-demand transfer of additional data items into a study database is a feasible way to reconcile the many requirements a study on drug-associated risk situations imposes. In realizing this project we found that commercially available software provides a valuable foundation but still requires significant in-house development.

Acknowledgements: This study was supported by the German Federal Ministry of Health within the “German Coalition for Patient Safety” [14].
References
[1] Lazarou J, Pomeranz B, Corey P. Incidence of adverse drug reactions in hospitalized patients, JAMA 279 (15) (1998), 1200-1205
[2] Wester K, Jönsson A, Spigset O, et al. Incidence of Fatal Adverse Drug Reactions, British Journal of Clinical Pharmacology (2008), 573-579
[3] Winterstein AG, Sauer BC, Hepler CD, et al. Preventable drug-related hospital admissions, The Annals of Pharmacotherapy 36 (2002), 1238-1248
[4] Web url: http://www.anoto.com/the-pen-2.aspx
[5] Naranjo CA, Busto U, Sellers EM, et al. A Method for Estimating the Probability of Adverse Drug Reactions, Clinical Pharmacology and Therapeutics 30 (1981), 239-245
[6] Web url: http://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm
[7] Guideline to Good Clinical Practice, European Medicines Agency, 2002
[8] Pommerening K, Reng M. Secondary Use of the EHR via Pseudonymisation, Medical and Care Compunetics 1 (2004), 441-446
[9] Web url: http://www.greasespot.net/
[10] Dugas M, Breil B, Thiemann V. Single Source Information Systems to Connect Patient Care and Clinical Research, Medical Informatics in a United and Healthy Europe, IOS Press (2009)
[11] Safran C. Using Routinely Collected Data for Clinical Research, Statistics in Medicine 10 (1991), 559-564
[12] Pavlovic I, Miklavcic D. Web-based Electronic Data Collection System to Support Electrochemotherapy Clinical Trial, IEEE Transactions on Information Technology in Biomedicine 11 (2007), 222-230
[13] Brandt CA, Deshpande AM, Lu C, et al. TrialDB: A Web-based Clinical Study Data Management System, AMIA Symposium Proceedings (2003), 794
[14] Web url: http://www.german-coalition-for-patient-safety.org/
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-330
Implementing Change in a Diverse and Politicized Landscape
Espen SKORVE
University of Oslo, Department of Informatics, Norway
Abstract. Based on the experiences from an ongoing IT implementation project, this paper illustrates the complexity of large-scale projects through the concept of diversity. The analysis shows how, no matter how mature the project becomes at coping with local diversity, it remains vulnerable to contextual diversity, especially when this diversity is politicized. The paper concludes by pointing to the special responsibilities this puts on higher-level decision makers.
Keywords. Implementation, diversity, politics.
1. Introduction

Since its introduction in the Norwegian health care system, the electronic patient record (EPR) has attracted the attention of a growing number of stakeholders. Health care managers, politicians, legislators, standardization agencies, researchers, software vendors and developers, different medical professional groups, different health institutions, patients and the general public are all amongst those that seem to hold a stake in the evolution of the EPR. Thus the number of different roles the EPR is required to fulfil is also growing. While its primary role is as a tool for the documentation of clinical operations, secondary roles include diverse expectations such as source of research data, source of revenue, tool for regulation, tool for planning and control, tool for increased efficiency, tool for integration, as well as container of personal health histories. Decreased cost and increased quality of the Norwegian healthcare services through ICT-based coordination and exchange of information has been high on the political agenda for several years already, and the EPR plays a role in this as well.

Fragmentation and integration are acknowledged as important challenges in the literature on clinical information system implementations [1][2], closely related to diversity among stakeholders [3][4]. But while most contributions focus on internal diversities (areas of use, professions and disciplines), this paper illustrates the impact of diversities initially extrinsic to the case on which it is based.

There are currently only two (or three) applications categorized as EPRs in use in Norwegian hospitals, and one of them seems to be gaining ground at the expense of the other(s). But the general concept of the patient record is changing, and the integration with a range of other applications is becoming increasingly tight. These other systems include patient administrative systems (PAS), lab systems, various specialist systems and the latest addition – electronic medical chart (EMC) systems for vital signs monitoring. The latter is currently assumed to be able to complete what the ‘old’ EPR failed to do in respect of integration and coordination across professional and
organizational boundaries within the health care system – initially limited to the support of cross-hospital patient trajectories. These systems also mark a break with the traditional approach to clinical information systems in Norway, based on a high degree of free text and unstructured data. The chart systems are envisioned to be based primarily on standardized and structured data elements, thus improving the support for exchange and secondary use of clinical information.

This is the background for the Electronic Medical Charts (EMC) project at the Norwegian National Hospital (Rikshospitalet). The EMC project was initiated, at least in embryonic form, as early as 2003. After producing a comprehensive requirement specification, a contract was signed in June 2006 with iMDsoft, the vendor of MetaVision, an off-the-shelf medical charts system intended primarily for intensive care and anaesthesia units. It was already clear that the system would not satisfy all requirements and that the vendor did not have previous experience with hospital-wide implementations, but the alternative offers were even less capable of meeting the requirements of the specification. Further developing the software in order to satisfy these was thus a central part of the contract, welcomed by the vendor as an opportunity to expand its market and operational functionality. After a pilot in the thorax surgical department that most considered a failure, a new project manager took over. She managed to conduct a successful implementation for the anaesthesia section, and then moved on to the children’s intensive care unit. Considerable learning experiences were harvested during this part of the project, but after the new project manager was phased out of the project (January 2010), at least partly as a result of political issues, there has been no further progression in the project at Rikshospitalet.
2. Empirical Material

The following case presentation is based on:
• Interviews with participants in the project group, including clinicians that are or were involved in or influenced by the project, in addition to other (more or less peripheral) stakeholders. Twelve semi-structured interviews were conducted, all recorded and transcribed.
• Participant observation. Thirteen sessions were conducted, mostly in project meetings and similar settings.
• Documents, including the project’s requirements specification, the internal evaluation report after the pilot and strategy documents from different levels.

Table 1. Distribution of interviews and observations
                 Thorax inpatient   Thorax ICU   Anesthesia   Others   Project group   External
Nurses                  2                5            1          1          (6)            –
Doctors                 –                2            –          –          (2)            1
Observations            –                –            –          3           9             3
The doctors at the thorax surgical department are, in contrast to the nurses, not assigned to a specific unit. The project manager, who is not a practising clinician, was interviewed three times (2007, 2010 and 2011), the second time together with two nurses. Two
of the doctors and six of the nurses were involved in the project group. The three externals are employed at the public health institute, the regional health authority and another university hospital, respectively. They were interviewed to get an impression of the project in a national, regional and historical context. Similarly, participation in seminars and conferences on EPR and healthcare IT has contributed to broadening the perspective.
3. The Electronic Medical Charts projects

The EMC project quickly became deeply entangled with diversities in many forms at several levels; some close and some more peripheral – some predictable, some surprising and some emergent. This posed considerable challenges, but also served as a resource in ensuring the sustainability of the project. The first encounter with this diversity might have been the variations in oral use of medical terminology, which clashed with the structured nature of a computerized database unable to support such linguistic diversity. This led to the standardization project, where the variety of logics in how to handle the problem became an issue. While access to different solutions provided a certain freedom of choice, differences in the logics at work became a growing problem when a second hospital joined the project. This also foregrounded the tensions between standardization and local adaptation, and thus the challenge of balancing the diverse goals of the project.

While the organizational diversity [4][6] provided a similar freedom of choice in venue for a pilot, the initial approach to the project proved to be based on several unfortunate choices. One challenge was the internal diversity of the thorax surgical department, consisting of three units with very different practices, as well as a huge variety of patients in different phases of their often lifelong trajectories as clients of the department. But the issue that finally proved fatal to the pilot was related to differences between the logic of software engineering and the logic of medical practices in the department. While the software required the doctors to traverse multiple menus in order to find the right data elements when prescribing medications, the doctors were used to a very different mode of conveying this information. On the paper charts, prescriptions were not elements in a database, but rather instructions from the doctors to the nurses, and each prescription took seconds to complete. In the new system, it took minutes, which aggregated into a time consumption that daily operations in the department did not allow room for. When this finally came to jeopardize patient safety, the strength of operational logics over project logics became manifest; thus the pilot was terminated.

While this might be considered an unfortunate decision for the project, one may contemplate what its fate would otherwise have been. If the pilot had continued and had caused a patient fatality, would the project have been allowed to continue at all? In this perspective the diversity of logics worked as a security mechanism not only for the patients, but also for the project. Now the project could explore the organizational diversity for other points of leverage [7]. This led them to the anaesthesia section. While anaesthesia and intensive care are the two core use areas for medical charts, intensive care had proved too challenging in the thorax surgical department. Thus anaesthesia became the choice of venue for what some called ‘a second pilot’, as this was assumed to represent a considerably less complex task. This assumption proved to be correct, and a successful implementation in this section completely replaced the paper charts in all operating rooms. This was accomplished not
only due to a change of venue, but also due to a change of strategy. From a steering logic in the initial pilot, where the goal was immediate change, the next phase was guided by a learning logic, where the immediate goal was to harvest experiences that could bring the project forward. I have seen no evidence that the difference in outcomes of the first and second pilot should be attributed to bad and good project management. From what I understand, the initial project manager did what was expected of him according to widespread best practices in project management. But the diversity in goals, potential strategies and possible starting points allowed the next project manager to take a different approach. Also, the second pilot had the advantage of drawing on learning experiences harvested from the first pilot. Thus the project grew capable of coping with, and at least partially exploiting, the diversity at practice and project level.

But the diversity among stakes and stakeholders external to the project proved harder to cope with, and thus eventually became what (at least temporarily) halted the project at Rikshospitalet. The most influential of these stakeholders are found at the political/legislative level, at the regional administrative level and in the partnering hospital. The two hospitals are located less than four kilometres apart and share many of their patients. Personnel also move between them. Hence it was desirable to develop a solution that could be implemented in both hospitals, both for personnel familiarity with the system and for exchange and secondary use of information. It was initially agreed that Rikshospitalet would lead the way, and that the other hospital would follow. But when the project manager at Rikshospitalet wanted to stop further roll-outs (summer 2010) until a stable version of the system was available, the other hospital grew impatient and wanted to pursue a solution of their own based on the current version. This escalated already ongoing political struggles in a way that had severe consequences for the project. Before I go into that I need to introduce the influences from the political/legislative and regional levels.

It has for several years been an explicit political aim to decrease the cost and increase the quality of the Norwegian healthcare system through ICT-based coordination and exchange of information. However, the current legislation does not allow electronic exchange of sensitive information between hospitals. When it was decided (2008) to merge the two hospitals, it was clear that an important reason for this was that it would give them the ability for such exchange. During 2009-10 the struggle for positions in the new organization exploded. This also included the two IT departments, where Rikshospitalet’s people were granted most of the leading roles. As part of this bargain, Rikshospitalet’s IT department agreed that the managers of the respective EMC projects (both of whom were external consultants) should be replaced by a new project manager from the partnering hospital’s IT department. Meanwhile the regional health authorities (the formal owner of the two hospitals and several others as well) had initiated an EMC project of their own. As they initially wanted to run this independently of the two hospitals, they engaged a project manager full time for this purpose. This was a pragmatic man who quickly realized the benefits of drawing on the already developed solution, and subsequently initiated a dialogue with Rikshospitalet.
However, when the new project manager took over at the two hospitals, it was decided that he should also lead the regional project, which by then it had been decided should be based on the solution from Rikshospitalet. From the reports that I have received so far, not much has happened at Rikshospitalet since he took over (January 2010). The main reason for this is that the new project manager did not consider the system mature enough for further roll-out. Thus the local implementation project at Rikshospitalet is currently awaiting the results from the regional project, which is defined as a
development rather than an implementation project. The decline is thus restricted to the local project, and hopes of a future revival are still alive. Finally it should be mentioned that the diversity among external stakeholders was initially also considered a resource for the local project. For one, the explicit political aim of coordination through ICT was important in allocating resources for the project and in allowing it to continue despite all problems and budget overruns. Secondly, considerable synergy effects were expected from running the two local projects as one. Finally, it was anticipated that when the regional project got going, it would provide important input for the standardization of the solution, not least in terms of terminology and database configuration. However, while the pros and cons of local diversity were balanced (however troublesome) in a sustainable way, the contextual diversity finally became too complex for the local project to cope with. We should note, though, that the roll-out at Rikshospitalet has not ended, but is merely suspended until further notice.
4. Conclusions

Large-scale implementation projects are inherently complex [8], largely due to the “built-in” diversity they have to cope with [4][9]. When they become politicized, which they often do [10], a special responsibility lies with higher-level decision makers to ensure the sustainability of such projects [11]. The EMC project at Rikshospitalet grew to a fairly mature level in coping with local diversity. Decisions were made, largely based upon continuous negotiations between diverse logics and practices. But the project had only limited influence on decisions made at the contextual level, while decisions made at this level proved to have a powerful impact on the project.
References
[1] Ellingsen, G. Coordinating work in hospitals through a global tool. Scandinavian Journal of Information Systems 15 (2003), 39-54.
[2] Ellingsen, G. and Monteiro, E. Seamless integration: standardisation across multiple settings. Computer Supported Cooperative Work: The Journal 15 (2006), 443-466.
[3] Fitzpatrick, G. Centres, Peripheries And Electronic Communication: Changing Work Practice Boundaries, Scandinavian Journal of Information Systems 12 (2000), 115-148.
[4] Winthereik, B.R. The Project Multiple: Enactments of systems development, Scandinavian Journal of Information Systems 22[2] (2010), 49-64.
[5] Hannan, M.T. Uncertainty, Diversity and Organizational Change, in N.J. Smelser and D.R. Gerstein (eds.), Behavioral and social science: fifty years of discovery, National Academies Press, 1986.
[6] March, J.G. Exploration and Exploitation in Organizational Learning, Organization Science 2[1] (1991), 71-87.
[7] Senge, P. The fifth discipline: The art & practice of the learning organization, Random House, 2006.
[8] Hanseth, O., Jacucci, E., Grisot, M. and Aanestad, M. Reflexive Standardization: Side Effects And Complexity In Standard Making, MIS Quarterly 30 (2006), 563-581.
[9] Hanseth, O., Ciborra, C. and Braa, K. The Control Devolution: ERP and the Side Effects of Globalization, The Data Base for Advances in Information Systems. Special issue: Critical Analysis of ERP Systems: The Macro Level, 32[4] (2001), 34-46.
[10] Mulugeta, N., Hailu, W. and Sahay, S. Making it Work: Navigating the Politics around ART Systems Implementation in Ethiopia, The Int. Federation of Information Processing 9.4 Conference (2007).
[11] Sahay, S., Monteiro, E. and Aanestad, M. Configurable politics: trying to integrate health information systems in developing countries, Journal of the Association for Information Systems 10[5] (2009), 399-414.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-335
Characteristics of German Hospitals Adopting Health IT Systems – Results from an Empirical Study
Jan-David LIEBE, Nicole EGBERT, Andreas FREY, Ursula HÜBNER
University of Applied Science Osnabrueck, Health Informatics Research Group
Abstract. Hospital characteristics that facilitate IT adoption have been described extensively in the literature, however with conflicting results. The aim of this study therefore is to draw a set of the most important variables from previous studies and include them in a combined analysis to test their contribution as single factors and their interactions. The total number of IT systems installed and the number of clinical IT systems in the hospital were used as criterion variables. Data from a national survey of German hospitals served as the basis. Based on a stepwise multiple regression analysis, four variables were identified that significantly explain the degree of IT adoption (60% explained variance): 1) hospital size, 2) IT department, 3) reference customer and 4) ownership (private vs. public). Our results replicate previous findings with regard to hospital size and ownership. In addition, our study emphasizes the importance of a reliable internal structure for IT projects (existence of an IT department) and a culture of testing and installing the most recent IT products (being a reference customer). None of the interactions between factors was significant.
Keywords. adoption, health information systems, clinical information systems, hospital
1. Introduction

Health IT adoption is a multidimensional process that is influenced by internal and external factors as well as technological and institutional issues. It can be analysed at the level of individuals and groups (micro level), of organisations (meso level) and of countries (macro level). In the past 20 years a great number of studies focussed on the meso level and addressed internal characteristics of hospitals such as hospital size (number of beds [1]), type (teaching vs. non-teaching hospitals [2]), ownership (for-profit vs. not-for-profit hospitals [3]), system affiliation (hospitals in a system vs. single hospitals [4]), location (urban vs. rural area [5]), IT budget [6], IT plan [7] and IT staff [8]. IT adoption often referred to clinical IT systems and, in recent years, to the adoption of electronic patient/medical/health record systems [9]. Whereas hospitals in a health system and teaching hospitals were uniformly found to have more IT systems, findings on all other factors were conflicting or came from one study only.
Jan-David Liebe, University of Applied Science, Health Informatics Research Group, PO Box 1940, 49009 Osnabrueck, Germany; E-Mail: [email protected]
Interactions between factors, such as hospital size and IT budget, cannot be excluded [8]. Other factors may have selective effects: while clinical IT systems were reported to have a greater prevalence in not-for-profit hospitals than in for-profit hospitals [2,6], managerial IT systems were found to be more often installed in for-profit hospitals [10]. Yet other factors are not independent from each other, such as hospital size and teaching status. Despite the number of previous studies, several uncertainties remain. The aim of this work therefore is to perform a combined analysis of the factors reported in the literature and to distinguish between the overall adoption of IT systems in hospitals and the specific adoption of clinical systems.
2. Method

A set of variables was drawn from the literature representing those factors that were most likely to have an effect on IT adoption at the meso level. These variables were sorted by the number of studies that reported a significant influence and were matched with the attributes collected in the data set of the 2010 IT Report Gesundheitswesen, a national survey of health information systems in German hospitals [11]. Based on the match, the following independent variables were selected for a stepwise multiple regression analysis (SPSS 18.0): hospital size (logarithm of number of beds), system affiliation, ownership, location (logarithm of population density), IT plan, IT department, IT decision making, reference customer, and IT budget in relation to the economic development of the hospital. Due to low data quality, absolute values of the IT budget were excluded. Nonlinear variables were linearized by taking the logarithm. Nominal and ordinal attributes were represented as dummy variables. In addition to the variables mentioned above we also included several interaction variables, e.g. location by ownership. IT adoption was measured by the number of subsystems in the health information system (overall adoption) and the number of clinical subsystems (specific adoption).

The regression analyses were performed on data from 126 acute German hospitals [11] resulting from a mail survey of all 2061 German acute hospitals (6.12% response rate). These hospitals represent all sizes, types and geographical regions of Germany. In χ2 tests the sample differed significantly from the population regarding size and region, but not regarding type [11]. Independent variables to be included in the stepwise regression were checked by means of histograms to determine whether they were sufficiently represented in the data set. We therefore had to discard teaching status because less than 10% of the hospitals were teaching hospitals. The regression models were tested for normal distribution and homoscedasticity of the residuals.
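For readers who want to reproduce this kind of model outside SPSS, the sketch below re-fits the final four-predictor equation in Python on synthetic data. It is illustrative only: the column names are assumptions, the data are random, and the stepwise selection procedure itself is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey extract; column names are assumptions.
rng = np.random.default_rng(0)
n = 126
df = pd.DataFrame({
    "beds": rng.integers(80, 1200, n),
    "has_it_department": rng.integers(0, 2, n),      # dummy variable: yes/no
    "is_reference_customer": rng.integers(0, 2, n),  # dummy variable: yes/no
    "is_private": rng.integers(0, 2, n),             # dummy variable: private vs. public
    "n_it_systems": rng.integers(5, 40, n),          # criterion: number of IT systems
})

df["log_beds"] = np.log(df["beds"])                  # nonlinear variable linearized
X = sm.add_constant(df[["log_beds", "has_it_department",
                        "is_reference_customer", "is_private"]].astype(float))
model = sm.OLS(df["n_it_systems"].astype(float), X).fit()
print(model.params)                                  # beta coefficients (cf. Table 1)
print(model.rsquared, model.rsquared_adj)            # cf. Table 2
```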
3. Results

The stepwise multiple regression analysis identified four variables that significantly explain the variation in the data (Table 1): 1) hospital size (logarithm of number of beds), 2) IT department (yes, no), 3) reference customer (yes, no) and 4) ownership (private vs. public). All other variables were excluded by the stepwise regression. Whereas
hospital size, IT department and reference customer were positively related to the number of subsystems, ownership had a negative β-coefficient. None of the interaction variables contributed significantly to the model. There was no difference whether total number of IT systems (total IT adoption) or number of clinical IT systems (clinical) was chosen as criterion (Table 1); the variables selected by regression remained the same.

Table 1. Beta-coefficients and significance level of variables included in the regression model (*** p < 0.01; ** p < 0.05)
                               total        clinical
hospital size                  0.450***     0.450***
IT department                  0.169***     0.172**
reference customer             0.184**      0.176**
ownership (private hospital)  -0.258***    -0.251***
Approximately 60% of the total variance (as reflected by R2 adjusted for number of predictors and sample size) could be explained by the two models (Table 2), i.e. whether total IT adoption or adoption of clinical IT systems should be predicted. Residuals of the model were normally distributed and homoscedastic.

Table 2. Coefficients of determination and ANOVA F statistic
criterion variable               R2       adjusted R2     F         Sig.
total number of IT systems       0.611    0.592           31.449    0.000
number of clinical IT systems    0.603    0.583           29.952    0.000
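The relation between R2 and adjusted R2 in Table 2 follows the usual adjustment for sample size and number of predictors. The snippet below applies the standard formula; small deviations from the published values can occur depending on how many predictors (including dummy-coded levels) actually entered the model, so the result is illustrative rather than a re-derivation.

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Standard adjustment of R2 for sample size n and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Illustration with R2 = 0.611, n = 126 hospitals and the four selected predictors.
print(round(adjusted_r2(0.611, 126, 4), 3))
```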
4. Discussion

The results show that, irrespective of whether overall or clinical IT adoption was chosen as criterion, the number of IT systems could be explained by the size of the hospital, the existence of an IT department, being a reference customer and the public ownership of the hospital. Among these factors hospital size had the largest impact, followed by ownership. Our findings support the literature, which by and large underpins the role of hospital size as an important factor [2]. However, there are also studies that came to different conclusions [4]. Our results match previous studies with regard to public hospitals having more clinical IT than private hospitals [2,6]. In our study this was also true for the overall number of IT systems.

The most striking difference from other findings concerned the role of hospitals in a health system. Whereas previous studies – most of which are from the United States [4] – clearly demonstrate that being part of a system facilitates IT adoption, system affiliation did not significantly explain IT prevalence in our data. We do not think that organisational networks have no influence on IT decision making. On the contrary, we discussed network effects to explain differences in IT adoption between Austrian and German hospitals [12]. It rather seems to be a matter of how mature these systems or networks are. In Germany hospitals have only recently started establishing clusters. The IT infrastructure and equipment in the different organisations often still remain to be harmonized and upgraded.

In addition to what has been discussed in earlier works we propose two other factors of importance: an IT department in the hospital, i.e. a sufficient internal organisational background for IT projects, and being a reference customer, i.e. maintaining a special relationship with the main IT vendor, interacting with each other
in a trusting way, which is the basis for testing and installing the most recent and innovative products. Due to an insufficient number of teaching hospitals in our sample we could not gauge their influence on IT adoption. Another factor that could not be included was IT budget, because of low data quality. However, the variable “IT budget in relation to the economic development of the hospital”, which we included instead, had no significant influence.
5. Conclusion

The proposed regression model has to be tested for robustness with other data sets. These data sets must ensure that the variable “status of the hospital (teaching vs. non-teaching hospitals)” can be included and tested for significance. In addition to the two criterion variables “total number of IT systems” and “number of clinical IT systems”, other variables have to be analysed, in particular the implementation status and the number of functions of the electronic patient/health record system of the hospital. These variables would give insight not only into the breadth but also into the depth of IT adoption.

Acknowledgments: This work was supported by a research grant from the Ministry of Science and Culture, Hannover, Germany.
References
[1] Hikmet N, Bhattacherjee A, Menachemi N, Kayhan VO, Brooks RG. The role of organizational factors in the adoption of healthcare information technology in Florida hospitals. Health Care Manage Sci 2008:11.
[2] Fonkych K, Taylor R. The state and pattern of health information technology adoption, RAND Corporation, Santa Monica, 2005.
[3] Parente ST and Van Horn RL. Hospitals Investment in Information Technology: Does Governance make a Difference? Health Care Finance Review 2006;28(2):31-43.
[4] McCullough JS. The Adoption of Hospital Information Systems. Health Economics 2007;17:649-664.
[5] Burke DE, Wang BB, Wan TT, Diana ML. Exploring hospitals’ adoption of information technology. J Med Syst. 2002;26(4):349-55.
[6] Cutler DM, Feldman NE, Horwitz JR. U.S. Adoption of Computerized Physician Order Entry Systems. Health Affairs. 2005; 24: 1654-1663.
[7] Ash JS, Gorman PN, Seshadri V, Hersh WR. Computerized physician order entry in U.S. hospitals: results of a 2002 survey. J Am Med Inform Assoc. 2004;11:95–9.
[8] Amarasingham R, Diener-West M, Plantinga L, Cunningham AC, Gaskin DJ, Powe NR. Hospital characteristics associated with highly automated and usable clinical information system in Texas, United States. BMC Medical Informatics and Decision Making 2008:(8)39.
[9] Jha AK, DesRoches CM, Campbell EG, Donelan K, Rao SR, Ferris TG, Shields A, Rosenbaum S, Blumenthal D. Use of Electronic Health Records in U.S. Hospitals. N Engl J Med. 2009; 360: 1628-1638.
[10] Wang BB, Wan TT, Burke DE, Bazzoli GJ, Lin BY. Factors influencing health information system adoption in American hospitals. Health Care Manage Rev. 2005; 30:44-51.
[11] Hübner U, Sellemann B, Egbert N, Liebe JD, Flemming D, Frey A. IT-Report Gesundheitswesen – Schwerpunkt Vernetzte Versorgung, Niedersächsisches Ministerium für Wirtschaft, Arbeit und Verkehr, Hannover, 2010. Available from: www.it-report-healthcare.info. Accessed 25 Feb, 2011.
[12] Hübner U, Ammenwerth E, Flemming D, Schaubmayr C, Sellemann B. IT adoption of clinical information systems in Austrian and German hospitals: results of a comparative survey with a focus on nursing. BMC Medical Informatics and Decision Making 2010, 10:8.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-339
Nursing Information System: a Relevant Substitute of the Paper Nursing Record
Margreet B. MICHEL-VERKERKE
University of Twente, The Netherlands and Saxion University of Applied Sciences, The Netherlands
Abstract. Objective: A teaching hospital in the Netherlands has developed a Nursing Information System (NIS). After the NIS was implemented in six wards, it was evaluated in March 2009. Since micro-relevance is a key factor in the adoption of information systems, the objective of this study is to reveal which aspects of the NIS are micro-relevant to nurses. Methods: A paper questionnaire was distributed among all 195 nurses who used the system. Ninety-three (47.7%) respondents were included in the research. In addition, six NIS users were interviewed using the USE IT model. Results: Nurses mainly used those functions of the NIS that were essential for reporting or retrieving patient information. The NIS was appreciated for supplying unhampered access to complete, legible, structured patient data. Conclusions: For nurses the NIS is a good substitute for the paper record. The micro-relevance of functionality other than supplying information seems to be low.
Keywords: Evaluation, Nursing Information System, Electronic Patient Record, Socio-technical approach, Usefulness, Relevance
1. Introduction

The introduction and implementation of innovations, like electronic patient records or healthcare information systems, does not automatically lead to adoption of the innovation. In the Technology Acceptance Model (TAM3), perceived usefulness is the main determinant of the intention to use an innovation [1]. Job relevance and output quality both contribute to the perception of usefulness in TAM3 [1]. In the IS Success Model, relevance and usefulness are aspects of information quality [2]. The USE IT model theorizes that user characteristics determine adoption. These user characteristics are described by four determinants: requirements, relevance, resources and resistance, of which relevance seems the dominant factor [3]. Relevance can be regarded at two levels: macro-relevance and micro-relevance. Relevance at the macro level is defined as the “degree to which the user expects that the IT-system will solve his problems or helps to realize his actually relevant goals”. Macro-relevance, defined this way, matches the concept of usefulness. Micro-relevance focuses on the present situation of the individual user: “Micro-relevance is defined as the degree to which IT-use helps to solve the here-and-now problem of the user in his working process.” [3]. If relevance, and specifically micro-relevance, is a key aspect for an individual in the decision to
Saxion, M.H. Tromplaan 28, 7513 AB Enschede, The Netherlands; E-mail: [email protected]
adopt the new system, the question arises: what is micro-relevant to the intended users, and does the NIS meet the nurses’ expectations in these respects?
2. Methods

Before the implementation of the NIS in a Dutch teaching hospital, the nursing process and record were standardized. The development and implementation of the NIS as a module of the Hospital Information System (HIS) started in 2005. The objectives of the hospital were to reduce the time nurses spend on administrative tasks, to improve the quality of care, and to enhance continuity of care. In November 2008, the complete NIS was implemented in six wards, replacing the paper nursing records. The evaluation was performed in March 2009.

A multi-method socio-technical approach based on [4] was used, containing both quantitative and qualitative methods. A paper questionnaire with questions partially derived from the research of Garrity and Sanders [5] and TAM3 [1], and six complementary interviews with end-users based on the USE IT model, were used to evaluate the implementation of the NIS [3]. The questionnaire had previously been constructed to measure the adoption of an Electronic Patient Record (EPR) by medical specialists and in a nursing home [6, 7]. The participants in the evaluation research were all 195 nurses using the NIS. Ninety-three (47.7%) questionnaires were included in the research. The quantitative data were entered into SPSS 18.0 for statistical analysis. The scores were treated as nonparametric data, since the scales are ordinal. The transcriptions of the interviews were analyzed by splitting the answers into elementary statements and grouping and labeling the answers, which were then counted.
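The significance testing applied to the questionnaire items (reported in the footnotes of the tables below as a χ2-test against "filling out at random", i.e. equal expected frequencies per answer category) can be sketched as follows; the counts are invented, not taken from the survey data.

```python
from scipy.stats import chisquare

# Invented answer counts for one statement (fully agree ... fully disagree);
# chisquare defaults to equal expected frequencies, i.e. "filling out at random".
observed = [30, 34, 11, 13, 4]
stat, p = chisquare(observed)
print(f"chi2 = {stat:.1f}, p = {p:.4f}")
```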
3. Results

The interviews show that nurses start their shift by reading the patient’s record (in the NIS). They need to know the history of the patient, the treatment, the present situation, the care provided, and the nursing plan, and consider all this information necessary to deliver good care. Nurses provide each other with additional oral information. Patient information from other disciplines, which do not use the NIS themselves, is sometimes entered in the NIS, but often given orally or on paper (orders). Nurses prefer to have all information in the NIS and to abandon all paper forms, because the NIS is orderly, readable and contains more accessible information than the paper record. Additional paper forms cause extra work. A prerequisite is that every user enters data immediately, instead of creating paper buffers. The NIS is used throughout each shift by all nurses. All interviewed nurses consider the use of computers in their job a positive development; some also consider it inevitable. Disturbances in their daily routine are caused by acute situations with patients and by the care for patients with complex health problems. Time is always short. Although they think using the NIS costs more time than using a paper record, nurses prefer the NIS. For all interviewed nurses, providing good care in order to make the patient feel well is most important.

All 93 respondents to the questionnaire were working as a nurse. The average age is 36.5 years (range: 19-60 years); the most common job size is 32 hours/week. The use of the NIS is mandatory for many functions, but not all.
Table 1. Frequency of use of functions in the NIS.
How often do you use the part of the nursing information system mentioned below?   Daily or weekly    n
> Copy data from previous record to anamnesis (patient case history)               76%                92
> Copy data from anamnesis (patient case history) to care plan                     85%                93
> Nursing plan                                                                      79%                91
>> Use Nursing plan example                                                         10%                69
>> Make Multi-disciplinary problem in care plan                                     29%                87
>> Report on Category                                                               63%                90
>> Report on Health pattern                                                         44%                89
>> Report on Multi-disciplinary problem or nursing plan                             51%                90
The highest level of the NIS consists of tabs. The use of tabs is mandatory and not shown. Each submenu is marked by >. A sub-submenu is marked by >>. All percentages differ significantly from filling out at random (χ2-test at all columns equal, p < .05).
Analysis of the results in Table 1 shows that the NIS is used intensively for making care or nursing plans, but supporting functions, such as the nursing plan example and the option to start a multi-disciplinary problem, are hardly used. The NIS supports the provision of care, especially by presenting accurate patient information, which enables the nurse to gain a quick overview of the patient’s need for care (Table 2). The responses to the statements on the NIS and the collaboration with other disciplines are inconclusive. A possible explanation is that not all disciplines use the NIS.

Table 2. Support of provision of care.
Support of provision of care                       fully agree   partially agree   neutral    partially disagree   fully disagree   n
Quick first impression                             32%           37% Mdn           12%        14%                  4%               91
Quick complete view                                10%           40% Mdn           18%        25%                  7%               89
Quick insight in necessary care                    24%           41% Mdn           15%        15%                  4%               91
Quick overview of provided care                    20%           39% Mdn           16%        21%                  4%               90
Quick insight in goals                             7%            31%               30% Mdn    26%                  7%               90
Care is provided according to NIS                  14%           47% Mdn           26%        11%                  2%               90
Information for after-care is in the NIS           10%           23%               39% Mdn    21%                  7%               89
Everyone enters data in the same way               8%            18%               27% Mdn    30%                  18%              90
Everyone knows meaning of abbreviations            2%            14%               38% Mdn    32%                  13%              90
Reports are kept regularly and up-to-date          34%           43% Mdn           13%        7%                   2%               90
Oral and written reports are not contradictory     25%           48% Mdn           19%        7%                   1%               89
Patient data are not entered in wrong record       7%            24%               19% Mdn    30%                  20%              90
I enter orders immediately in the NIS              30%           32% Mdn           27%        4%                   7%               90
I enter change of care immediately in the NIS      29%           34% Mdn           27%        7%                   3%               90
Support collaboration of disciplines is better     13%           30%               27% Mdn    17%                  13%              90
All percentages differ significantly from filling out at random (χ2-test at all columns equal, p < .05). Mdn = Category contains the median value.
A remarkable result is the high proportion of users who disagree that patient data are not entered in the wrong record. This could be related to the small percentage of nurses who agree that every colleague enters data in the same way, and that every colleague knows the meaning of abbreviations and symbols. This suggests that the NIS is not used consistently. Although the NIS does not seem to make nursing easier or faster, the nurses do have a positive opinion of the NIS and prefer the NIS to the paper record (Table 3). According to the respondents, the use of the NIS raises the quality of recording. Törnvall et al. also found that using an electronic standardized
wound record improved the quality of documentation [8]. A hindrance to usability is the frequent occurrence of computer errors. Due to technical problems with the computers-on-wheels (COWs), entering data at the bedside together with the patient is often hindered, which reduces the involvement of patients in their care process. The NIS cannot be accessed by the patient, but the patient can request a printed summary. The nurses consider this a drawback of the NIS and do not consider the NIS to be patient-friendly.

Table 3. Usefulness of the NIS.
To what extent do you agree?                    fully agree   partially agree   neutral    partially disagree   fully disagree   n
Perform tasks better.                           7%            27%               34% Mdn    19%                  14%              86
Care process passes more smoothly.              8%            21%               39% Mdn    14%                  19%              87
Spend more time on direct care.                 2%            16%               28%        29% Mdn              24%              86
Perform tasks easier.                           6%            17%               36% Mdn    25%                  16%              87
Useful and usable in my job.                    22%           47% Mdn           16%        9%                   6%               87
Do not want without anymore.                    23%           33% Mdn           18%        8%                   17%              87
Precisely provides the information I need.      11%           33%               34% Mdn    14%                  9%               86
Precisely offers the functionality I need.      8%            33%               36% Mdn    17%                  6%               86
No superfluous functionality.                   29%           29% Mdn           26%        9%                   7%               87
Contains all information I need.                13%           36%               26% Mdn    16%                  9%               87
Contains all functionalities I need.            12%           39% Mdn           25%        15%                  9%               87
No superfluous information.                     22%           38% Mdn           32%        6%                   2%               87
Can enter all information                       18%           44% Mdn           17%        15%                  6%               87
Access to all information anytime               13%           41% Mdn           23%        20%                  3%               87
Can use all functionality anytime               12%           38%               25% Mdn    21%                  5%               87
Access to all information anywhere              15%           35%               23% Mdn    23%                  5%               87
Can use all functionality anywhere              14%           30%               26% Mdn    23%                  7%               87
Quality of recording raises                     22%           39% Mdn           23%        8%                   8%               87
Advantages compensate the disadvantages         25%           29% Mdn           25%        13%                  8%               87
Many advantages above paper record.             26%           33% Mdn           25%        8%                   7%               87
All percentages differ significantly from filling out at random (χ2-test at all columns equal, p < .05). Mdn = Category which contains the median.
The questionnaire ended with three open questions. Multiple answers were given by the respondents. Top of the list of advantages were readability (45%), orderliness (27%), ease and speed of use (24%) and copying previously recorded information (22%). Disadvantages were dominated by technical problems (50%) and usability issues, such as the complexity of some specific functions (23%), time consumption (21%), many mouse-clicks (20%), lack of overview (18%), mistakes that cannot be corrected by the nurse (14%), and not everybody using the NIS accurately (11%). In addition, 15% report the lack of direct access for the patient as a disadvantage. The desired changes were all suggestions to resolve previously mentioned issues.
4. Discussion and conclusion

Nurses intensively used those functions of the NIS that were essential for reporting or retrieving patient information. Elements of the NIS meant to structure patient data, to support making nursing plans, and to improve the quality of recording were used less frequently. According to [9], predefined nursing plans stimulate nurses to make nursing plans. This study shows that only some of the nurses use these. It seems that these functions are not micro-relevant enough to overcome the extra effort of additional
mouse-clicks. The NIS is mainly appreciated for supplying unhampered access to complete, legible, structured patient data, anywhere, anytime. This is in line with the outcome of a previous study on physicians [7]. Although training is necessary, overall the NIS was considered easy to use. The nurses thought the NIS did not save time, except for copying data. Some functions were regarded as complex and error-prone. It would be interesting to investigate whether the doubts about accurate use by colleagues are based on facts. Technical problems in using bedside computers made the NIS less patient-friendly than the paper record, but did not keep the nurses from expressing their preference for the NIS. This confirms that usability problems do not obstruct perceived usefulness and use [1].

Combining qualitative with quantitative methods added value to the research, since the results of the questionnaire could be explained and interpreted better by comparing them with the answers to the open questions and with the results of the interviews. The socio-technical approach reveals that not only system quality or system features determine or explain success or failure; the way colleagues use the system is equally important. The NIS is micro-relevant because it solved the information problem, and can be regarded as an improved version of the paper record, but it does not solve the time problem. Information quality is probably more micro-relevant than time. It would be interesting to investigate whether micro-relevance is a relative or an absolute phenomenon. If the use of functions that now seem irrelevant rises once the technical problems are solved, micro-relevance is likely to be relative.
References [1] Venkatesh, V. and H. Bala, Technology Acceptance Model 3 and a Research Agenda on Interventions. Decision Sciences, 2008. 39(2): p. 273-315. [2] DeLone, W.H. and E.R. McLean, Information Systems Success: The Quest for the Dependent Variable. Information Systems Research, 1992. 3(1): p. 60-95. [3] Spil, T.A.M., R.W. Schuring, and M.B. Michel-Verkerke, Chapter IX: USE IT: The Theoretical Framework Tested on an Electronic Prescription System for General Practitioners, in E-health Systems Diffusion and Use: The Innovation, the User and the USE IT Model, T.A.M. Spil and R.W. Schuring, Editors. 2006, Idea Group Publishing: Hershey, USA. p. 147-177. [4] Babbie, E., The Practice of Social Research. Seventh Edition ed. 1995, Belmont: Wadsworth Publishing Company. [5] Garrity, E.J. and G.L. Sanders, Dimensions of information success, in Information Systems Success Measurement, E.J. Garrity and G.L. Sanders, Editors. 1998, Idea Group Publishing: Hershey, USA. p. p.13-45 [6] Michel-Verkerke, M.B., What makes doctors use the Electronic Patient Record? Master Thesis. 2003, Enschede: University of Twente. [7] Michel-Verkerke, M.B., An Electronic Patient Record in a Nursing Home: One Size Fits All?, in Nursing 2010: Rotterdam. [8] Törnvall, E., L.K. Wahren, and S. Wilhemsson, Advancing nursing documentation - An intervention study using patients with leg ulcer as an example. International Journal of Medical Informatics, 2009. 78: p. 605-617. [9] Ammenwerth, E., et al., A Randomized Evaluation of a Computer-Based Nursing Documentation System. Methods of Information in Medicine, 2001. 40(2): p. 61-8.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-344
GP Connector – a Tool to Enable Access for General Practitioners to a Standards-Based Personal and Electronic Health Record in the Rhine-Neckar Region
Oliver HEINZE, Holger SCHMUHL, Björn BERGH
Center for Information Technology and Medical Engineering of the University Hospital Heidelberg, Germany
Abstract. Electronic health records (EHR) as well as personal health records (PHR) are in widespread use today. For several years the University Hospital Heidelberg has been implementing a so-called personal and electronic health record (PEHR). The joint approach is standards-based and includes several of the required services. However, one issue has remained unresolved: how to connect general practitioners (GPs) and their systems to the record. This work describes a tool called GP Connector that provides access for GPs to the PEHR within the legal framework. GPs can profit from all advantages of PEHR usage. Only the convenient addition of documents to the record through standards-based interfaces remains open. Thus, deep integration of the PEHR into primary systems is always preferable. Yet the continuing trend towards multi-institutional health networks may also pave the way for standards-based interfaces in the field of practice management systems.
Keywords. PHR, EHR, PEHR, eConsent, Standards-based, GP
1. Introduction

In the last fifteen years electronic health records (EHR) have dominated eHealth projects around the world [1]. In recent years the usage of personal health records (PHR) has risen, empowering patients by enlarging their role in actively managing their health [2, 3, 4, 5, 6, 7, 8]. For several years the University Hospital Heidelberg (UHH) has been implementing a personal and electronic health record (PEHR) to improve communication with partner hospitals in the region and to give patients a tool to manage their health. The concept foresees an integration of an EHR with a PHR according to the definitions of the Healthcare Information and Management Systems Society (HIMSS, see [9, 10]), using the advantages of both record types. The ownership of the whole longitudinal, lifelong record is given to the patient. The information exchange among their healthcare professionals is achieved by using the PEHR as the central source of all relevant data. Due to data privacy legislation in Germany, the patient has to give his consent in order to
Corresponding Author: Dipl. Inform. Med. Oliver Heinze, University Hospital Heidelberg, Center for Information Technology and Medical Engineering, Speyerer Str. 4, 69115 Heidelberg, Germany; E-mail: [email protected]
allow connected primary systems to send data to or receive data from the record. From a technical point of view the PEHR is based on profiles of Integrating the Healthcare Enterprise (IHE), such as PIX/PDQ for patient identification and XDS.b for document sharing via a central repository, as well as international standards like HL7 and DICOM. The PEHR provides an integrated web-based view to enable access for patients and professionals [11]. Due to the lack of consent management functionality in the deployed record system, a centralized consent management was developed for the PEHR [12]. Healthcare providers and their organizations can be identified by a provider and organization registry service (PORS) [13]. Hospital information systems (HIS) are directly connected to the PEHR. Single sign-on mechanisms provide the capability to seamlessly integrate the web-based access to the PEHR within the context of the HIS [11, 14].

In contrast to the successful interconnectivity between HIS and PEHR, the situation is quite different when it comes to the practice management systems (PMS) of general practitioners (GPs) in Germany. A deep integration into the PEHR as described above is not possible, because PMS neither support international standards for sharing structured or unstructured information, nor can their proprietary interfaces be used for that purpose to the full extent. The German market for PMS is huge and diverse. PMS companies are often small and shy away from the effort of developing appropriate interfaces. Direct usage of the PEHR web interface is not possible either, because on the one hand the PEHR vendor does not deeply integrate the developed consent management, and on the other hand the PMS currently do not provide any means for consent management. Therefore the objective of this paper is to describe a solution that gives GPs access to the PEHR in the Rhine-Neckar region without a deep integration of their systems and without violating German privacy laws and regulations, using the regional consent management service.
2. Method
In expert workshops with computer scientists and physicians, the requirements for GP access to the PEHR and the underlying workflows were analyzed, taking the systems landscape of the PEHR into account in order to identify technical possibilities as well as limitations of an integration approach. Based on the outcome, the system architecture of the GP Connector (GPC) was designed under the premise of reusing as many software components as possible and of using open source software wherever possible. PostgreSQL was chosen as the database engine, Apache Tomcat as the servlet container, and the Open eHealth Integration Platform (IPF) from the Open eHealth Foundation as the middleware handling the messaging. The application logic was written using Java Spring and Spring Security. The interface of the GPC is written in HTML5 and CSS3.
3. Result
The GPC is a software component that fits into a service-oriented architecture, enabling GPs in the Rhine-Neckar region to access the PEHR without a deep integration of their PMS and without violating data privacy laws.
3.1. Workflow and Functionalities
The GPC provides user management for GPs who want to participate in the regional network. If they do not have a user account they can request one. The administrators of the network then perform the initial authentication and provide them with the corresponding login credentials as well as a client certificate. In addition, the data of the GP is matched against the PORS to uniquely identify the GP within the network. After logging in, a GP can search for a patient. Access to the PEHR of the searched patient is granted if the patient's consent contains a policy that allows the GP to see existing data or to add new data. Otherwise the search returns no results. For privacy reasons it remains indistinguishable whether the GP lacks access rights or the patient has no record (see Fig. 1).
Figure 1. Workflow using GP Connector
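To make the consent-gated behaviour concrete, the following minimal Java sketch illustrates the idea; it is not the actual GPC code. The MasterPatientIndex and ConsentManager interfaces stand in for the PDQ and consent manager queries described in section 3.2, and all type and method names are hypothetical.

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical result of an ITI-21 patient demographics query.
record PatientMatch(String mpiId, String name, LocalDate birthDate) {}

// Hypothetical facades for the PEHR master patient index and the consent manager.
interface MasterPatientIndex {
    List<PatientMatch> findPatients(String givenName, String lastName, LocalDate birthDate);
}
interface ConsentManager {
    boolean policyGrantsAccess(String porsId, String mpiId);
}

/** Consent-gated patient search as described in section 3.1. */
class GpcPatientSearch {
    private final MasterPatientIndex mpi;
    private final ConsentManager consentManager;

    GpcPatientSearch(MasterPatientIndex mpi, ConsentManager consentManager) {
        this.mpi = mpi;
        this.consentManager = consentManager;
    }

    /**
     * Returns only those matches for which the patient's consent contains a
     * policy for this GP. An empty list is returned both when no such policy
     * exists and when the patient has no record, so the two cases cannot be
     * distinguished by the searching GP.
     */
    List<PatientMatch> search(String porsId, String givenName, String lastName, LocalDate birthDate) {
        List<PatientMatch> authorized = new ArrayList<>();
        for (PatientMatch match : mpi.findPatients(givenName, lastName, birthDate)) {
            if (consentManager.policyGrantsAccess(porsId, match.mpiId())) {
                authorized.add(match);
            }
        }
        return authorized;
    }
}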
3.2. Architecture
The GPC is built as a three-tier architecture (see Fig. 2). The presentation tier provides the user interface for GPs to log in and to search for their patients. The logic tier is the most important tier: it is responsible for the whole program interaction, coordinates the different service queries, acts according to their results, and is the connecting link between interface, backend and the external services. The third tier is the data tier, responsible for storing the user accounts and the GPC configuration. If a GP is logged in and searches for a patient by given name, last name and date of birth, the GPC passes the request to the master patient index of the PEHR using the IHE patient demographics query transaction (ITI-21). The results also contain the master patient index identifiers (MPI_ID). Together with the unique identifier of the GP from the provider and organization registry service (PORS_ID), the GPC queries the consent manager using an HL7 conformance-based query (QBP). The consent manager uses PORS_ID and MPI_ID to verify whether a policy exists that grants access to this GP. Every authorized match is displayed in a list on the interface. The GP can then access the PEHR of these patients by simply clicking a button within the GPC. This opens a new browser window containing the interface of the PEHR; no further login is required. Technically this is done by an SSL-secured HTTP POST request using the single sign-on interface of the PEHR. The functionality is encapsulated in the Java-based PEHR launcher that is also used by the context-based single sign-on from the
connected hospital information systems. Inside the PEHR, the GPC is registered as an authorized system in the same way as a HIS. It transfers its parameters, including a password, and receives a one-way token. This token, together with the GP's PEHR login credentials, is used to authenticate the GP at the record. In addition, the MPI_ID of the patient is transferred, allowing the patient's record to be opened directly. For security and privacy reasons, GPs are not allowed to search inside the PEHR directly; they always have to use the GPC search so that the consent management cannot be bypassed.
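The token-based single sign-on described above can be pictured as two HTTP interactions. The Java fragment below is only an illustrative sketch under stated assumptions, not the actual PEHR launcher: the endpoint URIs, form field names and the use of java.net.http are assumptions, and in the deployed system the final POST is issued from the newly opened browser window rather than from server-side code.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.stream.Collectors;

/** Sketch of the launcher pattern used for the single sign-on described above. */
class PehrLauncherSketch {
    private final HttpClient client = HttpClient.newHttpClient(); // uses TLS when the URI scheme is https
    private final URI tokenEndpoint;   // illustrative: endpoint issuing one-way tokens
    private final URI viewerEndpoint;  // illustrative: web entry point of the PEHR viewer

    PehrLauncherSketch(URI tokenEndpoint, URI viewerEndpoint) {
        this.tokenEndpoint = tokenEndpoint;
        this.viewerEndpoint = viewerEndpoint;
    }

    /** Step 1: the GPC authenticates itself as a registered system and receives a one-way token. */
    String requestToken(String systemId, String systemPassword) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(tokenEndpoint)
                .POST(HttpRequest.BodyPublishers.ofString(
                        form(Map.of("system", systemId, "password", systemPassword))))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body().trim();
    }

    /** Step 2: the token, the GP's PEHR credentials and the patient's MPI_ID open the record directly. */
    int openRecord(String token, String gpUser, String gpPassword, String mpiId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(viewerEndpoint)
                .POST(HttpRequest.BodyPublishers.ofString(form(Map.of(
                        "token", token, "user", gpUser, "password", gpPassword, "patient", mpiId))))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode();
    }

    // URL-encodes a map of form fields into an application/x-www-form-urlencoded body.
    private static String form(Map<String, String> fields) {
        return fields.entrySet().stream()
                .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8) + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }
}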
Figure 2. Architecture of GP Connector
4. Discussion
The GPC is a tool for all physicians in the Rhine-Neckar region who do not work in a hospital. It provides easy access to the PEHR of their patients without the deep integration into their primary systems that the hospitals in the region have. As a solution, the GPC meets the current privacy laws and regulations in Germany, and involving the consent manager proved feasible. For the moment it is the only viable way to adequately give GPs access to the PEHR, so that they and their patients can benefit from all PEHR advantages. Once logged in to the web-based GPC interface, accessing content from the PEHR is no different from a deep integration. The main disadvantage becomes apparent when new information has to be added to the PEHR: for GPC users this is still a manual process, due to the missing standards-based interfaces and the lack of integration into their primary systems. Therefore a deep integration remains preferable and worthwhile. Yet the continuing trend towards multi-institutional health networks may also pave the way for standards-based interfaces in the field of PMS. Future developments will be carefully observed. As soon as a standards-compliant
interface becomes available for a specific PMS, it could be integrated in the same way as is already done for HIS. The chosen technologies have proven adequate. Open source technologies provided cost-efficient, reliable programming libraries and tools, making it easy to obtain quick results. The service-oriented and web-based environment of the PEHR made it possible to use well-proven Java-based tools like Tomcat and IPF, which made the messaging aspects quite easy to handle. Spring was chosen for its built-in login and security functionality, which has turned out to be a good decision. At the moment the GPC is in an alpha release phase and will be open sourced with its first release candidate.
References
[1] Iakovidis, I., Towards personal health record: current situation, obstacles and trends in implementation of electronic healthcare record in Europe. Int J Med Inform, 1998. 52(1-3): p. 105-15.
[2] Hassol, A., et al., Patient experiences and attitudes about access to a patient electronic health care record and linked web messaging. J Am Med Inform Assoc, 2004. 11(6): p. 505-13.
[3] The future of healthcare - it's health, then care, W. Koff and P. Gustafson, Editors. 2010, Leading Edge Forum CSC: Falls Church, Virginia, USA.
[4] Norgall, T., B. Blobel, and P. Pharow, Personal health - the future care paradigm. Stud Health Technol Inform, 2006. 121: p. 299-306.
[5] Blobel, B., Introduction into advanced eHealth - the Personal Health challenge. Stud Health Technol Inform, 2008. 134: p. 3-14.
[6] Kaelber, D.C., et al., A research agenda for personal health records (PHRs). J Am Med Inform Assoc, 2008. 15(6): p. 729-36.
[7] Ahmadi, M., et al., A Review of the Personal Health Records in Selected Countries and Iran. J Med Syst, 2010.
[8] Li, Y., et al., Electronic Health Record Goes Personal World-wide. Yearb Med Inform, 2009: p. 40-3.
[9] HIMSS. PHR Definition. 2008 [cited April 2011]; Available from: http://www.himss.org/ASP/topics_FocusDynamic.asp?faid=228
[10] HIMSS. EHR Definition. [cited April 2011]; Available from: http://www.himss.org/ASP/topics_ehr.asp.
[11] Heinze, O., A. Brandner, and B. Bergh, Establishing a personal electronic health record in the Rhine-Neckar region. Stud Health Technol Inform, 2009. 150: p. 119.
[12] Birkle, M., O. Heinze, and B. Bergh, Entwurf eines elektronischen Einwilligungsmanagements für ein intersektorales Informationssystem, in eHealth 2010. 2010: Wien. p. 113-119.
[13] Heinze, O., A. Ihls, and B. Bergh. Development of an Open Source Provider and Organization Registry Service for Regional Health Networks. in Third International Conference on Health Informatics (HealthInf 2010). 2010. Valencia, Spain.
[14] Heinze, O. and B. Bergh. Experiences integrating RIS/PACS into personal electronic health records. in 13th World Congress on Medical and Health Informatics. 2010. Capetown, South Africa.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-349
Proposal of an End-to-End Emergency Medical System
Samir EL-MASRI a,1, Basema SADDIK b
a College of Computer and Information Systems, King Saud University, Saudi Arabia
b College of Public Health and Health Informatics, King Saud Bin AbdulAziz University for Health Sciences
Abstract. A new comprehensive emergency system is proposed to facilitate and computerize all the processes involved in an emergency, from the initial contact with the ambulance emergency system, to finding the right and nearest available ambulance, through to accessing a Smart Online Electronic Health Record (SOEHR). The proposed system will critically assist in pre-hospital treatment, identify the nearest available specialized hospital, and communicate with the Hospital Emergency Department System (HEDS) to provide early information about the incoming patient so that the department can prepare to receive and treat the patient.
Keywords. Emergency, EHR, Ambulance, Mobile Web Services, SOA, GPS
1. Introduction
In this paper we propose a new comprehensive emergency system. The objective of this system is to respond to the need for an efficient and error-free emergency system that, in cases of car accidents or other emergencies, can quickly and accurately locate the right ambulance and send it to the scene. The proposed system will operate with no, or at least minimal, human intervention in order to reduce human errors and to accelerate the life-saving process. Current ambulance systems rely on calls from people who give information about the accident and its approximate location. Most human operators use a traditional or computer-aided dispatching system to find an ambulance according to the information given by the caller. The challenge with these types of systems is the potential for errors by the caller, or from transferring and entering wrong data into the dispatch system. These errors may put the patient at risk and cause substantial harm or loss of life as a result of human error, late arrival of an ambulance, or wrong information or treatment. Many organizations and governments have realized the importance of building better systems to spare the lives of patients or of the injured involved in an accident. In the last decade, there has been a lot of effort to improve and automate emergency dispatching systems. One of these serious efforts is that of the Victoria Ambulance department in Australia [1].
Corresponding author: Samir El-Masri, Associate Professor, Department of Information Systems, College of Computer and Information Sciences, KSU, P.O. Box 51178, Riyadh 11543, Kingdom of Saudi Arabia. E-mail: [email protected]
The Victorian system was introduced in 1998 and provides clinical information about the patient to the hospital as well as care recommendations from the hospital to the ambulance. Another advanced system is the Hospital & Emergency Ambulance Link (HEAL), which was implemented in Singapore. The HEAL system is based on wireless data communication between ambulances and hospitals and assists hospitals and emergency department doctors by providing information about incoming patients arriving by ambulance [2]. Further research has been conducted and a variety of new systems have previously been proposed [3, 4, 5, 6, 7]. El-Masri [8] proposed a preliminary version of the current system in 2005.
2. System Components and Architecture
The proposed emergency system consists of five components, shown in Figure 1 and listed as follows:
1. Emergency requester device (emergency application for mobile devices): a mobile phone equipped with a Global Positioning System (GPS) receiver
2. Main Central System (MCS): the main server for the whole system
3. Ambulance system: each ambulance will be equipped with a GPS and navigation system; the system will use a touch screen (to indicate availability and to respond to requests)
4. SOEHR: Smart Online Electronic Health Record
5. HEDS: Hospital Emergency Department System
Figure 1. System components and communications
3. System Processes
The system processes and the communication between the components are as follows:
3.1. Reporting an Accident
The system is triggered by the emergency requester device reporting an accident. With a simple mobile application installed on the device, the caller can quickly and easily enter information about the accident (for example, the number of injured people and the number of cars involved). The application automatically sends the GPS coordinates and the mobile phone number to the MCS. If GPS coordinates are not available, the mobile application sends the approximate accident location using the general packet radio service (GPRS) instead. The MCS receives the request automatically and without human intervention and searches for a suitable ambulance. If the details of the accident are insufficient, a human operator is alerted and intervenes immediately by calling the requester and asking for more details. The MCS can alternatively receive conventional requests by phone through a human operator.
3.2. Finding an Ambulance
The MCS receives the emergency request and, again without human intervention, sends a request to all available ambulances to report their GPS coordinates (other algorithms are also possible in which ambulances continually report their coordinates to the MCS; the choice of algorithm depends on how busy the environment is). The MCS then compares the accident and ambulance coordinates and sends a job request to the nearest ambulance, based on the navigation system's road map rather than the direct distance. The ambulance officer has 10 seconds to accept or reject the request. If the request is accepted, the MCS sends the accident's coordinates to the ambulance and the ambulance system automatically shows the road map to the accident location. If the ambulance officer rejects the job request or does not reply within 10 seconds, the MCS picks the second-nearest ambulance, assuming that in 10 seconds the positions of the ambulances do not change much. In case of longer delays the MCS restarts the process from the beginning.
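The dispatch loop in section 3.2 can be summarized in a short sketch. The Java code below is purely illustrative and not part of the proposed implementation: the record types, the AmbulanceLink and RoadNetwork interfaces and all identifiers are hypothetical, and the 10-second acceptance window is modelled as a blocking call for simplicity.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical domain types; field and method names are illustrative only.
record GeoPoint(double latitude, double longitude) {}
record EmergencyRequest(String callerPhone, GeoPoint location, int injuredCount) {}
record Ambulance(String id, GeoPoint position, boolean available) {}

interface AmbulanceLink {
    /** Sends a job request and waits up to timeoutSeconds for the officer to accept. */
    boolean offerJob(Ambulance ambulance, EmergencyRequest request, long timeoutSeconds);
    void sendCoordinates(Ambulance ambulance, GeoPoint destination);
}

interface RoadNetwork {
    /** Travel distance along the road network, not the straight-line distance. */
    double drivingDistance(GeoPoint from, GeoPoint to);
}

class Dispatcher {
    private static final long ACCEPT_TIMEOUT_SECONDS = 10;
    private final AmbulanceLink link;
    private final RoadNetwork roads;

    Dispatcher(AmbulanceLink link, RoadNetwork roads) {
        this.link = link;
        this.roads = roads;
    }

    /** Offers the job to ambulances ordered by road distance until one accepts within 10 seconds. */
    Optional<Ambulance> dispatch(EmergencyRequest request, List<Ambulance> reportedAmbulances) {
        List<Ambulance> candidates = reportedAmbulances.stream()
                .filter(Ambulance::available)
                .sorted(Comparator.comparingDouble(a -> roads.drivingDistance(a.position(), request.location())))
                .toList();
        for (Ambulance ambulance : candidates) {
            if (link.offerJob(ambulance, request, ACCEPT_TIMEOUT_SECONDS)) {
                link.sendCoordinates(ambulance, request.location()); // navigation map shown on board
                return Optional.of(ambulance);
            }
            // Rejection or no reply within 10 s: fall through to the next-nearest ambulance.
        }
        return Optional.empty(); // the MCS would restart the whole process in case of longer delays
    }
}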
4. Ambulance System Processes
4.1. Setting up Availability and Communicating with MCS
The ambulance officer can set the status of the ambulance to available or not available. They can also accept or reject job requests; in case of rejection, the reason should be entered into the system. When an ambulance accepts a job, its status changes to "in mission".
4.2. Accessing SOEHR
After picking up the patient or the injured, the ambulance system can access the SOEHR using the patient's fingerprint, and the officer can then quickly and easily enter the patient's current condition, such as injuries, fractures, or level of consciousness. The SOEHR, based on the patient's medical history and current condition, will recommend urgent pre-hospital treatment.
The SOEHR is a separate and independent system which is under construction in a few countries, such as Australia, New Zealand and, more recently, the USA, Saudi Arabia and others. The SOEHR is a unique electronic health record system which can quickly retrieve all details about the patient from different hospitals and clinics, process the medical history and the current condition, and come up with a pre-hospital treatment recommendation. The online health record can be accessed from the ambulance systems, from HEDS or from the hospitals. Each type of access differs in terms of security and the quantity of information required. For example, access from the ambulance requires quick and lightweight information, whereas access from HEDS and the hospitals provides more detailed information with which healthcare professionals can explore details and track visits, medications or results from any hospital.
4.3. Finding Hospital
The ambulance needs to find the right hospital for the patient on board. The ambulance system, based on the patient's condition and the distance, availability and specialty of the hospitals, selects the appropriate hospital, and the road map is automatically shown on the navigation system. The identity of the patient is communicated to HEDS. All hospital GPS coordinates are already available in the database of the ambulance system.
4.4. Preparing for Incoming Patients and Monitoring Incoming Ambulances
Once the ambulance selects a hospital, the Hospital Emergency Department System (HEDS) books a bed or place for the incoming patient and accesses the patient's health record. The ambulance system starts to continuously send the ambulance coordinates to HEDS so that the department's medical staff can monitor the incoming ambulance. HEDS shows all incoming ambulances in real time on a map, with a list of information about distances and times. Department staff prepare what they plan to do for the incoming patient, which may include the operating theatre, medications and consultants if needed. Through HEDS, staff can update the availability of beds or operating theatres based on the discharge or transfer of patients.
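Sections 4.3 and 4.4 amount to a filter-and-rank step over the known hospitals followed by a notification to the chosen hospital's HEDS. The sketch below is again only an assumption-laden illustration: it reuses the hypothetical GeoPoint and RoadNetwork types from the dispatch sketch in section 3.2, and the Hospital and HedsGateway abstractions are not part of the system's actual interfaces.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Hypothetical types; the attributes follow the description in sections 4.3 and 4.4.
record Hospital(String id, GeoPoint location, Set<String> specialties, boolean hasFreeEmergencyBed) {}
record PatientCondition(String patientId, Set<String> requiredSpecialties) {}

interface HedsGateway {
    void announceIncomingPatient(Hospital hospital, String patientId); // HEDS books a bed, opens the record
    void streamAmbulancePosition(Hospital hospital, GeoPoint position); // live tracking on the HEDS map
}

class HospitalSelector {
    private final RoadNetwork roads;  // road-distance abstraction reused from the dispatch sketch
    private final HedsGateway heds;

    HospitalSelector(RoadNetwork roads, HedsGateway heds) {
        this.roads = roads;
        this.heds = heds;
    }

    /** Picks the nearest available hospital covering the required specialties and notifies its HEDS. */
    Optional<Hospital> selectAndNotify(PatientCondition condition, GeoPoint ambulancePosition,
                                       List<Hospital> knownHospitals) {
        Optional<Hospital> chosen = knownHospitals.stream()
                .filter(Hospital::hasFreeEmergencyBed)
                .filter(h -> h.specialties().containsAll(condition.requiredSpecialties()))
                .min(Comparator.comparingDouble(h -> roads.drivingDistance(ambulancePosition, h.location())));
        chosen.ifPresent(h -> heds.announceIncomingPatient(h, condition.patientId()));
        return chosen;
    }
}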
5. Advantages of the Proposed System over other Existing Systems
The advantages of the proposed system over other systems can be listed as follows:
1. The system is fully computerized from start to end.
2. It is very comprehensive and includes all components and involved parties.
3. It is based on new, advanced technologies.
4. It eliminates all human errors and reduces the time to find and dispatch an ambulance.
5. By accessing the SOEHR, the ambulance officer can treat the injured efficiently, in comparison with traditional systems where the officer has no information about the patient's medical history and no system to assist.
6. It identifies and selects the right hospital and communicates patient details to the selected hospital.
7. It allows the emergency department to efficiently prepare for and monitor incoming patients and ambulances.
8. The ambulance officer can access the SOEHR only with the patient's fingerprint, and HEDS staff can access the SOEHR only after the ambulance system sends the patient ID, thus ensuring the privacy and security of patient details.
6. Conclusion and Future Work
Using advanced technologies such as Mobile Web Services and SOA, a new comprehensive emergency system has been proposed. We are currently in the process of developing all the components of the system. The new system will respond to all the needs of a medical emergency, from the initial emergency request until the transfer of the patient to the hospital. The system includes components that no previously proposed system has offered. Upon completion of system development, a pilot system will be deployed and tested. The system will be evaluated on a small scale, and its results and performance will be studied and compared with existing systems.
Acknowledgement: This work is part of a two-year research project fully funded by a grant through KACST/National Plan for Science and Technology in the Kingdom of Saudi Arabia. Grant number: 09-INF880-02.
References
[1] Metropolitan Ambulance Service, The Metropolitan Ambulance Service, http://www.ambulance.vic.gov.au/Ambulance-Victoria.html [accessed: 11 February 2011].
[2] Anantharaman V, Lim Swee Han. Hospital and emergency ambulance link: using IT to enhance emergency pre-hospital care. International Journal of Medical Informatics 61 (2) (2001) 147-161.
[3] FitzGerald G, Tippett V, Elcock M, et al. Queensland Emergency Medical System: A structural and organizational model for the emergency medical system in Australia. Emergency Medicine Australasia [serial online]. December 2009;21(6):510-514.
[4] Atkin C, Freedman I, Rosenfeld J, Fitzgerald M, Kossmann T. The evolution of an integrated State Trauma System in Victoria, Australia. Injury [serial online]. November 2005;36(11):1277-1287.
[5] Romsaiyud W, Premchaiswadi W. SOA context-aware mobile data model for emergency situation. Proceedings of Knowledge Engineering, 8th International Conference on ICT 2010, pp. 93-97, 24-25 Nov. 2010.
[6] Sandeep Chatterjee, James Webber. Developing Enterprise Web Services, Prentice Hall PTR, 2004.
[7] Hameed SA, Miho V, AlKhateeb W, Hassan A. Medical emergency and healthcare model: Enhancement with SMS and MMS facilities. Proceedings of Computer and Communication Engineering (ICCCE), International Conference 2010, pp. 1-6, 11-12 May 2010.
[8] El-Masri S. Mobile Comprehensive Emergency System using Mobile Web Services. A book chapter, in Unhelkar B, editor, Handbook of Research on Mobile Business: Technical, Methodological and Social Perspective. Idea Group, 1 (2005) 106-112.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-354
The General Practitioner in the Giant's Web
Vigdis HEIMLY a,b,1
a Norwegian University of Science and Technology
b Norwegian Centre for Informatics in Health and Social Care, Norway
Abstract. Most General Practitioners (GPs) in Norway use Electronic Health Record (EHR) systems to support their daily work processes. These systems were developed on the basis of local needs. Electronic collaboration between the different actors has developed over time. Larger national projects such as ePrescription and the Core EHR are examples of projects that interact with the GPs' EHR systems, and the requirements from these projects need to be addressed by the vendors of the EHR systems. At the same time the GPs see a need for further development of their EHR systems to make them better suited to supporting daily work processes. This paper addresses how GPs can influence the design and development of their EHR systems in a situation with a pre-existing installed base of systems and increasing requirements from many actors.
Keywords. National deployment, Electronic collaboration, Electronic Health Record Systems, General Practitioners, Practice Consultants, Requirements
1. Introduction
More than 95% of the GPs currently use EHR systems [1]. These systems were developed in a local setting and deployed on a national basis, a process that resembles the bootstrapping process described by Skorve and Aanestad [2]. The GPs use the EHR systems actively in their clinical work and they do not keep paper records. EHR systems are also in wide use in hospitals and in nursing homes. The development of all these systems has been done with the local actors' needs in mind, not the needs of the collaborating actors. Electronic collaboration is wanted by all actors, but how can it be coordinated at a national level while still providing room for further development of EHR systems based on the different user groups' needs? One of the main challenges is how to balance the influence of national actors, such as the regional health authorities, against that of smaller actors that do not have a strong organization to represent them nationally.
2. Method
The paper is based on experiences from participation in the EHR-monitor study [1], the initial ELIN project [3], the GPs' national reference group and a study of available project documentation.
Corresponding author.
3. Analysis
In a study from 2009 [4], T. Christensen concludes that "EPR systems in Norwegian primary care that have been developed in accordance with the principles of user-centered design have achieved widespread adoption and highly integrated use. The quality and efficiency of the clinical work has increased in contrast to the situation of their hospital colleagues, who report more modest use and benefits of EPR systems." The study was based on a national, cross-sectional questionnaire survey in Norwegian primary care. It found that the GPs got assistance from their EPR system while conducting most of their clinical tasks, but also that the GPs saw a need for improvements to their EHR systems. This was further documented in a second study [5]. Examples of missing functionality were decision support that could be adjusted to the individual patient, extended possibilities for electronic collaboration, and integration of the GPs' EHR with personal health records. The EHR-monitor survey [1] has also shown that one of the most evident current challenges for the GPs is missing functionality in the existing systems. The ELIN projects are examples of a project family where GPs are involved at a national level [3]. A panel of experts created functional requirements for electronic communication in health care on the basis of the existing systems, standards and the experts' local needs. These requirements were implemented in the EHR systems. The EHR vendors' costs were partly funded by Innovation Norway2; the rest was covered by licenses paid by the users of the EHR systems. This project model has worked out well, but the challenge for the EHR vendors is that there are many ELIN projects (health stations, community care, general practice, dentistry, etc.) and the same vendors have obligations in several of the projects. The growing need for collaboration has become more and more evident over the years. With a large installed base of EHR systems already installed by the collaborating actors, it is not an easy task to develop collaboration systems and deploy them at full scale at a national level [6]. This is a complex interplay between the development of standards, technical solutions and the people who use these systems as part of their daily work processes. Like many other users, the GPs might be skeptical of requirements that are established by external actors. If they do not feel ownership of new systems and modules that they are supposed to use, they can refuse to use them. There must also be some obvious benefits. Even a rumor about missing functionality in a system that is defined by another party might make the deployment process difficult. As an example, one of the interviewed GPs in an electronic referral project said: "I have heard that the hospital's requirements are too detailed and that the system is time consuming for us GPs to use, so I have never tried it." One way to narrow the gap between clinicians in primary and secondary care is to establish a practice consultancy system [7-10]. Practice Consultants are GPs who work in part-time positions at hospitals on issues related to collaboration between primary care and specialized care. The Practice Consultants can be regarded as boundary spanners [11] who try to engage clinicians in both primary and secondary care to take part in a common Community of Practice [12].
2 Innovation Norway promotes nationwide industrial development profitable to both the business economy and Norway's national economy, and helps release the potential of different districts and regions by contributing towards innovation, internationalization and promotion.
E. Wenger has defined communities of practice as groups of people who share a concern or a passion for something they do and who learn how to do it better as they interact regularly. Work processes in which the Practice Consultant participates would typically include strategic planning processes at the hospital, distribution of documentation from general practice to health workers and administrative staff at the hospital, participation in the design and deployment of electronic collaboration projects, arrangement of meetings and seminars with GPs in the hospital's local area, and development of guidelines for GPs in collaboration with the specialists. A survey related to the introduction of the referral system with decision support showed that GPs tended to trust the practice consultants because they were experienced and regarded as one of their own [6]. Experiences with the Practice Consultants have been good in both Denmark and Norway, although there have been challenges in funding the system [8], [10].
3.1. The Vendor's Challenge
As a wide range of ICT systems has been installed by actors that collaborate with the GPs, an increasing need for extensions and changes to their local systems has emerged. Many requirements are placed on the vendors of the GPs' EHR systems by actors such as the Directorate of Health, the regional health authorities, insurance companies, the national insurance scheme, etc. The development of national systems like ePrescription, the Core EHR and new ELIN projects is grounded in national strategies, standards and architectures. Originally most of the projects started at a local level. The challenge now is how to balance the external factors and limitations placed on the development against the local needs. Fewer and fewer of the projects that the GP vendors move into are intended only for a local market; the vendors need to know that their products can be sold and deployed at a wider scale. The vendors are also short of money for further development of their systems because scarce resources are being kept by other actors. There is a contradiction between the local needs and the potential for moving into a broader market. The user groups also vary from project to project and there is no link between them. During recent years the EHR vendors have been obliged to satisfy national requirements. The vendors have user forums and user groups, but pilot users claim that the vendors cannot afford to prioritize the local needs to the same extent as before, because they have to pay more attention to the needs of the collaborating actors and the authorities. The GPs do not have a national body that can represent them in a national setting. They are linked to the Municipal Authority (KS) and the Norwegian Medical Association, but these organizations have a focus that the GPs consider to be too wide.
3.2. The GPs' National Reference Group
A group of ICT-experienced GPs took the initiative to establish a national reference group in 2010. The Norwegian Medical Association has a subgroup named the Norwegian Association for General Practice (NFA), to which the reference group is connected. Its focus is on further development of electronic health record systems in general practice. Most of the GPs in the group have broad experience as pilot users in various ICT projects, practice consultants or user representatives in the vendors' user groups. The GPs also have a very active online forum where they
discuss ICT-related issues in a lively way. The reference group also uses this forum to get active feedback on the work that it does.
Figure 1. Development model for EHR in General Practice
The reference group has so far come up with a list of more than 30 action points for which they want improvements to their EHR systems. Some of these action points are general and broad (decision support), while others are more concrete and limited (suggestions for improving the interface of a communication module). Some of the action points only have implications for the systems in general practice, while others are related to the collaboration with external actors. The tasks that the GPs wanted to prioritize first were the development of medication synchronization modules, NEKLAB (the Norwegian coding system for laboratory services), synchronization tables and new functionality for transfer of EHRs between GPs. The GPs have also experienced that promising pilots have been stopped because resources for the deployment process were not available, and they therefore want more focus on deployment processes. First of all, the reference group wants more money and programming resources directed to the vendors, in order to ensure that they can continue to improve the EHR systems based on the GPs' needs. The GPs are willing to pay part of this bill through increased licenses, but they are also trying to obtain national funding from the Directorate of Health and Innovation Norway. These actors have been positive about supporting the initiative, which seems promising. So far this process is at an early stage and it remains to be seen how the reference group will find its role among all the other national actors. A possible model for further development of the EHR systems in general practice is illustrated in Figure 1. Recommendations for development:
• Requirements from local projects and users are discussed and prioritized in user forums.
• The GPs' national reference group coordinates and prioritizes tasks with national projects, and works to secure funding.
• Vendors develop new functionality in collaboration with GPs, Practice Consultants and other collaborating actors.
• Practice Consultants take an active part in the deployment of new functionality.
One of the challenges that needs to be sorted out is how the GPs in the reference group should be compensated for the time they spend on coordination tasks. As a starting point they have partly been compensated through a project linked to NFA, but most of the work has been done on a voluntary basis.
One model could be to provide the GPs with a 20% position linked to their national role. This position could be funded by actors like the Department of Health, KS or the Norwegian Medical Association.
4. Conclusion
External actors put increasing pressure on the EHR system vendors in terms of requirements for new functionality, while the GPs' own possibility to influence EHR system development has decreased. The development of new functionality should still be based on local needs, but coordination at a national level is also needed. A model with a national reference group initiated by the GPs has been tried out and seems promising. Based on the experiences from this work, a more permanent model for the involvement of the GPs should be established at a national level. Experiences from Danish and Norwegian collaboration projects also show that active involvement of Practice Consultants in the design and deployment of collaboration functionality can be recommended.
References
[1] Heimly V, et al. Diffusion and use of Electronic Health Record Systems in Norway. Studies in Health Technology and Informatics, 2010. 160: p. 381.
[2] Skorve E, Aanestad M. Bootstrapping Revisited: Opening the Black Box of Organizational Implementation. Scandinavian Information Systems Research, 2010: p. 111-126.
[3] Christensen T, Grimsmo A. Development of functional requirements for electronic health communication: preliminary results from the ELIN project. Informatics in Primary Care, 2005. 13(3): p. 203-208.
[4] Christensen T, et al. Norwegian GPs' use of electronic patient record systems. International Journal of Medical Informatics, 2009. 78(12): p. 808-814.
[5] Christensen T, Grimsmo A. Expectations for the next generation of electronic patient records in primary care: a triangulated study. Informatics in Primary Care, 2008. 16(1): p. 21-28.
[6] Heimly V. Collaboration across Organizational Borders, the Referral Case. Studies in Health Technology and Informatics, 2010. 157: p. 106.
[7] Kvamme O, Olesen F, Samuelsson M. Improving the interface between primary and secondary care: a statement from the European Working Party on Quality in Family Practice (EQuiP). Quality in Health Care, 2001. 10(1): p. 33.
[8] Kvamme OJ, Eliasson G, Jensen PB. Co-operation of care and learning across the interface between primary and secondary care - Experiences from two workshops at the 15th WONCA World Conference 1998. Scandinavian Journal of Primary Health Care, 1998. 16(3): p. 131-134.
[9] Kvamme OJ, Olesen F, Samuelsson M. Improving the interface between primary and secondary care: a statement from the European Working Party on Quality in Family Practice (EQuiP). Quality in Health Care, 2001. 10(1): p. 33-39.
[10] Risanger V. Mind the Gap. Master thesis, 2008.
[11] Levina N, Vaast E. The emergence of boundary spanning competence in practice: implications for implementation and use of information systems. Management Information Systems Quarterly, 2005. 29(2): p. 8.
[12] Wenger E. Communities of practice: Learning, meanings, and identity. 2007: Cambridge University Press.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-359
When Information Sharing is not Enough
Berit BRATTHEIM a,1, Arild FAXVAAG a, Pieter TOUSSAINT a
a Norwegian EHR Research Centre (NSEP), Institute of Neuroscience, Faculty of Medicine, NTNU, Trondheim, Norway
Abstract. This paper explores information sharing in multidisciplinary clinical collaboration between three hospitals. Our study draws on qualitative interviews with surgeons and radiologists in two county hospitals and one university hospital. The analysis shows that the actors shared a restricted amount of information about the patients they had in common and that different actors used the shared information in different ways. However, much communication was still needed to clarify and negotiate the meaning of the shared data and its implications for collaborative care. To conclude, while the arguments for a shared information space may appear convincing, the communication practices observed illustrate that information systems also need to support the communicative process in clinical collaborative work.
Keywords. Shared record, communication support, transinstitutional collaboration, aortic aneurysm, surgery, radiology
1. Introduction
The process of planning and subsequently executing clinical activities, including the coordination of information and the transfer of patients, works reasonably well in small clinical units. Actors that are involved in the care of a patient have access to the same clinical information in a shared record system. At the same time, the actors have excellent access to each other, facilitating discussions and negotiations on care issues by allowing less formalized exchange of information. In multidisciplinary contexts, this practice might cause different disciplines to use presumably the same information elements in multiple ways [1]. Most clinical domains are characterized by a steady introduction of new clinical methods and techniques, innovations that must be accompanied by education and more specialized training of the personnel [2-4]. Clinical units that deploy new and improved services by taking sophisticated techniques into use rapidly find themselves attracting patients from other hospitals. The less innovative clinical units might find a new role as a collaborating and contributing partner. In such situations, collaboration will have to be extended across institutional borders. Clinical domains characterized by trans-hospital collaboration face particular challenges with regard to achieving efficient clinical information exchange [5]. It has been assumed that establishing shared information spaces will lead to more effective collaboration [6], for example when healthcare actors have to exchange information within or across units to provide patient care. Even if the involved actors get access to an information system (IS) that is shared between multiple institutions [7], this will not suffice.
Corresponding author: The Norwegian EHR Research Centre, Medical technical research centre, NO-7489 Trondheim, Norway, E-mail: [email protected]
The actors might have other unmet clinical needs that must be satisfied to support effective clinical collaboration. In this paper we address this question in the context of collaboration between members of a multidisciplinary care team that provides advanced endovascular surgical services (endovascular aneurysm repair, EVAR) to patients with abdominal aortic aneurysms (AAA), asking the following questions: What information is actually shared between the collaborating clinicians? How is the information used by the different actors, and how is this information shared?
2. Method
Healthcare setting: One university hospital and two county hospitals, all part of a Norwegian regional healthcare service. The information infrastructure consisted of a radiological IS on a shared inter-hospital server, deployed at all public hospitals in the particular health region. Identical Electronic Patient Record (EPR) systems were deployed as stand-alone installations within each hospital. Study design: Semi-structured interviews with 12 key clinicians. The interview guide was inspired by a prior observation study focusing on one episode of monitoring of AAA patients potentially eligible for surgery [4]. From the county hospitals we interviewed two vascular surgeons and four radiologists. At the university hospital, three interventional radiologists and three vascular surgeons were interviewed, all members of the EVAR care team. Each interview lasted 45-60 minutes and was tape-recorded for subsequent transcription. The analysis was inspired by a 'grounded theory' approach [8] and followed an inductive strategy [9]. For the purpose of this paper, we present only excerpts of the empirical material to illustrate the particular issues in question. The study was approved by the Regional Committee for Medical Research Ethics and the Norwegian Social Science Data Services.
3. Results In our case of multidisciplinary trans-hospital collaboration we found three different characteristics of information sharing. First, the exchange of information necessitated supplementary discussions to clarify and negotiate essential care concerns. Second, for non-emergency patients, timing was important, but not critical, and the communication could take place in an asynchronous way. Finally, the amount of overlapping information elements indicated a rather modest common dataset. Further details are given below. 3.1. What Information to Share? Making a decision on whether to offer EVAR to a patient required collaboration between experts from both surgery and radiology departments. The transfer of patient information from the county hospital to the university hospital involved two key datasets: One set holding a processed excerpt of focal clinical information extracted from the medical record, and a second set holding more specific information stored in
the radiological record. Interestingly, the EVAR surgeons focused primarily on the clinical dataset, while the EVAR radiologists drew mainly on the second one. In general, a rather restricted amount of information was shared. To exemplify this point, Table 1 depicts an illustrative EVAR case, describing the different information elements as well as the overall communication pattern.
3.2. How is the Information Used by the Different Clinicians?
As illustrated in Table 1 and Figure 1, different actors had different perspectives on the shared information. The county surgeon considered the submitted dataset a means to mediate important clinical risk factors, highlighted with key radiological information. The EVAR surgeons, on the other hand, viewed the same dataset within the context of deciding whether EVAR surgery was an option. Hence, one task included a request to the EVAR radiologists to work out an anatomical EVAR suitability assessment, based on the delivered CT information combined with notes indicating clinical risk factors. A second task was to consider the received clinical risk information and, if needed, collect supplementary considerations on preoperative risk factors. Further, the county radiologists viewed the radiological part of these datasets (e.g. the CT images and report) as a means to support the local surgeon's decision-making by providing CT-derived diagnostic information about the AAA and its surrounding arteries. Some of them even included EVAR-specific measurements of the arteries, intending to contribute to the EVAR radiologists' assessments. However, to the EVAR radiologists this CT information did not suffice; they had to acquire additional data by getting hold of the CT source dataset collected at the county hospital. In general, the EVAR radiologists viewed this source dataset as fundamental for their image processing, grounding the radiological decision on anatomical EVAR suitability as well as guiding both their choice of stentgraft components and the actual EVAR intervention.
Table 1. Principal communication pattern for eligible EVAR candidates – an illustrative case.
County hospital: A 71-year-old patient attending the regular surveillance of his AAA. Having balanced the risks and benefits of surgical repair versus ongoing surveillance, the surgeon recommends EVAR surgery. A radiological CT scan supports the surgeon's decision-making. In agreement with the patient, the surgeon sends an EVAR referral letter to the vascular surgery team at the university hospital, including important hand-over information, e.g. considerations on the patient's comorbidities and preoperative risks. In addition, the surgeon grants access to sharing of CT data between the two hospitals.
University hospital: The vascular surgeons (=EVAR surgeons) request the EVAR radiologists to consider the patient's CT scan with respect to anatomical EVAR suitability, including some notes about the patient's risk factors. If needed, the surgeons also collect supplementary clinical considerations, arranging a separate patient-surgeon consultation and/or tests. As for the radiological work, the EVAR actors draw on the patient's CT data or, more precisely, the CT source data collected at the county hospital. The existing IS does not offer a source data transfer utility, but an informal arrangement has been set up between the EVAR radiologists and the county radiologists to support this function. In a face-to-face meeting between the EVAR experts, the two disciplines share their information. Then follows a discussion, including clarification of various risk factors accompanied by negotiations on the difficult trade-offs between anatomical and clinical risk factors. In case of EVAR, the radiologists are responsible for ordering the customized components/stentgraft.
Figure 1. Principal interaction pattern for EVAR collaboration across hospitals.
3.3. How is the Information Shared?
Throughout the course of the EVAR suitability assessment, collaboration unfolded as partly asynchronous, discipline-specific work tasks interspersed with multiple communicative acts. The existing IS solution supported parts of the communication; the actors also communicated by phone, by sending formal paper letters and by exchanging handwritten notes. Information about the outcome of the multidisciplinary face-to-face meeting that decided upon the crucial EVAR inclusion at the university hospital was particularly important. In this meeting the different actors presented, discussed and negotiated the pros and cons of further actions, in particular to balance clinical risk factors against anatomical conditions. Further, some of the county radiologists reflected on the lack of feedback from their colleagues at the university. They pointed out that feedback on their delivered CT work could have helped them improve their EVAR-related diagnostic CT skills. These actors illustrated how the communication could have taken place by giving examples of how they collaborated with colleagues at other hospitals in other clinical settings.
4. Discussion
In this case report we have shown that access to a shared information space does not suffice to establish effective collaboration between clinicians who collaborate across institutional borders. As our data indicate, communicative processes are also necessary, because substantial parts of the collaboration consisted of giving multiple meanings to information from different perspectives and of negotiating the implications for further actions. The information shared was rather modest, leading to discussions to clarify and negotiate the meaning of the shared data as well as their consequences when approaching collaborative care concerns.
From this, it might seem that seeking to enhance clinical collaboration by providing a shared information space does not suffice when dealing with a limited amount of overlapping information elements. This view is in line with that of Ash et al. [10], who argued that the varying and changing bulk of information puts strict demands on the specification of shared minimum datasets, to avoid information systems causing new types of errors. The use of both asynchronous and synchronous communication channels indicated that not all EVAR tasks were time-critical; asynchronous communication was often enough. This emphasizes the need to support asynchronous information exchange in the IS solutions (e.g. e-mail functionality, discussion forums and chat). In conclusion, IS support should both support communication and negotiation within cross-organizational clinical activities and facilitate the sharing of data. This has implications for many initiatives that aim to improve the coordination of care services, such as the Norwegian National Health Plan [11]. Despite the limited number of cases, our study has shown that today's IT systems make it difficult to support care that is provided as collaboration across institutional and professional borders. To accommodate this, we propose to apply an information needs approach [12, 13] as the first step towards process support in evolving clinical treatment processes.
Acknowledgments: We thank the participants from the three hospitals for their contribution. We also thank A. Landmark for technical assistance and K.M. Lyng for valuable input on Mol's work.
References
[1] Mol A. The body multiple: ontology in medical practice. Durham: Duke University Press; 2002.
[2] Hartswood M, Procter R, Rouncefield M, Slack R. Making a case in medical work: Implications for the electronic medical record. CSCW 2003. 12(3): 241-66.
[3] Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996. 312(7023): 71-2.
[4] Brattheim B, Faxvaag A, Tjora A. Getting the aorta pants in place: A 'community of guidance' in the evolving practice of vascular implant surgery. Health (London) 2010 [Epub ahead of print]. DOI: 10.1177/1363459310376300.
[5] Bardram J. Pervasive healthcare as a scientific discipline. Methods Inf Med 2009. 47(3): 178-5.
[6] Blomberg J. Negotiating meaning of shared information in service system encounters. Europ Man J 2008. 26(4): 213-2.
[7] Mäenpää T, Suominen T, Asikainen P, Maass M, Rostila I. The outcomes of regional healthcare information systems in health care: A review of the research literature. Int J Med Inform 2009. 78(11): 757-71.
[8] Strauss AL, Corbin JM. Basics of qualitative research: techniques and procedures for developing grounded theory. Thousand Oaks, Calif.: Sage; 1998.
[9] Creswell JW. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Sage Publications; 1998.
[10] Ash J, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 2004. 11(2): 104-12.
[11] Ministry of Health and Care Services: The National Health Plan. Available at www.government.no.
[12] Denekamp Y. Clinical decision support systems for addressing information needs of physicians. Isr Med Assoc J 2007. 9(11): 771-6.
[13] Häkkinen H, Korpela M. A participatory assessment of IS integration needs in maternity clinics using activity theory. Int J Med Inform 2007. 76(11-12): 843-9.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-364
Information and Communication Needs of Healthcare Workers in the Perioperative Domain
Børge LILLEBO a,1, Andreas SEIM b, Arild FAXVAAG a
a Norwegian EHR Research Centre, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
b Department of Computer and Information Science, Faculty of Information Technology, Mathematics and Electrical Engineering, NTNU, Trondheim, Norway
Abstract. Perioperative work requires the collaborative efforts of a multitude of actors. Coordinating such collaboration is challenging, and coordination breakdowns may be very expensive and jeopardize patient safety. We studied the needs for status information and for projection of future status and events among key actors in the perioperative environment. We found that information and projection needs differed significantly between actors. While just-in-time notifications sufficed for some, others were dependent on projections to provide high-quality and efficient care. Finally, information on current status and support in projecting the future unfolding of events could improve actors' situated coordination capabilities.
Keywords. Collaboration, awareness, patient management
1. Introduction
The perioperative departments, i.e. the admission wards, operation suites and the postoperative recovery units, are among the costliest hospital resources. Because of the problem-solving nature of surgical work and the unpredictable influx of emergency cases, what actually gets done regularly differs from what is inscribed in the schedule. In such an environment, the coordination of actors, patients and operating rooms becomes a challenge. In recent years, one has seen the emergence of information systems that display information on large, wall-mounted boards [1]. By creating a shared information space, these electronic whiteboards improve collaboration and self-coordination by making actors aware of other actors' work [2]. According to Endsley [5], the highest form of situation awareness is the ability to project future status (here "situation awareness" means "knowing what is going on"). As part of an effort to develop a next-generation system for supporting situated [3] coordination, we sought to understand the needs for projection of future status and events among key actors in the perioperative environment. In particular, we were interested in the involved actors' need for projection related to the transfer of patients from a surgical ward to an operating room (OR), preparation for the operation including induction of anesthesia, surgery, emergence from anesthesia, transfer of the patient to the post-anesthesia care unit (PACU), cleaning of the OR, monitoring of the patient at the PACU, and final transfer of the patient back to the ward.
Corresponding author. E-mail address: [email protected].
We carried out a modified goal-directed task analysis to study these needs for projection.
2. Methods
We studied the largest surgical unit of an 800-bed university hospital in Norway, consisting of two post-anesthesia care units, six surgical wards and 13 operating rooms. The unit performed acute and elective gastroenterologic, urologic, plastic, orthopedic, breast, endocrinologic and vascular surgery. We conducted 31 semi-structured interviews with perioperative actors during a field study within the perioperative environment. Interviews lasted 5-120 min, averaging 47 min and totaling 1460 min. Some of the interviews were with more than one worker at the same time. In total we spent 32 hours together with anesthesiologists, cleaners, nurse anesthetists, operating room (OR) nurses, OR suite coordinators, OR technicians, post-anesthesia care unit (PACU) nurses, surgeons and ward nurses. Data were collected as handwritten notes. The interview guide was developed and modified based on pilot observations and interviews. Interviews focused on the goals and tasks of the various perioperative actors and on what information they ideally would like to have in order to reach those goals on time, without considering whether or not that piece of information was available with current technology. This approach was based on a cognitive task analysis known as goal-directed task analysis [4].
2.1. Ethics
The National Committee for Medical and Health Research Ethics (NEM) and the Norwegian Directorate of Health approved the study. All participants gave their informed consent prior to data collection.
3. Results Many informants confirmed the importance of the ability to project future status, pointing out potential personal benefits pertaining to perioperative work. However, different classes of actors were interested in the status of different phases of perioperative work (see Figure 1), and for a variety of reasons. Moreover, the actors inferred their projections from a multitude of sources.
Figure 1. Illustration of a typical trajectory of a patient undergoing surgery, linking status shifts in that trajectory to the actors that could benefit from knowing in advance when those status shifts would occur.
For some, projection of future events was necessary for delivering the required quality of care. A ward nurse emphasized that: "I have to know when the operation will start in order to do required patient preparations such as fasting, showering and premedication." Others were interested in potential gains in efficiency. A surgical resident explained: "It would have been nice to know in advance approximately when the operations would start. In my case a message one hour before the operation begins would be nice. That would give me enough time to take care of an ED [emergency department] patient in the meantime." The degree of uncertainty in projections made by the actors was at times substantial. One surgeon said: "My impression is that e.g. when they bring the patient down to the OR, there is approximately one hour left [until the operation starts], but this depends on the type of operation and anesthesia." Actors often relied on colleagues' notifications for updating their projections. Although some of these notifications were given in advance, often communicated through pagers and phones, most notifications were given just-in-time. They acted more as last-minute reminders that required immediate action, limiting situated coordination. Cleaners explained: "We know when we should go and wash an operating room the moment they page us. Usually they do that when they are about to transport the patient out of the room. Sometimes they notify us before the patient is out, then we wait or start some minor washing with the patient in the room." Similarly, a PACU nurse noted: "When the patient has recovered sufficiently we call the ward and ask them to come and get the patient. We call them when we know for certain that the patient is fit enough to be transported to the ward. We don't call and say that the patient will be ready in half an hour..." While such just-in-time notifications were convenient for some, such as a surgeon who noted "They page me when I have to come to the OR", others were much more dependent on projections. A PACU nurse expressed that "We like to prepare equipment and make sure all other things are done before the patient arrives from the OR. Yesterday that didn't happen, three patients suddenly appeared here without us knowing they were on their way... It works out anyway though, but we prefer knowing it in advance..." Some of the informants described how they used the workspace to project future events. One OR technician explained that "Anyway, it is about walking around and looking through the OR windows. I don't know if there is anything particular that I look
for, such as the surgeon having finished his work or something like that. But I try to project when each operation will finish and next patient will arrive." There were also examples of how technology could be used to get almost the same kind of awareness information. A PACU nurse said: "We pay attention to the electronic OR scheduling system throughout the day. It is possible to see when the patient has arrived, when the operation started and so on. This gives us an indication of when the operation will be done (...) What matters to us is when the patient is expected to arrive here. It is nice to know that approximately 30 minutes before he comes, because then we have the possibility of sending one of the other patients out if we are full." Finally, the coordinator had a particular need to project the future status of multiple perioperative actors. One OR suite coordinator said: "I miss simpler tools to control whether or not our plans are feasible - e.g. if a surgeon has been planned to be in two places at the same time."
4. Discussion In this study we have illustrated that facilitating access to situation awareness information could improve situated coordination and thereby improve the overall coordination of perioperative activities. Many of those involved in the perioperative processes are located far away from the operating rooms - the center of perioperative activity. A ward nurse is usually located near the patients he/she is responsible for at the patient ward, while a surgeon on call is often wandering between various patient wards, operating rooms, emergency departments, radiology departments and more. These differences in routines and workplace environments require special consideration. Perhaps wall-mounted electronic whiteboards are insufficient for distributing the necessary information? Perhaps a mobile, single-user device is more appropriate for this task? The divergent information needs of our informants indicate that a personal device might be better than a common shared information space. Many informants pointed out that being able to project future events could also be beneficial. According to Mintzberg [7], situated coordination2 takes place when "Two or more people simply adapt to each other as their work progresses, usually by informal communication." In other words, situated coordination depends on awareness of the current activities of one's coworkers. Whether an information system should attempt to support coordination by projecting future events is an open question. What we do know about projections within this domain is that the duration of surgery is inherently hard to predict accurately, even for operations that have started as planned and are being performed by experienced surgeons [6]. Those in need of coordinating themselves might be capable of inferring future events, provided they were offered up-to-date information about ongoing events and the whereabouts of their colleagues. This could be accomplished by improving the communication means of the actors, making it easy for them to update and share information about their activities. Our study was limited to the activities related to perioperative patient handovers. Involved actors also participated in other activities throughout the hospital. Information and projection needs pertaining to these activities are outside our scope. Moreover, our work was limited to actors' subjective beliefs about their personal benefits from status
2 Mintzberg uses the term "mutual adjustment".
information and projections. Such beliefs have high face validity, but lack objective verification. In conclusion, we found that many actors made use of, or depended on, projections of future status and events in their efforts to deliver efficient, high-quality care. However, the actors' needs differed substantially, both with respect to which perioperative phases and events they were interested in and to how long in advance they felt they needed projections. Information on current status and support in projecting the future unfolding of events could improve actors' situated coordination capabilities.
References
[1] Aronsky D, Jones I, Lanaghan K, Slovis CM (2008) Supporting Patient Care in the Emergency Department with a Computerized Whiteboard System. Journal of the American Medical Informatics Association 15, 184-194.
[2] Bardram JE, Hansen TR, Soegaard M (2006) In: Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, ACM, New York, NY, USA, pp. 109-118.
[3] Lundberg N, Tellioglu H (1999) Understanding complex coordination processes in health care. Scand. J. Inf. Syst. 11, 157-181.
[4] Endsley MR, Bolté B, Jones DG (2003) Designing for Situation Awareness, Taylor & Francis.
[5] Endsley MR (1995) Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society 37, 32-64.
[6] Macario A, Dexter F (1999) Estimating the Duration of a Case When the Surgeon Has Not Recently Scheduled the Procedure at the Surgical Suite. Anesth Analg 89, 1241.
[7] Glouberman S, Mintzberg H (2001) Managing the care of health and the cure of disease - Part II: Integration. Health Care Management Review 26, 70-84; discussion 87-89.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-369
Clinical Situations and Information Needs of Physicians During Treatment of Diabetes Mellitus Patients: a Triangulation Study
Gudrun HÜBNER-BLODER a,1, Georg DUFTSCHMID b, Michael KOHLER b, Christoph RINNER b, Samrend SABOOR a, Elske AMMENWERTH a
a UMIT - University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
b Section for Medical Information Management and Imaging, Medical University of Vienna, Austria
Abstract. Physicians should have access to the information they need to provide the most effective health care. Medical knowledge and patient-oriented information are dynamic and expanding rapidly, so there is a rising risk of information overload. We investigated the information needs of physicians during the treatment of Diabetes mellitus patients, using a combination of interviews, observations, literature research and analysis of recorded medical information in hospitals as part of a methodical triangulation. A total of 446 information items were identified and structured into 9 main categories, as well as 6 time windows, 10 clinical situations and 68 brief queries. The physicians' information needs as identified in this study will now be used to develop sophisticated query tools that efficiently support finding information in an electronic health record. Keywords. Physicians' information needs, Triangulation study, Electronic health record, information overload, Diabetes mellitus.
1. Introduction The goal of health care in the new millennium is excellent care for all, so physicians should have access to the information they need to provide the most effective and efficient care based on the best available evidence [1]. To fulfil this demand for excellent clinical care, excellent information support is needed. Medical knowledge and patient-related information are dynamic and expanding rapidly, and physicians need more and more information to provide optimal care for their patients [2]. The electronic health record (EHR) is a repository of patient data. It provides a portal to a large patient information space, and it offers data from a variety of sources that are aggregated into one place [3]. Due to the huge quantity of medical data, there is a rising risk of information overload. Furthermore, physicians must review and process previously documented patient history in an ever-shorter period of time [4, 5].
Corresponding author: Dr. Gudrun Hübner-Bloder MSc., Institute of Health Informatics, Eduard Wallnöfer-Zentrum I, 6060 Hall in Tirol, Austria, Email: [email protected].
"Medicine is a knowledge based business, and experienced physicians use about two million pieces of information to manage their patients", as Smith reported in [6]. As defined in [7], an information need is a personal item about required information. Physicians seek out information in response to a problem at hand. While physicians need access to the entire patient history, they often seek answers to particular questions and therefore wish to browse certain subsets of the available data [8]. Within the Austrian Science Fund (FWF) project "Archetype based Electronic Health Record" [9], we develop solutions to assist the EHR user when searching the EHR in the context of the treatment of Diabetes mellitus (DM) patients. The first goal of this project was to investigate the specific information needs of physicians during the treatment of DM patients. The aim of this paper is to report on that study and to discuss, based on its results, how they can support EHR users in the systematic search for patient-related information.
2. Method The goal of this study was to investigate the information needs of physicians during the treatment of Diabetes mellitus (DM) patients. We decided to use a methodical triangulation, combining literature research, interviews, observations and documentation analysis, in order to systematically aggregate different perspectives on the investigated object [10].
Literature research: We analyzed five international evidence-based DM guidelines for clinical diagnostics and medical treatment of DM [11-15] to identify information items needed during DM treatment. To analyze the results, we used a summarizing qualitative content analysis with inductive creation of categories according to Mayring [16], with the assistance of qualitative data analysis software (MAXQDA 2007) [17].
Expert interviews: We performed oral, partially standardized expert interviews. The objective of expert interviews is the investigation of function-specific know-how [18]. We conducted expert interviews with 6 internists specializing in DM at the diabetes outpatient clinics of the University Hospital of Innsbruck, the Regional Hospital of Hall in Tyrol and the Hospital St. Vinzenz in Zams, and with one internist in private practice. To analyze the results we again used qualitative content analysis [16].
Observation: In addition to the expert interviews, we decided to conduct participant, unstructured observations of clinical encounters [19] to gain additional insight and to validate information from the interviews. The observation of 22 DM patient encounters took place in the DM outpatient clinic of internal medicine of the University Hospital of Innsbruck. We decided to analyze the data based on grounded theory [20, 21]. This analysis allows inductive concept and theory development during data collection.
Clinical documentation analysis: Another source for investigating the information needs of physicians was the information documented in the electronic records of hospitals. We analyzed the recorded medical information in three diabetes outpatient clinics of internal medicine (University Hospital of Innsbruck, Regional Hospital of Hall in Tyrol and the Vienna General Hospital). To analyze these recorded data we again used qualitative content analysis.
3. Results
Categories of information needs: The study resulted in the identification of 446 distinct information items (e.g. items of DM classification (Type 1 DM, Type 2 DM, gestational diabetes, etc.), onset of DM, weight-height status such as body mass index, weight gain, weight loss, etc.), which were structured into 9 main categories: I Patient master data, II Self-monitoring of the patient, III Diabetes mellitus classification, IV General medical history, V Diagnosis, VI Recent surgery, VII Recent check-ups, VIII Laboratory findings, IX Medication/Therapy. Each main category comprises up to four subcategories. The categories were initially structured based on the comprehensive information items of the international evidence-based DM guidelines, then adapted with the information items gained in the expert interviews and the observations, and finally matched with the results of the clinical documentation analysis.
Time windows: We also investigated the time windows users request when searching for information. We define time windows as timeframes in which specific information items are important for the attending physician. For example, during a routine check of a DM patient, physicians may want to access data limited to a 3-month or 6-month interval. Overall, the study pointed to six typical time windows: I. 0-3 months, II. 0-6 months, III. 0-12 months, IV. 0-36 months, V. 0-60 months, VI. all available data.
Clinical situations and brief queries: Both the expert interviews and the observations showed that information needs differ according to the clinical situation. Based on this, we defined ten clinical situations as well as 68 additional brief queries, i.e. queries that supply additional short pieces of information (e.g. the HbA1c trend in a specific time window). Table 1 shows an overview of the clinical situations and brief queries.

Table 1. List of the clinical situations and brief queries (exemplary)

Clinical Situations                                                    Numbers of Items
1. Initial clinical interview                                          202
2. Routine check - brief data set                                      42
3. Routine check - extended data set (exemplary):
   • Routine check for patients with cardiovascular problems           82
   • Routine check for patients with neuropathy                        58
   • Routine check for patients with nephropathy                       53
   • Routine check for patients with ophthalmological problems
     (e.g. retinopathy)                                                53
   • Routine check for patients with dermatological problems
     (e.g. diabetic foot)                                              63

Laboratory Brief Queries (Exemplary)            Questions
Glucose status
   • Fasting plasma glucose (FPG)               All pathological values (PG > 100 or PG < 70)
   • Postprandial plasma glucose (PG pp)        All pathological values (PG pp > 130)
   • HbA1c                                      Progress in time window (selectable)
   • Oral glucose tolerance test (OGTT)         All data available
4. Discussion and Conclusion The purpose of the study was to investigate physicians' information needs during the treatment of Diabetes mellitus patients. The study revealed 446 distinct information items structured into nine categories, six different time windows, ten clinical situations and 68 brief queries. Earlier studies that investigated information needs were mostly based on interviews only [22, 23], on observations alone [24-26], or on a combination of interviews and literature research [27]. By using methodical triangulation as we did, different points of view could be aggregated systematically [10]. In our study we combined interviews, observations, literature research and documentation analysis. We felt that using four different qualitative methods helped to get a more complete and comprehensive picture. "Theoretical sampling necessitates building interpretative theories from the emerging data and selecting a new sample to examine and elaborate on this theory", as reported in [28]. In our study, interviews and observations included six internal physicians. Our goal was to gain a deeper understanding of physicians' information needs in order to develop concepts which will be used in our research. At this stage of the project we focused only on the information needs of internists. Expert interviews with physicians from other medical fields (e.g. cardiologists, neurologists) should be conducted to extend the focus. While EHRs are designed to hold all needed patient-related information, the consequence is that this huge amount of available data can overwhelm physicians and make it hard for them to identify the desired information [8]. The identified information items, time windows, clinical situations and brief queries will now be used to develop pre-defined or modifiable queries that help to search for the information the physician needs in a given situation. For example, for a given clinical situation such as "routine check for patients with neuropathy", the relevant information needs and time windows will be presented to the user, together with additional brief queries. This approach should enable a situation-dependent, optimized view on the patient data. The physicians' information needs as identified in this study will now be used to develop sophisticated query tools to efficiently find information in an electronic health record [29]. Acknowledgement: The project EHR-ARCHE is supported by the Austrian Science Fund, project number P21396.
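To make the intended use of these results more concrete, the sketch below shows one possible way a pre-defined, situation-dependent query template could be represented and applied to patient data. It is only an illustrative sketch in Python; the template name, thresholds and data layout are assumptions made for this example and are not taken from the EHR-ARCHE project itself.

```python
from datetime import date, timedelta

# Hypothetical template for the clinical situation "routine check, patient with
# neuropathy": a time window plus a few brief queries over laboratory values.
SITUATION_TEMPLATE = {
    "situation": "Routine check - neuropathy",
    "time_window_days": 365,                      # time window III: 0-12 months
    "brief_queries": [
        # (item code, predicate flagging values the physician should see)
        ("FPG",   lambda v: v > 100 or v < 70),   # fasting plasma glucose
        ("PG_pp", lambda v: v > 130),             # postprandial plasma glucose
    ],
}

def run_template(template, observations, today=None):
    """Filter a patient's observations to the template's time window and
    return only the items flagged by its brief queries."""
    today = today or date.today()
    cutoff = today - timedelta(days=template["time_window_days"])
    hits = []
    for item, when, value in observations:        # observations: (code, date, value)
        if when < cutoff:
            continue
        for code, predicate in template["brief_queries"]:
            if item == code and predicate(value):
                hits.append((item, when, value))
    return hits

# Example with made-up observations
obs = [("FPG", date(2011, 3, 1), 128), ("PG_pp", date(2011, 3, 1), 115),
       ("FPG", date(2009, 5, 2), 140)]
print(run_template(SITUATION_TEMPLATE, obs, today=date(2011, 6, 1)))
# -> [('FPG', datetime.date(2011, 3, 1), 128)]
```

In a production system such templates would of course be derived from the identified categories and the underlying archetypes rather than hard-coded as in this sketch.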
References
[1] Godlee F, Pakenham-Walsh N, Ncayiyana D, Cohen B, Packer A. Can we achieve health information for all by 2015? Lancet 2004;364(9430):295-300.
[2] Gorman P. Excellent information is needed for excellent care, but so is good communication. West J Med 2000;172(5):319-20.
[3] Hayrinen K, Saranto K, Nykanen P. Definition, structure, content, use and impacts of electronic health records: a review of the research literature. Int J Med Inform 2008;77(5):291-304.
[4] Van Vleck TT, Stein DM, Stetson PD, Johnson SB. Assessing data relevance for automated generation of a clinical summary. AMIA Annu Symp Proc 2007:761-5.
[5] Bawden D. The dark side of information: overload, anxiety and other paradoxes and pathologies. Journal of Information Science 2008;35(2):180-191.
[6] Smith R. What clinical information do doctors need? BMJ 1996;313:1062-8.
[7] Timmins F. Exploring the concept of 'information need'. Int J Nurs Pract 2006;12(6):375-81.
[8] Zeng Q, Cimino J. Providing Multiple Views to Meet Physician Information Needs. In: Proceedings of the 33rd Hawaii International Conference on System Sciences, Volume 5, p. 5006, 2000.
[9] EHR-Arche: Archetype based electronic health record. 2010 [cited 2010 15.03]; Available from: http://www.meduniwien.ac.at/msi/arche
[10] Flick U. Triangulation - An Introduction (in German). 2nd ed. Wiesbaden, Germany: VS Verlag für Sozialwissenschaften; 2008.
[11] Austrian Diabetes Association. "Guidelines for Diabetes Care", revised and advanced version 2009 (in German). Wiener klinische Wochenschrift 2009 [cited 2010 15.02]; Available from: http://www.springerlink.com/content/3540562266364567/fulltext.pdf
[12] German Diabetes Association. Evidence-based Guidelines (in German). [cited 2010 15.02]; Available from: http://www.deutsche-diabetes-gesellschaft.de/redaktion/mitteilungen/leitlinien/Uebersicht_leitlinien_evidenzbasiert.php
[13] American Diabetes Association. Standards of Medical Care in Diabetes - 2010. 2010 [cited 2010 02.02]; Available from: http://care.diabetesjournals.org/content/33/Supplement_1
[14] International Diabetes Federation. Global Guideline for Type 2 Diabetes. 2010 [cited 2010 02.02]; Available from: http://www.idf.org/Global_guideline
[15] WHO - World Health Organization - EMRO Publication. Guidelines for the prevention, management and care of diabetes mellitus. 2006 [cited 2010 02.02]; Available from: http://www.emro.who.int/publications/Book_Search.asp
[16] Mayring P. Qualitative Content Analysis - Principles and Techniques (in German). 8th ed. Weinheim, Germany: Beltz Verlag; 2003.
[17] MAXQDA. MAXQDA - The professional tool for qualitative data analysis. [cited 2010 15.09]; Available from: http://www.maxqda.com/
[18] Kassner K, Wassermann P. Das theoriegenerierende Experteninterview. In: Bogner A, Littig B, editors. Nicht überall, wo Methode draufsteht, ist auch Methode drin. 2nd ed. Wiesbaden, Germany: Verlag für Sozialwissenschaften; 2005.
[19] Bortz J, Döring N. Research Methods and Evaluation (in German). 3rd ed. Berlin, Germany: Springer Verlag; 2002.
[20] Glaser BG, Strauss AL. Grounded Theory: Qualitative Research Strategies (in German). 2nd ed. Bern, Switzerland: Hans Huber Verlag; 1998.
[21] Grounded Theory Institute. What is Grounded Theory? [cited 2010 14.10]; Available from: http://www.groundedtheory.com/what-is-gt.aspx
[22] Haigh V. Clinical effectiveness and allied health professionals: an information needs assessment. Health Info Libr J 2006;23(1):41-50.
[23] Mihalynuk TV, Knopp RH, Scott CS, Coombs JB. Physician informational needs in providing nutritional guidance to patients. Fam Med 2004;36(10):722-6.
[24] Currie LM, Graham M, Allen M, Bakken S, Patel V, Cimino JJ. Clinical information needs in context: an observational study of clinicians while using a clinical information system. AMIA Annu Symp Proc 2003:190-4.
[25] Graham MJ, Currie LM, Allen M, Bakken S, Patel V, Cimino JJ. Characterizing information needs and cognitive processes during CIS use. AMIA Annu Symp Proc 2003:852.
[26] Seol YH, Kaufman DR, Mendonca EA, Cimino JJ, Johnson SB. Scenario-based assessment of physicians' information needs. Stud Health Technol Inform 2004;107(Pt 1):306-10.
[27] Braun LM, Wiesman F, van den Herik HJ, Hasman A, Korsten E. Towards patient-related information needs. Int J Med Inform 2007;76(2-3):246-51.
[28] Marshall MN. Sampling for qualitative research. Fam Pract 1996;13(6):522-5.
[29] Rinner C, Kohler M, Hübner-Bloder G, Saboor S, Ammenwerth E, Duftschmid G. Creating ISO/EN 13606 Archetypes based on Clinical Information Needs. In: Proceedings of the EFMI Special Topic Conference 'E-salus trans confinia sine finibus - e-Health Across Borders Without Boundaries', Laško, Slovenia; 2011.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-374
A Constructivist approach? Using formative evaluation to inform the Electronic Prescription Service Implementation in Primary Care, England
Jasmine HARVEY a,1, Anthony AVERY a, Justin WARING b, Ralph HIBBERD c, Nicholas BARBER c
a University of Nottingham, School of Community Health Sciences, Primary Care (Medical School), Queens Medical Centre, Nottingham, UK
b University of Nottingham, Business School, Jubilee Campus, Nottingham, UK
c University of London, School of Pharmacy, Department of Practice and Policy, Tavistock Square, London
Abstract. As part of the National Programme for IT (NPfIT) in England, the Electronic Prescription Service (EPS) is being implemented in two releases. The first release placed barcodes on prescriptions and is widely implemented. Release two (EPS2), the electronic transmission of prescriptions between GP, pharmacy and the reimbursement body, has just started implementation. On the NPfIT agenda, community pharmacies have been predicted to benefit from changes in work practice following the full EPS implementation. The study focused on how the advanced EPS (EPS2) might alter dispensing work practice in community pharmacies in areas such as workflow and workload, and on the bearing of these issues on improvements in quality of service and safety. This paper demonstrates how findings of the pre-implementation study were used to provide formative feedback to the implementers. A mixed ethnographical method that combined non-participant observations, shadowing and interviews, before and after implementation, was used to qualitatively study eight community pharmacies across three early adopter Primary Care Trusts (PCTs) in England. Key implementation issues were fed back to the PCTs as part of the EPS2 roll-out process. Staff access to dispensing terminals needs to be improved if electronic dispensing is to be encouraged. Also, as a safety issue, pharmacists are planning to print off electronic prescriptions (tokens) and dispense from them. Although safer, this could increase workload. The EPS2 could positively alter work practice by improving certain demanding aspects of dispensing whilst reducing human errors. For example, the high demand of customers handing in prescriptions and waiting for them to be dispensed could be reduced through automation. Also, the extreme variation in workload during various times of the day could be evened out to improve workflow and provide a better service; however, in order for this to be fully realized, technical issues such as the number of staff per dispensing station and dispensing from tokens would need to be addressed. Keywords. EPS, e-prescription, EHR, EPR, healthcare modernisation, clinical work practice, social constructivism, safety, medication automation, quality of care.
1 Corresponding Author. Dr Jasmine Harvey, University of Nottingham, School of Community Health Sciences, Division of Primary Care, Tower (Floor 14), Nottingham, NG7 2RD, [email protected]
1. Introduction Community pharmacies in England are part of an ambitious national programme (NPfIT) to computerise health, itself part of a wider e-Government agenda [1]. In the UK, GP practices are computerised, and virtually all prescribing is done electronically. A paper prescription (called an FP10) is printed and signed by the doctor, and the patient takes it to a pharmacy to be dispensed. The prescription is endorsed by the pharmacist and posted to a national centre which arranges payment for the pharmacist. The concept of the Electronic Prescription Service was one of four strands of a national information strategy first set out in 2002 [2]. The EPS implementation commenced in 2008 with the initial phase (EPS1) rolled out across early adopter Primary Care Trusts (PCTs). The key feature of EPS1 was the inclusion of a barcode on the FP10. The barcode, when scanned in the pharmacy, automatically transferred patient information from the paper to the computer screen, usually eliminating the need to type the medicine labels. The advanced phase (release 2, or EPS2) enables prescribers to authorise prescriptions electronically and send them to a centralized system, commonly called the spine (technically called N3). Prescriptions can then be downloaded and dispensed by the pharmacist [3]. The patient's role in this is to nominate the pharmacy that will do the downloading and dispensing. Electronic prescriptions open up the possibility of integration with the Electronic Health Record (EHR) programme, although the EPS can exist in isolation from the EHR. Community pharmacies, as key stakeholders of this agenda, have been predicted to benefit from the full EPS roll-out in terms of: freeing dispensing staff from work associated with re-keying prescription information; giving dispensing staff scope to streamline workflow by preparing medications in advance; and managing stock more effectively [4]. Our study focused on how EPS2 will alter community pharmacies by conducting a pre- and post-implementation study of workflows, workloads and priorities in community pharmacies. The research also explored anticipated issues and perceptions of the full roll-out from pharmacy professionals, how the implementation process was understood, and the pharmacist's ability to influence patient safety. This paper demonstrates how some of our pre-implementation findings were used to advise key stakeholders.
2. People, technology and the concept of social constructivism in healthcare A fundamental theory in the study of people and their work practices holds that it is human beings who appropriate technology through formative feedback. Described as the social construction of technology, this theory critically opposes technological determinism and theorises that, through everyday use, people influence and shape technologies and how they become useful. In the healthcare environment, it is important that technologies do not become a barrier to providing care but are instead tools of know-how that can be appropriated to suit high-quality care provision. For example, when May et al [5] used an ethnographic study to explore the spatial and temporal relationships between health professionals and patients in the context of how technologies are used in telepsychiatry, they concluded that the technologies needed to be appropriated well in order to avoid interfering with clinical professionalism. May et al [5] demonstrated how the boundaries between hard and soft technologies, such as the technical and the social, are blurred and how the social needs to
be taken into account (for example in a clinician-patient relationship) in order for the technology to work effectively. The theory of constructivism builds on other socio-technological theories demonstrated by Greenhalgh [6], Berg and Van der Lei [7], Eden et al [8] and Harrison et al [9]. Significantly, it recognises that there is an on-going assessment of systems before and after implementation and that it is through re-engineering by users that a system becomes successfully adopted. Studying how the EPS might alter work practice includes attaining a deeper understanding of how it could be shaped by the social and organisational processes of its users. The need for this deeper sense of understanding informed the ethnographic framework used in the data collection and analysis.
3. Data and the analytical method Qualitative methods were employed that used an ethnographic framework of non-participant observation and shadowing of community pharmacy staff, as well as interviewing. Baseline data were collected in eight sites across three PCTs in the Midlands and Northern regions of England. The PCTs were classified as early adopters of the EPS. As the first phase of the service (EPS1) had already been rolled out, the study focused on the pharmacies that were about to receive the second (EPS2) roll-out. These pharmacies were classified as first-of-type sites. The pharmacies were sampled according to which were available as first-of-type or 'semi-first-of-type' sites due to implement EPS2, and also according to their geographic location, size and ownership (independent or chain). Overall, 84 hours of observations were conducted in addition to extra hours of shadowing and interviewing staff. The observation and shadowing were written up as case studies. The case studies, together with the interviews, were thematically analysed. In the analysis, implementation issues were identified around key themes such as the prioritisation and organisation of work, and the fluidity of work (workflow) and workload.
4. Findings Prioritisation and organisation of work - A majority of the sites tended to prioritise customers who hand in their prescriptions and wait for them to be dispensed. This is termed walk-in (wait-in) dispensing. Most walk-in prescriptions were for acute treatments. Some pharmacists offered a 'collection and delivery' service whereby prescriptions were collected from the GP practice, dispensed and delivered to the customer; these tended to be repeat prescriptions. On average, repeat prescriptions were 70% of prescriptions dispensed in each site. In pharmacies that had large numbers of walk-ins, resources were sometimes very stretched, as walk-in customers required immediate attention compared to 'collection and delivery' customers. As a result, the dispensing of 'collection and delivery' prescriptions tended to be fitted around walk-ins. However, as 'collection and delivery' prescriptions tended to be greater in quantity than walk-ins, how dispensing was organised and prioritised was sometimes problematic. In order to combat this problem, some of the pharmacies had a prioritisation system of using coloured baskets to organise the dispensing process. Under the EPS2 system, in order to minimise this problem and continue to retain current safety practice, pharmacists planned to process electronically transmitted prescriptions as they currently do. This means that even with EPS2, dispensers can print-
off the electronic prescriptions (called tokens) and process them as they do with a current FP10. Whilst this could indeed retain the current safety practice, it could also increase the time taken (and cost) of dispensing, as dispensers will have an added workflow activity of printing prescriptions onto specialized FP10-like paper before processing and dispensing. Workflow and workload - The amount of work, such as the number of items dispensed in relation to the pharmacy's dispensing support system, appeared to influence the fluidity of work. Predictably, the greater the pharmacy's dispensing resource, the higher the workload. The bigger pharmacies, which had more staff, dispensed more items (over 400 items) per day, whilst the smaller pharmacies dispensed around 100-150 items per day. The workload also varied in relation to the type of dispensing service the pharmacy offered. In some of the pharmacies that offered a 'collection and delivery' service, the workload tended to range from moderate to very high depending on how many 'collection and delivery' items needed to be processed and dispensed. This was done in addition to other duties such as dispensing to walk-in customers, date checking, packing away medicines, answering telephone queries and so on. Under the EPS2 system, these different prescriptions will be streamlined into electronically sent prescriptions (whether acute or repeat), thereby eliminating the extreme workload and workflow variation associated with dispensing. The electronic transmission, however, introduces a new issue for pharmacies that do not have an adequate number of dispensing stations. Dispensing staff often jostled for terminals, which sometimes disrupted the workflow and lengthened the time taken to dispense prescriptions. If staff have to log in and out whenever they need to use a terminal (in order for the system to record each user's activity), this issue would be exacerbated, especially if dispensing directly from terminals is encouraged. Since some pharmacy systems are quite sensitive and therefore prone to crashing, the logging in and out of the system by too many dispensers could cause problems in the dispensing process. In this case, EPS2 would be more beneficial if staff had greater access to dispensing stations.
5. Discussion The introduction of technology into workplaces radically changes the way work is done and introduces a number of potential ways of doing that work [10]. Work is therefore re-engineered by selecting the most suitable of these ways and through an on-going assessment of the technology. MacKenzie and Wajcman [11] describe three layers of technology: the physical object or artefact, the activities or processes involved with the artefact, and the know-how to operate the artefact. The introduction of EPS 1 and 2 into community pharmacies encompasses changes in all three layers described by MacKenzie and Wajcman [11]. Whilst the extended baseline study of this work is currently being examined socio-technically in another article, this paper highlights how the preliminary findings (discussed in the results) were used to inform early adopter PCTs through review reports the study team produced for the PCTs. The constructivist approach enabled the study team to use methods that showed how EPS2 could be socially appropriated to suit current practice of safely dispensing medicines. This was done by observing current practice and providing a platform for potential users to converse about the intended use of the system. As part of the on-going assessment of EPS2, this became a useful information source for key implementer stakeholders, and crucially
identified some key potential benefits, and implementation issues that could become barriers to effective use of EPS2 in community pharmacy work practice.
6. Conclusion Our preliminary findings indicate that EPS2 has the potential to add value to current dispensing work in terms of smoothing out workflow and improving the management of workloads. There may also be safety benefits for patients, and this will be assessed in detail in the final stages of the study. However, issues such as dispensers printing tokens to dispense from could become barriers to a streamlined workflow and increase the cost of dispensing. In addition, pharmacies need extra technological support, such as more dispensing terminals, in order to maintain a streamlined workflow. It should, however, be noted that the benefits and implementation issues identified in this paper are the result of eight site visits to first-of-type sites. Therefore, the findings may not be generalizable to all implementation sites in terms of the potential effects of the EPS in relation to current work practice. Disclaimer: This report is independent research commissioned by the National Institute for Health Research. The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.
References
[1] Warner, N. A suitable case for treatment: The NHS and reform, Grosvenor House Publishing Ltd, 2011.
[2] DoH, Delivering 21st century IT support for the NHS. Department of Health, UK, 2002.
[3] Rai, N. EPS put simply. Pharmj.org.UK 56 (2008).
[4] CfH, The national programme for IT implementation guide: guidance to support trusts when implementing National Programme products and services. NHS Connecting for Health, 2007.
[5] May, C., Gask, L., Atkinson, T., Ellis, N., Mair, F. & Esmail, A. Resisting and promoting new technologies in clinical practice: The case of telepsychiatry. Social Science and Medicine 52 (2001), 1889-1901.
[6] Greenhalgh, T. Benefits realization? Lessons from England's efforts to produce a nationally stored summary record for 50 million people. Conference paper: The International Implementation of Electronic Health Records conference, London, 26.10.2010.
[7] Berg, M., Aarts, J. & Van der Lei, J. ICT in health care: sociotechnical approaches. Methods of Information in Medicine (2010), 297-301.
[8] Eden, K.B., Messina, H.L., Osterweil, P., Henderson, C.R. & Marie Guise, J. The Impact of Health Information Technology on Work Process and Patient Care in Labor and Delivery. American Journal of Obstetrics and Gynecology 199 (2008), 307.e1-307.e9.
[9] Harrison, M.I., Koppel, R. & Bar-Lev, S. Unintended Consequences of Information Technologies in Health Care - An Interactive Sociotechnical Analysis. Journal of the American Medical Informatics Association 14 (2007), 542-549.
[10] Pouloudi, A., Perry, M. & Saini, R. Organisational appropriation of technology: A case study. Cognition Technology & Work (1999), 169-178.
[11] MacKenzie, D. & Wajcman, J. The Social Shaping of Technology: How the Refrigerator Got its Hum, Open University Press, Buckingham, 1985.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-379
Can Cloud Computing Benefit Health Services? – A SWOT Analysis
Mu-Hsing KUO a,1, Andre KUSHNIRUK a, Elizabeth BORYCKI a
a School of Health Information Science, University of Victoria, BC, Canada
Abstract. In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare, but that a number of issues will need to be addressed before its widespread adoption. Keywords. Electronic Health Record, Cloud Computing, Healthcare, SWOT
1. Introduction Despite the many benefits associated with using the EHR, there are numerous obstacles that restrict its adoption, such as the following [1]: lack of support for start-up expenses or reimbursement for implementation costs; lack of standardized technical platforms to support the EHR; lack of uniform standards for documentation of clinical services; concerns about the inability to align workflow with a standardized EHR; concerns that automation of clinical charting requires more time than paper charting; and the need to overcome security and privacy concerns. According to an Accenture survey, 58% of respondents noted that the expense required to implement EHRs was the area of greatest concern [2]. In 2007, talk of a new on-demand, self-service Internet infrastructure (i.e. cloud computing) became more prominent. Many healthcare organizations, managers and experts believe that cloud computing can improve EHR adoption and will change the face of health care information technology [3-8]. The aim of this paper is to discuss the substance of cloud computing, its current applications in healthcare, and the challenges and opportunities of adopting this new approach.
2. What is Cloud Computing? Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service-provider interaction [9]. Examples of similar, more limited applications are Google Docs and Gmail. However, cloud computing is different from traditional systems. For example, it provides a wide range
Corresponding Author: Dr. M. H. Kuo, School of Health Information Science, PO Box 3050 STN CSC, Victoria, BC, V8W 3P4, Canada. E-mail:[email protected].
of computing resources on demand, anywhere and anytime; it eliminates an up-front commitment by cloud users; it allows users to pay for the use of computing resources on a short-term basis as needed; and it achieves higher utilization by multiplexing workloads from different organizations [5, 9-12]. From a service point of view, cloud computing includes three models:
− Software as a Service (SaaS) - The applications (e.g. EHRs) are hosted by a cloud service provider and made available to customers over a network, typically the Internet (e.g. Google Apps and Salesforce.com).
− Platform as a Service (PaaS) - The development tools (e.g. operating systems) are hosted in the cloud and accessed through a browser (e.g. Microsoft Azure). With PaaS, developers can build web applications without installing any tools on their computer, and then deploy those applications without any specialized administrative skills.
− Infrastructure as a Service (IaaS) - The cloud user outsources the equipment used to support operations, including storage, hardware, servers and networking components. The cloud service provider owns the equipment and is responsible for housing, running and maintaining it (e.g. Amazon EC2). The client typically pays on a per-use basis.
To deploy cloud computing, the U.S. National Institute of Standards and Technology (NIST) lists four models:
− Private cloud - A proprietary network or a data center supplies hosted services to a certain group of people.
− Public cloud - A cloud service provider makes resources (applications and storage) available to the general public over the Internet.
− Community cloud - The cloud infrastructure is shared by several organizations and supports a specific community that has common concerns (e.g. mission, security requirements, policy, and compliance considerations).
− Hybrid cloud - An organization provides and manages some resources within its own data center and has others provided externally, such as with Microsoft HealthVault.
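As a concrete illustration of the IaaS model's on-demand provisioning and release of resources, the sketch below uses the boto library (a widely used Python client for Amazon Web Services at the time of writing) to start and later terminate a compute instance. It is a minimal sketch only: the region and machine image identifier are placeholders, credentials are assumed to be configured in the environment, and a real deployment would add error handling and, for health data, appropriate security controls.

```python
import boto.ec2

# Connect to a region; credentials are read from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) or can be passed explicitly.
conn = boto.ec2.connect_to_region("us-east-1")

# Provision a single small instance from a placeholder machine image.
# "ami-00000000" is a hypothetical image id used only for illustration.
reservation = conn.run_instances("ami-00000000",
                                 min_count=1, max_count=1,
                                 instance_type="m1.small")
instance = reservation.instances[0]
print("Started instance %s, state: %s" % (instance.id, instance.state))

# ... use the instance while demand lasts ...

# Release the resource again - with pay-per-use pricing, billing stops here.
conn.terminate_instances(instance_ids=[instance.id])
```

The same provision-then-release, pay-per-use pattern underlies the rapid elasticity arguments made in the Opportunities part of the SWOT analysis below.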
3. Current State of Cloud Computing in Healthcare "In the cloud" medical records services, such as Microsoft HealthVault, Google Health, Oracle Exalogic Elastic Cloud and Amazon Web Services (AWS), promise an explosion in the storage of personal health information online [13]. Amazon was one of the first companies to launch a cloud product for the general public, and it continues to have one of the most sophisticated and elaborate sets of options. Amazon Web Services (AWS) plays host to a collection of healthcare IT offerings, such as Salt Lake City-based Spearstone's healthcare data storage application DiskAgent, which uses Amazon Simple Storage Service (Amazon S3) as its scalable storage infrastructure [14]. In addition, MedCommons, a Watertown, Mass.-based health records services provider, utilizes AWS to build its personal health record (PHR) offering, HealthURL [15]. In most healthcare environments physicians don't always have the information they need when they need to quickly make patient-care decisions, and patients often have to carry a paper record of their health history information with them from visit to visit. To address these problems, IBM and ActiveHealth Management worked together to create a cloud computing technology-based Collaborative Care Solution that gives physicians and patients access to the information they need to improve the overall quality of care, without the need to invest in new infrastructure [16]. American Occupational Network
(AON) and HyGen Pharmaceuticals are improving patient care by digitizing health records and streamlining their business operations using cloud-based software from IBM MedTrak Systems, Inc. and The Systems House, Inc. Their technology handles various tasks (e.g. online appointment scheduling) as a cloud service through the internet instead of developing, purchasing and maintaining technology onsite [17]. The U.S. Department of Health & Human Services' (HHS) Office of the National Coordinator for Health IT (ONC) recently selected Acumen Solutions' cloud computing CRM and project management system to manage the selection and implementation of EHR systems across the country. The software will enable regional extension centers to manage interactions with medical providers related to the selection and implementation of an EHR system [18]. Sharp Community Medical Group in San Diego will be using the Collaborative Care Solution to change the way physicians and nurses access information throughout the hospital group's multiple electronic medical record systems, applying advanced analytics and clinical decision support to give doctors better insight and help them work more closely with patient care teams [14]. In Europe, a consortium including IBM, Sirrix AG security technologies, Portuguese energy and solution providers Energias de Portugal and EFACEC, San Raffaele (Italy) Hospital and several European academic and corporate research organizations announced Trustworthy Clouds (TClouds) - a patient-centric home healthcare service that will remotely monitor, diagnose and assist patients outside of a hospital setting. The complete lifecycle, from prescription to delivery to intake to reimbursement, will be stored in the cloud and will be accessible to patients, doctors and pharmacy staff [19].
4. Opportunities and Challenges The SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis is a well-known strategic planning methodology used by organizations to ensure that there is a clear objective defined for a project or venture, and that all factors related to the effort, both positive and negative, are identified and addressed. In this paper, we use the SWOT analysis to evaluate the feasibility of the health sector adopting cloud computing to improve healthcare services (Figure 1). In SWOT, strengths and weaknesses are internal factors; opportunities and threats are external factors.
Figure 1. The health cloud computing SWOT analysis
Strengths Healthcare, as with any other service operation, requires continuous and systematic innovation in order to remain cost effective, efficient and timely, and to provide high quality services. Many healthcare organizations, managers and experts believe that the cloud computing approach can improve health services [4-9]. In addition, recent
382
M.-H. Kuo et al. / Can Cloud Computing Benefit Health Services? – A SWOT Analysis
research indicates that 75% of Chief Information Officers (CIOs) say that they will need and use cloud computing in the near future [20].
Weaknesses Despite the many examples of health cloud computing applications available today, there is insufficient evidence to indicate that the new approach is suitable for healthcare. The lack of expertise for evaluating the feasibility of the new approach in the healthcare sector is currently another weakness.
Opportunities One of the greatest advantages of adopting cloud computing in healthcare is that the network, server and security headaches that exist for locally-installed, legacy systems are eliminated. Smaller hospitals and medical practices typically don't have internal IT staff to maintain and service in-house infrastructure for mission-critical applications such as EHRs. Therefore, eliminating new infrastructure costs and IT maintenance burdens clearly removes obstacles to EHR adoption. Also, the cloud computing approach promises to speed deployment while maintaining vital flexibility, i.e. rapid elasticity, and ubiquitous access to health resources.
Threats Among the possible threats to cloud computing adoption are healthcare professionals' lack of trust in the new approach and the lack of national or international mandates or regulations to support full adoption. Armbrust et al identified the ten top obstacles to users' trust in the cloud approach [21]: availability of service, data lock-in, data confidentiality and auditability, data transfer bottlenecks, performance unpredictability, scalable storage, bugs in large-scale distributed systems, scaling quickly, reputation fate sharing, and software licensing. Data jurisdiction, data interoperability and some legal issues are also potential major concerns. For example, the US Health Insurance Portability and Accountability Act (HIPAA) restricts companies from disclosing personal health data to non-affiliated third parties unless specific contractual arrangements have been put in place.
5. Conclusion and Discussion Cloud computing is a new style of computing that promises to provide more flexibility, less expense and more security to end-users. It provides potential opportunities for improving EHR adoption and healthcare services. However, there are still many challenges to fostering the new model in healthcare. In this paper, we used a SWOT analysis to evaluate the feasibility of healthcare cloud computing. We conclude that the pro side includes less up-front capital investment, the capability of rapid elasticity, and ubiquitous access to health resources. The con side includes the lack of sufficient evidence of successful application in healthcare, a dearth of domain experts to evaluate its feasibility, a lack of trust among healthcare professionals, and a lack of mandates/regulations to support full adoption. Perhaps the strongest resistance to the adoption of cloud computing in health IT centers on data security and privacy. However, we believe that, compared to locally-housed data, this computing model typically improves security, because cloud providers (e.g. Microsoft, Google) are able to devote huge resources to solving security issues that many customers cannot afford; consider, in contrast, the destruction of many medical records and legal documents in the New Orleans Hurricane Katrina disaster.
Regarding data privacy, some organizations, such as the Cloud Security Alliance (a non-profit organization), have developed comprehensive guidance to deal with privacy issues [22]. Governments can also play a critical role by fostering widely agreed regulations for both users and providers. In conclusion, if users, providers and governments act wisely, cloud computing could potentially be very beneficial to healthcare services.
References
[1] Maki, S.E. and Petterson, B., 2008. Using the Electronic Health Records. Thomson Delmar.
[2] Accenture, 2005. Electronic Health Records Survey. [Cited 2010 December 15], Available from: http://www.accenture.com/NR/rdonlyres/407281EB-2187-4A99-8FD72A802D1370EF/0/EHRSurvey.pdf
[3] Vouk, M.A., 2008. Cloud Computing - Issues, Research and Implementations. Journal of Computing and Information Technology - CIT, 16(4), 235-246.
[4] Han, Y., 2010. On the Clouds: A New Way of Computing. Information Technology and Libraries, 87-92.
[5] Armbrust, M., et al., 2010. A View of Cloud Computing. Communications of the ACM, 53(4), 50-58.
[6] Chatman, C., 2010. How Cloud Computing is Changing the Face of Health Care Information Technology. Journal of Health Care Compliance, May-June, 37-70.
[7] Batchelor, J., 2009. Future Forecast: Cloud Computing Brightens Healthcare's Dark Skies. [Cited 2010 December 15], Available from: http://www.cmio.net/index.php?option=com_articles&view=article&id=16941:future-forecast-cloud-computing-brightens-healthcares-dark-skies
[8] Fahrni, J., 2010. Cloud computing and health care - Facing the Future. [Cited 2010 December 18], Available from: http://www.slideshare.net/JFahrni/cloud-computing-and-health-care-facing-the-future
[9] Mell, P. and Grance, T., 2010. The NIST Definition of Cloud Computing. Communications of the ACM, 53(6), 50.
[10] Iyer, B. and Henderson, J.C., 2010. Preparing for the Future: Understanding the Seven Capabilities of Cloud Computing. MIS Quarterly Executive, 9(2), 117-131.
[11] Vouk, M.A., 2008. Cloud Computing - Issues, Research and Implementations. Journal of Computing and Information Technology - CIT, 16(4), 235-246.
[12] Han, Y., 2010. On the Clouds: A New Way of Computing. Information Technology and Libraries, 87-92.
[13] Online storage of your Medical Records with Google Health and Microsoft HealthVault. [Cited 2011 April 4], Available from: http://www.sencilo.com/blog/article/online-storage-of-your-medical-recordswith-google-health-and-microsoft-healthvault/
[14] DiskAgent(TM) Launches New Remote Backup and Loss Protection Software as a Service Offering. [Cited 2011 January 18], Available from: http://www.thefreelibrary.com/DiskAgent(TM)+Launches+New+Remote+Backup+and+Loss+Protection+Software...-a0182194404
[15] Batchelor, J., 2009. Future Forecast: Cloud Computing Brightens Healthcare's Dark Skies, CMIO. [Cited 2011 January 18], Available from: http://www.cmio.net/index.php?option=com_articles&view=article&id=16941
[16] ActiveHealth and IBM Pioneer Cloud Computing Approach to Help Doctors Deliver High Quality, Cost Effective Patient Care. [Cited 2011 January 18], Available from: http://www-03.ibm.com/press/us/en/pressrelease/32267.wss
[17] IBM and Partners Help Healthcare Clients Adopt Electronic Health Records and Improve Operations with Cloud Software. [Cited 2011 January 18], Available from: http://www-03.ibm.com/press/us/en/pressrelease/26963.wss
[18] Acumen nabs ONC cloud computing contract. [Cited 2011 January 20], Available from: http://www.healthimaging.com/index.php?option=com_articles&view=article&id=20648:acumen-nabs-onc-cloud-computing-contract&division=hiit
[19] EU consortium launches advanced cloud computing project with hospital and smart power grid provider. Available from: http://www-03.ibm.com/press/us/en/pressrelease/33067.wss
[20] Danek, J., 2009. Cloud Computing and the Canadian Environment. [Cited 2011 January 18], Available from: http://www.scribd.com/doc/20818613/Cloud-Computing-and-the-Canadian-Environment
[21] Armbrust, M., et al., 2009. Above the clouds: A Berkeley view of cloud computing. Technical Report No. UCB/EECS-2009-28, EECS Department, U.C. Berkeley.
[22] Cloud Security Alliance, 2009. Security Guidance for Critical Areas of Focus in Cloud Computing, V2.1. [Cited 2011 January 20], Available from: http://www.cloudsecurityalliance.org/csaguide.pdf
Evaluation
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-387
Medical Providers’ Dental Information Needs: a Baseline Survey
Amit ACHARYA, Andrea MAHNKE, Po-Huang CHYOU, Carla ROTTSCHEIT, Justin B STARREN
Biomedical Informatics Research Center, Marshfield Clinic Research Foundation, Marshfield Clinic, Marshfield, WI
Abstract. Articulation of medical and dental practices has been strongly called for based on the many oral-systemic connections. With the rapid development and adoption of electronic health records, the feasibility of integrating medical and dental patient data should be strongly considered. The objective of this study was to develop an initial understanding of medical providers’ core dental information needs and their opinion of an integrated medical-dental electronic health record (iEHR) environment in their workflow. This was achieved by administering a 13-question survey to a group of 1,197 medical care providers employed by Marshfield Clinic in Wisconsin, United States. The survey received a response rate of 35%. The responses were analyzed based on provider ‘Role’ and ‘Specialty’. The majority of the respondents felt the need for patients’ dental information to coordinate or provide effective medical care. An integrated electronic health record environment could facilitate this holistic patient care approach.
Keywords. Integrated Medical-Dental Electronic Health Record, Baseline Survey, Medical Providers’ Dental Data Need, Medical-Dental Holistic Care, Health Information Technology.
1. Introduction
The ‘Great Divide’ between dentistry and medicine is a well-known fact of the healthcare delivery system. However, the mouth is often said to be the mirror of overall health, and many studies have linked oral and systemic conditions [1, 2]. The Institute of Medicine (IOM) of the National Academy of Sciences released a report, Dental Education at the Crossroads: Challenges and Change, in January 1995 [3]. The IOM report called for strong cohesion between medicine and dentistry, stating that "Dentistry will and should become more closely integrated with medicine and the health care system on all levels: research, education, and patient care” [3]. An article by Baum, “Will dentistry be left behind at the healthcare station?” [4], indicates that the economic prosperity of dental practices and the financial constraints on dental education are keeping dentistry isolated from utilizing biological approaches to managing oral health. Although the healthcare team consists of various specialists trained in different areas of expertise, there is often a lack of bi-directional information flow between the
Corresponding Author: Amit Acharya, Dental Informatics Scientist, Biomedical Informatics Research Center, Marshfield Clinic Research Foundation, 1000 North Oak Avenue, Marshfield, WI 54449, Phone: 1715-221-6423, Fax: 1-715-221-6402, E-mail: [email protected]
dentists and the medical care providers. Many factors contribute to this lack of effective communication and sharing of patient information between the different groups of care providers in delivering a holistic approach to patient care. Contributing factors include security issues, lack of infrastructure, and the business models of the practices, to list a few. A recent study by Schleyer et al. [5] found that although 55% of the respondents to a survey answered ‘yes’ when asked whether they would allow other providers to access information about their patients, many qualified their response by indicating that they would require a level of security to be in place. The need for dentistry to be part of the National Health Information Infrastructure has also been discussed in the literature [6]. Only 32% of physicians are in solo or 2-physician practices [7]; in contrast, almost 73% of all dentists in the U.S. are in solo practices [8], reflecting the different business models of the two professions. However, with advanced technological development and the widespread adoption of electronic health records, some of the larger healthcare organizations are in a unique position to explore the feasibility of providing a holistic care approach to their patients through a medical-dental integrated electronic health record (iEHR) environment. The objective of this study was to develop an initial understanding of medical care providers’ core dental information needs and their opinion of a medical-dental integrated electronic health record (iEHR) environment in their workflow.
2. Background
Founded in 1916, Marshfield Clinic is one of the largest comprehensive medical systems in the United States. This 777-physician, 6519-employee multi-specialty group practice provides patient care, research and medical education across 52 Wisconsin locations. The Marshfield Clinic center works closely with St. Joseph's Hospital, a 524-bed acute care facility, and maintains a joint EHR. Family Health Center of Marshfield, Inc. (FHC), in partnership with Marshfield Clinic, has served low-income, underinsured and uninsured individuals since March 1974. FHC has been providing onsite dental services since the fall of 2002. Currently, FHC operates seven dental sites, with two additional sites under construction that will be operational in the fall of 2011. Marshfield Clinic has made a significant commitment to internal development of its information systems over the past 40 years. Physicians have collaborated with affiliated hospitals, clinics and an in-house development staff of over 300 IT professionals to develop systems. CattailsMD™ is the first internally developed EHR to be certified by the Certification Commission for Healthcare Information Technology (CCHIT). Marshfield Clinic is currently developing a robust medical-dental integrated electronic health record (iEHR) environment. The beta version of the dental module, CattailsDental, has been implemented and successfully rolled out in all seven dental centers. The survey discussed in this manuscript is one of the many studies conducted at Marshfield Clinic as part of the iEHR environment design and development.
3. Methods
The research group developed the survey instrument and pilot-tested it to identify any issues. Minor changes were made to the survey instrument as a result of the pilot test.
The development of the survey was also informed by a literature review that helped identify certain aspects of the survey instrument. The survey consisted of 13 questions, both structured and open-ended. Figure 1 illustrates the final survey instrument used in this study to measure medical care providers’ core dental information needs and their opinion of a medical-dental integrated electronic health record (iEHR) environment.
Figure 1. Survey instrument used in the study
The final survey and the research protocol were submitted to the Marshfield Clinic Institutional Review Board, which classified the study as exempt under section 45 CFR 46.101(b) and waived the requirement for an authorization (FWA00000873). A list of all medical providers, including Physicians, Surgeons, Residents, Registered Nurses, Nurse Practitioners, Certified Nurses and Licensed Practical Nurses, was extracted from the Marshfield Clinic data warehouse. The list identified 1,197 providers from all Marshfield Clinic locations and St. Joseph’s Hospital who were eligible to participate in the survey. The survey was administered through an online survey tool, Survey Monkey (Portland, OR), between January 14th, 2010 and February 15th, 2010. Two reminders were sent, the first on January 25th and the second on February 6th. To encourage participation, providers had the option to be entered into a drawing for an iPod on completing the survey. The survey respondents were grouped based on ‘Role’ and ‘Specialty’. Groups based on ‘Role’: Group 1 - Physician, Surgeon, Anesthesiologist, Medical Director, Department Chair, Resident and Nurse Practitioner; Group 2 - Certified Nurse, Nurse Midwives, Licensed Practical Nurses and Registered Nurse; and Group 3 - Managers and Others. Groups based on ‘Specialty’: 1. Surgery; 2. Cardiology; 3. Emergency Medicine; 4. Primary Care; 5. Oncology; 6. Pediatrics; 7. Neurology; 8. Women’s Health/Obstetrics-Gynecology; 9. Other Specialties. Questions Q5 to Q10, which were structured in format, were analyzed based on ‘Role’ and ‘Specialty’; P values were derived by performing the Chi-square test. Questions Q11 to Q13 were open-ended, and the collected data were coded, analyzed and grouped under appropriate major themes. As not all questions in the survey were mandatory, missing data from a provider’s response to a particular question were handled by excluding that respondent from the analysis of the respective question.
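The Chi-square comparison described above can be reproduced with standard statistical tooling. The following is a minimal sketch, assuming hypothetical response counts for a single structured question cross-tabulated by the three ‘Role’ groups; the counts, variable names and 3x2 layout are illustrative only and are not taken from the study data.

# Minimal sketch of a Chi-square test on a Role-by-response contingency
# table, using hypothetical counts (not the study's actual survey data).
from scipy.stats import chi2_contingency

# Rows: provider role groups; columns: answers to one structured question
# (e.g., "Yes" / "No" on needing patients' dental information).
observed = [
    [120, 45],  # Group 1 (physicians, surgeons, residents, NPs) - hypothetical
    [90, 60],   # Group 2 (nursing roles) - hypothetical
    [20, 15],   # Group 3 (managers and others) - hypothetical
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# As in the paper, a p-value below 0.05 would be read as a significant
# difference in responses across the role groups.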
4. Results
The survey was initially emailed to 1,221 provider email addresses within the system. Twenty-four emails were undeliverable, and those providers’ email addresses were removed from the original list. Of the remaining 1,197 eligible providers, 417 completed the survey, yielding a response rate of 34.84%. The providers’ responses to Questions Q5, Q7 and Q8 were statistically significant when analyzed based on ‘Role’ (P-value < 0.05). When analyzed based on ‘Specialty’, responses to Questions Q5, Q7, Q8 and Q9 were statistically significant (P-value < 0.05). Granular details regarding the responses to individual questions could not be presented in this manuscript due to space limitations. However, Figure 2 illustrates the overall medical providers’ dental data needs based on the different dental categories. After analyzing the responses to Q13, it was determined that most of the comments fell under Q11 and Q12 and were included under the respective category for analysis. The responses to Q11, which captured the advantages of a medical-dental iEHR environment mentioned by respondents, were coded under the following themes: a. access to reliable dental information and history, b. better communication with the dentist, c. holistic care and better continuity of patient care, d. better coordination of patient care, e. easy and faster access to dental information, and f. reduced narcotic abuse. Similarly, the responses to Q12, which captured the disadvantages of a medical-dental iEHR environment mentioned by respondents, were coded under the following themes: a. information overload, b. cost issues, c. privacy concerns, d. system slowness, and e. coping with dental jargon.
Figure 2. Medical providers’ dental information needs based on the different dental information categories
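As a quick check of the figures reported in the Results, the short sketch below recomputes the number of eligible providers and the response rate; the numbers are taken directly from the text above and the code is purely illustrative.

# Recompute the response rate reported in the Results section.
emailed = 1221          # provider email addresses initially contacted
undeliverable = 24      # bounced addresses removed from the list
completed = 417         # providers who completed the survey

eligible = emailed - undeliverable           # 1197 eligible providers
response_rate = completed / eligible * 100   # 34.84...%
print(eligible, f"{response_rate:.2f}%")     # 1197 34.84%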
5. Discussion
There are no data in the previous literature regarding medical providers’ opinions on an integrated electronic health record environment. The survey results present a baseline
measure by different medical specialties and major roles. Considering the busy nature of medical providers and the historic disconnect between how medicine and dentistry have been practiced, the response rate of 35% was encouraging. However, there is further scope for extensive investigation into the articulation of medical and dental care through an iEHR. It would also be interesting to explore how a similar survey would perform at a state, national or international level. The majority of the respondents felt the need for patients’ dental information to coordinate or provide effective medical care, especially Cardiologists, Emergency medicine physicians, Primary care physicians, Oncologists, Pediatricians and Neurologists. Since there are well-established connections between oral health and these specialties, this reflected the expected outcome. However, only about half of the Surgeons and Obstetricians-Gynecologists who responded to the survey expressed the need for patients’ dental information for coordinating or providing effective medical care. Based on the medical providers’ responses, there seems to be a strong need for communication between physicians and dentists regarding their patients’ health information. About 55% of the respondents requested a consult from the general dentist or dental specialist monthly or less often, and 10% weekly. Patients’ oral health status, dental treatment plan, dental problem list and dental diagnosis were important to the majority of the survey respondents. Although dentists rarely document diagnostic codes, this points to an urgent need for documenting diagnoses in their patient records, which could be supported by standardized dental diagnostic codes. It is evident from this baseline study that medical providers have recognized the need for patients’ dental information to provide comprehensive care. Hence, an iEHR environment could facilitate this holistic approach. There is scope for further investigation into the specific dental information required for each of the identified specialties. A quantitative analysis of the advantages versus disadvantages of an iEHR environment could also be conducted to explore the feasibility of such an environment.
References
[1] Ostfeld RJ, “Periodontal Disease and Cardiology,” “Report of the Independent Panel of Experts of the Scottsdale Project,” Grand Rounds Supplement, September 2007, p. 3.
[2] Grossi SG, Genco RJ. Periodontal disease and diabetes mellitus: a two-way relationship. Ann Periodontol. 1998 Jul;3(1):51-61.
[3] Field MJ, ed. Dental education at the crossroads: challenges and change. Institute of Medicine Report. Washington, DC: National Academy Press, 1995.
[4] Baum BJ. Will dentistry be left behind at the healthcare station? J Am Coll Dent. 2004 Summer;71(2):27-30.
[5] Schleyer TK, Thyvalikakath TP, Spallek H, Torres-Urquidy MH, Hernandez P, Yuhaniak J. Clinical computing in general dentistry. J Am Med Inform Assoc. 2006 May;13(3):344-352.
[6] Schleyer TK. Should dentistry be part of the National Health Information Infrastructure? J Am Dent Assoc 2004;135(12):1687-95. PMID:15646601.
[7] Boukus E, Cassil A, O'Malley AS. A Snapshot of U.S. Physicians: Key Findings from the 2008 Health Tracking Physician Survey. Center for Health Systems Change, Data Bulletin no 35, September 2009.
[8] American Dental Association Survey Center. Survey of dental practice. Chicago, IL: American Dental Association; 2003.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-392
What Makes an Information System More Preferable for Clinicians? a Qualitative Comparison of Two Systems
Habibollah PIRNEJAD a,b, Zahra NIAZKHANI a,b,1, Jos AARTS b, Roland BAL b
a Department of Medical Informatics, Urmia University of Medical Science, Urmia, Iran
b Healthcare Governance, Institute of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
Abstract. Two information systems that differed in their ability to meet clinicians’ needs in the chemotherapy medication process were implemented in a large Dutch academic hospital. A commercially available Computerized Physician Order Entry (CPOE) system was not appreciated because clinicians believed that it could not support the complex chemotherapy process. Later, a home-grown IT system capable of prescribing chemotherapy medications based on standard care protocols was appreciated and fully used by clinicians. We evaluated both systems from their users’ perspective to find the sources of clinicians’ preference and to trace them back to the systems’ development life cycles (SDLC).
Keywords. Chemotherapy, information system, implementation, design, requirement analysis, qualitative research, user involvement, CPOE, SDLC
1. Introduction
There has been an ongoing discussion in the world of health information technology about why and how an information system that was successfully adopted in one healthcare setting fails to be adopted, or is adopted sub-optimally, in another setting [1]. Is it related to information system design, meaning that a System Development Life Cycle (SDLC) fails to address user requirements, which are very specific to every target organization? Or is it because of the implementation methodologies, which also should be specific to every implementation site [1, 2]? There is evidence to support the critical role of both design and implementation processes. Studies, for example, have revealed that most successful implementations and uses of decision support systems have come from institutions that developed their own systems [3]. But how can system development and design be such an important [if not the most important] factor in determining successful IT adoption and use? Erasmus University Medical Center (Erasmus MC) implemented a CPOE system in all inpatient wards. Although the system was appreciated and used in most wards, it was not considered appropriate for prescribing chemotherapy medications. Clinicians in hematology and oncology wards continued to use the paper-based medication system
Corresponding author: Z. Niazkhani, E-mail: [email protected]
hoping that their required changes would come in a redesign of the CPOE system. A few years later, a home-grown IT system with the ability to prescribe chemotherapy medications using standard care protocols was implemented in all hematology/oncology wards of Erasmus MC. The system was appreciated and was fully used by clinicians. In this paper, we looked into the questions “what made the second system preferable to clinicians?” and “what lessons can we learn?”
2. Background and Methodology
Chemotherapy work can be considered a type of medication process, only more complex and longer-lasting. The complexity of the chemotherapy process comes mainly from the fact that chemotherapy medications have narrow therapeutic windows, necessitating more accurate dose calculation and adjustment, and from the fact that multiple parties are involved, making the process more vulnerable to coordination problems and inefficiencies [4]. Guiding patients through long-term care protocols and keeping high-quality prescription records are also very important for the chemotherapy process. Erasmus MC is a 1237-bed tertiary academic hospital in Rotterdam, The Netherlands. A commercially available CPOE system (Medicatie/EVS® V 2.30) was implemented in all inpatient wards in 2003-2005. The system had the capability to generate alerts on drug overdoses, interactions, and double medications. We interviewed system users throughout the hospital; among them, we conducted semi-structured interviews with 2 physicians and 2 nurses from the hematology/oncology department as well as with the project leader, between 2006 and 2007. The interviews were voice recorded, transcribed, coded, and analyzed for emerging themes on supportive and non-supportive features of the system in the medication process. More information about our qualitative methods of data acquisition and analysis can be found in [5,6]. In 2007 a home-grown information system, named Kuren, was implemented in all the hematology and oncology inpatient and outpatient clinics, using the same implementation strategy as the CPOE system. Kuren was designed by a pediatric oncologist, and before being implemented in the adult hematology/oncology departments, its early version had been gradually developed and used in the pediatric oncology department of Erasmus MC for about 5 years. The system was designed specifically to plan chemotherapy courses based on medical protocols, to adjust chemotherapy medication doses based on patients’ biometric indexes, and to provide decision support in following chemotherapy protocols. One year after the system's implementation, we conducted 7 semi-structured interviews with system users (including 4 physicians, 2 nurses, and the project leader). During the interviews, we asked the interviewees to explain what characteristics of Kuren make it more suitable for the chemotherapy prescription process and, where possible, to compare Kuren with the paper-based system and with the CPOE system. The interviews were voice recorded and transcribed. The transcripts were analyzed to find specific reasons for the preference for Kuren and to find out more about possible non-supportive features of the CPOE system in the chemotherapy process. Further analysis compared the reasons for preferring Kuren with the non-supportive features of the CPOE system in order to trace the source of the differences back to system design.
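The paper does not describe Kuren's internal dose-calculation logic. As an illustration of the kind of biometric-index-based adjustment involved, the sketch below scales a protocol dose by body surface area using the Mosteller formula; the choice of formula, the function names and the example values are assumptions made for illustration, not details of Kuren.

import math

def body_surface_area_m2(height_cm, weight_kg):
    # Mosteller formula: BSA (m^2) = sqrt(height[cm] * weight[kg] / 3600)
    return math.sqrt(height_cm * weight_kg / 3600.0)

def protocol_dose_mg(dose_per_m2, height_cm, weight_kg):
    # Scale a protocol dose expressed in mg/m^2 to a patient-specific dose in mg.
    return dose_per_m2 * body_surface_area_m2(height_cm, weight_kg)

# Hypothetical example: a 100 mg/m^2 protocol dose for a 170 cm, 70 kg patient.
print(round(protocol_dose_mg(100, 170, 70), 1))  # approximately 181.8 mg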
3. Findings
Both systems were implemented through more or less the same implementation strategies. The implementation of the CPOE system was seen as a success in the hospital overall. However, the situation in the hematology/oncology wards was different. Although the CPOE system had the capability of being used for prescribing chemotherapy medications, the clinicians thought this functionality could not support the complex chemotherapy process. Therefore, they preferred to continue using the paper-based system and wait for better functionality. On the other hand, although the interviewed clinicians reported a few problems in working with Kuren, they all liked the system and were very happy with the way it supported the chemotherapy process. Our data analysis revealed 13 reasons for the preference for Kuren that could be traced back to 3 differences in the SDLC of the systems (Table 1).
Table 1. Specific reasons why clinicians preferred Kuren, and their source in the design process.
System design difference: Proximity of development site to implementation site
- Quick and easy communication of feedback from system users to system developers
System design difference: User requirement driven design
- Reduced workload of clinicians (they did not have to fill in many forms and there was less double work)
- Easy to use (e.g., navigation through the system was considered easy)
- Flexibility (e.g., it was easy to make changes to a patient's already planned care)
- Reduced possibility of mistakes in clinicians' work (the system did exact and accurate dose calculation based on patients' biometric indexes)
- Easy to find different pieces of patient information in the system
- Offering a general overview of patient care (the general scheme of patient care was represented in one screen)
- Ability to link different pieces of patient information together (in a time-related pattern)
- Providing decision support (information for clinicians on how to fulfill a step in a care process based on standard care protocols)
System design difference: Process oriented design
- Support for applying standard care protocols (physicians could choose a standard care protocol to follow or build their own protocols by combining different standard protocols)
- Support for an overview of patient care (by connecting current patient care to past care as well as planned care)
- Support for synchronization and coordination of the stakeholders where the sequence of actions was important (e.g., through the system nurses knew which patients to expect and what preparations were needed before chemotherapy courses arrived at the daycare center)
- Support for communication between the different stakeholders (e.g., the system provided biometric indexes measured by physicians to pharmacists in case they needed to double-check the doses)
3.1. Proximity of Development Site to Implementation Site
Following implementation, a system enters the maintenance phase, in which possible errors arising during its working life are recognized and eliminated and the system is tuned to its working environment. Some of the advantages of Kuren were related to the fact that it was developed onsite. Every comment and/or required change to the system could easily and quickly be communicated to the ICT
department, where the designer and the programmer could sit together to figure out how to adapt the system accordingly. In technical terms, such a setting shortened the SDLC. The situation was the opposite for the CPOE system. The project team had the responsibility of collecting clinicians' comments and problems in working with the system and communicating them to the system vendor. Twice a year, all the countrywide clients of the CPOE vendor gathered in one city, where they had the chance to discuss the required changes to the system. The vendor then had to figure out how the system should or could be adapted to be responsive to its users' needs in different working environments. Such a structured SDLC inevitably prolonged the change process. By the time the changes were made to the system, users had already found their way out by working around the system [7].
3.2. User Requirement Driven Design
A thorough user requirement analysis and user involvement are fundamental, prior steps in every good system design. We could trace some important differences between the two systems back to the way their user requirements were analyzed and the results were fed into the system design and redesign processes. The designer of Kuren was an oncologist: someone with thorough knowledge of chemotherapy work requirements and with ample experience of working with the paper-based system that was going to be replaced by Kuren. He, moreover, was in close contact with other users and could relatively easily get their feedback. This setting created a short cycle of evaluating and considering users' requirements in the design and re-design processes of the system. The result was therefore a system specific to the clinical context of Erasmus MC that could respond to many of the clinicians' needs. Considering users' concerns when adjusting the system moreover created a sense of system ownership among the users and, as a result, brought their commitment and close collaboration. Such a setting never existed for the CPOE system. Many of our interviewees had complaints and/or concerns about the CPOE system. In fact, many of those complaints were never addressed in the system re-design and persisted as inflexibility and poor user-friendliness. An oncologist noted: “the CPOE system had an alerting system on drug-drug reaction but you had to calculate and combine the chemotherapy medications every time. You had to fill in the information every time. There was no information for physicians and nurses for example about side effects and the way the courses had to be administered.” Contrary to Kuren, entering a new chemotherapy course and/or specifications about a course into the CPOE system could not be done without the help of a technical person.
3.3. Process Oriented Design
One of Kuren's basic characteristics, which pleased its users, was its ability to tie different stakeholders' work into a single multidisciplinary care process. This was done by at least four means: First, the system supported the use of standard care protocols, thus reducing variation in care practice concerning who should do what, when, and how. Second, it connected different episodes of patient care in the past to planned care episodes in the future along a timeline, giving a general overview of patients' past, current, and future care.
Third, it improved communication between different parties throughout the process; thus the clinicians did not have to make many phone calls for the purpose of information gathering. Fourth, by providing necessary information from
one party to the other, the system helped different stakeholders to synchronize and coordinate better in their interdependent work. In contrast, we detected many communication problems between the clinicians working with the CPOE system [7]. The CPOE system, in general, could not appropriately support inter-professional work [6] or the patient care process as a whole. An oncologist explained: “The CPOE system was based on taking only a single chemotherapy course every time you visit a patient. You could not take a chemotherapy strategy for the patient [at once]. You could basically make a mistake by taking a wrong course for a patient. You also did not have an overview on patient care”.
4. Discussion
The two evaluated systems were considered successful. Kuren was the system of preference for hematologists/oncologists because it supported the complex chemotherapy process and met its users' requirements better. The advantages of Kuren were built into the system through a user requirement driven and process oriented design, as well as through its onsite development. Our findings in this study demonstrate the fundamental impact of an appropriate SDLC strategy on the successful adoption of IT systems. They underscore the importance of user involvement and a comprehensive user requirement analysis for an IT system's preference and success. A thorough understanding of a care process is required to design a system to support it. Such a thorough understanding (especially if the process is a multidisciplinary one) will only develop gradually and through close collaboration between system users and system developers. The study did not evaluate the financial aspects of onsite system development.
References
[1] Wears RL, Berg M. Computer technology and clinical work: still waiting for Godot. JAMA. 2005 Mar 9;293(10):1261-3.
[2] Ammenwerth E, Talmon J, Ash JS, Bates DW, Beuscart-Zephir MC, Duhamel A, et al. Impact of CPOE on mortality rates--contradictory findings, important messages. Methods Inf Med. 2006;45(6):586-93.
[3] Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005 Mar 9;293(10):1223-38.
[4] Scavuzzo J, Gamba N. Bridging the gap: the Virtual Chemotherapy Unit. J Pediatr Oncol Nurs. 2004 Jan;21(1):27-32.
[5] Niazkhani Z, Pirnejad H, van der Sijs H, de Bont A, Aarts J. Computerized Provider Order Entry System - Does it Support the Inter-professional Medication Process? Methods Inf Med. 2010;49(1):20-7.
[6] Pirnejad H, Niazkhani Z, van der Sijs H, Berg M, Bal R. Impact of a computerized physician order entry system on nurse-physician collaboration in the medication process. Int J Med Inform. 2008 Nov;77(11):735-44.
[7] Pirnejad H, Niazkhani Z, van der Sijs H, Berg M, Bal R. Evaluation of the Impact of a CPOE system on Nurse-physician Communication: A Mixed Method Study. Methods Inf Med. 2010;(48):350-60. doi:10.3414/ME0572.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-397
Does PACS facilitate work practice innovation in the intensive care unit?
Isla M HAINS, Nerida CRESWICK, Johanna I WESTBROOK
Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia
Abstract. Picture Archiving and Communication Systems (PACS) allow the fast delivery of imaging studies to clinicians at the point-of-care, supporting quicker decision-making. PACS has the potential to have a significant impact in the Intensive Care Unit (ICU) where critical decisions are made on a daily basis, particularly during ward rounds. We aimed to examine how accessing image information is integrated into ward rounds and if the presence of PACS produced innovations in ward round practices. We observed ward rounds and conducted interviews with ICU doctors at three hospitals with differing levels of PACS availability and computerization. Imaging results were infrequently viewed by clinicians during ward rounds in two ICUs: one without PACS and one which had both PACS and bedside computers. In the third ICU, where PACS was only available at a central workstation, images were frequently viewed throughout the daily round and integrated into decisions about patient care. The presence of bedside computers does not automatically result in innovations to work practice. Despite the ability to utilize PACS at the bedside to support decision-making, use was varied. Research to understand how the complexities and context of the ICU contribute to work practice innovation and why practice changes differ is required. Keywords. PACS, Intensive Care Units, Ward Rounds, Decision Making
1. Introduction
Diagnostic imaging is a key facet of healthcare, and digital imaging is changing the way in which healthcare is provided. Picture Archiving and Communication Systems (PACS) have developed and evolved over the last 30 years and are defined as “comprehensive networks of digital devices designed for acquisition, transmission, storage, display and management of diagnostic imaging studies” [1]. These systems allow x-rays and other diagnostic images to be delivered to clinicians at the point-of-care, where decisions are made, far more rapidly than hard copies. PACS is now used in healthcare worldwide [2; 3] and has been shown to improve image availability, radiology reporting times and workflows, to address issues associated with lost images, and to reduce the time clinicians spend searching for images [4; 5]. One clinical area where PACS has the potential for significant impact is the Intensive Care Unit (ICU) [6]. Diagnostic imaging can be critical to the care of an ICU patient [7; 8], with imaging examinations conducted on a daily basis. PACS has the potential to allow faster clinical decision-making, though studies to determine its impact on the initiation of clinical actions in the ICU have proved inconclusive [9; 10]. Many of these clinical decisions are
Corresponding author.
made during the daily ward round in the ICU. The aims of our study were to examine how accessing images and reports is integrated into daily ward rounds in the ICU, and to assess whether the presence of PACS produced innovations in ward round work practices.
2. Methods
2.1. Study Design
We observed daily ward rounds in the ICU to understand how the use of imaging and imaging reports is incorporated into daily practice. We conducted interviews with ICU doctors to ascertain their perceptions of the impact of PACS on their work practices and how it is, or may be, used on a daily basis. Ethics approval was obtained from the relevant hospitals and the University of New South Wales Human Research Ethics Committee. Each participant gave written consent to participate in interviews and to being observed.
2.2. Study Setting and Participants
The study was carried out in ICUs at three Australian hospitals, each of which had differing levels of PACS availability and computerization. The characteristics of each site are shown in Table 1. All ICU doctors were invited to participate in the study.
Table 1. Study site characteristics
Site 1 (Large metropolitan teaching hospital): 54 ICU beds; PACS: AGFA IMPAX 6.3.1, in use for 9 years; computerization: bedside computers and central workstations.
Site 2 (Large metropolitan teaching hospital): 28 ICU beds; PACS: GE Centricity Web v3.0, in use for 8 months; computerization: central workstations.
Site 3 (Medium metropolitan teaching hospital): 13 ICU beds; no PACS; computerization: central workstations.
2.3. Data Collection
Data were collected by IH and NC between April and October 2010. We observed 79 hours of ward rounds and carried out 40 one-on-one semi-structured interviews with ICU doctors (staff specialists, registrars and residents). We asked questions relating to changes or potential changes to practice with PACS; whether the sequence of tasks had been, or was anticipated to be, impacted following PACS introduction; and where PACS is accessed. The interview schedule was adapted throughout the process according to issues arising from earlier interviews. Written observation notes included how often imaging information was accessed during a ward round, where and how the information was accessed, and whether it was accessed for every patient.
2.4. Data Analysis
We (IH & NC) reviewed all observation notes and classified the data according to how frequently imaging data were accessed and where and how this occurred, either via PACS or hard copy film. We employed NVivo 8 (QSR International) to organize the
interview data and analyzed it using thematic analysis (IH & NC) [11]. We report results according to themes arising from the combined observational and interview data.
3. Results 3.1. Integration of Imaging into Ward Rounds At Site 1, which has bedside computers with access to an ICU clinical information system, electronic ordering and PACS, imaging data were infrequently viewed and accessed during the majority of daily morning ward rounds. We observed only one ward round where clinicians viewed images during the round and this round was carried out primarily at the central workstation area, with the doctor simply doing a physical examination of the patient at the bedside before going back to the workstation to access data and write notes. Though senior clinicians commented on the ability to “see an x-ray at every bedside without having to walk back and try and find it” and “look up an x-ray at the bedside when you are interested on the ward round”, others stated that images are generally looked at on PACS at the central workstation, outside of rounds, rather than at the bedside. This may be related to their belief that the bedside computer screens are not of a high enough resolution “to have a proper look at PACS” and so they prefer using the high resolution screens at the central workstation, though we infrequently observed this during the round. Additionally it could simply occur because clinicians preferred to keep the ward round “compartmentalized just for the sake of getting the ward round done in a timely fashion”. Similar practices were also observed at Site 3. This ICU does not have PACS or bedside computers and imaging information was rarely used during the ward round. Images are viewed on a multiviewer light box in a corner of the ICU and we observed a total of two instances on two separate rounds (out of six rounds observed) in which a senior doctor accessed an x-ray in this area. However, immediately before the ward round all ICU clinicians attend a “handover round” where imaging data and plans for current ICU patients are discussed. Comments from all doctors also supported the use of imaging information primarily at this handover meeting rather than on ward rounds. Conversely, at Site 2, which has had PACS for a relatively short period of time, we observed the clinicians using PACS to access imaging reports, x-rays and CT scans for the majority of the patients on nearly all rounds when making decisions regarding the patient, with only a few exceptions. 3.2. Ward Round Work Practice Innovation Though there are formal times throughout the week or day in each ICU where imaging results are viewed (Table 2), imaging information was also accessed on the ward rounds, where we observed innovative practice associated with PACS. While the use of PACS during the ward round was seldom observed at Site 1, this was clearly not the case at Site 2 with clinicians demonstrating substantial differences in their integration of imaging information and real-time decision-making about patient care. The organization of ward rounds at Site 2 was observed (and apparent from participants’ reports) to be strongly influenced by both the senior doctor leading the round, and that access to PACS was only at a central workstation area. However many doctors at this site commented that the introduction of PACS has positively changed their practices on
ward rounds. It “changes the structure of the ward round” from one where all the images are accessed at the beginning or end of the round either in the ICU or through having to go to radiology (mainly for CT images), which one junior doctor stated was “completely impractical”, to one where images are generally viewed at the time the patient is being reviewed. One senior doctor also perceived that PACS allowed the ward round to be conducted in a safer and timelier manner. “I think it increases the ability for us to actually get through the ward round and it makes sure we actually see what we need to see and it also means that the patients are safe because everyone’s on the same page.” Site 2
Though Site 3 did not have PACS, only one doctor perceived that the introduction of PACS would innovate the ward round, while others expected there would be little change, the only difference being that the ‘handover round’ would use a projector screen rather than the traditional multi-viewer light box.
Table 2. Formal times at which imaging information is viewed
Site 1 (Large metropolitan teaching hospital): afternoon x-ray meeting.
Site 2 (Large metropolitan teaching hospital): bi-weekly radiology meeting (in the radiology department).
Site 3 (Medium metropolitan teaching hospital): daily morning handover round (pre-ward round); bi-weekly radiology meeting (in the ICU).
3.3. Perceptions of Bedside Computers Despite the presence of bedside computers at Site 1, images were rarely viewed at the bedside during the examination of the patient on the round. However, doctors at sites without either bedside computers or PACS believed that the presence of bedside computers would enhance the use of PACS for their decision making on ward rounds. “Whereas if it was at the bedside: examine, look at the numbers, look at the x-ray and then you can formulate a plan.” Site 2 “If it’s available at the bedside… I think the ideal way to do it is...have a look at it when you’re actually seeing the patient or else go there to listen, examine the patient and then, then look at the x-ray. Much more likely to register more from the x-ray when you’ve seen it after looking at the patient.” Site 3
4. Discussion
To our knowledge this is the first study which reports on ward round work practice innovation associated with PACS in the ICU. While PACS is an innovation in itself, the presence of PACS did not necessarily lead to significant changes in work practice during ward rounds, despite its great potential to aid decision-making at the point-of-care [4; 6]. Though bedside computers have been shown to aid clinicians’ work [12], interestingly we found that while doctors (without current access to PACS) perceived that bedside computers would enhance or increase the use of PACS during rounds, our investigation of a site with this capability showed limited bedside use, potentially due in part to screen resolution [13]. This demonstrates an interesting variance between anticipated changes in practice and what happens in practice. At the site that integrated the use of PACS into their daily ward rounds, clinicians commented on
improvement and efficiency in ward rounds. Conversely, this was a reason suggested by one of the doctors at Site 1 as to why they did not use PACS during rounds. Studies of the impact of PACS on the structure of ward rounds in other settings show both positive and negative effects, with conflicting reports of PACS affecting the efficiency of the ward round [13; 14]. The ICU is a complex environment, and it is conceivable that the context and culture of each ICU will contribute to the observed differences.
5. Conclusion Although PACS use at the bedside in particular has enormous potential for innovating ward round work practices, we found this was not a consistent outcome. There is a clear need to understand how the complexities and context of each ICU contribute to work practice innovation and why PACS integration can create significant work practice change at some sites and not others. Furthermore, measuring the clinical impact of such work practice changes remains a challenge.
Acknowledgements. The authors thank ICU staff for participating in the study. This research was funded by an Australian Research Council Linkage Grant LP0989144
References
[1] Hood, M.N. and Scott, H. Introduction to Picture Archive and Communication Systems, Journal of Radiology Nursing 25 (2006), 69-74.
[2] HIMSS Foundation, Picture Archiving and Communication Systems: A 2000-2008 Study, http://www.himss.org/foundation/docs/PACS_ResearchWhitePaperFinal.pdf?src=pr (Accessed 1st November 2010).
[3] Sutton, L.N. PACS and diagnostic imaging service delivery--A UK perspective, Eur J Radiol, In Press (2010).
[4] Bryan, S., Weatherburn, G.C., Watkins, J.R. and Buxton, M.J. The benefits of hospital-wide picture archiving and communication systems: A survey of clinical users of radiology services, Br J Radiol 72 (1999), 469-478.
[5] Siegel, E.L. and Reiner, B.I. Filmless radiology at the Baltimore VA Medical Center: A 9 year retrospective, Comput Med Imaging Graph 27 (2003), 101-109.
[6] Steckel, R.J. The Current Applications of PACS to Radiology Practice, Radiology 190 (1994), 50A-52A.
[7] Strange, C. Infection in the Intensive Care Unit: A Clinician's View of the Role of Imaging, Semin Roentgenol 42 (2007), 7-10.
[8] Trotman-Dickenson, B. Radiology in the intensive care unit (Part I), J Intensive Care Med 18 (2003), 198-210.
[9] Kundel, H.L., Seshadri, S.B., Langlotz, C.P., Lanken, P.N., Horii, S.C., Nodine, C.F., Polansky, M., Feingold, E., Brikman, I., Bozzo, M. and Redfern, R. Prospective study of a PACS: information flow and clinical action in a medical intensive care unit, Radiology 199 (1996), 143-149.
[10] Watkins, J., Weatherburn, G. and Bryan, S. The impact of a picture archiving and communication system (PACS) upon an intensive care unit, Eur J Radiol 34 (2000), 3-8.
[11] Pope, C. and Mays, N. Qualitative research in health care, BMJ Books, Oxford, 2006.
[12] Poissant, L., Pereira, J., Tamblyn, R. and Kawasumi, Y. The Impact of Electronic Health Records on Time Efficiency of Physicians and Nurses: A Systematic Review, J Am Med Inform Assoc 12 (2005), 505-516.
[13] Tan, S.L. and Lewis, R.A. Picture archiving and communication systems: A multicentre survey of users experience and satisfaction, Eur J Radiol 75 (2010), 406-410.
[14] Pilling, J.R. Picture archiving and communication systems: The users' view, Br J Radiol 76 (2003), 519-524.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-402
Innovation in Intensive Care Nursing Work Practices with PACS
Nerida CRESWICK, Isla M. HAINS, Johanna I. WESTBROOK
Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales
Abstract. Doctors are the main users of x-rays and other medical images in hospitals and as such picture archive and communication systems (PACS) have been designed to improve their work processes and clinical care by providing them with faster access to images. Nurses working in intensive care units (ICUs) also access images as an integral part of their work, yet no studies have examined the impact of PACS on the work of intensive care nurses. Our study aimed to examine whether and how ICU nurses view and use images and whether access to PACS promotes innovation in work practices. We interviewed (n=49) and observed (n=23) nurses in three Australian metropolitan teaching hospital ICUs with varying degrees of PACS implementation. Our study found that nurses with access to PACS were able to independently and easily access images, did so more frequently when required, and perceived that this had the potential to positively impact upon patient safety. Those without PACS usually viewed images more traditionally as part of a ward round. The introduction of PACS to ICU settings promotes changes in nursing work practices by providing nurses with the ability to act more autonomously, with the potential to enhance patient care. Keywords. critical care, nurses, intensive care units, hospital information systems, radiology information systems, evaluation studies.
1. Introduction Picture archive and communication systems (PACS) store and provide faster access to electronic medical images such as x-rays, CTs and MRIs for doctors (including radiologists), and have the potential to assist them in their decision making [1]. Accessing and utilising medical images is an integral part of the work of intensive care nurses [2], for example to determine the position of nasogastric tubes on chest x-rays prior to commencement of feeding. A literature review [3] has highlighted ways in which PACS could innovate nursing practice. These include allowing improved patient care by providing access to more information, its use as a tool to improve handover communication, and in research and education by providing the means for image searching. The only previous studies evaluating PACS use in intensive care units (ICUs) included surveys with nurses as participants, but did not separately report findings regarding nurses [4,5], and an Australian study which found that nurses did not use PACS in the ICU [6]. None have focused on the use of PACS by intensive care nurses, nor its impact on their work practices.
1
Corresponding Author.
Intensive care nurses are working in a complex environment in collaboration with doctors to provide care for high acuity patients, integrating clinical information from multiple sources. A socio-technical approach provides a framework which allows information and communication technology (ICT) such as PACS to be examined in the context of its users and their setting [7,8]. The aim of our study was to understand whether and how intensive care nurses access and use medical images in their work, and to examine the impact of PACS on nursing work practice innovation.
2. Method 2.1. Design, Setting and Participants A qualitative study of ICU nurses’ work practices was conducted across three Australian metropolitan teaching hospital ICUs using a multi-method approach with semi-structured interviews to elicit perceptions, and observations to examine practices of access and use of medical images. Interviews and observations provided a means to explore the complex socio-technical network of users, systems and settings [9]. ICU 1 had longstanding (nine years prior) access to PACS (AGFA IMPAX 6.3.1) from bedside and central workstation computers, ICU 2 introduced PACS (GE Centricity Web v3.0) eight months prior, accessed from central workstation computers, and ICU 3 had not yet implemented PACS. Participants were selected to ensure representation across roles, including Registered Nurses (RNs), Clinical Nurse Specialists (CNSs), Clinical Nurse Educators (CNEs) and Nurse Unit Managers (NUMs). We interviewed (n=49) and observed (n=23) nurses for 35.5 hours. Ethics approval was granted by hospital and university ethics committees. Each participant gave written consent. 2.2. Data Collection and Analysis Between April and October 2010 three researchers conducted semi-structured interviews which included asking nurses about how PACS has changed or will change their work practices, patient safety and their role. These interviews were audiorecorded and transcribed. Observations of nurses carrying out their work were conducted and researchers recorded information about the ways in which nurses carried out their day-to-day work, including viewing medical images. Interview transcripts were analysed by one researcher (IH) to categorise direct quotations by the nurses into themes, assisted with QSR NVivo version 8.0. A second researcher (NC) reviewed the categories to achieve triangulation of analyses. Observation notes were reviewed to identify occurrences of nurses viewing medical imaging during their work. Our results present the themes arising from both interview and observational data.
3. Results 3.1. Nurses Viewing Images Independently In ICU 1 where there was bedside access to PACS, nurses stated that they viewed chest x-rays at least once or twice per shift for intubated patients (for nasogastric feeding): at
the beginning of each shift and when new chest x-ray images became available. Few nurses perceived that they relied on doctors to access and review images. They reported that because PACS is available at the bedside, they are able to access images when required with convenience, without leaving the patient. Nurses in ICU 2 also reported viewing images at the start of their shifts, especially for intubated patients who routinely receive a chest x-ray early each morning, and later in the day if required. “…virtually in the morning in the first hour or so to look at your blood results, look at what the chest xray looks like but then any other further tests that your patient requires through the day then we have to go back…”(CNE/CNS, ICU 2)
The recent replacement of hard copy films with PACS at this site allowed nurses to compare their practices. They reported spending less time searching for x-rays, and that the turnaround time for the availability of images for viewing had decreased. In ICU 3, where PACS was yet to be implemented, nurses who recounted viewing images independently, conveyed that they did so infrequently. Instead, they mainly viewed images at the multi-disciplinary handover round. Many predicted that when PACS is introduced, with access only from the central workstation computers, that it will not be as useful as it would be at bedside computers. 3.2. Collaborative Image Viewing Practices In ICU 2, multidisciplinary ward rounds were carried out each day, with an effort made for each nurse caring for a patient to be present at the bedside while their patient was examined by doctors. If doctors reviewed images during the round they did so at the central workstation where PACS was available, while most nurses stayed with their patient. One nurse commented that if she was able to, he/she viewed the x-ray with the doctors requiring him/her to move away from the patient bedside. In ICU 3, nurses viewed images each morning at the multi-disciplinary “handover round” at the multi-viewer lightbox which they attend while their patient is discussed. Many nurses reported mainly viewing images at these morning handover rounds: “…I would obviously look at them each morning on the morning round with everybody just to get a general understanding but I wouldn't necessarily go back and have a look at it later unless somebody has asked me to go back and have a look.” (NUM, ICU 3)
The introduction of PACS in ICU 3 may change the handover work practices, and nurses may no longer have the opportunity to view the images either in collaboration with doctors or alone: “It may change our handover process in the morning…we, you know, we’ll sit and we would all have a chat about the patient and what’s going on, ask questions about where all the lines are so I don’t know what’s gonna happen…and whether it will change that.” (CNC,ICU 3)
Conversely, in ICU 1, nurses had few opportunities to view images in collaboration with doctors as there were no multi-disciplinary meetings or handovers, and doctors rarely viewed images at the bedside where nurses were usually located. 3.3. Technical Aspects of PACS Some of the nurses at both PACS sites (ICU 1 and 2) perceived that they were not using all the available functions of PACS due to lack of on-going training. There was also some dislike expressed regarding accessing images from small screens compared to viewing large films, the time required to manipulate images, and because the system
is sometimes down, leaving images unavailable. Some nurses reported that they liked being able to manipulate images in PACS, to "alter the light and all the different brightness and just have our own control over how the film can be reviewed" (RN, ICU 2). Yet others recounted that having to manipulate the images was a problem, as they do not possess the expertise to do this, and "we're not really trained to do that..." (RN, ICU 2). Some perceived that comparing multiple images was more difficult with PACS than with hard copies: "The advantage with hard films was you could just compare one with the other a lot quicker. Sometimes I find PACS, it's a bit hard to get a whole series of films and compare yesterday's and before to today's x-rays…" (RN, ICU 2)
Others perceived that comparing images was easier with PACS, because at least they could rely on the films being in the system.

3.4. Work Practices and Patient Safety

Nurses in ICU 1 believed that by looking at x-rays themselves, rather than relying solely on the doctors, they could assess the x-ray and raise an alarm if they saw an abnormality. "… well I think having the PACS that handy it just allows you for a double check quickly… you know your doctors are always checking but for you to check as well it just feels better you know just being able to look at it yourself." (RN, ICU 1) "… usually if you got told by the doctor the x-ray said this or that, once they've had a look at it in the morning you sort of believed that by word but now you've got the access here, so you can sort of go in and it's a new task, you sort of don't have to go with what they say now, yeah. And you can actually question a few things and, yeah, so I find that that's quite good." (RN, ICU 1)
A number of nurses working in ICU 2 believed that the introduction of PACS had contributed to improved infection control due to films "not actually being handled at the bedside" (NUM, ICU 2). However, some nurses noted that the hard copies delivered to the patient's bedside each morning had acted as a visual prompt, which they no longer had, to remind them to view their patients' images. Prior to the introduction of PACS in ICU 2, films were often misplaced around the unit or taken elsewhere in the hospital, and locating old films was cumbersome. Many nurses in ICU 2 commented that they perceived access to x-rays had improved with the introduction of PACS simply because they no longer had to search for films. They thought the loss or misplacement of hard films impacted negatively on patient safety, and that the introduction of PACS had eliminated this: "…they don't get lost and it's a lot easier, so that saves time instead of searching you know, to and fro. You know if you open up your PACS … you're going to have your chest x-ray there no matter what, …it's not going to be erased, it's not going to be missing." (RN, ICU 2)
In ICU 3, where PACS had not yet been implemented, nurses anticipated that there would be less need to "chase up films", with "treatment initiated quicker" (CNC, ICU 3).
4. Discussion

By using a sociotechnical approach [7,8] for the design, data collection and analysis, our study was able to examine the technical features of PACS in the context of different settings and its use by ICU nurses in a variety of roles. Our study found that nurses working in ICUs with PACS were able to view x-rays separately from doctors more frequently, to check the position of tubes before proceeding with nasogastric feeding, potentially contributing to the safety of their patients. Other types
of clinical information technology have been found to allow nurses to act more autonomously and take on extended roles in patient management [10]. The improved access to images for nurses which PACS provides in the settings we examined allows them to act autonomously and raise the alarm when they detect abnormalities. In the ICU without PACS (ICU 3), and to a lesser extent in the ICU with PACS at the central workstation (ICU 2), nurses had the opportunity to view images alongside doctors and were able to participate in discussions with them. In the ICU with PACS at the bedside, nurses lacked those opportunities, but they did access images autonomously. Many of the ways of integrating PACS into nursing work practices [3] appear to be coming to fruition in ICU settings, with better access to images and improved delivery of education. Further work should examine the impact of these changes on doctor-nurse communication and patient care. The context in which work is carried out in each of the ICUs appears to influence the ways in which the introduction of PACS changes work practices.
5. Conclusion

The introduction of PACS to ICU settings promotes changes in nursing work practices by providing nurses with the ability to act more autonomously, with the potential to enhance patient care.

Acknowledgements. The study was supported by an Australian Research Council Linkage grant in partnership with Sydney South West Area Health Service (LP0989144). The authors thank the participating nurses from the ICUs, and Anne Marks for her work in collecting some of the data.
References
[1] Huang H (2003) Enterprise PACS and image distribution. Computerized Medical Imaging and Graphics 27:241-253. doi:10.1016/s0895-6111(02)00078-2
[2] Revell MA, Pugh M, Smith TL, McInnis LA (2010) Radiographic Studies in the Critical Care Environment. Critical Care Nursing Clinics of North America 22 (1):41-50. doi:10.1016/j.ccell.2009.10.013
[3] Hood MN, Scott H (2006) Introduction to Picture Archive and Communication Systems. Journal of Radiology Nursing 25 (3):69-74. doi:10.1016/j.jradnu.2006.06.003
[4] Cox B, Dawe N (2002) Evaluation of the impact of a PACS system on an intensive care unit. Journal of Management in Medicine 16:199-205. doi:10.1108/02689230210434934
[5] Pilling JR (2003) Picture archiving and communication systems: The users' view. British Journal of Radiology 76 (908):519-524
[6] Yu P, Hilton P (2005) Work practice changes caused by the introduction of a picture archiving and communication system. Journal of Telemedicine & Telecare 11 Suppl 2:S104-107
[7] Berg M (1999) Patient care information systems and health care work: a sociotechnical approach. International Journal of Medical Informatics 55 (2):87-101
[8] Westbrook J, Braithwaite J, Georgiou A, Ampt A, Creswick N, Coiera E, Iedema R (2007) Multimethod evaluation of information and communication technologies in health in the context of wicked problems and socio-technical theory. Journal of the American Medical Informatics Association 14 (6):746-755
[9] Berg M (1999) Patient care information systems and health care work: a sociotechnical approach. International Journal of Medical Informatics 55 (2):87-101
[10] Ash JS, Sittig DF, Campbell E, Guappone K, Dykstra RH (2006) An unintended consequence of CPOE implementation: shifts in power, control, and autonomy. Paper presented at the AMIA Annual Symposium, Washington DC.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-407
Evaluation of Telephone Triage and Advice Services: a Systematic Review on Methods, Metrics and Results

Sara CARRASQUEIRO a,1, Mónica OLIVEIRA b, Pedro ENCARNAÇÃO a
a Catholic University of Portugal, Faculty of Engineering, Lisbon, Portugal
b Instituto Superior Técnico, Technical University of Lisbon, Lisbon, Portugal
Abstract. Telephone triage and advice services (TTAS) have been increasingly used to assess patients' symptoms, provide information and refer patients to appropriate levels of care (attempting to pursue efficiency and quality-of-care gains while ensuring safety). However, previous reviews have pointed out the need to evaluate TTAS adequately. AIMS: To review TTAS evaluation studies, compile the methodologies and metrics used and compare results. METHODS: Systematic search of the PubMed database; data collection and categorization by TTAS features and context, type of evaluation, methods, metrics and results; critical assessment of studies; discussion of research needs. RESULTS: 395 articles were screened, 55 of which were included in the analysis. In conclusion, several aspects of the impact of TTAS on healthcare systems remain unclear, either due to a lack of research (e.g. on long-term clinical outcomes, clinical pathways, safety, enhanced access) or because of large disparities in existing studies on the accuracy of advice, patient compliance, system use, satisfaction and economic evaluation. Further research on TTAS impact is required, comprising multiple perspectives and a broad range of metrics. Keywords. Teletriage, e-health, health technology assessment, health services research, economic evaluation, systematic review.
1. Introduction

In recent years, telephone triage and advice services (TTAS) have been introduced to improve the delivery of healthcare services. TTAS are e-health services that combine the use of call centre technology with formal or informal clinical decision systems to evaluate patients' health condition and advise them or their caregivers to act accordingly. The major objectives pursued by TTAS are to provide education to patients, reducing the fear caused by unknown conditions and empowering them to self-care, and to direct patients to appropriate levels of care, increasing the efficiency of healthcare systems and promoting safety and access to care. Although several studies on the impact of TTAS have been conducted, systematic reviews have found flaws in the available literature and indicated the need for further study of the impact of teletriage on healthcare systems' use, safety, cost and on patient satisfaction [1,2]. This study provides a systematic review of evidence about TTAS' impact on healthcare systems
1 Corresponding Author: Sara Carrasqueiro, Catholic University of Portugal, Faculty of Engineering, Estrada Octávio Pato, 2635-631 Rio de Mouro, Portugal; E-mail: [email protected].
and about the methods and metrics used in these studies2, so as to analyze the quality and results of previous studies, as well as to define future research needs.
2. Methods

Papers were selected from the PubMed database3 using the keywords listed in Table 1, line 1, which encompass different terminologies associated with TTAS. Other terms such as 'telephone counseling', 'counseling call centre', 'counseling line', 'consultation call centre', 'helpline' and 'hotline' were excluded from the search because they are mostly associated with advice on specific medical problems or with follow-up or self-support services. Results were successively filtered to: (a) retrieve only evaluation studies (Table 1, line 2); (b) retrieve papers including evaluation from the viewpoint of the healthcare system (line 4); and (c) retrieve papers published from 1994 to the present (line 6). The search was run on 18 October 2010. All articles retrieved were screened by the first author using title and abstract information, and those outside the scope of this study were excluded. Ambiguous cases were discussed with the remaining authors to reach a consensus.

Table 1. PubMed database search strategy
1. (teletriage OR telephone triage OR telephone consultation OR NHS Direct OR telephone advice OR tele-advice OR health call centre OR (nurs* AND call centre) OR triage call centre OR (consul* AND call centre) OR (after-hours AND call centre) OR triage line OR advice line OR (telephone-based AND triage))[Title/Abstract]
2. (impact OR assess* OR effect* OR evaluat* OR econom*)[All Fields]
3. #1 AND #2
4. (hospital OR visit* OR pathway* OR emergency* OR referral OR utilization)[All Fields]
5. #3 AND #4
6. ("1995"[PDAT] : "3000"[PDAT]) AND ("0"[PDAT] : "3000"[PDAT])
7. #5 AND #6
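For readers who wish to re-run or update a search of this kind programmatically rather than through the PubMed web interface, a minimal sketch follows. It uses Python with Biopython's Entrez module, which is an assumption for illustration (the review itself was conducted via the standard PubMed interface); the contact e-mail, the retmax value and the slightly abbreviated query string are likewise illustrative.

import time
from Bio import Entrez

# NCBI asks for a contact address with every E-utilities request (placeholder value).
Entrez.email = "[email protected]"

# Condensed approximation of Table 1: terms from lines 1, 2 and 4, plus the date filter.
ttas_terms = ('teletriage OR "telephone triage" OR "telephone consultation" OR '
              '"NHS Direct" OR "telephone advice" OR tele-advice OR '
              '"health call centre" OR (nurs* AND "call centre") OR '
              '"triage call centre" OR (telephone-based AND triage)')
query = (f"({ttas_terms})"
         " AND (impact OR assess* OR effect* OR evaluat* OR econom*)"
         " AND (hospital OR visit* OR pathway* OR emergency* OR referral OR utilization)"
         ' AND ("1995"[PDAT] : "3000"[PDAT])')

# Run the search and report how many records match.
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records found")

# Fetch a few titles/abstracts for manual title-and-abstract screening.
for pmid in record["IdList"][:10]:
    fetch = Entrez.efetch(db="pubmed", id=pmid, rettype="abstract", retmode="text")
    print(fetch.read()[:300])
    fetch.close()
    time.sleep(0.4)  # stay well under NCBI's request-rate limits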
Each study was classified according to: Context and Features of the TTAS, Objective, Perspective of analysis, Type of economic evaluation, Metrics, Design and Results. The type of economic evaluation category applied the definitions suggested by Drummond et al. [3]. Study design was compiled using the terms and definitions of the INAHTA Health Technology Assessment (HTA) [4] and The Cochrane Collaboration [5] glossaries. Metrics were grouped by: A. Accuracy of advice; B. Patient compliance to advice; C. Output: C1. Access to care; C2. System use; C3. Clinical outcomes; C4. Safety; C5. Satisfaction; C6. Economics. One should note that a meta-analysis was considered inappropriate because of the large heterogeneity in the methods, metrics and contexts of TTAS studies. Critical assessment of the evaluation studies used a modified version of the "check-list for assessing economic evaluation" proposed by Drummond et al. [3] that best fits partial economic evaluations, assessing: clarity of objectives, adequacy of alternatives, potential bias, completeness of costs and outcomes, data sources, allowance for uncertainty, and generalizability.
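To make the classification scheme concrete, the following sketch shows one possible way of structuring the data-extraction record for an included study (Python). The field names and the example values are illustrative assumptions made for this sketch, not data from the review itself.

from dataclasses import dataclass, field
from enum import Enum

class MetricGroup(Enum):
    A_ACCURACY = "A. Accuracy of advice"
    B_COMPLIANCE = "B. Patient compliance to advice"
    C1_ACCESS = "C1. Access to care"
    C2_SYSTEM_USE = "C2. System use"
    C3_CLINICAL_OUTCOMES = "C3. Clinical outcomes"
    C4_SAFETY = "C4. Safety"
    C5_SATISFACTION = "C5. Satisfaction"
    C6_ECONOMICS = "C6. Economics"

@dataclass
class StudyRecord:
    """One row of the data-extraction sheet for an included TTAS evaluation study."""
    context_and_features: str      # e.g. stand-alone vs embedded, 24-hour vs out-of-hours
    objective: str
    perspective: str               # system, provider, patient or professional
    economic_evaluation_type: str  # classified per Drummond et al. [3]
    design: str                    # terms from the HTA and Cochrane glossaries [4, 5]
    metric_groups: list = field(default_factory=list)   # list of MetricGroup members
    results_summary: str = ""

# Fictitious example entry, used only to show the intended shape of the record:
example = StudyRecord(
    context_and_features="Centralized 24-hour nurse-led TTAS",
    objective="Assess impact on emergency department attendance",
    perspective="Healthcare system",
    economic_evaluation_type="Effectiveness study (comparison of consequences)",
    design="Retrospective observational, before-and-after",
    metric_groups=[MetricGroup.C2_SYSTEM_USE, MetricGroup.C6_ECONOMICS],
)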
2 Both teletriage and tele-advice by health professionals in their routine work with their patients, and teletriage services restricted to one specific disease or telephone advice services for self-support (e.g. tobacco cessation helplines), are outside the scope of this study.
3 http://www.ncbi.nlm.nih.gov/pubmed
3. Results

Figure 1 shows search and screening results. 55 papers were included in our review, of which 50 are original studies. A reference list of these papers is accessible from the website http://echo.fe.ucp.pt/~189903001/index_files/ttas1.html.
Figure 1. Flowchart of selection process and results
Context and Features. 24 of the original studies concern stand-alone centralized TTAS, 22 concern TTAS embedded in healthcare delivery units, and the remaining 4 compare different organization models. Most studies report services provided in the UK and the USA, some in Australia and a few in other countries (Canada, New Zealand, the Netherlands, Denmark, Switzerland, France and Japan). 20 studies evaluate 24-hour TTAS services, 18 evaluate TTAS for the management of out-of-hours care and the remaining 6 for in-hours care. Most studies relate to TTAS provided to populations of all ages, although 20% relate to pediatric TTAS. Most studies address TTAS provided by nurses supported by computerized systems with embedded protocols and algorithms. Regarding the maturity of the service, 20 studies evaluate services established for over three years, 18 evaluate services with three or fewer years of operation and 18 evaluate pilot experiences.
Objective and perspective of analysis. Almost all studies clearly state their objectives, aiming to assess the impact of TTAS on identified issues or metrics. Many studies do not explicitly state the perspective of analysis (although it can be inferred): the perspective of the system is adopted in most studies, that of the provider in some studies, and that of patients or professionals in a few studies.
Type of economic evaluation. 16 of the 50 original studies do not perform an analysis of alternatives (15 are consequence description studies and one is a cost-outcome description). 25 studies are efficacy or effectiveness studies (comparing consequences of alternatives). Only 9 studies assess both costs and consequences of alternatives, although some do not present an overall index or ratio of costs to consequences. Many of the studies comparing alternatives do not use an independent concurrent control but rather a "do nothing" alternative obtained either from "patient intention if the TTAS did not exist" or from "before" data in pre-post designs. Others use a control defined by patients using healthcare providers who had not previously contacted the TTAS, or who live in areas where TTAS is not available. Only 4 studies randomize patients to receive care through TTAS or not.
Study design. 30 studies are retrospective and 20 prospective. Almost all studies include quantitative assessments (only 2 are qualitative studies); 18 are observational studies, 31 experimental and 1 a decision-analysis study. 33 studies do not use an independent control group. Concerning sampling, 14 studies use total/population data, 14 use a randomized sample and 15 use a convenience (non-random) sample.
Metrics and results. Table 2 presents a summary of the main metrics, frequently used instruments for data collection and key findings for each type of metric.

Table 2. Summary of findings for the most used metrics, data collection strategies and results for each type of metric (Tm)

A. Accuracy of advice
Metrics: adequacy of the advised level of care.
Data collection strategies: audits of real or simulated calls; assessment of the medical record when patients present to providers (with or without control).
Analysis of results: unable to demonstrate high rates of advice appropriateness, or gains in adequacy of service use, when compared with controls.

B. Patient compliance to advice
Metrics: patient compliance in presenting to the advised level of care.
Data collection strategies: self-reported through surveys; determined through providers' databases.
Analysis of results: varies according to the recommendation and is affected by other factors (intention, complaint, age, income); is higher when measured from self-reported data than when matched against providers' databases, and is affected by the time window used in the metric definition.

C1. Access to care
Metrics: enhanced access to care.
Data collection strategies: self-reported through surveys; analyzed from operations data.
Analysis of results: not always improved, depending on the indicators considered and the system context; reports of expedited access to hospital for patients with serious symptoms.

C2. System use
Metrics: changes in rates or trends of services' use; changes in professionals' workload.
Data collection strategies: determined from the difference between self-reported intention and action after TTAS (self-reported or checked against providers' data); determined from trend analysis of services' use, with or without control (before-and-after); randomized controlled trials (few).
Analysis of results: TTAS usually promptly reduces medical workload, but it remains unclear whether it only delays it; evidence on the impact on primary care or emergency department use is diverse; influence factors such as TTAS use rate, geographic location (urban vs. rural) and TTAS organization (centralized or embedded) are relevant.

C3. Clinical outcomes
Metrics: clinical outcomes after TTAS.
Data collection strategies: self-reported through patient surveys.
Analysis of results: no studies on long-term clinical outcomes; some cases resolve with TTAS, others improve, others require additional care.

C4. Safety
Metrics: adverse events (deaths, emergency department attendances, admissions); delayed care.
Data collection strategies: patient surveys; medical record review after service use.
Analysis of results: safety is a concern for both patients and professionals; few adverse events involving death reported; rates of unadvised significant care between 4% and 10%.

C5. Satisfaction
Metrics: patient satisfaction on Likert scales.
Data collection strategies: self-reported through surveys.
Analysis of results: most studies report high levels of satisfaction (a non-controlled measure); there are reports of low satisfaction with TTAS when it constitutes a barrier to traditional care (e.g. home visits).

C6. Economics
Metrics: savings from avoided services' use; TTAS costs.
Data collection strategies: analyses derived from system-use impact studies.
Analysis of results: most studies suggest the existence of net benefits from TTAS, but others conclude that TTAS does not reduce overall costs; some studies do not account for follow-up costs, and those that do use different time windows; some studies use non-robust data on avoided service use; no study evaluated all relevant benefits and costs from all relevant perspectives.
4. Discussion

In line with previous studies [1,2], the results of our review indicate that many aspects of the impact of TTAS on healthcare systems remain unclear, and further research is needed. Several studies have analyzed the accuracy of advice and the impact on services' use, but the dispersion of results suggests that it is important to further study TTAS features and context as determinants of success, and to overcome inconsistencies in the definition of evaluation metrics and in the choice of evaluation methods, which greatly affect the generalization of results. Most studies report experiences of TTAS in the UK, the USA and, to a lesser extent, Australia; it remains unknown whether TTAS has been adopted and/or evaluated in other healthcare systems. Impact on long-term clinical outcomes and safety are areas where few results were found; most studies tend to focus on the type of advice given to patients. More research is needed to consider impacts for both patients and professionals. Concerning economic evaluations of TTAS, no study considered a broad range of impacts (e.g. follow-up costs/savings, costs/savings from anticipated/delayed care or changes in adverse event rates, the value of recommendations for people, patient clinical pathways), nor all relevant perspectives (e.g. the patients' perspective was rarely studied). Future evaluation studies of TTAS should attempt to address multiple perspectives and impacts, as well as consider the deployment of multiple organizational strategies within TTAS. Methodologies combining multiple designs and data sources, or using decision-analytic modeling, could be explored to achieve these goals. The available evidence suggests that TTAS might be reasonably safe, although advice accuracy rates are not high. It remains unclear whether TTAS promotes access to care, adequacy of services' use, or system efficiency, and which organization models enhance the potential gains of TTAS.
5. Conclusions

Further research on TTAS impact is required, comprising multiple perspectives, a broad range of metrics and the complete care process (from the initial call to problem resolution), including clinical pathways and clinical outcomes.
References
[1] Bunn F, Byrne G, Kendall S. The effects of telephone consultation and triage on healthcare use and patient satisfaction: a systematic review, Br J Gen Pract, 521 (2005), 956-61.
[2] Bunn F, Byrne G, Kendall S. Telephone consultation and triage: effects on health care use and patient satisfaction, Cochrane Database Syst Rev, 4 (2004), CD004180.
[3] Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL. Methods for the Economic Evaluation of Health Care Programmes, Oxford University Press, New York, 2005.
[4] INAHTA – The International Network of Agencies for Health Technology Assessment, Health Technology Assessment (HTA) Glossary, first edition, (2006).
[5] Glossary of Terms in The Cochrane Collaboration, version 4.2.5, The Cochrane Collaboration, 2005.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-412
Human Factors Based Recommendations for the Design of Medication Related Clinical Decision Support Systems (CDSS)

Sylvia PELAYO a,1, Romaric MARCILLY a, Stéphanie BERNONVILLE a, Nicolas LEROY a, Marie-Catherine BEUSCART-ZEPHIR a
a INSERM CIC-IT, Lille; Univ Lille Nord de France; CHU Lille; UDSL EA 2694; F-59000 Lille, France
Abstract. This study is part of a research project aiming to develop advanced functions of medication-related CDSS to support the monitoring of patients' therapeutic treatments, based mainly on the corresponding lab values. We adopted a user-centred approach to the design of these advanced CDSS functions. We collected existing recommendations from the literature and complemented previous Human Factors (HF) field studies and analyses with focused observations and modeling. We present the resulting HF-based recommendations for the design of such advanced medication CDSS and focus more specifically on two innovative high-level recommendations complementing those already existing in the literature. For illustration purposes, an example of the operationalization of one of the recommendations is presented. Keywords. Adverse drug event, Clinical Decision Support Systems, Software design, Monitoring and clinical context, Human factors
1. Introduction

Medication-related CDSS have been found to be beneficial in improving the quality of clinicians' prescriptions, reducing medication errors and ultimately preventing Adverse Drug Events (ADE) [1]. These systems support clinicians' therapeutic decisions by checking orders in real time against a medication knowledge base and providing alerts to prescribers. In spite of their known positive impact, they remain difficult to implement [2]. These difficulties are partly due to compatibility problems between users' cognitive characteristics and organisation of work on the one hand, and the model implemented in a given application on the other hand. Thus, alerts do not respect the collective aspects of healthcare work situations [3]. The medication use process involves different professionals with cumulative roles: physicians monitor the treatment of the patient, while pharmacists and nurses are in charge of controlling and executing the therapeutic orders. But alerts are very often, if not always, designed exclusively for the physicians. Additionally, alerts are too often disruptive of the cognitive processes inherent to decision making, due to wrong timing, wrong display mode and wrong or weak content of the information delivered. For example, it is not wise
1 Corresponding author: Sylvia Pelayo, EVALAB – University Hospital of Lille, CHRU de Lille, 2 Avenue Oscar Lambret, 59037 LILLE Cedex; E-mail: [email protected].
to suggest to the physician an action s/he is just about to carry out, or to alert him/her to a potentially dangerous situation for which s/he has just taken action. Medication-related CDSS have been in use for long enough to allow the publication of a number of review papers which have tried to (i) identify the key features which should ensure their success and (ii) provide recommendations for the design of applications [4]. The analysis of these key papers allows an overview to be drawn of the current state of the art for designing acceptable, usable and efficient CDSS. Authors agree on the current limitations of existing CDSS, and there is a consensus on a number of recommendations for their design: provide decision support automatically as part of the clinician workflow and deliver decision support at the time and location of decision making [5]; provide justifications for the suggestion and require the provider to document the reason for not following it [6]; integrate the CDSS with the Electronic Healthcare Record (EHR) and automatically retrieve data from it [7]. The design of such CDSS requires a thorough understanding and appropriation of the work environment and of its key elements. A lot of studies have already provided valuable insights into the intangible characteristics of the activities within the medication use process. The results emphasize the fact that the therapeutic decision-making process is a dynamic process [8]: the patient's condition evolves depending on the healthcare professionals' actions but also spontaneously by itself. At each encounter with the patient, clinicians then have to update their knowledge about the patient's status and his/her evolution, especially as regards new elements in the situation that must be known, e.g. new lab results, unexpected clinical evolution of the patient or actions already undertaken for the patient. Moreover, the medication use process may be characterized as a complex distributed work situation: information is distributed across media, such as the EHR or the CDSS, but also across the minds of the members of the clinical team. This distribution of the work processes presupposes team situation awareness, i.e. a shared understanding of the situation in which each professional has a complete vision of the situation, allowing decisions to be adjusted according to the information held by the others. The present study is part of the European project entitled "Patient Safety through Intelligent Procedures in medication" (PSIP). One of the major goals of the project is to design CDSS functions and to integrate them into different EHR/CPOE (Computerized Physician Order Entry) systems, so that the resulting CDSS corresponds to the users' needs and fits clinical workflows and cognitive processes. We have capitalized on existing knowledge of medication use process work situations and of the current limitations of existing CDSS to provide HF-based recommendations for the design of medication-related CDSS, along with clues for the operationalization of the existing recommendations. We complemented previous field studies and analyses with focused observations and modelling of the monitoring process of patients' therapeutic treatments, based mainly on the corresponding lab values. This paper presents two innovative recommendations complementing those already existing in the literature. An example of operationalization is given to illustrate the adopted approach.
2. Methods

2.1. Study Site

The study took place in the Hospital Center of Denain in northern France. The hospital
has a Patient Care Information System (PCIS) including an Electronic Health Record (EHR) equipped with a CPOE which, in this version, has very limited CDSS functions (e.g. alerts in case of duplicate orders). The PCIS is interfaced with a pharmacy system, which allows the pharmacists to check the medication orders and send alerts to physicians. The analyses were carried out in two medicine departments: the "cardiology" department and the "internal medicine and infectious diseases" department.

2.2. HFE Methods

Over a period of one month, four Human Factors experts observed all tasks related to the medication process carried out by 4 physicians, 6 nurses, 2 pharmacists and 2 assistant pharmacists, with a special focus on all actions related to lab value monitoring. Observation time amounted to 53 hours and concerned 101 patients. It was complemented with debriefings and semi-structured interviews. The list of recommendations obtained during the analysis was submitted for feasibility assessment to all PSIP project partners, including the two CPOE vendors in charge of integrating the PSIP CDSS into their CPOE, the company in charge of ensuring the connectivity of the system, the designers of the knowledge-based system and the designers of the PSIP standalone CDSS.
3. Results

The two critical recommendations for the design of medication-related CDSS summarize what a CDSS should be, i.e. a team player and a partner to clinicians.

3.1. Make the System a Team Player

The system should be a team player, able to support the elaboration and maintenance of team situation awareness for the healthcare professionals in charge of the patient. That means the system should (i) provide an indication for all the professionals of the availability of a piece of information (the designers may choose the most appropriate way of indicating the information in the interface), (ii) incorporate functions to support team awareness of the alert management and its evolution over time (e.g. visible access to how the alert was handled and to the reasons for alert override or rule deactivation, if any has been documented), (iii) have the same display of basic CDS information for the case at hand for all professionals, and (iv) give access upon request to extended information (justification of the rule, attached scientific documentation, etc.) that should be structured depending on the user profile.

3.2. Make the System a Clinicians' Partner

The clinicians have a critical role since they are the decision makers and the ones handling the alerts (acknowledgement, deactivation and so on). But the existing CDSS are not yet able to catch elements of the work situation in order to provide relevant information to clinicians. The CDSS should act as a partner by (i) adapting its behaviour according to a subset of relevant actions taken by clinicians, (ii) adapting its behaviour to the evolution of the outcome at risk over time (i.e. taking into account the evolution of the
targeted lab values to filter the rules and adapt its severity) and (iii) incorporating functions supporting the dialogue between the CDSS and the clinician (e.g. acknowledgment/de-activation of the CDSS alert). To illustrate these recommendations, we provide a Unified Modelling Language (UML) model describing the classification process to be performed by a CDSS to catch the monitoring and clinical context of patients identified by the system as being at risk of an ADE (Figure 1). We identified eight typical situations characterizing the current status of drug monitoring. They result from the combination of the status of the lab test orders on the one hand, and the validity and normality of the available lab values on the other hand. For each typical situation we can identify whether the monitoring procedure is appropriate or not, and whether the patient's clinical status, as assessed by the lab value, is alarming or not.
Figure 1. UML model supporting the classification of the situations.
Making the system able to catch the monitoring and clinical contexts opens interesting opportunities for the design of the CDS information content and display mode. For each situation the system displays the rule leading to the identification of the case as being at risk of an ADE. But this basic information may be extended or particularized depending on the context. For instance, in contexts 2, 4 and 7, which correspond to situations that are not properly monitored, the CDS could suggest that the clinician order the required lab test and possibly propose a shortcut to the lab test ordering page. By contrast, in context 6, in which a new (recent and valid) lab value has come in abnormal, the system could alert the physician to the increasing negative side effect of the drug and invite him/her to reassess the cost-benefit ratio of the incriminated drug(s).
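As a rough illustration of such a context-catching step (a sketch only, not the PSIP implementation, whose actual rules are those of the UML model in Figure 1 and of the project's knowledge base), the following code derives one of eight monitoring contexts from the combination described above: whether a lab test is ordered, whether the available value is still valid (recent), and whether it is normal. The 48-hour validity window, the context numbering and the example values are assumptions made for this sketch.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class LabMonitoringState:
    test_ordered: bool                   # is a follow-up lab test currently ordered?
    last_value: Optional[float]          # latest available result, if any
    last_value_time: Optional[datetime]  # when that result was obtained
    normal_low: float                    # lower bound of the normal range
    normal_high: float                   # upper bound of the normal range

def classify_context(state: LabMonitoringState,
                     validity_window: timedelta = timedelta(hours=48)) -> int:
    """Combine order status, value validity and value normality into contexts 1 to 8."""
    ordered = state.test_ordered
    valid = (state.last_value_time is not None
             and datetime.now() - state.last_value_time <= validity_window)
    normal = (state.last_value is not None
              and state.normal_low <= state.last_value <= state.normal_high)
    # Encode the three binary dimensions as bits, giving the eight possible contexts.
    return 1 + (int(ordered) << 2) + (int(valid) << 1) + int(normal)

# Fictitious example: serum potassium under a potassium-sparing drug, no follow-up test ordered.
state = LabMonitoringState(test_ordered=False, last_value=5.9,
                           last_value_time=datetime.now() - timedelta(hours=6),
                           normal_low=3.5, normal_high=5.0)
print("Monitoring context:", classify_context(state))

Each context can then be mapped to a tailored CDS message of the kind sketched above, for example an invitation to order the missing lab test or to reassess the incriminated drug.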
4. Discussion and Conclusion

This paper has presented two high-level recommendations for the design of medication CDSS, so that the resulting system corresponds to the users' needs and fits clinical workflows and cognitive processes. Due to limited space it was not possible to present the entire set of recommendations. These recommendations look promising for improving the capacity of the system to catch the monitoring and clinical context, which in turn opens interesting design possibilities. However, it is important to assess the technical feasibility of such a set of recommendations. In the PSIP context, all partners have rated the feasibility of the proposed recommendations and unanimously confirmed that most of the recommendations are possible to implement. This attests to their technical feasibility and also to their relevance for the design of advanced medication CDS functions. These recommendations are quite innovative in the domain of CDSS design. For instance, many studies emphasize the inherently collaborative characteristics of the medication use process and the importance of supporting them efficiently, but to our knowledge none provides recommendations for the design of a team-player CDSS supporting shared awareness of potential ADEs and of the actions taken to prevent them. Similarly, it would represent significant progress for these systems to better catch the monitoring and clinical context.

Acknowledgement. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 216130 – the PSIP project.
References
[1] Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U. The Effect of Electronic Prescribing on Medication Errors and Adverse Drug Events: A Systematic Review, J Am Med Inform Assoc 15 (2008), 585-600.
[2] Ash JS, Berg M, Coiera E. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. J Am Med Inform Assoc 11 (2004), 104-12.
[3] Pelayo S, Beuscart-Zéphir MC. Organizational considerations for the implementation of a computerized physician order entry. Stud Health Technol Inform 157 (2010), 112-117.
[4] Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review, J Am Med Inform Assoc 14 (2007), 29-40.
[5] Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success, BMJ 330 (2005), 749:765.
[6] Mollon B, Chong JJ, Holbrook AM, Sung M, Thabane L, Foster G. Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials, BMC Med Inform Decis Mak 11 (2009), 9:11.
[7] Nies J, Colombet I, Degoulet P, Durieux P. Determinants of success for computerized clinical decision support systems integrated in CPOE systems: a systematic review, AMIA Annu Symp Proc 2006.
[8] Beuscart-Zéphir MC, Pelayo S, Bernonville S. "Example of a Human Factors Engineering approach to a medication administration work system: Potential impact on patient safety", Int J Med Inform (2009) Sep 7 [Epub ahead of print]. PMID: 19740700.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-417
Making a Web Based Ulcer Record Work by Aligning Architecture, Legislation and Users - a Formative Evaluation Study

Anne G. EKELAND a,b,1, Eva SKIPENES a, Beate NYHEIM a, Ellen K. CHRISTIANSEN a
a Norwegian Centre for Integrated Care and Telemedicine, Norway
b University of Tromsø, Department of Clinical Medicine, Norway
Abstract. The University Hospital of North Norway selected a web-based ulcer record used in Denmark, available from mobile phones. Data was stored in a common database and easily accessible. According to Norwegian legislation, only employees of the organization that owns an IT system can access the system, and use of mobile units requires strong security solutions. The system had to be changed. The paper addresses interactions in order to make the system legal, and assesses regulations that followed. By addressing conflicting scripts and the contingent nature of knowledge, we conducted a formative evaluation aiming at improving the object being studied. Participatory observation in a one year process, minutes from meetings and information from participants, constitute the data material. In the technological domain, one database was replaced by four. In the health care delivery domain, easy access was replaced by a more complicated log on procedure, and in the domain of law and security, a clarification of risk levels was obtained, thereby allowing for access by mobile phones with today’s authentication mechanisms. Flexibility concerning predefined scripts was important in all domains. Changes were made that improved the platform for further development of legitimate communication of patient data via mobile units. The study also shows the value of formative evaluations in innovations. Keywords. Web based ulcer record, access by mobile phone, collaborative health care delivery, law and security, formative evaluation
1. Introduction

In 2007, the Department of Dermatology (DoD) at the University Hospital of North Norway (UNN), in collaboration with the Norwegian Centre for Integrated Care and Telemedicine (NST), offered net-based guidance to health staff in the municipal health service as a pilot. Nurses from the home-care service improved their competence, felt more confident when providing treatment, and gained greater skills in making their own assessments. The patients experienced great confidence in the treatment [1]. After completion of the pilot, the DoD considered different solutions for electronic collaboration with improved usability. They selected a web-based ulcer record system, pleje.net, available from mobile phones and used in Denmark [2]. Data was stored in
1 Corresponding Author: Anne G. Ekeland.
one database and easily accessible. In Norway, only employees of the organization that owns the data can access the system according to law and security regulations, and mobile units require strict privacy protection and access regulations. As a consequence, the system had to be changed. The solution is now about to be brought into regular use, after a long process of adaptation in which Norwegian legal and safety regulations interacted with the technological options and the health care professionals' requirements for operability and quality. In this paper we will first describe essential features, or scripts, of the Danish ulcer record. We will then proceed to describe requirements from the collaborating health care professionals and from legal and security regulations. We proceed to describe some features of the interaction between the different actors and scripts. Finally, we present adjustments of the solution, as well as clarifications of legal and security issues and changes in attitudes and routines at the DoD and among the home care nurses, and we point to further action. The objective is to demonstrate that in efforts to make new innovations work in a public domain such as the Norwegian health services, different actors are involved and a number of challenges are made visible and addressed. In turn, these created a more informed and realistic platform for additional improvements.
2. Approach, Methods and Data

Formative evaluations have been recommended for process studies of complex interventions [3]. They may assess the ways new services and technologies influence, and are influenced by, small- and large-scale interrelated actions. Stakeholders, including patients and researchers, are considered partly objective and partly subjective in these processes. Formative evaluations strive to strengthen or improve the object being evaluated and help to model it by examining the delivery of the program or technology, the quality of its implementation, and organizational conditions, personnel, procedures, inputs, and so on. Formative assessments focus on competing discourses, conflicting scripts, and the socially contingent nature of knowledge. The objectives of the approach are to make different scripts and interests transparent and to articulate (the results of) negotiations [4-6]. Within this perspective, researchers work systematically to link experience-based knowledge with their theoretical base for reflection on current problems, in this case making a web-based ulcer record work. It involved a scrutiny of how the system, the health professionals' needs and the legal and safety regulations aligned; that is, how they shaped 'making the ulcer record work' and were mutually reshaped in the same processes. The empirical data are thus generated from a participatory study of/with the actors that were involved in the work to make the record work: competent users with experience from the pilot, the law and security team at NST and the system developers in Denmark collaborated in meetings and discussions, with the goal of arriving at solutions that were legally acceptable, practically useful and technologically feasible. All actors were important and dependent on each other. The researchers participated in meetings and discussions, planned and ad hoc, that took place in the period from February 2010 to February 2011. We also included minutes from these meetings and information from participants on additional meetings with authorities and other stakeholders.
The concept of script, as used here, denotes programs of action inscribed in a technical artifact, in this case an ulcer record [7]. For example, the legal script that the electronic ulcer record carries is a program of action responding to certain legal and security regulations.
3. Results and Discussion

3.1. The Scripts within the Technological Domain

Pleje.net is a web-based ulcer record system. The system can be accessed both from a computer connected to the Internet and from mobile phones. One benefit of the system is that all the relevant ulcer data are stored in one database. Health professionals who have the responsibility to provide health care to patients with chronic ulcers collaborate via the system. It is available to nurses and doctors in the local community and to nurses and doctors in the specialist health services, as well as to patients and their relatives. The system consists of a database, an application to communicate images and text between participants and a tool to analyze ulcers. The service includes advice between a specialist and the home health care nurses. The system is in use in Denmark, and between Roskilde Hospital, Copenhagen, and the Faroe Islands. The system simplifies the collection of data at the DoD, at the GP's office and in the patient's home. Everyone who cooperates in the ulcer treatment of the patient has access to the web-based ulcer record system. All that was needed in order to use the system from a computer was a password and username for access. From mobile phones no authentication was required other than registration of the phone number in the user's profile in the system. All users had to be registered.

3.2. The Scripts within the Health Care Delivery Domain

E-mail based communication with attached images made it possible to intervene immediately if the status of the ulcer changed. Individual consultation provided the opportunity to advise participants on the basis of their level of knowledge. For many ulcer patients, rapid intervention resulted in faster improvement of the condition, and it was assumed that the intervention prevented the need for hospital admissions. The nurses also saw areas for improvement. They wanted to store the ulcer images, and not only send them to the DoD. Thus, they could compare images and see how the ulcers changed over time. They also believed that the images made ulcer documentation less person-dependent, and that they could use the images for work-based training. They also wanted the patient's physician in the municipality to take part in the treatment in the future. The option to take and send images directly to the DoD was ideal. The Danish solution was thus strongly needed. The nurses found the log-on procedures and functionality of the system very useful, especially from the mobile phone, as they could use the camera and connect directly to the service from the phone.

3.3. The Script within the Legal and Security Domain

In Norway, only those who are employees of the organisation that owns an IT system or service in the health care sector are allowed to access the system or service. This
means that if a hospital provides a service or a system, only the employees of this hospital can legally access the system. General practitioners or home care nurses will not be given access. In addition, patient data was considered to be of the highest sensitivity level by the Health Directorate, which requires use of the highest level of security for access to health information via mobile units. This implies, among other things, two-factor authentication for access via external networks. Access to these data by mobile phones thus required the strictest security level. This means that the Danish solution did not comply with Norwegian legislation. The Danish solution has a common database for all actors taking part in the treatment, and it is accessible via mobile phones. It operates with a dispensation from legislation.

3.4. Interaction Between Domains and Resulting Adjustments

The Danish solution had to be adapted in order to comply with the Norwegian legal requirements. Each party had to have their own application and database for which they were responsible; thus a communication service had to be developed in order to share information between the different users' applications and databases. The Norwegian pleie.net, which is today the result of the process, has a common portal and login page (www.pleie.net) with general information about the service. Participants from the various service locations may also go directly to their own login page. There is one login page for staff in the specialist health services, one for general practitioners, one for home care nurses and one for patients. All databases are run in the same server park, and there is a common user database and patient database, with demographic information about the users and the patients, to which all the other databases and applications are linked. For the home care nurses and the DoD, the functionality of the ulcer service will not change. What has changed as a result of the legal requirements is the log-on procedures, both for the computer and for the mobile phones. For the computer, two-factor authentication is required, implying a username/password and a one-time password sent via SMS from the server to the phone number registered in the user profile. When the service is accessed from the mobile phone, both a username/password and a one-time link are used. This responds to the requirement of one additional security level. For the users this means one extra operation in order to use the system. It was more complicated than they expected from the Danish system, but it is still easier than the setup used in the pilot, where a digital camera was connected to a computer with Internet access. In the security domain, certain clarifications and adjustments were also obtained. This is a service in which patients have to give their consent to be included. More insights into the challenges around sharing of patient data were obtained. The Health Directorate's norms for information security were changed in July 2010, with stricter security requirements. The processes contributed to making the new norms topical and made it clear that different authorities interpreted the risk level for access to health information by mobile units somewhat differently, with consequences for security levels. The processes also implied an adjustment of the security level. The security team at NST interacted with the authorities, and a number of meetings were held.
Based on an overall assessment, they found that the patient information accessible in the ulcer record did not require the strongest level of security. This was because the collaborating partners only had access to ulcer data and not to the entire patient record; the latter would have implied stricter legal requirements.
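As an illustration of the kind of two-factor log-on described above, the sketch below outlines a server-side flow in which a one-time password is generated and sent by SMS once the username/password step has succeeded. It is written in Python; the SMS gateway call, the five-minute validity window and the function names are assumptions made for illustration, not a description of the pleie.net implementation.

import hmac
import secrets
import time

OTP_TTL_SECONDS = 300                 # assumed validity window for the one-time password
_pending_otps = {}                    # username -> (otp, issue_time)

def send_sms(phone_number, message):
    """Placeholder for an SMS gateway; a real deployment would call a provider API here."""
    print(f"SMS to {phone_number}: {message}")

def start_second_factor(username, registered_phone):
    """Called after username/password have been verified: issue and send a one-time code."""
    otp = f"{secrets.randbelow(10**6):06d}"      # six-digit code
    _pending_otps[username] = (otp, time.time())
    send_sms(registered_phone, f"Your one-time login code is {otp}")

def verify_second_factor(username, submitted_code):
    """Accept the code only once, and only within the validity window."""
    entry = _pending_otps.pop(username, None)
    if entry is None:
        return False
    otp, issued_at = entry
    if time.time() - issued_at > OTP_TTL_SECONDS:
        return False
    return hmac.compare_digest(otp, submitted_code)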
In these processes, not only was the architecture of the technological innovation adjusted, but the initial scripts presented by all domains were affected. In this case, the technologies, the security and legal regulations, and the attitudes and routines of professionals both influenced and were influenced, resulting in the production of a new assemblage. It is a problem that in Norway there are no technological solutions that comply with these requirements for access via mobile phones. The need now is to develop secure solutions for mobile communication, and there are proposals to refine the legislation somewhat.

3.5. Considering the Methodology

Following a formative approach, the attention to the domains that influenced the development, their scripts and their interactions turned out to be valuable for assessing and conceptualizing the characteristics and functionality of the resulting service. This approach can be recommended for assessments of services that are under development in real-life settings, in order to take part in the work to improve them. It is complementary to effect studies, which assess the effects of real-life use.
4. Conclusions and Further Action

Different domains claiming attention, each carrying its internal logic or script with which it expected the others to comply, created complexities that had to be addressed in order to make the ulcer record work. The scripts became subject to change as they were included in negotiations. Formative assessments, addressing the different scripts and negotiations, helped to display these tensions and contributed to solutions. Formative assessments can thus play a vital role in such processes, in that they address transparency and negotiations, and conceptualize change that has not been anticipated. The process has produced a clearer platform for the ongoing development of web-based electronic records and electronic communication by mobile phones between levels of care. At present, legal and security challenges are being addressed as a consequence of the processes around the ulcer record, and adjustments are expected. Actions are also being taken to develop secure solutions for mobile communication. We are also currently carrying out a formative evaluation study on the ways in which knowledge and actions are being integrated via use of the ulcer record. The experiences from the use of four databases will also be assessed.
References
[1] The web page URL: https://www.pleje.net/Info_1.asp.
[2] Nyheim B, Lotherington AT, Steen A. Nettbasert sårveiledning. Kunnskapsutvikling og bedre mestring av leggsårbehandling i hjemmetjenesten. Nordisk Tidsskrift for helseforskning. 2010;6(1).
[3] Ekeland AG, Bowes A, Flottorp S. Effectiveness of Telemedicine - a Systematic Review of Reviews. International Journal of Medical Informatics. 2010;79(11):736-71.
[4] Scriven M. The methodology of evaluation. In: Gredler ME, editor. Program Evaluation. New Jersey: Prentice Hall; 1967.
[5] Rip A, Schot J, Misa T. Constructive Technology Assessment: A new paradigm for managing technology in society. 1995.
[6] Oxford Dictionaries Online: Oxford University Press; 2010.
[7] Akrich M. The de-scription of technical objects. Shaping technology/building society. 1992:205-24.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-422
Assessing the Role of a Site Visit in Adopting Activity Driven Methods

Irmeli LUUKKONEN a,1, Kaija SARANTO b, Mikko KORPELA a
a School of Computing, Healthcare Information Systems Research and Development, b Department of Health and Social Management, University of Eastern Finland, Kuopio, Finland
Abstract. Healthcare activities rely heavily on socio-technical information systems, and such systems should be developed according to a socio-technical approach. The Activity Driven (AD) approach has been developed to contribute to the early phases of information system development in healthcare. Multi-professional and multi-disciplinary education in teams has been used to introduce the approach to prospective analysts, including "lay" healthcare professionals. 'Almost real life cases' have been emphasized as promoters of learning. This paper reports on a study of site visits as a crucial element for adopting socio-technical methods of analysis in healthcare. The paper presents feedback collected from an intensive course on health information systems development held in Mozambique. The results indicate the high importance of site visits, not only as a starting point for system analysis but also as a crucial promoter of learning socio-technical methods. Based on the results, needs for improvement are identified in the usability of the AD tools and in the practical arrangements of site visits. Keywords. health information system development, socio-technical approach, activity analysis, education, site visit
1. Introduction: How to Teach Socio-Technical Analysis in Healthcare?

Healthcare is highly information-intensive; i.e., healthcare activities rely heavily on information being transferred between patients and various care providers, collected, stored, processed and used. The purposeful use of information within activities can be seen as a socio-technical information system (IS) [1, 2], within which information technology (IT; manual or computer-based) is used as a means of work by individual actors or as a means of coordination and communication between actors [3]. To develop such socio-technical systems, the focus should be on the work activities as the basic unit of analysis, instead of on the IT artefacts embedded in the IS [4]. The Activity Driven (AD) approach to Information Systems Development (ISD) has been studied and developed at the University of Eastern Finland (University of Kuopio until 2009) since the early 1990s [5, 3], with the main focus on healthcare activities and healthcare information systems. It is a socio-technical and participatory approach based on Activity Theory [6], with the primary goal of providing methods that emphasize the intertwined development of work and IS. The approach encourages IS developers and "users" (e.g., healthcare providers) to study collaboratively how
1 Corresponding author: Irmeli Luukkonen, University of Eastern Finland, School of Computing, PL 1627, FI-70211 Kuopio, Finland; E-mail: [email protected].
different kinds of work activities are actually arranged and conducted, including what kind of information and technology the actors need within those activities. The approach comprises several interrelated parts, including the Activity Analysis and Design (ActAD) framework [5], the Activity Driven Information Systems Development Model (AD ISD) [3], and a methodology for depicting healthcare “landscapes” [7]. Some initial practical methods and tools for the various parts have been produced and tested by the researchers in cooperation with healthcare providers and IS developers in practical cases. Socio-technical analysis of work and information systems is not possible without analysts who have been trained in the respective methods. However, analysts need not be IT experts; experienced “lay” professionals adopt methods that fit their experience and needs [8]. Multi-professional and multi-disciplinary education in teams has proved particularly useful in health informatics [9, 10]. Pedagogically, ‘almost real life cases’ have been emphasized as promoters of learning [9]. This paper highlights site visits as a crucial element for adopting socio-technical methods of analysis in healthcare. The paper reports on a study in which four multicultural groups of students used a socio-technical approach that was mostly unfamiliar to them, in a previously unfamiliar context, for the rapid analysis and reporting of a healthcare service activity and its socio-technical information system.
2. Materials and Methods The experiment was implemented in Mozambique as part of an Intensive Course on Health Information Systems Development and Implementation (6 days) organized by two Finnish and three African universities. The participants (15 students and 11 lecturers) came from Finland and five African countries. The educational background of the students was: 7 Information Systems, 4 Health Sciences, 4 Computer Science/IT. The purpose of the course was to introduce the participants to a set of socio-technical theories and methods for IS needs analysis and implementation in a collaborative, hands-on manner. Pedagogically, the main idea was to use a real-life case for group work in which the theories and methods were applied by the students. The case site was Macia-Bilene Health Centre, a typical rural-area health facility where the information system in use was mostly paper-based. As preliminary materials, the students were provided with papers about the AD approach and landscape modelling, as well as three lectures on AD methods and the Mozambican healthcare service system. For the experiment, the students were divided into four groups (3-4 persons from different universities and countries) and assigned to explore one section each in the health centre, report on their findings and give feedback on the visit and the AD methods. The site visit was arranged on the second day of the course and its duration was three hours. The students reported their findings in four steps: (1) initial observations, (2) tentative and (3) final oral, visual and written reports on the case site and the research process, and (4) written feedback assessing 1) the AD approach and tools and 2) the site visit arrangements. In this paper we focus on step 4, the student feedback. The feedback was gathered with a paper-based questionnaire including 12 questions: 5 unstructured and 7 structured questions with a Likert scale from 1=poor to 5=excellent and an option to comment on each question freely. The questionnaire was given to the students after the site visit and collected on the last day of the course. All 15
students returned the questionnaire, and the average answering rate per question was 80%. The basics of the Mozambican health care system were known to two thirds of the students, while one third had no prior knowledge of it at all. The actual site was unfamiliar to all students. The majority (n=9) of the students had never heard of the AD methods. Other systems design and analysis methods were unfamiliar to 6 students. The anonymous research data was analyzed with quantitative and qualitative methods.
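As a hedged illustration of how summary figures of this kind can be derived, the sketch below computes per-question answering rates and average Likert scores such as those reported in Tables 1 and 2; the response values shown are invented, since the raw questionnaire answers are not published here.

```python
# Illustrative only: invented Likert responses (1 = poor ... 5 = excellent),
# with None marking an unanswered question, for the 15 returned questionnaires.
responses = {
    "Clarity of assignment": [3, 2, 4, None, 3, 2, 3, 4, 2, 3, None, 3, 3, 2, 3],
    "Importance of the site visit": [5, 4, 5, 4, 5, None, 4, 5, 4, None, 5, 4, None, 4, 5],
}

for question, answers in responses.items():
    given = [a for a in answers if a is not None]          # answered items only
    answering_rate = len(given) / len(answers)             # share of students who answered
    average_score = sum(given) / len(given)                # mean Likert score
    print(f"{question}: n={len(given)}, answering rate {answering_rate:.0%}, "
          f"average score {average_score:.1f}")
```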
3. Results The content analysis of the assessments on the AD approach and tools is summarized below in Table 1, and the average scores and assessments of the site visit are summarized in Table 2. When possible, the factors impacting positively are listed after the subheading Pros and those impacting negatively after Cons.
Table 1. Summary of the assessments on the AD approach and tools.
Issue: AD approach compared to traditional systems analysis methods (n=10). Comments: Overall: “field work driven”; observations ‘in vivo’ and ‘in situ’; incorporating stakeholders; “easy to understand” because of the “top down approach starting with a broad landscape” and “zooming into more specific from the big picture”; models are “near the reality” and “representative”, e.g. showing connections between actors, workflows and information entities.
Issue: Usability of AD tools: 3 tables and 3 diagram templates (aggregated average rating 4.2 out of 5). Comments: Most used tools: the tables and the landscape diagram. Pros: helps in identifying stakeholders (~whom to talk to) and important issues and the connections between them (~what to model); graphically rich representation; models for context mapping. Cons: drawing diagrams is time consuming, some diagrams tend to become too big, some symbols were not easy to understand.
Issue: New useful ideas (n=6). Comments: To be used in one's own work and research: field work driven research, planning systems around workflows instead of existing systems, and incorporating the different stakeholders. To improve the adaptability of the approach: a clear manual for using the approach and more exercises in practice are needed.
Issue: Possible future use of AD methods (n=10). Comments: In research (n=5); in practical ISD work (n=4). Reasons to use: to promote understanding of the problem domain / research area: identification of essential elements and making the connections between different elements.
Although the AD diagrams and their elements were considered easy to understand and representative, some difficulties were identified in two aspects of the use of the models (Table 1). First, “identifying corresponding elements in an unfamiliar field” was difficult. Second, some difficulties were encountered in “drawing diagrams”, particularly in “deciding what items to include in a diagram and what is the proper level of detail”. On the other hand, the tables were used to ‘identify’ things. In order to improve the adaptability of the tools, both formats should be provided hand in hand, preferably complemented by practical examples describing a very similar target domain. Despite the limitations, the students were able to produce a comprehensible holistic view of their target section of the case site, to use the AD tools successfully, and to acquire an idea of a socio-technical perspective on information systems. Although the groups' reports and presentations are not reported here, it should be mentioned that, taken together, they provide a valuable starting point for any development activities in the Macia-Bilene Health Centre.
Table 2. Summary of the assessments of the site visit (average scores on a scale from 1 to 5).
Clarity of assignment (n=14), average score 2.9. Pros: prior experience, local knowledge, teamwork; Cons: unclear goal, changes in plans, no preliminary task division, got the assignment and rubric too late.
Amount of information (n=15), average score 3.3. Assessing the assignment itself: impacting factors are pre-materials and visit timing in relation to lectures. Assessing the information gathered from the visit: impacting factors are visit length, time allocation and knowledge of informants, language barrier, available external information sources.
Timing of the visit in terms of background information (n=13), average score 2.9. Most preferred time would have been later in the course; the visit should be allocated to the working days of informants; a preliminary visit to the site before actual information gathering.
Length of the visit (n=15), average score 2.9. Mainly considered too short, due to changed and unsure plans, the language barrier (translations take time), and the wish to interview other sections of the health centre as well.
Quality of knowledge of the informants (n=15), average score 3.3. Mainly considered sufficient; factors impacting the quality: language barrier, informants' professional age and how long they have been in the task; need for documents as additional information sources; need for validation of the results with the informants.
Importance of the site visit in terms of adopting AD methods (n=12), average score 4.4. Pros: firsthand experience of the context of research, ‘an eye opener’, interaction with stakeholders and (information system) users in their natural context; Cons: research takes time of health providers, thus hindering their daily job with patients.
Overall comments (n=10). Positive (n=7): informative, and important to see the work in real context, because it gives a better research perspective and promotes learning. Negative (n=3): limited beforehand planning, changed plans, and too short time for the visit. Future/improvement (n=5): visit also other levels of health facilities, re-visit the same site, time allocation of the site's actors, goal clarification and proper planning.
There was a clear positive and even enthusiastic attitude towards the site visit (Table 2). The expression ‘eye-opener’ describes very well the site visit's importance as a means of learning. The suggestions for improvement mainly addressed the planning and arrangements of the visit. It is important that the task, goals and timetables are properly defined beforehand. It would be ethically justified if site visits could be beneficial to the site's development, and not only serve educational purposes.
4. Discussion In this experiment, the site visit was the heart of the intensive course, providing a learning experience of 1) adopting a socio-technical, particularly the AD, approach, and 2) gaining first-hand knowledge about health care (services and facilities) in a rural area of Mozambique. Only the summary of the former is presented in this paper. The site visit concretized very clearly the following points. Since the site was not computerized, the socio-technical view of IS was highlighted. The site visit showed the benefits of multi-professional cooperation, both as a group of researchers and in interacting with domain experts. The tables were found to be usable tools, complementing
the existing diagram templates. Due to the unfamiliarity with the site, fuzzy goals and limited opportunities for planning, the need for improvisation during the visit was highlighted. For such challenging situations, guidelines rather than strict formal instructions are needed to help researchers to improvise and think for themselves. Although the scale of the experiment was small (15 students), it provided clear evidence of the importance of site visits – not only as a starting point of system analysis but also as a crucial promoter of learning socio-technical methods for the early phases of ISD. Based on the feedback from the experiment, improvements can be made to the AD tools and educational artefacts (lectures, guidelines, course programs). Acknowledgements. The intensive course was funded by the North-South-South programme of the Centre for International Mobility (CIMO), Finland, through the INDEHELA-Education project no. 1000202 (2009-2011). The research was supported by the SOLEA project funded by the Finnish Agency of Technology and Innovation (grant 40127/08).
References
[1] Berg M. Patient care information systems and health care work: a sociotechnical approach. International Journal of Medical Informatics 1999;55(2):87-101.
[2] Westbrook JI, Braithwaite J, Georgiou A, Ampt A, Creswick N, Coiera E, Iedema R. Multimethod evaluation of information and communication technologies in health in the context of wicked problems and sociotechnical theory. Journal of the American Medical Informatics Association 2007;14(6):746-755.
[3] Mursu A, Luukkonen I, Toivanen M, Korpela M. Activity theory in information systems research and practice: theoretical underpinnings for an information systems development model. Information Research 2007;12(3): paper 311. Available from: http://InformationR.net/ir/12-3/paper311.html
[4] Alter S. 18 reasons why IT-reliant work systems should replace the IT artifact as the core subject matter of the IS field. Communications of the Association for Information Systems 2003;12(23):365-394.
[5] Korpela M, Mursu A, Soriyan A, Eerola A, Häkkinen H, Toivanen M. I.S. research and development by activity analysis and development - dead horse or the next wave? In: Kaplan B, Truex D III, Wastell D, Wood-Harper AT, DeGross JI, editors. Information systems research – relevant theory and informed practice. Boston: Kluwer Academic; 2004. p. 453-471.
[6] Hedegaard M, Chaiklin S, Jensen UJ. Activity theory and social practice: an introduction. In: Chaiklin S, Hedegaard M, Jensen UJ, editors. Activity theory and social practice: cultural-historical approaches. Aarhus, Denmark: Aarhus University Press; 1999. p. 12-30.
[7] Korpela M, de la Harpe R, Luukkonen I. Depicting the landscape around information flows: methodological propositions. In: SIG GlobDev Workshop Proceedings, Paris, France, 13 December 2008. Association for Information Systems; 2008.
[8] Truex D, Alter S, Long C. Systems analysis for everyone else: empowering business professionals through a systems analysis method that fits their needs. In: Alexander T, Turpin M, van Deventer JP, editors. IT to Empower – 18th European Conference on Information Systems, Pretoria, 6-9 June 2010.
[9] Saranto K, Korpela M, Kivinen T. Evaluation of the outcomes of a multi-professional education programme in health informatics. In: Patel VL, Rogers R, Haux R, editors. Medinfo 2001. Proceedings of the 10th World Congress on Medical Informatics, London, 2-5 September 2001. Amsterdam: IOS; 2001. p. 1071-1075.
[10] Saranto K. Challenges for multidisciplinary education in health informatics. In: Oud N, Sheerin F, Ehnfors M, Sermeus W, editors. Acendio 2007. 6th European Conference of Acendio. Nursing Communication in Multidisciplinary Practice. Amsterdam: Oud Consultancy; 2007. p. 175-176.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-427
A Multi-method Study of Factors Associated with Hospital Information System Success in South Africa
Lyn A HANMER a,1, Sedick ISAACS b, J Dewald ROODE c
a eHealth Research & Innovation Platform, South African Medical Research Council, South Africa; b HealthTechSA, South Africa; c Department of Information Systems, University of Cape Town, South Africa
Abstract. A combination of interpretivist and positivist techniques was used to develop and refine a conceptual model of factors associated with computerised hospital information system (CHIS) success in South Africa. Data from three case studies of CHIS use in level 2 public sector hospitals were combined to develop a conceptual model containing seven factors associated with CHIS success at hospital level. This conceptual model formed the basis of a fourth case study which aimed to confirm and refine the initial conceptual model. In the third phase of the study, a survey of CHIS use was conducted in 30 hospitals across two South African provinces, each using one of three different CHISs. Relationships between hospital-level factors of the conceptual model and user assessment of CHIS success were examined. A revised conceptual model of CHIS use was developed on the basis of the survey results. The use of a multi-method approach made it possible to generalise results from the case studies to multiple CHIS implementations in two provinces. Keywords. Hospital information system success, Information system (IS) success, multi-method approach, conceptual model.
1. Introduction A conceptual model of computerised hospital information system (CHIS) use has been developed, based on relevant theoretical background and the results of case studies and a survey, to support decision-making about CHIS acquisition and implementation in South African level 1 and level 2 hospitals2. This model takes into account the context in which the CHISs are implemented (environments of limited or vulnerable resources such as skilled personnel and infrastructure; and CHISs of limited scope, i.e., admission/discharge/transfer (ADT) and billing). In this paper, the combined use of interpretivist and positivist approaches to test and refine the conceptual model is described.
1 Corresponding Author: Lyn A Hanmer, eHealth Research & Innovation Platform, South African Medical Research Council, PO Box 19070, Tygerberg, South Africa, 7505. E-mail: [email protected].
2 A level 1 hospital is a facility at which a range of outpatient and inpatient services is offered, mostly within the scope of general medical practitioners. A level 2 hospital is a facility that provides care requiring the intervention of specialists as well as general medical practitioner services.
2. Methods Two contrasting approaches (positivist and interpretivist) have typically been used in the analysis of the effects of the implementation of information systems in organisations. Much of the literature on information system (IS) success seems to reflect the positivist approach, in which attempts are made to demonstrate the validity of theories of IS success, or the need to modify such theories, based on empirical studies of comparatively large numbers of cases (for example, studies reviewed in [1]). The theoretical work relating to health information system (HIS) success and failure identified to date has generally been based on an interpretivist approach, in which the aim is to deepen understanding of the social and other factors which contribute to the experience of implementing HISs in different environments. In this study, the aim is to develop an understanding of the relationships between an organisation (a level 1 or level 2 hospital), the people in that organisation, and the information system (the CHIS). The aim of some HIS studies has been to develop or extend theories which provide a framework in which to interpret results (for example, [2] and [3]). Other recent studies of factors influencing the success of HISs have taken the form of Delphi studies [4-5]. The interpretivist approach is appropriate to investigating CHIS success or failure because the highly complex nature of the environment being studied makes it difficult to predict outcomes of activities. This study used the opportunity to combine positivist and interpretivist approaches by using in-depth case studies to identify and examine the factors which affect the success or failure of CHISs in the environment of level 1 and level 2 public sector hospitals (a largely interpretivist approach), in combination with a survey of a large number of these organisations in an attempt to explain similarities and differences in the experiences of CHIS implementation across the organisations (a largely positivist approach). The combination of interpretivist and positivist approaches has been advocated by authors such as [6-8], so that the strengths of each approach can be combined to enrich the analysis of a particular domain. Westbrook et al. [9] are following a multi-method approach in a study of the implementation of a commercial CPOE system in an Australian hospital, describing the analysis of the effects of this implementation as a ‘wicked’ problem, requiring multiple methods of investigation to gain the best possible understanding of the process. The broad framework for the methodological approach used in this study is a reflection of the complexity of the issues being addressed: the socio-technical approach to HIS studies (as in [9-11]) is based on the premise that the implementation of information systems, such as CHISs, results in a complex interaction between the organisation in which the CHIS is implemented and the CHIS itself; i.e., the social and technical aspects of the implementation. This approach is consistent with the intention in this study to examine the implementation of CHISs in the specific context of level 1 and level 2 public sector hospitals in a developing country, based on the premise that access to the resources required for CHIS implementation in these environments is limited and vulnerable. The socio-technical approach provides a mechanism for the incorporation of the context issues in the study design and analysis.
The CHISs in use in the study hospitals support mainly patient administrative functions (patient registration, ADT and billing). Most published HIS studies identified in this project refer to clinical information systems, such as computerised physician order entry (CPOE) systems. The lack of published studies of administrative CHISs could imply that the technical and organisational issues related to a CHIS
implementation like those at the study hospitals are relatively trivial. However, reports of studies in two South African provinces highlight the challenges experienced with the implementation of similar CHISs in those environments [12-13].
3. Results and Discussion The interpretivist component of the current project provided the opportunity to examine the use of a specific CHIS through case studies in three hospitals (the pilot case studies) in order to improve understanding of factors which influence the potential for CHIS success or failure. Once factors had been identified, they were incorporated in the initial conceptual model of CHIS use. This initial conceptual model then provided the framework for the subsequent (fourth) case study. All case study hospitals used the same CHIS. Based on the findings from the fourth case study, and additional insights from the literature and from interviews with HIS experts, the conceptual model was revised to develop an ‘extended conceptual model of CHIS use’, following the structured case study approach described by Plummer [14]. The aim of the survey component of the study was to validate the extended conceptual model by conducting a survey of CHIS use in level 1 and level 2 hospitals in two South African provinces, each using one of three different CHISs. Survey respondents were asked questions designed to confirm (or not) the factors affecting CHIS success, and the relationships between them. This positivist approach was supplemented by a small interpretivist component, since respondents were also asked a few open-ended questions designed to obtain information on additional factors which could affect CHIS success in the study environments. The final version of the conceptual model of CHIS use for this project, the revised conceptual model of CHIS use, was developed based on the results of the survey. The following seven hospital-level factors were identified as being associated with CHIS success: Knowledge and understanding of CHIS; Appropriateness of CHIS design; CHIS performance; Resource availability and allocation; Perception of usefulness; Management commitment to success; and Effective use of CHIS and/or outputs. The survey results and the revised conceptual model are described in [15]. The case studies and discussions with expert informants yielded mainly qualitative data about opinions of the CHISs in use in the study environments. The survey was designed to collect quantitative data, based as far as possible on a 5-point scale to order opinions, and thus facilitate statistical analysis. The design of the questionnaires also made provision for recording qualitative data, both through open-ended questions and by allowing respondents to record comments. 3.1. Case Studies The use of case studies in examining HIS implementations is well established (for example, as reported in [10]; [13]; [16-17]). In practice, the pilot case studies and the fourth case study resulted in the identification of factors associated with (effective) CHIS use, rather than the more general concept of CHIS success. In keeping with the practice for qualitative research, cases were chosen in order to ensure representativeness of a particular class of cases, rather than on the basis of statistical sampling [18]. The description of the relationship between the identified factors was formalised in the development and refinement of an initial conceptual model of CHIS
use. The fourth case study differed from the pilot case studies in that it was aimed at investigating the applicability of the initial conceptual model of CHIS use while also clarifying information gained in the pilot case studies, resulting in the extended conceptual model. Yin [19] has made recommendations for enhancing the quality of case studies in health services research. Among the issues identified as being associated with high quality case studies is that they ‘should contain some operational framework’ even if they are exploratory [19, p1215]. For the pilot case studies in this CHIS success study, the framework was provided by the interview framework which was used in all the case studies, and the key IS success models identified by that stage ([1], [20-21]). The initial conceptual model of CHIS use and the same interview framework (as used in the pilot case studies) provided the operational framework for the fourth case study. 3.2. Survey The survey provided data on the CHIS implementations in 30 hospitals. There was no evidence from the available literature that other surveys of similar scope had been conducted either in South Africa or elsewhere, although there have been reports on surveys of the status of clinical information technology in hospitals in Canada and the US [22-23]. While the primary aim of the survey was not to obtain information about the CHIS itself in each hospital, questions were included about the functioning of the CHIS, relating to the factor ‘CHIS performance’ in the conceptual model of CHIS use. The survey was also designed to confirm whether the factors included in the conceptual model of CHIS use do apply in a wider set of hospitals, and to find out whether the relationships described in the conceptual model could be identified from the survey data. Several hypotheses related to the factors in the conceptual model were defined for investigation through the survey. The analysis of the survey data showed that the factors of the conceptual model are associated with CHIS success, and confirmed the relationships between factors of the model in varying degrees.
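The paper does not state which statistical procedures were used to examine these associations; as one plausible, purely illustrative approach, a rank correlation between an ordinal factor score and the user assessment of CHIS success could be computed as sketched below (all values are invented, and the variable names are assumptions rather than the study's actual coding).

```python
from scipy.stats import spearmanr

# Invented 5-point scores for one hospital-level factor of the conceptual model
# ("CHIS performance") and for user-assessed CHIS success, one value per hospital.
chis_performance = [4, 3, 5, 2, 4, 3, 4, 5, 2, 3, 4, 4, 3, 5, 2]
assessed_success = [4, 3, 5, 2, 5, 3, 4, 4, 2, 3, 4, 5, 3, 5, 1]

# Spearman rank correlation is suitable for ordinal (Likert-type) survey data.
rho, p_value = spearmanr(chis_performance, assessed_success)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```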
4. Conclusion The results of the study of factors associated with CHIS success in South African level 1 and level 2 hospitals have been reported in more detail elsewhere [15], [24]. The use of a multi-method approach made it possible to generalise results obtained from the case studies in four level 2 hospitals in the same province using the same CHIS to level 1 and level 2 hospitals in two provinces, using three CHISs. This approach has the potential to support further generalisation of the results of this study.
References
[1] DeLone WH and McLean ER. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update, Journal of Management Information Systems 19(4) (Spring 2003), 9-30.
[2] Thompson MPA. Cultivating meaning: interpretive fine-tuning of a South African health information system, Information and Organisation 12 (2002), 183-211.
[3] Braa J and Hedberg C. The struggle for district-based health information systems in South Africa, The Information Society 18 (2002), 113-127.
[4] Paré G, Sicotte C, Jaana M and Girouard D. Prioritizing the risk factors influencing the success of clinical information system projects. A Delphi study in Canada, Methods of Information in Medicine 47 (2008), 251-259.
[5] Brender J, Ammenwerth E, Nykänen P and Talmon J. Factors influencing success and failure of health information systems: A pilot Delphi study, Methods of Information in Medicine 45 (2006), 125-136.
[6] Roode JD. Information Systems Research: A Matter of Choice? South African Computer Journal 30 (2003), 1-2.
[7] Lee AS. Integrating Positivist and Interpretive Approaches to Organisational Research, Organisation Science 4 (1991), 342-365.
[8] Kaplan B. Evaluating informatics applications – some alternative approaches: theory, social interactionism, and call for methodological pluralism, International Journal of Medical Informatics 64 (2001), 39-56.
[9] Westbrook JI, Braithwaite J, Georgiou A, Ampt A, Creswick N, Coiera E and Iedema R. Multimethod evaluation of information and communication technologies in health in the context of wicked problems and sociotechnical theory, Journal of the American Medical Informatics Association 14 (2007), 746-755.
[10] Aarts J, Doorewaard H and Berg M. Understanding implementation: the case of a computerised physician order entry system in a large Dutch university medical centre, Journal of the American Medical Informatics Association 11(3) (May/Jun 2004), 207-216.
[11] Berg M. Implementing information systems in health care organizations: myths and challenges, International Journal of Medical Informatics 64 (2001), 143-156.
[12] Jacucci E, Shaw V and Braa J. Standardization of Health Information Systems in South Africa: the Challenge of Local Sustainability. In: Abiodun OB (ed), Proceedings of the IFIP 9.4 Working Conference on Enhancing Human Resource Development through ICT, Abuja, Nigeria, May 2005.
[13] Littlejohns P, Wyatt JC and Garvican L. Evaluating computerised health information systems: hard lessons still to be learnt, BMJ 326 (2003), 860-863.
[14] Plummer AA. Information systems methodology for building theory in health informatics: the argument for a structured approach to case study research, Proceedings of the 34th Annual Hawaii International Conference on System Sciences, Hawaii, 2001.
[15] Hanmer LA, Isaacs S and Roode JD. Factors associated with hospital information system success: Results of a survey in South Africa. In: Safran C, Marin H, Reti S (eds), Proceedings of Medinfo 2010, the 13th World Congress on Medical and Health Informatics, Cape Town, South Africa, 12–15 September 2010.
[16] Yusof MM, Kuljis J, Papazafeiropoulou A and Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit), International Journal of Medical Informatics 77(6) (2008), 386-398.
[17] Southon G, Sauer C and Dampney C. Lessons from a failed information systems initiative: issues for complex organisations, International Journal of Medical Informatics 55 (1999), 33-46.
[18] Ragin CC. The distinctiveness of case-oriented research, Health Services Research 34:5 Part II (December 1999), 1137-1152.
[19] Yin RK. Enhancing the quality of case studies in health services research, Health Services Research 34:5 Part II (December 1999), 1209-1224.
[20] Heeks R, Mundy D and Salazar A. Why Health Care Information Systems Succeed or Fail, University of Manchester Institute for Development Policy and Management: Working Paper Series No 9, 1999. http://www.sed.manchester.ac.uk/idpm/publications/wp/igov/igov_wp09.htm (accessed May 2006).
[21] Ballantine J, Bonner M, Levy M, Martin A, Munro I and Powell PL. Developing a 3-D Model of Information Systems Success. In: Garrity EJ (ed), Information Systems Success. Idea Group Publishing, 1998, pp. 46-59.
[22] Jaana M, Ward MM, Paré G and Wakefield DS. Clinical information technology in hospitals: A comparison between the state of Iowa and two provinces in Canada, International Journal of Medical Informatics 74 (2005), 719-731.
[23] Ward MM, Jaana M, Bahensky JA, Vartak S and Wakefield DS. Clinical information system availability and use in urban and rural hospitals, Journal of Medical Systems 30 (2006), 429-438.
[24] Hanmer LA. Factors associated with the successful implementation of computerised hospital information systems in South Africa, PhD thesis, University of Cape Town, Cape Town, 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-432
Assessing Biocomputational Modelling in Transforming Clinical Guidelines for Osteoporosis Management
Rainer THIEL a,1, Marco VICECONTI b, Karl STROETMANN a
a empirica Communication and Technology Research, Bonn, Germany; b Istituto Ortopedico Rizzoli, Bologna, Italy
Abstract. Biocomputational modelling as developed by the European Virtual Physiological Human (VPH) Initiative is the area of ICT most likely to revolutionise the practice of medicine in the longer term. Using the example of osteoporosis management, a socio-economic assessment framework is presented that captures how the transformation of clinical guidelines through VPH models can be evaluated. Applied to the Osteoporotic Virtual Physiological Human Project, a subsequent benefit-cost analysis delivers promising results, both methodologically and substantively. Keywords. Biocomputational modelling, VPH, clinical workflow, evaluation, impact assessment, osteoporosis
1. Introduction There is a growing interest in computational technologies in the area of medicine. Whereas Information and Communication Technologies (ICT) already play a fundamental role in medical informatics and practice, bioinformatics, and telehealth, the use of ICT as support for prevention, screening, diagnosis, treatment, and monitoring remains limited. Yet it is by now evident that this is the area of medical technology most likely to revolutionise the practice of medicine in the longer term. Computer models that simulate physiopathological processes can be employed to take clinical decisions on the basis of “what-if” analyses (predictive medicine), to tailor the delivery of care to the specific needs of individual patients (personalised medicine), and to explore pathological scenarios for systemic interactions between multiple physiological processes (integrative medicine). In Europe, the global framework of methods and technologies that will permit the delivery of a predictive, personalised, and integrative medicine has been developed under the name of Virtual Physiological Human (VPH). This initiative has been marked by a demand for measurable evidence that such complex technology is actually worth the cost. The aim of this paper is (1) to introduce a new evaluation framework as developed and applied to predictive computational models for osteoporosis
1 Corresponding author: Rainer Thiel, empirica Communication and Technology Research GmbH, Oxfordstr. 2, 53111 Bonn, Germany; E-mail: [email protected].
management during the Osteoporotic Virtual Physiological Human Project (VPHOP)2, and (2) to present preliminary results of a cost-benefit assessment (CBA).
2. Method With respect to the conventional definition of Health Technology Assessment (HTA) [1], its application to VPH technology needs to take into account two additional elements: a) the technology involves predictive computer models, which have the potential to revolutionise currently applied clinical guidelines; b) the purpose of the assessment is extended to RTD policymaking, i.e. decisions made during the development of the technology itself. The methodological challenge in comparison to commonly applied health technology assessments rests on two factors: (1) there is an inherent need to assess the technology ex-ante, in very early stages of development [2]; (2) the impact on clinical decision making and practice may be far reaching. For the purposes of assessing biocomputational technologies, standard HTA is not sufficient. To expand on this dimension [3], we suggest considering the complete life cycle of a new or modified technology, ranging across development stages. Therefore, we introduced a particular (VPH) technology readiness level [4]. For the purpose of the VPHOP technology assessment, a new concept assigning fine-grained technology readiness levels was introduced across the broader development phases of basic research, experimental validation, pre-clinical validation, clinical validation, and operational usage, providing an overview of the technologies' maturity at a given time.
3. Result As this paper is of a methodological nature, this results section foremost presents the assessment framework. 3.1. Fundamental Attributes of Predictive Computer Technologies We propose that every health technology that includes a predictive model should be assessed with respect to these fundamental attributes: • Capability: substantiation that a computerized model reliably represents a conceptual model within specified limits of (inherent) accuracy. Capability assessment requires tightly controlled conditions like laboratory environments. • Clinical accuracy: model accuracy needs to be assessed not only under controlled conditions, but also under operational conditions. Predictive accuracy can thus be truly assessed only in the clinical environment. • Efficacy: efficacy indicates the capacity for beneficial change (or therapeutic effect) of a given intervention in an optimal context. Here the assessment focuses on how medically beneficial the new clinical pathway that incorporates the predictive technology is for the patient (incl. risk). • Impact: for adoption, a health technology should not only be beneficial for the patient, but also present an impact upon the other stakeholders involved
2 EU FP7 #223865, www.vphop.eu.
(medical professionals, healthcare providers, healthcare payers, policy makers, society at large) that they consider favourable or at least acceptable. 3.2. Central Assessment Framework: VPH Measurement Variables and Indicators All available indicators for each of the four fundamental dimensions above are exhibited in Table 1. The table depicts which indicators can and should be used to assess the four fundamental variables of predictive technology during the stages of its lifecycle. This matrix serves as the central methodological framework that guided the VPHOP technology assessment.
Table 1. Grid of indicators, VPH technology assessment (impact variables by development phase).
Development phase: Basic research. Capability: verification, validation.
Development phase: Experimental verification & validation (inherent accuracy). Accuracy: prediction uncertainty. Efficacy: estimated accuracy-efficacy function. Impact measure: projected cost/time based on simulation.
Development phase: Pre-clinical verification & validation. Accuracy: RMS, ROC, AUC. Efficacy: FP/FN accuracy-efficacy function. Impact measure: projected cost/time/risk based on actual use on prototype.
Development phase: Clinical validation & assessment (clinical accuracy). Efficacy: comparative outcome, QALY. Impact measure: actual cost/time/risk measured.
Development phase: Operational (ex-post assessment). Impact measure: indicators of impact upon patient, provider, payer, etc.
3.3. Overall Outcome Measures of Socio-Economic Impact Assessment Before turning to the more concrete approaches for measuring the technologies' capability, accuracy, efficacy and impact, it is worthwhile reconnecting the entire assessment exercise to the ultimate objective of the socio-economic technology assessment task. We can distinguish between two aggregate, overall socio-economic impact outcomes that guided the further development of measurement variables, indicators and tools: once clinically applied (the ultimate reference point), the new technologies will affect a) the care provider, i.e. the health system, and b) the patient's health (see Figure 1).
Figure 1. HTA based decision-making and influence of technology
Clinical impact, as the central avenue to approach the impact assessment, is constituted by two elements: (1) clinical management – i.e. the care pathway of the standard of care of the osteoporotic patient and, consequently, its change management; (2) health impact – the disease states and health of the patient (i.e. the expected consequences of fractures avoided) – scaled up to a macro/country level.
3.4. Efficacy and Definition of Clinical Pathways While accuracy is a concept that can be associated with every technological component, the concept of efficacy can only be defined with respect to a specific clinical pathway and its associated clinical scope. Once the new multiscale predictive technology has been validated in a clinical context, a new modified pathway will evolve. The comparison between the old and the new pathways represents the initial tool for estimating the expected overall impact of the new technology on the clinical guidelines and thus on clinical management. A standard of care pathway (SoC) served as the central comparator of current osteoporosis management with the future VPHOP clinical pathways. For the SoC, all assumptions are based on approximations of current literature and epidemiological data, and constitute a reduced version of the European Guidance algorithm [5]. For the VPHOP clinical pathways, a multi-layered pathway consisting of three levels, with the respective technology components assigned, was hypothesised for the deployment of the VPHOP technology. 3.5. Cost-Benefit Analysis VPHOP Clinical Pathway Focussing on the outcome variables subsumed under health impact, a first preliminary and integrative cost-benefit analysis was performed. The projected costs of the VPHOP clinical pathways (based on an originally developed costing model robustly estimating the costs of each deployable component) were set in relation to the expected benefits of the increased inherent accuracy rates, in comparison to the costs and predictive accuracy of the standard of care diagnostic pathway. At this early stage of the project, only the technical capability assessment served as the groundwork for the consequent impact assessment. In sum, the cost-benefit analysis forms the final output of the impact assessment dimension. The patient flow and output of the VPHOP and the SoC pathway were comparatively simulated with a hypothetical patient cohort of 5000 patients. Health impact was defined to encompass clinical management and health as outcomes, formalised as fractures avoided. Each hip fracture amounts to life-time costs of €60,000 when diagnosed and treated in the SoC pathway (including costs of diagnosis, treatment, hospital stays, nursing facility costs, etc.) [6;7]. One of the causes of these enormous expenses is the low accuracy of the risk assessment of the current standard of care pathway. To reach an estimate of the health impact VPHOP technologies have on avoiding fractures and the derived amount of costs saved, the increased accuracy was multiplied with the costs of fractures. Further, in a conservative estimate, the average ten-year probability of suffering a hip fracture is around 25%. We assume, furthermore, that the treatment efficacy is 50% in both pathways. For assessing the cost-benefit ratio of the VPHOP technologies, the benefits can be set equal to the cost savings that derive from the additional fractures prevented by the VPHOP prognosis pathway in comparison to the standard of care. The costs, in sum, can be defined as the extra costs of the VPHOP prognosis pathway as compared to the costs of the SoC. For the VPHOP clinical pathway, in a simplified manner, with B = cost savings (fractures avoided) and C = extra costs, the benefit-cost ratio (BCR) can be calculated as BCR = B / C.
Alternating between conservative and relaxed assumptions and data inputs, the calculated ratio indicated a positive return in nearly all instances. For the simulated patient cohort, the number of additional fractures that the VPH technologies need to prevent in order to break even with the costs of the SoC is within realistic reach once the technology is deployed in a clinical setting. The CBA exhibited clearly that the extra costs needed to implement the VPHOP pathways are by far offset by the large cost savings that the improved fracture risk prognosis of VPHOP presents.
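To make the simplified calculation concrete, the sketch below recomputes BCR = B / C for the hypothetical cohort. The cohort size, fracture cost, ten-year fracture probability and treatment efficacy are taken from the text above; the risk-assessment sensitivities and the extra per-patient cost of the VPHOP pathway are invented placeholders, not VPHOP project figures.

```python
# Illustrative benefit-cost calculation for the hypothetical cohort (simplified).
cohort = 5000
fracture_cost = 60_000          # lifetime cost of one hip fracture in the SoC pathway (EUR)
ten_year_fracture_prob = 0.25   # conservative estimate from the text
treatment_efficacy = 0.50       # assumed equal in both pathways

soc_sensitivity = 0.55          # placeholder: share of at-risk patients identified by the SoC
vphop_sensitivity = 0.75        # placeholder: improved accuracy of the VPHOP prognosis pathway
extra_cost_per_patient = 400    # placeholder: additional cost of the VPHOP work-up (EUR)

expected_fractures = cohort * ten_year_fracture_prob
additional_fractures_avoided = (
    expected_fractures * treatment_efficacy * (vphop_sensitivity - soc_sensitivity)
)

benefits = additional_fractures_avoided * fracture_cost   # B: cost savings from fractures avoided
costs = cohort * extra_cost_per_patient                   # C: extra costs of the VPHOP pathway
print(f"Fractures avoided: {additional_fractures_avoided:.0f}, BCR = {benefits / costs:.2f}")
```

Varying the placeholder values between more conservative and more relaxed settings reproduces the kind of sensitivity analysis the paragraph above refers to.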
4. Discussion Through newly developed clinical decision support and pathways, the transformation of biocomputational modelling and VPH technologies into future patient workflows is meant to ameliorate or even replace current clinical management processes, here those of osteoporotic patients. The new (VPH) technology assessment framework developed here lays bare many of the implicit assumptions behind such developments. Since the assessment perspective is to develop concrete clinical scenarios, the further work, e.g. on VPHOP technologies, will clearly benefit from a much more focused alignment towards producing results that matter within the context of deployable, routine clinical applications. The cost-benefit analysis, already at this early stage, allowed highlighting some of the fundamental and, most importantly, clinical challenges VPHOP will have to overcome, thereby directing its further research towards early clinical trialability and later routine clinical deployment.
References
[1] Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL, Methods for the Economic Evaluation of Health Care Programmes, Oxford University Press (2005).
[2] Hartz S, John J, Contribution of Economic Evaluation to Decision Making in Early Phases of Product Development: A Methodological and Empirical Review, International Journal of Technology Assessment in Health Care 24(4) (2008), 465-472.
[3] Eisenberg JM, Ten lessons for evidence-based technology assessment, JAMA 17 (1999), 1865-9.
[4] US Department of Defense, Technology Readiness Levels in the European Space Agency (ESA) and the US Department of Defense, Defense Acquisition Guidebook, http://akss.dau.mil/DAG, 2006.
[5] Kanis J, Burlet N, Cooper C, Delmas P, Reginster J-Y, Borgstrom F, Rizzoli R, European guidance for the diagnosis and management of osteoporosis in postmenopausal women, Osteoporosis Int 12 (2008), 399-428.
[6] Braithwaite RS, Col NF, Wong JB, Estimating Hip Fracture Morbidity, Mortality, and Costs, J Am Geriatr Soc 51(3) (2003), 364-70.
[7] Ström O, Borgstrom F, Zethraeus N, Johnell O, Lidgren L, Ponzer S, Svensson O, Abdon P, Ornstein E, Ceder L, Thorngren KG, Sernbo I, Jonsson B, Long-term cost and effect on quality of life of osteoporosis-related fractures in Sweden, Acta Orthop 79(2) (2008), 269-80.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-437
Technical Data Evaluation of a Palliative Care Web-Based Documentation System
Tobias HARTZ a,1, René BRÜNTRUP a, Frank ÜCKERT a
a Institute of Medical Informatics, University Hospital Münster, Germany
Abstract. A technical analysis of the web-based patient documentation system eKernPäP was conducted. The system is used by interdisciplinary pediatric palliative care teams in Germany to document outpatient care. The data of the system and the data of an external web analytics system were evaluated. The results give an overview of how the system is used and what information is generated. A detailed analysis of individual forms showed that not all forms were filled in completely. With the help of the external web analytics system the navigation behavior of the users could be retraced. The users followed the given navigation from top to bottom. An existing exception to this pattern turned out to be a misplacement and will be corrected in the next version. The technical analysis proved to be a good tool for improving a web-based documentation system. Keywords. Palliative Care, PCT, EHR.
1. Introduction 1.1. Palliative Care Ambulatory palliative care is about providing comprehensive services to terminally ill patients in their personal surroundings. The goal is to relieve the patients of their symptoms and to improve their quality of life [1]. The physical, mental, social and spiritual needs are the main focus. Palliative care requires multi-professional cooperation. Doctors, nurses, psychologists, social workers and institutions such as hospitals and palliative care teams (PCTs) are involved in the care process. To provide well-balanced care, communication among these health care providers is essential. Therefore, access to current patient documentation at all times and from all places is needed [2]. 1.2. The Web-Based Documentation System Based on a prior documentation system which was developed in 2002 using Microsoft Access, a new web-based solution, called eKernPäP, has been implemented [3]. eKernPäP stores the medical data in forms that each cover one topic. If data is entered or changed, a new version of the entire form is saved. eKernPäP contains a wide range of forms covering diagnosis- and therapy-related information about the patients. Through the use of the internet and systems equipped with UMTS, the health providers always
1 Corresponding Author: Tobias Hartz, Department of Medical Informatics, University Hospital Münster, Domagkstraße 11, 48149 Münster, Germany; E-mail: [email protected].
have immediate access to the latest data, regardless of their location. Since the documentation system runs in any modern web browser, it is independent of specific hardware or software. To satisfy the high demand for data protection and security, the data protection concept of the German TMF e.V. (Technology, Methods, and Infrastructure for Networked Medical Research) [4] has been implemented, in particular by storing identifying data and medical data separately on two different MySQL database servers. The two data classes are merged locally in the user's web browser, and only when the user has successfully signed on and is involved in the treatment of the patient. The implementation is based on PHP and JavaScript. 1.3. Technical Analysis This paper focuses on an evaluation of the usage of the system. Three PCTs have been using the system for nearly a year by now. Thus, enough data is available to do a comprehensive technical analysis. The purpose of this analysis is to understand how the system is used in order to draw conclusions on how to further improve it.
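The data separation described above can be illustrated with a minimal sketch (written here in Python rather than the system's actual PHP/JavaScript code); the record structures, the pseudonym key and the authorisation check are hypothetical and only stand in for the TMF-style concept of merging identifying and medical data solely for an authorised treating user.

```python
# Hypothetical stores standing in for the two separate MySQL databases:
# identifying data and medical data, linked only by a pseudonym.
identifying_db = {"PSN-0042": {"name": "Jane Doe", "date_of_birth": "2005-03-14"}}
medical_db = {"PSN-0042": {"pain_form": {"strongest_pain_24h": 6}}}

def merge_record(pseudonym, user_is_authorised):
    """Merge the two data classes only for a signed-on user involved in the treatment."""
    if not user_is_authorised:
        return None
    record = dict(identifying_db.get(pseudonym, {}))
    record.update(medical_db.get(pseudonym, {}))
    return record

print(merge_record("PSN-0042", user_is_authorised=True))
```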
2. Methods For the technical analysis of the system, three different data sources are available. First of all there is the content data, which is entered by the users of the system. In most cases this data is stored in the databases with some additional metadata, such as the current point in time and the ID of the editing user. The second data source is the integrated audit system that records user actions such as logging in and out and modifying data. As a third source of data, the open source web analytics system Piwik [5] was used to collect information about the users' system configurations, types of internet connection and browsing behaviors. Piwik has the advantage of respecting the users' privacy to a greater degree by avoiding links between the collected data and the eKernPäP user database. Implementation of Piwik into eKernPäP proved to be a simple method of recording each user's navigation. The collected patient data of the above mentioned PCTs was retrieved from the servers and anonymized. Using SQL, the relevant data was joined and further processed using a spreadsheet application. Analysis focused on individual aspects of interest that were found in this process.
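As an illustration of the SQL-based counting and joining mentioned here, the sketch below aggregates create and edit actions per form from a hypothetical audit-log table; the schema, table and column names are assumptions and do not reflect the actual eKernPäP database layout.

```python
import sqlite3

# Hypothetical, simplified schema for the internal audit log described in the paper:
# one row per user action (create/edit) on a form of a patient record.
con = sqlite3.connect("ekernpaep_audit_example.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS audit_log (
    form_name TEXT,      -- e.g. 'pain_form', 'address', 'memo'
    action    TEXT,      -- 'create' or 'edit'
    user_id   INTEGER,
    edited_at TEXT       -- ISO timestamp
);
""")

# Count how often each form was created and edited, analogous to the summarisation
# reported in section 3.1 (e.g. 997 edits of the pain form).
rows = con.execute("""
    SELECT form_name,
           SUM(action = 'create') AS created,
           SUM(action = 'edit')   AS edited
    FROM audit_log
    GROUP BY form_name
    ORDER BY edited DESC
""").fetchall()

for form_name, created, edited in rows:
    print(f"{form_name}: created {created}, edited {edited}")
```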
3. Results 3.1. Internal Change Log The three PCTs whose data was analyzed have been using the system for almost one year. Up to this point, 61 user accounts were issued and 215 patient records have been created. Not all 33 forms in eKernPäP are used when documenting one patient’s treatment. In general the users choose which forms they need for their documentation. Looking at the data of the internal logging system, a summarization of the usage for those teams can be given. For example, 1,607 address entries were created and in 1,794 cases existing addresses have been edited. The form with pain related questions was
edited 997 times and 539 distinct data entries can be found in the database. There have been 4,467 memos saved. In contrast, only 25 psychosocial findings have been documented. This first overview reveals the significance of some forms and might suggest the irrelevance of others for the documentation process. Combining this analysis of the internal system data with the extensive statistics from the web analytics system offers further insights. The visited pages and the data of the internal logging system can be matched (Table 1). The results show that some pages such as medication or comments are not only edited very often, but also viewed even more frequently. Often the ratio between editing and viewing is similar, but there are some forms which show a considerable difference. The high number of views on medication and memo indicates that this information is important not only for the person documenting but also for others using the system. In contrast to that, the pain location form has almost as many views as entries. Therefore it seems to be information which is seldom accessed just for the purpose of reading. In most cases, when a user views this information, an existing entry is edited or generated. Table 1. Edits and Views of the user.
3.2. Data Completeness of Specific Forms A form consisting of a set of items is always saved as an entire block. Just as the users can choose which forms they want to use, it was decided during development not to demand mandatory fields within a form (with the exception of user and patient registration). The reason was that each team keeps its patient records in different detail. Forcing them into a fixed scheme was not an option within the first step. Even though it would make sense when the data shall be used for comparison and quality management, a more liberal approach gave more flexibility of usage. However, the results of the first internal log analysis, which looks at the forms as an entire block, might be misleading. A high number of edited and viewed forms does not mean that all items within a form have been used. It is important to analyze the data quality of individual forms in more detail. The pain form, for example, had 539 distinct data entries. Some pain forms have been filled in completely, but many others have only been partially completed, as can be seen in Table 2. This observation is important and leads to different consequences. If some aspects of a form do not apply to all patients, the users may omit these parts and this possibility needs to be obvious for the user. But if there are only a few entries
because most users consider this information unnecessary, it should either be removed or the importance of these aspects and the need to document them should be clarified.
Table 2. Number of data entries of selected items from the pain form.
Item: Strongest pain (24h); input type: radio button; data entries: 456
Item: What does relieve your pain?; input type: text; data entries: 180
Item: Pain relief through treatment; input type: radio button; data entries: 96
Item: Sense of pain: dull, onerous; input type: radio button; data entries: 32
Item: Negative effect of pain concerning vitality; input type: radio button; data entries: 6
3.3. How User Navigate through the System To determine how the users navigate in eKernPäP the probabilities with which the users switch from one form to another were calculated. The results showed (tab. 3) that the users strongly tend to follow the menu structure when documenting a patient contact. The graph shows the likeliness for users navigating from the form on the left to the form on the top. Percentages above 25 % are highlighted. There are only two menu items that are usually not used in the order in which they are positioned in eKernPäP, but exactly the other way around. Thus the position of these two items (“Zeiterfassung” (time registration) and “Gesprächsnotiz” (memo)) will be interchanged in the next version. For the other forms this analysis can be interpreted in two ways: (1) the order of the forms as it is set up in the menu meets the needs of the health providers and respectively the user; (2) the users use the menu structure as a guide for their documentation work, not necessarily indicating that the menu structure is ‘correct’. Misplaced items may negatively influence the users’ documentation. Table 3. Transitions from one form to another 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Origin and target forms (rows and columns, numbered 1–14 in menu order): Select Patient, Contact Assessment, Anamnese, Base data, Physical Examination, Performance Scale, Symptoms overview, Symptom control, List of Symptoms, Pain Form, Pain Localisation, Paediatric Pain Profile, Time registration, Memo. (The individual transition percentages of the original matrix could not be reliably recovered from the source layout and are therefore not reproduced here; percentages above 25% were highlighted in the original table.)
It is interesting to point out that, going from top to bottom, some forms are omitted. For example, after documenting the physical examination most users go directly to symptom control, leaving out performance scale and symptom overview. Another omission was found for the entries concerning psychosocial documentation. As noted above, only 25 entries of psychosocial findings were saved. Discussions with users have indeed shown that certain forms within this block were not known and are misplaced in the current version. The psychosocial documentation block was often skipped because users thought that it only concerned their psychosocial colleagues.
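As a rough illustration of how the transition probabilities in Table 3 can be derived from the navigation log, the following Java sketch counts form-to-form transitions in an ordered sequence of opened forms and normalises each row. The session data here are invented and the actual eKernPäP log format is not shown.

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: counts form-to-form transitions in one ordered navigation sequence
// and prints each origin form's transition probabilities. Session content is invented.
public class FormTransitions {
    public static void main(String[] args) {
        List<String> session = Arrays.asList(
                "Select Patient", "Anamnese", "Pain Form", "Time registration", "Memo",
                "Select Patient", "Pain Form", "Memo");

        Map<String, Map<String, Integer>> counts = new LinkedHashMap<>();
        for (int i = 0; i + 1 < session.size(); i++) {
            counts.computeIfAbsent(session.get(i), k -> new LinkedHashMap<>())
                  .merge(session.get(i + 1), 1, Integer::sum);
        }

        counts.forEach((origin, row) -> {
            int total = row.values().stream().mapToInt(Integer::intValue).sum();
            row.forEach((target, n) ->
                    System.out.printf("%s -> %s: %.0f%%%n", origin, target, 100.0 * n / total));
        });
    }
}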
4. Discussion The technical analysis helps to get an overview of how a system is used and what information is generated. Thanks to the web-based architecture this evaluation can be done at any time. The results are not only important for the developers who want to improve the system, but also for users benefiting from system improvements when using the tool and for those users who want to use the data for medical research. Especially in pediatric palliative care, where the number of patients a PCT cares for is small, a central system used by several teams has the chance to generate a useful pool of valid, comparable data. The content of the forms focuses on two aspects: daily work as well as research. Therefore the forms contain items that do not directly affect the care of a patient and are not needed for the daily routine, but are important for research issues. It is important that the users know why they are asked to document this information nevertheless. The technical analysis made it obvious that some forms within the system are not filled in completely. Either the users should be trained to provide this information in the future or the irrelevant items should be removed from the forms. For further improvement the technical analysis can help indicate which features are needed more urgently. The impact of improving features that are used very often is much higher than that of changing features with minor relevance. Since in most projects the resources are limited, technical analysis can help to select the most important next tasks. In addition, the technical analysis can point to shortcomings; for example, the analysis of how users follow the navigation helped to improve the order of the menu items.
5. Conclusion The technical analysis has proven to be a useful tool for improving a system and enhancing data quality. Results are easily generated and influence the development and the usage of the system. It would be useful to further analyze the possibilities of this method and to integrate an automatic technical analysis tool into the documentation system, which could provide statistical feedback at any time.
References
[1] Henkel HW, Gerschlauer C, Jan A. Palliativversorgung von Kindern in Deutschland. Monatszeitschrift für Kinderheilkunde 2005;153(6).
[2] Knapp C. e-Health in Pediatric Palliative Care. American Journal of Hospice and Palliative Medicine 2010 February 01;27(1):66-73.
[3] Hartz T, Verst H, Ueckert F. Kern-PaeP—a web-based pediatric palliative documentation system for home care. Stud Health Technol Inform 2009;150:337-341.
[4] Reng C. Generische Lösungen der TMF zum Datenschutz für die Forschungsnetze in der Medizin. Berlin: Med. Wiss. Verl.-Ges; 2006.
[5] Piwik: Open Source Web Analytics [cited 2011 Jan 11]. Available from: URL: http://piwik.org/.
Imaging and Biosignals
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-445
Extracting Gait Parameters from Raw Electronic Walkway Data André DIASa,b,c,1, Lukas GORZELNIAKb, Angela DÖRINGc, Gunnar HARTVIGSENa,d , Alexander HORSCHb,d,e a Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway, Tromsø, Norway b Institut für Medizinische Statistik und Epidemiologie, Technische Universität München, Germany c Institute of Epidemiology, Helmholtz Zentrum München, Germany d Computer Science Department, University of Tromsø, Norway e Department of Clinical Medicine, University of Tromsø, Norway
Abstract. Spatiotemporal gait parameters are very important for the detection of gait impairments and associated conditions. Current methods to measure such parameters, e.g. electronic walkways or force plates, are costly and can only be used in a laboratory. The new generation of raw data accelerometers might be a cheap and flexible alternative. We conducted a small feasibility study with 50 subjects from the KORA-Age project, exploring the output of the GAITRite walkway and the Actigraph GT3X. We open-sourced a package to extract and process raw data from GAITRite. The most promising location for the accelerometer seems to be at the ankle. The use of the accelerometers proved to be simple and reliable, indicating that they can be used in daily life to extract gait parameters. Keywords. Gait parameters, Actigraph GT3X, GAITRite, open source
1. Introduction Objective measurements of spatiotemporal gait parameters are essential in a clinical or research environment to detect possible gait impairments or to monitor the effects of recovery therapy. There are several methods for the assessment of gait parameters, varying in validity, reliability and usability, such as force plates, pressure activated sensors and motion analyses from video. Most of them are either costly, time or labour intensive, or can only be applied to few gait cycles. Because of these limitations they are only feasible in a laboratory, raising questions as to whether such data represents the gait performance in daily life [1]. For these reasons, a portable and easy to use method is of great value, as it allows measurements for many gait cycles in daily living. In the last few years, accelerometer-based gait analysis systems have been proposed for this task [2,3]. Present technology allows us to record data in very high frequencies for long periods, opening a promising window for portable gait assessment. 1
Corresponding author: André Dias Department of Computer Science, University of Tromsø 9037 Tromsø, Norway. E-mail: [email protected]
The elderly population is an essential target group. However, so far little work has been done on using accelerometers for gait assessment within this group [4].
2. Materials and Methods 2.1. Sensors For assessment of the gait parameters we used the GAITRite portable electronic walkway (CIR Systems Inc., Havertown, USA), 6 meters long, measurement length 4.88 meters, and 0.89 meters wide, with a sampling rate of 80 Hz. For motion sensing we used sets of 4 triaxial accelerometers of type Actigraph GT3X (Actigraph LLC, Fort Walton Beach, Florida, USA) with 16 MB of memory, capable of recording raw data at a sampling rate of 30 Hz. This accelerometer has been validated in several published studies for medical and epidemiological research. 2.2. Subjects We asked a subset of 50 subjects recruited for the KORA-Age project [5] to wear accelerometers while walking on the GAITRite, which was part of the KORA-Age study protocol, with no changes due to the accelerometers. The subset was selected by asking every subject who took part in the KORA-Age project on randomly selected days. Therefore the same inclusion and exclusion criteria as in the KORA-Age study applied [5]. The accelerometers were prepared and attached by the biosensor team; subjects were given a brief explanation of the goal and asked to perform four walks (normal walk, slow walk, fast walk, and walk performing a mental task). The acceptance rate was 100%, i.e. all subjects agreed to wear the sensors. 2.3. Data acquisition and Handling of GT3X Each subject wore one tri-axial sensor at each of the extremities: left and right wrist, left and right ankle. The sensors were configured for raw data mode recording. Care was taken to customise Velcro straps in order to ensure reliable sensor attachment and correct orientation. Time constraints of the KORA-Age protocol did not allow download and reconfiguration of the sensors for each subject. So, each sensor recorded the entire day, without breaks, merging data from several subjects in one session/file. We used one computer for operating GAITRite and GT3X, to ensure synchronisation. 2.4. Data Processing 2.4.1. Data from GAITRite The main software application provided by the manufacturer is proprietary. Therefore it was not possible to extend the methods and gait parameters extracted from the data. However, the vendor provides a separate program named Gaitraw, which outputs the raw data in a known format, documented by CIR Systems and made available to us. We developed an open-source Java package to process the data provided by Gaitraw. First, as shown in Table 1, seven gait parameters equivalent to those
computed by the GAITRite were implemented, in order to create a testing ground. Later we extended this set of features by one parameter that we found relevant for medical and epidemiological studies, as well as by a batch processing mode. Table 1 summarises all extracted parameters. The software package, named GaitParser, is available for download and contribution at: http://code.google.com/p/gait-raw-parser/.
Table 1. Implemented parameters, at time of writing, in GaitParser.
Per walk
  Ambulation time (AT)    Duration of the walk
Per footprint
  Step length (SL)        Length of a step
  Side swing (SS)         Distance of a step from straight walking line (only GaitParser)
  Gait cycle time (GT)    Duration of a gait cycle
  Single support (SP)     Duration of period only 1 foot on the ground
Per gait cycle
  Step time (ST)          Duration of a step
  Swing (SW)              Complementary to single support
  Double support (DS)     Time when both feet are on the ground
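To make the timing parameters in Table 1 concrete, the following Java fragment derives step time and gait cycle time from a list of heel-strike times, assuming strictly alternating left and right footfalls. It is an illustration under simplifying assumptions, not GaitParser's actual code, and the timestamps are invented.

// Illustrative only (not GaitParser's actual code): derives step time and gait cycle time
// from heel-strike timestamps, assuming strictly alternating left/right footfalls.
public class GaitTimes {
    public static void main(String[] args) {
        double[] heelStrikes = {0.00, 0.55, 1.12, 1.66, 2.24, 2.78}; // invented timestamps in seconds

        for (int i = 1; i < heelStrikes.length; i++) {
            double stepTime = heelStrikes[i] - heelStrikes[i - 1];      // interval between opposite feet
            System.out.printf("Step %d: step time %.2f s%n", i, stepTime);
        }
        for (int i = 2; i < heelStrikes.length; i++) {
            double gaitCycleTime = heelStrikes[i] - heelStrikes[i - 2]; // interval between same-foot strikes
            System.out.printf("Gait cycle ending at strike %d: %.2f s%n", i, gaitCycleTime);
        }
    }
}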
2.4.2. Data from Actigraph GT3X Because the output of each GT3X sensor was one single CSV file including the measurements for all subjects of an entire day, we developed a script that takes the timestamps for each walk as input and splits the file into individual walks. It can process an arbitrary number of files in batch mode. Having the data for each walk, further processing was performed with a set of R statistics scripts. We performed data quality assurance tasks, visualisation and statistical analysis of the data. This paper does not focus on modeling the gait parameters from the accelerometer data, as this is ongoing work.
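A possible form of the splitting step described above is sketched below: rows of the day-long recording whose timestamp falls inside a walk's start/end window are copied to a per-walk file. The column layout (timestamp in the first column), the headerless CSV and the file names are assumptions made for this illustration; the real GT3X export format and our actual script are not reproduced here.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: copies rows of a day-long recording into one file per walk, based on
// start/end timestamps. Assumes a headerless CSV whose first column is an epoch timestamp;
// file names, column layout and walk windows are invented for this sketch.
public class SplitWalks {
    public static void main(String[] args) throws IOException {
        long[][] walks = {{1_000L, 5_000L}, {60_000L, 66_000L}}; // {start, end} per walk

        List<PrintWriter> out = new ArrayList<>();
        for (int w = 0; w < walks.length; w++)
            out.add(new PrintWriter(new FileWriter("walk_" + w + ".csv")));

        try (BufferedReader in = new BufferedReader(new FileReader("day_recording.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                long t = Long.parseLong(line.split(",")[0].trim());
                for (int w = 0; w < walks.length; w++)
                    if (t >= walks[w][0] && t <= walks[w][1]) out.get(w).println(line);
            }
        }
        out.forEach(PrintWriter::close);
    }
}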
3. Results 3.1. Comparison to GAITRite In order to test the GaitParser software we randomly selected the data of 7 subjects from the study and did a direct comparison of the results to the GAITRite output. We got an error rate of less than 2% for the common gait parameters as shown in Table 2. 3.2. Output from Actigraph GT3X Figure 1a shows the output from the GT3X sensors mounted on both legs, for one subject walking at a normal speed. We can clearly see the acceleration peaks resulting from the steps in the X-axis (circled). Also identifiable are the time shifts between the two series of the same axis for left and right leg (arrow), indicating the alternate left and right steps. The Z-axis, capturing outward and inward movement, shows signals of low amplitude. We can observe a very stable acceleration pattern for each step, with regular amplitudes and durations.
The number of steps visually identifiable (4 for the left leg and 5 for the right leg) corresponds to the output from GAITRite for the same subject. There is a significant difference in amplitude between the left and right X-axis, indicating gait impairment. In Figure 1b we can see the matching data for the arms. It seems to contain equivalent information for gait analysis, but the amplitudes appear lower than for the legs. The time shift in the series of the same component seems to be less visible in the arms than in the legs. Table 2. Errors in parameters between GaitParser and GAITRite.
Parameter Mean Maximum error Error Percentage
AT (s) 2.85 0.00 0%
SL (cm) 78.32 0.01 0%
GT (s) 1.10 0.02 2%
SP (s) 0.48 0.00 0%
ST (s) 0.56 0.01 2%
SW (s) 0.48 0.00 0.00%
DS (s) 0.20 0.00 0%
Figure 1. Output from the GT3X sensors. a) leg sensors; b) arm sensors. Dotted: left sensor. Full: right sensor. Blue: X-axis. Green: Y-axis. Red: Z-axis. Arrow shows time shift. Circles show step pattern.
4. Discussion The preliminary results we achieved from the GT3X sensors indicate that accelerometers capable of recording high frequency raw data may prove to be valuable tools for assessment of gait parameters in daily life. This had been explored in the published literature for young and adult populations. Although quantification is missing, our observations indicate that we may expect similar results for elderly populations. The most promising location for mounting the motion sensors was the leg. The lack of robust functionalities for batch processing and exporting of the calculated gait parameters, in the GAITRite system, makes it hard to use the data for further processing with other tools. Our approach based on open-source code is a first step in the direction to address these restrictions. We were able to visually identify the key characteristics of the signal, but we are still working on methods to numerically process them. The ability of the sensors to record at high sample frequency provides us with enough information to explore pattern-matching techniques to identify the key events of each gait cycle, quantifying all relevant parameters. We do not present a direct comparison of the two systems, as it is a work still in progress. Thus, we can not make the definitive statement that the use of raw data from accelerometers can in fact be successful. Given the ease of use and 100% acceptance by subjects in this study, we believe similar rates will be achieved in clinical application scenarios.
5. Conclusion Further work is needed to improve the quantification of gait parameters from accelerometer data, in order to make the results reliable enough to be used for medical and epidemiological research. We are working on a complete software package capable of processing the data from GT3X in a simple and user-friendly manner. We have also developed the infrastructure and collected large amounts of data, namely in the scope of cohort studies, to perform robust validation of the method against a de facto standard. We want to encourage other researchers to explore and contribute to the GaitParser package as an open source research tool. Acknowledgments. This research was funded/supported by the Graduate School of Information Science in Health (GSISH) and the Technische Universität München Graduate School. A. Dias is supported by scholarship SFRH/BD/39867/2007 of the Portuguese Foundation for Science and Technology and Research Council of Norway Grant No. 174934. The authors wish to thank Jennifer Reinelt, Matej Svejda, Friederike Thun and Julia Strauß for their essential contributions to the project.
References
[1] Woollacott MH, Tang PF. Balance control during walking in the older adult: research and its implications. Phys Ther 1997;77:646–60.
[2] Kavanagh JJ, Menz HB. Accelerometry: a technique for quantifying movement patterns during walking. Gait Posture 2008;28:1–15.
[3] Henriksen M, Lund H, Moe-Nilssen R, Bliddal H, Danneskiod-Samsoe B. Test-retest reliability of trunk accelerometric gait analysis. Gait Posture 2004;19:288–97.
[4] de Bruin ED, Hartmann A, Uebelhart D, Murer K, Zijlstra W. Wearable systems for monitoring mobility related activities in older people; a systematic review. Clin Rehabil 2008;22:878–95.
[5] Holle R, Happich M, Löwel H, Wichmann HE; MONICA/KORA Study Group. KORA – a research platform for population based health research. Gesundheitswesen 2005 Aug;67 Suppl 1:S19-25.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-450
Safe Storage and Multi-Modal Search for Medical Images Jukka KOMMERIa,1, Marko NIINIMÄKIa, Henning MÜLLERb,c a Helsinki Institute of Physics, CERN, Switzerland b University of Applied Sciences Western Switzerland (HES–SO), Sierre, Switzerland c University Hospitals and University of Geneva, Switzerland
Abstract. Modern hospitals produce enormous amounts of data in all departments, from images to lab results, medication use, and release letters. For several years these data have most often been produced in digital form, making them accessible for researchers to optimize the outcome of the care process and analyze all available data across patients. The Geneva University Hospitals (HUG) are no exception, with a daily output of the radiology department of over 140'000 images in 2010, the majority of them being tomographic slices. In this paper we introduce tools for uploading and accessing DICOM images and associated metadata in a secure Grid storage. These data are made available to authorized persons using a Grid security framework, as security is a main problem in the secondary use of image data, where images are to be stored outside of the clinical image archive. Our tool combines the security and metadata access of a Grid middleware with visual search using GIFT. Keywords. grid networks, multi–modal information search, security
1. Introduction Images are becoming increasingly important in modern diagnosis and treatment planning. Given the large variety of radiology protocols and modalities, a detailed image interpretation is not always simple. By producing extremely large volumes of imaging data, tomographic modalities such as CT (Computed Tomography) and MRI (Magnetic Resonance Imaging), but also combined PET/CT (Positron Emission Tomography) and PET/MRI, can lead to an information overload and create a need for new tools to help interpret images. Content–based image retrieval has been proposed as one of the potential tools to aid diagnosis and use the large amount of visual data available [1]. So far, Grid technologies have been successfully employed inside hospitals to speed up image analysis by distributing the visual feature extraction of images to a cluster of computers [2], and by integrating image analysis, for example with the ProVision PACS of the hospital [3]. This in–house solution has the advantage that images do not need to leave the hospital network for analysis or treatment. The image analysis and retrieval software used in our case is the GNU Image Finding Tool (GIFT). GIFT is used for content–based visual retrieval, whereas so-called multi–modal systems combine visual and textual information in retrieval. This has been demonstrated to often give better results than either textual or visual information alone
1 Corresponding Author.
[4]. Very often, security constraints are not taken into account when discussing an inclusion of image retrieval into the PACS (Picture Archival and Communication System) or RIS (Radiology Information System) [5] and images are stored unencrypted. Steps towards integrating analysis with a secure storage of medical images have been taken in the Medical Data Manager (MDM) software [6]. MDM uses technologies of the EGEE (Enabling Grids for E–science in Europe) project’s gLite middleware [7]. The medical images themselves are stored in an encrypted format in the Disk Pool Manager (DPM) Grid storage [8]. Their metadata are stored in AMGA (gLite Grid MetaData Catalog) [9,10]. The symmetric encryption key is split into a number of pieces and stored in the distributed Hydra storage [11,12] according to the well–known Shamir’s Secret Sharing Scheme (SSSS). Even if one node of the Hydra storage is compromised, one piece of the key is not enough to reconstruct the actual symmetric encryption key to decrypt the data in question. This enhanced security measure is a feature requested by the EGEE BioMedical user group. The gLite security system has been audited by the Centre National d’Etudes Spatiales (CNES) and was validated. In this paper, we describe an integration between GIFT and an MDM–like system to implement on–demand analysis of images. Problems with the initial test for using MDM directly are also described. Moreover, an integration of Grid storage and GIFT is implemented. Two usage scenarios, metadata search and multimodal search, are described in Section 2. The components needed to enable these scenarios are described in Section 3. Section 4 contains a discussion and directions for future work. A functional prototype of it has been created as an evaluation of a technical concept in a project for the Swiss Academic Network SWITCH.
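The key-splitting idea behind Hydra can be illustrated with a toy 2-out-of-3 instance of Shamir's scheme over a prime field. The Java fragment below is a self-contained illustration of the principle only; it does not reflect Hydra's actual implementation, key sizes or parameters.

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy 2-out-of-3 secret sharing over a prime field, to illustrate the principle only.
// This is not Hydra's implementation; parameters and key size are arbitrary.
public class SecretSharingDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(128, rnd);        // field modulus
        BigInteger secret = new BigInteger(100, rnd).mod(p);      // stands in for an encryption key
        BigInteger slope = new BigInteger(100, rnd).mod(p);       // random coefficient of f(x) = secret + slope*x

        // Three shares: points (x, f(x)) for x = 1, 2, 3. A single share reveals nothing about the secret.
        BigInteger[][] shares = new BigInteger[3][];
        for (int i = 0; i < 3; i++) {
            BigInteger x = BigInteger.valueOf(i + 1);
            shares[i] = new BigInteger[]{x, secret.add(slope.multiply(x)).mod(p)};
        }

        // Any two shares recover f(0) = (y1*x2 - y2*x1) / (x2 - x1) mod p.
        BigInteger x1 = shares[0][0], y1 = shares[0][1];
        BigInteger x2 = shares[2][0], y2 = shares[2][1];
        BigInteger numerator = y1.multiply(x2).subtract(y2.multiply(x1)).mod(p);
        BigInteger recovered = numerator.multiply(x2.subtract(x1).modInverse(p)).mod(p);

        System.out.println("Recovered equals secret: " + recovered.equals(secret));
    }
}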
2. Functionality The goal of our system is to enable the following two functionalities: • Metadata search: The user has a valid VOMS (Virtual Organization Membership Services) certificate [13] and issues a command to search the metadata. Access is granted based on the VOMS role and the results of the search are returned. The keys to decrypt the images are obtained and applied, and the images are returned. • Multi–modal search: In the PACS system, the user selects an image. This image has a DICOM header containing structured information. Similar images are searched for by the visual image content using GIFT and by the metadata using a textual search. Only data matching the user’s privacy level are returned and shown on screen. This requires complete system integration as follows: 1. The image and its metadata are extracted from the PACS system. 2. A request containing the image data is sent to GIFT. 3. A request containing the metadata is sent to AMGA. 4. The results are combined and shown to the user based on the role. The combined system then allows for safe access to the distributed image data based on the privacy levels of the users. For adding images and metadata to a secure storage system, the following steps are taken: (i) the user is authenticated, (ii) the metadata of the images are loaded to AMGA, (iii) GIFT carries out a feature extraction of the image and the features are stored by GIFT,
(iv) the image is encrypted and encryption keys are stored by Hydra, (v) the encrypted image is stored in DPM. As the system tests were outside of the hospital network, a test database of DICOM files and files from the medical literature used in the ImageCLEF2 benchmark were used for testing. In Figure 1 the possibilities for data access are described. The authentication is performed via certificates with the VOMS server. Queries of the metadata can be performed with AMGA and visual retrieval with GIFT.
Figure 1. Work flow of multi–modal search for images and associated metadata.
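Step (iv) above, encrypting an image with a symmetric key before it leaves for Grid storage, can be illustrated generically with the Java cryptography API. AES-GCM is used here purely as an example; the cipher actually used by the gLite/Hydra tool chain is not specified in this paper.

import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Generic illustration of symmetric encryption before storage; AES-GCM is only an example
// cipher, and the actual algorithm used by the gLite/Hydra tool chain is not specified here.
public class EncryptBeforeUpload {
    public static void main(String[] args) throws Exception {
        byte[] image = "placeholder for DICOM bytes".getBytes();

        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();        // in the described system this key would be
                                                     // split and distributed via Hydra, not kept locally
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = enc.doFinal(image);      // this is what would be written to the DPM storage

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println("Round trip ok: " + Arrays.equals(image, dec.doFinal(ciphertext)));
    }
}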
3. Components The system uses components based on existing Grid tools from the official gLite repositories. The structure is shown in Figure 2. The software components were installed on several virtual machines so that each virtual machine contained a logical collection of the software. On one virtual machine we installed AMGA, GIFT, the glite user interface and one Hydra server. Then, we installed two separate Hydra servers on two separate virtual machines. As VOMS server we used an existing service from the Swiss Multi Science Computing Grid (SMSCG) and a host certificate for every virtual machine. The components are described in the following text. The AMGA Metadata Catalog is an EGEE gLite service allowing metadata handling on the grid. The main usage can be as a front end file metadata service, providing means of describing and discovering data files required by users and their jobs. It can also be used as a Grid–enabled database for applications that require structuring their data, providing a database–like service supporting Grid security features (X509 proxies and the VOMS authentication and authorization system). Finally, an additional feature allows the access of existing relational databases from a Grid environment (worker nodes, user interface, etc.), which enables the addition of Grid security to existing databases. Hydra, part of the European Middleware Initiative (EMI), encrypts data using a distributed key storage system. The passkey is generated and then split into components, which are shared across multiple key stores on different servers and, if possible, in different countries. This is more secure than a central key storage system,
2 http://www.imageclef.org/
which requires only one security breach to be compromised. In contrast, to obtain the passkey generated by Hydra, a coordinated attack on multiple servers is required.
Figure 2. System component network topology for the setup of safe image storage and access.
The GNU Image Finding Tool, GIFT3, is a content–based image indexing and retrieval package developed at the University of Geneva in the late 1990’s. GIFT uses techniques common for textual information retrieval and creates a large set of mainly binary features (global and local color and texture features) [14]. GIFT extracts these features and stores them in an inverted file. In a typical desktop PC, the speed of this feature extraction is about 1 or 2 images per second. An inverted file is created after the feature extraction enabling quick retrieval. Through the Multimedia Retrieval Markup Language (MRML) the system can easily be integrated with other applications.
4. Discussion and Future Work This article describes a safe storage and access system for medical images and associated metadata using methods based on standard Grid tools. The tools allow for an easy integration of safe storage of all data in encrypted form and access to the data via metadata search and content-based image retrieval. Initially the use of the MDM (Medical Data Management) system [6] was planned, but the software turned out to no longer be maintained and the security framework was outdated. Thus we decided to change the architecture for the metadata search. The system uses role-based access via VOMS servers to potentially confidential medical data. Our test system uses several storage servers of the Universities of Geneva and Bern, and a standard VOMS server of the Swiss Grid community. A similar structure can also be implemented inside hospitals, with the encryption keys being distributed on several machines. To limit security risks, all data (images, thumbnails) are always stored in encrypted format and access to metadata is protected by X.509 certificates. Encryption keys are stored in a distributed fashion, so a single security breach does not give access to the encryption keys. Such an architecture also allows data for research projects to be extracted from the PACS and stored in a safe format, which is quicker than accessing them via the often overloaded PACS system. Access to all data is granted via the role definition of a user according to defined access rights. The amount of data that has been stored in the test system is still small, and therefore future studies are
3 http://www.gnu.org/software/gift/
needed to measure the performance, precision/recall of the searches, and usability aspects of the system.
Acknowledgements. This work was partly supported by the SWITCH AAA project MedLTPC and the European Union in the context of the Khresmoi project (grant agreement no 257528).
References
[1] Müller H, Michoux N, Bandon D, Geissbuhler A. A review of content-based image retrieval systems in medicine–clinical benefits and future directions, International Journal of Medical Informatics, 73, pp. 1–23, 2004.
[2] Niinimäki M, Zhou X, Depeursinge A, Geissbuhler A, Müller H. Building a community grid for medical image analysis inside a hospital, a case study, Medical imaging on grids: achievements and perspectives (Grid Workshop at MICCAI 2008), New York, USA, pp. 3–12, 2008.
[3] Niinimäki M, Zhou X, de la Vega E, Cabrer M, Müller H. A web service for enabling medical image retrieval integrated into a social medical image sharing platform, in MEDINFO 2010, Studies in Health Technology and Informatics, 160, pp. 1273–1276, IOS Press, 2010.
[4] Eggel I, Müller H. Indexing the medical open access literature for textual and content–based visual retrieval, in MEDINFO 2010, Studies in Health Technology and Informatics, 160, pp. 1277–1281, IOS Press, 2010.
[5] Welter P, Deserno TM, Fischer B, Wein BB, Ott B, Günther RW. Integration of CBIR in radiological routine in accordance with IHE, in SPIE Medical Imaging 2009: Advanced PACS–based Imaging Informatics and Therapeutic Applications, 7264, 2009.
[6] Montagnat J, Frohner A, Jouvenot D, et al. A secure grid medical data manager interfaced to the gLite middleware, Journal of Grid Computing, 6, pp. 45–59, 2008.
[7] Laure E, Fisher SM, Frohner A, et al. Programming the grid using gLite, Computational Methods in Science and Technology, 12(1), pp. 33–45, 2006.
[8] Stewart GA, Cameron D, Cowan GA, McCance G. Storage and data management in EGEE, in Proceedings of the fifth Australasian symposium on ACSW frontiers, Darlinghurst, Australia, pp. 69–77, 2007.
[9] Santos N, Koblitz B. Metadata services on the grid, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 559(1), pp. 53–56, 2006.
[10] Koblitz B, Santos N, Pose V. The AMGA Metadata Service, Journal of Grid Computing, 6, pp. 61–76, 2008.
[11] Abadie L, Badino P, Baud JP, et al. Grid–enabled standards-based data management, Mass Storage Systems and Technologies, pp. 60–71, 2007.
[12] Frohner A, Baud JP, Rioja RMG, et al. Data management in EGEE, Journal of Physics: Conference Series, 219(6), 2010.
[13] Alfieri R, Cecchini R, Ciaschini V, et al. VOMS, an Authorization System for Virtual Organizations, Grid Computing, pp. 33–40, 2004.
[14] Squire DM, Müller W, Müller H, Pun T. Content–based query of image databases: inspirations from text retrieval, Pattern Recognition Letters (Selected Papers from The 11th Scandinavian Conference on Image Analysis SCIA ’99, Ersboll BK, Johansen P, editors), 21(13–14), pp. 1193–1198, 2000.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-455
Respiration Tracking Using the Wii Remote Game-Controller J. GUIRAO AGUILAR1, J. G. BELLIKAa, L. FERNANDEZ LUQUEb, V. TRAVER SALCEDOc a Department of Computer Science, Faculty of Science and Technology, University of Tromsø, Norway b Norut - Northern Research Institute, Tromsø, Norway c ITACA-TSB, Universidad Politécnica de Valencia, Spain
Abstract. Respiration exercises are an important part of the pulmonary rehabilitation of COPD (chronic obstructive pulmonary disease) patients. Furthermore, previous research has demonstrated that showing the respiration pattern helps patients to improve their breathing skills. We have developed a low cost and non-invasive prototype based on the Wii remote game controller's infrared camera to provide BPM (breaths per minute) measurement as feedback. It is also a comfortable solution: the user only wears passive markers, without wires, batteries or any kind of electronics. The lab evaluation with 7 healthy individuals showed that this approach is feasible when users are resting from their exercise. The BPM monitored during the tests showed a maximum error of less than 15%, and the RMSE (root mean square error) was lower than 6% in all the tests. Further research is needed to evaluate and adapt the system for COPD patients. In addition, more work is needed to develop applications that motivate and guide the users. Keywords. COPD, Wiimote, camera, pulmonary rehabilitation.
1. Introduction Chronic Obstructive Pulmonary Disease (COPD) is a common cause of death. According to the World Health Organization, there are 210 million people with COPD and it accounted for 5% of all deaths worldwide in 2005 [1, 2]. COPD is defined as a chronic airflow obstruction that can lead to reduced breathing skills and low exercise capacity [3]. Pulmonary rehabilitation is a key aspect of COPD treatment, in which exercise training and breathing techniques are essential [4, 5]. There are some barriers to pulmonary rehabilitation, such as lack of motivation and transportation problems [6]. However, a study by Collins et al [7] showed that giving feedback to the patients about how they are breathing has positive effects. If feedback could be combined with game-based rehabilitation, the outcome could be improved based on increased patient motivation [8].
1 Corresponding Author: Julián Guirao, Polytechnic University of Valencia. E-mail: [email protected]
In our study we used a prototype based on the Wii remote controller (aka Wiimote) to acquire and process the user’s breathing signal and visualize the information on a screen. Our approach uses the infrared camera inside the Wiimote, which is capable of tracking up to four light sources at the same time. Using such infrared cameras in the health domain has already been tested [9]. The Wiimote is a low cost device (its price is below 40€) that can easily be installed in the home of the patient as part of a telemedicine system, or integrated with computer-game applications for improving motivation [10]. The aim of this project was to implement a prototype to evaluate the feasibility of using the Wiimote's camera for tracking tiny movements such as breathing chest movements and, therefore, to test the possibility of using the Wiimote within respiration rehabilitation. If such an approach is feasible, the Wiimote could be used as the user interface for applications aiming at providing respiratory feedback to patients with pulmonary diseases.
2. Methodology The developed prototype comprised (a) hardware elements to capture the breathing signal and (b) software to process the data and provide feedback to users. 2.1. Architecture The designed prototype includes the following parts (Figure 1): • Array of 30 infrared LEDs (light emitting diodes) as light source. • Belt with attached markers. These markers were round reflecting metal pieces of 3 cm in diameter, placed approximately 10 cm apart. They were made by adapting ice-cream spoons. • Wiimote connected to a PC using a Bluetooth connection. • Computer to receive the data and show feedback. The system works as follows: light produced by the LEDs is reflected by the markers and captured by the Wiimote's camera. The camera is thus able to track the markers on the user's chest and send the data to the computer. As the markers move with the breathing, the computer processes these variations, obtaining a signal corresponding to the breathing pattern. Figure 1. Respiration tracking system
2.2. Software A desktop Java application was developed for calculations and signal processing. This application uses two open source Bluetooth libraries: Wiiuse and WiiuseJ. The first of
them establishes the communication link with the Wiimote to receive the raw data. WiiuseJ is written in Java and prepares the raw data from the Wiimote. The Wiimote, once connected to the computer over the Bluetooth connection, automatically starts to send the raw data. These data are the preprocessed image of the camera, which consists of up to four dots corresponding to detected light sources. The developed application processes the data and visualizes, among other things: 1) the breathing signal, extracted from the distance between the markers (detected as light sources), 2) a frequency analysis through Fast Fourier Transforms, and 3) the “breaths per minute” (BPM) value over the last 20 seconds. All data and information were saved to files for later analysis. 2.3. Evaluation Protocol During the evaluation, 7 volunteers, 3 women and 4 men, without breathing problems completed a series of tests. They were asked to wear a belt with markers on the chest in a tight but comfortable way. The exact position was over the abdomen because this is the place with the maximum displacement due to respiration. Every volunteer performed 3 tests of 3 minutes each: sitting on a chair, sitting on a stationary bicycle before doing exercise, and after doing 5 minutes of exercise. The Wiimote and the light source were placed around 25 cm away from the body. In every respiration, when the lungs were full and the expiration phase was about to start, the user was told to press a button on the Wiimote. Every time the button was pressed, a time stamp was stored in the computer, providing data to validate the system's outcomes. Finally, the data provided by the user and calculated by the system were compared to obtain the results.
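As an illustration of how a BPM value can be derived from the marker-distance signal described in Section 2.2, the Java sketch below counts local maxima above the signal mean with a one-second refractory period. The sampling rate, the synthetic signal and the method itself are assumptions made for this example and do not reproduce the prototype's actual algorithm.

// Illustrative only: estimates breaths per minute from a (synthetic) marker-distance signal
// by counting local maxima above the mean with a one-second refractory period.
public class BpmEstimator {
    public static void main(String[] args) {
        double fs = 10.0;                               // assumed samples per second
        double trueBpm = 15.0;                          // synthetic ground truth
        double[] signal = new double[(int) (fs * 20)];  // 20-second window
        for (int i = 0; i < signal.length; i++)
            signal[i] = Math.sin(2 * Math.PI * (trueBpm / 60.0) * i / fs);

        System.out.printf("Estimated BPM: %.1f%n", estimateBpm(signal, fs));
    }

    static double estimateBpm(double[] x, double fs) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;

        int peaks = 0, lastPeak = -(int) fs;
        for (int i = 1; i < x.length - 1; i++) {
            boolean isPeak = x[i] > mean && x[i] >= x[i - 1] && x[i] > x[i + 1];
            if (isPeak && i - lastPeak >= (int) fs) {   // ignore peaks closer than 1 s apart
                peaks++;
                lastPeak = i;
            }
        }
        return peaks * 60.0 * fs / x.length;            // peaks per window length, scaled to one minute
    }
}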
3. Results Octave2 was the tool chosen for the analysis of the gathered information. Every test was classified into one of the following groups: • Normal: no errors were detected during the test. • User induced error: the user made a mistake pressing the button, so the control signal is not valid. • System error: the system failed and the data shown were wrong. Of the 21 tests performed (3 per volunteer), 12 were classified as normal. Only two of them had a maximum error above 10% (10.17% and 14.04%). The RMSE (root mean square error), which is a good estimator of precision, was higher than 5% (5.8% and 5.24%) in only two of these tests. This means that the difference between the two signals was quite low in all these tests. The rest of the tests were classified as user induced errors (3 of them) and system errors (6 of them). The errors produced during the evaluation tests were a consequence of the impossibility of differentiating chest movements due to respiration from other body movements, or of insufficient light being reflected by the markers.
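For reference, the two error measures reported above can be computed as in the following sketch, which compares hypothetical system estimates against user-provided control values. Whether the study's errors are relative or absolute is not stated, so relative errors are assumed here, and the numbers are invented.

// Sketch of the two error measures: maximum error and RMSE between system estimates and
// user-provided control values. Relative errors are assumed; the numbers are invented.
public class ErrorMeasures {
    public static void main(String[] args) {
        double[] reference = {14.0, 15.0, 16.0, 15.5};  // control BPM values from button presses
        double[] estimated = {14.5, 14.8, 16.4, 15.0};  // BPM values computed by the system

        double maxError = 0, sumOfSquares = 0;
        for (int i = 0; i < reference.length; i++) {
            double relativeError = Math.abs(estimated[i] - reference[i]) / reference[i];
            maxError = Math.max(maxError, relativeError);
            sumOfSquares += relativeError * relativeError;
        }
        double rmse = Math.sqrt(sumOfSquares / reference.length);
        System.out.printf("Maximum error: %.1f%%, RMSE: %.1f%%%n", 100 * maxError, 100 * rmse);
    }
}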
2 More information available from: http://www.gnu.org/software/octave/
4. Discussion The Wii video console has been introduced with success within the rehabilitation field in diseases such as stroke or cerebral palsy. COPD patients have also started to play with it to reduce the burden of their symptoms [11, 12]. However, using the Wii remote as a respiration sensor, as proposed in this paper, has never been tested. The proposed system acquires the data and provides real-time feedback. The implemented prototype records breathing data and provides feedback to the patient about how slow or fast the respiration is. It is a non-invasive, low cost system with the following components: 1) a standard computer, 2) a US$40 Wiimote, 3) a US$5 belt with markers, and 4) an illuminator of about US$30-40. It is also a comfortable solution without wires, as the patient wears no batteries or any kind of electronics. The patient only wears a Velcro belt with some passive markers. The system developed has proved to be able to acquire breathing signals with surprising precision considering the materials involved. All the data collected by the application clearly showed the user's respiration signal. Since access to a breathing sensor was not available during the development of the prototype, it was impossible to compare the respiration signal directly, and its accuracy remains unknown. Therefore, a different evaluation protocol was adopted in order to assess the potential of the system and its limitations. The BPM calculated from this signal was compared to a test signal obtained from the user. Among the tests without any incidents the RMSE was lower than 6%, showing a relatively high precision. The maximum error was lower than 15%. In conclusion, it is fair to say that the Wiimote is able to work as a breathing monitoring device and offers a non-invasive, comfortable and low cost alternative. 4.1. Limitations This project is a preliminary study to test the feasibility of acquiring breathing signals with the Wiimote’s infrared camera. There are still many limitations that need to be addressed in order to create applications that are adapted to the needs of patients with COPD. In addition, the accuracy of the new prototype needs to be evaluated using a certified respiratory sensor and not just rely on the subjects' input. Some of the errors produced during the evaluation tests were a consequence of the impossibility of differentiating respiratory chest movements from other body movements. In the literature reviewed on acquiring breathing signals through visual devices, and also in this case, a prerequisite is that the user must be immobile [10]. Detecting respiration with visual devices while exercising, for example cycling on a stationary bike or walking on a treadmill, remains an unsolved challenge. Although some tests using two Wiimotes were performed, no solutions are proposed to solve this issue. The performance of the markers was also an important limitation. The range of proper operation was very short, allowing the user to be at a maximum distance of 20-30 centimeters from the Wiimote. The cause of this limitation was the material used to build the reflectors; it did not have the appropriate reflective qualities. These reflectors were modified ice-cream spoons and not expensive reflective devices. The evaluation tests were only carried out by healthy users. COPD patients may have an anomalous breathing pattern or smaller chest movements due to their impaired
lungs. The outcomes of the prototype with real patients have not been tested and might present additional issues. 4.2. Future Work At the end of this preliminary study, the conclusion is that the Wiimote is capable of acquiring breathing signals. Therefore the next step is to develop the system further to reach its potential and to find out the real limitations of this approach. The low performance of the markers highlights them as the first thing that should be improved. Better materials must be found in order to obtain higher reflectiveness from the passive markers. Active markers are another option that should be researched. An active marker would be composed of an LED, a resistor, a switch and a button-size battery, making it as lightweight as a passive one. An active marker seems very likely to outperform passive ones. A larger number of trials with COPD patients is also a milestone to achieve. The real challenge would be to overcome the limitation regarding body movements. Some studies employed two Wiimotes to achieve a kind of stereo vision or 3D vision that proved to be very accurate [13, 14]. This could be a good approach to avoid this problem.
References
[1] World Health Organization. WHO | Chronic obstructive pulmonary disease (COPD). WHO (cited 2010 June 1). Available from: http://www.who.int/mediacentre/factsheets/fs315/en/index.html
[2] Mathers C, Loncar D. Updated projections of global mortality and burden of disease, 2002-2030: data sources, methods and results. World Health Organization. 2005.
[3] Anto J, Vermeire P, Vestbo J, Sunyer J. Epidemiology of chronic obstructive pulmonary disease. European Respiratory Journal 17. 2001; 982-994.
[4] Gosselink R. Breathing techniques in patients with chronic obstructive pulmonary disease (COPD). Chronic Respiratory Disease 1. 2004; 163-172.
[5] Ries AL, Bauldoff GS, Carlin BW, et al. Pulmonary Rehabilitation: Joint ACCP/AACVPR Evidence-Based Clinical Practice Guidelines. Chest 131. 2007; 4S-42S.
[6] Smith SM, Partridge MR. Getting the rehabilitation message across: emerging barriers and positive health benefits. European Respiratory Journal 34. 2009; 2-4.
[7] Collins E, Laghi F, Langbein W, et al. Can Ventilation-Feedback Training Augment Exercise Tolerance in Patients with Chronic Obstructive Pulmonary Disease? American Journal of Respiratory and Critical Care Medicine 177. 2008; 844-852.
[8] Lange B, Flynn SM, Rizzo A. Game-based telerehabilitation. European Journal of Physical and Rehabilitation Medicine 45. 2009; 143-151.
[9] Orimoto A, Haneishi H, Kawata N, Tatsumi K. Monitoring and analysis of body surface motion caused by respiration. IEICE 108. 2009; 523-526.
[10] Decker J, Li H, Losowyj D, Prakash V. Wiihabilitation: Rehabilitation of Wrist Flexion and Extension Using a Wiimote-Based Game System. Governor's School of Engineering and Technology Research Journal. 2009.
[11] Schmidt KI, Porcari JP, Felix M, Gillette C, Foster C. Energy Expenditure of Wii Sports: A Comparison of Five Sport Games. Journal of Cardiopulmonary Rehabilitation & Prevention 28. 2008; 272.
[12] Khoo JCT, Brown ITH, Lim YP. Wireless On-Body-Network breathing rate and depth measurement during activity. IEEE Engineering in Medicine and Biology Society. Conference. 2008; 1283-1287.
[13] Scherfgen D, Herpers R. 3D tracking using multiple Nintendo Wii Remotes: a simple consumer hardware tracking approach. Proceedings of the 2009 Conference on Future Play. Canada 2009, 31-32.
[14] Cuypers T, Van den Eede T, Ligot S, et al. Stereowiision: stereo vision with two wiimotes. 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-460
A Nomenclature for the Analysis of Continuous Sensor and Other Data in the Context of Health-Enabling Technologies
Matthias GIETZELTa,1, Klaus-Hendrik WOLFa, Reinhold HAUXa
a Peter L. Reichertz Institute for Medical Informatics, University of Braunschweig – Institute of Technology and Hannover Medical School, Germany
Abstract. Due to the progress in technology, it is possible to capture continuous sensor data pervasively and ubiquitously. In the area of health-enabling and ambient assisted technologies we are faced with the problem of analyzing these data in order to improve or at least maintain the health status of patients. But due to the interdisciplinarity of this field, every discipline makes use of its own analysis methods. In fact, the choice of a certain analysis method often depends solely on the set of methods known to the data analyst. It would be an advantage if the data analyst knew about all available analysis methods and their advantages and disadvantages when applied to the manifold of data. In this paper we propose a nomenclature that structures existing analysis methods and assists in the choice of a method that fits a given measurement context and a given problem. Keywords. Continuous sensor data, health-enabling technologies, ambient assisted living, analysis methods, nomenclature
1. Introduction In research we face a large variety of possible questions. Every problem needs the right tool to produce a useful solution matching the question at hand. Especially when dealing with continuously recorded sensor data, there is an enormous number of methods to analyze the data, and new analysis methods are often developed with the availability of new data. In the area of health-enabling and ambient assisted technologies there are various use cases for continuously recorded sensor data. In the context of patient-centered care these sensor data should be seen in combination with data, e.g. from a medical health record, in order to increase the information gain [1]. Health-enabling technologies are used to detect emergency situations, to give feedback about the health status, or to assist in daily living [2]-[4]. Due to the interdisciplinarity of this field (e.g. informatics, electrical engineering, medical science and psychology) every
Corresponding Author: Matthias Gietzelt, E-mail: [email protected]; Peter L. Reichertz Institute for Medical Informatics, University of Braunschweig – Institute of Technology and Hannover Medical School, Mühlenpfordtstr. 23, D-38106 Braunschweig, Germany.
discipline makes use of its own analysis methods. In fact, the choice of a certain analysis method often depends solely on the set of methods known to the data analyst. It would be an advantage if the data analyst knew about all available analysis methods and their advantages and disadvantages when applied to the manifold of data. It is desirable to know which analysis method is most appropriate to handle a certain problem.
2. Objectives Our fundamental question was how to choose the most suitable analysis method(s) based on a certain measuring context and a certain problem. To the authors' knowledge, there is no tool that supports the choice of the most suitable method(s) and no systematization of methods for analyzing continuous sensor data.
3. Nomenclature for the Analysis of Continuous Data Since a nomenclature is an established approach for systematization (e.g. in economics a nomenclature is used to perform an environmental analysis [5]), we developed an open three-axial mono-hierarchical nomenclature in order to structure analysis methods for continuous sensor data. This nomenclature is intended to assist in the selection of one or more appropriate analysis methods in a certain context and for a certain problem. The following semantic dimensions appeared to be reasonable: • Context: description of the situation that was measured; • Problem: underlying problem to be solved; • Analysis method: a scheme of steps for analyzing the measured data. 3.1. Context Axis The first axis is the context axis. The context axis gives us information about the situation in which the sensor data were collected. First, we have to consider the object to be measured. In the area of health-enabling technologies the object is primarily a person. But there are also cases in which a certain room or an electrical device is primarily measured. It is also necessary to identify the reference system in which the data were collected. The reference system can be a person, a certain room, a flat or a car. For example, in the case of on-body sensors, the reference system is the person measured. For the analysis of the data of some on-body sensors it is crucial to provide information about the wearing position and orientation. In addition, the context axis describes the data source used. It is important to know the specifications of the sensor used to get a deeper insight and a better comprehension of the data. Wolf et al. developed a classification system for sensor-based data sources which describes relevant properties of a sensor [6]. This scheme was refined in [7]. This classification scheme was adopted in our nomenclature. But there may be other (non-sensor-based) data sources based on questionnaires, results of physical examinations, or data from an (electronic) medical record. These can also be seen as possible continuous data sources and therefore we added them to the proposed nomenclature.
The last sub-axis describes the data source’s type, an essential aspect for choosing an appropriate analysis method. Some analysis methods require quantitative, some require qualitative data sources. In addition, one should know how many channels the data source has and how many dimensions each channel has. Since continuous sensor data are recorded in time, a channel can be interpreted as a single sensor measuring a certain physical, chemical or biological characteristic. Please note that each channel can have one or more dimensions measured at the same point in time and a defined sample rate, whereas different channels can have varying sample rates. An example of a continuous one-dimensional signal is a body scale, whereas a triaxial accelerometer measures a three-dimensional signal. Multi-dimensional signals can be pictures or data from a computed tomography. Figure 1 shows the context axis and selected sub-axes.
Figure 1. Mind map of the context axis.
3.2. Problem Axis The second axis describes the underlying problem to be solved. Health-enabling technologies can be intended for a broad range of applications [1]. Thereby, a wide variety of aspects must be considered. An example of such a technology is a fall detector [8]. After a detected fall, the system has to initiate an alarm that has to be sent to persons who are able to help. Therefore, it is necessary to identify a strategy for an escalation and a de-escalation chain and to identify one or more suitable communication channels. It would also be helpful to give the helper auxiliary information, e.g. the location of the affected person. Besides emergency detection, health-enabling technologies can also be used for information and education purposes or even for wellness and sport [1]. Figure 2 shows some use cases for such technologies. This axis may help to go further into the problem, to define the functionality of the complete system, and therefore to define the outcome of the analysis. 3.3. Analysis Method Axis The third axis is the analysis method axis. In this axis we structured methods for analyzing continuous sensor data. The sub-axes are structured in a way that they represent a typical procedure for analyzing continuous sensor data. First, we have to extract candidate features from the data using filters or a frequency analysis. The second step is to select the most important features with the highest predictive capability. Thereby, we should avoid redundant or highly inter-correlated features. The
feature selection methods can be differentiated by their search behavior: if a method considers one feature at a time it is a single-factor (or univariate) analysis, otherwise a multi-factor (or multivariate) analysis. The third sub-axis contains the structure identification methods. These methods can be chosen using the information captured by the context and the problem axes. If a qualitative analysis is needed, one should prefer classification and indexing methods; in the case of a quantitative analysis one may choose a regression analysis.
Figure 2. Mind map of the problem axis.
Figure 3. Mind map of the analysis method axis.
3.4. Statement Model The statement model for the nomenclature summarizes all information that was derived from the situation measured: In a certain context x (if measuring the object x1 with reference system x2, using data sources x3 which provide data of type x4) and a given problem y, one should use the analysis method(s) z.
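A hypothetical encoding of this statement model as a small data structure is sketched below in Java; the axis values and recommended methods are invented examples and are not part of the nomenclature's actual content.

import java.util.List;

// Hypothetical encoding of the statement model; the axis values and the recommended
// methods below are invented examples, not the actual content of the nomenclature.
public class NomenclatureLookup {
    record Context(String object, String referenceSystem, String dataSource, String dataType) {}
    record Statement(Context context, String problem, List<String> methods) {}

    public static void main(String[] args) {
        List<Statement> statements = List.of(
            new Statement(new Context("person", "person", "triaxial accelerometer", "quantitative, 3 dimensions"),
                          "emergency detection (fall)",
                          List.of("band-pass filtering", "threshold-based classification")),
            new Statement(new Context("person", "flat", "motion detector", "qualitative, 1 dimension"),
                          "assistance in daily living",
                          List.of("frequency analysis", "hidden Markov model")));

        Context query = new Context("person", "person", "triaxial accelerometer", "quantitative, 3 dimensions");
        for (Statement s : statements)
            if (s.context().equals(query) && s.problem().startsWith("emergency"))
                System.out.println("Recommended analysis method(s): " + s.methods());
    }
}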
4. Discussion and Conclusion In this paper we introduced an open three-axial mono-hierarchical nomenclature that may assist in the selection of one or more adequate analysis methods for continuous sensor data. It covers the description of the context in which the data were collected, and the underlying problem to be solved. It was intentionally designed as an open nomenclature, so that new methods can be added. Within the proposed structure we also considered the typical procedure in data analysis. Despite this first step towards systematization, we are conscious that a nomenclature is not a sophisticated model for choosing analysis methods. Our future research focuses on enhancing and refining the model in order to choose analysis methods in a problem-adequate manner. 4.1. Limitations In the authors’ opinion there is, besides the development of new methods, an interdisciplinary and application-oriented research demand for structuring analysis methods. But there is also a demand for establishing guidelines for choosing such methods. However, there are also a number of limitations to be stated. First, in spite of an intensive literature review, we did not find any results related to a systematization or a nomenclature for choosing analysis methods for continuous sensor data. Second, the nomenclature presented is still work in progress and is intended to stimulate discussion about its demand and content. Third, the proposed nomenclature has not been evaluated yet. This will be done in our next step. To this end, we will systematically analyze the existing literature and conduct focus group discussions with experts in this field.
References
[1] Haux R, Howe J, Marschollek M, Plischke M, Wolf KH. Health-enabling technologies for pervasive health care: on services and ICT architecture paradigms. Inform Health Soc Care 33 (2008), 77-89.
[2] Saranummi N. IT applications for pervasive, personal, and personalized health. IEEE Trans Inf Technol Biomed 12 (2008), 1-4.
[3] Arnrich B, Mayora O, Bardram J. Pervasive or Ubiquitous Healthcare? Methods Inf Med 49 (2010), 65-6.
[4] Demiris G. Smart homes and ambient assisted living in an aging society. New opportunities and challenges for biomedical informatics. Methods Inf Med 47 (2008), 56-7.
[5] Fleisher CS, Bensoussan BE. Strategic and competitive analysis: Methods and techniques for analyzing business competition. Prentice Hall, New Jersey (USA), 2002.
[6] Wolf KH, Marschollek M, Bott OJ, Howe J, Haux R. Sensors for health-related parameters and data fusion approaches. Proceedings of the European Conference on eHealth ECEH 2007:155-61.
[7] Koch S, Marschollek M, Wolf KH, Plischke M, Haux R. On health-enabling and ambient-assistive technologies. What has been achieved and where do we have to go? Methods Inf Med 48 (2009), 29-37.
[8] Bourke AK, van de Ven PW, Chaya AE, OLaighin GM, Nelson J. Testing of a long-term fall detection system incorporated into a custom vest for the elderly. Conf Proc IEEE Eng Med Biol Soc 2008:2844-7.
[9] Klein LA. Sensor and Data Fusion: A Tool for Information Assessment and Decision Making. SPIE Publications, 2004.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-465
Image-based Classification of Parkinsonian Syndromes Using T2'-Atlases Nils Daniel FORKERTa,1, Alexander SCHMIDT-RICHBERGb, Brigitte HOLSTc, Alexander MÜNCHAUd, Heinz HANDELS b, Kai BOELMANSd a Department of Medical Informatics, University Medical Center Hamburg-Eppendorf, b Institute of Medical Informatics, University of Lübeck c Department of Diagnostic and Interventional Neuroradiology; And d Department of Neurology, University Medical Center Hamburg-Eppendorf. Germany
Abstract. Parkinsonian syndromes (PS) are genetically and pathologically heterogeneous neurodegenerative disorders. Clinical distinction between different PS can be difficult, particularly in early disease stages. This paper describes an automatic method for the distinction between classical Parkinson`s disease (PD) and progressive supranuclear palsy (PSP) using T2' atlases. This procedure is based on the assumption that regional brain iron content differs between PD and PSP, which can be selectively measured using T2' MR imaging. The proposed method was developed and validated based on 33 PD patients, 10 PSP patients, and 24 healthy controls. The first step of the proposed procedure comprises T2' atlas generation for each group using affine and following non-linear registration. For classification, a T2' dataset is registered to the atlases and compared to each one of them using the mean sum of squared differences metric. The dataset is assigned to the group for which the corresponding atlas yields the lowest value. The evaluation using leave-one-out validation revealed that the proposed method achieves a classification accuracy of 91%. The presented method might serve as the basis for an improved automatic classification of PS in the future. Keywords. Parkinsonian syndromes, Magnetic resonance imaging, Classification, Computer-assisted image analysis
1. Introduction To date, the diagnosis of Parkinsonian syndromes (PS) is mainly based on clinical criteria. Besides classical Parkinson's disease (PD), which is characterized by an asymmetric onset of slowness of movements, rigidity, and tremor, other Parkinsonian entities have to be separated, such as progressive supranuclear palsy (PSP). PSP is clinically characterized by vertical gaze palsy or hypometric vertical saccades and postural instability with falls in the first year, in combination with predominantly axial rigidity, and frontal behavioral abnormalities or dementia. Clinico-pathological studies demonstrated that only 41% to 88% of pathologically proven PSP cases are correctly diagnosed in life. Most often, PSP was clinically misdiagnosed as PD [1]. In clinically equivocal cases, additional investigation including standardized
Corresponding author: Nils Daniel Forkert, University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Martinistr. 52, 20246 Hamburg, Germany; E-mail: [email protected].
neuropsychological assessment, electro-oculography or assessment of postural stability can be helpful to distinguish between PD and PSP. So far, there are no disease-specific biomarkers. However, making a correct diagnosis early is becoming more relevant because of potential disease-modifying treatment strategies that crucially depend on an accurate diagnosis [2]. To enhance diagnostic accuracy, automated image-based decision support seems to be a promising approach. In contrast to manually defined brain regions, automated techniques map the morphological and/or metabolic patterns across the entire brain and do not require the subjective judgment of a rater. In PSP, for example, morphological analysis commonly revealed an atrophy of the rostral midbrain and the superior cerebellar peduncle. Using a morphological classification scheme, Duchesne et al. [3] analyzed T1-weighted MR image sequences to extract deformation information in the hindbrain region using non-linear registration. The results were used to train a support vector machine, which achieved a 91% classification accuracy in distinguishing PD from PSP and multiple system atrophy (MSA). Complementarily, a metabolic approach using Positron Emission Tomography was recently presented by Tang et al. [4], who used voxel-based spatial covariance mapping. Using this automated image-based classification, a high specificity (>90%) in distinguishing between Parkinsonian disorders was achieved. Regarding metabolic patterns, regional brain iron content might be a potential target for an automated classification scheme, as its accumulation and distribution in the brain differ between PD and PSP [5]. Paramagnetic substances in the brain such as non-heme iron (ferritin and hemosiderin) create local magnetic field inhomogeneities, producing intra-voxel dephasing and shortening transverse relaxation times. Therefore, an estimation of tissue iron can be obtained from magnetic resonance imaging (MRI) using T2-weighted image sequences. Here, T2' variations, caused by local susceptibility changes, are particularly sensitive to tissue iron stores and can be calculated by the relation 1/T2' = 1/T2* - 1/T2. Graham et al. [6] took advantage of this relation and investigated the iron deposits in the basal ganglia of PD patients and healthy controls in manually defined regions using a T2-weighted PRIME (partially refocused interleaved multiple echo) sequence. The analysis revealed shortened relaxation rates in the substantia nigra and caudal putamen in PD patients compared to controls. To date, the brain iron content has not been analyzed in atypical PS or used for an automatic image-based classification. The focus of this work was to perform a feasibility study of whether T2' values could be used for an automatic classification in PS using an atlas-based approach.
2. Material and Methods 2.1. Material 67 MRI datasets were available for the generation and evaluation of T2' atlases, including 24 healthy control subjects (62.8±9.9, 41.9–77.6; mean age ± SD, range), 33 PD patients (61.5±10.4, 41.3–79.9) and 10 PSP patients (66.3±7.8, 55.4-78.8) with a clinically probable diagnosis. All MR scans were performed on a 1.5T Siemens Sonata MR system.
Figure 1. Representative slice from a T2 sequence (a1-a3) and T2* sequence (b1-b3) and corresponding slice from the calculated T2' map (c).
Among others, the MR protocol contained a T2 and a T2* sequence. For T2 determination, a triple-echo sequence with echo times (TE) of 12, 84, and 156 ms was used. The T2*-weighted images were acquired using an echo-planar imaging sequence at TEs of 20, 52 and 88 ms. Both the T2 and the T2* sequences offer a matrix of … and a voxel spacing of … mm³ (see Fig. 1). A quantitative qT2 map was calculated by voxel-wise fitting of an exponential function to the signal intensity decay curve given by the multiple-TE data of the T2 sequence. In analogy to this, a quantitative qT2* map was calculated using the multiple-TE data of the T2* sequence. The T2' dataset can then be calculated voxel-wise from the quantitative qT2 and qT2* values by the relationship 1/T2' = 1/qT2* - 1/qT2. 2.2. T2' Atlas Generation In this work, three T2' atlases are generated for the classification of PS: one for healthy subjects, one for PD patients, and one for PSP patients. This procedure was employed for two reasons. First, the different atlases allow a visual definition of brain areas with differing T2' values by calculating the difference between each possible combination of atlases. Second, it was supposed that this procedure enables an automatic differentiation of PS. Due to different patient anatomies and positions during image acquisition, a registration of the datasets is required for the calculation of the atlases. For this, one healthy subject was chosen to serve as the main reference for the registration process. This dataset was selected since all important brain areas are covered, no motion artifacts or anatomical abnormalities are present, and the head is located in the center of the image. Since the T2' images are very noisy and contain metabolic rather than anatomical information (see Fig. 1c), a direct registration would be error-prone. Therefore, all transformation field calculations were performed on the images of the T2 triple-echo sequence with the highest TE, since these exhibit the best tissue contrast. After registration, the calculated transformation fields are used to transform the corresponding T2' datasets. The registration process was divided into two steps. In the first step, the datasets are pre-aligned using an affine registration by optimizing the mean sum of squared differences (mSSD) between the main reference dataset and each other dataset. For this, a manually segmented brain mask of the reference image was
used to improve registration accuracy. After pre-alignment, an intensity-based diffusion registration as described in [7] was applied to take non-linear differences between the images into account. After both registration steps have been performed and the final transformation has been applied to each T2' dataset, the three atlases were calculated by simple averaging. 2.3. Atlas Classification In order to use the generated atlases for the classification of PS, a T2' dataset not used for the atlas generation is separately registered using the same method as described in the previous section. Finally, the similarity of the given, registered dataset to the three atlases is determined by calculating the mSSD inside the brain mask for each atlas. The given dataset is assigned to the group (PD, PSP, controls) for which the mSSD is lowest.
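As a rough illustration of the two numerical steps described above (voxel-wise T2' computation from the quantitative maps and mSSD-based atlas assignment), a minimal NumPy sketch follows; the array shapes, brain mask and atlas volumes are hypothetical stand-ins for the study data.

```python
# Sketch of the voxel-wise T2' computation and the mSSD-based atlas assignment
# described above. Shapes, mask and atlas volumes are illustrative placeholders.
import numpy as np

def t2_prime(qt2, qt2_star, eps=1e-6):
    """Voxel-wise T2' from quantitative T2 and T2* maps: 1/T2' = 1/T2* - 1/T2."""
    r2_prime = 1.0 / np.maximum(qt2_star, eps) - 1.0 / np.maximum(qt2, eps)
    return 1.0 / np.maximum(r2_prime, eps)

def mssd(volume, atlas, mask):
    """Mean sum of squared differences restricted to the brain mask."""
    diff = (volume - atlas)[mask]
    return np.mean(diff ** 2)

def classify(volume, atlases, mask):
    """Assign the (already registered) T2' volume to the closest atlas."""
    scores = {group: mssd(volume, atlas, mask) for group, atlas in atlases.items()}
    return min(scores, key=scores.get), scores

# Illustrative data: small random volumes instead of real registered MR data.
rng = np.random.default_rng(0)
shape = (32, 32, 16)
qt2, qt2_star = rng.uniform(80, 120, shape), rng.uniform(40, 60, shape)
volume = t2_prime(qt2, qt2_star)
mask = np.ones(shape, dtype=bool)
atlases = {g: rng.uniform(50, 150, shape) for g in ("controls", "PD", "PSP")}
print(classify(volume, atlases, mask)[0])
```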
3. Experiments and Results For evaluation of the proposed atlas-based scheme for PS differentiation, a leave-one-out cross validation was performed. For this, every available dataset was classified using the described method. To prevent biased results, the atlases used were generated again, leaving the dataset to be classified out of the atlas calculation. Table 1 shows the results of the automatic classification using the generated atlases. The results show that 91% of all datasets were correctly classified. The presented method yields a sensitivity of 0.8 and a specificity of 1.0 for PSP patients. These quantitative values should be handled with caution since only 10 datasets were available for this syndrome. The presented method yields a specificity of 0.92 and a sensitivity of 0.93 for the classification of healthy subjects, while a specificity of 0.94 and a sensitivity of 0.91 were achieved for PD patients. These results can be assumed to be more significant due to the larger number of datasets analyzed. Overall, the results are in the range of typical state-of-the-art methods, e.g. [3,4].

Table 1. Results of the leave-one-out validation using T2' atlases

Group            Classified as Healthy   Classified as PD   Classified as PSP
Healthy (n=24)   22                      2                  0
PD (n=33)        2                       31                 0
PSP (n=10)       1                       1                  8
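The overall accuracy quoted above follows directly from the confusion matrix of Table 1; the short check below recomputes it together with the per-group fraction of correctly classified datasets (class order and variable names are ours).

```python
# Recomputing the overall accuracy of Table 1 (rows: true group,
# columns: assigned group) and the fraction correctly classified per group.
import numpy as np

groups = ["Healthy", "PD", "PSP"]
cm = np.array([[22, 2, 0],    # Healthy (n=24)
               [2, 31, 0],    # PD (n=33)
               [1, 1, 8]])    # PSP (n=10)

print(f"overall accuracy: {np.trace(cm) / cm.sum():.2f}")   # 61/67 ≈ 0.91
for i, g in enumerate(groups):
    print(f"{g}: {cm[i, i]}/{cm[i].sum()} correctly classified")
```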
4. Discussion This paper describes the first stage of the development of an automatic image-based distinction between PD, PSP and healthy controls using T2' atlases. In summary, the current results already reveal a very good discrimination between all three groups. Nevertheless, more datasets of PSP patients are required to obtain more significant results. The approach offers several opportunities for a more sophisticated analysis in the future. For example, so far only the mSSD as a global criterion has been used for the
classification. Improved results might be possible if only strategic brain areas, which are mainly affected by brain iron metabolism in different PS, are selected and analyzed using more sophisticated classification methods, such as support vector machines. Currently, the generated T2' atlases are used to identify these brain areas of clinical interest for future extensions of the proposed method. For this, the difference between each two generated atlases is calculated and currently visually inspected by a neurologist (see Fig. 2). It should be pointed out that only PD and PSP patients have been included in the described automatic differentiation. Therefore, patients with other Parkinsonian syndromes, like multiple system atrophy or corticobasal syndrome, should be included in future classifiers. Furthermore, classification results might be improved in the future by the inclusion of additional MR modalities, like the apparent diffusion coefficient. In summary, the proposed image-based Parkinsonian syndrome differentiation using T2' atlases might enable an automatic classification with reliable results in the future.
Figure 2. Selected slices from the main reference dataset (left) and corresponding differences between the T2' atlas calculated from the healthy subjects and the T2' atlas calculated from the PD patients (right). Bluish colors indicate higher T2' values in the control atlas, yellowish colors higher T2' values in the PD atlas.
References
[1] Hughes AJ, Daniel SE, Ben-Shlomo Y, Lees AJ. The accuracy of diagnosis of parkinsonian syndromes in a specialist movement disorder service. Brain 125 (2002), 861-70.
[2] Tolosa E, Wenning Y, Poewe W. The diagnosis of Parkinson's disease. Lancet Neurol 5 (2006), 75-86.
[3] Duchesne S, Rolland Y, Vérin M. Automated computer differential classification in Parkinsonian syndromes via pattern analysis on MRI. Acad Radiol 16 (2009), 61-70.
[4] Tang CC, Poston KL, Eckert T, et al. Differential diagnosis of parkinsonism: a metabolic imaging study using pattern analysis. Lancet Neurol 9 (2010), 149-58.
[5] Schenck JF, Zimmerman EA. High-field magnetic resonance imaging of brain iron: birth of a biomarker? NMR Biomed 17 (2004), 433-45.
[6] Graham JM, Paley MN, Grünewald RA, Hoggard N, Griffiths PD. Brain iron deposition in Parkinson's disease imaged using the PRIME magnetic resonance sequence. Brain 123 (2000), 2423-31.
[7] Schmidt-Richberg A, Werner R, Ehrhardt J, Handels H. Landmark-driven parameter optimization for non-linear image registration. Image Processing, SPIE Medical Imaging 2010 (in press).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-470
Cell Edge Detection in JPEG2000 Wavelet Domain – Analysis on Sigmoid Function Edge Model
Vytenis PUNYSa,1, Ramunas MAKNICKASa
a Department of Multimedia Engineering, Kaunas University of Technology, Lithuania
Abstract. Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification, based on image analysis, might be faster if performed on the compressed data (approximately 20 times less than the original amount), representing the coefficients of the wavelet transform. The analysis of possible edge detection without the inverse wavelet transform is presented in this paper. Two edge detection methods, suitable for the JPEG2000 bi-orthogonal wavelets, are proposed. The methods are adjusted according to the calculated parameters of a sigmoid edge model. The results of the model analysis indicate the more suitable method for a given bi-orthogonal wavelet. Keywords. Edge detection, edge strength, image segmentation, discrete wavelet transform, wavelet domain.
1. Introduction Scanning microscopes, which were introduced on the medical equipment market some years ago, keep improving their throughput and are targeting routine clinical use, being able to acquire high-quality digital images of whole slide samples with scanning times between 1 and 4 minutes per sample (depending on the resolution) in batches containing up to 384 slides (which corresponds to 5.27 Tbyte of uncompressed image data). The scanning is then followed by a clinical assessment and an image-based cell quantification, which, in an ideal situation, should be done automatically. Automatic analysis of virtual microscopy image batches requires considerable computing resources, which are currently available for research purposes (e.g. deploying GRID and cloud computing architectures), but are not yet applied in routine clinical use. Therefore a technique that could considerably reduce the computing resources (or image processing times) needed for microscopy image analysis would shorten the way of virtual microscopy into wide clinical use. Virtual microscopy images (VMI) suitable for cell quantification are usually large (80K x 60K pixels and larger), containing 14.4 Gbytes or more of data [1]. Naturally, they are stored in a compressed form, and it is in VMI that the JPEG2000 image compression standard is more widely used than the widespread conventional JPEG. Thanks to its more sophisticated mathematical techniques, JPEG2000 gives
Corresponding author: V.Punys, Department of Multimedia Engineering, Kaunas University of Technology. Address: Studentu str. 56-305, LT-51424 Kaunas, Lithuania. E-mail: [email protected]
approximately double the compression ratio compared to conventional JPEG. Furthermore, the hierarchical compression scheme implemented by JPEG2000 (based on the wavelet transform, which forms a multi-resolution pyramid of image data and wavelet coefficients) coincides with the way the images are reviewed by pathologists. The diagnostic assessment performed by pathologists is based on the visibility and staining (intensity, area) of either cell nuclei or cell membranes (together with surrounding areas) in microscopy images (see Figure 1). Technically, visibility and staining are correlated with the height and width of object edges in VMI.
Figure 1. Detecting cell membranes and the surrounding stained areas.
There is a general understanding that the wavelet coefficients "carry" information about both the magnitude and the location of the signal. This is proven by numerous successful research efforts using the wavelet transform for the detection of signal peaks or pre-defined shape segments. However, the basic wavelet functions that ensure good detection results are far from being the best for image compression applications. The goal of this work is to study which edges (including their parameters: height and width) could be detected in compressed image data structures (the wavelet domain) using the bi-orthogonal wavelets defined in the JPEG2000 standard for lossless (CDF 5/3) and lossy (CDF 9/7) compression [2], without an image decompression step. The height and width of detected edges might be used for automatic cell quantification.
2. Edge detection in space and wavelet domain The Canny edge detector has been proven optimal in the sense of good edge detection, localization and minimal response in the Euclidean space domain [5]. Usually, the edge strength of objects of interest varies along the edge, and it is hard to set threshold levels that discriminate the object of interest among surrounding objects. That is why multi-scale image analysis is helpful. One such multi-scale transformation is the wavelet transform (WT). The wavelet transform modulus maxima (WTMM) method is the analogue of Canny edge detection in the multi-scale domain. Since an image is a 2D discrete digital signal, this method is adapted to the discrete wavelet transform (DWT). In the DWT, the scaling function smoothes the image at different scales, removing noise, while the wavelet function detects edges of the same object at different scales. As the 2D DWT analyses the image in horizontal, vertical and diagonal directions, the edge gradient is calculated at every point of the image. The modulus maxima of the WT are defined by the positions where the modulus (magnitude) of the WT, i.e. the gradient, is locally maximal in the direction of its argument. Therefore the WTMM method requires the wavelet function to be the first derivative of its scaling function [3]. The wavelet functions used in the JPEG2000 standard violate this requirement; therefore new methods for edge detection have to be sought.
3. Edge modelling by a sigmoid function Edges describe properties of objects in a given signal. One of the most used descriptors is the strength of an edge [4]. The wavelet coefficients differ significantly from zero at irregular regions of a signal, so they can be used to evaluate the strength of an edge. More generally, an edge is described by three parameters: its height a, width b and position c. In a digital image, an edge can be modelled by a sigmoid function s(x, a, b, c) of these three parameters (1).
Figure 2. Edge model sigmoid function, shown for two values of the parameter c: (a) and (b).
The DWT w(x, a, b, c) is a discrete convolution between the discrete signal and a digital high-pass filter. Edge detection in the wavelet domain is performed by analysing the local maximum values of the DWT coefficients. The inverse DWT (IDWT) is multi-valued, as for every wavelet coefficient there is a set of pairs of parameters of the sigmoid function. For a sigmoid edge of height a and width b, the local absolute maxima of its wavelet coefficients at the considered scales lie between corresponding minimum and maximum values. It is therefore necessary to calculate and analyse all pairs of parameters (a, b) of edges whose local absolute maxima of wavelet coefficients at these scales are equal to the given values.
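To make the forward direction of this analysis concrete, the sketch below builds a sigmoid edge and takes a simple high-pass response at a few dyadic scales; the filter taps and the scale handling are simplified placeholders, not the actual CDF 5/3 or 9/7 lifting steps of JPEG2000.

```python
# Sketch: wavelet-style high-pass responses of a sigmoid edge at several scales.
# The Haar-like high-pass filter and the dyadic downsampling used here are
# simplifications standing in for the JPEG2000 CDF 5/3 / 9/7 filter banks.
import numpy as np

def sigmoid_edge(x, a, b, c):
    """Sigmoid edge model: height a, width b, position c."""
    return a / (1.0 + np.exp(-(x - c) / b))

def highpass_maxima(signal, levels=3):
    """Absolute maximum of a simple high-pass response at each dyadic scale."""
    hp = np.array([1.0, -1.0])          # placeholder high-pass filter
    maxima, current = [], signal
    for _ in range(levels):
        detail = np.convolve(current, hp, mode="same")
        maxima.append(np.abs(detail).max())
        current = current[::2]          # dyadic downsampling
    return maxima

x = np.arange(256, dtype=float)
edge = sigmoid_edge(x, a=150.0, b=3.0, c=128.0)
print(highpass_maxima(edge))            # one absolute maximum per scale
```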
4. Methods for reverse calculation of edge model parameters Method 1 calculates the parameters a and b of all suitable edges whose absolute maximum wavelet coefficients at the considered scales are equal to the given values. This is achieved by calculating the intersection of the intervals of edge width obtained at every scale; the interval enclosing the correct width lies in this intersection. The more scales are considered, the shorter the interval of edge width that is likely to be obtained. The result is a set of width ranges for every height of an edge. Method 2 calculates height intervals at various widths of an edge. The idea of this method is that the wavelet coefficients are homogeneous of degree 1 in the edge height, so the unknown height is calculated from the ratio between a vector of wavelet coefficients of the edge with unknown height (taken from the given signal) and a vector of pre-calculated wavelet coefficients of an edge with known height.
In real applications the width parameter is usually unknown. Therefore, a set of values is taken (sampled) from a given range. Then, for every width, the bounds of the height range are calculated. In order to obtain fewer pairs of parameters, more than one wavelet function should be involved in the calculation.
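A minimal sketch of the Method 2 idea (height estimation via the degree-1 homogeneity of the wavelet coefficients in the edge height) follows; it reuses the hypothetical sigmoid_edge and highpass_maxima helpers from the previous sketch and sweeps a sampled set of candidate widths.

```python
# Sketch of Method 2: since the high-pass responses scale linearly with the
# edge height, the unknown height is estimated as the ratio between observed
# responses and responses pre-calculated for a reference edge of known height.
# Reuses sigmoid_edge() and highpass_maxima() from the previous sketch.
import numpy as np

x = np.arange(256, dtype=float)
observed = highpass_maxima(sigmoid_edge(x, a=120.0, b=4.0, c=128.0))

candidate_widths = [1.0, 2.0, 4.0, 8.0]      # sampled width range (assumption)
reference_height = 1.0
for b in candidate_widths:
    reference = highpass_maxima(sigmoid_edge(x, a=reference_height, b=b, c=128.0))
    ratios = [o / r for o, r in zip(observed, reference)]
    # One height estimate per scale; their spread bounds the height interval.
    print(f"width {b}: height interval [{min(ratios):.1f}, {max(ratios):.1f}]")
```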
5. Experimental results Method 1 gives a set of parameter pairs (a,b), and the interval of the detected edge width is analysed. The mean lengths of all detected width intervals were calculated for a given (pre-defined) height and width of the edge, for m = 3 DWT scale levels. The results, presented in Figures 3a and 3b, show that the accuracy of the detected width is within 0.5 pixel for edges wider than 3 pixels. The analysis of the intervals shows better results when detection is performed using CDF 5/3 at almost all widths of the given edge; Method 1 gives slightly worse results for CDF 9/7, i.e. lossy compressed data. As Method 2 calculates height intervals at various widths of the edge, the shorter the length of these intervals, the higher the accuracy of the estimated edge height (see Figures 3c and 3d). The CDF 5/3 wavelet presented better results (lower mean length of intervals).
Figure 3. Modelling of detection: Method 1 (a,b) - mean length of detected width intervals; Method 2 (c,d) - mean length of detected height intervals. Pre-defined edge height: (a,c) a=50; and (b,d) a=150.
Both methods produce many pairs of edge parameters when the local absolute maxima of the DWT wavelet coefficients of a modelled edge have the same values. The more different values of edge height ai there are in the pairs (ai, bi), the more difficult it is to distinguish an object. The results, presented in Figure 4, show that it is better to use CDF 9/7 than CDF 5/3 in Method 1, while CDF 5/3 is more appropriate in Method 2. If an object of interest has an edge height of less than 150, then it is better to use the coefficients of the DWT with CDF 5/3 in Method 2; otherwise, Method 1 with CDF 9/7 is more appropriate.
Figure 4. Mean number of different detected edge heights at pre-defined edge height
6. Conclusions Two methods have been proposed for the assessment of edge parameters (height, width and position) using wavelet coefficients calculated by the JPEG2000 image compression scheme from edges modelled by a sigmoid edge function. The modelling results are encouraging enough to continue the research on edge detection in the wavelet domain for virtual microscopy imaging. Analysis of these methods showed different and unambiguous correspondence of edge parameter vectors to wavelet coefficients. The variability of the detected edge width for any height does not exceed 0.5 pixel for edges wider than 3 pixels. Method 1 is more suitable for lossy compressed images, and Method 2 for lossless compressed ones. Naturally, detection results in lossless compressed images are better than in lossy images, except when the height of an object exceeds 150 – then Method 1 is more accurate for height detection in lossy compressed images. Acknowledgements. The work has been carried out within the EU COST Action IC0604 "Telepathology Network in Europe: EURO-TELEPATH", supported by the EU Science Foundation, the R&D programme of the Kaunas University of Technology (Lithuania) and grant COST-42/10 from the Research Council of Lithuania. The authors express their gratitude to all the colleagues involved in the COST Action, especially to Professor Touradj Ebrahimi from École Polytechnique Fédérale de Lausanne for the valuable discussions on the subject of this paper.
References
[1] Garcia Rojo M, et al. Digital pathology in Europe: coordinating patient care and research efforts. Studies in Health Technology and Informatics, vol. 150, pp. 997-1001, 2009.
[2] Christopoulos C, Skodras A, Ebrahimi T. The JPEG2000 Still Image Coding System: An Overview. IEEE Trans. Consumer Electronics, vol. 16, pp. 1103-1127, 2000.
[3] Mallat S, Hwang WL. Singularity Detection and Processing with Wavelets. IEEE Transactions on Information Theory, vol. 38, pp. 617-643, 1992.
[4] Kitanovski V, Taskovski D, Panovski L. Multi-scale Edge Detection Using Undecimated Wavelet Transform. Proceedings ISSPIT 2008, pp. 385-389.
[5] Angel P, Morris C. Analyzing the Mallat Wavelet Transform to Delineate Contour and Textural Features. Computer Vision and Image Understanding, vol. 80, pp. 267-288, 2000.
Information Modeling, Storage and Retrieval
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-477
Using Multimodal Mining to Drive Clinical Guidelines Development Emilie PASCHEa,b1, Julien GOBEILLa,c, Douglas TEODOROa,b, Dina VISHNYAKOVAa,b, Arnaud GAUDINATa,c, Patrick RUCHa,c, Christian LOVISb a BiTeM Group, Geneva, Switzerland b Division of Medical Information Sciences, University Hospitals of Geneva and University of Geneva, Geneva, Switzerland c Information Science Department, University of Applied Sciences, Geneva, Switzerland
Abstract. We present exploratory investigations of multimodal mining to help designing clinical guidelines for antibiotherapy. Our approach is based on the assumption that combining various sources of data, such as the literature, a clinical datawarehouse, as well as information regarding costs will result in better recommendations. Compared to our baseline recommendation system based on a question-answering engine built on top of PubMed, an improvement of +16% is observed when clinical data (i.e. resistance profiles) are injected into the model. In complement to PubMed, an alternative search strategy is reported, which is significantly improved by the use of the combined multimodal approach. These results suggest that combining literature-based discovery with structured data mining can significantly improve effectiveness of decision-support systems for authors of clinical practice guidelines. Keywords. Multimodal mining, information retrieval, clinical guidelines, resistance profile, antibiotic cost.
1. Introduction Since the early use of antibiotics, it has been observed that the selection pressure imposed by their massive use led to a gradual acquisition of bacterial resistance to antibiotics, rendering them ineffective for treating infectious diseases. Thus it became a priority to regulate antibiotic use, and clinical guidelines were developed with that intention. An evidence-based approach is being adopted by most of the organizations developing clinical guidelines, since it provides a very rigorous basis by directly linking the recommendation to evidence [1]. However, the systematic review of the literature required by this approach is a time-consuming and labor-intensive process [2]. As part of the DebugIT (Detecting and Eliminating Bacteria UsinG Information Technology) FP7 European project [3], we aim at facilitating clinical guidelines development and maintenance with the creation of an innovative tool called KART (Knowledge Authoring and Refinement Tool), which gathers literature search and information extraction capabilities based on an advanced question-answering framework. In a previous report [4], we presented an approach to help generate
Corresponding author: Emilie Pasche, University Hospitals of Geneva, Division of Medical Information Sciences, Rue Gabrielle-Perret-Gentil 4, 1211 Geneva 14, Switzerland; E-mail: [email protected].
guidelines based exclusively on text-mining. A question-answering engine performed an automatic literature scanning, followed by the identification of hypothetical treatments, thus accelerating systematic reviews. Infectious disease experts can then validate the correct propositions out of the automatically-generated treatments. In this report, we describe how non-textual modalities and in particular clinical data as stored in operational clinical databases can be injected into the baseline system to improve recommendations, using an association model directly inspired by Aronson et al [5]. The structured data used in our experiments gathers clinically-observed resistance profiles, since it is well-known that performing antibiograms before prescription is the optimal way to prescribe an appropriate antibiotic, and prescription cost-related information, assuming healthcare should minimize health costs. The number of data analysis methods that can be used to combine multimodal contents is virtually infinite since learning algorithms and distance calculi are in general highly data independent. In our experiment we borrowed the methodological framework from the Cranfield paradigm [6] and the linear combination approach pioneered by Fox et al [7]. Numerous subsequent works have been reported to improve the basic method; however the original approach applied strictly to textual observations as for instance when several engines are combined to generate a meta-engine. In contrast, our fusion experiments merge text-generated associations with prior probabilities directly extracted from a clinical datawarehouse.
2. Data and Methods In this study, clinical guidelines are represented using a simplified design, assuming the following hypothesis: disease + pathogen + conditions = antibiotics. A question-answering engine, EAGLi (Engine for Question-Answering in Genomics Literature, http://eagl.unige.ch/EAGLi) [8], is queried with the parameters disease, pathogen and conditions to retrieve a set of the most-cited antibiotics ranked by relevance. The computation of this set is based on the screening of 50 documents from which possible answers are extracted. In our experiments, the target terminology, corresponding to the space of possible answers, consists of 70 antibiotics normalized by their respective WHO-ATC code. A set of synonyms derived from the Medical Subject Headings (MeSH) is used to augment the recall of the answers. Two search engines are used: PubMed, with Boolean retrieval and chronological ranking, and easyIR, with vector-based similarity ranking. The set of the most-cited antibiotics is then re-ordered based on the injection of costs and resistance profiles. The re-ranking is based on the attribution of a penalty or bonus to the original relevance scores, resulting in a new ranking. Thus, expensive antibiotics and antibiotics with high resistance get lower ranks, while the cheapest antibiotics and antibiotics with low resistance obtain a better rank. The injection of cost is based on a cost list containing 129 products, corresponding to 17 distinct substances, provided by the HUG (University Hospitals of Geneva) pharmacy supply chain. The very same substance can be mentioned several times (Table 1), representing different routes and/or dosages. We attempt to obtain a daily cost for each antibiotic present in the list. Prescription data of the HUG are used to obtain the number of daily doses usually prescribed for a given route/dosage of each product. Moreover, as our system is based on the substance and not the marketed product, different products corresponding to the same substance must be aggregated.
This is based on the prescription frequency of each product in the clinical data of the HUG. Finally, for antibiotics without cost information, we attribute an arbitrary cost. This value is set during the tuning phase by varying the bonus/penalty value from 0 (which expresses a minimal price) to 100 (which expresses a maximal price). The injection of resistance profiles is based on antibiograms present in the HUG Clinical Data Repository [9] of the DebugIT project. As antibiograms for the pair pathogen-antibiotic were retrieved for only 5% of the data, we decided to search antibiograms for the antibiotic only, disregarding the targeted pathogen. From these antibiograms, we extracted the number of resistant (R) and susceptible (S) outcomes. A susceptibility score is calculated for each antibiotic: S/(S+R). When no antibiogram data is available, an arbitrary susceptibility score is assigned. This score is obtained during the tuning phase by varying the bonus/penalty value from 0 (always resistant) to 1 (always susceptible).

Table 1. Extract of the HUG's cost table. Column Identifier ATC indicates the ATC identifier of the antibiotic. Column Term ATC displays the substance name. Column Int_Art_Ach mentions the name of the drug, as well as its form, dosage and number of doses in the box. Column Public cost indicates the cost of the article in Swiss Francs (for the sake of confidentiality, real prices are not displayed).

Identifier ATC   Term ATC        Int_Art_Ach                          Public cost
J01MA02          Ciprofloxacin   Ciproxine p.o susp 5g=100ml (pce)    62.90
J01MA02          Ciprofloxacin   Ciproxine p.o susp 10g=100ml (pce)   104.95
J01MA02          Ciprofloxacin   Ciproxine fiol 400mg=200ml (pce)     50.95
J01MA02          Ciprofloxacin   Ciproflox cpr 250mg (1x20)           37.50
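A sketch of the aggregation step described above (per-product prices folded into one daily cost per substance, weighted by how often each product is prescribed) is given below; the daily-dose figures and prescription counts are invented for illustration, while the prices follow Table 1.

```python
# Illustrative aggregation of product-level prices into a substance-level
# daily cost, weighted by prescription frequency. Daily doses and
# prescription counts are invented; prices follow Table 1.
products = [
    # (substance, price per unit, units per day, prescriptions)
    ("Ciprofloxacin", 62.90, 1, 20),
    ("Ciprofloxacin", 104.95, 1, 5),
    ("Ciprofloxacin", 50.95, 2, 40),
    ("Ciprofloxacin", 37.50, 2, 120),
]

totals, weights = {}, {}
for substance, price, units_per_day, n_prescriptions in products:
    daily = price * units_per_day
    totals[substance] = totals.get(substance, 0.0) + daily * n_prescriptions
    weights[substance] = weights.get(substance, 0) + n_prescriptions

daily_cost = {s: totals[s] / weights[s] for s in totals}
print(daily_cost)   # frequency-weighted daily cost per substance
```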
A collection of 72 rules extracted from the geriatrics guidelines of the HUG is manually translated and normalized to obtain a machine-readable benchmark [10], following the schema of our simplified clinical guidelines: disease + pathogen + conditions = antibiotics. The collection is divided into two sets: a tuning set of 23 rules used for the design of the optimal recommendation system and an evaluation set of 49 rules used for the validation of the final system on previously unseen contents.
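The complete re-ranking described in this section can be illustrated with a small sketch; the scoring combination, the default values for missing data and the example antibiotics are assumptions made for illustration only, not the tuned parameters reported below.

```python
# Illustrative re-ranking of text-mined antibiotic candidates with
# resistance-profile and cost information. Weights and defaults are
# arbitrary placeholders, not the tuned parameters of the study.

def susceptibility(susceptible, resistant):
    """Susceptibility score S / (S + R) from antibiogram counts."""
    return susceptible / (susceptible + resistant)

def rerank(candidates, profiles, daily_costs, max_cost, default_susceptibility=0.1):
    reranked = []
    for name, relevance in candidates:
        s = profiles.get(name, default_susceptibility)
        cost = daily_costs.get(name, max_cost)      # missing cost = maximal penalty
        bonus = s - (cost / max_cost)                # reward susceptibility, penalise cost
        reranked.append((name, relevance + bonus))
    return sorted(reranked, key=lambda item: item[1], reverse=True)

candidates = [("ciprofloxacin", 0.8), ("amoxicillin", 0.7), ("vancomycin", 0.5)]
profiles = {"ciprofloxacin": susceptibility(70, 30), "amoxicillin": susceptibility(40, 60)}
daily_costs = {"ciprofloxacin": 37.50, "amoxicillin": 5.00}
print(rerank(candidates, profiles, daily_costs, max_cost=105.0))
```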
3. Results In Table 2, we provide the results of our baseline system: text-mining results obtained without any additional knowledge show a top-precision of 40.37% when the PubMed engine is used and 34.28% when the easyIR [8] engine is used. Thus, we can compare two different search models. Although PubMed shows a higher precision, it is worth observing that the relative recall is much lower for PubMed: the PubMed-based search is able to answer 32 questions out of 49, while easyIR is able to provide answers to all questions. Results obtained when tuning the model with cost-related information are shown in Figure 1A. The best results for easyIR were found when a null cost is attributed to antibiotics for which no cost information is available. The performance of the PubMed-based search decreases with the injection of costs. Final results based on the evaluation set show a top-precision of 43.31% (+9.03%, p<0.05) with easyIR and of 40.28% (-0.09%, not statistically significant) with PubMed using the least penalizing settings (Table 2). Results of the tuning based on the resistance profile are shown in Figure 1B. The best top-precisions are obtained when a value of 0.1 is assigned to antibiotics without resistance profile information for easyIR, meaning that bacteria are most of the time resistant to this set of molecules, and a value of 0.0 for PubMed, meaning that bacteria
are always resistant to these antibiotics. Applying these parameters to the evaluation set (Table 2) results in a top-precision of 39.86% (+5.58%, p<0.05) for the easyIR search engine, while the PubMed search engine increases its top-precision up to 56.41%, corresponding to a gain of +16.04% (p<0.01).

Table 2. Text-only and multimodal results on the evaluation set. Column Engine indicates the search model used. Column Multimodal is the type of additional knowledge injected into the model. Column NbA is the number of questions the search model succeeds in answering. Column P0 is the top-precision, i.e. the precision at recall = 0. Column MAP is the mean average precision. Column R5 is the recall at position 5.

Engine   Multimodal           NbA     P0                 MAP      R5
easyIR   No injection         49/49   0.3428             0.1899   0.2619
easyIR   Cost                 49/49   0.4331 (+9.03%)    0.2211   0.3197
easyIR   Resistance profile   49/49   0.3986 (+5.58%)    0.2246   0.3299
PubMed   No injection         32/49   0.4037             0.2354   0.3281
PubMed   Cost                 32/49   0.4028 (-0.09%)    0.2311   0.2656
PubMed   Resistance profile   32/49   0.5641 (+16.04%)   0.3337   0.4653
Figure 1. A: Tuning of the cost injection. B: Tuning of the resistance profile injection. The x axis represents the value assigned to the data without injection information and the y axis represents the top-precision. Baseline top-precisions are represented with … for easyIR and … for PubMed; multimodal re-ranked top-precisions with … for easyIR and … for PubMed.
4. Conclusion In our experiments, we have shown that combining textual contents with antibiotic costs yields fairly contrasted results. Indeed, while the relevance-driven model (the so-called easyIR) clearly showed an improvement when using cost-related features (+9%), such an improvement is not observed when a Boolean search strategy (PubMed) is used. We hypothesized that this is due to the limited set of costs we had access to. This set is even narrower for the experiments made with the Boolean search strategy, due to the higher number of queries returning no result. Based on the improvements obtained by the relevance-driven engine, we believe that cost information is worth using to improve compliance with clinical practice guidelines. In order to overcome such limitations in the future, we plan to use resources with broader coverage, for instance the Swiss Kompendium (http://www.kompendium.ch), a database of drugs supplied with pricing information. Although those prices may not reflect real HUG prices, it could potentially provide the information that is currently missing in our cost-based model. Further, during the normalization process, we used the cost of one day of treatment.
But this is a reductive view of antibiotic prescription, since it would be more consistent to determine the cost of the treatment until recovery. Injecting resistance profiles extracted from the clinical datawarehouse into the results obtained by text-mining clearly showed an improvement of the top-precision, especially with the PubMed engine (+16%). Thus, resistance profiles are appropriate features to significantly improve the effectiveness of our model when compared to evidence-based knowledge, and so for decision support applied to computerized order-entry systems. Although today's results are already encouraging, we expect that further improvements can be obtained by aggregating species and antibiogram-related information at higher taxonomic levels. Indeed, using exact organism names such as Staphylococcus aureus or Staphylococcus epidermidis for retrieving antibiograms can sometimes result in very sparse data, while aggregating all Staphylococcus into a phylogenetically related set seems an effective way to augment recall without significantly affecting precision. Acknowledgements. The DebugIT project (http://www.debugit.eu) is receiving funding from the European Community's Seventh Framework Programme under grant agreement n°FP7-217139, which is gratefully acknowledged. The information in this document reflects solely the views of the authors and no guarantee or warranty is given that it is fit for any particular purpose. The European Commission, Directorate General Information Society and Media, Brussels, is not liable for any use that may be made of the information contained therein. The EAGLi question-answering framework has been developed thanks to the SNF Grant # 325230-120758.
References
[1] Burgers JS, Grol R, Klazinga NS, Mäkelä M, Zaat J; AGREE Collaboration. Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs. Int J Qual Health Care 2003 Feb;15(1):31-45.
[2] Browman GP. Development and aftercare of clinical guidelines: the balance between rigor and pragmatism. JAMA 2001 Sep 26;286(12):1509-11.
[3] Lovis C, Colaert D, Stroetmann VN. DebugIT for patient safety – improving the treatment with antibiotics through multimedia data mining of heterogeneous clinical data. Stud Health Technol Inform 2008;136:641-6.
[4] Pasche E, Teodoro D, Gobeill J, Ruch P, Lovis C. QA-driven guidelines generation for bacteriotherapy. AMIA Annu Symp Proc 2009 Nov 14;2009:509-13.
[5] Aronson A, Demner-Fushman D, Humphrey S, et al. Fusion of knowledge-intensive and statistical approaches for retrieving and annotating textual genomics documents. In Proceedings of the 15th Text REtrieval Conference (TREC-15), 2005.
[6] Voorhees E. The Philosophy of Information Retrieval Evaluation. Lecture Notes in Computer Science 2002;2406:143-170.
[7] Fox EA, Koushik MP, Shaw J, Modlin R, Rao D. Combining evidence from multiple searches. In Proceedings of the First Text REtrieval Conference (TREC-1), 2001.
[8] Gobeill J, Pasche E, Teodoro D, Veuthey AL, Lovis C, Ruch P. Question answering for biology and medicine. Information Technology and Application in Biomedicine (ITAB 2009), 2009.
[9] Teodoro D, Choquet R, Pasche E, et al. Biomedical data management: a proposal framework. Stud Health Technol Inform 2009;150:175-9.
[10] Voorhees EM, Harman DK. TREC experiment and evaluation in information retrieval. 2005.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-482
Defining and Reconstructing Clinical Processes Based on IHE and BPMN 2.0 Melanie STRASSERa, Franz PFEIFERa, Emmanuel HELMa, Andreas SCHULERa, Josef ALTMANNb a Upper Austria University of Applied Sciences, Research & Development b Upper Austria University of Applied Sciences, School of Informatics, Communications and Media, Softwarepark 11, A-4232 Hagenberg
Abstract. This paper describes the current status and the results of our process management system for defining and reconstructing clinical care processes, which contributes to compare, analyze and evaluate clinical processes and further to identify high cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system there is BPMN, a modeling notation and specification language, which allows the definition and execution of clinical processes. The system provides functionality to define healthcare information system independent clinical core processes and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a health care facility. The main goal of the system is to assist hospital operators and clinical process managers to detect discrepancies between defined and actual clinical processes and as well to identify main causes of high medical costs. Beyond that, the system can potentially contribute to reconstruct and improve clinical processes and enhance cost control and patient care quality. Keywords. Integrating the Healthcare Enterprise, Clinical pathways, Workflows, Clinical core processes, BPMN 2.0
1. Introduction Process management in healthcare is used to increase the quality of care and to decrease treatment costs [1, 2]. Medical guidelines are often used to define the workflow of a treatment based on a certain diagnosis. Nevertheless, these guidelines are not commonly used [3]. Although, many systems and approaches to define clinical processes exist, all of these systems share the following drawbacks: either these systems are not able to share information with other systems [4, 5, 6, 7] (interoperability issues), or these systems only cover parts of a clinical process [8]. Interoperability problems can now be addressed by using Integrating the Healthcare Enterprise (IHE) compliant components [9]. Using Business Process Model and Notation (BPMN) 2.0 in combination with medical guidelines, structured medical processes can easily be defined. Moreover, the usage of IHEs’ Audit Record Repository (ARR) enables a standardized way of process mining [10].
In the literature, similar approaches exist, but none of them uses IHE Integration Profiles to define clinical processes. Anzböck and Dustdar present an approach based on web services to model medical e-services [5, 8]; Dogac, Bicer and Okcan focus just on IHE Integration Profiles to model collaborative business processes [11]. In this paper we present an approach aimed at defining clinical processes based on BPMN 2.0 and selected IHE Integration Profiles. Moreover, this approach is able to reconstruct a patient's history based on a standardized audit database.
2. Methods An integral part of interoperable workflow processes is the use of established standards such as BPMN and IHE. The following two sections describe all necessary standards used by the presented approach. The remainder of the section discusses the methods and detailed steps taken in the research project IHExplorer [12]. 2.1. Integrating the Healthcare Enterprise Integrating the Healthcare Enterprise (IHE) is an international initiative by healthcare professionals, IT professionals and industry to improve the integration and interoperability of information systems in health care with standardized descriptions of medical use cases and the use of communication standards. IHE is structured in Domains [13] and Integration Profiles [14]. Our approach is mainly based on following Integration Profiles: Patient Administration Management (PAM): This profile maintains the consistency of patient data (e.g. attending physician or relatives) and coordinates the exchange of patient account, encounter and location information among care systems. The PAM profile is important for the process reconstruction, because it delivers current information about the patient’s location in a health care facility and the patient’s state. Audit Trail and Node Authentication (ATNA): This profile controls the access to protected health information, like patient demographic data and clinical documents and logs every access in a protocol database - the Audit Record Repository (ARR). Using ATNA in combination with the PAM Integration Profile, all transfers and states during a patient’s treatment are stored in the ARR. 2.2. Business Processes, Workflows and Clinical Guidelines in Healthcare There are several definitions of the term Business Process (BP) available in literature. Van der Aalst defines a BP as a process aiming on the production of a certain product or service [15]. Another source suggests that a BP is a set of activities using different inputs to produce a certain output [16]. Davenport extends the previous definition with regard to a customer or a certain market [17]. On the other hand, a workflow is an automated part of a business process [18] which means that a workflow is an executable instance of a BP. A workflow is built of a set of activities and is triggered by a defined start event; the main difference is that a workflow is automated [18]. Moreover, workflows can be categorized into three groups: Structured Workflows, Semi-structured Workflows and Ad-hoc Workflows [4]. Structured Workflows are characterized as not too complex and are always predictable.
Semi-structured Workflows are partially predictable, though parts of them can be modeled. Ad-hoc Workflows are not predictable and too complex, which makes it impossible to model them. Clinical Guidelines (CG) are a set of principles to help health care practitioners during the care of a patient and to ensure consistently high-quality clinical practice [3]. The main objectives of CG are to improve the quality of health care and to standardize it. Moreover, these CGs are provided in several repositories, such as Map of Medicine [19], which are maintained by many different organizations. 2.3. Process Definition Processes are defined using the BPMN 2.0 standard currently developed by the Object Management Group [20]. Compared to prior versions of BPMN or the Event-driven Process Chain (EPC), the main advantage of BPMN 2.0 is that BPs are directly executable. BPMN supports the user in the modeling of processes and ensures the generation of a valid and well-formed BPMN process definition. In addition, it is mandatory to orchestrate the defined processes, i.e. to make them executable. Typical actions summarized under the term orchestration are, on the one hand, the association of specific BPMN 2.0 tasks (service tasks) with a service implementation (e.g. a Java class or web service) and, on the other hand, the assignment of a task to a user or a group responsible for its execution. The execution of an orchestrated, XML-based process definition is done by a BPMN 2.0 workflow engine. 2.4. Process Execution In order to execute a defined process, a specific workflow engine is needed, which is responsible for the lifecycle of a process, often referred to as a process instance [21]. To make a process available for execution, the process definition and any additional process resources have to be deployed in a running instance of the workflow engine. The engine creates a new process instance associated with certain trigger events and executes each task defined in the process definition. To guarantee compliance with an existing IHE environment, these tasks interact with IHE actors. IHE actors are information systems or software components that process data. Task executions generate HL7 messages, which are stored in the ARR. 2.5. PAM Based Process Mining To reconstruct a patient's way through a healthcare facility, the PAM Integration Profile has to be implemented. The ARR stores all messages generated during admission, discharge and transfer of the patient. By querying the ARR, collecting all messages for a specific patient and sorting them by their date of creation, all stays of a patient can be identified. The initial diagnosis is stored in the admission/registration message, so every pathway of every patient can be compared to a defined process.
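The PAM-based reconstruction step of Section 2.5 can be sketched as follows; the message fields and the sample audit records are invented stand-ins for real ATNA/PAM audit entries in the ARR.

```python
# Sketch of the PAM-based pathway reconstruction described in Section 2.5:
# collect all admission/transfer/discharge audit records of one patient and
# sort them by creation date. Field names and records are illustrative only.
from datetime import datetime

audit_records = [
    {"patient_id": "P1", "event": "transfer", "unit": "Radiology",
     "created": datetime(2011, 3, 1, 10, 30)},
    {"patient_id": "P1", "event": "admission", "unit": "Emergency",
     "created": datetime(2011, 3, 1, 8, 15), "diagnosis": "pneumonia"},
    {"patient_id": "P2", "event": "admission", "unit": "Surgery",
     "created": datetime(2011, 3, 2, 9, 0), "diagnosis": "appendicitis"},
    {"patient_id": "P1", "event": "discharge", "unit": "Internal Medicine",
     "created": datetime(2011, 3, 4, 16, 45)},
]

def reconstruct_pathway(records, patient_id):
    stays = sorted((r for r in records if r["patient_id"] == patient_id),
                   key=lambda r: r["created"])
    return [(r["created"], r["event"], r["unit"]) for r in stays]

for step in reconstruct_pathway(audit_records, "P1"):
    print(step)
```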
3. Results A prototype has been developed which allows the definition of business processes based on BPMN 2.0 as well as the execution of workflow processes. Furthermore, the system supports the reconstruction of a patient's pathway throughout a healthcare organization. Figure 1 shows a part of a process definition based on the approach described above.
Figure 1. A BPMN 2.0 Process Definition
The service tasks of the BPMN process definition are bound to IHE consumer actors, which in turn communicate with an IHE infrastructure to obtain all patient data necessary for treatment. Each action performed by a service task is logged in the ARR of the underlying IHE infrastructure, which is the basis for reconstructing the patient pathway (see Figure 2). Departments visited by the patient that are present in the process definition are highlighted. Moreover, the total time of each pathway is calculated, allowing further analysis. Currently, the reconstruction of patient pathways is not based on a realistic workload.
Figure 2. Three reconstructed patient pathways based on ARR entries
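Building on the reconstruction sketch above, the following lines illustrate the two analyses just mentioned: the total time of a pathway and which visited departments appear in the defined process. The defined process is again an invented example.

```python
# Illustrative pathway analysis: total duration and overlap with a defined
# process. Uses the pathway produced by reconstruct_pathway() above; the
# defined process below is an invented example.
defined_process = ["Emergency", "Radiology", "Internal Medicine"]

pathway = reconstruct_pathway(audit_records, "P1")
total_time = pathway[-1][0] - pathway[0][0]
visited = [unit for _, _, unit in pathway]
in_defined = [unit for unit in visited if unit in defined_process]

print("total time:", total_time)
print("visited:", visited)
print("covered by defined process:", in_defined)
```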
4. Discussion and Conclusion The presented approach shows how IHE Integration Profiles and BPMN 2.0 process definition are used to model and execute clinical processes. The advantage of this approach is obvious: the business process definition and execution are detached from the underlying systems, which means that clinical processes can be defined independently from a hospital information system (HIS). Nevertheless, this approach has several limitations. As IHE currently doesn’t provide enough Integration Profiles a comprehensive clinical process definition is not possible. Moreover, the reconstruction of the underlying business process based on entries in the ARR has some limitations. Currently it is not possible to reconstruct the
BPMN 2.0 definition without using the event log of the workflow management system executing the process. To achieve further detail, the reconstruction should be combined with the history service of the workflow management system, which stores metadata related to all executed processes. Although the level of detail increases when the built-in history service is used, the reconstruction is completely independent of the workflow engine, as it depends only on the information provided by the ARR. Reconstructed patient pathways combined with defined processes enable delta analysis; as there are often discrepancies between the defined and the actual process, this should help to point out opportunities and threats and therefore to adjust and optimize the target process.
References

[1] Anyanwu K, Sheth A, Cardoso J, Miller J, Kochut K. Healthcare Enterprise Process Development and Integration, Journal of Research and Practice in Information Technology 35/2 (2003), 83-98.
[2] Becker J, Fischer R, Janiesch C, Scherpbier HJ. Optimizing U.S. Healthcare Processes – A Case Study in Business Process Management, In Proceedings of the 13th Americas Conference on Information Systems (2007), 1-9.
[3] Isern D, Moreno A. Computer-based Management of Clinical Guidelines: A Survey, In Proceedings of the Fourth Workshop on Agents Applied in Healthcare, in conjunction with the 17th European Conference on Artificial Intelligence (2006), 71-80.
[4] Lenz R, Reichert M. IT support for healthcare processes – premises, challenges, perspectives, Data & Knowledge Engineering 61/1 (2007), 39-58.
[5] Anzböck R, Dustdar S. Modeling Medical E-services, Business Process Management (2004), 49-65.
[6] Nidumolu SR, Menon NM, Zeigler BP. Object-oriented business process modeling and simulation: A discrete event system specification framework, Simulation Practice and Theory 6/6 (1998), 533-571.
[7] Wakamiya S, Yamauchi K. What are the standard functions of electronic clinical pathways?, International Journal of Medical Informatics 78/8 (2009), 543-550.
[8] Anzböck R, Dustdar S. Modeling and implementing medical Web services, Data & Knowledge Engineering 55/2 (2005), 203-236.
[9] IHE International, Integrating the Healthcare Enterprise (IHE), http://www.ihe.net/ [April 2011].
[10] Mans RS, Schonenberg MH, Song M, van der Aalst WMP, Bakker PJM. Process Mining in Healthcare: A Case Study, In Proceedings of the First International Conference on Health Informatics (2008), 118-125.
[11] Dogac A, Bicer V, Okcan A. Collaborative Business Process Support in IHE XDS through ebXML Business Processes, In Proceedings of the 22nd International Conference on Data Engineering, 2006.
[12] Fachhochschule OÖ, Campus Hagenberg, IHExplorer, http://ihexplorer.fh-hagenberg.at/ [April 2011].
[13] IHE International, IHE Domains, http://www.ihe.net/Domains/index.cfm [April 2011].
[14] IHE International, IHE Profiles, http://www.ihe.net/Profiles/index.cfm [April 2011].
[15] van der Aalst WMP, van Hee KM. Workflow Management: Models, Methods and Systems, MIT Press, Cambridge, MA, 2002.
[16] Hammer M, Champy J. Business Reengineering – Die Radikalkur für das Unternehmen, Campus Verlag, Frankfurt, New York, 1993.
[17] Davenport TH. Process Innovation: Reengineering Work through Information Technology, Harvard Business School Press, Boston, MA, 1992.
[18] Richter-von Hagen C, Stucky W. Business-Process- und Workflow-Management: Prozessverbesserung durch Prozess-Management, Vieweg+Teubner Verlag, Wiesbaden, 2004.
[19] Hearst Corporation, Map of Medicine, http://www.mapofmedicine.com/ [April 2011].
[20] The Object Management Group, Business Process Model and Notation Version 2.0, http://www.omg.org/spec/BPMN/2.0 [April 2011].
[21] Hollingsworth D. The Workflow Reference Model, Workflow Management Coalition, http://www.wfmc.org/index.php?option=com_docman&task=doc_download&gid=92&Itemid=72 [April 2011].
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-487
Facilitating Access to Laboratory Guidelines by Modeling their Contents and Designing a Computerized User Interface

Mobin YASINI a,1, Catherine DUCLOS a, Jean-Baptiste LAMY a, Alain VENOT a
a Laboratoire d'Informatique Médicale et Bioinformatique, University of Paris 13
Abstract. Laboratory tests are not always prescribed appropriately. Guidelines for some important laboratory tests have been developed by expert panels in the Parisian region to maximize the appropriateness of laboratory medicine. However, these recommendations are not frequently consulted by physicians and nurses. We developed a system facilitating consultation of these guidelines, to increase their usability. Elements of information contained in these documents were identified and included in recommendations of different categories. UML modeling was used to represent these categories and their relationships to each other in the guidelines. We used the generated model to implement a computerized interface. The prototype interface, based on web technology, was found to be rapid and easy to use. By clicking on provided keywords, information about the subject sought is highlighted whilst the entire text of the guideline is retained on-screen.
Keywords. Test ordering, Laboratory medicine, Prescription appropriateness, Guidelines, Modeling, User interface
1. Introduction

The use of diagnostic tests has increased over the past decades, and 60 to 70% of the most important decisions concerning admission, discharge and medication are based on laboratory test results [1]. Various international studies have suggested that up to 67% of laboratory test requests in healthcare are inappropriate and can be called into question [2-3]. Various strategies for changing the test-ordering behavior of medical practitioners have been proposed in the literature, including education programs [4], redesigning the request form [5], feedback on the number or rational basis of tests prescribed [6], informing requesters about the costs of the tests requested [7], and the use of Computerized Decision Support Systems (CDSS) [8]. The implementation of guidelines, which are becoming more and more common in medical practice, and compliance with guidelines in daily practice remain problematic areas [9]. A lack of awareness of the content of a guideline and a lack of familiarity or agreement with its recommendations are among the reasons why physicians do not use or follow practice guidelines [10]. Furthermore, guidelines may be difficult to
1 Corresponding Author: Mobin Yasini MD, Laboratoire d'Informatique Médicale et Bioinformatique (LIM&BIO), University of Paris 13, UFR Santé, Médecine, Biologie Humaine, 74 rue Marcel Cachin, 93017 Bobigny, France. E-mail: [email protected]
obtain at the appropriate moment, and identifying the information required can be laborious, as guidelines are usually written in a purely textual format. The expert panels of the public hospital system of the city of Paris and its suburbs (AP-HP), the largest hospital system in Europe, recently formulated evidence-based laboratory guidelines for improving test ordering, specimen collection and handling procedures for about 30 common laboratory tests. Unfortunately, it became evident that these recommendations were not frequently consulted by physicians or by the nurses responsible for specimen collection and handling procedures under appropriate conditions. We hypothesize that the contents of these documents might be used more effectively if a computerized interface were developed to facilitate access to these recommendations and make them easier to read. The main aim of this study was therefore to develop and evaluate a new presentation of the information in these laboratory guidelines, based on a model derived from an analysis of their contents.
2. Materials and Methods

We studied 22 evidence-based laboratory guidelines formulated by expert panels of the AP-HP. These documents appeared to be very heterogeneous in structure, as the topics were different (either a pathology or a test) and they were written by different panels.

2.1. Modeling the Content of the Guidelines

In order to identify the elements of information contained in the guidelines, we first randomly assigned them to two groups. The first group, the study group, containing two thirds of these documents, was used to structure a model. The remaining third was used to validate the model obtained. We then listed all the different headings present in the study group. For each section, we studied the relevant text in each document (when available) and broke it down into recommendations (some sections contained several independent recommendations). We then analyzed the recommendations to identify the various elements of information within them. For example, an indication for the represcription of a test may have conditions such as persistence of the underlying disease, inefficacy of the treatment and/or a time limit between two prescriptions. Whenever an information element did not match a type already encountered, a new information element was added. The information elements found in each recommendation were therefore listed incrementally, as they were discovered. The Unified Modeling Language (UML) was used to model the categories of information. Once the model had been developed, it was validated against the documents of the second group.

2.2. Design and Evaluation of the Interface

The next step was to develop a web-based interface to facilitate the consultation of guidelines, providing faster access to the information within them. The documents were restructured according to the model constructed. This restructuring involved the grouping of individual recommendations into the categories and subcategories of recommendations defined in the model. We extracted headings and keywords from categories of recommendations and from the information elements in these recommendations, and presented them in the left margin of each document. The categories of recommendations correspond to section headings in the restructured
document: Prescription, Represcription, etc. It is useful for the user to have information not only about the types of information present in a document, but also about the types of information not present. For example, if the physician sees that there is no recommendation concerning the writing of requests for a specific test, he or she saves the time that would have been wasted searching the entire document in vain. We used a color-coding system to indicate to the user whether information about a category or subcategory of recommendation is present in the document. We then made the interface interactive using HTML, making it possible to click on these headings. When the user clicks on a heading in the left margin, the corresponding text is highlighted in the main document. A satisfaction survey was carried out by asking physicians and nurses to score the interface on an analogue scale questionnaire. The physicians and nurses were already familiar with the original guidelines and were asked to complete the questionnaire after handling the interface.
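To make the categories, conditions and actions identified in Section 2.1 concrete, the following Python sketch shows one possible, much simplified encoding of the recommendation structure; the class and attribute names are illustrative assumptions, not the UML model itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Condition:
    """A condition restricting when a recommendation applies (e.g. a time limit)."""
    description: str

@dataclass
class Action:
    """An action an authorized actor should perform (or avoid)."""
    description: str
    actor: str
    negated: bool = False  # True for "should not do"

@dataclass
class Recommendation:
    category: str  # e.g. "Represcription", "Specimen collection"
    conditions: List[Condition] = field(default_factory=list)
    actions: List[Action] = field(default_factory=list)

@dataclass
class LaboratoryGuideline:
    test_name: str
    recommendations: List[Recommendation] = field(default_factory=list)

    def categories_present(self):
        """Categories covered by this guideline; absent ones can be greyed out."""
        return {r.category for r in self.recommendations}

# Example: represcription of blood culture if endocarditis is suspected.
blood_culture = LaboratoryGuideline(
    test_name="Blood culture",
    recommendations=[Recommendation(
        category="Represcription",
        conditions=[Condition("suspected endocarditis"),
                    Condition("24 hours after the first series of two samples")],
        actions=[Action("represcribe blood culture", actor="physician")],
    )],
)
print(blood_culture.categories_present())
```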
3. Results In total, 15 different topics were found in all 22 original guidelines. Some topics appeared in several guidelines, whereas others were specific to some guidelines only. These documents were heterogeneous, with the same section in different guidelines containing different information. Despite the differences between sections and contents, each laboratory guideline consisted of several recommendations, and listed bibliographic elements and authors. A list of the various recommendations and their frequency in the guidelines is presented in Table 1. All the information found in these guidelines is represented in our UML model in Figure 1. Each recommendation has one or more conditions determining the application of the recommendation and leads to one or more actions that an authorized actor should do or not do. A recommendation may also relate to: a) A test prescription. This may be a recommendation for the initial prescription or represcription (for example, 24 hours after the first series of two samples, if endocarditis is suspected, the represcription of blood culture is recommended). b) The collection of a specimen (for example, the collection of 15 ml of blood by puncture). c) The return of the result (for example, the result must include the electrophoresis layout). d) An assessment of guideline effects (for example, a decrease in the number of blood cultures per patient). This model was developed from two thirds of the guidelines and was validated on the remaining third. The validation showed that the model was able to represent all the recommendations contained in the remaining third of the guidelines. Table 1. Recommendation categories and their frequency in the guidelines Recommendation categories Recommendation for prescription Recommendation for represcription Recommendation for specimen collection Recommendation for interpretation of results Recommendation for writing the request Recommendation for assessment of guideline effects
Number of guidelines (%) 22 (100) 22 (100) 16 (73) 7 (32) 11 (50) 16 (73)
Figure 1. UML diagram describing the organization of information elements in laboratory guidelines
The model served to rearrange the elements of the guidelines. It was used to design the web-based interface and to reorganize the recommendations present in each guideline according to an identical section structure for all documents. The user clicks on a heading (listed in Table 1) in the left margin and the corresponding text is highlighted in the main document, providing direct access to the required information without the need to read the entire document. When an element of information is not present in a guideline, the heading is shown in gray, indicating that the information sought is not available. A "Specific clinical context" section is provided for each guideline, enabling the user to rapidly identify information applying to a particular patient context, e.g., "suspicion of anaerobic bacteremia" in the guideline for blood culture. We asked 21 physicians and 14 nurses from three university hospitals in Paris to consult the original documents and then to manipulate the web-based interface. Table 2 shows the results of the evaluation.

Table 2. Interface evaluation results

Question | Mean | Standard deviation
Compared to the initial presentation, the order of presentation of information seems better/worse (0 for the worst and 100 for the best) | 74 | 19
Compared to the initial presentation, the headings seem more/less comprehensible (0 for the least understandable and 100 for the most understandable) | 80 | 18
The clickable menu in the left margin seems hard/easy to use (0 for difficult to use and 100 for very simple) | 77 | 15
Do you find the possibility of selecting "Specific clinical contexts" useful? (0 for not at all useful and 100 for very useful) | 96 | 6
Overall, access to the information is/is not better (0 for no improvement and 100 for considerable improvement) | 74 | 15
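The colour coding described above reduces to a simple presence check per recommendation category. The following Python snippet is purely illustrative (the actual interface was built with HTML); the heading list and the data structure holding a restructured guideline are assumptions.

```python
# Illustrative only: given a restructured guideline (category -> list of
# recommendation texts), build the left-margin menu entries with the
# colour cue used to signal whether a category is documented.
MENU_HEADINGS = [
    "Prescription", "Represcription", "Specimen collection",
    "Interpretation of results", "Writing the request",
    "Assessment of guideline effects",
]

def build_menu(guideline_sections):
    menu = []
    for heading in MENU_HEADINGS:
        present = bool(guideline_sections.get(heading))
        menu.append({
            "heading": heading,
            "present": present,
            "color": "black" if present else "gray",  # gray = not available
        })
    return menu

sections = {"Prescription": ["..."], "Represcription": ["..."]}
for item in build_menu(sections):
    print(item["heading"], "->", item["color"])
```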
4. Discussion and Conclusion

The consultation of laboratory guidelines should lead to rational and appropriate requests for testing. However, if guidelines are to be useful, the user must be able to search for the necessary information rapidly and to find it in an unambiguous context [11]. In this study, we proposed a model for structuring the contents of laboratory guidelines. The integration of recommendations into a web-based interface made it possible to provide a contextual menu that facilitates reading. For example, if the user seeks the indications for prescription of a laboratory test, he clicks on "Indications" in the margin menu and finds them rapidly, as the information is highlighted on the screen. A quick glance at the menu provides the user with a concise overview of the contents of the page. An evaluation of satisfaction with this interface gave promising results. Further evaluation of the model, by users other than the development team, is required to confirm its generic nature. Although our preliminary evaluation showed that the interface was favorably received by physicians and nurses, a quantitative evaluation of the impact of the interface on the behavior of physicians and nurses (e.g. their test-requesting behavior) would be of considerable interest. A model of this type can pave the way to many other possible applications for improving the quality of laboratory medicine. These applications may be based on decision trees, graphical approaches, or alert systems triggered when a physician prescribes a test that is not in accordance with the recommendations. Automatic assessment of the effects of guidelines is feasible, based on the indicators described in the guidelines. This model may also be useful for improving the formulation of future laboratory guidelines and for designing a useful knowledge base containing the entire body of knowledge about the available laboratory tests.
References

[1] Forsman RW. Why is the laboratory an afterthought for managed care organizations? Clin Chem. 1996 May;42(5):813-816.
[2] Janssens PMW. Managing the demand for laboratory testing: options and opportunities. Clin Chim Acta. 2010 Nov 11;411(21-22):1596-1602.
[3] Sucov A, Bazarian JJ, deLahunta EA, Spillane L. Test ordering guidelines can alter ordering patterns in an academic emergency department. J Emerg Med. 1999 Jun;17(3):391-397.
[4] Mindemark M, Larsson A. Long-term effects of an education programme on the optimal use of clinical chemistry testing in primary health care. Scand J Clin Lab Invest. 2009;69(4):481-486.
[5] Shalev V, Chodick G, Heymann AD. Format change of a laboratory test order form affects physician behavior. Int J Med Inform. 2009 Oct;78(10):639-644.
[6] Bunting PS, Van Walraven C. Effect of a controlled feedback intervention on laboratory test ordering by community physicians. Clin Chem. 2004 Feb;50(2):321-326.
[7] Allan GM, Lexchin J. Physician awareness of diagnostic and nondrug therapeutic costs: a systematic review. Int J Technol Assess Health Care. 2008;24(2):158-165.
[8] Matheny ME, Sequist TD, Seger AC, et al. A randomized trial of electronic clinical reminders to improve medication laboratory monitoring. J Am Med Inform Assoc. 2008 Aug;15(4):424-429.
[9] Grol R, Eccles M, Maisonneuve H, Woolf S. Developing Clinical Practice Guidelines: The European Experience. Disease Management & Health Outcomes. 1998;4(5):255-266.
[10] Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999 Oct 20;282(15):1458-1465.
[11] Ely JW, Osheroff JA, Ebell MH, et al. Obstacles to answering doctors' questions about patient care with evidence: qualitative study. BMJ. 2002 Mar 23;324(7339):710.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-492
Evaluation of Multi-Terminology Super-Concepts for Information Retrieval

Nicolas GRIFFON a, Lina F. SOUALMIA b,c, Aurélie NÉVÉOL d, Philippe MASSARI a, Benoit THIRION a,b, Badisse DAHAMNA a,b, Stefan J. DARMONI a,b,1
a CISMeF, Rouen University Hospital, France
b TIBS & LITIS EA 4108, Rouen University, France
c LIM&Bio, University of Paris 13, Sorbonne Paris Cité, France
d National Center for Biotechnology Information, NLM, Bethesda, MD 20894, USA
Abstract. Background: Following a recent change in the indexing policy of the French quality-controlled health gateway CISMeF, multiple terminologies are now being used for indexing in addition to MeSH®. Objective: To evaluate the precision and recall of super-concepts for information retrieval in a multi-terminology paradigm compared to MeSH only. Methods: We evaluate the relevance of resources retrieved by multi-terminology super-concept and MeSH-only super-concept queries. Results: Recall was 8-14% higher for multi-terminology super-concepts compared to MeSH-only super-concepts. Precision decreased from 0.66 for MeSH-only super-concepts to 0.61 for multi-terminology super-concepts. Retrieval performance was found to vary significantly depending on the super-concepts (p < 10⁻⁴) and indexing methods (manual vs automatic; p < 0.004). Conclusion: A multi-terminology paradigm contributes to increased recall but lowers precision. Automated indexing tools are not accurate enough to allow very precise information retrieval.
Keywords. abstracting and indexing; cataloguing; information storage and retrieval; internet; controlled vocabulary
1. Introduction
The Internet contains a considerable amount of health information that internet users experience difficulties navigating [1]. Several quality-controlled health gateways have been developed to help users find the health information they are looking for. Quality controlled subject gateways were defined by Koch [2] as Internet services which apply a comprehensive set of quality measures to support systematic resource discovery. CISMeF ([French] acronym for Catalogue and Index of Online Health Resources in French) is one such gateway, developed at the Rouen University Hospital. It initially relied on the Medical Subject Headings (MeSH®) thesaurus [3] to manually index the most important sources of institutional health information in French. This thesaurus was chosen because of its granularity (26,142 MeSH keywords describing the biomedical domain in the 2011 version) and the fact that it is well known among
1 Corresponding author: Stefan J. Darmoni, CISMeF, Rouen University Hospital, 1 rue de Germont, 76031 Rouen Cedex, France; E-mail: [email protected].
medical librarians. Several improvements have been introduced to adapt this scientific publication-oriented indexing vocabulary to internet resources [4]. A notable enhancement was the gathering of MeSH terms under meta-terms. These are super-concepts (SC) which correspond roughly to medical specialties (e.g. surgery), biological sciences (e.g. genetics) or health topics (e.g. diagnosis). MeSH terms were semantically linked to SCs to allow end-users to look for all the resources relevant to one specialty, which is difficult with the MeSH thesaurus alone, since MeSH terms related to a given specialty are dispersed among the 14 MeSH hierarchies. These semantic links have been hand-crafted by the CISMeF chief medical librarian (BT), based on his expertise. SCs were created to maximize information retrieval in CISMeF: a query using an SC corresponds to the union of queries for all the terms semantically linked to it. A comparison of the results of MeSH term-based queries and SC-based queries showed an increased recall with no decrease in precision [5]. The use of multiple terminologies was recommended [6] to increase the number of biomedical concept lexical and graphical forms recognized by the search engine. For this reason, CISMeF recently evolved from a mono-terminology approach using MeSH keywords and qualifiers to a multi-terminology paradigm using, in addition to MeSH: the Systematized NOmenclature of MEDicine (SNOMED 3.5), the French CCAM for procedures, the Foundational Model of Anatomy (FMA), and some classifications from the World Health Organization, viz. the 10th revision of the International Classification of Diseases (ICD10), the Anatomical Therapeutic Chemical (ATC) Classification for drugs, ICF for handicap, and ICPS for patient safety [7]. These terminologies can be used for indexing resources (allowing a more precise indexing) and for querying the Catalogue. The goal of this study is to assess the effect of multi-terminology SC (MT-SC) definition compared to MeSH-only SC (MeSH-SC) definition on information retrieval performance in CISMeF.
Figure 1. Semantic links between CISMeF Super-Concepts, terminology terms and resource types. Terminology terms describe the subject matter of the resources; resource types categorize the nature or genre of the resource content.
2. Material and Methods

2.1. CISMeF Information Model
The addition of multiple terminologies to CISMeF did not induce modifications in the tasks performed for using, maintaining and updating the catalogue: manual resource indexing, automatic resource categorization, visualizing and navigating through the concept hierarchies in the CISMeF Health Multi-Terminology Portal (URL: http://www.pts.chu-rouen.fr) and information retrieval using the Doc'CISMeF search tool. Nevertheless, the tools used for indexing and retrieving information required important modifications [7]. As shown in Figure 1, the new terminologies have been linked to SCs manually by experts: one physician (PM) for ICD10 and CCAM, one pharmacist-librarian for ATC, and one resident (NG) for FMA. For instance, the SC "cardiology" was initially linked to MeSH keywords such as "cardiology", "stents", and their descendants. With the integration of the new terminologies, additional links completed the definition of the SC "cardiology": links to "cardiovascular system", "Antithrombotic agents" and others from ATC, links to "Cardiomyopathy", "Heart" and their descendants from ICD10, and so on. These mappings are available at: http://pts.chu-rouen.fr.
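The definition of a super-concept as the union of all terms linked to it, across one or several terminologies, can be pictured with a small sketch. The Python code below is only an illustration with hypothetical links; it is not part of CISMeF and merely shows how an MT-SC query extends a MeSH-SC query and what the "Delta" part corresponds to.

```python
# Illustrative sketch (hypothetical links): a super-concept is defined by the
# set of terminology terms semantically linked to it.  A resource indexed with
# any linked term is returned by the SC query.
sc_links = {
    ("cardiology", "MeSH"):  {"cardiology", "stents"},
    ("cardiology", "ATC"):   {"cardiovascular system", "antithrombotic agents"},
    ("cardiology", "ICD10"): {"cardiomyopathy", "heart"},
}

def expand(sc, terminologies):
    """Union of all terms linked to the super-concept in the given terminologies."""
    terms = set()
    for t in terminologies:
        terms |= sc_links.get((sc, t), set())
    return terms

mesh_sc = expand("cardiology", ["MeSH"])                # MeSH-SC query terms
mt_sc = expand("cardiology", ["MeSH", "ATC", "ICD10"])  # MT-SC query terms
delta = mt_sc - mesh_sc                                 # terms behind the "Delta" query

def retrieve(resources, terms):
    return [r for r in resources if terms & set(r["index_terms"])]

resources = [
    {"title": "Stent guideline", "index_terms": ["stents"]},
    {"title": "Antithrombotic drug leaflet", "index_terms": ["antithrombotic agents"]},
]
print(len(retrieve(resources, mesh_sc)), len(retrieve(resources, delta)))
```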
2.2. Information Retrieval Queries
Our aim is to compare the precision and recall of MT-SC and MeSH-SC queries in CISMeF. As MT-SCs are based on MeSH-SCs plus semantic links to terms in other terminologies, the results of a MeSH-SC query are all included in the results of the corresponding MT-SC query, which therefore served as the gold standard for recall. We thus had to evaluate the precision of the query retrieving resources indexed by a term linked to the MeSH-SC (MeSH-SC query), on the one hand, and by a term linked to the MT-SC but not to the MeSH-SC (Delta query), on the other hand. For this purpose, we built Boolean queries using the SCs themselves: for the "surgery" SC, the MeSH-SC query was "surgery[MeSH-SC]" and the Delta query was "surgery[MT-SC] NOT surgery[MeSH-SC]". Retrieved resources were assessed for relevance according to the three-modality scale used in other standard information retrieval test sets [8]: irrelevant (0), partly relevant (1) or fully relevant (2). A medical resident (NG) manually assigned relevance scores to the top 20 resources returned for each SC query in our study (see Table 1). We chose to assign relevance scores to the top 20 resources returned because 95% of end-users do not go beyond this limit when using a general search engine [9], and 80% when using a biomedical search engine [10]. Weighted precisions for MeSH-SC queries and for Delta queries were computed for each level of relevance considered and compared using a χ² test. Indexing methods and SCs were also compared. Relative recall for MeSH-SC queries was computed for each level of relevance considered.
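The evaluation procedure can be sketched in a few lines of code. The exact weighting scheme behind the paper's weighted precision is not detailed here, so the following Python sketch simply computes precision at a chosen relevance threshold over the top 20 results and a plain 2×2 χ² statistic; the relevance scores used are invented for illustration.

```python
def precision_at_k(scores, threshold):
    """Fraction of the top-k results scored at or above the relevance threshold.
    threshold=2 -> full relevance, threshold=1 -> partial relevance."""
    relevant = sum(1 for s in scores if s >= threshold)
    return relevant / len(scores)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (without continuity correction)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical relevance judgements (0, 1, 2) for the top 20 results of one SC.
mesh_scores  = [2] * 13 + [1] * 3 + [0] * 4
delta_scores = [2] * 10 + [1] * 4 + [0] * 6

p_mesh  = precision_at_k(mesh_scores, threshold=2)
p_delta = precision_at_k(delta_scores, threshold=2)

# 2x2 table: (fully relevant, not fully relevant) x (MeSH-SC, Delta)
chi2 = chi2_2x2(sum(s >= 2 for s in mesh_scores), sum(s < 2 for s in mesh_scores),
                sum(s >= 2 for s in delta_scores), sum(s < 2 for s in delta_scores))
print(p_mesh, p_delta, chi2)
```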
3. Results
For the purpose of assessing SCs for Information Retrieval, we have developed a test collection comprising relevance judgments for the top 20 resources returned for a
selection of 20 SC queries. This collection is made available to the research community. Table 1 shows that the queries yielded 126,587 resources (59,224 unique), of which 788 (754 unique) were assessed for relevance (0.6%). The mean weighted precision of Delta queries was 0.33 and 0.76 for, respectively, full and partial relevance. The mean precision of MeSH-SC queries was 0.66 and 0.80 for, respectively, full and partial relevance. The difference between MeSH-SC and MT-SC was significant for full relevance (0.66 vs 0.61; p < 10⁻⁴, χ²) but not for partial relevance (both 0.80; p = 0.3, χ²). The mean recall of MeSH-SC queries was 0.92 and 0.86 for, respectively, full and partial relevance. Table 2 shows that, whatever the level of relevance considered, results varied significantly according to the indexing method, with manual indexing (precision of 0.50 and 0.81 for, respectively, full and partial relevance) performing better than automatic indexing (precision of 0.38 and 0.48 for, respectively, full and partial relevance), and according to the SC studied.

Table 1. Relevance of resources retrieved by 20 Super-Concept queries

Super-Concept query | Resources retrieved (MeSH-SC query) | Resources retrieved (Delta query) | Relevance of top 20, MeSH-SC query* (0 / 1 / 2) | Relevance of top 20, Delta query* (0 / 1 / 2)
Diagnosis | 13,132 | 350 | 0 / 2 / 15 | 14 / 1 / 5
Toxicology | 11,980 | 482 | 0 / 0 / 20 | 16 / 1 / 3
Neurology | 9,325 | 2,168 | 8 / 4 / 8 | 11 / 5 / 4
Infectious diseases medicine | 6,557 | 2,573 | 0 / 0 / 20 | 3 / 16 / 1
Paediatrics | 7,560 | 251 | 4 / 4 / 12 | 2 / 4 / 13
Cardiology | 5,288 | 2,388 | 1 / 0 / 18 | 4 / 10 / 6
Oncology | 5,626 | 1,063 | 0 / 1 / 18 | 2 / 14 / 4
Surgery | 5,504 | 320 | 17 / 0 / 3 | 5 / 0 / 15
Rheumatology | 4,408 | 856 | 3 / 8 / 9 | 11 / 5 / 4
Gastroenterology | 4,069 | 1,106 | 0 / 0 / 20 | 8 / 11 / 1
Study of allergies and immunology | 4,598 | 573 | 1 / 17 / 2 | 2 / 17 / 1
Metabolism | 3,797 | 849 | 14 / 2 / 4 | 0 / 2 / 18
Dermatology | 3,196 | 1,427 | 7 / 0 / 13 | 0 / 4 / 16
Nutrition | 3,455 | 1,027 | 0 / 1 / 19 | 0 / 9 / 11
Pneumology | 3,466 | 584 | 0 / 7 / 12 | 0 / 14 / 6
Gynaecology | 3,186 | 850 | 6 / 1 / 12 | 0 / 1 / 19
Haematology | 2,906 | 1,075 | 13 / 2 / 5 | 7 / 10 / 3
Endocrinology | 3,168 | 666 | 15 / 1 / 4 | 0 / 9 / 11
Obstetrics | 3,063 | 316 | 5 / 1 / 12 | 20 / 0 / 0
Virology | 3,122 | 257 | 1 / 11 / 6 | 0 / 20 / 0
Total | 107,406 | 19,181 | 95 / 62 / 232 | 105 / 153 / 141
*: Due to dead links, some queries had fewer than 20 resources evaluated.
4. Discussion & Conclusion
This study evaluates the precision and recall of MT-SC queries compared to MeSH-SC queries for information retrieval in the quality-controlled subject gateway CISMeF. For fully relevant resources, precision decreases with the shift from MeSH-SC to MT-SC (from 0.66 to 0.61) for an 8% improvement in recall. For partial relevance, the increase in recall with multiple terminologies is even higher (14%) at no cost in terms of precision (0.80).
Table 2. Determinants of relevance

Variable | Full relevance | Partial relevance
Specific query$ | p < 10⁻⁴* | p = 0.3*
Indexing method | p = 0.004* | p < 10⁻⁴*
Super-concept | p < 10⁻⁴* | p < 10⁻⁴*
$: MeSH-SC vs MT-SC; *: χ² test
Because of the significant difference in relevance between MT-SC and MeSH-SC queries, MT-SC queries will be best used when the MeSH-SC result set is small. In this case, MT-SC queries can offer a larger result set with good partial relevance. A limitation of this study is that only the top 20 results are assessed for relevance. This possibly induced bias, because resources are sorted by a relevance algorithm, but we think this method reflects real life, since most users usually do not look at results beyond the first page, i.e. the top 20 documents returned [9, 10]. This analysis underlines that the performance of the automatic indexing algorithm is lacking and needs to be improved significantly. However, even resources indexed manually (thus having higher-quality indexing) were less relevant for MT-SC than for MeSH-SC. Possible explanations are: (1) some hand-crafted links between descriptors and SCs have been found to be erroneous and will be corrected soon; (2) the shift to multi-terminology occurred recently and concerns only new resources, which differ from the older MeSH-only indexed resources. These two sets of resources are not comparable (e.g. some of the new resources, providing very standardized and precise information, need a new indexing strategy to avoid inducing noise). Overall, the multi-terminology paradigm for super-concept definition was found to increase recall but lower the relevance of retrieved resources. Automated indexing tools are not accurate enough to allow very precise information retrieval.
References

[1] Keselman A, Browne AC, Kaufman DR. Consumer health information seeking as hypothesis testing. J Am Med Inform Assoc. 2008 Jul-Aug;15(4):484-95. doi: 10.1197/jamia.M2449
[2] Koch T. Quality-controlled subject gateways: definitions, typologies, empirical overview, subject gateways. Online Information Review. 2000;24(1):24-34. doi: 10.1108/14684520010320040
[3] Nelson SJ, Johnson WD, Humphreys BL. Relationships in Medical Subject Headings. In: Relationships in the organization of knowledge. Bean CA, Green R. Kluwer Academic Publishers, 2001:171-84.
[4] Douyère M, Soualmia LF, Névéol A, Rogozan A, Dahamna B, Leroy JP, Thirion B, Darmoni SJ. Enhancing the MeSH thesaurus to retrieve French online health resources in a quality-controlled gateway. Health Info Libr J. 2004 Dec;21(4):253-261. doi: 10.1111/j.1471-1842.2004.00526.x
[5] Gehanno JF, Thirion B, Darmoni SJ. Evaluation of meta-concepts for information retrieval in a quality-controlled health gateway. AMIA Annu Symp Proc. 2007;269-73.
[6] Wagner MM. An automatic indexing method for medical documents. Proc Annu Symp Comput Appl Med Care. 1991;1011-7.
[7] Darmoni SJ, Pereira S, Sakji S, Merabti T, Prieur E, Joubert M, Thirion B. Multiple terminologies in a health portal: automatic indexing and information retrieval. In: Conference on Artificial Intelligence in Medicine. 2009;255-9. doi: 10.1007/978-3-642-02976-9_37
[8] Hersh W, Buckley C, Leone TJ, Hickam D. OHSUmed: An interactive retrieval evaluation and new large test collection for research. In: Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval (1994), pp. 192-201.
[9] Spink A, Jansen BJ. Web search: Public searching on the web. Kluwer Academic Publishers, 2004;199.
[10] Islamaj Doğan R, Murray GC, Névéol A, Lu Z. Understanding PubMed user search behavior through log analysis. Database. 2009;bap018. doi: 10.1093/database/bap018.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-497
Framework Model and Principles for Trusted Information Sharing in Pervasive Health

Pekka RUOTSALAINEN a,1, Bernd BLOBEL b, Pirkko NYKÄNEN c, Antto SEPPÄLÄ c, Hannu SORVARI d
a National Institute for Health and Welfare, Finland
b University of Regensburg, Germany
c University of Tampere, Finland
d Turku University, Finland
Abstract. Trustfulness (i.e. health and wellness information is processed ethically, and privacy is guaranteed) is one of the cornerstones for future Personal Health Systems, ubiquitous healthcare and pervasive health. Trust in today's healthcare is organizational, static and predefined. Pervasive health takes place in an open and untrusted information space where a person's lifelong health and wellness information, together with contextual data, is dynamically collected and used by many stakeholders. This generates new threats that do not exist in today's eHealth systems. Our analysis shows that the way security and trust are implemented in today's healthcare cannot guarantee information autonomy and trustfulness in pervasive health. Based on a framework model of pervasive health and a risk analysis of the ubiquitous information space, we have formulated principles which enable trusted information sharing in pervasive health. The principles imply that the data subject should have the right to dynamically verify trust and to control the use of her health information, as well as the right to set situation-based, context-aware personal policies. Data collectors and processors have responsibilities including transparency of information processing, and openness of interests, policies and environmental features. Our principles create a basis for the successful management of privacy and information autonomy in pervasive health. They also imply that it is necessary to create new data models for personal health information and new architectures which support situation-dependent trust and privacy management.
Keywords. Pervasive health, ubiquitous computing, privacy, trust, modeling.
1. Introduction

Information processing in today's healthcare takes place in closed environments where organizational trust and security are the rule. New service models such as the Personal Health System (PHS) and ubiquitous healthcare use sensors, motes and surveillance systems to monitor data subjects (DS) in their daily living environment [1]. This means a jump from a controlled and trusted environment to a dynamic, uncontrolled and insecure one. In spite of those changes, these models are only extensions of today's regulated and, in many cases, paternalistic healthcare paradigm.
1 Corresponding author: Pekka Ruotsalainen, E-mail: [email protected]
A more revolutionary paradigm is pervasive health, which takes place in the ubiquitous information space. The pervasive health model tries to change today's healthcare delivery model from doctor- and organization-centric to person-centric, from acute and reactive to preventive, and from sampling to continuous monitoring [2]. It integrates medicine, biomedical engineering, medical informatics, and ubiquitous computing [3]. Instead of focusing on eHealth services that healthcare professionals provide to patients, pervasive health is person-centric and person-driven. It is strongly targeted at making health and welfare management personal. Typical pervasive health services are location-based services, pervasive access to health and wellness data, and lifestyle management [4]. Because pervasive health is not organization-centric, it enables a person to act as her own wellness coordinator and primary decision maker (with or without the help of any healthcare provider). Other unique features of pervasive health are:
• It enables the use of services which are not offered and controlled by regulated healthcare providers,
• Heterogeneous personal lifelong health and wellness related information, together with rich contextual data, is widely collected and used by stakeholders,
• Personal health and wellness information is not stored in today's regulated electronic healthcare records (EHRs),
• The data subject can set personal preferences regarding the use of her data, and
• It uses ubiquitous computing for data collection, processing and sharing [4].
In this paper we use literature analysis to find the major security and privacy risks which exist in pervasive health. We also demonstrate that the way privacy and trusted information processing have been implemented in today's healthcare information systems cannot guarantee privacy and autonomy in pervasive health. Using the analysis results and the developed reference model for pervasive health, we have formulated new, realizable principles for trusted information processing and sharing in pervasive health.
2. Reference Model for Information Processing in Pervasive Health

Our reference model uses the concepts of spaces (e.g. sub-systems, digital territories or bubbles), relations and policies (Figure 1).
Figure 1. Framework model for information flow in pervasive health
Each space can have its own business concepts, ethical rules, regulatory framework, context, and security and privacy policies [5]. Relations between spaces are dynamic
without predefined trust. Any space can collect, process, store, and disclose health data. One of these spaces is the data subject's personal space. Principles (rules, agreements, regulations and policies) define the way spaces communicate and process data. Health data can be distributed, or it can be stored in the Personal Health Record (PHR). Information sharing between spaces is dynamic and context-aware. In such an environment, it is difficult for the DS to know whom they can trust, what the level of trust is, and what kind of data is collected, processed and shared by whom. It is also difficult to be aware of, and control, the secondary use of personal health data.
3. Information Content and Processing View

Pervasive health is characterized by heterogeneous information, a dynamic number of stakeholders, and ubiquitous computing which seamlessly interconnects digital infrastructures with our daily life. It collects, processes, and distributes "any kind" of personal information and contextual data at any time. Pervasive health uses information about individuals that exceeds what today's organization-based EHRs can offer. It requires knowledge of an individual's normal functions in order to provide early detection of diseases and changes in functionality, and to offer pro-active prevention as well as personal health and wellness prediction services. This means that pervasive health requires information which covers a person's whole life, including data about personal behaviors, lifestyle, emotions, genealogical and genomic data, social data, data on psychological functionality, and data from environmental and body sensors. Rich contextual data and full or partial copies of the legal EHR might also be used. Those features mean that dynamic and context-aware trust and privacy management are needed.
4. Security and Privacy Threats in Pervasive Health

The ubiquitous computing used by pervasive health and the features of the information space generate many security and privacy threats which do not exist in today's healthcare systems and networks. As discussed above, there is no predefined trust between spaces in pervasive health. Furthermore, health data can be collected, processed, and communicated invisibly to the DS, and contextual information can easily be misused. Dataveillance enables the monitoring of a person's activities and behaviors, and it is difficult to control the secondary use of data by multiple agencies. Ubiquitous computing generates digital footprints of all events. It enables privacy breaches by the linking of multisource, heterogeneous and context-dependent information. It also has unlimited memory.
5. State of the Art of Data Privacy and Information Autonomy in Today's Healthcare

Widely accepted principles for fair information processing include the principles of withholdings, trusted usage, controlled dissemination and processing, transparency and security. A legitimate ground for processing is also required [6]. A typical way in which those
principles are implemented in today's healthcare information systems is shown in Table 1.

Table 1. Typical implementation of privacy principles in today's healthcare.

Principle | Typical implementation
Existence of personal privacy | Patients' privacy can be overridden in situations and for purposes defined by national legislation.
Withholdings | Patients do not have the right to control the content of their EHRs.
Trusted usage | Blind and organizational trust. Realized by security services. Trustfulness is seldom audited or certified.
Controlled dissemination | Patients' right to control dissemination is restricted by national legislation.
Transparency | The patient is not automatically aware which professionals or entities are processing her EHR and for what purposes. The patient is not aware of all disclosures of the content of her EHR.
Control over the creation, collection, processing and archiving of EHRs | Typically patients have no right to those activities. Patients have limited or sometimes no control over the processing of their EHRs inside healthcare organizations.
Table 1 shows that today's implementations are based on blind trust and follow the manifestation of the organization-centric and paternalistic healthcare model. At a more technical level, the security solutions used in today's healthcare information systems are organizational, reactive, and based on static rules. They are neither context-aware nor content-aware, and are targeted at controlled environments with predefined rules. Even modern infrastructures developed for national healthcare information networks (NHIN) have adopted the models shown in Table 1. Based on the analysis in the previous chapters, it is clear that today's security- and access-control-focused implementation models cannot guarantee trustfulness and privacy in pervasive health. Therefore, newly formulated principles and information system architectures are needed.
6. Principles for Trusted Information Processing in Pervasive Health

Our proposal is that the DS has new rights and that spaces/stakeholders have mandatory responsibilities covering the collection, processing and sharing of health data. The DS should have the rights to:
• Dynamically verify the trustfulness of any space, and control the use of personal health information both inside spaces and between them,
• Be aware of all events and situations where health data is collected, processed, stored and shared, and
• Define situation-specific, context-aware and granular personal policies regulating the processing and disclosure of personal health data.
Spaces/stakeholders have the responsibility to ensure:
• Transparency in data processing, openness of relationships between spaces, and openness of their interests, policies and environmental and contextual features.
Our principles imply that the DS should not only be aware of the use of her personal health data but also needs the power to control how data is used, processed and shared.
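Such situation-based, context-aware personal policies would ultimately have to be computer-understandable. The following Python sketch is one possible, greatly simplified illustration of how a personal policy rule could be represented and checked against a space's data-use request; all attribute names and the trust-level scale are assumptions, not part of the framework described above.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    """A simplified personal, context-aware policy rule set by the data subject."""
    data_category: str      # e.g. "wellness", "genomic"
    purpose: str            # e.g. "care", "research", "marketing"
    context: str            # e.g. "emergency", "routine"
    min_trust_level: int    # minimum verified trust level of the requesting space
    allow: bool

@dataclass
class Request:
    data_category: str
    purpose: str
    context: str
    space_trust_level: int  # trust in the requesting space, verified dynamically

def evaluate(rules, request):
    """Return True only if some rule explicitly allows the request; deny by default."""
    for rule in rules:
        if (rule.data_category == request.data_category
                and rule.purpose == request.purpose
                and rule.context == request.context
                and request.space_trust_level >= rule.min_trust_level):
            return rule.allow
    return False  # no matching rule: the data subject has not consented

personal_policy = [
    PolicyRule("wellness", "care", "routine", min_trust_level=2, allow=True),
    PolicyRule("wellness", "marketing", "routine", min_trust_level=3, allow=False),
]
print(evaluate(personal_policy, Request("wellness", "care", "routine", space_trust_level=2)))
```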
7. Discussion

Our principles are not completely new. Some researchers have proposed patient-controlled EHRs/PHRs and health data banks [7], [8]. Those proposals are limited to today's healthcare rules and to the use of predefined trust models. As our target is pervasive health without predefined trust, we have used privacy and trust frameworks developed for the social web and ubiquitous computing as a starting point [9], [10]. From the healthcare perspective, the adoption of our principles means a paradigm change which has a big impact on services, on the data models of the PHR and on information architectures. The remaining challenges are both technical and political. The use of our principles can easily create a huge number of personal policies. It is also difficult to automatically manage and resolve policy conflicts between spaces without common security and privacy ontologies. In real life, some individuals do not have the ability or the willingness to use personal policies and verify trust. All this means that implementing our principles will require the combination of personal, context-aware, dynamic and computer-understandable security and privacy policies, trust verification, data encryption, notification services, and the encapsulation of data and related contextual metadata. A Trusted Third Party service which can act on behalf of the DS to manage trust also seems necessary. A big political challenge is to what extent business companies, governmental as well as professional organizations, and health professionals are willing to implement our principles. Our further work focuses on ontologies for trust, privacy and wellness. Our principles should also be converted into a computer-understandable policy language. We will also develop a security and privacy architecture which realizes our principles.
Acknowledgements: We acknowledge the funding of the THEWS project (Trusted eHealth and eWelfare Space) by the Finnish Academy of Science within the MOTIVE research program during 2009-2012.
References

[1] Kiefer S. Personal Health Systems (PHS) Overview and Research Trends, Fraunhofer Institut, Biomedizinische Technik, 2007, ec.europa.eu/information_society/events/phs.../phs2007-kiefer-s1a.pd.
[2] Arnrich O, Mayora J, Bardram G, Tröster B. Pervasive Healthcare: Paving the Way for a Pervasive, User-Centered and Preventive Healthcare Model, Methods Inf Med 49 1 (2010), 67-73.
[3] Bardram J. Pervasive Healthcare as a Scientific Discipline, Methods Inf Med 47 3 (2008), 178-185.
[4] Varchney U. ACM Communications, December 2003, pp. 138-140.
[5] Beslay L, Hakala H. Digital territory: Bubbles. In: Wejchert J, ed. The Vision Book, Brussels, 2005.
[6] Wassernaar J. Privacy Rules, A Steep Chase For Systems Architects, www.w3.org/2006/07/privacyws/papers/04-borking-rules.
[7] Huda N, Sonehara N, Yamada S. A privacy management architecture for patient-controlled personal health record systems, Journal of Engineering Science and Technology Vol. 4, No. 2 (2009), 154-170.
[8] Ball M, Gold J. Banking on Health: Personal Records and Information Exchange, Journal of Healthcare Information Management, Vol. 20, No. 2 (2006), 71-83.
[9] Joshi A, Finin T, Kagal L, Parker J, Patwardhan A. Security policies and trust in ubiquitous computing, Philosophical Transactions of the Royal Society A (2008) 36.
[10] Langheinrich M. Privacy by Design – Principles of Privacy-Aware Ubiquitous Systems, www.inf.ethz.ch/~langheinrich.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-502
Populating the i2b2 Database with Heterogeneous EMR Data: a Semantic Network Approach

Sebastian MATE a,1, Thomas BÜRKLE a, Felix KÖPCKE a, Bernhard BREIL b, Bernd WULLICH c, Martin DUGAS b, Hans-Ulrich PROKOSCH a,d, Thomas GANSLANDT d
a Chair of Medical Informatics, University Erlangen-Nuremberg, Erlangen, Germany
b Institute of Medical Informatics, University of Münster, Münster, Germany
c Department of Urology, Erlangen University Hospital, Erlangen, Germany
d Center for Medical Information and Communication, Erlangen University Hospital, Erlangen, Germany
Abstract. In an ongoing effort to share heterogeneous electronic medical record (EMR) data in an i2b2 instance between the University Hospitals Münster and Erlangen for joint cancer research projects, an ontology based system for the mapping of EMR data to a set of common data elements has been developed. The system translates the mappings into local SQL scripts, which are then used to extract, transform and load the facts data from each EMR into the i2b2 database. By using Semantic Web standards, it is the authors’ goal to reuse the laboriously compiled “mapping knowledge” in future projects, such as a comprehensive cancer ontology or even a hospital-wide clinical ontology. Keywords. i2b2, electronic medical records, secondary use, semantics, controlled vocabulary, heterogeneous data integration
1. Introduction

Data collection for cross-institutional research projects or the annotation of biospecimens is often done by manual re-entry of data into a shared database. This process is error-prone, time-consuming and may result in incomplete data collection. With the shift from paper-based to electronic documentation in recent years, much of this data is already captured in various subsystems of the hospital information system, for example in the electronic medical record (EMR). It is tempting to reuse this data for research purposes. However, while technical access to these databases is easy, it is very difficult to process this data in a semantically correct manner, especially if it is not encoded with a standardized coding system. This task becomes even harder when trying to merge medical data from different hospitals. The efficient reuse of these large pools of precious information has been declared a major challenge for medical informatics in the near future [1]. The Deutsches Prostatakarzinom Konsortium e.V. (DPKK) is a German cross-institutional research network consisting of more than 70 urologists, pathologists and
Corresponding author: [email protected]
scientific researchers fighting prostate cancer. Similar to the CPCTR's efforts in the US [2], one of its goals is to establish a shared database of tissue specimens, containing annotation data from the patients' medical history, surgery and pathology. Recently, a new common dataset was defined by DPKK experts in Erlangen and Münster, comprising 26 medical concepts (e.g. pTNM) with 154 atomic enumerable values (e.g. pN=0) and 12 medical concepts with non-enumerable values (e.g. the PSA value). The current web-based DPKK research database implementation, however, requires the re-entry of such clinical annotation data, even though most of the data are already stored in the partners' EMR systems. We therefore evaluate a new single-source approach based on i2b2, an NIH-funded, open source clinical data warehouse and translational toolkit [3], as a pilot project between the university hospitals in Erlangen and Münster. i2b2 features a generic database schema and enables the easy and user-friendly construction of database queries to determine patient cohorts based on the combination of eligibility criteria [4]. In order to reuse the data elements already stored within the two hospitals' EMR systems, we had to implement ETL (extraction/transformation/loading) steps to load those data into the i2b2 database. Since i2b2 does not provide an integrated means for data loading, we had to establish those functions externally. For this purpose we decided not to use proprietary import/export programs between the respective EMR systems and i2b2, but to extend i2b2 with an ontology suite, which supports the generic mapping of heterogeneous EMR data to a set of common data elements. These mappings can then be processed to perform the data export into the i2b2 research database. By using Semantic Web standards [5] for the definition of machine-processable, declarative mappings, it is our vision to reuse the now laboriously compiled "mapping knowledge" in future projects, combined with other freely available medical ontologies [6], in the context of a comprehensive cancer or hospital ontology.
2. Methods

We chose an approach in which all required information is represented by semantic networks in the flexible Web Ontology Language (OWL) [5], as illustrated by the two bold arrows in Figure 1. The targeted DPKK dataset is defined inside a target ontology describing all data elements which shall be exported into the i2b2 database. There, all concepts are stored in a taxonomy-like structure with attributes such as name, datatype, and a short textual description, plus, if applicable, i2b2-specific attributes such as medication and lab value ranges. To speed up the ontology editing process, we have developed OntoEdit for entering and editing those contents. In a similar manner, each source system's EMR data structure (i.e. data entry forms, data input fields, enumerable value lists, checkboxes and radio buttons within Soarian metadata) has to be defined in the shape of a source ontology. This source ontology also contains technical information on how to access the source system's database in order to retrieve the data records represented by each ontology concept. The creation of this ontology is custom to each source system. If direct access to the source system's EMR metadata is difficult (e.g. because of licensing issues), we have implemented OntoGen to support the import and use of CSV files instead. OntoGen publishes the data records from the CSV file in a temporary database and automatically derives the ontologies from the columns' headlines and by aggregating data values.
When the source and target ontology have been defined in OWL, mappings between the two can be defined inside a flexible mapping ontology. Figure 1 illustrates two different types of mappings. In the first example, the target concept D is directly mapped to the source concept B using the hasImport relation, because D exactly matches B. Therefore, the corresponding data records from B can be exported to i2b2 without any data transformation. In some cases, however, filtering and transforming of source concepts may be necessary in order to conform to the concepts in the target ontology. We express such operations with intermediate transformation nodes. This is illustrated in Figure 1 with an ADD node between the concepts E, A and C, which means that the target concept E is the sum of the source concepts A and C. To keep operations "semantically atomic", nodes are limited to two operands; complex operations can be expressed by cascading multiple nodes into expression trees (an example is given later in Figure 2). We have developed QuickMapp for the easy creation of such mappings. In order to actually perform the data export from the source EMR to i2b2, an export tool, OntoExport, automatically translates the information stored inside all ontologies into SQL statements. These extract the source systems' data records, transform them according to the mapping rules and write them into the target i2b2 database.
Figure 1. All information to perform a data export is described in semantic networks.
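The translation of declarative mappings into SQL can be illustrated with a toy example. The following Python sketch is only an illustration of the idea behind OntoExport, not its implementation; the table and column names (emr_facts, val_a, ...) are hypothetical, and the real system works on OWL ontologies rather than on Python dictionaries.

```python
# Simplified sketch of the mapping-to-SQL idea; table and column names are
# hypothetical, corresponding to the concepts A, B, C, D and E in Figure 1.
mappings = [
    {"target": "D", "type": "hasImport", "source": "B"},
    {"target": "E", "type": "ADD", "operands": ["A", "C"]},
]

# Where each source concept's values live in the (hypothetical) EMR schema.
source_columns = {"A": "emr_facts.val_a", "B": "emr_facts.val_b", "C": "emr_facts.val_c"}

def to_sql(mapping):
    if mapping["type"] == "hasImport":
        col = source_columns[mapping["source"]]
        return f"SELECT patient_id, {col} AS value FROM emr_facts"
    if mapping["type"] == "ADD":
        left, right = (source_columns[o] for o in mapping["operands"])
        return f"SELECT patient_id, {left} + {right} AS value FROM emr_facts"
    raise ValueError(f"unsupported node type: {mapping['type']}")

for m in mappings:
    print(f"-- target concept {m['target']}")
    print(to_sql(m) + ";")
```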
3. Results

We have implemented this approach for the EMR systems Siemens Soarian Clinicals® in Erlangen and Agfa HealthCare ORBIS® in Münster. In Erlangen, we were able to derive 42,000 ontology elements from the Soarian EMR by processing its metadata tables. Because direct database access to ORBIS in Münster was not allowed, we had to use a CSV export from the EMR and post-process it with OntoGen. Table 1 summarizes the achieved mapping results. More than 75% of the required DPKK data elements could be matched directly from the two EMR systems. For 10 data elements in Erlangen and one in Münster, transformation nodes had to be defined in the mapping ontology. In Erlangen, four of these data elements required checking whether a specific data entry form existed. This could only be implemented by using a workaround, which simulates another – in reality nonexistent – database table that stores this abstract information. One mapping in Erlangen was not yet supported, because it required access to administrative data (date of birth) from the ADT system; OntoExport, however, is currently limited to processing data records from only one database connection at a time. At both sites, two mappings were impractical to create
because our current implementation is limited to mappings at the value level only. Creating them with the current implementation would result in 108 distinct partial mappings.

Table 1. Results after mapping the two EHRs to the common DPKK dataset with 166 concepts.

                                                     Erlangen Hospital           Münster Hospital
No. of concepts directly mapped                      138                         127
No. of concepts mapped through transformations       10 (4 with a workaround)    1
No. of concepts not documented in source system      15                          36
No. of mappings not supported / impractical          1 / 2                       0 / 2
Generated SQL statements / execution time            548 / ~15 seconds           284 / ~3 seconds
Number of facts / patients in source table           29,721,416 / 161,512        5,100 / 500 (test data)
Obtained facts / patients for DPKK i2b2              3,686 / 155                 2,585 / 487 (test data)
Concerning our method’s mapping capabilities, we have successfully implemented and tested various types of string manipulation as well as arithmetic, Boolean and comparison operations. Figure 2 shows a complex real-world example from Erlangen. The DPKK data set requires the latest PSA value from each respective EMR. In Soarian, four outpatient follow-up PSA measurements are stored in four data fields with associated date fields. A fifth, extra PSA field contains the last inpatient value if no outpatient follow-up has been done so far. The export logic consists of five distinct partial mappings with conditional checks. The first mapping (A) checks whether Date1 is greater than Date4, Date3 and Date2 and only then exports the PSA1 field (B): only if all GREATERVT nodes evaluate to “True” (the “VT” variant in particular allows the comparison of blank/null fields) do the nodes IF3, IF2 and IF1 pass the data records abstracted by the PSA1 concept into the i2b2 database. Likewise, three other mappings (C) process the fields for PSA/Date 2, 3 and 4. The fifth mapping (D) checks whether all outpatient fields are empty and, if so, exports the extra inpatient PSA value (E).
Figure 2. Example of a complex transformation: PSA mapping for Erlangen.
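Expressed procedurally, the export logic of figure 2 amounts to selecting the outpatient PSA value with the most recent date and falling back to the extra inpatient value when no outpatient follow-up exists. The following Python sketch only illustrates this decision logic; the field names are hypothetical placeholders for the Soarian fields, not the actual metadata identifiers:

    def latest_psa(record):
        """Return the PSA value to export: the most recent dated outpatient value,
        or the extra inpatient value if no outpatient follow-up was documented."""
        dated = [(record.get(f"Date{i}"), record.get(f"PSA{i}")) for i in range(1, 5)]
        dated = [(d, v) for d, v in dated if d is not None and v is not None]
        if dated:
            return max(dated)[1]          # comparison on the date, as the GREATERVT cascade does
        return record.get("PSA_extra")    # fifth, inpatient field

    # Example: only Date2/PSA2 documented -> PSA2 is exported
    print(latest_psa({"Date2": "2010-06-01", "PSA2": 4.2, "PSA_extra": 3.1}))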
4. Discussion There have been several prior projects to integrate and query heterogeneous medical data, e.g. [7, 8]. However, most of these implementations are stand-alone systems that require the formulation of complex queries in proprietary query syntax, while our approach reuses an existing platform (i2b2) for the final data integration that also acts as a proven, easy-to-use query interface. One major advantage of our approach is that the transformation and loading processes between EMR source data structures and the DPKK target data set are not implemented with proprietary import/export programs and SQL code, but defined on a higher, more generic and reusable ontology level. Thus, the mappings and domain knowledge can be reused in other i2b2 and warehousing projects and can be processed with standard tools such as Protégé. Furthermore, extending the data integration pilot project to further EMR systems (as planned for the next step) will reuse the already defined target ontology and only require the definition of new source ontologies. These need to be compatible with the database’s SQL syntax and the EMR’s data schema in order to create proper SQL statements for the data extraction. The current implementation must still be considered prototypical, as it offers opportunities for improvement. We plan, for example, to improve the SQL code generation by optimizing the processing order of the nodes. We further plan to extend the ontology suite to support mappings at different hierarchy levels instead of the value level only. Currently, we have limited the target ontology’s semantic features to the functionality of the i2b2 system. We are confident, however, that we will be able to expand or link the target ontology to a more powerful ontology that follows commonly accepted desiderata [9] and standards [10] for medical terminologies. By using the OWL format, our approach can act as a bridge between raw medical data, i2b2 and the Semantic Web, because it enables the linkage to other freely available medical ontologies [6]. Thus, by using i2b2 and extending it with our ontology suite we feel confident that we have made a step forward in efficiently accessing and reusing EMR data from routine care for a cross-institutional research database.
References
[1] Prokosch HU, Ganslandt T. Perspectives for Medical Informatics: Reusing the Electronic Medical Record for Clinical Research, Methods of Information in Medicine 48 (2009), 38–44.
[2] Patel AA, et al. The development of common data elements for a multi-institute prostate cancer tissue bank: The Cooperative Prostate Cancer Tissue Resource (CPCTR) experience, BMC Cancer 5 (2005).
[3] Murphy SN, et al. Architecture of the Open-source Clinical Research Chart from Informatics for Integrating Biology and the Bedside, AMIA Annu Symp Proc 2007 (2007), 548–552.
[4] Deshmukh VG, et al. Evaluating the informatics for integrating biology and the bedside system for clinical research, BMC Med Res Methodol 9 (2009).
[5] Ruttenberg A, et al. Advancing translational research with the Semantic Web, BMC Bioinf 8 (2007).
[6] Bodenreider O. Biomedical Ontologies in Action: Role in Knowledge Management, Data Integration and Decision Support, Yearb Med Inform (2008), 67–79.
[7] Sujansky W. Heterogeneous database integration in biomedicine, J Biomed Inform 34 (2001), 285–298.
[8] Hernandez T, Kambhampati S. Integration of Biological Sources: Current Systems and Challenges Ahead, SIGMOD Rec 33 (2004), 51–60.
[9] Cimino JJ. Desiderata for Controlled Medical Vocabularies in the Twenty-First Century, Methods Inf Med 37 (1998), 394–403.
[10] Solbrig HR. Metadata and the Reintegration of Clinical Information: ISO 11179, M.D. Computing: Computers in Medical Practice 17 (2000), 25–28.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-507
A Novel Way of Standardized and Automized Retrieval of Timing Information along Clinical Pathways
Eva GATTNAR a,b,1, Okan EKINCI a, Vesselin DETSCHEW b
a Clinical Competence Center Cardiology, Siemens Healthcare, Germany
b Institute of Biomedical Engineering and Informatics, Ilmenau Technical University, Germany
Abstract. Improving the effectiveness and efficiency of acute healthcare is very important nowadays. Optimization of clinical pathways regarding quality, time and costs is one of the key management strategies for critical diseases such as heart attack and stroke. Identifying workflow bottlenecks requires a thorough understanding of both the hospital environment (e.g. IT-systems) and processes (e.g. clinical pathways). Given the interoperability issues in hospitals, standardized time measurement across divisions and systems is still a challenge. Therefore this paper presents a novel way of structured and standardized retrieval of timing information along the clinical pathway of time-critical diseases in the context of hospital IT-systems, which represents a promising opportunity to identify workflow bottlenecks across departmental and system borders. Keywords. clinical process monitoring, process analysis, clinical times, hospitals.
1. Introduction Improving the effectiveness and efficiency of acute healthcare is very important nowadays. Optimization of clinical pathways regarding quality, time and costs is one of the key management strategies for time-critical diseases such as heart attack and stroke. Although it is highly standardized, the diagnostic and treatment process can be more or less time-consuming depending on the workflow bottlenecks in a particular hospital. Identifying workflow bottlenecks in such a process – which overall can be accomplished within one or two hours – requires a thorough understanding of both the hospital environment (IT-systems) and the clinical processes (pathways). Healthcare processes are nowadays heavily dependent on information technology. Given its enormous impact and rapid evolution in hospitals, various standards and systems have to be considered in order to monitor clinical pathways. Given the interoperability issues in hospitals, standardized performance measurement across divisions and systems is still a challenge. We propose a novel way of standardized and automized retrieval and communication of timing information along the clinical pathways of time-critical diseases in the context of various clinical communication standards and systems.
Corresponding Author: Eva Gattnar, Clinical Competence Center Cardiology, Siemens Healthcare, Allee am Röthelheimpark 3a, 91502 Erlangen, Germany; E-mail: [email protected].
Beside the investigation of the IT-infrastructure and interoperability issues, we propose a newly developed clinical process model for performance measurement and quality improvement in hospitals. In this paper we exemplarily describe the retrieval of time-relevant IT-based events using the example of the diagnostic process performed in a hospital. Our proposed integrated technical and clinical process-oriented approach provides the basis for automated and standardized performance measurement and monitoring in a heterogeneous clinical IT-environment. There are other healthcare-specific approaches to clinical process mapping and improvement besides the one presented in this paper. An example is the Module Library for Medical Process Models (MoBimeP), which was developed for standardized clinical pathway mapping and a-priori pathway modularization [1]. The Patient Journey Modeling Architecture (PaJMa) focuses especially on the patient flow [2]. The 3LGM meta-model focuses on sub-processes in hospitals and allows hospital information system modeling [3], [4]. The Breakthrough Series Model for Improvement (MOI) is another patient-flow-based approach initially developed for the manufacturing and business domains [5]. Lean Thinking’s Value Stream Modeling supports the reengineering of healthcare processes by using the lean paradigm derived from the field of manufacturing [5]. All these approaches have in common that they do not support event-based time measurement. Furthermore, clinical and disease-specific performance measurement is not supported. In the field of workflow analysis there are approaches that provide event-based process monitoring and mining with the objective of optimizing the workflow [6], [7], [8], [9]. However, they were not specifically designed for healthcare.
2. Methods A prerequisite for the acquisition of process lead times is knowledge of the processes. For modeling purposes, Event-driven Process Chains (EPC) and the ARIS Toolset (Architecture of Integrated Information Systems) are used. The EPC is a methodology for the semi-formal description of business processes and is central to a variety of reference models [10]. The ARIS Toolset, a widely used business process modeling architecture, is very often employed for reference modeling by means of EPCs [11]. The EPC method is one of the most commonly used methods in business modeling [12]. An EPC represents the temporal and logical dependencies of events and processes, and also allows the explicit notation of events at which process performance measurements can be taken. For these reasons, the EPC is a suitable method for supporting clinical process measurement efforts. In our approach, event-based time measures can be performed in all clinical IT-systems that are relevant for monitoring. However, a great number of workflow bottlenecks in the field of acute diseases arise in the radiology department, where the diagnostics are performed. Therefore, in this paper we focus on the diagnostic process and investigate the retrieval and communication of process-related timestamps on the basis of existing clinical standards in the radiology department. We evaluated and verified the described results and the underlying methodology with several clinicians and medical specialists. Additionally, we verified our clinical model, upon which the investigation is based, using the clinical pathways of several hospitals. We draw the missing link between IT and the clinical process by integrating clinically meaningful IT-timestamps into our EPC model and assigning them to the appropriate events in the
model. The assignment of timestamps to the clinical process is presented in this paper based on an excerpt of our clinical process model in EPC-notation.
3. Retrieval and Communication of Timing Information in the Radiology Department The patient care process can be divided into several distinct sub-processes (admission, diagnostics, treatment and discharge) during an inpatient hospital stay. While admission and discharge take place only once, several diagnostic and treatment procedures can be repeated consecutively. In the following, the diagnostic process is analyzed exemplarily. In this phase, examinations are carried out in the catchment area of radiology information systems (RIS) and imaging modalities (e.g. Computed Tomography) and therefore the automatic generation of additional clinically relevant timestamps is possible. DICOM (Digital Imaging and Communications in Medicine) – one of the clinical standards in the radiological environment – defines the format and mechanism for the exchange and storage of radiologically meaningful information. So-called Modality Performed Procedure Steps (MPPS) can be sent to the RIS at the beginning and the end of an examination [13].
The diagnostic process in the field of radiology can be divided into sub-processes. First, a modality examination is requested (request). In the ideal case, the required patient data are automatically transferred from the upstream Hospital Information System (HIS) into the RIS; otherwise they are entered manually and supplemented by additional data (diagnostic problem, contraindications, etc.). Subsequently, the examination is planned (planning), which includes both the scheduling and the planning of the procedure itself. The received request is assigned by the RIS to an examination room (i.e. a specific modality). The modality can independently retrieve the DICOM worklist from the RIS and import it into its local worklist.
Following the preliminary planning activities, preparatory arrangements in the examination room (e.g. cleaning of the room, sterile coverage, and prepositioning of materials) are made (preparation). At the same time, the patient is transported to the radiology department or is already waiting there. Subsequently, the patient can be prepared for the examination (disinfection, puncture, positioning on the table, anesthesia, etc.).
To begin the next step, the examination itself, the registration of the patient at the modality is required. For this, the patient data can be selected from the local worklist of the modality and the examination (procedure) can be started. Acute patients who are registered later in the worklist, or not registered at all, are given priority here. In the latter case, the patient registration has to be done manually at the modality. Once the registration has been completed and the examination protocols at the modality have been started, the DICOM MPPS information about the start of the study (“MPPS in progress”) is generated by the modality and transmitted to the RIS [13]. Before the start of the first image acquisition, the table has to be positioned, the patient’s position on the table has to be reviewed (possibly the patient will be positioned first) and the modality has to be adjusted. During an examination run, one or more images or image series are acquired using modality-specific protocols, sequences or applications. Studies may be repeated several times in series until all necessary images are captured in diagnostic quality and the variety of images is sufficient to state the diagnosis with high confidence.
After completion of the examination, relevant information is compiled by the modality and sent together with existing patient and order data via DICOM MPPS to the RIS (“MPPS completed”) [13]. Details of timing of the examination runs (duration
between MPPS in progress and the launch of the first image acquisition, number and length of image acquisitions, time from the last image acquisition up to MPPS completed, etc.) can be obtained from the logfiles of the modalities. Then the documentation is completed and the results are processed (processing). In addition, the transport of the patient and after-care activities are initiated. At the same time, the interpretation, evaluation and reporting of the medical findings can be started. Finally, the report is distributed to the responsible healthcare professionals and archived in the system, among other documents (report distribution).
4. Event-driven Process Chain Model In order to measure clinical process execution times, it is necessary to integrate the timestamps into a clinical context. For this purpose, a generic EPC model was created that consists of 21 modules. It is restricted to the hospital environment. The modules can be called and executed repeatedly during a process run. Figure 1 illustrates the process performed at an imaging modality. As shown in the figure, the events whose timestamps should be collected are labeled in the model.
Figure 1. Timestamp mapping to the modality procedure process (EPC)
The clinical process times can be obtained by calculating the difference between the timestamps of the final and initial events of a process chain, e.g. the length of an image acquisition (t (acquisition finished) – t (acquisition started)) and/or of the entire
examination (t (procedure finished) – t (procedure started)). In addition, further data and the sources of the timestamps are mapped to or stored in the model.
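To make this calculation concrete, the following Python sketch derives such process times from collected event timestamps; the event names follow the examples above, but the timestamp values and the way they are obtained (RIS/modality logs, MPPS messages) are hypothetical and not part of the authors' tooling:

    from datetime import datetime

    # Hypothetical timestamps collected from RIS and modality logfiles
    events = {
        "procedure started":    datetime(2011, 3, 1, 10, 2),   # "MPPS in progress"
        "acquisition started":  datetime(2011, 3, 1, 10, 7),
        "acquisition finished": datetime(2011, 3, 1, 10, 19),
        "procedure finished":   datetime(2011, 3, 1, 10, 25),  # "MPPS completed"
    }

    def duration(final_event, initial_event):
        """Clinical process time = t(final event) - t(initial event)."""
        return events[final_event] - events[initial_event]

    print(duration("acquisition finished", "acquisition started"))  # length of the image acquisition
    print(duration("procedure finished", "procedure started"))      # length of the entire examination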
5. Conclusion The process-based retrieval and communication of clinical timing information was investigated in this paper using the example of the diagnostic process. The assignment of timestamps to the clinical process was outlined based on an excerpt of the developed EPC model. However, not all timestamps provided in the model are currently available in clinical systems. In cases of unavailability, we recommend providing automated generation of the appropriate IT-timestamps. In summary, a process-oriented view of IT-timestamps represents a promising opportunity to retrieve timing information about the clinical pathways within a hospital environment. This provides the basis for process monitoring, analysis and optimization and can lead to an improved quality of care. Additionally, the utilization of expensive devices such as modalities can be monitored, which enables cost reduction through workflow improvement.
References
[1] Eisentraut K, Ammon D, Winkler M, Detschew V. Abbildung klinischer Behandlungspfade als Modelle in der Unified Modeling Language. 53. Jahrestagung der GMDS; 2008 Sep 15-19; Stuttgart, Germany. Düsseldorf: German Medical Science GMS Publishing House; 2008.
[2] McGregor C, et al. A Structured Approach to Requirements Gathering Creation Using PaJMa Models. Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society; 2008 Aug 20-24; Vancouver, Canada. IEEE Computer Society Press; 2008. P. 1506-1509.
[3] Winter A, Haux R. A Three Level Graph-Based Model for the Management of Computer-Supported Hospital Information Systems. Methods of Information in Medicine. 1995;4(4):378-396.
[4] Buchauer A, Ammenwerth E, Winter A, Haux R. 3LGM: Method and Tool to Support the Management of Heterogeneous Hospital Information Systems. Computers in Medicine. 1997;1:77-82.
[5] Curry J, McGregor C, Tracy S. A Communication Tool to Improve the Patient Journey Modeling Process. Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society; 2006 Aug 30 – Sep 03; New York City, USA. IEEE Computer Society Press; 2006. P. 4726-4730.
[6] zur Mühlen M, Rosemann M. Workflow-based process monitoring and controlling – technical and organizational issues. HICSS-33. Proceedings of the 33rd Hawaii International Conference on System Science; 2000 Jan 4-7; Los Alamitos, California. IEEE Computer Society Press; 2000. P. 1-10.
[7] Aalst W. Business Alignment: Using Process Mining as a Tool for Delta Analysis. Requirements Engineering. 2005;10(3):198-211.
[8] Aalst W, Weijters A, Maruster L. Workflow mining: A survey of issues and approaches. Data & Knowledge Engineering. 2003;47(2):237-267.
[9] Aalst W, Hofstede A, Weske M. Business Process Management: a Survey. BPM2003. Proceedings of the International Conference on Business Process Management; 2003 June 26-27; Eindhoven, Netherlands. Berlin: Springer; 2003. P. 1-12.
[10] Keller G, Nüttgens G, Scheer A-W. Semantische Prozeßmodellierung auf der Grundlage Ereignisgesteuerter Prozeßketten (EPK). Veröffentlichungen des Instituts für Wirtschaftsinformatik (IWi). Saarbrücken: Universität des Saarlandes; 1992. Report No.: 89.
[11] Fettke P, Loos P. Perspectives on Reference Modeling. In: Fettke P, Loos P, editors. Reference Modeling for Business Systems Analysis. London: Idea Group Publishing Inc; 2007.
[12] Status Quo Prozessmanagement 2010/2011 [Internet] 2011. Available from: http://www.bpm-expo.com.
[13] Noumeir R. Benefits of the DICOM Modality Performed Procedure Step. Journal of Digital Imaging. 2005 Dec;18(4):260-269.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-512
Computing the Compliance of Physician Drug Orders with Guidelines Using an OWL2 Reasoner and Standard Drug Resources
Joseph NOUSSA YAO a, Brigitte SÉROUSSI b, Jacques BOUAUD c,a,1
a INSERM UMR_S 872, éq. 20, CRC, Paris, France
b UPMC, UFR de Médecine, Paris, France; AP-HP, Hôpital Tenon, DSP, Paris, France; Université Paris 13, UFR SMBH, LIM&BIO, Bobigny, France
c AP-HP, STIM, Paris, France
Abstract. Assessing the conformity of a physician’s prescription to a given recommended prescription is not obvious, since both prescriptions are expressed at different levels of abstraction and may concern only a subpart of the whole order. Recent formalisms (OWL2) and tools (reasoners) from semantic web technologies are becoming available to represent defined concepts and to handle classification services. We propose a generic framework based on such technologies, using available standardized drug resources, to compute the compliance of a given drug order with a recommended prescription, such that the subsumption relationship yields the conformity relationship between the order and the recommendation. The ATC drug classification has been used as a local ontology. The method has been successfully implemented for arterial hypertension management, for which we had a sample of antihypertensive orders. However, supplemental standardized drug knowledge is needed to correctly compare drug orders to recommended orders. Keywords. drug prescription, guideline compliance computation, ontological reasoning (OWL2), ATC drug classification, semantic web tools
1. Introduction Due to the explosion of medical knowledge, medicine is currently becoming increasingly complex. As a consequence, health care providers may fail to update their clinical practices at the pace of the evolution of the state of the art, which results in significant clinical practice variations and sometimes in medical errors. Urged by public opinion, governments worldwide aim at structuring quality in health care. In France, physician certification is monitored by the “Haute Autorité de Santé”. The process specifically assesses the actual clinical practices of a physician with respect to clinical practice guideline (CPG) recommendations. However, if compliance with CPGs is easily computed in the case of prevention campaigns or vaccination programs,
Corresponding author: J. Bouaud, PMR-Orphanet INSERM U872, 96 rue Didot, 75014 Paris, France.
it is much more complex in the case of therapeutic prescriptions, especially for chronic diseases. We assume that computerized guideline-based clinical decision support systems (CDSSs) will be more and more available in the future, so that detailed patient-centered recommended drug orders will be issued. Among others, we developed the “guiding mode” of ASTI [1], a prototype CDSS applied to therapeutic prescribing in primary care. More than merely securing prescriptions through drug-oriented checks (mostly intra-order drug-drug interactions and drug indications or contraindications) embedded in CPOE [2,3] or drug database systems, such CDSSs operate at a higher strategic level by delivering the best evidence-based recommended treatments that account for the current clinical situation and the patient’s past treatments. However, computing the compliance of an actual drug order with guideline-based recommendations raises several difficulties. The first one derives from the different levels of abstraction, which require matching drug names appearing in orders, e.g. Aprovel® (irbesartan), with recommended drugs expressed in CPGs as therapeutic classes, e.g. angiotensin II receptor blockers (ARBs). A second difficulty lies in the fact that drug orders are not structured by therapeutic indications but may also contain many drugs related to different pathologies. A third difficulty is to determine the level of combination of therapeutic classes in a given single order. For instance, a single order of Aprovel® corresponds to a monotherapy for hypertension whereas a single order of Coaprovel® corresponds to a bitherapy (combination of irbesartan and hydrochlorothiazide). The semantic web community has nowadays produced standards and tools for representing knowledge and reasoning using ontologies. The aim of this work is to develop a generic external module that computes the compliance of drug orders with recommended orders based on standards for both technologies and contents. We used the reasoning services of the Web Ontology Language standard OWL2 (http://www.w3.org/TR/owl2-overview/) and the widespread WHO Anatomical Therapeutic Chemical (ATC) drug classification (http://www.whocc.no/). The tool has been evaluated on a sample of 422 GP orders, in the management of a clinical case of arterial hypertension, which we collected in an on-line survey described in [4].
2. Material and Method The approach consists in proposing models for prescriptions and recommendations so that the subsumption relationship established by ontological reasoning yields the conformity relationship between the order and the recommendation. 2.1. OWL2 and Reasoners OWL is a standardized language for describing hierarchical structures of classes (or concepts) and defining classes using logical descriptions. This syntax can be associated with different logical models, with different levels of expressivity, based on description logics. Reasoners are logical engines that classify defined concepts within the asserted ontology. Because of our application to prescriptions and the need to quantify the level of drug combinations (e.g. bi- and tritherapies), we needed the SHOINQ interpretation of OWL that includes qualified cardinality restrictions (QCR). Only recent developments in reasoners, such as HermiT [5], which we used, handle this property.
2.2. ATC Drug Hierarchical Classification ATC is a worldwide standard providing a hierarchical drug organization. Drugs are divided into different groups according to the organs or systems they act on, and to their chemical, pharmacological and therapeutic properties. Drugs are mainly classified following their therapeutic use and their main ingredients. There is at least one ATC code for each pharmaceutical formulation (e.g. C09CA04 for Aprovel® and C09DA04 for Coaprovel®, which is a combination). The ATC structure does not allow the ingredients of combinations to be identified. However, for drugs with one active ingredient, the ATC is locally equivalent to a subsumption hierarchy. 2.3. The Prescription Model Many prescription models have been proposed, up to the HL7 standards. In this work, we only focus on the composition relationship, noted hasComp, for modeling our drug prescriptions. Each prescription is linked to the ATC codes of its composing drugs by hasComp. Since our application is hypertension management, in case of several codes we use the one belonging to the C sub-hierarchy (cardio-vascular system). For instance, a prescription P2 “Tareg® 160, Lipanthyl® 160” is rewritten as “valsartan, fenofibrate”, then as “C09CA03, C10AB05” using their ATC codes. At a more abstract level, assuming Drug is the top class of all ATC codes, we define a genericPrescription in OWL using the Protégé syntax as:
genericPrescription = Thing and (hasComp some Drug)
Drug and genericPrescription are defined as disjoint. Then, for each prescription we build an OWL-defined class in which we state that the prescription has only the drugs it contains and exactly these drugs. Considering P2, we obtain this definition: P2 =
genericPrescription and (hasComp only (C09CA03 or C10AB05)) and (hasComp exactly 1 C09CA03) and (hasComp exactly 1 C10AB05)
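The construction of such OWL-defined prescription classes can be automated. The following Python sketch generates the Protégé-style expression shown above from a list of drug names; the drug-to-ATC dictionary is a small hypothetical excerpt (using the codes given in the text), not the authors' code:

    # Hypothetical, partial mapping from drug specialty names to ATC codes (C sub-hierarchy preferred).
    ATC = {"tareg": "C09CA03", "lipanthyl": "C10AB05", "aprovel": "C09CA04", "coaprovel": "C09DA04"}

    def prescription_class(name, drugs):
        """Build a Protégé-style OWL class expression for a prescription, as in section 2.3."""
        codes = [ATC[d.lower()] for d in drugs]
        parts = [f"(hasComp only ({' or '.join(sorted(set(codes)))}))"]
        for code in sorted(set(codes)):
            parts.append(f"(hasComp exactly {codes.count(code)} {code})")
        return f"{name} = genericPrescription and " + " and ".join(parts)

    print(prescription_class("P2", ["Tareg", "Lipanthyl"]))
    # -> P2 = genericPrescription and (hasComp only (C09CA03 or C10AB05))
    #         and (hasComp exactly 1 C09CA03) and (hasComp exactly 1 C10AB05)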
2.4. The Recommendation Model The OWL representations of recommended prescriptions must be built so that they subsume actual prescription representations. We define intermediate concepts for n-therapies. These are the definitions of anti-hypertensive mono- and bitherapies:
monoAntiAHT = genericPrescription and (hasComp exactly 1 antiHTA)
biAntiAHT = genericPrescription and (hasComp exactly 2 antiHTA)
antiHTA is a defined concept made of the exclusive list of ATC codes of the therapeutic classes mentioned in the French hypertension management CPGs. Recommended orders specify the combination level of drugs and their ATC classes. We use 2 possible recommended prescriptions, “ARBs alone” and “ARBs + thiazides”, represented as:
R-ARBs = monoAntiAHT and (hasComp some C09CA)
R-ARBs-Th = biAntiAHT and (hasComp some C09CA) and (hasComp some C03A)
2.5. The Conformity Computation Module Computing the conformity of a drug order with recommended orders consists in transforming orders into their OWL representations, adding the OWL-coded ATC hierarchy, then submitting the OWL file to a reasoner, and assessing whether there is a subsumption relationship between the actual order and one of the recommended ones. If we consider the following 5 drug orders with respect to the 2 recommendations R-ARBs and R-ARBs-Th, we obtain the subsumption graph depicted in figure 1.
P1 = pravastatin 40 1/day; coaprovel® 300/12,5 1/day;
P2 = tareg® 160; lipanthyl® 160 1 pill/d;
P3 = aprovel® 300; esidrex®; lipanthyl® 160;
P4 = hydrochlorothiazide 25mg 1 pill on morning; lipanthyl® 160mg 1/day;
P5 = amlor®; lasilix® 20; lipanthyl® 60; metformin;
Figure 1. The subsumption graph for the sample of prescriptions.
P2 (ARBs, fenofibrate) is a monotherapy of ARBs (subclass of R-ARBs); P3 (ARBs, thiazides, fenofibrate) is a bitherapy of ARBs and thiazides (subclass of R-ARBs-Th); P4 (thiazides, fenofibrate) is not a subclass of any of the 2 recommendations, but is a monotherapy of antiAHT; P5 (CCBs, loop diuretics, fenofibrate, metformin) is not conformant to the recommendations but is an anti-hypertensive bitherapy; P1 (pravastatin, (ARBs and thiazides)) is not recognized as a conformant bitherapy because of the unique non-decomposed Coaprovel® ATC code. As a result, only P2 and P3 are appropriately considered conformant with the recommendations, whereas P1 is not, although it should be.
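For illustration, the conformity computation just described could be driven from Python with the owlready2 library, which wraps the HermiT reasoner. This is only one possible realization, not the authors' module: it assumes the prescription and recommendation classes of sections 2.3 and 2.4 have been serialized, together with the ATC hierarchy, into a hypothetical file compliance.owl, and that the class names use underscores (R_ARBs, R_ARBs_Th):

    from owlready2 import get_ontology, sync_reasoner

    onto = get_ontology("file://compliance.owl").load()   # hypothetical OWL file

    with onto:
        sync_reasoner()   # classification with the HermiT reasoner

    # After classification, conformity amounts to a subsumption test between defined classes.
    recommendations = [getattr(onto, "R_ARBs"), getattr(onto, "R_ARBs_Th")]
    for name in ["P1", "P2", "P3", "P4", "P5"]:
        order = getattr(onto, name)
        conformant = any(issubclass(order, rec) for rec in recommendations)
        print(name, "conformant" if conformant else "not conformant")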
3. Results Prescription data were obtained during the on-line study described in [4]. 266 GPs participated and, for clinical case #2 alone (a 52-year-old hypertensive woman), a total of 442 drug orders were collected. The recommended anti-hypertensive treatment was either an ARBs monotherapy (R-ARBs) or an ARBs+Th bitherapy (R-ARBs-Th).
For each GP drug order, conformity was computed and compared to the conformity gold standard. Concordance reaches 99.55% with only 2 discordant results. On this sample, although this is an inexact test, the sensitivity of the conformity computation was measured at 0.99 and its specificity at 1.0. There were no false positives. The only 2 false negatives (i.e. compliant prescriptions that were missed) correspond to prescriptions of the drug combination in the Coaprovel® specialty.
4. Discussion and Conclusion We have developed a generic module using an OWL2 reasoner to compute the compliance of physician drug orders with CPG recommendations using ATC therapeutic classes. The same technologies have been used recently for care plans [6]. The cardinality problem in subsumption checking has been solved, allowing the identification of mono-, bi- or tritherapies of antihypertensive drugs thanks to the ability of OWL2 reasoners to manage cardinality. The ATC classification has been used as an “ontology-like” hierarchy. However, if the sub-trees concerning molecules (as opposed to combinations) can be considered as ontologies, the approach remains limited in the case of single drug names that correspond to combinations of ingredients. The solution is to represent active ingredients instead of drugs. However, the problem does not lie in the modeling of ingredients as drug sub-components, but rather in the lack of standardized, published and shared resources like ATC. For instance, as in other countries, the French drug safety agency (AFSSAPS) has developed a drug database from marketing authorization forms that includes active ingredients, but they are not normalized and not in standardized formats. This stresses the need for normalized terminologies of active substances for the sake of semantic interoperability. RxNorm and NDF-RT™ are such efforts but are local to the US administration.
References
[1] Séroussi B, Bouaud J, Dréau H, et al. ASTI, a guideline-based drug-ordering system for primary care. In: Patel VL, Rogers R, Haux R, eds, Medinfo, 2001:528–32.
[2] Teich JM, Merchia PR, Schmiz JL, Kuperman GJ, Spurr CD, Bates DW. Effects of computerized physician order entry on prescribing practices. Arch Intern Med. 2000 Oct 9;160(18):2741-7.
[3] Devine E, Hansen R, Wilson-Norton J, et al. The impact of computerized provider order entry on medication errors in a multispecialty group practice. J Am Med Inform Assoc 2010;17(1):78–84.
[4] Séroussi B, Bouaud J, Sauquet D, et al. Why GPs do not follow computerized guidelines: an attempt of explanation involving usability with ASTI guiding mode. Stud Health Technol Inform 2010;160(Pt 2):1236–40.
[5] Motik B, Shearer R, Horrocks I. Hypertableau Reasoning for Description Logics. J Artif Intell Res 2009;36:165–228.
[6] Din MA, Abidi SS, Jafarpour B. Ontology based modeling and execution of nursing care plans and practice guidelines. Stud Health Technol Inform 2010;160(Pt 2):1104–8.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-517
Automatic Definition of the Oncologic EHR Data Elements from NCIT in OWL
Marc CUGGIA a, Annabel BOURDÉ a, Bruno TURLIN b, Sebastien VINCENDEAU b, Valerie BERTAUD a, Catherine BOHEC c, and Régis DUVAUFERRIER a
a UMR 936 Inserm, Faculté de médecine de Rennes, France
b CHU Pontchaillou, Rennes, France
c Réseau ONCOBRETAGNE, Rennes, France
Abstract. Semantic interoperability based on ontologies allows systems to combine their information and process it automatically. The ability to extract meaningful fragments from an ontology is key for ontology re-use, and the construction of a subset will help to structure clinical data entries. The aim of this work is to provide a method for extracting a set of concepts for a specific domain, in order to help define the data elements of an oncologic EHR. Method: a generic extraction algorithm was developed to extract, from the NCIT and for a specific disease (i.e. prostate neoplasm), all the concepts of interest into a sub-ontology. We compared all the extracted concepts to the manually encoded concepts contained in the multi-disciplinary meeting report form (MDMRF). Results: We extracted two sub-ontologies: sub-ontology 1 by using a single key concept and sub-ontology 2 by using 5 additional key concepts. The coverage of the MDMRF concepts by sub-ontology 2 was 51%. The low rate of coverage is due to the lack of definition or mis-classification of NCIT concepts. By providing a subset of concepts focused on a particular domain, this extraction method helps to optimize the binding process of data elements and to maintain and enrich a domain ontology. Keywords. Semantic interoperability, modularization, data elements, value-set, information system, ontology
1. Introduction The development of health information systems, including the EHR (Electronic Health Record), requires the definition of information models that capture increasingly complex patient data. Standardization efforts for these models are in progress, either through HL7 (version 3) templates [1] or archetypes (openEHR / EN 13606) [2]. These models define and organize data elements, i.e. basic units of information built on standard structures, each having a unique meaning and distinct units or values [3]. These data elements are used in forms, messages or documents in order to capture or transmit patient data in an interoperable way. The process of defining an information model consists of two steps. Firstly, we define all the data elements necessary and sufficient to capture the information of a domain. Each data element contains a label and a value-set based on an interface terminology (end-user oriented) [4,5]. Secondly, in order to ensure semantic interoperability, we bind this interface terminology to a pivot terminology, i.e. controlled vocabularies or reference ontologies. This “bottom-
up” approach, beginning from a consensus of experts and leading to the formalization of the semantics of an information model, is particularly cumbersome, whereas biomedical ontologies precisely provide the domain knowledge. The objective of this work is to show the interest of using an ontology in a top-down approach, to automatically extract, from a few key concepts (or “seed concepts”), a sub-ontology to define data elements and their value sets. We extracted from the National Cancer Institute Thesaurus (NCIT) a sub-ontology representing the data elements of the MDMRF (Multi-Disciplinary Meeting Report Form) related to prostate neoplasm. In recent years, many techniques for extracting ontology fragments, starting from a search criterion, have been developed. LexValueSets [6] defines an approach for extracting data elements (value-sets) from SNOMED CT. This technique uses two complementary processes: the extensional one, where extraction is conducted using a set of concepts chosen by experts, and the intentional one, where extraction is done from a semantic definition of a concept. Modularization techniques have also been explored in several studies [7]. They consist of methods for the extraction of ontological modules from an original ontology. The objective is to enable the reuse of ontologies, but also to facilitate their development, management and use [8]. Our work was conducted within the research project ANR ASTEC, whose objective is to automatically determine, from the MDMRF data, the eligibility of patients for inclusion in clinical trials.
2. Methods Extraction algorithm: The goal of this work is to provide a consistent sub-ontology with the domain concepts, i.e. a subset of the NCIT with all the semantic relations between the concepts. Our work is based on the NCIT (version 10.07) because of its availability in OWL format, its free use, its specificity to the oncology domain, and because it is both a terminology for encoding and an internationally recognized reference ontology. Using the Protégé-OWL API to access the ontology model, the extraction algorithm takes several parameters as input: an ontology in OWL format, a list of key concepts from which the extraction should begin, the directions in which it searches (i.e., towards parents, children and restrictions) and the list of restriction types to be followed (i.e., relations between concepts other than subsumption). It searches for semantically related concepts of interest and adds them to an initially empty ontology, which progressively grows to form the sub-ontology.
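Independently of the Protégé-OWL API, the traversal performed by the extraction algorithm can be sketched as a graph walk that collects parents, children and restriction targets of the key concepts and then closes the result upwards to the root. The dictionaries below are simplified stand-ins for the NCIT, not the authors' implementation:

    # Simplified stand-ins for the ontology: subsumption links and followed restriction links.
    PARENTS = {}       # concept -> set of parent concepts
    CHILDREN = {}      # concept -> set of child concepts
    RESTRICTIONS = {}  # concept -> set of target concepts reached through followed restrictions

    def ancestors(concept):
        """All parents up to the ontology's root (to keep the sub-ontology consistent)."""
        result, stack = set(), [concept]
        while stack:
            for parent in PARENTS.get(stack.pop(), set()):
                if parent not in result:
                    result.add(parent)
                    stack.append(parent)
        return result

    def extract(key_concepts):
        """Collect the concepts of interest for a sub-ontology from a list of key concepts."""
        selected = set()
        for key in key_concepts:
            selected.add(key)
            selected |= ancestors(key)
            selected |= CHILDREN.get(key, set())
            for c in {key} | CHILDREN.get(key, set()):
                for target in RESTRICTIONS.get(c, set()):
                    selected.add(target)
                    selected |= ancestors(target)   # close the sub-ontology upwards
        return selected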
Figure 1. Extraction from the Prostate_Neoplasm key concept.
Initially, the key concept to which the extraction algorithm was applied was Prostate_Neoplasm (figure 1). We searched all its parents, children, and all the target concepts related to either Prostate_Neoplasm or its children with a restriction. Then we searched all the parents of these target concepts up to the ontology's root to have a
consistent ontology. This is sub-ontology 1. To obtain a better coverage of the domain, we added, in a second step, 5 concepts (given by two experts as the concepts that best represent the domain) to Prostate_Neoplasm as key concepts. Then we applied the extraction algorithm again with these 6 concepts. This is sub-ontology 2. Method for evaluating the algorithm: To evaluate our method, we used, as the target to reach, the MDMRF about prostate cancer created by the clinicians. We manually encoded all data elements of this MDMRF as NCIT concepts to compare them to those of our sub-ontologies. To analyze the sub-ontologies, we grouped the MDMRF concepts and those of each sub-ontology into 5 subsets (figure 2). Set A: manually encoded MDMRF concepts. Set B: concepts of the sub-ontology that exactly match those of A. The ratio B/A gives the proportion of ontology concepts that were strictly the same as those in the MDMRF. Then, to analyze sub-ontology 2 more precisely, two physicians evaluated all concepts of this sub-ontology and classified them into subsets. Set B’: concepts that are strictly the same as those in the MDMRF (Set B) plus the concepts that the experts defined as semantically close to concepts in Set A. These concepts could substitute for Set A concepts or could complete the data elements of the MDMRF; this is an extension of Set B. Set C: concepts that could be present in an EHR but that the experts did not keep for the MDMRF. Set D: other concepts that could not be present in the MDMRF nor in the EHR but that are necessary to formally define the sub-ontology concepts. We also analyzed the MDMRF concepts that were not extracted into the two sub-ontologies (Set A minus Set B) in order to determine the reason.
3. Results The NCIT ontology contains 83,143 concepts. The manual encoding of the 36 data elements of the MDMRF produced 82 NCIT concepts. 11 concepts were not found in the NCIT, such as the notion of hip replacement or chronic hepatic or renal insufficiency: recall is 86.5%. The extraction algorithm performance is compared to these 82 concepts. The extraction from a single key concept (Prostate_Neoplasm) produced sub-ontology 1, containing 434 concepts. Among these concepts, only 16 concepts (set B) matched exactly with the 82 MDMRF concepts (set A): recall B/A is 19.5%. When we checked sub-ontology 1, we noted the absence of some concepts, such as those about the TNM (the international classification of the extension of malignant tumors). The extraction from 5 key concepts in addition to the Prostate_Neoplasm concept (Prostate_Adenocarcinoma, Prostate_Cancer_TNM_Finding, Biopsy_of_Prostate, PSA_Assay, Total_Gleason_Score_for_Prostate_Cancer) produced sub-ontology 2, which contained more concepts (483). However, the recall B/A (51%) was better, since 42 concepts (set B) matched exactly with the 82 MDMRF concepts (set A). The precision (proportion of B in sub-ontology 2) was lower (9%). However, sub-ontology 2 contained 140 concepts that the experts classified in set B’ (precision is 27%). These concepts were either semantically close to MDMRF concepts and could be substituted for them (e.g., recurrent_prostate_neoplasm instead of recurrent_disease), or completed the list of possible values. For example, we find many missing histologic types that the experts did not integrate in the MDMRF (e.g., Prostate_Adenosquamous_Carcinoma or Prostate_Basal_Cell_Carcinoma). We also found concepts about tumor staging that are no longer used, in favor of the TNM classification (e.g., Stage_I_Prostate_Adenocarcinoma).
Set C contains 34 concepts that were not present in the MDMRF but that could be in the EHR. Set C contained essentially semiological concepts like Bone_Pain or Urinary_Retention that were not used in the MDM record because they were not taken into account for the MDM decision. Moreover, we found concepts about benign tumor forms like Prostate_Adenoma that can co-exist with a cancerous transformation. Set D contained 309 concepts that were not essential for encoding the MDMRF and the EHR. These are intermediate concepts that structure the sub-ontology and contribute to the definition of the concepts of interest. They could be used for reasoning and automatic classification. For example, they may be parents of the Prostate_Neoplasm concept, like Reproductive_System_Neoplasm or Disorder_by_Site, which are too general to characterize the disease but necessary for the reasoning. We also found genomic concepts (e.g., Gain_of_Chromosome_2p) that are of no interest for routine use but participate in the definition of the disease. 49% of the MDMRF concepts were not found in sub-ontology 2. These concepts were essentially about medical antecedents (e.g., Ischemic_Heart_Disease), about prostate neoplasm treatments (e.g., Adjuvant_Therapy), or about finding elements (e.g., birth_date, age or Zubrod_Performance_Status).
Figure 2. Sets used to evaluate sub-ontologies concepts.
4. Discussion Our extraction method allows a modularization of the NCIT into domain-specific sub-ontologies. This extraction makes available a limited number of concepts (those of interest in a domain) and facilitates the process of defining the data elements of an information model. This approach has already been shown to be relevant and is being integrated into terminology servers. This study was carried out on one single neoplasm type. Other neoplasms could be better defined in the NCIT, both in the number of concepts and in the richness of the relations. Performing the same study on other cancers should assess this selection bias. Moreover, if we had targeted the data elements of the prostate medical record instead of the MDMRF data elements (which contain only the elements crucial for the therapeutic decision), we would have a better coverage with the sub-ontology (through the concepts of Set C), especially for finding concepts. Our coverage of relevant terms (51%) is higher than that of the LexValueSets project (35%) if we consider a strict comparison. We reviewed the concepts of our sub-ontology exhaustively, whereas
it was done only on a sample in the LexValueSets project. The precision of Set B is quite low because our sub-ontology contains many concepts that do not belong to the MDMRF but are essential for automatic reasoning. Our algorithm takes as an input parameter a list of restrictions that are followed during the extraction. If we do not follow the restrictions that are of no interest for clinical data collection, such as those of the “omics” domain, the precision of the algorithm will increase. The relatively low coverage of sub-ontology 1 was increased by the multi-concept extraction (sub-ontology 2). This low coverage is explained by a lack of concept definitions and by misclassified concepts. Firstly, concepts are well represented in the NCIT (86.5%), but in the absence of relationships between concepts they cannot be extracted by the algorithm. Secondly, some concepts were misclassified in the NCIT: e.g., the extractor cannot retrieve the primary and secondary Gleason score concepts although we used the Total_Gleason_Score_for_Prostate_Cancer concept as a key concept. These three concepts are not in the same hierarchy (same parents) and are not related to the Prostate_Adenocarcinoma concept. Therefore, another contribution of our work is that this modularization allows misclassified or insufficiently defined concepts to be identified more effectively. The task of maintaining and enriching the ontology is made easier by working on a reasonably sized ontology and with a domain-based approach. Thus we describe a virtuous cycle where the ontology is optimized for encoding patient data, and in return the extraction algorithm yields better results thanks to the enrichment and better expression of the ontology. Compared to the bottom-up approach used for designing templates or archetypes, we propose to use a top-down approach, starting from the ontology, to recover the semantics of a domain. However, this approach assumes that the ontology contains concepts coming from “reality” (e.g. the MDM), and definitions that are sometimes non-formal and thus non-computable (e.g. may_have, which is extensively used in the NCIT). Acknowledgement: We would like to thank the ANR-TECSAN for its financial support and Sahar BAYAT for her help.
References
[1] HL7 Template, http://www.hl7.org/Special/committees/template/index.cfm
[2] Kalra D, Beale T, Heard S. The openEHR Foundation. Studies in Health Technology and Informatics, vol. 115, 2005, p. 153-173.
[3] Data element – Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Data_element
[4] Rosenbloom ST, Miller RA, Johnson KB, Elkin PL, Brown SH. Interface Terminologies: Facilitating Direct Entry of Clinical Data into Electronic Health Record Systems. JAMIA, 13, 2006, pp. 277-288.
[5] Daniel C, Buemi A, Mazuel L, Ouagne D, Charlet J. Functional Requirements of Terminology Services for Coupling Interface Terminologies to Reference Terminologies. MIE 2009:205-209.
[6] Pathak J, Jiang G, Dwarkanath SO, Buntrock JD, Chute CG. LexValueSets: an approach for context-driven value sets extraction. AMIA Annual Symposium Proceedings / AMIA Symposium, 2008, p. 556-560.
[7] Seidenberg J, Rector A. Web ontology segmentation: analysis, classification and use. Proceedings of the 15th International Conference on World Wide Web, ACM, 2006, p. 13-22.
[8] Rector A, Napoli A, Stamou G, et al. Report on modularization of ontologies. Technical report, Knowledge Web Deliverable D2.1.3.1, 2005.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-522
Developing a Model for the Adequate Description of Electronic Communication in Hospitals
Samrend SABOOR a,1, Elske AMMENWERTH a
a Institute for Health Information Systems, UMIT - University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria
Abstract. Adequate information and communication systems (ICT) can help to improve communication in hospitals. Changes to the ICT infrastructure of hospitals must be planned carefully. In order to support comprehensive planning, we presented a classification of 81 common errors of electronic communication at the MIE 2008 congress. Our objective now was to develop a data model that defines specific requirements for an adequate description of electronic communication processes. We first applied the method of explicating qualitative content analysis to the error categorization in order to determine the essential process details. After this, we applied the method of subsuming qualitative content analysis to the results of the first step. The result is a data model for the adequate description of electronic communication; this model comprises 61 entities and 91 relationships. The data model comprises and organizes all details that are necessary for the detection of the respective errors. It can either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach. Keywords. Systems architecture, Computer Communication Networks, HIS management
1. Introduction Communication is essential for healthcare [1]. The quality of medical treatment depends on how well the information requirements are met [2]. Errors in the communication between healthcare providers (e.g., interruptions in inter-personal communication [3]) directly affect the patient treatment process and can even harm patients [3]. Information and communication systems (ICT) shall contribute to the improvement of communication (i.e., the transmission of information objects) [3]. A main prerequisite for successful communication on a technical level is standardization [4]. Established international communication standards and guidelines for health care do exist (i.e., DICOM (Digital Imaging and Communications in Medicine) [5], HL7 (Health Level 7) [6] and IHE (Integrating the Healthcare Enterprise) [7]). But their utilization is aggravated, amongst other things, by the complexity of the hospital’s electronic communication infrastructure or, e.g., the usage of legacy systems [8]. Current publications show that there are still connectivity problems occurring for DICOM (e.g. [9]) as well as for HL7
Corresponding Author: Dr. Samrend Saboor. University for Health Sciences, Medical Informatics and Technology (UMIT) – Eduard Wallnöfer-Zentrum 1, 6060 Hall in Tyrol, Austria. E-mail: [email protected]
(e.g., [10]). Therefore, it is important to carefully plan any changes to the communication infrastructure (e.g., new installations or updates). Amongst others, potential errors must be considered in this planning. Previously, we presented a classification of 81 common problems that can occur within the electronic communication of information objects [11] (extended version [12]). It is organized in a hierarchy of five levels and names 229 reasons for the collected errors as well as recommendations to avoid them. A small example shall clarify what the categorization entries look like (for reasons of simplicity, only the sections that pertain to the problem, reasons and recommendation are regarded here): the problem “Incompatible value representation of data attributes” is associated with the reasons “Conversion error in communication standard – lack in DICOM-standard regarding combination of attribute length and value representation” or “Incompatible character sets”. One recommendation for this problem is “Avoid usage of attributes with implicit value representation”. The categorization is a valuable aggregation of practical experiences and could support the planning process by describing the conditions for possible problems. However, its extent (i.e., the number of problem, reason and recommendation entries) and the complexity of the relationships between the error conditions complicate the manual application of the categorization to real communication processes. Without further support, it is difficult to organize all necessary process details in order to take them into account when checking for the existence of any error. Consequently, it is necessary to describe communication processes using an adequate model in order to efficiently make use of the categorization’s content. Such models could be queried automatically for the existence of potential communication errors. Therefore, we have used the error categorization to develop a complex data model that defines specific requirements (i.e., basic entities and their relationships) for an adequate description of electronic communication processes. The aim of this paper, as a sequel, is to describe how the data model was developed and its basic entities.
2. Methods 2.1. The Data Model’s Basis – Categorizing Common Communication Errors The new data model is meant to describe electronic communication processes for the purpose of detecting potential conflicts. In order to consider all required details of such processes, the data model is based on a categorization of 81 common communication errors. The categorization names 229 reasons for these errors and recommendations for avoiding them. For instance, one entry names the reason “R1: Usage of paper-based or analog media” for the problem “P1: Unavailable information objects”. For more details please refer to [12]. 2.2. The Data Model’s Content – Determining the Required Details In order to determine what details are essential for the adequate description of communication processes, we applied the method of explicating qualitative content analysis (specifically, narrow context analysis which uses details from the regarded literature itself to clarify important but vague details) on the error categorization [13]. Our aim was to describe the conditions for each of the problem-entries within the categorization
(such as the above mentioned example P1 and one of its reasons R1) in a clear, flexible, uniform and redundancy-free way. The basic idea was to develop clear expressions in terms of propositional logic. We did this according to the following steps:
Acquisition of basic statements: We examined the collected reasons (e.g., "R1: Usage of paper-based or analog media") and recommendations within the classification for basic statements. Here, we transferred each of the entries into a basic statement (i.e., a question or sentence) that can be clearly evaluated (i.e., it pertains to only one specific aspect) and assigned it a new variable name. Such a resulting statement would be, e.g., "V2: Information object has persistent media".
Clarification of basic statements: Some of the basic statements still needed to be clarified. But there were also other basic statements that provided the required details. We thus substituted the unclear statements by a combination of the more concrete statements. For instance, the aforementioned statement V2 was substituted by the statements "V2.1: Information object is electronically persistent" and "V2.2: Information object is provided paper-based".
Rebuilding the error conditions: We then used the basic statements in order to rebuild the conditions that are described by the original reason and recommendation entries within the error classification in the form of logical terms (e.g., Problem = V1 AND V2).
Aggregation of error conditions: Some of the communication problems had multiple reasons and recommendations. This required an additional aggregation of the logical terms. We combined the logical terms according to the disjunctive normal form.
2.3. The Data Model's Relationships – Developing a Relationship Model In total, we extracted nearly 600 basic statements out of the original error conditions within the error categorization. Thus, we applied the method of subsuming qualitative content analysis in order to identify the main entities and their relationships. We used the results for constructing an Entity Relationship Model (ERM). This ERM (see Figure 1 for an excerpt) defines the data model for describing communication processes.
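To make the aggregation step concrete, the following minimal sketch shows how error conditions in disjunctive normal form over basic statements could be represented and evaluated against a modeled process. The statement identifiers and the condition chosen for P1 are illustrative assumptions, not entries copied from the categorization.

```python
# Sketch (not the authors' implementation): error conditions expressed in
# disjunctive normal form over basic statements and evaluated against facts.
from typing import Dict, List, Tuple

# A condition in DNF: a list of conjunctions; each conjunction is a list of
# (statement_id, expected_truth_value) pairs.
Condition = List[List[Tuple[str, bool]]]

ERROR_CONDITIONS: Dict[str, Condition] = {
    # Hypothetical condition: P1 could be triggered when the information object
    # is neither electronically persistent nor provided paper-based.
    "P1_unavailable_information_object": [
        [("V2_1_electronically_persistent", False),
         ("V2_2_provided_paper_based", False)],
    ],
}

def condition_holds(condition: Condition, facts: Dict[str, bool]) -> bool:
    """True if at least one conjunction is fully satisfied by the facts."""
    return any(all(facts.get(var, False) == expected for var, expected in term)
               for term in condition)

# Facts describing one modeled communication process (invented values).
facts = {"V2_1_electronically_persistent": False,
         "V2_2_provided_paper_based": False}

for problem, cond in ERROR_CONDITIONS.items():
    if condition_holds(cond, facts):
        print("potential problem:", problem)
```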
3. Results Figure 1 shows an excerpt of the aforementioned ERM as an example – in total, it includes 61 entities and 91 relationships. Figure 1 depicts only the most important entities of our model that shall describe electronic communications:
Communication process: The root entity that groups an ordered number of connections between different application systems.
Connection: Involves a pair of application systems that communicate via their interfaces. Each connection fulfills a specific purpose.
Purpose: The overall goal of a connection (e.g., transmitting information objects or retrieving a modality worklist). This goal is achieved through one or more services.
Service: A service that is defined by a specific communication standard (e.g., DICOM Query/Retrieve service for retrieving worklists). A service is performed by a number of operations.
Operation: A single information processing step that the respective application system implements. It is also possible to specify the sequence in which the operations are executed.
Interface: Propagates a specific service that is supported by the respective application system. Each interface is, e.g., dedicated to one communication standard in a specific version. It also supports certain character sets and transfer syntaxes for presenting and transmitting information objects. Further, an interface is a collection of concrete attribute specifications. It is also possible to define a mapping between the attribute specifications of connected interfaces (in case the interfaces do not store semantically equal attributes in the same tags or if communication brokers are used).
Attribute specification: Based on entries of a data dictionary. An attribute specification includes details like, e.g., index, data type, value repetition or whether values are optional or mandatory.
Application system (version): A software application that is part of the hospital's ICT infrastructure. There can be several different versions of one application system that might perform their operations on the information objects differently.
Information object: A collection of attribute definitions. There can be different versions of an information object instance (e.g., after the content was edited). Related information objects (e.g., a series of a radiological study) can be grouped in a collection.
Figure 1. Excerpt of the ERM that describes the basic entities (e.g., communication process, connection) of our data model and their relationships (e.g., each connection has a purpose)
Due to space limitations and the complexity of the ERM, it is not possible to present all details. In case of further interest, please contact the first author.
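As a rough illustration of how such a description could be instantiated, the following sketch maps the entities listed above onto simple data classes. The attribute names are simplified assumptions and cover only the Figure 1 excerpt, not the full model of 61 entities and 91 relationships.

```python
# Simplified sketch of the entity types from Figure 1; field names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttributeSpecification:
    tag: str                 # index into a data dictionary, e.g. a DICOM tag
    data_type: str
    mandatory: bool = True

@dataclass
class Interface:
    standard: str            # e.g. "DICOM" or "HL7"
    version: str
    character_sets: List[str] = field(default_factory=list)
    attributes: List[AttributeSpecification] = field(default_factory=list)

@dataclass
class ApplicationSystem:
    name: str
    version: str
    interfaces: List[Interface] = field(default_factory=list)

@dataclass
class Connection:
    purpose: str             # e.g. "retrieve modality worklist"
    sender: ApplicationSystem
    receiver: ApplicationSystem
    services: List[str] = field(default_factory=list)

@dataclass
class CommunicationProcess:
    name: str
    connections: List[Connection] = field(default_factory=list)
```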
4. Discussion and Conclusion ICT can help to improve communication in hospitals. However, connectivity problems still occur although established communication standards do exist. An adequate description of the communication processes could help to plan changes of the ICT infrastructure. In this paper we propose a data model for such an adequate description. It is based on a categorization of common errors of electronic communication [12]. Therefore, our data model does not depend on any specific process types or case studies.
Consequently, it can potentially be applied to any electronic communication process. To our knowledge, there is currently no other modeling method that anticipates such a broad variety of communication errors. Other approaches concentrate, e.g., on formal description and assessment possibilities (e.g., [14]). The elaborated data model could either be used to extend the capabilities of existing modeling methods or serve as a basis for the development of a new approach. Either way, a proper implementation of the data model is needed for evaluation purposes.
5. Outlook As a proof of concept, we have already implemented a software prototype. This prototype was evaluated in a local university hospital with the goal of determining whether it is practically possible to describe all details of real communication processes. For this purpose, we used the prototype to model communication processes like the electronic ordering of radiological examinations. We also want to know whether it is possible to detect potential errors in the real processes on the basis of the data model. For this, we test modeled processes using specific queries which we developed on the basis of the error categorization (each query evaluates a specific condition for the existence of one problem). First results of the evaluation are promising and encourage us to continue the research.
References
1. Gurses AP, Xiao Y. A systematic review of the literature on multidisciplinary rounds to design information technology. J Am Med Inform Assoc 2006;13(3):267-76.
2. Gaus W, Haux R, Knaup-Gregori P, Leiner F, Pfeiffer K. Medizinische Dokumentation. 4 ed. Stuttgart: Schattauer GmbH; 2003.
3. Pirnejad H, Niazkhani Z, Berg M, Bal R. Intra-organizational communication in healthcare - considerations for standardization and ICT application. Methods Inf Med 2008;47(4):336-45.
4. van Bemmel JH, Musen MA. Handbook of Medical Informatics. 1999 [cited 2009 Jan]; Available from: http://www.mieur.nl/mihandbook/r_3_3/handbook/home.htm
5. ACR/NEMA. DICOM Homepage. [Webpage] 2008 [cited 2008 Sep]; Available from: http://medical.nema.org
6. HL7. Health Level 7. 2008 [cited 2008 Sep]; Available from: http://hl7.org
7. HIMSS/RSNA. IHE - changing the way healthcare connects. [Webpage] 2008 [cited 2008 Sep]; Available from: http://www.ihe.net
8. De Moor GJ, Claerhout B, van Maele G, Dupont D. e-Health standardization in Europe: lessons learned. Stud Health Technol Inform 2004;100:233-7.
9. Oosterwijk H. DICOM questions, answered. Radiol Manage 2008;30(1):33-9; quiz 40-2.
10. Barnes M. Lessons learned from the implementation of clinical messaging systems. AMIA Annu Symp Proc 2007:36-40.
11. Saboor S, Ammenwerth E. Developing a taxonomy of communication errors in heterogeneous information systems. In: Andersen SK, Klein GO, Schulz S, Aarts J, Mazzoleni MC, editors. eHealth Beyond the Horizon - Get IT There - Proceedings of MIE2008 - The 21st International Congress of the European Federation for Medical Informatics; 2008 May 25-28; Göteborg, Sweden: IOS Press; 2008. p. 461-466.
12. Saboor S, Ammenwerth E. Categorizing communication errors in integrated hospital information systems. Methods Inf Med 2009;48(2):203-10.
13. Mayring P. Qualitative Inhaltsanalyse - Grundlagen und Technik. 8. Auflage. Weinheim und Basel: UTB; 2003.
14. Winter A, Strubing A. Model-based assessment of data availability in health information systems. Methods Inf Med 2008;47(5):417-24.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-527
Contextualization in Automatic Extraction of Drugs from Hospital Patient Records
Svetla BOYTCHEVA a,c, Dimitar TCHARAKTCHIEV b, Galia ANGELOVA a
a Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria
b University Specialized Hospital for Active Treatment of Endocrinology (USHATE), Medical University, Sofia, Bulgaria, [email protected]
c University of Library Studies and IT, Sofia, Bulgaria
Abstract. Information Extraction (IE) from medical texts aims at the automatic recognition of entities and relations of interest. IE is based on shallow analysis and considers only sentences containing important words. Thus, IE of drugs from discharge letters can identify as 'current' some past or future medication events. This article presents heuristic observations that enable the filtering of drugs that are taken by the patients during the hospitalization. These heuristics are based on the default PR structure and on linguistic expressions signaling temporal and conditional markers. They are integrated in a system for drug extraction from hospital Patient Records (PRs) in Bulgarian. Current evaluation results are summarized as well. Keywords. Natural Language Processing (NLP), Automatic IE from Patient Records, Structuring and Contextualization of medication events
1. Introduction NLP is viewed as a promising technology that can help to acquire structured information by (partial) understanding of free-text medical documents [1]. Usually only extraction of entities and relations of interest is implemented since full text understanding is hard to achieve. In general the IE success is limited; the overview [2] suggests that the limitations of semantic analysis (the so-called '60-percent barrier') are perhaps due to the shallow processing which tackles only 'what the text wears on its sleeve'. However, the extraction correctness is much higher for well-defined tasks and in well-defined domains. Usually IE accuracy is measured by the precision (percentage of correctly extracted entities as a subset of all extracted entities), recall (percentage of correctly extracted entities as a subset of all relevant entities available in the corpus) and their harmonic mean called f-score. A successful system is MedEx, which extracts medication events with 93,2% f-score for drug names, 94,5% for dosage, 93,9% for route and 96% for frequency [3]. The systems presented at the NLP shared task 'Medication Extraction Challenge' in 2009 also achieve best scores of about 90% [4]. Our extractor identifies 1537 drug names in 6200 Bulgarian PRs with f-score 98,42% and dosage with f-score 93,85% [5]. However, recognition of the mere drug name occurrences delivers no information regarding the 'present treatment' since numerous past or future medication events might be discussed in a single discharge letter. This article summarizes recent results related to the following research challenge: given an
IE component which recognizes medication events with high accuracy, how to filter drugs taken by the patient at the moment when the discharge letter is composed.
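For reference, the metrics mentioned above can be computed as in the following sketch; the counts in the example call are invented and do not correspond to the evaluations reported in this paper.

```python
# Standard IE metrics: precision, recall and their harmonic mean (f-score).
def precision_recall_f(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

print(precision_recall_f(tp=940, fp=20, fn=60))  # hypothetical counts
```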
2. Project Background Joining the project PSIP [6] via an FP7 ICT Call for extension of running projects, our aim is to extract drugs from hospital PRs in order to fill in a PSIP-compliant repository and to validate PSIP rules for Adverse Drug Events. The drugs prescribed through the Hospital Pharmacy of USHATE are sent to the PSIP repository via the Hospital Information System. But USHATE is a specialized hospital which treats endocrine diseases of patients coming from all over the country; due to this fact drugs for the accompanying diseases are often brought in by the patient and taken without records in the USHATE Computerized Physician Order Entry (CPOE). In these cases the medication is documented in the PR texts, so IE from discharge letters is the only means to generate a full picture of patient treatment during the hospitalization period. The Semantic Mining results achieved in PSIP mark the state of the art for the French language. Merlin et al. [7] present a detailed evaluation where the extracted drugs are compared to the suggestions by human experts or the already encoded EHR content. The extraction of ATC codes from French text is performed with f-score 88% when compared to the manual extraction and with f-score 49% compared to the CPOE content. The miners exploit no PR structuring and name searching is done in the whole PR text. But for Bulgarian we can split the input PR text into sections because some default headers exist, and try to contextualize the extracted facts.
3. Material and Methods The input texts in our experiment are free-text paragraphs of discharge letters. In Bulgaria the discharge letter structure is mandatory for all hospitals (it is published in the Official State Gazette, as Article 190(3) of the legal Agreement between the National Health Insurance Fund and the Bulgarian Medical and Dental Associations [8]). The PR text should contain the following sections: (i) personal details; (ii) diagnoses; (iii) anamnesis (personal medical history), including current complaints, past diseases, family medical history, allergies, risk factors; (iv) patient status, including results from physical examination; (v) laboratory and other tests findings; (vi) medical examiners comments; (vii) debate; (viii) treatment; (ix) recommendations. This structure could provide appropriate context for extraction of medication events relevant for the respective hospitalization, but in reality it is not strictly kept. Table 1 shows some statistics about the availability of the above-listed sections in a training corpus of 1300 USHATE PRs. Although the structure is mandatory, many PRs are structured differently due to the following reasons: section merging, changing the section headers, skipping (empty) sections and replacing the default section sequence.

Table 1. Percentage of PRs including standard sections (which can be automatically recognised)
Diagnoses 100 | Anamnesis 100 | Past diseases 88,52 | Allergies, risk factors 43,56 | Family medical history 52,22
Patient status 100 | Lab tests 100 | Examiners comments 59,95 | Debate 100 | Treatment 26,70
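Since the extraction relies on recognizing such default headers, a header-based splitter along the following lines could be used. The header strings below are English placeholders (the actual discharge letters and headers are in Bulgarian and vary between hospitals), so this is only a sketch of the idea.

```python
import re

# Illustrative, simplified section splitter; real headers are Bulgarian and noisy.
SECTION_HEADERS = ["Diagnoses", "Anamnesis", "Patient status", "Lab tests",
                   "Examiners comments", "Debate", "Treatment", "Recommendations"]

def split_into_sections(pr_text: str):
    pattern = "|".join(re.escape(h) for h in SECTION_HEADERS)
    sections, current = {}, None
    for line in pr_text.splitlines():
        match = re.match(rf"^\s*({pattern})\s*:?", line, flags=re.IGNORECASE)
        if match:
            current = match.group(1)
            sections[current] = []
        elif current:
            sections[current].append(line)
    return {name: "\n".join(body) for name, body in sections.items()}
```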
Thus inventing an algorithm for automatic recognition of 'current treatment' is a nontrivial task which can be tackled only by heuristics collected on a representative corpus. Drug names as tokens participate in all PR sections, including in (ii) Diagnoses (e.g. 'Amiodaron-induced hypothyroidism'). After removal of repeating drug names in each PR section, the recognised drugs in 1300 PRs are 10493 in total. Some 70% of them (7332) are unique in the respective PR and the remaining 30% (3161) repeat in several sections. The extractor has found 19 tokens in the Diagnoses section. Figure 1 summarizes the frequency of occurrences of drug names in the PR sections.
Figure 1. Potential descriptions of medication events in the various PR sections
4. Heuristics Supporting the Selection of 'Current Medication Events' Human experts have studied section by section the drugs extracted from 1300 anonymized PRs. The findings can be summarized as follows:
• Section 'Anamnesis' discusses past medication events using phrases in past or present tense, hence the grammatical tense itself is not a filter for the event time. Among 5143 drug occurrences, only for 462 (9%) does the local context contain explicit statements that the drug is taken during the hospitalization;
• Section 'Lab data and other tests' contains some 678 drug occurrences, especially in phrases like 'blood sugar has values X,Y while the patient is taking Z'. These events are related to the period of hospitalization when the lab tests are made;
• Section 'Medical Examiners comments' contains 465 drug name occurrences. About 90% of them are expressed by simple noun phrases with drug names and dosages, which can be interpreted as medication events related to the period of hospitalization. The remaining 10% contain longer conditional expressions like, e.g., 'if X is ineffective, replace it by Y' or 'if blood pressure increases include Z'. These phrases and the related drugs cannot be automatically interpreted as medication events happening in USHATE;
• In section 'Debate' some 3299 drug name occurrences are met. Some 542 (16,5%) are found in local contexts signaling therapy changes: start, stopping, increase, decrease, replacement of drugs. By default these events happen in USHATE;
• In section 'Treatment', which is available as a separate paragraph in about 26% of the PRs in our training corpus, 889 drug occurrences are encountered. Some 236 of them (26,5%) concern future events as the local contexts contain typical
expressions signaling recommendations for further treatment. The remaining 73,5% of the events take place in USHATE.
Our IE system wrongly interpreted 74 drug name occurrences as medication events: 1,36% participate in descriptions of allergies, 0,08% in descriptions of sensibility, and 0,01% in phrases signaling drug intolerance. In principle an IE system, based on shallow analysis in the local context, might suggest each drug name occurrence as an actual event. After the statistical observations in a corpus of 1300 PRs, we consider as 'current' the following medication events:
• drugs in 'Anamnesis', only if they are listed under the headers 'medication at the moment of hospitalization' or 'accompanying treatment'. Some phrases like 'started treatment with' can also be interpreted as hints for 'current medication' but only if they are not followed by phrases including 'replaced by', which signal past events;
• drugs in 'Medical Examiners comments' which are recognized by the system with high confidence as present events (and not future prescriptions), except cases with 'stop' and 'replace' phrases;
• drugs in 'Debate' which are recognized by the system with 100% confidence as present events (and not future prescriptions);
• drugs in 'Treatment' which are not recognized as future events.
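These rules lend themselves to a simple rule-based filter of roughly the following shape. The trigger phrases are English stand-ins for the Bulgarian expressions learnt from the training corpus, and the extractor's confidence handling is omitted, so this is an illustrative sketch rather than the system's actual implementation.

```python
# Sketch of the section- and phrase-based filtering of 'current' medication events.
CURRENT_HEADERS = {"medication at the moment of hospitalization",
                   "accompanying treatment"}
PAST_MARKERS = {"replaced by"}
FUTURE_MARKERS = {"recommend", "after discharge"}
STOP_MARKERS = {"stop", "replace"}

def is_current_medication(section: str, subheader: str, context: str) -> bool:
    ctx = context.lower()
    if section == "Anamnesis":
        if subheader.lower() in CURRENT_HEADERS:
            return True
        return ("started treatment with" in ctx
                and not any(m in ctx for m in PAST_MARKERS))
    if section == "Medical examiners comments":
        return not any(m in ctx for m in STOP_MARKERS | FUTURE_MARKERS)
    if section in ("Debate", "Treatment"):
        return not any(m in ctx for m in FUTURE_MARKERS)
    return False
```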
5. Evaluation Results, Impact and Conclusion Our aim in the PSIP project is to extract information about drugs, taken by the patient, which are not prescribed via the Hospital Pharmacy. Usually these are drugs for accompanying and chronic diseases. Our extractor, integrating the heuristics defined at the end of section 4, has found 355 such drugs in the experimental test corpus of 6200 PRs (in addition to the 1182 drugs that are in use in USHATE during the period relevant for our experiment). These 'external' drugs might be listed in the Anamnesis, in a special paragraph with a subtitle which human experts recognize easily, and also seen in the Debate or Treatment. The automatic recognition of drugs for the accompanying diseases is somewhat simpler since they are often documented as simple enumerations (because the accompanying diseases attract less attention in a specialized hospital). The extraction algorithm tackles some negative phrases as single expressions according to a study of negative forms in Bulgarian medical texts [9]. The simple rules, summarized above, help to reduce the over-generation of medication events concerning these 355 'external' drugs. We have performed an evaluation of our heuristic strategy on 1300 PRs; they contain in total 1648 names of drugs which are not in use by the USHATE Pharmacy. Table 2 presents the percentage of 'external' drugs mentioned in different PR sections in comparison to all drug events in those sections. We see that medication events for 'external' drugs are described mainly in the Anamnesis, Medical examiners comments, Debate and Treatment sections.

Table 2. Occurrences of drug names, which are not in use by the USHATE Pharmacy, in the PR sections
In the Anamnesis under header "Accompanying treatment": 17,42% | As prescription by external medical examiner: 16,34% | In the Debate: 12,28% | In the Treatment: 21,48%
Table 3 proves the feasibility of our approach to mine the local context by searching typical phrasal expressions which are learnt from a representative training corpus. We have noted about 6% over-generation for two categories of events: (i) in the
Anamnesis, when a past event is considered as a present treatment, and (ii) in the Debate and Treatment, when a recommended medication event is interpreted as a present one (but often these expressions are ambiguous even for human readers).

Table 3. Accuracy of automatic extraction of medication events, related to 355 drugs
Precision: 97,92% | Recall: 90,69% | f-score: 94,17%
Comparing our results to other experiments reported in the literature, we think that the relatively established PR structure is one of the main factors for the high accuracy. Despite the limitations of the NLP technologies, IE is applied to discharge letters in many languages other than English, French and German: there are prototypes in Greek [10], Polish, Hungarian etc. The long-term objective is to enable the automatic or semi-automatic filling of big specialized scientific databases by extracting data from patient-related texts. However there is an inevitable percentage of extraction errors, which might be due to unrecognized entities (false negative) or over-generated entities (false positive). We believe that the NLP results should be embedded into large-scale experiments where the IE-induced noise will be statistically insignificant. Such data driven activities are related to the secondary use of EHR data. Acknowledgments: The research tasks leading to these results have received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 216130 PSIP (Patient Safety through Intelligent Procedures in Medication).
References
[1] Demner-Fushman, D., W. Chapman and C. McDonald. What can natural language processing do for clinical decision support? Journal of Biomedical Informatics, 42(5), October 2009, pp. 760-772.
[2] Hobbs, J. and E. Riloff. Information Extraction. In: Indurkhya, N. and F. J. Damerau (Eds.) Handbook of Natural Language Processing, 2nd Ed., Chapman & Hall/CRC Press, Taylor & Francis Group, 2010.
[3] Xu, H., S. P. Stenner, S. Doan, K. B. Johnson, L. R. Waitman, and J. C. Denny. MedEx: a medication information extraction system for clinical narratives. JAMIA 17 (2010), pp. 19-24.
[4] Third i2b2 Shared-Task and Workshop "Challenges in Natural Language Processing for Clinical Data: Medication Extraction Challenge", https://www.i2b2.org/NLP/Medication/, last visited April 2011.
[5] Boytcheva, S. Shallow Medication Extraction from Hospital Patient Record. To appear in the Proc. 2nd Int. PSIP Workshop on Patient Safety through Intelligent Procedures in Medication, Paris, May 2011.
[6] PSIP project: Patient Safety through Intelligent Procedures in Medication, http://www.psip-project.eu, European Community's 7FP, Information and Communication Technologies Programme.
[7] Merlin, B., E. Chazard, S. Pereira, E. Serrot, S. Sakji, R. Beuscart, and S. Darmoni. Can F-MTI semantic-mined drug codes be used for Adverse Drug Events detection when no CPOE is available? In Studies in Health Technology and Informatics, Proc. 13th World Congress on Medical Informatics, Cape Town, South Africa, Volume 160, Number pt 1, 2010, pp. 1025-1029.
[8] National Framework Contract between the National Health Insurance Fund, the Bulgarian Medical Association and the Bulgarian Dental Association, Official State Gazette №106/30.12.2005, updates №68/22.08.2006 and №101/15.12.2006, Sofia, Bulgaria, http://dv.parliament.bg/.
[9] Boytcheva, S., A. Strupchanska, E. Paskaleva, and D. Tcharaktchiev. Some Aspects of Negation Processing in Electronic Health Records. In Proc. of International Workshop Language and Speech Infrastructure for Information Access in the Balkan Countries, 2005, Borovets, Bulgaria, pp. 1-8.
[10] Karanikolas, N. and C. Skourlas. Automatic Diagnosis Classification of patient discharge letters. In Health Data in the Information Society, Proceedings of MIE2002, Stud. Health Technology and Informatics, vol. 90, IOS Press, 2002, pp. 444-449.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-532
Revisiting the Area Under the ROC
Berry DE BRUIJN a
a National Research Council, Institute for Information Technology, Ottawa, ON, Canada
Abstract. The Receiver-Operating Characteristic curve or ROC has been a long standing and well appreciated tool to assess performance of classifiers or diagnostic tests. Likewise, the Area Under the ROC (AUC) has been a metric to summarize the power of a test or ability of a classifier in one measurement. This article aims to revisit the AUC, and ties it to key characteristics of the noncentral hypergeometric distribution. It is demonstrated that this statistical distribution can be used in modeling the behaviour of classifiers, which is of value for comparing classifiers. Keywords. ROC curve, classifier performance, statistical modeling
1. Introduction The Receiver-Operating Characteristic curve or ROC has been a long standing and well appreciated tool to assess performance of classifiers or diagnostic tests. Likewise, the Area Under the ROC (AUC) has been a metric to summarize the power of a test or ability of a classifier in one measurement, rather than other metrics that require a pair of measurements such as sensitivity & specificity or precision & recall. The AUC is, however, less frequently used than sensitivity & specificity, probably because it does not have the same strong intuitive interpretation. This article aims to revisit the AUC, and ties it to the noncentral hypergeometric distribution. It is demonstrated that this statistical distribution can be used in modeling the behaviour of classifiers. While there are several flavours of the ROC curve, the predominating one plots sensitivity on the Y-axis against 1-specificity on the X-axis [1]. Both are calculated from classifier/diagnostic-test performance on a finite set of cases. The classifier or test yields a numerical reading that maps onto a binary class-membership after a threshold is applied. By sweeping across the range of the possible threshold settings, the data points are derived for plotting. The Area Under the Curve is directly derived from this plot. As a note - for the rest of the paper, the term 'classifier' will be used but can be read as synonymous for ‘diagnostic test’. Among all possible curves, two have an immediate intuitive meaning. One is the curve that describes the perfect classifier: its curve goes straight up along the Y-axis from the origin (lower-left corner) to the upper-left corner (100% sensitivity, 100% specificity), and then straight to the upper-right corner. Clearly, the AUC for such a curve equals 1.0. The second standard curve is the diagonal line from the origin to the upper-right corner. This curve represents a 'chance' classifier that always takes a pure guess on every case. The AUC for the triangle under the diagonal is, clearly, 0.5. A third curve - one with an AUC of 0 - represents a classifier that actually makes perfect predictions but has the class labels reversed.
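As a concrete illustration of the threshold sweep described above, the following sketch derives ROC points and the trapezoidal AUC from a finite set of classifier scores. The scores and labels in the example are invented toy data, both classes are assumed to be present, and ties between scores are handled naively.

```python
# Sketch: ROC points by sweeping the decision threshold, and trapezoidal AUC.
def roc_points(scores, labels):
    """Return (1 - specificity, sensitivity) pairs, highest score first."""
    pos = sum(labels)
    neg = len(labels) - pos
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in ranked:
        tp += label
        fp += 1 - label
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC points."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

pts = roc_points([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 0, 1, 0])
print(auc(pts))  # 0.8333..., i.e. 5 of the 6 positive/negative pairs ranked correctly
```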
Given the two extremes - a 0.5 AUC for a random classifier and a 1.0 AUC for a perfect classifier, one can infer that the AUC actually corresponds to a likelihood of making correct predictions: Fawcett [1] describes how it is equivalent to the probability of ranking a randomly chosen positive instance higher than a randomly chosen negative instance. The following paragraphs will add statistical foundations to the interpretation of AUC, and finally, these are tied together with examples from practice.
2. Methods Given the constraints, one bridge between classifier behaviour, classifier performance, and AUC lies in a statistical distribution named the noncentral hypergeometric distribution [2]. This distribution describes how many positive/negative cases one can expect to draw given how many positive/negative cases there are in the total set (which is finite, as set in the assumptions), given the odds ratio for drawing a positive case, and given that draws take place without replacement. This can be rephrased as an 'urn with marbles' scenario that illustrates a stochastic process. Three aspects are important in the ‘urn’ metaphor: (1) our two classes of marbles are not equally represented: we are often faced with few 'positive' cases against many 'negative' cases; (2) marbles are taken out of the urn one by one, which affects the distribution of the marbles that remain in the urn for the next draw. A good classifier that started with a 10:90 urn might immediately pick 8 positives straight in a row. At that point it faces a 2:90 distribution, making the task to find those last two inherently harder; (3) the chance of a false-positive is assumed to be stochastic, it stays the same throughout the process, but is not necessarily equal to 0.5. In the urn model, a non-equal chance or bias is illustrated by assuming that the marbles are not just different by colour, but the red ones are heavier or bigger than the green ones, which make it more likely that a red marble is drawn than a green one. That likelihood is the bias and the metaphor is named the 'biased urn' model. The noncentral in noncentral hypergeometric distribution refers to a probability that is not equal to 0.5 (the 'without replacement' clause causes the hypergeometric distribution to apply). In a practical classifier's context, the bias to do better than 50/50 chance is a matter of how easy the task is combined with how good the classifier handles that task. Blindly separating red from green marbles is hard, being able to look at them makes it easier. Or potentially easier. A colour-blind classifier is back at square-one, a classifier that is only partially perceptive to colour does better but still not perfectly. Lighting conditions may change the bias. In short, the line between the inherent simplicity of a task and the power of a classifier tends to be fuzzy. The challenge in constructing a classifier lies in giving it the ability to shift the a-priory chance as far away from the 0.5 point as possible. That is often reflected in experiment designs, where a classifier is compared to a baseline approach to better quantify its power. The noncentral hypergeometric distribution can be used to estimate the expected number of positives given the counts of positives, negatives, sample size and odds ratio. Repeating that for every sample size (from no 'marbles' to all 'marbles') produces the data points for an expected ROC curve. The consequence of this, is that the performance of a practical classifier can be modeled as a stochastic 'classifier', using only three basic parameters: the starting positive/negative counts, and the odds ratio. The odds ratio (OR) can be directly derived from the AUC of the actual classifier: OR = AUC / (1-AUC). This last observation is the support for using the AUC as a singular
indicator of a classifier's performance: it represents the classifier's ability to move the bias away from 0.5 and closer to 1.0. The noncentral hypergeometric distribution comes in two flavours: sampling under competition is modeled by the Wallenius, while the Fisher is used for independent sampling (see Appendix 1). Application of our sweeping threshold may imply that the Wallenius should be used. However, classifiers generally are not prohibited from assigning identical scores to multiple ‘marbles’, therefore scores are not strictly competitive. As well, the Wallenius distribution is not symmetrical: the process of directly finding the positives gives a different curve than its reciprocal - finding the positives by taking away the negatives (finding the ‘needles’ by removing the ‘hay’). This asymmetry tends to not fit most classifiers (even if it is an interesting concept). The Fisher curve should therefore be first considered to model a classifier’s behaviour.
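The following sketch turns this into code: it uses the Fisher mean approximation from Appendix 1 (Equation 2) together with the directly calculated odds ratio OR = AUC / (1 − AUC) to generate an expected ROC curve from the three basic parameters. The example counts and the observed AUC are invented, and the OR recalibration discussed later is omitted.

```python
import math

# Expected number of positives for sample size n, from the Fisher noncentral
# hypergeometric mean approximation (Appendix 1, Eq. 2).
def fisher_expected_positives(n, m1, m2, omega):
    if abs(omega - 1.0) < 1e-12:          # unbiased special case
        return n * m1 / (m1 + m2)
    a = omega - 1.0
    b = n - m2 - (n + m1) * omega
    c = m1 * n * omega
    return -2.0 * c / (b - math.sqrt(b * b - 4.0 * a * c))

def expected_roc(m1, m2, auc_observed):
    omega = auc_observed / (1.0 - auc_observed)     # OR = AUC / (1 - AUC)
    points = []
    for n in range(m1 + m2 + 1):
        mu = fisher_expected_positives(n, m1, m2, omega)
        points.append(((n - mu) / m2, mu / m1))     # (1 - specificity, sensitivity)
    return points

# Toy example: 77 positives vs 538 negatives, observed AUC 0.90.
curve = expected_roc(77, 538, 0.90)
print(curve[:3])
```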
3. Experiment: Two Examples As two examples, two tasks from earlier research are represented here. In both cases, a classifier labeled cases as one of two classes, and provided a confidence score. Based on a sliding threshold on the score, the points for an ROC are calculated. As described above, Fisher and Wallenius estimates for every possible sample size are used to construct modeled ROC curves. These models only require three basic parameters: the starting positive/negative counts, and the observed AUC. The data in the first example was part of the 2010 i2b2 challenge on clinical NLP [3, 4]. The task was to determine for 18,550 'problem' concepts in patient narrative (477 documents), whether the problem was 'present' (n=13025), 'absent' (3609), 'possible' (883), 'conditional' (171), 'hypothetical' (717), or 'associated with someone else' (145). Text classifications were done with a machine-learning categorization system, trained on a separate set of 349 documents containing 11967 concepts. Figure 1 shows the observed ROC and the stochastically modeled ROC for a classifier between conditional and possible. It also shows curves for Wallenius estimates, reciprocal-Wallenius, Fisher estimates using the directly calculated odds ratio ('Fisher 1'), and using a re-calibrated odds ratio ('Fisher 2') optimized so that the observed and modeled AUC correspond. Optimization is done through simple Newton-
Figure 1: ROC of actual observations and three curves for modeled behaviour
Raphson iteration. The shape of the curve supports the use of Fisher modeling over Wallenius modeling. The placement of the two Fisher curves illustrates that recalibration of the OR is necessary. This pattern was consistent for all other datasets examined (plots not displayed). With six classes, there are 15 distinct pairwise one-vs-one classifications. To prevent clutter, Figure 2a displays only three of these curves selected such that each class is represented once. The curves derived from Fisher models map onto the curves for the actual observations fairly closely. The second example involves data that was part of a 2005 study on identifying acute fracture cases in collections of X-ray reports [5]. For three types of X-rays (wrist, hip, and spine), reports were collected, automatically classified using a text categorization system, and predictions on fracture / non-fracture through crossvalidation were compared to the gold standard. Figure 2b shows the ROC curves for hip, wrist and spine, and their models. The fits appear to be quite good for wrist and spine; for hip, the fit is a little poorer. Wrist reports counted 493 reports with 213 positives, hip reports 615 cases with 77 positives, and spine 638 cases with 47 positives.
Figure 2: ROC and models for i2b2 data (Figure 2a, left) and Radiology data (Figure 2b, right). For clarity, only the top-left corner of the ROCs are displayed.
4. Discussion In general, the fit between a classifier's actual behaviour and its model using the Fisher noncentral hypergeometric distribution was quite close. A look at the recalibrations of the ORs revealed that a corrected OR could be estimated from the directly calculated OR (= AUC / (1-AUC)) by raising it to the power of approximately 1.4 or √2. Resulting AUCs are within 0.25% of the observed AUCs for the approximation, except for the spine curve (0.47% difference). Future research will need to reveal the reason for the discrepancy, and explain the mathematical approximation. It must be said that the modeled curve does not always neatly overlay the observed values. A plausible requirement for model application may be that data collections
must be relatively homogeneous. In the i2b2 data, one-vs-one classifications followed a better predictable path than one-vs-rest classifications, quite likely because the 'rest' class is inherently composed of distinct subclasses (or, is composed of 'marbles' of varied weight rather than the same weight). It may be possible to include nonhomogeneity in the model at the expense of added complexity. Note that errors made by the actual classifier (which may be caused by non-homogeneity) do affect not only that point of the curve but displace the entire rest of the curve to its right affecting the closeness of neighbouring sample points as well. The noncentral hypergeometric distributions have multivariate extensions, the value of which for evaluating the performance of multi-class classifiers could be the subject of valuable future research [6].
5. Conclusion This article presents the interwoven relationship between classifier performance or diagnostic tool performance, ROC curves, the Area under the Curve, biased odds ratios, and the noncentral hypergeometric distribution to model a classifier. To my knowledge, this is the first study that discusses that relationship. It aims to support the case for using the AUC as a singular classifier performance metric by linking it to the odds ratio of making right/wrong predictions. The presented plots illustrate the fit between Fisher modeled curves and observed curves.
References
[1] Fawcett T. An introduction to ROC analysis, Pattern Recognition Letters 27 (2006) 861-874.
[2] Fog A. Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions. Communications in Statistics, Simulation and Computation (2008:37) 241-257.
[3] de Bruijn B, Cherry C, Martin J, Kiritchenko S, Zhu X. Machine learned solutions for three stages of clinical information extraction: the state of the art at the i2b2 2010 benchmark. J Am Med Inform Assoc., in press.
[4] i2b2: https://www.i2b2.org/NLP/Relations/
[5] de Bruijn B, Cranney A, O'Donnell S, Martin JD, Forster AJ. Identifying wrist fracture patients with high accuracy by automatic categorization of X-ray reports. J Am Med Inform Assoc. (2006:6) 696-698.
[6] Flach PA. ROC Analysis. In: Sammut C, Webb GI, eds. Encyclopedia of Machine Learning. Springer (2010) 869-875.
Appendix 1
The expected number of positives (μ) in a sample of size n taken from a population of known positives (m1) and negatives (m2), and given a bias (ω).
Wallenius distribution: μ is found through approximation by solving
μ/m1 + (1 − (n − μ)/m2)^ω = 1    (1)
Fisher distribution: μ is approximated by
μ ≈ −2·c / (b − √(b² − 4·a·c)),  where a = ω − 1; b = n − m2 − (n + m1)·ω; c = m1·n·ω    (2)
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-537
Service Delivery for e-Health Applications
Martin STAEMMLER a,1
a University of Applied Sciences, Stralsund, Germany
Abstract. E-Health applications have to take the business perspective into account. This is achieved by adding a fourth layer reflecting organizational and business processes to an existing three layer model for IT-system functionality and management. This approach is used for designing a state-wide e-Health service delivery allowing for distributed responsibilities: clinical organizations act on the fourth layer and have established mutual cooperation in this state-wide approach based on collectively outsourced IT-system services. As a result, no clinical organization can take a dominant role based on operating the IT-system infrastructure. The implementation relies on a central infrastructure with extended means to guarantee service delivery: (i) established redundancy within the system architecture, (ii) actively controlled network and application availability, (iii) automated routine performance tests fulfilling regulatory requirements and (iv) hub-to-spoke and end-to-end authentication. As a result, about half of the hospitals and some practices of the state have signed-up to the services and guarantee long-term sustainability by sharing the infrastructural costs. Collaboration takes place for more than 1000 patients per month based on second opinion, online consultation and proxy services for weekend and night shifts. Keywords: e-Health service delivery, business layer, infrastructure, 3LGM², GCM
1. Introduction E-health Service delivery between two organizations or between a larger organization and associated smaller ones has been successful for a variety of applications, e.g. teleradiology, teleneurology, telepathology. Typically, the implementation reflects the professional relationship between the organizations involved. However, larger regional or state-wide e-Health services have to take additional requirements into account: • Frequent changes in cooperation according to clinical needs, business opportunities and personal relationships. • Availability and continuity management as part of a professional IT-Service Management [1, 2], as well as scalability, training and maintenance [2]. • Compliance to directives and regulations [3] and • Sustainability, in particular for e-Health projects. The objective of this paper is to present an approach for state-wide e-Health service delivery based on embedding the concept of the Revised Three-layer Graph-based Meta Model (3LGM²) [4, 5] in a business perspective and presenting a reference implementation compliant to the above requirements.
1 Martin Staemmler, University of Applied Sciences, Stralsund, Medical Informatics, Zur Schwedenschanze 15, D-18435 Stralsund, Germany, E-mail: [email protected]
2. Materials and Methods The 3LGM² comprises three layers (domain layer, logical tool layer, physical tool layer) for modeling enterprise functions, associated applications and physical components. It has been applied to information management in hospitals [6] and telemedicine [7]. However, it is focused on IT-related aspects and less targeted to business perspectives, which have become the driving forces in e-Health based cooperation.

Table 1. Structure of e-Health Services
Layer | Contents | Example related to teleradiology
organization, business process | medical / clinical cooperation | weekend and night shift, external teleradiology services
domain | enterprise functions, entity types | second opinion, providing online consultation
logical tool | application components | DICOM and Web Services for image/report handling
physical tool | infrastructure components | network and IT systems
Mapping bilateral cooperation or cooperation of a larger organization with associated satellites to the structure proposed in Table 1 reveals that each organization has to deal with all four layers resulting in disadvantages: • Linking the organizational with IT related layers limits mutual cooperation. • In larger settings several cooperation clusters (bilateral, one to satellites) will exist in parallel and the lower three layers will be implemented multiple times. This will result in cost increase and technical incompatibility between clusters. • Organizations which are located in between two clusters or needing multiple cooperations due to their medical specialty have to establish connectivity to more than one cluster. This again causes costs and leads to re-implementation. In addition, a lesson learnt from several years of providing e-Health service delivery is, that cooperative relationships between organizations change frequently. Consequently, for the design of a state-wide e-Health service the responsibility has been split: The organizational and business process related layer stays with the organizations themselves, thus allowing for varying contractual, financial and organizational arrangements. The IT-related lower three layers are contracted to an independent service provider.
Figure 1. Teleradiology with central infrastructure
This decoupling paves the way for a centralized infrastructure exhibiting significant advantages (Figure 1): • The cost for the infrastructure is shared by all organizations. • Each organization (1 … k) is relieved from infrastructure services and benefits from only one communication channel to the central infrastructure.
• Mutual cooperation becomes possible without individual organizations imposing on partnerships.
• Change management with regard to cooperation stays with the organizations involved and is not passed through to the lower layers.
The reference implementation targets three teleradiology scenarios: (i) second opinion, (ii) emergency consultation and (iii) remotely supervised radiological examination. Because a central infrastructure is a single point of failure, its system architecture has been designed to be fully redundant (Figure 2, left). In case of hardware failure the task is taken over by the corresponding device.
Figure 2. Physical layer (left) and logical tool layer (right)
Applications on the logical tool layer (Figure 2, right) rely on virtual machines for high-availability. Connections between organizations and the infrastructure use VPN for privacy. The DICOM Webserver only holds pseudonymized data accessible via http. The tool Nagios [9] is used for active monitoring. Besides a standard “ping” on network level, Nagios has been enhanced with DICOM C-Echo (Figure 3).
Figure 3. Monitoring on application level (red bars reveal a limited availability of this DICOM node)
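The DICOM-level check could look roughly like the following Nagios-style plugin. The paper's own plugin is not published; this sketch relies on the pynetdicom library and invented host and AE-title parameters purely for illustration.

```python
#!/usr/bin/env python3
# Sketch of a Nagios-style availability check performing a DICOM C-ECHO.
import sys
from pynetdicom import AE

VERIFICATION_SOP_CLASS = "1.2.840.10008.1.1"

def check_dicom_node(host: str, port: int, called_aet: str) -> int:
    ae = AE(ae_title="NAGIOS")
    ae.add_requested_context(VERIFICATION_SOP_CLASS)
    assoc = ae.associate(host, port, ae_title=called_aet)
    if not assoc.is_established:
        print(f"CRITICAL - no DICOM association with {called_aet}@{host}:{port}")
        return 2                       # Nagios exit code CRITICAL
    status = assoc.send_c_echo()
    assoc.release()
    if status and status.Status == 0x0000:
        print(f"OK - C-ECHO answered by {called_aet}@{host}:{port}")
        return 0                       # Nagios exit code OK
    print(f"CRITICAL - C-ECHO failed for {called_aet}@{host}:{port}")
    return 2

if __name__ == "__main__":
    sys.exit(check_dicom_node("pacs.example.org", 104, "PACS1"))  # hypothetical node
```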
The methods described so far are initiated from the central infrastructure and do not provide an organization-to-organization performance test. Such a test has become compulsory due to a standard [8] and requires the following tasks (Table 2).

Table 2. Tests required by the DIN 6868-159
Task | daily | monthly
Functional test, max. two trials | x | x
Measurement of the transfer time of a reference data set, maximal 900 s | | x
Check for the completeness and correctness of the reference data set | | x
Documentation of the test results | x | x
To avoid cumbersome manual testing, the tool TR-DIN has been developed (using Java and pixelmed [10]). To measure the transfer time, a reference data set is sent to the receiving organization, which replies with a DICOM conformant acknowledgement, but with no pixel data included. The transfer time is easily determined by rating the transfer
time for the acknowledgement negligible. Transfer times recorded for one month using a 65MB data set show acceptable variation and are far below the required 900s limit. On site-to-infrastructure level authentication and authorization is achieved by VPN tunnels, AET (Application Entity Title) and port. For end-to-end authentication the PKI functionality of the national health telematics infrastructure is used. Since it does not support the transfer of large amounts of data, a hybrid approach has been implemented by the so-called TISP (Telematics Infrastructure Subscriber Proxy) (Figure 4).
Figure 4. End-to-end authentication using a hybrid method
The TISP receives DICOM objects, calculates a hash, manages the digital signature of this hash via the PKI and inserts the signature information in the DICOM object prior to transmission. At the receiving site the TISP verifies the signature by using the PKI.
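In outline, the sending-side TISP step could be sketched as below. The PKI calls are placeholders (the telematics infrastructure API is not part of this paper), and the sketch keeps a detached signature instead of inserting the signature information into the DICOM object as the real TISP does (cf. DICOM Supplement 41 [12]).

```python
import hashlib

# Sketch of the TISP principle: hash the DICOM object, sign the hash via the
# PKI, verify the signature at the receiving site. PKI calls are placeholders.
def sha256_of_file(path: str) -> bytes:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()

def sign_via_pki(object_hash: bytes) -> bytes:
    # Placeholder: delegate to the national telematics infrastructure.
    raise NotImplementedError

def verify_via_pki(object_hash: bytes, signature: bytes) -> bool:
    # Placeholder: delegate to the national telematics infrastructure.
    raise NotImplementedError

# Sending TISP:   signature = sign_via_pki(sha256_of_file("study.dcm"))
# Receiving TISP: verify_via_pki(sha256_of_file("study.dcm"), signature)
```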
3. Results e-Health service delivery has to take the business perspective into account. This has been accomplished by adding a fourth layer to an existing three-layer model for the management of IT systems. Splitting the responsibilities between the fourth layer and the IT-system layers (outsourced to an independent service provider) has resulted in a sustainable e-Health service with the infrastructure costs being shared by 15 hospitals and 2 practices. In a previous experience where all layers had been under the control of one large hospital, this had led to an enforcement of centralized collaboration. In contrast, this split approach has motivated significant mutual collaboration between all partners, with each partner benefiting from the flexible and highly available infrastructure. The exchange of about 1500 studies and 50000 images per month confirms this approach.
4. Discussion Authenticating DICOM objects has been done previously [11] and DICOM supports signatures by a supplement [12]. However, providing a generic approach using the TISP in combination with a nation-wide PKI is a step forward and can easily be extended to non-DICOM document types, e.g., reports and referral letters with an even stronger need for authentication when compared to DICOM images. Even though services like the TISP and TR-DIN have to operate at the organizations' sites to allow for end-to-end authentication and quality control, they can
be easily integrated into an IT environment. As such, they contribute significantly to the stated availability and business continuity requirements. The concept of using a DICOM Forward service as central dispatcher for teleradiology confines administration and maintenance to one place. From the viewpoint of an organization the DICOM Forward concept avoids the installation of specific software or hardware; a direct link to the existing PACS / RIS is sufficient. This allows the users to work with their known and accepted applications. DICOM email [13] could be used comparably, but the partition of images into emails appears to be less suited for an immediate online consultation or remotely supervised examination. The central infrastructure is a logical consequence of the shared responsibility developed for e-Health service delivery. Adding a business perspective to the 3LGM² reflects the relevance of organizational and economical issues in e-Health services. In a more formal way, the Generic Component Model (GCM) [14] addresses business concepts at its top layer. With the GCM being more focused on architectural perspectives, it provides a more detailed approach using three dimensions (RM-ODP compliant views for system design on a second axis and representing different domains on a third axis). As such, the proposed structure correlates to mainly a column in the GCM. Acknowledgments: The reference implementation would not have been possible without the grant of the Ministry for Social Affairs and Health in the federal state Mecklenburg-Vorpommern, Germany. Furthermore, the author wants to thank Susann Wrobel and Christian Schmidt for their work on TR-DIN and Henry Ritter for his work on the TISP.
References
[1] ITIL (IT Infrastructure Library), www.itil.org, last visited 3.2.2011.
[2] ISO, The ISO 27000 Directory, www.27000.org, last visited 3.2.2011.
[3] European Union, Medical Device Directive 2007/47/EC, OJ, 21.9.2007:L247 pp. 21-55.
[4] Wendt, T., Brigl, B., Winter, A. Modeling Hospital Information Systems (Part 1): The Revised Three-layer Graph-based Meta Model 3LGM², Methods Inf Med 42 (2003), 544-51.
[5] Wendt, T., Häber, A., Brigl, B., Winter, A. Modeling Hospital Information Systems (Part 2): Using the 3LGM² Tool for Modeling Patient Record Management, Methods Inf Med 43 (2004) 256-67.
[6] Winter, A., Brigl, B., Funkat, G., Häber, A., Heller, O., Wendt, T. 3LGM²-Modelling to Support Management of Health Information Systems, Connecting Medical Informatics and Bio-Informatics, IOS Press, Amsterdam, 2005.
[7] Staemmler, M. Towards Sustainable e-Health Networks: Does Modeling Support Efficient Management and Operation?, Building Sustainable Health Systems, IOS Press, Amsterdam 2008.
[8] DIN 6868-159, Image quality assurance in diagnostic X-ray departments – Part 159: Acceptance and consistency testing in teleradiology according to the RöV, Beuth, Berlin, 2009.
[9] www.nagios.org, last visited 3.2.2011.
[10] Clunie, D. www.pixelmed.com/index.html#PixelMedJava DICOMToolkit, last visited 3.2.2011.
[11] Eichelberg, M., Riesmeier, J., Loxen, N., Jensch, P. Introduction of Security Features to DICOM: Experiences with Digital Signatures, Proceedings of EuroPACS (2000), 286-291.
[12] DICOM Suppl. 41: Digital Signatures, ftp://medical.nema.org/medical/dicom/final/sup41_ft.pdf, 2001.
[13] DICOM Suppl. 54: MIME Type, ftp://medical.nema.org/medical/dicom/final/sup54_ft.pdf, 2002.
[14] Blobel, B. Introduction into Advanced eHealth - The Personal Health Challenge, eHealth: Combining Health Telematics, Telemedicine, Biomedical Engineering and Bioinformatics to the Edge, IOS Press, Amsterdam, 2008.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-542
A KPI Framework for Process-based Benchmarking of Hospital Information Systems
Franziska JAHN a,1, Alfred WINTER a
a University of Leipzig, Institute for Medical Informatics, Statistics and Epidemiology, Leipzig, Germany
Abstract. Benchmarking is a major topic for monitoring, directing and elucidating the performance of hospital information systems (HIS). Current approaches neglect the outcome of the processes that are supported by the HIS and their contribution to the hospital’s strategic goals. We suggest to benchmark HIS based on clinical documentation processes and their outcome. A framework consisting of a general process model and outcome criteria for clinical documentation processes is introduced. Keywords. Benchmarking, hospital information systems, quality of information systems, process assessment
1. Introduction Benchmarking of information systems has become an important method for strategic information management in hospitals. Camp defined it as a "continuous process of measuring products, services and practices against the toughest competitors or those companies recognized as industry leaders" in order to find best practices [1]. Later the Joint Commission substituted "practices" as benchmarking subject by "processes", leading to the benchmarking aim of "improving products, services or processes" [2]. The board of the hospital and also the HIS users often lack transparent information about the performance of the HIS. Especially the board regards the information system as a "black box" and can hardly estimate its contribution to a hospital's processes and strategic goals [3]. HIS benchmarking thus can serve as a means for success control of HIS. However, appropriate key performance indicators (KPI) are needed for measuring and comparing different HIS. These KPI should be accepted by the board, the HIS users and the information management department. From the board's and the HIS users' perspective, the quality of an information system is determined by supporting processes efficiently and by the information that is created, updated and used. We assume that, if the HIS outcome in terms of information handled in the HIS is linked to HIS characteristics, benchmarking results can both help information management improve the HIS and help HIS stakeholders understand HIS performance. In this paper we want to:
1. analyze existing benchmarking initiatives, and
2. develop a KPI framework for benchmarking HIS performance based on clinical documentation processes and their outcome for the HIS stakeholders.
2. Methods and Materials
2.1. Current Benchmarking Initiatives
Table 1 provides a limited selection of HIS benchmarking initiatives, which vary considerably in their benchmarking subjects, KPI types and data collection methods.
Table 1. Information system benchmarking: benchmarking subjects, KPI types and data collection (CIO = chief information officer, CMO = chief medical officer, CNO = chief nursing officer)

| Ref. | Benchmarking subject | KPI types (examples) | Method of data collection |
| [3] | information management processes | maturity models according to CobIT® (0 = process not existent to 5 = process is managed) | CIO survey |
| [3] | IT cost, IT performance and IT support | IT cost per employee, IT users per IT staff, clients per bed | CIO survey |
| [3] | clinical processes and their IT support (e.g. order entry) | self-defined maturity models | routine data from application systems |
| [4] | information handled with HIS (discharge letters, appointments) | completeness and timeliness of discharge letters, number of electronically scheduled appointments | routine data from application systems |
| [5] | EMR system | system quality, information quality, service quality, use and user satisfaction together form a "composite index" | user survey (CIO, CMO, CNO, doctors, nurses) |
| [6] | application systems making up the electronic medical record | maturity model (stage 0 to 7) based on evolutionary development of information systems in U.S. hospitals | CIO survey |
There is no common agreement on the subject of HIS benchmarking: should application systems, clinical processes, IT cost or information management processes be benchmarked (see 2nd column of Table 1)? Regarding the KPIs (3rd column of Table 1), the following advantages and disadvantages arise with respect to stakeholder expectations and practicability for information management.
• Using a maturity model or a composite index ([3], [5], [6]) finally results in a number on an ordinal scale, which is useful for comparing a large number of HIS. However, if, e.g., the board cannot relate the scales to the hospital's processes or strategic goals, the KPI risk not being accepted.
• Cost-related KPI like "IT cost per employee" [3] do not describe how well the HIS works and how it contributes to the hospital's business goals.
• Measuring the outcome of clinical documentation processes [4] seems plausible for stakeholders. However, the outcome needs to be related to the HIS.
Regarding data collection, the least time-consuming method is to conduct a survey among CIOs (4th column of Table 1). However, there is a risk of gaining biased results from an information management perspective. User surveys help to get a more comprehensive view of HIS performance. Collecting and analyzing clinical routine data is an objective and cost-saving method to gain KPI.
2.2. The General Documentation Process Model and KPI Based on Process Outcome
The benchmarking approaches introduced in 2.1 are useful to rank and categorize HIS as a whole. However, to focus on the "asset" information itself and on the problems with creating, updating or using information with the help of the HIS, we deal with documentation processes as the benchmarking subject.
Figure 1. General model of a documentation process. Hexagons denote events, rectangles with white arrow denote activities, rectangles with document icon denote documents, “X” stands for logical XOR operator, “O” for logical OR operator. (Tool: ARIS express, www.ariscommunity.com)
First, we created a general model describing documentation processes and the lifecycle of clinical documents. We used the results of projects at the University Medical Center of Leipzig in which discharge letter writing, order entry and result reporting were analyzed. For modeling we used event-driven process chains (EPC) [7] and their concepts "event", "function" and "information object" as well as logical connectors (see Figure 1; model elements are referred to in the text using italics). For example, the start event information demand arises may represent "patient is discharged from hospital" or "lab examination ordered". The process functions, i.e. process steps such as collecting information, composing, correcting, signing, transmitting and archiving the document, can be supported by IT to different degrees. The process ends when the user of the final document has received it. Second, we searched for KPI categories that are useful from different stakeholder perspectives for assessing the outcome of documentation processes. They relate to the clinical document after it has been signed. Another premise was to identify objectively measurable criteria. We decided to adapt six outcome-oriented categories from the HIS-Monitor [8] to our framework, but to measure them not only by a user survey (us) but also by means of routine data (rd) from application systems. In a Delphi survey [9], the top 15 KPI were identified which CIOs and HIS researchers consider important for benchmarking of HIS. These KPI were also considered for our framework as far as they can be measured for processes. We added generalized criteria defined by [4] to our list of outcome criteria:
• O1: timeliness of the clinical document [4] (rd/us)
• O2: availability of the finished clinical document (in hospital) [4], [8], [9] (rd/us)
• O3: time needed for information processing [8] (rd/us)
• O4: user satisfaction with the documentation process [9] (us)
• O5: completeness/correctness of the finished clinical document [8] (rd/us)
• O6: compliance of the finished document with legal regulations [8] (rd)
• O7: usability of the finished document [8] (rd/us)
• O8: readability of the finished document [8] (us)
These outcome criteria now need to be explained by subcriteria for the process flow (P) and the underlying structures (S). For example, O1 and O3 immediately depend on the duration of process steps such as collecting information or completing/correcting the document, i.e. the duration of these steps can be chosen as a subcriterion for O1 and O3. Similarly, O4 can be further divided into "user satisfaction with single process steps". In a next step, structural criteria explain the process outcome. In terms of the HIS, the underlying (types of) application systems supporting the process should be taken as a criterion. However, it is also necessary to consider not only technological facts, but also organizational and human facts [10], e.g. the educational background of the personnel involved in the documentation process.
3. Results
Table 2. Examples of key performance indicators for discharge letter writing; "rd" marks KPI to be gathered from routine data, "us" marks KPI to be gathered by a user survey

| Process outcome | Process flow | Underlying structures |
| O1: timeliness of discharge letters (rd) | time for single process steps (doctors and clerks separately) (rd) | S1: type of application system used (rd); organization: department |
| O4: user satisfaction with process of discharge letter writing (us) | user satisfaction with IT support of single process steps (us) | S1 (us); users' professional grade (us); organization: central or decentral clerks (us) |
| O6: legal relevance of electronically stored discharge letters in terms of use of electronic signatures (rd) | - | application system used for signing (rd); organization: department |
The KPI framework for benchmarking HIS consists of our general model for documentation processes and of outcome criteria related to the process flow and underlying structures (see 2.2). Based on existing benchmarking methods (2.1), we recommend a mix of data collection methods. To use the framework, follow the steps below:
• Decide on the process to be improved and the benchmarking partner(s).
• Map the general documentation process model to the process to be observed.
• Outcome: choose relevant KPIs from O1 to O8. For time-related criteria use the events of a documentation process as measuring points.
• Process flow: refine outcome criteria by looking at the single process steps.
• Consider the underlying structures which may affect the outcome (information system, human factors, organizational factors).
• Choose data collection methods (routine data vs. standardized user survey).
In a joint HIS benchmarking project between Leipzig Medical Center and Hannover Medical School we used the framework to define KPI for the process of discharge letter writing. We concentrate on the criteria O1-O6. For example, we defined the timeliness of discharge letters (O1) as the duration between the events information demand arises and finished document needs to be transmitted. Related process criteria are the times for dictating and correcting the document. As structural criteria, the type of application system (e.g. normal text processing system, digital dictation system) and the department are to be determined (see Table 2 for some examples).
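As an illustration, a time-related outcome KPI such as O1 can be derived from routine data once the delimiting process events are available as timestamps. The sketch below is hypothetical: the file name, column names and event labels are placeholders, not the actual data model of the benchmarking project.

```python
# Illustrative sketch: timeliness KPI (O1) for discharge letters from routine data.
# Assumed columns: case_id, event, timestamp, department, application_system.
import pandas as pd

events = pd.read_csv("discharge_letter_events.csv", parse_dates=["timestamp"])

# The two events delimiting O1 (names are assumptions for this example).
start = events[events["event"] == "information_demand_arises"]
end = events[events["event"] == "document_ready_for_transmission"]

letters = start.merge(end, on="case_id", suffixes=("_start", "_end"))
letters["timeliness_days"] = (
    letters["timestamp_end"] - letters["timestamp_start"]
).dt.total_seconds() / 86400.0

# Relate the outcome KPI to structural criteria (S1: application system; department).
kpi = (
    letters.groupby(["application_system_start", "department_start"])["timeliness_days"]
    .agg(["median", "mean", "count"])
)
print(kpi)
```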
O1, O5 and O6 and their subcriteria are determined by using routine data from the clinical information systems. O2, O3 and O4 are the subject of a user survey among physicians. The user survey was implemented in LimeSurvey® and is currently running. In a pre-study, interesting results could already be obtained. For example, physicians often spend half of their working day writing discharge letters, which emphasizes the need for finding best practices for the IT support of this documentation process.
4. Discussion
KPI and their critical reflection have to be integrated into a continuous strategic information management process, e.g. by a strategic information management board. After the first benchmarked processes have been improved sufficiently, more processes have to be included. Meanwhile, the KPIs of well-managed processes can be reduced. From a methodological perspective, new or extended process modeling languages are necessary which support the mapping of processes with quality measures [11].
Acknowledgements: We thank D. May and U. Stecher and the participants of the IT benchmarking workshop of the GMDS working group mwmkis in November 2010 for their support.
References
[1] Camp, R.C. Benchmarking. Carl Hanser Verlag, München, Wien, 1994.
[2] Joint Commission: Benchmarking in Health Care – Finding and Implementing Best Practices. Joint Commission on Accreditation of Health Care Organizations, Oakbrook Terrace, Illinois, 2000.
[3] Simon, A. Die betriebswirtschaftliche Bewertung der IT-Performance im Krankenhaus am Beispiel eines Benchmarking-Projekts. In: H. Schlegel (ed.): Steuerung der IT im Klinik-Management: Methoden und Verfahren. Vieweg + Teubner, Wiesbaden, 2010, 73-90 (in German).
[4] Dugas, M., Eckholt, M., Bunzemeier, H. Benchmarking of hospital information systems: Monitoring of discharge letters and scheduling can reveal heterogeneities and time trends. BMC Medical Informatics and Decision Making 8 (2008), no. 15.
[5] Otieno, G.O., Toyama, H., Asonuma, M., Koide, D., Naitoh, K. Measuring effectiveness of electronic medical records systems: Towards building a composite index for benchmarking hospitals. International Journal of Medical Informatics 77 (2008), 657-669.
[6] HIMSS Analytics. EMR Adoption Model. http://www.himssanalytics.org/hc_providers/emr_adoption.asp. Last accessed on 2011-02-01.
[7] Keller, G., Nüttgens, M., Scheer, A.W. Semantische Prozeßmodellierung auf der Grundlage „Ereignisgesteuerter Prozessketten (EPK)". In: A.W. Scheer (ed.): Veröffentlichungen des Instituts für Wirtschaftsinformatik (IWi), Universität des Saarlandes 89 (1992) (in German).
[8] Ammenwerth, E., Ehlers, F., Hirsch, B., Gratl, G. HIS-Monitor: An approach to assess the quality of information processing in hospitals. International Journal of Medical Informatics 76 (2007), 216-225.
[9] Hübner-Bloder, G., Ammenwerth, E. Key Performance Indicators to Benchmark Hospital Information Systems – A Delphi Study. Methods of Information in Medicine 48(6) (2009), 508-518.
[10] Yusof, M.M., Kuljis, J., Papazafeiropoulou, A., Stergioulas, L.K. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). International Journal of Medical Informatics 77 (2008), 386-398.
[11] Korherr, B., List, B. Extending the EPC and the BPMN with Business Process Goals and Performance Measures. ICEIS 3 (2007), 287-294.
Natural Language Processing, Data Mining
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-549
Medical Knowledge Evolution – Query Constraining Aspects
Ann-Marie EKLUND
Centre for Language Technology, Department of Swedish Language, University of Gothenburg, Box 200, 405 30 Gothenburg, Sweden
E-mail: [email protected]
Abstract. In this paper we present a first analysis towards a better understanding of the query constraining aspects of knowledge, as expressed in the most used public medical bibliographic database, MEDLINE. Our results indicate, perhaps not surprisingly, that new terms occur, but also that traditional terms are replaced by more specific ones or even go out of use as they become common knowledge. Hence, as knowledge evolves over time, search methods may benefit from becoming more sensitive to how knowledge is expressed, to enable finding new, as well as older, relevant database contents. Keywords. Information Retrieval, NLP, Question Answering, Medical Informatics
1. Introduction
As presented by, for instance, Prier et al [1] and Bender et al [2], social media like Twitter and Facebook have changed the way people communicate health- and disease-related matters, by providing new ways of sharing experiences and of seeking information and advice from others, both professionals and the general public. The appearance of social media has also brought renewed interest to questions regarding knowledge and language, for instance as expressed by Paul et al [3], "you are what you tweet". In other words, the way you express yourself reflects your knowledge and interests. Thereby, if your knowledge or interests change, your tweeting, or searching, changes too. One such change could be the appearance of terms expressing new or more specialised interests, also reflected in query and communication logs. Studying interaction logs can provide increased understanding of people's knowledge and interests, but also of how these change over time in relation to, for instance, changes in society. One example is how internet query logs have been used for syndromic surveillance, tracking flu-related searches [4,5]. Hence, our knowledge will direct the way we search and share information. Other restricting aspects in the context of health and medical data are accessibility (patient record databases) and exponential data increase (medical bibliographic repositories), which will have an impact on querying behaviour. To summarise, the way health-related information is shared and retrieved is heavily influenced by aspects like knowledge of the relevant topics and how data is organised.
It is therefore of importance in query optimisation and the development of future internet-based health support. In this work we focus on one of these aspects, i.e. how knowledge is expressed in a medical bibliographic database (MEDLINE) and how it evolves over time. For instance, we show how more specific terms (hyponyms) come into use over time, while at the same time concepts become common knowledge and are no longer explicitly expressed. This complements studies of, e.g., the number of used terms, used terminology and search persistence [6,7,8,9].
2. Materials and Methods
We used a corpus of 5851 MEDLINE records (1993-2009) which contain the term adiponectin in the title, abstract or keywords, herein called an anchor term due to its role of defining the corpus. We chose the term adiponectin because it is unambiguous and without synonyms, and because its relatively recent appearance in the life sciences keeps the corresponding corpus manageable for manual analysis. From each record we used the title, abstract, year of publication and keywords. The keywords consist of Medical Subject Headings (MeSH, www.nlm.nih.gov/mesh), which is NLM's controlled vocabulary thesaurus organised in a hierarchical structure. The implementation was done in Python using the Natural Language Toolkit (NLTK, nltk.org) for tokenization and lemmatization and Biopython (biopython.org) for data retrieval and management. The analysis of the data was done using Microsoft Excel in combination with R (r-project.org) for visualisation. The program and result files can be obtained from the author on request. This study is an initial analysis of the data, performed by manual inspection of a few concepts known to be discussed in the context of adiponectin, but also ones that are new in this context. It focuses on when the terms first occurred and whether their use increases or declines over time. MeSH is designed to reflect knowledge and use of terms in the field of biomedicine, and a term may have been used in titles and abstracts for some time before it is available in the MeSH ontology for use as an indexing term. Thereby, a trend analysis based only on keywords may not reflect the actual use of terms, or expressed knowledge. We have not taken into account the year of introduction of a keyword into the MeSH ontology, which may be slightly misleading when comparing the use of terms as keywords to their use in abstracts and titles.
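The paper does not list the retrieval code itself; the following is a minimal sketch of how such a corpus could be assembled with the named tools (Biopython's Entrez/Medline modules and NLTK). The e-mail address, record limit and field handling are illustrative assumptions, not details from the study.

```python
# Sketch of corpus construction: retrieve MEDLINE records containing "adiponectin"
# and store title, abstract, year and MeSH keywords plus lemmatized abstract tokens.
# Requires the NLTK "punkt" and "wordnet" data packages to be installed.
from Bio import Entrez, Medline
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

Entrez.email = "[email protected]"  # placeholder, required by NCBI E-utilities

search = Entrez.read(Entrez.esearch(db="pubmed", term="adiponectin", retmax=6000))
handle = Entrez.efetch(db="pubmed", id=search["IdList"],
                       rettype="medline", retmode="text")
records = list(Medline.parse(handle))

lemmatizer = WordNetLemmatizer()
corpus = []
for rec in records:
    corpus.append({
        "year": rec.get("DP", "")[:4],   # date of publication, year part
        "title": rec.get("TI", ""),
        "abstract": rec.get("AB", ""),
        "mesh": rec.get("MH", []),       # MeSH keywords
        "tokens": [lemmatizer.lemmatize(t.lower())
                   for t in word_tokenize(rec.get("AB", ""))],
    })
```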
3. Results
In the adiponectin context, around 4500 different MeSH terms, or keywords, have been used since the first adiponectin paper in 1993, and the abstracts contain around 20,000 different words (stopwords not included); only a small part of these terms have been examined here. The emphasis in this section is on findings related to uses of the corpus anchor term (adiponectin), hyponyms, and the introduction of new terms over time.
3.1. Use of the Anchor Term
One interesting aspect of knowledge and its expression is if and when it becomes common, and thereby more seldom explicitly stated in communication. The first MEDLINE record containing the term adiponectin is from 1993, but before the year 2000 not many papers in MEDLINE mention adiponectin (Figure 1, left). The number of papers containing adiponectin in title, abstract or keywords has increased every year since 1999, but more and more of the papers do not have Adiponectin as a keyword (Figure 1, right). Hence, it seems that the use of the anchor term as a keyword has decreased over time.
Figure 1. Number of MEDLINE records containing the term adiponectin in abstract, title or keywords (left), and the percentage of papers in the adiponectin corpus having the term in abstract, title and keywords respectively (right).
3.2. Use of Terms and Their Hyponyms
Since the MeSH keywords are hierarchically organised, it is possible to study if, and how, the use of more general (hypernym) and more specific (hyponym) terms changes over time. The percentage of papers having Obesity as a keyword decreased from around the year 2000. A corresponding decrease can be found in titles and abstracts, where we also see a percentage decrease in the use of the word obese (which is not by itself a MeSH term). The keyword Obesity, Abdominal is a hyponym of Obesity and can be found in the papers from the last two years. In the abstracts we see frequent use of the word abdominal since 2003. The keywords Adipose Tissue and Adipocytes were used in the first paper from 1993. They are both still in use as keywords, but there is a percentage decrease every year. From 2007 Adipocytes, White and Adipocytes, Brown are being used as keywords. They are both hyponyms of the term Adipocytes. Similarly for Adipose Tissue, there are the hyponyms Adipose Tissue, Brown, first seen in 2002, and Adipose Tissue, White, which first occurred in 2006. To conclude, in these examples we have seen indications of a shift over time in the use of traditional adiponectin-related terms like adipocytes, obesity and adipose tissue towards the use of more specific terms (hyponyms).
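A per-year comparison of this kind can be computed in a few lines of Python. The sketch below assumes records shaped like those produced in the retrieval sketch above; it is not the author's original analysis code, which used Excel and R.

```python
# Share of papers per year whose MeSH keywords include a given heading.
# `corpus` is assumed to be a list of records with "year" and "mesh" fields.
from collections import defaultdict

def keyword_share_by_year(corpus, keyword):
    papers, hits = defaultdict(int), defaultdict(int)
    for rec in corpus:
        year = rec["year"]
        if not year:
            continue
        papers[year] += 1
        # MeSH entries may carry qualifiers ("Obesity/epidemiology") and major-topic
        # asterisks, so compare only the heading part.
        if any(mh.split("/")[0].strip("*") == keyword for mh in rec["mesh"]):
            hits[year] += 1
    return {y: hits[y] / papers[y] for y in sorted(papers)}

for term in ["Obesity", "Obesity, Abdominal", "Adipocytes", "Adipocytes, White"]:
    print(term, keyword_share_by_year(corpus, term))
```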
3.3. Use of New Terms
If we assume that the new knowledge and interests of a researcher are reflected in the terms and keywords used in a paper, it is interesting to study whether new words appear in the adiponectin context. One example is the increased use of words like older, middle and aged that we see in titles since their first occurrence in 2004. In abstracts, older first appeared in 2003, and middle and aged in 2002. The keywords Aged and Middle Aged occurred for the first time in 1999, and since then both of them have been in frequent use. The keyword Young Adult is much used in 2009. Another example is the plant-related keywords. The keyword Plant Extracts has increased slightly since it was first used in 2005, and the keyword Seeds can also be found in a few papers every year since 2007 (Seeds is a descendant of Plant Structures or of Food and Beverages in the MeSH hierarchy). In the last two years the keywords Plant, Plant Stems, Plant Preparations and Plants, Medicinal have appeared. In the last few years the words plant and seed have occurred mainly in abstracts, but also in a few titles. Hence, with our analysis it is also possible to trace the occurrence of new terms, related to more specialised study groups and alternative forms of treatment.
4. Discussion
4.1. Use of Terms and Hyponyms
In the examples in Results we have seen indications of keywords becoming more specific; the annotations seem to have become more detailed, for example in the case of Adipocytes, which decreases while its hyponyms Adipocytes, White and Adipocytes, Brown have started to be used as keywords. The use of more specific terms could indicate more detailed knowledge of a subject, described in the text by new terms not used before. This may have led to the use of more specific keywords to reflect that. Another reason for the decrease in the use of, for example, terms like Obesity could be that obesity is already a given premise in this context and does not need to be stated explicitly anymore: terms become common knowledge, cf. the discussion in Results on the decreased use of the anchor term adiponectin.
4.2. Use of New Terms
By studying the occurrence of new terms not used before in the adiponectin context, we find that terms related to completely new concepts appear. One example is the plant-related terms, which correspond to the introduction of a new aspect into the research field. We can also see an increased age aspect, with terms like Aged and Young Adult being more and more common. New aspects like these often originate in the analysis of the results of earlier studies, where new connections can be seen in the data and lead to new angles to study. When new terms appear, like the plant- or age-related terms in the adiponectin context, it could reflect new knowledge and new interests within the field. The increased use of plant-related terms seen in the last few years could indicate an increasing interest in alternative treatments.
5. Conclusions
In the examples above, we have presented indications of a shift over time in the use of terms towards more specific terms (hyponyms), which could indicate more detailed knowledge of a subject. There was also a decrease in the use of some keywords which are closely connected to the anchor term adiponectin. This decrease could indicate that the concepts described by these terms are already given in this context and that the concepts have become common knowledge. We have also seen examples of the appearance of new terms related to concepts not previously occurring in this context. This could be an indication of new knowledge being added to the existing one. We have tried to exemplify how the use of terms in bibliographic records changes over time, and how this may be related to the evolution of new knowledge. As a consequence, as knowledge evolves over time, queries and search methods may benefit from considering these changes, to make query terms match the terms and keywords in the papers. An approach for future investigation could be to make search algorithms "history aware", i.e. for a given search term, they could use its hypernyms to find older papers and its hyponyms to find more recent ones. This could be based on trend analysis of the occurrence of concepts/terms. A trend analysis could also note if given search terms decrease in use because they become common knowledge, and the algorithms could take this into account by giving less weight to these terms when identifying relevant papers. We have analysed only a limited bibliographic corpus, and future work should address other corpora and whether similar results can be found in other health domains [5].
References
[1] Prier K, Smith M, Giraud-Carrier C, Hanson C. Identifying Health-Related Topics on Twitter: An Exploration of Tobacco-Related Tweets as a Test Topic, International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction (SBP 2011), 2011.
[2] Bender JL, Jimenez-Marroquin MC, Jadad AR. Seeking support on Facebook: a content analysis of breast cancer groups, J Med Internet Res 13(1) (2011).
[3] Paul MJ, Dredze M. You are what you tweet: Analyzing Twitter for public health, Proceedings of the 5th International AAAI Conference on Weblogs and Social Media (ICWSM 2011), 2011.
[4] Eysenbach G. Infodemiology: tracking flu-related searches on the web for syndromic surveillance, AMIA Annu Symp Proc, 2006, 244–248.
[5] Hulth A, Rydevik G, Linde A. Web queries as a source for syndromic surveillance, PLoS One 4(2) (2009).
[6] Herskovic JR, Tanaka LY, Hersh W, Bernstam EV. A day in the life of PubMed: analysis of a typical day's query log, J Am Med Inform Assoc 14(2) (2007), 212–220.
[7] Hoogendam A, Stalenhoef AFH, de Vries Robbé PF, Overbeke AJPM. Analysis of queries sent to PubMed at the point of care: observation of search behaviour in a medical teaching hospital, BMC Med Inform Decis Mak 8 (2008).
[8] Plovnick RM, Zeng QT. Reformulation of consumer health queries with professional terminology: a pilot study, J Med Internet Res 6(3) (2004).
[9] Dogan RI, Murray GC, Neveol A, Lu Z. Understanding PubMed user search behavior through log analysis, Database (2009).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-554
Optimal Asymmetrical SVM Using Pattern Search. A Health Care Application
Gilles COHEN, Rodolphe MEYER
Direction of Medico-Economic Analysis, University Hospital of Geneva, 1211 Geneva, Switzerland
Corresponding author: Gilles Cohen, rue Gabrielle-Perret-Gentil 4, CH-1211 Geneva 14, Switzerland; E-mail: [email protected]
Abstract: This paper considers the model selection problem for Support Vector Machines. A well-known derivative-free Pattern Search method, which aims to tune hyperparameter values using an empirical error estimate as a steering criterion, is proposed. This approach is experimentally evaluated on a health care problem which involves discriminating nosocomially infected patients from non-infected patients. The Hooke and Jeeves Pattern Search (HJPS) method is shown to improve the results achieved by Grid Search (GS) in terms of solution quality and computational efficiency. Unlike most other parameter tuning techniques, our approach does not require supplementary effort such as the computation of derivatives, making it well suited for practical purposes. The method produces encouraging results: it exhibits good performance and convergence properties. Keywords: Machine Learning, Optimization, Hooke and Jeeves Pattern Search, Nosocomial Infection.
1. Introduction
A support vector machine (SVM) is a powerful classification method. However, to obtain good generalization performance, it is crucial to choose an appropriate set of model parameters. The choice of SVM model parameters can have a profound effect on the resulting model's generalization performance. Most approaches use trial and error procedures to tune SVM parameters while trying to minimize the training and test errors. Such an approach may not really obtain the best performance while consuming a huge amount of time. Another common but more systematic and reliable approach is to decide on parameter ranges and then do an exhaustive grid search over the parameter space to find the best setting. Unfortunately, even moderately high resolution searches can result in a large number of evaluations and unacceptably long run times. Recently other approaches to parameter tuning have been proposed [1-3]. These methods use a gradient descent search to optimize a validation error, a leave-one-out (LOO) error or an upper bound on the generalization error. However, gradient descent oriented methods may require restrictive assumptions regarding, e.g., continuity or differentiability. Typically the criterion, such as the LOO error, is not differentiable, so approaches based on gradient descent are not generally applicable when using cross-validation. Furthermore, they are likely to be trapped in a local minimum. For such non-
differentiable criteria, other approaches based on Evolutionary Algorithms have been investigated [4-6]. In the present work we propose an HJPS methodology to choose Asymmetrical SVM (ASVM) parameters and demonstrate its effectiveness in a classification task which involves detecting nosocomially infected patients from non-infected patients. The main advantage of the HJPS-based strategy is its suitability for problems for which it is impossible or difficult to obtain information about the derivatives. The remainder of this paper is organized as follows. The HJPS method is briefly introduced, followed by a short review of SVM and SVM model selection. Our approach and the healthcare problem used as a testbed for assessing our method are then described. Experimental results are presented to show the general applicability of the method. Finally, we close with a general conclusion and a preview of future work.
2. Materials and Methods
2.1. Hooke and Jeeves Pattern Search Method
The direct search method designed by Hooke and Jeeves [7] is an iterative method which makes use of two types of moves in progressing toward a function's minimum, namely exploratory and pattern moves. The exploratory move is designed to examine the local behavior of the objective function by searching the local proximity in the directions parallel to the coordinate axes for an improved objective function value. The rudimentary information gained by this step is then provided to the pattern move to indicate a possible direction for a successful move. The algorithm operates as follows:
1. Start with an arbitrary base point b1 and a step size ΔΘ = {Δθ1, …, Δθp}. Define a termination parameter ε, a step reduction factor δ and an acceleration factor α.
2. Compare f(b1) and f(b1 + Δθ1). If the objective function value is reduced, the new point is kept and called the temporary candidate t11, where the first subscript indicates that the first "pattern" is being developed while the second subscript shows that the first variable has been modified. If not, the coordinate θ1 is reduced by Δθ1, and the new point is checked in the same way as before, yielding t11 in case of success. If the value of f(b1) is improved neither by augmenting nor by reducing the θ1 coordinate, b1 is left unchanged. This procedure is applied to each variable in turn, finally arriving at a new base point t1p = b2. The exploratory move is then completed.
3. Base point b1 and the one obtained by the exploratory move define the "pattern" of the search direction. The pattern move takes a single step from the current base point in the direction specified by the pattern. The temporary candidate is given by t20 = b2 + α(b2 - b1), where the second subscript 0 indicates that the variables have not yet been changed. A local exploration about t20 is now carried out as in step 2 to establish the next temporary candidates t21, …, t2p. If the new base point b3 = t2p improves the objective function value, as before, a new temporary candidate t30 = b3 + α(b3 - b2) is established. To generalize, if bn yields a better objective function value than bn-1, tn0 is given by tn0 = bn + α(bn - bn-1).
4. Steps 2 and 3 are repeated until tmp is not better than the previous base point bm; in this case, let bm+1 = bm and reduce ΔΘ = ΔΘ/δ. Treating bm+1 as b1, the previous steps
are repeated again until the step size is too small: max_{i=1..p} ||Δθi|| < ε, assuming that the optimum has been reached.
2.2. SVM Model Selection via Hooke and Jeeves Pattern Search
2.2.1. Support Vector Machine
Support vector machines (SVMs) [8-9] are state-of-the-art learning machines based on statistical learning theory. The basic idea is to map each data point of the training set onto a high dimensional space by some function φ and to seek a separating hyperplane (w,b), with w the weight vector and b the bias, in this space which maximises the margin, or distance, between the hyperplane and the closest data points belonging to the different classes. φ is realized by a nonlinear function k(.,.), also called a kernel, which defines a dot product in the feature space. We can then substitute the dot product 〈φ(x),φ(xi)〉 in feature space with the kernel k(x, xi). Conditions for a function to be a kernel are expressed in a theorem by Mercer [10]. The optimal separating hyperplane can be represented based on a kernel function:

f(x) = Σi αi yi k(xi, x) + b    (1.1)

For a separable classification task such an optimal hyperplane exists, but very often the data points will be only almost linearly separable, in the sense that only a few of the data points cause them to be non-linearly separable. Such data points can be accommodated in the theory with the introduction of slack variables that allow particular vectors to be misclassified. The hyperplane margin is then relaxed by penalizing the training points misclassified by the system. Furthermore, to adapt to the case of unbalanced distributions, the basic idea is to introduce different error weights C+ and C- for the positive and the negative class in order to penalize false positives and false negatives differently [4]. This induces a decision boundary which is more distant from the smaller class than from the other. Hence one variant of the algorithm consists of solving the following optimization problem:
min_{w,b,ξ} ½||w||² + C+ Σ_{i: yi=+1} ξi + C- Σ_{i: yi=-1} ξi
subject to yi(〈w, φ(xi)〉 + b) ≥ 1 - ξi, ξi ≥ 0, i = 1, …, n    (1.2)
where ξi is a positive slack variable that measures the degree of violation of the constraint. The penalties C+ and C- are regularization parameters that control the trade-off between maximizing the margin and minimizing the training error.
2.2.2. Model Selection Criteria
To obtain good performance, some parameters in SVMs have to be selected carefully. These parameters include the regularization parameters C and the parameters of the kernel function. These "higher level" parameters are usually referred to as hyperparameters. The model selection problem is to select the best model from a candidate set so that the generalization error is minimized over all possible examples
drawn from an unknown distribution P(x,y). As the data distributions in real problems are not known in advance, the generalization error is not computable and one needs reliable estimates of the generalization performance. We estimate generalization error using the class-weighted accuracy CWA = w·sensitivity + (1-w)·specificity, which assigns user-defined weights to sensitivity and specificity in order to compensate for class imbalance [11].
2.3. Application to Nosocomial Infection Detection
We applied our parameter optimization method to a medical problem, the detection of nosocomial infections. A nosocomial infection (NI) is an infection that develops during hospitalization whereas it was not present or incubating at the time of admission. Usually, a disease is considered an NI if it develops 48 hours after admission. The University Hospital of Geneva (HUG) has been performing yearly prevalence studies to detect and monitor NIs since 1994 [12]. Their methodology is as follows: the investigators visit every ward of the HUG over a period of approximately three weeks. All patients hospitalized for 48 hours or more at the time of the study are included. Medical records, kardex, X-ray and microbiology reports are reviewed, and additional information is eventually obtained by interviewing nurses or physicians in charge. Collected variables include demographic characteristics, admission date, admission diagnosis, comorbidities, McCabe score, type of admission, provenance, hospitalization ward, functional status, previous surgery, previous intensive care unit stay, exposure to antibiotics, antacid and immunosuppressive drugs and invasive devices, laboratory values, temperature, date and site of infection, and fulfilled criteria for infection. After preliminary data cleaning, the resulting dataset consisted of 683 cases and 49 variables. The major difficulty inherent in the data is the highly skewed class distribution. Out of 683 patients, only 75 (11%) were infected and 608 were not. This application was thus an excellent testbed for assessing the efficacy of using HJPS to tune SVM parameters in the presence of class imbalance.
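The following sketch illustrates the approach described in 2.1-2.3: a minimal Hooke-Jeeves search over (log2 C+, log2 C-, log2 γ) that minimises 1 - CWA, estimated by 5-fold stratified cross-validation of an RBF-SVM with class-dependent error weights. It is an illustration using scikit-learn, not the authors' implementation; the parameter defaults (α, δ, ε, starting point) and the use of class weights to realize C+/C- are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def cwa(y_true, y_pred, w=0.7):
    # Class-weighted accuracy: w*sensitivity + (1-w)*specificity, infected class = 1.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return w * sens + (1 - w) * spec

def objective(theta, X, y, w=0.7):
    """theta = (log2 C+, log2 C-, log2 gamma); returns 1 - mean CWA (to be minimised)."""
    c_pos, c_neg, gamma = 2.0 ** np.asarray(theta, dtype=float)
    scores = []
    for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        clf = SVC(kernel="rbf", gamma=gamma, C=1.0,
                  class_weight={1: c_pos, 0: c_neg})  # asymmetric error weights
        clf.fit(X[train], y[train])
        scores.append(cwa(y[test], clf.predict(X[test]), w))
    return 1.0 - np.mean(scores)

def hooke_jeeves(f, b, step, alpha=2.0, delta=2.0, eps=1e-2):
    """Minimal Hooke-Jeeves pattern search (minimisation)."""
    b = np.asarray(b, dtype=float)
    step = np.asarray(step, dtype=float)

    def explore(x, fx):
        x = x.copy()
        for i in range(len(x)):
            for cand in (x[i] + step[i], x[i] - step[i]):
                trial = x.copy()
                trial[i] = cand
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    fb = f(b)
    while step.max() > eps:
        x, fx = explore(b, fb)            # exploratory move around the base point
        if fx < fb:
            while fx < fb:                # pattern moves while they keep improving
                b_prev, b, fb = b, x, fx
                pattern = b + alpha * (b - b_prev)
                x, fx = explore(pattern, f(pattern))
        else:
            step = step / delta           # no improvement: reduce the step size
    return b, fb

# X, y: numpy feature matrix and 0/1 infection labels (not shown here).
# best_theta, err = hooke_jeeves(lambda t: objective(t, X, y),
#                                b=np.zeros(3), step=np.array([2.0, 2.0, 2.0]))
```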
3. Experimentation and Results
The experimental goal was to assess the HJPS method for tuning SVM parameters automatically and within a reasonable amount of time. For this, a standard GS method was used as a baseline for comparing the quality of the final result and the computational cost of obtaining that result. For both approaches we used 5-fold stratified cross-validation to compare the quality of HJPS with GS in terms of performance and computational cost. To train our SVM classifiers we used a Gaussian kernel k(x, xi) = exp(-γ||x - xi||²). Thus the parameter set to tune was θ = (γ, C+, C-). GS was done by varying values over [2^i], i = -5, …, 8 for the regularization parameters C+, C- and over [2^i], i = -8, …, 5 for the RBF kernel width γ. Standard parameter settings from the HJPS literature were used. For CWA, a weight of 0.7 was assigned to sensitivity, which is the priority goal in nosocomial infection surveillance. Table 1 shows the comparative results between our search method (HJPS) and a GS approach for the nosocomial dataset. Final results were compared using McNemar's test, which revealed no significant difference between the two approaches at the 95% confidence level. However, it is clear from the last column of the table that HJPS incurred roughly two
hundred times fewer function evaluations than GS to find the best parameters (one function evaluation corresponds to building 5 SVM models).
Table 1. Performance of SVMs using a Gaussian kernel with the optimal parameter set (γ, C+, C-) found via GS and HJPS. Computational load: number of function evaluations.
| Methods | C+ | C- | γ | Acc. | Sens. | Spec. | CWA (w=0.7) | CPU time [min] | Nb func. eval. |
| GS | 0.5 | 0.06 | 1.6 × 10^-2 | 83.59 | 81.33 | 83.87 | 83.11 | 36.17 | 2744 |
| HJPS | 19.51 | 1.77 | 12 × 10^-4 | 81.84 | 88 | 81.08 | 83.16 | 1.35 | 13 |
4. Conclusion and Future Work
We proposed an algorithm based on the HJPS method that can reliably find good ASVM models with RBF kernels in a fully automated way. We selected the HJPS method because of its robustness, simplicity and ease of implementation. Our experiments have shown that HJPS and GS attain equivalent generalization performance, but that HJPS is about forty times faster than GS. Despite the low complexity enjoyed by our method, it suffers from falling into local optima. To alleviate this drawback we plan to merge HJPS with Genetic Algorithms (GA) by first performing a coarse search for the global minimum by means of a GA and then refining the solution by an HJPS approach.
References
[1] Chapelle O., Vapnik V., Bousquet O., Mukherjee S. Choosing Multiple Parameters for Support Vector Machines. Machine Learning 2002;46:131-159.
[2] Keerthi S.S. Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms. IEEE Transactions on Neural Networks; 2003. p. 1225-1229.
[3] Chung K.M., Kao W.C., Sun C.L., Wang L.L., Lin C.J. Radius margin bounds for support vector machines with the RBF kernel. Neural Comput. 2003;15:2643-2681.
[4] Cohen G., Hilario M., Geissbuhler A. Model Selection for Support Vector Classifiers via Genetic Algorithms. An Application to Medical Decision Support. In: Int Symp Biol Med Data Analysis; 2004.
[5] Friedrichs F., Igel C. Evolutionary tuning of multiple SVM parameters. In: 12th European Symposium on Artificial Neural Networks (ESANN); 2004.
[6] Runarsson T.P., Sigurdsson S. Asynchronous Parallel Evolutionary Model Selection for Support Vector Machines. Neural Information Processing - Letters and Reviews 2004;3:1065-1076.
[7] Hooke R., Jeeves T.A. "Direct search" solution of numerical and statistical problems. Association for Computing Machinery J. 1961;8:212-229.
[8] Cortes C., Vapnik V. Support Vector Networks. Machine Learning 1995;20:273-297.
[9] Vapnik V. Statistical Learning Theory. Wiley; 1998.
[10] Cristianini N., Shawe-Taylor J. An Introduction to Support Vector Machines. Cambridge University Press; 2000.
[11] Cohen G., Hilario M., Sax H., Hugonnet S. Data Imbalance in surveillance of nosocomial infections. In: International Symposium on Medical Data Analysis; Berlin; 2003.
[12] Harbarth S., Ruef C., Francioli P., Widmer A., Pittet D. Nosocomial infections in Swiss University Hospitals: a multicentre survey and review of the published experience. Schweiz Med Wochenschr 1999;129:1521-1528.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-559
Factuality Levels of Diagnoses in Swedish Clinical Text
Sumithra VELUPILLAI a, Hercules DALIANIS a, Maria KVIST a,b
a Dept. of Computer and Systems Sciences (DSV), Stockholm University, Forum 100, SE-164 40 Kista, Sweden
b Dept. of Clinical Immunology and Transfusion Medicine, Karolinska University Hospital, SE-171 76 Stockholm, Sweden
Corresponding author: Sumithra Velupillai
Abstract. Different levels of knowledge certainty, or factuality levels, are expressed in clinical health record documentation. This information is currently not fully exploited, as the subtleties expressed in natural language cannot easily be machine analyzed. Extracting relevant information from knowledge-intensive resources such as electronic health records can be used for improving health care in general by e.g. building automated information access systems. We present an annotation model of six factuality levels linked to diagnoses in Swedish clinical assessments from an emergency ward. Our main findings are that overall agreement is fairly high (0.7/0.58 F-measure, 0.73/0.6 Cohen’s κ, Intra/Inter). These distinctions are important for knowledge models, since only approx. 50% of the diagnoses are affirmed with certainty. Moreover, our results indicate that there are patterns inherent in the diagnosis expressions themselves conveying factuality levels, showing that certainty is not only dependent on context cues. Keywords. Diagnosis reasoning, factuality levels, annotation, Swedish, clinical text, electronic health records.
1. Introduction
The process of diagnosing a patient is not trivial, and involves making decisions based on many diverse criteria. Clinicians document reasoning processes and decisions in free text, information that is currently not fully exploited for further knowledge management or research. Accurate and situation-specific information access is extremely important, especially in the clinical domain. This will provide clinicians with tools for information retrieval, using extracted information to produce relevant summaries, and aggregating extracted information for knowledge discovery and further clinical research [1]. In order to create information access solutions that utilize the knowledge documented in free text, it is necessary to be able to model subtleties expressed in natural language. One important aspect to consider is the level of certainty expressed in the reasoning and decision context. For instance, a likely scenario is the incorporation of a search engine in an electronic health record system, where clinicians can search for previous mentions of diagnoses for a particular patient. However, some of these diagnoses are written in a negated or speculative context, e.g. this is definitely not
diabetes or angina pectoris cannot be excluded. It is crucial that such distinctions are observed, as they convey different levels of knowledge certainty. Research on modeling factuality levels, or degrees of certainty, in textual data has increased in recent years. In the BioScope corpus [2], which contains biomedical texts, certainty levels are annotated at a sentence level, while negation and speculation cues are annotated at a token (word) level. In FactBank, factuality levels in newspaper articles are instead annotated on an event level [3]. In the clinical domain, agreement on probability expressions in radiology reports has been studied. Two studies analyzed phrases indicating different levels of certainty with respect to diagnoses [4, 5]. Both studies show that intermediate probabilities are more difficult to agree on while phrases indicating very high or low probabilities result in higher agreement. In automatic information retrieval settings, these issues have also been addressed in the research community lately. RadReportMiner [6] is a context-aware search engine, taking into account negations and uncertainties, achieving improved precision results (81%) compared to a generic search engine (27%). In this paper, we present a model for annotating factuality distinctions in clinical documentation. Our aim is to develop automated systems that distinguish factuality levels of diagnoses in Swedish. Two clinicians annotate diagnoses in free-text entries for factuality levels. We analyze and evaluate the annotations with Intra- and Inter-Annotator Agreement (IAA). To our knowledge, this is the first attempt at modeling these distinctions and creating such a resource in Swedish.
2. Methods
Work process: we (1) assembled a list of diagnoses and created a resource for annotation, (2) developed annotation guidelines and annotated the created set, and (3) evaluated Inter- and Intra-Annotator Agreement and did a qualitative analysis. We used the Knowtator plugin in the Protégé tool [7] for all annotation work. All documents were extracted randomly. Two senior physicians, A1 and A2, performed all annotation tasks, both accustomed to reading and writing medical records. We extracted free-text entries from an emergency ward included in the Stockholm EPR Corpus [8]. Only entries documented under the category Bedömning (Assessment) were used in the annotation task. This field was chosen since it is the documentation entry containing the most reasoning.
2.1. Creating a Set of Documents Marked with Diagnoses
Instead of using diagnoses from Swedish medical terminology resources, we wanted to capture many diagnosis variants (e.g. inflections, misspellings, abbreviations). A collection of Swedish diagnoses was produced through a manual analysis of a subset of 150 assessment fields. A diagnosis was defined as a medical condition with a known cause, prognosis or treatment. All different variants and inflections of the same diagnosis expression were annotated. A simple string matching procedure was employed to automatically mark diagnoses from the created diagnosis collection. A general language automatic
lemmatizer for Swedish (http://www.cst.dk/online/lemmatiser/) was used to capture further inflections. Each diagnosis was marked with brackets, e.g. Patient with diabetes.
2.2. Annotation Classes and Guidelines
Factuality levels were modeled in two polarities: Positive and Negative. These were further graded: Certain, Probable or Possible. Each extracted diagnosis expression was annotated as belonging to one polarity and gradation, e.g. Certainly Positive, resulting in six annotation classes. Furthermore, the class Not Diagnosis was included for cases where the current context was not a diagnosis (e.g. infektion – short for clinic), and the class Other for cases where, e.g., the diagnosis referred to someone other than the patient, or where the annotator was uncertain. A first annotation task was performed in order to create detailed guidelines for the remaining task; the annotation guidelines, including examples, can be found at http://www.dsv.su.se/hexanord/guidelines/ (guidelines_stockholm_epr_diagnosis_factuality_corpus.pdf).
2.3. Evaluation Metrics
The results were evaluated with IAA: F-measure and Cohen's κ. IAA (Intra) results were measured on documents annotated twice by annotator A1, the second time in a new, randomized order. IAA (Inter) results were measured on documents annotated by the two annotators A1 and A2, treating A1 as the gold standard.
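For readers who want to reproduce this kind of agreement evaluation, the computation reduces to Cohen's κ and per-class F-measures over two parallel label sequences. The sketch below uses scikit-learn; the variable names for the annotators' label lists are placeholders, not part of the study's released material.

```python
# Intra-/Inter-Annotator Agreement: overall Cohen's kappa and per-class F-measure.
from sklearn.metrics import cohen_kappa_score, f1_score

CLASSES = ["CP", "PrP", "PoP", "PoN", "PrN", "CN", "ND", "O"]

def iaa(gold_labels, other_labels):
    """Treats `gold_labels` as the gold standard and compares `other_labels` to it."""
    kappa = cohen_kappa_score(gold_labels, other_labels, labels=CLASSES)
    per_class_f1 = dict(zip(CLASSES,
                            f1_score(gold_labels, other_labels,
                                     labels=CLASSES, average=None)))
    return kappa, per_class_f1

# Intra: A1 first vs. second iteration; Inter: A1 (gold) vs. A2.
# kappa_intra, f1_intra = iaa(a1_first, a1_second)
# kappa_inter, f1_inter = iaa(a1_first, a2)
```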
3. Results
In total, the number of annotated diagnosis instances was 2 182 (A1 vs A1) and 2 070 (A1 vs A2); the discrepancy between the two sets is caused by mismatches and missed instances. The instances were extracted from 1 297 Assessment fields (approx. 51% of the total number of Assessment fields). From the collection of 337 diagnoses, 227 were found.
3.1. Intra- and Inter-Annotator Agreement
A confusion matrix over the number of instances assigned to each class is shown in Table 1. Certainly Positive was in clear majority, almost 50% of the total number of instances. Possibly Negative and Not Diagnosis were very rare. The main discrepancies between the two annotators were in cases of assigning intermediate factuality levels. A1 generally assigned higher levels of factuality. Intra- and Inter-Annotator Agreement was very high for the majority class Certainly Positive (0.9 F-measure for both), while very low for Possibly Negative (0.35/0.03 F-measure, respectively), this being a rare class. It is interesting to note that the classes Not Diagnosis and Other, both relatively rare, resulted in fairly high agreement (0.82/0.62 and 0.69/0.65 F-measure, respectively). Overall IAA measured by Cohen's κ is 0.73 (Intra) and 0.60 (Inter).
Table 1. Confusion matrix, Intra- and Inter-Annotator Agreement.

|           | CP   | PrP | PoP | PoN | PrN | CN  | ND | O  | Σ    |
| CP Intra  | 990  | 78  | 4   | 0   | 3   | 2   | 4  | 19 | 1100 |
| CP Inter  | 834  | 59  | 7   | 0   | 4   | 5   | 1  | 20 | 930  |
| PrP Intra | 20   | 236 | 55  | 1   | 1   | 0   | 1  | 0  | 314  |
| PrP Inter | 66   | 134 | 10  | 1   | 0   | 0   | 2  | 1  | 214  |
| PoP Intra | 4    | 38  | 127 | 25  | 9   | 0   | 0  | 2  | 205  |
| PoP Inter | 11   | 149 | 180 | 41  | 45  | 1   | 1  | 10 | 438  |
| PoN Intra | 0    | 0   | 6   | 14  | 7   | 1   | 0  | 1  | 29   |
| PoN Inter | 0    | 0   | 0   | 1   | 5   | 1   | 0  | 0  | 7    |
| PrN Intra | 1    | 1   | 1   | 10  | 118 | 25  | 0  | 5  | 161  |
| PrN Inter | 0    | 0   | 0   | 2   | 35  | 18  | 0  | 1  | 56   |
| CN Intra  | 2    | 0   | 4   | 0   | 51  | 195 | 0  | 1  | 253  |
| CN Inter  | 2    | 0   | 0   | 4   | 99  | 193 | 1  | 3  | 302  |
| ND Intra  | 0    | 0   | 0   | 0   | 0   | 0   | 26 | 0  | 26   |
| ND Inter  | 13   | 5   | 3   | 2   | 1   | 3   | 30 | 4  | 61   |
| O Intra   | 8    | 1   | 4   | 1   | 7   | 0   | 8  | 65 | 94   |
| O Inter   | 1    | 1   | 1   | 1   | 5   | 3   | 1  | 49 | 62   |
| Σ Intra   | 1025 | 354 | 201 | 51  | 196 | 225 | 37 | 93 | 2182 |
| Σ Inter   | 927  | 348 | 201 | 52  | 194 | 223 | 36 | 88 | 2070 |
Columns: A1, first annotation iteration. Rows: Intra: A1, second annotation iteration (same set randomized), Inter: A2. CP = Certainly Positive, PrP = Probably Positive, PoP = Possibly Positive, PoN = Possibly Negative, PrN = Probably Negative, CN = Certainly Negative, ND = Not Diagnosis, O = Other, Σ = Total
3.2. Qualitative Analysis
We also performed a manual, qualitative analysis of the resulting class assignments. We found that Certainly Positive dominated where a) diagnoses show overtly, e.g. skin diseases (eczema, urticaria, skin infection) and general conditions (overweight, asystolia, fainting), or b) the diagnosis was made by an apparatus (auricular fibrillation/ECG). Probably Positive dominates for diagnoses with medical reasons for not securing certainty, e.g. virosis, gastritis. Linguistic reasons seem to direct the following for some diagnoses: 1) an inverted pattern with a complementary vocabulary, e.g. ischemia (Certainly/Probably Negative in majority) versus heart attack or angina pectoris (Certainly/Probably Positive in majority), 2) a lack of negative annotation classes when normality was not expressed as negation (hypertension), 3) for lunginflammation (pneumonia), speculation was expressed in Swedish while we saw certainty expressed in Greek.
4. Discussion
In this study we present a model for knowledge certainty classification. This is used for the creation of an annotated set of Assessment entries from a Swedish emergency ward for factuality levels assigned to diagnoses. The model was functional and agreeable to the domain expert annotators. Our IAA results suggest that this model and resource can be used for developing automated systems. We also show, through a qualitative analysis, that factuality levels for different diagnoses are dependent on diagnosis type as well as inherent linguistic factors. This demonstrates that factuality and speculation in clinical text reside not only in linguistic context cues.
4.1. Limitations
The study design has some limitations that lowered the recall of diagnoses to be annotated. By employing a strict matching approach, yielding high precision, possible variants in the form of misspellings, compounding and other formulations were missed. Fuzzier matching techniques could increase recall, at the cost of lower precision. The use of a limited list of diagnoses will inevitably result in a skewed distribution of diagnosis types. As a result, the model may not catch enough numbers and types of expressions of the subtleties in conveying levels of factuality. How this in turn limits the created resource's ability to be used for machine learning is yet to be seen. The main limitation of this model for future work is the low number of annotations in some annotation classes. Intermediate probability assignments are clearly not self-evident (e.g. [4] and [5]). It can be argued that the factuality levels Possibly and Probably, or even the two Possibly classes, could be fused to lower the number of factuality levels and increase the number of training instances for machine-learning tasks. Such fusion was not agreeable to the involved physicians, as it would be a less accurate description of reality.
4.2. Significance of Study
Our results have important implications for the creation of intelligent information access from electronic health records. Without factuality analysis, uncertain or negated diagnoses would be identified as factual diagnoses. We have chosen a broad context-aware approach, in order to obtain a wide perspective on how factuality levels are expressed concerning diagnoses. To our knowledge, no other studies have used a similar approach in this domain. Studies in the biomedical field (e.g. [3]) use hedge cues to detect uncertainty. We hope our approach will reveal inherent and previously unknown features that will aid future machine-learning and text-mining studies.
Acknowledgments: This research has been carried out after approval from the Regional Ethical Review Board, Stockholm (Etikprövningsnämnden i Stockholm), permission no 2009/1742-31/5.
References
[1] Meystre SM, Savova GK, Kipper-Schuler KC, Hurdle JE. Extracting Information from Textual Documents in the Electronic Health Record, IMIA Yearbook of Medical Informatics 2008, 47 Suppl. 1 (2008), 138–154.
[2] Vincze V, Szarvas G, Farkas R, Móra G, Csirik J. The BioScope Corpus: Biomedical Texts Annotated for Uncertainty, Negation and their Scopes, BMC Bioinformatics 9(S-11) (2008).
[3] Saurí R, Pustejovsky J. FactBank: a corpus annotated with event factuality, Language Resources & Evaluation 43 (2009), 227–268.
[4] Khorasani R, Bates DW, Teeger S, Rothschild JM, Adams DF, Seltzer SE. Is Terminology Used Effectively to Convey Diagnostic Certainty in Radiology Reports?, Academic Radiology 10 (2003), 685–688.
[5] Hobby JL, Tom BDM, Todd C, Bearcroft PWP, Dixon AK. Communication of Doubt and Certainty in Radiology Reports, The British Journal of Radiology 73 (2000), 999–1001.
[6] Wu AS, Do BH, Kim J, Rubin DL. Evaluation of Negation and Uncertainty Detection and its Impact on Precision and Recall in Search, Journal of Digital Imaging.
[7] Ogren P. Knowtator: a Protégé plugin for annotated corpus construction, in Proc. HLT-NAACL 2006, Morristown, NJ, USA, ACL, 2006, pp. 273–275.
[8] Dalianis H, Hassel M, Velupillai S. The Stockholm EPR Corpus – Characteristics and some Initial Findings, in Proc. 14th ISHIMR, Kalmar, Sweden, 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-564
Network Analysis of Possible Anaphylaxis Cases Reported to the US Vaccine Adverse Event Reporting System after H1N1 Influenza Vaccine
Taxiarchis BOTSIS a,b,1 and Robert BALL a
a Office of Biostatistics and Epidemiology, Center for Biologics Evaluation and Research (CBER), Food and Drug Administration (FDA), Rockville, MD, USA
b Department of Computer Science, University of Tromsø, Tromsø, Norway
Abstract. The identification of signals from spontaneous reporting systems plays an important role in monitoring the safety of medical products. Network analysis (NA) allows the representation of complex interactions among the key elements of such systems. We developed a network for a subset of the US Vaccine Adverse Event Reporting System (VAERS) by representing the vaccines/adverse events (AEs) and their interconnections as the nodes and the edges, respectively; the subset we focused on comprised possible anaphylaxis reports submitted for the H1N1 influenza vaccine. Subsequently, we calculated the main metrics that characterize the connectivity of the nodes and applied the island algorithm to identify the densest region in the network and, thus, potential safety signals. AEs associated with anaphylaxis formed a dense region in the ‘anaphylaxis’ network, demonstrating the strength of NA techniques for pattern recognition. Additional validation and development of this approach is needed to improve future pharmacovigilance efforts. Keywords. Spontaneous Reporting System, Network Analysis, VAERS, H1N1.
1. Introduction
More than 10,000 reports of adverse events following more than 82.4 million doses of the H1N1 2009 monovalent vaccine were submitted to the United States (US) Vaccine Adverse Event Reporting System (VAERS) [1]. VAERS is the repository for adverse events (AEs) that are reported after vaccinations by health care providers, vaccine recipients and other interested parties, and by manufacturers as required by regulation. Well-trained nurses code these reports using the Medical Dictionary for Regulatory Activities (MedDRA) and assign preferred terms (PTs) that represent the AEs described in the narratives. Data collected in VAERS is analyzed to identify safety signals [2]. The traditional approach combines the review of individual reports by Medical Officers (MOs) with statistical data mining algorithms (DMAs) based on the detection of disproportionality of reporting [3]. Current
Corresponding Author: Taxiarchis Botsis, OBE|CBER|FDA, Woodmont Office Complex 1, Room 306N, 1401 Rockville Pike, Rockville, MD 20852, USA, e-mail: [email protected].
DMAs are generally limited in their ability to evaluate the multiple interactions among all the vaccines and AEs in the database. Thus, the methodologies to identify patterns of AEs related to the administration of a vaccine or the co-administration of multiple agents need to be improved. If a safety concern is identified, MOs follow up with more detailed analysis, including an evaluation of a series of cases with subsequent classification according to a predefined case definition. Responding to a safety signal for anaphylaxis after H1N1 influenza vaccine that was received from the Canadian Ministry of Health in mid-November 2009 [1], FDA systematically reviewed all (N=6034) case reports submitted to VAERS related to the H1N1 vaccine from November 22, 2009 through January 31, 2010 to evaluate whether a similar safety signal for anaphylaxis existed in VAERS. Although there was not a safety signal for anaphylaxis after H1N1 influenza vaccine in VAERS, the dataset generated by this review provided an opportunity to investigate whether applying the principles of network analysis (NA) would allow us to identify a pattern of PTs within the network that had performance characteristics nearly equal to manual case classification. The VAERS subset was viewed and analyzed as a network with the vaccines/PTs and their interconnections being the nodes and the edges, respectively.
2. Methods
MOs manually screened the narratives and PTs of all reports of possible anaphylaxis (N=237). The possible ‘anaphylaxis’ subset was preprocessed to facilitate the subsequent NA. Each report was first represented as a vector (Rx) consisting of vaccines and PTs; then, the vectors were decomposed into pairs of vaccine (Vax) or PT and report ID. For example, the vector Rx = [IDx Vax_1 Vax_2 PT_1 PT_2 PT_3] was decomposed to Vax_1-IDx, Vax_2-IDx, PT_1-IDx, PT_2-IDx, PT_3-IDx. The vaccines/PTs were tied by their co-occurrence in an individual report, that is, by being part of pairs with the same IDx. The number of reports containing a particular tie was the weight for each element that was included in an adjacency matrix; this matrix facilitated the construction of the ‘anaphylaxis’ network. We focused on identifying patterns among the PTs consistent with anaphylaxis in this network. In terms of topology, a dense region within the network structure would represent a pattern. NA offers the possibility for the qualitative evaluation and quantification of these areas. In particular, we used certain node centrality metrics: hub centrality, which measures the degree of connectivity of a node to other important nodes in the network [4]; betweenness centrality, which measures the extent to which each node acts as a ‘bridge’ between other nodes [5]; and inverse closeness centrality, which measures the average distance from a node to the other nodes [4]. We calculated these metrics for the anaphylaxis network and scaled them according to the top value, i.e. all values in each metric were divided by that top value. Subsequently, we selected the top 20 nodes according to hub centrality and constructed a betweenness vs. inverse closeness centrality diagram to illustrate the connectivity of these nodes. Further evaluation included the visual representation of the densest area of the network that might hide the pattern of interest. To reduce the full network, we selected the ‘islands’ algorithm, which identifies all the maximal islands within a predefined node interval for an edge weight threshold [6], and combined it with the triangular weight (TW), which is equal to the number of triangles in which each line of the original network is
contained [7]. We hypothesized that the use of TWs instead of the original weights would emphasize multiple interactions, filter out weak connections and reveal the patterns; thus, we applied it to both networks. While no clear “gold standard” exists for pattern recognition, we compared the PTs identified by the above network analysis with the criteria in the Brighton Collaboration (BC) case definition for anaphylaxis [8]. Based on these criteria (Table 1), the patterns related to anaphylaxis are defined. Here, we were interested in finding these criteria through both qualitative and quantitative NA of the ‘anaphylaxis’ subset. MedDRA does not include all the appropriate PTs to fully represent the BC criteria; however, the BC case definition served as a guide for recognizing the PTs that describe these criteria in the identified patterns. Pajek 2.01 and ORA 2.2.5 were the tools used for the network analysis.
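A minimal sketch of this preprocessing and network-construction step is given below, assuming each report is available as a list of its vaccine and PT labels; the report contents are invented for illustration, and the networkx library stands in for the Pajek/ORA tools actually used in the study.

    from itertools import combinations
    from collections import Counter
    import networkx as nx

    # Hypothetical input: each report is (report_id, [vaccine and PT labels]).
    reports = [
        ("ID1", ["FLU(H1N1)", "Urticaria", "Dyspnoea"]),
        ("ID2", ["FLU(H1N1)", "FLUVIRIN", "Anaphylactic reaction", "Urticaria"]),
        ("ID3", ["FLU(H1N1)", "Erythema", "Dyspnoea"]),
    ]

    # Tie two vaccines/PTs whenever they co-occur in the same report;
    # the edge weight is the number of reports containing that pair.
    weights = Counter()
    for _, elements in reports:
        for a, b in combinations(sorted(set(elements)), 2):
            weights[(a, b)] += 1

    G = nx.Graph()
    for (a, b), w in weights.items():
        G.add_edge(a, b, weight=w)

    # Node connectivity metrics analogous to those used above: hub centrality
    # (via HITS), betweenness, and closeness (the reciprocal of average distance).
    hubs, _ = nx.hits(G)
    betweenness = nx.betweenness_centrality(G)
    closeness = nx.closeness_centrality(G)

    # Scale each metric by its top value and keep the top nodes by hub centrality.
    def scaled(metric):
        top = max(metric.values())
        return {node: value / top for node, value in metric.items()}

    hub_scaled = scaled(hubs)
    top_nodes = sorted(hub_scaled, key=hub_scaled.get, reverse=True)[:20]
    print(top_nodes)

On a full VAERS subset the same co-occurrence counting produces the adjacency matrix described above, after which island/TW-style reduction can be applied to the weighted graph.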
3. Results
The original ‘anaphylaxis’ network included 301 nodes. The diagrams in Figure 1A present the metrics for the 20 top nodes according to hub centrality. The original network was reduced to a community of 30 nodes by combining the TW (TW threshold equal to 70) with the ‘islands’ algorithm (Figure 1B). Network analysis showed a clear pattern for anaphylaxis syndrome, with all the PTs (shown as red crosses in the betweenness vs. inverse closeness diagrams) that characterize this condition being part of the ‘anaphylaxis’ island and among the top nodes in terms of all centrality metrics (Figure 1A). In line with the Brighton Collaboration criteria, the symptoms for the four organ systems (dermatological/mucosal, cardiovascular, respiratory and gastrointestinal) were represented in the network image as well as in the top 20 nodes. As expected, the FLU(H1N1) node was the most central; the other two influenza vaccines were also among the top nodes.

Table 1. Summarized criteria for the Brighton Collaboration case definition of anaphylaxis.

Dermatologic or mucosal. Major criteria: urticaria (hives) or erythema, generalized; angioedema, localized or generalized; generalized pruritus with skin rash. Minor criteria: generalized pruritus without skin rash; generalized prickle sensation; localized injection site urticaria; red and itchy eyes.
Cardiovascular. Major criteria: measured hypotension; uncompensated shock (tachycardia, capillary refill time >3 sec, reduced central pulse volume, decreased level or loss of consciousness). Minor criteria: reduced peripheral circulation (tachycardia, a capillary refill time of >3 sec without hypotension, a decreased level of consciousness).
Respiratory. Major criteria: bilateral wheeze (bronchospasm), stridor; upper airway swelling (lip, tongue, throat, uvula, or larynx); respiratory distress (tachypnoea, increased use of accessory respiratory muscles, recession, cyanosis, grunting). Minor criteria: persistent dry cough, hoarse voice; difficulty breathing (no wheeze or stridor); sensation of throat closure; sneezing, rhinorrhea.
Laboratory. Minor criteria: mast cell tryptase elevation > upper normal limit.
Gastrointestinal. Minor criteria: diarrhoea, abdominal pain, nausea, vomiting.
Figure 1. A. ‘Anaphylaxis’ network and centrality metrics for the top 20 nodes; for illustration purposes, node labels are not presented and hub centrality diagram is reversed. B. ‘Anaphylaxis’ island and pattern.
4. Discussion
This work demonstrates the potential use of NA for pattern identification in VAERS, as discussed in our first study (also the first in the area) that dealt with the same issue [9]. Addressing a gap in traditional approaches, we analyzed the multiple interactions of the critical terms (vaccines and PTs) in VAERS reports using a dataset related to adverse events reported after H1N1 vaccination. Through the anaphylaxis example, we showed that it is possible to isolate the densest region in a network using certain metrics and algorithms. Using a certain standard (e.g. the BC criteria), this region
could be characterized as a pattern that deserves further investigation. While not the focus of our study, NA might serve as an efficient way to begin the development of Standardized MedDRA Queries [10]. This study has some limitations. First, we did not apply a statistical framework for identifying the anaphylaxis pattern but empirically evaluated the results of NA. Second, we did not follow a validated rule for selecting the node interval in the ‘islands’ algorithm; we considered that this number should be adequate to reveal a strong pattern. It could also be argued that our sample included retrospectively classified reports and that this might reduce the value of our analysis; however, our main scope was the investigation of the possible benefits of applying NA to VAERS data. Various algorithms have been applied before for the detection of clustered regions in a network. For example, Newman described the identification of communities based on the concept of modularity [11]. The evaluation of other approaches in addition to the ‘islands’ algorithm should be included in the next steps of our work. The evaluation framework should also be extended to include a statistical aspect, e.g. a thorough analysis of the centrality metrics. The current study is one step in evaluating the potential of NA to recognize safety patterns in VAERS. We plan to further study this approach by addressing the aforementioned limitations and by applying our ideas to prospectively collected data for prediction purposes. Acknowledgements: This project was supported in part by an appointment to the Research Participation Program at the Center for Biologics Evaluation and Research administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and the U.S. Food and Drug Administration. We thank the Medical Officers at FDA who evaluated the reports and those who reported them.
References
[1] Reblin T. AREPANRIX™ H1N1 Vaccine Authorization for Sale and Post-Market Activities. 11-12-2009. Canadian Ministry of Health.
[2] Varricchio F, Iskander J, Destefano F, et al. Understanding vaccine safety information from the vaccine adverse event reporting system, The Pediatric Infectious Disease Journal, 23 (2004) 287-294.
[3] Stephenson WP, Hauben M. Data mining for signals in spontaneous reporting databases: proceed with caution, Pharmacoepidemiology and Drug Safety, 16 (2007) 359-365.
[4] Newman MEJ. Networks: an introduction, Oxford University Press, New York, 2010.
[5] Freeman LC. Set of Measures of Centrality Based on Betweenness, Sociometry, 40 (1977) 35-41.
[6] Zaversnik M, Batagelj V. Islands, Sunbelt XXIV, 2004.
[7] Batagelj V, Mrvar M. Analysis of Large Networks with Pajek, Sunbelt XXIX, 2009.
[8] Ruggeberg JU, Gold MS, Bayas JM, et al. Anaphylaxis: Case definition and guidelines for data collection, analysis, and presentation of immunization safety data, Vaccine, 25 (2007) 5675-5684.
[9] Ball R, Botsis T. Can network analysis improve pattern recognition among adverse events following immunization reported to VAERS? Clinical Pharmacology & Therapeutics (in press).
[10] Bate A, Evans SJW. Quantitative signal detection using spontaneous ADR reporting. Pharmacoepidemiology and Drug Safety, 18 (2009) 427-436.
[11] Newman MEJ. Fast algorithm for detecting community structure in networks, Physical Review, 69 (2004).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-569
Using Pharmacogenetics Knowledge to Increase Accuracy of Alerts for Adverse Drug Events
Yossi MESIKA a,1, Byung Chul LEE b, Yevgenia TSIMERMAN a, Haggai ROITMAN a and Heon Kyu PARK b
a IBM Research, Haifa, Israel
b IBM Ubiquitous Computing Laboratory, Seoul, Korea
Abstract. Adverse drug events (ADEs) have significant implications for patient safety and are recognized as a major cause of fatalities and hospital expenses. Although some medical systems today can help reduce the number of ADE occurrences, these primarily take into account clinical factors, even though recent studies show the significance of genetic profiles in ADE detection. Incorporating pharmacogenetics knowledge and data from genetic test results into these systems can improve the accuracy of preliminary alerts about potential ADEs. However, pharmacogenetics knowledge is unstructured, making it inappropriate for use in a system that involves automatic processing. We propose a methodology that can help incorporate this pharmacogenetics knowledge. Specifically, we show how pharmacogenetics knowledge can be expressed in a medical system and used together with patient genetic data to provide alerts about ADEs at the point of care. Keywords. Pharmacogenetics, Adverse Drug Events, Warfarin
1. Introduction
Adverse Drug Events (ADEs) are usually defined as an undesirable effect in a patient, caused by a drug or the inappropriate use of a drug [1]. A common example of an ADE is the administration of an overdose or underdose of a drug to a patient. In the US alone, ADEs are responsible for 6.7% of the hospitalizations, ranking just below emergency department visits [2]. A similar rate in the UK suggests that ADEs are a serious worldwide phenomenon. Studies have indicated that about 28% of ADEs could be prevented [3]. Clearly, a reliable prediction system would increase patient safety. Apart from patient safety concerns, the prevention of ADEs also has significant economic implications for hospitals due to the expenses incurred by unplanned hospitalization. The return on investment (ROI) of a hospital that invests in methods to prevent ADEs will be substantial [4]. ADE prevention can be achieved by incorporating an ADE alert system at the point of care. Such a system can either function as a component complementing the existing electronic medical records (EMR) system or be integrated within the EMR system. Although some hospitals have developed ADE prevention systems, these solutions and
Corresponding author: Yossi Mesika, IBM Research, Haifa 31905, Israel; E-mail: [email protected].
the integrated ADE knowledge remain proprietary. Furthermore, in many health organizations, the knowledge about ADEs lies buried in the physicians' expertise, and gaining access to this knowledge could require a long formal process of education based on their experience. In both cases, the ADE knowledge remains inaccessible to patients who store their medical records in public services such as Google Health2 or Microsoft HealthVault3. Traditionally, ADE knowledge is based on studies that determined a range of clinical factors explaining the variability of different patient reactions to certain drugs. However, in recent years, advances in the genetics domain and new pharmacogenetics research have shown a high correlation between specific genetic variations and ADEs [5, 6, 7]. For example, Han Chinese patients carrying the HLA-B*1502 haplotype have a significantly higher risk of severe skin reactions associated with Carbamazepine, a drug commonly prescribed for the treatment of seizures [8]. Ideally, this new brand of pharmacogenetics knowledge could be used before initiating treatment to help reduce or completely eliminate ADEs. The idea is even more relevant today, now that genetic testing is more accessible and less expensive, and the number of direct-to-consumer (DTC) services that offer a comprehensive range of genetic testing to the public has increased [9]. The integration of genetic test reports and pharmacogenetics knowledge into existing health providers' medical systems is not currently supported by existing ADE alert systems. This prevents the prediction of ADEs considering both clinical and genetic data at the point of care. Designing a generic system capable of providing ADE notifications based on the genetic profile of a patient is a difficult challenge. There are two main hurdles that need to be tackled to design such a system. First, the clinical and genetic factors are vast and usually cannot be found within only one type of document. Second, the research outputs are unstructured and therefore cannot be processed by a machine. A semantic data warehouse can assist in overcoming the first challenge [10]. In this paper, we target the second challenge with a systematic methodology that uses a rule-based approach for expressing pharmacogenetics knowledge and executing it over harmonized patient data. Our work proposes a generic system that can integrate patient data (clinical and genetic) and transform pharmacogenetics knowledge into a machine-processable form. This ADE alerting service can be integrated into legacy medical systems (such as EMR) and provide caregivers with more accurate alerts.
2. Methods
There is a knowledge gap between the two domains that are required for developing ADE alerting services. The first domain is information technology (IT), which contributes the technical solutions for developing a complex medical system. The second domain is pharmacogenetics, where the knowledge exists in unstructured forms such as published papers, articles, research reports, and so on. The challenge of building an ADE alerting service is twofold. On the one hand, it is extremely difficult to integrate the pharmacogenetics knowledge because its unstructured form is difficult to process automatically. In addition, it is usually difficult for IT specialists, who are not familiar enough with pharmacogenetics, to express such knowledge. On the other
2 http://www.google.com/health
3 http://www.healthvault.com
hand, pharmacogenetics information is clear to medical experts; yet, they generally lack the technical skills and familiarity with machine languages needed to add the knowledge to the system. We propose bridging this gap by breaking the development process into two separate phases, where each phase targets a different type of domain expertise. In the first phase, we extract the knowledge from the sources of information and generate an abstraction model. In this phase, we make use of concept mapping [11], a method for visualizing different concepts and the relationships between them. A concept mapping tool allows medical experts to describe the pharmacogenetics knowledge in a structured form without requiring any programming skills. After modeling several cases, we were able to determine common concepts and relationship types that can be predefined and reused across different models. For example, concepts like gene and allele are common to all the models that we constructed, and their specific values can imply a variation in the normal drug dosage. Aside from its ease of use, this library of common reusable concepts allows IT specialists to understand and work with the generated models. In the second phase, IT specialists transform the models into a set of executable rules using a technology-specific language such as Java. In short, the first phase can be done by medical experts and the second one by IT specialists. Once the models and rules are created, the results of patients' genetic tests can be retrieved from hospital genetic labs or from external systems where independent genetic tests are performed. Once the data is available, the system can analyze it by executing the pharmacogenetics rules and provide alerts on drug overdose, drug allergy, interaction with other drugs, and so on. These alerts can be given to physicians at the point of care, allowing them to immediately take action and modify the prescription to minimize the chances of an ADE.
3. Results
We implemented a generic ADE alerting system by incorporating clinical information, genetic profiles, and pharmacogenetics knowledge. We then validated this generic approach on common pharmacogenetics cases including drugs such as Abacavir, Azathioprine, Clopidogrel, Irinotecan, Panitumumab and Warfarin. We integrated the pharmacogenetics knowledge for each drug into the system in two phases: model expression and rules generation. The rest of this section provides the details of the process using the common case of Warfarin. Warfarin4 is an anticoagulant used for prophylaxis and treatment of thromboembolic disorders. Warfarin drugs are responsible for 6.2% of all reported ADEs [2]. This high percentage provides tremendous incentive for creating a systematic solution that can assist in reducing the number of ADEs caused by this drug. Today, several clinical factors are already being taken into consideration to improve the estimation of a patient's initial therapeutic dose; these include age, body surface area (BSA), smoker status, race, and so forth. Although clinical predictors provide an explanation for 17 to 22% of dose variability, recent pharmacogenetics studies show that 53 to 54% of the variability in dose can be explained by including the CYP2C9 and VKORC1 genotypes [5]. As a result, the FDA has approved updating the Warfarin label to include recommended therapeutic dose ranges based on the specific genotypes (Table 1).
Also known by its brand names of Coumadin, Jantoven, Marevan, Lawarin and Waran
Table 1. Warfarin expected initial therapeutic dose ranges

VKORC1          CYP2C9 (rs1799853/rs1057910)
(rs9923231)     *1/*1      *1/*2      *1/*3      *2/*2      *2/*3      *3/*3
                (CC/AA)    (CT/AA)    (CC/AC)    (TT/AA)    (CT/AC)    (CC/CC)
GG              5-7 mg     5-7 mg     3-4 mg     3-4 mg     3-4 mg     0.5-2 mg
AG              5-7 mg     3-4 mg     3-4 mg     3-4 mg     0.5-2 mg   0.5-2 mg
AA              3-4 mg     3-4 mg     0.5-2 mg   0.5-2 mg   0.5-2 mg   0.5-2 mg
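To illustrate the kind of machine-processable rule produced in the second phase, the sketch below encodes Table 1 as a simple genotype-to-dose-range lookup. The authors generate such rules in Java from the concept maps, so this Python fragment is only an illustrative assumption, not their implementation; the function and variable names are ours.

    # Expected initial Warfarin dose ranges (mg/day) from Table 1, keyed by
    # (VKORC1 rs9923231 genotype, CYP2C9 star-allele diplotype).
    WARFARIN_DOSE_RANGES = {
        ("GG", "*1/*1"): (5, 7),   ("GG", "*1/*2"): (5, 7),   ("GG", "*1/*3"): (3, 4),
        ("GG", "*2/*2"): (3, 4),   ("GG", "*2/*3"): (3, 4),   ("GG", "*3/*3"): (0.5, 2),
        ("AG", "*1/*1"): (5, 7),   ("AG", "*1/*2"): (3, 4),   ("AG", "*1/*3"): (3, 4),
        ("AG", "*2/*2"): (3, 4),   ("AG", "*2/*3"): (0.5, 2), ("AG", "*3/*3"): (0.5, 2),
        ("AA", "*1/*1"): (3, 4),   ("AA", "*1/*2"): (3, 4),   ("AA", "*1/*3"): (0.5, 2),
        ("AA", "*2/*2"): (0.5, 2), ("AA", "*2/*3"): (0.5, 2), ("AA", "*3/*3"): (0.5, 2),
    }

    def warfarin_dose_alert(vkorc1, cyp2c9, prescribed_daily_dose_mg):
        """Return an alert message if the prescribed dose falls outside the
        genotype-specific expected range from Table 1, otherwise None."""
        expected = WARFARIN_DOSE_RANGES.get((vkorc1, cyp2c9))
        if expected is None:
            return "No pharmacogenetics rule for this genotype combination."
        low, high = expected
        if not (low <= prescribed_daily_dose_mg <= high):
            return ("ALERT: prescribed %.1f mg/day is outside the expected %.1f-%.1f mg "
                    "range for VKORC1 %s / CYP2C9 %s." %
                    (prescribed_daily_dose_mg, low, high, vkorc1, cyp2c9))
        return None

    print(warfarin_dose_alert("AA", "*3/*3", 5))  # flags a likely overdose

In the actual system such a rule would be evaluated against the harmonized clinical and genetic patient data at the point of care, alongside the non-genetic factors mentioned below.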
The Warfarin pharmacogenetics information already exists on the drug label and in relevant papers and public sources5. However, the information is not structured, making it very difficult to have it automatically processed by a system. In our proposed system, a domain expert who is familiar with the relevant concepts can express the pharmacogenetics information in a visual model. The expert uses an intuitive concept mapping tool6 to describe the information visually by drawing the concepts and the relationships between them. We assist the expert by predefining a library of reusable components that represent common concepts in pharmacogenetics. A model that expresses the pharmacogenetics of Warfarin can be seen in Figure 1. A more comprehensive model that also considers non-genetic factors (such as environmental, demographic and clinical) should be incorporated. The scenario above describes how we use our generic systematic approach for one common case of pharmacogenetics. The same approach can be applied for current and
Figure 1. Concept modeling of Warfarin pharmacogenetics.
5 Public knowledge sources such as SNPedia, dbSNP, PharmGKB and PubMed
6 Such as IHMC CmapTools, which is available freely from http://cmap.ihmc.us/
future advances in pharmacogenetics, where correlations between drug ADEs and genetic variations are discovered. Some correlations also involve clinical conditions and environmental factors, which should also be part of the process to determine whether an ADE alert is relevant.
4. Conclusions
Including pharmacogenetics in the process of determining potential ADEs will increase the accuracy of the generated ADE alerts. In this paper, we suggested a methodology for incorporating pharmacogenetics knowledge into an ADE alerting system. This methodology can be extended to also represent knowledge about non-genetic factors that influence the behavior of a medication. Integrating the various knowledge models related to a specific medication will result in more comprehensive and accurate alerting. We distinguished between two types of expertise, medical and IT, which should be included separately in the process of implementing such a system because the two disciplines differ. By bridging these two areas of expertise, we were able to generate an ADE notification service that considers both clinical and genetic data for a patient, ultimately reducing the number of ADEs. In addition to improving the quality of treatment, reducing the number of ADEs also has positive economic implications for care providers.
References
[1] Nebeker J. Clarifying adverse drug events: a clinician's guide to terminology, documentation, and reporting, Annals of Internal Medicine, 2004.
[2] Budnitz DS, Pollock DA, Weidenbach KN, Mendelsohn AB, Schroeder TJ, Annest JL. National surveillance of emergency department visits for outpatient adverse drug events, Journal of the American Medical Association, 2006; 296(15):1858-66.
[3] Bates DW, Cullen DJ, Laird N, Petersen LA, Small SD, Servi D. Incidence of adverse drug events and potential adverse drug events, Journal of the American Medical Association, 1995; 274(1):29–34.
[4] Bates DW, Spell N, Cullen DJ, Burdick E, Laird N, Petersen LA. The costs of adverse drug events in hospitalized patients, Journal of the American Medical Association, 1997; 277(4):307.
[5] Gage B, Eby C, Johnson J, Deych E, Rieder M, Ridker P. Use of pharmacogenetic and clinical factors to predict the therapeutic dose of warfarin, Clinical Pharmacology & Therapeutics, 2008; 84(3):326–331.
[6] Hetherington S, Hughes AR, Mosteller M, Shortino D, Baker KL, Spreen W. Genetic variations in HLA-B region and hypersensitivity reactions to abacavir, The Lancet, 2002; 359(9312):1121–1122.
[7] Hoskins JM, Goldberg RM, Qu P, Ibrahim JG, McLeod HL. UGT1A1*28 genotype and irinotecan-induced neutropenia: dose matters, Journal of the National Cancer Institute, 2007; 99(17):1290-5.
[8] Chung W, Hung S, Hong H, Hsih M. Medical genetics: a marker for Stevens–Johnson syndrome, Nature, 2004.
[9] Hogarth S, Javitt G, Melzer D. The Current Landscape for Direct-to-Consumer Genetic Testing: Legal, Ethical, and Policy Issues. Annual Review of Genomics and Human Genetics. 2008; 9(1):161-182.
[10] Bianchi S, Burla A, Conti C, Farkash A, Kent C, Maman Y. Next Generation Information Technologies and Systems. Springer Berlin Heidelberg. 2009.
[11] Novak JD. The Theory Underlying Concept Maps and How to Construct Them. Florida: Institute for Human and Machine Cognition. 2006.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-574
Schizophrenia Prediction with the AdaBoost Algorithm
Jan HRDLICKA a and Jiri KLEMA a
a Department of Cybernetics, Czech Technical University in Prague, The Czech Republic
Abstract. This paper presents an adaBoost approach to schizophrenia relapse prediction. The data for adaBoost are extracted from patients' answers to Early Warning Signs questionnaires sent regularly via mobile phone messages. The performance of the adaBoost algorithm is compared with the current ITAREPS system, which has sensitivity 0.65 and specificity 0.73. AdaBoost has the same sensitivity of 0.65 but a higher specificity of 0.84 and is thus ready to become part of the ITAREPS care program. Keywords. Schizophrenia, Prediction, Sensitivity, Specificity, AdaBoost
1. Introduction
Evidence suggests that a relapse of schizophrenia is preceded by psychotic and non-psychotic behavioural and phenomenological changes. The most common non-psychotic prodromal symptoms are irritation, sleep problems, tension, fear, anxiety, quietness or withdrawal [1]. Psychotic early prodromal symptoms resemble schizophrenia symptoms but are milder; for example, a milder experience of hallucinations might be termed "perceptual abnormalities", hearing voices etc. The evidence indicates that the prediction of schizophrenic relapse is a realistic goal and that an intervention based upon programs of early detection can therefore reduce schizophrenic relapse [4]. ITAREPS (Information Technology Aided Relapse Prevention in Schizophrenia) was developed by the Psychiatric Centre in Prague for rapid and targeted recognition of early warning signs of psychotic disorder relapse. This paper uses data from a one-year double-blinded follow-up, but ITAREPS is normally used routinely in clinical practice. A patient enrolled in the ITAREPS system responds to ten questions included in the patient's questionnaire. For example, one of the questions is "Has your feeling of being laughed at or talked about changed for the worse during this week?". The observer (a family member or a person close to the patient) answers similar questions about the patient's mental health. Both the patient and the observer rate the change for the worse of particular items by numbers between 0 and 4 and send their responses via mobile phone message. Patients and their family caretakers are asked to send a mobile phone message once a week. The alert-sending algorithm in ITAREPS is a simple sum-and-threshold classifier. Information about the ITAREPS system can be found in [6, 7].
2. Objectives and Measures
Our goal is to predict schizophrenia relapse on the basis of SMS messages sent by patients and their family informants. We are therefore looking for an appropriate classifier to distinguish between messages preceding a hospitalization, i.e. messages with prodromal symptoms, and other messages. Among the most popular measures used by medical experts are sensitivity and specificity. The literature concerning these test measures deals primarily with settings where a diagnostic test result, Y, is measured concurrently with the gold standard disease variable D. The sensitivity, also called the true positive rate (TPR), and 1-specificity, called the false positive rate (FPR), are defined as Sens = TPR = P{Y = 1|D = 1} and 1 − Spec = FPR = P{Y = 1|D = 0}, where D = 1 indicates the presence of disease and D = 0 denotes its absence, Y = 1 is a positive test result and Y = 0 a negative test result. In the ITAREPS system, the diagnostic test (the classifier outcome based on a newly sent message) is performed every week and it tries to predict a future event (hospitalization), not the current state of the patient. Because of that, the TPR function has to be extended for event times, where the success of the prediction depends on the time lag between the measurement time s (message date) and the patient's readmission time T. The time-dependent TPR function is then defined as

    TPR(t) = P{Y = 1 | T − s = t},  t ≤ τ,     (1)

where the time lag is t and τ is some prespecified, suitably large time distinguishing between positive and negative messages. The FPR, or 1-specificity, relates to subjects without hospitalization, or at least with hospitalizations having a measurement-hospitalization time lag bigger than τ. Therefore, we define the time-dependent FPR as

    FPR = P{Y = 1 | T − s > τ};     (2)

note that the FPR is not a function of time since it accumulates over all subjects for whom t > τ.
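As a concrete illustration of these definitions, the sketch below computes the time-dependent TPR for each observed lag and the overall FPR from per-message alert decisions. The tuple layout and the convention of using an infinite readmission time for never-hospitalized patients are our own assumptions, not part of the ITAREPS implementation.

    import math
    from collections import defaultdict

    def time_dependent_rates(messages, tau=42):
        """messages: list of (y, s, T) with alert decision y (1/0) for a message
        sent at time s and readmission time T (math.inf if no hospitalization
        followed).  Returns TPR(t) for each observed lag t <= tau and the FPR
        accumulated over all negative messages."""
        pos_by_lag = defaultdict(lambda: [0, 0])   # lag -> [alerts, messages]
        fp = neg = 0
        for y, s, T in messages:
            lag = T - s
            if lag <= tau:                         # positive sample (prodromal period)
                pos_by_lag[lag][0] += y
                pos_by_lag[lag][1] += 1
            else:                                  # negative sample
                fp += y
                neg += 1
        tpr = {t: alerts / n for t, (alerts, n) in sorted(pos_by_lag.items())}
        fpr = fp / neg if neg else float("nan")
        return tpr, fpr

    # Toy example: two messages before a hospitalization at day 100, one unrelated.
    print(time_dependent_rates([(1, 70, 100), (0, 93, 100), (1, 10, math.inf)]))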
3. Methods
First, the question arises whether some types of answers are bound together. To answer it, a hierarchical clustering of the questionnaire's items (particular questions) was performed with the complete linkage criterion and 1-correlation as the distance function. As can be seen in Figure 1, there are some questions in the questionnaire with high correlation (for example 9 and 10) which should be bound together. After consulting a medical expert, four clusters were chosen. For the patient's questions these clusters were {1}, {9;10}, {2;6}, {3;4;5;7;8}. The same clustering with four resulting clusters was made for the observer's questions.
Figure 1. Dendrogram of hierarchical clustering of the particular questions
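A compact sketch of this clustering step is given below, assuming the weekly answers are available as a messages-by-questions matrix; the data here are random, and scipy's linkage/fcluster stand in for whatever tool the authors used.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical answer matrix: rows = messages, columns = the 10 patient
    # questions, values 0..4 as described above.
    rng = np.random.default_rng(0)
    answers = rng.integers(0, 5, size=(200, 10))

    # 1 - Pearson correlation between questions as the distance, complete linkage.
    corr = np.corrcoef(answers, rowvar=False)          # 10 x 10 question correlations
    dist = 1.0 - corr
    condensed = dist[np.triu_indices(10, k=1)]         # condensed form for scipy
    Z = linkage(condensed, method="complete")

    clusters = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram at 4 clusters
    print(dict(zip(range(1, 11), clusters)))           # question number -> cluster id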
For the purposes of message classification, two types of features were extracted. The first is the sum of all the past answers in a particular cluster with exponential forgetting time weighting. Since there are four patient clusters and four observer clusters, these are eight features. The second feature is the difference between the last two messages, again for every cluster separately. So, as input for the classification there are sixteen features. All these features are discretized by the minimal entropy algorithm with the minimum description length principle stopping criterion (MDLPC) introduced by Fayyad in [3]. In this paper, a sample is the set of above-mentioned features measured at time s, which is the time the message was sent. For these samples the classification (prediction) is made. There are two classes within the data: the class D of sample i is negative if the time lag t between the sample time s and the event time T is bigger than τ, and the sample is positive if the time lag t is not greater than τ:

    D_i = 1 if T_i − s_i ≤ τ,  D_i = 0 if T_i − s_i > τ.     (3)
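For concreteness, the following sketch assembles the per-stream features; the forgetting factor lam and the message layout are illustrative assumptions (the paper does not give the exact forgetting weight), and the MDLPC discretization step is omitted.

    def extract_features(history, clusters, lam=0.8):
        """history: list of messages ordered in time, each a list of 10 answers
        (0-4) from one questionnaire (patient or observer).  clusters: list of
        index lists, e.g. [[0], [8, 9], [1, 5], [2, 3, 4, 6, 7]].  Returns, per
        cluster, the exponentially forgotten sum of past answers and the
        difference between the last two messages."""
        features = []
        for idx in clusters:
            # exponential forgetting: the older the message, the smaller its weight
            weighted_sum = sum(
                lam ** (len(history) - 1 - t) * sum(msg[i] for i in idx)
                for t, msg in enumerate(history)
            )
            last = sum(history[-1][i] for i in idx)
            prev = sum(history[-2][i] for i in idx) if len(history) > 1 else 0
            features.extend([weighted_sum, last - prev])
        return features

    # patient question clusters {1},{9;10},{2;6},{3;4;5;7;8} as 0-based indices
    patient_clusters = [[0], [8, 9], [1, 5], [2, 3, 4, 6, 7]]
    msgs = [[0] * 10, [1, 0, 2, 1, 0, 1, 0, 0, 3, 2]]
    print(extract_features(msgs, patient_clusters))    # 8 features for this stream

Applying the same function to the observer stream and concatenating both results gives the sixteen features described above.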
For the classification itself, the two-class adaBoost algorithm described by Freund and Schapire in [5] is used. The weak rules for the adaBoost algorithm are constructed from the features and the possible feature thresholds. Every rule is of the form Feature >= threshold or Feature < threshold. The standard adaBoost algorithm is initialized by setting the same weight to all the training samples. But since there are many more negative samples than positive ones, it is a natural requirement to concentrate on the positive samples. So the weights are initially distributed separately for negative samples:

    w_i = 1 / (2·N_neg),     (4)

and for positive samples:

    w_i = 1 / (2·N_pos),     (5)

where N_neg and N_pos denote the numbers of negative and positive training samples, so that each class initially carries half of the total weight.
The base learning algorithm has 30 iterations in total; therefore, the final classifier is a linear combination of 30 rules (weak classifiers).
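A condensed sketch of this boosting procedure with threshold-stump weak rules and class-balanced initial weights is given below. It assumes numpy arrays with labels in {-1, +1} and is only an illustration of the standard algorithm under the stated weighting assumption, not the authors' implementation.

    import numpy as np

    def adaboost_train(X, y, n_rounds=30):
        """X: (n_samples, n_features) array of discretized features; y: labels in
        {-1, +1} with +1 = message at most tau days before a hospitalization.
        Weak rules are stumps of the form 'feature >= threshold -> class'."""
        n, d = X.shape
        # class-balanced initial weights: positives and negatives each carry half
        # of the total weight, so the rare positive samples are not ignored
        w = np.where(y == 1, 1.0 / (2 * np.sum(y == 1)), 1.0 / (2 * np.sum(y == -1)))
        rules = []
        for _ in range(n_rounds):
            best = None                                  # (error, feature, thr, sign)
            for j in range(d):
                for thr in np.unique(X[:, j]):
                    for sign in (1, -1):                 # which side predicts +1
                        pred = np.where(X[:, j] >= thr, sign, -sign)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, j, thr, sign)
            err, j, thr, sign = best
            err = min(max(err, 1e-10), 1 - 1e-10)
            alpha = 0.5 * np.log((1 - err) / err)        # weight of the weak rule
            pred = np.where(X[:, j] >= thr, sign, -sign)
            w *= np.exp(-alpha * y * pred)               # re-weight the samples
            w /= w.sum()
            rules.append((alpha, j, thr, sign))
        return rules

    def adaboost_predict(rules, X):
        score = sum(a * np.where(X[:, j] >= t, s, -s) for a, j, t, s in rules)
        return np.where(score >= 0, 1, -1)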
4. Results
Table 1 gives an overview of the data. There are 71 patients and 15 hospitalizations among 12 hospitalized patients. The patients sent 5389 messages, of which 139 are positive for τ = 42 days. The value of τ was chosen on the medical expert's advice as the maximal length of the prodromal symptoms.

Table 1. Data Overview.
Patients            71
Hospitalizations    15
Positive Samples    139
Negative Samples    5250
The resulting time-sensitivities, modeled as splines according to Cai [2], are depicted in Figure 2. On the left of the figure is the time-sensitivity of the original ITAREPS, while on the right is the adaBoost time-sensitivity. Both time-sensitivities increase with a smaller measurement-hospitalization time lag. This means the prediction of hospitalization is easier for messages closer to the hospitalization. AdaBoost has lower sensitivity just before the hospitalization, but its time-sensitivity does not decrease as quickly with the time lag. Cumulative sensitivity and specificity were defined by Heagerty in [8]. Cumulative sensitivity is the ratio of true alerts to all positive samples. Cumulative specificity is the same as (2) and is the ratio of true non-alerts to all negative samples, where positive and negative samples are defined by (3). The Youden index is Sensitivity + Specificity − 1.
Figure 2. Time-sensitivity for original ITAREPS system and adaBoost classifier.
All the results can be seen in Table 2. The overall (cumulative) sensitivities are almost the same for both ITAREPS and adaBoost, while the specificity is higher for adaBoost. This specificity increase means that the number of false alerts for the program would decrease from 1403 in the current ITAREPS to 824 false alerts with adaBoost.

Table 2. Cumulative results
Measure                  ITAREPS    AdaBoost
Specificity              0.73       0.84
Cumulative Sensitivity   0.65       0.65
Youden                   0.38       0.49
All the adaBoost results were gathered through patient-wise leave-one-out cross-validation.
5. Discussion
The adaBoost prediction algorithm has significantly higher specificity and thus generates fewer false alerts than the current ITAREPS system. While the overall cumulative sensitivity is the same for both algorithms, adaBoost generates more timely interventions: its sensitivity decreases more slowly with the measurement-hospitalization time lag than in the ITAREPS system. The adaBoost classification algorithm is ready to become part of the ITAREPS care program. Acknowledgements: Jan Hrdlicka's work was supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS10/279/OHK3/3T/13. Jiri Klema's work was supported by the research program No. MSM 6840770012 "Transdisciplinary Research in Biomedical Engineering II" of the CTU in Prague.
References
[1] M Birchwood, J Smith, F Macmillan, B Hogg, R Prasad, C Harvey, and S Beringg, Predicting relapse in schizophrenia - the development and implementation of an early signs monitoring system using patients and families as observers, a preliminary investigation. Psychological Medicine 19(3) (1989), 649–656.
[2] TX Cai, MS Pepe, YY Zheng, T Lumley, and NS Jenny, The sensitivity and specificity of markers for event times. Biostatistics 7(2) (2006), 182–197.
[3] UM Fayyad and KB Irani, On the handling of continuous-valued attributes in decision tree generation, Machine Learning, 8(1) (1992), 87–102.
[4] PB Fitzgerald, The role of early warning symptoms in the detection and prevention of relapse in Schizophrenia, Australian and New Zealand Journal of Psychiatry, 35(6) (2001), 758–764.
[5] Y Freund and RE Schapire, A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1) (1997), 119–139.
[6] F Spaniel, P Vohlidka, J Kozeny, T Novak, J Hrdlicka, L Motlova, J Cermak, and C Hoeschl, The Information Technology Aided Relapse Prevention Programme in Schizophrenia: an extension of a mirror-design follow-up, International Journal of Clinical Practice, 62(12) (2008), 1943–1946.
[7] F Spaniel, P Vohlidka, J Hrdlicka, J Kozeny, T Novak, L Motlova, J Cermak, J Bednarik, D Novak, and C Hoschl, ITAREPS: Information technology aided relapse prevention programme in schizophrenia, Schizophrenia Research, 98(1-3) (2008), 312–317.
[8] PJ Heagerty, T Lumley, and MS Pepe, Time-dependent ROC curves for censored survival data and a diagnostic marker, Biometrics, 56(2) (2000), 337–344.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-579
Applying One-vs-One and One-vs-All Classifiers in k-Nearest Neighbour Method and Support Vector Machines to an Otoneurological Multi-Class Problem
Kirsi VARPA a,1, Henry JOUTSIJOKI a, Kati ILTANEN a and Martti JUHOLA a
a Computer Science, School of Information Sciences, University of Tampere, Finland
Abstract. We studied how splitting a multi-class classification problem into multiple binary classification tasks, using One-vs-One (OVO) and One-vs-All (OVA) schemes, affects the predictive accuracy of disease classes. Classifiers were tested with otoneurological data using 10-fold cross-validation repeated 10 times with the k-Nearest Neighbour (k-NN) method and Support Vector Machines (SVM). The results showed that the use of multiple binary classifiers improves the classification accuracies of disease classes compared to one multi-class classifier. In general, OVO classifiers worked better with this data than OVA classifiers. In particular, OVO with k-NN yielded the highest total classification accuracies. Keywords. multi-class classification, binary classifiers, otoneurology, k-nearest neighbour method, support vector machines
1. Introduction
Multi-class classification problems can be difficult to understand; especially if the application domain is unfamiliar, it can be hard to conceptualize the domain. Whenever new computer systems are created for new domains, it is important to have an understanding of the domain concepts, their relationships and differences. One way to distinguish classes better is to convert the multi-class problem into multiple two-class problems [1, 2]. This may also help the separation of classes. Earlier we have studied otoneurological data, for example, by using machine learning (ML) methods like decision trees [3] and neural networks [4]. Previous studies have shown that certain disease classes are difficult to recognize: they easily mix up with other classes [5]. In the literature, studies can be found where this kind of problem has been eased by using One-vs-One (OVO, also called round robin or pairwise class binarization) [6] and One-vs-All (OVA, also known as one-against-all or one-vs-rest) [7] solutions, i.e. by using several binary classifiers instead of trying to classify all the classes at the same time with one classifier. Beforehand, it is not possible to say which of these solutions is better than the others. Therefore, we examine the use of multiple binary classifiers to help the classification of vertigo data, and to find out which classifier solution seems to work best with this data.
1 Corresponding author: Kirsi Varpa, Computer Science, School of Information Sciences, FI-33014 University of Tampere, Finland; E-mail: [email protected].
In this paper, we examine the effect of using multiple binary classifiers instead of only one multi-class classifier. The binary classifiers used are OVO and OVA classifiers with the k-Nearest Neighbour (k-NN) method [8] and Support Vector Machines (SVM) [9].

2. Data and Methods
The k-NN classifier is a widely used, basic instance-based learning method that searches for the k most similar cases to a test case from the training data [8]. It can be used with both binary and multi-class problems. The k-NN classifier used in this research was implemented in Java. The nearest cases were searched with k = 1, 3, 5, 7, 9, 11 and 13. The best k varied between classes, so we selected the NN classifier with k=5 (5-NN) for the comparison with SVM (in addition, 5-NN was used in our earlier study [5]). The k-NN method used the Heterogeneous Value Difference Metric (HVDM) [10] since our data included nominal, ordinal and quantitative attributes. SVM is a newer, more sophisticated ML method used for the separation between two classes [9]. It is a kernel-based classification method [11, 12]. Originally, it was designed for binary classification tasks, but it has later been extended to multi-class cases [13]. The basic idea of SVM is to generate a hyperplane dividing the input space such that the margin, the distance between the closest members of both classes, is maximized. The use of SVM was expanded by the invention of the kernel trick, where the input space is mapped with a non-linear transformation into a higher dimensional space [14, 15].
2. Data and Methods The k-NN classifier is a widely used, basic instance-based learning method that searches for the k most similar cases of a test case from the training data [8]. It can be used with both binary and multi-class problems. The k-NN classifier used in this research was implemented in Java. The nearest cases were searched with k= 1, 3, 5, 7, 9, 11 and 13. The best k-NN varied between classes, so, we selected NN classifier with k=5 (5-NN) into the comparison to SVM. (In addition, 5-NN was used in our earlier study [5]). The k-NN method used Heterogeneous Value Difference Metric (HVDM) [10] since our data included nominal, ordinal and quantitative attributes. SVM is a newer, more sophisticated ML method to be used in the separation between two classes [9]. It is a kernel-based classification method [11, 12]. Originally, it was made for the binary classification tasks, but later it has been extended for the multi-class cases [13]. The basic idea in SVM is to generate an input space dividing hyperplane such that the margin, the distance between the closest members of both classes, is maximized. The use of SVM was expanded by the invention of kernel trick, where the input space is mapped with a non-linear transformation into higher dimensional space [14, 15]. In the research, we used the binary SVM implementation of Bioinformatics Toolbox of Matlab with the Least-Square method [16] as a basis for the multi-class extensions. SVM runs were made with linear, polynomial (d=2,3,4,5), Multilayer Perceptron (MLP) (scale κ in [0.2,10]; bias δ in [-10,-0.2]) and Gaussian Radial Basis Function (RBF) (scaling factor σ in [0.2,10]) kernels with box constraints [0.2,10] (κ, δ and σ with intervals 0.2). The best kernel functions, linear and RBF, were selected into comparison. ML methods were tested with an otoneurological data containing 1,030 vertigo cases from nine different vertigo diseases (Table 1). Data was collected at Helsinki University Central Hospital during several years [3]. The dataset used in this research consists of 94 attributes concerning a patient’s health status: occurring symptoms, medical history and findings in otoneurologic, audiologic and imaging tests. More detailed information about the collected patient’s information is provided in [17] and in [4] 38 main attributes are described. From the 94 attributes, 17 were quantitative (integer or real) and 77 were qualitative: 54 binary (yes/no) and 23 categorical attributes. Clinical tests are not done to every patient and, therefore, values are missing in several test results. In total, the data had about 11% missing values, which allowed using imputation. Imputation was needed due to calculation of the SVM method. Missing values of qualitative attributes were imputed (substituted) with class modes and missing values of other attributes with class medians. The imputed data was used with k-NN in order to keep it comparable to SVM. A 10-fold cross-validation (CV) was repeated 10 times using each time different random data divisions. Training and test set divisions into 10-fold CV were created with Matlab. In divisions, the ratios of disease classes were maintained in different CV folds. CV was used with both ML methods.
In the OVA runs, we had nine (n_classes) binary classifiers: each one of them was trained to separate one class from the rest. A test sample was input to each classifier and a final class for the test sample was assigned according to the winner-takes-all rule from a classifier suggesting a class. For the OVO runs, we trained 36 (n_classes·(n_classes − 1)/2) binary classifiers between all pairs of the classes. A test sample was solved with each binary classifier.

Table 1. Nine disease classes and their absolute and relative frequencies in the otoneurological data. Average true positive rates (TPR) of disease classes, median of TPR and total classification accuracies (in percent) with the machine learning methods 5-NN and SVM (linear and RBF) using OVO and OVA classifiers, from ten 10-fold cross-validation runs. The kernel parameters used with SVM linear and RBF are presented below the table.

Disease Name (Abbreviation)        Cases           5-NN     OVO Classifiers             OVA Classifiers
                                   1,030 (100%)    (basic)  5-NN   SVM lin  SVM RBF     5-NN   SVM lin  SVM RBF
Acoustic Neurinoma (ANE)           131 (12.7)      89.5     95.0   91.6     87.2        90.2   90.6     90.7
Benign Positional Vertigo (BPV)    173 (16.8)      77.9     79.0   70.0     67.0        77.6   73.5     78.6
Menière's Disease (MEN)            350 (34.0)      92.4     93.1   83.8     90.1        89.8   87.8     91.5
Sudden Deafness (SUD)              47 (4.6)        77.4     94.3   88.3     79.4        87.4   61.3     58.1
Traumatic Vertigo (TRA)            73 (7.1)        89.6     96.2   99.9     99.3        77.7   79.9     96.7
Vestibular Neuritis (VNE)          157 (15.2)      87.7     88.2   82.4     81.4        85.0   85.4     84.3
Benign Recurrent Vertigo (BRV)     20 (1.9)        3.0      4.0    20.0     16.5        8.0    21.0     8.0
Vestibulopatia (VES)               55 (5.3)        9.6      14.0   16.5     22.8        15.8   15.3     13.5
Central Lesion (CL)                24 (2.3)        5.0      2.1    26.0     28.5        15.0   19.0     15.8
Median of TPR                                      77.9     88.2   82.4     79.4        77.7   73.5     78.6
Total Classification Accuracy                      79.8     82.4   77.4     78.2        78.8   76.8     79.4

Linear kernel with box constraint bc = 0.20 (OVO and OVA); RBF kernel with bc = 0.4 and σ = 8.20 (OVO), bc = 1.4 and σ = 10.0 (OVA).
In OVO, the results of the pairwise decisions were combined, thus giving 36 class suggestions (votes) for the class of the test sample altogether. The final class for the test sample was chosen by the majority voting method, the max-wins rule [1]. The class which gained the most votes was chosen as the final class. If a tie situation occurred in the max-wins (OVO) or winner-takes-all (OVA) rule, the final class within the tied classes was resolved in SVM by 1-NN, whereas k-NN searched for the nearest case within the tied classes and selected the class with the minimum distance to the test case. If the test case did not get any class by using k-NN with OVA (every classifier voted 0), the class was searched from the whole learning set with normal 1-NN.
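A short sketch of this max-wins voting with 1-NN tie-breaking is shown below; the pairwise classifier objects and the distance function are placeholders standing in for the trained SVM or k-NN models, so the interface is an assumption made for illustration.

    from collections import Counter

    def ovo_predict(x, pairwise_classifiers, train_by_class, distance):
        """pairwise_classifiers: dict {(class_a, class_b): clf} where clf.predict(x)
        returns class_a or class_b.  train_by_class: dict {class: training cases}.
        distance: callable used for 1-NN tie-breaking among the tied classes."""
        votes = Counter(clf.predict(x) for clf in pairwise_classifiers.values())
        top = max(votes.values())
        tied = [c for c, v in votes.items() if v == top]
        if len(tied) == 1:
            return tied[0]                       # clear max-wins decision
        # tie: take the class of the nearest training case among the tied classes
        best_class, best_dist = None, float("inf")
        for c in tied:
            for case in train_by_class[c]:
                d = distance(x, case)
                if d < best_dist:
                    best_class, best_dist = c, d
        return best_class

With nine classes this combines the 36 pairwise decisions described above; the OVA counterpart differs only in collecting one vote per one-vs-rest classifier and applying the winner-takes-all rule.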
3. Results
In Table 1, the mean true positive rates (TPRs) and total classification accuracies of the ten 10-fold cross-validations are presented for 5-NN and SVM with linear and RBF
kernels. Both methods were run using OVO and OVA classifiers. The 5-NN method was also run in a basic way by using all nine disease classes in one classifier, i.e. all of the training cases' class labels were used when searching for the nearest case to the test sample. The basic 5-NN was used as a baseline in the comparison of the predictive accuracies of the methods. The mean number of tie situations occurring during the 10 times repeated 10-fold CV with OVO classifiers was 20.3 with 5-NN (standard deviation SD=4.8), 7.2 using SVM linear (SD=2.6) and 2.6 with SVM RBF (SD=1.5). With OVA classifiers, the number of ties was higher, as expected: 5-NN 167.1 (SD=3.8), SVM linear 49.8 (SD=4.7) and SVM RBF 25.8 (SD=4.2). With the 5-NN OVA classifier, all of the ties (16.2%) happened when a case could not be classified at all (ties with all nine classifiers), whereas the 5-NN OVO classifier had ties (2.0%) between two or three classes (mainly BPV, MEN and VES). The results show that the use of multiple binary classifiers improves the TPRs of disease classes. The best results were yielded with OVO in 5-NN: it had the highest median of TPR and the highest total accuracy. With this data, the OVO classifiers mainly increase the TPRs and the total accuracies, whereas OVA classifiers have a slightly decreasing effect on classification. However, there were exceptions: SVM with MLP and polynomials of degree 4 and 5 worked better with OVA classifiers. Usually, MLP is one of the best kernel functions used in SVM, but with this data it did not work at all (total accuracy 25.5% with OVO and 68.5% with OVA). It could also be seen with k-NN that the bigger the k, the closer the results with OVO, OVA and the basic k-NN came to each other (except with the disease classes SUD and TRA).
4. Discussion
In this research, we concentrated on studying the effect of splitting the multi-class problem into several binary classifiers, and the voting procedure, within two different ML methods, the k-NN and SVM classifiers. Splitting a problem into several binary problems helps to understand the data better, especially with OVO classifiers in k-NN. The OVO classifiers help to see which classes are difficult to separate and which ones distinguish well from the others. Diagnosis of otoneurological disorders is demanding. For example, in [18], 1,167 patients participated in the research but a confirmed diagnosis could be made for only 872 patients, and in [19], ten of the 33 test cases had to be excluded from the test because even the expert physician could not give them a definite diagnosis. Diseases can simulate each other in the beginning, having symptoms of a similar kind, and the symptoms can vary in time, making recognition difficult [18, 20]. The classification accuracy of medical professionals on the data of this study, with its 1,030 cases, has not been tested because this would be an enormous task for them to do. However, a smaller number of cases (23) has been classified by a group of physicians [19]. We need to remember that the classification tasks in this research were performed with the imputed data. In real life, missing data usually occur because clinical tests are not done on every patient automatically. Thus, the TPRs and total classification accuracies in this research might be a little higher than with the original data containing missing values. There are some differences in the way the ML methods used in this research handle data. SVM treats each attribute as quantitative, whereas k-NN using the HVDM
distance metric makes a different calculation depending on the type of the attribute (quantitative or qualitative). In the future, we shall expand the use of the voting procedure to involve handling the results of several different classification methods (e.g. k-NN, nearest pattern method of an otoneurological expert system [21] and Naive Bayes [8]), thus, forming a hybrid decision support aid. Being able to use results of several ML methods simultaneously strengthens the support of decision making. Acknowledgements: The authors wish to thank Erna Kentala, M.D., and prof. Ilmari Pyykkö, M.D., for their help in data collection during the years and their valuable aid in domain expertise. The first and second authors acknowledge the support of the Tampere Doctoral Programme in Information Science and Engineering (TISE).
References
[1] Friedman JH. Another approach to polychotomous classification. Stanford University; 1996 Oct. 14 p.
[2] Allwein EL, Schapire RE, Singer Y. Reducing multiclass to binary: a unifying approach for margin classifiers. J Mach Learn Res. 2000;1:113–141.
[3] Viikki K. Machine learning on otoneurological data: decision trees for vertigo diseases [PhD Thesis]. Tampere, Finland: University of Tampere; 2002.
[4] Siermala M, Juhola M, Kentala E. Neural network classification of otoneurological data and its visualization. Comput Biol Med. 2008;38(8):856–866. doi:10.1016/j.compbiomed.2008.05.002.
[5] Varpa K, Iltanen K, Juhola M. Machine learning method for knowledge discovery experimented with otoneurological data. Comput Methods Programs Biomed. 2008;91(2):154–164. doi:10.1016/j.cmpb.2008.03.003.
[6] Fürnkranz J. Round robin rule learning. In: Brodley CE, Danyluk AP, editors. ICML-01. Proceedings of the 18th International Conference on Machine Learning; 2001. Williamstown, MA: Morgan Kauffman; 2001. p. 146–153.
[7] Rifkin R, Klautau A. In defense of one-vs-all classification. J Mach Learn Res. 2004;5:101–141.
[8] Mitchell T. Machine Learning. New York: McGraw-Hill; 1997.
[9] Debnath R, Takahide N, Takahashi H. A decision based one-against-one method for multi-class support vector machine. Pattern Anal Appl. 2004;7(2):164–175. doi:10.1007/s10044-004-0213-6.
[10] Wilson DR, Martinez TR. Improved heterogeneous distance functions. J Artif Intell Res. 1997;6:1–34.
[11] Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20:273–297.
[12] Vapnik VN. The Nature of Statistical Learning Theory. 2nd ed. Springer; 2000.
[13] Hsu CW, Lin CJ. A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw. 2002;13(2):415–425.
[14] Christiani N, Shawe-Taylor J. An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press; 2003.
[15] Burges CJC. A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov. 1998;2:121–167.
[16] Suykens JAK, Vandewalle J. Least squares support vector machine classifiers. Neural Processing Letters. 1999;9:293–300.
[17] Kentala E, Pyykkö I, Auramo Y, Juhola M. Database for vertigo. Otolaryngol Head Neck Surg. 1995;112:383–390.
[18] Kentala E. Characteristics of six otologic diseases involving vertigo. Am J Otol. 1996;17(6):883–892.
[19] Kentala E, Auramo Y, Juhola M, Pyykkö I. Comparison between diagnoses of human experts and a neurotologic expert system. Ann Otol Rhinol Laryngol. 1998;107(2):135–140.
[20] Havia M. Menière's disease prevalence and clinical picture [PhD Thesis]. Helsinki: Department of Otorhinolaryngology, University of Helsinki; 2004.
[21] Auramo Y, Juhola M. Comparison of inference results of two otoneurological expert systems. Int J Biomed Comput. 1995;39:327–335.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-584
Roogle: An Information Retrieval Engine for Clinical Data Warehouse
Marc CUGGIAa, Nicolas GARCELONa, Boris CAMPILLO-GIMENEZa, Thomas BERNICOTa, Jean-François LAURENTb, Etienne GARINb, André HAPPEc, and Régis DUVAUFERRIERa
a UMR 936 Inserm, Faculté de médecine de Rennes, France
b CRLCC Centre Eugène Marquis, Rennes, France
c Intermède – Guignen, France
Abstract. A large amount of relevant information is contained in the reports stored in electronic patient records and in their associated metadata. R-oogle is a project aiming to develop information retrieval engines adapted to these reports and designed for clinicians. The system consists of a data warehouse (full-text reports and structured data) imported from two different hospital information systems. Information retrieval is performed using metadata-based semantic and full-text search methods (as in Google). Applications include biomarker identification in a translational approach, retrieval of specific cases, constitution of cohorts, professional practice evaluation, and quality control assessment. Keywords. Information retrieval, electronic patient record, ontology, indexing
1. Introduction Medical informatics is currently going through a major paradigm shift. Data from medical reports are most of the time unstructured. They are however of high medical value, as they correspond to the expert interpretation of the clinician. These information sources therefore constitute a data repository of potentially high relevance for scientific research. From this perspective, the combined exploitation of metadata and of the information contained in exam reports over a large data corpus by tailored search engines becomes very relevant. The main objective of the R-oogle project is to implement a system that offers researchers the possibility to exploit, for scientific purposes, the huge amount of medical data contained in metadata and exam reports, with the ease of use of the Google™ search engine, combining search methods for structured data elements and full text. The objective of R-oogle is to implement a platform consisting of: (i) a clinical data warehouse (CDW) containing a large collection of patient data coming from different hospitals, and (ii) a search engine combining semantic search and full-text search (with semantic enrichment) exploiting the information contained in the exam reports.
2. Background Building and exploiting a multi-domain medical CDW from Electronic Health Records (EHR) is currently an active research topic, e.g. Rubin et al. [1] or the open
source platform (I2B2) [5] developed by Harvard. Information retrieval (IR) is also a very active research domain, and the medical field appears to be very suitable for such techniques, as medical documents were found, with respect to semantic ambiguity, to be more suitable for indexing than documents from other domains [2]. Ehrler, Ruch et al. proposed in 2007 [3] an approach based on the full-text indexing of medical reports, exploiting the context in which information is found within the report's structure (motive of the exam, description, conclusion). In this vein, Spat and Cadonna [4] described a system for document retrieval based on metadata enriched with concepts automatically extracted from reports indexed in German. A literature review done in 2008 [6] on 174 publications showed the increasing scientific activity around IR on EHRs, despite significant storage and processing limitations that hamper the implementation of such systems outside of an experimental context. In a previous publication, we evaluated the contribution of full-text search versus data encoded with Diagnosis Related Groups (DRG) in an epidemiological study context [7]. This study highlighted the contribution of full-text search to the DRG database search. In this work, we added semantic enrichment to the search engine and compared document retrieval with and without semantic enrichment. Schulz et al. developed MorphoSaurus, a German concept-based document search engine connected to a hospital information system in order to support search across the whole corpus of patient discharge letters and other clinically relevant documents [8].
3. Material and Methods Building the biomedical data warehouse: A "star" database schema is used. The CDW includes patients, full-text documents and structured documents linked to thesauri. Each document is defined as a medical production by a medical department for one patient, at a specific date. Data come from different software systems, so a common set of information fields has been defined for all documents (i.e. patient identification number, date, author, title, type of document, and text). From some full-text documents, specific text zones can be extracted: motive, results, conclusion, technique, exam, and medical issues. These metadata have been added to the document description to help target a search. Structured data have been recorded in a separate table with a scalable architecture to allow the integration of heterogeneous data. Each data item is attached to a document as a data element, with, where applicable, a thesaurus (such as the Logical Observation Identifiers Names and Codes (LOINC) for laboratory data), combined with a data value (such as a date, a number or text) and, where applicable, a thesaurus for the data value (such as the French procedure classification (CCAM) used for encoding the DRG data). The CDW was implemented with the Oracle® database management system. Scripts using the open source Talend ETL tool were programmed to feed the database. The first step consisted in loading documents produced between 2005 and July 2010 from the different sources. These scripts will also be used to feed the CDW "on the fly". Patient identity mapping: Data sources come from two distinct hospitals, so we implemented an algorithm mapping the different identification numbers of the two establishments. The mapping is performed successively with four methods, from the most accurate (identical surname or maiden name, first name, sex and birth date) to the least accurate (surname or maiden name, sex, date of birth and the first four letters of the first name). The accuracy criterion is applied during mapping so that
manual mapping of patient identification numbers is allowed for the least accurate method. The identification numbers of the Rennes Cancer Institute are stored to ease patient mapping in future imports and to provide a way to search the data warehouse by these identification numbers. Semantic enrichment and indexing of documents: Medical concepts are extracted from the reports using NOMINDEX [9], a concept extraction tool based on the ADM (Aide au Diagnostic Médical). The medical concepts associated with each document are stored in the structured part of the data warehouse. Concepts are restricted to the MeSH thesaurus. Lucene, an open source full-text engine written in Java by Apache, performed the document indexing with tri-grams [10]. Full-text search is optimized and very elaborate queries may be composed (such as boolean, fuzzy, or wildcard queries) on the entire information contained in a document or on part of the metadata (e.g. +(mediator benfluorex) +conclusion:valvulop). All French synonyms issued from the UMLS (Unified Medical Language System) Metathesaurus and the hierarchical parents of the concepts are also integrated at indexing time. Subsumption is therefore taken care of during the indexing of the document and not at query time, the full Lucene capabilities remaining intact for composing complex queries. Developing the multi-criteria search engine: The first part of the engine consists of a high-level search: patient's sex, age range and medical department producing the document. The second part consists of a full-text search using the Lucene syntax and the third part consists of the structured search. In this way the user can build structured queries on whole documents or only on specific parts of documents (e.g. motive, conclusion). Information retrieval assessment: We placed the evaluation in the perspective of a non-expert user, without complex structured queries. The corpus of assessed documents consisted of the textual part of multidisciplinary prostate cancer reviews. This corpus was selected for the natural language properties of the disease descriptions it contains, because our main objective was to assess the potential benefit of the semantic enrichment of documents in a full-text search setting. The assessment was performed with two contextual designs, one where the search terms were presumed to occur frequently in the documents (i.e. prostatic adenocarcinoma), and one with a low level of search term occurrence (i.e. heart failure). Four types of search process were conducted on the corpus of documents: one with and one without semantic enrichment by the search engine, one exact match term search by a human medical expert, and one textual search with clinical interpretation of each document, also by a human medical expert. We used recall and precision, with their 95% confidence intervals, and the F-measure to describe the assessment results.
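The subsumption-at-indexing strategy can be sketched in a few lines. The snippet below is an illustration only, not the R-oogle implementation (which relies on NOMINDEX and Lucene): the two dictionaries standing in for the French UMLS synonym table and the MeSH hierarchy are hypothetical, and the point is merely that each extracted concept is expanded with its synonyms and ancestors before being written to the index, so that a later query on a parent concept also retrieves documents indexed with its descendants.

# Illustration only: these dictionaries are hypothetical stand-ins for the
# French UMLS synonym table and the MeSH hierarchy used by R-oogle.
MESH_ANCESTORS = {
    "valvulopathie mitrale": ["valvulopathie", "cardiopathie"],
    "valvulopathie": ["cardiopathie"],
}
FR_SYNONYMS = {
    "valvulopathie": ["valvulopathie cardiaque"],
}

def expand_concepts(concepts):
    """Return the concepts plus their synonyms and hierarchical ancestors."""
    expanded = set(concepts)
    for concept in concepts:
        expanded.update(FR_SYNONYMS.get(concept, []))
        expanded.update(MESH_ANCESTORS.get(concept, []))
    return expanded

def index_document(index, doc_id, extracted_concepts):
    """Store the enriched concept set: subsumption is resolved at indexing time."""
    for concept in expand_concepts(extracted_concepts):
        index.setdefault(concept, set()).add(doc_id)

index = {}
index_document(index, "doc-42", ["valvulopathie mitrale"])
assert "doc-42" in index["valvulopathie"]  # a query on the parent also finds the report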
4. Results Status of the data warehouse and the search engine: The CDW was fed by six sources of heterogeneous data: five sources managed by the Rennes hospital (pathology reports, radiology reports, hospitalization/consultation reports, gastrointestinal endoscopy reports, DRG data) and one source managed by the Rennes Cancer Institute (imaging reports). Completeness has not yet been reached, and the CDW currently contains 2,115,581 documents. The results of a query are displayed as a table of the retrieved documents. When the user opens a document, the text searched through the full-text
search engine is highlighted to ease validation of the document (like the Google cache). The user can then display all the documents for the patient as a sortable list or according to a temporal representation modeled as a Gantt diagram. The user then has the possibility to add the selected patient to a "cohort" he/she is building or, on the contrary, to rule out this patient. All the documents of enrolled or ruled-out patients for the current cohort will be displayed as attached to patients already enrolled or ruled out, and will not appear to be checked again. Other functionalities will provide more general views of the CDW.
Table 1. Results of the evaluation in the high term prevalence context
                                  TP    FP   TN    FN   Recall [95% CI]    Precision [95% CI]   F-measure
nWES search engine * HEM search   142   1    100   15   0.90 [0.86-0.95]   0.99 [0.98-1.00]     0.95
WES search engine * HEM search    157   26   75    0    1.00               0.86 [0.81-0.91]     0.92
nWES search engine * HCI search   141   2    45    70   0.67 [0.60-0.73]   0.99 [0.97-1.00]     0.80
WES search engine * HCI search    180   3    44    31   0.85 [0.81-0.90]   0.98 [0.97-1.00]     0.91
nWES: without semantic enrichment; WES: with semantic enrichment; HEM: human exact match; HCI: human clinical interpretation; TP: true positive; FP: false positive; TN: true negative; FN: false negative; CI: confidence interval.
Evaluation results: We evaluated the contribution of the semantic enrichment of the document database to the search engine. Two hundred and fifty-eight reports of prostate cancer multidisciplinary meetings were analyzed. In the high term prevalence context, using the combined query "adenocarcinoma" AND "prostatic", the search without semantic enrichment retrieved 143 documents, the search with semantic enrichment retrieved 183 documents, the human exact match search retrieved 157 documents and the human search with clinical interpretation retrieved 211 documents. In the low term prevalence context, using the combined query "heart" AND "failure", the search without semantic enrichment retrieved 0 documents, the search with semantic enrichment retrieved 7 documents, the human exact match search retrieved 0 documents and the human search with clinical interpretation retrieved 8 documents. The complete results of the search engine and the human evaluation are compared in Table 1 and Table 2.
Table 2. Results of the evaluation in the low term prevalence context
                                  TP   FP   TN    FN   Recall [95% CI]    Precision [95% CI]   F-measure
nWES search engine * HEM search   0    0    258   0    -                  -                    -
WES search engine * HEM search    0    7    251   0    -                  0                    -
nWES search engine * HCI search   0    0    250   8    0                  -                    -
WES search engine * HCI search    4    3    247   4    0.50 [0.15-0.85]   0.57 [0.21-0.94]     0.53
nWES: without semantic enrichment; WES: with semantic enrichment; HEM: human exact match; HCI: human clinical interpretation; TP: true positive; FP: false positive; TN: true negative; FN: false negative; CI: confidence interval.
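The recall, precision and F-measure values in Tables 1 and 2 follow directly from the confusion counts. As a reading aid, the sketch below recomputes them; the 95% confidence intervals here use the normal approximation, since the interval method is not stated in the paper, so the bounds may differ marginally from the published ones.

import math

def metrics(tp, fp, tn, fn, z=1.96):
    """Recall and precision (each with an approximate 95% CI) plus the F-measure."""
    def proportion_ci(k, n):
        if n == 0:
            return None  # undefined, reported as '-' in the tables
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    recall = proportion_ci(tp, tp + fn)
    precision = proportion_ci(tp, tp + fp)
    f_measure = None
    if recall and precision and (recall[0] + precision[0]) > 0:
        f_measure = 2 * recall[0] * precision[0] / (recall[0] + precision[0])
    return recall, precision, f_measure

# First row of Table 1 (nWES engine vs. human exact-match search):
print(metrics(142, 1, 100, 15))
# approx. recall 0.90 [0.86-0.95], precision 0.99 [0.98-1.00], F-measure 0.95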
5. Discussion - Conclusion In this paper we demonstrated the feasibility and applicability of a CDW that benefits from full-text search capabilities, as opposed to I2B2, which is mainly based on a structured data approach and does not address the specificities of French NLP. We nevertheless applied NLP methods for annotating reports with relevant concepts as well as their synonyms and ancestors. This enrichment made it possible to deal with subsumption issues using full-text search, and also to cluster cases by projecting reports onto the MeSH hierarchy. The results show that semantic enrichment provides better recall while precision stays quite stable. This is the best situation for prescreening patients for clinical trials, where it is better to spot candidate patients too often than too seldom. Since each document returned by the search engine can be quickly checked and validated by an end user (with the keyword highlighting feature), it is easy to rule out irrelevant documents. Even though temporality is taken care of through the Gantt diagram representation and multi-document search within a single hospitalization is possible, the search engine could be improved to address specific situations; e.g. retrieving hospital-acquired infection cases would require the detection of a positive bacteremia 48 hours or more after admission. We encountered some issues related to the reference terminologies used to encode patient data in the CDW (e.g. concerning the integration of lab tests, we were, like Cormont et al. [11], confronted with the lack of coverage and the missing French translations of LOINC). As a perspective, we are currently working on a web portal intended for research technicians and investigators. This portal aims at managing the workflow for accessing the CDW. Acknowledgement: We would like to thank the CRITT Santé Bretagne for their financial support and Delphine Rossille.
References
[1] Rubin DL, et al. A data warehouse for integrating radiologic and pathologic data. J Am Coll Radiol, 2008. 5(3): p. 210-7.
[2] Ruch P, et al. Comparing general and medical texts for information retrieval based on natural language processing: an inquiry into lexical disambiguation. Stud Health Technol Inform, 2001. 84(Pt 1): p. 261-5.
[3] Ehrler F, et al. Challenges and methodology for indexing the computerized patient record. Stud Health Technol Inform, 2007. 129(Pt 1): p. 417-21.
[4] Spat S, et al. Enhanced information retrieval from narrative German-language clinical text documents using automated document classification. Stud Health Technol Inform, 2008. 136: p. 473-8.
[5] Murphy SN, et al. Integration of clinical and genetic data in the i2b2 architecture. AMIA Annu Symp Proc. 2006:1040. Accessed November 30, 2010.
[6] Meystre SM, et al. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb Med Inform, 2008: p. 128-44.
[7] Cuggia M, et al. A full-text information retrieval system for an epidemiological registry. Stud Health Technol Inform, 2010. 160(1): p. 491-5.
[8] Schulz S, Daumke P, Fischer P, Müller ML. Evaluation of a document search engine in a clinical department system. AMIA Annu Symp Proc, 2008: p. 647-51.
[9] Happe A, et al. Automatic concept extraction from spoken medical reports. Int J Med Inform, 2003. 70(2-3): p. 255-63.
[10] Hatcher E, et al. Lucene in Action. Manning Publications Co., Greenwich, CT, 2004.
[11] Cormont S, et al. Construction of a dictionary of laboratory tests mapped to LOINC at AP-HP. AMIA Annu Symp Proc, 2008: p. 1200, Washington, DC, November 2008.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-589
Truecasing Clinical Narratives
Markus KREUZTHALERa, Stefan SCHULZa,b,1
a Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Austria
b Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Germany
1 Corresponding Author: Stefan Schulz.
Abstract. Truecasing, or capitalization, is the rewriting of each word of an input text with its proper case information. Many medical texts, especially those from legacy systems, are still written entirely in capital letters, hampering their readability. We present a pilot study that uses the World Wide Web as a corpus in order to support automatic truecasing. The texts under scrutiny were German-language pathology reports. By submitting token bigrams to the Google Web search engine we collected enough case information so that we achieved 81.3% accuracy for acronyms and 98.5% accuracy for normal words. This is all the more impressive as only half of the words used in this corpus existed in a standard medical dictionary, due to the excessive use of ad-hoc single-word nominal compounds in German. Our system performed less satisfactorily for spelling correction, and in three cases the proposed word substitutions altered the meaning of the input sentence. For the routine deployment of this method the dependency on a (black box) search engine must be overcome, for example by using cloud-based Web n-gram services. Keywords. EHR, NLP, WWW
1. Background Most significant patient-related content in electronic health records is contained in free text narratives [1, 2]. The scenarios in which these texts are produced vary across institutions, and their quality depends on the authors, the target readers, and institutional quality standards. Hastily written notes, typed by a physician or a nurse directly into the computer, tend to exhibit a lower quality in spelling, grammar, style and layout [3], compared to discharge letters, which are first dictated by a resident, then transcribed by a typist, proofread by the author, and finally validated by the staff physician before being sent out to another clinic or to the patient's GP. In addition to the haste with which texts are often produced, there may be technical factors responsible for the poor quality of text. Although most up-to-date text entry interfaces offer the levels of functionality users are accustomed to in modern word processors or e-mail clients, legacy systems still exist which restrict text entry to 7-bit ASCII, thus not permitting lower case characters or diacritics. As a consequence, users familiar with these systems often persist in writing in this style even after migrating to a new system. Although it is simply a matter of time before new texts are no longer produced under these restrictions, and writers of these texts will have familiarized themselves with
the production of correctly capitalized texts, 7-bit ASCII text still continues to exist in clinical text corpora. Such legacy data is an important resource not only for retrospective research and clinical care, but also for the training of statistical natural language processing (NLP) systems [4]. The distribution of capital letters inside a text token depends on its context, which strongly impacts the intelligibility of texts [5]. Practical applications of truecasing include the processing of raw input text, such as the output from speech recognition systems, as well as spelling and grammar correction systems. Like other NLP approaches, truecasers rely on tagged corpora for the training of statistical models such as MaxEnt or SVM. Most truecasing experiments have been performed on newspaper corpora, for which the main use case was the identification of proper names characterized by initial capital letters. Languages differ in their capitalization rules, and German constitutes a special case; in contrast to most languages, initial capitals are mandatory for all nouns (and nominalised adjectives and verbs) and therefore are not specific to proper names.
2. Materials and Methods Corpus: 3,542 German-language pathology reports, containing a total of 83,818 words, were extracted from the Graz University Hospital Information System, covering a broad range of clinical disciplines. The texts had been dictated by physicians and entered by typists into a character-based user interface. The reports are entirely in upper case and do not use diacritics such as "Ä", "Ö", or "Ü". Dictionary coverage: the coverage of these words by a German-language medical dictionary [6] was calculated. Sampling: A random sample of 100 sentences was taken, with an average of 9.3 words (SD = 7.9; MIN = 2; MAX = 38; Median = 7) per sentence. The following characters were considered sentence delimiters: [.;:!?]. Periods within abbreviations (e.g. "etc.") were not considered as delimiters. Preparation: all remaining punctuation characters and parentheses were removed. Gold standard: for each sentence a corrected version was created. Corrections included not only the restitution of the case, but also spelling and grammar corrections where necessary according to the 1996 German orthography reform, the medical spelling rules in accordance with German medical publishers, and [6]. Reference corpus: The case information was extracted employing the WWW as a corpus. The Google search engine was used for harvesting correct case information. Algorithm: Each sentence with n tokens was dissected into overlapping token bigrams B1 ... Bn-1. All bigrams are sent to the search engine as a phrase search (quoted). The hits (as displayed in bold face in the summary) of the pages returned by the search engine are saved within a map data structure. The two maps for the same token (Tk+1, which is the second token in Bk and the first token in Bk+1) are merged. A weight W is assigned, directly proportional to the number of occurrences in the map and inversely proportional to the Levenshtein edit distance [7] to the term to be corrected. The token with the maximum calculated weight is accepted as the corrected token. The edit distance is used because the search engine can also return near matches for quoted phrase searches if, for example, there are very few exact matches for that phrase on the Web.
In the case that a token is not resolved by either bigram, a single-token (quoted) search is performed. If even this search does not yield any results, the token is decapitalized (with an upper case initial character) and diacritics are restored by applying the rules ["ae" → "ä"; "Ae" → "Ä"; "oe" → "ö"; "Oe" → "Ö"; "ue" → "ü"; "Ue" → "Ü"], according to German character transcription rules. The algorithm was implemented in Java, using JDOM, Tagsoup and XPath (XML Path Language). The search requests were spaced by moderate delays so that the strain on the search engine was minimal.
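The scoring step can be read as an argmax over the merged candidate map. The system itself was written in Java on top of the Google result pages; the Python sketch below only illustrates the weighting idea (occurrence count divided by a term that grows with the Levenshtein distance) and the diacritic fall-back rule. The exact weighting formula is not given in the paper, so the 1 + distance denominator and the case-insensitive distance are assumptions.

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def best_candidate(token, merged_counts):
    """Pick the candidate with the highest weight.

    The weight grows with the Web frequency of the candidate and shrinks with its
    edit distance to the original token (formula assumed, not taken from the paper).
    """
    def weight(candidate):
        return merged_counts[candidate] / (1 + levenshtein(token.lower(), candidate.lower()))
    return max(merged_counts, key=weight) if merged_counts else None

# Fall-back when neither the bigram nor the single-token search returns anything:
TRANSCRIPTION = {"Ae": "Ä", "Oe": "Ö", "Ue": "Ü", "ae": "ä", "oe": "ö", "ue": "ü"}

def fallback(token):
    word = token.capitalize()  # upper case initial character, rest lower case
    for ascii_pair, umlaut in TRANSCRIPTION.items():
        word = word.replace(ascii_pair, umlaut)
    return word

print(best_candidate("CHRONISCHE", {"chronische": 20, "Chronische": 9}))  # chronische
print(fallback("AKTIVITAET"))  # Aktivität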
3. Results Table 1 shows a typical correction result and clearly visualizes the increase in readability after truecasing. Table 1. Original text (left), automatically corrected text (right).
CHRONISCHE HEPATITIS MIT GERING BIS MITTELGRADIGER AKTIVITAET (HEPATISCHER AKTIVITAETSINDEX 6 VON 18) UND MITTELGRADIGER BIS HOEHERGRADIGER PORTALER UND MITTELGRADIGER INKOMPLETTER UND KOMPLETTER PORTOPORTALER UND PORTOZENTRALER FIBROSE (FIBROSESCORE 4 VON 6)
Chronische Hepatitis mit gering bis mittelgradiger Aktivität (hepatischer Aktivitätsindex 6 von 18) und mittelgradiger bis höhergradiger portaler und mittelgradiger inkompletter und kompletter portoportaler und portozentraler Fibrose (Fibrosescore 4 von 6).
A comparison of the types in the entire corpus with the Pschyrembel clinical dictionary [6], a standard reference for German clinical terminology, showed an astonishingly low lexical coverage of 51%; of 7500 types in the text corpus only 3808 match any token in the entire dictionary corpus. This result is mainly due to the high productivity in single-word compounding (a similar result can be seen in [8]) and, to a minor extent, the use of spelling variants.
Figure 1. Typical search result. The bigrams in bold face are picked by the algorithm.
Figure 1 shows a fragment of a typical search result, from which the sequences in bold face are extracted. Table 2 exemplifies the decision algorithm.
Table 2. Decision algorithm for "CHRONISCHE".
Input: GERINGGRADIGE CHRONISCHE GASTRITIS
Bigram 1: GERINGGRADIGE CHRONISCHE (candidates: Geringgradige chronische, geringgradige, geringgradige")
Bigram 2: CHRONISCHE GASTRITIS (candidates: Chronische Gastritis, chronische)
Candidate frequencies: 7, 15, 6, 2, 9, 14, 5
Merged map: Chronische Gastritis, Geringgradige chronische, geringgradige, geringgradige" (frequencies: 9, 14, 7, 20, 6, 2)
Decision: chronische
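The merge step in Table 2 simply adds up the occurrence counts that the two bigram searches contribute for identical candidates; the counts used below are an illustrative grouping of the figures in the table, not an exact reconstruction of its cells.

from collections import Counter

def merge_candidate_maps(left_bigram_map, right_bigram_map):
    """Merge the candidate maps of the shared token by summing occurrence counts."""
    return Counter(left_bigram_map) + Counter(right_bigram_map)

# Illustrative counts only: the lower-case form accumulates the largest merged
# frequency, which is why "chronische" wins the decision in Table 2.
merged = merge_candidate_maps({"chronische": 15, "Chronische": 7},
                              {"chronische": 5, "Chronische": 2})
print(merged.most_common(1))  # [('chronische', 20)]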
After the automated correction procedure, 55 of the 100 sentences were equivalent to the spelling and truecasing gold standard. If equivalent expressions and acceptable spelling variants are included, this rate increases to 62 and 72, respectively. In several cases it was observed that a word with standard spelling was converted to a non-standard spelling variant, as the latter occurred sufficiently more frequently on the Web. It is well known that few health professionals are perfectly proficient in spelling standard Latin. Some rules are complicated; German spelling requires that a "c" in a Latin word stem be converted to "k" or "z" as soon as it is no longer in a Latin syntactic context, e.g. "Ulcus ventriculi" but "Magenulkus". A synopsis of the results is given in Table 3.
Table 3. Results.
Correction phenomenon                        n     Total   Units
Right case correction of normal words        896   909     tokens
Right case correction of acronyms            13    16      tokens
Meaning of sentence affected by correction   3     100     sentences
Spelling / grammar error corrected           1     5       sentences
New grammar error after processing           -     1       sentence
The figures show an impressive accuracy: 98.5% of capitalized non-acronym tokens were transformed into the correct case. The rate is not as good for acronyms (81.3%). Also, the procedure affected the meaning of three of the one hundred sentences. Only one of five known spelling errors was corrected, and one additional grammar error was introduced after processing. The accuracy of 98.5% slightly outperforms the truecasing result reported by [4] on news articles. However, our method does not clearly separate truecasing from spelling correction. This was justifiable under the constraints of our research, as the motivating factor for this work was the restrictive nature of the 7-bit ASCII character set, which precludes not only the use of lower-case characters but also of diacritics (in the case of German, mainly the "ä", "ö", "ü", and "ß" characters). The dependence on the Google Web search interface, and its non-predictable output in those cases
where there was no match, led to strange corrections such as proposing "maximaler" as a correction for "minimaler". This distortion of a document's content is, of course, not acceptable, and challenges the unsupervised applicability of the truecasing system. In a future version we will therefore introduce a more conservative edit distance threshold for corrections (after applying the diacritic transcription rules). Additionally, the dependence on Google Search as a black box system, which cannot tolerate any major upscaling, is an unknown quantity upon which no routine system could realistically be based. An alternative would be to use the Web n-gram services made available by Yahoo!, Google, and Microsoft Research [9].
4. Conclusions We demonstrated that the use of the World Wide Web as a corpus can impressively improve the legibility of legacy texts in medical record systems that use 7-bit ASCII encoding. As the texts under scrutiny were German-language pathology reports, both German diacritics and the associated capitalization rules had to be taken into account. By submitting token bigrams to the Google Web search engine we collected enough case information so that we achieved an accuracy of 81.3% for acronyms and of 98.5% for normal words. This is all the more impressive as only half of the word types used in this corpus could be found in a comprehensive standard medical dictionary. Our system performed less satisfactorily for spelling correction, and in three cases proposed word substitutions that altered the meaning of the input sentence. For the routine deployment of this method the dependency on a (black box) search engine must be overcome, for example by using cloud-based Web n-gram services.
References [1] Barry J. Value of unstructured patient narratives. Current EHRs capture most information--patient demographics, medications and problem lists--as structured data, and often codify the details to support billing instead of clinical activities. Health Management Technology. 2010; 31 (7): 6-7. [2] Schiff GD, Bates DW. Can electronic clinical documentation help prevent diagnostic errors? New England Journal of Medicine. 2010; 362(12): 1066-1069. [3] Peters AC, Nohama P, Pacheco E, Schulz S. Análise de erros de linguagem em sumários de alta. XII Congresso Brasileiro de Informática na Saúde, Oct 18-22, 2010, Porto de Galinhas, Brazil: http://www.itarget.com.br/newclients/cbis2010.com.br [4] Lita LV, et al. tRuEcasIng. Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), July 7-12, Sapporo, Japan. [5] Batista F, et al. Language Dynamics and Capitalization using Maximum Entropy. Proceedings of ACL-08: HLT, Short Papers (Companion Volume), pages 1-4, Columbus, Ohio, USA, June 2008. [6] Pschyrembel W. Pschyrembel Klinisches Wörterbuch Version 2. CD-ROM for Windows 3.x/95/98, de Gruyter, Berlin, 1999; ISBN: 3110166208. [7] Levenshtein VI. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, volume 10, pages 707-710, 1966. [8] Schulz S, Hahn U. Morpheme-based, cross-lingual indexing for medical document retrieval. International Journal of Medical Informatics 2000 Sep; 58-59:87-99. [9] Zhai, et al. Web N-gram Workshop. Workshop of the 33rd International ACM SIGIR Conference (2010) http://research.microsoft.com/en-us/events/webngram/sigir2010web_ngram_workshop_proceedings.pdf.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-594
Checking Coding Completeness by Mining Discharge Summaries
Stefan SCHULZa,c,1, Thorsten SEDDIGa, Susanne HANSERa, Albrecht ZAIßa, Philipp DAUMKEb
a University Medical Center Freiburg (UMCF), Germany
b Averbis GmbH, Freiburg, Germany
c Medical University of Graz, Austria
1 Corresponding author: E-mail: [email protected]
Abstract. Incomplete coding is a known problem in hospital information systems. In order to detect non-coded secondary diseases we developed a text classification system which scans discharge summaries for drug names. Using a drug knowledge base in which drug names are linked to sets of ICD-10 codes, the system selects those documents in which a drug name occurs that is not justified by any ICD-10 code within the corresponding record in the patient database. Treatment episodes with missing codes for diabetes mellitus, Parkinson's disease, and asthma/COPD were subject to investigation in a large German university hospital. The precision of the method was 79%, 14%, and 45%, respectively; roughly estimated recall values amounted to 43%, 70%, and 36%. Based on these data we predict roughly 716 non-coded diabetes cases, 13 non-coded Parkinson cases, and 420 non-coded asthma/COPD cases among 34,865 treatment episodes. Keywords. Clinical Coding, Diabetes Mellitus, Parkinson's Disease, Obstructive Lung Disease, Natural Language Processing, Electronic Patient Records
1. Background Health services research, outcome assessment, disease reporting and reimbursement in hospitals require valid and complete data on diagnoses at discharge. It is well known that clinical coding results exhibit extensive weaknesses [1, 2]. Reimbursement systems based on diagnosis related groups (DRGs) tend to increase coding quality [3]. For the clinical controller the main question is whether coding optimization prevents the loss of revenue, whereas for the clinical epidemiologist there is a concern that the coding performed by DRG-savvy coders penalizes the documentation of those conditions that are known to be irrelevant for reimbursement. We present an approach that is suited to bring to light undocumented diagnoses, i.e. conditions that play a certain role in the clinical process but are not remembered (or deemed irrelevant) when it comes to the assignment of ICD codes at discharge. Our hypothesis is that medication at discharge can give an important hint as to which ICD codes may be missing. However, manual review of the patient record with the aim of identifying missing information and re-coding the discharge profile is time consuming and requires in-depth medical knowledge. The most trustworthy source here is the discharge summary, as a comprehensive structured documentation of medication is
often missing. In the treatment episodes under scrutiny, the summaries mostly end with a "drugs at discharge" list, because this information is important for the follow-up treatment by the patient's general practitioner. The objective of this study is to employ a simple text mining approach to predict missing codes. We focus on three diseases known to be readily omitted in coding, according to the long-standing experience of two of the authors (SH, AZ) in the UMCF medical control department: (i) diabetes mellitus, (ii) Parkinson's disease, (iii) bronchial asthma and chronic obstructive pulmonary disease (COPD).
2. Materials and Methods Documents. We used a corpus of 34,865 in-patient discharge summaries from UMCF, covering all clinical disciplines (except psychiatry) for one year. Each discharge summary represents one treatment episode (one patient may occur more than once). The corpus was split at random into a training corpus (n=17,000) and a test corpus (n=17,865). The summaries show a broad variation between clinical departments. Information on drugs occurred in several sections (family history, patient history, lab, evolution, medication at discharge) with large disparities in layout and formatting. Annotations. Via a unique ID each summary is linked to one treatment episode and a list of one to many ICD-10 codes with which the episode has been manually annotated for reimbursement, based on the German DRG (diagnosis related groups) system. Rule Bases. For the three diseases under scrutiny a rule base was built, relating drug names to their indications. The official indications were acquired from two databases, MMI and RL [4, 5], complemented by additional drug indications found in the training corpus in order to capture off-label usage. Both commercial drug names and ingredient names were included. Unspecific name parts like "sodium" or "hydrochloride" were ignored. For each drug a rule was encoded as a triple (D, P, N) with D (drug name) being a string of characters, P a list of ICD codes (p1…pn) for the diseases under scrutiny ("positive list"), and N a list of ICD codes (n1…nm) detailing other indications for this drug ("negative list"). Only the first three ICD digits were mandatory. E.g., in the Parkinson rule base, for D = "Madopar", the positive list contains the code fragments P = {G20, G21, G22}, covering also more specific codes like G21.3. Extensive negative lists had to be built for anti-Parkinsonian and bronchodilator drugs, whereas no negative list was necessary for antidiabetic drugs. Filter algorithm. For each target disease the rule base was applied to the entire document corpus using the following algorithm, implemented as a Python script (documents had been made available as plain text, extracted from the original RTF format):

For each document:
    For each drug name:
        If drug name matches a text token:
            If no match between any discharge ICD code and any code in the negative or positive list:
                Return the document ID
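The filter was implemented as a Python script; a minimal re-implementation of the pseudocode above might look as follows. The rule entries, the case-insensitive token matching and the matching on three-character ICD prefixes are simplifications for illustration — the drug names and code lists shown are examples, not the project's actual rule base.

# Each rule is a triple (D, P, N): drug name, positive ICD-10 prefixes, negative prefixes.
RULES = [
    ("Madopar", ["G20", "G21", "G22"], ["G23", "G25"]),   # illustrative entry only
    ("Metformin", ["E10", "E11", "E12", "E13", "E14"], []),
]

def code_matches(discharge_codes, prefixes):
    """True if any coded diagnosis starts with one of the three-character prefixes."""
    return any(code.startswith(p) for code in discharge_codes for p in prefixes)

def candidate_documents(documents):
    """Yield IDs of summaries mentioning a drug that no discharge code justifies."""
    for doc_id, text, discharge_codes in documents:
        tokens = text.lower().split()
        for drug, positive, negative in RULES:
            if drug.lower() in tokens:
                if not code_matches(discharge_codes, positive + negative):
                    yield doc_id
                    break  # one unjustified drug is enough to flag the episode

episodes = [
    ("ep-1", "Medikation bei Entlassung: Madopar 125 ...", ["I10", "E11.9"]),
    ("ep-2", "Medikation bei Entlassung: Metformin ...", ["E11.9"]),
]
print(list(candidate_documents(episodes)))  # ['ep-1']: Madopar mentioned, no justifying code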
Thus, all those discharge summaries are selected that mention a drug for which no ICD annotation justifies its administration. The execution of this algorithm on the training data discovered some sources of error, e.g. drug names that are homographs of
patient names, others which also occur in laboratory results, as well as treatment episodes in which the drug can theoretically be justified by a code from the negative list, although the summary clearly states that the drug was prescribed to treat a disease from the positive list, mentioned in the record but not coded in the patient management system. Finally, there are cases where antidiabetic and antiparkinsonian substances occur in lab procedures. Such cases were difficult to decide without further information and therefore constitute a source of potential false positive candidate documents. Evaluation methodology. For the calculation of precision, a sample (n = 3 × 50) of the candidate texts retrieved by the above algorithm was analyzed by a domain expert. A gold standard for roughly approximating the recall was created as follows: the filter was tested on a document set which had already been annotated with an ICD code of interest, using the above algorithm modified so as to obtain a rough estimator:

For each document:
    If annotated with an ICD code from the positive list:
        For each drug name:
            If drug name matches a text token:
                If drug name is not justified by any code from the negative list:
                    Return the document ID
The number of documents returned by this procedure divided by the number of documents annotated with a code from the positive list yields the recall estimator for the given rule set (by disease). A low rate of documents retrieved by this procedure indicates either (i) that the medication is missing in the document or (ii) that there is no drug treatment at all. The first option is not very frequent because all discharge summaries are forwarded to the patient's GP, who generally expects a complete list of drugs at discharge. Note that the set of correctly coded episodes is not representative and the derived values must be interpreted as very rough estimates.
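Expressed in the same style as the sketch above (and reusing its code_matches helper and rule format), the recall estimator reduces to a simple ratio over the already coded episodes; this is again an illustration, not the original script.

def recall_estimate(already_coded_episodes, rules):
    """Share of correctly coded episodes in which the filter would still flag the drug.

    already_coded_episodes: (doc_id, text, discharge_codes) triples of episodes that
    carry a code from the positive list; rules and code_matches as in the sketch above.
    """
    episodes = list(already_coded_episodes)
    hits = 0
    for doc_id, text, discharge_codes in episodes:
        tokens = text.lower().split()
        for drug, positive, negative in rules:
            if drug.lower() in tokens and not code_matches(discharge_codes, negative):
                hits += 1
                break
    return hits / len(episodes) if episodes else 0.0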
3. Results For the test set (n=17,865) Table 1 shows the output of algorithm 1 and the result of the relevance assessment for the estimation of precision. As an overall result 1.3 percent of all cases lacked an ICD code for either diabetes, asthma / COPD or Parkinson's disease. The differences in precision can be explained by the fact that antidabetic drugs are very specific to diabetes, while antiparkinsonian drugs are used for a broad range of diseases. Table 1 Candidates for missing codes as returned by algorithm 1 and estimated precision after rating of 50 treatment episodes per disease.
As introduced above, we estimated the recall by applying the filter on the set of already coded cases (Table 2). All these treatment episodes are annotated by some code for
diabetes, Parkinson's or asthma. The high recall for Parkinson's is consistent with the fact that most of these cases receive a specific medication. The lower rates for the other two disease groups are consistent with the cases of diabetes treated by diet only and the milder obstructive lung diseases which are only treated in case of exacerbation.
Table 2. Recall estimation based on correctly coded diagnoses.
Taking into account the recall estimates, and extrapolating from the test set to the entire data set, there are approximately 716 non-coded diabetes cases, 13 non-coded Parkinson cases, and 420 non-coded asthma/COPD cases among the 34,865 treatment episodes.
Table 3. Analysis of false positive cases
The analysis of false positive cases (Table 3) reveals insulin administration in cases of intensive care, provocation tests, as well as references to the measurement of probably endogenous insulin in blood.
Table 4. Explanation of false negative cases
For Parkinson's, false positives derive from the fact that anti-Parkinsonian drugs are used for a broad range of rare neurologic diseases. Antiasthmatic drugs, finally, are used in a series of severe pulmonary diseases such as lung cancer, in which bronchial obstruction is a symptom rather than a disease of its own. Most false negative cases are due to disease cases that are not treated with drugs and, to a minor extent, to drugs missing from the rule base or misspelt drug names.
Recent studies have applied various text mining approaches for the extraction of drug or substance names from medical texts [9, 8, 7, 6, 10]. [11] emphasizes the importance of the drug/disease relationship. These studies highlight that the extraction of drug names extends the medical record use case we were focusing on. Equally important is the mining of literature abstracts in order to extract generic medical knowledge.
4. Conclusions A computationally simple, high-throughput text mining approach retrieved missing secondary ICD-10 codes of hospitalized patients. For the three selected chronic diseases we obtained a combined rate of under 2% undercoded treatment episodes, which demonstrates fairly good coding quality, although the rate is expected to be higher when a broader array of typical secondary diseases is considered. This supports the observation that although DRG-based reimbursement systems have led to increased coding quality for major diseases, diseases deemed secondary or unrelated to the actual clinical problem tend to be omitted, given that they have no impact on DRG grouping. The precision and recall of the proposed information extraction system can be increased in two directions. The rule base must be improved, as our data clearly demonstrate the questionable quality of the pharmacopeia used. Off-label indications should be added, and, ideally, additional knowledge should be acquired from medical experts. This became especially evident when we analysed the potential additional indications for antiparkinsonian drugs. For better investigating the justifications for drugs often used for symptomatic treatment (such as bronchodilators), additional knowledge associating symptoms with the underlying diseases would also be helpful. The information extraction system can be improved by allowing fuzzy string matching and by better identifying the discourse context in which the text string of interest occurs (thus ignoring, e.g., the occurrence of substance names in the lab result section). The latter will also support the harvesting of additional disease names which occur in the summary but are not coded.
References
[1] Sackett DL. Clinical disagreement. How often it occurs and why. Canadian Medical Association Journal, 123:499–536, 1980.
[2] Barnum JF. The misinformation era: the fall of the medical record. Annals of Internal Medicine 10: 482–484, 1989.
[3] Stausberg J. Die Kodierqualität der stationären Versorgung. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz, 20:1039–1046, 2007.
[4] Medizinische Medien Informations GmbH: www.mmi.de, last accessed 5th February, 2011.
[5] ROTE LISTE®: www.roteliste.de, last accessed 5th February, 2011.
[6] Schönbach C, Nagashima T, Konagaya A: Textmining in support of knowledge discovery for vaccine development. Elsevier, Amsterdam (ISSN 1046-2023: 2004, vol. 34).
[7] Jimeno A, et al. Assessment of disease named entity recognition on a corpus of annotated sentences. BMC Bioinformatics. 2008; 9 (Suppl 3): S3.
[8] Hauben M, Reich L: Data mining, drug safety, and molecular pharmacology: potential for collaboration. The Annals of Pharmacotherapy, Whitney, Cincinnati (2004).
[9] Garten Y, Altman R: Pharmspresso: a text mining tool for extraction of pharmacogenomic concepts and relationships from full text. BMC Bioinformatics, 2009; 10 (Suppl 2): S6.
[10] Dunkel M, Günther S, Ahmed J, Wittig B, Preissner R: SuperPred: drug classification and target prediction. Nucleic Acids Res. 2008 Jul 1;36 (Web Server issue):W55-9.
[11] Roberts PM, Hayes WS: Information needs and the role of text mining in drug development. In Pacific Symposium on Biocomputing 2008, 592-603.
Privacy and Security
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-601
Healthcare Professionals’ Experiences With EHR-System Access Control Mechanisms
Arild FAXVAAGa,1, Trond S JOHANSENa, Vigdis HEIMLYb, Line MELBYa, Anders GRIMSMOa
a Norwegian EHR Research Centre, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
b Department of Computer and Information Science, Faculty of Information Technology, Mathematics and Electrical Engineering, NTNU, Trondheim, Norway
1 Corresponding author: Arild Faxvaag, The Norwegian EHR Research Centre, Medical-Technical Research Centre, N-7489 Trondheim, Norway; E-mail: [email protected]
Abstract. Access control mechanisms might influence the information seeking and documentation behavior of clinicians. In this study, we surveyed healthcare professionals in nursing homes and hospitals on their attitudes to, and experiences with, using access control mechanisms. In some situations, the access control mechanisms of the EHR system made clinicians postpone documentation work. Their practice of reading what others have documented was also influenced. Not all clinicians logged out of the system when they left a workstation, and some clinicians reported doing some of their documentation work in the name of others. The reported practices might have implications for patient safety. Keywords. Electronic health record systems, Information security, Access control, Patient safety
1. Introduction Modern healthcare is information-intensive in the sense that clinical work depends on access to relevant and updated information about the patient while at the same time generating new information about the patient. The data, which are often of a very sensitive nature, must be made available to those providing care while at the same time being protected from unauthorized access. To this end, most jurisdictions have developed personal health data protection acts. Increasingly, patient-specific health data are being presented through electronic health record systems (EHR-systems). The balance between securing legitimate access and protecting data from unauthorized access is achieved through the use of authentication and authorization mechanisms [1], [2]. Because of the more extensive use of advanced technologies and more specialized personnel, an increasing number of healthcare personnel come into contact with the patient. Increasingly, more than one healthcare institution collaborates on providing the care [3]. This adds further complexity to the issue of keeping the data secure while ensuring sufficient access.
The details of access control mechanisms might have profound impacts on the information seeking behavior of healthcare personnel. Many studies have addressed how access control mechanisms should influence the information working conditions of healthcare professionals, but little is known about how such mechanisms actually influence the work of healthcare personnel. Access control mechanisms might make access to relevant information more cumbersome and time consuming, not only leaving information less available but also jeopardizing the processes of updating the patient's records. Different methods of implementing information security policies might have different impacts on the users of the system. Some aspects of the behavior of users might be inferred from the study of access logs [4], but how healthcare professionals perceive the use of access control mechanisms is largely unknown. EHR Monitor is an annual survey that monitors the implementation and use of EHR systems in Norway [5]. The survey is directed towards GPs, municipal care and hospitals and collects data by means of questionnaires. In the 2009 survey, a set of questions exploring healthcare professionals' attitudes to, and experiences with, access control mechanisms was included. We present the results here.
2. Materials and Methods 2.1. Development of the Questionnaire In the last ten years, we have surveyed healthcare professionals on their use and perceived benefits of using EHR-systems, e.g. [6]. As paper-based health records have gradually been withdrawn from the clinical workflow, we have observed that healthcare professionals have become more dependent on access to the EHR-systems for doing their work [7]. Informal observations of clinical work indicated numerous problems with getting access to EHR information and doing documentation work. To explore this, we developed a set of questionnaire items related to the respondents' perceived use of time when presenting their user name and password to the access control component of the EHR-system and the effects of the automatic logoff mechanism. Other questionnaire items were related to the impact of the access control mechanisms on how the respondents work with the patient's EHR. Responses were given on a five-point Likert scale (for the first five questions: Strongly disagree – Disagree – Neither agree nor disagree – Agree – Strongly agree; for the remaining six questions: Always – Most of the time – Half of the time – Rarely – Never). The 11 questions were distributed as a section of the EHR Monitor questionnaire form. An English translation of the wording of the questionnaire items is given in Figures 1 and 2. 2.2. Selection of Participants The survey was directed towards healthcare professionals in nursing homes and hospitals. The municipalities are responsible for the care provided by the Norwegian nursing homes, whereas the hospitals are owned by the state. Nursing homes: 45 of a total of 430 Norwegian municipalities were selected on the basis of size and geographic distribution in such a way that they could be regarded as representative of the national average. In municipalities with more than one nursing home, only one of these was selected. The nursing homes were contacted by e-mail and phone and invited to participate in the study. 29 nursing homes agreed to participate. 590 questionnaires
were distributed. Of these, 239 (41%) were returned. Hospitals: For each of the 21 selected hospitals, two clinical departments were invited to participate in the survey. These employed a total of 1352 clinicians. We developed an electronic version of the questionnaire (Questback), distributed a link to the questionnaire to the leaders of the clinical departments and asked them to forward this link to their employed clinicians. A total of 206 questionnaires (15%) were returned. The participants used many different EHR-systems. All systems have implemented access control mechanisms, as this is mandatory according to Norwegian law [1]. The user has to present his or her user name and password to gain access to patient information in the EHR-system. The user also has to log on to the workstation.
3. Results Clinicians must log on to the EHR-system before they can start reading the patient's EHR or do documentation work. A majority of clinicians reported that they spent too much time dealing with the access control mechanism. Approximately two out of three agreed that every login to the system took too much time (60% in the nursing homes, 62% in the hospitals). Most (56% in the nursing homes, 60% in hospitals) reported that they rarely failed to log on to the system, but as many as 32% of respondents in the nursing homes and 27% of the hospital respondents reported having experienced the opposite (Figure 1). In most clinical settings, documentation work and patient work are performed in sequence. The information security regulations make it mandatory for the user to log off the system once he or she is done with the current work process in the EHR-system. This also has the effect that the workstation becomes available for the next user. Sometimes clinicians leave the workstation without logging off the system. 36% of the respondents in nursing homes and 45% of hospital employees reported that they often had to log other users off before they could start using the system (Figure 1).
Figure 1. Clinicians’ opinions about and experiences with logging on to EHR systems.
Six questionnaire items concerned the effects of access control mechanisms on the efficiency, timeliness and some quality aspects of documentation work (Figure 2).
Figure 2. Consequences of the use of access control mechanisms / login routines.
More than one out of three reported that the login routines often contributed to delays in their work. In the nursing homes, 52% reported that the access control mechanisms made them relay messages about the patient orally rather than documenting them in the patient's EHR. Access control mechanisms also influenced the use of the EHR system for reading purposes. 46% of the respondents from the nursing homes reported that the logon procedure contributed to their failing to look up information in the patient's EHR before providing care to the patient. Many reported that they often postponed documentation work until they could find the time to log on to the system. This practice was most prominent in the nursing homes. Among the hospital employees, 16% reported that documentation work was always or often done in the name of others (Figure 2).
4. Discussion In this study we have surveyed clinicians in Norwegian hospitals and nursing homes on their experiences with access control. The access control mechanisms of today's EHR-systems might fulfill the formal demands of the information security regulations, but having to use these mechanisms is perceived to have a negative impact on clinical work. Access control mechanisms make clinicians postpone documentation work but also alter their practice of reading what others have documented. Not all clinicians log out of the system when they leave the workstation, and some clinicians report doing some of their documentation work in the name of others (i.e. while another user is logged on to the system). Since the survey participants report behaviors that are not in accordance with the regulations, one might assume that they would tend to underreport them. The practices that have been uncovered might therefore be more common than estimated in the survey. Another limitation of the study is that the users
have only responded by picking the most appropriate response on a Likert scale. Why the clinicians postponed documenting is not clear. This, and other aspects of the participants' use of the access control mechanisms, could have been uncovered by interviews. The results might be interpreted in the perspective of time constraints and the fact that clinicians do much of their work away from a workstation. Throughout the workday at a ward, many clinical activities take place at the bedside. The patients have conditions that require continuous attention and care. Spending time taking care of the patient must be prioritized. From the perspective of clinicians, maneuvering through an access control mechanism takes time and is therefore perceived as having a cost. Due to time constraints, clinicians might have to choose between working with the patient and spending the same precious minutes logging on to an EHR system. In this situation some of the reported practices might be interpreted as workarounds, developed to accommodate the use of the EHR-system alongside other clinical tasks [8]. Workarounds like those uncovered in this survey have also been observed in Danish hospitals [9]. The findings are relevant from the perspective of patient safety. Delayed updates of the patient's EHR, and a preference for sharing patient information through oral communication, make the information in the EHR less reliable. Not checking the patient's EHR before providing care to the patient is a potentially dangerous practice. The results should inform information security policy makers, and should also have implications for those implementing access control mechanisms. To our knowledge, none of the EHR systems in use have undergone usability testing of their access control implementations.
References
[1] The Norwegian Directorate of Health. Available from http://www.helsedirektoratet.no/vp/multimedia/archive/00012/Summary_of_The_Code__12645a.pdf
[2] Rindfleisch TC. Privacy, information technology, and health care. Communications of the ACM 80 (1997), 92-100.
[3] Meingast M, et al. Security and privacy issues with health care information technology. Engineering in Medicine and Biology Society, 2006. EMBS '06. 28th Annual International Conference of the IEEE, 5453-5458.
[4] Røstad L. Access Control in Healthcare Information Systems. PhD thesis (2009). Available at http://www.idi.ntnu.no/~lilliaro/docs/lr_phd_final.pdf (PDF).
[5] Heimly V. Diffusion and use of Electronic Health Record Systems in Norway. Stud Health Technol Inform 160 (2010), 381-5.
[6] Lærum H, et al. Doctors' use of electronic medical records systems in hospitals: Cross sectional survey. BMJ 323 (2001), 1344-1348.
[7] Lium J-T, et al. From the front line, report from a near paperless hospital: Mixed reception amongst health care professionals. JAMIA 13 (2006), 668-75.
[8] Ash JS. Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System-related Errors. JAMIA 11 (2004), 104-112.
[9] Mabech H. [Electronic prescription in clinical practice]. PhD thesis (2008). Available at http://vbn.aau.dk/files/16956379/Elektronisk_medicinering.pdf (PDF).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-606
Personal Health Information on Display: Balancing Needs, Usability and Legislative Requirements Erlend Andreas GJÆRE a, Inger Anne TØNDEL b, 1, Maria B. LINE b, Herbjørn ANDRESEN c, Pieter TOUSSAINT a a Dep. of Computer and Information Science, NTNU, Trondheim, Norway b SINTEF ICT, Trondheim, Norway c Dep. of Private Law, University of Oslo, Norway
Abstract. Large wall-mounted screens placed at locations where health personnel pass by will assist in self-coordination and improve utilisation of both resources and staff at hospitals. The sensitivity level of the information visible on these screens must be adapted to a close-to-public setting, as passers-by may not have the right or need to know anything about patients being treated. We have conducted six informal interviews with health personnel in order to map what kind of information they use when identifying their patients and their next tasks. We have compared their practice and needs to legislative requirements and conclude that it is difficult, if not impossible, to fulfil all requirements from all parties. Keywords. Personal health information, de-identification, privacy, coordination
1. Introduction The COSTT2 project aims at supporting coordination in the peri-operative hospital environment by visualising status information regarding current operations and patients under treatment on large wall-mounted screens. This will help personnel predict when their time and effort are needed, and which colleagues are available for advice or assistance. As a result, both physical resources and staff can be utilised more effectively. Research on similar computerised coordination systems implemented as electronic whiteboards is also presented by Bardram et al. [1] and Aronsky et al. [2]. In order to maximise coordination support, the screens should be placed at locations where the relevant health personnel are likely to see them, e.g. in corridors. This, however, makes them available to everybody present, including patients, their relatives, and personnel not directly involved in patient treatment (e.g. cleaners and technicians). Such availability has consequences for the privacy of patients and employees. In previous work [3] we have introduced the concept of flexible de-identification, and described how it is possible to present patient information at various levels of
1 Corresponding author: Inger Anne Tøndel, SINTEF ICT, N-7465 Trondheim, Norway; E-mail: [email protected]
2 Co-operation support Through Transparency, http://costt.no/
details, both with regard to identifying information and to the medical condition. Three perspectives have to be taken into account when developing solutions for de-identification. The first perspective is that clinical personnel require a certain amount of identifying information for the medical information presented to be meaningful and useful. The second perspective is that laws and regulations restrict the amount of patient-identifying information that can be presented. The last perspective is usability. A system that requires users to log on to multiple systems in order to obtain patient information might fulfil both the information need and the requirements set by laws and regulations, but is not very usable in a dynamic work environment where clinicians work under time pressure. These three perspectives generate different demands, and designing the right level of de-identification means balancing them. The rest of the paper is organised as follows: Section 2 presents the results of unstructured interviews with personnel working in the surgical clinic at a Norwegian hospital, and Section 3 outlines the Norwegian legislative requirements. Then, Section 4 discusses how needs, usability and legislative requirements can be balanced, and Section 5 concludes the paper.
2. Interviews In order to improve our understanding of the information needs of health care personnel, and specifically their need for identifying information, we conducted six unstructured interviews at Trondheim University Hospital during November–December 2010. Six different identification approaches were explored (see overview in Table 1), where the one with the highest identification level used the initials and birth year of the patient. The less identifying approaches aimed to identify the patient by his location or his relation to health care personnel, possibly in combination with the test or surgery type performed. In the interviews we wanted to gain feedback on whether the less identifying approaches still resulted in useful status information for health care workers. The participating clinicians included one senior physician and two ward nurses from the Department of Gastrointestinal Surgery, one junior physician and one nurse from the Department of Emergency, one ward nurse from the Department of Breast and Endocrine Surgery, and one charge nurse from a ward at the Department of Orthopaedic Surgery. Their ages ranged from 25 to 55, and all had been in their position for some time. The informants were recruited randomly during work hours, and interviewed straight away in their regular work environment. They were each asked to comment on some early-stage paper-based prototypes of information visualizations, containing message examples related to the treatment progress of patients, e.g. “CT-image description is ready” and “Patient has been scheduled for surgery”. We explored four different prototypes in total, but only one or two were presented to each informant. Some status messages were added during the process, and two of the prototypes were modified slightly between interviews, due to the feedback given. The prototypes differed mainly in how information was organised and how the patients were identified. We used the prototypes to investigate whether the clinicians would be able to tell patients’ identities apart with the different identification approaches, and to evaluate how these related to current practices. The feedback was recorded with handwritten field notes, and written out directly afterwards. The results of the interviews are summarised in Table 1. Generally, clinicians were positive towards the idea of integrating status updates from several systems. Most were still
reluctant about the immediate prospect of placing any patient information somewhere more publicly available than workstations or personal devices. Though the approach where patients are identified by initials and birth year stood out as the most convenient option, our main impression is that health care personnel have varying needs for patient identification, depending on their role and the context in which identification should happen. We also discovered that clinicians commonly used patients’ diagnosis or treatment history as a form of de-identification in conversations between colleagues (e.g. “he with ileus who needs another operation in three days”).

Table 1. De-identification approaches explored in the interviews, based on the paper-based prototypes.

Approach: Initials and birth year of patient
Example: JD59
Summary of responses: Will normally provide fairly good accuracy. Patients having the same birth year and initials (or last name) do however occur. Clinicians still found this convenient, as they are used to working on the basis of the patients’ names and ages (various combinations of name and birth date are used today).

Approach: Room number/location
Example: (Plotted on a map of wards)
Summary of responses: Patients move around (this may leave room lists temporarily inconsistent) or they can even be placed in the corridors. Room numbers are commonly used for reference today, but in combination with other identifiers, e.g. name, diagnosis or sex. It seems hard to remember the patients’ exact locations.

Approach: Initials of responsible physician (first two letters of both first name and last name)
Example: DAJO
Summary of responses: Patients are not followed up by only one physician, and physicians attend many patients at each ward. Nurses will not necessarily know the name of the physician providing care for each of their patients at a specific time.

Approach: Blood test indicators, time and responsible nurse
Example: Hb, Na, INR 10:41 (HAPE)
Summary of responses: Blood tests are ordered as standardised batches, so important indicators, if any (e.g. INR may decide whether to operate or not), do not stand out. Tests for several patients are often ordered at the same time, and by the same nurse, too.

Approach: Radiology type, level of urgency, time and referring physician
Example: CT abdomen (red) 11:00 (DAJO)
Summary of responses: Some results (MR) take days to arrive, and often 20-30 patients with abdominal pains arrive daily. Hence, a list of pending results may become overloaded and hard to interpret.

Approach: Operation room number, surgery type, scheduled time and surgeon initials
Example: OP3: Appendicitis 11:00 (PT)
Summary of responses: Nurses rarely know exactly what room an operation will take place in. But as it is uncommon to have several patients from the same ward undergoing surgery at the same time, they may still be able to deduce which operation to follow.
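As a purely illustrative aside (not part of the study), the short Python sketch below shows how the two identifier-based display labels from Table 1 could be generated; the function names and the sample data are hypothetical.

```python
# Illustrative sketch only: generating the de-identified display labels of Table 1.
def initials_and_birth_year(first_name: str, last_name: str, birth_year: int) -> str:
    """Approach 1: patient initials plus two-digit birth year, e.g. 'JD59'."""
    return f"{first_name[0]}{last_name[0]}{birth_year % 100:02d}".upper()

def physician_initials(first_name: str, last_name: str) -> str:
    """Approach 3: first two letters of the responsible physician's first and last name."""
    return (first_name[:2] + last_name[:2]).upper()

if __name__ == "__main__":
    print(initials_and_birth_year("John", "Doe", 1959))   # -> JD59
    print(physician_initials("Daniel", "Johnson"))        # -> DAJO
```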
3. Legislative Requirements In Norway, rules and regulations on the obligation of secrecy, and the criteria for sharing or disclosing data, are mainly found in the Personal Health Data Filing System Act [5], which implements the EU personal data protection directive [4] for the health domain, and in the Health Personnel Act [6], which lays down national rules of conduct for health personnel. The authorisation rule for granting access to health data [5] consists mainly of two criteria. The first is a general need-to-know restriction: “Access may only be granted insofar as this is necessary for the work of the person concerned” [5]. The second criterion is that access must be “in accordance with the rules that apply regarding the duty of secrecy” [5]. The general rule on secrecy goes beyond a mere duty to “keep silent”. It is a proactive duty on institutions as well as individual health personnel to “prevent others from gaining access to or knowledge of information relating to people’s health or medical condition” [6]. There are a few derogations from the
secrecy rule [6], mainly the need to share information with co-operating health personnel, the duty to supply patient administrative systems with key data, and a few more rules on sharing information with a patient’s next of kin, and with students, health care assistants or data processing expertise. However, there are no general permissions for making health data available to other patients, or to other patients’ next of kin. There are, in principle, two possible strategies for making the envisioned wall-mounted displays legitimate under data protection law. The first strategy would be to generalise or trivialise the data in ways that put the information content below the threshold of “relating to people’s health or medical condition”. An example could be to make the displayed data read something like “patient x to be present in room 101 from 9:30 to 14:00” without revealing what activities would take place there. The second strategy would be some sort of de-identification of the patient, in order to prevent the displayed data from meeting a specific part of the definition of “personal health data” [5], namely the criterion that it “may be linked to a natural person”. Norwegian law contains several useful concepts for de-identification [5]. These legal concepts were initially aimed at central health registers, spanning information originating from different hospitals, but they could also be relevant for de-identification purposes within a single hospital. The definition of “de-identified personal health data” has two components. First, any identifying data is removed. Second, any re-identification shall be dependent on re-supplying the data that was removed. This second component implies a high threshold; an acceptable level of de-identification may not be pro forma, and re-linking data to the right patient must not be easily accomplished by guessing. An alternative is to aim for “pseudonymous health data”, which implies that identifying information is encrypted.
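As a rough, non-authoritative illustration of the pseudonymisation idea, the following Python sketch derives a keyed pseudonym from a patient identifier. The key name and identifier are invented, and the choice of a one-way HMAC is an assumption: the Act speaks of encrypted (re-linkable) identifiers, so a real implementation would use reversible encryption under a controlled key, together with appropriate key management.

```python
# Illustrative pseudonymisation sketch (not a statement of what the
# Personal Health Data Filing System Act requires).
import hmac
import hashlib

SECRET_KEY = b"held-by-a-trusted-pseudonym-service"  # hypothetical key holder

def pseudonymise(patient_identifier: str) -> str:
    """Replace a patient identifier with a keyed pseudonym; without the key,
    re-linking the pseudonym to the patient should not be feasible by guessing."""
    digest = hmac.new(SECRET_KEY, patient_identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for display purposes

print(pseudonymise("01015912345"))  # made-up national identity number
```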
4. Discussion The interviews indicate that status updates for patients under treatment are useful. Health care personnel would like to know when test results are ready, how operations proceed, etc. Making such information easily available on wall-mounted screens will, however, expose the information to everybody who has physical access, something that is not permitted by Norwegian legislation. As mentioned in Section 3, two main strategies are available in order to adhere to the legal restrictions: removing all health-related information or de-identifying the information. The first strategy may work for some events, but using it as a general strategy will probably render the system useless. The second strategy seems more appealing, as it can supply more useful information. Finding an appropriate level of de-identification that still allows personnel to identify patients nevertheless remains a challenge. Results from the interviews reveal that variations over name and birth date are commonly used for identification. At a ward with a limited number of patients, this comes close to identifying most patients. The other de-identification techniques tested in the interviews, such as using the room number or the identity of health care personnel, turned out not to be usable. Thus we need to work on alternative de-identification methods. Existing literature on de-identification of health information [7] is mainly concerned with de-identification of large datasets that are to be used for secondary purposes (e.g. research). Still, we plan to look into how existing techniques such as pseudonymisation can be used in our setting. We will also investigate to what extent information will still be useful if all identifiers are removed.
If it turns out that the level of de-identification required by legislation renders the system useless, we are left with no option but to limit access to the information to authorised personnel only. This can be ensured by placing the screens at locations where only health personnel have access, or by access control mechanisms on the screens, although this will severely reduce the usability for coordination purposes. If such an approach is necessary, it will be important to investigate smart ways of doing access control, e.g. by providing more details on a personal handheld device, or by mechanisms that automatically detect who is present and display information based on the access rights of that group of people. Reducing the level of identification will result in an increased risk of erroneous interpretation of information. Though this will reduce the benefits of the coordination support system, it is important to state that the system will not replace any of the medical information systems. These will still use full identification for all medical data, and thus there should be no increased risk of treatment errors.
5. Conclusion Public display of health information poses an obvious risk to patient privacy, and thus there is a need to determine the appropriate level of identification. As the legislative requirements are in conflict with the needs of health personnel, it may be impossible to fulfil all of them without sacrificing usability. Acknowledgments: We would like to thank Børge Lillebo for his work on the prototypes and cooperation on the interviews. Thanks also to our other colleagues in the COSTT project, Arild Faxvaag especially, for useful comments and discussions. This work was supported by the Norwegian Research Council’s VERDIKT program (grant no. 187854/S10).
References
[1] Bardram, J.E., Hansen, T., Soegaard, M. AwareMedia – A Shared Interactive Display Supporting Social, Temporal, and Spatial Awareness in Surgery. Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work (CSCW '06) (2006), 109-118.
[2] Aronsky, D., Jones, I., Lanaghan, K., Slovis, C.M. Supporting Patient Care in the Emergency Department with a Computerized Whiteboard System. Journal of the American Medical Informatics Association 15 (2008), 184-193.
[3] Faxvaag, A., Røstad, L., Tøndel, I.A., Seim, A.R., Toussaint, P.J. Visualizing Patient Trajectories on Wall-Mounted Boards – Information Security Challenges. Studies in Health Technology 150 (2009), 750-759.
[4] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
[5] Act on personal health data filing systems and the processing of personal health data [Personal Health Data Filing System Act].
[6] Act of 2nd July 1999, no 64 relating to health personnel etc. [The Health Personnel Act].
[7] El Emam, K., Fineberg, A. An Overview of Techniques for De-identifying Personal Health Information. Health Canada, January 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-611
Watermarking – a new way to bring evidence in case of telemedicine litigation Gouenou COATRIEUX a,1, Catherine QUANTIN b, François-André ALLAERT c, Bertrand AUVERLOT b, Christian ROUX a a Inserm U650, LaTIM; GET ENST Bretagne, Dpt. ITI b Dpt. of Biostatistics & Medical Informatics, Inserm U866, CHU de Dijon c Dpt. of Epidemiology and Biostatistics, McGill University, Montreal, Canada
Abstract: When dealing with medical data sharing, in particular within telemedicine applications, there is a need to ensure information security. Being able to verify that the information belongs to the right patient and comes from the right source, or to detect that it has been rerouted or modified, is a major concern. Watermarking, which is the embedding of security elements, such as a digital signature, within a document, can help to ensure that a digital document is reliable. However, at the same time, questions arise about the validity of watermarking-based proof. In this paper, beyond the technical aspects, we discuss the legal acceptability of watermarking in the context of telemedicine applications. Keywords: Telemedicine, practitioner liabilities, watermarking, medical imaging.
1. Introduction The evolution of medical information systems, supported by advances in information technology, has made it possible for information to be shared between distant health professionals and to be manipulated and managed more easily. Telemedicine applications illustrate this evolution. However, at the same time, more attention should be paid to information security, which is intimately linked to the liability of physicians. Security can be defined in terms of confidentiality, availability, integrity and authenticity (1). In this paper, we focus on the security of multimedia medical data, which means the protection of documents or medical content (images, text, databases, and so on) shared between different health professionals in the context of telemedicine. Among the different measures to ensure the protection of content, watermarking is awaiting acceptance by health professionals before being deployed in real practice. Basically, watermarking is defined as the invisible embedding or insertion of a message in a host document, an image, for example. As we will show later in this paper, watermarking makes it possible to introduce new security and management layers much closer to the host data: in the signal itself. Even though most of the work on watermarking has concerned medical images in order to verify image integrity or improve confidentiality (2), watermarking can also be applied to any other kind of digital data. The technical aspects of watermarking concern ways to modify the host
1 Corresponding Author: G. Coatrieux; E-mail: [email protected]. LaTIM Inserm U650, Dpt. ITI, Telecom Bretagne, Technopôle Brest-Iroise - CS 83818 - 29238 Brest Cedex 3 – France.
document for message insertion, and questions arise about whether or not it constitutes acceptable evidence and conveys proof of a physician’s liability. In this paper, we aim to answer these questions in the context of telemedicine applications. Thus, in sections 2 and 3, we recall physicians’ liabilities and the security needs that have to be satisfied in telemedicine applications. In section 4, we discuss the security applications watermarking can be used for and conclude with open questions about this technology.
2. Medical Liabilities in Physicians’ Collaboration In France, regarding medical malpractice suits against multiple physicians, assessing the liability of each individual is the classical approach from a legal as well as an ethical standpoint. Such assessments aim to identify negligence in the practitioner’s behavior, as it is this negligence, and not the diagnosis or the drug used, that must be proved to demonstrate the physician’s liability. The legal bases of this obligation go back to the jurisprudence of the Court of Cassation and the Mercier ruling (1936), in which it is stated that “the obligations that the medical practitioner must meet include providing the patient with conscientious and attentive treatment that is in accordance with the current state of medical science”. In other words, regarding the “obligation of care”, French legislation refers to an “obligation of means”, i.e. an “obligation to make the best effort” and not an obligation to achieve a particular result. If the technical or intellectual means normally used by a competent and diligent professional are not employed, this constitutes criminal negligence. A doctor cannot be sanctioned for not being able to make a difficult diagnosis, e.g. in studying an X-ray film or an anatomopathological examination slide. In contrast, if the practitioner fails to diagnose a common and obvious lesion, the facts show that the professional has not given the care “based on data known to science” (French Medical Code of Ethics, art.32), i.e. those ordinarily known by a competent and diligent practitioner, who “must always make use of, if necessary, the aid of competent third parties”. In telemedicine, physicians’ diligence must be appraised according to their personal involvement in the diagnostic process. Although two practitioners may exchange data in a symmetrical way, they do not necessarily consider these data from the same standpoint. The requester of the opinion has access to all the available information, while the referent, the practitioner whose opinion is sought, generally only receives a part of the information, selected by the first doctor. This selection must be made by a competent person, able to choose the information relevant for the diagnosis and to interact effectively with the referent. This is the most common situation in tele-expertise as defined by French legislation, where the two doctors are most often of the same specialty and are accustomed to such dialogues (3). Although nowadays it is possible to send whole images of a radiological or anatomopathological file via the internet, this in itself does not solve the problem of the acceptability of evidence in cases of litigation. The fact of not having all the information available does not exonerate the referent from his responsibility with respect to the advice he gives. In cases of doubt or of difficulty in diagnosis, it is up to him to ask for additional information, and to decline to give an opinion if this information is insufficient for his needs, or if he does not feel competent to do so (4). Furthermore, the referent and patient never meet; it is therefore hard to believe that recourse to tele-expertise truly stems from the wishes of a patient, and that this patient is able to validate the choice of the referent, regarding the “ideal vision of free choice”
(P. Fernandez). Without patient agreement, there is thus no legal contract between the patient and the referent practitioner, as mentioned in the French Medical Code of Ethics (art. 60). From the standpoint of the patient, the liability of a referent doctor, who comes to the assistance of a requester, may not be contractual but of a tort nature, which may also make him or her more vulnerable in terms of civil liability.
3. Identifying the Facts When a patient suffers a prejudice related to a diagnostic error, it is necessary to determine the respective liabilities of the different medical practitioners involved in the diagnostic/therapeutic process. As mentioned, in France, professional negligence can only be argued if the practitioner provided care in an unsatisfactory way, i.e. falling short of what might have been expected from him regarding his field of competence. To assess this possible lack of efficiency, and thus possible involvement, several questions will arise at the heart of the judge’s task: who requested? What? When? For whom? Providing which documents? Who answered? What? When? Regarding which documents (additional material could be requested in order to answer properly)? Therefore, all elements involved in the transaction must be carefully stored, with no means of modification (Need 0 (N0)), and the following are thus necessary:
Need 1 (N1) - Whole transmitted images are saved with the name of the practitioner, the name of the patient, and the date and time of the transaction, and these data must be rendered unreadable to unauthorized access.
Need 2 (N2) - The sender must be identified in such a way that he cannot repudiate the message.
Need 3 (N3) - The date, time and substance of the referent practitioner’s answer must be strongly linked to the documents received to make the diagnosis before returning them.
Need 4 (N4) - The referent must be identified in such a way that he cannot repudiate the reply.
Need 5 (N5) - Data have to be stored on a non-erasable medium for the 10-year prescription period required by national law.
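As a rough, non-authoritative illustration of what an evidence record covering N0–N4 might look like, consider the Python sketch below. The record fields, key handling and the use of the third-party cryptography package (Ed25519 signatures) are assumptions made for the example, not something prescribed by the paper or by French law; non-erasable storage (N5) and encryption of the record at rest are not shown.

```python
# Illustrative evidence record for a tele-expertise exchange (hypothetical layout).
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_record(image_bytes: bytes, requester: str, referent: str, patient: str) -> dict:
    return {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),  # binds the whole image (N1)
        "requester": requester,                                   # identified sender (N2)
        "referent": referent,                                     # identified referent (N4)
        "patient": patient,                                       # name of the patient (N1)
        "timestamp": time.time(),                                 # date and time (N1, N3)
    }

requester_key = Ed25519PrivateKey.generate()  # in practice a certified, long-lived key pair
record = build_record(b"...image bytes...", "Dr. Requester", "Dr. Referent", "patient-123")
payload = json.dumps(record, sort_keys=True).encode("utf-8")
signature = requester_key.sign(payload)  # non-repudiation of the request (N2)

# The referent would countersign the same payload together with his report (N3, N4);
# any party can later verify the signature, which fails if the record was tampered with:
requester_key.public_key().verify(signature, payload)
```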
4. Security Services Based on Watermarking A general schema for watermarking is depicted in figure 1; it relies on two processes: embedding and reading. At the embedding stage, the message is inserted by modifying the host document in an “imperceptible” way. Such a host can be a signal, an image, a text, or a database. “Imperceptible” means that the watermarked document can be used instead of the original document without interference. Applied to an image, embedding consists of slightly modifying its pixel gray level values to insert the message. Image pixel values are modified or modulated so that they can be interpreted or demodulated by the reader to gain access to the message. An example is given in figure 2, where a Magnetic Resonance Image (fig.2(a)) has been watermarked (fig.2(b)) applying the method proposed in (5). The image of difference in fig.2(c) corresponds to the image signal variations, variations which encode the inserted message. Thus, image watermarking can be viewed as the addition of a signal, a watermark w, to the image I. Several techniques have been proposed for medical imaging. The reader may refer to (2) for more details about how these methods preserve the diagnostic value of the image.
Figure 1. A general schema of watermarking: an embedder inserts the message (m) into the host document (I) under a secret key, producing the watermarked document Iw; a reader uses the same key to recover the message (m).
Figure 2. Illustration of the reversible watermarking method (5) on a Magnetic Resonance Image: (a) original image (256x256 pixels, encoded on 12 bits), (b) watermarked image, (c) signal of difference, i.e. the watermark w, whose amplitude equals +/-1 or 0. The reversibility property allows the recovery of the original image by inverting the image distortion introduced by the embedding process.
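As a rough illustration of the embedding principle described above — deliberately a plain least-significant-bit scheme, and not the reversible method of (5) shown in Figure 2 — the following Python/NumPy sketch embeds and reads back a short message in a synthetic 12-bit image; the message content and all names are made up.

```python
# Minimal LSB watermarking sketch (illustration only): a real medical-imaging scheme
# would need reversibility and diagnosis-preserving constraints, as discussed in (2) and (5).
import numpy as np

def embed(image: np.ndarray, message: bytes) -> np.ndarray:
    """Overwrite the least significant bit of the first len(message)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("message too long for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFFFE) | bits
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bytes: int) -> bytes:
    """Read the message back from the least significant bits."""
    bits = (image.ravel()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

message = b"patient-id:123"                                       # hypothetical identifier
image = np.random.randint(0, 4096, (256, 256), dtype=np.uint16)   # synthetic 12-bit image
watermarked = embed(image, message)
assert extract(watermarked, len(message)) == message
```

In a real deployment the least significant bit plane of a diagnostic image may not be freely overwritable, which is precisely why reversible schemes such as the one in (5) were developed.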
According to the previous definition, watermarking provides a hidden communication channel through which different security elements can be made available. Its main interest resides in the fact that the information is concealed in the signal itself and can be retrieved even if the image file format is changed. It can also be made very difficult to modify the embedded message without definitively destroying the host content. Embedded data can be accessed by compliant systems, meaning systems that hold the appropriate watermarking plug-in and authorizations, i.e. that know the watermarking key – see figure 1. Watermarking has been proposed for several applications. Considering the data exchange and security needs presented in section 3, it can be used for: − Verifying data integrity by embedding a cryptographic hash of the image itself (N0). This image footprint can be digitally or cryptographically signed by the emitter to also satisfy non-repudiation (N1, N2) (6). It can also be used not only to detect image modification but also to identify the nature of such modifications, e.g. a transmission error or the result of malevolent behaviour (7-9). − Maintaining the link between patients and their health records (N1) by watermarking a patient identifier/pseudonym (10-11) along with a unique document identifier (e.g. DICOM UID – see standard http://medical.nema.org/). In the same way, content can be traced (N2, N4) by inserting sender and recipient identifiers. − Contributing to N3 by securely linking the documents involved in the collaborative session. Images can be watermarked with the digital signature and identifier of the referent’s report and, simultaneously, the images’ digital signatures and identifiers can be included in this report (possibly by watermarking it) (12). It thus becomes possible to retrieve documents linked to their contents, and also more difficult to tamper with documents, as they must be modified as a whole. Watermarking can offer more services than integrity or non-repudiation. For example, embedding the recipients’ identity allows identification of those who disclose or reroute data. Inserting content access or transmission authorizations makes it possible to verify whether the patient has consented to or allowed the sharing of his/her data (13). Embedded patient data is also more difficult to access: the information has to be extracted before being decrypted (one needs to know both the watermarking and encryption keys); and, in some cases, embedding may reduce the amount of data to be transmitted (14).
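The cross-linking idea in the last bullet can be illustrated with a small sketch in which each document carries a hash of the other, so that neither can be modified in isolation. The record layout below is invented for illustration; in the approach of (12) the link would itself be watermarked into the image rather than stored beside it.

```python
# Illustrative cross-linking of an image and the referent's report (cf. N3 and (12)).
# The dictionaries are hypothetical stand-ins for the report and the image metadata.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

image_bytes = b"...pixel data of the transmitted image..."
report_text = b"Referent report: no obvious lesion; advise follow-up CT."

report = {"uid": "report-001", "body": report_text,
          "linked_image_sha256": sha256_hex(image_bytes)}
image_meta = {"uid": "image-001",
              "linked_report_sha256": sha256_hex(report_text)}

# A later verifier recomputes both hashes; modifying either document breaks at least
# one link, so the pair can only be tampered with as a whole.
assert report["linked_image_sha256"] == sha256_hex(image_bytes)
assert image_meta["linked_report_sha256"] == sha256_hex(report_text)
```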
5. Conclusion Thanks to its transparency, watermarking can contribute to the improvement of medical data security, especially regarding telemedicine, where it can help to overcome most of the major issues mentioned above. As watermarking is a protection mechanism applied after creation of the document, it also allows the user to have access to the multimedia contents while maintaining content protection. Finally, watermarking is specific in that it allows information to be embedded inside the document itself. However, some questions still need to be answered. Without being exhaustive, the invisibility of the watermark is one of the major concerns. Though various solutions have been proposed for images (2), they cannot satisfy all of the needs N1 to N5 at the same time. Furthermore, the way to update the watermark content also has to be studied in depth. As for cryptography, there are symmetric and asymmetric watermarking schemes, but asymmetric methods do not allow the embedding of a large amount of data. At the same time, it may not be possible to embed every type of data. In fact, privacy concerns may limit or restrict information embedding. For instance, if the patient ID is embedded, a specific anonymization procedure will have to be designed. To be accepted, watermarking needs to be combined with cryptographic mechanisms, like digital signatures, which provide legally accepted proof.
References
[1] Coatrieux G, Maître H, Sankur B, Rolland Y, Collorec R. Relevance of watermarking in medical imaging. ITAB00; 2000 Nov; Arlington, USA.
[2] Coatrieux G, Lecornu L, Sankur B, Roux C. A review of image watermarking applications in healthcare. Conf Proc IEEE Eng Med Biol Soc. 2006;1:4691-4.
[3] Allaert FA, Dusserre L. Telemedicine: responsibilities and contractual framework. Stud Health Technol Inform. 1998;52 Pt 1:261-4.
[4] Allaert FA, Dusserre L. Tele-expertise; users' and suppliers' liabilities. Brender J, et al ed. Press I, editor, 1996.
[5] Coatrieux G, Puentes J, Roux C, Lamard M, Daccache W. A low distorsion and reversible watermark: application to angiographic images of the retina. Conf Proc IEEE Eng Med Biol Soc. 2005;3:2224-7.
[6] Pan W, Coatrieux G, Cuppens-Boulahia N, Cuppens F, Roux C. Medical image integrity control combining digital signature and lossless watermarking. Lect Notes Comput Sci, 2009, 5939, 153-162.
[7] Liew SC, Zain JM. Reversible Tamper Localization and Recovery Watermarking Scheme with Secure Hash. European Journal of Scientific Research, 49(2), 2011; p.249–264.
[8] Huang H, Coatrieux G, Shu H, Luo L, Roux C. Medical image tamper approximation based on an image moment signature. 12th IEEE International Conference on, 2010; 254-259.
[9] Cheng S, Wu Q, Castleman KR. Non-ubiquitous digital watermarking for record indexing and integrity protection of medical images. ICIP05, Genoa, Italy, vol. 2, Sept. 2005.
[10] Quantin C, Allaert FA, Gouyon B, Cohen O. Proposal for the creation of a European healthcare identifier. Stud Health Technol Inform. 2005;116:949-54.
[11] Hsiang-Cheh H, Wai-Chi F, Shin-Chang C. Privacy protection and authentication for medical images with record-based watermarking. Life Science Systems and Applications Workshop (LiSSA), 2009, 190-193.
[12] Puentes J, Coatrieux G, Lecornu L. Secured Electronic Patient Records Content Exploitation. Healthcare Knowledge Management: Issues, Advances, and Successes, Springer Verlag, 2006.
[13] Pan W, Coatrieux G, Cuppens-Boulahia N, Cuppens F, Roux C. Watermarking to Enforce Medical Image Access and Usage Control Policy. Int. IEEE Conf on SITIS, 2010, 251-260.
[14] Acharya R, Niranjan UC, Iyengar SS, Kannathal N, Min LC. Simultaneous storage of patient information with medical images in the frequency domain. Comput. Meth. Prog. Bio. 76:13–19, 2004.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-616
Sharing Sensitive Personal Health Information through Facebook: the Unintended Consequences Mowafa HOUSEH a,1 a College of Public Health and Health Informatics, King Saud Bin Abdulaziz University for Health Sciences (KSAU-HS), National Guard Health Affairs (NGHA), Riyadh, Saudi Arabia
Abstract. The purpose of this paper was to explore the types of sensitive health information posted by individuals through social network media sites such as Facebook. The researcher found several instances in which individuals, who could be identified by their user profiles, posted personal and sensitive health information related to mental and genetic disorders and sexually transmitted diseases. The data suggest that Facebook users should be made aware of the potential harm that may occur when sharing sensitive health information publicly through Facebook. Ethical considerations in undertaking such research are also examined.
Keywords. Social networking, Privacy, Health information, Facebook
1. Introduction In 2011, the Markle Survey of Health reported that privacy around the exchange and use of health information was a top concern for physicians and patients [1]. Although the concerns voiced by patients and physicians in the survey are legitimate, they have been a focal point of the healthcare informatics agenda for many years. One of the earliest papers on the subject of privacy and confidentiality regarding health information was published by the New England Journal of Medicine in 1968 [2]. In the paper, the authors advocate for state laws and for ethical, clearly defined regulations regarding the protection of health information. It was not until 30 years later that the United States passed the Health Insurance Portability and Accountability Act (HIPAA) to protect the privacy and confidentiality of health information [3]. In addition, with recent advancements made in the collection and analysis of genetic data within the field of bioinformatics, the United States Congress passed the Genetic Information Nondiscrimination Act of 2008 to protect against the improper use of genetically identifiable data collected by health insurers and employers. Similar health information privacy legislation has been introduced in Europe through the Personal Data Directive of 1995. As science makes new discoveries and advances to collect an array
Corresponding Author: Dr. Mowafa Househ, King Saud Bin Abdul Aziz University for Health Sciences, College of Public Health and Health Informatics, Riyadh, Kingdom of Saudi Arabia; E-mail: [email protected]
of new health information, new legislation will be needed to ensure that the privacy and confidentiality of health information are maintained. Historically, HIPAA and other acts to protect the privacy and confidentiality of health information were designed to protect the patient from privacy violations that could impact their employment, relationships, or public perceptions of them. Such violations did occur, as reported in 2003 by the Health Privacy Project [3], and they included privacy violations such as • A woman’s medical records being posted on the internet after she was treated for complications that were a result of an abortion • A man being fired after an insurance company informed his employer that he had received treatment for alcohol abuse • A hospital clerk stealing social security numbers and using them to apply for credit cards and open bank accounts • Files of persons living with sexually transmitted diseases being sold by a U.S. state These examples were enough for legislators to propose and pass laws that protect the personal privacy and confidentiality of the patient’s health information from improper use by clinicians, staff, hospitals, and government. With the advent of social networking and the promotion of individualized healthcare, however, there is a growing trend of patients sharing their own personal health information with the world through social networking sites such as Facebook. Within this context, the purpose of this paper was to explore the types of groups and information shared on Facebook pages and to make recommendations regarding privacy and confidentiality concerns for patients with regard to sharing their health information via social networking sites. The focus of this paper was not the health information people may be sharing about others, but rather the health information they may be sharing about themselves. Ethical considerations while conducting this type of research were also examined.
2. Methodology The research focused on reviewing sensitive health information related to mental disorders (anxiety, depression, eating disorders, and drug addiction), sexually transmitted diseases (HIV, chlamydia, and gonorrhea), and sexual and genetic disorders (cystic fibrosis, hemophilia, and sickle cell) shared through Facebook groups. To limit the scope of the study, the researcher investigated only anxiety, HIV, and cystic fibrosis. Briefly, Facebook is a social networking site that allows users to network with other individuals or groups registered on the Facebook site. Registration is free, and each user sets up their own network of friends and groups. Each user can upload videos, pictures, and hyperlinks, as well as engage in live chatting and many other features. In addition, users can configure their own privacy settings, which range from simple choices such as making their profile public or visible only to the friends they invite. For the purpose of this study, Facebook group pages regarding anxiety, HIV, and cystic fibrosis were searched. Only groups that were made publicly available to all Facebook users were reviewed in this study. The researcher logged in with his own personal account, and the search was carried out on February 3, 2011. The search was filtered to include only anxiety, HIV, and cystic fibrosis groups and to exclude pages
(e.g., pages with information on the subject) and people with names similar to the search terms. The researcher selected only the group with the most registered users for each of anxiety, HIV, and cystic fibrosis. Once in the group, the researcher observed the discussions of group members and the various types of information shared. Data from the Facebook Wall and Facebook Discussion forums were the only data included. According to Facebook, the Wall feature allows users to share text, pictures, videos, and hyperlinks. People can comment on the Wall, which is visible to all Facebook users. The discussion forums are more focused on specific issues and are more detailed in their content. See Figure 1 for an illustration.
Figure 1. Facebook Anxiety Group Page
3. Results For the top group regarding anxiety, there were 266 registered users as of February 3, 2011. The data showed a total of 15 Wall postings by 12 users between July 1, 2010 and January 31, 2011. Two of the Wall postings were commented on by group members. Of the 12 users, 7 discussed their struggles with anxiety, 1 posting was an herbal advertisement, 1 was advertising a course on anxiety, and 3 were links to non-relevant websites and advertisements. Much of the wall discussion centered on individuals’ struggles with anxiety. For example, one person was seeking help because their anxiety was leading them to contemplate suicide. Another Facebook user was asking for advice about prescription drugs and strategies to cope with anxiety. In the discussion tab, there were no group discussions surrounding the issue from July 1, 2010 to January 31, 2011. Several posts, however, were found before this time period, with the highest number of discussions focusing on panic attacks. For HIV, the top group on the subject had 926 registered users as of February 3, 2011. There were only two Wall comments made in this group, from two different users, both regarding disease-related information. As for the discussion forum, there were two postings for the specified time period. One posting was demeaning, whereas another was about a man sharing his struggles with HIV. This particular Facebook user listed his picture, name, location, and work in his public profile. Although it appeared to be genuine, this was not confirmed given the scope and ethics of the study. For the Sickle Cell group, the top group had 3786 members as of February 3, 2011. On the Facebook Wall, there were 32 wall postings made by 24 members of the Sickle Cell group between July 1, 2010 and January 31, 2011. The largest category of postings (a total of 8) shared on the wall was related to general information sharing regarding education, new treatments for the disease, and experiences with physicians or hospitals. There
were seven postings in which the Facebook user was seeking information or help about a problem they were having related to the disease. About 6 postings were related to advertisements for fundraising events or links to products and articles. Finally, members posted stories about their life struggles in living with the disease. With regard to the discussion threads, only two were posted between July 1, 2010 and January 31, 2011. The first discussion thread was about a Facebook user suffering from the disease who expressed their frustration with trying to find adequate care and a job; two respondents provided the Facebook user with support and advice on what to do. The other thread was about a Facebook user contemplating graduate school who was afraid of not being accepted because of his/her disease. A Facebook user responded by encouraging the individual and letting them know of another individual they knew who was working while living with the disease.
4. Discussion In this study, it was found that there are Facebook users sharing their personal details along with their health information without realizing the potential ramifications of doing so. Under HIPAA regulations, there are no laws that stop individuals from sharing their own personal health information [4]. The reasons behind this behavior are difficult to ascertain, given the limitations of this study. One conclusion may be that the individuals sharing their health information on Facebook are unaware that such information could potentially be used against them by unscrupulous organizations or individuals. There have been several recorded instances in the news media where employers have fired Facebook users as a result of their public postings. A recent study on Facebook patient privacy violations showed that numerous privacy violations were carried out by medical residents and students [5]. Therefore, there is a growing need for Facebook to make its users aware of potential abuses that may result from sharing health information online. To remedy this issue, Facebook should provide policies and guidelines and create an awareness campaign for its users regarding the sharing of health information via its social networking site. Another possible interpretation of the nonchalant behavior in sharing health information on Facebook is that it results from the cultural change surrounding patient engagement and empowerment in taking control of one’s own health. For example, Sunnybrook Hospital in Canada has recently provided its patients full access to their personal healthcare records [6]. Google Health and similar technologies are empowering patients to manage their own health. Facebook provides a platform for individuals to connect with other individuals suffering from the same disease or disorder. With over 500 million users on Facebook [7], the potential to connect with people suffering from the same disease or disorder is higher than on any other social media networking site known to date. The main drawback to Facebook is the ability to identify individuals sharing personal health information and the potential misuse of this information by organizations and individuals, which may cause harm to the individual.
5. Limitations and Future Research Based on the results of this study, future research should examine the perceptions of individuals regarding the sharing of personal health information through social media networks such as Facebook, Twitter, and YouTube. Future studies should also examine other diseases and disorders discussed through Facebook to explore the potential threats that may arise as a result of sharing sensitive health information. There are several limitations worth noting in this study. The manual data analysis was conducted by the researcher and was not verified by another researcher; as a result, this may have introduced bias into the interpretation of the results. Furthermore, because of time constraints, the data were limited to the period between July 1, 2010 and January 31, 2011. Finally, there was no way to prove the authenticity of the Facebook users who posted their health information on the Facebook group pages.
6. Ethical Considerations When conducting the research study, various ethical issues were taken into account. All of the data used in this study were publicly available data that could be accessed by any Facebook user with a Facebook account. The researcher did not solicit information, nor did the researcher ask to join a particular group to gain access to data. While conducting the analysis, the confidentiality of the individuals included in the study was strictly enforced; however, anonymity is not guaranteed given that the information is publicly available to any Facebook user. Consent of the groups to use the data was not solicited. With the pervasive use of social networking sites such as Facebook and YouTube, the issues regarding confidentiality, anonymity, and consent are pushed to the limits, and the ethical considerations within the context of such research studies should be re-examined. Acknowledgements: We would like to thank the King Abdullah Institute for Medical Research for their help in editing this document.
References
[1] Lewis N. (2011) Privacy, accountability, lead health IT concerns. Information Week Healthcare. Web access February 3, 2011 [http://www.informationweek.com/news/healthcare/EMR/showArticle.jhtml?articleID=229200192&subSection=News]
[2] Curran WJ, Stearns B, Kaplan H. Privacy, confidentiality and other legal considerations in the establishment of a centralized health-data system. N Engl J Med. 1968;281:241-8.
[3] Wager K, Wickham F, Glaser J. (2005). Managing health care information systems: A practical approach for health care executives. Jossey-Bass. San Francisco, CA, U.S.A. pg. 83-84.
[4] Khan M, Long H. Interview: HIPAA regulations around health information privacy. February 2, 2011.
[5] Thompson LA, Black E, Duff WP, Paradise Black N, Saliba H, Dawson K. Protected Health Information on Social Networking Sites: Ethical and Legal Considerations. J Med Internet Res 2011;13(1):e8. URL: http://www.jmir.org/2011/1/e8/ doi: 10.2196/jmir.1590 PMID: 21247862
[6] Canadian Broadcasting Corporation (CBC). Online health records popular with patients. Canada. January 30, 2011. Web access [http://www.cbc.ca/health/story/2011/01/30/e-health-recordssunnybrook.html#ixzz1D5SAJCKU]
[7] Facebook. Statistics. Web access February 3, 2011 [http://www.facebook.com/press/info.php?statistics]
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-621
End-to-End Security for Personal Telehealth Paul KOSTER a,1, Muhammad ASIM a, Milan PETKOVIC a, b a Philips Research, b TU/e, Eindhoven, The Netherlands
Abstract. Personal telehealth is in rapid development with innovative emerging applications like disease management. With personal telehealth people participate in their own care supported by an open distributed system with health services. This poses new end-to-end security and privacy challenges. In this paper we introduce new end-to-end security requirements and present a design for consent management in the context of the Continua Health Alliance architecture. Thus, we empower patients to control how their health information is shared and used in a personal telehealth eco-system. Keywords. security, privacy, consent, telehealth
1. Introduction Healthcare around the world is facing important challenges through a substantial increase in the average age of the population and an increase in chronic diseases. Personal telehealth systems are expected to take an important role in addressing these issues. They extend healthcare from acute institutional care to outpatient care and home healthcare. Technological developments in this area are accompanied by the standardization, policy and marketing activities of more than 230 companies that have joined their efforts within the Continua Health Alliance [1] to ensure interoperability and further develop the personal telehealth market. Although personal telehealth technologies bring a lot of benefits, they also create new security and privacy challenges. With personal telehealth services, it becomes simpler to collect, store, and search electronic health data, which in turn endangers people's privacy. Furthermore, mistakes that are made because patient measurements are not available, are associated with the wrong patient, or are modified in an unauthorized way can endanger patient safety. Therefore, technological means that empower patients with control over their health information while preventing security breaches and ensuring information correctness are of utmost importance. Traditionally, security in healthcare addresses the protection of sensitive data by considering individual systems and communication links. For personal telehealth applications like remote patient monitoring, common security means are (role-based) access control and secure communication protocols [2]. However, emerging trends towards open, distributed and user-centric telehealth architectures call for a more end-to-end approach to security. Only an end-to-end approach can provide a consistent level of security and meet patient empowerment expectations.
Corresponding Author: Paul Koster, Philips Research, Koninklijke Philips Electronics N.V., High Tech Campus 34, 5656AE, Eindhoven, The Netherlands; E-mail: [email protected]
In this paper, we present new requirements for an end-to-end approach to security for personal telehealth. Moreover, we describe a digital consent management solution. This work represents ideas that have been contributed to and further elaborated in the Continua Health Alliance. This paper is organized as follows. In section 2, we introduce the Continua Health Alliance. Security and privacy requirements are described in section 3. Section 4 focuses on consent management to empower patients and presents a design addressing Continua requirements. Section 5 concludes the paper.
2. Continua Health Alliance The Continua Health Alliance is an industry alliance formed with the intention to foster the growth of personal telehealth. Its 230+ members recognize the need for alignment and interoperability for applications such as disease management, fitness and aging independently. Continua provides a reference architecture for personal telehealth systems and the Continua guidelines [3], which select and profile standards to realize the interoperability objectives. Architecture. The Continua architecture is characterized by its interfaces and device classes as illustrated in Figure 1. Medical observation devices measure people’s vital signs such as weight and blood pressure. These devices can be stationary, portable or body-worn, and use USB, Bluetooth or Zigbee to transmit the measurements to an application hosting device (AHD). Measurement communication follows the IEEE 11073 standard. The AHD acts as an intermediary between observation devices and remote services. The AHD can be a gateway device, PC or smartphone. It uses the WAN interface to forward observations to the remote services such as a disease management organization (DMO). WAN communication makes use of the IHE Patient Care Device transaction standard, which is based on web-services and HL7 2.6 message standards.
Figure 1. Continua end-to-end reference architecture
A WAN service collects the observation data to provide care, e.g. a DMO employing nurses supported by IT systems to coordinate a patient’s care. If measurements fall outside the expected range then a nurse may prepare a Personal Health Monitoring Report (PHMR) and forward this to care providers e.g. a patient’s family physician. The HRN interface facilitates the exchange of the PHMR documents. HRN services interact with WAN services using IHE XDR (Cross-Enterprise Document Reliable Interchange) and IHE XDM (Cross-Enterprise Document Media
Interchange) standards. HRN services include electronic health record (EHR) systems belonging to care providers or personal health record (PHR) systems.
Security. As mentioned in the introduction, trust, security and privacy are very important for the adoption of personal telehealth systems. The same holds for compliance with legislation like EU Directive 95/46 and HIPAA. Continua acknowledges the importance of these issues amongst others through its E2E Security Task Force. The authors actively participate in this task force. Initial security and privacy issues have been addressed in the Continua version 1 guidelines for the PAN and HRN interfaces. The Continua version 1.5 guidelines added security features for the WAN and LAN interfaces, e.g. TLS for secure communication and SAML 2.0 tokens for authentication of AHD users, see Table 1.
Table 1. Security standards in Continua version 1.5.
Interface | Security Standard | Security Objective
HRN | TLS 1.0 | confidentiality + integrity + authentication
HRN | IHE XDM (S/MIME) | confidentiality + integrity + authentication
HRN | IHE ATNA | auditing
WAN | WS-I BSP (TLS 1.0) | confidentiality + integrity + authentication
WAN | IHE ATNA | auditing
WAN | WS-I BSP (WS-Security + SAML 2.0) | entity authentication
LAN | Zigbee security | confidentiality + integrity + authentication
PAN | Bluetooth security | confidentiality + integrity + authentication
3. End-to-End Security and Privacy Requirements
Continua version 1.5 addresses basic security requirements for personal telehealth with a strong focus on point-to-point transport security. However, a telehealth system of such an open and distributed nature calls for an end-to-end approach, as also follows from the risk analysis for remote patient monitoring performed by ENISA [4]. An end-to-end approach helps service providers to ensure compliance with legislation, empowers patients and eases seamless integration by defining a homogeneous security framework. The next paragraphs sketch the end-to-end requirements as identified in Continua.
Identity management. A correct association of health information with patient identities is essential to provide high-quality and safe personal telehealth services. However, a person typically has different identifiers in the various systems of a distributed architecture like Continua. These multiple identifiers imply linking and cross-referencing of identities at the AHD, WAN and HRN systems and services. Up to now, service providers have often offered a vertically integrated solution and dealt with identity management out-of-band. However, larger numbers of patients, operational cost pressure, decreasing vertical integration and the need for vendor interoperability call for standardized in-band identity solutions. This leads to the following requirements: i) measurement uploads should be unambiguously linked to a particular patient, and ii) identity linking should be in-band, using interoperable protocols, and preferably user-initiated.
Integrity and data origin authentication. Health measurements performed by patients require healthcare professionals to place trust in information that patients report. For example, for a blood pressure measurement it is crucial to know that the blood pressure of the registered patient was actually measured on his or her body, that the measurement was taken with a certified device and that it was not modified on the way to the healthcare provider. However, end-to-end integrity is not trivial to guarantee since in the Continua
architecture, health measurements pass through multiple parties and undergo transformations before a health provider obtains them. Therefore, it is required to i) authenticate data sources, including users and devices, and ii) prevent or detect unauthorized data modifications while allowing legitimate transformations. A more detailed description of the related requirements and security mechanisms is presented in [5].
Consent management. Traditionally, consent has been an important concept in healthcare. Signed paper consent forms are used to grant (opt-in) or withhold (opt-out) consent and enable patients to regulate which care providers have access to their health information. In the perspective of user-centered care and patient empowerment trends, patients should be in more direct control in distributed applications such as personal telehealth. Digital consent addresses this requirement and increases consistency, compliance and efficiency for both patients and care providers. Consequently, high-level end-to-end requirements include that i) patients should be able to define and manage their digital consent and privacy policies in a user-friendly manner, e.g. on a device at home or online, ii) digital consent should propagate together with the patient data, and iii) systems of care providers and services must enforce digital consent.
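To make the integrity and data origin authentication requirement more concrete, the following minimal sketch may help; it is not part of the Continua guidelines, and the shared-key scheme, identifiers and message fields are assumptions made only for illustration. The idea is that the measured content is authenticated at its source and verified end-to-end, so that unauthorized modification is detected even though an intermediary adds routing information (a legitimate transformation).

```python
# Illustrative sketch only: end-to-end data origin authentication with a shared key.
# Real deployments would use certified device credentials and the signature formats of
# the applicable standards and profiles, not this toy scheme.
import hmac, hashlib, json

DEVICE_KEY = b"demo-key-shared-with-the-receiving-service"  # assumption for the sketch

def sign_measurement(measurement: dict) -> dict:
    """Authenticate the measured content at the source (observation device / AHD)."""
    payload = json.dumps(measurement, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": measurement, "auth_tag": tag}

def forward_via_ahd(signed: dict) -> dict:
    """An intermediary may add routing data without touching the signed payload."""
    return {**signed, "routing": {"ahd_id": "ahd-01", "wan_endpoint": "dmo.example"}}

def verify_at_service(message: dict) -> bool:
    """The receiving WAN/HRN service re-computes the tag over the original payload."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["auth_tag"])

msg = forward_via_ahd(sign_measurement({"patient": "pat-123", "systolic_mmHg": 142}))
assert verify_at_service(msg)          # unmodified measurement is accepted
msg["payload"]["systolic_mmHg"] = 120  # an unauthorized modification ...
assert not verify_at_service(msg)      # ... is detected
```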
4. Design for Consent Management in the Continua Architecture
Consent management entails the specification, exchange and update of a patient's digital consent preferences. For maximum effect, a patient should indicate his consent and privacy policies as early as possible, such that they can travel together with the patient data through the ecosystem. In Continua, the AHD would be a practical location for a patient to specify his consent. Alternatively, it could be specified at a WAN service by the patient online, or taken care of by a nurse on behalf of the patient. Propagation of consent policies over the WAN and HRN interfaces must be enabled to ensure that disease management organizations and care providers use and share patient data in accordance with the patient's digital consent policy.
The Implementation Guide for HL7 CDA R2 Consent Directive [6] forms the basis for our approach to consent management in Continua. This recently approved draft standard for trial use defines a document format for digital consent and enables the expression of structured patient consent policies. The advantage of this standard lies in the fact that it is based on the CDA R2 standard, which is already used at the Continua HRN interface for the PHMR document. Similarly, well-defined protocols exist for the exchange of this type of document through the IHE XD* family of profiles. Figure 2 provides an overview of how these document and document exchange standards realize the consent management interactions at the HRN interface. The WAN interface solution is a subset of the solution for the HRN interface.
Consent at the HRN interface is supported through a basic and a more advanced interaction. In the basic interaction the patient consent document is included in the same transaction as the PHMR document, as shown in Figure 2a. This makes use of the IHE XDR transaction in Continua, which allows inclusion of both documents in the existing submission set. In the more advanced interaction the patient consent document is retrieved online on demand, as depicted in Figure 2b. Such an interaction allows for more flexibility, as a receiver may obtain consent documents e.g. when it does not have the required consent to perform its intended task. Technically, this variant involves the IHE XDS standard, which provides a superset of the functionality provided by XDR. To enable online
retrieval of consent documents, a sender at the HRN interface implements the Document Repository and Registry actors to host the consent documents. The receiver implements the IHE XDS Document Consumer actor. Optionally, the receiver may query and lookup the appropriate consent document identifiers and their location URLs. Subsequently, the HRN receiver requests the consent document through an XDS retrieve transaction. The receiver may include a token in the request to authenticate to the sender and enable personalization of the consent document. Finally, the sender responds with the requested patient consent document personalized for the recipient.
Figure 2. Consent management at the HRN interface
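To illustrate how a receiving system might enforce a retrieved consent document before using a PHMR, the sketch below reduces a consent directive to a simple opt-in policy object. The structure and field names are assumptions made for this example only and are far simpler than the structured HL7 CDA R2 Consent Directive referenced above.

```python
# Minimal sketch of consent enforcement at a receiving system. The policy structure
# is a simplification invented for illustration; a real system would evaluate the
# structured consent directive document retrieved via IHE XDR/XDS.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    patient_id: str
    permitted_recipients: set = field(default_factory=set)   # organisation identifiers
    permitted_purposes: set = field(default_factory=set)     # e.g. {"TREATMENT"}
    opt_in: bool = False

def may_use_document(policy: ConsentPolicy, recipient: str, purpose: str) -> bool:
    """Allow use of the patient document only if the digital consent grants it."""
    return (policy.opt_in
            and recipient in policy.permitted_recipients
            and purpose in policy.permitted_purposes)

policy = ConsentPolicy(
    patient_id="pat-123",
    permitted_recipients={"family-physician-clinic"},
    permitted_purposes={"TREATMENT"},
    opt_in=True,
)
print(may_use_document(policy, "family-physician-clinic", "TREATMENT"))  # True
print(may_use_document(policy, "research-registry", "RESEARCH"))         # False
```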
5. Conclusions
Novel use cases in personal telehealth can no longer be addressed with point-to-point or transport security alone; a more end-to-end approach to security and privacy is required. The user-centered and open architecture of personal telehealth systems introduces challenging end-to-end security needs in the areas of identity management, integrity, data origin authentication and consent management. This paper presents a design to extend personal telehealth with digital consent. The design is applied to and presented in the context of the Continua Health Alliance interoperability architecture, and demonstrates how the application and combination of novel standards from the healthcare domain realizes consent management and thereby empowers users.
References
[1] Wartena F, Muskens L, Schmitt L, Petkovic M. Continua: The reference architecture of a personal telehealth ecosystem. Proceedings of the 12th IEEE International Conference on e-Health Networking, Application and Services (Healthcom); 2010 July 1-3; Lyon, France; 2010.
[2] Raman A. Enforcing Privacy through Security in Remote Patient Monitoring Ecosystems. Information Technology Applications in Biomedicine; ITAB 2007; Tokyo; 2007.
[3] Continua Health Alliance. Continua Design Guidelines version 1.5. 2010.
[4] Chronaki C, et al. Being diabetic in 2011: Identifying emerging and future risks in remote health monitoring and treatment. EFR Pilot; ENISA; 2009. p. 29.
[5] Petkovic M. Remote Patient Monitoring: Information Reliability Challenges. TELSIKS 2009. IEEE Press; 2009. p. 295-301.
[6] HL7 Implementation Guide for Clinical Document Architecture, Release 2: Consent Directives, Release 1. Draft Standard for Trial Use. January 2011; HL7; 2011.
Public Health, Catastrophes, Outbreaks
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-629
The Epidemiologic Surveillance of Dengue-Fever in French Guiana: When Achievements Trigger Higher Goals
Claude FLAMAND a,1, Philippe QUENEL a, Vanessa ARDILLON a, Luisiane CARVALHO a, Sandra BRINGAY b,c, Maguelonne TEISSEIRE d
a Cellule de l'Institut de Veille Sanitaire en Région Antilles-Guyane
b Department MIAp, University Paul-Valéry, Montpellier 3
c LIRMM, CNRS, UMR 5506, Montpellier 2
d TETIS Laboratory, Department of Information System
Abstract. The epidemiology of dengue fever in French Guiana is marked by a combination of permanent transmission of the virus throughout the country and the occurrence of regular epidemics. Since 2006, a multiple-data-source surveillance system has been in place to monitor dengue fever patterns, to improve early detection of outbreaks and to allow a better provision of information to health authorities, in order to guide and evaluate prevention activities and control measures. This report illustrates the validity and the performance of the system. We describe the experience gained with such a surveillance system and outline remaining challenges. Future work will consist in using other data sources, such as environmental factors, in order to improve knowledge of virus transmission mechanisms and to determine how to use them for outbreak prediction. Keywords. Dengue Fever, Epidemiologic Surveillance, Vector-borne disease, Infectious disease, Public Health Surveillance, French Guiana
1. Introduction
One of the main objectives of infectious disease surveillance systems is to provide early warning of disease outbreaks to those who can take appropriate action. In the last decade, the critical need for better surveillance became more urgent with the threat of bioterrorism, the recognition of the potential for an influenza pandemic [1] and the emergence or re-emergence of infectious diseases in some regions of the world, such as the introduction of West Nile virus in the United States, chikungunya in Reunion Island or cholera in Haïti. Dengue, which is most commonly acquired through the bite of the Aedes aegypti mosquito, is the most important mosquito-borne viral disease affecting humans [2]. The infection is caused by an arbovirus of the Flaviviridae family, with four viral serotypes designated DENV-1, DENV-2, DENV-3 and DENV-4. It produces a spectrum of clinical illness that ranges from an influenza-like illness to potentially fatal dengue hemorrhagic fever (DHF), dengue shock syndrome (DSS), encephalitis or hepatitis. Despite the current development of
Claude Flamand, Epidemiologist. Cellule de l’Institut de Veille Sanitaire en Région Antilles-Guyane. E Mail: [email protected]
several dengue vaccines [3], no vaccine and no curative treatment are available at the moment. Prevention strategies are therefore limited to vector control, and treatment is limited to supportive care aimed at avoiding shock [4]. French Guiana is a French overseas department of about 200,000 inhabitants located in South America, where tropical vector-borne diseases such as dengue fever are responsible for disease outbreaks. Since 2006, a multi-source surveillance system, coordinated by the Cellule de l'Institut de Veille Sanitaire (InVS) en Région Antilles-Guyane (Cire AG), has been in place to monitor dengue fever patterns, to improve early detection of outbreaks and to allow a better provision of information to health authorities to guide and evaluate prevention activities and control measures. The aim of this paper is to describe the experience gained with such a surveillance system and to outline remaining challenges.
Figure 1. Global architecture of the surveillance system
2. Materials and Methods
2.1. General Description of the Surveillance System
The surveillance system integrates health information from multiple data sources located on the coast and in the interior of French Guiana (Figure 1).
1. Biological laboratories (LABM): From 1991 to 2004, the surveillance system was based on the weekly surveillance of cases diagnosed by the French National Reference Center (NRC) for Arbovirus and Influenza virus, located at the Institut Pasteur de Guyane in Cayenne. From 2004 to 2006, the number of laboratories able to perform biological confirmation of dengue gradually increased to 7, distributed over 5 municipalities on the coast in the northern part of the country. The case definition criteria are virus isolation, viral RNA detection by reverse transcription-PCR (RT-PCR), detection of secreted NS1 protein, or a positive serological test based on an immunoglobulin M (IgM)-capture enzyme-linked immunosorbent assay (MAC-ELISA) [5,6].
2. Sentinel network: In 2006, a sentinel network composed of 30 voluntary general practitioners (GPs) located on the coast (representing around 35% of total GP
activity) was implemented to collect clinical cases. A clinical case of dengue fever was defined by the occurrence of fever (equal to or higher than 38°C) with no evidence of other infection, associated with one or more non-specific symptoms including headache, myalgia, arthralgia and/or retro-orbital pain. Every week, public health nurses of the Regional Health Agency (ARS) surveillance unit call the sentinel GPs to collect the number of cases seen during the previous week. The weekly incidence of dengue fever in the coastal area is estimated using the ratio of all GPs to participating sentinel GPs.
3. Hospital Centers: Since 2006, surveillance from the Emergency Departments (EDs) of the three hospitals in French Guiana was set up to collect ED visits for "isolated fever" or "suspicion of dengue". Furthermore, InVS set up a volunteer surveillance network of hospital EDs to collect data on a daily basis [7]. For each patient, age, gender, zip code, reason for admission and main medical diagnosis based on the 10th edition of the International Classification of Diseases (ICD-10) are collected. Since 2008, the Hospital of Cayenne, the main city, has been connected to this network, enabling the monitoring of ED activity related to a medical diagnosis of dengue fever. Follow-up of hospitalized cases of dengue was also set up to monitor the severity of the epidemics.
4. Health Centers (CDPS): The health care system in the isolated territories of French Guiana is based on 17 Health Centers, which are remotely coordinated by the hospital center of Cayenne. Since 2006, a surveillance system based on data transmission by satellite connection has enabled the Cire AG to collect, from each center, the weekly number of suspected cases of dengue, following the same criteria as the sentinel GPs.
2.1.1. Data Analysis and Statistical Methods
Data have been monitored from January 2006 to December 2010. An analysis system using the Shewhart control chart based on moving ranges (MR) [8] was implemented to allow continuous real-time assessment aimed at early outbreak detection. This analysis compares the weekly number of reported cases with a control limit calculated from the average of previous observations and a standard deviation estimated by the moving range of size 2. Every week, data were analyzed according to the Program for Surveillance, Alert and Response (PSAGE) for dengue fever. The PSAGE was elaborated in 2008 by a local vector-borne disease committee composed of epidemiologists, biologists, clinicians, entomologists and specialists in charge of vector control. The program aims at specifying the roles and missions of all stakeholders in integrated vector management, epidemiological surveillance, laboratory diagnosis, environmental management, clinical case management and communication. Five distinct epidemiological situations have been established:
− Stage 1: Sporadic transmission
− Stage 2: Presence of dengue fever clusters in some areas
− Stage 3: Pre-alert epidemic
− Stage 4: Confirmation of the epidemic
− Stage 5: End of epidemic
For each stage, a commensurate combination of preventive and control measures has been determined. The observations from epidemic periods were excluded from the calculation of the alert threshold. The pre-alert epidemic stage was activated if the alert
thresholds were exceeded for two consecutive weeks. The outbreak was confirmed if the thresholds remained exceeded for two additional weeks. Other elements, such as a significant increase in the positivity rate of biological analyses or the re-emergence of a serotype, were used to confirm the entry into the next stage. The end-of-epidemic stage was announced when the numbers of clinical cases and biologically confirmed cases returned below the thresholds.
Table 1. Description of the outbreaks detected from 2006 to 2010, French Guiana
Outbreak period | Clinical cases (N) | Confirmed cases (N) | Serotypes | Hospitalizations | Deaths
W2006-01 (2) – W2006-34 | 15 700 | 2 300 | DENV-2 | 204 | 4
W2009-01 – W2009-38 | 13 900 | 4 129 | DENV-1 | 241 | 2
W2009-53 – W2010-38 | 9 400 | 2 431 | DENV-4, DENV-1 | 92 | 1
(2) The surveillance of biologically confirmed cases allowed identifying the beginning of this outbreak in W2005-48.
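To make the moving-range control chart of section 2.1.1 concrete, the following minimal sketch flags a week whose case count exceeds an upper control limit computed from recent non-epidemic weeks. The reference-window length and the example counts are illustrative assumptions, and the constant 2.66 follows the textbook individuals/moving-range chart, which may differ in detail from the implementation actually used by the Cire AG.

```python
# Sketch of a Shewhart individuals chart with moving ranges of size 2 (cf. [8]).
# Reference window length and data are illustrative assumptions.
def upper_control_limit(reference_counts):
    """UCL = mean + 2.66 * mean moving range (2.66 = 3/d2 for moving ranges of size 2)."""
    mean = sum(reference_counts) / len(reference_counts)
    moving_ranges = [abs(b - a) for a, b in zip(reference_counts, reference_counts[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean + 2.66 * mr_bar

def signal(current_count, reference_counts):
    """Generate a signal when the current weekly count exceeds the control limit."""
    return current_count > upper_control_limit(reference_counts)

recent_weeks = [12, 9, 14, 11, 10, 13, 12, 15]   # weekly counts outside epidemic periods
print(upper_control_limit(recent_weeks))         # roughly 19 for these counts
print(signal(27, recent_weeks))                  # True: a possible alert
```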
3. Results
Confirmed and clinical cases were collected and recorded in the database from 2006 to 2010. Over the study period, 37 812 clinical cases and 10 724 confirmed cases were recorded. The global activity was strongly influenced by the occurrence of outbreak periods (Figure 2).
Figure 2. Weekly number of biologically confirmed and clinical cases of dengue-fever and outbreaks periods, French Guiana, January 2006 – December 2010
As shown in Figure 2, three major outbreaks were detected during the study period (Table 1). During these outbreaks, 80 signals were triggered for confirmed cases and 64 for clinical cases. The occurrence of all these outbreaks was confirmed by the vector-borne disease committee. The duration of the epidemics varied between 38 and 41 weeks. According to the PSAGE, health authorities decided upon a reinforcement of collective and individual vector control measures proportionate to the severity and magnitude of the epidemiological situation. Aside from the outbreak periods, 19 and 9 signals were respectively triggered by the control chart for confirmed and clinical
cases. Epidemiologic investigations conducted to explain these signals also identified relevant clusters in some municipalities.
4. Discussion
The achievements presented in this paper highlight the validity of the surveillance system and its performance in monitoring dengue patterns across French Guiana, detecting outbreaks and providing real-time information to health authorities. The great variety of data sources constitutes a very sound basis for the analysis and interpretation of the epidemiological situation and an essential tool for decision-making within the vector-borne disease committee. In the future, other statistical methods should be implemented using time-series methodology, taking into account data characteristics such as secular trends, seasonality and abrupt changes. Recent outbreaks showed that implementing the PSAGE at a region-wide level was not appropriate given the significant distances between municipalities. Future challenges and developments should therefore focus on smaller territories by applying the PSAGE to relevant spatial units. Another major challenge will be outbreak prediction. This step will consist in using other data sources for surveillance, such as environmental factors (e.g. climatic, meteorological, plant cover and land use data), to help monitor and predict the spatial and temporal distribution of the virus. A research project is now being developed to use an alternative approach based on spatio-temporal data mining. The approach will consist in highlighting the relevant spatial units and the factors associated with a subsequent increase in cases. As an example, the project aims at applying data mining algorithms to identify frequent sequential patterns like <(NDVI ++, Rainfall ++)(BCC ++)>, meaning that the combination of a high Normalized Difference Vegetation Index (NDVI) and heavy rainfall "frequently" leads to an increase in the number of biologically confirmed cases (BCC) of dengue. We will follow the spatial and temporal distribution of these sequential patterns to better understand the mechanisms of virus transmission in order to use them for outbreak prediction.
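As a rough illustration of the kind of sequential pattern mentioned above, the short sketch below counts how often weeks with both high NDVI and high rainfall are followed, within a fixed lag, by an increase in biologically confirmed cases. The thresholds, lag and data are invented for the example; the project itself would rely on proper spatio-temporal sequential pattern mining rather than this direct check.

```python
# Illustrative support count for the pattern <(NDVI ++, Rainfall ++)(BCC ++)>.
# Thresholds, lag and data are hypothetical and used only to show the idea.
def pattern_support(ndvi, rain, bcc, ndvi_high=0.6, rain_high=100.0, lag=2):
    hits = matches = 0
    for week in range(len(bcc) - lag):
        if ndvi[week] >= ndvi_high and rain[week] >= rain_high:
            hits += 1
            if bcc[week + lag] > bcc[week]:
                matches += 1
    return matches / hits if hits else 0.0

ndvi = [0.4, 0.7, 0.7, 0.5, 0.8, 0.8, 0.5]
rain = [60, 120, 130, 40, 150, 140, 30]      # mm per week
bcc  = [5, 6, 9, 14, 12, 20, 28]             # biologically confirmed cases per week
print(pattern_support(ndvi, rain, bcc))      # fraction of (NDVI ++, Rainfall ++) weeks
                                             # followed by a rise in BCC two weeks later
```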
References
[1] M'ikanatha NM, Lynfield R, Van Beneden CA, De Valk H. Infectious Disease Surveillance. Blackwell Publishing, 2007.
[2] Gubler DJ. The global emergence/resurgence of arboviral diseases as public health problems. Arch Med Res 2002, 33(4): 330-342.
[3] Guy B, Saville M, Lang J. Development of Sanofi Pasteur tetravalent dengue vaccine. Human Vaccin 2010, 6(9).
[4] Beatty ME, Stone A, Fitzsimons D, et al. Best Practices in Dengue Surveillance: A Report from the Asia-Pacific and Americas Dengue Prevention Boards. PLoS Negl Trop Dis 2010, 4(11): e890.
[5] Tran A, Deparis X, Dussart P, et al. Dengue spatial and temporal patterns, French Guiana, 2001. Emerg Infect Dis 2004, 10(4): 615-621.
[6] Dussart P, Petit L, Labeau B, et al. Evaluation of Two Commercial Tests for the Diagnosis of Acute Dengue Virus Infection Using NS1 Antigen Detection in Human Serum. PLoS Negl Trop Dis 2008, 2(8): e280.
[7] Josseran L, Fouillet A, Caillere N, et al. Assessment of a Syndromic Surveillance System Based on Morbidity Data: Results from the Oscour Network during a Heat Wave. PLoS ONE 2010, 5(8): e11984.
[8] Montgomery DC. Introduction to Statistical Quality Control, 5th ed. John Wiley & Sons, New York, 2005.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-634
Prescribing History to Identify Candidates for Chronic Condition Medication Adherence Promotion
Jim WARREN a,b,1, Debra WARREN a,b, Hong Yul YANG b, Thusitha MABOTUWANA a,b, John KENNELLY c, Tim KENEALY c, Jeff HARRISON d
a National Institute for Health Innovation
b Department of Computer Science
c Department of General Practice and Primary Health Care
d School of Pharmacy
The University of Auckland, Auckland, New Zealand
Abstract. Poor adherence to long-term prescription medication is a frequent problem that undermines pharmacological control of important risk factors such as hypertension. A medication possession ratio (MPR) can be calculated from Practice Management System (PMS) data to provide a convenient indicator of adherence. We investigate how well prior MPR predicts later MPR, taking MPR<80% as indicative of 'non-adherence,' to assess the potential value of MPR calculation on PMS data for targeting adherence promotion activities by general practices. We examine PMS data for two New Zealand metropolitan general practices, one with a predominantly Pacific caseload, across 2008 and 2009. We find prevalence of non-adherence in 2009 to be 51.63% (95% confidence interval [CI] 47.9-55.3) for patients at the Pacific practice and 28.09% (95% CI 25.0-31.1) at the other practice for patients who are demonstrably active with the practice in 2009. The positive predictive value (PPV) of 2008 non-adherence for 2009 non-adherence is 71.80% (95% CI 66.5-77.1) and negative predictive value (NPV) 61.52% (95% CI 56.9-66.1) for the Pacific practice; PPV is 61.38% (95% CI 54.6-68.2) and NPV is 82.19% (95% CI 79.2-85.2) for the other practice. The results indicate good potential for decision support tools to target adherence promotion. Keywords. Hypertension, information systems, patient non-adherence, quality indicators.
1. Introduction
In this paper, we take adherence as the extent to which a patient's behavior in taking prescribed medications aligns with the instructions and recommendations from the prescriber [1]. Poor adherence to antihypertensive medications is commonplace, even where cost is not a major concern; for example, a Swedish study found satisfactory refill adherence for major classes of antihypertensive agents to be from 55% to 66% [2]. Despite this, providers do not routinely ask about adherence, are often unaware of poor adherence and do not take it into account when titrating dose [3]. We take a particular interest in Pacific adherence to antihypertensive medication. The Pacific population in
Corresponding Author: Prof Jim Warren, Computer Science – Tamaki, The University of Auckland, Private Bag 92019 Auckland 1142, New Zealand; E-mail: [email protected].
New Zealand (NZ) has grown dramatically since World War II, from 2,200 people in 1945 to 266,000 in 2006, with 66% living in the Auckland metropolitan area and Samoan being the largest Pacific ethnic group [4]. This Pacific population has a greater cardiovascular disease (CVD) risk than European New Zealanders [5]. Our research has focused on use of Practice Management System (PMS) records to examine quality of long-term condition management in general, and adherence to longterm medications in particular. NZ ranks well for information technology use in General Practice medicine [6]; individual practices have ready access to their prescribing records and can potentially use these to be more aware of their patients’ adherence. While prescriptions provide an indirect measure of adherence (as compared to dispensing or consumption), we find 93% of prescriptions for long-term medication to be matched within a week by a dispense in NZ national claims data [7]; and we find prescribing based adherence to be associated with significantly increased odds of meeting recommended blood pressure (BP) targets for patients with diabetes [8]. We have used PMS data to identify Samoan patients for study of their perspectives on antihypertensive medication adherence [9], and recently for recruiting patients to a feasibility study of adherence promotion by General Practice staff. If past prescribing records are to be useful in targeting medication adherence promotion, however, we are left with a question: just how well does poor adherence from past prescribing data predict later adherence? In the next section, we describe our software tools, data and protocol for the present study addressing that question. We then give the results and in the discussion section focus on the implications of the patterns we find for practical use of PMS data in promotion of patient adherence.
2. Methods 2.1. Analysis Software We have created a platform for analysis of long-term condition management from PMS data, called ChronoMedit (described in depth elsewhere [10]). ChronoMedit has an ontology of PMS data concepts, including a hierarchy of antihypertensive medications, and is designed to answer several specific classes of query, including queries about continuity of medication supply. We use Medication Possession Ratio (MPR) – percent of days a patient is in supply of a medication – as a key statistic, and choose the common threshold of MPR<80% as defining poor (or ‘non-’) adherence. Antihypertensive MPR is calculated with the model that a patient is ‘in supply’ on a given day if issued a prescription that provides supply on that day if dispensed on the day prescribed and thereafter taken as directed. We consider a patient adherent if they have any antihypertensive supply (ignoring partial non-compliance to combination therapy involving multiple types of pills) and disregard stockpiles. Figure 1 shows a ChronoMedit timeline graph illustrating poor adherence for a 12-month evaluation period (EP) starting 1 January 2008. The figure shows a 6-month ‘run-in’ period; a prescription in the run-in may provide supply into the EP. 2.2. Data Ethics approval was given by the NZ ‘Northern X’ Regional Ethics Committee (protocol NTX/09/100/EXP). Herein we analyse PMS data from two Auckland
Figure 1. ChronoMedit timeline graph for a patient with poor adherence.
metropolitan general practices: Practice One, which focuses on a Pacific clientele, and Practice Two with a relatively typical Auckland caseload. De-identified data extracts were made in June 2010 and include data from 1 July 2007 on prescriptions, lab tests, BP measures and diagnoses, as well as ethnicity of ‘funded’ patients (NZ residents are encouraged to enroll with one primary health organization which is provided partial subsidy for their care). Patients were included for analysis if they had at least one antihypertensive prescription in the period 1 July 2007 to 31 December 2008 and were over age 20. Prescriptions from 1 July 2007 were used as run-in for MPR calculations on the calendar years of 2008 and 2009. Due to the practice-specific nature of our data, a zero MPR may occur when the patient is adherent but now using another provider. As such, it is relevant to know whether the patient is still in some sense active with the practice. We define a patient as “active” for 2009 if they have any prescription, BP measurement, diagnosis, or any of the lab tests we capture (cholesterol, HbA1c, ACR, microalbumin, creatinine, uric acid, eGFR, or fasting glucose) recorded in the PMS during that year. 2.3. Protocol We examine the distribution of 2009 adherence with respect to 2008 data for both practices. The proportion of patients non-adherent in 2008 that are still non-adherent in 2009 can be interpreted as the positive predictive value (PPV) of detecting ongoing adherence problems. Similarly, the proportion adherent in 2008 still adherent in 2009 is the negative predictive value (NPV). If it is to be valuable, PPV should exceed the overall prevalence of non-adherence in the later timeframe (i.e. reduced false positive, FP, rate as compared to providing adherence promotion to everyone). We look at the relationship of 2008 and 2009 adherence for just those patients active with the practice in 2009 as well as for the total of all patients. Reported confidence intervals (CIs) for proportions use standard Gaussian approximation.
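As a simplified illustration of the MPR statistic used in this study, the sketch below computes the percentage of evaluation-period days covered by prescriptions and applies the MPR<80% non-adherence threshold. It is not the actual ChronoMedIt implementation: the supply model is reduced to daily coverage intervals and the example prescriptions and dates are invented.

```python
# Minimal MPR sketch: percentage of evaluation-period (EP) days 'in supply', assuming
# each prescription is dispensed on the day prescribed and then taken as directed.
from datetime import date, timedelta

def mpr(prescriptions, ep_start, ep_end):
    """prescriptions: list of (issue_date, days_supplied); EP boundaries inclusive."""
    covered = set()
    for issued, days in prescriptions:
        for offset in range(days):
            day = issued + timedelta(days=offset)
            if ep_start <= day <= ep_end:
                covered.add(day)
    ep_days = (ep_end - ep_start).days + 1
    return 100.0 * len(covered) / ep_days

# A prescription in the run-in period (late 2007) may provide supply into the EP.
rx = [(date(2007, 12, 10), 90), (date(2008, 4, 1), 90), (date(2008, 9, 1), 90)]
value = mpr(rx, date(2008, 1, 1), date(2008, 12, 31))
print(round(value, 1), "non-adherent" if value < 80 else "adherent")   # 67.8 non-adherent
```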
3. Results Table 1 shows the ethnicity code distribution for the funded patients over age 20 at each practice. 842 and 921 patients in Practice One and Practice Two, respectively, met the inclusion criteria for analysis of their antihypertensive adherence. Table 2 shows the distribution of adherence outcomes for the calendar years 2008 and 2009. This shows that for each practice, for total patients as well as just those active in 2009, the
PPV of 2008 non-adherence significantly exceeds the 2009 prevalence of non-adherence. Note that counts of patients that were inactive, and thus have MPR=0, in 2009 are the differences of the Total and Active groups (e.g. there were 27 [184 minus 157] patients at Practice One that were adherent in 2008 and then inactive in 2009).
Table 1. Self-identified ethnicities by practice (1)
Practice One: Pacific: Samoan 1988 (79%), Cook Island Maori 118 (5%), Niuean 116 (5%), Tongan 98 (4%), Other Pacific 24 (1%); NZ Maori 72 (3%); European, Asian & Other 86 (3%)
Practice Two: European 3107 (83%), NZ Maori 366 (10%), Asian 86 (2%), Pacific 85 (2%), Other 78 (2%)
(1) Individuals may claim up to 3 ethnicities; the first supplied ethnicity is used here.
Table 2. 2009 adherence by 2008 adherence: count, row percentage and 95% CI of row percentage
Practice One, 2008 Adherent: 2009 Active: Adherent 251 (61.52%) [56.9%-66.1%], Non-adherent 157 (38.48%) [33.9%-43.1%]; 2009 Total: Adherent 251 (57.70%) [53.2%-62.2%], Non-adherent 184 (42.30%) [37.8%-46.8%]
Practice One, 2008 Non-Adherent: 2009 Active: Adherent 75 (28.20%) [22.9%-33.5%], Non-adherent 191 (71.80%) [66.5%-77.1%]; 2009 Total: Adherent 75 (18.43%) [14.7%-22.1%], Non-adherent 332 (81.57%) [77.9%-85.3%]
Practice One, Total: 2009 Active: Adherent 326 (48.37%) [44.7%-52.1%], Non-adherent 348 (51.63%) [47.9%-55.3%]; 2009 Total: Adherent 326 (38.72%) [35.5%-41.9%], Non-adherent 516 (61.28%) [58.1%-64.5%]
Practice Two, 2008 Adherent: 2009 Active: Adherent 503 (82.19%) [79.2%-85.2%], Non-adherent 109 (17.81%) [14.8%-20.8%]; 2009 Total: Adherent 503 (79.59%) [76.5%-82.7%], Non-adherent 129 (20.41%) [17.3%-23.5%]
Practice Two, 2008 Non-Adherent: 2009 Active: Adherent 73 (38.62%) [31.8%-45.4%], Non-adherent 116 (61.38%) [54.6%-68.2%]; 2009 Total: Adherent 73 (25.26%) [20.4%-30.2%], Non-adherent 216 (74.74%) [69.8%-79.6%]
Practice Two, Total: 2009 Active: Adherent 576 (71.91%) [68.9%-75.0%], Non-adherent 225 (28.09%) [25.0%-31.1%]; 2009 Total: Adherent 576 (62.54%) [59.5%-65.6%], Non-adherent 345 (37.46%) [34.4%-40.5%]
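As a check on the protocol of section 2.3, the counts in the 'Active' columns of Table 2 reproduce the reported statistics directly; the short sketch below recomputes PPV, NPV and prevalence for Practice One (the same arithmetic gives the Practice Two values). The final line is one reading, consistent with the Table 2 figures, of the false-positive-rate reduction discussed later in the paper.

```python
# Recomputing Practice One (Active in 2009) statistics from the Table 2 counts.
adh_adh, adh_non = 251, 157      # 2008 adherent     -> 2009 adherent / non-adherent
non_adh, non_non = 75, 191       # 2008 non-adherent -> 2009 adherent / non-adherent

ppv = non_non / (non_adh + non_non)            # 191/266 = 0.718
npv = adh_adh / (adh_adh + adh_non)            # 251/408 = 0.615
prevalence_non = (adh_non + non_non) / (adh_adh + adh_non + non_adh + non_non)  # 0.516

# False positives among those targeted: intervene on everyone vs. on 2008 non-adherent.
fp_rate_everyone = 1 - prevalence_non          # about 0.48 of interventions wasted
fp_rate_targeted = 1 - ppv                     # about 0.28
print(round(ppv, 3), round(npv, 3), round(fp_rate_everyone - fp_rate_targeted, 3))
# 0.718 0.615 0.202  -> roughly the "around a 20%" FP-rate reduction noted in the discussion
```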
4. Discussion and Conclusion It is unsurprising (e.g. from [1]) that analysis of PMS prescription data indicates high rates of poor medication adherence; this is a major population health issue generally, and particularly so for the Pacific population. Given the complex psychosocial basis of non-adherence, however, it is surprising that there is as much change in patient status from one year to the next as we observe. Use of the results for adherence promotion depends on the intervention. For an intensive promotion (e.g. counseling and education, dose simplification and support for medication administration), any reduction in FP rate (i.e. intervening on patients who would have been adherent anyway) represents considerable savings. For an inexpensive screening, such as brief conversation with practice staff, or a questionnaire in the waiting room as a precursor to more intensive intervention, the benefit of preventing an FP is less. Even in the screening context, however, PMS data could be useful: MPR<80% is difficult to dismiss without a clear reason (e.g. a prolonged hospital stay);
and patients with poor past adherence and now inactive warrant phone contact to determine if they have transferred care. The value of prescribing history decreases as overall prevalence of non-adherence increases – in Practice One there is only around a 20% reduction in FP rate in using past non-adherence as a predictor as compared to assuming everyone non-adherent, whereas the FP reduction is 33% in Practice Two. Practices should adjust intervention strategies to their prevalence rates. Since adherence promotion is just one of many demands on the health workforce, automated methods, such as cell phone based reminders, are an attractive option. MPR<80% is a good candidate as an invocation criterion for such services. Patients could be shown something like Figure 1 to explain why they have been contacted. Study limitations include the use of data from just two practices, each with distinct caseloads. Correlation to national databases (pharmaceutical claims, hospital admissions, mortality) would remove much of the uncertainty in looking at practice data alone; and if this could be done in near real-time via a national e-pharmacy network it would save practices from many FP follow-ups. In conclusion, high rates of poor adherence are indicated in analysis of PMS prescription data for antihypertensive medications. Poor adherence in one year is predictive, although far from perfectly, of poor adherence the next. Thus, practices wishing to target their adherence promotion efforts would potentially benefit from decision support tools that use past prescribing records to compute MPR. Acknowledgments: We thank Dr Kuineleti Chang Wai for her insights on Pacific adherence issues. This work was supported by an NZ Health Research Council Feasibility Study grant (HRC 09/136R) and a University of Auckland Research Development grant.
References
[1] World Health Organization. Adherence to Long-term Therapies: Evidence for Action, Geneva, 2003.
[2] Andersson K, Melander A, Svensson C, Lind O, Nilsson JL. Repeat prescriptions: refill adherence in relation to patient and prescriber characteristics, reimbursement level and type of medication, Eur J Public Health 15 (2005), 621–626.
[3] Heisler M, Hogan MM, Hofer TP, Schmittdiel JA, Pladevall M, Kerr EA. When more is not better: treatment intensification among hypertensive patients with poor medication adherence, Circulation 117 (2008), 2884–2892.
[4] Statistics New Zealand and Ministry for Pacific Island Affairs. Demographics of New Zealand's Pacific Population, Wellington, 2010.
[5] Sundborn G, Metcalf PA, Gentles D, Scragg RK, Schaaf D, Dyall L, et al. Ethnic differences in cardiovascular disease risk factors and diabetes status for Pacific ethnic groups and Europeans in the Diabetes Heart and Health Survey (DHAH) 2002-2003, Auckland New Zealand, N Z Med J 121 (2008), 28–39.
[6] Schoen C, Osborn R, Doty MM, Squires D, Peugh J, Applebaum S. A survey of primary care physicians in eleven countries, 2009: perspectives on care, costs, and experiences, Health Aff (Millwood) 28 (2009), w1171–w1183.
[7] Mabotuwana T, Warren J, Harrison J, Kenealy T. What can primary care prescribing data tell us about individual adherence to long-term medication? Comparison to pharmacy dispensing data, Pharmacoepidemiol Drug Saf 18 (2009), 956–964.
[8] Mabotuwana T, Warren J, Kennelly J. A computational framework to identify patients with poor adherence to blood pressure lowering medication, Int J Med Inform 78 (2009), 745–756.
[9] Chang Wai K, Elley CR, Nosa V, Kennelly J, Mabotuwana T, Warren J. Perspectives on adherence to blood pressure lowering medications among Samoan patients: qualitative interviews, Journal of Primary Health Care 2 (2010), 217–224.
[10] Mabotuwana T, Warren J. ChronoMedIt: a computational quality audit framework for better management of patients with chronic conditions, J Biomed Inform 43 (2010), 144–158.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-639
Challenges for Signal Generation from Medical Social Media Data
Johannes DREESMAN a, Kerstin DENECKE b,1
a Niedersächsisches Landesgesundheitsamt, Hannover, Germany
b L3S Research Center, Hannover, Germany
Abstract. Early detection of disease outbreaks is crucial for public health officials to react and report in time. Currently, novel approaches and sources of information are being investigated to address this challenge. For example, data sources such as blogs or Twitter messages are becoming increasingly important for epidemiologic surveillance. In traditional surveillance, statistical methods are used to interpret reported numbers of cases or other indicators of potential disease outbreaks. For data collected from other sources, in particular data extracted from unstructured text, it is still unclear whether these methods can be applied. This paper surveys existing methods for interpreting data for signal generation in public health. In particular, problems to be addressed when applying them to social media data are summarized and future steps are highlighted. Keywords. Epidemic Intelligence, Signal Generation, Disease Surveillance
1. Introduction
Threats to public health, for example those related to disease activity in humans and animals or to bioterrorism, are monitored by public health officials. Factors such as globalization and climate change contribute to the fast emergence of disease activity and increase the need to detect such threats as early as possible, making early detection of disease activity even more important. Thus, besides indicators such as the number of reported cases or drug prescriptions, new information sources (e.g., social media data from the Web) are being considered and investigated for the purpose of epidemiologic surveillance.
Epidemiologic surveillance comprises the process of gathering and analyzing data related to human health and disease in order to identify and characterize public health events early and to be aware of disease activity in the human population. The objectives of this process are therefore situational awareness and early event detection [1]. Within this process, the occurrence of a public health event is recognized from input data. The input data is processed by detection methods, and indicators or hints of health events are identified. In the following signal generation step, several indicators are combined and analyzed. In the simplest case, the indicator frequency is compared to a predefined threshold; in more complex approaches, statistical algorithms are exploited. So far, these algorithms have been used to analyze indicator data received through traditional reporting mechanisms, such as the number of cases reported by physicians or laboratories. It is still unclear whether they are also useful for analyzing
Corresponding Author.
data collected from other information sources. In particular, the challenges to be addressed when interpreting indicators gained from medical social media data have not yet been identified. In this paper, we describe the problem of signal generation from medical social media data and briefly summarize existing methods for signal generation. Finally, the challenges to be addressed when exploiting medical social media data for signal generation and epidemiologic surveillance are outlined, and steps for future work are pointed out.
2. Signal Generation from Medical Social Media Data 2.1. Signal Generation from Medical Social Media Data A signal is considered a hint to a public health event. To generate a signal, input data is processed, indicators are detected and analyzed. In traditional surveillance, values of indicators are directly collected (e.g., number of cases reported by a laboratory) and provided to the analysis methods. When considering unstructured texts such as medical social media data, these indicators need to be detected first, before generating signals. Indicators to be detected from social media data could be frequent mentions of specific symptoms or disease names. A signal is generated on the basis of indicators and thresholds. If an indicator exceeds the threshold, a signal is generated. The thresholds may be constant or variable in time. They may refer to the absolute value of the indicator or to changes of the indicator in time. 2.2. Existing Systems for Epidemiologic Surveillance Existing systems for epidemiologic surveillance differ in the kind of sources they monitor and the kind of information they provide. Hartley et al. provide an overview on the landscape of event-based surveillance [2]. In contrast to indicator-based (or traditional) surveillance where indicators are used for surveillance (such as number of reported cases or drug prescriptions), event-based systems use additional sources of information. In this paper, the focus is on event-based systems. Two example systems are HealthMap 2 and MedISys 3 using news media and public health websites as information source. Signals are generated by using a simple threshold method. BioSense4 exploits outpatient data along with medical laboratory test results. To this structured data, methods for signal generation as introduced in section 3 are applied. Considering social media data for this purpose has just been started and many challenges still need to be addressed. Further, statistical methods known from traditional surveillance are not yet applied in event-based surveillance. In this paper, we summarize the problems for signal generation from medical social media data.
2 http://www.healthmap.org
3 http://medusa.jrc.it/medisys
4 http://www.cdc.gov/biosense
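To illustrate the indicator-plus-threshold scheme described in section 2.1, the following sketch counts disease-related mentions in a stream of short social media posts and raises a signal when the daily count exceeds a fixed threshold. The keyword list, example messages and threshold are invented for the illustration; real indicator extraction would need proper text mining and handling of the noise discussed later in this paper.

```python
# Toy illustration of threshold-based signal generation from social media text.
KEYWORDS = {"measles", "fever", "rash"}

def daily_indicator(posts):
    """Indicator value: number of posts mentioning at least one keyword."""
    return sum(1 for text in posts if KEYWORDS & set(text.lower().split()))

def generate_signals(posts_by_day, threshold=5):
    """Return the days whose indicator value exceeds the (constant) threshold."""
    return [day for day, posts in posts_by_day.items()
            if daily_indicator(posts) > threshold]

stream = {
    "2011-03-01": ["feeling fine today", "great weather"],
    "2011-03-02": ["high fever and rash", "my kid has measles", "fever again",
                   "rash will not go away", "measles outbreak at school?",
                   "fever, headache, rash"],
}
print(generate_signals(stream, threshold=3))   # ['2011-03-02']
```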
3. Methods for Signal Generation
Indicator-based surveillance systems exploit a variety of temporal and spatial methods for signal generation. The basic idea behind all methods is to search for aberrations of the observed values from the expected level. These aberrations might occur in time, in space or in combinations of both. Depending on the disease under investigation and on the reporting system, there might be substantial variation of the indicator in time or space anyway. In the time domain, this variation is often caused by seasonal effects (e.g. seasonal influenza) and can be taken into account if expected values are mainly generated from the same season of past years. Incompleteness of a reporting system, i.e. missing data for some period of time, is not such an issue for the statistical procedures if the amount of incompleteness remains stable over time.
3.1. Methods based on Simple Thresholds
The Bayes threshold is calculated from the reference data of the six weeks before the currently observed value, by estimating the coefficients of a negative binomial distribution from the observed data and calculating an upper threshold with an alpha of 0.05. The set of reference data can also be extended to reference data from earlier years [7]. Furthermore, surveillance institutes and health organizations such as the Robert Koch-Institut (RKI, Germany) or the Centers for Disease Control and Prevention (CDC) have implemented their own thresholds. The RKI threshold [3] is calculated from the reference data of the six weeks before the currently observed value; the set of reference data can also be extended to reference data from earlier years. The CDC threshold is usually based on reference data from the past five years, with reference values taken from several weeks. From the reference values, the mean and the variance are used to calculate an upper 95% prediction limit, which serves as the threshold [9].
3.2. Methods based on Regression Analysis
The threshold methods described in section 3.1 suffer from the weakness that a secular trend in the indicator cannot be considered. Outbreaks in the past are likely to disturb the estimation of the threshold. Finally, the statistical properties of the methods become questionable if the indicator presents very low counts. To overcome these problems, Farrington et al. proposed an approach based on generalized linear models, which is by now broadly applied in European countries for indicator-based surveillance [8]. The approach fits a regression model to the data over several years, allowing for a secular trend. Outbreaks in the past are automatically identified and removed, and the statistical distribution fits either rare counts or frequent counts [8].
3.3. Methods based on Quality Control Measures
Cumulative sum or CUSUM methods [5,6] originated in quality control. They do not focus on the total aberration of an observed value from an expected value in one particular period of time, but on several consecutive periods, and sum up aberrations in one particular direction. If there is a similar tendency in consecutive weeks, the sum of aberrations rises above a particular threshold and a signal is generated. Michael Höhle [3] provides a package for the statistical software R with several statistical algorithms for surveillance implemented. It contains functionality to
visualize routinely collected surveillance data and provides algorithms for the statistical detection of potential outbreaks. The inputs to these algorithms are univariate or multivariate time series of case counts (number of cases and whether there was an outbreak). In the package, all time series methods mentioned before are implemented. The package is currently used by several European national health authorities. 3.4. Detection of Spatial Clusters by using the Scan-Statistic The spatial Scan-Statistic scans the area of interest by using a circular window [4]. This window is moved over the area and the diameter is changed, such that the window covers one district or several neighboring districts. For each window, joint case density of the districts inside the window is compared with case density outside. Simulation methods are used to assess whether a difference between inside and outside the window is statistically significant. In order to adjust for varying population densities, population data of the regions can be incorporated. If no population data are available, data of another "control disease" can be used as a reference. The spatial Scan-Statistics can be calculated by the computer program SatScan which is freely available [4].
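A minimal sketch of the CUSUM idea from section 3.3 follows; the reference mean, the allowance and the decision threshold are illustrative choices, not the tuned values a health authority would use. Deviations above the expected level are accumulated across consecutive weeks, and a signal is raised once the cumulative sum crosses the threshold.

```python
# One-sided CUSUM for weekly counts (cf. section 3.3); parameters are illustrative.
def cusum_signals(counts, mu, k=1.0, h=5.0):
    """mu: expected weekly level; k: allowance; h: decision threshold.
    Returns indices of weeks where the cumulative positive deviation exceeds h."""
    s, signals = 0.0, []
    for week, x in enumerate(counts):
        s = max(0.0, s + (x - (mu + k)))
        if s > h:
            signals.append(week)
            s = 0.0   # restart the sum after a signal (one common convention)
    return signals

weekly = [10, 11, 9, 12, 13, 14, 15, 16, 13, 12]
print(cusum_signals(weekly, mu=10.0))   # signals once the upward drift accumulates
```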
4. Challenges for Generating Signals from Social Media Data
Shmueli and Burkom summarized in [10] the statistical challenges of monitoring time series. They identified four characteristics that distinguish the epidemiologic surveillance setting: the underlying background behavior, the nature of outbreaks, the evaluation of performance, and the requirements and uses of surveillance systems. When evaluating signal generation methods, three criteria are used: sensitivity, specificity and timeliness [2]. Sensitivity and specificity are well-known evaluation measures. Timeliness can be measured by subtracting the time of signal generation from the time of the event itself. This measure is crucial, since the early detection of disease activity is a high priority and tools are only accepted by public health officials when they show a clear benefit compared with existing reporting and analysis processes. Signal generation methods can normally be adjusted to increase or decrease the individual quality measures, mainly by adapting thresholds or other parameters. However, this process is difficult since the measures depend on each other: an increase in specificity can lead to a decrease in sensitivity, and so on. Already in traditional surveillance systems, a challenge in the spatial domain is the unequal distribution of the population. In addition, substantial spatial variation might occur due to differences in diagnostic or reporting behavior; the latter is much more difficult to assess, and adjustment is complicated.
In the following, we focus more specifically on concrete challenges when considering social media data for epidemiological surveillance. The statistical methods described before are based on certain assumptions [1]. Indicator-based surveillance in general already violates some of these assumptions (e.g., observations are autocorrelated). When using social media data, even more assumptions are violated due to the peculiarities of this data. While indicators used in traditional surveillance can be considered true, social media data is extremely noisy and new data arrives every second (e.g. Twitter data).
Indicators of a public health event can be detected in various social media sources (e.g. in news articles and in blog postings). It is unclear how to aggregate these indicators before analyzing them within the signal generation process. Further, indicators might be mentions of symptoms; but which symptoms together constitute a signal of a public health event? In addition, information that is normally available in traditional surveillance could be missing in social media data (e.g. detailed information on location or time). It is still a challenge to find a solution for processing incomplete data with these algorithms, i.e. whether to consider such data anyway or better leave them out.
5. Conclusions and Future Work
In this paper, the problem of generating signals for epidemiological surveillance from medical social media data has been characterized. Even though signal generation methods from indicator-based surveillance are available, it is still unclear to what extent these methods are applicable to the specific problem of signal generation from social media data. We collected the challenges to be considered. In future work, the methods need to be applied to data collected from unstructured documents, and solutions for these challenges need to be found. For this purpose, a standard data set that could be used to evaluate the various statistical methods with respect to sensitivity, specificity and timeliness would be very helpful; such a data set for medical social media data is still missing. It would help to test and adapt methods more easily.
Acknowledgements: This research is part of the M-Eco project, funded partly under grant 247829 by the European Commission.
References
[1] Fricker RD. Biosurveillance: Detecting, Tracking, and Mitigating the Effects of Natural Disease and Bioterrorism. Encyclopedia of Operations Research and the Management Sciences, 2010.
[2] Hartley DM, et al. The Landscape of International Event-based Biosurveillance. Emerging Health Threats Journal, 3, 2009.
[3] Höhle M. surveillance: An R package for the surveillance of infectious diseases. Computational Statistics (2007), 22(4), pp. 571-582.
[4] Kulldorff M, Nagarwalla N. Spatial disease clusters: detection and inference. Statistics in Medicine (1995), 14: 799-810.
[5] Rossi G, Lampugnani L, Marchi M. An approximate CUSUM procedure for surveillance of health events. Statistics in Medicine, 1999, 18: 2111-2122.
[6] Rogerson PA, Yamada I. Approaches to syndromic surveillance when data consist of small regional counts. Morbidity and Mortality Weekly Report, 2004, 53/Supplement: 79-85.
[7] Riebler A. Empirischer Vergleich von statistischen Methoden zur Ausbruchserkennung bei Surveillance Daten (Empirical comparison of statistical methods for outbreak detection in surveillance data), Bachelor thesis, 2004.
[8] Farrington P, Andrews N, Beale A, Catchpole M. A statistical algorithm for the early detection of outbreaks of infectious disease. J. R. Statist. Soc. A, 159, 1996: 547-563.
[9] Stroup D, Williamson G, Herndon J, Karon J. Detection of aberrations in the occurrence of notifiable diseases surveillance data. Statistics in Medicine, 8, 1989: 323-329.
[10] Shmueli G, Burkom H. Statistical Challenges Facing Early Outbreak Detection in Biosurveillance. Technometrics, Vol. 52, No. 1 (February 2010), pp. 39-51.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-644
Providing Trust and Interoperability to Federate Distributed Biobanks
Martin LABLANS a,1, Sebastian BARTHOLOMÄUS a, Frank ÜCKERT a
a Institute of Medical Informatics, University of Münster, Germany
Abstract: Biomedical research requires large numbers of well-annotated, quality-assessed samples which often cannot be provided by a single biobank. Connecting biobanks, researchers and service providers raises numerous challenges, including trust among partners and towards the infrastructure as well as interoperability problems. Therefore we develop a holistic, open-source and easy-to-use IT infrastructure. Our federated approach allows partners to reflect their organizational structures and protect their data sovereignty. The search service and the contact arrangement processes increase data sovereignty without stigmatizing partners who reject a specific cooperation. The infrastructure supports daily processes with an integrated basic sample manager and user-definable electronic case report forms. Interfaces to existing IT systems avoid re-entering of data. Moreover, resource virtualization is supported to make underutilized resources of some partners accessible to those with insufficient equipment, for mutual benefit. The functionality of the resulting infrastructure is outlined in a use case that demonstrates collaboration within a translational research network. Compared to other existing or upcoming infrastructures, our approach ultimately has the same goals, but relies on gentle incentives rather than top-down imposed progress. Keywords: Biobank, Network, Infrastructure, Federation, Metadata, MDR
1. Introduction
Modern biomedical research needs large, high-quality and well-annotated stocks of samples. To achieve statistical significance, samples taken from one biobank alone are often not sufficient and must be complemented by samples and data from other institutions; hence the need for biobank networking. By overcoming the limitations of isolated efforts, research networks aim to connect actors from many different domains, ranging from physicians performing clinical studies through pathologists to scientists in basic research. But when trying to connect all these partners, various challenges arise. In this article we focus on the sensitive field of trust among researchers and in an IT infrastructure, as well as various problems of interoperability regarding IT systems, semantics and workflows [1]. We claim that these problems can be addressed by means of a holistic IT concept. To support this claim, we provide the concept for open-source software that can be set up by anyone to manage samples, describe them unambiguously despite heterogeneous domains, and connect to other such installations for collaboration, e.g. sample interchange. The software will allow every user to reflect their data protection policies and preserve their data sovereignty.
1 Corresponding Author.
2. Methods: Concepts and Components
2.1. Trust
Barson et al. [2] state that fears and a lack of trust are among the greatest barriers to sharing knowledge, and Sheth [3] and Ölund et al. [4] add that owners of an information source are, in many cases, only willing to share their knowledge if they retain control over who can access it. Applied to the medical domain, this means that having collected data and samples over years, a responsible researcher wants to carefully consider with whom to share this considerable asset. The need for data sovereignty and the various organizational structures suggest that centralized or mono-hierarchic approaches have to be avoided where possible. Data should rather be kept locally, and sharing should be optional or even on an on-demand basis. We establish a federated network infrastructure consisting of autonomous nodes. The central software component of each node is a so-called coordinator service, a standalone research infrastructure hosted by a single partner for his own needs. To avoid dependencies on a platform operator, our concept avoids essential software components a partner could not (have) set up himself. Different instances of the coordinator software can be interconnected to build various forms of network structures, resembling different kinds of organizational structures. Information about available samples and data can be requested at each coordinator, but such information never leaves the local node without predefined or manual permission. Moreover, fine-grained access rules can aggregate result sets depending on the requesting partner. This federated approach puts partners in control, allowing them to reject requests not only for samples but even for the data describing them.
2.2. Interoperability
Technical interoperability within the infrastructure is achieved by hiding the heterogeneity of the partners' IT systems behind the local coordinator nodes, which store and communicate data in a homogeneous way. Obviously this just shifts the problem to the local instances, which now have to communicate with the various local IT systems to acquire the necessary data in the first place. This acquisition needs to happen as transparently as possible. As Ochs and Casagrande state, "a successful system must … mesh seamlessly into the researcher's workflow" [5] (an important aspect of process interoperability), since "if the submission of data for research and monitoring purposes requires an extra step, … the process will likely fail" [6]. In discussions with biobanks of different sizes, we have identified two basic scenarios. (Scenario A) Large biobanks are likely to use proprietary software solutions and might even employ their own IT experts. For this scenario the coordinator nodes provide the socket part of an open machine-to-machine interface based on established technologies and standards (e.g. HL7 or the German xDT format family), so software-specific connectors can plug into them. (Scenario B) Small and mid-size biobanks, however, cannot rely on broad IT expertise and are likely to use spreadsheet-based solutions or even plain paper. These should be replaced by integrated applications designed to be easily set up, managed and used. This includes sample management software of reduced complexity and a user-friendly toolset for electronic case report forms (eCRF), which map input elements to their metadata definitions.
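Returning to the fine-grained access rules described in Section 2.1, the following minimal sketch illustrates how a coordinator node could answer a request while enforcing a per-requester rule; the class name, rule names and result format are assumptions made for illustration only and do not describe the actual coordinator interface.

    # Sketch: per-requester access rules enforced locally by a coordinator node.
    DETAIL, COUNT_ONLY, DENY = "detail", "count_only", "deny"

    class CoordinatorNode:
        def __init__(self, samples, access_rules, default=DENY):
            self.samples = samples            # list of dicts describing local samples
            self.access_rules = access_rules  # requester id -> DETAIL | COUNT_ONLY | DENY
            self.default = default            # unknown requesters get the strictest rule

        def query(self, requester, predicate):
            rule = self.access_rules.get(requester, self.default)
            if rule == DENY:
                return {"status": "rejected"}                # nothing leaves the node
            hits = [s for s in self.samples if predicate(s)]
            if rule == COUNT_ONLY:
                return {"status": "ok", "count": len(hits)}  # aggregated result only
            return {"status": "ok", "samples": hits}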
Besides their IT, some partners are also disadvantaged regarding their equipment, maintenance and quality control, while well-equipped partners sometimes suffer from insufficient utilization, raising the expenses per sample. Sharing of quality-controlled resources could overcome these issues. In our approach, users can provide several kinds of surplus assets to others. The resources are provided "as a service", i.e. on an on-demand basis and together with trained personnel to handle them. To the user, the assets are displayed in a virtualized form and can be used transparently as if they were his own. Services may include transportation of samples, provision of storage, analyses, quality assurance, certification or the provision of IT components along with the inclusion of existing data sources. Another aspect of interoperability is semantics. According to Riegman et al. [7], samples are only as valuable as the annotations describing them, so precisely defined metadata items are fundamental for a research infrastructure. But due to the heterogeneity, continuous evolution and dissent among partners, a single universal metadata registry is unlikely to serve the needs of all [8,9]. We are currently examining the feasibility of different strategies for community-based metadata based on ISO 11179, ontologies and the Resource Description Framework (RDF).
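As an illustration of how a precisely defined metadata item could be attached to a sample description in RDF, the sketch below builds a few triples with the rdflib library; the namespaces, identifiers and property names are invented for the example and do not represent an agreed biobank vocabulary.

    # Sketch: annotating a sample with community-defined metadata items as RDF triples.
    # Namespaces and identifiers are invented for illustration only.
    from rdflib import Graph, Literal, Namespace, RDF

    MDR = Namespace("http://example.org/mdr/")      # hypothetical metadata registry
    BB = Namespace("http://example.org/biobank/")   # hypothetical local biobank

    g = Graph()
    sample = BB["sample/4711"]
    g.add((sample, RDF.type, BB.TissueSample))
    # 'tumourGrade' plays the role of an ISO 11179-style data element definition
    g.add((sample, MDR.tumourGrade, Literal("G3")))
    g.add((sample, MDR.storageTemperature, Literal(-80)))
    print(g.serialize(format="turtle"))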
3. Result: The Infrastructure Applied to a Translational Research Network
The Translational Sarcoma Research Network (TranSaRNet), a national consortium researching high-grade sarcomas at eight different locations [10], is one of the first networks for which we are implementing the described infrastructure. In discussions with their clinicians, sample collectors and study centers, we have developed an exemplary use case which demonstrates how the different components could interact to serve a translational research network. The biobank of the Pathology in Münster (a) offers various services to the consortium, inter alia quality-controlled cryo storage. It is managed using the sample management software and registers its freezers as virtual resources at the IT infrastructure. A clinical collector (b) wants to begin collecting tissue samples, but cannot provide storage of adequate quality. He defines the requirements and the IT infrastructure returns a set of matching biobanks. The collector chooses the aforementioned cryobank. The collector can send and retrieve samples and monitor the current state of his "virtual freezer" using the sample management software. The local clinic (c) informs the collector once a sample is available (d), and reports the case to the corresponding international study center (e.g. for bone (COSS trials) and bone-associated sarcomas (CESS/Ewing trials)) (e). The collector sends the sample to the network's reference pathologist (f), who returns a histological diagnosis (g). Finally, the sample is sent to the cryobank for preservation (h). The collector retrieves further data along with a pseudonym from the respective study center (i) and uses an ETL process and/or an eCRF to register (j) the samples along with their study pseudonym at the central database (k). At some point, a sarcoma researcher (l) wants to conduct a study. He uses a web interface to query (m) for the required samples, and the IT infrastructure contacts the federated biobanks and searches for matching samples. The collector's samples are found among the suitable ones, but his privacy settings do not transmit his identity; they provide only the rough number of available samples. The scientist now creates a cooperation request stating the objective of the study and sends it to the anonymous sample holders.
Figure 1: A translational research network connected by instances of the IT infrastructure (use case).
The consortium reviews the request and decides to cooperate. They record the terms for publications and define rules for sample access: the corresponding samples are mapped into a virtual storage container which is defined to be accessible by both partners. The researcher uses the now accessible pseudonyms to retrieve additional medical annotations from the study central (n). As the study includes a complex DNA analysis, the scientists use the IT infrastructure to employ the consortium’s next generation sequencing device (o) and access the results (p).
4. Discussion
During the last decade, several projects all over the world have been initiated to provide research infrastructures which might fit our problem, all of which differ in their target groups as well as in their assumptions and philosophies. The following selection of projects provides a short insight and a starting point for further investigation. The mission of the Central Research Infrastructure for molecular Pathology (CRIP) [11] is to arrange contacts between researchers and data and material holders. A proprietary upload tool, the "In-house Research Database", is set up by the CRIP office and allows the partners to upload anonymized extracts of their data to the central CRIP database, rather than keeping the data locally as in our federated approach. Similar to our concept, the cancer Biomedical Informatics Grid (caBIG) [12] maintains a federated network in which the data of partners remains at their locations. caBIG allows sharing services and resources, registered at a central index server, and uses heterogeneous but interoperable data storages. However, as reflected by differing "compliance levels", these services are not all equally well integrated into caBIG applications and are thus far from our concept of transparent resource virtualization. This and other factors, such as incompatibility with restrictive firewalls, lead us to the impression that caBIG requires extensive in-house IT expertise. The Biobanking and Biomolecular Resources Infrastructure project (BBMRI) [13] aims to interconnect high-throughput analysis platforms, biobanks and researchers on a European basis. Within the preparatory phase, BBMRI participants have developed concepts, strategies and procedures for the envisioned infrastructure. With the end of the preparatory phase in 2010, BBMRI will no longer be continued as an EU-funded initiative, but in the form of single national projects. As our concept explicitly includes interfaces to other networks, our infrastructure might well serve as a national hub for BBMRI, depending on when and how BBMRI is realized.
Our federated approach allows partners to exercise their data sovereignty and reject search requests. Using that privilege, though, is not always possible without fear of stigmatization for rejecting a specific cooperation. The search service allows partners to hide their identity in a result set, and an optional search broker can even hide the source of search results on a technical level. In order to establish contact, the requesting partner has to provide information about himself, his project and its purpose, so the data holder can decide to answer, reject or simply ignore the cooperation request. Full anonymity, however, cannot be guaranteed due to the effect of out-of-band knowledge. For example, if a scientist knows that one of his colleagues is a partner of the network but his search returns an empty result set, he knows that his colleague in particular did not answer. It has to be made clear that this semi-automated approach performs neither better nor worse than a conventional letter or telephone call. Compared to related work, our goals are ultimately the same: to make data and material of collectors and biobanks available in an unambiguous way to promote collaborative research and the effective use of existing resources. However, by focusing on data sovereignty and user-generated metadata, we believe we provide an alternative that is more promising in terms of user acceptance. We are currently implementing our infrastructure at several biobanks to establish a virtual biobank for the University of Münster and connect it with further nodes for research consortiums such as TranSaRNet [10]. If successful, this core network will form the starting point for what we believe will be a new generation of biobanking.
References
[1] Gibbons P, Arzt N, Burke-Beebe S, et al. Coming to Terms: Scoping Interoperability for Health Care. 2007.
[2] Barson RJ, Foster G, Struck T, et al. Inter- and Intra-Organisational Barriers to Sharing Knowledge in the Extended Supply-Chain. Proceedings of the International Conference on e-Business and e-Work 2000.
[3] Sheth AP. Changing focus on interoperability in information systems: from system, syntax, structure to semantics. Kluwer International Series in Engineering and Computer Science 1999. (495).
[4] Ölund G, Lindqvist P, Litton J. Bims: an information management system for biobanking in the 21st century. IBM Systems Journal 2007. 64 (1).
[5] Ochs MF, Casagrande JT. Information Systems for Cancer Research. Cancer Invest. 2008. 26 (10).
[6] Shortliffe E, Sondik E. The public health informatics infrastructure: anticipating its role in cancer. Cancer Causes and Control 2006. 17 (7).
[7] Riegman P, Morente M, Betsou F, de Blasio P, Geary P. Biobanking for better healthcare. Molecular Oncology 2008. 2 (3).
[8] Davies J, Harris S, Crichton C, Shukla A, Gibbons J. Metadata standards for semantic interoperability in electronic government. Proceedings of the 2nd International Conference on Theory and Practice of Electronic Governance 2008. Cairo.
[9] Rosenthal A, Seligman L, Renner S. From Semantic Integration to Semantics Management: Case Studies and A Way Forward. Special Issue on Semantic Integration. SIGMOD Record 2004. 33 (3).
[10] Dirksen U, Nathrath M, Agelopoulos K, et al. Translational Sarcoma Research Network (TranSaRNet). Journal of Bone and Joint Surgery - British Volume 2010. 92-B (437-b).
[11] Schröder C. CRIP: Eine zentrale Infrastruktur vernetzter Biobanken am Fraunhofer IBMT. BioTOPics 2007. 32 (9).
[12] National Cancer Institute. caBIG 2009 At a Glance. Accessed November 2010. URL: http://cabig.cancer.gov/gettingconnected/caBIGresources/annualreport/2009/glance.asp. Archive URL: http://www.webcitation.org/5vAXaPrhZ.
[13] Asslaber M, Zatloukal K. Biobanks: transnational, European and global networks. Brief Funct Genomic Proteomic 2007. 6 (3).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-649
Web 2.0 in Healthcare: State-of-the-Art in the German Health Insurance Landscape Mirko KUEHNEa,1, Nadine BLINNa , Christoph ROSENKRANZb Markus NUETTGENSa a School of Business, Economics and Social Sciences, Hamburg University, Germany b Department of Economics and Business Administration Goethe University, Frankfurt am Main, Germany
Abstract. The Internet is increasingly used as a source of information and knowledge. In the field of healthcare, too, information is widely available. Patients and their relatives increasingly use the Internet to search for healthcare information and applications. "Health 2.0" – the increasing use of Web 2.0 technologies and tools in electronic healthcare – promises new ways of interaction, communication, and participation for healthcare. In order to explore how Web 2.0 applications are adopted and implemented by health information providers, we analysed the websites of all German health insurance companies regarding their provision of Web 2.0 applications. As health insurances play a highly relevant role in the German healthcare system, we conducted an exploratory survey in order to provide answers about the adoption and implementation of Web 2.0 technologies. All 198 private and public health insurances were analysed based on their websites. The results show a widespread diffusion of Web 2.0 applications but also large differences in implementation between the respective insurances. Our findings thereby provide a foundation for further research on the aspects that drive adoption. Keywords. Web 2.0, Health 2.0, Health Insurance
1. Introduction A major chance for participatory healthcare and approaches that integrate patients in healthcare are the ideas associated with concepts such as “Health 2.0” or “Medicine 2.0”. Health 2.0 describes the connection of healthcare, E-Health, and Web 2.0 [1], [2], [3], [4]. The term “Web 2.0” is generally associated with technologies that facilitate interactive information sharing, interoperability, and collaboration on the World Wide Web, leading to the development of social networks, social media, and communities [5], [6], [7], [8], [9]. Health 2.0 involves all types of participants from the healthcare sector (e. g., insurance providers, doctors, hospitals, patients associations, or self-help groups) that try to provide access to healthcare information or services using the Internet and Web 2.0 technologies [1], [3]. Against this background, our research examines the adoption and implementation of Health 2.0. As a first step in our research, we conducted a complete inventory count in the German health insurance landscape. We
1 Corresponding author: Mirko Kuehne. E-mail: mirko.kuehne@wiso.uni-hamburg.de
analysed the websites of all German health insurance providers regarding their provision of Web 2.0 applications. The remainder of the paper is structured as follows: first, we explain the methodology and design of our research. Second, we present findings from our study. Finally, we discuss the results and give an outlook on further research. In the German healthcare system, health insurances play a highly relevant role. They are responsible for the majority of publicly funded health care provision. There are two main types of health insurance: the public health insurances, also known as sickness funds, and the private health insurances. Approximately 85 % of the population are members of one of the 152 public health insurances [10]. Civil servants, self-employed people/entrepreneurs, and employees with a gross income above 49,500 EUR per year [11] are usually privately insured by one of the 46 private insurances.
2. Methodology
Our study of the websites of the public and private insurances follows the method of "third-party web assessment" [12]. We apply the "mystery user" approach [13], that is, an examiner puts her- or himself in the role of a client that requires the services provided by the website, in order to ensure inter-subjectivity and realism. This is also known as "mystery shopping" (Wilson, 1998). Three research assistants were trained according to the developed survey guidelines. Afterwards, they independently conducted the study. Furthermore, cross-checks with randomly chosen records were used in order to check the correctness of the collected data. We employ a framework developed by Ganesh and Padmanabhuni [14] in order to assess the technological objects. Ganesh and Padmanabhuni [14] developed a generic conceptual framework in order to structure the Web 2.0 landscape according to the following parameters: "Content", "Collaboration", "Commerce", "Computing as a Service", and "Technology". They indicated that for every application domain, an adaptation of the framework is required. Hence, we submitted the framework to experts from the healthcare domain. The expert group consisted of healthcare experts and IT-related staff from the insurance sector. As a result of structured interviews, they approved the following Web 2.0 technologies as relevant for the healthcare domain: "Blog", "Wiki", "Social Tagging", "Social Networking", "Chat", "RSS Feeds", "Podcast" and "Forum". Following the conceptual framework of Ganesh and Padmanabhuni [14], "Blog", "Wiki", "Social Tagging", "Chat" and "Social Networking" belong to the parameter "Collaboration", and "Podcast" and "RSS Feeds" to the parameter "Content". Moreover, "Forum" was mentioned by the experts but is not included in Ganesh and Padmanabhuni's [14] framework. According to the experts, it belongs to the parameter "Collaboration"; therefore we include "Forum" in our evaluation. Objects belonging to "Commerce", "Computing as a Service", and "Technology" were not mentioned by our experts. In addition, the hype about social networking sites (Raake and Hilker [15]) has led us to revise the evaluation criterion "Social Networking". Next to social networks which are self-operated by the health insurance providers, several unattached networks such as Facebook and Twitter play an important role. Health insurance providers and their customers are organized in user groups in these networks. In addition to Facebook and Twitter, we also included XING, which is the largest German business online community. Based on the growing
importance of these social networks, we supplemented our framework with them. Therefore, our framework is structured as follows:
• Content: RSS Feeds, Podcasts
• Collaboration: Blog, Wiki, Chat, Social Tagging, Social Networking
• Social Networks: XING, Twitter, Facebook
As we conducted a complete inventory count, the database comprises all 46 private and 152 public health insurances. Hence, 198 complete data sets were gathered in total. All criteria are transformed into a binary scoring model. If a criterion is fulfilled (offered) by a health insurer's website, the health insurer scores one point; otherwise it scores zero points. For the fulfilment of a criterion it is not necessary that the health insurer runs the Web 2.0 technology itself. Some insurance companies, for example, share a "Blog" or a "Forum". For our survey this means that if a Web 2.0 technology is integrated into, or linked from, the website of a health insurance provider (no matter whether shared or self-run), the website gets one point for the technology. The same logic applies for the criterion "Social Networking": if there is a self-operated social network on the website of the health insurance provider or if there are links to further networks, the criterion "Social Networking" is fulfilled. However, the criterion can also be met by an explicit search in the social networks "Facebook", "Twitter", and "XING", where we searched for the particular name of each health insurance provider. Therefore, the three social networks are a subset of the criterion "Social Networking". The criterion was fulfilled if we found a user group or something similar.
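The binary scoring model can be summarized in a few lines; the following sketch is our own illustration, with criterion names taken from the framework above and an invented example record.

    # Sketch of the binary scoring model: one point per fulfilled criterion.
    CRITERIA = ["RSS Feeds", "Podcast", "Blog", "Wiki", "Chat",
                "Social Tagging", "Social Networking", "Forum"]
    SOCIAL_NETWORKS = ["XING", "Twitter", "Facebook"]

    def score(website):
        """website: dict mapping criterion name -> True/False (offered or linked)."""
        content_collaboration = sum(1 for c in CRITERIA if website.get(c, False))
        social = sum(1 for n in SOCIAL_NETWORKS if website.get(n, False))
        return content_collaboration, social

    # Invented example record for one insurer's website:
    example = {"Podcast": True, "Forum": True, "Chat": True, "Facebook": True}
    print(score(example))   # -> (3, 1)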
3. Results
In the first step, we analyze the results for "Content". The Web 2.0 technology used most from this area is the Podcast: 35 % of the private and 30 % of the public health insurances offer Podcasts on their website. RSS Feeds are offered by 32 % of the public and only 13 % of the private insurances. Next, we shed light on the area "Collaboration". The Web 2.0 technology most applied by the public insurers is "Forum" (89 %), followed by "Chat" (83 %). The reason for the strong provision of these two technologies is a platform shared by the public insurances, which provides "Forum" and "Chat". In contrast, only 13 % of the private insurances provide these technologies. There is also a strong use of "Social Networks": 59 % of the private and 39 % of the public insurances provide either their own "Social Network" or link to one. The ability for "Social Tagging" and the provision of "Blogs" are almost similar. 11 % of the private insurances provide a "Wiki", whereas only 1 % of public insurance companies apply this technology. Regarding "Social Networks", we observe that private health insurances, compared to public insurers, have a stronger presence in those communities: 41 % of the companies provide their own user group on "Facebook", 35 % of them have a "Twitter" account, and 24 % are represented in "XING". The use of social networks by public insurers is lower in comparison: 13 % apply "Twitter" and 3 % are represented in "XING"; only their presence on "Facebook" is almost similar, with 38 %. Figure 1 summarizes these findings.
Figure 1. Results of the evaluation.
67 % (31 out of 46) of the private health insurances use at least one of the eight Web 2.0 applications (Content and Collaboration) from the framework; 15 companies do not use any of the examined Web 2.0 criteria. Regarding the public insurers, 151 companies (99 %) use Web 2.0 applications from the framework. That means that almost all public health insurers offer one of the 8 examined Web 2.0 applications on their website (or link to a shared website). Regarding the number of used Web 2.0 applications (Content and Collaboration), we observe that public health insurances have more applications in use than private insurances. On average, private insurances apply three applications – in contrast, public health insurances apply 2.5 applications. Most of the public insurances provide two applications (44 %), followed by three applications (25 %) and four applications (14 %). In contrast, most of the private insurances provide one application (32 %). Two, three, four, and six applications are also applied less often by private than by public insurances.
Figure 2. Number of used Web 2.0 applications.
With an average of 1.7, the private insurances are more strongly represented in the three examined Social Networks: 52 % of private insurances use one, 26 % two and 22 % all three Social Networks. The public insurances use on average 1.4 Social Networks; 66 % use one, 29 % two and only 5 % all three.
4. Discussion
Almost all public health insurance companies apply Web 2.0 applications (99 %), and 67 % of the private health insurances provide at least one of the examined Web 2.0 applications. Nevertheless, the private insurance companies apply more applications and Social Networks on average. Our findings provide first answers about the state of the art of the adoption and implementation of Web 2.0 technologies in the German
health insurance landscape. Beyond the widespread diffusion and adoption of Web 2.0 technologies and Social Networks, we could show large differences between the two insurance types regarding the adoption and implementation of the applications. Even between the companies within their respective insurance types we observed large differences, ranging from "no use" (no Web 2.0 applications are used) to "strong use" (7 applications are used). But how can the differences be explained? What aspects influence the adoption and implementation of Web 2.0 applications? At present, there is no literature on differences in the adoption of IT, let alone in the adoption of Web 2.0 technologies, between public and private insurances. Based on our research and interviews, we suppose that the differences are grounded in the business models of the two insurance types. Private insurances focus on product sales, whereas public insurances are primarily driven by differentiation from other public insurances – because the product "public health insurance" with its services is unified by government. Accordingly, we assume that public insurances try to differentiate themselves from others by providing special services such as chats with experts or health information via RSS Feeds. In contrast, the private insurances try to acquire new customers in Social Networks or explain their products via Podcasts. To explain the differences regarding the adoption of Web 2.0 applications, we have started a questionnaire survey among managers of health insurance companies to uncover the driving factors of Web 2.0 adoption.
References
[1] Ferguson T. From patients to end users. British Medical Journal 324 (7337), (2002), 555-556.
[2] Van De Belt TH, Engelen LJ, Berben SA, Schoonhoven L. Definition of Health 2.0 and Medicine 2.0: A Systematic Review. Journal of Medical Internet Research 12 (2), (2010), e18.
[3] Eysenbach G. Medicine 2.0: Social Networking, Collaboration, Participation, Apomediation, and Openness. Journal of Medical Internet Research 10 (3), (2008), e22.
[4] Eysenbach G. What is e-health? Journal of Medical Internet Research 3 (2), (2001), e20.
[5] Musser J, O'Reilly T. Web 2.0 - Principles and Best Practices. O'Reilly Media, Sebastopol, CA, USA, 2007.
[6] Vossen G, Hagemann S. Unleashing Web 2.0: From Concepts to Creativity. Morgan Kaufmann, Burlington, MA, USA, 2007.
[7] O'Reilly T. What Is Web 2.0 – Design Patterns and Business Models for the Next Generation of Software [Internet]. 2005 [updated 30 September 2005; cited 22 July 2010]. Available from: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.
[8] McAfee AP. Enterprise 2.0: The Dawn of Emergent Collaboration. Sloan Management Review 47 (3), (2006).
[9] O'Reilly T. Web 2.0 Compact Definition: Trying Again [Internet]. 2006 [updated 10 December 2006; cited 22 July 2010]. Available from: http://radar.oreilly.com/archives/2006/12/web-20-compact.html.
[10] BMG Bundesministerium für Gesundheit. Gesetzliche Krankenversicherung – Mitglieder, mitversicherte Angehörige, Beitragssätze und Krankenstand; Ergebnisse der GKV-Statistik KM1, 2010.
[11] BMJ Bundesministerium der Justiz. Sozialgesetzbuch Fünftes Buch - Gesetzliche Krankenversicherung. Bundesanzeiger, Bonn, 2009.
[12] Irani Z, Love P. Evaluating Information Systems: Public and Private Sector. Butterworth Heinemann, Oxford, 2008.
[13] Heeks R. Benchmarking eGovernment: Improving the National and International Measurement, Evaluation and Comparison of eGovernment. In: iGovernment Working Paper Series, Institute for Development Policy and Management (Ed.). University of Manchester, 2006.
[14] Ganesh J, Padmanabhuni S. Web 2.0: Conceptual Framework and Research Directions. AMCIS 2007 Proceedings, (2007), Paper 332.
[15] Raake S, Hilker C. Web 2.0 in der Finanzbranche: Die neue Macht des Kunden. Wiesbaden, 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-654
Improving the Transparency of Health Information Found on the Internet Through the Honcode: a Comparative Study Sabine LAVERSINa,Vincent BAUJARDb, Arnaud GAUDINATb, Maria-Ana SIMONETb, Célia BOYERb 1, a Haute Autorité de Santé, France b Health On the Net Foundation, 81 bd de la Cluses 1205 Geneva, Switzerland;
Abstract. This study aims to show that health websites not asking for HONcode certification (control sample websites, B) do not respect elementary ethical standards such as the HONcode. The HONcode quality and ethical standards and the certification process have been developed by the Health On the Net Foundation to improve the transparency of the health and medical information found on the Internet. We compared compliance with the 8 HONcode principles, and specifically with principles 1 (authority), 4 (assignment), 5 (justification) and 8 (honesty in advertising and editorial policy), between certified websites (A) and health websites which have not requested certification (B). The assessment of HONcode compliance was performed by HON evaluators using the same standards for all types of sites. The results show that 0.6% of the health websites not asking for HONcode certification respect the eight HONcode ethical standards vs. 89% of certified websites. Regarding principles 1, 4, 5 and 8, 1.2% of B respect these principles vs. 92% of A. The certification process led health websites to respect ethical and quality standards such as the HONcode and to disclose the production process of the health website. Keywords. HONcode, Certification, Transparency, Trustworthiness, Quality criteria
1. Introduction
Studies [1-6] showed evidence of the presence of wrong, incomplete or deceptive health information on health websites. A systematic review [7] of the literature showed that the criteria used to evaluate the quality of health-related websites vary from one study to another, and that common quality criteria must be defined. The European Commission released the proposal of a consensual answer, eEurope 2002, quality criteria to apply to health websites [8]. In 2004, the French authorities, worried about the quality of health websites and the information they give to the public, passed a law (loi n°2004-810 on health insurance) mandating the HAS to establish a certification process for health websites: « The Haute Autorité de Santé is in charge of drawing up a procedure of certification of health related websites … ». In order to fulfil this mission, the Haute Autorité de Santé (HAS) selected and appointed the Health On the Net Foundation (HON)
for the certification of French health websites. This study was conducted by the HAS with the cooperation of HON. The study aimed to verify to what extent: I) the 8 HONcode principles (presented in Table 1) are respected by a control sample of non-certified websites which have never asked for certification (B); II) the HONcode certification led certified websites (A) to respect the HONcode principles sustainably, at least for six months after obtaining the certification, thus contributing to maintaining the quality standards for health websites proposed by the European Commission; III) the certification contributes to the improvement of the quality of the health information given by the websites, especially through the respect of principles 1 (authority), 4 (assignment), 5 (justification) and 8 (honesty in advertising and editorial policy). These four principles provide necessary (but not sufficient) conditions for the quality of the information content of a site. We present here the results of the comparative study. The results of the longitudinal study conducted in parallel will be presented at the MED-E-TEL conference.
1 Corresponding author. [email protected]
2. Method and Material
2.1. Study Design: Comparative Study
The study compares the HONcode compliance of websites that have been HONcode certified for at least six months (A) to the HONcode compliance of non-certified French health websites (B) that never asked for the certification and were taken as a control sample.
2.2. Sample Constitution
Group A was constituted by selecting all health sites whose publisher was located in France and which applied for certification for the first time, or were certified for less than three months, within the period of May 1 to August 1, 2008. Health websites holding the HONcode certification for more than three months were excluded. In the absence of a database of health sites in France, the control sample of French health websites (B) was constituted by querying search engines and databases such as DMOZ2, organizations officially recognized by the HAS, medical societies recognized by the CNOM3, and Google. The use of various sources reduced distortions in the study of non-certified control websites. B sites were categorized by the type of publisher and were then randomized. The sites from the two groups A and B were classified and paired according to the type of publisher to allow the building of comparable populations of sites and samples.
2 DMOZ: http://www.aef-dmoz.org/
3 CNOM: http://www.conseil-national.medecin.fr/index.php?url=lien/index.php&open=2
2.3. Health Websites Evaluation
All the websites included in the study were evaluated by two evaluators from the HON Foundation in a standardised way according to the HONcode [9-10]. The sites were not anonymised. For the evaluation, principles 2 and 4 are divided into three and two sub-parts respectively, leading to a total of 11 observations. For each website, the respect or non-respect of each HONcode principle was scored 0 (in conformity) or 1 (in non-conformity) respectively. A website is in conformity with the HONcode when the total of its scores is equal to 0. A second analysis was made to observe website conformity with HONcode principles 1, 4, 5 and 8. We selected principles 1, 4, 5 and 8 as those related to the quality of health information. Principle 1 (Authority: indicate the qualifications of the authors) requires that information be signed by its author and that his qualification be indicated; the reader can then appreciate the match between the qualifications of the author and the nature of the information he provides. Principle 4 (Assignment) requires that the information is dated, so that its freshness can be appreciated, and that the sources of information are mentioned; the identification of sources could be used to verify the consistency between the information and the source from which it originates, as well as the quality and relevance of the latter. Principle 5 (Justification: justify any statement on the benefits or risks of products or treatments) requests the author to provide evidence supporting his claims, including references that may substantiate this level of evidence; information must be provided in an objective and balanced way. Principle 8 (Honesty in advertising and editorial policy) explicitly requires the separation of what is advertising from what is health information, allowing the reader to unambiguously identify the latter. A website is declared in conformity with those four principles when the total of its scores for them is equal to 0.
1. Authoritative: indicate the qualifications of the authors
2. Complementarity: information should support, not replace, the doctor-patient relationship; the mission and the audience are explicated
3. Privacy: respect the privacy and confidentiality of personal data submitted to the site by the visitor
4. Attribution: cite the source(s) of published information, date medical and health pages
5. Justifiability: site must back up claims relating to benefits and performance
6. Transparency: accessible presentation, accurate email contact
7. Financial disclosure: identify funding sources
8. Advertising policy: clearly distinguish advertising from editorial content
Table 1. Presentation of the HONcode principles (summarized)
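The conformity rule described above can be stated compactly; the sketch below is our own illustration (it scores at the level of the eight principles rather than the 11 sub-observations, and the example values are invented).

    # Sketch of the conformity rule: 0 = conformity, 1 = non-conformity per principle.
    CORE = [1, 4, 5, 8]   # principles related to the quality of information

    def is_conform(scores):                # scores: dict principle number -> 0 or 1
        return sum(scores.values()) == 0   # conform only if there is no non-conformity

    def is_conform_core(scores):
        return sum(scores[p] for p in CORE) == 0

    site = {1: 0, 2: 0, 3: 0, 4: 1, 5: 0, 6: 0, 7: 0, 8: 0}   # invented example
    print(is_conform(site), is_conform_core(site))            # False False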
2.4. Statistical Analysis
The observed percentages of websites consistent with the eight HONcode principles, and of those consistent with principles 1, 4, 5 and 8, were calculated and compared with a McNemar χ2 test at a 5% threshold. Exact confidence intervals (CI) were calculated.
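For readers who wish to reproduce the interval estimates, the sketch below computes a Clopper-Pearson exact confidence interval (using the published count of 147 compliant sites out of 165) and a McNemar statistic; the discordant-pair counts b and c are not reported in the paper, so the values used in the example are placeholders.

    # Sketch: exact (Clopper-Pearson) confidence interval and McNemar test.
    from scipy.stats import beta, chi2

    def clopper_pearson(k, n, alpha=0.05):
        lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
        return lo, hi

    def mcnemar(b, c):
        """b, c: counts of discordant pairs (A conform/B not, and A not/B conform)."""
        stat = (abs(b - c) - 1) ** 2 / (b + c)   # chi-square with continuity correction
        return stat, chi2.sf(stat, df=1)

    print(clopper_pearson(147, 165))   # roughly (0.83, 0.93), as reported
    print(mcnemar(146, 0))             # placeholder discordant counts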
3. Results
165 certified websites (A), observed at least six months after certification, and 165 non-certified websites (B) were compared. Among the A websites, 89% (147 sites) were in conformity with the HONcode (95% CI: 83 - 93), versus 0.6% (1 site) (95% CI: 0 – 3.3) among the B sample (p < 10⁻⁹) (Figure 1). Figure 2 presents the percentages of observed nonconformities among the A and B groups. Statistical significance was sought in a sub-group analysis. 1087 nonconformities were observed among the B websites (an average of 6 non-respected HONcode principles per control website), versus 27 nonconformities among the 165 A websites.
Figure 1. Certified websites (A, CW) and control websites (B) according to their conformity with the HONcode and with HONcode principles 1, 4, 5 and 8.
Figure 2. Percentages of nonconformities observed among the certified (A, CW) and control (B) websites.
The percentage of websites in conformity with HONcode principles 1, 4, 5 and 8 among the A group was 92% (152 sites) (95% CI: 87 - 96), versus 1.2% (2 sites) (95% CI: 0.1 – 4.2) among the control sample websites (B) (p < 10⁻⁹) (Figure 1).
4. Discussion
Outside of a certification process, the respect of the HONcode principles by health websites appears to be extremely low. In our study, only 0.6% of control websites respect all HONcode principles against 89% for certified websites, with a highly significant p-value (p < 10⁻⁹). This finding is reinforced by the results of previous studies:
three studies [11-13] that assessed 33, 19 and 182 websites respectively found no websites that spontaneously respect all HONcode principles. In another study [14], covering 90 websites, only 15% (14 websites) were in compliance with all the HONcode principles. Certification appears to be an effective means of enforcing the HONcode principles in a sustainable way, since 89% (147 websites) of the certified websites were still compliant at least six months after obtaining the certification. This study has some limitations. Interobserver agreement was not evaluated and the evaluation of the sites was not blinded. The evaluation of all health sites included in the study was performed by two experienced evaluators of the HON. Other factors may influence the results, such as the number of new or modified pages: indeed, whether an A site remains compliant may depend on whether new pages have been published or already published pages have been modified. The pairing of sites (each site as its own control) should help to minimize this potential bias. This study shows that most non-certified health websites do not respect quality criteria such as those proposed by eEurope 2002. It shows that certification leads websites to respect the HONcode criteria, thus improving the transparency of the production processes of sites and of the information they propose. This study cannot conclude that the information disseminated by certified sites is more accurate than that issued by non-certified sites; however, the respect of HONcode principles 1, 4, 5 and 8 by certified sites helps to improve the transparency of the information disseminated. Further studies are needed to assess this point.
References
[1] Theodosiou CA, et al. Does the Internet provide safe information for pre-anaesthetic patients? Anaesthesia 58(8) (2003), 805-806.
[2] Mathur S, et al. Surfing for scoliosis: the quality of information available on the Internet. Spine 30(23) (2005), 2695-2700.
[3] Madan AK, et al. The quality of information about laparoscopic bariatric surgery on the Internet. Surg Endosc 17(5) (2003), 685.
[4] Hargrave D, et al. Évaluation de la qualité de l'information médicale francophone accessible au public sur Internet : application aux tumeurs cérébrales de l'enfant. Bull Cancer 90(7) (2003), 650-655.
[5] Mocnik AM, et al. Évaluation des sites francophones spécialisés dans l'obésité accessibles au public. Cah Nutr Diet 39(5) (2004), 340-348.
[6] Schmidt K, et al. Assessing Websites on complementary and alternative medicine for cancer. Ann Oncol 15(5) (2004), 733-744.
[7] Eysenbach G, et al. Empirical studies assessing the quality of health information for consumers on the world wide Web: a systematic review. JAMA 287(20) (2002), 2691-2700.
[8] Commission des Communautés Européennes. COM (2002) 667 final - eEurope 2002: Critères de qualité applicables aux sites web consacrés à la santé. (visited 1/27/2011). http://ec.europa.eu/information_society/eeurope/ehealth/doc/communication_acte_fr_fin.pdf
[9] Health On the Net. HONcode principles. [Online] July 1996. [Cited: 30 April 2011.] http://www.hon.ch/HONcode/Webmasters/Conduct.html
[10] Health On the Net. HONcode process. [Online] Sep 2008. [Cited: 30 April 2011.] http://www.hon.ch/HONcode/Webmasters/StepByStep/StepByStep.html
[11] Thurairaja R, et al. Internet websites selling herbal treatments for erectile dysfunction. Int J Impot Res 17(2) (2005), 196-200.
[12] Burneo JG. An evaluation of the quality of epilepsy education on the Canadian World Wide Web. Epilepsy Behav 8(1) (2006), 299-302.
[13] Yegenoglu S, et al. An evaluation of the quality of Turkish community pharmacy web sites concerning HON principles. Telemed J E Health 14(4) (2008), 375-380.
[14] Croft DR, et al. An evaluation of the quality and contents of asthma education on the World Wide Web. Chest 121(4) (2002), 1015-1016.
Telemedicine and Mobile Health
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-661
Data Privacy Preservation in Telemedicine : The PAIRSE Project Ebrahim NAGEBAa, Bruno DEFUDEb, Franck MORVANc, Chirine GHEDIRAd, Jocelyne FAYN a, 1 a Université Lyon 1, INSA-Lyon, INSERM, UMS SFR Santé Lyon Est, Bron, France b Institut TELECOM, CNRS UMR Samovar, Evry, France c Université Paul Sabatier, IRIT, Toulouse, France d Université Lyon 1, CNRS UMR LIRIS, Villeurbanne, France
Abstract. The preservation of medical data privacy and confidentiality is a major challenge in eHealth systems and applications. A technological solution based on advanced information and communication systems architectures is needed in order to retrieve and exchange the patient’s data in a secure and reliable manner. In this paper, we introduce the project PAIRSE, Preserving Privacy in Peer to Peer (P2P) environments, which proposes an original web service oriented framework preserving the privacy and confidentiality of shared or exchanged medical data. Keywords. Data Privacy, P2P Environment, Web Services, Ontology, Telemedicine
1. Introduction
Most existing medical information systems, especially Electronic Health Record (EHR) based applications, require mechanisms for dynamic data sharing and exchange at large scale, taking into account security and confidentiality aspects [1] [2]. P2P systems have proved to be a powerful model for defining dynamic and large-scale data infrastructures. They do not impose a global schema shared between all peers but rather an approach where each peer has its own schema and knows mappings to some other peers. P2P advantages can be exploited in eHealth applications for sharing medical data which are usually distributed at different points of care, mainly because of the patients' mobility. Queries can be exchanged between two peers A and B if A knows a mapping from its schema to B's schema. Web Service (WS) technology has undoubtedly an enormous impact in the field of distributed systems and is increasingly used in eHealth applications to provide a solution that is both simple and convenient for data exchange, particularly in P2P environments, where the use of web services is especially attractive because it allows encapsulation of data sources and overcomes the interoperability problem in terms of format heterogeneity [3] [4]. However, P2P query processing requires solving several problems, such as supporting interoperability at multiple levels, e.g., exchange of heterogeneous data, different privacy policies, identification and authentication. Data transfer from one peer to another should also be encrypted and controlled according to a trust management policy.
1 Corresponding Author. E-mail: [email protected]
Every service provider should
control the three elements of confidentiality being represented by the three questions what?, why? and to whom?, viz, what are the data that a service provider will allow to access, for which purposes these data should be used, and to which recipient these data can be transferred. In addition, the patient should have the possibility to mask sensitive data included in his EHR. Data access is therefore becoming more and more complex. In this paper we present the PAIRSE project, Preserving Privacies in P2P Environments, which addresses the aforementioned issues. PAIRSE concentrates on the issue of privacy preserving query resolution by composing Data Providing (DP) Web Services and employing RDF-oriented query rewriting algorithms to compute a web service composition that covers the query needs, and by providing global and local privacy policies resolution [5] [6]. In the next section, we first describe a telemedicine scenario example demonstrating the applicability of PAIRSE for the preservation of the privacy of shared medical data. We then outline the PAIRSE system architecture and its main components in section 3.
2. Privacy Preservation Scenario in Telemedicine
According to European and national recommendations, each citizen should have an EHR and a health identifier, which shall promote telemedicine applications and medical data exchange between healthcare professionals and ease the retrieval of patients' health information. However, the privacy and confidentiality of shared data must be respected. Let us suppose that a person has an EHR access provider. Meanwhile, this person has subscribed to a health insurance with a company which manages his care expenses. After having consulted the data access right policy defined and implemented by the EHR access provider, the person decides not to disclose the complete content of his EHR and validates in his profile the information to which he grants access for every healthcare professional in case of emergency, such as the data concerning his chronic diseases, allergies, the last medical visit summary, and contact information. Let us also suppose that the person has a health problem while skiing or staying at a high-mountain resort. Using a portable telemedicine device, such as a Personal ECG Monitor, a rescue team member or a first-aid person sends medical data encapsulated in an XML document to the healthcare professional located at the emergency center [7, 8]. In turn, the healthcare professional requests through the PAIRSE portal the patient data he feels relevant for taking a decision on the patient's transfer to the hospital that best corresponds to the patient context (Fig. 1). From the PAIRSE system point of view, we thus distinguish two types of queries: queries that can be processed locally and queries that need P2P (distributed) processing. We provide an example of both query types in the following subsections.
2.1. Local Query Processing
The emergency physician uses the PAIRSE portal to send the following query: for the patient whose social security number is !SSN, select chronic disease ?cd, medical history ?mh, allergy ?all, risk factors ?rf, social security benefits ?ssb. We formulate this query as Q={!SSN, ?CD, ?MH, ?ALL, ?RF, ?SSB}, where "!" refers to input parameters and "?" refers to output data. As illustrated in Fig. 1, each query received by PAIRSE should be treated according to the following steps:
Figure 1. Example of data flow supported by PAIRSE
• Transform the query in order to integrate the global privacy policy of the application domain, such as eHealth
• Rewrite the transformed query in terms of web services (either defined by the queried peer or defined by peers directly known by the queried peer)
• Compose and execute web services
• Filter results according to user preferences
As depicted in Fig. 1, query Q is decomposed into two queries: query Q1 which is sent to the EHR host, Peer A, and query Q2 which is sent to the insurance company, Peer B. Query Q1 = {!SSN, ?CD, ?MH, ?ALL, ?RF} has as input parameter the patient social security number and as output data, the chronic disease, the medical history, the allergy and the risk factors. Query Q2 = {!SSN, ?SSB} has as input parameter the patient social security number and as output data the social security benefits. PAIRSE transforms query Q sent by the user to take account of the global privacy policy of the EHR portal. Then, it invokes the web services provided by the EHR host and the insurance company, peer A and peer B, to retrieve the required data taking into consideration the local privacy policies of each peer [6]. Finally, it will return in result R only the data corresponding to the privacy preferences validated by the patient.
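A minimal sketch of this decomposition step is given below; the peer names and the capability table are assumptions made for the example, not the actual PAIRSE service registry.

    # Sketch: splitting query Q over peers according to the outputs each peer can provide.
    Q = {"inputs": ["SSN"], "outputs": ["CD", "MH", "ALL", "RF", "SSB"]}

    PEER_CAPABILITIES = {                 # outputs each peer's DP services can produce
        "peer_A_ehr_host": {"CD", "MH", "ALL", "RF"},
        "peer_B_insurance": {"SSB"},
    }

    def decompose(query, capabilities):
        remaining, sub_queries = set(query["outputs"]), {}
        for peer, provided in capabilities.items():
            covered = remaining & provided
            if covered:
                sub_queries[peer] = {"inputs": query["inputs"],
                                     "outputs": sorted(covered)}
                remaining -= covered
        return sub_queries, remaining     # 'remaining' must be forwarded to other peers

    subs, missing = decompose(Q, PEER_CAPABILITIES)
    # subs holds Q1 for peer A ({ALL, CD, MH, RF}) and Q2 for peer B ({SSB}); missing is empty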
2.2. P2P Query Processing In some cases query processing is not restricted to one level but has to be done at multiple levels. The first case corresponds to a query which is not completely satisfied at the first level (some output parameters are not “produced”). The second one corresponds to a query where results are not complete (some data are missing). In both cases, the query has to be propagated to other peers depending on a catalog. The scope of the propagation has to be controlled to avoid infinite propagation. In our example, depending on the availability of the medical data on the EHR host, a query may trigger another query in case of empty headings existing in the patient’s EHR. On the other side, the patient’s EHR should contain links that will help to locate the missing data that might be stored in other sites, e.g., hospitals, clinics, Google Health …. Let us suppose that the patient’s EHR, hosted by peer A, contains data related to allergies and risk factors but that the headings of the chronic diseases and of the medical history are empty. The missing data could be, for example, ECGs stored in a hospital or clinic where the patient has been hospitalized in the past. Thus, the EHR host, peer A, should transform query Q1 into Q3 = {!SSN,?CD} to retrieve data related to chronic diseases, stored by peer C, and Q4 = {!SSN,?MH}, to retrieve data related to medical history, stored by peer D (Fig. 1).
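The following sketch illustrates how such a propagation could be bounded; the peer objects, the catalogue attribute and the hop limit are assumptions made for the example.

    # Sketch: propagating an incompletely answered sub-query with a bounded number of hops.
    MAX_HOPS = 3   # assumed limit to avoid infinite propagation

    def resolve(peer, query, hops=0, visited=None):
        visited = set() if visited is None else visited
        visited.add(peer.name)
        results, missing = peer.answer_locally(query)      # e.g. CD found, MH missing
        if hops < MAX_HOPS:
            for neighbour in peer.catalogue:                # peers known from the catalog
                if not missing:
                    break
                if neighbour.name in visited:
                    continue
                sub = {"inputs": query["inputs"], "outputs": sorted(missing)}
                more, missing = resolve(neighbour, sub, hops + 1, visited)
                results.update(more)
        return results, missing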
3. PAIRSE Architecture
Figure 2 presents the global architecture of the PAIRSE system we designed to fulfill the requirements introduced before. The main component is the Multi-Peer Query Processing (MPQP), which is in charge of composing the web services to answer the user query. The query has to be split into local sub-queries and external sub-queries. The MPQP has to determine which sub-queries can be solved and executed locally. For the parts of the query which cannot be resolved locally, the MPQP component has to search for external services provided by distant peers. Sub-queries are expressed in SPARQL [6] and can be rewritten by the Query Transformation subcomponent, integrated in the MPQP component, which uses local ontologies and inter-ontology mapping techniques to transform the original SPARQL query. The sub-queries are then transmitted to and executed by either the Local Peer Query Processing (LPQP) component or the distant peers. Each service has to be selected in accordance with the global privacy policy. Each LPQP component receives the local SPARQL sub-queries and chooses the relevant services using the local DP services registry, whose role is to manage a list of local service descriptions. It applies the local privacy policy rules and returns the sub-query results according to the local privacy policy. The internal and external sub-query results are aggregated and, if need be, transformed for homogeneity purposes by the MPQP subcomponent "Results Aggregation". Query rewriting can be performed to be compliant with the associated security policy by adding filters and/or removing triple patterns from the SPARQL WHERE clause [6]. The Local Privacy Representations component verifies whether the execution of the services respects the local privacy policy. The scenario presented in section 2 is a typical use case that demonstrates the applicability of the PAIRSE system in protecting the privacy of patients' medical data shared among healthcare professionals, but other eHealth scenarios can also be envisaged, such as data mining for research purposes.
Figure 2. Global architecture of the PAIRSE system. DP stands for Data Providing.
Acknowledgements. This research work is supported by the French National Research Agency under grant number ANR-09-SEGI-008.
References
[1] Agrawal R, Grandison T, Johnson C, Kiernan J. “Enabling the 21st century health care information technology revolution,” Commun. ACM, vol. 50, no. 2, pp. 34-42, 2007.
[2] Jin J, Ahn G, Hu H, Covington MJ, Zhang X. “Patient-centric authorization framework for sharing electronic health records,” in Proceedings of SACMAT, pp. 125-134, 2009.
[3] King RA, Hameurlain A, Morvan F. “Query Routing and Processing in Peer-To-Peer Data Sharing Systems,” IJDMS, vol. 2, no. 2, pp. 116-139, 2010.
[4] Sellami M, Tata S, Maamar Z, Defude B. “A Recommender System for Web Services Discovery in a Distributed Registry Environment,” in ICIW 2009, pp. 418-423, 2009.
[5] Barhamgi M, Benslimane D, Medjahed B. “A Query Rewriting Approach for Web Service Composition,” IEEE Transactions on Services Computing, vol. 3, no. 3, pp. 206-222, 2010.
[6] Oulmakhzoune S, Cuppens-Boulahia N, Cuppens F, Morucci S. “fQuery: SPARQL query rewriting to enforce data confidentiality,” Data and Applications Security and Privacy XXIV, LNCS, vol. 6166, pp. 146-161, 2010.
[7] Nageba E, Fayn J, Rubel P. “A knowledge model driven solution for web-based telemedicine applications,” Studies in Health Technology and Informatics, vol. 150, pp. 443-447, 2009.
[8] Fayn J, Rubel P. “Towards a Personal Health Society in Cardiology,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 2, pp. 401-409, 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-666
Relevance and Usability of a Computerized Patient Simulator for Continuous Medical Education of Isolated Care Professionals in Sub-Saharan Africa Georges BEDIANGa1, Cheick Oumar BAGAYOKOa, b, Marc-André RAETZOa,c, Antoine GEISSBUHLER a a Department of Radiology and Medical Informatics, Geneva University, Switzerland b Department of Public Health, Medical School, University of Bamako, Mali c Groupe Médical d’Onex, Switzerland
Abstract. Objective: to explore the relevance and usability of a computerized patient simulator as a tool for continuing medical education and decision support for health professionals in district hospitals in Sub-Saharan Africa. Methods: based on the diagnostic pathway and on decision analysis under uncertainty, interactive clinical vignettes were developed using VIPS, a computerized patient simulator, taking into account clinical problem situations whose relevance had been identified. Vignettes were adapted to take into account local epidemiology, the availability of diagnostic and therapeutic resources, and local socio-cultural constraints. The evaluation of the VIPS software was performed by care professionals and students. Results: a computerized patient simulator can be used to provide initial and continuing medical education in Sub-Saharan Africa, but many challenges remain. Conclusion: further research is needed to measure potential improvements in knowledge, skills and decision-making abilities, as well as in patient outcomes. Keywords. Computerized patient simulator, Telemedicine, Clinical reasoning, Capacity building, Isolated healthcare professionals, Africa
1. Introduction « How do we adapt and teach medical guidelines in our local context? » Many guidelines exist in the field of healthcare aimed at reinforcing the capacity and harmonizing the practices of health professionals. Generally, this knowledge is distributed through continuing medical education [1, 2]. The tools traditionally used for dissemination are medical journals, books, seminars, conferences and symposia. Despite the availability of this medical knowledge, there are many difficulties in harmonizing and applying these guidelines in different countries with respect to local conditions such as logistic, human, technical and economic resources, and social and cultural factors [3]. The situation is especially felt in developing countries by health professionals working in isolated and rural areas. 1
Dr. Georges Bediang, Division of eHealth and Telemedicine, University Hospitals of Geneva, 1211 Geneva 14, Switzerland, e-mail : [email protected]
Furthermore, guidelines do not take sufficient account of the operating environment of health professionals, which calls for the continuous assessment of diagnostic pathways (ability to ask the right questions, ability to correctly interpret the answers) and decision pathways (ability to make good decisions under uncertainty). They are therefore frequently ineffective in reinforcing the capacity of health care professionals. For decades, information and communication technologies have also been recognized as having significant implications for medical education [4]. Indeed, many studies outline the importance and the potential of computer simulations as tools for medical education [5, 6, 7, 8]. Simulation offers clinicians a secure practice environment for learning how to react to difficult situations [9]. It has also been demonstrated that defects in clinical data collection can lead to wrong diagnoses and consequently to bad decisions [10], with the risk of potential harm to patients. Recent studies have demonstrated the contribution of computer-based clinical reasoning simulation as a complementary way to increase the experience and skills of learners [9, 11, 12] or as a means to assess physicians’ exploration of socio-cultural and demographic factors during a patient consultation [13]. The aim of this paper is to explore the relevance and usability of a computerized patient simulator as a continuing medical education and decision support tool for health professionals in rural areas of French-speaking Sub-Saharan Africa, for improving diagnostic processes and decision-making in the management of patients. This study takes place in the context of the RAFT network, a continuing medical education and tele-expertise network active in 15 countries of French-speaking Africa [3].
2. Materials and Methods VIPS (Virtual Internet Patient Simulator, www.swissvips.ch) is built as a Web application, accessible via the internet through a Java applet. It is based on diagnostic pathways and decision analysis tools developed for improving the skills of general practitioners in Switzerland [14]. This computerized patient simulator presents patients and has a query-reply interface simulating the various aspects of a consultation, including history, physical examination, laboratory tests, clinical investigations, and the recording of various decisions, such as the prescription of medications, patient education and other management decisions [15]. In order to define the clinical cases to be simulated, senior clinicians familiar with practice in Sub-Saharan Africa identified clinical situations associated with common diagnostic, treatment or management errors. Each vignette was adapted to reflect the reality of a typical district-level hospital, i.e., a basic secondary care setting with limited laboratory and technical equipment. After validation, each clinical vignette was introduced into the VIPS program and completed with reference educational material based on review articles, interviews with local experts and multimedia teaching materials adapted to the local environment. The assessment of the relevance and usability of this VIPS tool was made with health professionals (physicians, nurses and medical students) active in district hospitals of two member countries of the RAFT network (Cameroon and Mali). After every consultation of a clinical vignette on VIPS, these health professionals had to fill in a questionnaire provided for each case, which was meant to evaluate the usability and
the utility of the vignette and its appropriateness to the African context. The equipment and the access to the Internet were also assessed. Numerical analysis was performed using EpiData Entry 3.1 and SPSS 17.0.
3. Results 3.1. Participants Eighty-eight people took part in this study; 54% were from Mali and 46% from Cameroon. These participants were divided into six groups: medical doctors (59%), fourth-year medical students (12%), fifth-year medical students (9%), sixth-year medical students (2%), medical students whose academic level was not mentioned (10%) and nurses (8%). Among the medical doctors, two thirds were from Mali. The average age of participants was 30.3 ± 7.1 years. About 43% of the participants had a clinical experience of between three and six years. We define clinical experience as the number of years the participant has spent in a medical care context, either during training or at work. 3.2. Contents of Clinical Vignettes We evaluated the relevance of the clinical vignette content from a general perspective as well as for each step of the consultation. 96.1% of users found the general content of the clinical vignettes relevant. The history, the physical examination, the paraclinical examination and the decisions were considered relevant by 88.4%, 76.3%, 85% and 86.9% of the participants, respectively. 66.7% of users were able to find the questions they wanted to ask or the decisions they wanted to take. 94.8% found that the answers to the questions asked were appropriate. The vignettes were said to be complete by 76.8% of the users. In addition, 74.7% of users found that the cases submitted in the clinical vignettes were adapted to the local context and 67.5% of the participants had already faced a similar case. Regarding additional information resources, 87.4% found that the bibliographic references were appropriate, 90.1% found that the references were useful to understand the errors made during the simulation, and 79.3% found that these references were useful to answer the questions generated by the user. 3.3. Usability of VIPS 38.2% of participants needed less than thirty minutes to completely resolve one case in the VIPS program, 51.9% of the users needed between thirty minutes and one hour, and 9.9% needed more than one hour. The usability of the VIPS program was evaluated with the users: 76.8% totally agreed that the VIPS program was generally easy to use, 18.8% partially agreed and 4.3% partially disagreed. Regarding the ease of progressing through a case, of navigating between the steps of a case, and of asking questions and taking decisions, total agreement rates were 63.1%, 63.9% and 55%, respectively. Moreover, 96.3% of the users considered VIPS an entertaining way of learning, while 97.6% of the users enjoyed resolving the cases.
4. Discussion An adapted computerized patient simulator, used as a tool for initial and continuing medical education and decision support for health professionals in rural areas of French-speaking Sub-Saharan Africa, is both usable and relevant. We expect that the deployment of this tool in district hospitals, supported by the operational team of the RAFT network and by regular distance-education sessions, will lead to an improvement of the decision-making abilities of care professionals, but this will require further studies. Combining it with the deployment of innovative diagnostic tools that can increase decision-making abilities in isolated areas, such as portable ultrasonography and microscopy, may be even more effective. Thus, sufficient training and the appropriation of these tools and concepts by health professionals, at both the local and academic levels, are seen as keys to the success of this project. However, these positive results regarding the usability of such a tool for the training of health professionals in French-speaking Africa need to be put in the context of the local constraints and difficulties, including technical problems. Indeed, 29% of participants experienced internet connection problems while using the VIPS simulator, and many users complained about the low connection bandwidth. In addition, outside major cities, access to computers is often limited, and access to the internet even more so. In some places, frequent power interruptions also make the use of computers difficult. Another difficulty encountered is the lack of computer skills of some health professionals. This study has some limitations as to the interpretation of the results. First, the number of participants is low and they come from only two countries; generalizing the results throughout Africa is therefore difficult. Second, this study only addresses the usability and relevance aspects; further work is required to measure potential improvements in knowledge, skills, decision-making ability and, eventually, outcomes of care.
5. Conclusion The usefulness of computerized patient simulators has been demonstrated in the developed world. The results obtained in this study are encouraging. They show that adapting the concept to address issues encountered in isolated district hospitals in Sub-Saharan Africa is possible, although additional efforts to better tailor it to the African context are still needed. Acknowledgments. This work is supported by a grant from the Geneva University Hospitals and by the International Solidarity Fund of the State of Geneva.
References
[1] Davis D, O'Brien MA, Freemantle N, et al. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? JAMA 1999;282:867-74.
[2] Cantillon P, Jones R. Does continuing medical education in general practice make a difference? BMJ 1999;318:1276-9.
[3] Bagayoko CO, Muller H, Geissbuhler A. Assessment of Internet-based tele-medicine in Africa (the RAFT project). Comput Med Imaging Graph 2006;30:407-16.
[4] Ward JP, Gordon J, Field MJ, et al. Communication and information technology in medical education. Lancet 2001;357:792-6.
[5] Sijstermans R, Jaspers MW, Bloemendaal PM, et al. Training inter-physician communication using the Dynamic Patient Simulator. Int J Med Inform 2007;76:336-43.
[6] Zary N, Johnson G, Boberg J, et al. Development, implementation and pilot evaluation of a Web-based Virtual Patient Case Simulation environment--Web-SP. BMC Med Educ 2006;6:10.
[7] Weller JM. Simulation in undergraduate medical education: bridging the gap between theory and practice. Med Educ 2004;38:32-8.
[8] Klein LW. Computerized patient simulation to train the next generation of interventional cardiologists: can virtual reality take the place of real life? Catheter Cardiovasc Interv 2000;51:528.
[9] Nendaz MR, Ponte B, Gut AM, et al. Live or computerized simulation of clinical encounters: do clinicians work up patient cases differently? Med Inform Internet Med 2006;31:1-8.
[10] Bordage G. Why did I miss the diagnosis? Some cognitive explanations and educational implications. Acad Med 1999;74:S138-S143.
[11] Medelez E, Burgun A, Le DF, et al. Integration of electronic resources and communication technologies during Clinical Reasoning Learning sessions. Stud Health Technol Inform 2002;90:107-11.
[12] Wilson AS, Goodall JE, Ambrosini G, et al. Development of an interactive learning tool for teaching rheumatology--a simulated clinical case studies program. Rheumatology (Oxford) 2006;45:1158-61.
[13] Perron NJ, Perneger T, Kolly V, et al. Use of a computer-based simulated consultation tool to assess whether doctors explore sociocultural factors during patient evaluation. J Eval Clin Pract 2009;15:1190-5.
[14] Raetzo MA, Restellini A. Docteur, j'ai... Genève: Editions Médecine et Hygiène, 3ème Edition, 2008.
[15] Nendaz MR, Raetzo MA, Junod AF, et al. Teaching Diagnostic Skills: Clinical Vignettes or Chief Complaints? Adv Health Sci Educ Theory Pract 2000;5:3-10.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-671
Applications of Medical Intelligence in Remote Monitoring István VASSÁNYI a,1, György KOZMANNa, András BÁNHALMI b, Balázs VÉGSŐ a, István KÓSA a, Tibor DULAI a, Zsolt TARJÁNYI a, Gergely TUBOLY, Péter CSERTI, Balázs PINTÉR a Dept. of Electronic Eng. and Inf. Systems, Univ. of Pannonia, Veszprém, Hungary b Dept. of Artificial Intelligence, University of Szeged, Szeged, Hungary
Abstract. Prevention and rehabilitation efficiency can greatly benefit from the application of intelligent, 24 hour tele-diagnostics and tele-care information systems. Tele-monitoring also supports a new level of medical supervision over the patient’s lifestyle. In this paper we briefly present the architecture and development phase results of the Alpha remote monitoring system. The novelty of the system is the unified and flexible processing of various signals retrieved from modern, unobtrusive devices in an efficient signal abstraction framework. The signals include PIR motion sensors that record patient movement in the home, physiological signals and also patient responses in various tests performed on the GUI of the central home unit. We have developed and tested the prototype system with promising results. Keywords. Home monitoring, medical intelligence, neurological diseases
1. Introduction Currently available home monitoring systems are generally focused on a few physiological signals measured at home in an automated or interactive manner and then transmitted to a centre with 24/7 medical supervision. Signal processing is usually limited to thresholding in order to generate alerts for supervisors. Often, the system is targeted at a specific patient group [1,2,3]. The Alpha remote monitoring system prototype was developed in Hungary by the ProSeniis consortium with the aim of adding more intelligence to the existing monitoring paradigm. The system is targeted specifically at elderly persons, living alone, with neurological diseases (dementia, Parkinson’s/Alzheimer’s disease, post-stroke rehabilitation). Fig. 1 shows the system components. The system is based on a Home Hub, which is a robust laptop-category touch-screen computer. It has a 3G internet connection and Bluetooth/ZigBee wireless links to the sensors installed in the home or worn by the patient. Data is transferred from individual Home Hubs to the Data Centre. The centre provides a web GUI for the supervising medical personnel, data analysts, and the family. Based on the experiences of similar projects [4,5], the system is designed to effectively ‘blend into’ the usual home environment in order to improve user acceptance. Wherever possible, we use passive sensors. The system is flexible enough to host any new type of sensor, but we propose a base configuration as follows: 1
Corresponding author.
The physiological and activity sensors are a bathroom scale, a blood pressure meter, an ECG recorder pad*, a blood glucose meter, a wrist-worn fine motion sensor*, and a PIR activity sensor network including fridge open/close, room temperature and lighting sensors. The devices marked with an * are the consortium’s own development. The software components available on the Home Hub GUI are cognitive and speech therapy software, a personalized dietary log and analysis module, and a physical exercise coach for post-stroke rehabilitation, all developed by the consortium. We have implemented the Alpha system prototype using a Java-based workflow engine built into a service-oriented architecture, both on the Home Hub and in the centre. This workflow engine is responsible for scheduling measurements for the individual patients, controlling the measurement process (including patient and device communication), processing the measured signals and sending alarms to patients, caregivers or doctors as needed.
Figure 1. Overview of the Alpha System
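As a rough illustration of the workflow engine’s role described above, the fragment below is a hypothetical Java sketch, not the ProSeniis code: it schedules a daily measurement, applies a simple threshold rule and sends an alarm. The Sensor and AlarmChannel interfaces and the threshold value are assumptions made for the example.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative sketch of a scheduled Home Hub measurement workflow (not the Alpha code). */
public class MeasurementScheduler {

    interface Sensor { double read(); }            // hypothetical device abstraction
    interface AlarmChannel { void send(String msg); }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Runs a daily blood-pressure measurement and alerts when a threshold is exceeded. */
    public void scheduleDaily(Sensor bloodPressure, AlarmChannel centre, double systolicLimit) {
        scheduler.scheduleAtFixedRate(() -> {
            double systolic = bloodPressure.read();   // step 1: perform the measurement
            if (systolic > systolicLimit) {           // step 2: simple threshold rule
                centre.send("Systolic pressure " + systolic + " exceeds " + systolicLimit);
            }
            // step 3: in the real system the raw signal would also be stored and
            // forwarded to the Data Centre for trend analysis.
        }, 0, 24, TimeUnit.HOURS);
    }
}
```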
2. Methods and results Medical intelligence in the Alpha system appears as sophisticated signal processing modules for measured patient data, and as a flexible framework that supports the combination and aggregation of heterogeneous data sources. We use rule-based, pattern-based and statistical methods to extract medically relevant events, activities and trends from low-level sensor data. An example is the estimation of the patient's daily activities based on motion sensor data. Extracted events may also be combined to form more complex events or alerts. For instance, the dietary log and the wrist-worn physical activity sensor data may be combined to estimate the daily energy deficit. By providing high-level, medically relevant information, we hope to save valuable time for the supervising doctor. This is why we call the system "intelligent". Our results in data processing are connected to physiological data processing (fine motion sensor and ECG) and behavioral data processing (PIR motion sensor and software interfaces). 2.1. Fall detection and step count using the fine motion sensor The patients wear a watch-like device on their wrist that captures the 3-D acceleration signal at a high resolution. The data is downloaded to the Home Hub from the device once a day in the evening, when it is placed in the charger. The goal of the signal processing module is to analyze the daily record and identify falls and near-falls, and to
estimate the step count. Although hip-worn accelerometers are normally considered to serve better for this purpose, we think that the wrist design has better patient acceptance due to its similarity to a watch. The basis of the step count algorithm is that we record the acceleration signal pattern of a single step and then match this pattern against the patient’s daily record. Our tests showed that the method can identify 98% of all real steps in an experiment not connected to the living lab experiment of the Alpha system; however, some non-steps are also identified. In our experience, it is much harder to exclude fake steps from the results than with a hip-worn sensor. For fall and near-fall detection, we developed a method that can identify falls as high-impact, short-duration events and separate them from other similar activities. An improvement compared to other similar methods [6] is that we can also identify falls followed by movement, such as the movements a person lying on the floor may make. The method could identify 80% of the ca. 200 test falls of 7 persons in an experiment not connected to the living lab experiment of the Alpha system. The implementation and deployment of the fall detection algorithm on the wrist unit to allow real-time fall detection was planned but will only be carried out in a future project. The main reason for this is the lack of a long-range Bluetooth module in the wrist unit and, therefore, the inability to transmit detected adverse events to the Home Hub. Offline fall detection is still useful in evaluating the patient’s overall physical condition and trends in ADL. Finally, we compute a metabolic equivalent (MET) measure from the raw wrist data by a method based on sliding window absolute integrals. This data is used by the dietary analyzer module. 2.2. ECG processing ECG analysis is an important source of diagnostics, especially for post-stroke patients. We plan that our monitored patients will take 10–60 second ECGs regularly, as prescribed by the supervising physician, with a 1-channel device connected to the Home Hub over a wireless link. The measurement is taken in a sitting position with two dry electrodes, offering a comfortable ECG measuring method for elderly people: the patient simply places his or her palms on the electrodes. The basic method of the signal processing module is to find heart cycles, extract basic parameters (heart rate, wave amplitudes and distances) and spectral features, classify the heart cycles and compare their distribution to the reference status measurement or to the preceding measurement. We believe that, due to the uncontrolled nature and high noise level of tele-medical ECG measurements, it is not the absolute parameter values but rather their trends of change that carry clinical relevance. The implementation of this function is under development; preliminary results support this approach. Trend analysis is a novelty of our method compared to other approaches [7]. The module also computes the averaged heart cycle and tries to detect atrial fibrillation based on the histogram of R-R distances and the Poincaré plot of several heart cycles. This function has been validated using the Physionet database. Our algorithm for heart cycle detection has reached sensitivity (Se) and specificity (Sp) values above 99% on test data from the MIT-BIH arrhythmia database. We need more data to assess the special features like atrial fibrillation detection.
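Two of the processing steps mentioned above can be sketched in a few lines of Java. Neither fragment is the Alpha implementation; both are simplified illustrations under stated assumptions. The first shows a sliding-window absolute integral over the acceleration magnitude (assumed to be expressed in g, with the static 1 g gravity level subtracted); mapping the resulting value to a MET estimate would need a calibration step that is not shown.

```java
/** Illustrative sliding-window absolute integral over wrist acceleration magnitudes. */
public class ActivityIntegral {

    /** magnitudeInG[i] is the acceleration magnitude of sample i, in units of g. */
    public static double[] slidingAbsoluteIntegral(double[] magnitudeInG, int windowSize) {
        int windows = Math.max(magnitudeInG.length - windowSize + 1, 0);
        double[] result = new double[windows];
        for (int start = 0; start < windows; start++) {
            double sum = 0.0;
            for (int i = start; i < start + windowSize; i++) {
                sum += Math.abs(magnitudeInG[i] - 1.0);  // deviation from the static 1 g level
            }
            result[start] = sum;
        }
        return result;
    }
}
```

The second fragment computes successive R-R intervals from detected R-peak times and a standard irregularity measure (RMSSD); a wide scatter of the points (RR[i], RR[i+1]) in a Poincaré plot corresponds to a high RMSSD, which is one of several cues for atrial fibrillation. Any decision threshold would be a patient-specific calibration value and is not part of the sketch.

```java
/** Illustrative R-R interval extraction and irregularity measure (not the Alpha ECG module). */
public class RrIrregularity {

    /** Successive R-R intervals (seconds) computed from R-peak times (seconds). */
    public static double[] rrIntervals(double[] rPeakTimes) {
        double[] rr = new double[Math.max(rPeakTimes.length - 1, 0)];
        for (int i = 0; i < rr.length; i++) {
            rr[i] = rPeakTimes[i + 1] - rPeakTimes[i];
        }
        return rr;
    }

    /** Root mean square of successive differences; higher values mean a more irregular rhythm. */
    public static double rmssd(double[] rr) {
        if (rr.length < 2) {
            return 0.0;
        }
        double sum = 0.0;
        for (int i = 0; i < rr.length - 1; i++) {
            double d = rr[i + 1] - rr[i];
            sum += d * d;
        }
        return Math.sqrt(sum / (rr.length - 1));
    }
}
```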
2.3. PIR motion sensor processing The home of the monitored patient is equipped with a network of wall-mounted PIR motion sensors. Data coming from these is used by the system for • identifying alarm conditions, such as an excessively long bedroom stay without movement; • computing summary parameters characteristic of the motion of the patient, and identifying events like going to bed or having a meal; • analyzing behavioral patterns and changes of patterns. This can be used as a clinical progress indicator, especially for dementia, Alzheimer’s disease and Parkinson’s disease. The automatic alerts are implemented in the system as special workflows. The method of behavioral pattern analysis is the hierarchical aggregation and merging of atomic sensor events into longer and longer periods of time labeled with the estimated activity (a simplified sketch of the lowest aggregation step is given after Section 2.4). At the middle level of aggregation, we have 5–20 non-overlapping time periods a day, labeled with one of 8 basic activities (like ‘household activity’, ‘bedroom stay’, etc.). We combine this segmentation of the day with a similar segmentation computed from the wrist-worn accelerometer, and identify the typical patterns. We think that these patterns and their trends of change are closely connected to the clinical state or progress of the patient. We need more real patient data from the living lab experiments to properly validate this hypothesis. We also plan to ask co-operative patients to keep a daily activity log and subjectively evaluate their condition. This data will be used to validate the time segmentation algorithm. 2.4. Diet logging and analysis Diet monitoring and analysis in the Alpha system begins with a detailed anamnesis taken at the beginning of a monitoring episode. All relevant data, including known diseases and the targeted weight loss/increase, are recorded, and the personalized daily minimal/optimal/maximal values for ca. 50 different key nutrients are computed. We also identify and store the list of harmful or forbidden nutrients, foods, dishes, and dish types. After the monitoring begins, the patient is reminded to enter the contents of her/his meal via a dietary logging graphical interface on the Home Hub touch-screen. This interface is designed for elderly people with poor eyesight. It uses a combination of keyword-based and meal-set-based search methods to minimize the required search effort. The database contains ca. 800 foods/dishes frequently consumed by the target group, organized by food type and common sense into hierarchical sets. We estimate that 5-6 touches are needed for a user to find an item and select its quantity and natural unit. The interface supports the extension of the database as well as the definition of complex food combinations like ‘my favorite sandwich’. For an online test, see [8]. The recorded meal is compared to the risk list, and an alarm is generated if necessary. We conducted an experiment not connected to the living lab experiment of the Alpha system: according to our tests with 26 volunteers, the logging interface allows the logging of real food consumption with ca. 15% precision. The error comes mainly from the incompleteness of the database.
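The following fragment sketches the lowest aggregation step described in Section 2.3. It is a hypothetical illustration, not the Alpha implementation: the event fields, the single gap rule and the period labels are assumptions made for the example; the real system builds further hierarchical levels (meals, bedtime, household activity) on top of such periods.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of merging atomic PIR events into labeled periods (not the Alpha code). */
public class PirAggregation {

    record SensorEvent(long timestampSeconds, String room) {}
    record Period(long start, long end, String label) {}

    /**
     * Merges consecutive events from the same room into one labeled period; a new period
     * starts when the room changes or the gap between events exceeds maxGapSeconds.
     */
    public static List<Period> aggregate(List<SensorEvent> events, long maxGapSeconds) {
        List<Period> periods = new ArrayList<>();
        if (events.isEmpty()) return periods;

        SensorEvent first = events.get(0);
        long start = first.timestampSeconds(), last = first.timestampSeconds();
        String room = first.room();

        for (SensorEvent e : events.subList(1, events.size())) {
            boolean sameSegment = e.room().equals(room)
                    && e.timestampSeconds() - last <= maxGapSeconds;
            if (!sameSegment) {
                periods.add(new Period(start, last, room + " stay"));
                start = e.timestampSeconds();
                room = e.room();
            }
            last = e.timestampSeconds();
        }
        periods.add(new Period(start, last, room + " stay"));
        return periods;
    }
}
```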
2.5. Cognitive therapy Cognitive exercises are important for all neurological and post-stroke patients as a progress indicator and as a means of therapy. We have developed applications running on the Home Hub for cognitive tests, speech therapy and physical exercise. The cognitive test set contains ca. 200 simple exercises designed by experts. Some exercises, in which the patient is asked to draw on the touch screen, are designed specifically to track the progress of Parkinson’s disease. We use a dynamic time warping method to measure the temporal and spatial precision of the drawing, and we also estimate the frequency of the hand tremor if present. The supervising physician can later review a numerical assessment or re-play the drawing action. We performed experiments not connected to the living lab experiment of the Alpha system to test the correctness and precision of the tests on 20 healthy and 5 Parkinsonian subjects. In the speech therapy module we ask the patient to listen to recorded speech, and to pronounce phonemes, words and phrases appearing on the screen. We analyze the quality of the response, and always give immediate feedback. In tests not connected to the living lab experiment of the Alpha system, we found that our speech analyzer method has a ROC area-under-curve above 90% with respect to identifying correct pronunciation.
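A minimal sketch of the dynamic time warping comparison mentioned in Section 2.5 is given below. It is not the Alpha module: the drawing and the template are assumed to be sequences of 2-D points sampled from the touch screen, and the normalisation of the resulting cost, as well as the tremor-frequency estimation, are left out.

```java
import java.util.Arrays;

/** Illustrative dynamic time warping between a drawn curve and a reference template. */
public class DrawingDtw {

    /** Euclidean distance between two 2-D points given as {x, y}. */
    private static double dist(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    /**
     * Classic O(n*m) DTW cost; a lower value means the patient's drawing follows the
     * reference shape more closely in both space and time.
     */
    public static double dtwCost(double[][] drawing, double[][] template) {
        int n = drawing.length, m = template.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = dist(drawing[i - 1], template[j - 1]);
                cost[i][j] = d + Math.min(cost[i - 1][j - 1],
                                 Math.min(cost[i - 1][j], cost[i][j - 1]));
            }
        }
        return cost[n][m];
    }
}
```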
3. Discussion and conclusions We developed a complex hardware/software architecture for the home monitoring of elderly patients. In this framework, we implemented a set of sophisticated signal processing modules and GUIs. These were tested with public databases and healthy volunteers with good results. A more accurate validation, however, will only be possible in the living lab experiments involving real patients living in their homes. For more information, please visit our project web-site at www.proseniis.com. The work presented was partly funded by the National Innovation Office, Hungary (projects No. OM-00191/2008-AALAMSRK ‘ProSeniis’ and SI-2/2009, OMFB 00234/2010).
References
[1] Powell, J., Jennings, A., Armstrong, N., Sturt, J., Dale, J., Pilot study of a virtual diabetes clinic: satisfaction and usability, J Telemed Telecare 15 (2009), 150-152.
[2] McCant, F., McKoy, G., Grubber, J., Olsen, M.K., Oddone, E., Powers, B., Bosworth, H.B., Feasibility of blood pressure telemonitoring in patients with poor blood pressure control, J Telemed Telecare 15 (2009), 281-285.
[3] Varis, J., et al., Experiences of Telemedicine-Aided Hypertension Control in the Follow-Up of Finnish Hypertensive Patients, Telemedicine and e-Health 15 (2009), 764-769.
[4] Hanak, D., Szijarto, G., Takacs, B., A Mobile Approach to Ambient Assisted Living, IADIS Int. Conf. Wireless Applications and Computing, Lisbon, Portugal, 2007.
[5] Rialle, V., Lamy, J., Noury, N., Bajolle, L., Telemonitoring of patients at home: a software agent approach, Computer Methods and Programs in Biomedicine 72 (2003), 257-268.
[6] Kangas, M., et al., Sensitivity and specificity of fall detection in people aged 40 years and over, Gait & Posture 29 (2009), 571-574.
[7] Christov, I., Real time electrocardiogram QRS detection using combined adaptive threshold, Biomed Eng Online, vol. 3, no. 1 (August 2004).
[8] http://ginf.hu:8080/dietlog/geedition.swf (in Hungarian)
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-676
Virtual TeleRehab: A Case Study Lena PARETOa, Britt JOHANSSONb, Sally ZELLERb, Katharina S SUNNERHAGENc,d, Martin RYDMARKe, Jurgen BROERENb1 a Laboratory of Media Design, University West, Trollhättan, Sweden b NU-Hospital Organisation, Dep. of Research and Development, Trollhättan, Sweden c Institute of Neuroscience and Physiology, University of Gothenburg, Sweden d Sunnaas Rehabilitation Hospital and Faculty of Medicine, University of Oslo, Norway e Institute of Biomedicine, Mednet, University of Gothenburg, Sweden
Abstract. We examined the efficacy of a remotely based occupational therapy intervention. A 40-year-old woman who suffered a stroke participated in a telerehabilitation program. The intervention method is based on virtual reality gaming to enhance the training experience and to facilitate the relearning processes. The results indicate that Virtual TeleRehab is an effective method for motivational, economical, and practical reasons by combining game-based rehabilitation in the home with weekly distance meetings. Keywords. Occupational therapy, Stroke, Telerehabilitation, Virtual Reality
1. Introduction According to the WHO, by 2050, over 700,000 Americans and 920,000 Europeans will have a stroke each year, of whom more than 50% will survive [1]. There are several reports in the literature showing that virtual reality (VR) games in stroke rehabilitation can be successful [2, 3]. Nevertheless, there has been limited research involving telerehabilitation (TR) with the incorporation of VR games into the home setting [4]. In particular, evaluation studies of telehealth systems for the elderly and disabled are rare [5]. For stroke rehabilitation, motivation to continue working on improving functional abilities after returning home is a major issue for stroke veterans [6]. The authors conclude that home-based telerehabilitation programs have to combine training to improve functional mobility with weekly real-time contact. We deployed a telerehabilitation system [7] in the rehabilitation process for continuous and frequent monitoring of the patient’s functionality, in order to deliver occupational therapy in the home environment and adapt it to the patient’s progress. This differs from the typical telemedicine service, which involves a short intensive session with one or more clinicians and a patient. In a previous study [8], on-line coaching meetings were shown to be feasible but not efficient enough. The aim of the current study is to investigate the role of training progression interfaces as a boundary object for improving the meaning and effect of online meetings. A boundary object [9] allows people to use a shared information artifact supporting communication and interaction [10]. 1
Corresponding author. Jurgen Broeren. Institute of Neuroscience and Physiology, University of Gothenburg, SU/Sahlgrenska, SE-413 45 Goteborg Sweden. [email protected]
2. The Telerehabilitation System The TR system consists of a desktop-sized immersive workbench (www.curictus.com), which uses a three-dimensional (3D) virtual environment with games that have an inbuilt rehabilitation component (serious games), designed for upper extremity (UE) movement therapy and assessment. A patient care management system (PCMS) enables the transfer of real-time system data and maintains an archive of all information. Furthermore, the PCMS logs effective training time and overall time using the system, as well as performance parameters such as the result of each game, the number of times it was run, and the performance for each run. The occupational therapist (OT) can observe and graph the subject’s progress, discuss the games to be played, and view progression data during the on-line meetings, as shown in Figure 1.
Figure 1. The telerehabilitation system (left) and a screen shot from a clinician-patient online meeting (right).
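As an illustration of the kind of per-game logging and progress data mentioned above, the sketch below is a hypothetical Java fragment, not the Curictus PCMS: the session fields and the two summary values are assumptions chosen to mirror the parameters listed in the text (training time, number of runs, performance per run).

```java
import java.util.List;

/** Illustrative sketch of per-game session logging and a simple progress summary. */
public class GameSessionLog {

    /** One logged run of a serious game (fields are hypothetical). */
    record Session(String game, long startMillis, long durationSeconds, double score) {}

    /** Total effective training time, in minutes, over all logged sessions. */
    static long totalTrainingMinutes(List<Session> sessions) {
        return sessions.stream().mapToLong(Session::durationSeconds).sum() / 60;
    }

    /** Mean score for one game: the kind of value an OT could graph week by week. */
    static double meanScore(List<Session> sessions, String game) {
        return sessions.stream()
                       .filter(s -> s.game().equals(game))
                       .mapToDouble(Session::score)
                       .average()
                       .orElse(Double.NaN);
    }
}
```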
3. The Study The scenario is that the user sits at home in front of a computer monitor with stereoscopic 3D visualization and holds a haptic stick (a robotic arm with a track stick, which mediates a feeling of touch and force feedback) with which he/she performs different “serious games”. Prior to entering the study, the stroke subject learned to use the TR system and was instructed to play serious games for at least 20 minutes a day during 10 consecutive weeks. Once a week the OT monitored and coached the subject from a distance. For the collection of assessments and audiovisual communication between therapist and patient, the system had bidirectional contact with the home-based units. The referral criteria were 1) diagnosis of stroke; 2) hemiparesis in one of the upper extremities, that is, a Box and Block score lower than 45 [11]; and 3) no signs of neglect. Exclusion criteria were 1) joint problems or prior injury to the arm/hand; and 2) language difficulty affecting information reception. The Regional Ethical Review Board in Gothenburg, Sweden, approved the study and the subject included gave written informed consent. The subject was a woman in her early forties, right-handed, and had a first-occurrence stroke. The time since the stroke was 1 year at study admission. The cause was an infarct, and the diagnosis was determined by a neurologist after clinical examination and confirmed by a computed tomography scan. The subject was able to ambulate independently and could move her arm partially with a weak lateral grasp. Her body awareness and spatial competence were normal. There were no limitations in
understanding the information given. Most of her activities of daily living (ADLs) were accomplished by compensating with the right arm. The intervention used a pre-/post-test design. The following outcome measurements were administered: 1) grip strength using an electronic dynamometer [12]; 2) unilateral hand and upper limb function (ARAT) [13]; and 3) current self-rated health-related quality of life by the EQ VAS on a graduated (0–100) scale, with 100 indicating the best health status [14]. Movement kinematics were measured with the PHANToM Omni® (haptic stylus end-point); from this, the time (s) to complete the test was recorded [15]. During the intervention, seven online and three face-to-face (f2f) meetings were recorded. Conversational Analysis [16] was used to classify content into categories, to analyse the fluency of conversation [8], and to identify occurrences of encouragement and positive feedback given by the therapist. Each conversational topic was classified as social talk, health status, planning, training discussion, or technical issues, and its duration was timed. Each motivational phrase was counted and related to the topic. After the intervention, the subject and the therapist were interviewed.
4. Results The data suggest that the improvement was most prominent in grip force and the UE test. Hand strength increased from 68 to 98 Newton, and the time to complete the UE test decreased from 227 to 87 seconds. Gains in manual ability (ARAT) were noted, from 19/57 to 22/57. The EQ VAS score was stable throughout, i.e. 75 and 80 at pre- and post-testing, respectively. The subject had 10 hours of game play and a total time of 13.5 hours using the system.
Figure 2. Topics discussed during e-meetings.
The high amount in session 4 was due to the subject’s unusual lack of motivation and self-esteem, recognized by the therapist and acted on accordingly. As much as 82% of the motivational phrases were given during training discussions, which coincide with using the progression data interface. Despite being new to video-conferencing, the subject quickly became accustomed to the medium: “You learn to wait and handle the turn-taking procedure,” as mentioned in the interview. Perceived advantages of the e-meetings were the convenience and time saved, being aware of progressions and the objective measures (“It is difficult to judge yourself”), and the “push” posed by having meetings where training data was reviewed (“You want to have something to show”). The subject consulted the progression data herself for motivation: “Wow, I’ve played a lot”. The mentioned disadvantage compared to f2f meetings was a feeling of inhibition in verbal and gestural expression. Yet, non-verbal communication was present during meetings, such as demonstrating hand positions (Figure 3A), sharing affective states such as joy (Figure 3B), or waving goodbye (Figure 3C).
Figure 3. E-meeting screen shots: A) showing hand position, B) moment of shared joy, C) waving goodbye.
5. Discussion The proposed TR gaming system (Figure 1) helped the OT to follow up on the patient’s involvement with the training program. The PCMS automatically logged the frequency and duration of training sessions, which allowed evaluation of the subject’s compliance with the training program and of her progress. Furthermore, the subject improved in all outcome measures. The haptic technology offers unique possibilities to measure and visualize hand trajectories and to obtain significant information about the patient’s hand movements: precision, speed and speed adaptation, force applied and force adaptation, search patterns, target identification and others. In this study, we only analyzed the time to complete the UE test, because the aim was to investigate the role of training progression interfaces as a boundary object for improving the meaning and effect of online meetings. Previous studies by our group have shown positive effects on motor and cognitive rehabilitation.
References
[1] WHO, International Classification of Functioning, Disability and Health: ICF, World Health Organization, Geneva, 2001.
[2] Adamovich, S.V., Fluet, G.G., Tunik, E., Merians, A.S., Sensorimotor training in virtual reality: a review, NeuroRehabilitation 25 (2009), 29–44.
[3] Henderson, A., Korner-Bitensky, N., Levin, M., Virtual reality in stroke rehabilitation: a systematic review of its effectiveness for upper limb motor recovery, Top Stroke Rehabil 14 (2007), 52–61.
[4] Crosbie, J.H., Lennon, S., Basford, J.R., McDonough, S.M., Virtual reality in stroke rehabilitation: still more virtual than real, Disabil Rehabil 29 (2007), 1139–1146.
[5] Koch, S., Home telehealth – current state and future trends, Int J Med Inform 75 (2006), 565–576.
[6] Lutz, B.J., Chumbler, N.R., Roland, K., Care coordination/home-telehealth for veterans with stroke and their caregivers: addressing an unmet need, Top Stroke Rehabil 14 (2007), 32–42.
[7] Broeren, J., Johansson, B., Ljungberg, C., Pareto, L., Sunnerhagen, K., Rydmark, M., Refinement of an ICT Based NeuroRehabilitation System Employing Telemedicine, Haptics, 3D-visualization and Serious-games, European Notes in Medical Informatics, Antalya, 2009.
[8] Broeren, J., Pareto, L., Ljungberg, C., Johansson, B., Sunnerhagen, K., Rydmark, M., Telehealth using 3D Virtual Environments in Stroke rehabilitation – Work in Progress, Proceedings Intl Conf on Disability, Virtual Reality and Assoc Technologies (2010), 115–122.
[9] Star, S.L., Griesemer, J.R., Institutional Ecology, ‘Translations’, and Boundary Objects: Amateurs and Professionals in the Museum of Vertebrate Zoology, Social Studies of Science 19 (1989), 387–420.
[10] Østerlund, C., The Materiality of Communicative Practices: The boundaries and objects of an emergency room genre, Scandinavian Journal of Information Systems 20 (2008), 7–40.
[11] Mathiowetz, V., Volland, G., Kashman, N., Weber, K., Adult norms for the Box and Block Test of manual dexterity, Am J Occup Ther 39 (1985), 386–391.
[12] Nordenskiold, U.M., Grimby, G., Grip force in patients with rheumatoid arthritis and fibromyalgia and in healthy subjects. A study with the Grippit instrument, Scand J Rheumatol 22 (1993), 14–19.
[13] Lyle, R.C., A performance test for assessment of upper limb function in physical rehabilitation treatment and research, Int J Rehabil Res 4 (1981), 483–492.
[14] Brooks, R., EuroQol: the current state of play, Health Policy 37 (1996), 53–72.
[15] Broeren, J., Rydmark, M., Sunnerhagen, K.S., Virtual reality and haptics as a training device for movement rehabilitation after stroke: a single-case study, Arch Phys Med Rehabil 85 (2004), 1247–1250.
[16] Mazur, J.M., Handbook of research on educational communications and technology, Lawrence Erlbaum Associates, New York, 2004.
[17] Cornelius, C., Boos, M., Enhancing Mutual Understanding in Synchronous Computer-Mediated Communication by Training Trade-Offs in Judgmental Tasks, Communication Research 30 (2003), 147–177.
[18] Walther, J.B., Loh, T., Granka, L., Let me count the ways: The Interchange of Verbal and Nonverbal Cues in Computer-Mediated and Face-to-Face Affinity, Journal of Language and Social Psychology 24 (2005), 36–65.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-681
Patient Empowerment by Increasing Information Accessibility In a Telecare System a
Vasile TOPACa , Vasile STOICU-TIVADARa,1 “Politehnica” University of Timişoara, Romania
Abstract. Patient empowerment is important in order to increase the quality of medical care and the quality of life of patients. In this respect, the paper describes how a telecare system can become “friendlier” to the assisted persons (elderly people or post-discharge patients) thanks to a specific feature addressing patient access to information from medical texts. The corresponding service is part of the server of a tele-care/tele-assistance system (TELEASIS) and adapts medical text to lay, patient-friendly language, contributing in this respect to the patient empowerment process. This component is based on an original terminology interpretation engine, which is briefly described in this paper. The TELEASIS system has a specific interface dedicated to medical personnel, allowing the addition and assignment of medical texts to patients or groups of patients; these texts can later be accessed by the patients, adapted to a patient-friendly language. The medical texts are saved in a central medical information database which contains different content formats (text, multimedia, videos). In conclusion, the adapted information available to the assisted persons and the communication channels established in the system increase the possibility of patients being better informed about their health status. Keywords. Tele-assistance, text accessibility, patient empowerment, elderly people.
1. Introduction The traditional medical treatment or social assistance model (such as elderly care) often ignores the non-medical (emotional, social, and cognitive) aspects of living. In order to achieve a holistic approach to the relationship between healthcare personnel and patients, patient empowerment is an answer to this challenge. Patient empowerment is defined as helping people to discover and use their own innate ability to gain mastery over their disease or status [1] - by providing education for informed decision-making, assisting patients to weigh the costs and benefits of various treatment options, setting self-selected behavioural goals, and providing information about the importance of their role in self-management. Health care professionals are under increasing pressure to become more efficient [2]. In this respect, a collaborative approach such as patient empowerment, which improves the quality of medical care while reducing the burden on medical professionals, is a must. The challenges of fostering the adoption of the new paradigm of patient empowerment 1
[email protected].
differ substantially from those associated with the introduction of new technology. The adoption of the collaborative care approach empowers health care professionals as much as it does patients [3]. That is the reason why new functions are developed in the TELEASIS project, related to increased accessibility of medical information for patients and to a better understanding of medical terms by patients. But we must be aware that the patient empowerment paradigm has its own pitfalls [4][5]. Healthcare professionals must promote more responsibility for the patients themselves. Medical language is very often hard for regular people to understand. Given this, the communication between doctors and patients can suffer, especially in the remote communication that takes place in systems like TELEASIS. A research project that used a specialized classifier to evaluate how easy it is for regular people to access data expressed in medical language reached the following conclusion: “The classifier was then applied to existing consumer health Web pages. We found that only 4% of pages were classified at a layperson level, regardless of the Flesch reading ease scores, while the remaining pages were at the level of medical professionals. This indicates that consumer health Web pages are not using appropriate language for their target audience” [6]. This can greatly affect patients’ access to their health information. A poor understanding of their health status may have a negative influence on their health evolution. Empowering the patients with more understanding of the medical information related to them will strongly reduce this risk.
2. A better accessibility to information One key element of patient empowerment is enhancing accessibility to information of interest. The TELEASIS system offers several ways to ensure this, such as access to a central medical information database, access to additional communication channels, and interpretation of medical language. The last issue is the main topic of this paper. Interpreting medical language means adapting or “translating” information from specialized medical language to lay or patient-friendly language.
Figure 1. Getting patient-friendly information.
The TELEASIS system offers patients access to their health data, reports and additional medical information. All this data is stored in an information and content database. Enrolled medical staff or other power users can add documents to this database and can set the access rights for patients or groups of patients. In this way each patient can
access different documents. While allowing patients to access medical information proves to be useful, as noted in the introduction, patients encounter great difficulties in understanding that information. For this reason, TELEASIS uses an interpretation engine that allows the patient to get the medical information “adapted” to regular language, which is easier for them to understand. The process of a user getting a document containing medical information adapted to lay language from the TELEASIS database is shown in the sequence diagram illustrated in Figure 1. The interpretation engine is described in more detail below.
3. Interpretation Engine Existing solutions: There are several language and text processing tools available online, but we could not identify any dedicated to specialized language interpretation and text adaptation. A step forward in solving this kind of accessibility issue is given by research and tools analyzing the level of accessibility of specialized language. One such research effort, applied in the medical area, has delivered a framework for classifying specialized medical language by level of access difficulty [6]. The final results of this research mentioned: "We found that only 4% of pages were classified at a layperson level, regardless of the Flesch reading ease scores, while the remaining pages were at the level of medical professionals. This indicates that consumer health web pages are not using appropriate language for their target audience". Other research [7] proposed a framework to inform the design of an "interpretive layer" to "mediate" between lay (illness model) and professional (disease model) perspectives. The classic solution in this area is language interpretation done by human interpreters. The presence of the interpreter makes it possible for the patient and provider to achieve the goals of their encounter as if they were communicating directly with each other. Several international institutions, such as IMIA (International Medical Interpreters Association) [8], provide standards and frameworks for medical interpreters. Proposed Solution: The area of software applications focused on terminology is growing, and it is also in the process of standardization. The ISO organization has already released standards such as ISO/TC 37/SC 3, which "defines standards and best practices for using computers to manage terminology and other language resources." [9] In order to develop high-quality software solutions, some research on these standards is planned. Also, in order to assure the quality of text interpretation, the project partially tries to simulate the interpretation done by human interpreters. For this, the standards [10] given by the IMIA association have been used as a set of guidelines for this research, considering issues such as interpretation, cultural interface and ethical behavior. This area-specific framework description is used as a guideline for this research. To perform interpretation of specialized language, this research proposes a simple design for an interpreting tool, having two main modules: a text parser (TP) and a terminology dictionary (TD). The text parser processes all the raw text, word by word, checking each word against the terminology dictionary. Whenever a word is found, it means it is a specialized term. The methodology described here is implemented in a functional prototype developed in Java. Text parser. The text parser works on raw or tagged text (like HTML) in this phase, performing the basic operation of iterating through all the words and checking whether each word is contained in the dictionary or not. If it is, the meaning of the term is
appended to the word in parentheses. It is able to tag words and offer output as XML or in several HTML-compliant text formats. Terminology Dictionary (TD). The problem with linguistic data, and especially with natural language processing, is that it deals with uncertain information. A method of dealing with this kind of data is to use error-tolerant methods like fuzzy string matching. In many cases when dealing with text, the back-end data storage solutions are databases. Fuzzy techniques for database information manipulation have been worked on for a while, with FSQL (FuzzySQL) or SQLf getting closer to standardization [11]. The same is not true for in-memory fuzzy data structures. This project uses a fuzzy data structure that was designed especially for this terminology interpretation. A detailed presentation of this novel data structure, named FuzzyHashMap (FHM), can be found in article [12], and the data structure project is available as open source at [13]. Fuzzy Medical Dictionary. We want to identify medical-specific terms in plain text. The terms have to be identified even when they are not in their canonical form. For this we use an FHM to build a medical-specific terminology dictionary, and we use that terminology dictionary to recognize specialized medical terms in the given text. Suppose we are parsing the following phrase: “... in diabetic diet recommendations ...” Each word is checked against the dictionary. When arriving at the term “diabetic”, as presented in Fig. 2, the dictionary searches by first pre-hashing the term. The hash code for the resulting string “diab” is computed, and it points to the “diabet” entry. The Levenshtein distance [14] (which is the default approximate matching algorithm in FHM) between “diabet” and “diabetic” is 2, which is the default threshold value in FHM. So the word “diabetic” has been associated with the term “diabet” from the dictionary. In conclusion, the FHM enables finding terms that are not in their canonical form in a very efficient way. To make this possible, the structure is error-tolerant, so it may make mistakes, but a good threshold and algorithm tuning improve the performance of the FHM.
Fig. 2. Fuzzy searching for word “diabetic”
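For illustration only, the following Python sketch mimics the behaviour just described: dictionary terms are bucketed by a fixed-length prefix (the "pre-hash"), look-ups tolerate non-canonical word forms via the Levenshtein distance with a threshold of 2, and recognised terms get their meaning appended in parentheses. This is not the authors' Java FuzzyHashMap implementation; the prefix length, the helper names and the single dictionary entry are assumptions made purely for this example.

def levenshtein(a, b):
    # Classic two-row dynamic-programming edit distance.
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]


class FuzzyTermDictionary:
    # Toy stand-in for the FuzzyHashMap: terms are bucketed by prefix and
    # accepted within an edit-distance threshold.
    def __init__(self, prefix_len=4, threshold=2):
        self.prefix_len, self.threshold, self.buckets = prefix_len, threshold, {}

    def add(self, term, meaning):
        self.buckets.setdefault(term[: self.prefix_len].lower(), []).append((term, meaning))

    def lookup(self, word):
        for term, meaning in self.buckets.get(word[: self.prefix_len].lower(), []):
            if levenshtein(term.lower(), word.lower()) <= self.threshold:
                return term, meaning
        return None


def annotate(text, dictionary):
    # Append the dictionary meaning, in parentheses, after each recognised word.
    out = []
    for word in text.split():
        hit = dictionary.lookup(word.strip(".,;:!?"))
        out.append(word + " (" + hit[1] + ")" if hit else word)
    return " ".join(out)


fd = FuzzyTermDictionary()
fd.add("diabet", "related to diabetes, a metabolic disorder")   # invented meaning text
print(annotate("... in diabetic diet recommendations ...", fd))

Tightening the threshold or changing the prefix length trades recall for precision, which is the tuning trade-off mentioned above.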
4. Discussion Medical language is known for its specificity and specialized terminology. Given that a lack of understanding of such information may endanger a person's health, the accessibility of this kind of information has attracted growing interest in recent years. When people who are not native speakers interact with specialized language, the accessibility problems are even bigger. In some countries there are laws dedicated to this kind of accessibility. Research investigating language accessibility and related legislation notes that "For twenty-three million Americans who speak English less than "very well," language barriers lead to lower quality of and worse access to health care. ... the lack of comprehensive implementation and enforcement leaves millions of patients with
limited English proficiency forced to accept a lower quality of care than English speakers receive." [15]. In our project we chose to limit text adaptation to terminology labeling rather than a complete "translation" into lay language. An interesting study [16] concluded that a translation into lay language may decrease patients' confidence in the message. The described service has so far only been tested by student testers; we plan to perform tests in real scenarios with patients.
5. Conclusions Patient empowerment is important for increasing patients' quality of life. The personalized information available to the assisted persons and the communication channels established in the system improve the patients' chances of being better informed about their health problems. They can better understand their own health status and problems, thereby contributing to patient empowerment.
References
[1] Funnell M. Patient Empowerment. Critical Care Nursing Quarterly 2004;27(2):201-204.
[2] Frohna JG, Frohna A, Gahagan S, Anderson RM. Tips for communicating with patients in managed care. Semin Med Pract 2001;4:29-36.
[3] Anderson RM, Funnell MM. Patient empowerment: reflections on the challenge of fostering the adoption of a new paradigm. Patient Education and Counseling 2005;57(2):153-157.
[4] Fox NJ, Ward KJ, O'Rourke AJ. The 'expert patient': empowerment or medical dominance? The case of weight loss, pharmaceutical drugs and the Internet. Social Science & Medicine 2005;60(6):1299-1309.
[5] Salmon P, Hall GM. Patient empowerment or the emperor's new clothes. Journal of the Royal Society of Medicine 2004;97:53-56.
[6] Miller T, Leroy G, Chatterjee S, Fan J, Thoms B. A Classifier to Evaluate Language Specificity of Medical Documents. In: HICSS 2007, 40th Annual Hawaii International Conference on System Sciences, 2007.
[7] Soergel D, Tse T, Slaughter L. Helping healthcare consumers understand: an "interpretive layer" for finding and making sense of medical information. Stud Health Technol Inform 2004;107(Pt 2):931-935.
[8] International Medical Interpreters Association, http://www.imiaweb.org/default.asp
[9] ISO/TC 37, Terminology and other language and content resources, Business Plan of ISO/TC 37, http://isotc.iso.org/livelink/livelink/fetch/2000/2122/687806/ISO_TC_037__Terminology_and_other_language_resources_.pdf?nodeid=1160801&vernum=-2
[10] Medical Interpreting Standards of Practice, http://www.imiaweb.org/standards/standards.asp
[11] Urrutia A, Tineo L, Gonzalez C. FSQL and SQLf: Towards a Standard in Fuzzy Databases. In: Handbook of Research on Fuzzy Information Processing in Databases, 2008, pp. 270-298.
[12] Topac V. Efficient Fuzzy Search Enabled Hash Map. In: 4th International Workshop on Soft Computing Applications (SOFA), July 2010, pp. 39-44.
[13] FuzzyHashMap open source project, http://fuzzyhashmap.sourceforge.net/
[14] Levenshtein V. Binary codes capable of correcting spurious insertions and deletions of ones. Probl Inf Transmission 1965;1:8-17.
[15] Youdelman MK. The Medical Tongue: U.S. Laws and Policies on Language Access. Health Affairs 2008;27(2):424-433.
[16] Ogden J, Branson R, Bryett A, Campbell A, Febles A, Ferguson I, Lavender H, Mizan J, Simpson R, Tayler M. What's in a name? An experimental study of patients' views of the impact and function of a diagnosis. Fam Pract 2003;20(3):248-253.
Terminology, Ontologies and Standardization
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-689
A Standard Based Approach for Biomedical Knowledge Representation Ariel FARKASHa, Hani NEUVIRTHa, Yaara GOLDSCHMIDTa, Costanza CONTIb, Federica RIZZIc, Stefano BIANCHId, Erika SALVIc,e, Daniele CUSIe, Amnon SHABOa a IBM Haifa Research Lab, Haifa Univ. Mount Carmel Haifa, 31905, Israel b IMS-Istituto di Management Sanitario SRL, via Podgora, 7-20122 Milano, Italy c KOS Genetic SRL, Viale Ortles, 22/A -20139 Milano, Italy d Softeco Sismat SRL, Via De Marini 1, WTC Tower, 16149, Genoa, Italy e Hypergenes Consortium, Fondazione Filaterete, Viale Ortales 22/4, Milano, Italy
Abstract. The new generation of health information standards, where the syntax and semantics of the content is explicitly formalized, allows for interoperability in healthcare scenarios and analysis in clinical research settings. Studies involving clinical and genomic data include accumulating knowledge as relationships between genotypic and phenotypic information as well as associations within the genomic and clinical worlds. Some involve analysis results targeted at a specific disease; others are of a predictive nature specific to a patient and may be used by decision support applications. Representing knowledge is as important as representing data since data is more useful when coupled with relevant knowledge. Any further analysis and cross-research collaboration would benefit from persisting knowledge and data in a unified way. This paper describes a methodology used in Hypergenes, an EC FP7 project targeting Essential Hypertension, which captures data and knowledge using standards such as HL7 CDA and Clinical Genomics, aligned with the CEN EHR 13606 specification. We demonstrate the benefits of such an approach for clinical research as well as in healthcare oriented scenarios. Keywords. HL7/ISO RIM, HL7/ISO CDA R2, HL7/ISO Clinical Genomics, CEN EHR 13606, Knowledge Representation.
1. Introduction Standards are used to exchange information between disparate applications serving a variety of healthcare processes, often within the same enterprise. The goal is to have 'semantic interoperability' between the trading applications, so that an entity can deal with received data in the same way it deals with its own data. The new generation of object oriented standards facilitates this approach by serving as conceptual data models of persistency layers accessed by the applications. The HL7 v3 Reference Information Model1 (RIM) is an ANSI and ISO-approved standard information model for healthcare data used to derive consistent health information standards such as laboratory, public health, clinical trials and clinical genomics. The Clinical Document Architecture2 (CDA) model is derived from the RIM and specifies the structure for clinical documents. Similarly, the Clinical Genomics Genetic Variation3 (GV) model captures genotype-phenotype relationships. These models are serialized to W3C XML
schemas. In order to allow for a wide variety of use cases, these models have a generic nature. Thus, typically the models are further constrained in implementation guides (also called 'templates') targeted at specific use cases (e.g., CDA Operative Note). Traditionally, health standards were used to exchange patient-specific data. With the emergence of decision support applications, it is evident that knowledge representation becomes crucial to enable the evaluation of relevant knowledge and to generate recommendations. Thus it would be more efficient to have data and knowledge represented over a common language. Indeed, several current efforts use the RIM to represent knowledge models, e.g. HL7 and CDISC clinical trials work in the Study Design model4. The HL7 Clinical Genomics workgroup is developing a new Domain Information Model5 with the genome as the highest organizational entry point. Based on this effort, we have developed derivations of GV models to hold knowledge generated in the course of analyzing SNPs of hyper- and normotensive subjects in a Genome-Wide Association Study. Various approaches and analyses were applied, yielding different results. The GV model can be instantiated so that each 'knowledge instance' holds the results of a certain analysis; thus, researchers can exchange and compare results. More importantly, decision support applications can use the results to combine patient data and disease knowledge to generate advice for the clinician. In Hypergenes6, a European Commission FP7 project exploring the essential hypertension disease model, we built a set of templates for capturing the different artifacts created in the project. For clinical and environmental data, we created an essential hypertension CDA-based template as a comprehensive data representation of data collected on Hypergenes subjects. For other artifacts such as genomic analysis results, subject genotyping, and decision support information, we created templates based on the GV standard. These templates extend information sharing by serving as the underlying data model representing interactions between environmental, clinical, and genomic factors relevant in studying the complex disease of essential hypertension. Moreover, these templates can be incorporated into the CEN EHR 13606 standard7 when it is implemented over the RIM, where the CDA is a composition in the subject's EHR and GV instances are linked compositions in the same EHR folder where the CDA is placed. In this paper we depict the methodology used to capture data and knowledge artifacts in the Hypergenes project, along with concrete examples.
2. Methods We classify the artifacts of our research into three categories: data, knowledge and information. Data is raw clinical or genomic patient data. Knowledge is an understanding of the studied disease that is not specific to any patient. Knowledge may be publicly available or generated within the project scope, e.g. by analysis of the data. Information is a subject-specific analysis result that can be used as a prediction or for decision support purposes. The first step in any data-driven research endeavor is data collection (top of Figure 1). Data integration is a multi-step process that involves harmonization, validation, normalization, and transformation into standard structures that can be accepted by the healthcare and medical research communities. Furthermore, the relationships among data items are often described implicitly, e.g., in some supplementary documentation or as tacit knowledge of human experts. These relationships must be expressed explicitly to allow analysis algorithms, which are oblivious to implicit semantics, to use them effectively and avoid wrong conclusions
based on missing implicit data. To that end, we used the HL7 RIM as the information model coupled with the Web Ontology Language (OWL) and Resource Description Framework (RDF) as the semantic data integration technology. By constraining the CDA standard, we designed the Essential Hypertension Summary Document template (EH-CDA) tuned to the use case of essential hypertension. Details on data integration are described in previous works8,9. We genotyped the genomic samples using Illumina 1M arrays, and analyzed raw intensity data with Illumina Genome Studio for genotype calling. We converted the raw data to PLINK PED and MAP files for statistical analysis by PLINK10, an open-source whole genome association analysis toolset.
Figure 1. Data & Knowledge Representation Methodology Overview.
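As a side note for readers unfamiliar with the file formats mentioned above, the short sketch below writes a couple of invented genotype calls into the whitespace-delimited PLINK PED/MAP text layout; it is not the project's actual conversion code, and the subject and SNP identifiers are placeholders. A file pair written this way could then be analyzed with a command such as plink --file study --assoc.

# Invented example data: (SNP id, chromosome, base-pair position) and per-subject calls.
snps = [("rs0000001", "1", 1000), ("rs0000002", "1", 2000)]
subjects = {
    # subject id: (phenotype: 1 = control, 2 = case; one allele pair per SNP)
    "subj01": (2, [("A", "G"), ("C", "C")]),
    "subj02": (1, [("A", "A"), ("C", "T")]),
}

with open("study.map", "w") as map_file:
    for snp_id, chrom, pos in snps:
        # MAP columns: chromosome, SNP id, genetic distance (0 if unknown), position
        map_file.write(f"{chrom} {snp_id} 0 {pos}\n")

with open("study.ped", "w") as ped_file:
    for subject_id, (phenotype, genotype) in subjects.items():
        # PED columns: family id, individual id, father, mother, sex (0 = unknown),
        # phenotype, followed by two alleles per SNP in MAP order.
        row = [subject_id, subject_id, "0", "0", "0", str(phenotype)]
        for allele1, allele2 in genotype:
            row += [allele1, allele2]
        ped_file.write(" ".join(row) + "\n")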
Our next step involved generating disease-specific knowledge by data analysis. Whole-genome association studies (GWAS) are the state-of-the-art approach in genetic epidemiological studies of complex diseases. These diseases are caused by the interaction between genetic, environmental and lifestyle factors. GWAS aim to reveal the genetic basis for susceptibility to a disease, with the underlying assumption that diseases prevalent in the population are explained by common variations; this is known as the common disease-common variants premise. A main source of such genomic variations is Single Nucleotide Polymorphisms (SNP). Current high-throughput technologies allow simultaneous genotyping of a million SNPs on a single chip. Thus, genomic data used in these studies consists of the genotypes for ~1 million SNPs for thousands of individuals. In population-based case control studies, two groups of individuals are collected: a group that is affected by the disease and a control group. SNPs showing significantly different distribution between the two groups serve to predict a person's susceptibility to the disease and serve as candidates for further research into the disease mechanism, the final outcome of which is a custom chip for early diagnosis. Normally, the artifacts of the research thus far, i.e. data and analysis results, would be used in one of two ways: data may be re-analyzed and analysis results may serve to support a clinical decision. While these are important, we want to extend this approach by capturing the amassed knowledge in a formal and standard representation, allowing the reusability of knowledge. As aforementioned, GV models serve as the basis for both data and knowledge in the genetic variation domain. This is made possible through the Phenotype component used by the GV standard. The Phenotype model design is based on the distinction between observed and interpretive phenotype. The former represents phenotypes observed in the patient, e.g., responsiveness to the Gefitinib drug due to certain EGFR somatic mutations; the latter represents an interpretation of genomic observations, e.g., the patient might be resistant to the Gefitinib drug due to EGFR somatic mutations. This way, genotype-phenotype associations in GV instances can be incorporated into a patient EHR and a knowledgebase serving clinical decision support applications. Moreover, in capturing analysis-results knowledge, we enable further
analysis to build on methodology and results, allow for reproducibility, and strengthen cross-research collaboration. Representation of analysis results entails more than just capturing the results themselves, i.e. the SNP-disease risk assessment. One must also capture analysis metadata such as performer, instance details (e.g. date), and methodology. Finally, to allow reproducibility, one must explicitly define the input dataset. We used a GV template to capture analysis results. The GV model is powerful enough to represent all of the above but one: methodology. Methodology is the scientific workflow that documents the analysis and may even allow execution, given the appropriate input. We therefore used the encapsulation/reference mechanism of GV to reference a workflow markup that represents the algorithm, similar to the mechanism used for raw sequences. Having patient data and disease knowledge represented with the same HL7 v3 constructs enabled us to create instances that capture subject-specific information (i.e., analysis results specific to a subject). Thus, we designed another GV template to capture subject genotyping, analysis results as applied to the subject's SNPs, and references to the clinical profile. In current efforts, we aim to combine the above data and knowledge into a patient report that can facilitate clinical decisions at the point of care.
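As a purely schematic illustration of what such a "knowledge instance" has to bundle together (the findings plus metadata, methodology and input-dataset references), the plain Python record below may help; it is an assumption-laden sketch, not the HL7 GV XML template itself, and all field names and values are invented.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SnpFinding:
    snp_id: str            # e.g. an rs number
    risk_allele: str
    p_value: float


@dataclass
class AnalysisKnowledgeInstance:
    # Schematic stand-in: the results themselves plus the metadata needed to
    # interpret, reuse and reproduce them.
    disease: str
    performer: str
    performed_on: str          # ISO date of the analysis run
    methodology_ref: str       # pointer to a workflow description of the algorithm
    input_dataset_ref: str     # pointer to the exact input dataset
    findings: List[SnpFinding] = field(default_factory=list)


knowledge = AnalysisKnowledgeInstance(
    disease="essential hypertension",
    performer="example-analysis-group",
    performed_on="2010-12-01",
    methodology_ref="workflow:example-gwas-pipeline",
    input_dataset_ref="dataset:discovery-phase-subjects",
    findings=[SnpFinding("rs0000001", "A", 1.2e-6)],
)
print(knowledge.disease, len(knowledge.findings))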
3. Results and Discussion The Hypergenes project provided us with an opportunity to apply our approach to widely varying environmental and clinical datasets, and to the genomic data of the corresponding subjects. The clinical data included historical data spanning over 15 years and environmental measures based on questionnaires. There were 28 data sources with ~30,000 records for ~12,000 subjects divided into a discovery phase (3,603) and a validation phase (~8,000). Data was stored in a warehouse (using DB2 pureXML) containing CDA-compliant XML instances following the EH-CDA template model. Our genomic data comprised SNP genotyping performed in two centers, Milan and Lausanne, using Illumina 1M-duo arrays in the discovery phase and Infinium iSelectHD beadchip 15K in the validation phase. The raw genotyping data converted to PED and MAP was stored both as files and in a relational database for random access. Classic GWAS analyses test every SNP independently for association. Typically, a chi-square test is performed for every SNP, comparing the genotype distribution in the case and the control groups, and a p-value is provided for every SNP. These p-values are used to rank the SNPs. The top scoring SNPs are selected for further research. Since the signals are weak, and many SNPs are being tested, this is a challenging task. In Hypergenes, we enhanced classic analysis by incorporating prior knowledge11. We used public SNP annotations and relied on former studies for associating SNPs to various diseases. Next, we trained a logistic regression model, so that it learned to utilize SNP annotations to identify a priori the potential of SNPs to be associated with a trait. The algorithm outputs the predicted prior probability of every SNP to be associated with a disease. This prior is used to re-rank the classic analysis results. The analysis can be described as a sequence of steps applied to the SNP data (feature selection, logistic regression, etc). This gave rise to a new tool for carrying out such sequences of analysis steps, the IBM Bio-clinical Data Mining12 (BDM) tool. The BDM enables the execution of machine learning and data mining algorithms on large datasets. A user can combine various algorithmic building blocks in a workflow to perform a desired task via an XML-based configuration file. Thus, users can utilize the BDM to build the required blocks to execute similar or different flows and analyze their own data.
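The two analysis ingredients described above, a per-SNP chi-square test and a prior-based re-ranking, can be sketched in a few lines of Python. The allele counts, the prior values and the way the prior is combined with the p-value below are illustrative assumptions only, not the Hypergenes algorithm or its data.

import math


def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic and p-value (1 degree of freedom) for the
    # 2x2 table [[a, b], [c, d]]: risk/other allele counts in cases vs controls.
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((o - e) ** 2 / e for o, e in zip((a, b, c, d), expected))
    p_value = math.erfc(math.sqrt(stat / 2.0))   # chi-square(1 df) survival function
    return stat, p_value


# Invented allele counts: (risk in cases, other in cases, risk in controls, other in controls)
counts = {"rs0000001": (620, 380, 540, 460), "rs0000002": (510, 490, 500, 500)}
# Invented prior probabilities of association, e.g. from an annotation-based model.
prior = {"rs0000001": 0.8, "rs0000002": 0.1}

classic_p = {snp: chi_square_2x2(*c)[1] for snp, c in counts.items()}
# One naive combination rule (an assumption): weight the classic evidence by the prior.
score = {snp: prior[snp] * (1.0 - p) for snp, p in classic_p.items()}
for snp in sorted(score, key=score.get, reverse=True):
    print(snp, "p=%.3g" % classic_p[snp], "score=%.3f" % score[snp])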
Following the approach described in the Methods section of this paper, we applied the analysis-results GV template to generate an instance to capture all aspects of the above analysis. We captured SNP details, risk alleles, and p-values in appropriate GV geneticLocus XML constructs. The instance included metadata on the analysis, e.g. performer, date of execution. For methodology representation, we referenced the BDM workflow instance. We captured patient information in a similar manner, using the subject's genotyping with encapsulated BSML Isoform XML constructs for SNPs and alleles. The top sections captured metadata (e.g. genotyping center) and we used the phenotype association mechanism of GV to reference clinical blood pressure observations from the subject’s CDAs (useful mainly in validation phase). Finally, we used a geneticLoci component to analyze the results of an individual, encapsulating the BSML markup for the subject’s risk alleles as back-references to the genotyped alleles.
4. Conclusion In this paper, we depict a methodology to capture data, information and knowledge under a standard meta-model. We describe how we implemented this methodology in the scope of Hypergenes. Finally, we demonstrate the benefits of such an approach for clinical research as well as in additional healthcare oriented scenarios. In future work we will investigate how to integrate information into an EHR following the CEN EHR 13606 standard. Acknowledgements: Research leading to these results has received funding from the European Community's Seventh Framework Program FP7/2007-2013 under grant agreement n° 201550.
References
[1] HL7 Reference Information Model, Health Level Seven, http://www.hl7.org/v3ballot/html/infrastucture/rim/rim.htm
[2] Shabo A. Integrating genomics into clinical practice: standards and regulatory challenges. Current Opinion in Molecular Therapeutics, June 2008, 10(3):267-272.
[3] Dolin RH et al. HL7 Clinical Document Architecture, Release 2. JAMIA 2006;13:30-39.
[4] HL7 v3 Standards, Universal Realms, Regulated Studies, Study Design Topics. Available online at: http://www.hl7.org/v3ballot/html/welcome/environment/index.html
[5] HL7 v3 Standards, Universal Realms, Clinical Genomics, Domain Information Model. Available online at: http://www.hl7.org/v3ballot/html/welcome/environment/index.html
[6] EC FP7 Hypergenes. Available online at: http://www.hypergenes.eu/
[7] Health Informatics Part 1: Extended architecture. ENV13606-1, Committee European Normalisation, CEN/TC 251 Health Informatics Technical Committee, 2000. Online at: http://www.centc251.org/
[8] Farkash A et al. Biomedical data integration - capturing similarities while preserving disparities. Conf Proc IEEE Eng Med Biol Soc 2006;1:4654-4657.
[9] Carlson D et al. A Model-Driven Approach for Biomedical Data Integration. MEDINFO 2010.
[10] Purcell S et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet 2007;81(3):559-575.
[11] Neuvirth-Telem H et al. Inferring Distributions of Trait-Associated SNPs with Application to Genetic Association Studies. ECCB 2010 poster presentation.
[12] IBM Bio-clinical Data Mining (BDM) tool on AlphaWorks, http://www.alphaworks.ibm.com/tech/bdm
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-694
Ontology-based Framework for Electronic Health Records Interoperability Carolina GONZÁLEZ1,a,b Bernd G.M.E. BLOBELb, Diego M. LÓPEZa,b Electronics and Telecommunications Faculty, University of Cauca, Colombia b eHealth Competence Center, Regensburg University Hospital, Regensburg, Germany a
Abstract. The use of Electronic Health Records (EHR) is widespread in healthcare. One of the most challenging tasks for EHR systems is to achieve computable semantic interoperability. To address EHR interoperability, a number of standardization efforts are progressing; however, these standards are either incomplete in terms of functionality or lack a specification of the precise meaning of the underlying data. This paper describes an interoperable EHR framework that uses an ontology-based approach to facilitate the exchange of information and knowledge among EHRs. Based on the proposed framework, an interoperability scenario between a Personal Health Record System, an EHR and a Laboratory System is described. Keywords: Electronic Health Records, interoperability, HL7, ontologies.
1. Introduction For meeting the challenge of improving the quality and efficiency of patient care, including homecare and prevention, electronic health records (EHR) have to support semantic interoperability [1]. An EHR is simply defined as a repository of information regarding the health status of a subject of care [1]. It commonly combines information from a number of distributed health actors intervening in the same or a chained care process, exchanging information, but normally not really collaborating. Health information is characterized as being data intensive, complex, changing, life-long, sensitive, and policy regulated. In this sense, actors - be they humans or computers - need to be intelligent enough to be able to process and share this information. Therefore, the most challenging task for EHR systems is to achieve computable semantic interoperability [3]. The goal of semantic interoperability is to be able to recognize and process semantically equivalent information homogeneously, even if instances are heterogeneously represented, i.e., if they are differently structured, and/or using different terminology systems, and/or using different natural languages. This equivalence needs to be robustly computable, and not just human readable, in order for guidelines, care pathways, alerting and decision support components to function effectively and safely across EHRs that have been combined from heterogeneous systems [4]. The objective of this paper is to propose an interoperability framework that uses an ontology-based approach to support semantic interoperability among Electronic Health Records regardless of the EHR standard used.
Corresponding Author: Carolina González, PhD. Associate Professor, Systems Department, University of Cauca. Calle 5 No 4-70, Popayán, Colombia; E-mail: [email protected].
2. Methods For guaranteeing computable semantic interoperability between components of a complex system, the system's architecture is important, i.e., the composition of the right components regarding their structure, behavior and relationships. The Generic Component Model (GCM) [5] provides an architectural framework created with the purpose of analyzing any kind of system, including EHR. In particular, the GCM addresses the real-world challenge of the multidisciplinary domains involved in any EHR through ontology harmonization (mapping). From the philosophical perspective, an ontology is a representation of the universals or classes of reality and the relations existing between them. Hereby, universals are "the real invariants or patterns in the world apprehended by the specific sciences" [6]. From a more restricted perspective, a commonly accepted definition for ontologies in computer science is the one provided by Gruber [7], defining an ontology as "a specification of a conceptualization", i.e. the provision of knowledge representation primitives (classes, attributes and relationships) to model reality. The latter definition differs from the philosophical perspective, which categorizes things of reality without interpreting, i.e. conceptualizing, them. The above-described divergence could be clarified by using the GCM to provide a system of ontologies, with the universal (philosophical) ontology on top explaining the nature of the world, followed by reference (top-level) ontologies bridging between the "network" of domain ontologies, from which the latter as well as the application ontologies (i.e. implementations of the knowledge of business transactions) at the bottom are derived. In distributed environments, providing a single ontology describing the universe is not possible. Several ontologies are independently designed according to the knowledge domain represented. In order to achieve semantic interoperability among ontology-based applications, it is necessary to harmonize their ontologies. There are two ways of ontology harmonization (mapping): the first is the development/extension of domain ontologies from a common top-level ontology; the second is the harmonization of characteristics of those domain ontologies, such as their structure, definitions of concepts and instances of classes, to find mappings. The approach presented in this paper follows the ontology harmonization approach from a common top-level ontology.
3. Results 3.1. Interoperability Framework for EHR Semantic Interoperability The proposed framework aims at supporting the requirements for semantic interoperability in EHR. Figure 1 illustrates the proposed approach.
Figure 1. GCM Architectural Dimension
The architectural framework is based on the GCM [5]. In the first stage, the GCM makes it possible to describe the real system to be analyzed (i.e. the EHR). In the second stage, the GCM domain dimension is used to separate different domains for reducing the complexity of inter-related domains (e.g. medical, financial, administrative, etc). The third step allows reducing the structural and behavioural complexity of the EHR by decomposing it. The four granularity levels defined in the GCM are considered and analyzed (details (e.g. basic concepts), aggregations (e.g. business services), relations networks and business concepts). This step aims at defining domain knowledge to achieve semantic interoperability. In this sense, the domain description is based on domain ontologies, and each granularity level is associated with its corresponding ontology. In the fourth step, the ontology structure is described. According to the granularity level of the EHR system, the aforementioned ontology architecture is used. The different ontologies can be interrelated, which requires ontology mapping. In our approach, a detailed concept within an EHR – the GCM Details level – is provided by a domain-specific application and represented using an application ontology derived from that domain's ontology (domain ontology). At the level of aggregated services (GCM Aggregations level), usually provided by different applications in a domain, the application ontologies have to be mapped to that domain ontology. For representing multi-domain concepts provided by applications from different domains, the GCM Relations Network level applies. To combine representations of different domains, the domain ontologies have to be mapped through reference ontologies. To guarantee that the multidisciplinary approach fits reality, the system of ontologies applied has to be proven against the universal ontology. In other words, the reference ontologies must be derived from the universal ontology. As the EHR is a multidisciplinary system, the entire system of ontologies has to be deployed for its representation. In the next section the ontology mapping process is described. 3.2. The Ontology Mapping Process The mapping between ontologies appears to be a smooth process. However, finding adequate mappings is not easy due to the semantic heterogeneity problem of EHR (lack of common vocabularies). In this sense, an effective method to solve the ontology heterogeneity problem by finding and solving mismatches without human intervention is proposed. Figure 2 (Mapping Process Phases, adapted from [8]) presents the proposed mapping process phases, as described below: a) Capture Information: In this phase, formal application ontologies are created for each EHR to interoperate. For this process, the application domain concepts found are represented using the Web Ontology Language (OWL), thereby obtaining the formal application ontology for each application. Application domain concepts are commonly represented as data models using different representation languages or formats (Entity-Relationship models, object-oriented models, UML or XML specifications, etc). A converter component is used to transform the different information models into formal application ontologies encoded in OWL, using different mechanisms. b) Compute Similarity Ontology: In this phase, individual matching mechanisms are used. The matcher takes as input the two formal application ontologies and returns a
similarity matrix. The mapping execution process is shown in Figure 3. In the following, the selected matching mechanisms are described: Entity-based: Normalization is applied to reduce the strings (concepts, entities) to be compared. The normalization process includes: (i) case normalization, (ii) diacritics suppression, (iii) blank normalization, (iv) link stripping, (v) digit suppression, and (vi) punctuation elimination. Semantic-based: Domain and/or reference ontologies are used. The approach takes into account that the two ontologies to be matched lack a common ground on which the comparison can be performed. Therefore, intermediate ontologies (domain or reference ontologies, depending on the granularity level) are used. Those ontologies define the common context or background knowledge, supporting disambiguation of multiple possible meanings of concepts and relationships. c) Mapping Post-Processing: In this phase, the correctness and consistency of the generated mappings are checked (i.e. recall and precision). In this work, a deductive approach is used, based on description logics (DL).
Figure 3. Mapping Execution
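A minimal sketch of the entity-based part of the matcher is given below: each label is normalized roughly along the six steps listed above, and a similarity matrix over the label pairs of two (invented) application ontologies is computed with a simple string-similarity ratio. The concrete normalization order, the similarity measure and the example labels are assumptions for illustration, not the implemented matcher.

import re
import unicodedata
from difflib import SequenceMatcher


def normalize(label):
    s = label.lower()                                             # (i) case normalization
    s = "".join(c for c in unicodedata.normalize("NFKD", s)
                if not unicodedata.combining(c))                  # (ii) diacritics suppression
    s = re.sub(r"https?://\S+", " ", s)                           # (iv) link stripping
    s = re.sub(r"\d+", " ", s)                                    # (v) digit suppression
    s = re.sub(r"[^\w\s]", " ", s)                                # (vi) punctuation elimination
    return re.sub(r"\s+", " ", s).strip()                         # (iii) blank normalization


def similarity_matrix(labels_a, labels_b):
    # Pairwise string similarity between the concept labels of two application ontologies.
    return {(a, b): SequenceMatcher(None, normalize(a), normalize(b)).ratio()
            for a in labels_a for b in labels_b}


ehr_labels = ["Blood Pressure", "Laboratory Test #12"]            # invented concept labels
pchr_labels = ["blood-pressure measurement", "lab test"]
for pair, score in similarity_matrix(ehr_labels, pchr_labels).items():
    print(pair, round(score, 2))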
3.3. An Interoperability Scenario in Health In Figure 4, an interoperability scenario between the Personally Controlled Health Record system (PCHR) Indivo, the OpenMRS EHR, and the laboratory system BikaLIMS is shown. Below, the system actors and interactions are briefly described: Patient: The patient uses the PCHR-Indivo to manage personal clinical information. She performs three use cases: account provisioning, record access, and populating the record with data. Medical Doctor: The medical doctor uses the OpenMRS system to store diagnoses, tests, procedures, drugs, and other patient-related information. Furthermore, OpenMRS communicates with other systems (i.e. the laboratory system). Interactions: The scenario starts when the patient requests an examination. Once the medical doctor has logged into the system, he looks for the patient information and modifies the medical record, registering a diagnosis, treatment, etc. In the scenario, the medical doctor orders a laboratory test, which is sent to the LIMS laboratory system. The LIMS processes the order and returns the results to OpenMRS, which updates the patient registry. Once the information has been updated, the doctor sends a message to the PCHR-Indivo system. The proposed interoperability framework supports the aforementioned interaction process. The first step corresponds to the formalization of the conceptual models. The conceptual models of the three systems intended to interoperate are translated into formal application ontologies using OWL. Next, the mapping between those ontologies is executed. In this process, the mechanisms described in section 3.2 are used, and a set of
top-level and domain ontologies is required to guarantee the quality of the mapping process. Finally, the generated mapping is checked in order to verify its consistency.
Figure 4. Interoperability Scenario
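To give a flavour of the first step of this scenario, expressing a conceptual-model fragment as a formal application ontology in OWL, the following sketch uses the rdflib library; the namespace, class and property names are invented and do not come from Indivo, OpenMRS or BikaLIMS.

from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/openmrs-application-ontology#")   # invented namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# A tiny conceptual-model fragment expressed as OWL classes and object properties.
for cls in (EX.Patient, EX.LabOrder, EX.LabResult):
    g.add((cls, RDF.type, OWL.Class))
for prop, domain, rng in ((EX.hasOrder, EX.Patient, EX.LabOrder),
                          (EX.hasResult, EX.LabOrder, EX.LabResult)):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, domain))
    g.add((prop, RDFS.range, rng))

print(g.serialize(format="turtle"))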
4. Discussion and Conclusion EHR interoperability standards, and the related work on approaches for semantic interoperability reviewed in [3], are either incomplete in terms of functionality or lack a specification of the precise meaning of the underlying data. In this paper the use of a framework for ontology-based semantic interoperability has been described. This approach is presented as a service for integrating different EHR systems. The semantic interoperability scenario has been demonstrated for a real business case meeting user requirements. The prototype includes three systems: INDIVO, OpenMRS and BikaLIMS. In the implementation process, the use of formal application ontologies was necessary, thus improving the information representation of each system to interoperate. Domain ontologies were also used in order to guarantee the correctness of the information to be exchanged. The prototype is currently being tested in order to determine the effectiveness of the matching process and the accuracy of the exchanged information. Acknowledgments. The work was supported by NIH Fogarty (D43TW008438) and the University of Cauca.
References
[1] Blobel B. Concept Representation in Health Informatics for Enabling Intelligent Architectures. Stud Health Technol Inform, vol 124. Amsterdam: IOS Press; 2006. p. 285-291.
[2] ISO TC 215 Health Informatics – EHR – Definition, scope and context. Geneva, 2005.
[3] Gonzalez C, Blobel B, Lopez D. Ontology-based Interoperability Service for HL7 Interfaces Implementation. Stud Health Technol Inform 2010;155:108-114.
[4] Stroetmann K, et al. Semantic Interoperability for Better Health and Safer Healthcare. Research and Deployment Roadmap for Europe. European Commission, 2009.
[5] Blobel B. Analysis, design and implementation of secure and interoperable distributed health information systems. Amsterdam: IOS Press; 2002.
[6] Smith B. On Substances, Accidents and Universals: In Defense of a Constituent Ontology. Philosophical Papers; 1997. p. 105-127.
[7] Gruber T. A translation approach to portable ontologies. Knowledge Acquisition; 1993. p. 199-220.
[8] Maedche A, Motik B, Stojanovic L, Studer R, Volz R. Ontologies for Enterprise Knowledge Management. IEEE Intelligent Systems, 2003.
[9] Uribe G. Service Based on Ontologies for Interoperability between Electronic Health Record Systems. [Master Thesis]. Universidad del Cauca; 2011.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-699
Ontology-based Knowledge Management for Personalized Adverse Drug Events Detection Feng CAO a, 1, Xingzhi SUN a, Xiaoyuan WANG a, Bo LI a, Jing LI a, and Yue PAN a a IBM Research – China
Abstract. Since the Adverse Drug Event (ADE) has become a leading cause of death around the world, there is a high demand for helping clinicians and patients identify possible hazards of drug effects. Motivated by this, we present a personalized ADE detection system, with the focus on applying ontology-based knowledge management techniques to enhance ADE detection services. The development of electronic health records makes it possible to automate personalized ADE detection, i.e., to take patient clinical conditions into account during ADE detection. Specifically, we define the ADE ontology to uniformly manage the ADE knowledge from multiple sources. We take advantage of the rich semantics of the terminology SNOMED-CT and apply it to ADE detection via semantic query and reasoning. Keywords. Adverse Drug Events (ADE) detection, knowledge management, ontology, semantic reasoning
1. Introduction An Adverse Drug Event (ADE), including an adverse reaction to a single medication and a drug interaction between two or more medications, is increasingly becoming a serious problem for public health. According to the Journal of the American Medical Association [1], ADE has been the 4th leading cause of death. The yearly cost of ADEs in the US was $136 billion in 2000, greater than the total cost of cardiovascular or diabetic care. Thus, there is a high demand for advanced systems that help doctors and patients detect potential ADEs in clinical practice. The following challenges exist for ADE detection: First, the knowledge sources for ADE detection are heterogeneous, including relational data, semi-structured data, and RDF data. How to build a general system to uniformly manage ADE knowledge is a non-trivial task. Second, personalized ADE detection requires the integration of the patients' record system with the ADE knowledge base. A general framework for the interoperation between these two systems is lacking. Third, clinical terms are used in both patients' records and ADE knowledge. During ADE detection, it is not straightforward to apply the rich relationships among clinical terms so that ADE knowledge and patients' records can be matched based on their semantics instead of literal characters.
Corresponding Author: E-mail: {caofeng, sunxingz, wangxyxy, libocrl, jingli, panyue}@cn.ibm.com; IBM Research – 6F, Building 10, 399 Keyuan Road, Zhangjiang Innovation Park, Pudong New District, Shanghai, China.
Semantic Web technology can offer great help to address the above mentioned challenges. On the one hand, RDF and ontology have the intrinsic advantage for data integration and interoperation, which can help to resolve the challenges on heterogeneous data sources and the interoperation between patient record system and ADE knowledge base. On the other hand, ontology reasoning can derive the implicit information, which can bridge the gap between patient clinical data and ADE knowledge and enable the semantic matching. This paper presents a personalized ADE detection system, with the focus on applying ontology-based knowledge management techniques to enhance ADE detection services.
2. System Architecture The system takes ADE knowledge and patients’ data as input, uses the standard vocabulary (such as SNOMED-CT) to manipulate these data, and provides the service of personalized ADE detection by semantic query and reasoning. Figure 1 shows the system architecture, in which the basic components are organized in three layers. Storage layer has two repositories, Ontology Repository and ADE Repository, both in the relational model. The ontology repository is used for storing the medical terminology (i.e., SNOMED) and the pre-defined ADE model as ontology. The ADE repository is designed for storing ADE knowledge. Knowledge layer is responsible for pre-processing the ontologies and ADE knowledge, and loading them into the corresponding repositories. Before loading the SNOMED, T-Box Reasoner preprocesses the SNOMED terminology by reasoning out the implicit relationships among the clinical terms. ADE Knowledge Loader and Converter is used to load heterogeneous external knowledge sources to the standardized ADE repository. Access layer enables the personalized ADE detection by semantic query and reasoning. Our Semantic Data Access (SeDA) engine [4, 8] is the key component to publish ADE repository as virtual RDF store, and to enable the semantic query over the relational database. Given a set of drugs, ADE Query Coordinator aims to generate SPARQL [6] queries for retrieving all the related ADEs (together with their conditions and consequences). Further, Clinical Condition Matcher checks the patient’s clinical records against the identified ADE conditions so that personalized ADE can be selected.
Figure 1. Architecture of Knowledge Model Management
3. Method 3.1. Semantic Query for ADE Detection
Figure 2. ADE ontology (only shows class and relationship property)
The knowledge of ADE is stored in the ADE database. The ADE ontology, shown in Figure 2, provides the logical view to the outside. In addition to drug information, ADE knowledge can be classified as adverse reactions (of a single drug) and drug interactions (of multiple drugs). We use the SPARQL language to express the semantic query over ADE ontology data. Given a patient's prescription (i.e., a list of drugs), we issue semantic queries to find the adverse reactions caused by each drug di, and the drug interactions for each pair of drugs (di, dj), together with their consequences and conditions. Technically, we build a mapping between the ADE ontology and the data stored in the ADE database, according to which the semantic query in SPARQL retrieves the corresponding ADE information from the database. The mapping follows a broadly used mapping language called D2R (Database to RDF) [2]. Based on the mapping, the SeDA engine translates the SPARQL query into the SQL statement for ADE detection. The advantages of this mapping-based approach are twofold. First, converting relational data into virtual RDF data makes it possible to integrate ontology reasoning with the semantic query. Second, from the system design point of view, the mapping-based approach adds independence between the ADE query logic and the physical storage of ADE knowledge. 3.2. Semantic Reasoning over SNOMED The goal of semantic reasoning is to guarantee the completeness of ADE detection by applying the semantics captured in the terminology (in our system, SNOMED). The SNOMED reasoning can be classified into two types in terms of functionality. Enrich ADE knowledge: SNOMED terms are used in the ADE knowledge. By referencing the relationships defined in the SNOMED ontology, the ADE knowledge can be enriched. Table 1 shows the types of ADE knowledge enrichment we applied. Let us take type II as an example. Given the ADE knowledge "ingredient_I leads to disorder_D", if in SNOMED we find "drug_A has active ingredient J, and J is a subclass of ingredient_I", we can infer "drug_A leads to disorder_D".
Table 1. Types for ADE knowledge enrichment

Type | ADE Knowledge | SNOMED Relationship(s) | Inferred Knowledge
I | drug_B -> disorder_D | drug_A sct:subClassOf drug_B | drug_A -> disorder_D
II | ingredient_I -> disorder_D | drug_A sct:hasActiveIngredient ingredient_J AND ingredient_J sct:subClassOf ingredient_I | drug_A -> disorder_D
III | Nil (no ADE information for a given drug A) | drug_A sct:hasActiveIngredient ingredient_J AND ingredient_J sct:subClassOf ingredient_I AND disorder_D sct:hasCausitiveAgent ingredient_I | drug_A -> disorder_D
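Read as inference rules, the three enrichment types can be made concrete with the toy Python sketch below; the drug, ingredient and disorder identifiers are the placeholders from the table, the SNOMED relations are reduced to plain dictionaries, and none of this reflects the system's actual reasoner.

# Explicit ADE links and toy SNOMED-style relations (placeholders only).
ade = {("drug_B", "disorder_D"), ("ingredient_I", "disorder_D")}
drug_subclass_of = {"drug_A": "drug_B"}
ingredient_subclass_of = {"ingredient_J": "ingredient_I"}
has_active_ingredient = {"drug_A": "ingredient_J"}
has_causative_agent = {"disorder_D": "ingredient_I"}

inferred = set()
# Type I: drug_A subClassOf drug_B and drug_B -> disorder  =>  drug_A -> disorder
for drug, parent_drug in drug_subclass_of.items():
    inferred |= {(drug, disorder) for (x, disorder) in ade if x == parent_drug}

for drug, ingredient in has_active_ingredient.items():
    broader = ingredient_subclass_of.get(ingredient)
    # Type II: the active ingredient (or its broader class) is already linked to a disorder.
    inferred |= {(drug, disorder) for (x, disorder) in ade if x in (ingredient, broader)}
    # Type III: a disorder names the ingredient (class) as its causative agent.
    inferred |= {(drug, disorder) for disorder, agent in has_causative_agent.items()
                 if agent in (ingredient, broader)}

print(sorted(inferred))   # [('drug_A', 'disorder_D')]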
TBox reasoning is applied for the ADE knowledge extension. Thus, in the ADE detection, we check ADEs based on both explicit knowledge and implicit knowledge. Enable the semantic matching between ADE conditions and patient's records: After the semantic query finds the potential ADEs for a list of drugs, to enable the personalized ADE detection we need to match the identified ADE conditions against the patient's EMR, which is given as a CDA document. Applying SNOMED reasoning makes this matching semantic rather than literal. For example, a fragment of a CDA document (EMR) states that there is an observation of "Neoplasm of hilus of lung" for a patient.
Suppose the relevant ADE knowledge is “for patient who has disorder on lung, Drug A leads to adverse reaction R”. The query for clinical condition matching is to check if the patient has the observation of disorder with finding site at lung. Q(x) :- emr :patient(x), emr: hasObservation(x, y), sct :Disorder(y), sct :findingSite(y, z), sct :Lung(z) .
From SNOMED, we have:
1) sct: Neoplasm of hilus of lung ⊆ sct:Neoplasm of lung ∩ sct:findingSite. sct:hilus of lung 2) sct:Neoplasm of lung ⊆ sct:Disorder 3) sct:hilus of lung ⊆ sct:Lung
By referencing SNOMED, we know the above patient meets the ADE condition specified by the query. The semantic matching is realized by ABox reasoning over SNOMED and patient’s EMR. More technical details can be found in [5].
4. Conclusion and Discussion We have developed the system for personalized ADE detection, with the following features highlighted: 1. Two kinds of ontologies are introduced into the system, the external standardized terminology system and the specially designed ADE domain ontology. With our Semantic Data Access engine, the system exposes the underlying relational data into virtual RDF view.
2. The system supports semantic query on RDF view and semantic reasoning over clinical ontology. With embedded semantic reasoning capability, the system can infer more explicit relationship from both ADE knowledge data and patient’s clinical records, which realizes the personalized ADE detection semantically. 3. We set up a general ADE knowledge management system that can load, store, and query ADE data from heterogeneous sources with different formats. Based on the ADE model, the system supports multiple knowledge sources incorporation. In practice, we have built a comprehensive system that provides four kinds of services to end users, that is, Terminology Service, Drug Service, ADE Loading Service, and ADE Detection Service. We implemented all of these services in J2EE environments with DB2 V9 (as ontology repository and ADE repository), and published them by WebSphere V7 as Web-based applications. The system is deployed in Korean Gil Hospital and allows clinicians to perform ADE search and report together with patients’ health records. Since this kind of intelligent clinical system tends to be knowledge-intensive in commercial use, ADE knowledge enrichment and enhancement is essential for the system. We divide it into four phases: seek for more ADE-relevant information from various data sources, extract ADE knowledge from multiple sources, encode ADE knowledge using standardized codes and terms, and incorporate different knowledge and keep semantic consistency. Our system accommodates three kinds of data sources currently: customer data from Korean Gil Hospital, Structured Product Labeling (SPL) [7] documents from FDA, and Linked Open Drug Data (LODD) [3]. The ADE knowledge extracted from different sources needs to be normalized with the same coding system so that the semantic reasoning methodology can be applied to enrich knowledge and complete semantics. We are developing the SNOMED encoding toolkit to support standard terminology encoding on different sources. In addition, we consider exploring more open data and verifying its effectiveness from the ADE perspective. Acknowledgements. We thank to IBM Korean Ubiquitous Computing Lab, Haifa Research Lab for harmony collaboration in the BlueMedics project, and Gachon University Gil Hospital (Incheon, Korea) for their strong support.
References
[1] Lazarou J, Pomeranz BH, Corey PN. Incidence of Adverse Drug Reactions in Hospitalized Patients. Journal of the American Medical Association 1998;279(15).
[2] Bizer C, Garbers J, Cyganiak R, Maresch O. The D2RQ platform v0.5.1 – treating non-RDF relational databases as virtual RDF graphs. In: Proceedings of ISWC, 2004.
[3] Linking Open Drug Data (LODD). Semantic Web for Health Care and Life Sciences Interest Group. http://esw.w3.org/HCL-SIG/LODD
[4] Ma L, Sun X, Cao F, et al. Semantic Enhancement for Enterprise Data Management. In: Proceedings of the International Semantic Web Conference, 2009.
[5] Liu S, Ni Y, Mei J, et al. iSMART: Ontology-based Semantic Query of CDA Documents. In: Proceedings of AMIA, 2009.
[6] SPARQL Query Language for RDF. http://www.w3.org/TR/rdf-sparql-query/
[7] Structured Product Labeling Resources. http://www.fda.gov/ForIndustry/DataStandards/StructuredProductLabeling/default.htm
[8] Wang X, Sun X, Cao F, et al. SMDM: Enhancing Enterprise-Wide Master Data Management Using Semantic Web Technologies. In: Proceedings of VLDB, 2009.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-704
A Formal Analysis of HL7 Version 2.x Frank OEMIGa 1, Bernd BLOBELb a Agfa Healthcare, Bonn, Germany b eHealth Competence Center, University Hospital Regensburg, Regensburg, Germany
Abstract. Working interoperability requires not only harmonized system architectures, but also the same interpretation of the technical specifications that guide the development processes. However, sometimes a specification has not made explicit the underlying model that would enable a coherent understanding. This paper analyses the structures of the HL7 Version 2.x communication standards family and presents a UML class diagram for it. Keywords. HL7 Version 2.x, Communication Standard, UML class diagram, Interoperability
1. Introduction The utilization of communication standards in healthcare is normally enforced by jurisdictional and user requirements. To support interoperable implementations of those standards, an explicit model is not only helpful but a necessary prerequisite. Quite a lot of the discussions between vendors and customers about the correct use of HL7 Version 2.x [1] are due to the fact that such a model simply does not exist, at least not officially [2]. Nevertheless, the way the standard is written and the details it contains allow for reverse engineering to extract it. In the following we elaborate on those details, exemplifying them using a UML class diagram [3].
2. Methods To help with the development of such a model, a fine-grained analysis of the HL7 v2.x communication standards is done by carefully examining the standard documents starting with v2.1 up to v2.7. All identified information items are modeled using UML class diagrams afterwards.
1 Corresponding Author: Frank Oemig, Email: [email protected]; Phone: +49-228-2668-4781; Agfa HealthCare GmbH, Konrad-Zuse-Platz 1-3, 53227 Bonn, Germany. URL: http://www.agfa.com/healthcare
3. Results The created class diagram is organized top-down. It starts with the relation of events to messages. It should be noted that a single event may (indirectly) trigger 3 different messages, i.e. beside the initial payload up to two acknowledgements may be sent in
return. This fact may cause difficulties during runtime if it is not considered during implementation. Next, a message has a message structure which can be identified by a unique identifier. In principle, this identifier is identical to a single segment group. However, the standard does not manage it this way. A segment group has a recursive structure, because it is a sequentially ordered list of segments and segment groups.
Figure 1. HL7 v2.x formal model as a UML class diagram
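The recursive part of the model — a segment group as an ordered list of segments and further segment groups — can be pictured with a few illustrative classes. This is only a sketch; the attribute names and the heavily abbreviated example structure are assumptions and do not reproduce the model of Figure 1.

from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class Segment:
    code: str                  # three-character segment code, e.g. "MSH"
    optional: bool = False
    repeating: bool = False


@dataclass
class SegmentGroup:
    # A sequentially ordered list whose items are segments or, recursively,
    # further segment groups.
    name: str
    items: List[Union["Segment", "SegmentGroup"]] = field(default_factory=list)


@dataclass
class MessageStructure:
    structure_id: str          # unique identifier, e.g. "ADT_A01"
    root: SegmentGroup


# Invented, heavily abbreviated structure definition for illustration.
example = MessageStructure(
    "ADT_A01",
    SegmentGroup("ADT_A01", [
        Segment("MSH"),
        Segment("PID"),
        SegmentGroup("INSURANCE", [Segment("IN1", repeating=True)]),
    ]),
)
print(example.structure_id, len(example.root.items))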
Each segment is described by a three-character code, a name and a description, and consists of fields representing data elements. As such, a data element has an identifier (a 5-digit number), a name, a description and a length. A recurring point of discussion is whether a field's attributes should be assigned to the field itself or to its relationship to a segment, i.e. its use within a particular segment. Common understanding and the detailed use in individual segments have led to the decision to place the position, the optionality and the cardinality in the individual relationship. As a consequence, the datatype of a specific field stays the same across all usages in different segments. In return, this increases the amount of work needed to keep the standard documents consistent during maintenance/editing, but it also triggers numerous discussions about the correct datatype and the way it can be improved for the next version of the standard. The datatypes control the contents of the fields in the form of components. A datatype may either be simple, i.e. contain only a single component, or a set of components, so that again a recursive definition is given: each component makes use of a datatype, so it can be simple or complex again. This fact results in another problem of the standard: in principle, it can be nested to arbitrary depth, but from its header information it is limited to two levels, so that fields can have components and subcomponents. Datatypes possibly not taking care of this requirement must be identified by hand. A good example is "DR" (date range), consisting of two "TS" (timestamp) components, where "TS" itself has the time and timezone as components. Hence a "DR" cannot be used as a component. An abstract simple datatype can be subclassified as being either uncoded, coded or structured. Structured datatypes, in contrast to uncoded datatypes ("ST"/string and "TX"/text), provide information about the format of the represented data, like "TS" for timestamps. A coded datatype ("IS" or "ID") has a relationship to a table specifying the possible values including their description. Such a table is either prespecified by HL7 and does not allow for changes (i.e. fixed), or may contain example values which can be redefined for site-specific adaptations (i.e. variable). Within the standard, a duplication of this specification has taken place: the datatype "CNE" (coded no exceptions) contains a component with datatype "ID" bound to an HL7-defined table. Consequently, "CWE" (coded with exceptions) refers to "IS" with user-defined tables. It would be enough to have this specification available in one place only. In return, during maintenance the consistency must be ensured manually, which is a tedious process. The encoding of the messages and their details is controlled by a set of delimiters. Whether they are required or optional, and either fixed or variable, is noted on the left-hand side of Figure 1. The standard defines a set of default delimiters which can be adjusted for use with legacy systems not allowing for those special characters. 3.1. Detailed Comments The single letters (a – u) in Figure 1 mark classes and relations which are explained with some additional remarks in Table 1.
Table 1: Description of marks (a–u) in the HL7 v2.x formal model (UML class diagram in Figure 1).

a) The relationship between a message and a message structure is not always defined correctly, i.e. more than one exists or it is inconsistently defined.
b) Segment groups allow for arbitrary nesting.
c) Datatypes allow for arbitrary nesting, but due to the standard encoding rules (ER7) no more than two recursions are allowed. As explained above, this must be ensured manually.
d) The identification of the correct delimiter depends on the use of the datatype as a field or component and therefore cannot be specified directly.
e) Variable delimiters require a higher development effort, so some implementations do not take care of them.
f) A fixed delimiter is necessary in conjunction with segments, so that a parsing engine can clearly identify segments as parts of a message. Quite often, the correct character "CR" (carriage return) is mistakenly written as "CR/LF".
g) Most implementations assume fixed delimiters; the German message profiles constrain them to be fixed to the given default. In return, the way messages are created or processed cannot be tested, which is a great barrier when it comes to certification of an interoperable encoding.
h) Originally, four of the five delimiters had to be given. In later versions this is corrected so that all delimiters must be present.
i) The structures for the three messages initiated by a single trigger event are defined as payload, transport acknowledgement and application acknowledgement. Routing the initial message as a broadcast to a set of recipients may lead to a high number of acknowledgement messages in return, and coherent processing of messages across different applications becomes a challenge. The workflow is not explicitly defined in the standard documents. A new proposal for v2.8 should help to avoid the most problematic mistakes. Furthermore, IHE Technical Frameworks specify workflows in the form of integration profiles.
j) Fixed segments are fully specified, whereas variable segments may have the last field added again as a new field as often as necessary.
k) Sometimes a table is assigned to a complex field; here it is implicitly meant that the table is assigned to a component.
l) Early v2 versions require an intermediate layer. The datatype "CM" (composite) is used wherever a set of components is needed without specifying the necessary details; as a consequence, there is no simple way to handle it. In the meantime, each datatype has a clear definition.
m) Datatypes and fields can either be simple or complex: in principle this is the same fact.
n) Code tables with [n..m] have a different cardinality.
o) In principle, the delimiters are table values themselves.
p) In the standard some of the classes are represented as tables as well; 0003 (events), 0076 (message types) and 0354 (message structures) are examples thereof.
q) The delimiters are defined with default values. Most implementations cannot handle alternative values.
r) The OID is assigned to the table but not to the codesystem, because no separation is made for the different versions. In order to handle the different value sets correctly, an OID must be assigned to the codesystem individually, requiring an investigation of the semantics.
s) The maximum length is officially normative but with a back door, which is closed with v2.7. For a correct representation, minimum and conformance lengths are introduced.
t) Starting with v2.7 a new delimiter is introduced allowing to indicate information which has been truncated by the sending system. For backward compatibility reasons this new delimiter is optional.
u) Components and subcomponents are realized by datatypes.
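To make the delimiter-driven ER7 encoding discussed above concrete (including the two-level nesting limit noted at mark c), the following minimal Java sketch splits a single, invented segment using the default delimiters. A real parser must of course honour the delimiters actually announced in the message header instead of hard-coding the defaults, and must also handle escape sequences and repetitions, which are omitted here.

```java
public class Er7Sketch {
    public static void main(String[] args) {
        // Invented sample segment; '|' separates fields, '^' components, '&' subcomponents.
        String segment = "PID|1||12345^^^HOSP&1.2.3&ISO^MR||Doe^John";

        String[] fields = segment.split("\\|");
        for (String field : fields) {
            String[] components = field.split("\\^");
            for (String component : components) {
                String[] subcomponents = component.split("&");
                // Nesting ends here: the ER7 encoding allows no deeper recursion (cf. mark c).
            }
        }
        System.out.println("number of fields: " + fields.length);
    }
}
```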
4. Discussion

In principle, a message structure is nothing other than a segment group. Hence the question can be raised whether a separation in the form of two distinct classes is necessary. Another question is the proper use and representation of character sets: originally, the standard only allows for 7-bit ASCII characters, although in practice implementations directly work with 8-bit ISO-8859. An elaboration of the associated problems is worth another paper [5].
5. Conclusions

The extraction of a UML-based class diagram is possible; its availability will reduce expensive discussions, may prevent wrong implementations, and thereby enhances semantic interoperability among different applications. The next logical step is the alignment of this UML class diagram with the generic component model (GCM) [6] in order to abstract it into a communication standards ontology (CSO) [7, 8], which can then be used to further improve semantic interoperability among different applications.

Acknowledgments. The authors are indebted to their colleagues from HL7 for their kind collaboration.
References

[1] HL7 Inc., Ann Arbor: "HL7 Version 2.x", http://www.hl7.org
[2] Oemig F, Dudeck J. "Problems in developing a comprehensive HL7 database", AMIA Fall Symposium, 1996, Hanley & Belfus Inc., ISBN: 1-56053-208-4, p. 841.
[3] UML, the Unified Modeling Language, http://www.uml.org
[4] IHE, Integrating the Healthcare Enterprise, http://www.ihe.net
[5] Oemig F, Blobel B. "Character Sets: An invisible Pre-requisite towards Cross-Border Interoperability?", EFMI Special Topic Conference, Slovenia, April 2011, accepted paper.
[6] Oemig F, Blobel B. "Harmonizing the semantics of technical terms by the Generic Component Model", 10th International Special Topic Conference of the European Federation for Medical Informatics, Reykjavik, Iceland, 2-4 June 2010, IOS Press, ISBN: 978-1-60750-562-5, 115-121.
[7] Oemig F, Blobel B. "Semantic Interoperability between Health Communication Standards through Formal Ontologies", Studies in Health Technology and Informatics 150, IOS Press, ISBN: 978-1-60750-044-5, (2009), 200-204.
[8] Oemig F, Blobel B. "A Communication Standards Ontology using Basic Formal Ontologies", Studies in Health Technology and Informatics 156, (2010), IOS Press, London, ISBN: 978-1-60750-564-8, 105-113.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-709
Simplifying HL7 Version 3 messages

Robert WORDEN a, Philip SCOTT b,1
a Open Mapping Software Ltd, Cambridge, UK
b Centre for Healthcare Modelling and Informatics, University of Portsmouth, UK
Abstract. HL7 Version 3 offers a semantically robust method for healthcare interoperability but has been criticized as overly complex to implement. This paper reviews initiatives to simplify HL7 Version 3 messaging and presents a novel approach based on semantic mapping. Based on user-defined definitions, precise transforms between simple and full messages are automatically generated. Systems can be interfaced with the simple messages and achieve interoperability with full Version 3 messages through the transforms. This reduces the costs of HL7 interfacing and will encourage better uptake of HL7 Version 3 and CDA. Keywords. HL7, interoperability, standards, XML, mapping
1. Introduction

HL7 Version 3 is a semantic standard for healthcare messaging, in which the meanings of messages are defined in terms of a UML-based Reference Information Model (RIM) and its specializations for particular domains and communications, called Refined Message Information Models (RMIMs) [1]. RMIMs are defined in the Model Interchange Format (MIF). An XML Implementation Technology Specification (ITS) defines how instances of this model are serialised as XML documents or messages. HL7 Clinical Document Architecture (CDA) is an application of HL7 Version 3 that defines the form and meaning of clinical documents [2]. CDA has been adopted as a standard by the NHS in England [3]; it features in the US 'meaningful use' criteria for electronic health records [4] and has been used in eighteen other countries [5].

HL7 Version 3 has been criticized as overly complex and expensive to implement [6]. Local interoperability projects outside the scope of centrally-funded national programmes have generally lacked the technical and organizational infrastructure and the rather arcane expertise that seem to be necessary to support a full Version 3 development and deployment. Consequently, the semantically inferior HL7 Version 2 has tended to remain the de facto standard for local-level interactions. This paper presents a novel approach to Version 3 message simplification that promises to enable wider adoption and thereby improve semantic interoperability in healthcare.
1 Corresponding Author: Dr. Philip J. Scott, Centre for Healthcare Modelling and Informatics, University of Portsmouth, Buckingham Building, Portsmouth PO1 3HE; E-mail: [email protected].
2. Why HL7 Version 3 Messages are Complex

RIM-based information modelling is semantically robust but practically complex. Models are constructed as networks of a few core RIM classes (notably the classes Entity, Role, Participation, Act and ActRelationship), linked together by a few types of association. 'Structural attributes' on the classes define their role in a particular model. This obligatory design pattern leads to a consistent style of models across all domains. Common data modelling pitfalls (such as a failure to distinguish between individuals and their roles) are exposed and avoided. For a RIM expert, semantic insights are easily transferred from one domain to another, leading to sound and consistent models.

However, RIM-based semantic models are substantially more complex than the 'natural' data models or UML class models that are readily understood by domain experts. (These simpler models, called Domain Analysis Models (DAMs), are intended to be used as a first stage of analysis and to be traceable to the RIM-based models; in practice they are rarely maintained or used after initial analysis.) A RIM-based model typically has two or three times as many classes and associations as the domain model that conventional analysis would produce. It also has many fixed attributes whose values do not change from one model instance to another. Applying a 'one size fits all' semantic modelling approach across diverse healthcare domains has produced models which are overly complex. This extra complexity makes the models quite unapproachable for RIM non-experts, and it makes the XML serialisation of the model instances large and complex to read and write.

CDA has a further level of complexity in that it requires specialization using various forms of template. For instance, a CDA 'Entry' may contain a wide range of discrete pieces of clinical information, but templates must be specified to say what type of information is in each entry. Many hundreds of CDA templates have been designed, and their management is a major source of complexity.

An XML instance of HL7 Version 3 or CDA can be deeply nested – twenty levels of nesting is not unusual, with each level representing an association in the RIM-based model of the domain – and typically has large numbers of fixed attributes whose values do not change from one message instance to the next. Scattered across this large XML structure are a comparatively small number of items of variable information, constituting the actual information content of the message.
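As a purely illustrative sketch of this contrast, the following Java fragment reads one variable item first from a deeply nested full instance and then from a hypothetical simplified message. Both XPath expressions are invented for this example; real CDA instances additionally use XML namespaces and template-specific paths, which are ignored here.

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

class NestingIllustration {

    // Invented path into a full, deeply nested Version 3/CDA instance (namespaces omitted).
    static String fromFullCda(Document fullCda) throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(
            "/ClinicalDocument/component/structuredBody/component/section/"
          + "entry/observation/value/@value", fullCda);
    }

    // The same variable item in a hypothetical simplified ("flattened") message.
    static String fromSimplifiedMessage(Document simplified) throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate("/patientSummary/systolicBloodPressure", simplified);
    }
}
```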
3. Approaches to Message Simplification

The aim of message simplification is to define a simple XML message format – with shallower nesting and fewer fixed attributes than a Version 3 or CDA message – which conveys the same variable information and can be reliably transformed to and from a full Version 3 message. Once a simplified message has been defined (with transforms to and from the full messages), a developer can interface systems to full Version 3 messages just by writing software to read and write the simplified messages and using the transforms to convert to and from the full messages. This could greatly reduce the difficulty of building HL7 Version 3 interfaces.

It has been recognised for some years that HL7 message simplification is feasible and useful. We shall briefly note work done in Canada and the USA, before proceeding to give a fuller account of work in the UK.
The Structured Documents workgroup of HL7 International has sponsored a project called 'greenCDA', whose aim is to provide simpler ways of writing and parsing CDA documents. This project has focused on a profile of CDA called the Continuity of Care Document (CCD), which is widely used in the US. The 'meaningful use' criteria set by the US federal government have aroused commercial interest in interfacing systems to CCD and a specialization of it called C32. The greenCDA project has produced a simplified CCD using a hand-designed XML format with meaningful business names for elements and attributes, and an XSLT to transform the simple form to a conformant instance of CCD/C32. The US Centers for Disease Control (CDC) is working on a greenCDA version of their Healthcare Associated Infection report.

There is significant effort involved in developing and testing the XSLT transforms. It remains to be seen how well these techniques scale to the volume of CDA templates in use, and how feasible it is to maintain the simplified message formats and transforms. This may be important in the light of (a) development of the underlying standards and profiles through successive versions, and (b) the iteration and experiment that may be needed to develop appropriate simplifications. We expect that the transforms can be developed in reusable pieces; the question remains how much this modularisation will save effort and reduce errors in transform development.

Canada Infoway has been applying message simplification since 2007, using Version 3 reshaping rules [7]. These rules allow associations in RIM-based models with maximum multiplicity 1 to be 'flattened' (removed from the model, removing a level of XML nesting from the message) wherever doing so would allow the original model instances to be recovered precisely. The tooling to do this is built into the Version 3 generator (the design tool for Version 3 messages). It allows the renaming of elements and attributes to give meaningful business names, and allows some associations which are eligible for flattening to be preserved, if so wished. A feature of this approach is that the tools produce not only simplified messages, but also a simplified object/class model and programming interfaces to develop software against that model. Run-time translation in both directions between the simplified object model and the full Version 3 messages is done by automatically generated code, so the translation between the simplified model and the full message is guaranteed to be accurate. The simplified model produced in this approach is also useful for domain experts to validate that the model meets their business requirements without having to master the technical complexity of a Version 3 RMIM. These simplification techniques are applicable to messages defined in the static model designer as RMIMs, but do not appear to support templates yet, as required for CDA.
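For the XSLT-based route, applying a simple-to-full transform from Java might look like the following sketch; the file names and the stylesheet are placeholders, and the actual greenCDA artefacts are not reproduced here.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

class GreenToFullTransform {
    public static void main(String[] args) throws Exception {
        // The stylesheet encodes the hand-written simple-to-full mapping discussed above.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("green-to-ccd.xslt")));

        // Convert one simplified message into a conformant full instance.
        transformer.transform(new StreamSource(new File("simple-message.xml")),
                              new StreamResult(new File("full-ccd.xml")));
    }
}
```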
4. Mapping-Based Message Simplification

This section reports novel work initiated in the UK to simplify Version 3 messages using a semantic mapping approach. This method maps different data structures not directly to one another, but each to a common UML class model. If any two data structures can be mapped to a shared class model, then open-source tools exist [8] to automatically generate an accurate transformation between the two data structures. The data structures are defined as XML schemas. The tooling is built on the Eclipse tool framework, using the Eclipse Modeling Framework (EMF). EMF has a notation called Ecore for representing UML class models. Mappings define how each XML data structure represents information in the class model. Transforms between the
structures can be generated automatically from the mappings. When a simplified message is defined, both the simplified message and the full message are automatically mapped to the same simplified class model. The tools can generate and execute reliable transformations in both directions between the simplified message and the full message. This is shown in Figure 1.
Figure 1. HL7 Version 3 message simplification process using semantic mapping.
In Figure 1, the only manual step is that marked 'select/rename'. The definitions of Version 3 messages are supplied in the HL7 Model Interchange Format and the tools automatically convert these to an EMF Ecore model. If the Version 3 message is a CDA with templates, the tools apply the templates to produce a templated Ecore model. The templated RMIM is a very large tree structure, which corresponds precisely to the tree structure of the XML messages. For some CDA applications, the tree may have millions of nodes, even without permitted recursive self-nesting of subtrees (which makes the allowed number of nodes infinite). The HL7 analyst who defines a message simplification uses the tools to navigate this large RMIM tree, adding annotations to define the simplified message. This involves:
• Marking those leaf nodes that must be populated for the given interchange
• Marking the internal nodes to be 'flattened' to make a shallower XML structure
• Defining meaningful business names for all retained nodes.
After annotating the class model, further user interaction is minimal. Simplified messages are typically about three times smaller than canonical messages [9]. Simplification works best for tightly defined profiles of Version 3 or CDA messages, where the information to be transferred between systems forms a well-bounded set. This approach is being tested and validated in a UK project to support the Care Assessment Framework (CAF) processes as part of Healthcare and Social Care Integration (HSCI), for which the NHS has defined a set of five CDA-based messages.
5. Discussion

We believe that the approach outlined here satisfies the following critical success factors for message simplification:
• Scope: applicable to any type of HL7 message or document;
• Transform Reliability: demonstrably reliable and testable two-way transformation between simplified and canonical Version 3 messages;
• Semantic Precision: fully and clearly defined in HL7 semantics;
• Ease of Use: straightforward to interface systems to the simplified messages, with significant cost savings compared to direct V3 interfacing;
• Breadth of Use: applicable for model-based development, model-based comparative query, or validation of domain models;
• Development of Simplified Message Definitions: the process of defining simplified messages is automated and reliable, scales well and is easily repeatable as message definitions change between versions.
A broader consideration is how the document messaging paradigm (simplified or not) will fit with service oriented architectures (SOA) for healthcare interchanges [10].
6. Conclusions

Message simplification can greatly reduce the costs of HL7 Version 3 interfaces, hiding the technical complexity of the RIM while preserving its robust semantics. We expect to see continuing development and use of HL7 simplification tools and techniques.
References

[1] Hinchley A. Understanding Version 3. 4th ed. Munich: Alexander Mönch; 2007.
[2] Dolin RH, Alschuler L, Boyer S, Beebe C, Behlen FM, Biron PV, et al. HL7 Clinical Document Architecture, Release 2. J Am Med Inform Assoc. 2006;13(1):30-9.
[3] Information Standards Board for Health and Social Care. Interoperability. n.d. [cited 12 April 2011]. Available from: http://www.isb.nhs.uk/use/baselines/interoper
[4] Office of the National Coordinator for Health Information Technology. Health Information Technology: Initial Set of Standards, Implementation Specifications, and Certification Criteria for Electronic Health Record Technology. Federal Register. 2010;75(144). [cited 17 January 2011]. Available from: http://www.gpo.gov/fdsys/pkg/FR-2010-07-28/pdf/2010-17210.pdf
[5] Kaminker D. HL7 Clinical Document Architecture Ambassador Briefing. 11th International HL7 Interoperability Conference, Rio de Janeiro, Brazil; 2010.
[6] Browne E. openEHR Archetypes for HL7 CDA Documents. 2008. [cited 17 January 2011]. Available from: http://www.openehr.org/
[7] McKenzie L, Stephens M. Message reshaping rules. 2007 [cited 17 January 2011]. Available from: http://wiki.hl7.org/index.php?title=Message_reshaping_rules
[8] V2 and V3 mapping tools. 2009 [cited 17 January 2011]. Available from: http://gforge.hl7.org/gf/project/v2v3-mapping/frs/
[9] Worden R, Scott P. Fragmentary example of simplified XML. 2011 [cited 28 April 2011]. Available from: http://userweb.port.ac.uk/~scottp/MIE2011/Example.pdf
[10] Kawamoto K, Lobach DF. Proposal for fulfilling strategic objectives of the U.S. Roadmap for national action on clinical decision support through a service-oriented architecture leveraging HL7 services. J Am Med Inform Assoc. 2007 Mar-Apr;14(2):146-55.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-714
Creating an Ontology Driven Rules Base for an Expert System for Medical Diagnosis

Valérie BERTAUD GOUNOT a,1, Valéry DONFACK a, Jérémy LASBLEIZ a, Annabel BOURDE a, Régis DUVAUFERRIER a
a Unité Inserm U936, IFR 140, Faculté de Médecine, University of Rennes 1, France
Abstract. Expert systems of the 1980s failed because of the difficulty of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded in ontologies (such as the NCIT). The process described here for an expert system on plasma cell disorders encompasses extraction of a sub-ontology and automatic, comprehensive generation of production rules. The creation of rules is not based directly on classes, but on individuals (instances). Instances can be considered as prototypes of diseases formally defined by "restrictions" in the ontology. Thus, it is possible to use this process to make diagnoses of diseases. The perspectives of this work are considered: the process described with an ontology formalized in OWL 1 can be extended by using an ontology in OWL 2, allowing reasoning about numerical data in addition to symbolic data. Keywords. NCI Thesaurus, OWL, SWRL, Expert Systems, Biomedical ontologies, Ontology modularization, data-element, value-set.
1. Introduction

Expert systems of the 1980s failed because it proved difficult to maintain large rule bases [1][2]. The aim of the current work is to show how semantic web tools can help in this area. We propose a method for (1) extracting a sub-ontology of a particular field (plasma cell neoplasms) from a medical ontology (the NCIT in OWL), and (2) automatically translating this ontology into production rules using the SWRL formalism. The goal is to enable easy building of the knowledge base of an expert system. This process, based on a formal ontology, makes it easy to generate a large number of production rules, ensures consistency of the expert system knowledge base and thus makes the reasoning easier to understand.
2. Material and Methods

The NCIT (v10.07) is an ontology and a terminology in the cancer domain with over 80,000 classes, 187 properties (or relations) and 57,000 restrictions. It is currently
1 Corresponding Author: Valérie Bertaud Gounot.
freely available in OWL 1.1 [3]. OWL reasoners like Pellet [4] allow checking the consistency of the ontology based on the formal definitions of classes. They can also classify an instance as being an instance of a specific class if the instance meets all the necessary and sufficient conditions of that class.

2.1. Reorganizing the Relationships in the NCIT

The NCIT has relationships ("Object Properties") which are not common in ontologies, such as "may_have" and "excludes". These object properties link each disease to its manifestations (signs, symptoms). This is a fine representation of signs in diseases, but it does not allow classifying instances of diseases according to their signs. Indeed, for a given patient, the signs are "present" or "absent": the patient will or will not have a sign (relationship "has" and not "may_have" or "excludes"). This leads us to propose that the relationship "disease_has_finding" is a special case of the relation "disease_may_have_finding", as did Natalya Noy [5]. If "disease_may_have_finding" subsumes the relation "disease_has_finding", this is in accordance with description logic and also with the reality of the domain. Moreover, it enables the reasoners to classify the instances that have or do not have the sign.

2.2. Extracting a Sub-Ontology

We did not want to work with the whole NCIT for processing time reasons. We created a "sub-ontology extractor" able to extract a sub-ontology, that is to say a subset of the NCIT (classes and their formal definitions). It takes as input parameters (1) an ontology in OWL format, (2) a list of key concepts from which the extraction shall start, (3) the directions in which the extractor shall search, i.e. to parents, to children and/or to connected concepts via relationships ("Object Properties"), and (4) the list of the relationships to be followed. We ran the extractor with PLASMA_CELL_NEOPLASM as key concept (Figure 1). The extractor retrieved all ancestors up to the top, all children down to the leaves, and all target concepts connected by a relationship to PLASMA_CELL_NEOPLASM or its children. Then it retrieved the parents of all these target concepts in order to link them to the root.

2.3. Developing Production Rules for Medical Decision Support

2.3.1. Logic Background

Ontology classifiers (Pellet, FaCT, ...) implement deductive reasoning. However, in the diagnostic process, we must propose diagnostic hypotheses by abductive reasoning, starting from an observation in which information is inherently incomplete [6]. Deductive reasoning: if a → b and a is true, then b is true. Abductive reasoning: if a → b and b is true, then a is possibly true.

2.3.2. Creating the Prototypical Cases

SWRL reasons on instances. Thus it was necessary to generate an ABox (Assertional Box: all information related to specific instances of the domain) from the myeloma TBox (Terminology Box: all classes of the ontology with their formal definitions). As Protégé does not have an assistant to automatically create instances, we used the OWL
API to automatically generate an instance and define the various assertions for each of the 27 prototypical cases of plasma cell disease.

2.3.3. Creating SWRL Rules for Diagnostic Reasoning

We then created the SWRL rules used for abductive reasoning.
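The instance-generation step of Section 2.3.2 can be pictured with a short OWL API sketch. The paper states only that the OWL API was used; the code below is therefore an assumed illustration: the namespace and IRIs are placeholders, only one assertion is shown, and the real generator iterates over all 27 prototypical cases.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class PrototypeGeneratorSketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology abox = manager.createOntology(IRI.create("http://example.org/myeloma-abox"));

        String ns = "http://example.org/ncit-sub#";   // placeholder namespace, not the real NCIT IRIs
        OWLClass disease = factory.getOWLClass(IRI.create(ns + "Extramedullary_Plasmacytoma"));
        OWLNamedIndividual prototype =
                factory.getOWLNamedIndividual(IRI.create(ns + "Extramedullary_Plasmacytoma_prototype"));
        OWLObjectProperty hasFinding =
                factory.getOWLObjectProperty(IRI.create(ns + "Disease_Has_Finding"));
        OWLNamedIndividual finding =
                factory.getOWLNamedIndividual(IRI.create(ns + "Localized_Lesion"));

        // Assert that the prototypical case instantiates the disease class and has one finding.
        manager.addAxiom(abox, factory.getOWLClassAssertionAxiom(disease, prototype));
        manager.addAxiom(abox, factory.getOWLObjectPropertyAssertionAxiom(hasFinding, prototype, finding));
    }
}
```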
3. Results

3.1. Extracting a Sub-Ontology

From the NCIT ontology encompassing over 60,000 classes, 187 relations (object properties) and 57,000 restrictions, we automatically extracted a sub-ontology of 281 classes, 17 relations and 25 restrictions. The resulting sub-ontology completely defines the concepts of the taxonomy of PLASMA_CELL_NEOPLASM.

3.2. Creating Production Rules for Medical Decision Support

Formulating these production rules required the use of four new data properties: "Finding_Has_Diagnosis", "Finding_Excludes_Diagnosis", "Finding_Absence_Has_Diagnosis" and "Finding_Absence_Excludes_Diagnosis".
For diagnostic reasoning, four generic rules were defined. They are meant to be used to reason on the instances (prototypical cases).
(1) If a patient has a sign f and if this sign may be a manifestation of the disease d, then this disease d is a possible diagnosis:
Disease_Has_Finding(?d, ?f) ^ Finding(?f) -> Finding_May_Have_Diagnosis(?f, ?d)
(2) If a patient has a sign f and if a disease d excludes this sign, then the disease d is not a possible diagnosis:
Disease_Excludes_Finding(?d, ?f) ^ Finding(?f) -> Finding_Excludes_Diagnosis(?f, ?d)
(3) If a sign f is absent in the patient and if this sign f is required for the disease d, then the absence of sign f excludes the diagnosis:
Disease_Has_Finding(?d, ?f) ^ Finding(?f) -> Finding_Absence_Excludes_Diagnosis(?f, ?d)
(4) If a sign f is absent in the patient and if a disease d excludes this sign, then the absence of sign f makes the diagnosis possible:
Disease_Excludes_Finding(?d, ?f) ^ Finding(?f) -> Finding_Absence_May_Have_Diagnosis(?f, ?d)
Given that we consider that "Disease_Has_Associated_Disease", "Disease_Has_Abnormal_Cell", "Disease_Has_Cytogenetic_Abnormality" (...) also express disease-sign relationships in our sub-ontology, 9 generic SWRL rules are needed in order to be able to drive abductive reasoning on patients and to exploit the "excludes" and "may have" relationships. For the whole NCIT, we would have to write the 4 rules for 5 different types of relationships, thus 20 rules would be needed. These production rules define the semiological relationships (relationships between diseases and their manifestations) that allow a diagnosis to be suggested or eliminated depending on the presence or absence of a sign. They follow a first order logic (with variables "?d" and "?f"). Variables can also be automatically instantiated with all
instances (individuals) of the ontology, resulting in three hundred and five production rules (0-order logic) for our sub-ontology (Fig. 1). The system was evaluated with 10 real patient records and discharge letters. An input form (http://www.med.univ-rennes1.fr/OntoDiag/) gathering all the leaf-level medical findings from the ontology was filled in for each patient. The production rules were used to provide two lists of possible and excluded diagnoses. All diagnoses made by the doctors (domain experts) were in the list of the possible diagnoses. For each possible diagnosis, the number of signs present or absent compared to the number of signs described for the disease in the ontology is also displayed, allowing the possible diagnoses to be ranked by relevance.
Finding_Absence_Excludes_Diagnosis (Extraosseous_Lesion, Extramedullary_Plasmacytoma)
Finding_Absence_Excludes_Diagnosis (Localized_Lesion, Extramedullary_Plasmacytoma)
Finding_Absence_May_Have_Diagnosis (Neoplastic_Plasma_Cells_Present_in_Bone_Marrow, Extramedullary_Plasmacytoma)
Finding_Excludes_Diagnosis (Neoplastic_Plasma_Cells_Present_in_Bone_Marrow, Extramedullary_Plasmacytoma)
Finding_May_Have_Diagnosis (Arthritis, Heavy_Chain_Deposition_Disease)
Finding_May_Have_Diagnosis (Coagulation_Disorder, Heavy_Chain_Deposition_Disease)
Figure 1: Example of reified production rules: the SWRL rules were instantiated with the diseases and findings defined in the ontology. For example, the first rule means "if extra-osseous lesion is absent, then the extramedullary plasmacytoma diagnosis is excluded".
4. Discussion

Previous work has already demonstrated that it is possible to use semi-formal knowledge bases to build expert systems [7]. Our study shows how it is possible to generate the inference rules of an expert system from an ontology written in OWL. The topic of ontologies and decision support has led us to consider semiotic ontologies in which the entities are not diseases but diagnoses [8][9]. It is clear that this approach is a minority one. In a classical medical ontology, diseases are entities that have manifestations (Disease_Has_Finding); the concept of diagnosis is not mentioned. We had to add the "Finding_Has_Diagnosis" relationship. It is not the inverse of the previous one, but a genuinely new relationship: the diagnosis is not a disease but a hypothesis of the disease. Computer-assisted decision systems based on classical ontologies have rarely been proposed; however, we can highlight the work of Jovic [10], who described an architecture similar to ours.

One of the special features of our work is the use of abductive reasoning on an ontology. Abductive logic and ontologies have already been discussed in several publications [11]. Querying an ontology is classically based on description logic and deductive reasoning. This deduction may either be made on the Terminological Box (TBox) or on the Assertional Box (ABox). In accordance with description logic, it does not allow classifying a class or an instance if it does not meet all the necessary and sufficient conditions of at least one class in the ontology. Production rules linking signs to diagnoses could be considered as an HBox (Hypothesis Box), which allows us to use the ontology for decision support in abductive reasoning. Abductive reasoning allows getting results (hypotheses of diseases) even if not all the necessary and sufficient findings are known. In the NCIT, one relationship is unusual for ontologies: the "Excludes" relationship. It is useful for medical diagnosis, which can be based on negative signs (absent signs).
The production rules make it possible to reason on findings known to be absent for a given patient: the fact that a sign is known to be absent either (1) has no influence on the diagnosis if the sign is not mandatory; or (2) excludes the diagnosis if the sign is mandatory for the disease; or (3) strengthens the possibility of the diagnosis if the sign is excluded for the disease. For example, this could be useful to formally define eligibility criteria of clinical trials. Some "may_have" relationships defined at the level of the class could be transformed at the level of the instance into "has" or "excludes" relationships according to the definition of the disease chosen in the clinical trial. This approach is an adaptation of a case-based reasoning (CBR) system in which the source case is modified to be in accordance with a new situation, the target case, which is expressed in description logic as proposed by Cojan and Lieber [12].
5. Conclusion

A major problem of expert systems was the creation and maintenance of rule bases. Driving this creation by an ontology can greatly facilitate this process. In this context, the generated rule base could be considered as the HBox of the ontology, that is to say the diagnostic hypotheses box, just as the TBox is the terminology box and the ABox contains the descriptions of the disease prototypes.
References

[1] Liao S. Expert system methodologies and applications – a decade review from 1995 to 2004, Expert Systems with Applications (2004), 1–11.
[2] Hayes-Roth B. A blackboard architecture for control, Artificial Intelligence 26 (1985), 251-321.
[3] Fragoso G, de Coronado S, Haber M, Hartel F, Wright L. Overview and utilization of the NCI Thesaurus, Comp Func Genomics 5(8) (2004), 648–54.
[4] Sirin E, Parsia B, Grau B, Kalyanpur A, Katz Y. Pellet: a practical OWL-DL reasoner, Web Semantics: Science, Services and Agents on the World Wide Web 5 (2007), 51–53.
[5] Noy N, de Coronado S, Solbrig H, Fragoso G, Hartel F, Musen M. Representing the NCI Thesaurus in OWL DL: modeling tools help modeling languages, Appl Ontol 3(3) (2008 Jan 1), 173–190.
[6] Pottier P, Planchon B. Description of the mental processes occurring during clinical reasoning, doi: 10.1016/j.revmed (2010).
[7] Achour S, Dojat M, Rieux C, Bierling P, Lepage E. A UMLS-based knowledge acquisition tool for rule-based clinical decision support system development, J Am Med Inform Assoc 8(4) (2001 Jul-Aug), 351-360.
[8] Bertaud-Gounot V, Lasbleiz J, Mougin F, Marin F, Burgun A, Duvauferrier R. A unified representation of findings in clinical radiology using the UMLS and DICOM, Int J Med Inform 77(9) (2008 Sept), 621-629.
[9] Bertaud-Gounot V, Belhadj I, Dameron O, et al. Computerizing the radiological sign, J Radiol 88 (2007 Jan), 27-37.
[10] Jovic A, Prcela M, Gamberger D. Ontologies in medical knowledge representation, Proceedings of the ITI 2007 29th Int. Conf. on Information Technology Interfaces, June 25-28 (2007).
[11] Elsenbroich C, Kutz O, Sattler U. A case for abductive reasoning over ontologies, OWLED, Vol. 216, CEUR-WS.org (2006).
[12] Cojan J, Lieber J. An algorithm for adapting cases represented in an expressive description logic, ICCBR (2010), 51-65.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-719
A Methodology and Supply Chain Management Inspired Reference Ontology for Modeling Healthcare Teams

Craig E. KUZIEMSKY a,1, Sara YAZDI a
a Telfer School of Management, University of Ottawa, Ottawa, ON, Canada
Abstract. Numerous studies and strategic plans are advocating more team based healthcare delivery that is facilitated by information and communication technologies (ICTs). However before we can design ICTs to support teams we need a solid conceptual model of team processes and a methodology for using such a model in healthcare settings. This paper draws upon success in the supply chain management domain to develop a reference ontology of healthcare teams and a methodology for modeling teams to instantiate the ontology in specific settings. This research can help us understand how teams function and how we can design ICTs to support teams.
Keywords. Ontology, healthcare teams, methodology, information and communication technology
1. Introduction

Reports such as those from the Institute of Medicine have advocated re-engineering of the healthcare system to support team based care delivery [1]. However, this re-engineering effort is a complex endeavor as healthcare systems were not designed to facilitate team based care delivery. Information and communication technologies (ICTs) will play a key role in supporting team based care delivery, but developing ICTs to support teams poses some significant systems design challenges. Key processes involved in teamwork such as communication or decision making, as well as tasks such as handoff, have been shown to be problematic because of coordination or joint information access issues [2]. The essence of team based care delivery is the interoperability of several concepts including data, processes, goals and mandates. Electronic health records (EHRs) are a common architecture used to support interoperability, yet research has shown that EHR standards and frameworks tend to support individual rather than team based needs [3]. A further issue with EHRs is that they typically provide support for disparate processes rather than an integrated continuum of processes [2]. Teams provide care as a continuum of services, and a common source of miscommunication and medical errors is poor information formats or communication exchanges during the various services [4]. To better support team based care delivery we need to design ICTs that support the continuum of care within which teams work.
1 Corresponding Author: Dr. Craig Kuziemsky, Telfer School of Management, University of Ottawa, 55 Laurier Avenue East, Ottawa, ON K1N 6N5; e-mail: [email protected]
Overall we see two key shortcomings in existing informatics research on team based care delivery. One is that it has tended to consider all teams as the same. Rather, different team structures exist (e.g. interdisciplinary, multidisciplinary and transdisciplinary) [5], yet we know little about how these structures will impact ICT design. Second, much of the existing modeling work on teams has focused on specific objectives such as error prevention or specific tasks such as decision making, handoffs or information needs for team meetings. There is no methodology or systematic approach for modeling the information and process needs of healthcare teams. The overall challenge is that we cannot manage teamwork per se, but rather we must manage specific aspects of teams. However, a precursor to management is the need to define and model team based concepts and the relationships amongst them.

Our research is inspired by the supply chain management (SCM) domain. SCM initially struggled with ICT implementation, finding that expensive ICTs did not provide enhanced performance. Research into that issue identified that SCM is a type of management and that successful ICT design is about managing processes. To better support processes the SCM domain developed a common reference model called the supply chain operations reference model (SCORM) [6]. The SCORM contains five common macro processes (plan, source, make, deliver and return) from which more detailed micro level processes can be defined to support specific SCM systems. These micro level processes provide the means of developing dynamic SCM systems while still adhering to interoperable standards. The main benefit of the SCORM is that it provides a common architecture for the development of standards and best practices for SCM integration and systems design. Healthcare teams could similarly benefit from the development of a reference model to support better management and systems design. A first task is to develop a comprehensive methodology and reference model for healthcare teams. This paper describes our work at achieving the above task.
2. Methods

Our ontology and methodology were developed based on case study data and a comprehensive literature search on healthcare teamwork. A qualitative content analysis approach was used to design the ontology [7]. Content analysis analyzes characteristics of the data with particular attention to the content and contextual meaning of the text.

2.1. Data Sources

Data source one is case studies on team based care delivery. One of the authors (CEK) has studied team based care delivery in several clinical units including palliative care, medical-surgical acute care, day surgery, continuing complex care and diabetic education. Data source two is a literature review on healthcare teams from searching Scopus, Web of Science and Medline. Search terms included ‘healthcare team’ and various other keywords including ‘processes’, ‘communication’ and ‘models’. We also searched on supply chain management, including models, methodologies and ICT design to support SCM.
3. Results

After analyzing our case study data and literature we developed an ontology and methodology for modeling collaborative care delivery. The two data sources provided complementary views on teamwork: the literature provided conceptual perspectives on the structure and processes, while the case studies provided an empirical perspective.

3.1. Ontology of Healthcare Teams

Figure 1 shows our ontology of healthcare teams developed using a content analysis process. The ontology is being modeled in OWL to enable it to be part of the Ontology for General Medical Science (OGMS) initiative (http://code.google.com/p/ogms). As we analyzed the data and started to design the ontology we realized that teamwork has both a structure and a set of supporting concepts; both are represented in our ontology. First is the ‘continuum’ concept, which represents the structure. The continuum has five sub-concepts (assessment, decision making, care planning, care delivery and evaluation), shown in Figure 1. The continuum is our equivalent to the SCORM, as the five sub-concepts are macro level concepts common across all teams from our case studies and literature. Understanding and modeling the entire continuum is important to ensure ICTs are designed to support all relevant team processes. For example, a team cannot make a care delivery decision without appropriate consideration of how to implement the decision or, more significantly, without any evaluation of the outcome of the decision. All relevant processes need to be modeled and integrated: teams do not work as several disparate processes but rather as an integrated continuum.

Although the continuum acts as a common architecture for healthcare team interoperability, we know that different team settings will have different needs. The implementation of the continuum in a specific team setting is influenced by six other ontology concepts shown in Figure 1. These concepts represent the micro level details of different team instantiations, including team members, team processes, information, governance, team structure, and synchronization points for team processes. Each of these ontology concepts has its own levels of complexity, but these are not shown in detail in Figure 1 because of space; instead several examples will be discussed. The ‘processes’ concept has a sub-concept called ‘decision logistics’. This sub-concept provides micro level details to the decision making sub-concept from the team continuum and refers to the fact that team decision making may take hours (or even days) because it can involve obtaining input from multiple providers in order to arrive at a decision. Therefore different decision states such as ‘in-progress’, ‘negotiation’ or ‘complete’ need to be modeled and supported by an ICT. The ‘members’ concept models the team members and contains the sub-concepts ‘skill sets’ and ‘information needs’. Team tasks are delegated at the time of care delivery, and the skill sets of team members need to be captured and communicated so that tasks are assigned based on abilities. Information needs refer to individual team member information preferences. A team is a sum of individual providers who will have different preferences with respect to information sources and types, and how information is formatted and disseminated. For example, some physicians prefer to be notified with regular patient updates while others only wanted notification when a change occurred in a patient’s goal or careplan.
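As a purely illustrative sketch (not the authors' actual OWL file), the continuum and its five sub-concepts could be declared with the OWL API roughly as follows. The namespace is invented, and modelling the sub-concepts as subclasses is an assumption made only to keep the sketch short; parthood or other relations may be more appropriate in the real ontology.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class TeamOntologySketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        String ns = "http://example.org/healthcare-teams#";   // placeholder namespace
        OWLOntology onto = manager.createOntology(IRI.create(ns));

        OWLClass continuum = factory.getOWLClass(IRI.create(ns + "Continuum"));
        String[] subConcepts = {"Assessment", "DecisionMaking", "CarePlanning",
                                "CareDelivery", "Evaluation"};
        for (String name : subConcepts) {
            OWLClass sub = factory.getOWLClass(IRI.create(ns + name));
            // Each macro process is attached under the continuum concept.
            manager.addAxiom(onto, factory.getOWLSubClassOfAxiom(sub, continuum));
        }
    }
}
```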
Figure 1. Ontology of healthcare teams with expansion of the continuum concept
Modeling and understanding information needs is essential to prevent information overload and communication issues. Finally, the ‘governance’ concept models the mandates of a team and the roles and goals within that mandate. Governance has a sub-concept called ‘boundaries’ that identifies boundary issues between professions or between organizations. Professional boundaries refer to disciplinary boundaries such as what a registered nurse can do as opposed to a licensed practical nurse or nurse assistant. Some tasks are out of scope for certain professions and that knowledge is needed for workflow planning. Organizational boundaries model teamwork across different healthcare settings. Crossing organizational boundaries will require the reconciliation of policies and procedures such as those for data collection and sharing.

3.2. Methodology for Team Ontology Modeling

As we analyzed the data to develop the ontology we drew upon the case studies and literature as to how the different concepts fit together and how healthcare teams should be modeled. The starting point for modeling healthcare teams is the five macro processes of the continuum, as they were consistent across all teams. However, the manner in which the continuum is implemented will differ. The most significant modeling consideration is the team structure, as that will impact implementation of the other concepts. Multidisciplinary team cases (e.g. knee replacement recovery) are quite structured: members work parallel to each other and collaboration only occurs as necessary. The different continuum processes are reconciled to achieve the final outcome, but information and communication exchange is on an as-needed basis. Interdisciplinary teams (e.g. palliative care) deal with more complex patient cases, and thus care delivery of the continuum processes involves integrated collaboration to ensure the entire team has a common understanding of the patient’s situation and that all team members contribute to defining the patient goals and the care delivery to support these goals. Interdisciplinary care delivery will affect information and communication needs, as team members not only need to see what other team members are doing but also work in an integrated manner during continuum processes such as assessment or care planning. Further, the decision making process in interdisciplinary teams must represent a consensus decision of all team members and not a decision by one team member. The most significant aspect of the modeling method is that teams are dynamic entities: a change in the team structure will require a new model, as each team instantiation will implement the micro level concepts differently.
4. Discussion

This is the first comprehensive model of healthcare teams that defines the continuum of teamwork and the supporting concepts that instantiate the continuum in a specific setting. Our ontology represents teamwork as both a structure and a set of concepts that work within that structure in the delivery of healthcare services. We identified a common continuum of five processes that all teams engage in, but also identified that the implementation of these processes depends on a number of factors including the team structure, the data and communication needs of the team and its members, and the governance of the team. The key implication for the design of ICTs to support teams is that design cannot be done with a one-size-fits-all mindset. ICTs to support teamwork must be designed to enable run-time agility in how they support team processes, for two key reasons. First, in our case data we observed that a team may work under different team structures depending on the patient circumstances; the necessary process support will change with each structure. Second, teams constantly change providers, and different providers will have different information and communication needs. This study also demonstrated the value of adapting research from other domains to influence ICT modeling and design in healthcare: the SCORM and its use for designing integrated systems in the supply chain domain was the inspiration for our ontology.

One shortcoming of this paper is that it was based on one set of case studies and one literature review. We anticipate other team based concepts will emerge in other settings. Future iterations of our ontology will incorporate additional concepts as well as mapping the ontology to controlled terminologies such as SNOMED-CT and to architectures such as openEHR to assess their abilities to support healthcare teams. We are also using the ontology to model team based simulations to develop standards and best practices to support the management of healthcare teams.

Acknowledgements: This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
References

[1] Institute of Medicine. Crossing the quality chasm: a new health system for the twenty-first century. Washington, DC: National Academies Press; 2001.
[2] Bates DW. Getting in Step: Electronic Health Records and their Role in Care Coordination. J Gen Intern Med 25(3) (2010), 174–6.
[3] Dorr DA, Jones SS, Wilcox A. A framework for information system usage in collaborative care. Journal of Biomedical Informatics 40 (2007), 282–287.
[4] Ash JS, Berg M, Coiera E. Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System-related Error. J Am Med Inform Assoc 11 (2004), 104–112.
[5] Choi BCK, Pak AWP. Multidisciplinarity, interdisciplinarity and transdisciplinarity in health research, services, education and policy: 1. Definitions, objectives, and evidence of effectiveness. Clinical and Investigative Medicine 29(6) (2006), 351-364.
[6] Stephens S. Supply Chain Operations Reference Model Version 5.0: A New Tool to Improve Supply Chain Efficiency and Achieve Best Practice. Information Systems Frontiers 3(4) (2001), 471-476.
[7] Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research 15(9) (2005), 1277-1288.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-724
Supporting openEHR Java Desktop Application Developers

Hajar KASHFI a,1, Olof TORGERSSON a
a Department of Applied Information Technology, Chalmers University of Technology and University of Gothenburg, SE-412 96, Gothenburg, Sweden
Abstract. The openEHR community suggests that an appropriate approach for creating a graphical user interface for an openEHR-based application is to generate forms from the underlying archetypes and templates. However, current generation techniques are not mature enough to be able to produce high quality interfaces with good usability. Therefore, developing efficient ways to combine manually designed and developed interfaces to openEHR backends is an interesting alternative. In this study, a framework for binding a pre-designed graphical user interface to an openEHR-based backend is proposed. The proposed framework contributes to the set of options available for developers. In particular we believe that the approach of combining user interface components with an openEHR backend in the proposed way might be useful in situations where the quality of the user interface is essential and for creating small scale and experimental systems. Keywords. openEHR, clinical application, opereffa, application development framework, data binding.
1. Introduction

With the growing presence of electronic healthcare record (EHR) systems developed by various vendors, it becomes increasingly important to agree upon standards that can overcome the resulting interoperability problems [1]. The goal of the openEHR initiative is to develop an open standard that can serve as the basis both for developing EHR systems and for guaranteeing semantic interoperability between systems [2]. openEHR suggests a two-level architecture for managing data in clinical applications. The top level is made up of archetypes, which are domain specific models that should be developed by domain experts. The lower level is the openEHR reference model (RM), which is a very generic model for managing clinical data [2].

Regardless of the advantages offered by openEHR, the standard suffers from a rather low adoption rate. While this is not the place to make a proper analysis of the reasons for this, some possible reasons are the complexity of the standard, lack of documentation and training for developers, and a limited set of tools and frameworks available to ease application development. Most of the focus of the community seems to have gone into representing and modelling domain concepts and perfecting the specifications. However, to make openEHR more practical, application developers need to be supported with APIs, frameworks and tools. Of course, a number of application
1 Corresponding author: Hajar Kashfi, Department of Applied Information Technology, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden; E-mail: hajar.kashfi@chalmers.se.
development projects exist. Some of the more mature are: the open source health information platform (OSHIP) [3], the open EHR-Gen framework [4], GastrOS [5], and the openEHR reference framework and application (opereffa) [6].

Different approaches can be used to develop openEHR applications. The typical approach is to automatically generate the graphical user interface (GUI) from openEHR archetypes/templates. To our knowledge, current openEHR frameworks and tools are compatible with this model (depicted in Figure 1-A). The idea is that clinicians design and create archetypes (and templates) using existing tools. Based on these, a GUI or GUI artefacts are generated from the given archetypes/templates. To improve the GUI design, manual adjustment of the GUI or its style files is often required. The typical openEHR application is web based.

In contrast, there is an approach in which no generation of the GUI based on archetypes is done; instead the interface is designed by experts based on the demands of the users of the application. There is then a need to connect this user interface to the archetypes designed in parallel by domain experts. Unfortunately, the current frameworks do not provide enough support for this type of application development. To fill this gap, we have developed an extension to one of the existing openEHR frameworks to help developers easily connect a user interface created by a GUI designer to an openEHR based backend.
Figure 1. The two development models: (A) the GUI is generated from the archetypes/templates created by clinicians, with manual adjustment by the developer; (B) a GUI designed for the users is connected to the openEHR-based backend through data binding. The left-side model (A) is supported by opereffa.
2. Methods and Tools
As mentioned in the previous section, in contrast to the automatic or semi-automatic approach to creating GUIs for openEHR-based applications, there is an alternative, illustrated in Figure 1-B. To support this approach, there is a need for a simple and efficient data-binding framework connecting an application's frontend to its backend (or the business logic). Usually, the idea of data binding is that when the data changes, the changes are reflected automatically by the bound elements on the user interface. In
the same manner, if the "outer representation" of the data changes, then the corresponding data in the backend should be updated automatically as well to reflect the change [7]. We have developed an extension, a Java desktop user interface data-binding layer, to one of the openEHR application development frameworks, namely opereffa (compare the left and right sides of Figure 2). opereffa is an initiative for creating an open source clinical application and is built on top of a Java-based open source framework. Services such as GUI generation and persistence are supported in opereffa. opereffa generates web-based user interface artefacts using JavaServer Faces (JSF) [8] and also makes use of the data binding capabilities of JSF.
Figure 2. The proposed architecture: the opereffa open source clinical application with its web-based (JSF) GUI and automatic GUI generation, alongside a medical application whose Java desktop GUI is connected to the openEHR-based backend (Java open source RM implementation, ADL parser, AOM/RM classes, PostgreSQL persistence) through the user interface data-binding layer.
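To make the data-binding idea described in Section 2 concrete, the following minimal Swing sketch shows a two-way binding between a text field and a backend value. It is an illustrative example only: the BackendValue class and its callback mechanism are invented for this sketch and are not classes from opereffa or from the framework presented here.

```java
import javax.swing.JTextField;
import javax.swing.event.DocumentEvent;
import javax.swing.event.DocumentListener;

// Hypothetical backend value holder; in the framework described in this paper
// this role is played by the openEHR data kept behind the binding layer.
class BackendValue {
    private String value = "";
    private Runnable changeCallback;
    String get() { return value; }
    void set(String v, boolean notifyGui) {
        value = v;
        if (notifyGui && changeCallback != null) changeCallback.run();
    }
    void onChange(Runnable cb) { changeCallback = cb; }
}

final class TwoWayBinding {
    private TwoWayBinding() {}
    static void bind(final JTextField field, final BackendValue backend) {
        // GUI -> backend: edits in the text field update the backend value.
        field.getDocument().addDocumentListener(new DocumentListener() {
            private void sync() { backend.set(field.getText(), false); }
            public void insertUpdate(DocumentEvent e) { sync(); }
            public void removeUpdate(DocumentEvent e) { sync(); }
            public void changedUpdate(DocumentEvent e) { sync(); }
        });
        // Backend -> GUI: when data is loaded from the database, the bound
        // field is refreshed with the stored value.
        backend.onChange(new Runnable() {
            public void run() { field.setText(backend.get()); }
        });
    }
}
```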
3. Results
The new data-binding layer on top of opereffa provides a single entry point for connecting Java desktop GUIs to an archetype-enabled backend. To make the desktop connection framework as easy to use as possible for application developers, it is designed using a facade pattern [9]. This means that there is one single class, ArchetypeDataHandler, which hides all the details of archetype-based data management and therefore is the only class someone using the framework needs to know anything about. To further improve ease of use in applications, the ArchetypeDataHandler is a singleton, meaning that there is only one instance of it in an application and it can easily be accessed anywhere. To connect a pre-designed GUI to the openEHR backend, the only steps required are: (i) creating an XML file specifying the connection between GUI components and archetypes, and (ii) adding each component in the GUI to the ArchetypeDataHandler, i.e. calling the method add(JComponent c) on the ArchetypeDataHandler. Underlying the implementation is the assumption that each GUI component is logically related to one and only one item in an archetype (an item in an archetype can, however, be visually presented in several places on a screen). This means that for each component there exists an archetype name and a unique path for the item in that archetype. This information is stored in an XML file, which is used by the
ArchetypeDataHandler to decide to which data item a certain GUI component should be connected. A sample of entries in the XML file is shown in Figure 3. Internally, a class named GUIMapper, with support from other mapper classes, provides the actual functionality. The role of this class is to keep track of all GUI components, their archetypeWrappers and the path of the item in the archetype to which each component is mapped. The synchronization of the data shown in the GUI and what is stored in the GUIMapper is realized using various listeners. If data changes in the GUI, this change is reflected in the GUIMapper in the backend. On the other hand, if data is loaded from the database, the GUIMapper updates the GUI as well. All data is kept in memory to be persisted at the right time using opereffa persistence services. To evaluate the functionality of the data-binding framework, a demo application was created using the graphical GUI editor of NetBeans. In the demo application, four different archetypes are used and connected to 39 GUI components. The archetypes are both special archetypes developed for a decision support system for Xerostomia [10] and adaptations of common archetypes such as openEHR-EHR-OBSERVATION.lab_test.v1. A snapshot of the demo application is depicted in Figure 4. The screen shows a field where one can search for a patient, a list where one can select one of the patient's recent sessions, and the data recorded at that session. The part of the code that enables the application to load and store archetyped data is only a few lines that add components to the ArchetypeDataHandler. The framework provides the rest.
Figure 3. Sample entries showing the connection between GUI components and archetypes.
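A hypothetical sketch of the two steps described above follows. Only the ArchetypeDataHandler class and its add(JComponent c) method are named in the text; the getInstance() accessor, the XML element and attribute names, and the archetype path are illustrative assumptions, and the example presupposes the framework classes on the classpath.

```java
import javax.swing.JTextField;

public class DemoFormWiring {
    public static void main(String[] args) {
        /* Step (i): an XML mapping file pairs each GUI component with an
         * archetype name and the unique path of the item it displays.
         * The element and attribute names below are invented for illustration:
         *
         * <mapping component="resultField"
         *          archetype="openEHR-EHR-OBSERVATION.lab_test.v1"
         *          path="/data/events/data/items[at0001]/value"/>
         */
        JTextField resultField = new JTextField();
        resultField.setName("resultField"); // must match the mapping entry

        // Step (ii): register the component with the facade; the framework
        // handles loading, change tracking and persistence from here on.
        // getInstance() is assumed here as the singleton accessor.
        ArchetypeDataHandler.getInstance().add(resultField);
    }
}
```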
4. Discussion and Conclusion
The proposed framework for binding a pre-designed GUI to an openEHR-based backend contributes to the set of options available for developers. We particularly believe that the approach of combining GUI components with an openEHR backend in the proposed way might be useful for various small-scale and experimental systems as well as for systems where the quality of the user interface is of great importance. The generally proposed model for building openEHR applications was illustrated in Figure 1-A. While generating GUIs from archetypes and templates has advantages when it comes to the maintenance of large-scale systems, creating good, and not just mediocre, user interfaces from underlying domain models is a non-trivial problem. While mature generation techniques are being developed, hooking up designed GUIs to openEHR backends is an interesting alternative. The use of designed GUIs is also interesting from another point of view. GUIs based on a domain model are by necessity rather close to the implementation model of the system. However, when a GUI is designed, the goal of the developers should always be to have an on-screen representation that is as close as possible to the users' mental model [11]. Therefore, having a simple way of replacing parts of a system with GUIs designed by hand in accordance with the users' mental models is valuable for handling situations where generation cannot produce an appropriate solution.
The main disadvantage of the approach used in this paper is maintenance. A generated GUI can change automatically if the underlying archetypes/templates change whereas a manually created GUI has to be updated manually. A related problem is that in complex systems the number of GUI components that need to be connected to the backend will be very large. The current implementation is rather limited since it only supports a small number of GUI components. A more complete implementation must of course support all commonly used standard components and perhaps even provide some special controls tailored towards clinical applications. This however is work for the future.
Figure 4. The demo application.
References
[1] Schloeffel P, Beale T, Hayworth G, Heard S, Leslie H. The relationship between CEN 13606, HL7, and openEHR. In: HIC 2006. Health Informatics Society of Australia; 2006. p. 24.
[2] Beale T, Heard S. openEHR Architecture Overview [published on the Internet]. openEHR Foundation; 2009 [cited 2011 April 20]. Available from: http://www.openehr.org
[3] Open Source Health Information Platform (OSHIP) [homepage on the Internet]. Multi Level Healthcare Information Modelling - MLHIM [cited 2011 April 20]. Available from: http://www.oship.org
[4] Open-EHR-Gen [homepage on the Internet]. Available from: http://code.google.com/p/open-ehr-gen-framework
[5] GastrOS [homepage on the Internet]. The University of Auckland [cited 2011 April 20]. Available from: http://sourceforge.net/projects/gastros
[6] openEHR REFerence Framework and Application (Opereffa) [homepage on the Internet]. UCL [cited 2011 April 20]. Available from: http://opereffa.chime.ucl.ac.uk/introduction.jsf
[7] Data Binding Overview [webpage on the Internet]. Microsoft Corporation; 2010 [cited 2011 April 20]. Available from: http://msdn.microsoft.com/en-us/library/ms752347.aspx
[8] Burns E, Griffin N, Schalk C. JavaServer Faces 2.0: the complete reference. McGraw-Hill Professional; 2009.
[9] Stelting S, Maassen O. Applied Java patterns. USA: Sun Microsystems Inc.; 2002.
[10] Kashfi H. Applying a user centered design methodology in a clinical context. Studies in Health Technology and Informatics. 2010 Jan;160(Pt 2):927-31.
[11] Cooper A, Reimann R, Cronin D. About Face 3: the essentials of interaction design. Wiley-India; 2007.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-729
Large Scale Healthcare Data Integration and Analysis using the Semantic Web
John TIMM a, Sondra RENLY a, Ariel FARKASH b
a IBM Almaden Research Center, 640 Harry Rd, San Jose, CA, 95120, US
b IBM Haifa Research Lab, Haifa Univ. Mount Carmel, Haifa, 31905, Israel
Abstract. Healthcare data interoperability can only be achieved when the semantics of the content is well defined and consistently implemented across heterogeneous data sources. Achieving these objectives of interoperability requires the collaboration of experts from several domains. This paper describes tooling that integrates Semantic Web technologies with common tools to facilitate cross-domain collaborative development for the purposes of data interoperability. Our approach is divided into stages of data harmonization and representation, model transformation, and instance generation. We applied our approach in Hypergenes, an EU funded project, where we used our method on the Essential Hypertension disease model using a CDA template. Our domain expert partners include clinical providers, clinical domain researchers, healthcare information technology experts, and a variety of clinical data consumers. We show that bringing Semantic Web technologies into the healthcare interoperability toolkit increases opportunities for beneficial collaboration, thus improving patient care and clinical research outcomes. Keywords. Healthcare Interoperability, Semantic Web, Modeling, UML, OWL.
1. Introduction
Healthcare providers want access to healthcare information to improve coordination of care, increase quality of care, and generate evidence for future medical decision making. In the Hypergenes project [1], we aimed to integrate and analyze heterogeneous hypertension data sets from over 30 historical cohorts spanning 15 years, with the goal of creating a new data model for representing interactions between environmental and clinical factors in hypertension. A successful outcome will lead to improved diagnostic accuracy, early detection, and personalized treatments. In the past, these types of research efforts involved the development of a new data sharing infrastructure, which is time consuming and costly, thus preventing large-scale data integration and analysis. Today, Semantic Web technologies, new tooling, and healthcare data standards enable this type of large-scale integration and analysis. Building on the World Wide Web's scalable, distributed architecture for sharing information efficiently between humans, the Semantic Web provides capabilities that enable semantically consistent information sharing between machines. The Semantic Web consists of a set of standards and technologies that include a simple data model (RDF), query language (SPARQL), schema language (RDFS) and ontology language (OWL). These technologies assist in data integration from heterogeneous data sets. Healthcare providers are moving towards health information standards for sharing subsets of patient records using specialty-developed Implementation Guides (IGs) built using
standards such as HL7 v3 Clinical Document Architecture (CDA) and aligned with the CEN EHR 13606 specification. Clinicians use the shared content to make more informed medical decisions for their patients and to better coordinate care when their patients get care from multiple sources. In this paper, we depict a methodology that aims to improve data integration, analysis and sharing between clinical information systems and researchers. Our approach brings together standard healthcare information models with semantic web technology in an effort to accommodate multiple user roles and leverage the strengths of different technologies to address specific aspects of the healthcare interoperability problem.
2. Background & Related Work Biomedical information repositories typically contain data related to a specific clinical domain with proprietary semantics [2]. These disparate data sources pose a challenge for data integration [3] that is paramount for improved patient-centric care [4], health data exchange, decision support [5], and semantic query and retrieval of aggregated data for analysis in context of clinical research. CDA is a health information standard that specifies terminology-encoded structure and semantics for clinical documents. CDA documents can be serialized to XML that conforms to a published W3C XML Schema. In most applications, the general CDA structure is constrained by a set of templates that are standardized and published in an implementation guide, such as the Continuity of Care Document (CCD). As in most CDA template specifications CCD IG is written in structured English expressions based on the XML schema element relationships. These conformance statements are usually implemented by Schematron rules to augment the CDA XML schema. Our work includes methods and open source software tools for representing CDA documents and template constraints using Unified Modeling Language (UML) and Object Constraint Language (OCL). The UML modeling language is dominant among IT domain users, whereas clinical domain experts often work with formal ontology definitions. Web Ontology Language (OWL) is a semantic markup language for publishing and sharing ontologies on the World Wide Web. It is endorsed by the World Wide Web Consortium (W3C). OWL is often used as the framework for converging distinctive terminologies into a single coherent ontology; many successful examples exist in clinical research and medical informatics domains [6,7]. For ontology mapping we followed W3C recommendations. There has been some prior work in both using OWL ontologies in conjunction with instance generation [8], and in using OWL to add semantic annotations to UML information models [9]. We extended these to support our multifaceted approach.
3. Methods
Our solution (Figure 1) starts with a clinical domain researcher (upper left) creating an ontological representation of the information elements of interest needed for a particular study, known as a cohort ontology. Ontologies from all data sources are mapped to a common core ontology. Based on past experience, the clinical domain expert is less interested in a comprehensive data representation than in certain data elements, in their proper context, that are required for further analysis. A leading design
principle of our methodology is to have the clinical domain expert work with an “intuitive” ontology-based method to represent the metadata needed for harmonization, while the healthcare IT domain expert uses modeling languages and semantic web technologies to create, constrain and transform representations of the standard format. The point of collaboration is focused at mapping core ontology to data representation creating a warehouse that is standard, interoperable and allows for semantic query and retrieval of data in research oriented scenarios.
Figure 1. Data Integration Methodology Overview.
The healthcare IT expert (lower left), familiar with data representation methods and standards, is primarily responsible for creating healthcare interoperability models. These models will be used to derive the common format that is collected and subsequently analyzed. We use models based on international standards for healthcare semantics and interoperability that can be serialized to XML. These, along with a set of constraints, serve to unify data into a semantically unambiguous format that makes operations on the data straightforward from a technological standpoint. Integration of data from dissimilar data sources including harmonization, data extraction, validation and normalization is a complex task due to ambiguous metadata, differences in units of measurement, classifications, diversity of protocols, etc.; the process is described at length in previous works [8,10]. Thus in the clinical research scenario we will assume the clinical data provider (upper right) supplies RDF that conforms to the cohort ontology, then by using the cohort to core mapping, the data graph is converted to conform to the core ontology. Mappings between the ontological representation and semantic data representation enable generation of RDF instances that conform to healthcare interoperability models which in turn are fed to an instance generation engine in order to produce standard XML instances. In the health-oriented scenarios standardized data is received either via IHE XDS source or directly inserted using a simple adapter into the XML database. The CDA instance received conforms to the template model; using the UML to OWL model transformation we convert it to an RDF instance that conforms to the OWL template model. Clinical data consumers may then access data via interoperability profiles, e.g. IHE XDS/QED, query XML database directly using XQuery, or query data semantics using SPARQL. The CDA UML model was created as an implementation model that is primarily based on two artifacts: (1) the CDA Refined Message Information Model from HL7 and (2) the CDA XML Schema. This implementation model was developed to support
the existing code generation and serialization mechanisms present in the Eclipse Modeling Framework (EMF). The model was imported into EMF and ultimately transformed into a set of Java classes as part of the Model-Driven Health Tools (MDHT) [11] project in Open Health Tools (OHT). The Java classes, in conjunction with a set of additional utility classes, make up the base runtime API that can be used to produce, consume, and validate instances of CDA. The template model is a domain-specific model that constrains the CDA model. Classes in a template model extend those in the CDA model. Constraints are modeled using directed associations, property redefinitions, and OCL expressions. The CDA Profile for UML is used to capture additional metadata needed during model transformation and at runtime. Once the template model has been created, it is transformed into an implementation model, which leads to the generation of a domain-specific API for constructing and validating instances. All directed associations, property redefinitions and metadata specified in the template model are converted to OCL expressions in the implementation model. Leveraging technology from the Semantic Web enables the transformation of the data models to an OWL representation. Many of the constraints can be modeled using OWL restrictions. For example, a fixed or default value in the template model is translated to an OWL value restriction, and a directed association is translated to an OWL cardinality restriction. Some constraints, specified in general OCL expressions, are not readily converted into OWL restrictions. Part of our ongoing research is to determine the best mechanism to represent these types of constraints. Possibilities include a semantic rule language such as the Semantic Web Rule Language (SWRL) or Jena Rules. Connecting the core ontology created by the clinical domain expert and the template model, a product of the healthcare IT expert, is a crucial step that requires their collaboration. Core ontology variables and their possible parameterizations are mapped via equivalent class and equivalent property relationships to the template model ontology using the Jena API, following the W3C recommendations for OWL mapping.
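As an illustration of the two transformation aspects mentioned above — expressing a template constraint as an OWL restriction and linking the core ontology to the template-model ontology via equivalence axioms — the following Jena-based sketch shows one possible shape of the code. It is a minimal example, not the Hypergenes tooling; all namespaces, class and property names are invented placeholders.

```java
import com.hp.hpl.jena.ontology.ObjectProperty;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.ontology.Restriction;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class MappingSketch {
    public static void main(String[] args) {
        String tpl = "http://example.org/eh-cda-template#"; // placeholder namespace
        String core = "http://example.org/core-ontology#";  // placeholder namespace

        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

        // A directed association in the template model becomes an OWL
        // cardinality restriction on the corresponding property.
        OntClass observation = m.createClass(tpl + "BloodPressureObservation");
        ObjectProperty hasSystolic = m.createObjectProperty(tpl + "hasSystolicValue");
        Restriction exactlyOne = m.createCardinalityRestriction(null, hasSystolic, 1);
        observation.addSuperClass(exactlyOne);

        // Core-ontology variables are mapped to template-model classes and
        // properties via owl:equivalentClass / owl:equivalentProperty axioms.
        OntClass coreSystolic = m.createClass(core + "SystolicBloodPressure");
        coreSystolic.addEquivalentClass(observation);
        ObjectProperty coreValue = m.createObjectProperty(core + "hasMeasuredValue");
        coreValue.addEquivalentProperty(hasSystolic);

        m.write(System.out, "RDF/XML-ABBREV");
    }
}
```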
4. Results & Discussion The Hypergenes project, a Seventh Framework Program (FP7) European Commission funded project exploring the EH disease model, provided us with an opportunity to apply our approach to widely varying environmental and clinical datasets from over thirty historical cohorts. The data included historical clinical data spanning over 15 years and environmental measures based on questionnaires for a total of 8,000 subjects divided into a discovery phase (4000) and a validation phase (4000). The first phase of the project was aimed at defining the corresponding terminology so that all the cohorts’ variables could be mapped to a uniform terminology. Domain experts from the data sources helped define a core hypertension ontology. We then mapped each cohort metadata to this uniform core ontology. The next step involved capturing data semantics using the core ontology. To this end we created the CDA based Essential Hypertension template model (EH-CDA). The quantity, diversity and complexity of data in Hypergenes forced a situation where there was a need to create a large number of templates. This made the modeling process time consuming, challenging, and error prone. Thus we used an automated approach to generate the UML template model from a prototypical XML instance [12]. An OWL representation of the EH-CDA model was then generated from the UML representation. Each template in the model, represented as a UML class specializing the base CDA model,
was converted to an OWL class with a subClassOf relationship to the corresponding class in the CDA ontology. The mapping from the path of the UML class in the template model to the generated OWL class was captured and used by the instance generation engine to produce standard XML instances from RDF triples that conform to the model ontology. We developed several mechanisms for accessing the data. Three are described in the methods section: a SPARQL endpoint for semantic querying, direct access to the database via XQuery, and data access via the standard IHE XDS and QED profiles. However, the partners in charge of analysis in the Hypergenes consortium use tools that rely on a relational schema. For this purpose we built an RDF-to-relational module. To accomplish this we built an RDFS that represents the relational schema requested by the analytics partner, and wrote an automatic process that creates the relational schema and populates it. For the population we rely on the SPARQL endpoint: we run a query and insert the results into the corresponding tables of the relational schema.
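A minimal sketch of this population step is shown below, assuming a SPARQL endpoint URL, a query, and a target table that are all invented placeholders (the actual Hypergenes schema and queries are not given in the paper); it queries the endpoint with Jena ARQ and writes each row into the relational schema via JDBC.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;

public class RdfToRelational {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and query; the selected variables must match the
        // columns of the table requested by the analytics partner.
        String endpoint = "http://example.org/hypergenes/sparql";
        String query =
            "PREFIX core: <http://example.org/core-ontology#> " +
            "SELECT ?subjectId ?systolic WHERE { " +
            "  ?s core:hasSubjectId ?subjectId ; core:hasSystolicValue ?systolic }";

        Connection db = DriverManager.getConnection(
            "jdbc:postgresql://localhost/analysis", "user", "password");
        PreparedStatement insert = db.prepareStatement(
            "INSERT INTO systolic_measurements (subject_id, systolic) VALUES (?, ?)");

        QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                insert.setString(1, row.getLiteral("subjectId").getString());
                insert.setDouble(2, row.getLiteral("systolic").getDouble());
                insert.executeUpdate(); // one relational row per SPARQL result
            }
        } finally {
            qe.close();
            insert.close();
            db.close();
        }
    }
}
```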
5. Conclusion
Increasing requirements to implement IG-based data exchanges have highlighted the need for expert-tailored tooling, established shared core ontologies, mapping processes, and validation technologies. We describe a methodology that improves data integration, analysis, and sharing between clinical information systems and researchers. We incorporated domain-specific, user-intuitive tools throughout the transformation path and applied the approach in the Hypergenes EU project. We believe that semantic data instance generation based on standard information models and terminologies serves as a common language that can improve patient care and clinical research outcomes.
References
[1] EC FP7 Hypergenes, http://www.hypergenes.eu/
[2] Stroetmann, V. et al. Semantic Interoperability for Better Health and Safer Healthcare. SemanticHEALTH Project Report. http://ec.europa.eu/information_society/ehealth
[3] Heiler S. 1995. Semantic interoperability. ACM Computing Surveys 27(2), 271-273.
[4] Gold J. D., Ball M. J. 2007. The Health Record Banking imperative. IBM Systems Journal 46(1).
[5] Bock B.J. et al. 2003. The Data Warehouse as a Foundation for Population-Based Reference Intervals. American Journal of Clinical Pathology 120, 662-670.
[6] Schultz S., Boeker M., Stenzhorn H. 2008. How Granularity Issues Concern Biomedical Ontology Integration. MIE, 863.
[7] Golbreich C., Zhang S., Bodenreider O. 2006. The foundational model of anatomy in OWL: Experience and perspectives. Web Semantics: Science, Services and Agents on the World Wide Web 4(3), 181-195.
[8] Farkash A. et al. 2006. Biomedical data integration - capturing similarities while preserving disparities. In Conf Proc IEEE Eng Med Biol Soc. 2006 1, 4654-4657.
[9] Carlson, D. 2006. Semantic Models for XML Schema with UML Tooling. In Proceedings of SWESE 2006.
[10] Carlson, D. et al. A Model-Driven Approach for Biomedical Data Integration. In Proceedings of MEDINFO 2010.
[11] Model-Driven Health Tools (MDHT), http://mdht.projects.openhealthtools.org
[12] Farkash, A. 2010. Facilitating the creation of semantic health information models from XML contents. In Proceedings of CSHALS 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-734
ACGT: Advancing Clinico-genomic trials on cancer – Four years of experience
Luis MARTIN a,1, Alberto ANGUITA a, Norbert GRAF b, Manolis TSIKNAKIS c, Mathias BROCHHAUSEN d, Stefan RÜPING e, Anca BUCUR f, Stelios SFAKIANAKIS g, Thierry SENGSTAG h, Francesca BUFFA i and Holger STENZHORN b
a Biomedical Informatics Group, Universidad Politécnica de Madrid, Spain
b Department of Paediatric Oncology and Haematology, Saarland University Hospital, Germany
c Biomedical Informatics Laboratory, FORTH, Greece
d IFOMIS, Saarland University, Germany
e Fraunhofer IAIS, Germany
f Philips Research Europe, The Netherlands
g Institute of Computer Science, FORTH, Greece
h RIKEN Yokohama Institute, Japan
i The Weatherall Institute of Molecular Medicine, University of Oxford, UK
Abstract. The challenges regarding seamless integration of distributed, heterogeneous and multilevel data arising in the context of contemporary, postgenomic clinical trials cannot be effectively addressed with current methodologies. An urgent need exists to access data in a uniform manner, to share information among different clinical and research centers, and to store data in secure repositories assuring the privacy of patients. Advancing Clinico-Genomic Trials (ACGT) was a European Commission funded Integrated Project that aimed at providing tools and methods to enhance the efficiency of clinical trials in the -omics era. The project, now completed after four years of work, involved the development of a set of methodological approaches as well as tools and services, and their testing in the context of real-world clinico-genomic scenarios. This paper describes the main experiences using the ACGT platform and its tools within one such scenario and highlights the very promising results obtained. Keywords. Clinical trials, semantic mediation, ontologies, knowledge discovery in databases, workflows
1. Introduction
Advances in research methodologies and technology during the last decade have resulted in a rapid increase of information about cancer in general. Still, the heterogeneity of infrastructures and data within clinical and research institutions has limited the ability to extract useful knowledge and to apply it to treatment regimens. Current postgenomic clinical trials often rely on ad-hoc built information systems for handling the
Luis Martín: PhD Student, Group of Biomedical Informatics, Universidad Politécnica de Madrid, Campus de Montegancedo s/n, 28660 Boadilla del Monte, Spain; E-mail: [email protected].
generated data, each based on their own formats and standards. Therefore solutions have to be devised and provided that allow sharing of information gathered in one trial with another, or incorporating external data from disparate sources during the trial if this is required. In addition, guaranteeing the privacy of collected patient data is always an inherently difficult issue. All these tasks further require that some level of syntactic and semantic homogeneity is established for data. The vision of the Advancing Clinico-Genomic Trials on Cancer (ACGT) project (www.eu-acgt.org) was to tackle the above issues by developing a semantically rich grid infrastructure platform in support of multicentric, postgenomic clinical trials, thus enabling discoveries in the laboratory to be quickly transferred to the clinical management and treatment of patients [1].
2. The ACGT Platform
In order to be able to deal with the complexities of research and management of cancer, it was obvious that a highly elaborate, yet easy to use, technical infrastructure had to be developed. Features such as intuitive access for end-users, coherent content organization and consistency with the way the different user groups carry out their daily work were mandatory. A thorough design and development process has led to the construction of a powerful and versatile ontology-driven grid infrastructure named the ACGT Platform (available from http://purl.org/acgt/portal) (Figure 1). This platform comprises a set of tools and services that cover the requirements described above.
Figure 1. The ACGT Platform. On top, the web interface provides access to the underlying tools (KDD tools, workflow editor). These tools access a set of heterogeneous databases offered by the data access layer. The data of these sources is properly anonymized. The trial builder allows running new clinical trials in the platform.
One main focus while designing and developing the ACGT Platform was to ensure data privacy. Data handled in clinical trials are sensitive, and the different legislative bodies therefore impose very strict regulations on this aspect. To achieve this objective, all patients' sensitive data are initially pseudonymized with dedicated tools, ensuring that no patient will be identifiable through the data exposed in the platform. Strong security features such as credential-based data access were added to all platform tools and services, thus achieving security in the context of large, distributed data processing.
2.1. The ACGT Master Ontology on Cancer
The ACGT Master Ontology on Cancer (ACGT MO) was developed with the goal of creating a consistent semantic framework to comprehensively describe the domain of post-genomic clinical trials on cancer. This framework is the basis of the semantic interoperability for connecting the different services and data sources in the ACGT Platform. It is written in OWL-DL and contains more than 1600 classes and around 200 properties. The state-of-the-art design principles of the OBO Foundry (http://www.obofoundry.org/) were fundamental in the ontology development. In particular, well-established ontologies covering parts of the domain were reused in whole or in part, such as the Foundational Model of Anatomy [2], the OBO Relation Ontology [3] and the Basic Formal Ontology [4]. Other relevant ontologies and terminologies were not directly included, since they do not meet the expected quality criteria, but they were still used as knowledge sources [5].
2.2. Clinical Trial Designer - ObTiMA
ObTiMA is an ontology-based system for creating and conducting clinical trials [6]. It includes a graphical Trial Builder that aids the trial chairman in the design of the Case Report Forms (CRFs) to be used to document each treatment step [7]. The interface allows defining CRF content and layout to capture all relevant patient data during a trial. The resulting descriptions are based on ACGT MO concepts for each CRF item, along with metadata, like data type and measurement unit, to set up the trial database. The second major functionality is the patient data management system. It is automatically set up based on the items defined in the design phase and guides the user through the treatment of the individual patients according to the defined treatment plans. The MO aids in providing the necessary semantic interoperability so that these data are accessible from other components of the ACGT Platform.
2.3. Data Access Layer
An important challenge in current post-genomic biomedical research is to efficiently manage and retrieve data from heterogeneous sources. In order to provide seamless data access, syntactic and semantic integration needs to take place. The Data Access Layer, comprising the Database Wrappers (DWs) and the Semantic Mediator (SM) [8], offers this functionality within the ACGT Platform. The DWs deal with the syntactic heterogeneities, offering a uniform interface to the data resources. This includes uniformity of transport protocol, message syntax, data format (RDF), and query language (SPARQL). The SM tackles semantic heterogeneities, i.e. it offers a common data model for accessing the data resources exposed by the DWs. The ACGT MO was adopted as the model exposed to clients of the Data Access Layer. Incoming queries in terms of the MO are translated by the SM and redirected to the DWs, with the results being integrated and presented to the client as a single result set.
2.4. KDD Tools
The ACGT Platform comprises a series of knowledge discovery tools for analyzing and extracting useful information from data collected in a clinical trial. With an abundance of such tools available freely, BioMoby [9] and R/Bioconductor [10] being prominent
examples, the focus was not set on the development of new tools but rather on seamlessly integrating those existing toolkits in a uniform fashion. The R language was adopted as the prime tool for carrying out statistical analysis of the data. The GridR tool [11] allows the seamless execution of R jobs in parallel to facilitate the efficient development, execution, and re-use of analytical solutions, without the analyst needing knowledge of the underlying architecture.
2.5. The ACGT Workflow Environment
To assist bioinformaticians in creating their complex scientific workflows, a Workflow Editor and Enactment Environment, called WEEE [12], was implemented and made accessible through the ACGT Portal, thus allowing users to combine different web services into complex workflows. An intuitive user interface permits searching registered services—e.g. GridR scripts—and retrieving data through the Data Access Layer. These elements can then be combined and orchestrated to produce workflows that can subsequently be stored in a user's specific area and later retrieved and edited. Workflows are executed on a remote machine or in clusters in the Grid, so there is no burden imposed on the user's local machine. The publication and sharing of workflows is also supported, so that the user community can exchange information, benefitting from each other's research. WEEE is based on the BPEL workflow standard [13] and supports the BPEL representation of complex bioinformatics workflows.
3. Evaluation: the MCMP Scenario
Validation of the ACGT platform was performed in the context of clinically oriented data analysis scenarios. One such scenario was the MCMP (Multi Center Multi Platform) scenario, with the goal of validating the utility of the platform as an information system to exploit data in the context of clinical trials. The setup consisted of a set of biopsies collected by two institutions using the microarray platforms Affymetrix and Illumina. The related clinical data were stored in a corresponding clinical trial database. All private patient data were anonymized prior to their inclusion in the ACGT environment. The process began by associating database concepts with concepts from the ACGT MO, i.e. appropriate semantic mappings were set up. This allowed retrieving integrated information from the data sources in a homogeneous manner. After that, we constructed and executed the bioinformatics workflow. This workflow, which implemented a methodology linking microarrays and classical clinical data for biomarker discovery, illustrated the capacity of the ACGT platform to repeat complex analyses on an evolving population of patients. This included data retrieval and integration, normalization, analysis and results presentation.
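To illustrate the kind of homogeneous retrieval described above: once the semantic mappings are in place, a client phrases a single SPARQL query in terms of the ACGT MO and submits it to the Semantic Mediator, which rewrites it for the individual Database Wrappers. The sketch below is purely illustrative; the endpoint URL, class and property names are invented placeholders, not actual ACGT MO identifiers.

```java
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;

public class MediatorQuery {
    public static void main(String[] args) {
        // One query in ACGT MO terms retrieves clinical and microarray-derived
        // values together, regardless of which wrapped source holds them.
        String query =
            "PREFIX mo: <http://example.org/acgt-mo#> " +
            "SELECT ?patient ?tumourStage ?geneExpression WHERE { " +
            "  ?patient mo:hasTumourStage ?tumourStage ; " +
            "           mo:hasGeneExpressionValue ?geneExpression }";

        QueryExecution qe = QueryExecutionFactory.sparqlService(
            "http://example.org/acgt/semantic-mediator/sparql", query);
        try {
            ResultSet results = qe.execSelect();
            ResultSetFormatter.out(System.out, results); // print the merged result set
        } finally {
            qe.close();
        }
    }
}
```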
4. Conclusions and Future Work
When launched back in 2006, the ACGT project aimed at providing clinical researchers with an infrastructure to support the requirements of modern clinical trials. From data collection and integration to workflow design and result analysis, initial studies in the
project detected some major points of interest for the area. There were specific needs to address in order to relieve end-users of the most resource-consuming tasks in their daily work. The combination of a thorough analysis of scenarios, research on previously proposed solutions and extensive tool and service development led, after four years of work, to the completion of the ACGT Platform. Intensive testing within real-world scenarios provided highly promising results. The ontology-driven data integration approach, combined with a focus on user-friendliness, proved to be a key factor in the successful deployment of the infrastructure. Future research will focus on facilitating the integration of external services and their utilization in clinical trial environments. Exploitation, maintenance and sustainability of the infrastructure are the current focal areas in the context of follow-up research and development projects. Acknowledgements. This research has been supported by the European Commission funded projects ACGT (IST-2005-026996), p-medicine (FP7-ICT-2009-270089) and INTEGRATE (FP7-ICT-2009-270253).
References [1] Tsiknakis M, Brochhausen M, Nabrzyski J, Pucacki J, Sfakianakis SG, Potamias G, et al. A Semantic Grid Infrastructure Enabling Integrated Access and Analysis of Multilevel Biomedical Data in Support of Postgenomic Clinical Trials on Cancer. IEEE transactions on information technology in biomedicine: a publication of the IEEE Engineering in Medicine and Biology Society. 2008 Mar;12(2):205-217. [2] Rosse C, Mejino JL. A reference ontology for biomedical informatics: the Foundational Model of Anatomy. Journal of biomedical informatics. 2003 Dec;36(6):478-500. [3] Smith B, Ceusters W, Klagges B, Köhler J, Kumar A, Lomax J, et al. Relations in biomedical ontologies. Genome Biology. 2005;6(5):R46+. [4] Smith B, Brochhausen M. Putting biomedical ontologies to work. Methods of information in medicine. 2010 Mar;49(2):135-140. [5] Brochhausen M, Spear AD, Cocos C, Weiler G, Martín L, Anguita A, et al. The ACGT Master Ontology and its applications - Towards an ontology-driven cancer research and management system. Journal of biomedical informatics. 2011 Feb;44(1):8-25. [6] Weiler G, Brochhausen M, Graf N, Schera F, Hoppe A, Kiefer S. Ontology based data management systems for post-genomic clinical trials within a European Grid Infrastructure for Cancer Research. Conference proceedings : Annual International Conference of the IEEE Engineering in Medicine and Biology Society IEEE Eng. in Medicine and Biology Society Conference. 2007;2007:6435-6438. [7] Stenzhorn H, Weiler G, Brochhausen M, Schera F, Kritsotakis V, Tsiknakis M, Kiefer S, Graf N. The ObTiMA System – Ontology-based Managing of Clinical Trials. Stud Health Technol Inform. 2010;160(Pt 2):1090-4. [8] Martín L, Anguita A, de la Calle G, García-Remesal M, Crespo J, Tsiknakis M, Maojo V. Semantic data integration in the European ACGT project. AMIA Annu Symp Proc. 2007 Oct 11:1042. [9] Wilkinson MD, Links M. BioMOBY: an open source biological web services proposal. Briefings in bioinformatics. 2002 Dec;3(4):331-341. [10] Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome biology. 2004;5(10):R80. [11] Wegener D, Sengstag T, Sfakianakis S, Rueping S, Assi A. GridR: An R-based tool for scientific data analysis in grid environments. Future Generation Computer Systems. 2009 Apr;25(4):481-488. [12] Sfakianakis S, Koumakis L, Zacharioudakis G, Tsiknakis M. Web-based Authoring and Secure Enactment of Bioinformatics Workflows. 4th International Workshop on Workflow Management. 2009 May;2009:88-95. [13] Web Service Business Process Execution Language Version 2.0 Specification, OASIS Standard; cited: 29 April 2011. Available from: http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-739
Architectural Approach for Providing Relations in Biomedical Terminologies and Ontologies
Mathias BROCHHAUSEN a,b,1, Bernd BLOBEL c
a IFOMIS, Universität des Saarlandes, Saarbrücken, Germany
b Department of Philosophy, University at Buffalo, Buffalo, USA
c eHealth Competence Center, University Hospital Regensburg, Regensburg, Germany
Abstract. The representation of multiple relations is one of the main criteria for ontologies. In formalizing both ontologies and terminologies in biomedicine, relations are used to code axioms for the classes of the ontology. However, a huge number of the relations represented in medical ontologies and terminologies are derived from language, and formal definitions are omitted. We present a strategy based on an architectural approach to facilitate the formal analysis of relations for use in ontology systems in biomedicine and in general. Keywords. Biomedical Ontologies, Medical Terminologies, System Theory, Biomedical Relations, Architecture
1. Introduction
Relations are central features in ontologies and in all terminologies that aim to provide more than just a single hierarchy or a completely flat representation of reality. In the past, the representation of relations has often been language-based or induced from observed instance-instance relations. In order to fully understand the representation of relations in ontologies, it is important to be aware of the difference between instances (particulars) and types. Basically, there are three kinds of relations: instance-instance relations, type-type relations and instance-type relations. [1] is a key contribution to unifying the use of relations in the entire biomedical domain, defining type-type relations based on undefined empirical relations between instances. Even though this was a first step towards a more controlled usage of relations in the biomedical arena, the methodology should be improved to capture more formal differences between the relations. The formal definitions provided by the Open Biological and Biomedical Ontologies (OBO) Relation Ontology use expressions that go beyond binary relations and thus cannot be coded in the Web Ontology Language (OWL), in which relations are exclusively binary. In this paper we do not aim at ontologies re-using the OBO Relation Ontology; rather, we discuss issues in widely used medical terminologies that arise from a lack of theoretically well-founded strategies regarding the representation of granularity, thereby focusing on
Corresponding author. Dr. Mathias Brochhausen, IFOMIS, Saarland University, P.O. Box 15 11 50, 66041 Saarbrücken, Germany; Phone: +49 681 30264770; Email: [email protected]
the architectural aspect of any concrete or abstract system and its representation [5]. Over the last decade a number of problems have been detected and discussed in SNOMED CT, the National Cancer Institute Thesaurus (NCIT) and other terminologies and ontologies in the eHealth arena [2, 3]. We hold that the problems arise from a deficiency of the approaches taken by these controlled vocabularies and ontologies: a lack of well-founded strategies to connect different levels of granularity and medical disciplines. We suggest using the systems approach in order to accomplish a complete analysis of the spheres involved [4]. The Generic Component Model (GCM) provides an architecture framework for the complete process of representation and systematization of a given domain, including the domain's decomposition/composition [5]. We will use the GCM to clarify the relations between the entities in the domain and their informational representation.
2. Materials and Methods
2.1. Trans-Granular Relations in Biomedical Ontologies
The problem is that in the terminologies mentioned above "concepts" are related to each other based on observations from medical practice, more or less regardless of their position within a system and its subsystems. Sometimes non-matching entities are represented as being linked by non-matching relations, as in this example from the NCI Thesaurus:
(1) (Acinic Cell Breast Carcinoma) Disease_May_Have_Finding (Pain) [6].
The NCIT does not give a textual definition of "Acinic Cell Breast Carcinoma", but it seems obvious that this term ought to refer to the physical neoplasm that is part of the patient's body. "Pain" is defined by the NCIT as: "The sensation of discomfort, distress, or agony, resulting from the stimulation of specialized nerve endings" [6]. Thus, pain is the sensation that is experienced by the organism and is not the same as the stimuli of nerve endings. Correctly, pain is a process or a state of the entire organism. Notably, a physical entity such as a neoplasm is a disease according to the NCIT. However, we stress that a more powerful and coherent interpretation of disease is given by Scheuermann et al. [7]. According to this paper, a disease is a disposition to undergo pathological processes. The first problem is the idea of linking an organismal structure to a phenomenon of the organism as a whole and to its finding within a diagnostic process. This is done using a relation that needs a disease as its domain. In order to give a formally more adequate representation, we may view the organism, its parts and the processes that take place within or adjacent to the organism as a system and its subsystems. We propose that different levels of granularity (e.g. molecular level, cell level, tissue level, organ level, organismal level) should be viewed as subsystems of one big system. Thus, we are able to distinguish relations within one subsystem from relations that bridge between different subsystems.
2.2. The GCM
The GCM is an architecture framework that enables the representation of any real or virtual system, including both the system architecture from its business perspective and the system's development process for the ICT solution supporting or enabling that
business. The approach allows for modelling systems by reducing their complexity and separating the phases of their design, specification, implementation and deployment by representing and interrelating different views, namely the Enterprise View, Information View, Computational View and Engineering View (Fig. 1) [5]. For our purpose we can focus on:
• Enterprise View - captures the real-world business process, in our case all relevant biological processes (physiological and pathological), biomedical processes and medical processes.
• Information View - captures the informational expression of the Enterprise View, in our case the representation in a terminology or in an ontology.
Figure 1: The General Component Model (GCM)
3. Results
Our aim is to give a system-theoretically consistent, architecture-centric reformulation of (1). Our starting point is that the different levels of granularity, which we have to take into account regarding biological structures and processes, can be viewed as a sequence of interrelated systems and sub-systems. Figure 2 illustrates the fact that within an organism organismal components are system components; however, we can view each of these components as systems themselves. Thus the system of interest can be defined at different levels of granularity, from the body through organ, tissue, and cell down to the level of the molecular structure of cells, always considering the system and the different granularity levels of its subsystems. Within each level of granularity we have a multitude of relations between the components of the system; for instance, within an organism we have relations between its organismal components (Fig. 2). Our basic approach is to keep two types of relation distinct: (A) relations at the same granularity level and (B) relations between different granularity levels. We expect that the number of Type B relations can be restricted to quite some extent due to the limited scope of selected interesting processes. We have to reformulate (1) as follows:
(2) (Acinic Cell Breast Carcinoma) causes (Stimulated Nocireceptors)
(3) (Stimulated Nocireceptors) lead_to (Pain Perception in Organism 1)
(4) (Pain Perception in Organism 1) is_reported_by (Organism 2)
Figure 2: Systems and components with the two different types of relations
For our approach, it is important to note that "lead_to" is quite different from "causes", as "lead_to" relates a system component to its system, thus bridging two levels of granularity. From a formal point of view this is important since, besides the linguistic differentiation between the two verbs used to name the relations, it provides us with a formal criterion for distinguishing the two relations. Note that one could well use "lead_to" in both cases. However, the aim of providing an ontological representation is to provide language-neutral and machine-understandable semantic criteria. We hold that a systemic analysis of a given domain and its relations yields criteria that can be formalized to differentiate relations that are scarcely differentiated by natural language use, even in a more technical or scientific setting like medicine. However, it is important to note that from the GCM perspective there is something even more important happening. The physiological processes of the patient are all captured in the Enterprise View, whereas the abduction from information about the physical state of the patient to information about the symptoms that we can expect is part of the Information View. So, within the GCM framework, the following reformulation of (1) applies:
(2') (Enterprise View) Patient has_Part Acinic Cell Breast Carcinoma
(3') (Information View) Report of Acinic Cell Breast Carcinoma is_positively_correlated with Report of Pain
(4') (Enterprise View) Expectation: Patient experiences Pain
Notably, (1) is an example of a sentence expressing a probabilistic assertion rather than an assertion about a type being true for all individuals of that type. Rector [8] points out that, for formal reasons, knowledge of this type cannot be represented in ontologies. These kinds of assertions need to be part of "background knowledge resources". Based on this distinction, Schulz et al. put forward an argument that knowledge representation is not a task of formal ontologies [9]. We hold that this is an overstatement. Yet, we agree that ontologies cannot be the only resource in a knowledge management system, since an ontology represents only knowledge about entire types and their properties at any time. We have demonstrated that the GCM helps to raise awareness of the different resources within a knowledge management system.
4. Conclusions and Discussion
From the above we learned two things: a system-theoretic, architecture-centric analysis of relations in reality offers interesting opportunities to find formalizable differences between relations, and it will help to fix the semantics of relations for machine-machine communication. The problem with representing relations in applied ontology is that the
entities represented in most clinical terminologies and ontologies are located in the mesocosm [10]. This fact puts developers in danger of mixing different types of granularities as they appear in the transition from the microcosm (molecular processes) to medical reality (therapeutic processes). Exact rules for relating the entities in the domain representation are missing, but using the GCM and its system-theoretical background can be a first step towards systematizing the representation of relations. The GCM will help represent knowledge management systems as a whole, including the different components, e.g. ontology and background knowledge resources, and the different operations carried out by them. We started our analysis of relations by pointing out the difference between instance-instance relations and type-type relations. The system-theoretical approach raises the question of how we plan to maintain this distinction in a framework where we view entities as systems in order to better grasp the properties they bear for ontological representation. In viewing real-world phenomena from the system-theoretical, architecture-centric perspective we can do both: view cells in general or view one particular cell. The latter will not lead to any kind of ontological knowledge regarding cells in general, but it can nevertheless be important in a medical knowledge management system (e.g. an HIS). Only the analysis of cells in general can provide us with the properties that need to be represented in an ontology. We would like to add that the newest version of the Web Ontology Language, OWL 2, provides the opportunity to create property chains [11]. This methodology would support the definition of (1) by supplying the chain (2) - (4) as its definition. Nevertheless, in order to distinguish the relations used in the definitory chain, the system-theoretical criteria described above ought to be used.
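A sketch of how such a property chain could be asserted follows; it builds the owl:propertyChainAxiom triples with plain Jena calls. The namespace and relation URIs are invented placeholders (not NCIT identifiers), and the example is only one possible encoding of the definitory chain discussed above.

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.RDFList;
import com.hp.hpl.jena.rdf.model.RDFNode;

public class PropertyChainSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/relations#"; // placeholder namespace
        String owlNs = "http://www.w3.org/2002/07/owl#";

        Model m = ModelFactory.createDefaultModel();
        Property mayHaveFinding = m.createProperty(ns, "disease_may_have_finding");
        Property causes = m.createProperty(ns, "causes");        // same granularity level
        Property leadTo = m.createProperty(ns, "lead_to");       // bridges granularity levels
        Property isReportedBy = m.createProperty(ns, "is_reported_by");

        // owl:propertyChainAxiom: the chain causes o lead_to o is_reported_by
        // is declared as a sub-property chain of disease_may_have_finding,
        // i.e. it supplies the definitory chain (2)-(4) for relation (1).
        Property chainAxiom = m.createProperty(owlNs, "propertyChainAxiom");
        RDFList chain = m.createList(new RDFNode[] { causes, leadTo, isReportedBy });
        m.add(mayHaveFinding, chainAxiom, chain);

        m.write(System.out, "TURTLE");
    }
}
```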
References
[1] Smith B, Ceusters W, Klagges B, Köhler J, Kumar A, et al. Relations in Biomedical Ontologies, Genome Biology, 6, 5 (2005), R46. PMC1175958
[2] Bodenreider O, Smith B, Kumar A, Burgun A. Investigating subsumption in SNOMED CT: An Exploration Into Large Description Logic-Based Biomedical Terminologies, Artif Intell Med 39/3 (2007), 183-95.
[3] Ceusters W, Smith B, Goldberg L. A Terminological and Ontological Analysis of the NCI Thesaurus, Methods Inf Med 44 (2005), 498-507.
[4] Lopez DM, Blobel B. A Development Framework for Semantically Interoperable Health Information Systems, Int J Med Inf 78, 2 (2009), 83-103.
[5] Blobel B. Architectural Approach to eHealth for Enabling Paradigm Changes in Health, Methods Inf Med 49/2 (2010), 123-34.
[6] http://ncit.nci.nih.gov. Last accessed April 28, 2011
[7] Scheuermann RH, Ceusters W, Smith B. Toward an Ontological Treatment of Disease and Diagnosis, Proceedings of the 2009 AMIA Summit on Translational Bioinformatics, 2009, 116-120.
[8] Rector A. Barriers, Approaches and Research Priorities for Integrating Biomedical Ontologies, 2008. Available from: www.semantichealth.org/DELIVERABLES/SemanticHEALTH_D6_1.pdf. Last accessed: April 28 2011.
[9] Schulz S, Stenzhorn H, Boeker M, Smith B. Strengths and limitations of formal ontologies in the biomedical domain, Elect. J. Commun. Inf. Innov. Health. Rio de Janeiro, v.3, n.1, 31-45, Mar., 2009.
[10] Smith B. Ontologie des Mesokosmos: Soziale Objekte und Umwelten, Zeitschrift für philosophische Forschung 52 (1998), 521-40.
[11] Golbreich C, Wallace EK. OWL 2 Web Ontology Language New Features and Rationale. Oct 27 2009. http://www.w3.org/TR/owl2-new-features. Last accessed April 28, 2011
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-744
Integration of Classifications and Terminologies in Metadata Registries Based on ISO/IEC 11179
Sylvie M.N. NGOUONGO a,1, Jürgen STAUSBERG a
a IBE, Medical Faculty, Ludwig-Maximilians-University of Munich, Germany
Abstract. Empirical medical research needs services for the maintenance of item collections. We evaluated the appropriateness of ISO/IEC 11179 “Information technology - Metadata Registries (MDR)” part 3 “Registry Metamodel and basic attributes” for a national MDR. In particular, possibilities of including classifications and terminologies (summarized as vocabularies) using the metamodel of this standard were assessed. The hierarchical structure of classifications and terminologies could be mapped easily to ISO’s metamodel. The Classification Markup Language (ClaML) is attractive as interface standard for the import of classifications into the MDR. The correct linkage between data elements and vocabularies remained unclear however. An extension of the ISO 11179 metamodel might be necessary to satisfy the predefined needs of a national MDR. Keywords. Vocabularies, Metadata Registry, ISO/IEC 11179, ClaML.
1. Introduction and Background
Empirical medical research is based on the collection of observations typically stored in database management systems. Maintenance of items, item definitions, value lists, and plausibility checks is a time-consuming task that is part of developing item collections, defining data exchange protocols, monitoring data quality, and finding models for statistical data analysis [1]. Support in this maintenance task could reduce the workload in several ways [2, 3]:
• Definition of item collections could be simplified by a structured template appropriate for empirical medical research.
• Item definitions can be reused, both definitions from former projects and standardized item collections offered by third parties.
• A review of item collections improves quality through harmonization and standardization.
• Controlled vocabularies can be integrated and used as value lists.
Consequently, the implementation of services for the maintenance of item collections was defined as a high-priority issue, especially in support of clinical trials [4]. In a broader view, item collections are denoted as metadata, i.e. data about the recorded observations in empirical research. Services offering support in the definition of item collections are then named metadata repositories or metadata registries.
1 Corresponding author: Sylvie Ngouongo, Ludwig-Maximilians-Universität München, Marchioninistraße 15, D-81377 München, Germany; E-mail: [email protected].
For Germany, a project funded by the Federal Ministry of Education and Research was launched to set up a national metadata repository [5]. A cornerstone in the preparation of that project was the identification of ISO/IEC 11179 Information technology - Metadata Registries (MDR) [6] as the basis for the project's information model. Its application in projects such as the Cancer Data Standards Repository [7], our own first experiences [5], and the intensive efforts towards a revision of version 2 raised concerns about the applicability of ISO/IEC 11179 in its current state. Focusing on health services research, we evaluated the capability of ISO/IEC 11179 to represent and store well-established classifications and terminologies (summarized as vocabularies), which are frequently used as items or value lists in empirical research. Furthermore, we developed a concept to import classifications into the MDR.
2. Material and Methods
For the evaluation of ISO's capability to represent vocabularies, we combined a mapping of the vocabularies' structure to the elements of the ISO 11179 metamodel with an import of the vocabularies themselves into a prototypical implementation of an MDR. ClaML is an XML notation that offers a structure for the exchange of hierarchical healthcare classification systems [8]. We assumed that if ISO 11179 V3 is able to cover ClaML, each classification represented in ClaML could be represented in ISO 11179 V3 as well. Additionally, we analyzed five different vocabularies:
• The International Statistical Classification of Diseases and Related Health Problems 10th Revision German Modification (ICD-10-GM) and the German procedure classification (OPS) are required by law for coding in Germany.
• The TNM classification of malignant tumors (local Tumor, regional lymph Nodes, Metastases) is a well-established standard in oncology [9]. Whereas ICD-10-GM and OPS are mono-hierarchical classifications with a ClaML-like structure, the TNM classification provides three axes for post-coordination.
• The Medical Dictionary for Regulatory Activities (MedDRA) is the international medical terminology used to classify adverse events associated with the use of biopharmaceuticals and other medical products.
• The Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT) is a multilingual terminology, essential for electronic health records.
2.1. ISO 11179
Work on ISO/IEC 11179 is still ongoing. We decided to evaluate Committee Draft ISO/IEC CD2 11179-3 (ISO 11179 V3), especially the Data Description package and the Concepts package. In short, the Data Description package differentiates a concept layer from a representation layer (cf. Figure 1). A concept layer might be established by the identification of men and women as two kinds of persons distinguished by the karyotype. The data_element_concept gender is established with the object_class person and the characteristic sex. The value_domain gender covers the permissible_values "male" and "female". Finally, a data_element is defined by the combination of the data_element_concept gender with the value_domain <male|female>. All elements of the metamodel can be classifiable_items; some might be concepts, e.g. conceptual_domain and data_element_concept. Classifiable_items can be related to concepts in a classification's context. A concept_system consists of concepts that
could be related via a link that has specific roles for each end (cf. Figure 2). ISO 11179 V3 does not distinguish between the representation of classifications and that of terminologies.
Figure 1. Simplified Data Description metamodel according to ISO 11179 V3. Examples in grey.
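The gender example above can be rendered as a small illustrative sketch; the class names below mirror the metamodel terms informally and are our own simplification, not the normative ISO/IEC 11179 model or the project's implementation.

```python
# Simplified, illustrative rendering of the ISO 11179 V3 Data Description example
# (object_class person, characteristic sex, data_element_concept gender,
# value_domain with permissible values, data_element). Not the actual MDR schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ObjectClass:
    name: str                      # e.g. "person"


@dataclass
class Characteristic:
    name: str                      # e.g. "sex"


@dataclass
class DataElementConcept:          # concept layer
    name: str
    object_class: ObjectClass
    characteristic: Characteristic


@dataclass
class ValueDomain:                 # representation layer
    name: str
    permissible_values: List[str] = field(default_factory=list)


@dataclass
class DataElement:                 # combination of concept and representation
    concept: DataElementConcept
    value_domain: ValueDomain


gender_concept = DataElementConcept(
    name="gender",
    object_class=ObjectClass("person"),
    characteristic=Characteristic("sex"),
)
gender_domain = ValueDomain(name="gender", permissible_values=["male", "female"])
gender_element = DataElement(concept=gender_concept, value_domain=gender_domain)

print(gender_element)
```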
2.2. Mapping
Part of the evaluation was a reconstruction of ISO's metamodel with the terms and structures of the vocabularies and the terms and structures of ClaML. In congruence with ISO 11179 V3, we represented the reconstructed models with class diagrams of the Unified Modeling Language 2.0. Elements of ISO 11179 V3 are in italics. We did not, however, intend to realize a mapping between the vocabularies themselves.
2.3. Import
To validate our mapping of the vocabularies' terms and structures into the ISO metamodel, we considered two options for import into the MDR: native import from other databases and data structures, and a ClaML import interface. The ClaML import interface was implemented with the Extensible Stylesheet Language Transformation (XSLT).
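As a rough sketch of how such a chained XSLT import can be wired together (the file names and the lxml-based driver below are illustrative assumptions, not the project's actual scripts):

```python
# Illustrative sketch of applying several XSLT scripts in succession to a ClaML file
# using lxml. The stylesheet and input file names are hypothetical; this is not the
# project's actual import implementation.
from lxml import etree


def run_xslt_chain(source_path, stylesheet_paths):
    """Apply each stylesheet in order, feeding the output of one step into the next."""
    doc = etree.parse(source_path)
    for path in stylesheet_paths:
        transform = etree.XSLT(etree.parse(path))
        doc = transform(doc)
    return doc


if __name__ == "__main__":
    result = run_xslt_chain(
        "icd10gm.claml.xml",                                # hypothetical ClaML export
        ["claml_to_concepts.xsl", "concepts_to_mdr.xsl"],   # hypothetical stylesheets
    )
    print(etree.tostring(result, pretty_print=True, encoding="unicode"))
```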
Figure 2. Concepts metamodel region from ISO 11179 V3.
3. Results
3.1. Mapping
ClaML supports a strictly hierarchical structure of vocabularies with the elements class, superclass, and subclass. ClaML, as the root element for the definition of the classification as a whole, becomes a concept_system, whereas its structural elements are stored as concepts. The structure of the hierarchy among the concepts is built up by means of the class link. A link relates at least two concepts via an association class link_end, which also assigns a relation_role (e.g. generalization and specialization) to each end. The ICD-10-GM, OPS, TNM, MedDRA and SNOMED-CT structures could easily be mapped to the Concepts metamodel. They are supported as concept_systems with formal semantics through the use of assertions. Their nodes at any level each become a concept, whose relationships to other nodes are described using link, link_end, relation and relation_role. Because ICD-10-GM and OPS extend a simple hierarchy, there are several exceptions and rules. Some of them are covered by ClaML (e.g. dagger/asterisk coding, inclusions) and have been mapped directly through the use of assertion, just like the logical expressions of MedDRA within Standardized MedDRA Queries. However, their automatic use is not guaranteed. Concerning the multi-axial structure of the TNM system, MedDRA and SNOMED-CT, relation_role was likewise used to represent each granularity. For example, a relation_role aggregation can be used to set up a complete classification based on the axes T, N and M. We have not yet implemented a function to relate pre-coordinated and post-coordinated SNOMED-CT terms in the MDR.
3.2. Import
We encountered some issues when importing classifications directly from their native format into the MDR, owing to differences between the two structures. We needed to manually reorganize the classifications' data in their native structure to fit the MDR schema. In order to transform classifications represented in ClaML into an MDR-conformant structure, several XSLT scripts were chained and executed successively. As a result we obtained an MDR-valid XML schema that could be imported directly into a relational database management system with XML support. We could thus successfully handle ICD-10-GM and OPS. We checked our results for formal correctness and consistency with the metadata provided by the responsible publishers of the vocabularies. So far, all entries in the MDR matched these references. Problems arose with different interpretations and extensions of the ClaML standard. For example, the ICD-10 sources of the WHO and of DIMDI for Germany implement different representations for shortlists.
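The mapping described in 3.1 can be illustrated with a deliberately simplified sketch; the ClaML fragment follows the general Class/SubClass/Rubric pattern of the standard but is invented for illustration, and the Python code is not the XSLT-based import interface itself.

```python
# Minimal illustration of mapping a (much simplified) ClaML fragment to the
# ISO 11179 V3 Concepts metamodel: the root becomes a concept_system, each Class a
# concept, and each SubClass reference a link with generalization/specialization roles.
# This is a sketch, not the project's XSLT-based import interface.
import xml.etree.ElementTree as ET

CLAML_FRAGMENT = """
<ClaML version="2.0.0">
  <Class code="A87" kind="category">
    <Rubric kind="preferred"><Label>Viral meningitis</Label></Rubric>
    <SubClass code="A87.0"/>
  </Class>
  <Class code="A87.0" kind="category">
    <Rubric kind="preferred"><Label>Enteroviral meningitis</Label></Rubric>
    <SuperClass code="A87"/>
  </Class>
</ClaML>
"""

def claml_to_mdr(xml_text):
    root = ET.fromstring(xml_text)
    concept_system = {"designation": "ICD-10 sample", "concepts": [], "links": []}
    for cls in root.findall("Class"):
        label = cls.findtext("Rubric/Label", default="")
        concept_system["concepts"].append({"code": cls.get("code"), "label": label})
        for sub in cls.findall("SubClass"):
            # one link with two link_ends carrying the relation_roles
            concept_system["links"].append({
                "link_end_1": {"concept": cls.get("code"), "relation_role": "generalization"},
                "link_end_2": {"concept": sub.get("code"), "relation_role": "specialization"},
            })
    return concept_system

print(claml_to_mdr(CLAML_FRAGMENT))
```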
4. Discussion
The representation of classifications and terminologies as sources for value lists of items and as sources for elements of the concept layer is quite simple in ISO 11179 V3. However, full coverage of the semantics is not reached and must be implemented on the controller level. Some limitations of ISO 11179 V3, which demand an extension for use in a metadata repository with a broad area of application, become visible. The appropriate linkage between elements of the Data Description metamodel and elements of classifications remains open. Data_elements or value_domains can be
related to a classification's element, represented with the concept class, via the classifier association. But this requires a doubling of each entry if there is a 1:1 relationship, e.g. a data_element with the value "Viral Meningitis" is classified_as the concept "Viral Meningitis" (ICD-10 A87.-). Such redundancy is detrimental to good maintenance of an MDR. With "RelationshipType", SNOMED-CT provides the linkage of relationships (links), but ISO 11179 V3 offers no mechanism for that. As an extension of ISO 11179 V3, links could be conceived as concepts. This would provide a solution to the second issue. SNOMED makes statements about links via its field "Refinability"; defining links as concepts could allow the use of assertions for this purpose. Moreover, standard formats for representing terminologies such as SNOMED-CT, analogous to ClaML for classifications, are urgently needed, since the data explosion makes their maintenance enormously difficult.
5. Conclusion and Work for the Future
ISO 11179 V3 contributes many useful ideas for the definition of a national MDR. Nevertheless, the predefined use cases of our project might not be fully compatible with its motivation. Instead of diverging from the standard, we plan to contribute to the future direction of ISO 11179 through the national standardization organization in Germany. Work on the ClaML import interface is still ongoing. It is intended to be used as a plug-in for the maintenance of classifications in the MDR in the near future.
Acknowledgements. The presented work is part of the project MDR - Metadata Repository funded by the German Federal Ministry of Education and Research (BMBF).
References
[1] Weinstein JN, Deyo RA. Clinical Research. Issues in Data Collection, SPINE 25/24 (2000), 3104-3109.
[2] Mücke R. Trial Item Manager: Towards an Ontology based Specification of Items for Clinical Trials, Technology and Health Care 15/5 (2007), 295-374.
[3] Merzweiler A, Weber R, Garde S, Haux R, Knaup-Gregori P. TERMTrial - terminology-based documentation systems for cooperative clinical trials, Computer Methods and Programs in Biomedicine 78 (2005), 11-24.
[4] Niland JC. Creating a metadata repository in support of clinical research, In Proceedings of Seoul 53rd Session 2001, International Statistical Institute, http://isi.cbs.nl/iamamember/CD2/pdf/1084.pdf.
[5] Stausberg J, Löbe M, Verplancke P, Drepper J, et al. Foundations of a metadata repository for databases of registers and trials, In: Adlassnig KP, Blobel B, Mantas J, Masic I, eds. Medical Informatics in a United and Healthy Europe, Proceedings of MIE 2009. Amsterdam: IOS, 2009: 409-413.
[6] Information technology - Metadata Registries (MDR) - Part 3: Registry Metamodel and basic attributes, 3rd Edition. Committee Draft ISO/IEC CD2 11179-3. Date: 2009-03-22. Available at http://www.metadata-standards.org/ [access 2011-01-10].
[7] Nadkarni PM, Brandt CA. The common data elements for cancer research: remarks on functions and structure, Methods Inf Med 45 (2006), 594-601.
[8] EN 14463:2007 Health informatics. A syntax to represent the content of medical classification systems. ClaML.
[9] Sobin L, Gospodarowicz M, Wittekind C, eds. TNM classification of malignant tumors, Chichester: Wiley-Blackwell, Seventh edition 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-749
Development of a New International Classification of Health Interventions Based on an Ontology Framework
Béatrice TROMBERT PAVIOT a,b,1, Richard MADDEN d, Lori MOSKAL e, Albrecht ZAISS f, Cédric BOUSQUET a,b, Anand KUMAR a, Pierre LEWALLE a, Jean Marie RODRIGUES a,b,c
a University of Saint Etienne, CHU, Department of Public Health and Medical Informatics, Saint Etienne, France
b INSERM UMR 872 Eq 20, Paris, France
c WHO Collaborating Center for International Classifications in French Language, Paris, France
d University of Sydney, WHO-FIC Network, Sydney, Australia
e Canadian Institute for Health Information, Toronto, Canada
f Universitätklinikum, Medizincontrolling, Freiburg, Germany
Abstract: The WHO International Classification of Diseases is used in many national applications to plan, manage and fund health care systems through case mix, and it allows international comparisons of the performance of these systems. There is no such measuring tool for health interventions or procedures. To fulfil this requirement, the WHO-FIC Network recommended in 2006 to develop an International Classification of Health Interventions (ICHI). This initiative aims to harmonise the existing national classifications and to provide a basic system for the countries which have not developed their own classification systems. It is based on the CEN/ISO ontology framework standard named Categorial Structure, defined from a non-formal, bottom-up ontology approach. The process of populating the framework is ongoing, starting from a common model structure encompassing the ICD-9-CM Volume 3 granularity.
Keywords: Classifications; Standard; Ontology; Intervention
1. Introduction
Since the beginning of medical informatics, clinical terminological systems, classifications and coding systems have been developed through independent, divergent and uncoordinated approaches, which have produced non-reusable systems covering overlapping fields for different needs. Most developed countries have kept on maintaining, updating and modifying their own coding systems for procedures, as well as national adaptations of ICD [1], in order to manage and to fund their health care delivery. The most significant efforts were made in Australia with ACHI (Australian Classification of Health Interventions) or ICD-10-AM [2], in Canada with the Canadian
1 Corresponding author: Jean Marie Rodrigues, CHU de St Etienne, SSPIM, Chemin de la Marandière, 42 270 Saint Priest en Jarez, France; E-mail: [email protected]
Classification of Health Interventions (CCI) [3], developed by the Canadian Institute for Health Information (CIHI), and in France with CCAM (Classification Commune des Actes Médicaux) [4]. For some decades several broad pre-coordinated or compositional systems have been proposed to users targeting different goals. The best known are the UMLS (Unified Medical Language System) [5], LOINC [6] for clinical laboratories, DICOM SDM [7] for imaging, SNOMED CT [8], and the Convergent Medical Terminology (CMT) [9]. Standardisation in health informatics started in the US with the HL7 user group. The European standards body CEN TC 251 WG2 (Comité Européen de Normalisation Technical Committee 251 Working Group 2) and later the International Standard Organisation ISO TC 215 WG3 elaborated and developed a standard approach for biomedical terminology named Categorial Structure, which is a bottom-up, non-formal ontology approach. We describe the application of this standard to the ICHI initiative and give the specifications of this classification system in Section 2 (Material and Method). In Section 3 (Results) we discuss the perspectives to further develop and accommodate existing classifications rather than creating new ones.
2. Material and Method
2.1. Overview
At the 2010 WHO-FIC meeting the following definition of health intervention was agreed [10]: an activity performed for, with or on behalf of a client(s) whose purpose is to improve individual or population health, to alter or diagnose the course of a health condition, or to improve functioning. This definition includes interventions that apply to more than one client or to a population group. As a consequence, the prospective international classification would include interventions across the whole health system. It would include interventions provided by all types of providers: doctors, dentists, nurses, allied and community health workers, traditional medicine providers and public health practitioners. The aims of this international classification are to:
• Describe and compare the provision and effectiveness of health interventions at the local, national or international level.
• Provide a classification of appropriate scope and detail to which countries may align their more finely grained national or specialty classifications.
• Ensure that a classification is available that can be used without adaptation in countries which do not wish to further refine the classification.
• Take into account that interventions include elements of 'western' and 'traditional' medicine.
2.2. Method
The development is built on an ontology framework standard method following the CEN TC 251/ISO TC 215 work named Categorial Structure, as several recent national classifications within Europe and Canada did. The CEN/ISO Categorial Structure is defined in the latest standards [11-12] as a minimal semantic structure describing the main properties of the different artefacts used as terminology (controlled vocabularies, nomenclatures, coding systems and classifications): a model of knowledge restricted to
1) a list of semantic categories; 2) the goal of the Categorial Structure; 3) the list of authorised semantic links between semantic categories, with their associated semantic categories; 4) the minimal constraints allowing the generation and validation of well-formed terminological phrases. Any biomedical artefact claiming conformance to the standard shall attach the Categorial Structure of the terminology used to the data sent. The Categorial Structure shall satisfy the 4 constraints but can add more constraints. The ICHI Categorial Structure is as follows:
2.2.1. List of Semantic Categories
• The Action semantic category is the set of deeds done by an actor. The top-level hierarchy value sets are: Investigation, Treating, Managing, Informing, Assisting, Preventing.
• The Target semantic categories, on which the action is carried out, are: Anatomy, Human function, Person/client, Group/population.
• The Means semantic categories, describing the processes and methods by which the action is carried out, are: Approach, Technique, Method, and Miscellaneous, such as devices.
2.2.2. Semantic Links
The first semantic link, named "hasFocus", connects the Action and the Target semantic categories. The second semantic link, called "hasMeans", connects the Action and the Means semantic categories.
2.2.3. Minimal Domain Constraints
It is necessary to have at least one deed value from the semantic category Action. It is necessary to have at least one semantic link "hasFocus" connecting one deed value to a value of the Target semantic categories. It is authorised to have several semantic links "hasFocus" for one deed value (e.g. Anatomy and Human function, Person/client and Group/population). The semantic link "hasMeans" is optional.
2.2.4. Development of a Coding Scheme
In line with the Categorial Structure, the coding scheme comprises a 7-character structure for the three axes: 3 letters for the Target, 2 letters for the Action, 2 letters for the Means, plus up to x digits. The current intention is that the granularity will be at least equivalent to the granularity of ICD-9-CM Volume 3.
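For illustration, the Categorial Structure and its minimal domain constraints could be represented roughly as follows; the value sets are taken from the text above, while the example entry and its 7-character code are hypothetical, not official ICHI codes.

```python
# Illustrative representation of an ICHI-style entry under the Categorial Structure:
# one or more Action values, one or more hasFocus links to Targets, optional hasMeans
# links to Means, and a 7-character code (3 letters Target, 2 letters Action, 2 letters
# Means). The example letter codes are hypothetical, not official ICHI codes.

TARGET_CATEGORIES = {"Anatomy", "Human function", "Person/client", "Group/population"}
MEANS_CATEGORIES = {"Approach", "Technique", "Method", "Miscellaneous"}
ACTIONS = {"Investigation", "Treating", "Managing", "Informing", "Assisting", "Preventing"}


def is_well_formed(entry):
    """Check the minimal domain constraints of the ICHI Categorial Structure."""
    actions_ok = len(entry.get("actions", [])) >= 1 and all(a in ACTIONS for a in entry["actions"])
    focus = entry.get("hasFocus", [])
    focus_ok = len(focus) >= 1 and all(t["category"] in TARGET_CATEGORIES for t in focus)
    means_ok = all(m["category"] in MEANS_CATEGORIES for m in entry.get("hasMeans", []))  # optional
    return actions_ok and focus_ok and means_ok


example = {
    "actions": ["Treating"],
    "hasFocus": [{"category": "Anatomy", "value": "Fallopian tube"}],
    "hasMeans": [{"category": "Approach", "value": "Open"}],
    # hypothetical 7-character code: 3 letters Target + 2 letters Action + 2 letters Means
    "code": "FAL" + "TR" + "OP",
}

print(is_well_formed(example), example["code"])   # -> True FALTROP
```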
3. Results
3.1. Validation
The semantic structure was validated first by a mapping exercise between existing classifications of health intervention from different languages [13] and different fields. The number and type of interventions from existing classification systems mapped are as follows (see Table 1):
Table 1. Mapping of the semantic structure towards existing classifications of health intervention from different languages and different fields.
Languages                      Number of Interventions in the Field
ACHI (Australia)               100 from Orthopaedics
CCI (Canada)                   100 from Random selection
CCHI (China)                   75 from Random selection
OPS (Germany)                  100 from Endovascular
NCSP (Nordic countries)        100 from Random selection
KTL (Germany)                  50 from Rehabilitation
WCPT (USA)                     257 from Physiotherapy
CCAM (France)                  100 from Cardiology
(CCI/CCAM) (Australia)         23 from Obstetrics
ICNP (USA and Korea)           278 from Nursing practice
More recently, the 5,338 procedure labels of ICD-9-CM Volume 3 have been mapped to this structure by a Korean team [14].
3.2. Discussion
First, the strategy of this ICHI initiative can be challenged. Why not take an internationally used coding system of health interventions such as the earlier ICPM (International Classification of Procedures in Medicine) [15] or the procedure part of SNOMED CT? In fact, there is no international terminology artefact covering the wide field of health interventions needed for the WHO-FIC network activities, for instance traditional medicine, public health or nursing. Nevertheless, the ICHI system is based on the same semantic model as the existing systems; it will be quickly available at the coarseness of ICD-9-CM Volume 3 and can further be populated with the value sets of different national or international systems of health interventions. Among the different standardization strategies for biomedical terminologies, it was considered not possible to agree on a reference clinical terminology or to standardize a detailed, language-independent biomedical ontology based on a formal upper-level ontology as recommended by the OBO Foundry [16]. On the other hand, while feasibility was good for diagnostic, medical and surgical interventions, more work is needed to complete the semantic categories for interventions on functioning, public health and traditional medicine.
4. Conclusion
This international classification, which has not yet been included in the formal program of WHO for financial reasons, is not proposed to be used all around the world as ICD is for diagnoses. It is rather considered as an incentive to harmonisation. Countries having developed their own classifications of health interventions and interested in the comparability of data, including case mix systems, across countries should partially modify their existing systems to be compliant with the ontology framework but are not
mandated to change the full terminology they use. For countries without an interventions classification, namely developing countries, it can be used directly, starting from the level of granularity of ICD-9-CM Volume 3, or as a framework to develop national applications.
Acknowledgments. We wish to thank the members of the Family Development Committee of WHO-FIC, namely Megan Cumerlato, Huib ten Napel, Susanne Hanser, Amy Coenen, Tae Youn Kim and Jiang Quin.
References
[1] International Statistical Classification of Diseases and Related Health Problems, 2nd edition, World Health Organisation, Geneva, 2004.
[2] National Centre for Classification in Health. See http://www3.fhs.usyd.edu.au
[3] Canadian Classification of Health Interventions. http://secure.cihi.ca/cihiweb/dispPage.jsp
[4] Agence Technique de l'Information Hospitalière. See http://www.sante.atih.gov.fr
[5] McCray AT, Nelson SJ. The representation of meaning in the UMLS. Methods Inf Med 1995;34(1-2):193-201.
[6] Logical Observation Identifiers Names and Codes (LOINC). See http://www.loinc.org/
[7] DICOM. See http://www.xray.hmc.psu.edu/dicom/dicom_home.html
[8] SNOMED Clinical Terms. College of American Pathologists. See http://www.snomed.org/
[9] Dolin RH. Kaiser Permanente's Convergent Medical Terminology. Testimony to the National Committee on Vital and Health Statistics, Subcommittee on Standards and Security. May 22, 2003. http://ncvhs.hhs.gov/030522sstr.htm
[10] Madden R. World Health Organization Family of International Classifications: ICHI project plan. WHO 2010.
[11] Rodrigues J-M, Kumar A, Bousquet C, Trombert B. Standards and Biomedical Terminologies: The CEN TC 251 and ISO TC 215 Categorial Structures. A Step towards increased interoperability. In: SK Andersen et al. (Eds.) MIE 2008 Proc. IOS Press, 2008; pp. 735-740.
[12] prEN ISO 1828:2010. Health informatics - Categorial Structure for classifications and coding systems of surgical procedures.
[13] Trombert Paviot B, Madden R, Zaiss A, Bousquet C, Kumar A, Rodrigues JM. Towards the International Classification of Health Interventions (ICHI). Step 2: Populating the ICHI content model with existing coding systems. In: Proceedings PCSI International, Munich 2010.
[14] Jung B, Jung C, Rodrigues JM, Bousquet C, Kumar A, Lewalle P, Trombert Paviot B, Yang H, Kim S. The revision of the Korean Classifications of Health Interventions based on the proposed ICHI semantic model and lessons learned. MIE 2011 proceedings.
[15] International Classification of Procedures in Medicine, World Health Organisation, Geneva, 1978.
[16] The Open Biological and Biomedical Ontologies. See http://www.obofoundry.org/
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-754
The Revision of the Korean Classifications of Health Interventions Based on the Proposed ICHI Semantic Model and Lessons Learned
Boyoung JUNG a, Chaeyoung JUNG b, Jean Marie RODRIGUES c,d,e,1, Cédric BOUSQUET c,d, Anand KUMAR c, Pierre LEWALLE c, Béatrice TROMBERT PAVIOT c,d, Hoonshik YANG f, Sukil KIM a
a The Catholic University of Korea, Seoul, Korea
b University of Utah, Salt Lake City, USA
c University of Saint Etienne, CHU, Department of Public Health and Medical Informatics, Saint Etienne, France
d INSERM UMR 872 Eq 20, Paris, France
e WHO Collaborating Center for International Classifications in French Language, Paris, France
f College of Medicine, Chung-Ang University, Seoul, Korea
Abstract. The Korean Medical Association and the Health Information Review Agency have decided to re-engineer the different Korean coding systems of health interventions based on an ontology framework proposed in 2010 for the prospective International Classification of Health Interventions (ICHI). The authors present the interim report of the project focused on this model: 5,338 procedures of the Korean version of ICD-9-CM, 5,150 procedures covered by Korean health insurance and 6,619 uncovered procedure labels were processed with the participation of 8 coders and 310 medical doctors. As of 28th January, 61.8% of the data was processed. The ontology framework model itself was not enough to represent all the labels when the preliminary data from obstetrics and gynecology were explored. However, when modified with 7 notations, it was possible to assign each label of ICD-9-CM Volume 3 and 30% to 57% of the specific Korean interventions to the semantic model.
Keywords. Health Interventions, Classifications, Ontology, Semantic Model.
1. Introduction
The WHO Network for the Family of International Classifications agreed in 2006 that a prospective structure for an International Classification of Health Interventions (ICHI) should be explored based on the CEN/ISO ontology framework named Categorial Structure [1]. A backbone was approved in 2008 as a support to harmonize the existing national classifications and to provide a basic system for the countries which have not
1 Corresponding Author: JM Rodrigues, CHU de St Etienne, SSPIM, Chemin de la Marandière, 42 270 Saint Priest en Jarez, France, [email protected]
developed their own classification systems of interventions [2] [3]. Since then, the work has continued towards implementation [4]. The semantic model has three semantic categories or axes: action, target and means. The action is the main axis, defining the key of the procedure. The target includes the anatomic structures, body parts, functions, or individuals to which the action is applied. The means refines the action by showing how the action is applied to the target. For each semantic category, preliminary value sets have been defined [4] following the mapping exercise between existing coding systems of health interventions and the semantic model [5]. Their definitions are presented in another conference full paper. The Korean health insurance is the national health insurance of the Republic of Korea, with a fee-for-service payment system. The interventions listed as covered procedures of health insurance are reimbursed to the payer. The other interventions, termed non-covered, are paid out of the patients' own pockets. The covered procedures and the non-covered procedures have different hierarchies and coding schemes. The legacy coding system for covered procedures is fairly satisfactory. There are, however, a few caveats. If a non-covered procedure becomes a covered procedure, it is deleted from the non-covered coding system and entered into a revised coding system of covered procedures. The Korean Classification of Procedures was first built in 1994 for health insurance claims. To overcome the caveats, the Korean Medical Association (KMA) and the Health Information Review Agency (HIRA) decided to revise it based on the proposed ICHI semantic model by the end of 2011.
2. Material
The data come from 3 sources.
1. 5,338 procedure labels of the Korean version of ICD-9-CM Volume 3 were used as the backbone of the classification.
2. The legacy classification of health interventions for health insurance in Korea has 5,150 procedure labels (covered procedures). It is the main target data to be included in the new classification.
3. For the non-covered procedures, 44 university hospitals were requested to submit data. 18 hospitals submitted data, and after inspection only 6,619 procedure labels were processed. The data from the other hospitals will be processed after they meet the inspection criteria.
The total number of labels is 17,649.
3. Method
The whole process is composed of data collection, data cleaning, assignment of procedures to the 3 semantic categories, validation, and hierarchical rearrangement according to the model. The current stage is the assignment of procedures to the 3 semantic categories. The validation and rearrangement will be done before the end of 2011. A bilingual web site (Figure 1) backed by Microsoft SQL Server has been built to process the data. Eight graduate students trained in medical informatics have assigned each procedure label to the 3 semantic categories. Five of them are nurses, 2 are health
information managers and one is a dental hygienist. Twenty-four academic societies were requested to validate the assignment to the semantic categories on the web. A total of 310 doctors registered to participate in the process.
Figure 1. Collaborative web tools for review
Figure 2. Hierarchical rearrangement of items according to the 3 semantic category axes
535 procedure labels, including 306 ICD-9-CM procedure labels related to obstetrics and gynecology, were preliminarily analyzed and went through the hierarchical rearrangement (Figure 2).
4. Results
On 28th Jan. 2011, 10,812 items out of the total of 17,649 had been assigned to the 3 semantic category axes (61.3%) (Table 1). The ICD-9-CM Volume 3 labels were finalized earlier than the others to see how usable they were.
Table 1. The progress in the assignment of 3 axes according to sources (as of 28th Jan).
Sources                     No. of Items with Assignment/No. of Items    Completion (%)
Korean ICD-9-CM             5,338/5,338                                   100.0
Covered (Legacy) items      1,722/5,692                                   30.3
Uncovered items*            3,752/6,619                                   57.0
Total                       10,812/17,649                                 61.3
* collected data only
Some limitations of the proposed ICHI semantic model were assessed during the work. Several notations were introduced to overcome them (Table 2).
Table 2. Examples showing notations that were used to modify the 3 axes semantic model.
Case 1 (73.6, Episiotomy): Target = Vulva: Perineum; Means = Open; Action = Incision
Case 2 (65.62, Other removal of remaining ovary and tube): Target = Fallopian tube & Ovary; Means = Open; Action = Removal
Case 3 (66.93, Implantation or replacement of prosthesis of fallopian tube): Target = Device: Prosthesis>Fallopian tube; Means = Open; Action = Implantation of device|Change
Case 4 (66.62, Salpingectomy with removal of tubal pregnancy): Target = Fallopian tube &+ Fetal or embryonic structure; Means = Open; Action = Excision &+ Removal
Case 5 (73.21, Internal and combined version without extraction): Target = Fetal or embryonic structure; Means = Per Orifice/Transorifice; Action = Reposition: Internal version and combined version &- Extraction: Delivery
Case 6 (75.92, Evacuation of other hematoma of vulva or vagina): Target = Vulva|Vagina] Hematoma; Means = Open; Action = Drainage: Evacuation
Case 7 (66.93, Implantation or replacement of prosthesis of fallopian tube): Target = Device: Prosthesis>Fallopian tube; Means = Open; Action = Implantation of device|Change
If the granularity of one semantic category axis is coarser than the granularity needed by the label, ":" is appended at the end of the semantic category value and the finer-grained value is registered after the symbol (target in case 1). When more than one target or action was needed, "&" was inserted, denoting and (target in case 2). "|" was used to denote or where more than one option was available in an axis (action in case 3, target in case 6 and action in case 7).
When one item was associated with another item, "&+" was used, meaning associated with (target and action in case 4). "&-" was used to show something excluded (action in case 5). Sometimes pathologic conditions were the actual targets of actions; however, they were not listed in the target axis of the content model. The target in the target axis was put on the left side of "]" to keep the original model, and the pathologic condition was put on the right side (target in case 6). Some actions required more than one target, as in a sentence structure with indirect and direct objects. The target in the role of direct object was located on the left side of ">", and the other one, in the role of indirect object, was located on the right side (target in case 7). The model was successfully applied to the rest of the data. When it was applied to the obstetrics and gynecology data, it could gather the procedure labels with similar properties into a group and make rearrangement of the hierarchy easier, as shown in Figure 2. We are currently waiting for the validation by the doctors, which will be presented during the conference.
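As an illustration of how these notations might be processed mechanically, the following simplified sketch parses a single axis value; it is our own illustration, not the project's tooling, and it ignores nesting and ordering subtleties.

```python
# Illustrative parser for the axis-value notations of Table 2:
#   ":"  finer granularity,  "&" and,  "|" or,  "&+" associated with,
#   "&-" excluded,  "]" pathologic condition as target,  ">" indirect object.
# A simplification for a single axis value; not the project's actual tooling.
import re


def parse_axis_value(raw):
    parsed = {"raw": raw}
    if "]" in raw:                                   # target ] pathologic condition
        target, condition = raw.split("]", 1)
        parsed["pathologic_condition"] = condition.strip()
        raw = target
    if ">" in raw:                                   # direct object > indirect object
        direct, indirect = raw.split(">", 1)
        parsed["indirect_object"] = indirect.strip()
        raw = direct
    if "&-" in raw:                                  # excluded part
        raw, excluded = raw.split("&-", 1)
        parsed["excluded"] = excluded.strip()
    # split the remaining expression on "&+" (associated with), "&" (and) and "|" (or)
    parts = [p.strip() for p in re.split(r"&\+|&|\|", raw) if p.strip()]
    values = []
    for part in parts:
        if ":" in part:                              # coarse value : finer-grained value
            coarse, fine = part.split(":", 1)
            values.append({"value": coarse.strip(), "refinement": fine.strip()})
        else:
            values.append({"value": part})
    parsed["values"] = values
    return parsed


print(parse_axis_value("Vulva|Vagina] Hematoma"))
print(parse_axis_value("Fallopian tube&+Fetal or embryonic structure"))
```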
5. Conclusion
On the whole, the ICHI semantic model was able to represent most of the ICD-9-CM Volume 3 labels and the specific Korean coding system labels. Some difficulties still need to be overcome: finding the Action value, extensions to the number of accepted Targets and Actions, and Pathology as a Target. This work is a case study showing how the ICHI international initiative can support the harmonization of national health intervention coding systems, starting from the unofficial standard of ICD-9-CM Volume 3.
References
[1] Rodrigues J-M, Kumar A, Bousquet C, Trombert B. Standards and Biomedical Terminologies: The CEN TC 251 and ISO TC 215 Categorial Structures. A Step towards increased interoperability. In: Andersen SK, et al. (Eds.) MIE 2008 Proc. IOS Press, 2008; pp. 735-740.
[2] Madden R, Zaiss A, Thorsen G, Lewalle P, Rodrigues J-M, Weber S, Ustun B. World Health Organization Family of International Classifications: Developing the International Classification of Health Interventions: Background, Need and Structure. WHO 2008.
[3] Weber S, Rodrigues J-M, Madden R, Pickett D, Zaiss A, ten Napel H, Moskal L, Bartz C, Virtanen M. The ICHI content model. WHO 2009.
[4] Madden R. World Health Organization Family of International Classifications: ICHI project plan. WHO 2010.
[5] Trombert Paviot B, Madden R, Zaiss A, Bousquet C, Kumar A, Rodrigues JM. Towards the International Classification of Health Interventions (ICHI). Step 2: Populating the ICHI content model with existing coding systems. In: Proceedings of PCSI International, Munich 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-759
Web-Based Collaboration for Terminology Application: ICNP C-Space
Claudia C. BARTZ a,1, Derek HOY b
a International Council of Nurses, Geneva, Switzerland
b SnowCloud, United Kingdom
Abstract. The purpose of this paper is to describe the ongoing evolution of a nursing terminology that involves users in all aspects of the terminology lifecycle. A terminology will not succeed until and unless it benefits users and contributes to improved client outcomes at the point of care. Since the release of ICNP® Version 1 in 2005, users have been necessary partners in research and development, dissemination and education, and, to some extent, in terminology maintenance and operations. ICNP C-Space was launched in 2008 as a platform for collaboration among users and the ICNP team. C-Space applications include, but are not limited to, the ICNP browser, a multi-lingual browser, catalogue development pages, and group discussion pages. Future uses may include work related to ICN research and networks. C-Space adds value to ICNP, ICN, and nursing worldwide by ensuring that terminology users can contribute their expertise to finding workable solutions and developing important products related to ICNP. Keywords. Healthcare terminology, Terminology life cycle model, ICNP C-Space, Terminology user
1. Introduction As a healthcare terminology matures, there comes a point when developers have to rely on users for continued improvement of the terminology and evaluation at the point of use. The goal of standardized documentation in interoperable health information systems, resulting in automatically collected reusable data, is gaining advocates worldwide. Reusable data means that data are entered only once, preferably electronically, and then are available for multiple purposes [1], such as management decision-making, patient outcomes research and healthcare policy development. To ensure continued development, maintenance, and application of a healthcare terminology, it is essential to have full and productive engagement between terminology developers and users, including clinicians, vendors, informatics professionals, and terminologists. The International Council of Nurses (ICN) approved development of the International Classification for Nursing Practice (ICNP®) in 1989 and the alpha and beta versions culminated in the release of ICNP Version 1 in 2005. Prior to 2005, development of the terminology was based on the work of nurse experts who gathered and organized concepts representing the nursing domain in a multi-axial terminology 1
Corresponding Author: Claudia C. Bartz. ICNP & ICN Telenursing Network Coordinator, International Council of Nurses, 3 place Jean-Marteau, Geneva, Switzerland, CH-1201; E-mail: [email protected].
that required combinatorial processes to structure primitive concepts into nursing diagnoses, outcomes, and interventions. From 2005 forward, ICNP development used a formalized language methodology to represent concepts and relationships within the nursing domain. Formal definitions for ICNP are represented in web ontology language (OWL). Versions 1.1 and 2 were released in 2007 and 2009, respectively. ICNP is a compositional terminology that represents the nursing domain of healthcare. ICNP Release 2011 includes 3281 concepts, 669 pre-coordinated diagnosis and outcome statements, and 484 pre-coordinated intervention statements.
2. Purpose
As ICNP gained stability in development and maintenance processes, the terminology then needed the creativity and expertise of clinicians and researchers who would implement ICNP in care delivery settings, evaluating its usability and the stored, reusable nursing documentation data. As biennial releases of ICNP continue, its value will only be ensured with productive interaction between the ICNP team and users. The purpose of this paper is to describe a process for the involvement of users in all aspects of the terminology life cycle. Specific objectives of the paper are to (1) describe the evolution of a global nursing terminology from 'paper and pencil' structuring of relevant concepts; (2) describe how goals and methods for involving nurses worldwide in the application, evaluation, and quality improvement of ICNP have been implemented and evaluated; and (3) propose future directions for continued collaboration between users and developers.
2.1. Evolving Development Methods
The alpha, beta 1, and beta 2 versions of ICNP used 16 axes to organize concepts of the nursing domain. To ensure consistency of use, rules for forming nursing diagnoses and outcomes, and nursing interventions (actions), complied with ISO Health Informatics 18104:2003 [2]. With ICNP Version 1 [3], concepts were coded with a unique, randomly assigned 8-digit identifier, consistent with ISO Health Informatics 17117:2007 [4]. However, users voiced their comfort with the former codes (eg, 1; 1.1; 1.1.1) because they were able to add local concepts to the terminology in places that seemed logical [5]. Thus the reaction of users to the ICNP terminology concepts being modeled in web ontology language and given unique codes clearly showed the need for continuous user-developer consultation, collaboration and education. With the release of ICNP Version 1, the increasing number of concepts and the unique codes made it difficult for nurses to use ICNP efficiently and effectively in care delivery settings. The solution for this difficulty was to create subsets of the terminology, or catalogues, with pre-coordinated nursing diagnoses and outcomes, and pre-coordinated interventions. Catalogues would be clinically relevant; applicable to individuals, groups, or communities; and focused on health conditions (eg, diabetes), client phenomena sensitive to nursing interventions (eg, adherence to treatment), specialties (eg, maternal health), or settings (eg, disasters). ICN published guidance for catalogue development [6] and two catalogues [7, 8] with the intent of encouraging users to develop catalogues in collaboration with ICN. While pre-coordinated statements were intended to simplify users' application of the terminology, users had learned to compose nursing diagnosis, outcome and
intervention statements for the electronic record using combinations of 8-digit codes. Now these multi-coded statements were being superseded by single 8-digit codes for the pre-coordinated statements. This change also requires continued discussions between users and ICN.
2.2. Life Cycle Model
As the ICNP terminology continued to increase in the number of primitive and pre-coordinated concepts, all modeled within the OWL development environment, and as the additional requirements of the programme increased in scope and complexity, a model was developed to organize all the aspects of terminology development. A model was seen as a way to guide internal operations, aid in setting priorities for the work of the programme, structure quality improvement processes, and inform users about how they can contribute to the development and application of ICNP. The model has three main constructs: research and development, maintenance and operations, and dissemination and education [9]. In addition to catalogue development, users conduct research projects in their work settings (eg, academic, clinical). Translations are an important aspect of ICNP development. Dissemination and education involves professional presentations, publications, and academic and clinical applications related to users' work with ICNP. The model was validated as fit for purpose as overlays of catalogue development (Figure 1) and quality improvement processes were both found satisfactory [10, 11].
Figure 1. Validating Lifecycle Model with Catalogue Development Process.
3. ICNP C-Space
An important goal for continuing ICNP terminology development was to establish some means by which users and the ICNP team could more interactively continue ICNP development and application. A web-based platform was devised and tested for feasibility. Since its inception in 2008, the capabilities of C-Space have continued to advance in support of the terminology. An ICNP browser was one of the first features of C-Space. Users can download ICNP files from C-Space, using the site as a centralized portal for distribution. The ability of users to access the online browser moved the terminology forward as users asked for various ways of representing the terminology so that it would be as useful,
accessible, and as comprehensible as possible. With each biennial release of ICNP, ICN aims to provide the ICNP representations that users need for clinical applications and continued research. When ICNP is downloaded, users sign agreements that allow ICN to track research, development and translation projects from inception to completion. In 2011, the browser was made multi-lingual, showing and encouraging worldwide involvement with ICNP. The multi-lingual browser also supports continued translation of the terminology as biennial releases include progressive improvements, mostly in the numbers of pre-coordinated statements for use in the standardized documentation of nursing care delivery. A catalogue development project on C-Space, a collaboration between community nurses in Scotland and the ICNP team, resulted in an additional catalogue for users worldwide [12]. The collaboration also tested processes for communication, interaction, content development, and screen designs. This multi-year work resulted in many lessons learned, including confirmation of the belief that nurses use language in many different ways to mean many different things. Variation of words and meanings is a challenge for the ICNP terminology as it aims to represent the nursing domain worldwide. More catalogue development projects are currently under way on C-Space. Communication groups are in early stages of development on C-Space. One group has been formed to discuss implementation of ICNP. Members use the asynchronous discussion format to describe their work locally and collaborate internationally to advance ICNP use in care settings. Another group consists of the Directors of the ICN-Accredited Research and Development Centres, who are preparing for the biennial consortium meeting in 2011. Directors are encouraged to collaborate in ICNP development, eg, one Centre's focus on the phenomenon of family care could inform another Centre's focus on disaster nursing. C-Space usage is described in Table 1.
Table 1. Usage Statistics April 2010 to March 2011.
Unique Visitors: 4,791
Visits from 129 Countries: 8,860
Pageviews: 111,069
Registered Users (03/2011): 1,520
User Groups (03/2011): 6
Downloads (03/2011): 822
4. Future Directions ICN recognizes the expanding impact of eHealth and the great potential that the use of information and communication technology can have with healthcare assessment, management, documentation, and reporting nationally and internationally. Data about nurses and nursing are rare to non-existent in international reports of healthcare resources and outcomes. ICN further recognizes the potential for nursing communication and documentation that ICNP, as a standardized terminology for representing the work of nursing, can support and propagate, whether the application is used with complex health information systems or mobile technology, such as mobile phones. C-Space can continue to expand its capabilities to include research using core data sets. ICN core data sets are seen as the research tools for electronic data collection and
analysis in response to focused research questions from any of ICN's programme areas, eg, regulation, socio-economic welfare, and professional practice [13]. Another potential use for C-Space groups would be to support the ICN Telenursing Network as it seeks to interface with nurses and other professionals worldwide. Collaboration between informatics nurses and telehealth nurses could substantially benefit health technology development, application and evaluation, and support standardized documentation of nurse-sensitive client outcomes that would increase nursing knowledge and improve care delivery.
5. Summary ICNP is increasing in scope of coverage of the nursing domain. Nurses in more regions and countries are implementing clinical applications of ICNP. Among the many challenges for nurses are translation and meeting the technical requirements for clinical applications. C-Space supports a strong network of committed nurses and others who continue to collaborate with ICN to ensure that nurses are able to document their work using ICNP, in a consistent and accurate way to result in reusable data. Then nurses worldwide will be able to describe what nurses do, and what differences nurses make in healthcare outcomes for individuals, families and communities.
References
[1] Hammond WE, Bailey C, Boucher P, Spohr M, Whitaker P. Connecting information to improve health. Health Affairs 29(2), 2010, 284-288.
[2] International Standards Organization. International Standard 18104:2003 Health Informatics - Integration of a Reference Terminology Model for Nursing. Geneva, Switzerland: International Standards Organization, pp. 3-6.
[3] International Council of Nurses. ICNP Version 1.0. Geneva, Switzerland: International Council of Nurses, 2005.
[4] International Standards Organization. Technical Standard 17117:2007 Health Informatics - Controlled Health Terminology - Structure and High-level Indicators, p. 8.
[5] Bartz CC. (personal communication, 16 June 2005)
[6] International Council of Nurses. Guidelines for ICNP Catalogue Development. Geneva, Switzerland: International Council of Nurses, 2008.
[7] International Council of Nurses. Partnering with Patients and Families to Promote Adherence to Treatment. Geneva, Switzerland: International Council of Nurses, 2008.
[8] International Council of Nurses. Palliative Care for Dignified Dying. Geneva, Switzerland: International Council of Nurses, 2009.
[9] International Council of Nurses. ICNP Version 2. Geneva, Switzerland: International Council of Nurses, 2009.
[10] Coenen A, Kim TY. Development of terminology subsets using ICNP. International Journal of Medical Informatics 79, 2010, 530-538.
[11] Kim TY, Coenen A, Hardiker N. A quality improvement model for healthcare terminologies. Journal of Biomedical Informatics 43, 2010, 1036-1043.
[12] International Council of Nurses, Scottish National Health Service. Scottish Community Nursing Dataset. Geneva, Switzerland: International Council of Nurses, in press.
[13] Coenen A, Bartz C. ICNP: Nursing Terminology to Improve Health Care Worldwide. In: Nursing and Informatics for the 21st Century: An International Look at Practice, Trends and the Future, 2nd Ed, 2010, pp. 207-216.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-764
Mapping Medical Records of Gastrectomy Patients to SNOMED CT
Eun-Young SO a, Hyeoun-Ae PARK a,1
a College of Nursing, Seoul National University, Seoul, Korea
Abstract. The purpose of this study is to explore the ability of SNOMED CT to represent narrative statements of medical records. Narrative medical records covering 281 hospitalization days of 36 gastrectomy patients were decomposed into single-meaning statements, and these single-meaning statements were combined into unique statements by removing semantically redundant statements. Concepts from the statements describing patients' problems and treatments were mapped to SNOMED CT concepts. A total of 4,717 single-meaning statements were collected, and these single-meaning statements were combined into 858 unique statements. Out of the 677 unique statements describing patients' problems and treatments, about 85.5% were fully mapped to SNOMED CT. The rest of the statements were partially mapped. This mapping result implies that physicians' narrative medical records can be structured and used for an electronic medical record system.
Keywords: information sharing, narrative medical records, terminology system, mapping, SNOMED CT, ICNP
1. Introduction
Throughout the healthcare sector, the introduction and utilization of information systems is becoming widespread. Electronic Medical Records, which are the most crucial component of hospital information systems, improve the accessibility of medical information and contribute to the readability and completeness of records, allowing users to search for and use information with more ease through greater integration of information [1, 2]. But in order to use such electronic medical records more efficiently, and to facilitate the smooth sharing and exchange of information between systems and medical institutions, it is imperative that they be based on a controlled terminology system [3]. In nursing, an electronic nursing records system based on ICNP, a controlled nursing terminology system, was introduced in early 2003 in Korea [4], and the data gathered by this system have even been used in decision-making and research [5]. But in the case of physicians' records, only fragmentary information such as chief complaints [6], decision-making rules [7], discharge summaries, diagnoses, and operation names [3] has been mapped to SNOMED CT. Records that compose a great part of all medical records, such as admission notes, progress notes, and discharge summary notes, are still left in unstructured free-text format.
1 Corresponding Author: Hyeoun-Ae Park, College of Nursing, Seoul National University, 28 Yongon-dong Chongno-gu, Seoul, 110-799, Korea; E-mail: [email protected]
Therefore, in the present study, we map doctors’ medical records documented in free-text form to SNOMED CT concepts in order to explore the possibility of structured data input.
2. Method
2.1. Data Collection
We analyzed the free-text medical records of patients who were admitted to the Department of General Surgery of a tertiary hospital in Korea and received a gastrectomy. Medical records of patients with gastrectomy were chosen for analysis because gastrectomy is one of the most frequently performed surgeries in Korea, with a relatively well-defined care procedure. In order to limit the analysis to the medical records of gastrectomy patients, we eliminated the records of patients who were transferred to other departments before or after the surgery, or who had other operations performed on them simultaneously. Statements were collected in reverse chronological order, starting from the records of the patients admitted on September 30, 2009. Taking into consideration the change of the doctor in charge due to the monthly rotation of the residents at the study hospital, we only included three patients per month in the pool for analysis. We collected the free-text portions of the patients' medical records, decomposed them into single statements by meaning, and continued the process until there were three patients who no longer yielded statements with new meanings (saturation sampling). As a result, we collected 4717 single statements from the medical records of 36 patients, documented by 19 doctors over a period of 281 days.
2.2. Analysis of Data
The collected statements often overlapped in meaning, although they were expressed differently by different doctors. Combining the statements by meaning, a total of 858 unique statements were extracted. We classified the extracted unique statements into those that describe the "medical condition of the patient" (current symptoms, test results, diagnoses, etc.), those that describe "medical procedures performed on the patient" (treatment, administration of medicine, care plans, etc.), and "other statements" (patient's habits and other administrative information). Of these, the 677 unique statements that describe the "medical condition of the patient" and the "medical procedures performed on the patient" were the target of analysis in this study. First, we decomposed each unique statement into concepts and mapped them to SNOMED CT (2009-07-31 international edition) concepts using the CliniClue Xplore browser. The results of the mapping were classified into "fully mapped", "partially mapped", and "not mapped".
2.3. Validation
The results of extracting the concepts from statements and mapping them to SNOMED CT concepts were verified by a group of experts. The experts consisted of a surgeon who performs gastrectomies at a hospital, a nurse with a Ph.D. degree in nursing
766
E.-Y. So and H.-A. Park / Mapping Medical Records of Gastrectomy Patients to SNOMED CT
informatics with experience in SNOMED CT mapping research, a doctoral student with experience in SNOMED CT and nursing informatics research, and a student with a master's degree who maintains electronic medical records at a hospital using SNOMED CT. The experts were presented with the results of the mapping along with possible replacement concepts, and asked for their opinions. The mapping results were finally modified based on their verification.
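The classification rule applied to each statement can be summarized in a short sketch. This is a minimal illustration under simplifying assumptions, not the study's actual procedure (the mapping was performed manually with the CliniClue Xplore browser); the statement decomposition and the toy lookup table shown here are hypothetical.

```python
# Minimal sketch of the fully/partially/not mapped classification described above.
# The concept extraction and the SNOMED CT lookup are placeholders for the
# manual, browser-based mapping reported in the paper.

def classify_statement(concepts, snomed_lookup):
    """Classify one unique statement from its extracted concepts.

    concepts      -- list of concept strings extracted from the statement
    snomed_lookup -- callable returning a SNOMED CT match (or None) for a concept
    """
    mapped = [c for c in concepts if snomed_lookup(c) is not None]
    if len(mapped) == len(concepts):
        return "fully mapped"
    if mapped:
        return "partially mapped"
    return "not mapped"

# Hypothetical example: a statement decomposed into two concepts,
# only one of which has a SNOMED CT counterpart in this toy lookup table.
toy_snomed = {"abdominal pain": "Abdominal pain (finding)"}
print(classify_statement(["abdominal pain", "soft diet started"], toy_snomed.get))
# -> "partially mapped"
```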
3. Result When the 677 unique statements describing the "medical condition of the patient" and the "medical procedures performed on the patient" were decomposed into concepts and mapped to SNOMED CT concepts, 579 unique statements (85.5% of the total) were fully mapped and the remaining 14.5% were partially mapped. There were no statements that were not mapped to SNOMED CT concepts. Before removing redundancy in meaning, 3740 statements (93.3% of the total) were fully mapped to SNOMED CT concepts. Regarding the types of statements, those that described the medical condition of the patient (91.9%) showed a higher rate of being fully mapped than statements that described the medical procedures (74.4%) (Table 1). A total of 705 concepts were extracted during the course of the mapping.
Table 1. Mapping of Statements by SNOMED CT. Each cell gives the number of total statements (%) / number of unique statements (%).

|                  | Patient Conditions         | Treatments Given          | Total                      |
|------------------|----------------------------|---------------------------|----------------------------|
| Fully mapped     | 3071 (96.8) / 396 (91.9)   | 669 (80.0) / 183 (74.4)   | 3740 (93.3) / 579 (85.5)   |
| Partially mapped | 101 (3.2) / 35 (8.1)       | 167 (20.0) / 63 (25.6)    | 268 (6.7) / 98 (14.5)      |
| Total            | 3172 (100.0) / 431 (100.0) | 836 (100.0) / 246 (100.0) | 4008 (100.0) / 677 (100.0) |
Taking the frequency of concepts appearing in the statements into consideration, the 705 concepts appeared a total of 9415 times. Out of the 705 concepts, 660 (93.6%) were mapped to SNOMED CT. In terms of the types of mapping, 30.2% were lexically mapped, 21.5% were semantically mapped, 13.8% were mapped to a broader concept, 1.1% were mapped to a narrower concept, and 27.0% were mapped to more than one concept (Table 2).
Table 2. Mapping of Concepts by SNOMED CT

|                                   | No. of unique concepts (%) | No. of total concepts (%) |
|-----------------------------------|----------------------------|---------------------------|
| Mapped to SNOMED CT               | 660 (93.6)                 | 9135 (97.0)               |
|   Lexically mapped                | 213 (30.2)                 | 1611 (17.1)               |
|   Semantically mapped             | 152 (21.5)                 | 4390 (46.6)               |
|   Mapped to a broader concept     | 97 (13.8)                  | 1204 (12.8)               |
|   Mapped to a narrower concept    | 8 (1.1)                    | 53 (0.6)                  |
|   Mapped to more than one concept | 190 (27.0)                 | 1877 (19.9)               |
| Not mapped to SNOMED CT           | 45 (6.4)                   | 280 (3.0)                 |
| Total                             | 705 (100.0)                | 9415 (100.0)              |
4. Discussion The results of the study show that most free-text medical records of gastrectomy patients documented by doctors can be mapped to SNOMED CT. This is similar to the content coverage of SNOMED CT in representing the most common nonduplicated patient problems seen at the Mayo Clinic [8]: in that study, SNOMED CT, when used as a compositional terminology, could represent 92.3% of the terms commonly used in medical problem lists. This implies that SNOMED CT can be used to structure free-text doctors' medical records. In addition, the mapping rate to SNOMED CT was higher with statements that described the "medical condition of the patient". In the current electronic medical record system, information on "medical procedures" is relatively easy to use, because procedures are coded for use in doctors' orders and reimbursements. However, the medical condition of the patient - especially the patient's symptoms or the doctor's judgments and opinions - usually remains unstructured as free-text records and is therefore difficult to search for. Therefore, if such records become structured based on SNOMED CT, the information will prove extremely useful. In mapping to SNOMED CT concepts, statements about test results such as "platelet: */mm" or "total calcium: *mg/dl" were imbued with value judgments regarding the results. Thus the appropriate concepts were first searched for and mapped to concepts in the "clinical finding" hierarchy, and then in the "observable entity" hierarchy when no concept matched. However, concepts describing some clinical laboratory tests could not be found in the abovementioned hierarchies, and existed only as a concept in the "procedures" hierarchy. In these cases, the concepts were considered not mapped. An example is the hepatic enzymes "GOT (glutamic oxaloacetic transaminase)" and "GPT (glutamic pyruvic transaminase)"; GOT existed as "aspartate transaminase level (finding)" in the "finding" hierarchy and was thus able to be mapped, but GPT
only existed as "alanine aminotransferase measurement (procedure)" and was thus unable to be mapped. Such issues of inconsistency were present not only in clinical laboratory tests, but also in some pre-coordinated concepts. For example, "no sputum (finding)" or "not hoarse (finding)" could be expressed as pre-coordinated concepts, but concepts such as "no dyspnea" had no pre-coordinated concept and needed to be expressed through post-coordinated concepts such as "dyspnea (finding)" and "absent (qualifier)". Pre-coordinated expressions did exist for some concepts such as "no vomiting (situation)", but in these cases the meaning of the concept in the "situation" hierarchy did not match and thus the concepts could not be mapped. Not all concepts must be expressed through pre-coordinated concepts, but issues of inconsistency may arise when similar types of concepts are expressed partly through pre-coordinated and partly through post-coordinated concepts. In addition, post-coordination may prove to be useful in terms of later data utilization, so certain principles regarding these situations must be established during clinical mapping. However, the present study is limited to gastrectomy patients at the Department of Surgery of a tertiary hospital in Korea, and further research into the possibility of structuring doctors' records through the analysis of medical records from various other areas is necessary.
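To make the pre- versus post-coordination issue concrete, the sketch below shows one possible uniform in-memory representation for both kinds of expressions, using the examples from the text. It is an illustration only: the qualifier attribute name is a simplification, no SNOMED CT identifiers are shown, and the structure is not a prescribed SNOMED CT encoding.

```python
# Illustrative only: one way to hold pre- and post-coordinated expressions
# uniformly, using the examples discussed above.

# Pre-coordinated: a single concept carries the whole meaning.
no_sputum = {"focus": "No sputum (finding)", "qualifiers": {}}

# Post-coordinated: the meaning is composed from a concept plus a qualifier.
no_dyspnea = {
    "focus": "Dyspnea (finding)",
    "qualifiers": {"presence": "Absent (qualifier value)"},  # simplified attribute name
}

def is_post_coordinated(expression):
    """An expression is post-coordinated when it needs at least one qualifier."""
    return bool(expression["qualifiers"])

print(is_post_coordinated(no_sputum), is_post_coordinated(no_dyspnea))  # False True
```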
References
[1] Dick RS, Steen EB. The Computer-Based Patient Record: an Essential Technology for Health Care, Rev. ed. Washington DC: National Academy Press, 1997.
[2] Ginneken AM. The Computerized Patient Record: Balancing Effort and Benefit. Int J Med Inf 2002; 65: 97-119.
[3] Kim SH, Han SB, Choi JW. The Expressive Power of SNOMED CT Compared with the Discharge Summaries. J Kor Soc Med Informatics 2005; 11(3): 265-272.
[4] Cho IS, Park HA, Chung EJ, Lee HS. Formative Evaluation of Standard Terminology-based Electronic Nursing Record System in Clinical Setting. J Kor Soc Med Informatics 2003; 9(4): 413-421.
[5] Kim EM, Park IS, Shin HJ, Ahn TS, Kim YA, Oh PJ, et al. The Analysis of Standard Nursing Statements at Electronic Nursing Records. J of Kor Clinical Nursing Research 2005; 11(1): 149-164.
[6] Chin HJ, Kim SG. Standardization of Main Concept in Chief Complaint Based on SNOMED CT for Utilization in Electronic Medical Record. J Kor Soc Med Informatics 2003; 9(3): 235-247.
[7] Kim HY, Cho IS, Lee JH, Kim JH, Kim Y. Concept representation of decision logic for hypertension management using SNOMED CT. J Kor Soc Med Informatics 2008; 14(4): 395-403.
[8] Elkin PL, Brown SH, Husser CS, Bauer BA, Wahner-Roedler D, Rosenbloom ST, Speroff T. Evaluation of the content coverage of SNOMED CT: Ability of SNOMED clinical terms to represent clinical problem lists. Mayo Clin Proc 2006 Jun;81(6):741-8.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-769
Terminology for the Description of the Diagnostic Studies in the Field of EBM
Natalia GRABAR a, Ludovic TRINQUART b, Isabelle COLOMBET c
a CNRS STL UMR 8163, Université Lille 3, rue Barreau, 59653 Villeneuve d'Ascq, France
b French Cochrane Center, AP-HP, Paris, France
c Université Paris Descartes, Paris, F-75006 France; HEGP AP-HP, 20 rue Leblanc, Paris, F-75015 France
Abstract. Diagnostic systematic reviews are a relatively new area within Evidence-Based Medicine (EBM). Their indexing in PubMed is not precise, which complicates their detection when a systematic review is to be realized. In order to provide assistance in the selection of relevant studies, we propose to develop a terminology describing this area and the organization of its terms. The terminology is built with a bottom-up approach. It contains 255 terms organized into five hierarchical levels. Only a small proportion of these terms (13%) are already registered in MeSH. This terminology will be exploited in a dedicated web service as a main tool for the detection of relevant diagnostic studies. Keywords. Evidence-Based Medicine; Review, Systematic; Language; Natural Language Processing; Terminology
1. Introduction The aim of systematic reviews (SR) is to provide a synthesis of multiple primary research studies concerned with a given clinical question. Such syntheses are a part of the Cochrane Collaboration effort and published in the Cochrane library. The library is thereby a knowledge base which can be used by health professionals for supporting decisions within the frame of the Evidence-Based Medicine (EBM). The vast majority of SRs addresses the efficacy of interventions to treat or prevent diseases. Other SRs focus on diagnostic or prognostic studies. These reviews can be methodologically challenging. In particular, an essential step is to identify all relevant studies to be included in the review. Identifying diagnostic test accuracy studies is more difficult than searching for randomized trials. First, an exhaustive search strategy should involve several electronic bibliographical databases. Second, the indexing of diagnostic studies is imperfect as there is not a unique keyword for an accuracy study comparable with the term “randomized controlled trial” [1]. Third, methodological electronic search filters for diagnostic studies (which aim to restrict the search to articles that are most likely to be diagnostic studies) are not recommended because they can lead to the omission of a substantial number of relevant studies [2,3]. Fourth, supervised machine learning methods used for the automatic selection of relevant
studies for therapeutic SRs [4-7] are not efficient because of the small number of existing diagnostic reviews. Consequently, reviewers often have to screen for eligibility a very large number of references, most of them being irrelevant to the clinical question of interest. The whole process is performed manually, which is a real burden to reviewers. We propose to help the process of selection of relevant articles with a semantic information retrieval system based on a terminological resource. To our knowledge, no such resource has yet been designed and published. Two kinds of approaches are distinguished when creating terminologies, namely top-down (main high-level concepts are defined and then populated) and bottom-up (terms are observed within the exploited material and then organized into classes, sub-classes, etc.). Corpora of textual documents and Natural Language Processing (NLP) methods are often used in bottom-up approaches [8-9]. Transformation-based approaches have also been proposed; they exploit HTML and XML metadata [10] or databases [11-12]. In our work, we use corpora and NLP methods, because textual material is easily accessible and contains data actually and naturally used in the area of interest. Other related works should be mentioned. For instance, an ontology of EBM has been proposed [13]. It attempts to model this area and it particularly targets relations which may exist between patient records and meta-analysis results. Another work proposes an ontology related to SRs and meta-analyses [14]. It contains 128 elements exploited for manual tagging of five Randomized Controlled Trial studies in neurosurgery. Intra- and inter-annotator comparison shows that such ontologies make it possible to obtain a high annotation agreement (kappa ranging from 0.53 to 0.82) and an improvement in the quality of reporting. We aim at creating a terminology dedicated to diagnostic studies.
2. Material and Methods Material. We exploit a set of corpora and the MeSH terminology [15]. The main subset of corpora is composed of scientific literature and reports related to diagnostic studies. It contains: 6 reference articles dedicated to the description of the STARD initiative and its main concepts, and 20 diagnostic studies, among which 15 are full-text articles and 5 are abstracts. References and full text of these articles are available upon request. These are supposed to be instantiations of the STARD initiative and to describe studies performed within the EBM framework. This diagnostic corpus contains 105,000 occurrences (or words). Additional corpora are used to ensure the specificity of terms; they cover other types of SRs: prognostic (6 citations, 36,000 occ.), therapeutic (7 citations, 36,779 occ.) and observational (6 citations, 39,800 occ.). The MeSH terminology [15] is typically used for indexing the scientific literature in the PubMed database, including the SRs. We expect that MeSH provides several terms relevant to diagnostic accuracy study reviews. If new terms are found in the corpora, and according to the expert validation, they may be considered as additional relevant terms for MeSH. Method. Our method carries out the extraction of terms and their alignment with MeSH. Another step is dedicated to the evaluation and structuring of the extracted data.
Automatic acquisition and alignment of terms. Corpora are first pre-processed through the Ogmios platform [16]. This platform performs the segmentation into words and sentences, POS tagging (assignment of part-of-speech categories: cancers/Noun, cancerous/Adjective) and lemmatization (definition of the normalized form of words: cancers => cancer) with TreeTagger [17]. The step of term extraction is carried out with the syntactic rule-based parser YATEA [18]. Once the terms are extracted from corpora, they are aligned with the MeSH terminology. For all the extracted terms, their frequencies are computed in each processed corpus. This information is assumed to help the selection and validation step: frequencies of terms may be indicative of their specificity to the diagnostic area. Indeed, if terms occur only or more often in the diagnostic corpus, they show a high specificity; otherwise their specificity to the diagnostic area is lower. Evaluation and structuring. An independent evaluation was performed manually by two experts (a physician and a biostatistician with experience in SR). In cases of disagreement, consensus was established through discussion. Each extracted term was examined, together with its distributions and frequencies across the corpora. Global inter-expert agreement was assessed with chance-corrected kappa statistics and with simple raw specific agreement indexes, which are the conditional probability, given that one expert gives a result, that the other expert gives the same result [19]. Structuring was performed through a bottom-up approach: selected terms were grouped into categories and then subcategories, according to their semantics.
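As a concrete illustration of these agreement measures, the sketch below computes the chance-corrected kappa and the raw positive/negative specific agreement indexes from two experts' keep/reject decisions. It is a generic sketch following the definitions in [19], not the project's evaluation code, and the toy decision lists are invented.

```python
# Sketch of the agreement indexes used in the expert evaluation (cf. [19]).
# expert1/expert2 are parallel lists of booleans: True = the term is selected.

def agreement_indexes(expert1, expert2):
    a = sum(x and y for x, y in zip(expert1, expert2))              # both select
    b = sum(x and not y for x, y in zip(expert1, expert2))          # only expert 1
    c = sum((not x) and y for x, y in zip(expert1, expert2))        # only expert 2
    d = sum((not x) and (not y) for x, y in zip(expert1, expert2))  # both reject
    n = a + b + c + d
    observed = (a + d) / n
    # Chance agreement for Cohen's kappa, from the marginal selection rates.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    positive_agreement = 2 * a / (2 * a + b + c)   # specific agreement on "select"
    negative_agreement = 2 * d / (2 * d + b + c)   # specific agreement on "reject"
    return kappa, positive_agreement, negative_agreement

# Invented toy data: most terms are rejected by both experts, with a few
# disagreements; this reproduces the "high raw agreement, low kappa" pattern.
expert1 = [True, True, False, False, False, False, False, True, False, False]
expert2 = [True, False, False, False, False, False, True, False, False, False]
print(agreement_indexes(expert1, expert2))
```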
3. Results and Discussion Processing of the diagnostic corpus led to the extraction of 7,448 terms, among which 1,218 (16.3%) are already registered in MeSH, and 6,230 are new terms. The acquisition on the other corpora produced the following results: the observational corpus provides 1,640 terms, of which 722 (44%) are in MeSH; the prognostic corpus provides 2,383 terms, of which 531 (22.3%) are in MeSH; the therapeutic corpus provides 1,602 terms, of which 590 (36.8%) are in MeSH.

Table 1: Excerpt of the extracted data.

| Source | Term                   | Ftot | Fmet | Fstu | Ntot | Nmet | Nstu | Prog Ftot | Obs Ftot | Ther Ftot |
|--------|------------------------|------|------|------|------|------|------|-----------|----------|-----------|
| E01    | diagnosis              | 194  | 77   | 117  | 19   | 6    | 13   | 13        | 27       | 6         |
| E05    | roc curve              | 14   | 4    | 10   | 8    | 2    | 6    | 2         | 0        | 0         |
| N06    | prevalence             | 10   | 6    | 4    | 2    | 1    | 1    | 3         | 11       | 0         |
| YATEA  | diagnostic accuracy    | 150  | 122  | 28   | 13   | 6    | 7    | 10        | 0        | 0         |
| YATEA  | diagnostic performance | 30   | 12   | 18   | 3    | 2    | 1    | 0         | 0        | 0         |
| N06    | confidence intervals   | 20   | 5    | 15   | 14   | 4    | 10   | 7         | 3        | 8         |
| YATEA  | characteristics curve  | 2    | 1    | 1    | 2    | 1    | 1    | 0         | 0        | 0         |
| YATEA  | clinical trials        | 12   | 6    | 6    | 8    | 4    | 4    | 4         | 8        | 38        |
Table 1 contains an example of the extracted terms together with their frequencies in various corpora. If an extracted term is also recorded in MeSH, we indicate in the first column its MeSH hierarchical tree (i.e., E, G or N), otherwise it is provided by YATEA. We then indicate frequencies of the extracted terms (frequency in diagnostic corpus Ftot, and
separately in methodological documents Fmet and studies Fstu). We also indicate the number of diagnostic corpus documents in which these terms occurred (total number Ntot, and separately the number of methodological documents Nmet and of studies Nstu). The last three columns indicate the frequencies of these terms in the three other corpora. Further to the expert evaluation, a set of 219 terms is selected. Among these, 26 (13%) are already registered in MeSH (E (n=11), G (n=2) and N (n=11) MeSH trees), while 193 are provided only by YATEA. The inter-expert agreement is NN. An additional set of 36 terms has been added by the experts, which gives a total of 255 terms. The additional terms are often variations of the extracted terms (i.e. abbreviations: npv, ppv) or terms suggested by the extracted data (dor and cut point never occurred individually but within larger terms and have been added as individual entries). Within the initial set of 7,448 extracted terms, only 3% have been selected. The rejection rate is thus very high. Some of the rejected terms are indicated in the lower part of Table 1. Among the rejected terms we observe: (1) common errors usually observed with automatic term extraction methods due to tagging errors; (2) sequences not relevant to a terminology (journals, authors, ...); (3) too general terms (public health, confidence intervals, characteristics curve); (4) terms not specific to diagnostic studies (clinical trials). The specificity of the material needed for the task and current shortcomings of automatic term extraction may explain such a rejection rate. With this kind of data, where the rate of selection is both globally low and heterogeneous between experts, the inter-expert agreement kappa is low (0.106), although the average positive (selection) and negative (rejection) agreements are respectively 0.14 and 0.84. The exploitation of such methods makes it possible to construct a terminology where no existing semantic resources are available and to ensure that this terminology will be relevant to the processing of real data. The low number of MeSH terms within the validated data indicates that the diagnostic area is poorly covered by MeSH. If MeSH were to be enriched with such terms, the indexing of diagnostic studies would be more precise and would help the realization of SRs. The next and final step of the work is dedicated to the structuring of the selected terms. Five levels of terms have been defined. Figure 1 shows the four higher levels corresponding to categories of terms. These four broad categories represent the main aspects of diagnostic studies. Notice that nearly all the MeSH terms are positioned under the Test characteristics tree, which indicates again the necessity of such a resource.
Figure 1. Hierarchical tree of the terminology.
4. Conclusion and Perspectives We presented an experience in building a terminology of diagnostic studies within the EBM area. We exploited automatic methods for term extraction and for their alignment
with an existing terminology (MeSH). Only a small part of the acquired and validated terms is already recorded in MeSH. This indicates that MeSH may be enriched with some of the terms from the constructed terminology in order to provide assistance in indexing the diagnostic studies. The validated terms have also been structured, and the resulting semantic resource contains five hierarchical levels. We plan to exploit and evaluate this resource within the web service dedicated to the automatic selection of literature [20]. Acknowledgments. This work is part of the ReSyTAL project, supported by a research grant from the French PHRC, designed to facilitate the selection of relevant scientific literature as well as the realization of diagnostic SRs.
References
[1] Haynes RB, Wilczynski NL. Optimal search strategies for retrieving scientifically strong studies of diagnosis from medline: analytical survey. BMJ 2005;330(7501):1162-3.
[2] Leeflang M, Scholten R, Rutjes A, Reitsma J, Bossuyt P. Use of methodological search filters to identify diagnostic accuracy studies can lead to the omission of relevant studies. Clin Epidemiol 2006;59(3):234-40.
[3] Meade M, Richardson W. Selecting and appraising studies for a systematic review. Ann Intern Med 1997;127(7):531-7.
[4] Aphinyanaphongs Y, Tsamardinos I, Statnikov A, Hardin D, Aliferis C. Text categorization models for high-quality article retrieval in internal medicine. J Am Med Inform Assoc 2005;12(2):207-16.
[5] Cohen A, Hersh W, Peterson K, Yen P. Reducing workload in systematic review preparation using automated citation classification. JAMIA 2006;13(2):206-19.
[6] Demner-Fushman D, Few B, Hauser S, Thoma G. Automatically identifying health outcome information in medline records. JAMIA 2006;13(1):52-60.
[7] Kilicoglu H, Demner-Fushman D, Rindflesch T, Wilczynski N, Haynes R. Towards automatic recognition of scientifically rigorous clinical research evidence. J Am Med Inform Assoc 2009;16(1):25-31.
[8] Condamines A, Rebeyrolle J. CTKB: A corpus-based approach to a terminological knowledge base. In: Proceedings of Computerm'98, Coling-ACL'98. 1998:29-35.
[9] Maedche A, Staab S. Mining ontologies from text. In: Dieng R, Corby O, eds, EKAW 2000.
[10] Giraldo G, Reynaud C. Construction semi-automatique d'ontologies à partir de DTDs relatives à un même domaine. In: Actes Ingénierie des Connaissances (IC). 28-30 mai 2002.
[11] Krivine S, et al. Construction automatique d'ontologies à partir d'une base de données relationnelles : application au médicament dans le domaine de la pharmacovigilance. In: IC 2009.
[12] Kamel M, Aussenac-Gilles N. Construction automatique d'ontologies à partir de spécifications de bases de données. In: IC 2009, 2009.
[13] Pisanelli D, Zaccagnini D, Capurso L, Koch M. An ontological approach to evidence-based medicine and meta-analysis. In: MIE 2003, 2003:543-8.
[14] Zaveri A, Cofiel L, Shah J, et al. Achieving high research reporting quality through the use of computational ontologies. Neuroinformatics 2010;8(4):261-71.
[15] National Library of Medicine, Bethesda, Maryland. Medical Subject Headings, 2001. www.nlm.nih.gov/mesh/meshhome.html.
[16] Hamon T, Nazarenko A, Poibeau T, Aubin S, Derivière J. A robust linguistic platform for efficient and domain specific web content analysis. In: RIAO 2007, Pittsburgh, USA. 2007.
[17] Schmid H. Probabilistic part-of-speech tagging using decision trees. In: Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK. 1994:44-9.
[18] Aubin S, Hamon T. Improving term extraction with terminological resources. In: FinTAL 2006, number 4139 in LNAI. Springer, August 2006:380-7.
[19] Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol 1990;43:551-558.
[20] Trinquart L, Fanet A, Grabar N, Colombet I. A unique web service to facilitate the study selection process in systematic reviews. In: Joint Colloquium of the Cochrane & Campbell Collaborations, 2010.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-774
Representing Knowledge, Data and Concepts for EHRS Using DCM
William GOOSSEN a,1
a Lector ICT Innovations in Health Care at Windesheim, Zwolle, and director at Results 4 Care B.V., Amersfoort, the Netherlands
Abstract. With the move towards next generations of Electronic Health Record Systems (EHRS), the focus changes from administrative, data retrieval and data entry system capabilities towards clinical functions. The representation of the clinical knowledge and evidence base in EHRS becomes an important asset for health care, with its own challenges. Clinicians do want EHRS support but do not want to standardize care; they do want unified terminology and structured data entry, but also free text. In addition, information modelers challenge each other for the best solution, and care pathways and other workflows seem to differ for each situation. Such diverging approaches add complexity to the already difficult situation around Information Technology in health care, the EHRS in particular. This paper argues that a change is necessary: to adopt Detailed Clinical Modeling as a method to organize clinical knowledge, represent concepts and define data in such a manner that it allows semantics to be exchanged without being trapped in a specific technology. DCM help to fulfill the requirements of the 'enter data once, reuse multiple times' paradigm for EHRS. Keywords. concept representation, detailed clinical models, archetypes, Electronic Health Records, HL7 templates, information modeling
1. Introduction The next generation of Electronic Health Record Systems (EHRS) should fulfill many functional requirements of clinicians. Example EHRS functions that are increasingly becoming important are: structured data entry, easy data storage and retrieval, exchange of data for continuity of care, use of data for decision support, aggregation of data for quality indicators and epidemiology, and aggregation of data for billing, etc. All these functions integrate with each other such that the knowledge required for each of these functions must be taken into account to properly represent the required clinical concepts in EHRS. This need for multipurpose representation of clinical concepts points to the most granular level of single data elements, their attributes, and their relationships. Moreover, to understand data that are exchanged, or to properly compare groups of patients, a high level of standardization is required. At the same time, due to the diversity of patients and the increasing complexity of health care, maximum flexibility is required in EHRS configuration for different domains. This cannot be achieved without a whole repertoire of health informatics standards.
1 Corresponding Author, Results 4 Care B.V., De Stinse 15, Amersfoort, the Netherlands. Email: [email protected].
Several approaches have been developed that attempt to fulfill these important EHRS functions. Each of these approaches begins with modeling efforts [1], and assumes an architectural framework [2], whether implicit or explicit. Clinical modeling examples in the literature and in practice include clinical elements [3], templates [4], care information models [5], clinical content models [6], clinical templates [7], archetypes [8], detailed clinical models [3, 9], and more. This approach is two-level modeling, and it is carried out by disentangling the clinical data specification from the system's technical functions [1]. Involving clinicians in such work is feasible, but its usefulness is only apparent if an (international) system of governance is established [10]. This paper argues the case that adoption of conceptual-level Detailed Clinical Modeling (DCM) is required to move to the next generation of EHRS. DCM is both a method and a format to organize clinical knowledge, represent concepts, and define data elements in such a manner that it allows semantics to be exchanged without being trapped in a specific technology. DCM allows representing the semantics in a technology-independent way and makes it feasible that next generation EHRS can be developed, and that existing healthcare information technology can interact with EHRS [3, 5, 7, 10, 11].
2. Clinical Modeling Benefits In a recent paper we reviewed a selection of the existing clinical modeling attempts listed above [11]. On the conceptual level, there is almost no difference in the representation of clinical knowledge in the form of data elements, relationships between data elements, attribute expression, and code binding [11]. However, there are differences in the use of a specific reference model and technology versus a more agnostic approach. In addition, there are differences between a top-down approach (derived from a reference model) and a bottom-up approach (analysis of the clinical phenomenon, afterwards linked to reference models). Furthermore, Blobel argues that an architecture of health information technology is required to position these clinical models properly [2]. DCM starts with analyzing, sorting, and formalizing clinical knowledge at the fine-grained level of concepts. Next, the resulting material is structured and standardized at the level of individual and/or closely related data elements for clinical use. Doing this with conceptual modeling makes it possible to create and maintain a set of DCM independently of the technical implementation in which they are deployed. Hence, for an EHRS, clinicians are not completely dependent on vendors, and when a specific system is replaced, the clinical knowledge will remain available. Adding contextual knowledge and meta-information contributes to the overall usefulness of DCM for the different purposes of data use. These are the core content of part 2 of the international standard 13972 for DCM under development [12]. For long-term quality of DCM, it is important to engage clinicians, organize governance, enable access, and apply measures for patient safety. Methodologies that facilitate the DCM work are the core of part 1 of the international standard [12].
3. Different Perspectives on Clinical Data The different purposes identified for the use of clinical data each require a careful analysis, at a detailed level, of the requirements, in particular the validity, relevance, and reliability of
the data. Differences exist at the problem, patient, sample, or population levels, and rules for data aggregation need to be taken into account. Hence, each purpose for data use has a specific set of attributes and constraints for the data entry, storage, processing, presentation, communication, aggregation, and so on. It is rarely possible to organize this for large data sets at once. However, when the big elephant is broken down into small portions, it becomes feasible to eat it. Thus, clinical data elements at the most 'atomic', or 'small molecular', granular levels are feasible to standardize from the different perspectives on data use [3, 5, 7, 10, 11]. DCM allows exactly that. Figure 1 illustrates the overlap and differences in representing EHRS data for different purposes. This is the playground for DCM analysis and development.
Figure 1. Different purposes for data use requiring specific knowledge representation in EHRS.
Now that a reasonable set of DCM is ready [3, 6], the diversity of patient populations can be addressed in full. We can deploy the same DCM in different clinical domains. This is where additional methods such as Domain Analysis Models [4] or Clinical Templates [7] are applied. In essence, these approaches define a clinical domain, such as diabetes care, skin assessment, or care for a patient on a ventilator. It is obvious that there will be many data relevant and necessary in each domain. Some data overlap with, and other data differ from, another domain. DCM would cover the small items like systolic and diastolic blood pressure, HbA1c value, and the Braden scale for pressure ulcer risk, among many others. Such DCM can be used in the domains by selecting from the DCM collection (repository) what a particular clinical group needs, creating DCM for what is absent, and combining and sometimes constraining the different DCM in the domain model or clinical template. An example constraint for a diabetes domain model would be that the blood pressure must be measured in the sitting position. Hence, the DCM has systolic and diastolic values and body position. Most domains will not use the latter.
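To make this concrete, the following sketch gives one possible technology-neutral rendering of such a small DCM and of a domain-level constraint on it. The element names, the placeholder codes, and the constraint mechanism are illustrative assumptions, not content of the ISO 13972 drafts or of any published DCM repository.

```python
# Illustrative sketch of a small DCM and a domain constraint, not an ISO 13972 artefact.
# Codes such as "BP-SYS" are placeholders, not terminology bindings.

blood_pressure_dcm = {
    "name": "Blood pressure",
    "elements": {
        "systolic":      {"code": "BP-SYS", "type": "quantity", "unit": "mmHg"},
        "diastolic":     {"code": "BP-DIA", "type": "quantity", "unit": "mmHg"},
        "body_position": {"code": "BP-POS", "type": "coded",
                          "value_set": ["sitting", "standing", "lying"]},
    },
}

# A domain model (e.g. diabetes care) reuses the DCM and adds a constraint.
diabetes_constraint = {"body_position": {"fixed_value": "sitting"}}

def apply_constraints(dcm, constraints):
    """Return a constrained copy of the DCM for use in a specific domain."""
    constrained = {"name": dcm["name"], "elements": {}}
    for key, element in dcm["elements"].items():
        element = dict(element)
        element.update(constraints.get(key, {}))
        constrained["elements"][key] = element
    return constrained

print(apply_constraints(blood_pressure_dcm, diabetes_constraint)["elements"]["body_position"])
```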
4. From Clinical Data to Technical Implementation via Conceptual Modeling In order to achieve a technology-independent representation of clinical knowledge, the DCM content, once sorted out, is modeled in generic information models. There are different options here, such as the Unified Modeling Language (UML), Extensible Markup Language (XML), or Web Ontology Language (OWL). Currently, most work in this area is done in a pragmatic way, using one of these representation methods. Figure 2 illustrates the three steps from clinical content via generic conceptual modeling to technical implementation. Moving from one step to another will reveal what is unclear or not sufficiently specified. Hence, a feedback loop from each step to earlier steps
improves the DCM quality and usability, but requires close interactions between clinicians, modelers, and technicians [13].
Figure 2. Three step modeling with DCM.
5. Core Components of a DCM Clinical modeling work in the past decade has shown that there are several core components [3, 4, 5, 6, 7, 8, 9]. It is beyond the scope of this paper to fully list the components identified in current work on the DCM standard, but it is possible to summarize the most crucial ones [11, 12]. Table 1 shows the main DCM components.

Table 1. Three core content areas of Detailed Clinical Models as in ISO draft 13972.

| Type of Knowledge Representation | Major Areas Addressed in DCM |
|----------------------------------|------------------------------|
| Clinical knowledge               | Concept definition; Clinical population; Evidence base; Instruction for documentation; Interpretation |
| Data Element Specification       | Data element; Data element definition; Data type; Unit or value set; Relationships between data elements; Unique code per data element; Detailed data model |
| Meta Data Specification          | Authors; Contact information; Versioning; Keywords; Endorsement / certification |
6. Processes Around DCM In addition to the position and the core components of a DCM, three additional areas of concern have been identified and described [9, 11, 12]. The general opinion of the experts working on the DCM standard is that without clinician involvement and some arrangement to obtain endorsement from professional bodies, DCM will not be valid. In addition, clinical practice will continue to evolve, demands for data will probably increase, and challenges put to EHRS require that DCM can change over time. In that respect a large-scale governance structure will be required, similar to the major health classifications such as the International Classification of Diseases (ICD) and terminologies such as the Systematized Nomenclature for Medicine Clinical Terms (Snomed CT). Moreover, we would like to get access to DCM collections [9, 10]. Finally, a more recent evolving issue is that of ensuring patient safety in specifications
for EHRS and other health care information technology [12]. The rule is to keep it simple and not too complex. DCM allows working at a fine-grained level, piece by piece.
7. Discussion and Conclusion It is likely that we will see no endpoint to the discussions and approaches to knowledge representation, concept modeling, and EHRS development. However, it is clear that the different quests for clinical data to accommodate different uses will go on and that EHRS fulfill a crucial role in addressing that requirement. This challenge is often expressed as the 'enter once in EHRS and use multiple times' paradigm. In this paper, we have argued that this challenge is not easily met and that it does require a high level of standardization in different knowledge areas. In particular, at the data element level, the specifications will have to be very precise and standardized. However, due to the many diverse DCM we can create, the flexibility to adapt to the diversity of patient populations and practice domains is present. Creating DCM of high quality and maintaining these for a long time requires quality criteria for the content, the modeling, and the methodologies. Hence, the standard 13972 currently under development at the International Standards Organization (ISO) [12] will be important to foster the clinical richness in DCM and to deploy it in different EHRS. In the meantime, we see that DCM approaches do represent clinical knowledge, data elements and code binding such that the move to the multi-functional next generation of EHRS becomes feasible.
References
[1] Rector AL, Nowlan WA, Kay S, Goble CA, Howkins TJ. A Framework for Modelling the Electronic Medical Record. Methods Inf Med, 32 (1993), 109-119.
[2] Blobel B. Architectural Approach to eHealth for Enabling Paradigm Changes in Health. Methods Inf Med, 49(2) (2010), 123-134.
[3] Huff SM, Rocha RA, Coyle JF, Narus SP. Integrating detailed clinical models into application development tools. Medinfo 2004 Pt 2 11 (2004), 1058-1062.
[4] Health Level 7. Normative Edition of the HL7 Standards 2010. Ann Arbor, HL7 International.
[5] van der Kooij J, Goossen WTF, Goossen-Baremans ATM, Plaisier N. Evaluation of Documents that Integrate Knowledge, Terminology and Information Models. In: Park HA, et al. (Eds). Stud Health Technol Inform 122 (2006), 519-522.
[6] Center for Interoperable EHR (CiEHR). Clinical Contents Manager. Seoul, Korea. Web documents. http://www.clinicalcontentsmodel.org/main.php. Visited Nov 26, 2010.
[7] Hoy D, Hardiker NR, McNicoll IT, Westwell P. A feasibility study on clinical templates for the national health service in Scotland. Stud Health Technol Inform. 129 (2007), 770-774.
[8] Beale T. Archetypes and the EHR. Stud Health Technol Inform. 96 (2003), 238-244.
[9] Goossen WTF. Using Detailed Clinical Models to Bridge the Gap Between Clinicians and HIT. In: De Clercq E, et al. (Eds). Collaborative Patient Centred eHealth. Proceedings of the HIT@Healthcare 2008. Amsterdam, IOS Press (2008), 3-10.
[10] Garde S, Knaup P, Hovenga E, Heard S. Towards semantic interoperability for electronic health records. Methods Inf Med. 46(3) (2007), 332-43.
[11] Goossen W, Goossen-Baremans A, van der Zel M. Detailed Clinical Models: A Review. Healthc Inform Res. 16(4) (2010), 201-214.
[12] International Standards Organization. Draft materials ISO 13972 Health Informatics: Quality Criteria and Methodologies for Detailed Clinical Models, part 1 and part 2. Draft materials. Geneva, ISO.
[13] van der Zel M, Goossen W. Bridging the gap between software developers and healthcare professionals. Model Driven Application Development. Hospital Information Technology Europe, 3(2) (2010), 20-22.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-779
Ontology-Based Automatic Generation of Computerized Cognitive Exercises
Giorgio LEONARDI a,c, Silvia PANZARASA b, Silvana QUAGLINI a
a Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy
b Consorzio di Bioingegneria e Informatica Medica, Pavia, Italy
c Dipartimento di Informatica, Università del Piemonte Orientale, Italy
Abstract. Computer-based approaches can add great value to the traditional paper-based approaches for cognitive rehabilitation. The management of a large number of stimuli and the use of multimedia features make it possible to improve the patient's involvement and to reuse and recombine stimuli to create new exercises, whose difficulty level should be adapted to the patient's performance. This work proposes an ontological organization of the stimuli, to support the automatic generation of new exercises tailored to the patient's preferences and skills, and its integration into a commercial cognitive rehabilitation tool. The possibilities offered by this approach are presented with the help of real examples. Keywords. Ontology, cognitive rehabilitation, exercise adaptation
1. Introduction Cognitive rehabilitation is designed to reduce and/or compensate the impact of cognitive dysfunction in patients suffering from brain damage [1]. Traditional approaches require the patient to perform paper-based exercises and to undergo face-to-face visits with specialists, in order to improve his/her attention, and cognitive and memory abilities. Computer-based applications can add great value to traditional methods, since they make it possible to involve the patient with multimedia features such as, for example, images, sounds and videos. A cognitive rehabilitation tool able to manage these new types of stimuli can create a new and effective experience to support the patient in the rehabilitation process, also proposing to him/her new varieties of exercises, impossible to achieve with paper-based approaches. The use of a knowledge base to organize the stimuli makes it possible to exploit classifications, relationships and properties such as images and sounds to generate new exercises automatically. This ontological organization, in addition to organizing all the multimedia features used for the patient's rehabilitation process, relieves the specialist of the need to generate by hand all the exercises to be scheduled for a particular patient. Using the stimuli ontology described in this paper, the specialist's only task will be to set up a template for every type of exercise, while the system will automatically compose the exercises by filling the templates with the appropriate stimuli, selecting and properly recombining them using the ontological classifications and the relationships defined. Furthermore, a system to classify the patient's performance may use the ontology to generate exercises whose difficulty level is selected on the basis of the patient's skills. To achieve the goals described, the stimuli ontology and the patient's classification
system have been integrated in the cognitive rehabilitation tool, built on top of the "E-Prime" system, presented in [2].
2. Stimuli Ontology An ontology is defined as "an explicit specification of a conceptualization" [3], and is now gaining a specific role in Artificial Intelligence and other fields, such as knowledge engineering and many others, including knowledge management and organization [4]. An ontology is composed of classes (containing the concepts of our knowledge base), attributes (defining the intrinsic properties of a class) and relationships (defining semantic links between different classes). In our ontology, each class represents a stimulus, to be used in the exercises. The stimuli are grouped in taxonomies (hierarchies of concepts with the same common classification). The top-level concepts define the main semantic categories ("Food", "Animal", "Dress", "Habitation", etc.). Each category contains sub-classes representing stimuli which are more specific with respect to their main category. For example, in Fig. 1, which shows a part of the stimuli ontology implemented with the tool Protégé [5], "Pasta" is classified as a type of "Food", while "Spaghetti" is a type of "Pasta" (and, in turn, a type of "Food").
Figure 1. An excerpt of the stimuli ontology.
Attributes are associated with each class, to bind stimuli to the corresponding images, sounds and/or videos which will be shown to the patient (e.g. the picture and the whistle of a train). The relationships between the classes define the semantic links between the corresponding stimuli. In Fig. 1, two relationships are shown: the first one, called "mainIngredient", binds a "Course" with the main "Food" it is composed of; the second relationship, called "ingredient", binds the "Course" with all the other ingredients (selected from the "Food" taxonomy) needed to cook the course considered. For example, "SpaghettiWithTomatoSauce" has "Spaghetti" as its main ingredient (relationship defined in the left window), and "OliveOil", "Onion", "Tomato", etc. as the other ingredients (relationship in the rightmost window). E-Prime can use this ontological structure to build new exercises automatically, as described in Section 3.
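A minimal sketch of how such a fragment of the taxonomy, with its attributes and relationships, could be held in memory is shown below. The class names follow the examples in the text; the dictionary structure and the picture file names are illustrative assumptions (the actual ontology is maintained in Protégé).

```python
# Illustrative in-memory rendering of a fragment of the stimuli ontology.
# Class names follow the paper's examples; file names are invented placeholders.

ontology = {
    "Food":      {"parent": None,    "pictures": []},
    "Pasta":     {"parent": "Food",  "pictures": []},
    "Spaghetti": {"parent": "Pasta", "pictures": ["spaghetti.jpg"]},
    "Course":    {"parent": None,    "pictures": []},
    "SpaghettiWithTomatoSauce": {
        "parent": "Course",
        "pictures": ["spaghetti_tomato.jpg"],
        "mainIngredient": "Spaghetti",
        "ingredient": ["OliveOil", "Onion", "Tomato"],
    },
}

def is_a(ontology, cls, ancestor):
    """True if cls equals ancestor or is one of its (possibly indirect) sub-classes."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ontology[cls]["parent"]
    return False

print(is_a(ontology, "Spaghetti", "Food"))  # True: Spaghetti -> Pasta -> Food
```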
3. Integration in the Cognitive Rehabilitation System The stimuli ontology has been integrated in [2] using a dedicated tool. Figure 2 shows the overall architecture of the cognitive rehabilitation tool (E-Prime by Psychology Software Tools), completed with the components for integrating the stimuli ontology.
Figure 2. The architecture of the system.
First of all, the stimuli ontology has been defined with the help of specialists in cognitive rehabilitation, through research in the literature and through the Internet. The Protégé editor has been used to formalize the ontology in a machine-readable format (using the XML language generated by Protégé-frames at the moment; restructuring in OWL [7] is a work in progress) and the formalized ontology has been integrated in the "TrialsDB" of E-Prime by means of a custom import tool. Thanks to the ontology-based engine described in [2], E-Prime can use the imported stimuli ontology to generate new exercises using templates and configuration files defined by the therapists. This approach makes it possible to define, edit and maintain the stimuli ontology using a graphical ontology editor (Protégé), while integrating the new versions of the ontology in E-Prime only when stable releases have been deployed.
4. Automatic Generation of Exercises In this section, we describe how the rehabilitation system uses the stimuli ontology to automatically generate two of the main exercises to be solved by the patient. In the first exercise, called "Find the correct category", three images (and/or sounds) associated with three different stimuli belonging to the same category are shown at the top of the screen. At the bottom, three categories are listed. One is the correct answer, while the others are wrong. The exercise on the left of Fig. 3 shows images associated with the classes "SpaghettiWithTomatoSauce", "Cheeseburger" and "ApplePie". These images are found in the attribute "pictures" associated with the classes listed, which are obtained by choosing a category ("Course") and selecting three sub-classes from this taxonomy. Considering the possible answers, "Course" will be the correct answer, while two other categories chosen randomly will represent the wrong answers (in this case, "Animal" and "Habitation"). In this way, the system can build many different exercises using this template, and it is easy to verify whether the patient provides the correct answer. The exercise in the center of Fig. 3 shows how the difficulty level can be changed automatically:
782
G. Leonardi et al. / Ontology-Based Automatic Generation of Computerized Cognitive Exercises
Figure 3. Three exercises generated by the system.
the correct category can be chosen at any level of a selected taxonomy. The higher the level, the easier it will (potentially) be for a patient to answer correctly. In this case, the categories ("First" as the correct answer; "Second" and "Dessert" the wrong ones) are chosen at the first sub-level of the "Course" taxonomy. Probably it will be more difficult for the patient to recognize the difference between "First", "Second" or "Dessert" courses than to recognize the difference between "Course(s)", "Animal(s)" or "Habitation(s)". Thanks to this approach, the difficulty level of the exercises can change automatically as suggested by the patient classification system: a module of E-Prime able to classify the patient's performance, providing statistics about his/her ability to solve the exercises currently administered. In the second type of exercise, called "Select the main ingredient", the system uses the relationship "mainIngredient" to show a course to the patient, and asks him/her what the main ingredient of the course is. In the example shown on the right of Fig. 3, the system selects a "Course" randomly (in this case "Cheeseburger") and places its image at the top of the screen. The correct answer will be the "Food" "CalfMeat" ("Cheeseburger" "mainIngredient" "CalfMeat"), while the wrong answers are selected randomly among all the other classes in the "Food" taxonomy (which is the range of the "mainIngredient" relationship). The image associated with "CalfMeat" is placed in a random position at the bottom of the screen; the other positions will contain the wrong answers. It is straightforward for the system to check the correctness of the patient's answer, by verifying that, in this situation, only the stimuli "Cheeseburger" and "CalfMeat" are linked by the "mainIngredient" relationship. The examples described in this section demonstrate that the stimuli ontology allows the system to automatically generate new exercises, and to verify the correctness of the answers, without human intervention.
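The template-filling logic of the "Find the correct category" exercise can be sketched as follows. This is a simplified illustration, not the actual ontology-based engine of E-Prime, and it assumes a dictionary-based ontology fragment like the one sketched in Section 2.

```python
import random

# Simplified sketch of the "Find the correct category" template filling,
# assuming a dictionary-based ontology fragment as sketched earlier.
# It is not the actual E-Prime ontology-based engine.

def find_the_correct_category(ontology, top_categories, n_stimuli=3, n_wrong=2):
    correct = random.choice(top_categories)
    # Stimuli: sub-classes of the correct category that carry a picture attribute.
    candidates = [name for name, cls in ontology.items()
                  if cls.get("parent") == correct and cls.get("pictures")]
    stimuli = random.sample(candidates, min(n_stimuli, len(candidates)))
    wrong = random.sample([c for c in top_categories if c != correct], n_wrong)
    answers = [correct] + wrong
    random.shuffle(answers)
    return {"stimuli": stimuli, "answers": answers, "solution": correct}

def check_answer(exercise, answer):
    """Answer checking reduces to comparing with the stored solution."""
    return answer == exercise["solution"]

# Intended use (with a fuller ontology than the earlier fragment):
# exercise = find_the_correct_category(ontology, ["Course", "Animal", "Habitation"])
```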
5. Discussion The use of ontologies for automatic quiz generation has been studied in recent years [7, 8]. In this project, the stimuli ontology had to be structured in close collaboration with the domain experts, to offer the best support for the generation of exercises for a delicate type of patient. For this reason, control over the categories, terminology and multimedia features associated with the stimuli is mandatory, because stimuli and categories must be easily recognizable by the patients solving the exercises. Furthermore, the level of categorization in the taxonomies and the network of relationships are designed to support E-Prime and the ontology-based engine in generating sound and intelligible exercises. Considering all these requirements, we initially did not consider general-purpose ontologies (for example the food ontology on the w3.org site) but decided for a custom specialized solution built under the control of the domain
experts. On the negative side, this approach could limit the number of stimuli which can be used by the system. As a work in progress, we are restructuring the stimuli ontology to obtain its formalization in OWL. Among the advantages offered by this format (use of a standard language, different levels of abstraction, use of meta-data information, etc.), it makes it possible to import and join different ontologies in order to reuse them. We plan to study the use of our approach with ontologies imported from different domains and to test whether the new randomized exercises can be correctly built and solved by the patients. This approach raises some issues: from the technical point of view, switching to OWL means that all the concepts must be disjoint, to offer E-Prime a single solution for every exercise, while the reuse of reference ontologies must take care of at least two problems: 1) the vocabulary and the relationship network must be evaluated and approved by domain experts, and the exercises approved by the personnel in charge of the rehabilitation task, and 2) different languages are used in different countries. For example, this system has been used in an Italian hospital, therefore the ontology has been defined for Italian patients. Most of the reference ontologies, however, are defined in English, therefore a proper translation must be found or performed.
6. Conclusion Tele-medicine and tele-homecare may represent an appropriate approach for moving care delivery and rehabilitation from hospitals to home, and the use of a computer-based rehabilitation system allows this move. The multimedia features and the variety of the exercises can be considered as key points for the success of this type of system, since it can involve the patient, improving the effectiveness of the treatment strategy. This work illustrates that an ontology-based approach permits the automatic generation of exercises for the rehabilitation of patients and the management of a wide range of (multimedia) stimuli. The preliminary tests show encouraging results, as half of the patients declare that they prefer using this tool rather than traditional paper-based exercises. Therefore, it may be considered as a means to create effective tele-homecare services.
References
[1] Christensen A, Uzzel BP. International Handbook of Neuropsychological Rehabilitation. Plenum Press, 1999.
[2] Quaglini S, Panzarasa S, Giorgiani T, Zucchella C, Bartolo M, Sinforiani E, Sandrini G. Ontology-Based Personalization and Modulation of Computerized Cognitive Exercises. Proceedings of the 11th Conference on Artificial Intelligence in Medicine AIME (2009), 240-244.
[3] Gruber TR. A translation approach to portable ontology specification. Knowledge Acquisition, 5 (1993), 199-220.
[4] Guarino N. Formal ontology, conceptual analysis and knowledge representation. International Journal of Human-Computer Studies, 43(5/6) (1995), 625-640.
[5] www.protege.stanford.edu
[6] http://www.w3.org/TR/owl-ref/
[7] Zitko B, Stankov S, Rosić M, Grubišić A. Dynamic test generation over ontology-based knowledge representation in authoring shell. Expert Systems with Applications 36, 4 (May 2009), 8185-8196.
[8] Tsumori S, Kaijiri K. System Design for Automatic Generation of Multiple-Choice Questions Adapted to Students' Understanding. Proceedings of the 8th Int. Conference on Information Technology Based Higher Education and Training (2007), 541-546.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-784
Creating a Magnetic Resonance Imaging Ontology Jérémy LASBLEIZab,1, Hervé SAINT-JALMESb, Régis DUVAUFERRIERa, Anita BURGUNa a Unité Inserm U936, IFR 140, Faculté de Médecine b Laboratoire de Traitement du Signal et de l’Image; INSERM UMR642, Université de Rennes 1, France
Abstract. The goal of this work is to build an ontology of Magnetic Resonance Imaging. The MRI domain has been analysed with respect to MRI simulators and the DICOM standard. Two MRI simulators have been analysed: JEMRIS, which is developed in XML and C++ and has a hierarchical organisation, and SIMRI, which is developed in C and offers a good representation of MRI physical processes. To build the ontology we used Protégé 4 and OWL 2, which allows quantitative representations. The ontology has been validated with a reasoner (FaCT++) and by checking that it properly represents DICOM headers and MRI processes. The MRI ontology could improve MRI simulators and ease semantic interoperability. Keywords. MRI, MRI simulator, OWL, ontology.
1. Introduction
Magnetic Resonance Imaging (MRI) is the most versatile diagnostic imaging technique. It can study T1, T2, diffusion, pH, temperature, spectroscopy and other properties of tissues and, of course, produce images. The vocabulary used by medical imaging manufacturers is very heterogeneous [1] and the physical phenomena involved in MRI are very complex. The MRI domain therefore needs an ontology so that the MRI community can share the same concepts. To build our ontology we take into account two MRI representations: MRI simulators and DICOM. DICOM is an application-oriented representation with daily-use concepts. MRI simulators represent the complex physical phenomena that are involved in MRI and that are not described in DICOM. Merging MRI simulator and DICOM concepts is needed to represent MRI examinations not only in an administrative way but also in a way that is useful for radiological interpretation.
2. Material and Methods
2.1. Analyzing DICOM [2]
The DICOM standard is divided into different parts. The relevant part for MRI is C.8.13 « Enhanced MR Image ». It is a section of part 3 of the standard: « Information Object
Definition ». All concepts of this part, and their DICOM tags, will be included in our ontology, which gives the ontology semantic interoperability. DICOM, however, lacks the kind of definitions needed for an ontology. We fill this gap with domain expert definitions, informed by the analysis of MRI simulators.

2.2. Analyzing MRI Simulators
We decided to analyze two MRI simulators, JEMRIS and SIMRI.

SIMRI [3] is implemented in the C language and is based on the Bloch equations. It enables simulations of 1D, 2D and 3D images. Although simple, the user interface requires the use of C. The simulator is divided into the following parts. Model (virtual object): each voxel of the virtual object contains the set of physical values necessary to compute the local spin magnetization vector with the Bloch equations; these values are the proton density and the two relaxation constants T1 and T2. MRI sequence: during an MRI experiment, the object is placed in a static magnetic field B0 and is excited by electromagnetic events of two types, RF pulses (B1 field) and magnetic field gradients; the acquired object magnetization state is stored as a complex signal in k-space to obtain the image. This part is itself divided into four parts: free precession; precession with application of gradients (specified by its duration and the gradient magnitudes in the three spatial directions); signal acquisition (number of points to capture, bandwidth, readout gradient magnitude and position of the signal in k-space); and the application of RF pulses (specified by duration, flip angle and rotation axis). RF inhomogeneity and gradient non-linearity are not simulated. The user can define the echo train and sequence parameters (repetition time, echo time, flip angle, etc.). Chemical shift and susceptibility artefacts are modeled.

JEMRIS [4-5] is a C++ application with XML tags. It uses an optimized numerical library to solve the equations needed to simulate complex RF pulses. It can deal with multichannel Tx-Rx coil geometries and configurations, nonlinear gradients, chemical shift, reversible spin dephasing (T2*), susceptibility-induced off-resonance, time-varying processes of the object (e.g., movement or flow), and concomitant gradient fields. The graphical user interface (GUI) is divided into three parts: one for interactively designing the MRI sequence, another for defining the coil configuration, and one for the setup and execution of the main simulator. The software is divided into five classes: sample (describes the physical properties of the object), signal (holds information about the MR signal), model (describes the functionality for solving the physical problem), coil (contains the code for spatially varying RF transmission and signal reception), and sequences. The sequence loop is represented as a left-right ordered tree with loops (Fig. 1). The XML language is used to serialize the C++ objects describing the different steps of each sequence. The management of time intervals has also been taken into account and formalised. The different modules interact with each other.
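For reference, the Bloch equation with relaxation that both simulators solve for each voxel can be written in its standard textbook form (this is not quoted from either simulator's documentation):

    \frac{d\vec{M}}{dt} = \gamma\,\vec{M}\times\vec{B} \;-\; \frac{M_x\hat{x}+M_y\hat{y}}{T_2} \;-\; \frac{(M_z-M_0)\,\hat{z}}{T_1}

where M is the local magnetization vector, B the total magnetic field (B0 plus gradients and the B1 RF field), gamma the gyromagnetic ratio, and M0 the equilibrium magnetization.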
Figure 1. Echo Planar Imaging sequence schema in JEMRIS [4]: yellow = loops, blue = pulses, green = intervals.
2.3. Using Protégé 4 and OWL 2; Ontology Validation
To build our ontology, we use Protégé [6], which is a free, open-source ontology editor and knowledge-base framework, together with the OWL language. In our case, the domain contains a lot of quantitative information, so we chose OWL 2, which allows us to define quantitative data properties. We first took into account concepts from DICOM and then added concepts from the MRI simulators. We use the ontology classifier FaCT++ to check the consistency of the ontology. The ontology is validated by analysing the DICOM headers of 10 MRI examinations, extracted with OsiriX [7], and by checking that sequences can be written with the ontology.
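As an illustration of this validation step, the sketch below loads an ontology with the OWL API and reports inconsistency and unsatisfiable classes. The file name is an assumption, and the structural reasoner is used only to keep the example self-contained; in practice a DL reasoner such as FaCT++ or Pellet would be plugged in through its OWLReasonerFactory.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;
    import java.io.File;

    public class CheckMriOntology {
        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Illustrative file name for the OWL 2 export of the MRI ontology.
            OWLOntology mri = manager.loadOntologyFromOntologyDocument(new File("mri_ontology.owl"));

            // Replace with a FaCT++ or Pellet reasoner factory for full DL reasoning.
            OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(mri);

            System.out.println("Consistent: " + reasoner.isConsistent());

            // Classes that can never have instances point at modelling errors.
            for (OWLClass c : reasoner.getUnsatisfiableClasses().getEntitiesMinusBottom()) {
                System.out.println("Unsatisfiable class: " + c.getIRI());
            }
            reasoner.dispose();
        }
    }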
3. Results
3.1. Ontology Taxonomy
The main classes of the ontology taxonomy are: Object of the study, defined by its size, voxel size, properties (T1, T2, proton density, diffusion, contrast enhancement kinetics), T2*, and movements (general and flow); Device: magnet (intensity, shape, kind), coil (receiver coil, transmitter coil, multi-element coil, region), gradient (magnetic field, slice selection, diffusion, etc.); and Sequence. From this point our view differs from JEMRIS. Indeed, the vertical representation of loops (Fig. 1) over physical events that are horizontal (time-dependent) and independent cannot be included in an ontology. Therefore we have divided sequences into elementary events (radiofrequency pulse, slice selection gradient, readout gradient, etc.), according to the SIMRI description of events.
The signal acquisition has to be formalised mathematically through the resolution of the Bloch equations, as in the two simulators. The formula will be integrated into the ontology. Acquisition results are divided into image, quantitative result, etc. Organising sequences into a taxonomy is difficult. An article [8], written with a didactic goal, organized sequences by their technical characteristics and with loops. A taxonomy has no loops, and the problem is that sequences can mix different techniques that cannot be organised into a single taxonomy. The solution we have chosen is to classify sequences according to their goals. This is intuitive when the goal is clear (diffusion, angiography images, etc.) but less obvious for contrast sequences. We therefore chose to start with a general taxonomy of sequences (Fig. 2), adding to each of them the weighting of the final images: T1Weighted, T2Weighted, DPWeighted and T2*Weighted. Manufacturer acronyms for sequences have been added as synonyms of the sequence names.
Figure 2. Contrast sequence Taxonomy
Acquisition parameters are divided into two essential groups: parameters modifying image geometry and parameters modifying image contrast. Different kinds of relations between concepts are defined: general relations (Has_a, Has_Parameters, etc.) and quantitative relations (Has_Value, Has_Unit, etc.). OWL 2 permits a quantitative representation of classes. Relations between classes such as A Has_Modifyer B, A Increase_When_Decrease B and A Decrease_When_Increase B make it possible to describe variations of parameters.
3.2. Ontology validation
With the concepts present in the ontology we can define events that happen during MRI examinations, for example:
Spin echo T2-weighted sequence:
    Spin_Echo_T2W has_modifier some ((TR and (Has_Unit some milisecond) and (Has_Value some float [>=2000]))
        and (TE and (Has_Unit some milisecond) and (Has_Value some float [>80])))
Radiofrequency pulses of a spin echo sequence:
    Spin_Echo Has_Parameter some Radiofrequency_Pulse and (RadioFrequency_Pulse Has_a Flip_Angle
        ((Flip_Angle Has_Value value =90) or (Flip_Angle Has_Value value =180)))
We extracted the DICOM headers of 10 MRI examinations with OSIRIX Métadonnées. The analysis shows that the concepts of the DICOM headers are well represented in the ontology. The problem is that MRI manufacturers do not use the same DICOM tags for the same concept.
4. Discussion
To our knowledge there is only one previous work on MRI and ontology. It concerns brain functional MRI [9] and addresses the whole fMRI process, not only MRI itself; nevertheless, it has already shown the need for an ontology in this domain. JEMRIS has also, by using XML, shown the value of semantic web techniques for describing physical processes. DICOM also needs to be improved with definitions and rules that an ontology could provide. Our ontology can increase semantic interoperability in MRI. An ontology has already been implemented on a PACS with that goal [10], but not for MRI examinations.
References
[1] B. Gibaud, The quest for standards in medical imaging. Eur J Radiol. May 31, 2010.
[2] Digital Imaging and Communication in Medicine: DICOM web site, available from: http://medical.nema.org/. Accessed January 2011.
[3] H. Benoit-Cattin, G. Gollewet, B. Belaroussi, H. Saint-Jalmes, C. Odet, The SIMRI project: a versatile and interactive MRI simulator, Journal of Magnetic Resonance, Vol. 173, pp. 97-115, 2005.
[4] http://www.jemris.org, Accessed January 2011.
[5] T. Stöcker, K. Vahedipour, D. Pflugfelder, N. Jon Shah. High-performance computing MRI simulations. Magnetic Resonance in Medicine, 64(1), 186–193, 2010.
[6] http://protege.stanford.edu, Accessed January 2011.
[7] http://www.osirix-viewer.com/, Accessed January 2011.
[8] GE. Boyle, M. Ahern, J. Cooke, NP. Sheehy, JF. Meany, An Interactive Taxonomy of MR Imaging Sequences, RadioGraphics November-December vol. 26 no. 6 e24, 2006.
[9] T. Nakai, E. Bagarinao, Y. Tanaka, K. Matsuo, D. Racoceanu, Ontology for FMRI as a biomedical informatics method. Magn Reson Med Sci. 2008;7(3):141-55. Review.
[10] DL. Rubin, P. Mongkolwat, V. Kleper, K. Supekar and DS. Channin, Medical Imaging on the Semantic Web: Annotation and Image Markup. In: 2008 AAAI Spring Symposium Series, Semantic Scientific Knowledge Integration, Stanford University, 2008.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-789
Validation of the openEHR Archetype Library by using OWL Reasoning Marcos MENÁRGUEZ-TORTOSAa,1 and Jesualdo Tomás FERNÁNDEZ-BREISa a Departamento de Informática y Sistemas, Facultad de Informática, Universidad de Murcia, CP 30100, Murcia, Spain
Abstract. Electronic Health Record architectures based on the dual model architecture use archetypes for representing clinical knowledge. Therefore, ensuring their correctness and consistency is a fundamental research goal. In this work, we explore how an approach based on OWL technologies can be used for such purpose. This method has been applied to the openEHR archetype repository, which is the largest available one nowadays. The results of this validation are also reported in this study. Keywords. Archetypes, openEHR, Ontology, Reasoning
1. Introduction
Domain knowledge based on archetypes plays a fundamental role in the achievement of semantic interoperability of Electronic Health Record (EHR) systems [1]. This means that archetypes should be the clinical knowledge unit exchanged by clinical systems in order to process the clinical data of patients. Consequently, the quality and accuracy of archetypes is a crucial issue. Archetypes need to be optimally designed for their purpose and considered trustworthy within their intended communities of use. This requires sound methodologies for designing archetypes, and rigorous and robust processes for validating them against their clinical evidence base. Quality criteria, governance practices for archetype development and editorial policies for certifying the quality of libraries of archetypes were defined by the Q-REC project (http://www.eurorec.org/RD/pastProject_Q-REC.cfm). However, the development of large libraries of archetypes is still relatively new and only openEHR (http://www.openehr.org) has a library large enough for applying quality criteria and methods. In [2], the need for formal methods for validating the design and content of archetypes has been identified. So far, few archetype-authoring tools implement techniques for assuring the quality of archetypes. The most significant case is the LinkEHR editor [3], which defines a formal framework for archetype validation. There, archetype constraints are expressed in an algebraic formalism and operations supporting archetype validation are defined and implemented. However, the drawback of this proposal is the absence of a knowledge-based representation of archetypes to perform semantic activities. This is a common issue in archetype editing tools since
they represent archetypes by using the Archetype Definition Language (ADL), which has a syntactic orientation. In this paper, we will not focus on the evaluation of the clinical correctness and usability of the archetypes but on using formal semantic methods for checking their technical correctness. A knowledge-based representation of archetypes capable of supporting validation and quality assurance would certainly be very useful for several reasons. First, knowledge models would be used for a proper representation of clinical knowledge, and this would facilitate the development of efficient knowledge management methods. Second, the combination of advanced semantic models with reasoning techniques would certainly reduce the effort required for implementing the quality assurance and validation methods. In this work, we use the Web Ontology Language (OWL) (http://www.w3.org/TR/owl2-syntax/), which is the W3C standard for the exchange of semantic content on the web. In particular, we use its description logics flavor, OWL-DL. Thus, in this work, an OWL-based method for checking the consistency of archetypes is presented. The possibilities and limitations of the approach will be illustrated through its application to the openEHR archetype library, thus the errors found in such library will be reported.
2. Methods 2.1. Semantic Representation of Archetypes Archetypes are detailed and domain-specific definitions of clinical concepts in the form of constrained combinations of the entities of a reference model in a tree-like structure [4]. Concepts in archetypes are characterized by the number of instances that can be part of the association they belong to. In addition, multivalued associations between concepts may be restricted in different ways. First, the cardinality of the association can be constrained by a range. Second, instances might be ordered according to the position of the definition of their concepts in the association. Finally, repeated instances can be allowed or not. An archetype can be defined as the specialization of another one. An archetype concept is defined as the specialization of an entity of the reference model or a concept in the parent archetype. The definition is based on constraints applied to attributes of such entity. Specialization does not mean reuse of the definitions as in object-oriented modeling, but it is a compliance relationship. In this way, if an archetype B specializes an archetype A, then all EHR extracts that are compatible with the archetype B must also be compatible with the archetype A. In addition to the above constraints, an archetype specialization might replace the type of a concept by a compatible type. Our OWL representation of the openEHR reference model was achieved by following the rules proposed by the OMG in the Ontology Definition Metamodel specification (ODM) (http://www.omg.org/docs/formal/09-05-01.pdf). Each concept is defined in our representation by means of an OWL class, and its constraints are defined using OWL-DL axioms. Concept identity is associated with the node id, which is used in the archetype definition to bind concepts and ontological definitions. The concepts in specialized archetypes might include additional annotations that guide the validation process. Those annotations indicate the name of the OWL class in the parent archetype that is being specialized, if any. That binding is based on the concept identifier.
An example is shown next. Figure 1 shows the first definitions of the archetype CLUSTER.inspection.v1. The upper part corresponds to the definition in ADL and the lower one corresponds to the definition in Manchester OWL Syntax (http://www.w3.org/TR/owl2-manchester-syntax/). An inspection is an unbounded cluster of unordered data items. It contains an optional cluster of normal statements. Each concept is defined in OWL by means of equivalency axioms. The constraints on multivalued associations are also translated into one class.

    CLUSTER[at0000] matches {  -- Inspection
        items cardinality matches {0..*; unordered} matches {
            CLUSTER[at0001] occurrences matches {0..1} matches {  -- Normal statements
                ...
    Class: CLUSTER_at0000
        EquivalentTo: CLUSTER and ARCHETYEPED_CLASS and (id value "at0000")
            and (op_items only COLLECTION_CLUSTER_at0000_items)

    Class: COLLECTION_CLUSTER_at0000_items
        EquivalentTo: COLLECTION and (ordered value false)
            and (id value "COLLECTION_CLUSTER_at0000_items")
            and (elements max 1 CLUSTER_at0001)

Figure 1. Excerpt of the archetype CLUSTER.inspection.v1 and its OWL representation
The archetype CLUSTER.inspection_tympanic_perforation.v1 specializes the previous archetype. It defines the concept normal statements in a different way, since an unbounded number of declarations is allowed. Figure 2 depicts an excerpt of that archetype and the OWL definition of the multivalued association.

    CLUSTER[at0000] matches {  -- Inspection
        items cardinality matches {0..*; unordered} matches {
            CLUSTER[at0001] occurrences matches {0..*} matches {  -- Normal statements
    Class: COLLECTION_CLUSTER_at0000_items
        EquivalentTo: COLLECTION and (ordered value false)
            and (id value "COLLECTION_CLUSTER_at0000_items")

Figure 2. Excerpt of the archetype CLUSTER.inspection_tympanic_perforation.v1
2.2. Detecting Inconsistent Specializations Using OWL Reasoners
The detection of inconsistencies in specializations is a major challenge in archetype editing. An archetype is correct if the set of constraints defined over the reference model and the parent archetype is valid. The specialization of archetypes does not imply inheritance, but the definitions in the specialized archetype have to be consistent with those of the parent. The semantics of archetype specialization is that the OWL semantics of the parent archetype subsumes that of the specialized archetype. OWL reasoners allow us to find incorrect constraints over the reference model. Thus, a concept is wrongly defined if the derived OWL class is unsatisfiable, that is, if the set of instances of such a concept does not conform to the reference model. OWL reasoners infer subclass and equivalence axioms between classes. In this way, checking
the correctness and consistency of a specialization consists in checking whether that subsumption is inferred. In the previous example, the specialization is not subsumed by the parent archetype because the specialized archetype allows any number of normal statements, that is, of CLUSTER[at0001]. The basic method does not provide much information about the causes of the inconsistency. Our solution to this issue was to isolate the errors. Our OWL representation permits the identification of the classes that violate the definition of the parent archetype. In our work, the precise identification of inconsistencies is based on the definition of additional support classes that allow the isolation of each archetype constraint. Figure 3 shows the representation of the order constraint for the multivalued association items of the concept inspection.

    Class: ORDER_COLLECTION_at0000_items
        EquivalentTo: ORDER and (id value "COLLECTION_at0000_items")
            and (order_value value "false")

Figure 3. Example of support class for precise error identification
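The following sketch illustrates, with the OWL API, the two checks described above: satisfiability of the class derived from a specialized concept and its subsumption by the class derived from the parent archetype. The file name, namespace and class names are illustrative assumptions; the structural reasoner merely keeps the example self-contained and would be replaced in practice by Pellet or FaCT++.

    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.*;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;
    import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;
    import java.io.File;

    public class SpecializationCheck {
        public static void main(String[] args) throws OWLOntologyCreationException {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            OWLDataFactory df = manager.getOWLDataFactory();
            // Ontology holding the OWL translation of the parent and the specialized archetype (file name illustrative).
            OWLOntology onto = manager.loadOntologyFromOntologyDocument(new File("inspection_archetypes.owl"));

            String ns = "http://example.org/archetypes#";  // illustrative namespace
            OWLClass parentItems = df.getOWLClass(IRI.create(ns + "Parent_COLLECTION_CLUSTER_at0000_items"));
            OWLClass childItems  = df.getOWLClass(IRI.create(ns + "Child_COLLECTION_CLUSTER_at0000_items"));

            // Replace with a Pellet or FaCT++ reasoner factory for full DL reasoning.
            OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(onto);

            // A concept is wrongly defined if its derived OWL class is unsatisfiable ...
            boolean satisfiable = reasoner.isSatisfiable(childItems);
            // ... and the specialization is invalid if the parent class is not inferred as a superclass.
            boolean subsumed = reasoner.getSuperClasses(childItems, false).containsEntity(parentItems);

            System.out.println("satisfiable=" + satisfiable + ", subsumed by parent=" + subsumed);
            reasoner.dispose();
        }
    }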
3. Results
Our method has been implemented in the tool Archeck, which is available at http://miuras.inf.um.es/archeck. Consistency errors are reported precisely by concept and attribute in the archetype definition. The tool has been implemented in Java and makes use of the openEHR Java tools (http://www.openehr.org/projects/java.html). Ontologies are processed with the OWL API (http://owlapi.sourceforge.net/) and we have used the reasoners Pellet (http://clarkparsia.com/pellet) and FaCT++ (http://owl.man.ac.uk/factplusplus/) for our validation experiment. Finally, the transformation of the reference model to OWL based on the ODM specification has been automated so that it can be applied to other reference models such as ISO 13606. Our validation experiment used the archetypes available in the openEHR repository (http://www.openehr.org/svn/knowledge/archetypes). The complete results are available at the previously mentioned website. Our analysis reported 12 inconsistent archetypes, all of them wrong specializations. The most common error is the incorrect definition of the occurrence constraint, which happens in 11 archetypes, including the running example. Another sort of inconsistency is also present in the archetype CLUSTER.inspection-tympanic_perforation.v1: CLUSTER[at0022] contains a DV_TEXT, but its parent concept allows only DV_CODED_TEXT. In addition, FaCT++ was faster than Pellet, with average processing times per archetype of 346 and 1160 ms, respectively. Some limitations of the approach are discussed next. When an optional concept has a maximum occurrence constraint, that constraint might be omitted in the specialization; OWL reasoners raise a validation error in such situations. To overcome this limitation, the maximum occurrence constraint of optional concepts in the parent archetype is included in descendant archetypes, if undefined. This modeling decision also solved the problem caused when a subclass axiom is inferred instead of an equivalence axiom between two concepts related by archetype specialization. This slightly modifies the semantics of the specialized archetype, but it does not affect the process of detecting inconsistencies.
Archetypes may include abstract or general concepts, which can be specialized by applying archetype modelling practices, such as the ones proposed by openEHR (http://www.openehr.org/wiki/display/spec/openEHR+Templates+and+Specialised+Archetypes). For instance, node identifiers in concept specializations should start with the parent node identifier, e.g. at0001.1 specializes at0001. This is considered in our method, although archetypes that do not follow such practice can be structurally correct. Our method keeps the identifier of the parent node in an annotation in the specialized concept.
4. Conclusions
In this work, we have proposed a knowledge-based representation of archetypes that is able to validate their definitions. We propose a representation of archetypes as OWL classes, so that clinical information and knowledge contained in EHR extracts might be semantically exploited. In this work, only some structural constraints have been addressed, since the current versions of FaCT++ and Pellet do not provide mechanisms for representing and implementing some axioms, especially constraints on some primitive types. The approach has been applied to the openEHR archetype repository, which is the largest available repository. The overall time performance of the process is acceptable. The tool has proved to be useful, since a number of archetypes in the openEHR repository have been found to be inconsistent. All the inconsistencies found in the repository are due to specialization errors. Archetypes comply with the reference model because the authoring tools guarantee this. This method might be interesting not only for validating archetypes but also for finding and analyzing bad archetype modeling practices, such as node identifiers in concept specializations. Finally, we are working on processing the bindings of archetype concepts to clinical terminologies such as SNOMED-CT and on extending the approach to ISO 13606. Expressing archetypes and terminologies in the same formalism will make the automatic classification of clinical archetypes possible and facilitate semantic interoperability in EHRs.
Acknowledgements: This work has been possible thanks to the Spanish Ministry of Science and Innovation through grants TSI2007-66575-C02-02 and TIN2010-21388-C02-02.
References
[1] European Commission, Semantic interoperability for better health and safer healthcare deployment and research roadmap for Europe. ISBN-13: 978-92-79-11139-6, 2009.
[2] Kalra D. EHR archetypes in practice: getting feedback from clinicians and the role of EuroRec. In: eHealth Planning and Management Symposium, 2007.
[3] Maldonado JA, Moner D, Bosca D, Fernández-Breis JT, Angulo C, Robles M.: LinkEHR-ED: A multi-reference model archetype editor based on formal semantics. International Journal of Medical Informatics 78(8) (2009) 559–570.
[4] Beale T. Archetypes: Constraint-based Domain Models for future-proof Information Systems. In: Eleventh OOPSLA Workshop on Behavioral Semantics: Serving the Customer, 2002.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-794
Grouping pharmacovigilance terms with semantic distance Marie DUPUCHab, Magnus LERCHc, Anne JAMETbd, Marie-Christine JAULENTab, Reinhard FESCHAREKe, Natalia GRABARf a Université Pierre et Marie Curie - Paris6, Paris, F-75006 France b INSERM, U872 eq. 20, Paris, F-75006 France c Consulting & Coaching, Berlin, Germany d HEGP, AP-HP, Paris, France e CSL Behring GmbH, Marburg, Germany f CNRS UMR 8163 STL, Université Lille 3, France Abstract. Pharmacovigilance is the activity related to the collection, analysis and prevention of adverse drug reactions (ADRs) induced by drugs or biologics. Besides other methods, statistical algorithms are used to detect previously unknown ADRs, and it was noted that groupings of ADR terms can further improve safety signal detection. Standardised MedDRA Queries are developed to assist retrieval and evaluation of MedDRA-coded ADR reports. Dependent on the context of their application, different SMQs show varying degrees of specificity and sensitivity; some appear to be over-inclusive, some might miss relevant terms. Moreover, several important safety topics are not yet fully covered by SMQs. The objective of this work is to propose an automatic method for the creation of groupings of terms. This method is based on the application of the semantic distance between MedDRA terms. Several experiments are performed, showing a promising precision and an acceptable recall. Keywords. Natural Language Processing, Medical informatics, Drug safety, Pharmacovigilance, Signal detection, Drug toxicity, Semantics, Terminology
1. Introduction Pharmacovigilance is the activity related to the collection, analysis and prevention of adverse drug reactions (ADRs) induced by drugs or biologics. ADRs are coded with terms from dedicated terminologies, e.g., WHO-ART (World Health Organization Adverse Reaction Terminology) and MedDRA (Medical Dictionary for Regulatory Activities). Safety signal detection – i.e., the detection of previously unexpected potentially causal associations between drugs and ADRs – depends on the quality and specific features of ADR coding. Besides traditional pharmacovigilance methods, statistical algorithms are increasingly utilized to detect signals in large safety databases [1, 2]. To improve signal detection, these methods benefit from groupings of related ADRs [3]. Indeed, the use of very specific terms for coding ADRs may cause a dilution of signals [4]. Thus, various hierarchical levels of MedDRA (PT, HLT, SOC) and manually built SMQs (Standardised MedDRA Queries) have been used for signal detection. The PT (Preferred Term) level - which is most often used in quantitative signal detection - corresponds mainly to specific ADRs, while HLTs (High Level Terms) and SOCs (System Organ Classes) are hierarchical levels above the PT level.
As for the SMQs, their objective is to link terms relevant to a medical condition. SMQs are designed by a group of experts, who start with the scientific definition of the medical condition of interest, followed by manual identification of relevant MedDRA terms [5]. This is a labour-intensive task. Evaluation studies of the SMQs have demonstrated that SMQs often present the highest sensitivity [6, 7], but they can be over-inclusive [7] and, because the reports found might lack specificity, their evaluation can be time-consuming. Finally, relevant PTs may be missing in SMQs [7], and several serious safety topics are not yet addressed. In order to ease and systematize the process of creating term groupings, automatic methods can be applied. One existing work proposes hierarchical groupings of ADRs [8], but this approach does not necessarily respect medical reasoning. For instance, in renal diseases, in addition to terms such as Acute nephritis and Insufficiency renal, which have a hierarchical relation between them, it can also be relevant to consider terms related to laboratory results or medical procedures. Semantic distance may lead to the creation of groupings which respect medical reasoning significantly better: it has previously been applied to subsets of terms from MedDRA [9] and WHO-ART [10]. In the WHO-ART related work [10], the obtained groupings demonstrated several types of relations: synonyms, antonyms, associated symptoms, abnormal laboratory tests, etc. However, no evaluation has yet been performed comparing system-generated groupings with existing MedDRA groupings (SMQs, HLTs, SOCs). Semantic distance has also been applied to other biomedical terminologies (Gene Ontology [11], MeSH and SNOMED CT [12], and UMLS [13]), with manual rating and evaluation of pairs of terms. We propose a better adaptation of semantic distance approaches for the creation of ADR term groupings. The whole set of MedDRA terms is used, and special attention is paid to comparing the obtained groupings with reference SMQs, which have become a widely accepted standard in pharmacovigilance organizations.
2. Material and Method
Material. The material used is derived from MedDRA 13.0 [14]: ontoEIM and SMQs. The ADR ontology ontoEIM [8] was created by projecting MedDRA onto SNOMED CT (SNCT) [15] through the UMLS [16]. Only 46% of MedDRA terms are aligned. The terminological representation of MedDRA terms is enriched: their structure is improved and becomes parallel to the structuring in SNCT, and terms receive formal definitions (on four SNCT axes: morphology, topography, causality and expression). Our second material, SMQs, are groupings of MedDRA terms related to a given medical condition (e.g., Acute renal failure). SMQs, consisting of MedDRA PTs and LLTs, are created to assist users in searching ADR reports related to a medical condition. Currently, 84 SMQs have been released. In our experiments, ontoEIM is the source we use to create groupings of ADR terms, while SMQs in their broad version are used as the gold standard for the evaluation.
Method. Semantic distance between terms is often computed within terminologies. It depends on the number of edges (or the shortest path) between two terms (e.g., four edges between the terms Abdominal abscess and Pharyngeal abscess in Fig. 1), although other factors may be taken into account. We present here the main step of the method, the computation of semantic distance [17] between MedDRA PT and LLT terms through ontoEIM. We use either a) the ADR terms only (which belong mainly to the clinical disorder axis D), or b) the ADR terms and their formal definitions.
Within the formal definitions, we use elements provided by two axes: morphology M (kind of abnormality) and topography T (anatomical localization). These axes are often involved in the definition of ADRs [18] and they are also frequently represented in ontoEIM, as in this example for the terms Abdominal abscess and Pharyngeal abscess, defined as follows:
– Abdominal abscess: M = Abscess morphology, T = Abdominal cavity structure
– Pharyngeal abscess: M = Abscess morphology, T = Neck structure
The shortest paths sp are computed between these two terms (axis D) and between their formal definitions (axes T and M). The weight of the edges is set to 1, and the value of each shortest path corresponds to the sum of the weights of all its edges. For this pair of terms we obtain the following sp values: spD = 4, spT = 10 and spM = 0.
Figure 1: The shortest paths sp between Abdominal abscess and Pharyngeal abscess computed on the three axes: clinical disorder (D), topography (T) and morphology (M).
The semantic distance is then computed, which allows us to generate a semi-matrix and to apply an ascending hierarchical classification for the creation of groupings of terms. The minimal threshold is set to 2. We perform several experiments in which we evaluate: one axis (D) vs three axes (D, M, T); all terms in SMQs vs only terms aligned with SNCT; and the best grouping for a given SMQ vs merged groupings. Groupings are compared with 9 SMQs (Acute renal failure, Agranulocytosis, Anaphylactic reaction, Cytopenia, Gastrointestinal haemorrhages, Peripheral neuropathy, Rhabdomyolysis, Severe cutaneous adverse reaction, Thrombocytopenia). Evaluation is performed with three classical measures: precision P (number of relevant grouped terms as a percentage of the total number of grouped terms), recall R (number of relevant grouped terms as a percentage of the number of terms in the corresponding SMQ) and F-measure F (the harmonic mean of P and R).
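A minimal sketch of the edge-counting step described above is given below: terms are nodes of an undirected graph, every edge has weight 1, and the shortest path is found by breadth-first search. The term names and edges in the example graph are illustrative only and do not reproduce the real ontoEIM hierarchy (in which, as Figure 1 shows, spD between Abdominal abscess and Pharyngeal abscess is 4).

    import java.util.*;

    // Illustrative edge-counting shortest path between two terms of a term graph (unit edge weights).
    public class TermGraph {

        private final Map<String, Set<String>> neighbours = new HashMap<>();

        void addEdge(String a, String b) {
            neighbours.computeIfAbsent(a, k -> new HashSet<>()).add(b);
            neighbours.computeIfAbsent(b, k -> new HashSet<>()).add(a);
        }

        /** Number of edges on the shortest path between two terms, or -1 if they are not connected. */
        int shortestPath(String from, String to) {
            if (from.equals(to)) return 0;
            Map<String, Integer> dist = new HashMap<>();
            Deque<String> queue = new ArrayDeque<>();
            dist.put(from, 0);
            queue.add(from);
            while (!queue.isEmpty()) {
                String current = queue.poll();
                for (String next : neighbours.getOrDefault(current, Collections.emptySet())) {
                    if (!dist.containsKey(next)) {
                        dist.put(next, dist.get(current) + 1);
                        if (next.equals(to)) return dist.get(next);
                        queue.add(next);
                    }
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            TermGraph axisD = new TermGraph();
            // Toy is-a edges; real data would come from the clinical disorder axis of ontoEIM.
            axisD.addEdge("term_A", "term_B");
            axisD.addEdge("term_B", "term_C");
            axisD.addEdge("term_C", "term_D");
            System.out.println(axisD.shortestPath("term_A", "term_D"));  // prints 3
        }
    }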
3. Results and Discussion
Figure 2 shows our results – mean, min and max values for precision, recall and F-measure – obtained from eight experiments. Four experiments are performed with the best grouping for each SMQ: i) using one axis with the complete set of SMQ terms (1a-bc) or with the aligned terms only (1a-ba); ii) using three axes with the complete set of SMQ terms (3a-bc) or with the aligned terms only (3a-ba). Four more experiments are similar to the above, but are performed with merged groupings. The first four experiments show a promising precision, whereas recall and F-measure are low. Although unsatisfactory for recall and F-measure, such precision nevertheless seems to meet the expectations of pharmacovigilance experts looking for highly specific groupings. The four additional experiments, where we merged the n-best
groupings for each SMQ together, were expected to increase the overall performance. As the graphs in Figure 2 show, we can indeed improve the recall with only small deterioration of precision. The F-measure is also improved. When we take into account only one axis (D) the performance is always better than using three axes (D, M, T). This seems to be due to the incompleteness of the available formal definitions. Another factor influencing the performance is related to the use of the complete set of SMQ terms vs the reduced set of only aligned SMQ terms. With the reduced set of terms, we observe a positive effect on the evolution of recall and F-measure (number of terms to be found is reduced), although we observe a negative effect on precision. Additionally, we indicate the min and max values for the three measures. The min-max intervals are visibly very large, which means that there is considerable performance variability for the different SMQs, and that probably various strategies should be used to achieve optimal results for all medical conditions of interest.
Figure 2: Mean, min and max values for precision, recall and F-measure.
4. Conclusion and Perspectives
The proposed method applies the semantic distance to the creation of groupings of ADR terms. Such groupings, especially when they show a high specificity, may be a useful tool for the detection of signals in pharmacovigilance databases. The method may also be helpful during the creation of new or the improvement of existing SMQs. Furthermore, it could be used to create groupings representing the same medical concept in different terminologies. This, in turn, would enable researchers to apply the same term groupings to safety databases independently of the terminology used for ADR coding. In our work, several experiments have been performed to compare system-generated term groupings with SMQs in their broad version. Future experiments will include comparison with the narrow versions of SMQs. A novel aspect of our work, the merging of the n-best groupings, allows recall to be improved without a significant deterioration of precision. Future studies may lead to an adjustment of thresholds and variables (edge weights, coefficients of axes) and to the identification of other factors which influence the quality of groupings. Besides, methods provided by Natural Language Processing may enrich and improve the groupings. Acknowledgments. This work was partly supported by funding from the European Community's Seventh Framework Programme (FP7/2007-2013) for the Innovative Medicine Initiative (IMI) under Grant Agreement [1150004]. The research leading to these results was conducted as part of the PROTECT consortium (Pharmaco-epidemiological Research on Outcomes of Therapeutics by a European ConsorTium, www.imi-protect.eu) which is a public-private partnership coordinated by the European Medicines Agency. Authors are thankful to other participants of this task (C. Bousquet, O. Caster, G. Declerck, R. Hill, A. Kluczka, X. Kurz, N. Noren, V. Pinkston, E. Sadou, J. Souvignet, T. Vardar), but views expressed are those
of the authors only.
References
[1] Bate A., Lindquist M., Edwards I., Olsson S., Orre R., Lansner A. & De Freitas R. (1998). A bayesian neural network method for adverse drug reaction signal generation. Eur J Clin Pharmacol, 54(4), 315–21.
[2] Meyboom R., Lindquist M., Egberts A. & Edwards I. (2002). Signal selection and follow-up in pharmacovigilance. Drug Saf, 25(6), 459–65.
[3] Hauben M. & Bate A. (2009). Decision support methods for the detection of adverse events in postmarketing data. Drug Discov Today, 14(7-8), 343–57.
[4] Fescharek R., Kübler J., Elsasser U., Frank M. & Güthlein P. (2004). Medical dictionary for regulatory activities (MedDRA): Data retrieval and presentation. Int J Pharm Med, 18(5), 259–269.
[5] CIOMS (August 2004). Development and Rational Use of Standardised MedDRA Queries (SMQs): Retrieving Adverse Drug Reactions with MedDRA. Report of the CIOMS Working Group, CIOMS.
[6] Mozzicato P. (2007). Standardised MedDRA queries: their role in signal detection. Drug Saf, 30(7), 617–9.
[7] Pearson R, Hauben M, Goldsmith D, Gould A, Madigan D, O'Hara D, Reisinger S, Hochberg A. (2009). Influence of the MedDRA hierarchy on pharmacovigilance data mining results. Int J Med Inform, 78(12), 97–103.
[8] Alecu I., Bousquet C., Jaulent MC. (2008). A case report: using SNOMED CT for grouping adverse drug reactions terms. BMC Med Inform Decis Mak, 8(S1), 4.
[9] Bousquet C., Henegar C., Louët A., Degoulet P. & Jaulent M. (2005). Implementation of automated signal generation in pharmacovigilance using a knowledge-based approach. Int J Med Inform, 74(7-8), 563–71.
[10] Iavindrasana J., Bousquet C., Degoulet P. & Jaulent M. (2006). Clustering WHO-ART terms using semantic distance and machine algorithms. In AMIA Annu Symp Proc, p. 369–73.
[11] Lord PW, Stevens RD, Brass A & Goble CA. (2003). Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation. Bioinformatics 19(10): 1275-1283.
[12] Caviedes JE, Cimino JJ. (2004). Towards the development of a conceptual distance metric for the UMLS. Journal of Biomedical Informatics 37:77-85.
[13] Al-Mubaid H, Nguyen HA. (2009). Measuring semantic similarity between biomedical concepts within multiple ontologies. Trans. Sys. Man Cyber Part C, 39(4):389–398.
[14] Brown E., Wood L. & Wood S. (1999). The medical dictionary for regulatory activities (MedDRA). Drug Saf., 20(2), 109–17.
[15] Stearns M., Price C., Spackman K. & Wang A. (2001). SNOMED clinical terms: overview of the development process and project status. In AMIA, p. 662–666.
[16] NLM (2008). UMLS Knowledge Sources Manual. National Library of Medicine, Bethesda, Maryland. www.nlm.nih.gov/research/umls/.
[17] Rada R., Mili H., Bicknell E. & Blettner M. (1989). Development and application of a metric on semantic nets. IEEE Transactions on Systems, Man and Cybernetics, 19(1), 17–30.
[18] Spackman K. & Campbell K. (1998). Compositional concept representation using SNOMED: Towards further convergence of clinical terminologies. In AMIA 1998, p. 740–744.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-799
The Archetype-Enabled EHR System ZK-ARCHE – Integrating the ISO/EN 13606 Standard and IHE XDS Profile Michael KOHLERa,1, Christoph RINNERa, Gudrun HÜBNER-BLODERb, Samrend SABOORb, Elske AMMENWERTHb, Georg DUFTSCHMIDa a Section for Medical Information Management and Imaging, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Austria b UMIT – University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
Abstract. The EHR system ZK-ARCHE automatically generates forms from ISO/EN 13606 archetypes. For this purpose the archetypes are augmented with components of the reference model to obtain so-called “comprehensive archetypes”. Data collected via the forms are stored in a list which associates each value with the path of the corresponding comprehensive archetype node, coded as a W3C XPath. From this list archetype-conformant EHR extracts can be created. The system is integrated with the IHE XDS profile to allow direct data exchange in an environment of distributed data storage. Keywords. EHR, ISO/EN 13606, archetype, form generation, archetype-conformant EHR extract
1. Introduction
The project EHR-ARCHE 2 aims to support health care providers in finding those contents within electronic health records (EHRs) that are relevant to their respective information needs, in view of the ever-growing information overload from chronically ill patients. It is based on an IHE XDS [1] distributed data storage architecture with a central metadata component. The EHR data are represented as fully-structured ISO/EN 13606 EHR extracts. The ISO/EN 13606 standard [2, 3] follows the dual model approach, which means that the representation of the EHR data is described by a reference model (RM) and a set of archetypes (ATs) [4]. In this paper we describe the EHR system ZK-ARCHE, which serves as the data source within EHR-ARCHE’s IHE XDS environment. Its purpose is to support the fast and convenient creation of archetyped EHR extracts, as well as their provision to the XDS repository and their registration within the XDS registry.
1 Corresponding Author: Spitalgasse 23, 1090 Vienna, Austria, [email protected].
2 See http://www.meduniwien.ac.at/msi/arche/
2. Method
The functionality of the system can be grouped into three main steps (see Figure 1): (a) After the user selects an archetype from the archetype repository, the system automatically generates a corresponding data collection form. (b) Data collected by means of the form are stored as archetyped EHR extracts. (c) The latter can then be transferred to the repository and registered. This step is supported by automatically retrieving the required IHE XDS metadata from the EHR extracts.
Figure 1. Functionality of the EHR system ZK-ARCHE
We developed the ZK-ARCHE system according to the classic Model-View-Controller (MVC) [5] pattern. In the following we describe how the different tasks are split up between the model, the view and the controller.
2.1. Model
The model is an instance of the Archetype Object Model (AOM). ATs only contain those attributes of RM classes that they constrain, i.e. they represent a “differential view” of the RM. When processing an AT we therefore have to additionally consider the RM. As suggested in [6] we use a so-called “Comprehensive AT” for this purpose, which augments the AT with the mandatory attributes of the RM classes that are not constrained by the AT. In the Comprehensive AT all referred ATs (slots) are included and augmented like the referring AT. To ensure unambiguous node-IDs, the node-IDs of the referred ATs are extended with the ID of their AT as a prefix. Predefined data (e.g. the Unified Code for Units of Measure Object Identifier) are filled in and every node of the Comprehensive AT is associated with a relative W3C XPath.
2.2. View
The model is visualized as a data collection form to allow user input. The structure and input options of the form are generically derived from the model. The RM classes referred to within the model are transformed to form widgets as follows:
• The COMPOSITION represents the view's root class and corresponds to the whole form.
• SECTIONs are displayed as individual pages within a tab-box.
• ENTRYs and CLUSTERs group their sub-elements as defined by the AT.
• For the data values held by ELEMENTs we support entry fields of the data types date, time, number, text (including selection lists) and boolean. Data values that are not entered by the user (e.g., fixed values prescribed by the archetype or system-provided metadata such as instance identifiers of RECORD_COMPONENTs) are automatically completed and cannot be edited in the form.
• If the comprehensive AT prescribes an occurrence > 1 for a node, the corresponding widget may be dynamically duplicated via a button in the form.
All data are internally held in a list of key-value pairs (see Table 1). Each value included in the EHR extract is associated with a key that consists of the absolute W3C XPath of the AT node holding the value. Starting from this list it is possible to create a complete AT-conformant EHR extract; no additional information such as the Comprehensive AT is required. This technique is therefore also appealing for integrating archetypes into existing EHR systems. It was also successfully applied in [7].

Table 1. Sample entries in the key-value list, which holds the data to be stored in the EHR extract. The creation time of the EHR extract (1st row) is generated by the system. The service start time (2nd row) and the heading of the lab findings SECTION (3rd row) are prefilled by the system and by the AT, respectively, and may be adapted by the user.

Key: /EHR_EXTRACT/time_created[@xsi:type='TS']/time
Value: 2011-02-15T14:23:00Z

Key: /EHR_EXTRACT/all_compositions[archetype_id='CEN-EN13606-COMPOSITION.discharge_summarization_note.v1/at0000' and @xsi:type='COMPOSITION'][1]/session_time[@xsi:type='IVL']/low[@xsi:type='TS']/time
Value: 2011-01-17T06:54:04Z

Key: /EHR_EXTRACT/all_compositions[archetype_id='CEN-EN13606-COMPOSITION.laboratory_report.v1/at0000' and @xsi:type='COMPOSITION'][1]/content[archetype_id='CEN-EN13606-SECTION.Laboratory_findings.v1/at0000' and @xsi:type='SECTION'][1]/name[@xsi:type='SIMPLE_TEXT']/originalText
Value: Laboratory findings
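To make the principle of the key-value list more concrete, the simplified sketch below builds an XML document from XPath-like keys. It is only an illustration under strong simplifying assumptions: predicates in the paths are stripped, repeated siblings and attributes are not handled, and the key set is a toy example, so it does not reproduce the actual ZK-ARCHE extract generation.

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.*;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.*;
    import java.util.*;

    public class ExtractBuilder {
        public static void main(String[] args) throws Exception {
            // Toy key-value list with XPath-like keys (illustrative values only).
            Map<String, String> keyValues = new LinkedHashMap<>();
            keyValues.put("/EHR_EXTRACT/time_created/time", "2011-02-15T14:23:00Z");
            keyValues.put("/EHR_EXTRACT/all_compositions/session_time/low/time", "2011-01-17T06:54:04Z");

            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
            for (Map.Entry<String, String> e : keyValues.entrySet()) {
                // Strip predicates ([...]) and walk/create the element chain for each path step.
                String[] steps = e.getKey().replaceAll("\\[[^\\]]*\\]", "").substring(1).split("/");
                Node current = doc;
                for (String step : steps) {
                    Node child = findChild(current, step);
                    if (child == null) {
                        child = doc.createElement(step);
                        current.appendChild(child);
                    }
                    current = child;
                }
                current.setTextContent(e.getValue());
            }
            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes");
            t.transform(new DOMSource(doc), new StreamResult(System.out));
        }

        private static Node findChild(Node parent, String name) {
            for (Node n = parent.getFirstChild(); n != null; n = n.getNextSibling()) {
                if (n.getNodeType() == Node.ELEMENT_NODE && n.getNodeName().equals(name)) return n;
            }
            return null;
        }
    }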
2.3. Controller
The controller creates the model for a selected AT and derives the view from the model. Documents collected via the view are converted to the key-value list, from which the AT-conformant EHR extract can be directly generated. The EHR extract can be stored either locally or in an IHE XDS repository. When storing the document in the IHE XDS repository, the 23 metadata elements required for registering the document are retrieved from the EHR extract and the EHR system. Some of these metadata, e.g., the EHR system
ID and the document ID, are set by the system or derived from the AT, e.g., classCode, language. Others, e.g., the service start and stop time, are entered by the user.
3. Results
ZK-ARCHE was implemented as a web application in Java using the Archetype Definition Language (ADL) parser of the openEHR foundation (http://www.openehr.org/projects/java.html) and the ZK framework (http://www.zkoss.org/). As a client it only requires a web-enabled browser without any plugins. For the communication with the IHE XDS environment the “Sense” infrastructure from ITH icoserve [8] was used. Within the EHR-ARCHE project we developed 128 ATs for the domain of diabetes treatment [9]. Here the ZK-ARCHE system provided valuable assistance by allowing the physicians involved in the AT design process to visualize each draft of an AT as a data collection form on the fly. Twelve of the 128 ATs are of type COMPOSITION and include the other ATs via slots. They are the starting point of the form generation process. The largest AT contains 119 slots. It results in a form with initially 745 input fields, which may be dynamically extended (e.g. by adding further table rows). This AT consists of 22,122 lines of code in ADL. The resulting Comprehensive AT has 35,998 lines of code and thus enlarges the AT by 63%. The creation of the corresponding form takes 5 seconds on an Intel Core 2 Quad Q9400 computer. Besides numerous test documents we also created 29 documents based on real, anonymised patient information. They were all successfully stored as AT-conformant EHR extracts and uploaded into the IHE XDS environment.
4. Discussion
In [10] a method for the automatic creation of forms from openEHR ATs is described. It does not, however, address how to create AT-conformant EHR extracts from the collected data. Under the name of Opereffa an open-source application is being developed which allows forms to be generated from openEHR ATs [11]. In [7] and [12] approaches to integrating openEHR ATs into existing EHR systems are presented. The tool LinkEHR [6] allows existing data to be mapped to ATs, in order to transform them into AT-conformant data. It supports the ISO/EN 13606, openEHR and HL7 Clinical Document Architecture (CDA) data models. However, the automatic generation of forms for ATs is not the focus of this tool. EHRflex [13] creates forms from ISO/EN 13606 ATs. Although it follows a slightly different approach, it provided helpful evidence for the implementation of our forms. Our ZK-ARCHE system extends the aforementioned tools and systems with its embedding into an IHE XDS environment. In [14] a health information framework is described which integrates a commercial EHR system into an IHE XDS architecture. It supports the exchange of free-text hospital discharge letters embedded in CDA documents. For the creation of the Comprehensive AT some assumptions had to be made to simplify the implementation. Optional attributes of the RM, which are not constrained by the AT, are not included in the Comprehensive AT. Slots may only be filled with a
single AT. ELEMENT nodes with unspecified data type in the AT (matches {*}) are interpreted as data type SIMPLE_TEXT. Without these assumptions the Comprehensive AT would further grow in relation to the AT. Because of the direct derivation of the form from the Comprehensive AT, the form usability depends on the modeling of the AT. Complex structures in the AT result in an equally complex form. This problem could be solved by adding a GUI design tool to the system, which allows the generated forms to be manually edited. Alternatively, an intermediate layer for describing the visualization of an AT could be added, similar to the description of an archetyped EHR extract’s visualization such as presented in [15]. Acknowledgements. The project EHR-ARCHE is funded by the Austrian Science Fund (Fonds zur Förderung der wissenschaftlichen Forschung FWF), Project number P21396.
References
[1] Integrating the Healthcare Enterprise (IHE), IT Infrastructure Technical Framework, vol. 1 (ITI TF-1, chapter 10), vol. 2 (ITI TF-2, chapter 3.14 and Appendix L), I. t. H. E. (IHE), Editor. 2007.
[2] European Committee for Standardization, EN 13606 Electronic healthcare record communication. 2007.
[3] International Organization for Standardization, ISO 13606 Electronic health record communication. 2008.
[4] Beale T. Archetypes, Constraint-based domain models for future-proof information systems. In: Eleventh OOPSLA Workshop on Behavioral Semantics: Serving the Customer. 2002. Seattle, Washington, USA: Northeastern University, Boston.
[5] Reenskaug T. Models-views-controllers. 1979, Technical note, Xerox PARC.
[6] Maldonado JA, Moner D, Boscá D, Fernández-Breis JT, Angulo C, Robles M. LinkEHR-Ed: A multi-reference model archetype editor based on formal semantics, Int J Med Inform 78(2009), 559-70.
[7] Chaloupka J. Automated integration of archetypes into electronic health record systems based on the Entity-Attribute-Value model, in Section for Medical Information Management and Imaging. 2009, Diploma thesis, Technical University of Vienna: Vienna.
[8] ITH icoserve. sense – smart eHealth solutions. 2010; Available from: http://www.ithicoserve.com/loesungen/sense-smart-ehealth-solutions/uebersicht/.
[9] Rinner C, Kohler M, Hübner-Bloder G, Saboor S, Ammenwerth E, Duftschmid G. Creating ISO/EN 13606 Archetypes based on Clinical Information Needs. Accepted at the EFMI Special Topic Conference STC 2011. 2011. Laško, Slovenia.
[10] Schuler T, Garde S, Heard S, Beale T. Towards Automatically Generating Graphical User Interfaces from openEHR Archetypes, in Ubiquity: Technologies for Better Health in Aging Societies, Hasman A, Haux R, VanderLei J, DeClercq E, FHR, eds. France, 2006, IOS Press: Amsterdam. p. 221-226.
[11] Arikan S, Shannon T, Ingram D. Opereffa. 2009; Available from: http://opereffa.chime.ucl.ac.uk/introduction.jsf.
[12] Chen R, Klein GO, Sundvall E, Karlsson D, Ahlfeldt H. Archetype-based conversion of EHR content models: pilot experience with a regional EHR system, BMC Med Inform Decis Mak 9(2009), 33.
[13] Brass A, Moner D, Hildebrand C, Robles M. Standardized and flexible health data management with an archetype driven EHR system (EHRflex), Stud Health Technol Inform 155(2010), 212-8.
[14] Alves B, Muller H, Schumacher M, Godel D, Abu Khaled O. Interoperability prototype between hospitals and general practitioners in Switzerland, Stud Health Technol Inform 160(2010), 366-70.
[15] van der Linden H, Austin T, Talmon J. Generic screen representations for future-proof systems, is it possible? There is more to a GUI than meets the eye. Computer Methods and Programs in Biomedicine 95(2009), 213-26.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-804
Using a Logical Information Model-Driven Design Process in Healthcare
Yu Chye CHEONG 1, Linda BIRD, Nwe Ni TUN, Colleen BROOKS
MOH Holdings Pte Ltd, Singapore
Abstract. A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability. Keywords. Logical Information Model, Semantic Interoperability, Healthcare Standards, Messaging
1. Introduction Most clinical applications can send or receive point-to-point messages using standards, such as HL7 version 2. However, for two or more clinical systems to share healthcare data unambiguously, the structure, the (reference) terminology and the semantics must all be agreed upon. This is a requirement for truly shareable Electronic Health Records (EHRs) and for downstream functionality, such as clinical decision support and care planning, that relies on semantic interoperability. The current lack of message standardisation in Singapore is hindering information sharing between healthcare clusters, sectors and facilities. HL7 v2 is the current de facto standard for healthcare messaging in Singapore – however, there are numerous different HL7 v2 message profiles being used, and widespread use of local extensions and locally defined Z-segments. As a result, national information exchange, querying and conformance quality testing have been difficult. These challenges are further exacerbated by disconnected terminology sets, which differ in their degree of pre-coordination due to differing local interfaces and information structures. To achieve bi-directional semantic interoperability [1] within this multi-profile environment, each clinical system must be able to produce and consume every message variation. Each system may therefore need to support dozens of interfaces to other systems. To address these interoperability issues, a logical information model is needed to harmonize (reference) terminology, semantics and structure. The Singapore Logical Information Model is a critical enabler for national initiatives such as the National Electronic Health Record (NEHR) system [2], which aims to consolidate distributed information from various institutions into a single electronic health record for each patient. 1
Corresponding Author: {yuchye.cheong, linda.bird, nweni.tun, colleen.brooks}@mohh.com.sg
2. Method The Singapore Logical Information Model (LIM) is an implementation-independent information model for healthcare data exchange. The LIM is based on a standards-based Logical Reference Model (LRM) and includes a set of ‘archetypes’, or reusable building blocks of clinical information. These archetypes can be further constrained into ‘templates’ to meet specific use cases. The LIM defines the structure, reference terminology and clinical content of healthcare data exchanges. The LIM can be expressed in a machine-readable format that can be used to generate a variety of artefacts such as exchange format specifications, conformance validation software, user interfaces and human-readable documentation. The LIM’s novel use of ‘design pattern’ constructs supports a diversity of pre-coordination approaches used by clinical systems to populate their messages using native interface terms. The process of developing the LIM and resulting artefacts is shown below in Figure 1.
Figure 1: LIM Design Process
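As a rough illustration of the archetype/template relationship described above, the sketch below (with hypothetical element names, not taken from the actual Singapore LIM) shows how a template narrows a reusable archetype for one specific use case.

```python
# Hedged illustration (hypothetical element names): an 'archetype' as a reusable
# building block of clinical information, and a 'template' as a use-case-specific
# constraint on that archetype.
problem_diagnosis_archetype = {
    "code":   {"datatype": "CD", "cardinality": "0..1"},
    "onset":  {"datatype": "TS", "cardinality": "0..1"},
    "status": {"datatype": "CD", "cardinality": "0..1"},
}

# A template for one message/document type tightens or excludes archetype elements.
discharge_summary_template = {
    "code":   {"cardinality": "1..1"},   # tightened from optional to mandatory
    "status": {"cardinality": "1..1"},   # tightened from optional to mandatory
    # 'onset' omitted: excluded for this use case
}
```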
Firstly, a Logical Reference Model (LRM) was developed to provide both modelling integrity and flexibility. It incorporates the following international standards: • ISO 13606-1 [3]: A profile of the ISO 13606 reference model is used, in which certain attributes were removed due to a lack of a tangible use case in our local context and to reduce modelling complexity. Some ISO 13606-1 constraints were also relaxed in the LRM – for example, some mandatory constraints were changed to optional where existing clinical systems could not support ISO 13606’s record-keeping metadata requirements (e.g. AUDIT_INFO.committer [0..1]), or where a standard default value has been defined for Singapore (e.g. RECORD_COMPONENT.synthesised: default=”FALSE”). Other changes made to ISO 13606-1 include the extension of FUNCTIONAL_ROLE to allow a Participation_Type and Participation_Time, and the extension of IDENTIFIED_ENTITY to support Singapore-specific demographic requirements.
• ISO 21090 data types [4]: A profile of the ISO 21090 data types is used, in which some data types (e.g. MO) were excluded, and the HXIT attributes were removed (except for validTimeLow and validTimeHigh, which are required for II).
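A minimal sketch of how such a profile could be recorded is given below; the attribute names are taken from the two bullet points above, but the data structure itself is only illustrative and is not the project's actual tooling.

```python
# Illustrative only: recording the relaxations and extensions applied to the
# ISO 13606-1 and ISO 21090 profiles in the LRM (structure is hypothetical).
iso13606_profile = {
    "AUDIT_INFO.committer": {"cardinality": "0..1"},              # relaxed to optional
    "RECORD_COMPONENT.synthesised": {"default": "FALSE"},         # national default value
    "FUNCTIONAL_ROLE": {"extensions": ["Participation_Type", "Participation_Time"]},
    "IDENTIFIED_ENTITY": {"extensions": ["Singapore-specific demographics"]},
}

iso21090_profile = {
    "excluded_datatypes": ["MO"],
    "HXIT_attributes_kept_for_II": ["validTimeLow", "validTimeHigh"],
}
```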
Besides ISO 13606, Singapore also evaluated the HL7 Reference Information Model (RIM) as the basis for the LRM. The HL7 v3 RIM artefacts (e.g. DIMs, CIMs and CMETs) require a high level of technical skill to interpret, thereby inhibiting widespread and effective clinician validation. There is also an overlap in the semantics of the RIM and SNOMED CT, which can lead to ambiguities. In view of these issues, it was decided that the RIM should not form the basis for the LRM. Secondly, a Logical Information Model (LIM), conforming to the LRM, was developed for Singapore’s healthcare information exchange. The requirements analysis for the LIM was based on two main approaches: • An evidence-based approach involved the analysis of existing healthcare information exchange. All relevant message profiles (primarily HL7 v2) in Singapore were fully documented in a consistent format, and validated against several million messages in conjunction with local implementation groups. Message types such as ADT (Admission/Discharge/Transfer), Pharmacy Order and Laboratory Results were covered. A number of local message profiles exist for each of these message types, each using a surprising diversity of representations for the same or similar semantics. • A clinician-driven approach was used to gather requirements for the NEHR and Discharge Summary documents. The LIM was developed as a set of reusable, clinical ‘archetypes’ for each ENTRY that needed to be exchanged (e.g. ‘Problem/Diagnosis’, ‘Pharmacy Order’). Archetypes were initially developed based on modelling the clinical semantics of the data that was currently being exchanged, rather than modelling the ‘intended’ meaning of the HL7 v2 message. In many cases, this resulted in a single HL7 v2 field being mapped to two different LIM elements (where the meaning of data included in this field differed between existing profiles), and two different HL7 v2 fields being mapped to the same LIM element (where the meaning of data used in a field of one profile was actually the same as that used in a different field of another profile). For each LIM element, mappings to the relevant local message profiles were developed to provide traceability back to the source requirement. The constraints defined on each LIM element were the lowest-common-denominator of all existing message profiles. For example, if the cardinality of a particular element was mandatory in one local profile, but optional in another, then the LIM element cardinality was set to optional, to cater for all existing information requirements. Record-keeping metadata was mapped to, and supported by, the LRM attributes. The LIM supports the binding of elements to both the national ‘reference terminology’ and various ‘interface terminologies’ used within local clinical systems. To support the diversity of pre-coordination allowed in clinical interface terms, ‘design patterns’ (DP) were introduced, based on the SNOMED CT concept model [5][6]. These design patterns allow more than one split between the information model and the terminology model to be represented, and then normalised for consistent, national querying. The approach used to normalise the interface terms is shown in Figure 2. A reverse process is also being developed to take the normalised terms and convert them back into a system-specific structure to enable bi-directional semantic interoperability.
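The lowest-common-denominator rule for element cardinalities described above can be sketched as follows; this is an assumption-laden illustration, not the authors' actual tooling.

```python
# Hedged sketch: deriving a LIM element cardinality as the lowest common
# denominator of the cardinalities found in the existing local message profiles.
def merge_cardinality(profile_cardinalities):
    """profile_cardinalities: list of (min, max) pairs, one per local profile."""
    lo = min(c[0] for c in profile_cardinalities)   # mandatory only if mandatory everywhere
    hi = max(c[1] for c in profile_cardinalities)   # allow the most repetitions seen anywhere
    return (lo, hi)

# A field that is mandatory (1..1) in one profile but optional (0..1) in another
# becomes optional in the LIM, to cater for all existing information requirements.
print(merge_cardinality([(1, 1), (0, 1)]))  # -> (0, 1)
```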
Figure 2. Use of Design Patterns
Thirdly, a series of use case-specific Templates were developed for each message or document type, as a set of constraints on the LIM. Templates have been developed for two main purposes: • To represent the mapping from an existing messaging profile to the LIM • To represent the set of elements and constraints that forms the national standard for a given message type – referred to as the National Data Definition Specification (NDDS). Each NDDS accommodates all data currently being exchanged for a given message type, and all anticipated future requirements. Lastly, from each NDDS one or more format-specific National Data Exchange Specifications (NXDS) are generated. These NXDSs include guidance on how each LIM element in the associated NDDS is mapped into the specific exchange format. NXDSs for two exchange formats have been developed – namely: • Logical XML (LXML): This exchange format has been developed as a direct XML serialisation of each LIM-based NDDS (called NXDS-LXML). This enables the exchange specification and conformance testing software to be generated in a completely automated way from the clinician-validated requirements, represented in the LIM. Use case-specific XML tag names have been used to make implementation easier, and enable simple conformance testing to be achieved using XML schema. However, to minimise the maintenance costs arising from changing business requirements, and provide a future-proof capability, the LXML is developed by extending the record-keeping components of the ISO 13606 reference model XML schema. This enables a pair of simple XSLT transforms to be written which take any LXML instance and convert it to/from a generic ISO 13606-1 XML schema. • HL7 v2: An HL7 v2.3.1 [7] NXDS specification (called NXDS-HL7 V2) has been developed for each NDDS. These national HL7 v2 profiles include Singapore-specific cardinalities, constraints and value domains. The HL7 v2 NXDSs minimise information loss from the NDDSs by including those entries
that do not fully map to standard HL7 v2 segments, into additional structured OBX and NTE segments (also referred to as ‘archetyped v2’). This approach allows additional information to be included in the HL7 v2 messages, while still maintaining conformance to the standard message segment tables.
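The fragment below is a purely hypothetical illustration (invented identifiers and values, pipe-delimited HL7 v2 syntax) of how an extra entry might be carried in an additional structured OBX segment with an accompanying NTE comment, in the spirit of the 'archetyped v2' approach described above.

```python
# Illustrative only (hypothetical identifiers and values): an additional structured
# OBX/NTE pair carrying a LIM entry that has no standard HL7 v2 field ("archetyped v2").
segments = [
    "OBX|1|CWE|LIM-ELEM-001^Example archetyped qualifier^L||12345^Example coded value^SCT||||||F",
    "NTE|1||Free-text comment attached to the archetyped entry",
]
print("\r".join(segments))
```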
3. Results and Discussion The LIM currently supports the generation of 6 main NDDSs – ADT, Pharmacy Order, Pharmacy Dispense, Laboratory Results, Radiology Results and ACIDS (Acute Care Inpatient Discharge Summary) – and 12 NXDSs (HL7 v2.3.1 and LXML for each NDDS). Variations to these message types (including smaller, constrained versions tailored to the NEHR requirements) can be achieved with little additional effort. The above LIM-based design approach has initially been implemented on an extremely small tooling budget. The LIM has been documented in the form of a spreadsheet, in which each ‘archetype’ is represented on a separate worksheet (using a predefined definitional format), and each ‘template’ is represented using a column of this worksheet (to document each template constraint against the associated data components). NDDSs are generated by auto-filtering the rows of the spreadsheets, based on the appropriate template constraints, HL7 v2 NXDSs are generated through manual mappings, and LXML NXDSs are generated by manually serialising the NDDSs into XML schema. The intention, however, is to transition to a comprehensive and highly automated tooling suite to fully realise the benefits of the above approach. We plan to implement terminology normalisation and denormalisation algorithms over the LIM’s design patterns, and a query language over the LIM semantics, which can be transformed to system-specific queries over multiple heterogeneous data sources. In conclusion, we believe that the establishment of the LIM is a critical step in achieving bi-directional semantic interoperability in Singapore, and ultimately achieving greater clinical safety in the interchange of healthcare information.
References
[1] Stroetmann VN (Ed.), Kalra D, Lewalle P, Rector A, et al. Semantic Interoperability for Better Health and Safer Healthcare, SemanticHEALTH Report, European Communities, 2009.
[2] Singapore National Electronic Health Record System.
[3] ISO 13606-1. Electronic Health Record Communication - Part 1: Reference Model, 2008.
[4] ISO 21090. Harmonized Datatypes for Information Interchange, 2009.
[5] IHTSDO. SNOMED Clinical Terms User Guide: January 2010 International Release, 2010.
[6] Spackman KA. Expressions and Context Patterns, IHTSDO, 2008.
[7] Quinn J (Tech. Chair). HL7 v2.3.1 Final Standard, 1999.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-809
SNOMED CT Implementation: Implications of Choosing Clinical Findings or Observable Entities
Anne Randorff RASMUSSEN 1, Kirstine ROSENBECK
Department of Health Science and Technology, Medical Informatics, Aalborg University, Denmark
Abstract. Internationally, it is a priority to develop and implement semantically interoperable health information systems.[1] One required technology is the use of standardised clinical terminologies. The terminology SNOMED CT has shown superior coverage compared to other terminologies in multiple clinical fields. The aim of this paper is to analyse SNOMED CT implementation in an Electronic Health Record (EHR). More specifically, the differences and consequences of applying clinical findings (CFs) as an alternative to observable entities (OEs) are analysed. Results show that CFs represent the content of the templates with better coverage, with more parent concepts and with a higher degree of fully defined terms than the OEs. We discuss the possibility of further evaluating the observable entity hierarchy to overcome a potential overlapping use of the two hierarchies. Keywords. Clinical terminology, Implementation, SNOMED CT, Observable entity, Clinical finding, Electronic Health Record
1. Introduction Multiple definitions of identical concepts are a challenge in data communication in health care. Use of standardised clinical terminologies has the potential to ensure unambiguous data definition. This is a prerequisite for achieving semantic interoperability between health information systems. Numerous clinical terminologies exist, but SNOMED CT has been shown to be superior regarding coverage in multiple clinical fields.[2,3] Therefore, SNOMED CT is chosen as the point of departure in this study. SNOMED CT is maintained and refined by the International Health Terminology Standardisation Organisation (IHTSDO). The organisation has published strategies and rules for the implementation of SNOMED CT to unify future implementation [4]. However, these are mostly theoretical, as only a few SNOMED CT implementation projects are documented.[5] Inevitably, there will be deviations between the way SNOMED CT is implemented in real-life projects and the theoretical recommendations. These deviations are important to report, since they increase knowledge on possible implementation strategies for SNOMED CT. To support this, Alan Rector has argued that the goal of clinical terminologies is implementation in clinical information systems. In addition, he doubted that all terms currently part of SNOMED CT were actually 1
Anne Randorff Rasmussen, Fr. Bajers Vej 7 C2-, DK-9220 Aalborg Ø, [email protected].
operational: “It is a significant clinical task to find out what situations the term is intended to cover which might actually be recorded in an operational record”.[6] This study is based on implementation of SNOMED CT in an EHR-system in the Northern Jutland Region in Denmark. The terminology is implemented alongside the configuration of the EHR-system. Our point of departure is two locally designed clinical templates, “nursing status” and “physical examination”. As they are clinical notes, a structured narrative approach was chosen. Structured narratives combine the familiarity, ease of use and freedom of expression of the narrative with the ability to browse data based on the gross structure represented by sections, fields and paragraphs.[7] In the specifications provided by IHTSDO, it is stated that the OE hierarchy in SNOMED CT should be used for coding sections, fields and paragraphs: “Concepts in this hierarchy can be thought of as representing a question or procedure which can produce an answer or a result.” 2 However, when mapping expressions from the respective templates to SNOMED CT, a lack of quality and comprehensiveness was found in the OE hierarchy. The aim of this paper is to systematically analyse the implications of applying CFs as an alternative to OEs when configuring the “nursing status” and “physical examination” templates.
2. Method
Figure 1. Overview of the method applied to compare OEs and CF in this study
The method applied is illustrated in Figure 1. Templates that represent two clinical domains are included in this study to achieve expressions with varied characteristics. The data set consists of a total of 34 clinical expressions: Physical examination (22 expressions) and Nursing status (12 expressions). 8 cases of compounded terms exist in the data set, e.g. 'skin and mucosa finding’, and ‘respiration and circulation’. 7 of these are found in ‘Nursing status’. The clinical expressions were mapped to SNOMED CT OEs and CFs respectively. When mapping the compounded terms we initially strived to find a pre-coordinated concept covering both expressions, otherwise post-coordination by combination is used to represent the expressions. The analysis framework was developed to systematically evaluate the usefulness of a set of SNOMED CT concepts. In the research literature there are rather few 2
http://www.ihtsdo.org/snomed-ct/snomed-ct0/snomed-ct-hierarchies/observable-entity/#c1513
methods for analysing SNOMED CT. An exception is [8], where an information-content measure is developed. This measure is based on the analysis of the parents, pathways and branches of SNOMED CT. However, we want to analyse retrieval and reuse potential; therefore, the exact measure of [8] is not applicable, but our approach is similarly based on these core characteristics of SNOMED CT. In the analysis, we compared and assessed the potential of each hierarchy to represent the clinical expressions. The analysis is conducted within the following areas: content coverage, level of granularity and concept definition. These are described in detail below. The content coverage is analysed to assess whether concepts in SNOMED CT are able to represent the clinical expressions. The use of pre- and post-coordinated concepts is also stated. The level of granularity is examined, defined as the level of detail associated with each concept. Hence, the number of parent concepts is measured, as shown in Figure 2. This measure is chosen as it expresses the potential of the concept to be used for data retrieval purposes, since search strategies can be based on either one of the parents or the concept itself. The parents make it possible to retrieve data at a more granular level based on inherited meaning only. A Wilcoxon signed-rank test is performed to assess whether there is a significant difference between the number of parent concepts in the CF and OE hierarchy. This test is used instead of a paired t-test, as we cannot assume a normal distribution.
Figure 2. A) 4 parent nodes and B) 8 parent nodes. Identical parent concepts are only included once.
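The paired comparison described above can be reproduced in a few lines; the sketch below uses scipy's Wilcoxon signed-rank test on invented parent counts, not the study's actual data.

```python
# Minimal sketch (illustrative values only): paired comparison of the number of
# parent concepts per clinical expression in the OE and CF hierarchies.
from scipy.stats import wilcoxon

oe_parents = [4, 5, 5, 7, 12, 6]    # hypothetical NoP values for the OE mappings
cf_parents = [5, 7, 9, 16, 17, 13]  # hypothetical NoP values for the CF mappings

stat, p_value = wilcoxon(oe_parents, cf_parents)
print(f"Wilcoxon signed-rank statistic={stat}, p={p_value:.3f}")
```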
The concept definition is examined, defined as whether the concept is primitive or fully defined. A concept is primitive when its logic definition does not sufficiently express its meaning. Further, primitive concepts do not have the defining relationships needed to computably distinguish them from their parent or sibling concepts.[9] For fully defined concepts, aggregated data can be based on characteristics that are stated by other expressions than the inherited meaning.
3. Results The results of analysing the content coverage, level of granularity and concept definition are presented in the following tables. The content coverage is shown in Table 1. A coverage of 100% for the CF and 94% for the OE hierarchy is achieved. Post-coordination is used more frequently to represent the clinical expressions in OE than in CF, and more pre-coordinated concepts for the compounded expressions were found in the CF hierarchy than in the OE. In CF the
compounded expressions 'Skin and mucous membranes' and 'sleep and rest' exist. In the OE hierarchy these expressions exist as separate concepts only. The tables (Table 2a and Table 2b) show the results of analysing the level of granularity and concept definition for each expression.
Table 1. SNOMED CT coverage for the clinical expressions in nursing status and physical examination.
        Pre-coordination   Post-coordination   Total
CF      31 (91%)           3 (9%)              34/34 (100%)
OE      25 (74%)           7 (20%)             32/34 (94%)
Table 2a. Results assessing the number of parents (NoP) and the concept definition (Def), for both the OE and the CF mapping, of each nursing status expression: functional performance and activity, nutrition, defecation, micturition, respiration, circulation, skin, mucosa, pain/sensation, sleep (and rest), rest, psychosocial, cognitive function, communication, value belief, sexuality, and reproduction.
Table 2b. Results assessing the number of parents (NoP) and the concept definition (Def), for both the OE and the CF mapping, of each physical examination expression: general, physics, head and neck, mental state, skull, eye/vision, oral cavity, ear, nose, neck, truncus, lymphoid system, endocrine, cardiac auscultation, respiratory auscultation, abdominal, breast, urogenital, rectum, back structure, limb, neurological, and skin.
The Wilcoxon signed-rank test shows a significant difference in the number of parent concepts for the two hierarchies, with p=0.031. The average number of parents for the concepts in OE is 5.15 and for CF 6.33; looking at the concepts for the physical examination only, the difference in the number of parents increases. This means that for the whole dataset, and especially for the physical examination, the CF hierarchy has more granular concepts than the OE hierarchy. A similar difference is obtained when comparing the level of definition for each concept. All OEs are primitive, whereas 56% of the CFs are fully defined. Also, it is observed that the proportion of fully defined concepts is higher if we look at the physical examination alone. 4. Discussion In this study a comparison was performed by mapping concepts from two clinical templates to concepts from the OE and the CF hierarchies. The aim was to investigate whether the CFs contribute with a higher quality and comprehensiveness than the OEs.
Existing literature lacks focus on the usage of the specific SNOMED CT hierarchies. The main objective in the scientific literature is to investigate the potential of SNOMED CT to cover the content of different clinical domains.[5] The results show that the needed concepts can be found in both the OE hierarchy (94%) and the CF hierarchy (100%), which can potentially induce ambiguous encoding. The problem is not merely redundant concepts, but that two hierarchies can be used interchangeably. IHTSDO suggests that each hierarchy has a certain purpose, but our study and a study by Lee et al. suggest that the stated purposes are not clear enough to allow consistent mapping.[10] Lee et al. find overlaps between e.g. the “clinical finding” and “morphologic abnormality” hierarchies when mapping a palliative care dataset. To keep the mapping consistent, they introduce guidelines. However, local guidelines cannot handle terminology inconsistencies between organisations. Therefore, in time, improving the consistency of SNOMED CT itself would be preferable. Improving the consistency to avoid redundant use of hierarchies is not a simple task. In our study, it is suggested that the parameters content coverage, granularity and definition might be useful in determining which hierarchy reaps most benefits in terms of retrieval and reuse purposes. Using these parameters and the “nursing status” and “physical examination” datasets, the CF hierarchy is superior to the OE hierarchy. However, these are only two clinical examples, and even among these there are differences in the results. To the authors' knowledge, similar studies examining the same dataset using two different SNOMED CT hierarchies are not available. Therefore, more studies are needed on the implementation of SNOMED CT with a focus on analysing the usage of the hierarchies. Acknowledgement. This research is part of our PhD studies, which are co-financed by Region Northern Jutland, CSC Scandihealth and Trifork A/S.
References
[1] Garde S, Knaup P, Hovenga EJS, Heard S. Towards Semantic Interoperability for Electronic Health Records. Methods Inf Med 2007;3:332.
[2] Brown SH, Rosenbloom ST, Bauer BA, et al. Direct Comparison of MEDCIN® and SNOMED CT® for Representation of a General Medical Evaluation Template. AMIA Annu Symp Proc 2007:75.
[3] Chute CG, Cohn SP, Campbell KE, Oliver DE, Campbell JR. The content coverage of clinical classifications. For The Computer-Based Patient Record Institute's Work Group on Codes & Structures. JAMIA 1996;3(3):224.
[4] International Health Terminology Standards Development Organisation. IHTSDO. Available at: http://www.ihtsdo.org/. Accessed 11/18, 2009.
[5] Cornet R, de Keizer N. Forty years of SNOMED: a literature review. BMC Med Inform Decis Mak 2008 Oct 27;8 Suppl 1:S2.
[6] Rector AL. Clinical terminology: why is it so hard? Methods Inf Med 1999 Dec;38(4-5):239-252.
[7] Johnson SB, Bakken S, Dine D. An Electronic Health Record Based on Structured Narrative. Journal of the American Medical Informatics Association (JAMIA) 2010;21:54.
[8] Cornet R. Information-content-based measures for the structure of terminological systems and for data recorded using these systems. Stud Health Technol Inform 2010:1075.
[9] IHTSDO. SNOMED Clinical Terms. User Guide. January 2010.
[10] Lee DH, Lau FY, Quan H. A method for encoding clinical datasets with SNOMED CT. BMC Med Inform Decis Mak 2010;10:53.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-814
What is the Coverage of SNOMED CT® on Scientific Medical Corpora?
Dimitrios KOKKINAKIS 1
Centre for Language Technology, Department of Swedish Language, the Swedish Language Bank, University of Gothenburg, Gothenburg, Sweden
Abstract. This paper reports on the results of a large-scale mapping of SNOMED CT on scientific medical corpora. The aim is to automatically assess the validity, reliability and coverage of the Swedish SNOMED CT translation, the largest, most extensive available resource of medical terminology. The method described here is based on the generation of predominantly safe-harbor term variants which, together with simple linguistic processing and the already available SNOMED term content, are mapped to large corpora. The results show that term variations are very frequent, and this may have implications for technological applications (such as indexing and information retrieval, decision support systems, text mining) using SNOMED CT. Naïve approaches to terminology mapping and indexing would critically affect the performance, success and results of such applications. SNOMED CT does not appear well suited for automatically capturing the enormous variety of concepts in scientific corpora (only 6.3% of all SNOMED terms could be directly matched to the corpus) unless extensive variant forms are generated and fuzzy and partial matching techniques are applied, with the risk of allowing the recognition of a large number of false positives and spurious results. Keywords. SNOMED CT; Scientific Medical Corpora; Quality Assessment; Term Validation; Term Variation; Term Mapping
1. Introduction Term variation is considered an obstacle to systematic knowledge acquisition and to many NLP applications [1]. The aim of this work is to develop and apply techniques for automatically mapping structured concepts from the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) to unrestricted medical texts to evaluate the validity and reliability of the terminology content [2]. The textual material used in this work is based on large samples of scientific medical corpora, covering a broad spectrum of medical subfields and is not limited to clinical data. The corpus is used as a test bed for exploring and measuring coverage and quality related to the concept instances. Our approach aims to give an empirical indication of the quality of terms and identify potential problems or shortcomings related to the choice of terminological forms in the resource. Therefore, it applies a number of processing steps that intend to overcome most of the potential limitations and deficiencies identified, e.g. by generating term variants and alternative surface realizations of concepts. Each generated variant found in the corpora is linked to its recommended form via its unique 1
Corresponding author: Dimitrios Kokkinakis, Centre for Language Technology, Dept of Swedish, the Swedish Lang. Bank, Box 200, 405 30 Gothenburg, Sweden; E-mail: [email protected].
concept id-number as stated in SNOMED CT and can be queried on-line.
2. Background There have been a number of studies described in the literature to measure the coverage of SNOMED CT with respect to textual samples in different medical/clinical subdomains and diagnosis/problem lists, and also to devise ways to augment its content [3]. A characteristic of these studies has been the high percentage of agreement or coverage obtained between the terminological resource and the textual data. In [4] it is shown that the majority of entries in diagnosis/problem lists were found in SNOMED CT (88.4%), while of the 145 missing terms, only 20 represented significant missing concepts, resulting in a concept coverage of 98.5%. In the work by [5] it is emphasized that SNOMED CT has promise as a coding system for clinical problems. In [6] 85% of the clinically significant information was captured, while the results in [7] showed that of the 4996 problems in a test set, SNOMED CT could correctly identify 4568 terms. [8] describe a system combining vector space and regular expression modules, with a top precision of 82.3% on recognizing SNOMED terms. Finally, [9], using Case Report Forms, report that most of the core clinical concepts were covered (88%); however, far fewer of the concepts were fully covered (that is, where all aspects of the text item could be complete without post-coordination; 23%). In addition, the majority of the concepts (83%) required post-coordination to better capture complex clinical concepts.
3. Materials and Methods 3.1. Scientific Medical Corpora A large Swedish scientific medical corpus is used as a reference for measuring the coverage and quality related to the concept instances of the Swedish translation of SNOMED CT. The corpus comprises the electronic archives of the Swedish Medical Association Journal, Läkartidningen, one of the most reliable sources for comprehensive and up-to-date scientific medical knowledge in Swedish. The material covers a broad spectrum of medical subfields, including special issues on different topics such as Sexually Transmitted Diseases, Oncological Medicines and Medical Ethics. Since 1996 the archive’s content exists in digital format, including XML-annotated versions. Table 1 shows some characteristics of the corpus, which currently comprises 28,113 different articles and approx. 26.5 million tokens.
Table 1. Corpus characteristics
Publ. Year   Articles   Tokens
1996         2345       2,050,000
1997         2116       2,007,000
1998         2089       2,223,000
1999         1779       2,096,000
2000         1908       2,027,000
2001         1940       2,122,000
2002         2159       2,044,000
2003         2151       1,784,000
2004         2201       1,867,000
2005         1803       1,535,000
2006         1941       1,615,000
2007         2004       1,676,000
2008         1908       1,782,000
2009         1769       1,735,000
3.2. SNOMED CT® (Swedish) SNOMED CT is a large and systematically organized, computer-processable collection of health and social care terminology. It is also a common computerized language, a so-called compositional concept system in which concepts can be specialized by combinations with other concepts, e.g. by post-coordination [10], which describes the representation of a clinical meaning using a combination of two or more concept identifiers. According to the international release of July 2008, SNOMED CT includes more than 315,000 active concepts (for English), organized into 19 top-level hierarchies, containing over 806,000 English language descriptions and more than 945,000 logically-defining relationships. The first Swedish release of April 2010, provided by the Swedish National Board of Health and Welfare (Socialstyrelsen), included 278,000 concepts; disorders being the largest group with >63,000 concepts, followed by procedures with >48,000 concepts. 3.3. SNOMED CT Pre-Processing Three types of pre-processing have taken place. All terms have been tokenized and converted to lower case, while all homonyms, that is, terms that happen to have the same surface form as another term that possibly belongs to some other hierarchy, have been merged into a single term with all individual identifiers joined. For instance, the term blodprov ('blood sample') belongs to either Specimen#119… or Procedure#396…; according to the previous discussion, a new merged term has been created, with all of its characteristics preserved: blodprov#Specimen#119…#Procedure#396…. Moreover, 3.8% of the SNOMED terms are over 10 tokens long and were not used, since they are not suitable for automatic mapping using the methodology followed here. 3.4. Generation of Term Variants Even within the same text, a term can take many different forms. [11] discuss that a term may be expressed via various mechanisms including orthographic variation, usage of hyphens and slashes, lower and upper cases, spelling variations, various Latin/Greek transcriptions and abbreviations. Some of the many possible variation types are further described in [2: 161-219]. This rich variety for a large number of term forms is a stumbling block for many applications, as these forms have to be recognized, linked and mapped to terminological and ontological resources; for a review on normalization strategies see [12]. Moreover, a number of necessary adaptations of the terminological content have to take place in order to produce a format suitable for text processing, for instance indexing. This is a necessary step, since many term occurrences cannot be identified in text if straightforward dictionary/database lookup is applied. We provide here an outline of the various ways we have implemented to deal with term variation:
morphological: such as the generation (or programmatic identification) of inflection and derivational patterns, e.g. plural and participle forms, etc.
structural variations: capture the link between a term, e.g. a compound noun, and a noun phrase containing a right-hand prepositional phrase, such as skin neoplasm vs. neoplasm of/in/on the skin. Note that compounds in Swedish are written as a single word, i.e. hudtumör (‘skin neoplasm’), which implies that compound segmentation takes place to distinguish head and modifier(s).
compounding: the inverse of the previous; a noun phrase containing a right-hand prepositional phrase, or a two-word term, is re-written as a single-word compound, e.g. glomerulär filtration (‘glomerular filtration’) becomes glomerulusfiltration and tumör i tibia (‘tumor of tibia’) becomes tibiatumör.
splitting: a single-word compound is split into its head and modifier(s). This way we also capture a number of spelling mistakes, i.e. split compounds that should have been written as a single word, e.g. synovialled ('synovial joint') and synovial led.
modifications, orthographic variation, substitutions and types of exclusions: these are transformations that associate a term with a variant in which the head word or one of its arguments has an additional modifier or hyphenation, e.g. b cell vs. b-cell; substitution of Arabic for Roman numbers, e.g. NYHA type 2 vs. NYHA type II; deletion of embedded acronyms or parts of lengthy multiword terms (function words, punctuation), e.g. diabetes mellitus type 1 vs. diabetes type 1 vs. type 1 diabetes.
coordination: a transformation that associates two or more terms with a composite variant. Sometimes such entities are coordinated by their heads, e.g. interleukin-1 och -6, actually interleukin-1 och interleukin-6 (‘interleukin-1 and 6’), and sometimes by their arguments, e.g. hjärt- och njursvikt, actually hjärtsvikt och njursvikt (‘heart and kidney failure’). Using compound segmentation we try to associate a head or modifier of a segmented form with its elliptic counterpart.
partial matching: related to the previous; by applying automatic compound segmentation to all text tokens not already captured by SNOMED, we try to match subparts of words, e.g. insulinnivå (‘insulin level’); here the compound word has been segmented into insulin+nivå. This way we can capture at least a part, here insulin, that appears in the head or modifier position or both, even though no occurrence of the compound as such is present in the terminology.
acronyms: recognized using various regular expression patterns, see [13].
(near) synonyms: manually added and flagged as new. For instance, for läkemedel (‘drug’) we have added the near synonyms preparat and farmaka.
spelling variants and fuzzy matching: so far we have been restrictive with fuzzy matching due to the risk of capturing a lot of false positives; spelling variants have been semi-automatically added though, e.g. koloskopi vs. coloskopi (an illustrative sketch of two of these mechanisms is given at the end of this section).
3.5. Filtering of Term Variants A small number of existing terms, as well as a number of their generated variants, are problematic with respect to ambiguity with the general vocabulary and have been either completely removed from the term list or filtered out after annotation. The first group consists of terms of length 1-2 characters (208 terms), e.g. ja, -3 and II. The second group consists of terms of length 3-5 characters, which we have manually inspected and some, predominantly qualifiers, removed, since they are also common in the general vocabulary, e.g. eller (‘or’), man, dollar and under.
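The sketch below gives a small, assumption-laden illustration of two of the variant mechanisms listed above (hyphenation variants and Arabic/Roman numeral substitution); it is not the system's actual implementation.

```python
# Illustrative sketch of two variant-generation mechanisms described above.
ARABIC_TO_ROMAN = {"1": "I", "2": "II", "3": "III", "4": "IV", "5": "V"}

def hyphen_variants(term):
    # e.g. "b cell" <-> "b-cell"
    return {term, term.replace(" ", "-"), term.replace("-", " ")}

def numeral_variants(term):
    # e.g. "NYHA type 2" <-> "NYHA type II"
    variants = {term}
    for arabic, roman in ARABIC_TO_ROMAN.items():
        variants.add(term.replace(f" {arabic}", f" {roman}"))
    return variants

print(hyphen_variants("b cell"))
print(numeral_variants("NYHA type 2"))
```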
4. Results and Discussion The total number of SNOMED annotations obtained, including the term generation and filtering process, was 2,783,216; this corresponds to 7.86% of the SNOMED terms (20,114 unique terms) being identified in the corpus. The baseline, i.e. the SNOMED terms matched in the corpus without any processing, was less than half:
1,057,235; this figure implies that 6.3% of the SNOMED terms could be found in the corpus by a direct-match approach. Another large group of terms, as probably expected, were inflected forms (683,206). Out of all the annotations, 28.4% were partial ones; e.g. laktatacidos (‘lactic acidosis’) is matched only through its parts: both parts are in SNOMED, but not the compound itself. We have also manually and in detail examined the results of 20 randomly chosen, annotated articles in order to get an indication of what types of terms have been left unrecognized and whether there were any ambiguous terms recognized, despite the filtering process described earlier. 1,279 annotations were obtained (using the enhanced SNOMED; 36% partial matches); 48 potential terms were left unmatched, not in SNOMED (e.g. otorré; mitokondrier; nucleus accumbens [the synonym accumbenskärna exists, though]). More than 8 were wrong due to ambiguity (e.g. ‘body’ actually referring to a person named Kropp). In general, several problems have to do with the existence and mixture of laymen's forms and anglicisms. Neither stemming nor coreference (e.g. “…chromosome 17. This chromosome is…”) were used. Stemming usually results in conflated, ambiguous terms. Perhaps such processing could have increased the coverage a bit more, with the risk of a large number of false positives. Currently the Swedish SNOMED CT does not contain synonymous term variants, only recommended ones. To be a useful resource for practical applications it needs to be enhanced with synonyms, tightly integrated with the existing recommended terms. Obviously, applications using SNOMED CT should also provide appropriate mechanisms for coping with text and term variation and disambiguation. Our results showed that simple means can enhance the recognition of term variants that would otherwise have been neglected during the automatic processing.
References
[1] Nenadić G, Ananiadou S, McNaught J. Enhancing automatic term recognition through recognition of variation. The 20th Conf. on Computational Linguistics - COLING. Switzerland, 2004.
[2] Jacquemin C. Spotting and Discovering Terms through Natural Language Processing. MIT Press, 2001.
[3] Patrick J, Wang Y, Budd P. An automated system for conversion of clinical notes into SNOMED clinical terminology. ACSW '07 5th Australasian symposium on ACSW frontiers - Volume 68, 2008.
[4] Wasserman H, Wang I. An applied evaluation of SNOMED CT as a clinical vocabulary for the computerized diagnosis and problem list. AMIA Annu Symp (2003):699-703.
[5] Penz JF, et al. Evaluation of SNOMED coverage of Veterans Health Administration terms. Stud Health Technol Inform (2004);107(Pt 1):540-4.
[6] Lussier YA, Shagina L, Friedman C. Automating SNOMED coding using medical language understanding: a feasibility study. Proc AMIA Symp (2001), 418–422.
[7] Elkin PL, et al. Evaluation of the Content Coverage of SNOMED CT: Ability of SNOMED Clinical Terms to Represent Clinical Problem Lists. Mayo Clin Proc 81(6), (2006), 741-748.
[8] Ruch P, Gobeill J, Lovis C, Geissbühler A. Automatic medical encoding with SNOMED categories. BMC Medical Informatics and Decision Making, 8 (Suppl 1), (2008).
[9] Richesson RL, Andrews JE, Krischer JP. Use of SNOMED CT to represent clinical research data. JAMIA 13(5) (2006), 536-46.
[10] Spackman K, Gutai J. Compositional Grammar for SNOMED CT Expressions in HL7 V3. 2008.
[11] Tsujii J, Ananiadou S. Thesaurus or Logical Ontology, Which One Do We Need for Text Mining? J. of Language Resources and Evaluation (2005) 39:1, 77-90.
[12] Krauthammer M, Nenadic G. Term identification in the biomedical literature. J Biomed Inform 37(6):512-26, 2004.
[13] Kokkinakis D, Dannélls D. Recognizing Acronyms and their Definitions in Swedish Medical Texts. 5th Language Resources and Evaluation (LREC). Genoa, Italy, pp. 1971-1974, 2006.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-819
Assisting the Translation of the CORE Subset of SNOMED CT Into French Hocine ABDOUNEa, Tayeb MERABTIb,c, Stéfan J. DARMONIb,c, Michel JOUBERTa,1 a LERTIM, Faculty of Medicine, University of Aix-Marseille 2, France b CISMeF, Rouen University Hospital, France c TIBS, LITIS EA 4108, Institute of Biomedical Research, University of Rouen, France Background: the Core Subset of SNOMED CT is part of the UMLSCore Project dedicated to study problem list vocabularies. SNOMED CT is not yet translated into French. Objective: to propose an automated method to assist the translation of the CORE Subset of SNOMED CT into French. Material: the 2009 AA versions of the CORE Subset of SNOMED CT and UMLS; use of four French-language terminologies integrated into the UMLS Metathesaurus: SNOMED International, ICD10, MedDRA, and MeSH. Method: an exact mapping completed by a close mapping between preferred terms of the CORE Subset of SNOMED CT and those of the four terminologies. Results: 89% of the preferred terms of the CORE Subset of SNOMED CT are mapped with at least one preferred term in one of the four terminologies. Discussion: if needed, synonymous terms could be added by the means of synonyms in the terminologies; the proposed method is independent from French and could be applied to other natural languages. Keywords. Problem lists, SNOMED CT, UMLS, Translations
1. Introduction Weed first introduced and has since popularized the concept of the problem-oriented medical record [1]. The problem-oriented record consists of four essential elements: the data base, problem list, detailed plans, and structured progress notes dealing with each of the identified problems. Problem lists data are often used to drive functions other than clinical documentation, e.g. generation of billing codes, supporting clinical research and quality assurance. In an ideal world, everybody should use a single, standardized problem list vocabulary. In reality, most institutions use their own local vocabularies. The U. S. National Library of Medicine (NLM) started the UMLS-CORE Project to study problem list vocabularies [2]. The Unified Medical Language System (UMLS) is a valuable resource for terminology research. CORE stands for Clinical Observations Recording and Encoding, a mnemonic referring to the capture and codification of clinical information in the summary segments of the medical record such as the problem list, discharge diagnosis and reason for the encounter. The UMLS-CORE 1 Corresponding author : Michel Joubert, Lertim, Faculté de Médecine, Université de la Méditerranée, 27 boulevard Jean Moulin, 13005, Marseille, France
Project has two goals: 1) to study and characterize the problem list vocabularies of large health care institutions in terms of their size, pattern of usage, mappability to standard terminologies and extent of overlap, and 2) to identify a subset of UMLS concepts that occur with high frequency in problem lists to facilitate the standardization of problem list vocabularies. A CORE Problem List Subset was derived based on datasets from several institutions. The most frequently used terms, about 14’000 in all, represented about 95% of the usage volume in each institution. These were mapped to 6’800 UMLS concepts, which formed the basis of the UMLS-CORE Subset. SNOMED CT covers a high percentage (81%) of the identified UMLS-CORE concepts [3]. Our aim is to propose an automated method to assist a translation of the CORE Subset of SNOMED CT (shortly, CORE Subset in what follows) into French. This study follows a work related to an assistance of an automated translation of SNOMED CT into French [4]. The translation of SNOMED CT is currently being performed in Canada by the Infoway institution in accordance with the IHTSDO organization [5].
2. Material 2.1. Unified Medical Language System The UMLS project launched by the NLM integrates health terminologies in a single Metathesaurus [6]. To date, the UMLS Metathesaurus contains a hundred terminologies. More specifically, within the Metathesaurus we will be using: the MRCONSO table, which lists all the concepts incorporated in the UMLS with no duplication and in which each concept is attributed a unique identifier (CUI), and the MRREL table, which describes explicit relationships, if any, between concepts in the original terminologies. Within MRREL, we only use the following explicit mappings: primary_mapped_to/from, mapped_to/from, other_mapped_to/from [7]. We worked with the 2009AA version of UMLS. Our mappings operate exclusively on preferred terms (PTs) of each French-language terminology: SNOMED International (107’900 PTs), ICD10 (9’306 PTs), MeSH (25’186 PTs), and MedDRA (18’209 PTs). 2.2. CORE Subset of SNOMED CT SNOMED CT is a hierarchical structure of concepts. It contains 310’074 terms in the 2009 version integrated into UMLS. These terms are organized along axes. The most representative axes are: disorder (73’006 terms), procedure (53’119 terms), finding (33’626 terms). CORE Subset version 2009AA is a set of SNOMED CT concepts which represent the most frequently used terms (about 14’000) in the databases of the institutions studied by the NLM [3]. These terms have been mapped by the NLM to 6’800 UMLS concepts, and more than 5’000 to SNOMED CT concepts. They are principally distributed along the following axes: disorder (3’794 concepts), finding (752 concepts), procedure (396 concepts).
3. Method The mapping method is as follows: suppose two terms t1 and t2 from two different terminologies, and let CUI1 and CUI2 be the respective projections of t1 and t2 in the Metathesaurus; then t1 and t2 are mapped if: 1) CUI1 = CUI2 (in MRCONSO), which corresponds to an exact mapping, and/or 2) there is an explicit mapping between CUI1 and CUI2 (in MRREL). The algorithm is run sequentially: all the possible mappings, exact and explicit, are tried in order to align each pair of terms. When an explicit mapping relationship exists (e.g. SNOMED CT to ICD-9-CM [8]) between two concepts, CUI1 and CUI2, it is likely that all terms designating CUI2 can be mapped to terms designating CUI1, whatever the terminologies and whatever the language in which they are formulated. In other words, explicit mappings between two terminologies can be “reused” for other terminologies by means of the UMLS concept structure [9].
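A minimal sketch of these two mapping steps is given below; the UMLS tables are reduced here to simple in-memory structures, whereas the real MRCONSO and MRREL tables are considerably richer.

```python
# Hedged sketch of the mapping method: 1) exact mapping via a shared CUI
# (MRCONSO), 2) explicit mapping via a relationship in MRREL.
def terms_are_mapped(t1, t2, mrconso, mrrel):
    """mrconso: dict surface term -> set of CUIs; mrrel: set of (CUI, CUI) explicit mappings."""
    cuis1 = mrconso.get(t1, set())
    cuis2 = mrconso.get(t2, set())
    exact = bool(cuis1 & cuis2)                        # 1) same Metathesaurus concept
    explicit = any((a, b) in mrrel or (b, a) in mrrel
                   for a in cuis1 for b in cuis2)      # 2) *_mapped_to/from relationship
    return exact or explicit
```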
4. Results Table 1 shows the contribution of each of the four French-language terminologies with regard to the three most representative axes of the CORE Subset. For instance, 3’277 terms of SNOMED International map disorder concepts of the CORE Subset. They represent 86% of the 3’794 terms of the CORE Subset of this axis.
Table 1: Contribution in number and percentage for each terminology by axis in the CORE Subset.
Terminologies   Disorder      Finding      Procedure
SNOMED Int.     3,277 / 86%   522 / 69%    262 / 66%
ICD10           2,733 / 72%   477 / 63%    7 / 2%
MeSH            2,151 / 57%   364 / 48%    118 / 30%
MedDRA          2,505 / 66%   495 / 66%    162 / 41%
Table 2 shows the number of PTs of the union of the four French-language terminologies mapped to CORE Subset PTs (concepts) with regard to the three studied axes. For instance, the disorder axis shows 3’463 PTs of the union of terminologies mapped to 3’794 CORE Subset concepts; they represent 91% of them. In the end, the method allows the translation of 89% of the CORE Subset terms along these three axes.
Table 2: Number of PTs in the union of French-language terminologies aligned by axis with PTs of the CORE Subset.
Axes        # of PTs of French terminologies   # of PTs of the CORE Subset   % of PTs of the CORE Subset
Disorder    3,463                              3,794                         91%
Finding     632                                752                           84%
Procedure   291                                396                           73%
Total       4,386                              4,942                         89%
5. Discussion and Conclusion Table 1 shows that the contribution of SNOMED International for translating terms is about 80% of the terms of the CORE Subset along the three axes, and that the ICD10 contribution is 63%. These results may be explained by the fact that 91% of SNOMED International terms are integrated into SNOMED CT, and that 87% of ICD10 terms are also integrated into SNOMED CT [4]. Considering the three axes in Table 2 (disorder, finding, and procedure), it is possible to make at least one translation proposal for 4’386 of the 4’942 CORE Subset terms, i.e. 89%. Terminologies are integrated into the UMLS Metathesaurus by experts by means of exact and explicit mappings. We can therefore expect that terms of different terminologies referring to the same biomedical concept are attached to the same Metathesaurus concept. Hence, in our view, the mappings we operate do not need validation, because they have been made previously. With the intent of improving the assistance of the translation of the CORE Subset, we would like to propose a set of French-language terms and synonyms for an original English set. This proposal is based on the construction of the UMLS Metathesaurus itself: the Metathesaurus is a terminology integration system, in which synonymous terms from various terminologies are clustered into concepts, allowing for seamless mapping between terms from different terminologies through a UMLS concept [10, 11]. For instance, the CORE Subset concept “acute myocardial infarction” is translated to infarctus aigu du myocarde in the French ICD10, and into the same MedDRA PT with the synonyms “acute myocardial infarction, unspecified site” and “acute myocardial infarction, unspecified site, episode of care unspecified” (expressed in English). Note that this concept is not mapped to MeSH. Moreover, synonymy is a symmetric relationship between terms, and transitivity can be applied: a synonym of a term is also considered a synonym of that term's other synonyms. Hence, it is possible to build, for a term, a set of terms made of preferred terms originating from different terminologies and of synonyms in these terminologies. As such, MeSH can contribute largely thanks to its 97’000 synonyms, not counting more than 20’000 French synonyms added by the CISMeF team (Rouen University Hospital, France), not yet integrated in the French translation of MeSH. As previously proposed for assisting the translation of SNOMED CT into French [4], our method could be improved by exploiting hierarchical relationships within some terminologies to propose more generic terms for the translation of more specific ones when exact and explicit mappings are not successful. This refinement seems promising but collides with two difficulties: 1) it requires human expertise to validate a translation proposal, and 2) some research studies have shown the possible confusion that may occur in some terminologies in the interpretation of hierarchical relationships, notably between IS_A and PART_OF relationships [12, 13]. Moreover, this kind of inheritance due to hierarchies does not apply to the concept of synonymy described above. The automated method we propose for assisting the translation of the CORE Subset terms is not dependent on French, since it works at a conceptual level and not at a lexical one. Hence, it can be reused for natural languages other than French, on condition that terminologies in that language are sufficiently integrated in the Metathesaurus.
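The synonym-pooling idea sketched above can be illustrated as follows; the concept identifier is a placeholder and the French term is only the one quoted in the example above, so this is not actual Metathesaurus content.

```python
# Hedged illustration: pooling French preferred terms and synonyms that share a
# UMLS concept, in order to propose translation candidates for a CORE Subset term.
from collections import defaultdict

# (concept id, term, language) triples; "CUI_AMI" is a placeholder identifier.
rows = [
    ("CUI_AMI", "acute myocardial infarction", "ENG"),
    ("CUI_AMI", "infarctus aigu du myocarde", "FRE"),   # French ICD10 preferred term
]

french_terms_by_cui = defaultdict(set)
for cui, term, lang in rows:
    if lang == "FRE":
        french_terms_by_cui[cui].add(term)

print(french_terms_by_cui["CUI_AMI"])   # candidate French translations for the concept
```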
Acknowledgements: The authors thank the National Library of Medicine of the United States who provided them with the UMLS knowledge sources and the CORE Subset of SNOMED CT. The authors are also grateful to Richard Medeiros, Rouen University Hospital Medical Editor, for editing the manuscript.
References
[1] Weed LL. Medical records that guide and teach. N Engl J Med 1968; 278: 593-600 and 652-7.
[2] Fung KW, Mc Donald C, Strinivasan S. The UMLS-CORE project: a study of the problem list terminologies used in large healthcare institutions. JAMIA 2010; 17(6): 675-80.
[3] The CORE Problem List Subset of SNOMED CT. http://www.nlm.nih.gov/research/umls/Snomed/core_subset.html
[4] Joubert M, Abdoune H, Merabti T, Darmoni S, Fieschi M. Assisting the translation of SNOMED CT into French using UMLS and four representative French-language terminologies. Proc AMIA Annu Symp 2009; 2009:291-5.
[5] Canada Health Infoway. http://www.ihtsdo.org/members/ca00/
[6] National Library of Medicine. UMLS Metathesaurus. http://www.nlm.nih.gov/pubs/factsheets/umlsmeta.html
[7] Fung KW, Bodenreider O. Utilizing the UMLS for semantic mapping between terminologies. Proc AMIA Annu Symp 2005: 266-270.
[8] Imel M. A closer look: The SNOMED Clinical Terms to ICD-9-CM Mapping. Journal of AHIMA 2002; 73: 66-69.
[9] Bodenreider O, Nelson SJ. Beyond synonymy: Exploiting the UMLS Semantics in Mapping Vocabularies. Proc AMIA Annu Symp 1998: 815-9.
[10] McCray AT, Nelson SJ. The Representation of meaning in the UMLS. Methods Inf Med 1995; 3:193-201.
[11] Bodenreider O. Biomedical Ontologies in Action: Role in Knowledge Management, Data Integration and Decision Support. Yearb Med Inform 2008: 67-79.
[12] Cimino JJ, Min H, Perl Y. Consistency across the hierarchies of the UMLS Semantic Network and Metathesaurus. J Biomed Inform 2003; 36: 450-61.
[13] Ceusters W, Smith B, Kumar A, et al. Mistakes in medical ontologies: where do they come from and how can they be detected? Stud Health Technol Inform 2004; 102: 145-63.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-824
Recording Associated Disorders Using SNOMED CT
Ronald CORNET and Nicolette F. de KEIZER, Department of Medical Informatics, Academic Medical Center, University of Amsterdam, The Netherlands
Abstract. Multidisciplinary communication about patients with multiple and often interrelated diseases is of utmost importance to guarantee high quality of care. In this paper we focus on recording in the electronic medical record patients' disorders that are associated with each other, taking into account the role of SNOMED CT. The objectives of this paper are to design and discuss possibilities to appropriately record the associations between two disorders as defined in SNOMED CT, and to gain insight into the use of the relationship "associated with" in SNOMED CT and its consequences for data reuse. Our study showed that textual and concept-based reproducible recording of reusable data is hampered by incorrect or incomplete modeling of associations between disorders in SNOMED CT. A possible solution is to record the constituting characteristics of concepts directly in the record, instead of representing them only in the terminology. Further research on the binding of information models and terminologies is needed. Keywords. Terminological system, SNOMED CT, electronic medical record, semantic interoperability
1. Introduction
With the aging population, more and more patients have multiple, chronic and often interrelated diseases. To streamline diagnostic and treatment activities, good communication is required between the involved care providers from different disciplines. These care providers will have different clinical perspectives: for example, a neurologist might describe a patient as having a 'diabetic neuropathic arthropathy', while the same situation is described by a diabetologist as 'Type II diabetes mellitus with neuropathic arthropathy' and by a rheumatologist as 'arthropathy associated with a neurological disorder'. A semantically interoperable electronic medical record (EMR), i.e. a medical record in which the meaning of the data can be exchanged and understood across the borders of systems, clinical contexts, and users, should support interdisciplinary communication. Terminological systems which explicitly define medical concepts are essential to realize this. SNOMED CT is considered to be a comprehensive clinical healthcare terminological system that can be used as the foundation for EMRs and other applications. Due to its separation of concepts and descriptions, each unique concept can be described by multiple synonymous terms, which supports use across the borders of medical specialties. SNOMED CT provides formal definitions for its
concepts using IS A relationships and attribute relationships. Relationships provide a formal way to reflect the semantics of a concept. In this paper we focus on recording patients' disorders which are associated with each other, taking into account the role of SNOMED CT. We analyze whether situational descriptions recorded from different clinical perspectives, as in the example above, convey the same meaning and can be used interchangeably when using the data for retrieval and aggregation. The first objective of this paper is to discuss and compare three ways of representation to appropriately record the associations between two disorders as defined in SNOMED CT. The second objective is to gain insight into the consistent or inconsistent use of the relationship associated with in SNOMED CT and the consequences for reasoning with concepts that are defined by this relationship.
2. Material and Methods
2.1. Representation of Information in a Patient Record
When representing information in a patient record, three possibilities can be distinguished: textual, concept-based, and instance-based representation, i.e., referring to each of the corners of the semiotic triangle [1]. Textual representation refers to information that can only be (humanly) interpreted based on the description of a concept. For example, in SNOMED CT, diabetes mellitus type 1 and type 2 are defined as children of diabetes mellitus, without explicitly defining the difference between the genus (diabetes mellitus) and the species (type 1 and type 2) or among the species. There are various reasons why no explicit definition of the difference is given. It may be an error of omission (i.e., the difference can be made explicit, but is lacking), a limitation of the concept model or representation (i.e., no attributes exist in the terminology to adequately describe the difference), or the concept may be a so-called natural kind for which no explicit difference can be specified [2]. Concept-based representation refers to information that is explicitly represented as part of the definition of a concept. For example, diabetes mellitus is defined as having a finding site which is an endocrine pancreatic structure. Creating an instance of the concept diabetes mellitus in a patient record does not provide explicit reference to an instance of an endocrine pancreatic structure. Whereas this may not be necessary in most cases, it may be relevant in some others. With concept-based representation it is not possible to make explicit, for example, whether different disorders refer to the same instance of "endocrine pancreatic structure" or to other instances thereof. Instance-based representation makes it possible to express this distinction. If a patient with diabetes gets a pancreas transplant, it may be relevant to distinguish disorders of the original pancreas from disorders related to the transplanted pancreas. This can be realized by creating appropriate instances in the medical record [3].
2.2. Methods and Analyses
The July 2010 release of SNOMED CT was used. In this release, 292,073 active concepts are defined, for which 760,950 English-language preferred and synonymous descriptions are provided. The concepts are defined using a total of 1,210,095 relationships, comprising is_a relationships and attribute relationships such as finding site. The concepts, descriptions and relationships have been imported into an
MS Access database. SNOMED CT contains 64,162 active disorder concepts, i.e., concepts whose fully specified name ends with "(disorder)". We focused on attribute relationships between disorders, and specifically on the relationship associated with and its subtypes due to and after. We therefore selected all active SNOMED CT disorders including associated with, due to or after in their description (textual representation) to evaluate the adequacy of their concept-based representation, i.e. whether a formal definition describing the association is present, comparable to [4]. Furthermore, queries were created to extract active disorder concepts that are defined with associated with, due to, and after relationships, and to analyze the number and types of disorders they interrelate.
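The selections described above can be expressed as straightforward queries over the imported tables. The following sketch uses SQLite in place of MS Access, and the table and column names (concepts, relationships, fullySpecifiedName, typeName) are illustrative assumptions rather than the actual SNOMED CT release format or the authors' database schema.

# Sketch only: find disorders with a textual vs. a formal (concept-based)
# representation of an association, and compare the two sets.
import sqlite3

ASSOCIATION_WORDS = ("associated with", "due to", "after")

def textual_association(conn):
    """Active disorder concepts whose fully specified name mentions an association."""
    rows = conn.execute(
        "SELECT conceptId, fullySpecifiedName FROM concepts "
        "WHERE active = 1 AND fullySpecifiedName LIKE '%(disorder)'"
    )
    return {cid for cid, fsn in rows
            if any(w in fsn.lower() for w in ASSOCIATION_WORDS)}

def formal_association(conn):
    """Active disorder concepts defined by an associated with / due to / after relationship."""
    rows = conn.execute(
        "SELECT DISTINCT r.sourceId FROM relationships r "
        "JOIN concepts c ON c.conceptId = r.sourceId "
        "WHERE r.active = 1 AND c.active = 1 "
        "AND c.fullySpecifiedName LIKE '%(disorder)' "
        "AND r.typeName IN ('Associated with', 'Due to', 'After')"
    )
    return {row[0] for row in rows}

# conn = sqlite3.connect("snomedct_201007.db")
# overlap = textual_association(conn) & formal_association(conn)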
3. Results
3.1. Impact Analysis
The importance of proper use of terminological systems lies in the possibility of aggregating information at different levels of detail. This can be taxonomic reasoning (DM type II is a disorder of the endocrine system), partonomic reasoning (endocrine pancreatic structure is part of the pancreas) or syndromic reasoning (tetralogy of Fallot involves, among others, ventricular septum defect and pulmonic valve stenosis). The current modeling in SNOMED CT focuses on supporting these kinds of reasoning, by supporting at least 3 ontological commitments [5]. However, the modeling provides no proper solution for reasoning with associations between disorders. Following the example in the introduction, if a neurologist records "diabetic neuropathic arthropathy (201724008)", one can infer that the patient has 3 related disorders: arthropathy, neurological disorder and diabetes mellitus. However, in SNOMED CT it is only a type of arthropathy; neurological disorder and diabetes mellitus are referenced by means of the associated with relation. Although this makes sense (as arthropathy is not a kind of diabetes mellitus), it hampers reuse of data. For example, selecting patients with some kind of diabetes will not include patients with a diabetic neuropathic arthropathy. Reuse is further hampered by the fact that a single clinical situation is represented by multiple concepts in SNOMED CT that reflect different clinical perspectives. Due to the way in which these concepts are modeled in SNOMED CT, they result in different inferences. In the situation described above, a diabetologist may record this situation as "Type II diabetes mellitus with neuropathic arthropathy (314904008)". The definitions of the two concepts (201724008 vs. 314904008) in SNOMED CT are so different that their most specific common ancestor is "Disorder of body system". In SNOMED CT, this could be resolved by defining concepts so that associated disorders are defined as parents rather than via an associated with attribute. However, this would, for example, render an arthropathy as a kind of diabetes, rather than a related disorder, which is undesirable. Therefore, a solution needs to be found in the way in which this information is recorded in the EMR.
3.2. Text-Based, Concept-Based, and Instance-Based Representation
Clearly, text-based representation is insufficient for reproducible retrieval and aggregation of patient information. The analysis above shows that simple concept-
based representation also impedes reproducibility. A possible solution for this is to record the constituting characteristics of concepts directly in the record, instead of representing them only in the terminology. In the above example, the 3 distinct disorders that the patient has should be recorded:
− The patient has diabetes mellitus type II
− The patient has arthropathy
− The patient has disorder of nervous system
In addition, the associations between the disorders should be recorded:
− The disorder of nervous system is associated with the diabetes mellitus type II
− The arthropathy is associated with the disorder of nervous system
− The arthropathy is associated with the diabetes mellitus type II
In this way, the information that the SNOMED CT concept represents is preserved, but presented in a way that is independent of clinical perspective. The perspective is provided when the information is retrieved, i.e., it can be regarded as diabetes mellitus type II which has an associated arthropathy, but also as an arthropathy which is associated with diabetes mellitus. Ideally, one does not only record the type of disease, but also explicitly identifies it, i.e., creates an explicit reference to an instance. This instance can be referred to in other clinical situations involving the same disease; e.g., when a patient is later diagnosed with diabetic nephropathy, it will refer to the representation of the same instance of diabetes mellitus (a sketch of such an instance-based recording is given at the end of Section 3).
3.3. Textual vs. Concept-Based Representation of Associated Disorders in SNOMED CT
In total, 2,804 active disorder concepts are described by a fully specified name that contains 'associated with', 'due to' and/or 'after'. Of these, 35% (n=969) are formally defined by associated with, due to and/or after relations. The targets of these relations are mostly disorders (for 780 concepts) and procedures (for 142 concepts). In total, 2,981 unique source disorder concepts and 674 different target disorder concepts are formally interrelated via associated with (n=1,011), due to (n=1,551) and/or after (n=718) relations. The three most frequently used target disorder concepts of the associated with, due to and/or after relations are hypersensitivity reaction, traumatic injury and diabetes mellitus. The overlap between concepts with a textual representation and with a concept-based representation is only about 26% (n=780). The 2,981 concepts involving an association in their definition constitute 4.6% of all active disorder concepts in SNOMED CT.
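As referred to above, the following is a minimal sketch of how the three disorders and their associations could be recorded as explicitly identified instances, independent of clinical perspective. It is an illustration rather than a normative EMR model, and the SNOMED CT concept identifiers are given for illustration only and should be verified against a release.

# Illustrative sketch: each disorder is an identified instance, and the
# associations are explicit links between instances rather than being
# implicit in a single pre-coordinated concept.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DisorderInstance:
    instance_id: str         # patient-specific identifier of this disorder
    concept_id: str          # SNOMED CT concept the instance instantiates
    label: str

@dataclass
class PatientRecord:
    patient_id: str
    disorders: List[DisorderInstance] = field(default_factory=list)
    # (source instance, relationship, target instance)
    associations: List[Tuple[str, str, str]] = field(default_factory=list)

record = PatientRecord("patient-001")
dm = DisorderInstance("d1", "44054006", "Diabetes mellitus type II")
ns = DisorderInstance("d2", "118940003", "Disorder of nervous system")
ar = DisorderInstance("d3", "399269003", "Arthropathy")
record.disorders += [dm, ns, ar]
record.associations += [
    (ns.instance_id, "associated with", dm.instance_id),
    (ar.instance_id, "associated with", ns.instance_id),
    (ar.instance_id, "associated with", dm.instance_id),
]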
4. Discussion and Conclusion In this paper we describe the impact of the way in which associations between disorders are modeled in SNOMED CT, and the extent to which such associations are used in SNOMED CT. Over 4.5% of all active disorder concepts in SNOMED CT involve a formally represented association with another disorder, with a small overlap of concepts which have a textual representation of association. This, together with the fact that more and more patients suffer from multiple and interrelated diseases,
supports the relevance of the subject of this paper. Our study showed that reproducible recording of reusable data is hampered by (incomplete) modeling of associations between disorders in SNOMED CT and by the lack of adequate inferencing procedures. First, many (65%) of the active disorder concepts described by a fully specified name that contains 'associated with', 'due to' and/or 'after' lack a formal relationship describing that association. Although this can in part be explained by the use of other relations representing this association, e.g., causative agent, a large part of these associations is still not represented formally. Second, we show that multiple SNOMED CT concepts, which cannot be inferred to be equivalent, can describe a single clinical situation from different clinical perspectives. A limitation of our study is that the analysis performed is based only on the descriptions and the formal concept definitions containing 'associated with', 'due to' and/or 'after'. It reveals that there are other comparable textual descriptions for associations, e.g., 'complication' or 'secondary', which are not taken into account in this study; conversely, the textual descriptions may have been represented by other relationships, as pointed out above. Furthermore, in the analysis of the formal definitions, inherited properties may have been disregarded. The proposed representation of associated diseases in the EMR requires further analysis and research. As discussed by Rector in [6], some concepts represent situations rather than disorders. The relevance of such concepts in a reference terminology can be questioned; they better fit in an interface terminology. However, concepts that involve associated diseases may represent actual disorders and hence belong in a reference terminology. It should be possible to make explicit whether concepts represent disorders or situations. The way in which concepts should be stored in the patient record (e.g., concept-based or instance-based) should also be made explicit. This also requires investigation into the binding of information models and terminologies and into the use of advanced logic-based reasoning. Only then will we reach a situation in which data in the EMR can be appropriately aggregated and reused. Acknowledgments: The authors serve as members of the International Health Terminology Standards Development Organisation Technical Committee (RC) and Content Committee (NdK).
References
[1] Campbell KE, Oliver DE, Spackman KA, Shortliffe EH. Representing thoughts, words, and things in the UMLS. J Am Med Inform Assoc 5(5) (1998), 421-31.
[2] Cornet R, Abu-Hanna A. Auditing description-logic-based medical terminological systems by detecting equivalent concept definitions. Int J Med Inform 77(5) (2008), 336-45.
[3] Ceusters W, Smith B. Tracking referents in electronic health records. Stud Health Technol Inform 116 (2005), 71-76.
[4] Mougin F, Bodenreider O, Burgun A. Looking for anemia (and other disorders) in SNOMED CT: comparison of three approaches and practical implications. AMIA Annu Symp Proc 2010: 527-31.
[5] Schulz S, Cornet R, Spackman KA. Consolidating SNOMED CT's ontological commitment. Applied Ontology 6(1) (2011), 1-11.
[6] Rector AL. What's in a code? Towards a formal account of the relation of ontologies and coding systems. Stud Health Technol Inform 129(Pt 1) (2007), 730-4.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-829
SNOMED CT’s RF2: Is the Future Bright?
Werner CEUSTERS, New York State Center of Excellence in Bioinformatics & Life Sciences, Buffalo, USA
Abstract. SNOMED CT’s new RF2 format is said to come with features for better configuration management of the SNOMED vocabulary, thereby accommodating evolving requirements without the need for further fundamental change in the foreseeable future. Although the available documentation is not yet convincing enough to support this claim, the newly introduced Model Component hierarchy and associated reference set mechanism seem to hold real promise of being able to deal successfully with a number of ontological issues that have been discussed in the recent literature. Backed up by a study of the old and new format and of the relevant literature and documentation, three recommendations are presented that would free SNOMED CT from use-mention confusions, unclear referencing of real-world entities and uninformative reasons for change in a way that does not force SNOMED CT to take a specific philosophical or ontological position. Keywords. SNOMED CT, RF2, change management, meaning
1. Introduction
SNOMED CT is a clinical reference terminology designed to enable electronic clinical decision support, disease screening and enhanced patient safety. It was first released in 2002 following the merger of SNOMED-RT and Clinical Terms Version 3. In 2010, the International Health Terminology Standards Development Organization (IHTSDO) announced the future distribution of SNOMED CT under a new format called 'RF2' [1], of which more detail became officially available with the January 2011 version [2-4]. The RF2 format is claimed to offer greater flexibility and more explicit and comprehensive version control than RF1, with new features for configuration management, thereby accommodating evolving requirements without a need for further fundamental change in the foreseeable future [4]. One such feature is that RF2, through the introduction of a new hierarchy called the 'SNOMED CT Model Component' [2] which includes the existing Concept Model, allows SNOMED CT to be described in terms of its own structure, thereby reducing, so it is hoped, the burden and costs incurred by content developers, implementers and release centers while at the same time improving product functionality and quality. The current documentation of RF2 is marked by a focus on making language and realm extensions, as well as mappings towards other terminologies, more manageable. In addition it introduces a number of merely cosmetic changes to the existing history mechanism. But at first sight, it also seems to hold much promise for dealing with a number of issues concerning the ontological underpinnings of SNOMED CT that have been reported upon in the literature, such as,
Corresponding Author: Werner Ceusters. Ontology Research Group, New York State Center of Excellence in Bioinformatics & Life Sciences, University at Buffalo, 701 Ellicott street, Buffalo NY 14204, USA; E-mail: [email protected].
for example, the underspecification of reasons for change [5], the (in)adequacy of SNOMED’s intensional and extensional definitions [6], its still incoherent ontological commitment [7], and the ambiguities and conflations in its conceptual structures and in its treatment of terms proposed as ‘synonyms’ [8]. The goal of the work reported here was to assess whether RF2 represents an opportunity to resolve these issues, whether immediately or in the foreseeable future.
2. Methods
SNOMED CT’s documentation and its Concept Model as reflected in the Linkage Attributes were studied for all releases from January 2002 to July 2010. To assess the evolution of the Concept Model, we generated from the relationship tables included in each version a graph representing the relationships actually used in linking conceptIDs from one hierarchy to conceptIDs from the same or another hierarchy, thereby keeping track, in each version, of the number of times a specific relation, e.g. ‘USING DEVICE’, was used between specific hierarchies in relation to the status, e.g. ‘current’, ‘ambiguous’, etc. As an example, the relationship ‘Computerized tomography guided biopsy of brain (procedure) METHOD Biopsy – action (qualifier value)’ in version V would increment the occurrence count of the 5-tuple ‘procedure – (0) METHOD qualifier value – (0)’ for version V, where ‘0’ indicates the status ‘current’. For each tuple, 10 examples of relationships were selected for further inspection – specifically those that revealed astonishing results such as ‘substance (2) SAME AS procedure (0)’ – in order to find commonalities in the underlying causes of error and to assess to what extent they relate to the issues described in the introduction. Finally, the new Model Component hierarchy was investigated to see whether it could be expanded with additional entries capable of either solving the issues, or, if not, making them explicit.
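The 5-tuple counting described above can be sketched as follows. This is not the author's actual tooling; the file layout and the column names used (sourceId, relationshipType, targetId, status, hierarchy) are assumptions made for illustration.

# Rough sketch: count (source hierarchy, source status, relation,
# target hierarchy, target status) tuples for one SNOMED CT version.
from collections import Counter
import csv

def count_tuples(relationship_rows, concept_index):
    """concept_index maps a conceptID to its top-level hierarchy and status."""
    counts = Counter()
    for row in relationship_rows:
        src, rel, tgt = row["sourceId"], row["relationshipType"], row["targetId"]
        key = (concept_index[src]["hierarchy"], concept_index[src]["status"],
               rel,
               concept_index[tgt]["hierarchy"], concept_index[tgt]["status"])
        counts[key] += 1
    return counts

# with open("sct_relationships_20100731.txt") as f:
#     counts = count_tuples(csv.DictReader(f, delimiter="\t"), concept_index)
# counts.most_common(10)  # tuples to inspect, e.g. unexpected combinations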
3. Results: Three Recommendations
The data upon which our analysis and recommendations are based can be downloaded from [9]. They indicate that many problems can be traced back to three underlying causes: (1) a mixing of object and meta-language and use-mention confusions, (2) unclarity about what some conceptIDs exactly denote, and (3) the use of ambiguous and uninformative codes for the reasons why concepts are inactivated. Unfortunately, the documentation of RF2 is not yet explanatory enough and lacks clearly worked out examples to assess, for each issue identified, whether it can be resolved by merely introducing new Model Component entries and associated data types or whether other measures are required as well. Our first – and by no means exhaustive – proposal is therefore formulated in terms of the following three recommendations, which experts in RF2 can then implement more adequately in the new format they have designed:
1. do not make double use of the ConceptID as an identifier for the concept and an identifier for the Concept Component;
2. add to each Concept Component a field that indicates to what broad category the intended referent of that concept belongs;
3. expand the Concept Inactivation Value sub-hierarchy with concepts that reference whether a change in SNOMED CT is motivated by (1) a change in
reality, (2) the SNOMED CT authors’ or users’ understanding of reality as reflected in the advance of the state of the art in the biomedical domain, or (3) a mistake that is strictly internal to SNOMED CT as an information artifact [10].
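Purely as an illustration of what these three recommendations could amount to in a data structure – none of the field names or enumerated values below exist in RF2 or in the IHTSDO specifications – a concept record might look as follows:

# Hypothetical sketch of a concept record reflecting the three recommendations;
# every name here is an assumption for illustration, not part of RF2.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReferentCategory(Enum):           # recommendation 2
    FIRST_ORDER_ENTITY = "L1"           # e.g. person, scalpel
    MENTAL_ENTITY = "L2"                # beliefs, desires, intentions
    INFORMATION_ARTIFACT = "L3"         # e.g. staging scales, SNOMED CT itself

class InactivationReason(Enum):         # recommendation 3
    CHANGE_IN_REALITY = 1
    CHANGE_IN_UNDERSTANDING = 2
    INTERNAL_MISTAKE = 3

@dataclass
class ConceptComponent:
    component_id: str                   # recommendation 1: distinct from the concept identifier
    concept_id: str
    referent_category: ReferentCategory
    active: bool = True
    inactivation_reason: Optional[InactivationReason] = None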
4. Discussion
SNOMED CT is described in its Technical Reference Guide as ‘a concept-based terminology which means that each medical concept is uniquely identified and can have multiple descriptions’. Readers are further told that ‘concepts are related to each other by hierarchical relationships’ and that ‘relationships are also defined to describe additional attributes of concepts’ [11]. Until the January 2010 version, SNOMED CT’s authors defined a concept as ‘a clinical idea to which a unique ConceptId has been assigned’, thereby further specifying that ‘each Concept is represented by a row in the Concepts Table’ [12]. In 2010, in line with earlier critiques about the ambiguities that concept-based systems in general suffer from [13], the glossary of the Technical Reference Guide marks the word ‘Concept’ as ‘an ambiguous term. Depending on the context, it may refer to: a clinical idea to which a unique ConceptId has been assigned; the ConceptId itself, which is the key of the Concepts Table (in this case it is less ambiguous to use the term “concept code”); the real-world referent(s) of the ConceptId, that is, the class of entities in reality which the ConceptId represents (in this case it is less ambiguous to use the term “meaning” or “code meaning”)’ [14]. However, merely pointing this out, however true it might be, does not yet solve the problem. For one could still read in the same document, for example, that a SNOMED CT term is ‘a text string that represents the Concept’. So what is it then that is represented by a term: (1) the clinical idea, (2) – less likely, but nevertheless in line with the expressed ambiguity – the ConceptId, or (3) the real-world referent(s)? The same question must then be asked for the several hundred occurrences of the word ‘concept’ throughout the SNOMED CT documentation. In some cases, readers can infer from the context which meaning is intended, but in most cases, only the SNOMED CT authors can provide the answer by rewriting the entire documentation. Unfortunately, as inspection reveals, it is very hard for readers, and even for SNOMED CT authors, to disambiguate between concept as clinical idea and concept as meaning, i.e. as real-world referent, on the basis of the minimal context provided in the sentences in which the word ‘concept’ appears. This is not only because clinical ideas are real-world entities themselves – although of a different nature than, for example, persons, viruses and surgical procedures, and some being such that they are about other real-world entities while others are about nothing at all [8] – but also because SNOMED CT authors have not yet made it clear what sorts of real-world entities their concepts represent: denoting real-world entities unambiguously requires ontological commitment, and it has been shown that SNOMED CT is incoherent in this respect [7]. Relying on ‘meaning’ unfortunately doesn’t help much. According to SNOMED CT’s glossary definition for ‘concept’ discussed above, the meaning of a concept(Id) would correspond to what Frege referred to as the ‘Bedeutung’ (‘reference’, ‘extension’) of a term [15]. However, in the User Guide, it is specified that ‘a “concept” is a clinical meaning identified by a unique numeric identifier (ConceptId) that never changes. The concepts are formally defined in terms of their relationships with other concepts. These logical definitions give explicit meaning which a computer
can process and query on’ [16]. Here, the word ‘meaning’ corresponds rather to Frege’s ‘Sinn’ (‘sense’, ‘intension’) [15]. And finally, in the SNOMED-CT Editorial Guide, a document that became part of the official documentation only since the latest release (although parts of it existed earlier in the form of drafts for comments), SNOMED CT is described as a ‘terminological resource’ which ‘consists of codes representing meanings expressed as terms, with interrelationships between the codes to provide enhanced representation of the meanings’ [17]. As a result, the reader is not only left with the question what sort of meaning is discussed each time the word ‘meaning’ is used – the Editorial Guide is indeed more about ‘meanings’ than ‘concepts’ – but also what actually is represented in SNOMED-CT: (1) clinical ideas – in people’s minds or concretized in writings, software programs and presentations, respectively called L2 and L3-entities in [8], (2) a broader group of real-world referents that includes not only tangible entities such as patients and knives but also the processes in which the latter participate and the forces they undergo, or (3) ‘meanings’. Without a clear answer to these questions, an answer that might be different for each individual occurrence of the word, SNOMED CT users will make interpretations in different ways, thereby rendering their data mutually incompatible. It will be difficult also to grasp, yes, the meaning of statements such as ‘The meaning of a Concept does not change [emphasis added]’, when immediately followed by the sentence ‘If the Concept’s meaning changes because it is found to be ambiguous, redundant or otherwise incorrect, the Concept is made inactive [emphasis added]’ [11]. For the same reason, probably, it has escaped the attention of the SNOMED CT authors that relationships of the sort ‘event MAY BE navigational concept’, ‘person MOVED TO namespace concept’ and, indeed ‘physical object IS A inactive concept’ do not have the same sort of meaning as ‘procedure METHOD physical object’ [9]. The former are statements about the concepts as representational units in SNOMED CT itself (i.e. meta-language statements), while the latter is a statement about the referents of these concepts (an object-language statement). The problem arises because SNOMED CT does not assign, in contrast to entries in the Description and Relationships Table, a separate component ID to an entry in the Concept Table.
5. Conclusion The three recommendations, despite being very modest, address the issues sufficiently. The first solves the object-/metalanguage confusion. The second solves the problem of what sort of entity in each individual case is referenced by a conceptId. Potential values for the proposed field can be based not only on the L1/L2/L3 distinction [8] – roughly: first-order entities that are not about anything (e.g. person, scalpel) / beliefs, desires, intentions whether about something (e.g. a diagnosis) or about nothing (e.g. some psychotic beliefs) / and information artifacts such as staging scales, guidelines, and, indeed, SNOMED CT itself – but also on whether a universal or defined class is referenced [18], and potentially even on the putative ‘possibilia’ and ‘non-existing entities’ [19] endorsed by terminology and ontology developers who do not wish to be hampered by the complexity of Ontological Realism [20]. By doing so, SNOMED CT can even maintain a philosophically rather neutral position even though a clear shift towards OBO Foundry compatibility is observable. And finally, the rather ad hoc motivation for inactivating concepts is catered for by our third recommendation.
Acknowledgements: The work described was funded in part by grant R21LM009824 from the National Library of Medicine. The content of this paper is solely the responsibility of the author and does not necessarily represent the official views of the NLM or the NIH.
References
[1] International Health Terminology Standards Development Organisation. SNOMED Clinical Terms® Technology Preview Guide – January 2010 International Release (US English), 2010.
[2] International Health Terminology Standards Development Organisation. SNOMED CT® Release Format 2.0 Reference Set Specifications – Version 1.0a (January 2011 International Release), 2011.
[3] International Health Terminology Standards Development Organisation. SNOMED Clinical Terms® Release Format 2.0 Data Structures Specification – Version 1.0a (January 2011 International Release), 2011.
[4] International Health Terminology Standards Development Organisation. SNOMED CT® Release Format 2.0 Guide for Updating from RF1 to RF2 – Version 1.0a (January 2011 International Release), 2011.
[5] Ceusters W, Spackman KA, Smith B, editors. Would SNOMED CT benefit from realism-based ontology evolution? American Medical Informatics Association 2007 Annual Symposium Proceedings, Biomedical and Health Informatics: From Foundations to Applications to Policy; 2007 November 10-14; Chicago IL: American Medical Informatics Association.
[6] Mougin F, Bodenreider O, Burgun A. Looking for Anemia (and Other Disorders) in SNOMED CT: Comparison of Three Approaches and Practical Implications. AMIA Annual Symposium Proceedings. 2010:527-31.
[7] Schulz S, Cornet R. SNOMED CT's Ontological Commitment. In: Smith B, editor. ICBO: International Conference on Biomedical Ontology. Buffalo NY: National Center for Ontological Research; 2009. p. 55-8.
[8] Ceusters W, Smith B. A Unified Framework for Biomedical Terminologies and Ontologies. In: Safran C, Marin H, Reti S, editors. Proceedings of the 13th World Congress on Medical and Health Informatics (Medinfo 2010), Cape Town, South Africa, 12-15 September 2010. Amsterdam: IOS Press; 2010. p. 1050-4.
[9] Ceusters W. Additional Data for MIE2011; www.referent-tracking.com/CeustersMIE2011AddData.zip. 2011.
[10] Ceusters W. Applying Evolutionary Terminology Auditing to SNOMED CT. American Medical Informatics Association 2010 Annual Symposium (AMIA 2010) Proceedings. Washington DC, 2010. p. 96-100.
[11] International Health Terminology Standards Development Organisation. SNOMED CT® Technical Reference Guide – January 2011 International Release (US English), 2011.
[12] International Health Terminology Standards Development Organisation. SNOMED CT® Technical Reference Guide – July 2009 International Release, 2009.
[13] Smith B. Beyond concepts: ontology as reality representation. Proceedings of the Third International Conference on Formal Ontology in Information Systems. Amsterdam: IOS Press; 2004. p. 73-84.
[14] International Health Terminology Standards Development Organisation. SNOMED CT® Technical Reference Guide – July 2010 International Release (US English), 2010.
[15] Frege G. Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische Kritik. 1892;100:25-50.
[16] International Health Terminology Standards Development Organisation. SNOMED Clinical Terms® User Guide – January 2011 International Release (US English), 2011.
[17] International Health Terminology Standards Development Organisation. SNOMED CT® Editorial Guide – January 2011 International Release (US English), 2011.
[18] Smith B, Ceusters W. Ontological Realism as a Methodology for Coordinated Evolution of Scientific Ontologies. Applied Ontology. 2010;5(3-4):139-88.
[19] Ceusters W, Elkin P, Smith B. Negative Findings in Electronic Health Records and Biomedical Ontologies: A Realist Approach. International Journal of Medical Informatics. 2007 March;76:326-33.
[20] Lord P, Stevens R. Adding a Little Reality to Building Ontologies for Biology. PLoS ONE. 2010;5(9):e12258.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-834
Serious Adverse Event Reporting in a Medical Device Information System
Fabrizio PECORARO and Daniela LUZI, Institute for Research on Population and Social Policies (IRPPS), National Research Council, Rome, Italy
Abstract. The paper describes the design of a module that manages Serious Adverse Event (SAE) reporting within a clinical investigation on medical devices. This module is integrated in a Medical Device Information System (MEDIS) that collects data and documents exchanged between applicants and the National Competent Authority during the clinical investigation lifecycle. To improve information sharing among different stakeholders and systems, MEDIS was designed and developed on the basis of the HL7 v.3 standards. The paper provides a conceptual model of SAEs based on the HL7 RIM that highlights medical device characteristics. Keywords. Medical Device, Clinical Investigation, HL7, Serious Adverse Event
1. Introduction
Serious adverse event reporting in Clinical Investigations (CIVs) encompasses an intensive and long-standing interaction between different stakeholders acting in different environments, producing and using different types of data according to specific aims: CIV sponsors, investigators, human research ethics committees, National Competent Authorities (NCAs), clinical trial monitors, and patients. National laws, European directives as well as good medical practice and guidelines provide the legal framework for CIVs on both pharmaceutical products and Medical Devices (MDs), identifying the responsibilities of the parties concerned, defining adverse events (AEs) and serious adverse events (SAEs), and requiring specific information on severity, causality and actions taken to safeguard patient safety. Although there is an increasing number of applications that support CIV management (electronic data capture, patient recruitment, site management, etc.), only a limited number is devoted to SAE reporting and monitoring. They are generally developed to facilitate the communication among investigators within hospitals or networks dealing with many CIVs at a time [1,2]. Usually, SAE reporting still relies on paper-based communication, and this is especially true in the MD domain. Moreover, it is worth noting that SAEs on MDs require additional data compared to those related to investigational drugs: specification of whether SAEs depend on device malfunction, failure or misuse, and reporting of whether subjects other than patients (e.g. operators or caregivers) are involved in SAEs.
Corresponding author: Daniela Luzi: Institute for Research on Population and Social Policies (IRPPS), National Research Council, Via Palestro 32, 00185, Rome, Italy; E-mail: [email protected].
In our vision, SAE reporting should be embedded within NCAs' information systems that support the regulatory submission of CIV proposals and monitor the entire lifecycle of CIV performance. In this way SAE reporting could a) benefit from information already available (detailed MD description, clinical protocol, investigational sites, etc.), b) become a landmark for exchanging information with other stakeholders, including other NCAs where a CIV takes place, and c) enhance the process of analysis and monitoring of similar and/or related SAEs. These advantages can assure patient safety through a timely and widespread diffusion of this information. Moreover, the importance of information sharing in this domain makes it crucial to develop interoperable information systems based on standardized clinical data. The paper describes a Medical Device information system (MEDIS) developed by the National Research Council, focusing in particular on the design and development of a module that manages SAE reporting. Taking the above-mentioned requirements into account, MEDIS was designed and developed to interoperate with other systems, in particular with other NCA registries and with the European Databank on Medical Devices (EUDAMED). For this reason MEDIS design was based on Health Level 7 (HL7) v.3 standards [3]. Sections 2 and 3 describe, respectively, the main issues concerning SAE reporting and a brief overview of standardization initiatives, while Section 4 describes the SAE conceptual model and motivates the adoption of HL7, providing the related Refined Message Information Model (RMIM).
2. Main Issues in SAE Reporting in Clinical Investigations
Risk management is one of the main concerns in the development of MDs: it starts in the product's design phase and is continuously verified by both manufacturers and regulatory authorities once an MD is placed on the market. When a manufacturer proposes a CIV on an MD, a detailed risk analysis document is a pre-requisite for CIV approval, as it determines levels of probability as well as degrees of severity of the identified risks that may occur. Both in CIVs and in post-market surveillance, adverse event reporting and its evaluation represent one of the most important means to test MD efficacy and safety, provided that the collected data are comparable. Differently from adverse events reported in the framework of surveillance systems, all adverse events occurring during CIVs are collected in Case Report Forms (CRFs) by the manufacturer, enabling their analysis. Only recently can NCAs partially achieve this task, thanks to the enforcement of MEDDEV 2.7/3 [4], which established a common set of data to be exchanged when a SAE occurs. However, the "cumulative overview" provided by the MEDDEV template, based at most on the exchange of Excel forms, together with a limited emphasis on the necessity of establishing automatic procedures to exchange data efficiently, risks reducing SAE reporting to a simple notification. The improvement of electronic methods to detect and diffuse SAE information in an integrated environment can enhance the NCA's role in safeguarding patient safety [5]. Under this perspective the MEDIS system intends to contribute to the development of a CIV infrastructure that facilitates data sharing, including SAE reporting, within the information gathered in a national registry. For these reasons, a specific module for SAE reporting was developed within the MEDIS system, supporting 1) applicants in SAE reporting (both initial and final reports) by providing a set of forms related to the description of the SAE (severity, causality, MD tracking information), the subjects involved and the actions taken for each subject; and 2) the NCA in monitoring SAEs occurring at national
level, and managing communications (e-mail and other required documents) exchanged during the SAE lifecycle.
3. Standardization Initiatives
Although MDs are increasingly used in daily medical practice, data modelling and standardization are at an initial stage. Efforts toward interoperability and standardization are carried out mainly by the Clinical Data Interchange Standards Consortium (CDISC) and HL7. However, CDISC is focused on data standardization related to clinical trials on pharmaceutical products. Only recently has an SDTM (Study Data Tabulation Model) Device sub-team been formed with the aim of developing a domain that describes information (properties and characteristics) usable to capture data and metadata collected by manufacturers during CIVs on different types of MDs, such as implantable, imaging and diagnostic MDs [6]. Within the HL7 interoperability initiatives some models have been released focusing only on particular aspects of this domain [3]. The combination of the HL7 methodology and the CDISC data model has been used to develop a CIV representation within the BRIDG project (Biomedical Research Integrated Domain Group) [7,8], which however does not address the characteristics of the MD domain.
4. Serious Adverse Event Domain Analysis Model (DAM)
According to the HL7 v.3 methodology, Figure 1 depicts the portion of the MEDIS DAM [9] modelling SAE reporting, using UML class diagram notation.
Figure 1. Serious Adverse Event Domain Analysis Model.
The Act class Serious Adverse Event describes the main information about SAEs. It is related to one or more Assessment Results that contain the description of the assessment performed by each SAE Evaluator. An evaluator, represented by the stereotype Participation, can be either the applicant (Applicant Environment) of the CIV
or the Principal Investigator belonging to the Investigation Centre where the SAE took place. Both of them are represented by the stereotype Role and related to the Entity Organization. Each Report has an Author (Participation) that in MEDIS is a Person (Entity) who has the right to access the system. Finally, a Serious Adverse Event is also described by: the Location (Participation) where the SAE occurred; the Medical Device (Role) that is Deployed (Participation) in the SAE; and the Subject (Participation) who experienced the SAE. A Subject is a Person, represented by the stereotype Role, who can be either the Patient who is using the MD or the Care Giver who works with it. Each Subject is connected with the class Action Taken that reports the medical procedures performed to mitigate the effects of a SAE. This class is related to the class Report. Compared with the SAE information required for pharmaceutical clinical trials, the MD domain described by the MEDIS DAM includes additional information as required by the MD directives and guidelines related to SAE reporting [4,10]. It adds the notion of different subjects who might be involved in a SAE (patients, users), the identification of the causality (SAE related to the investigational device and/or to the procedure in deploying it), and data on the decisions taken by the NCA, which for example might interrupt a CIV and/or ask for further information.
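For readers less familiar with UML, the entities in the DAM can be paraphrased as plain data structures. The following sketch is only an illustration of the classes named above; the attribute names are assumptions and do not reproduce the HL7 RIM or the actual MEDIS schema.

# Illustrative paraphrase of the SAE Domain Analysis Model entities.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MedicalDevice:
    identifier: str
    description: str

@dataclass
class Subject:
    subject_id: str
    role: str                              # "patient" or "caregiver/user"
    actions_taken: List[str] = field(default_factory=list)

@dataclass
class AssessmentResult:
    evaluator: str                         # applicant or principal investigator
    causality: str                         # related to device and/or procedure
    severity: str

@dataclass
class SeriousAdverseEvent:
    sae_id: str
    location: str                          # investigation centre where it occurred
    device: MedicalDevice
    subjects: List[Subject]
    assessments: List[AssessmentResult] = field(default_factory=list)
    nca_decision: Optional[str] = None     # e.g. interrupt the CIV, request information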
Figure 2. Serious Adverse Event Refined Message Information Model.
The RMIM (Fig. 2) derived from the DAM justifies the adoption of HL7 as well as the application of its conceptual model in the domain of SAE reporting between the applicant of an approved CIV and the relevant NCA. In this context, the information that according to regulations identifies whether the adverse event is a reportable one, and that establishes causality and severity grades progressively evaluated by both investigators and applicants, is represented by the process of gathering data related to the event to be notified. The often-criticized ambiguity [11,12] of the HL7 definition of the Act class, which can be specialised either as an action (such as Observation or Procedure) or as a structured document (such as ContextStructure and Document), is implicitly disambiguated by the class code attribute as used in the Act Report (classCode=DOC).
Based on the definition of the Act class (see § 1.3 and 3.1.1.1 of [13]), we consider the Act Report as a container of documented actions. The double interpretation of the HL7 Act class is confirmed by the attribute statusCode, which tracks the creation, updating and versioning of the document. Moreover, the relationship Participation-Act identifies the responsible actor, who is the legal authenticator of the Report. This is a crucial aspect also in the context of a registry managed by an NCA, which has to identify the attributability (authored and signed) of the information reported [14]. Similar uses of the Act Document have already been balloted in the domain of regulated studies to collect data and audit trail information about the experimental units involved in a clinical study (see the Regulated Studies Domain of [3]).
5. Conclusion
The paper presents the methodology used to design SAE reporting activities within an NCA information system used to submit and monitor CIVs on MDs at national level. This allows users to increase the consistency of SAE data, reduce reporting time, track the status of the SAE lifecycle, and facilitate the analysis of reported events. The necessity of sharing information among different stakeholders, as well as of systems' interaction, led us to choose HL7 v.3 standards to design the MEDIS system. This contributes to improving the use of standards in a relatively new and expanding domain of clinical research. Acknowledgements. This study was supported by the Italian Ministry of Health through the MEDIS project (MdS-CNR collaboration contract n° 1037/2007).
References
[1] London JW, Smalley KJ, Conner K, Smith JB. The automation of clinical trial serious adverse event reporting workflow. Clinical Trials 6 (2009), 446-454.
[2] Mitchell R, Shah M, Ahmad S, Rogers AS, Ellenberg JH. A unified web-based Query and Notification System (QNS) for subject management, adverse events, regulatory, and IRB components of clinical trials. Clinical Trials 2 (2005), 61-71.
[3] Health Level Seven, v3. http://hl7.org/v3ballot/html/welcome/environment/index.htm
[4] MEDDEV 2.7/3. Guidelines on Medical Devices – Clinical Investigations: Serious Adverse Event Reporting. December 2010.
[5] Murff HJ, Patel VL, Hripcsak G, Bates DW. Detecting adverse events for patient safety research: a review of current methodologies. Journal of Biomedical Informatics 36 (2003), 131-143.
[6] Smoak C. CDISC for the Medical Device and Diagnostic Industry: an Update (2009). Available at: http://www.wuss.org/proceedings09/09WUSSProceedings/papers/cdi/CDI-Smoak.pdf
[7] Fridsma BD, Evans J, Hastak S, Mead CN. The BRIDG project: a technical report. JAMIA 15 (2007), 130-137.
[8] BRIDG project (Biomedical Research Integrated Domain Group). Available at http://bridgmodel.org/
[9] Luzi D, Pecoraro F, Ricci FL, Mercurio G. A medical device domain analysis model based on HL7 Reference Information Model. In: Proceedings of MIE 2009, IOS Press, (2009) 162-166.
[10] ISO 14155. Clinical investigation of medical devices for human subjects. Draft version 2011.
[11] Vizenor L. Actions in health care organizations: an ontological analysis. In: Proceedings of MEDINFO 2004 11 (2004), 1403-1407.
[12] Smith B, Ceusters W. HL7 RIM: An Incoherent Standard. In: Hasman A, Haux R, van der Lei J, De Clercq E, Roger-France F, eds. Proceedings of MIE 2006. Amsterdam, IOS Press, (2006) 133-138.
[13] Health Level Seven Reference Information Model. hl7.org/v3ballot/html/infrastructure/rim/rim.html
[14] Rector A, Nolan W, Kay S. Foundations for an electronic medical record. Methods of Information in Medicine 30 (1991), 179-186.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-839
Metadata - an International Standard for Clinical Knowledge Resources
Gunnar O. KLEIN, Dept of Microbiology, Tumour and Cell Biology, Karolinska Institutet, Sweden
Abstract. This paper describes a new European and international standard, ISO 13119 Health informatics – Clinical knowledge resources – Metadata, that is intended for both health professionals and patients/citizens. The standard aims to address two issues: 1) how to find relevant documents that are appropriate for the reader and situation, and 2) how to ensure that the knowledge documents found have a sufficient, or at least declared, quality management. An example of use is provided from the European Centre for Disease Control and Prevention. Keywords. metadata, decision support systems, clinical knowledge resource, standard, ISO 13119
1. Introduction
The internet is rapidly changing the way we access medical knowledge. Health professionals use web-based knowledge sources, and digital documents are provided from databases and via e-mail. The patients/citizens also turn to the internet for advice. The European Commission has published a set of quality criteria for health-related websites [1] as one way of establishing trust in web resources. A trust-mark indicating a "minimum" level of trustworthiness requires:
• a set of quality requirements, which might be very difficult to agree on as relevant for all contexts;
• third-party control, by governmental bodies or professional associations, of all possible documents to receive the mark; or
• reliance on a self-declaration by the issuer, in which case the user of the information has no real guarantee that the criteria are met even if the mark is there.
Instead of reviewing the actual content of the medical knowledge resources, we can define the processes behind their development. Health authorities in many countries, in co-operation with the Commission, have considered the possible need for legislation and control procedures, but generally the conclusion has been that rather than trying to ban bad-quality information, one should make it easier for citizens as well as for health professionals to find the type of information they request, with the quality criteria behind a knowledge resource easily accessible. One feasible and important approach is to establish a set of metadata for each knowledge resource to describe the content and the procedures behind its production.
Corresponding author. Gunnar Klein, 177 77 Stockholm, Sweden, E-mail: [email protected]
In this paper the development of a standardised set of metadata is described. The following issues are addressed: what are the possible uses of a standardised set, and what are the basic principles of the new standard in the field?
2. Materials and Methods
This study is based on the work of the European and international standards organisations during the years 2000-2010. It started with a literature review on the use of metadata for various health care purposes and on the general development of metadata for intersectoral use, the Dublin Core, and various initiatives to propose metadata specifically for clinical guidelines. The development of the first draft and the discussions of the standardisation working groups were followed by extensive and repeated international review, with comments and suggestions for improvements from many nations. The author was the project leader of the standardization project started in CEN, which led to the publication of CEN/TS 15699:2009 [2], further enhanced in ISO as ISO 13119 [3], also to be published as a European standard.
3. Results
3.1. The Scope of the Metadata Standard
This standard defines a number of metadata elements that describe documents containing medical knowledge, primarily digital documents provided as web resources, accessible from databases or via file transfer, but it can be applicable also to paper documents, e.g. articles in the medical literature. It is based on ISO 15836:2009 Information and documentation – Metadata – The Dublin Core metadata element set [4]. The metadata should:
• support unambiguous and international understanding of important aspects to describe a document, e.g. purpose, issuer, intended audience, legal status and scientific background
• be applicable to different kinds of digital documents, e.g. a recommendation from the consensus of a professional group, a regulation by a governmental authority, a clinical trial protocol from a pharmaceutical company, a scientific manuscript from a research group, advice to patients with a specific disease, a review article
• be possible to present to human readers, including health professionals as well as citizens/patients
• be potentially usable for automatic processing, e.g. to support search engines to restrict matches to documents of a certain type or quality level
3.2. Characteristics of the Metadata Element Set
In the element descriptions below, each element has a descriptive label intended to convey a common semantic understanding of the element, as well as a unique, machine-understandable, single-word name intended to make the syntactic specification of elements simpler for encoding schemes.
Each element is optional and repeatable. Metadata elements may appear in any order. The ordering of multiple occurrences of the same element (e.g. Creator) may have a significance intended by the provider, but ordering is not guaranteed to be preserved in every system. To promote global interoperability, a number of the element descriptions suggest a controlled vocabulary for the respective element values. The Dublin Core set assumes that different domains develop, where necessary, controlled vocabularies as specialisations of the content of the general-purpose metadata element set, adding other metadata elements as required. This standard is such a specialisation for the medical knowledge domain.
3.3. Metadata Groups
The metadata elements are grouped purely for human navigational purposes:
• Resource form
• Intended use
• Subject and Scope
• Identification and source
• Quality control
The total number of metadata element tags in this standard is 150.
3.4. Examples of Specialisations
In some areas the standard contains an enumerated list of specialisations to be used for the content under some metadata elements.
3.4.1. Healthcare Specialization for Type
One example is for the element Type defined by Dublin Core. Definition: Nature or genre of the content of the resource. The following Types are from the Dublin Core 2009: Text, MovingImage, StillImage, Sound, Dataset, InteractiveResource, Software, Device.
Table 1: The following terms may be used to describe Type.Text in health care:
Journal_article, Book_chapter, Book, Report, Abstract, Patient_information, FAQ, Algorithm, Clinical guideline, Policy-strategy, Information_standard, Teaching_material, Computable clinical information model, Terminological_resource, Metainformation, Case_report, Proposal, Event, Service_description, Product_information, Critically_appraised_topic, Known_uncertainty, Observational_study, Qualitative_study, Randomised_controlledtrial, Research_study, Review, Systematic_review, Structured_abstract, Care_pathway
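As a rough illustration of how such element values might be attached to an individual resource – this is not a normative serialisation defined by the standard, and several of the element names and values below are assumptions – metadata for a clinical guideline could be recorded as simple key-value pairs:

# Rough illustration only: a possible key-value rendering of some metadata
# for a clinical guideline, grouped as in the standard's element groups.
guideline_metadata = {
    # Resource form
    "Type": "Text.Clinical guideline",
    "Format": "text/html",
    # Intended use
    "Audience": "health professional",
    "Situation": "Clinical_guidance",
    # Subject and scope
    "Subject": "management of community-acquired pneumonia in adults",  # illustrative topic
    "Language": "en",
    # Identification and source
    "Identifier": "https://example.org/guidelines/cap-2011",   # hypothetical URL
    "Creator": "Example professional society",                 # hypothetical issuer
    "Date": "2011-03-01",
    # Quality control
    "Evidence_grading": "GRADE",
    "Review_procedure": "external peer review, revised every 3 years",
}

# A search front-end could then filter, e.g., on Type and Situation to return
# only clinical guidelines intended for clinical guidance.
for key, value in guideline_metadata.items():
    print(f"{key}: {value}")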
3.4.2. Example of Healthcare Specialization for Situation
This element is defined as: Description of the situation where the knowledge is intended to be used (HC). It can also be understood as the intended role of the knowledge resource. Healthcare-specific specialisation:
• Clinical_guidance
• Self_guidance
• Supporting_software
• Research_protocol
• Background_knowledge
3.5. Overview of the Metadata Classes
Figure 1 shows an overview of the classes.
Figure 1. Overview of the Metadata elements for Clinical Knowledge Resources.
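To make the element set more concrete, the following minimal sketch shows how a knowledge document might be tagged with a few such elements and how a search could be restricted by Type. The record values, field names and helper function are invented for illustration and are not prescribed by the standard; only the Type and Situation terms come from the healthcare specialisations above.

```python
# Illustrative sketch only: a document described with a handful of Dublin Core
# style elements plus the healthcare specialisations for Type and Situation.
# Values and helper names are hypothetical, not taken from the standard.
guideline_record = {
    "title": "Management of community-acquired pneumonia in adults",
    "creator": ["National Pulmonology Society"],   # elements may repeat
    "type": "Clinical_guideline",                  # healthcare Type.Text term
    "situation": "Clinical_guidance",              # healthcare Situation term
    "audience": "health professionals",
    "language": "en",
}

patient_leaflet = dict(guideline_record,
                       title="Pneumonia: advice for patients",
                       type="Patient_information",
                       situation="Self_guidance",
                       audience="citizens/patients")

def restrict_by_type(records, wanted_type):
    """Keep only documents whose Type matches, as a search engine might."""
    return [r for r in records if r.get("type") == wanted_type]

hits = restrict_by_type([guideline_record, patient_leaflet], "Clinical_guideline")
print([r["title"] for r in hits])   # only the guideline remains
```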
3.6. A First User of the New Standard – ECDC
The European Centre for Disease Prevention and Control (ECDC), a relatively new European Union agency mainly active in the surveillance and prevention of communicable disease, has an ambitious programme for knowledge management that shall serve not only its internal staff and specially commissioned experts but also the member states of the European Union with their national agencies for the control of communicable diseases. The organisation was planning its own implementation of the Dublin Core metadata standard when it was approached, and it studied the new standard already at the draft stage. ECDC has now implemented its use as a routine part of its work, together with other strategies for knowledge retrieval, in an enterprise-wide search system.
4. Discussion
A system of metadata tags, i.e. the named element set described above, can have many different uses. The first uses have been within larger organisations that need to ensure that their documents, the most common form of knowledge resource, can be found using automatic retrieval methods. If metadata are assigned in a consistent and well-
structured way to each document, complete retrieval of all relevant documents meeting a search profile can be ensured, whereas retrieval methods that rely only on the core content of a document, indexed or not, without any metadata usually cannot ensure that all relevant documents are found. The other major feature of a good metadata-based system is its ability to exclude irrelevant documents through a much more specific search profile. With the explosive growth of document resources this becomes more and more important. Clinicians experience it frequently when searching for knowledge on dedicated medical knowledge sites, and it is of course also a common problem for the general public using general-purpose search engines such as Google on the World Wide Web.
It should be emphasized that there is no requirement in the standard to use all of the metadata elements available. The standard is a set of optional elements, and typically a publisher of a particular type of knowledge resource uses only a small subset. If required, the set can be extended with additional elements. For some elements very detailed guidance is provided where there was good justification to propose details; in other areas users will have to develop their own guidance documents if consistency is to be achieved.
There is also another use of metadata that is not related to retrieval: enabling the user of the resource to understand what the resource is, its intended use, its source and possibly its quality control. The latter is achieved largely through reference to the GRADE system for clinical guideline documents, which is also acknowledged by the WHO [5].
Acknowledgments: This study was supported by the European Union Network of Excellence Semantic Mining. During the first years of this work the author co-operated with Dr Anders Thurin, Göteborg, Sweden. His important contributions are gratefully acknowledged, together with those of many other experts of CEN/TC 251/WG 2 and ISO/TC 215/WG 3.
References
[1] The European Commission, COM(2002) 667, eEurope 2002: Quality Criteria for Health related Websites.
[2] CEN/TS 15699:2009. Health informatics – Clinical knowledge resources – Metadata. The European Committee for Standardization, Brussels, 2009.
[3] ISO/DIS 13119:2011. Health informatics – Clinical knowledge resources – Metadata. International Organization for Standardization, Geneva, 2011.
[4] ISO 15836:2009. Information and documentation – Metadata – The Dublin Core metadata element set. International Organization for Standardization, Geneva, 2009.
[5] H.J. Schünemann, A. Fretheim, A.D. Oxman. Improving the use of research evidence in guideline development: 9. Grading evidence and recommendations. Health Research Policy and Systems 4 (2006), 21.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-844
Comparing Existing National and International Classification Systems of Surgical Procedures with the CEN/ISO 1828 Ontology Framework Standard Jean M. RODRIGUESa,b,c,1, Ann CASEYd , Cédric BOUSQUETa,b, Anand KUMARa, Pierre LEWALLEa, Béatrice TROMBERT PAVIOTa,b. a University of Saint Etienne, CHU, Department of public health and medical informatics, Saint Etienne, France b INSERM UMR 872 Eq 20, Paris, France c WHO collaborating center for International Classifications in French Language, Paris, France d Royal College of Nursing of United Kingdom, London, England
Abstract: Among the different standardization strategies for biomedical terminologies, the European standards body CEN TC 251, followed by ISO TC 215, stated that it was not possible to convince the different European or international member states using different national languages to agree on a reference clinical terminology or to standardize a detailed language-independent biomedical ontology. Since 1990 they have therefore developed an approach named the Categorial Structure, which standardises only the terminologies' model structure. The methodology for Categorial Structure development and a comparison of the different existing classification systems based on this ontology framework are presented as a step towards increased interoperability between biomedical terminologies through conformity to a minimum set of ontological requirements. Keywords: Standard; Biomedical terminology; Categorial Structure; Ontology
1. Introduction
There is a growing need to compare data produced at national and international levels on a range of shared concerns relating, for instance, to population-based indicators, Electronic Health Record safety, the OECD (Organisation for Economic Co-operation and Development), trans-border migration of population, case mix and procedure payment. Clinical terminology systems, classifications and coding systems that are drawn upon to that end have unfortunately been developed using independent, divergent or uncoordinated approaches, which has produced non-reusable systems with overlapping fields for different requirements. For some decades, several broad pre-coordinated or compositional systems have been proposed to users targeting different goals, for example UMLS (Unified Medical Language System) [1], LOINC [2] for clinical laboratories, DICOM SDM [3] for imaging, or SNOMED CT [4]. At the same time most developed countries have continued to maintain, update and modify their own coding systems for procedures and their national adaptations of ICD, in order to manage and to fund their health care delivery. Significant efforts have been made for example in Australia with ACHI (Australian Classification of Health Interventions) and ICD10 AM [5], in Canada with the Canadian Classification of Health Interventions (CCI) [6] developed by the Canadian Institute for Health Information (CIHI) and in France with CCAM (Classification Commune des Actes Médicaux) [7]. Standardisation in health informatics started in the US with the HL7 user group. The European Standard Body CEN TC 251 WG2 (Comité Européen de Normalisation Technical Committee 251 Working Group 2) and later the International Organisation for Standardization (ISO) TC 215 WG3 elaborated and developed a standard approach for biomedical terminology named the Categorial Structure. We outline that ontology framework and the latest standard on terminologies of surgical procedures, currently pending final approval [8], in section 2 below; in section 3 we compare major national and international classifications of surgical procedures in the light of that standard. Finally we discuss the role that standard could play not only in supporting the comparison of classifications and coding systems of surgical procedures but also in facilitating their harmonization towards a more complete semantic interoperability based on a shared biomedical ontology.
1 Corresponding author: JM Rodrigues, CHU de St Etienne, SSPIM, Chemin de la Marandière, 42 270 Saint Priest en Jarez, France, E-mail: [email protected]
2. CEN/ISO Categorial Structure Standard Approach
2.1. Definition
The CEN Categorial Structure was defined, with some linguistic variations [9], as a minimal set of health care domain constraints to represent a biomedical terminology in a precise health care domain with the precise goal of communicating safely. It is a definition of a minimal semantic structure or ontology framework describing the main properties of the different artefacts used as terminology (controlled vocabularies, nomenclatures, reference terminologies, coding systems and classifications): a model of knowledge restricted to 1) a list of semantic categories; 2) the goal of the Categorial Structure; 3) the list of semantic links authorised between semantic categories, with their associated semantic categories; 4) the minimal constraints allowing the generation and the validation of well-formed terminological phrases. Any biomedical terminology artefact claiming conformance to the standard shall attach the Categorial Structure of the terminology used to the data sent. The Categorial Structure shall satisfy the four constraints but can add more constraints.
2.2. Specifications for Terminologies of Surgical Procedures
1. The main semantic categories are Human Anatomy, Deed, Interventional Equipment and Lesion.
2. The semantic links are has_object, has_site, has_sub_surgicaldeed and has_means:
   2.1. has_object is authorised between Deed and Human anatomy, Interventional Equipment or Lesion
   2.2. has_site is authorised between Interventional equipment or Lesion and Human anatomy
   2.3. has_means is authorised between Deed and Human anatomy, Interventional equipment or Lesion
   2.4. has_sub_surgicaldeed is authorised between Deed and Deed
3. The minimal constraints required are:
   3.1. A Deed and has_object shall be present
   3.2. Human anatomy shall always be present, either with a has_object or with a has_site
   3.3. Use of Lesion shall be restricted to macroscopic lesions and to cases where it allows the procedure to be differentiated from procedures using the same deed and the same human anatomy
   3.4. When has_sub_surgicaldeed is used, the Deed on the right side of the semantic link must conform to rules 3.1, 3.2 and 3.3.
A sketch of how these constraints might be checked on a simple representation of a terminological phrase is given after this list.
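The following minimal sketch (not part of the standard) illustrates how a terminological phrase, represented here simply as a set of typed links between semantic categories, could be checked against the link rules and the minimal constraints 3.1 and 3.2; the data structures and function names are invented for this illustration.

```python
# Hypothetical sketch: a terminological phrase as a set of typed semantic links.
# Categories and link rules follow the specification above; the representation
# and function names are invented for illustration.
ALLOWED_LINKS = {
    "has_object": ("Deed", {"Human anatomy", "Interventional equipment", "Lesion"}),
    "has_site": ("Interventional equipment|Lesion", {"Human anatomy"}),
    "has_means": ("Deed", {"Human anatomy", "Interventional equipment", "Lesion"}),
    "has_sub_surgicaldeed": ("Deed", {"Deed"}),
}

def check_phrase(links):
    """links: list of (link_name, source_category, target_category) triples."""
    errors = []
    for name, source, target in links:
        rule = ALLOWED_LINKS.get(name)
        if rule is None:
            errors.append(f"unknown link {name}")
            continue
        allowed_sources, allowed_targets = rule
        if source not in allowed_sources.split("|"):
            errors.append(f"{name} not allowed from {source}")
        if target not in allowed_targets:
            errors.append(f"{name} not allowed towards {target}")
    # Minimal constraints 3.1 and 3.2
    if not any(n == "has_object" for n, s, t in links):
        errors.append("a Deed with has_object shall be present (rule 3.1)")
    if not any(t == "Human anatomy" and n in ("has_object", "has_site")
               for n, s, t in links):
        errors.append("Human anatomy missing from has_object/has_site (rule 3.2)")
    return errors

# Example: "excision of polyp of colon" expressed as two links
phrase = [("has_object", "Deed", "Lesion"), ("has_site", "Lesion", "Human anatomy")]
print(check_phrase(phrase) or "phrase is well formed")
```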
3. Comparison of Existing Classification Systems of Surgical Procedures
One goal of the standard is to support comparisons between existing classification systems of surgical procedures. During the development of the standard, new national and international surgical procedure classification systems were developed, some of which claimed conformance with the European standard initially specified in 1995. The comparison of their conformance to the standard was undertaken firstly to assess whether or not the most advanced systems met or nearly met the requirements of the standard, and secondly to identify whether, at the level of the ontology framework, the various systems are as different as they appear to be in their terminology part. The systems were mapped to the standard specifications described above by a Task Force of TC 215 WG 3 and reviewed by experts from each organisation or country. At the international level the selected systems were IHTSDO SNOMED CT (procedures only) [10] and WHO-FIC ICHI [11]. Five existing national systems were selected: Australia ACHI [12], Canada CCI [13], France CCAM [14], the Japan Surgical Society procedure codes [15] and USA ICD 10 PCS [16].
4. Discussion
Table 1 shows that the selected international and national classifications or terminology systems of surgical procedures are based on the semantic categories of the standard, with some restrictions for the category Lesion. The Lesion category is characterized only by SNOMED CT and the Japan Surgical Society system, although the other systems may use some Lesion value sets without specifying a semantic category. For the semantic links, all the studied systems use has_object, but only 4 out of 7 (SNOMED CT, ICHI, CCAM and the Japan Surgical Society) are based on all the semantic links. Only SNOMED CT and ICHI explicitly define the list of domain constraints. None of the systems prescribes the list of minimal domain constraints. From this comparison it can be said that the most recently developed international and national terminologies and classification systems of surgical procedures are based on the CEN/ISO 1828 standard semantic categories. Only 4 out of 7 are based on all
the specified semantic links, and only 2 out of 7 explicitly prescribe the list of domain constraints, with none prescribing a minimal list. On the path to increasing semantic interoperability to level 2 (understanding the terms with the meaning of the sender), conformance to the EN/ISO 1828 ontology framework standard is an opportunity which has started to be used by the most advanced classification systems [17] and the international ICHI initiative. This use will be completed by explicitly associating the Categorial Structure ontology framework with biomedical terminology exchange protocols. This step will ease the development of a fully shared biomedical ontology based on an upper-level ontology, needed to reach level 3 of semantic interoperability, where the receiver or final user can process the data as safely as with his own terms and meaning.
Acknowledgements. We wish to thank our partners in CEN TC251 WG2 and ISO TC 215 WG 3 and the convenors Hendrik Olesen, Ann Harding, Magnus Fogelberg, Chris Chute and Heather Grain, and terminology and biomedical ontology experts, namely Alan Rector from the University of Manchester and Barry Smith from the University at Buffalo. Special thanks to our partners in the GALEN program, namely Pieter Zanstra, Egbert van der Haring, Robert Baud and Jeremy Rogers.
References
[1] McCray AT, Nelson SJ. The representation of meaning in the UMLS. Methods Inf Med 1995;34(1-2):193-201.
[2] Logical Observation Identifiers Names and Codes (LOINC). See: http://www.loinc.org/
[3] DICOM. See: http://www.xray.hmc.psu.edu/dicom/dicom_home.html
[4] SNOMED Clinical Terms. College of American Pathologists. See: http://www.snomed.org/
[5] National Centre for Classification in Health. See: http://www3.fhs.usyd.edu.au
[6] Canadian Classification of Health Interventions. http://secure.cihi.ca/cihiweb/dispPage.jsp
[7] Agence Technique de l'Information Hospitalière. See: http://www.sante.atih.gov.fr
[8] prEN ISO 1828. Health informatics – Categorial Structure for classifications and coding systems of surgical procedures.
[9] Rodrigues J-M, Kumar A, Bousquet C, Trombert B. Standards and Biomedical Terminologies: The CEN TC 251 and ISO TC 215 Categorial Structures. A Step towards increased interoperability. In: S K Andersen et al. (Eds.) MIE 2008 Proc. IOS Press, 2008; pp. 735-740.
[10] SNOMED CT. See: http://www.nlm.nih.gov/research/umls/Snomed/snomed_main.html
[11] Madden R. ICHI project plan version 2.2. In: Proc WHO-FIC Annual Meeting, Toronto, 16-22 October 2010.
[12] Australian Classification of Health Interventions (ACHI). See: www.ncch.com.au
[13] Canadian Classification of Health Interventions (CCI). See: http://secure.cihi.ca/cihiweb/dispPage.jsp?cw_page=codingclass_cciover_e
[14] French Classification des Actes Medicaux (CCAM). See: http://www.ameli.fr/fileadmin/user_upload/documents/CCAMV23.pdf
[15] Japan Surgical Society Procedure codes.
[16] Procedure Coding System (USA) (PCS). See: www.cms.hhs.gov/ICD9ProviderDiagnosticCodes/08_ICD10.asp
[17] Rodrigues J-M, Rector A, Zanstra P, et al. An ontology driven collaborative development for biomedical terminologies: from the French CCAM to the Australian ICHI coding system. Stud Health Technol Inform. 2006;124:863-8.
APPENDIX Table 1. Comparison of selected international and national classifications or terminology systems of surgical procedures using ENISO1828 standard
The table compares seven systems (SNOMED CT, ICHI, CCAM, CCI, ACHI, the Japan Surgical Society Procedure Code and ICD10 PCS) against the ENISO 1828 Categorial Structure along ten dimensions: the categories Deed, Human Anatomy, Lesion and Interventional Equipment; the semantic links hasObject, hasSite, hasMeans and hasSub-surgicalDeed; and the presence of a list of domain constraints and of a list of minimal domain constraints.
All seven systems represent the categories Deed and Human Anatomy, each with its own construct (for example Method/action and Anatomical structure (body structure) in SNOMED CT, action and Target in ICHI, Acts and Organ/area in the Japan Surgical Society code, and axis or field positions in CCAM, CCI, ACHI and ICD10 PCS). The category Lesion is represented only by SNOMED CT (Morphologically abnormal structure) and the Japan Surgical Society code (Lesion); the category Interventional Equipment is represented, for example, by Device in SNOMED CT and by Instruments or device in the Japan Surgical Society code. All seven systems support hasObject (represented in SNOMED CT, for example, by the Procedure site – direct, Direct morphology and Direct device attributes). Only SNOMED CT, ICHI, CCAM and the Japan Surgical Society code support all four semantic links; the Japan Surgical Society code, for instance, uses Secondary organ/area for hasSite, Approaching method/device for hasMeans and Sequence of acts for hasSub-surgicalDeed, while SNOMED CT uses Access and Approach for hasSub-surgicalDeed. A list of domain constraints is defined only by SNOMED CT and ICHI (Yes); none of the systems defines a list of minimal domain constraints (No for all).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-849
Model Driven Development of Clinical Information Systems using openEHR
Koray ATALAG a,1, Hong Yul YANG a, Ewan TEMPERO a, Jim WARREN a,b
a Department of Computer Science, b National Institute for Health Innovation, The University of Auckland, Auckland, New Zealand
Abstract. openEHR and the recent international standard (ISO 13606) defined a model driven software development methodology for health information systems. However, there is little evidence in the literature describing implementations, especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives has been defined and presented which guides the automatic graphical user interface generator to render widgets properly. We also describe the development steps and important design decisions, from modelling to the final software product. This might provide guidance for other developers and provide evidence required for the adoption of these standards by vendors and national programs alike. Keywords. EHR, HIS, openEHR, interoperability, GUI.
1. Introduction
In this paper, we present the development methodology of a .Net/C# desktop application (GastrOS) for endoscopy reporting which is driven by openEHR models. The main drivers of such model driven software development are:
1. Transfer of domain knowledge from healthcare professionals into software is ineffective using the traditional development process, where technical professionals need to capture this knowledge and transform it into code. Put simply, the software can only be as good as this hand-over [1]. The two-level modelling technique in openEHR, essentially a model driven approach, allows clinicians to engineer knowledge using high-level tools; the result is then fed into the technical environment and consumed readily. This ensures the requirements are correct, complete and collected in a timely fashion.
2. The main challenge in achieving semantic interoperability lies in the non-technical domain and has to do with establishing a common language, sharing data set definitions and creating computable information and knowledge artefacts [2]. openEHR defines methods and processes which meet these requirements.
3. The main determinant of software cost is the maintenance phase [3]. Healthcare is no exception. Redevelopment due to modifications includes
Corresponding Author: Koray Atalag, MD, PhD. Department of Computer Science, The University of Auckland, Private Bag 92019 Auckland 1142, New Zealand; E-mail: [email protected].
redesign, coding, testing and deployment, which is very costly. Therefore being able to introduce these changes by modelling without redevelopment is very tempting and can potentially reduce the total cost of health information systems (HIS) significantly. We have selected digestive endoscopy as the clinical domain which is a niche area with excellent standardisation of domain content. The Minimal Standard Terminology for Digestive Endoscopy (MST) contains a "minimal" list of terms and structure which is used to record the results of an endoscopic examination [4]. It provides a simple and uniform hierarchy for data entry which allows for consistent and intuitive generation of graphical user interfaces (GUI) automatically.
2. Methods
The openEHR formalism effectively separates domain knowledge from software code using domain-specific models called Archetypes. This is commonly known as Two-Level Modelling. Archetypes (top level) represent clinically meaningful concepts such as blood pressure measurement. They use common technical building blocks expressed in the Reference Model (RM) (lower level). At runtime the software is driven by these models for dynamic GUI creation, data binding, validation and querying [1]. Thus altering software after deployment mainly involves remodelling by domain experts, without the need for another redevelopment cycle. The RM consists of a small set of technical models which depict the generic characteristics of health records (e.g. data structures and types) and context information to meet ethical, medico-legal and provenance requirements. In GastrOS, RM entities usually correspond to individual GUI widgets. Archetypes provide the semantics and structure of domain concepts. They constrain RM building-blocks and form a computer-processable model. Practically, they specify particular record entry names, data structures, data types, value sets and default values. It is also possible to link each data item to biomedical terminologies. openEHR Templates bring together relevant Archetypes to define higher-level models such as a discharge summary. Tighter constraints can be put on Archetypes (e.g. excluding some data items and values, or renaming them). During implementation, Templates are serialised into operational templates which contain all the structure and data items of the included archetypes.
2.1. Modelling
A number of openEHR Archetypes have been created using the free and open source (FOSS) openEHR Archetype Editor. The following sections from MST are included:
• Examination Information: consists of reasons for endoscopy, examination characteristics and complications.
• Endoscopic Findings: for each organ, using the MST hierarchy, terms, attributes, attribute values and anatomical sites.
• Interventions: diagnostic and therapeutic procedures performed.
• Diagnoses: list of diagnoses for each organ.
These sections are then filled with appropriate entry archetypes which further chain a myriad of structural archetypes carrying the bulk of the MST content. Finally, openEHR templates have been created using the Ocean Template Designer for each of the three examination types: upper and lower gastrointestinal endoscopy and ERCP.
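As an informal illustration of the two-level idea described above (and not of openEHR's actual Archetype Object Model or ADL syntax), an archetype can be pictured as a set of constraints applied to generic reference-model elements. All class, field and value names below are invented for this sketch, expressed in Python rather than the C# used by GastrOS.

```python
# Invented illustration of two-level modelling: a generic RM-style element plus
# an archetype-style constraint that narrows what the element may contain.
class RmElement:
    """Generic reference-model node: a name and a value of some data type."""
    def __init__(self, name, value):
        self.name, self.value = name, value

class ArchetypeConstraint:
    """Archetype-level constraint: restricts allowed names and value sets."""
    def __init__(self, name, allowed_values=None, mandatory=False):
        self.name = name
        self.allowed_values = allowed_values   # None means free text allowed
        self.mandatory = mandatory

    def validate(self, element):
        if element is None:
            return not self.mandatory
        if element.name != self.name:
            return False
        return self.allowed_values is None or element.value in self.allowed_values

# An "appearance" attribute of a finding, constrained archetype-style.
# "Extrinsic" appears in the GUI directive examples later; "Intrinsic" is assumed.
finding_constraint = ArchetypeConstraint(
    "appearance", allowed_values={"Extrinsic", "Intrinsic"}, mandatory=True)

print(finding_constraint.validate(RmElement("appearance", "Extrinsic")))  # True
print(finding_constraint.validate(RmElement("appearance", "Unknown")))    # False
```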
2.2. Implementation
GastrOS has been developed using the .Net platform and the C# programming language, with the MS Visual Studio 2008 IDE. The C# openEHR library2 (openEHR.Net) has been included in the project; it implements the 1.0.1 release of the openEHR RM and Archetype Object Model (AOM) specifications. It is used to build applications by composing RM objects, validating against the AOM and serialising to/from XML [5]. Architecturally, GastrOS consists of a simple wrapper application which is used for patient management, and the model-driven structured data entry (SDE) component. SDE takes in an operational template and dynamically creates appropriate GUI forms. This component has the additional capability of validating and persisting data. SDE follows the model-view-controller (MVC) paradigm, such that the user interaction and presentation logic is completely independent of the logic for data handling and persistence. SDE first parses the input operational template into a tree-like data structure which consists of archetype objects conforming to the AOM. Each archetype object acts as a blueprint for a specific part of the data to be entered and stored, and also determines its GUI widget. SDE defines a set of mapping rules to determine what kind of GUI widget to create for what kinds of data elements. For example, it would create a text box for a textual entry (e.g. name of a drug), a drop-down list for a restricted range of values (e.g. organ types), or a panel for a composite value that further contains sub-values (e.g. a list of diagnoses). These rules, which are fairly generic so as to accommodate as wide a range of clinical domains as possible, are combined with the novel GUI directives to finely adjust the aesthetics and visual behaviour of the GUI. Currently there is a hot debate about these directives, as the openEHR formalism does not provide any means to handle the presentation of information. It is generally accepted that this should be modelled as a different layer along with the archetypes and templates. So far, studies in this area have been very scarce in the literature [6,7]. We have taken a more practical approach and exploited the annotations property in openEHR Templates, which can be defined for any data item at a distinct path in any included archetype. A skeleton data instance, which we call the value instance and which holds the user-entered data, is created by the GUI generator at once; it comprises only the top-level hierarchy and the mandatory items depicted by the RM. Then the GUI generator recursively creates the associated widgets on the form. Each widget, representing a leaf-node data item, instantiates its own value instance and then binds to the skeleton. In this way, an exact representation of the AOM is formed. During data entry, if the user wants to create additional instances of certain data elements where multiplicity is allowed, additional data instances are appended to the skeleton. When the user decides to save, the parts of the value instance which correspond to empty widgets are first pruned and the result is then serialised into XML and persisted in a relational database (both MS Access and SQLite). When a value is cleared (set to null) after data have been committed, that part is removed from the value instance.
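The mapping rules can be pictured roughly as follows. This is an illustrative Python sketch rather than the actual C# implementation of GastrOS, and the node representation and function names are assumptions; only the rule of thumb (text box for free text, drop-down list for restricted value sets, panel for composites) comes from the description above.

```python
# Illustrative only: a rough rendering of the kind of widget-mapping rule the
# SDE component applies; GastrOS itself implements this in C# against the AOM.
def choose_widget(node):
    """node: dict with 'children' (list) and optional 'allowed_values' (list)."""
    if node.get("children"):                 # composite value with sub-values
        return "panel"
    if node.get("allowed_values"):           # restricted range of values
        return "drop-down list"
    return "text box"                        # free textual entry

template_fragment = {
    "name": "Finding",
    "children": [
        {"name": "Term", "allowed_values": ["Stenosis", "Tumour"], "children": []},
        {"name": "Comment", "children": []},
    ],
}

for node in [template_fragment] + template_fragment["children"]:
    print(node["name"], "->", choose_widget(node))
# prints: Finding -> panel / Term -> drop-down list / Comment -> text box
```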
3. Results
Table 1 shows some of the pertinent GUI Directives defined by our group which appear in Figure 2. For the full list please refer to the project website [8].
2 openEHR.Net was initially developed by Ocean Informatics Pty. Ltd. and then extended by our team.
Table 1. Pertinent GUI Directives defined and used in GastrOS.
isOrganiser: when this is set, the item will be displayed as a group (e.g. within a frame, form etc.) which will contain all its children. The heading items in MST, such as NORMAL, LUMEN, STENOSIS etc., have this directive set, causing them to be displayed in groups within a frame. Any container item will simply be ignored when isOrganiser is not set and will be grouped under the first isOrganiser parent (if any). This simplifies working with highly nested clinical models.
isCoreConcept: we assume that Core Concepts are real-world entities whose absence we can talk about. For example a clinical finding (tumour, bleeding etc.) can be reported as present but also as absent or unknown. However it does not make sense to report the absence of a tumour grade or a physical examination. This directive indicates that an item with all its children (if any) will be handled and repeated as a whole on the GUI and in the saved data. When data are saved it would not make sense to repeat attributes of a clinical finding defining its nature. For example, in Figure 2, when the Stenosis term is selected as a finding it should not have more than one Appearance attribute, because the values might be mutually exclusive or might conflict with other selected attributes. Rather, the Core Concept as a group should repeat with a different set of attributes and values. An exception is the anatomical sites; in most cases more than one site will be involved. When data are saved, for each core concept only one attribute can be expected, together with one or more anatomical sites. The example below illustrates a case with a repeating attribute where the values are mutually exclusive and should not be permitted (second line):
<Stenosis; Appearance=Extrinsic, Traversed=Yes, Sites=Cardia,Fundus,Incisura>
<Stenosis; Appearance=Extrinsic, Traversed=Yes, Traversed=No, Sites=Cardia,Fundus,Incisura>
showAs (form|splash, modal|modeless|smart): this determines the behaviour when an item's values or children are displayed. The item's label will be shown as a reference (e.g. link, button or similar) and the contents will be shown on another page, a separate form (form) or a pop-up screen (splash). The (smart) parameter creates a modeless form which closes when it loses focus, saving one click during data entry.
Figure 1. Sample GUI form with associated GUI directives.
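To show how such directives might be consumed by a GUI generator, the sketch below parses a hypothetical annotation string into directive flags; the annotation syntax, keys and parser are invented and do not reproduce the GastrOS implementation or the openEHR template annotation format.

```python
# Invented illustration: turning a per-item annotation string into GUI directives.
# Keys such as "isOrganiser" and "showAs" mirror the directives described above;
# the annotation syntax and parsing logic are assumptions for this sketch.
def parse_directives(annotation):
    directives = {"isOrganiser": False, "isCoreConcept": False, "showAs": None}
    for part in filter(None, (p.strip() for p in annotation.split(";"))):
        if "=" in part:
            key, value = (s.strip() for s in part.split("=", 1))
            directives[key] = value
        else:
            directives[part] = True
    return directives

print(parse_directives("isOrganiser; showAs=splash,modal"))
# {'isOrganiser': True, 'isCoreConcept': False, 'showAs': 'splash,modal'}
```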
4. Discussion and Conclusion
The preliminary results of our larger study indicated that the openEHR based application, on average, required nine times less implementation time and was seven times less complex, thereby making it significantly more maintainable [9]. Considering the paramount contribution of the maintenance phase to total software cost (approximately 70-80%), this may translate into significant cost savings [3]. Since endoscopy is a narrow domain, it can be argued that the generalisability of our results will be limited. However, as we experiment with other domains, such as anatomical pathology, our initial impression is that the GUI Directives may be applicable beyond endoscopy. Current work to extend the GastrOS model to include generic archetypes such as Blood Pressure and Adverse Reactions revealed that further additions to the GUI Directives presented in this study are required; therefore more work is needed. With regard to software usability, since the appearance and behaviour are specified rather mechanically, good usability principles can be embedded into the program logic and may result in a more consistent GUI. The GastrOS source code, models and documentation have been published on Codeplex (http://gastros.codeplex.com) as FOSS software to enable wider dissemination of research results and also to foster collaboration [8]. In conclusion, we believe this study will help make concrete how the model driven methodology brought about by openEHR works and bridges the gap between modelling and software development. Another important premise is the potential for enabling a high level of semantic interoperability among different HIS, which is particularly important in developed jurisdictions, such as Europe, where this is not only desirable but essential.
Acknowledgments: This work was supported by a research grant from the University of Auckland (Project No: 3624469/9843).
References
[1] Beale T. Archetypes: Constraint-based domain models for future-proof information systems. In: Eleventh OOPSLA Workshop on Behavioral Semantics: Serving the Customer. Seattle, Washington, USA: Northeastern University; 2002. p. 16-32.
[2] ISO TR 20514 – Electronic Health Record Definition, Scope and Context. ISO; 2005.
[3] Sommerville I. Software Engineering. 6th ed. Addison Wesley; 2000.
[4] Delvaux M, Korman L, Armengol-Miro J, Crespi M, Cass O, Hagenmüller F, Zwiebel F. The minimal standard terminology for digestive endoscopy: introduction to structured reporting. International Journal of Medical Informatics 1998 Feb;48(1-3):217-225.
[5] openEHR.Net Programming Library. Available from: http://openehr.codeplex.com
[6] Schuler T, Garde S, Heard S, Beale T. Towards automatically generating graphical user interfaces from openEHR archetypes. Stud Health Technol Inform 2006;124:221-6.
[7] van der Linden H, Austin T, Talmon J. Generic screen representations for future-proof systems, is it possible? There is more to a GUI than meets the eye. Comput Methods Programs Biomed 2009 Sep;95(3):213-226.
[8] GastrOS Endoscopy Application Project. Available from: http://gastros.codeplex.com
[9] Atalag K, Yang HY, Warren J. On the maintainability of openEHR based health information systems – an evaluation study in endoscopy. In: Proceedings of the 18th Annual Health Informatics Conference, HIC 2010. Melbourne, Australia: HISA; 2010. p. 1-5.
Translational Research
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-857
A Metadata-Based Patient Register for Cooperative Clinical Research: A Case Study in Acute Myeloid Leukemia
Anja S. FISCHER a,1, Ulrich MANSMANN a
a Institute for Medical Informatics, Biometry and Epidemiology (IBE), Ludwig-Maximilians-University Munich, Germany
Abstract. In many medical indications, clinical research is organized within study groups which provide and maintain the clinical infrastructure for their randomized clinical trials. Each group also manages a data center where high-quality databases store the study-specific individual patient data. Sharing these data between study groups is not straightforward. Therefore, a concept is needed which allows a detailed overview of the information available across the cooperating groups to be represented. We propose a metadata-based patient register and describe a first prototype. It provides information about available patient data sets to interested research partners, whereas the typical register approach only collects a predefined, limited core data set. This register implementation enables cooperative groups to locate clinical data for future research projects in distributed data sources beyond the restrictions of core data sets. Additionally, it supports the research network in communication and data standardization and complies with a governance structure which is compatible with ethical aspects, privacy protection, and patient rights. Keywords. metadata, patient register, CDISC ODM, data integration, networked clinical research
1. Introduction
Academic clinical research is organized by study groups which provide and maintain the infrastructure to run large randomized clinical trials. Typically, there are several national or international study groups working on the same medical indication. Each study group also manages a data center which performs the data management for ongoing studies but also manages large databases from completed clinical trials. Warehouse techniques can be used within those data centers to explore relevant clinical information across different studies of the study group. Relevant issues which need well documented patient data are, for example: meta-analyses, prognostic factor research, biomarker research, subgroup analyses, simulation of future trials, health economic research, or determination of surrogate endpoints. Since those activities are mostly of exploratory character, one also needs extensive data sets to validate findings of interest. Often, the data repositories of single study groups are not large enough to manage exploration and validation of specific clinical aspects. This is an incentive to
Corresponding author: Anja S. Fischer, Institute for Medical Informatics, Biometry and Epidemiology (IBE), Marchioninistr. 15, 81377 Muenchen, Germany; E-mail: [email protected]. This work was supported by the German José Carreras Leukaemia Foundation (DJCLS H06/04V).
establish an infrastructure for cooperation between academic study groups with clinical research in a specific indication. Since the research questions in cooperative clinical research are quite broad, it is not helpful to establish a classical patient registry between the cooperating study groups which contains a uniform core data set restricting the questions of interest. Whereas many different definitions of patient registers exist [1-6] and various implementations of this concept are found [7, 8], a uniform standardized data set of every patient is common to all of them. Its size can vary; a survey of 14 German disease registers [7] found an average number of about 200 collected items per patient. Furthermore, it may be problematic to share patient information (even in a pseudonymized or anonymized form) between the clinical study groups. Partners may be ready to share project-specific data, but may be reluctant to provide extensive patient profiles for a central registry. Partners may be less reluctant to share information about which patient data are available in their repositories. This can be done by sharing study-specific data dictionaries which define the data items of the study and the ways they are measured. It may even be easy to disclose which item is measured with good or bad quality for which patient. Consequently, it is necessary to collect syntactic and semantic meta information about a data item in a specific study. This comprises metadata about the data item representation and stored values as well as contextual metadata concerning the data capture process. Representation and contextual metadata can be obtained from certain study documents (i.e. the data dictionary as the central information on the structure of a specific study database, the data validation plan as the document which defines data quality, and the study protocol which explains the logic that sets the variables of a study in their specific logical context). Content metadata (i.e. patient-wise availability and quality of item values collected in the study) has to be compiled directly from the study database and must be updated regularly as the study data collection progresses. The metadata provide a reliable planning basis for cooperative research projects. They simplify communication between collaborating partners and support and accelerate the development process of a feasible common research protocol. We will present an IT infrastructure based on modern technical components and internationally accepted data standards for extraction, transformation and loading of metadata into a metadata-based patient register. The developed technical infrastructure has to be embedded in a governance infrastructure which assures data safety, privacy rights, and transparent cooperative work. We also show that the implementation of the concept allows improving the standardization of data management in clinical studies between the cooperating study groups. With our concept we follow the general principles of caBIGTM [9] of opening and implementing cross-communication between distributed and federated data sources in oncology. Our approach deviates from the fully federated model of caBIGTM by establishing the metadata-based patient register as a central component. It offers a central link to the available clinical research data of a patient in the research community.
As a case study we consider a metadata-based patient register for four German study groups on Acute Myeloid Leukemia (AML) which is a rare disease characterized by a high mortality rate [10]. In Germany, investigator-driven multicenter treatment optimization trials are the main instrument in clinical leukemia research [11]. In the course of the trial a broad range of clinical data is collected providing the basis for evidence-based evaluation of the trial objectives. All trials together offer a rich
information basis to perform meta-analyses, sub-group analyses, discovery and validation studies for biomarkers and surrogate endpoints, and diagnostic as well as prognostic rules. The heterogeneity in clinical documentation in AML studies (i.e. therapies and therapy outcome, concurrent diseases, etc.) is a recurring challenge in cooperative research projects. Therefore this is an interesting and significant field for evaluation of the concept of a metadata-based patient register.
2. Methods
The problem of collating AML clinical data from multiple centers for meta-analysis: The classical patient data registry was discarded because of the severe restrictions implied by a uniform data set. The warehouse concept cannot be applied because the partners did not agree on a permanent sharing of full patient data. The metadata-based approach offers sufficient flexibility for the design of research projects with maximal protection of the individual patient data.
The design of the processes for collation and for the management of metadata, and the approach taken for requirements elicitation: For requirements compilation as well as documentation of available sources of clinical data, semi-structured interviews with selected staff of the study groups were conducted. The project stakeholders discussed and assessed the approach on a regular basis.
Details of the system design: Discussions, requirements engineering and decision-making were supported by modeling the core data processes with the Business Process Modeling Notation (BPMN), i.e. (1) the process of metadata extraction from a data source, (2) the load process of metadata into the register, and (3) the extraction and forwarding process of clinical data.
Tools and techniques used for building the system: Various metadata standards (ISO/IEC 11179 [12], CDISC ODM [13], Resource Description Framework [14]) were assessed regarding their ability to transmit extracted meta information from clinical data sources to the metadata-oriented patient register. An important demand on an appropriate metadata format is its power to convert data from legacy study databases with various technological back-ends (e.g. MS Access, MS SQL Server) to an internationally accepted format. The assessment resulted in the choice of CDISC ODM as the model for the implementation of metadata standardisation, extraction, transmission and storage. Software interfaces and tools were modeled with UML 2.0 and implemented with Java, JAXB, XML, Hibernate, Launch4j, Ant and Maven. A PostgreSQL database acts as the back-end for central metadata storage.
3. Results
An evaluation of the possible meta information about a clinical data source to be extracted and loaded into the metadata register was conducted and resulted in the following definition of which meta information will be collected about a clinical data source: (1) Attributes of the research project (e.g. project type, research plan synopsis, etc.), (2) status of data management processes (e.g. data capture, data validation, database closure, etc.), (3) Description, structure and content of (electronic) case report forms
(e.g. scheme of study visits and forms), (4) Description of data items (e.g. item description, data type and precision, location of the item in the case report form, etc.), (5) Data validation plan, (6) Pseudonyms of included patients, and (7) a "Captured/Missing" flag (i.e. a True/False flag indicating, at the data item level, whether clinical information about a single patient was captured (True) or is missing (False)). Since the CDISC ODM format is not able to document the Captured/Missing flag, an extension of the ODM standard was required. The ODM extension was documented in an amended XML schema. Software for fully automatic metadata and clinical data extraction from distributed data sources under different ownership was implemented. It allows the data-owning study group to control the transferred data. First, it can be configured to extract patient pseudonyms and Captured/Missing information. This conversion of clinical information to the metadata format is conducted on the basis of mapping information. The mapping instructions are documented in an XML format defined by an XML schema. The so-called 'DB2ODMMapping' allows the specification of mapping constraints between a relational database and ODM data items, as well as constraints on interpreting the Captured/Missing status of data values. Second, the software is able to extract clinical data from a relational database on request of a cooperative research project. The clinical data to be extracted can be configured in the 'DB2ODMMapping' file. Further software tools for processing the collected meta information have been implemented, e.g. for loading metadata into the central database and for creating metadata documentation in PDF format. All project-related software has been implemented in Java 6. A modular concept of three Java APIs (core, dataaccess, odm) supports software maintenance and enables software re-usability. At present, meta information about three clinical trials from two AML study groups has been integrated into the central metadata-based patient register. Together these three data sources contain clinical information about 4115 leukemia patients. Automatic extraction of clinical data from the study databases on the basis of the available meta information has been tested. Clinical evidence concerning the status and classification of AML (i.e. French-American-British classification, WHO classification) from 4102 patient data sets has been extracted and provided for statistical analysis. This process disclosed classification inconsistencies between the trials and allowed a standardization process between the two study groups to be started. The prototype allows straightforward extension to the full set of available clinical trials in the several study groups.
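To give a feel for the content metadata involved, the following invented sketch derives per-patient Captured/Missing flags from a small study table and emits an ODM-style XML fragment; the element and attribute names used for the Captured/Missing extension are hypothetical and do not reproduce the project's amended schema or its 'DB2ODMMapping' format.

```python
# Invented sketch: derive Captured/Missing flags per patient and item from a
# study data set and serialise them as an ODM-style XML fragment. The
# "CapturedFlag" attribute is a hypothetical stand-in for the project's ODM
# extension, not the real amended schema.
import xml.etree.ElementTree as ET

study_rows = [  # one row per patient; None marks a value that was not captured
    {"pseudonym": "P0001", "FAB_classification": "M2", "WHO_classification": None},
    {"pseudonym": "P0002", "FAB_classification": None, "WHO_classification": "AML-MRC"},
]

def captured_missing_fragment(rows, items):
    clinical_data = ET.Element("ClinicalData")
    for row in rows:
        subject = ET.SubElement(clinical_data, "SubjectData",
                                SubjectKey=row["pseudonym"])
        for item in items:
            ET.SubElement(subject, "ItemData", ItemOID=item,
                          CapturedFlag=str(row.get(item) is not None))
    return ET.tostring(clinical_data, encoding="unicode")

print(captured_missing_fragment(study_rows,
                                ["FAB_classification", "WHO_classification"]))
```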
4. Discussion
The challenges of clinical research call for a cooperative, efficient use of high-quality data. Such data are in general available in the databases of clinical trials, especially of randomized controlled studies. Sharing the data of such studies has to be done with care and within a transparent and regulated setting to protect patient rights as well as the integrity of the clinical data. A concept and prototype for a general cooperative infrastructure in clinical research has been presented which complies with legal, ethical and technical requirements. It supports cooperative initiatives in consolidating available clinical evidence for the evaluation of open research questions. Potential cooperative
projects are: (1) discovery and validation studies for prognostic and predictive models, biomarkers and surrogate endpoints, (2) planning data capture for future trials, and (3) meta-analyses using individual patient data (surrogate endpoints, treatment effects, subgroup analyses). Processes for metadata extraction and loading into the central register facility have been implemented and are well supported by convenient software tools. In addition, the metadata-based patient register acts as a platform for network communication and data standardization activities. Besides the ongoing integration of metadata from clinical study databases, future work will concentrate on the modeling and implementation of a web-based register platform and of data transformation processes for harmonizing clinical data from different sources.
References
[1] Dreyer NA, Garner S. Registries for robust evidence. JAMA. 2009 Aug 19;302(7):790-1.
[2] Gliklich RE, Dreyer NA, editors. Registries for Evaluating Patient Outcomes: A User's Guide. 2nd edition. Rockville (MD): Agency for Healthcare Research and Quality (US); 2010 Sep.
[3] Drolet BC, Johnson KB. Categorizing the world of registries. J Biomed Inform. 2008 Dec;41(6):1009-20. Epub 2008 Feb 5.
[4] Brooke EM. The current and future use of registers in health information systems. WHO Offset Publ No. 8, 1974, pp. ii + 43.
[5] Arts DG, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc. 2002 Nov-Dec;9(6):600-11.
[6] Gladman D, Menter A. Introduction/overview on clinical registries. Ann Rheum Dis. 2005 Mar;64 Suppl 2:ii101-2. Review.
[7] Stausberg J, Altmann U, Antony G, Drepper J, Sax U, Schuett A. Registers for networked medical research in Germany: Situation and prospects. Appl Clin Inf, 2010. 1: p. 408-418.
[8] Newton J, Garner S. Disease Registers in England. Institute of Health Sciences, University of Oxford, 2002. ISBN 1 8407 50286.
[9] National Institutes of Health, National Center for Research Resources. CaBIGTM overview. 2006. [cited 2011 Apr 29]. Available from http://www.ncrr.nih.gov/publications/informatics/caBIG.pdf.
[10] European Medicines Agency, Committee for Orphan Medicinal Products. Public summary of opinion on orphan designation, EMA/COMP/804144/2009. London, 2010.
[11] Hehlmann R, Berger U, Aul C, Büchner T, Döhner H, Ehninger G, et al. The German competence network 'Acute and chronic leukemias'. Leukemia. 2004 Apr;18(4):665-9.
[12] ISO/IEC 11179-3+COR1 (2003) Information Technology - Metadata Registries (MDR) Part 3: Registry Metamodel and Basic Attributes. Second edition 2003-02-15 Incorporating COR1. Available from http://jtc1sc32.org/doc/N1151-1200/32N1168-ISO-IEC11179-3-2003COR1.zip.
[13] http://www.cdisc.org/models/odm/v1.3/index.html. [cited 2011 Mar 06].
[14] http://www.w3.org/standards/techs/rdf#w3c_all. [cited 2011 Mar 06].
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-862
De-identifying an EHR Database
Anonymity, Correctness and Readability of the Medical Record
Kostas PANTAZOS a,1, Soren LAUESEN a, Soren LIPPERT a
a Software Development Group, IT-University of Copenhagen, Denmark
Abstract. Electronic health records (EHR) contain a large amount of structured data and free text. Exploring and sharing clinical data can improve healthcare and facilitate the development of medical software. However, revealing confidential information is against ethical principles and laws. We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database. Keywords. Electronic Health Record, de-identification, database, confidentiality
1. Introduction
Vast amounts of data are generated from medical systems in structured and free text formats. Although the data exist, clinicians cannot access them due to confidentiality. The goal of this project is to irreversibly convert patient records from a specific EHR database into unidentifiable records with low distortion of medical correctness and readability. This de-identified database can support research in the healthcare area, improve the development of medical software and train new users of the system. In the medical informatics area, several de-identification algorithms have been developed [1, 4, 5, 6, 7, 8]. Meystre et al. [3] present a review of recent research on de-identifying electronic health records. Their results showed that most de-identification systems focus on structured data and less on free text. The ones that de-identify free text mainly use predefined medical records (e.g. pathological reports). To our knowledge, previous research focuses on de-identifying datasets extracted from tables in an EHR database, and none has presented a de-identification algorithm for a full EHR database ensuring acceptable levels of anonymity, medical correctness and readability. Furthermore, the literature review [3] showed that previous studies focus more on
1 Corresponding author: Kostas Pantazos, E-mail: [email protected]; IT-University of Copenhagen, Rued Langgaards Vej 7, DK-2300, Copenhagen, Denmark
anonymity and medical correctness and less on readability of the de-identified records. Finally, this is the first study on de-identifying Danish healthcare records.
2. Challenges Anonymity can be ensured by finding all the identifiers and altering them. Medical correctness means preserving the medical information as well as ensuring consistency. We defined two types of consistency in an EHR database: internal and external consistency. Internal consistency means that identical identifiers (e.g. civil registration numbers) in the original version are also identical in the new version for each patient. External consistency means that identical identifiers (e.g. last name) in the original version are also identical in the new version across patients. This will for instance preserve family relationships. Readability can be ensured by replacing the identifiers with appropriate real values. An electronic health record database contains tables with only structured data (e.g. civil registration number and diagnosis name) and tables with free text, often with embedded structured data (e.g. medical notes with a diagnosis name). Preserving anonymity of the patient and medical correctness in structured tables is easy because the context is pre-defined and all identifiers are replaced according to the rules of the format. In contrast, de-identifying free text tables is a challenging task due to the undefined context, language ambiguities and medical eponyms (e.g. Aaron can be a first name or part of the medical term “Aaron Sign”). Another challenge is to preserve internal and external consistency without affecting medical correctness and anonymity.
3. Solution We investigated a full 12 gigabyte database with 437,164 patient records containing diagnoses, notes, laboratory data, etc. Figure 1 outlines our process. 3.1. Database Investigation We examined the database (65 tables) to find tables that might reveal patient identity. We found 9 tables with only structured data and 13 tables with free text. We
Figure 1. Overview of the de-identification process for an entire Danish EHR database
investigated the fields and created a list of identifiers, e.g. CPR-number (the Danish civil registration number). We also found quasi identifiers (e.g. street name) [2]. In total we found 9 identifiers (CPR-number, first name, middle name, last name, address, telephone number, e-mail, web URL, picture) and 13 quasi identifiers (zip-code, city, country, date of birth, date of death, age, hospital name, clinic name, clinician's first name, clinician's last name, clinician's alias, first name and last name of relatives). We investigated identifiers and quasi identifiers in the database and found several challenging issues: number ambiguity (a phone number can also be interpreted as a CPR number), language ambiguity (Hans is a Danish pronoun, but can also be a male first name), medical eponymous names (Aaron), city names and clinic names that can also be person names, and corrupted data (invalid CPR numbers in structured data). Our algorithm extracts lists of all the identifiers from the database. The lists are used by the algorithm to identify ambiguous names and numbers in free text.
3.2. External Identifiers
In addition to the identifiers from the structured parts of the database, we used public lists of place names, hospital names, clinic names and medical eponymous names. These names allowed the algorithm to find more ambiguous names in free text, and to de-identify person names that occurred only in free text.
3.3. Algorithm
Structured data: The algorithm replaces all identifiers in structured data. Each family name is consistently replaced by another family name with roughly the same frequency in the database. As an example, the name Nielsen might be replaced by Hansen wherever Nielsen occurs. First male names and first female names are handled in a similar way. CPR-numbers are consistently replaced by another CPR-number. The CPR format is DDMMYY-CSSG, where DDMMYY is the birth date. The day (DD) and month (MM) are changed to a random, consistent day and month. C stands for century and denotes 1900 or 2000; this is not changed. SS (serial number) is randomized. G shows gender and is not altered (e.g. the number 280210-1546 is replaced with 200610-1656). Some identifiers, e.g. telephone numbers, are replaced by a random number.
Free text: The algorithm looks at each word in the free text and determines whether it is a family name, a male first name, a female first name, a place name, an eponymous medical name, etc. If it is only one of these, it is replaced according to the rule for this kind of name. If it is more than one kind, the word is ambiguous and a special rule is used. Here is an example of a special rule: if a person name is also an eponymous medical name (Aaron), it should not be replaced, as this would destroy medical correctness in case it actually is a medical term. However, if it actually is a person name, keeping the name might harm anonymity. Our special rule is to keep the name if it is a frequent name (occurs more than 200 times); this will have little impact on anonymity. If it is a rare name, we delete the patient entirely from the database. The algorithm looks at each number and determines by its format and value whether it is a phone number, a CPR-number, etc. If it is only one of these, the corresponding rule is applied. Otherwise the number is ambiguous and the algorithm uses simple language analysis to determine the type.
Figure 2. A de-identification example
Consistency: Family doctors often make notes that refer to other family members by name or CPR-number. Since the algorithm consistently replaces person names and CPR-numbers, these references remain consistent. City names, hospital names and clinic names are replaced consistently within a single free text, but not across all free texts. A consistent replacement might expose the identifier since there are rather few replacements for cities, hospitals and clinics. Readability: Since the algorithm replaces names and numbers with other real names and numbers of the same kind, the new data will look "real". However, if names were consistently replaced by a completely random name, the data pattern might look strange. As an example, the common name Nielsen might be consistently replaced by the rare name Pantazos. As a result we would suddenly have 10,000 Pantazos in the database. For this reason the algorithm replaces a name with a new name of roughly the same frequency. Figure 2 shows an example of how the algorithm de-identifies data.
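A frequency-preserving name substitution of the kind described above could be realized as sketched below; this is an illustrative Python fragment, under the assumption that name frequencies have already been extracted from the structured data, and is not the authors' code.

```python
def build_name_mapping(frequencies: dict) -> dict:
    """Pair each family name with another name of roughly the same frequency.

    frequencies maps a name to its number of occurrences in the database.
    Names are ranked by frequency and adjacent names in the ranking are
    swapped, so common names are replaced by other common names and rare
    names by other rare names, keeping the name distribution realistic.
    """
    ranked = sorted(frequencies, key=frequencies.get, reverse=True)
    mapping = {}
    for i in range(0, len(ranked) - 1, 2):
        mapping[ranked[i]] = ranked[i + 1]
        mapping[ranked[i + 1]] = ranked[i]
    if len(ranked) % 2 == 1 and len(ranked) > 1:
        mapping[ranked[-1]] = ranked[-2]   # last name gets a similar-frequency neighbour
    return mapping
```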
4. Results
We evaluated our system manually with a sample of 369 randomly chosen medical free-text records extracted from the MedicalRecordLine table (7.2 gigabytes). Figure 3 presents the evaluation results. The algorithm did not alter frequent Danish names (>200 occurrences) that were also medical names. We were aware of this from the beginning, but did not want to distort medical correctness.
Figure 3. Evaluation results
Since the names are frequent, there is little impact on anonymity. A previous version of the algorithm did not de-identify patient names in genitive form; we adjusted the algorithm to deal with the genitive form. Precision was affected because many ambiguous names and abbreviations were replaced in places where they should not have been. This had a negative impact on readability and medical correctness. However, the result is still very readable, because only 109 of 71,721 words were wrongly replaced. Anonymity was not affected. The program took 60 hours on a computer with 4 gigabytes of memory to create the new database (12 days on a computer with 1 gigabyte of memory). Of these 60 hours, 5 hours were spent on analyzing and replacing the text and 55 hours on updating the records in the database. During the de-identification process the system deleted about a quarter of the data, 114,315 patient records (Danish ambiguous names: 1,282; medical eponymous names: 43,119; corrupted data and age > 90 years: 69,914). Had we not used the frequency rule, we would have lost another 55,000 patients to ambiguous and eponymous names. The result of our de-identification process is an EHR database containing 323,122 patient records.
5. Conclusion
It is feasible to de-identify an EHR database and achieve an acceptable level of anonymity, correctness and readability of the medical record. The resulting database is adequate for supporting research, development and training where users are aware of the confidentiality requirements: even if you know the name, address and CPR-number of a specific person, you will not be able to find his/her health record. However, it is not adequate for general publication of the database, where someone might maliciously look for weaknesses. The principle of the algorithm can be used for other EHRs, but modifications due to differences in database structure and language should be considered.
References
[1] Berman J. Concept-Match Medical Data Scrubbing. How Pathology Text Can Be Used In Research, Archives of Pathology & Laboratory Medicine 2003, 680-6.
[2] Emam KE, Jabbouri S, Sams S, Drouet Y, Power M. Evaluating Common De-Identification Heuristics for Personal Health Information. Journal of Medical Internet Research 2006, 8(4):e28.
[3] Meystre S, Friedlin FJ, South BR, Shen S, Samore MH. Automatic de-identification of textual documents in the electronic health record: a review of recent research. BMC Medical Research Methodology 2010, 10:70.
[4] Gupta D, Saul M, Gilbertson J. Evaluation of a Deidentification (De-Id) Software Engine to Share Pathology Reports and Clinical Documents for Research, American Journal of Clinical Pathology 2004, 176-86.
[5] Sweeney L. Replacing Personally-Identifying Information in Medical Records, the Scrub System. In: Cimino JJ, ed. Proceedings, Journal of the American Medical Informatics Assoc 1996, 333-337.
[6] Szarvas G, Farkas R, Busa-Fekete R. State-of-the-Art Anonymization of Medical Records Using an Iterative Machine Learning Framework. Journal of the American Medical Informatics Association 2007, 574-8.
[7] Uzuner O, Luo Y, Szolovits P. Evaluating the state-of-the-art in automatic de-identification. Journal of the American Medical Informatics Association 2007, 550-563.
[8] Velupillai S, Dalianis H, Hassel M, Nilsson GH. Developing a standard for de-identifying electronic patient records written in Swedish: Precision, recall and F-measure in a manual and computerized annotation trial, International Journal of Medical Informatics 2009, 78-90.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-867
Service Oriented Data Integration for a Biomedical Research Network
Matthias GANZINGER a,1, Tino NOACK a, Sven DIEDERICHS b,c, Thomas LONGERICH c, Petra KNAUP a
a Department of Medical Informatics, University of Heidelberg; b Helmholtz-University-Group “Molecular RNA Biology & Cancer”, German Cancer Research Center (DKFZ); c Institute of Pathology, University of Heidelberg. Heidelberg, Germany.
Abstract. In biomedical research, a variety of data like clinical, genetic, expression of coding or non-coding ribonucleic acid (RNA) transcripts, or proteomic data are processed to gain new insights into diseases and therapies. In transregional research networks, geographically distributed projects work on comparable research questions with data from different resources and in different formats. Providing an information platform that integrates the data of the projects can enable cross-project analysis and provides an overview of available data and resources (tissue, blood, etc.). For a German liver cancer research network consisting of 22 individual projects, we develop the integrated information platform pelican – platform enhancing liver cancer networked research. In our generic approach, data are made available to the research network by standardized data services based on technologies provided by the cancer Biomedical Informatics Grid (caBIG). It has shown that publishing service metadata in a corresponding repository is a major prerequisite for automated discovery, integration, and conversion of data records and data services. We identified data confidentiality and intellectual property considerations as major challenges while establishing such an integrated information platform. As a first result we implemented a working prototype to validate our approach. Keywords. biomedical research, service oriented architecture, data integration
1. Introduction Biomedical informatics research can provide resources that represent, visualize and analyze large-scale genetic data efficiently and flexibly [1]. Nevertheless, a lack of interoperability among data resources from independent institutions is described as a severe problem for biomedical research in current literature [2]. The variety of representations and semantics usually leads to data sets that are stored in heterogeneous formats, described with different terminologies and analyzed with dedicated applications. This heterogeneity may hamper the development of new strategies targeting cancer [3] and their translation from bench to bedside. Several approaches have been started to address this problem. Data warehouses are introduced, so that data 1 Corresponding author: Matthias Ganzinger, Dpt. of Medical Informatics, University of Heidelberg, Im Neuenheimer Feld 305, 69120 Heidelberg, Germany; E-mail: [email protected]
from biological databases can be integrated, locally stored, and analyzed [4, 5]. It has been recognized that collecting information from different areas of research offers important advantages: Relevant independent results are tied together and specialists are pointed into new directions [6]. In Germany, a transregional research network (TRN) on hepatocellular carcinoma (HCC) has been established. Within the TRN, 22 biomedical research projects cover the whole range of research from molecular pathogenesis to the development of new targeted therapies. The task of our group is to develop, validate, and apply an information platform that is tailored to the scale and multidisciplinary nature of the TRN. The integration of tissue, molecular, genetic, and clinical data into a common platform shall enable data sustainability and comprehensive analyses. The aim of this paper is to introduce the special requirements that arise from a biomedical research network for the information platform and to discuss the resulting architecture blueprint. We want to share our experiences in using tools from the cancer Biomedical Informatics Grid (caBIG) initiative to build the system.
2. Methods
The 22 TRN projects are located at four major independent research institutions. Each project organizes its own research data. There is a considerable amount of data, distributed over the various institutions in different terminologies and in different formats. Standards, specifications, tools and standard operating procedures are necessary to ease the integration of biomedical research data on HCC. Our aim is to provide an efficient and secure environment to perform queries and analyses on integrated scientific information while respecting the distributed nature of the TRN.
2.1. The Pelican Architecture
The information platform pelican (platform enhancing liver cancer networked research) is built in an iterative process. We started with a case study in two projects by analyzing the currently implemented way of data storage. In this study, we analyzed data structures and identified overlapping data and ambiguous data structures. Further, we analyzed the applicability of open-source applications and specifications to our TRN. We found two major concepts of data storage for an integration platform: the first is a central data warehouse into which data from all sources are loaded; i2b2 [7] is an example of a data warehouse used in a biomedical context. The other concept is to federate data: all data are kept separately but are made available for integrated analyses by using standardized interfaces. For example, caBIG [8] – the National Cancer Institute’s (NCI) cancer Biomedical Informatics Grid – provides tools to build a federated system. We decided to implement pelican as a service oriented architecture (SOA). As our base framework we chose components provided by the caBIG initiative. caBIG was established to improve cancer research by sharing, discovering, integrating, and processing disparate clinical and research data resources. This includes the development of applications for data management and analysis, guidelines, and informatics standards. These tools are based on a grid architecture (caGrid) that links applications and resources in the caBIG environment [2].
Figure 1. Semantics of pelican data services are described using a standardized vocabulary. Both technical metadata and semantic descriptions of the services are published to a directory service.
3. Results In pelican, all data contributed by individual projects are transformed into data services using the Cancer Common Ontologic Representation Environment (caCORE) Software Development Kit (SDK). As shown in Figure 1, metadata are generated for all data services and published to a directory. The vocabulary used for the description has to be standardized throughout the TRN. In addition to data services, analytical services are developed and made available within pelican. In the final version of pelican, researchers will be able to find data services hosting data necessary to answer their research question in the service directory. These data services are, together with analytical services, chained into a workflow that analyses the data of various sources and presents the results to the researcher. Figure 2 illustrates this process chaining concept. Further TRN projects can be easily added. Standard operating procedures and supporting tools will be developed to provide a smooth way of converting raw data as generated in the projects into data services conforming standards for pelican. In this process, it is especially important to apply the corresponding metadata correctly. Otherwise, it will be impossible to find the data sources in the directory and apply automated correlation algorithms.
Figure 2. Individual services providing data or analytical services can be combined into a service chain. To do this, a researcher identifies the services of interest by using the service directory. The chain is executed by pelican and the results are returned.
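The directory-and-chaining concept of Figures 1 and 2 can be illustrated with the following toy sketch; all class and function names here are hypothetical stand-ins and do not represent the caCORE SDK or caGrid API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ServiceEntry:
    """Metadata published to the directory for one data or analytical service."""
    name: str
    provides: str        # term from the TRN-wide standardized vocabulary, e.g. "aCGH"
    endpoint: Callable   # stand-in for invoking the remote service

@dataclass
class ServiceDirectory:
    entries: List[ServiceEntry] = field(default_factory=list)

    def publish(self, entry: ServiceEntry) -> None:
        self.entries.append(entry)

    def find(self, term: str) -> List[ServiceEntry]:
        """Discover data services by the vocabulary term describing their content."""
        return [e for e in self.entries if e.provides == term]

def run_chain(data_services: List[ServiceEntry], analysis: Callable):
    """Fetch records from each data service and feed them to an analytical service."""
    return analysis([svc.endpoint() for svc in data_services])
```

A researcher would first query the directory with find() and then pass the selected services to run_chain(), mirroring the process illustrated in Figure 2.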
3.1. TRN Specific Requirements
As a first step to assess the requirements for the pelican system, we conducted a survey among TRN project managers. For this purpose we developed a questionnaire consisting of 13 questions. For about one half of the questions, standardized answers were provided with check boxes; the other half was free text. The survey covered various aspects of data usage, data confidentiality and intended use of the new system. The evaluation of the survey made it obvious that there is strong concern among researchers about the confidentiality of data contributed to the system. These concerns were mostly about two aspects: 1. Researchers want to control who can access the data they contributed to the TRN. They want to keep the data confidential among specific project members or the whole TRN until they are published. 2. Getting access to the data of another project may give a significant advantage to somebody's own research. If this leads to a publication, rules have to be established and enforced on how the contributors of the data are to be acknowledged, e.g. by means of co-authorship.
3.2. Data Confidentiality and Intellectual Property
To address the TRN requirements, the pelican architecture includes several confidentiality measures. Data services, and as such the data itself, can be left under the control of the contributing project. Projects can define access control lists for their services if necessary. For the use of data generated by other projects, all TRN projects agreed on a set of rules. To support the enforcement of these rules, access to data is recorded by a comprehensive audit logging concept. Audit logs are monitored by the central project office on a regular basis. Our survey showed that 55% of the projects are only willing to share their data after data confidentiality concepts such as those proposed by us have been implemented in pelican.
3.3. Integration Platform
To test the architectural design of pelican, a prototype was built. It uses the caCORE SDK to implement data services. However, it does not yet allow for dynamic process generation. Instead, a process for a specific research question is prepared statically. To answer this research question, it is necessary to correlate genomic microarray data of three data services provided by two projects. Data types used are array comparative genomic hybridization (aCGH) data, methylation data and expression of coding and non-coding ribonucleic acid (RNA). In parallel, the service directory has been implemented and work on the standardized vocabulary has been started.
4. Discussion When pondering whether the data warehouse or the federated approach would suit the needs of the TRN better, we chose to build a federated system. With this concept, it is possible for the individual projects to keep control over their data since data from different projects are encapsulated in distinct data services. Thereby, access control
mechanisms are easy to comprehend and to manage. In contrast, a data warehouse would usually combine all data in one database, making it much harder to apply access permissions on an individual basis and to communicate these settings to the projects. Using the pelican prototype, we were able to demonstrate that it is possible to integrate data of different TRN research projects using a SOA based on caBIG components. We were able to statically correlate several genetic data sources and thus support the researchers of two projects. Further, we started to implement the metadata directory and caGrid [2]. The security concept designed for pelican is accepted throughout the TRN, as our survey substantiates. However, further work needs to be done regarding the user interface to ensure user acceptance: pelican should be as easy to use as the tools currently used by the researchers. To enable the dynamic composition of process chains, a workflow engine has to be added to pelican. Candidates for this are Business Process Execution Language (BPEL) based engines or the Taverna workflow management system [9]. Finally, we need to examine other caGrid-enabled tools provided by the caBIG initiative to find out whether they can complement pelican to further improve TRN research. Acknowledgements: The authors would like to thank the German Research Foundation (DFG) for funding SFB/TRR 77 – “Liver Cancer. From molecular pathogenesis to targeted therapies.”
References
[1] Knaup P, Ammenwerth E, Brandner R, et al. Towards clinical bioinformatics: advancing genomic medicine with informatics methods and tools. Methods Inf Med 2004; 43(3):302-7.
[2] Oster S, Langella S, Hastings S, et al. caGrid 1.0: an enterprise Grid infrastructure for biomedical research. J Am Med Inform Assoc 2008; 15(2):138-49.
[3] Madhavan S, Zenklusen J, Kotliarov Y, Sahni H, Fine HA, Buetow K. Rembrandt: helping personalized medicine become a reality through integrative translational research. Mol Cancer Res 2009; 7(2):157-67.
[4] Lee TJ, Pouliot Y, Wagner V, et al. BioWarehouse: a bioinformatics database warehouse toolkit. BMC Bioinformatics 2006; 7:170.
[5] Hart RK, Mukhyala K. Unison: an integrated platform for computational biology discovery. Pac Symp Biocomput 2009:403-14.
[6] Schork NJ. Genetics of complex disease: approaches, problems, and solutions. Am J Respir Crit Care Med 1997; 156(4 Pt 2):S103-9.
[7] i2b2: Informatics for Integrating Biology & the Bedside [cited 2011 Apr 19]. Available from: https://www.i2b2.org/.
[8] Welcome to the caBIG® Community Website [cited 2011 Apr 19]. Available from: https://cabig.nci.nih.gov/.
[9] Tan W, Missier P, Foster I, Madduri R, Goble C. A Comparison of Using Taverna and BPEL in Building Scientific Workflows: the case of caGrid. Concurr Comput 2010; 22(9):1098-117.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-872
Single Source Information Systems can Improve Data Completeness in Clinical Studies: an Example from Nuclear Medicine
Susanne HERZBERG a,b,1, Martin DUGAS a,b
a Institute of Medical Informatics, University of Münster, Germany; b IT Department, University Hospital of Münster, Germany
Abstract. Data for clinical documentation and medical research are usually managed in separate systems. A documentation system for myocardial scintigraphy (SPECT/CT-data) was developed, implemented and assessed in order to integrate clinical and research documentation. This paper presents concept, implementation and results regarding data completeness of this single source information system. Completeness of documentation increased highly significantly (p < 0.0001) after implementation of this system. Keywords. Single source information system, EHR re-use, hospital information system, follow-up, data completeness, reminder system
1. Introduction
Usually, there are separate systems for clinical and research documentation, despite existing overlap between data items: hospital information systems (HIS) for routine medical documentation in electronic health records (EHRs) and electronic data capture (EDC) systems for clinical studies. These systems are managed in a dual source concept [1]. In contrast, a single source information system reuses routine healthcare data for clinical research. A separate documentation system has quite a few disadvantages; for instance, “Inefficiencies in clinical trial data collection cause delays, increase costs, and may reduce clinician participation in medical research” [2]. Moreover, “Routine data are potentially cheaper to extract and analyze than designed data…” and “... have the potential to identify patient outcomes captured in remote systems that may be missed in designed data collection” [3]. Furthermore, transcription errors are eliminated and patient recruitment for clinical trials is facilitated [4]. In the following, we present a single source information system which was designed for a study on cardiovascular risk stratification by combination of risk factor analysis, in-vitro diagnostics and single photon emission computed tomography/computed tomography (SPECT/CT), in order to be able to predict individualized risk concerning coronary events [5]. Clinical studies typically consist of several visits. After an initial assessment, several follow-up visits need to be organized and documented. Therefore, follow-up data needs to be collected according to each study protocol at certain time points.
Corresponding Author: Susanne Herzberg.
According to Chan et al., “data completeness varied substantially across studies” [6], which may be caused by the huge documentation workload of physicians in routine care [7]. Forster et al. report that the median rate of loss to follow-up in a 15-country study was 8.5% [8]. Consequently, data completeness in studies is a critical and widely unsolved problem. Organizational issues, for instance regarding scheduling, can cause loss to follow-up. We implemented two workflows in the HIS and compared data completeness before and after this intervention: a HIS-based follow-up system to support study documentation by automatic creation of follow-up forms according to study protocols [9], and a generic reminder system to monitor documentation completeness [10].
2. Methods
2.1. Design of Single Source Information System for SPECT/CT Study
Electronic case report forms (CRFs) regarding medical history, stress and rest injection protocols were designed using tools of the local HIS (ORBIS® from AGFA Healthcare [11]). These forms were identified in a process analysis of the SPECT/CT study; for details see [5]. Checkboxes, lists and number fields, with only a few narrative text fields, are used in order to provide structured data for statistical analysis. A work list contains all uncompleted forms for the physician's review. Conditional items with related plausibility checks are applied to improve data quality. Error messages occur if data items are invalid or missing. To minimize data entry efforts, item values are calculated automatically wherever possible. The report generator of ORBIS® is used to extract HIS data for quality control and research purposes. Authorized study physicians can perform these queries. The report tool generates csv-files suitable for import into statistical software packages. HIS reports are pseudonymized to protect patient data privacy. A data management team performs monitoring to verify data validity in the research database.
2.2. Concept of HIS-Based Reminder System Regarding Form Completeness
A HIS-based reminder system identifies incomplete CRFs within the HIS and sends notifications to the responsible person after a certain grace period; for details see [10]. This reminder system needs a flexible configuration component, because a large number of clinical studies are performed simultaneously and each study consists of several CRFs with individual responsibilities regarding documentation. An escalation mechanism to notify different groups of people about incomplete documentation is provided. For instance, when the study physician does not complete a CRF within a certain time frame, the principal investigator will be notified. From a technical perspective, a definition table stores a query for each CRF type to identify incomplete forms. These queries are executed periodically. A schedule table manages due records. After expiration of each grace period, notifications are prepared. To avoid over-alerting, summary e-mails per study and escalation level are generated. Reminder messages are sent via a communication server.
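The escalation mechanism described in this section might look roughly as follows; this is a simplified, hypothetical Python sketch (table contents and field names are invented for illustration), not the ORBIS®-based implementation.

```python
from datetime import datetime, timedelta

# Hypothetical counterpart of the definition table: per CRF type, escalation
# levels consisting of a grace period and the recipient to be notified.
ESCALATION_LEVELS = {
    "rest_injection_protocol": [
        (timedelta(days=1), "study physician"),
        (timedelta(weeks=1), "principal investigator"),
    ],
}

def due_notifications(incomplete_forms, now=None):
    """Group overdue forms into one summary message per CRF type and recipient.

    incomplete_forms: iterable of dicts with keys 'type', 'id' and 'created',
    as returned by the periodic query for incomplete forms.
    """
    now = now or datetime.now()
    summaries = {}
    for form in incomplete_forms:
        for grace, recipient in ESCALATION_LEVELS.get(form["type"], []):
            if now - form["created"] >= grace:
                summaries.setdefault((form["type"], recipient), []).append(form["id"])
    return summaries   # one summary e-mail per key avoids over-alerting
```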
2.3. Concept of HIS-Based Follow-Up System
A HIS-based follow-up system automatically generates follow-up CRFs in time, according to each study protocol, and enqueues these forms into the work list of the responsible study personnel. This system needs a configuration component, because for each follow-up CRF in each clinical study a different follow-up schedule needs to be applied. Similar to the HIS-based reminder system, study-specific periodic database queries identify due follow-up forms. Triggered by a due follow-up form, a database procedure creates a follow-up event, which is translated by a communication server into a Health Level Seven (HL7) message and transferred to the import interface of the clinical information system. Within this system, clinicians can access their departmental work lists with patient-specific follow-up forms.
2.4. Analysis of Data Completeness
Data completeness before and after implementation of HIS-based reminders and HIS-based follow-up was analyzed using HIS reports and statistical software (PASW from SPSS [12]). An exact chi-square test was applied to test for significant changes regarding completeness by CRF type before and after introduction of each system. Two-sided P values < 0.05 were interpreted as statistically significant.
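To make the comparison concrete, the before/after contrast for one form type can be checked with an exact test on a 2x2 table; the short sketch below uses SciPy's Fisher's exact test as a readily available exact procedure (the authors used PASW), with the rest injection protocol counts reported in the Results section.

```python
from scipy.stats import fisher_exact

# Rest injection protocol: 45 of 147 forms complete before the reminder
# system, 208 of 208 complete afterwards (counts from Section 3).
table = [[45, 147 - 45],   # before: complete, incomplete
         [208, 0]]         # after:  complete, incomplete
odds_ratio, p_value = fisher_exact(table)
print(f"two-sided p = {p_value:.1e}")   # well below 0.05, consistent with p < 0.0001
```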
3. Results A single source information system can combine clinical and scientific documentation and thus avoids multiple data entry. Regarding the SPECT/CT study, within 22 months 1308 patients were documented by 8 physicians and 8 radiographers (1358 medical history protocols, 1372 stress and 1275 rest injection protocols). Documentation consisted of 301 attributes, three quarters were conditional items. The HIS-based reminder system was started in September 2009. The documentation periods from May 2009 until July 2009 (before implementation) and from October 2009 until December 2009 (after implementation) were compared. Reminders were configured for medical history forms, stress and rest injection protocols. Two grace periods were used: the first was set to one day and the recipient was the responsible study physician, the second escalation level was set to one week and the recipient of these e-mails was the principal investigator. Completeness increased highly significantly (p < 0.0001) for each form type after implementation of the reminder system: medical history form 93% (145 of 156 forms) versus 100% (206 forms), stress injection protocol 90% (142 of 157 forms) versus 100% (201 forms) and rest injection protocol 31% (45 of 147 forms) versus 100% (208 forms). 46 reminder e-mails to the responsible study physician and 53 reminder e-mails to the principal investigator were sent to complete 2 medical history forms, 8 stress and 20 rest injection protocols. The 2 medical history forms were completed after 1 and 56 days. A HIS-based follow-up system to automatically generate follow-up forms as described in the methods section was implemented for the SPECT/CT study. 196 follow-up forms were automatically generated within 13 weeks of operation. Overall, data quality improved substantially compared to previous paper-based documentation. For comparison, we assessed the completeness of the previous paper-
based documentation. We took a random sample of 19 forms from February and March 2009 (before implementation of electronic documentation for this study). No patient (0 out of 19) was completely documented in the paper-based documentation.
4. Discussion The design and implementation of this system in nuclear medicine demonstrate that a single source information system is technically feasible and accepted in the clinical setting. It can be integrated into the existing clinical workflow without disruption. Paper-based entries are error-prone, for instance due to legibility problems. In contrast to that, electronic forms have a significantly reduced error rate [13]. A reminder system on top of a single source information system can clearly improve data completeness. In particular, it is feasible in a commercial HIS setting with all its technical and license constraints (e.g. access to internal HIS data model is restricted, available interfaces are limited). On the other hand, because most hospitals are using commerical HIS, our approach should be scalable and transferable to other sites, at least to hospitals with the same HIS product. It would be very interesting to analyze whether our approach is also feasible with products from other HIS vendors and how much resources are required for technical implementation. Due to the fact that a physician spends nearly a quarter of his working time on clinical routine documentation [7], additional documentation efforts for research purposes need to be minimized. A first proof-of-concept study concerning a cardiology trial was published in 2007 [2]. It was integrated into the clinical environment, but there was no integration into an existing inter-departmental CIS in contrast to our approach. Furthermore, this proof-of-concept study was tested in only two live patient encounters. In our approach, all cardiological patients of the department were documented in the single source system. Timely and complete follow-up documentation is a significant issue in clinical research. This task is supported by the HIS-based follow-up system. Recruitment of suitable patients and complete documentation are key issues in clinical trials: In 2006, a meta-analysis of more than 100 trials showed that “less than a third (31 %) of the trials achieved their original recruitment target and half (53 %) were awarded an extension” [15]. Recruitment rates were increased significantly by the use of a HIS-based clinical trial alert system [16]. However, standard EDC systems are not integrated into the routine clinical workflow of the HIS. EDC systems can support follow-up documentation. Welker states that “The central storage of data and ubiquitous user access allows the inclusion of intelligence that can remind individual users to perform required tasks; i.e. remind the investigator site when an enrolled patient is due for a follow-up visit…” [17]. Because all clinicians work with the HIS, HIS work lists are attractive locations for follow-up reminders. Interventions for quality improvement should be embedded within HIS [10]. The HIS-based reminder system monitors continuously completeness of documentation and notifies responsible physicians about incomplete documentation depending on escalation level. There is evidence regarding the efficiency of HIS-based reminders in the literature, for instance Staes et al. report that computerized alerts improve outpatient laboratory monitoring of transplant patients [18]. In our experience, a second escalation level (notification of a principal investigator) is valuable, because it helps to identify and resolve organizational
problems in the documentation process at an early stage. To avoid over-alerting, grace periods and number of e-mails need to be configured carefully in cooperation with the study team.
5. Conclusion
A single source information system whose components include a follow-up system and a computer-based reminder system that identifies incomplete documentation forms can significantly improve the completeness of finalized forms.
References
[1] Dugas M, Breil B, Thiemann V, Lechtenbörger J, Vossen G. Single Source Information System to connect patient care and clinical research, Stud Health Technol Inform 150 (2009), 61-65.
[2] Kush R, Alschuler L, Ruggeri R, et al. Implementing Single Source: the STARBRITE proof-of-concept study, J Am Med Inform Assoc 14 (2007), 662-673.
[3] Williams JG, Cheung WY, Cohen DR, et al. Can randomised trials rely on existing electronic data? A feasibility study to explore the value of routine data in health technology assessment, Health Technology Assessment 7 (2003), 1-117.
[4] Dugas M, Lange M, Berdel BE, Müller-Tidow C. Workflow to improve patient recruitment for clinical trials within hospital information systems – a case study, Trials 9 (2008), 2.
[5] Herzberg S, Rahbar K, Stegger L, Schäfers M, Dugas M. Concept and implementation of a single source information system in nuclear medicine for myocardial scintigraphy (SPECT-CT data), Appl Clin Inf 1 (2010), 50-67.
[6] Chan KS, Fowles J, Weiner JP. Electronic health records and reliability and validity of quality measures: A review of the literature, Medical Care Research and Review 67 (2010), 503-527.
[7] Ammenwerth E, Spötl HP. The time needed for clinical documentation versus direct patient care, Methods Inf Med 48 (2009), 84-91.
[8] Forster M, Bailey C, Brinkhof MW, et al. Electronic medical record systems, data quality and loss to follow-up: survey of antiretroviral therapy programmes in resource-limited settings, Bulletin of the World Health Organization 86 (2008), 939-947.
[9] Herzberg S, Fritz F, Rahbar K, Stegger L, Schäfers M, Dugas M. HIS-based support of follow-up documentation – concept and implementation for clinical studies, Appl Clin Inf 2 (2011), 1-17.
[10] Herzberg S, Rahbar K, Stegger L, Schäfers M, Dugas M. Concept and implementation of a computer-based reminder system to increase completeness in clinical documentation, Int J Med Inform 80 (2011), 351-358.
[11] AGFA.com [Internet]. Agfa Healthcare; c2011 [updated 2010 Mar 25; cited 2011 Jan 16]. Available from: http://healthcare.agfa.com/.
[12] SPSS.com [Internet]. Illinois: SPSS, Inc.; c2011 [cited 2011 Jan 16]. Available from: http://www.spss.com/.
[13] Hogan WR, Wagner MM. Accuracy of data in computer-based patient records, J Am Med Inform Assoc 5 (1997), 342-355.
[14] CDISC.org [Internet]. Clinical Data Interchange Standards Consortium; c2011 [cited 2011 Jan 16]. Available from: http://www.cdisc.org/.
[15] McDonald AM, Knight RC, Campbell MK, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies, Trials 7 (2006), 9.
[16] Embi PJ, Jain A, Clark J, Bizjack S, et al. Effect of a clinical trial alert system on physician participation in trial recruitment, Arch Intern Med 165 (2005), 2272-2280.
[17] Welker JA. Implementation of electronic data capture systems: Barriers and solutions, Contemporary Clinical Trials 28 (2007), 229-236.
[18] Staes CJ, Evans RS, Rocha BH, et al. Computerized alerts improve outpatient laboratory monitoring of transplant patients, J Am Med Inform Assoc 15 (2008), 324-332.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-877
Reporting Qualitative Research in Health Informatics: REQ–HI Recommendations
Zahra NIAZKHANI a,b, Habibollah PIRNEJAD a,b,1, Jos AARTS b, Samantha ADAMS b, Roland BAL b
a Department of Medical Informatics, Urmia University of Medical Science, Iran; b Health Care Governance, Institute of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
Abstract. To develop a set of recommendations for authors of qualitative studies in the field of health informatics, we conducted an extensive literature search and also manually checked major journals in the field of biomedical informatics and qualitative research looking for papers, checklists, and guidelines pertaining to assessing and reporting of qualitative studies. We synthesized the found criteria to develop an initial set of reporting recommendations that are particularly relevant to qualitative studies of health information technology systems. This paper presents a preliminary version of these recommendations. We are planning to refine and revise this version using comments and suggestions of experts in evaluation of health informatics applications and publish a detailed set of recommendations. Keywords. Qualitative research, guidelines, health informatics, HIT systems
1. Introduction
Qualitative research methods are increasingly valued in the evaluation of health information technology (HIT) impacts [1]. This line of research can be described as ‘inductive’, ‘subjective’ and ‘contextual’, helping to understand social phenomena such as user perceptions, the context of system implementation or development, and the processes by which changes occur or outcomes are generated [2, 3]. Qualitative research is also characterized by using methods that are flexible to adjust to circumstances and sensitive to the social context of the study. On the one hand, these methods enable studying a small number of cases in detail, capturing data that is rich and complex, developing explanations at the level of meaning or micro-social processes rather than context-free rules, and answering ‘how’ and ‘why’ questions. On the other hand, these very features make it difficult to compare the results of different qualitative studies if researchers do not follow more or less the same rules in conducting research and reporting results. From this perspective, applying criteria for qualitative studies both at the level of conducting research and at the level of reporting their results is considered advantageous [4]. Following concerns raised in the HIS–EVAL workshop about the quality of evaluation studies and their reports in health informatics [5], Talmon et al. took a fundamental step in developing the STARE–HI guidelines in order to improve the quality of evaluation reports [6].
Corresponding Author: H. Pirnejad. E-mail: [email protected]
This guideline was endorsed by major medical/health informatics organizations worldwide, and is now contributing to the vision of evidence-based health informatics. However, largely inspired by guidelines for reporting quantitative biomedical studies (e.g., CONSORT and QUOROM), the STARE–HI unintentionally falls short in taking into account several critical criteria pertinent to reporting qualitative HIT studies. To address this shortcoming of the STARE–HI, this paper aims to provide an initial set of recommendations for authors of qualitative HIT studies on how to present their research clearly and comprehensively.
2. Methods
Pertinent papers, guidelines, and checklists specific to assessing or reporting of qualitative studies were searched in PubMed, Medline, Google, and Google Scholar from 1990 to September 2009. We also manually checked the journals ‘Qualitative Health Research’, ‘Journal of Evaluation in Clinical Practice’, ‘International Journal of Qualitative Methods’, and ‘Qualitative Research Journal’; the reference lists of identified articles; the website of Qualitative Research in IS [7] and advice on writing up a qualitative study [8]; and instructions for authors and reviewers of qualitative research such as [9-15]. To develop a preliminary version of recommendations that are relevant for HIT research reports, the first and second authors selected and reviewed the 48 most relevant publications found in our search. This preliminary version was shared with the other authors of this paper. As experienced qualitative HIT researchers, and as editorial board members and reviewers of biomedical informatics journals, all the authors of this paper discussed the most important criteria and developed the following recommendations for Reporting Qualitative research in Health Informatics (REQ–HI). This short paper presents only the reporting recommendations that are most applicable to qualitative research reports and that have not been very well developed in the STARE–HI. A detailed description of recommendations for structuring good qualitative HIT reports will be published later.
3. REQ-HI Recommendations 3.1. Abstract and Keywords The abstract of qualitative HIT reports should be structured, yet short, with the same basic structure of quantitative research except the “Outcome measures”. The label “Results” is also replaced by “Findings” [10]. After a brief general subject matter, the objective or study question must be stated clearly and concisely. In addition to the type of HIT system and the study setting, the Methods section must note the data collection methods (e.g., focus groups), types of data (e.g., pictorial data), number of participants and the type of sampling method to recruit them, and the type of qualitative analysis. Only main findings and main conclusions directly derived from the findings particularly those of high relevance to the health informatics community should be stated here. To enhance retrieving these studies in search, terms denoting the approach such as ‘qualitative research’ (MeSH heading), ‘field research’, ‘qualitative evaluation’, ‘interviews’, ‘observations’, ‘focus groups’ (MeSH heading), ‘qualitative document analysis’, and ‘ethnography’ should be noted among the study key words.
3.2. Introduction
The main goals of the ‘Introduction’ in a qualitative HIT study are: 1) To present the rationale of the proposed study. The ‘Introduction’ should identify a problematic issue in recent HIT research or a gap that a qualitative study is able to address. 2) To present the rationale behind the study method, that is, to inform the reader that addressing the study objective requires a qualitative approach. The strengths of qualitative research methods lie in explorative, hypothesis-generating, and conceptual analysis; it should be clear from the ‘Introduction’ that the research methods build on these strengths. 3) To present the research question. Contrary to quantitative studies, qualitative studies are most likely not testing a prediction; rather, they have an exploratory or conceptual nature. Therefore, instead of developing a hypothesis, in the last paragraph the authors should re-iterate the rationale for their proposed study and clarify their research question, the one that the study aims to explore, understand, or explain. Meanwhile, carefully reviewing the HIT literature will provide a context to justify the choice of a qualitative study and to set the stage for the study question. Alternatively, a theory can be used to guide the research, provided that the authors clarify why this is relevant, or what this theoretical perspective adds to our understanding of the problem at hand.
3.3. Methods
In qualitative research, methodology greatly influences the findings. Therefore, this section should contain sufficient information for the reader to assess the rigor of the data collection process and of the data analysis and interpretation. This section then must include:
3.3.1. The Type of Qualitative Approach
The type of qualitative approach must be described explicitly and in detail, to enable the reader to judge whether it fits the study question. If necessary, the choice of methodology should be explained in relation to alternative methodologies or, in the case of using several methods, it should be indicated how they complement each other and why this combination is necessary. For example, if a research project aimed to gain a deeper understanding of the cognitive tasks that physicians undertake to write admission orders, a phenomenological approach with think-aloud observations would likely be more appropriate than a grounded theory approach using focus groups.
3.3.2. The Type of Data
It is important to explain what the data set is composed of and why it is the most useful set to answer the study question. Any textual, audiovisual, and pictorial documents that are collected and used, such as meeting scripts, implementation documents, screen shots, computer printouts, patient records, computer-generated activity reports, pictures of work stations, etc., should be described in detail. Also the number of data collection events and their duration should be specified (e.g., how many hours of observations). A thorough description of the processes of handling the data set is also relevant in some circumstances, such as using an interview guide, note-taking and transcribing, ensuring anonymity and confidentiality, etc. It is recommended to keep a timeline with the methodology used, e.g., to mention which data was collected when, or which documents belong to what phase of the study or system use (e.g., pre- or post-HIT implementation).
3.3.3. Participants
When sampling, qualitative researchers do not aim to establish a random or representative sample of a population, but rather to identify informants who have information or experiences about the study subject. It should be argued why the selected recruitment strategy (e.g., purposive or convenience sampling) was the most appropriate to provide access to the type of knowledge sought by the study. Enough information should be provided to help the reader understand what the sample represents and who initially was excluded and why. It is also relevant to document how participants were approached (e.g., face-to-face or by telephone). The sample size (and whether or not saturation of the data was reached, and in what way), important variations within participants (e.g., their prior experience of a HIT system), and even non-participation (in case there are relevant reasons behind this) should be reported.
3.3.4. Research Team and Reflexivity
The researchers of a qualitative study are themselves considered one of the main study instruments and are seen to have far greater influence on the findings than quantitative researchers. Their characteristics, experience or training, assumptions, interests in the research topic, potential biases, influence on the data collection (e.g., choice of location), and their dual roles (e.g., user and researcher) should therefore be reported.
3.3.5. Analysis of Data
Qualitative analysis is less standardized than statistical analysis. To enable readers to accept or challenge the reasoning of the researchers, or to assess how adequate or rigorous the ‘Findings’ are, the authors must clearly describe the logic and any techniques used to analyze the entire data set. It should be clear who analyzed the data and with what inter-rater agreement (e.g., inter-observer or inter-analyst comparisons); how the codes, themes, or interpretations were developed; and whether any triangulation, audit trail, or member checking of the findings with the research participants was done.
3.4. Findings
The main findings in relation to the original research question should be presented clearly. Not only the major themes but also diverse cases (e.g., negative ones) and minor themes should be described. The findings should be presented in a way that allows readers to distinguish the data, the analytic framework used, and the interpretation. The authors should give an account of the data (e.g., what the user perception is) and also an interpretation of it (i.e., what this perception means) [8]. Presenting direct participant quotations or field notes will help authors to communicate the themes or findings effectively and to back up their argument with evidence. A table or figure (e.g., of emerging themes) can be very helpful in clarifying the ‘Findings’.
3.5. Discussion Section
The first paragraph of the ‘Discussion’ is the best place to answer the research question clearly. The authors should then relate their findings to other studies and discuss the contribution that their study makes to existing knowledge or understanding of an issue, but be very cautious in generalizing the findings to a wider world. They must discuss whether or not their findings are transferable to other settings.
It is also worthwhile for the authors to evaluate and discuss their findings or interpretations in terms of reflexivity (e.g., reflecting upon the researchers' own influence on the construction of meanings or on the study process) and credibility (e.g., conducting triangulation or respondent validation). It is also useful to comment on whether or not the study has had any impact on, for example, future updates, training, and management of HIT systems.
4. Conclusion This initial set of recommendations was developed to promote a clear and comprehensive reporting of qualitative HIT research. Given the diversity of methods for conducting qualitative HIT studies, however, this version of REQ–HI recommendations by no means provides detailed recommendations on all relevant aspects. We kindly invite editors, reviewers, and readers of biomedical informatics journals to comment on this version in order to improve its quality and applicability.
References
[1] Niazkhani Z, Pirnejad H, Berg M, Aarts J. The Impact of Computerized Provider Order Entry (CPOE) Systems on Inpatient Clinical Workflow: A Literature Review. J Am Med Inform Assoc. 2009;16(4):539-49.
[2] Kaplan B, Shaw NT. Future directions in evaluation research: people, organizational, and social issues. Methods Inf Med. 2004;43(3):215-31.
[3] Ash JS, Guappone KP. Qualitative evaluation of health information exchange efforts. J Biomed Inform. 2007;40(6 Suppl):S33-9.
[4] Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331-9.
[5] Ammenwerth E, Brender J, Nykanen P, Prokosch HU, Rigby M, Talmon J. Visions and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform. 2004;73(6):479-91.
[6] Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykanen P, Rigby M. STARE-HI--Statement on reporting of evaluation studies in Health Informatics. Int J Med Inform. 2009;78(1):1-9.
[7] Website of the Qualitative Research in IS. [cited October 6, 2010]; Available from: http://www.qual.auckland.ac.nz/
[8] Advice on writing up a qualitative study. [cited 2010 6th of October]; Available from: http://www.psy.dmu.ac.uk/michael/qual_writing.htm
[9] CASP. Qualitative research: appraisal tool. 10 questions to help you make sense of qualitative research. 2006 [cited November 02, 2010]; Available from: http://www.sph.nhs.uk/sphfiles/Qualitative%20Appraisal%20Tool.pdf/?searchterm=qualitative%20research
[10] Rowan M, Huston P. Qualitative research articles: information for authors and peer reviewers. CMAJ. 1997;157(10):1442-6.
[11] Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ. 2008;337:a1035.
[12] Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-57.
[13] Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358(9280):483-8.
[14] Cote L, Turgeon J. Appraising qualitative research articles in medicine and medical education. Med Teach. 2005;27(1):71-5.
[15] Qualitative research review guidelines – RATS. [cited October 13, 2010]; Available from: http://www.biomedcentral.com/info/ifora/rats
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-882
Cell Seeding of Tissue Engineering Scaffolds Studied by Monte Carlo Simulations
Andreea ROBU a,1, Adrian NEAGU b, Lacramioara STOICU-TIVADAR a
a University “Politehnica” Timişoara, Romania; b Center for Modeling Biological Systems and Data Analysis, Victor Babeş University of Medicine and Pharmacy, Timişoara, Romania
Abstract. Tissue engineering (TE) aims at building multicellular structures in the laboratory in order to regenerate, repair or replace damaged tissues. In a well-established approach to TE, cells are cultured on a biocompatible porous structure, called a scaffold. Cell seeding of scaffolds is an important first step. Here we study conditions that assure a uniform and rapid distribution of cells within the scaffold. The movement of cells has been simulated using the Metropolis Monte Carlo method, based on the principle that the cellular system tends to achieve the minimum energy state. For different values of the model parameters, the evolution of the cells' centre of mass is followed, which reflects the distribution of cells in the system. For comparison with experimental data, the concentration of the cells in the suspension adjacent to the scaffold is also monitored. Simulations of cell seeding are useful for testing different experimental conditions, which in practice would be very expensive and hard to perform. The computational methods presented here may be extended to model cell proliferation, cell death and scaffold degradation. Keywords. differential adhesion, dynamic cell seeding, scaffold, tissue construct
1. Introduction
Tissue engineering (TE) is a relatively new field of biomedical research. Closely related to regenerative medicine, TE develops new therapies for patients who have suffered tissue damage [2, 7]. A widely used approach to TE consists in culturing cells on a porous scaffold made of a biocompatible and biodegradable material. Cells are harvested from the patient, expanded in Petri dishes, and seeded onto scaffolds. The optimization of cell seeding is essential for the development of functional tissue constructs in vitro [1, 2]. It has been shown that if the cell seeding is uniform, the development of tissue constructs is more rapid and their mechanical properties are closer to those of native tissues. The mechanical properties of tissue constructs are largely due to the synthesis of extracellular matrix (ECM) – a web of proteins produced by cells. ECM production depends on the quality of cell seeding. If cell seeding is uniform, the culture medium equally reaches all the cells in the scaffold, providing gas and nutrient transfer to them.
Corresponding Author. Andreea Robu, Faculty of Automation and Computers, Blvd. Vasile Parvan, No. 2, 300223, Timisoara, Romania; E-mail: [email protected]
Thus, proper cell development and cell proliferation are ensured. Currently, the mechanical resistance of tissue constructs grown in the laboratory is about one order of magnitude below that of the corresponding native tissues [7, 8]. The objective of this study is to find the optimal conditions that lead to a uniform and rapid distribution of cells in the scaffold. The basic principle that underlies this study is the differential adhesion hypothesis (DAH) proposed by Steinberg, which states that the constituent cells of a tissue tend to reach the configuration of lowest energy of adhesion; that is, cells tend to establish the largest possible number of strong bonds with their environment [4, 5]. Cells interact with each other due to cohesion forces, and adhere to the scaffold via adhesion forces. Thus, the self-assembly of cells into multicellular constructs is governed by the interaction energy between cells and by the interaction energy between cells and the scaffold [4, 5].
2. Methods
The studied model system consists of a cell suspension located near a porous scaffold, bathed in culture medium. The model is built on a cubic lattice (of 50×50×150 nodes). The Oz axis is the longitudinal axis of the system. The length unit, equal to one cell diameter, is the distance between two adjacent nodes. The cell suspension occupies one region of the lattice, in which each node is occupied either by a cell or by a medium particle. In the remaining region, each node is occupied either by an immobile (scaffold) particle or by a medium particle; this region models the scaffold, with pores filled with culture medium and, eventually, by cells [3, 5]. The total adhesion energy of a system composed of t types of cells in the vicinity of a substrate can be brought to the form [5]:

E = Σ_{k,l} γ_kl N_kl + Σ_k γ_ks N_ks    (1)

where N_kl is the number of links between two particles (of types k and l), N_ks is the number of links between the cells of type k and the substrate, γ_kl is the cell-cell interfacial tension, whereas γ_ks is the cell-substrate interfacial tension [5]. To simulate the evolution of the cellular system in the vicinity of the scaffold, we used the Metropolis Monte Carlo algorithm. Running Monte Carlo steps (MCS) consists of exchanging a cell's position with that of another cell or a culture medium particle from its vicinity [3, 5]. The current study is based on Monte Carlo simulations performed for different values of the following model parameters: (i) the cohesion energy between cells, (ii) the adhesion energy between cells and scaffold, (iii) the radius of the pores and (iv) the radius of the orifices that connect the pores. As output parameters we monitored (i) the centre of mass of all cells, (ii) the centre of mass of the seeded cells and (iii) the concentration of the cells remaining in suspension. The centre of mass of the seeded cells is an indicator of cell distribution within the scaffold; its dependence on elapsed MCS is a measure of the rate of cell seeding. Since experiments on dynamic cell seeding of scaffolds monitor the concentration of the cell suspension adjacent to the scaffold [2], we also plotted this parameter versus the elapsed MCS.
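A single move of the Metropolis Monte Carlo scheme used here can be sketched as follows; this is an illustrative Python fragment in which the lattice helpers and the adhesion_energy() function (evaluating Eq. (1)) are assumed to exist, so it is not the authors' simulation code.

```python
import math
import random

def metropolis_step(lattice, adhesion_energy, e_t=1.0):
    """Attempt to swap a randomly chosen cell with a neighbouring medium particle.

    The swap is accepted with probability min(1, exp(-dE/e_t)), where dE is the
    change in total adhesion energy (Eq. 1) and e_t sets the scale of random
    fluctuations, so the system drifts toward low-energy configurations.
    """
    cell_site = random.choice(lattice.cell_sites())                  # assumed helper
    neighbour = random.choice(lattice.medium_neighbours(cell_site))  # assumed helper
    energy_before = adhesion_energy(lattice)
    lattice.swap(cell_site, neighbour)
    delta_e = adhesion_energy(lattice) - energy_before
    if delta_e > 0 and random.random() >= math.exp(-delta_e / e_t):
        lattice.swap(cell_site, neighbour)   # move rejected: undo the swap
```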
3. Results and Discussion
Table 1 presents the values of the input parameters used in the simulations and the values obtained for the output parameters; it also points to the relevant figures. The input parameter values were selected on an empirical basis, after many preliminary tests that showed the optimal energy values and the relevance of the scaffold's porosity for uniform cell seeding.

Table 1. Values of input and output parameters in representative simulations (radii are given in cell diameters).

Set (Figures)    | Cell-cell interaction energy | Cell-scaffold interaction energy | Radius of pores | Radius of circular orifices | MCS    | Plateau of <z> for seeded cells | Plateau of <z> for all cells | Plateau of fraction of cells in suspension
I (Fig. 1a-1c)   | 0                            | 0.6                              | 5               | 2                           | 80 000 | 110                             | 90                           | 0.2
II (Fig. 2a-2c)  | 0; 0.4; 0.8                  | 0.6                              | 5               | 2                           | 80 000 | 110; 110; 100                   | 90; 90; 60                   | 0.2; 0.2; 0.5
III (Fig. 3a-3c) | 0                            | 0.6                              | 8               | 2; 3; 4; 5                  | 80 000 | 110; 110; 100; 100              | 90; 90; 80; 60               | 0.2; 0.2; 0.3; 0.5
IV (Fig. 4a-4c)  | 1                            | 0.25                             | 5               | 2                           | 80 000 | 75                              | 35                           | 0.9
The volume percent concentration of the cells in the initial suspension was 1%. As shown in Fig. 1a, in about 7×10⁴ MCS a stationary state is reached, in which the centre of mass of seeded cells is very close to the centre of mass of the scaffold (Fig. 1a, upper curve). This indicates that the distribution of the cells in the scaffold is uniform (see also the snapshot in Fig. 1c). The centre of mass of all cells reaches a lower plateau (Table 1) because a part of the cells remains in suspension. In Fig. 1b we observe that already at 2×10⁴ MCS about 75% of the cells have penetrated the scaffold, and soon a plateau is reached with 20% of the cells remaining in suspension. The plateau of the centre of mass of seeded cells, however, is reached later, because cells rearrange inside the scaffold. In experiments, the cell suspension is permanently homogenized (by magnetic stirring); therefore, the vast majority of the cells penetrate the scaffold. In our simulations, however, the mobility of the cells is described by the same algorithm both in suspension and in the scaffold, so part of the cells will remain in suspension (Fig. 1c). Further refinements of the model should include the possibility to ascribe a larger motility to cells (and aggregates of cells) in suspension.
In the second set of simulations, with parameters given in the second row of Table 1, we varied the cohesion between cells. For a cell-cell interaction energy of 0.8, cell aggregates emerge (Fig. 2c), and the penetration of cells into the scaffold is slower. Note, however, that the cell-substrate interfacial tension is still negative, and cells enter the scaffold, albeit slowly, while also preserving cell-cell contacts.
Figure 1. (a) The centre of mass of all cells and the centre of mass of seeded cells; (b) cell concentration in suspension (Table 1, row 1); (c) final configuration, rendered with VMD [9].
Figure 2. (a) The centre of mass of all cells and the centre of mass of seeded cells; (b) cell concentration in suspension (Table 1, row 2); (c) final configuration, rendered with VMD [9].
Figure 2b shows that after 8×10⁴ MCS more than half of the cells are still in suspension.
In the third set of simulations (parameters in Table 1, row 3), we varied the radius of the orifices between pores. Surprisingly, an increase of the radius of the orifices from 2 to 3 cell diameters influenced neither the seeding rate (Fig. 3a, crosses and dots) nor the final extent of seeding (the plateau of the plots shown as + signs and dots in Fig. 3b). However, as the radii of the orifices increased further, the fraction of seeded cells decreased; circles (squares) in Fig. 3b refer to an orifice radius of 4 (5) cell diameters.
Figure 3. (a) The centre of mass of all cells and the centre of mass of seeded cells (Table 1, row 3); (b) cell concentration in suspension (Table 1, row 3); (c) final configuration, rendered with VMD [9].
In the fourth simulation (parameters in Table 1, row 4), the attraction between cells is higher than twice the cell-scaffold attraction, making the cell-scaffold interfacial energy positive. Our simulations show clearly that the emergent configuration is the result of a tug-of-war between cell-cell and cell-substrate interactions. This has been suggested earlier on the basis of a careful experimental study [6]; our approach brings quantitative arguments for the correctness of this observation.
Figure 4. (a) The centre of mass of all cells and the centre of mass of seeded cells (Table 1, row 4); (b) cell concentration in suspension (Table 1, row 4); (c) final configuration, rendered with VMD [9].
4. Conclusions
This work presents a lattice model and a computational algorithm able to evaluate the energetic and geometric factors that may be tuned to assure optimal cell seeding. Scaffold pore sizes and the diameter of the orifices between pores influence cell seeding only in extreme conditions: if the orifices are small (comparable to the cell diameter), or if they are so large (exceeding half of the pore diameter) that the scaffold is not contiguous and does not offer enough biomaterial for cells to attach to. If cells do not adhere to each other but do adhere to the scaffold, seeding is rapid and the cell distribution is uniform. If the cell-cell interaction energy is nonzero but small enough to ensure a negative cell-scaffold interfacial tension, a uniform distribution is reached, but the process is slower. Seeding is severely hampered if the cell-cell interaction energy is larger than twice the cell-substrate interaction energy, rendering the cell-scaffold interfacial tension positive. Moreover, if the cell-cell interaction energy is high, then regardless of the interaction between the cells and the scaffold, cells tend to aggregate and their penetration into the scaffold is slowed down drastically. Although it accounts for the competition between cell-cell and cell-substrate interaction energies, our study of the impact of cell aggregation on the rate of cell seeding is not accurate, since the present algorithm is unable to describe the fast movement of cell aggregates in the stirred suspension. Future developments of the computational framework proposed here need to incorporate a hybrid algorithm that differentiates between individual cell motility and the movement of cells and aggregates of cells with the flow of cell culture medium. Such a development is especially appealing, since it would also enable the simulation of perfusion cell seeding [1]. Also, future models might account for cell proliferation, cell death and scaffold degradation.
References
[1] Francioli S.E., Candrian C., Martin K., Heberer M., Martin I., Barbero A. Effect of three-dimensional expansion and cell seeding density on the cartilage-forming capacity of human articular chondrocytes in type II collagen sponges. Journal of Biomedical Materials Research Part A 95(3) (2010), 924-931.
[2] Vunjak-Novakovic G., Obradovic B., Martin I., Bursac P.M., Langer R., Freed L.E. Dynamic cell seeding of polymer scaffolds for cartilage tissue engineering. Biotechnology Progress 14(2) (1998), 193-202.
[3] Robu A., Neagu A., Stoicu-Tivadar L. A computer simulation study of cell seeding of a porous biomaterial. 2010 International Joint Conference on Computational Cybernetics and Technical Informatics (ICCC-CONTI), ISBN 978-1-4244-7432-5, 225-229.
[4] Foty R.A., Steinberg M.S. The differential adhesion hypothesis: a direct evaluation. Developmental Biology 278(1) (2005), 255-263.
[5] Neagu A., Kosztin I., Jakab K., Barz B., Neagu M., Jamison R., Forgacs G. Computational modeling of tissue self-assembly. Modern Physics Letters B 20(20) (2006), 1217-1231.
[6] Ryan P.E., Foty R.A., Kohn J., Steinberg M.S. Tissue spreading on implantable substrates is a competitive outcome of cell-cell vs. cell-substratum adhesivity. Proc. Natl. Acad. Sci. U.S.A. 98(8) (2001), 4323-4327.
[7] Griffith L.G., Naughton G. Tissue engineering - current challenges and expanding opportunities. Science 295(5557) (2002), 1009.
[8] Semple J.L., Woolridge N., Lumsden C.J. In vitro, in vivo, in silico: computational systems in tissue engineering and regenerative medicine. Tissue Engineering 11(3-4) (2005), 341-356.
[9] Humphrey W., Dalke A., Schulten K. VMD - Visual Molecular Dynamics. J. Mol. Graphics 14 (1996), 33-38 (http://www.ks.uiuc.edu/Research/vmd/).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-887
The ONCO-I2b2 Project: Integrating Biobank Information and Clinical Data to Support Translational Research in Oncology Daniele SEGAGNIa, Valentina TIBOLLOa, Arianna DAGLIATIc, Leonardo PERINATIa, Alberto ZAMBELLIa, Silvia PRIORIa, Riccardo BELLAZZIb,a a IRCCS Fondazione S. Maugeri, Pavia, Italy b Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy c Institute for Advanced Studies, Pavia, Italy
Abstract. The University of Pavia and the IRCCS Fondazione Salvatore Maugeri of Pavia (FSM) have recently started an IT initiative to support clinical research in oncology, called ONCO-i2b2. ONCO-i2b2, funded by the Lombardia region, builds on the software developed by the Informatics for Integrating Biology and the Bedside (i2b2) NIH project. Using i2b2 and purpose-designed new software modules, data coming from multiple sources are integrated and jointly queried. The core of the integration process lies in retrieving and merging data from the biobank management software and from the FSM hospital information system. The integration process is based on an ontology of the problem domain and on open-source software integration modules. A Natural Language Processing module has also been implemented, which automatically extracts clinical information of oncology patients from unstructured medical records. The system currently manages more than two thousand patients and will be further extended and improved in the next two years. Keywords. i2b2, oncology research, biobanks, natural language processing, translational research, hospital information system integration
1. Introduction
ONCO-i2b2 is a project funded by the Lombardia region, in Italy, which aims at supporting translational research in oncology. The project exploits the software solutions implemented by the Informatics for Integrating Biology and the Bedside (i2b2) research center, an initiative funded by the NIH Roadmap National Centers for Biomedical Computing and headed by Partners HealthCare Center in Boston [1]. The i2b2 project developed a data warehouse and a set of software solutions that are based on an architecture called "hive". The "hive" has different software cells devoted to data extraction, data manipulation or data analysis tasks [2]. Within ONCO-i2b2, the University of Pavia and the hospital IRCCS Fondazione S. Maugeri (FSM) have integrated the i2b2 infrastructure with the FSM hospital information system (HIS) and with a cancer biobank that manages both plasma and cancer tissues. The integration with the HIS provides access to all the electronic
medical records of cancer patients. The majority of the data collected in the FSM HIS is represented by textual reports. It was therefore necessary to develop a Natural Language Processing (NLP) module, and to integrate it into the ICT architecture, in order to extract important information and clinical test results, such as patients' histological reports [3]. The oncology biobank provides bio-specimens prepared from a collection of blood and tissue samples, taken with the informed consent of healthy individuals and oncologic patients. The aim of this paper is to describe the basic steps of the integration process and to present the current status of the ONCO-i2b2 project.
2. Method
The ONCO-i2b2 software implemented at the FSM hospital is designed to integrate data from many different sources, collected for different purposes, in order to allow researchers to query and analyze the vast amount of information coming from clinical practice. The main data sources that we have integrated into the i2b2 data warehouse are the hospital pathology unit, the biobank and the HIS. In the following we describe the details of the integration process.
2.1. FSM Pathology Operative Unit and Biobank
Data associated with the biospecimens stored inside the biobank are uploaded almost automatically from the hospital pathology unit. A semi-automatic procedure has been implemented to populate the biobank database in order to decrease the time of insertion and reduce the possibility of human error. One of the major efforts made during this implementation was to anonymize each cancer biospecimen by creating a two-dimensional DataMatrix barcode that does not include any direct reference to the donor patient. Cancer tissues or plasma samples are selected by researchers and placed in new tubes labeled with the new barcode. Granted users may retrieve the information related to the donors through a specialized software application that also shows the patient's informed consent. The biobank database is synchronized periodically (several times a day) in order to keep the biobank sample data constantly updated. The information on the biological samples contained in the biobank is loaded into the i2b2 data warehouse through a complex series of Extract, Transform, Load (ETL) operations that involve data extraction, processing and mapping in the data warehouse [4]. The ETL activity was performed with KETTLE [5], developed within the Pentaho project [6]. Table 1 shows the number of patients and biological samples currently available in the biobank, divided by hospital medical unit of origin and type. Figure 1 shows the different steps of the integration process. Step 1 is the semi-automated data extraction from the pathology unit; step 2 describes the anonymization process applied to biosamples before they are stored in the biobank. Step 3 represents the i2b2 data warehouse, where data from different sources are collected through ETL transformations. Step 4 shows how the information coming from the HIS is integrated, too.
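The anonymization step can be pictured as follows. This is a schematic sketch only; the identifier formats, code layout and mapping store are our own assumptions rather than the actual FSM implementation.

```python
# Illustrative sketch: each biospecimen gets a new code with no direct reference to the
# donor, and the code-to-donor mapping is kept in a separate, access-restricted store.
# All identifier formats below are invented for this example.
import secrets

restricted_mapping = {}          # accessible only to granted users (with consent check)

def anonymize_specimen(donor_patient_id: str, specimen_type: str) -> str:
    """Return a pseudonymous sample code suitable for printing as a DataMatrix label."""
    sample_code = f"BB-{specimen_type[:2].upper()}-{secrets.token_hex(6)}"
    restricted_mapping[sample_code] = donor_patient_id
    return sample_code

code = anonymize_specimen("FSM-0001234", "tissue")
print(code)                      # e.g. BB-TI-9f1c2a77b3e4, printed as a 2D barcode
# Re-identification (e.g. to display the informed consent) is only possible through the
# restricted mapping, never from the barcode itself:
print(restricted_mapping[code])
```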
Table 1. Biobank biospecimen counts divided by hospital operative unit of origin and sample type. Table data refer to the period 1-12-2009 to 15-01-2010.

FSM Unit  | Patients | Tissue | Plasma
Senology  | 237      | 729    | 729
Surgery   | 75       | 567    | 243
TOTAL     | 312      | 1296   | 972
Figure 1. ICT architecture designed to integrate information from the FSM medical units and the hospital information system: (1) pathology medical unit (anatomical pathology database, asynchronous biobank database update); (2) biobank (biobank database, anonymized barcodes); (3) i2b2 (data warehouse, web server and researcher client, fed through ETL); (4) FSM information system (hospital information system database and electronic medical record).
2.2. FSM Hospital Information System
The information collected in the FSM HIS is made available to the i2b2 service through an ETL process that transforms the medical information of interest into concepts that can be queried in the research phase. Some of these oncological concepts refer to key facts collected in the pathological anatomy electronic report, which are only available in textual format. An NLP software module has thus been developed to extract this information from the FSM HIS for each cancer patient who has at least one biological sample stored in the biobank. To address the problem of extracting structured information from pathology reports for research purposes, we developed an NLP module based on the
GATE system [7] to automatically identify and map anatomic and diagnostic noun phrases found in full-text pathology reports to SNOMED concept descriptors. The pathology unit uses unstructured or semi-structured text documents to represent this information. Therefore, we identified a set of regular expressions that match clinical phrases commonly found in pathology reports; such expressions are then processed by the NLP parser. In particular, the retrieved data relate to a set of oncological SNOMED codes and to values derived from clinical tests, such as the score of breast carcinomas stained with HercepTest or the expression score of Ki-67, a nuclear antigen protein used to determine the growth fraction of tumors [8]. The system has been validated internally by manual verification, performed by the medical experts involved in the study, on a subset of 100 cases, with 100% accuracy. This module is now part of the overall data warehouse management strategy.
2.3. The Integrated Architecture: i2b2
The i2b2 data warehouse, called Clinical Research Chart (CRC), is designed to manage data from clinical trials, medical record systems and laboratory systems, along with many other types of clinical data from heterogeneous sources [9]. The CRC stores these data in three tables: the patient, the visit and the observation tables. The three data tables, along with two of the lookup tables (concept and provider), are the main components of the so-called star schema of the data warehouse. The most important aspect of the construction of a star schema is identifying what constitutes a "fact"; in healthcare, a logical fact is an observation on a patient. The dimension tables contain further descriptive and analytical information about attributes in the fact table. The i2b2 infrastructure installed at FSM provides web-based access to all the types of data described in the previous paragraphs. Data are stored in the i2b2 data warehouse through complex ETL transformations following a cancer-specific ontology that combines atomic information to create well-defined medical observations. The extracted information can be analyzed through the i2b2 web client with appropriately configured plug-ins [10, 11].
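As a simplified, hypothetical illustration of the regular-expression approach of Section 2.2 (the production module is built on GATE and maps the recognized phrases to SNOMED descriptors), the sketch below extracts a Ki-67 percentage and a HercepTest score from a pathology-report string; the patterns and the sample text are invented for this example.

```python
# Simplified, hypothetical regular-expression extraction from a pathology report.
import re

report = "Ductal carcinoma of the breast. HercepTest: score 2+. Ki-67 expression: 18%."

KI67_RE = re.compile(r"Ki-?67[^0-9%]{0,20}(\d{1,3})\s*%", re.IGNORECASE)
HERCEP_RE = re.compile(r"HercepTest[^0-3]{0,20}([0-3])\s*\+?", re.IGNORECASE)

def extract_oncology_facts(text: str) -> dict:
    """Return structured observations ready to be mapped to concepts and loaded via ETL."""
    facts = {}
    if (m := KI67_RE.search(text)):
        facts["KI67_PERCENT"] = int(m.group(1))
    if (m := HERCEP_RE.search(text)):
        facts["HERCEPTEST_SCORE"] = int(m.group(1))
    return facts

print(extract_oncology_facts(report))   # {'KI67_PERCENT': 18, 'HERCEPTEST_SCORE': 2}
```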
3. Results
Since December 2010 the entire software system has been installed and is running at FSM. The aim of the implementation was to allow FSM researchers to exploit i2b2 query capabilities through the user-friendly web interface. To achieve this goal we focused on the development of data integration processes, on the design of NLP modules and on the management and anonymity of the biological samples contained in the biobank. Integration of these data from heterogeneous sources required several key steps: i) creation of specific software to upload the information available in the pathology unit; ii) generation of new barcodes when the biosamples are archived in the biobank; iii) design and configuration of an NLP software module to extract, from unstructured text documents, information relevant to the clinical characterization of patients in cancer research; iv) creation of ETL transformations to populate the i2b2 data warehouse with concepts related to cancer research. Currently, the i2b2 instance installed at FSM contains 2214 patients (312 of them have at least one biological sample in the cancer biobank), 25826 visits, 163
concepts (divided into demographic data, diagnoses, clinical measurements, histological reports, therapies and biobank samples) and 93680 observations.
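These observations are organized according to the star schema outlined in Section 2.3. The toy example below illustrates the idea of a central fact table of patient observations joined against lookup and dimension tables; the table and column names are simplified assumptions and do not reproduce the actual i2b2 CRC schema.

```python
# Toy star-schema illustration (simplified names, not the real i2b2 CRC schema).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE concept_dimension (concept_cd TEXT PRIMARY KEY, name TEXT);
CREATE TABLE patient_dimension (patient_num INTEGER PRIMARY KEY, sex TEXT, birth_year INTEGER);
CREATE TABLE observation_fact  (patient_num INTEGER, visit_num INTEGER,
                                concept_cd TEXT, value TEXT, obs_date TEXT);
INSERT INTO concept_dimension VALUES ('ONC:KI67', 'Ki-67 expression (%)'),
                                     ('BIOBANK:TISSUE', 'Tissue sample in biobank');
INSERT INTO patient_dimension VALUES (1, 'F', 1956), (2, 'M', 1949);
INSERT INTO observation_fact VALUES (1, 10, 'ONC:KI67', '18', '2010-11-03'),
                                    (1, 10, 'BIOBANK:TISSUE', '1', '2010-11-03'),
                                    (2, 12, 'ONC:KI67', '45', '2010-12-21');
""")

# Typical research query: patients who have both a Ki-67 observation and a biobank sample.
rows = db.execute("""
SELECT DISTINCT p.patient_num, p.sex
FROM patient_dimension p
JOIN observation_fact ki ON ki.patient_num = p.patient_num AND ki.concept_cd = 'ONC:KI67'
JOIN observation_fact bb ON bb.patient_num = p.patient_num AND bb.concept_cd = 'BIOBANK:TISSUE'
""").fetchall()
print(rows)   # [(1, 'F')]
```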
4. Discussion
The novel IT architecture created at FSM is a concrete example of how information from heterogeneous sources can be correctly integrated and made available for scientific research. In order to continuously improve the ease of use of i2b2 for hospital researchers, we added to the i2b2 web client application novel plug-ins for data export and for phenotype exploration [12]. One of the major efforts made during the implementation of the i2b2 extensions was to be fully compliant with i2b2 development guidelines, so that our software modules and architecture can be reused by other researchers of the i2b2 community. Exploiting the potential of this IT architecture, the next steps of the project will involve the extension of the data set imported from the HIS as well as the management of data from laboratory tests. We also plan to continue extending the capabilities of the FSM i2b2 architecture by implementing new plug-ins devoted to data analysis; in particular, we are working on an extension of the i2b2 query engine that adds temporal query capabilities. Finally, another important point for the future development of the project will be the integration of patients' genotype data, which will require careful evaluation both in terms of data representation and storage and in terms of data security and privacy.
Acknowledgements. This paper describes the ONCO-i2b2 project, funded by the Lombardia Region, in Italy. We gratefully acknowledge Prof. Carlo Bernasconi and the Collegio Ghislieri in Pavia for their active support.
References
[1] Murphy SN, Mendis M, Hackett K, et al. Architecture of the open-source clinical research chart from Informatics for Integrating Biology and the Bedside. AMIA Annu Symp Proc. (2007), 548-52.
[2] Murphy SN, Weber G, Mendis M, et al. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). J Am Med Inform Assoc. (2010), 124-30.
[3] Jurafsky D, Martin JH. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Second Edition. Prentice Hall, 2008.
[4] Kimball R, Ross M, Thornthwaite W, Mundy J, Becker B. The Data Warehouse ETL Toolkit (2nd edition), 2008.
[5] Pentaho Corporation. Pentaho Data Integration (Kettle) Documentation (http://kettle.pentaho.com), 2011.
[6] Bouman R, van Dongen J. Pentaho Solutions. Wiley, 2009.
[7] The University of Sheffield. GATE software (http://gate.ac.uk/sale/tao/split.html), 2011.
[8] Broyde A, Boycov O, Strenov Y, Okon E, Shpilberg O, Bairey O. Role and prognostic significance of the Ki-67 index in non-Hodgkin's lymphoma. Am J Hematol. 2009 Jun;84(6):338-43.
[9] Partners HealthCare Systems. i2b2 software (v.1.5) documentation, 2008.
[10] Mendis M, Wattanasin N, Kuttan R, et al. Integration of Hive and cell software in the i2b2 architecture. AMIA Annu Symp Proc. (2007), 1048.
[11] Murphy SN, Churchill S, Bry L, et al. Instrumenting the health care enterprise for discovery research in the genomic era. Genome Res. (2009), 1675-81.
[12] Bellazzi R, Segagni D, et al. R Engine Cell: integrating R into the i2b2 software infrastructure. J Am Med Inform Assoc (2011).
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-892
IT Infrastructure Components to Support Clinical Care and Translational Research Projects in a Comprehensive Cancer Center Hans-Ulrich PROKOSCHa,b1, Markus RIESb, Alexander BEYERb, Martin SCHWENKb, Christof SEGGEWIESb, Felix KÖPCKEa, Sebastian MATEa, Marcus MARTINb, Barbara BÄRTHLEINa, Matthias W. BECKMANNc,d, Michael STÜRZLe, Roland CRONERf, Bernd WULLICHg, Thomas GANSLANDTb, Thomas BÜRKLEa a Chair of Medical Informatics, University Erlangen-Nuremberg, Germany b Medical Informatics & Comm. Center, c University Cancer Center Erlangen, d Dept. of Obstetrics and Gynecology, e Division of Molecular and Exp. Surgery, f Dept. of Surgery, g Dept. of Urology, University Hospital Erlangen, Germany
Abstract. This paper presents the concept of an integrated IT infrastructure framework established at the comprehensive cancer center at the University Hospital Erlangen. The framework is based on the single source concept, where data from the electronic medical record are reused for clinical and translational research projects. The applicability of the approach is illustrated by two case studies from colon cancer and prostate cancer research projects. Keywords. Comprehensive cancer center, cancer documentation, single source concept, translational research, IT infrastructure framework
1. Introduction
Oncology care is provided in complex transsectoral and interdisciplinary networks of service providers. Within cancer research we have seen in recent years a massive growth in data, especially when molecular, genomic and clinical data are to be linked [1]. In Germany, comprehensive cancer centers have been established in order to provide centers of excellence for cancer care, medical education as well as clinical and translational cancer research. Traditionally, however, many data collections and IT components in hospitals and research institutions have been developed and implemented independently from each other, and typically without any crosslinks. In this context, Beckmann and colleagues have complained about the enormous, multiplied documentation requirements for physicians [2].
Corresponding author: E-mail: [email protected]
Shortliffe and Sondick have emphasized that "if the submission of data for research and monitoring purposes requires an extra step, ... the process will likely fail" [3]. This has led to the design and implementation of integrated informatics research platforms on the one hand [4] and single source solutions [5] on the other. In the implementation phase of the Erlangen University Cancer Center (UCCE) in 2007/2008 it was realized that a comprehensive and integrated information technology framework with a high level of data reuse would be a major pillar of a successful comprehensive cancer center. In this publication we describe the architecture of such a framework supporting both cancer care and cancer research. We further present two small case studies illustrating the value already gained from this implementation.
2. Methods
At Erlangen University Hospital a comprehensive workflow-based electronic medical record system (EMR Soarian® from Siemens; cf. [6]) has been introduced stepwise within the last decade. Furthermore, until 2008 clinical cancer registration was still performed as separate data entry based on paper chart review or pathology reports. For this purpose, an Oracle-based proprietary cancer documentation system (TUREK-2) had been established at Erlangen University Hospital. Before the UCCE was started, clinical trial documentation was based on paper or on individual software solutions. Biobanking was decentralized, with specimen tracking and annotation data often stored in Excel sheets. When the UCCE IT infrastructure concept was defined in 2007, it was decided to pursue a single source approach with the Soarian® EMR, closely connected with the clinical cancer registry database, as core components. Further, those core components should be complemented by commercially available standard products wherever possible. Thus the requirement specification consisted of an integrated framework comprising 1) the Soarian® electronic medical record, 2) the clinical cancer registry database, 3) a centralized biobanking management software, 4) a central clinical trials database, 5) a flexible clinical data warehouse and 6) standard services to assure compliance with data protection requirements in this environment. The single source documentation approach implied that data should be captured only once, at their origin, and afterwards be available for multiple reuse. Ideally, digital structured data acquisition should be an integrated part of the clinical treatment process. Thus, an analysis of the clinical workflows related to the various steps in cancer care was performed. Based on this, the Soarian® electronic medical record system has been extended with numerous workflow-supported assessment forms for the documentation of cancer anamneses, diagnostic data, therapy and follow-up data. Additionally, all data entry forms were based on the Germany-wide standardized definition of a minimal basic cancer dataset. The therapeutic decision process for cancer patients, pursued within interdisciplinary cancer conferences, has been supported with conference planning and documentation forms. The proprietary homegrown clinical cancer registry database has been substituted by GTDS® (a cancer registration system once developed with funding support from the German Ministry of Health and the German Cancer Society and today used in more than 60 clinical cancer registries throughout Germany) [7]. Biobank management support is provided by the commercially available Starlims® system. The GCP-certified, commercially available clinical trials management system SecuTrial® has been established as a campus-wide platform for clinical trials. In addition to the existing
commercial Cognos® data warehouse, the i2b2 toolbox has been evaluated and established as a user-friendly and flexible clinical data warehouse [8, 9]. Finally, secure data flow between the framework's components and compliance with the German data protection law are supported by standardized modules provided by the German Technology and Method Platform for Networked Medical Research (TMF) [10].
3. Results
The value of the IT infrastructure framework established at the UCCE is illustrated by two early case studies in which the above components have been applied.
3.1. The Polyprobe Project
Polyprobe is a multicentric research project aiming at validating the major predictive/prognostic genes for colorectal cancer in a prospective diagnostic study, by applying novel automated nucleic acid extraction procedures from formalin-fixed, paraffin-embedded tissues and quantitative RT-PCR procedures for high-throughput gene expression analyses of 61 marker genes. Within a period of 3 years, 650 patients are to be included in the study. The IT concept within this project completely follows the single source idea. Nine assessment forms have been implemented within the EMR to capture diagnostic, therapeutic and study-specific data, integrated into the colon cancer treatment process. Within the EMR those data are identified by a hospital-wide patient identifier as well as a pseudonym generated in advance for all study participants. Patient consent is documented within the EMR as well. After a final quality validation by a research physician, the data are flagged to allow the export of pseudonymized records into a CSV format which directly matches the import format of the SecuTrial® clinical trials management system. Thus, regular import of quality-assured data into the research database is supported (a schematic illustration is given at the end of Section 3). Biospecimens extracted during surgery or endoscopy are transferred to the pathology department for diagnostic purposes as well as for storage within the UCCE biobank for further research analysis. Specimens stored for research purposes are identified with special probe identification numbers, which are documented as linking information within the patient's pathology report in the EMR and also imported into probe-related records in the SecuTrial® system. Besides its batch import functionality, SecuTrial® provides secure web-based data entry forms which support direct eCRF-based data entry for the second study center (Frankfurt University Hospital), which has not yet been able to implement a single source approach. Until today, 141 patients have been enrolled into the study at Erlangen and documented within the EMR. Of those, data of 20 patients have currently been imported into SecuTrial® and released for monitoring purposes. The external project monitors use the SecuTrial® monitoring workflows for their study-specific quality management process.
3.2. The German Prostate Cancer Consortium Database
The German Prostate Cancer Consortium comprises a group of more than 70 urologists, pathologists and basic researchers throughout Germany. Founded in 2003, its aim is to improve prostate cancer research with interdisciplinary and cross-institutional cooperation. For this purpose, between 2007 and 2009 a web-based joint
research database has been established, including data on prostate cancer diagnosis, therapy and follow-up as well as the characterization of biospecimens collected at the participating centers. Data capture for this database was provided through web-based data entry screens. Despite the high research interest of all partners, this solution was ultimately not accepted, because it required time-consuming manual data entry of parameters which, usually in similar form, had already been documented in the local medical record system. Thus, it was decided to move towards a single source/data warehouse approach, reusing data already documented in local electronic medical records. The urology clinics at the Erlangen and Münster University Hospitals were chosen as pilot centers, since both of them had already established comprehensive prostate cancer documentation within their EMR systems. For data protection reasons a two-level architecture was established, using three i2b2 installations specifically extended and adapted for this scenario. Every participating partner (currently Münster and Erlangen) has a local i2b2 installation. Datasets are regularly exported from the EMR systems, pseudonymized and imported into the local i2b2 data warehouse. Thus, those local i2b2 instances already provide query and analysis features for the respective urology clinics on their "own" data. Researchers can regularly initiate the transfer of further anonymized data from the local instances into one common i2b2 DPKK research database. This central i2b2 instance provides a password-protected, web-based secure query interface for all DPKK members.
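The following sketch illustrates the export step referred to in Section 3.1: quality-validated EMR records are written to a CSV file with the patient identifier replaced by a pre-generated study pseudonym. The field names, pseudonym list, file name and CSV layout are invented for illustration and do not reflect the actual Soarian® export or SecuTrial® import formats.

```python
# Illustrative sketch of a pseudonymized, quality-validated CSV export (invented fields).
import csv

pseudonyms = {"UKER-000123": "PP-0001", "UKER-000456": "PP-0002"}   # generated in advance

emr_records = [
    {"patient_id": "UKER-000123", "form": "colon_anamnesis", "tnm_t": "T3",
     "quality_validated": True},
    {"patient_id": "UKER-000456", "form": "colon_anamnesis", "tnm_t": "T2",
     "quality_validated": False},   # not yet released by the research physician
]

with open("polyprobe_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pseudonym", "form", "tnm_t"])
    writer.writeheader()
    for rec in emr_records:
        if not rec["quality_validated"]:
            continue                # only flagged, quality-assured records are exported
        writer.writerow({"pseudonym": pseudonyms[rec["patient_id"]],
                         "form": rec["form"], "tnm_t": rec["tnm_t"]})
```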
4. Discussion
Due to the complex structure of oncology documentation, which originates over long treatment periods in different clinical disciplines, implementation of an IT-based documentation process is a complex mission. Oncology data are generated by different clinical specialties, clinical care documentation and research databases are traditionally separated, and biological and molecular research based on high-throughput systems is often not linked with clinical research. Translational research projects will in future need integrated, efficient data management platforms which can easily be accessed by various data analysis and data mining tools. The caBIG initiative in the U.S. has aimed at mastering this challenge, supported by large funding efforts, and a variety of grid-based tools have been developed and applied in various scenarios [11]. Reusing the electronic medical record for clinical research has been identified as one large challenge for medical informatics; therefore, tools such as the caBIG modules need to be closely integrated with EMR databases [12]. McConnell and colleagues, for example, have presented a pilot deployment of caTRIP at Duke Comprehensive Cancer Center [4]. Ochs and Casagrande have described their view on "information systems for cancer research" and provided an overview of the systems and interactions needed to handle clinical trials and high-throughput data in cancer research. Their vision was that such systems should ideally interact gracefully with institutional systems for clinical care and would utilize institutional IT infrastructure and expertise. Large parts of their vision have been implemented at Erlangen University Hospital within the last three years. Workflow-supported EMR documentation linked with the above described single source concept has enhanced such documentation efforts, making the data available for clinical care (including billing, quality assurance programs and discharge letter creation), clinical cancer registries and research purposes at the same time. Pseudonymization tools developed to meet national data protection
requirements could be integrated seamlessly into the transfer processes between the EMR and the research databases. In the above described case studies, the CSV files exported from the EMR database are currently only imported into the SecuTrial® or i2b2 databases, respectively. In a next step those data will also be used as upload/import files for the Erlangen Cancer Registry. Additionally, in the future UCC-defined core data records of all cancer patients can be exported from the EMR and imported into a joint i2b2-based UCC research platform. Having linked those data also with the identifiers of the biospecimens within the Starlims® biobank management system, they can also be used as clinical annotations for the biobank. This illustrates the opportunities arising from the integrated IT infrastructure framework implemented at the Erlangen Comprehensive Cancer Center, making EMR data available for multiple secondary-use purposes. Nevertheless, even though this paper illustrates the successful implementation of a single source approach, we shall not neglect that, on a semantic and process level, implementing those data reuse concepts has been quite complex. Major challenges which needed to be mastered were related to the definition of common cancer-specific minimal data sets and the alignment of the process steps for clinical care documentation, register documentation and trial documentation with each other. Describing all those aspects, however, would go beyond the limits of this paper and shall be the focus of a separate publication.
Acknowledgement: parts of the described projects have been funded by the German Federal Ministry of Education and Research and by the German Cancer Aid.
References
[1] Ochs MF, Casagrande JT. Information systems for cancer research. Cancer Invest 26 (2008), 1060-1067.
[2] Beckmann K, Jud S, Heusinger K, Schwenk M, Bayer C, Häberle L, et al. Dokumentation in der gynäkologischen Onkologie. Der Gynäkologe 43 (2010), 400-410.
[3] Shortliffe EH, Sondick EJ. The public health informatics infrastructure: anticipating its role in cancer. Cancer Causes Contr 17(7) (2006), 861.
[4] McConnell P, Dash RC, Chilukuri R, Pietrobon R, Johnson K, Annechiarico R, Cuticchia AJ. The cancer translational research informatics platform. BMC Med Inform Decis Mak 8, 60 (2008), doi:10.1186/1472-6947-8-60.
[5] Dugas M, Breil B, Thiemann V, Lechtenbörger J, Vossen G. Single source information systems to connect patient care and clinical research. Stud Health Technol Inform 150 (2009), 61-65.
[6] Haux R, Seggewies C, Baldauf-Sobez W, Kullmann P, Reichert H, Luedecke L, et al. Soarian workflow management applied for health care. Methods Inf Med 42 (2003), 25-36.
[7] Altmann U, Katz F, Dudeck J. Das Gießener Tumordokumentationssystem GTDS: Software für klinische Krebsregister. Spiegel der Forschung (2002), 4-10.
[8] Murphy SN, Weber G, Mendis M, Gainer V, Chueh HC, Churchill S, et al. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). J Am Med Inform Assoc 17(2) (2010), 124-130.
[9] Ganslandt T, Mate S, Helbing K, Sax U, Prokosch HU. Unlocking data for clinical research - the German i2b2 experience. Appl Clin Inf 2 (2011), 116-127.
[10] Helbing K, Demiroglu SY, Rakebrandt F, Pommerening K, Rienhoff O, Sax U. A data protection scheme for medical research networks. Review after five years of operation. Methods Inf Med 49(6) (2010), 601-607.
[11] Fenstermacher D, Street C, McSherry T, Nayak V, Overby C, Feldman M. The Cancer Biomedical Informatics Grid (caBIG). Conf Proc IEEE Eng Med Biol Soc 1 (2005), 743-746.
[12] Prokosch HU, Ganslandt T. Perspectives for medical informatics - reusing the electronic medical record for clinical research. Methods Inf Med 48(1) (2009), 38-44.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-897
Using a Robotic Arm to Assess the Variability of Motion Sensors Lukas GORZELNIAKa,b,1, André DIASc, Hubert SOYERd, Alois KNOLLe Alexander HORSCHa,c a Institut für Medizinische Statistik und Epidemiologie, Technische Universität München, Munich, Germany b Institut für Epidemiologie, Helmholtz Zentrum München, Munich, Germany c Depts. of Computer Science & Clinical Medicine, University of Tromsø, Tromsø, Norway d Fakultät für Informatik, Technische Universität München, Munich, Germany e Institut für Informatik VI, Technische Universität München, Munich, Germany
Abstract. For the assessment of physical activity, motion sensors have become increasingly important. To assure a high accuracy of the generated sensor data, the measurement error of these devices needs to be determined. Sensor variability has been assessed with various types of mechanical shakers. We conducted a small feasibility study to explore if a programmable robotic arm can be a suitable tool for the assessment of variability between different accelerometers (inter-device variability). We compared the output of the accelerometers GT1M and GT3X (both ActiGraph) and RT3 (Stayhealthy) for two different movement sequences. Keywords. Accelerometer, validation, robot, inter-device-variability
1. Introduction
Motion sensors ease the assessment of physical activity (PA) and provide objective recording of the PA components intensity, frequency and duration. A common type of motion sensor is the accelerometer. Accelerometers vary in size, sampling rate, proprietary movement detection algorithms, calibration, access to raw sampling data, and output variables, i.e. S.I. units, proprietary counts or vector magnitude units (VMUs). Previously published studies on the reliability of accelerometers have analyzed data generated by human motion in scenarios with standardized conditions for each subject [1, 2]. In these studies the possible variability among sensors (of the same type) is measured under controlled conditions. To assess the measurement errors of accelerometers, devices have been mounted on vibration machines such as jigs or shakers in order to generate acceleration data under controlled conditions [3]. However, to the knowledge of the authors, there is no study comparing the variability of the accelerometers GT1M, GT3X (ActiGraph) and RT3 (StayHealthy) (for details see Section 2.1) by using an industrial robot to carry out clearly defined and reproducible movements.
Lukas Gorzelniak, IMSE Klinikum rechts der Isar der TU München (Bau 523), Ismaninger Str.22, D-81675 Munich, Germany.E.mail: [email protected].
Robots work with very high precision. They can be programmed for simple to complex motion sequences along different spatial axes, and thus have the potential to simulate human and artificial movements better than shaker devices. The aim of this paper is to examine the variability of the GT1M, GT3X and RT3 accelerometers by using an industrial robot for defined and repeatable movements.
2. Material and Methods
2.1. Accelerometers
For this exploratory study, 11 piezoelectric triaxial RT3 (Stayhealthy, Monrovia, CA, USA), 5 biaxial GT1M and 5 triaxial GT3X accelerometers (ActiGraph LLC, Pensacola, FL, USA) were used. The RT3 records activity in 3 orthogonal directions at a sampling rate of 1 Hz. The measured accelerations are converted to a digital representation, then processed as activity counts, and finally stored as VMUs. The GT1M is a micro-electromechanical system which measures acceleration in the vertical and horizontal planes at a sampling rate of 30 Hz. PA is filtered and expressed as activity counts, a quantification of the amplitude and frequency of the detected accelerations summed over a user-specified time interval. The GT3X is the successor of the GT1M and can assess activity in 3 orthogonal directions. Both ActiGraph accelerometers support the representation of PA in terms of VMUs. All accelerometers in our study were set to one-second post-filtered recording, as this is the highest frequency common to all sensors with VMU output.
2.2. Industrial Robot
For defined and repeated movements the industrial robot TX90 (Stäubli Robotics, Pfäffikon, Switzerland) was used. The robot has an articulated arm and can execute movements in 6 degrees of freedom with a repeatability of ± 0.03 mm. The high number of degrees of freedom allows the robot to approximate (mimic) human movements.
2.3. Accelerometer Attachment
For a rigid attachment of the sensors, a single RT3 holder was screwed onto the robotic arm. GT1M and GT3X accelerometers were attached to the same holder using double-sided Velcro tape. This provided a stable attachment of the sensors (Figure 1).
Figure 1. Robotic arm with a single RT3 device attached.
The robot was mounted on a laboratory table and was programmed through a cable-connected interface.
2.4. Protocol
Single accelerometer units were consecutively mounted on the robotic arm at exactly the same position before the programmed motion was executed. Acceleration data for two types of movement were recorded for each device during a motion sequence at two randomly selected robot speeds. The first sequence consisted of simple movements along each axis, beginning at the resting position of the robot. The second sequence was "random", with components along all axes. The sequences were chosen to assess each axis individually and in combination. We did not try to mimic human movement in this study. The two sequences were repeated three times, after short breaks of no movement, at both speed levels. The robot program had to be started manually (for the different speeds); thus, the data among the accelerometers were not exactly synchronized, causing a varying period of inactivity (activity gap). This gap was used to assess the signal-to-noise ratio and was discarded for the comparison of the VMU output. The gap location was known because all sequences had exact durations. The first non-zero value in the data defined the beginning of the time series. In order to evaluate the variability of the three accelerometers, descriptive statistics and illustrative figures were used.
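The analysis outlined above can be sketched as follows; the synthetic count values and the gap position are illustrative assumptions, and the signal-to-noise ratio is taken here as the mean VMU of the movement samples divided by the standard deviation of the VMU values recorded during the no-movement gap (cf. Table 1).

```python
# Sketch of the protocol's analysis on invented 1 Hz triaxial count data.
import math
import statistics

counts = [(0, 0, 0), (0, 0, 0), (12, 3, 5), (20, 8, 2), (1, 0, 1), (0, 1, 0),
          (15, 9, 4), (22, 11, 6)]                       # (x, y, z) counts per second
vmu = [math.sqrt(x*x + y*y + z*z) for x, y, z in counts]  # vector magnitude units

start = next(i for i, v in enumerate(vmu) if v > 0)       # first non-zero value starts the series
vmu = vmu[start:]

gap = vmu[2:4]                                            # samples in the known activity gap
signal = vmu[:2] + vmu[4:]                                # movement samples (gap discarded)

noise_sd = statistics.pstdev(gap)
mean_vmu = statistics.mean(signal)
snr = mean_vmu / noise_sd if noise_sd > 0 else float("inf")
print(f"mean VMU {mean_vmu:.2f}, max {max(signal):.2f}, noise SD {noise_sd:.2f}, SNR {snr:.2f}")
```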
3. Results
As the output scale differs between devices of different manufacturers, a comparison was conducted based on relative rather than absolute values. All accelerometers recorded movements in VMUs, but differed in their co-domain due to different manufacturers or different numbers of measurement axes. The GT3X recorded the highest accelerations during the specified motion sequence (10.77 ± 0.29 VMU mean), reaching the highest peaks (71.80 ± 4.55 VMU max), compared to the GT1M (7.13 ± 0.21 VMU mean; 66.52 ± 10.67 VMU max) and the RT3 (5.31 ± 0.86 VMU mean; 34.91 ± 6.28 VMU max), all values ± standard deviation, respectively (see Table 1). The variability of the mean VMUs recorded during both types of movement was about 40 ± 24% in the RT3, 8 ± 15% in the GT1M, and 6 ± 11% in the GT3X. This is illustrated in Figures 2-4, in which data from each sensor type are plotted in a separate graph. Ideally, only a single line should be visible in each plot. Taking the displacement in the synchronization into account, the GT1M and the GT3X accelerometers overlapped fairly well in the graphs. Peaks and breaks before each repetition can be identified by small discrepancies. For the RT3 accelerometers, no clear line of measurement is observable, and small amounts of motion were continuously recorded during breaks. These are assumed to be noise. The signal-to-noise ratio calculated from data within the interval of no movement between the motion sequences can be found in Table 1. Both ActiGraph accelerometers identified the stationary period more precisely than the RT3. Except for one device, all RT3 accelerometers showed non-zero values for the noise standard deviation (see Table 1).
Table 1. Accelerometer output for the entire repeated movement sequence with the robotic arm.

Sensor    | Number of values | Mean VMUs | SD    | Maximum | Noise SD | Signal-to-noise ratio
I RT3     | 124 | 6.35  | 6.95  | 27.39 | 2.20 | 2.88
II RT3    | 140 | 6.81  | 7.09  | 32.50 | 4.61 | 1.48
III RT3   | 138 | 5.36  | 6.59  | 31.40 | 2.00 | 2.68
IV RT3    | 118 | 4.46  | 6.78  | 37.00 | 0.00 | -
V RT3     | 136 | 5.93  | 7.83  | 46.17 | 3.20 | 1.85
VI RT3    | 137 | 4.73  | 5.93  | 31.13 | 1.34 | 3.53
VII RT3   | 129 | 4.12  | 6.96  | 43.46 | 4.80 | 0.86
VIII RT3  | 138 | 5.65  | 7.61  | 29.77 | 2.98 | 1.90
IX RT3    | 119 | 4.44  | 6.39  | 29.00 | 2.32 | 1.91
X RT3     | 139 | 4.87  | 6.57  | 41.23 | 3.71 | 1.31
XI RT3    | 130 | 5.65  | 6.56  | 35.00 | 3.48 | 1.62
I GT1M    | 135 | 7.14  | 14.56 | 58.00 | 0.00 | -
II GT1M   | 134 | 7.13  | 16.88 | 82.00 | 0.00 | -
III GT1M  | 133 | 6.85  | 14.35 | 62.00 | 0.00 | -
IV GT1M   | 133 | 7.08  | 14.71 | 73.00 | 0.00 | -
V GT1M    | 136 | 7.43  | 14.95 | 57.58 | 0.00 | -
I GT3X    | 134 | 11.01 | 18.76 | 75.00 | 0.00 | -
II GT3X   | 134 | 10.34 | 17.47 | 67.00 | 0.00 | -
III GT3X  | 134 | 10.60 | 18.86 | 78.00 | 0.00 | -
IV GT3X   | 134 | 10.91 | 19.68 | 69.00 | 0.00 | -
V GT3X    | 134 | 10.97 | 18.55 | 70.00 | 0.00 | -
Figure 2. Plotted VMU data assessed during the movement sequence for each RT3.
Figure 3. Plotted VMU data assessed during the movement sequence for each GT1M.
Figure 4. Plotted VMU data assessed during the movement sequence for each GT3X.
4. Discussion
Our results indicate that the data acquired by the RT3 accelerometers are less reliable than the data provided by the GT1M or the GT3X. The RT3 units produced a higher noise ratio during our experiments and, in agreement with previous reports [3], we found a greater inter-unit variability compared to the ActiGraph accelerometers. The robot provides an objective comparison method and can be programmed to mimic human movements. As this was our first attempt to use a robot for exploratory purposes, the protocol has several drawbacks. The robot was mounted on a steel table and, during the faster motion sequence, movements of the robot are likely to have caused vibrations of the table. This probable noise may have decreased the accuracy of the accelerometer output. Therefore, we advise using a rigid, grounded positioning, e.g. mounting the robot on a block of concrete. Alternatively, the vibrations of the placement ground, as well as of the robot itself, should be measured by mounting additional accelerometers. Unfortunately, in this initial experiment we did not record the movement from the robot data interface, which could have served as a gold standard. Regarding the movement sequence, artificial breaks should be avoided to eliminate the synchronization burden. Last but not least, the signal-to-noise ratio was computed from no-motion intervals of varying length. In future studies this will be done in a more standardized way.
5. Conclusion
Using an industrial robot to perform repeated movements with very high accuracy for testing different accelerometers is a promising method and generated reliable results. Although we did not assess the intra-unit variability of different motion sensors in this study, we were able to compare the inter-unit variability for two similar movement types in three different accelerometers, despite the limitations of the study protocol. Continuation of these studies is work in progress.
Acknowledgements: This research was funded/supported by the Graduate School of Information Science in Health (GSISH) and the TUM Graduate School. The authors thank Martin Eder.
References
[1] le Masurier G.C., Lee S.M., Tudor-Locke C. Motion sensor accuracy under controlled and free-living conditions. Med Sci Sports Exerc. 2004 May;36(5):905-910.
[2] Bassett D.R. Jr, Ainsworth B.E., Swartz A.M., Strath S.J., O'Brien W.L., King G.A. Validity of four motion sensors in measuring moderate intensity physical activity. Med Sci Sports Exerc. 2000 Sep;32(9 Suppl):S471-S480.
[3] Powell S.M., Jones D.I., Rowlands A.V. Technical variability of the RT3 accelerometer. Med Sci Sports Exerc. 2003 Oct;35(10):1773-1778.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-902
The Single Source Architecture x4T to Connect Medical Documentation and Clinical Research Philipp DZIUBALLEa,1, Christian FORSTERb, Bernhard BREILa, Volker THIEMANNa, Fleur FRITZa, Jens LECHTENBÖRGERb, Gottfried VOSSENb, Martin DUGASa a Department of Medical Informatics, University of Münster, Germany b Department of Information Systems, University of Münster, Germany
Abstract. Clinical trials often require large and redundant documentation efforts, because information systems in patient care and research are separated. In two clinical trials we have assessed the number of study items available in the clinical information system for re-use in clinical research. We have analysed common standards such as HL7, IHE RFD and CDISC ODM, regulatory constraints and the documentation process. Based on this analysis we have designed and implemented an architecture for an integrated clinical trial documentation workflow. Key aspects are the re-use of existing medical routine data and the integration into current documentation workflows. Keywords. Clinical information systems, EHR re-use, single source, system architecture, clinical data management system
1. Introduction
Clinical trials require extensive documentation efforts, as they often include hundreds to thousands of attributes per patient. These data are commonly captured twice, in two independent information systems. Daily routine documentation is entered into a clinical information system (CIS). Study documentation occurs on Case Report Forms (CRFs) and is stored in dedicated research databases (Clinical Data Management Systems, CDMS). Double data entry costs time and impacts negatively on the documentation behaviour of physicians and nurses. Recent studies show that clinicians spend about a quarter to one third of their daily working time on routine documentation [1, 2], and study documentation is added on top. Many patients have at least a basic electronic medical record [3], and routine data available in the CIS are eligible for re-use [4, 5] in clinical research. Although re-use applies to 11%-69% of data items [6, 7], trial documentation processes connecting patient care with clinical research rarely exist.
Corresponding author: Philipp Dziuballe, Institute of Medical Informatics, University of Münster; E-Mail: [email protected]
To address this issue, the eSource Data Interchange (eSDI) Initiative of the Clinical Data Interchange Standards Consortium (CDISC) has promoted the eSDI document [8] to analyse the use of electronic technology in the context of eSource data regulations in clinical trials. In this document the "single source" scenario is envisioned, among others, as a promising concept that implies capturing medical data at one single point, reducing double data entry and promoting secondary use [5]. A first prototype of this concept was successfully implemented by Kush et al. [9]. Our own research projects show the feasibility of this approach [10, 11]. Another scenario of the eSDI paper is the "Extraction and Investigator Verification" solution, where documentation occurs within the CIS. With regard to this concept, the Retrieve Form for Data Capture (RFD) [12] profile was jointly developed by CDISC and the Integrating the Healthcare Enterprise (IHE) initiative to enable data capture for clinical research and other purposes within a CIS session. It defines four actors who participate in specific transactions based on web services. This profile is extended in the REUSE project [6] through the integration of forms, towards a profile called "Retrieve & Integrate Forms for Data Capture", to enable direct study documentation within the CIS reusing existing medical routine data. Furthermore, the conduct of clinical trials is strictly regulated by law and supervised by authorities such as the European Medicines Agency and the U.S. Food and Drug Administration. These international regulations have to be respected when connecting routine and research documentation. In this paper, we pursue the following objectives: we intend to identify the amount of available routine data in the CIS eligible for study documentation; after that, we design and implement an architecture to facilitate the re-use of available medical routine data for clinical trial documentation, with due regard to regulatory constraints and using established international standards.
2. Materials and Methods

We have analysed the CRF items of two currently conducted multicentre trials at the University Hospital Münster (UKM) with respect to their availability and representation in the CIS ORBIS from Agfa Healthcare [13]. To assess the secondary use potential of our approach, we have identified the amount of CIS data suitable for re-use in these two studies; their CRFs contain 278 and 318 items, respectively. This was done through a manual review of all implemented CIS forms available in the respective clinical department by a medical informatics professional. We have also analysed clinical trial documentation workflows through interviews with employees of the Centre of Clinical Trials in Münster and with research physicians at the UKM. After that, we conducted a literature search and identified communication standards established in the healthcare and clinical research domains. We selected the Operational Data Model (ODM) published by CDISC because of its ability to archive trials [14] and to exchange metadata definitions and data. We also reviewed the IHE RFD profile for exchanging documentation forms between the CDMS and the investigators' CIS. With respect to the CIS features, we identified the existing information system architecture at the UKM. Regarding interfaces, we analysed the Health Level 7 (HL7) communication standards. Based on our analyses, we designed and implemented a system architecture. The clinical trial metadata was processed in ODM format.
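As a purely illustrative sketch of what ODM-based data exchange looks like, the snippet below serialises a single CRF item value into a simplified ODM-style ClinicalData hierarchy using Python's standard library. Namespaces, schema versions, study OIDs and the item value are invented examples and this is not the x4T implementation.

```python
# Illustrative sketch only: one pre-filled CRF item in a simplified ODM-style
# ClinicalData hierarchy. OIDs and values are invented; namespaces omitted.
import xml.etree.ElementTree as ET

def build_clinical_data(study_oid, subject_key, item_oid, value):
    odm = ET.Element("ODM")  # real ODM documents carry namespace and version attributes
    clinical = ET.SubElement(odm, "ClinicalData",
                             StudyOID=study_oid, MetaDataVersionOID="v1")
    subject = ET.SubElement(clinical, "SubjectData", SubjectKey=subject_key)
    event = ET.SubElement(subject, "StudyEventData", StudyEventOID="SE.BASELINE")
    form = ET.SubElement(event, "FormData", FormOID="F.DEMOGRAPHICS")
    group = ET.SubElement(form, "ItemGroupData", ItemGroupOID="IG.VITALS")
    ET.SubElement(group, "ItemData", ItemOID=item_oid, Value=str(value))
    return ET.tostring(odm, encoding="unicode")

print(build_clinical_data("ST.TRIAL1", "SUBJ-001", "I.SYSTOLIC_BP", 128))
```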
3. Results

3.1. Analysis

To identify the amount of re-usable CIS data and to examine the desirability of a single source approach, we manually mapped 596 study items (278 items from the first trial, 318 from the second) to the CIS documentation. We assigned three categories: “identified in CIS”, “require modification” and “not found in CIS”. Figure 1 shows that 47% of the CRF items were found within the available routine patient care documentation. For about 11%, the item values cannot be used directly because they are free text or contain only similar information; in these cases, a modification of the item value is necessary. About 42% of the CRF items were not found in the CIS. This result shows that the re-use of CIS data values is a rewarding step for study documentation, as almost half of the required items (47%) are already available.
Figure 1. Amount of identified CRF items in CIS based on an analysis of two clinical trials.
Concerning the RFD profile, the analysis of the present CIS architecture shows that a direct implementation within the CIS is limited by technical restrictions (proprietary system structure) as well as license issues (restrictions regarding new interfaces). Specifically, direct import of CRFs is not available in the current CIS version: ORBIS supports neither web services nor XForms and pre-filling of CRF items as described in RFD.

3.2. Architecture

To overcome these drawbacks and make use of existing CIS data, we developed an architecture that creates an interface between CIS and CDMS, complies with regulatory constraints and recommendations (GCP, eSDI, Title 21 CFR Part 11) and applies established standards. RFD was extended and refined, which broadly resulted in a combination of RFD and the single source concept of the eSDI document. We developed an integrated documentation process based on the identified regulatory principles. It is not feasible to implement this process directly in current CIS solutions; due to their limitations, we designed a middleware component (Figure 2) to connect the CIS to clinical research systems. This mediator – called x4T (exchange for Trials) – is hosted in the hospital environment to establish the integrated documentation process.
Figure 2. Single source architecture with the middleware component x4T between the CIS and the CDMS.
x4T enables the exchange of forms and medical routine data and can also send notifications to study physicians. Single source eCRFs are also prepared for
presentation and data input. The interfaces of x4T and the CDMS are able to exchange both eCRF definitions and completed eCRFs. x4T consists of the following modules:
• Interface management. Due to the heterogeneity of the CIS landscape, the CIS–x4T connection needs to be adapted to the communication interface provided by the CIS. Standards for healthcare communication such as HL7 messages and the Clinical Document Architecture are preferred; for systems that do not offer those interfaces, XML and a specific wrapper are used. The CDMS–x4T communication relies on ODM wherever the CDMS supports it; otherwise it is adapted to the specific protocol.
• User management. Sites, including their users and the associated roles, are configured within the user management module. Completed CRFs need to be signed by the user.
• Form mediator and database. It is currently not possible to store and display eCRFs directly inside the CIS, so a separate form database is needed to store eCRFs temporarily. The mediator transforms these eCRFs into displayable XForms and enables pre-filling of items. With regard to regulatory requirements, pre-filled items need to be verified by the user, and after documentation a copy of the CRF is archived in the CIS. If several item occurrences are available, for instance blood pressure values at many points in time, the correct value needs to be selected and confirmed by the responsible study physician.
• Ontology matching. This module handles the pre-filling of eCRFs with routine patient care data. Eligible data items have to be verified and mapped to controlled vocabularies; as a semantic layer is currently missing in ORBIS, data annotation occurs externally in x4T. CRF items also need to be matched with this vocabulary. Pre-filling is possible where semantic concepts overlap. A conversion engine translates measurement units or calculates item values in case of differing data types; for instance, the patient age for a CRF is calculated from the date of birth and the date of visit provided by the CIS (a simplified illustration follows this list).
• Notification management. To support the clinical workflow, the CIS user has to be notified about new eCRFs to be filled in.
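The following minimal sketch illustrates the idea behind concept-based pre-filling and unit conversion as described above. Concept codes, conversion rules and helper names are invented for illustration and do not reflect the actual x4T ontology-matching module.

```python
# Illustrative sketch of eCRF pre-filling with CIS data: hypothetical concept
# codes and conversion rules, not the actual x4T implementation.
from datetime import date

CONVERSIONS = {
    # (source unit, target unit) -> conversion function
    ("kg", "g"): lambda v: v * 1000,
    ("cm", "m"): lambda v: v / 100,
}

def prefill_item(crf_concept, crf_unit, cis_observations):
    """Return a candidate value for a CRF item, or None if no match is found."""
    for obs in cis_observations:              # CIS data annotated with concept codes
        if obs["concept"] == crf_concept:
            value, unit = obs["value"], obs["unit"]
            if unit != crf_unit:
                value = CONVERSIONS[(unit, crf_unit)](value)
            return value                      # must still be verified by the study physician
    return None

def age_at_visit(date_of_birth, visit_date):
    """Derive the patient age (in years) from CIS date of birth and visit date."""
    years = visit_date.year - date_of_birth.year
    before_birthday = (visit_date.month, visit_date.day) < (date_of_birth.month, date_of_birth.day)
    return years - int(before_birthday)

cis_data = [{"concept": "LOINC:29463-7", "value": 81.4, "unit": "kg"}]  # body weight
print(prefill_item("LOINC:29463-7", "g", cis_data))       # 81400.0
print(age_at_visit(date(1956, 7, 3), date(2011, 5, 12)))   # 54
```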
4. Discussion and Future Work

In this paper, we have proposed a system architecture to support integrated clinical trial documentation workflows. The central component of this architecture is the middleware server x4T, which establishes the connection between patient care and research systems. Due to the current gaps in standardisation, a direct link from hospital to study systems would require an adapter for every CIS and CDMS, resulting in a full mesh topology for multicentre trials. With the mediator approach, only one interface per system is required. In our architecture, study forms are documented in a single system, with the advantage that documentation forms do not have to be built in every CIS. In case of CRF updates, forms implemented directly in the CIS, as realised in [10, 11], would cause high maintenance costs. x4T enables the re-use of medical data and the pre-filling of study forms.
To avoid discrepancies in the validation of eligible CIS data, clinicians or other experts need to be consulted to reach consensus. The total amount of re-usable CIS data may differ from study to study and can ultimately only be determined in a real trial setting. Pre-filling is only possible if CIS data are well structured. In order to use our implementation, the x4T interfaces have to be adjusted. A proof-of-concept of x4T is planned for a clinical study in dermatology.
5. Conclusion

Pre-population of eCRFs with CIS data is a promising approach to avoid redundant data entry, given the considerable overlap between CDMS and CIS items. Due to the limitations of current CIS and to regulatory constraints, the exchange of data between CIS and CDMS should be enabled by a mediator.
References

[1] Ammenwerth E, Spötl HP. The time needed for clinical documentation versus direct patient care. A work-sampling analysis of physicians' activities. Methods Inf Med. 2009;48(1):84-91.
[2] Tipping MD, Forth VE, O'Leary KJ, Malkenson DM, Magill DB, Englert K, Williams MV. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010 Jul-Aug;5(6):323-8.
[3] Jha AK, DesRoches CM, Campbell EG, Donelan K, Rao SR, Ferris TG, Shields A, Rosenbaum S, Blumenthal D. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009 Apr 16;360(16):1628-38.
[4] Williams JG, Cheung WY, Cohen DR, Hutchings HA, Longo MF, Russell IT. Can randomised trials rely on existing electronic data? A feasibility study to explore the value of routine data in health technology assessment. Health Technol Assess. 2003;7(26):iii, v-x, 1-117.
[5] Prokosch HU, Ganslandt T. Perspectives for medical informatics. Reusing the electronic medical record for clinical research. Methods Inf Med. 2009;48(1):38-44.
[6] El Fadly A, Lucas N, Rance B, Verplancke P, Lastic PY, Daniel C. The REUSE project: EHR as single datasource for biomedical research. Stud Health Technol Inform. 2010;160(Pt 2):1324-8.
[7] Zahlmann G, Harzendorf N, Shwarz-Boegner U, Paepke S, Schmidt M, Harbeck N, Kiechle M. EHR and EDC Integration in Reality. Applied Clinical Trials 2009.
[8] eSDI Initiative. [http://www.cdisc.org/esdi-document]
[9] Kush R, Alschuler L, Ruggeri R, Cassells S, Gupta N, Bain L, Claise K, Shah M, Nahm M. Implementing Single Source: the STARBRITE proof-of-concept study. J Am Med Inform Assoc. 2007 Sep-Oct;14(5):662-73.
[10] Breil B, Semjonow A, Dugas M. HIS-based electronic documentation can significantly reduce the time from biopsy to final report for prostate tumours and supports quality management as well as clinical research. BMC Med Inform Decis Mak. 2009 Jan 20;9:5.
[11] Fritz F, Ständer S, Breil B, Dugas M. Steps towards single source – collecting data about quality of life within clinical information systems. Stud Health Technol Inform. 2010;160(Pt 1):188-92.
[12] IHE International: IHE ITI Technical Framework Supplement: Retrieve Form for Data Capture (RFD). [http://www.ihe.net/Technical_Framework/upload/IHE_ITI_Suppl_RFD_Rev2-1_TI_2010-08-10.pdf]
[13] Agfa Healthcare. [http://www.agfahealthcare.com/]
[14] Kuchinke W, Aerts J, Semler SC, Ohmann C. CDISC standard-based electronic archiving of clinical trials. Methods Inf Med. 2009;48(5):408-13.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-907
Information Technology Solutions to Support Translational Research on Inherited Cardiomyopathies Riccardo BELLAZZIa,1, Cristiana LARIZZAa, Matteo GABETTAa, Giuseppe MILANIa, Mauro BUCALOa, Francesca MULASa, Angelo NUZZOa, Valentina FAVALLIb, Eloisa ARBUSTINIb a Dipartimento di Informatica e Sistemistica, Università di Pavia, Italy b IRCCS Fondazione Policlinico S. Matteo, Pavia, Italy
Abstract. The INHERITANCE project, funded by the European Commission, is aimed at studying genetic or inherited dilated cardiomyopathies (DCM) and at understanding the impact and management of the condition within families affected by DCM. The project is supported by a number of advanced biomedical informatics tools, including data warehousing, automated literature search and decision support. The paper describes the design of these tools and the current status of their implementation. Keywords. Translational research, Enhancing Biomedical Research, Dilated cardiomyopathy
1. Introduction

Dilated cardiomyopathy (DCM) occurs when diseased heart muscle fibres become weakened and cannot effectively pump blood to the body. The weakened heart muscle also allows one or more chambers of the heart to expand. With time, the enlarged heart gradually deteriorates, causing congestive heart failure. DCM is one of the leading causes of heart failure due to systolic dysfunction, and at least 30% of DCM cases are of familial/genetic origin [1]. The INHERITANCE project (Integrated Heart Research In Translational Genetics of Cardiomyopathies in Europe), funded by the European Commission, seeks to study genetic or inherited DCM and to understand the impact and management of the condition within families that suffer from DCM. The project is structured into six research areas that study different facets of the DCM condition: clinical cardiogenetics; -omics (genetic testing, transcriptomics, proteomics and metabolomics); animal studies; structural studies; treatments; and biomedical informatics, which aims to implement information technology solutions to support the project team in managing the huge quantity of scientific, clinical and patient data generated by the project. This paper focuses on the biomedical informatics methods and tools that have been made available to the INHERITANCE researchers.
Corresponding author, [email protected].
2. IT Solutions to Support Clinical Research
Figure 1. The knowledge management and data analysis architecture of the INHERITANCE project.
INHERITANCE, on top of a database application, implements a layer of software instruments to support both the translation of the project's results into guidelines and clinical practice and the scientific discovery process. This layer includes data warehousing, intelligent querying of the phenotype data, integrated search on biological data and knowledge repositories, text mining of the relevant literature, and case-based reasoning. We refer to these components as the knowledge management system of INHERITANCE. The overall design of the knowledge management architecture is shown in Figure 1. The data warehouse (i2b2) is populated through a set of automated queries that extract patients' data from the INHERITANCE database. The data are then made available to the researchers through a data mining and exploration tool. The literature is searched through a text mining strategy based on Natural Language Processing (NLP). Finally, a decision support tool, which exploits the patient database, the text mining tool and software solutions to automatically access biomedical databases, provides support in refining patients' diagnoses. In the following we briefly describe each of the components included in the final architecture and the current state of the project.
3. The Architecture of the Data Warehouse for Patients' Data Exploration

The INHERITANCE project collects patients' data in a specialized database called Cardioregister [https://cardioregister.com/Pages/Main.aspx], a web-based system designed to collect, exploit and download anonymised data of patients and families with DCM, offering the ability to produce customized data reports. The data collected in Cardioregister are automatically uploaded into a data warehouse for data exploration and dynamic querying. The data warehouse used in the INHERITANCE project is based on the i2b2 software system [2] (http://www.i2b2.org/software). The goal of the i2b2 project (Informatics for Integrating Biology and the Bedside) is to provide clinical investigators with a software infrastructure able to integrate clinical records and research data in the genomics age.
The i2b2 core software tool is a data warehouse that can be accessed via a query generation tool. The i2b2 data model is based on the "star schema" [3], and the entire i2b2 software architecture is built on web services, called cells. New i2b2 cells can be developed and added relying on this web service architecture. The i2b2 web client query interface allows researchers to dynamically create and execute queries and returns the set of patients that satisfies them. The terms used to create the queries are specified through an ontology, which needs to be customized for the specific biomedical application. In order to empower i2b2 with fast multidimensional inspection of phenotypic data, we have included the Phenotype Miner system [4] in the tool as a plug-in. Phenotype Miner has two main components: i) the Phenotype Editor, for the automated definition of phenotype queries, and ii) a customized version of the Mondrian OLAP engine (http://mondrian.pentaho.org) for dynamic data inspection. Within the INHERITANCE project, i2b2 will also be integrated with two further software environments, for automated literature analysis and decision support.
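To make the star-schema idea concrete, the sketch below counts patients carrying observations under a given concept subtree. Table and column names follow the published i2b2 data model in simplified form, while the concept codes, path format and data are invented for illustration; this is not the Phenotype Miner or INHERITANCE code.

```python
# Illustrative sketch of a patient-count query against an i2b2-style star schema.
# Simplified table/column names; invented data (real i2b2 paths are backslash-delimited).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE concept_dimension (concept_cd TEXT, concept_path TEXT);
CREATE TABLE observation_fact (patient_num INTEGER, concept_cd TEXT, start_date TEXT);
INSERT INTO concept_dimension VALUES ('ICD10:I42.0', '/Diagnoses/Cardiomyopathy/DCM/');
INSERT INTO observation_fact VALUES (1, 'ICD10:I42.0', '2010-03-01');
INSERT INTO observation_fact VALUES (2, 'ICD10:I42.0', '2010-06-15');
""")

# Count distinct patients having at least one observation under a concept subtree.
query = """
SELECT COUNT(DISTINCT f.patient_num)
FROM observation_fact f
JOIN concept_dimension c ON f.concept_cd = c.concept_cd
WHERE c.concept_path LIKE ? || '%'
"""
print(conn.execute(query, ("/Diagnoses/Cardiomyopathy/",)).fetchone()[0])  # prints 2
```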
4. Tools for Automated Literature Analysis

Automated literature analysis is becoming an essential need in current biomedical research. Text Mining (TM) and Natural Language Processing (NLP) provide algorithms and techniques for the automated processing of textual content. This task is particularly important in the early stage of any study, in which summarizing the available knowledge is crucial to formulate initial hypotheses and plan the next tasks. The challenge is to broaden the search for potentially useful information in order to generate new hypotheses [5]. For instance, an added value could be to suggest that a candidate gene is often related to another gene that has not previously been considered. Our goal is to provide INHERITANCE with tools offering this kind of functionality. We therefore focused on genetic studies, in which a set of initial hypotheses of gene–disease association is made for some candidate genes, so that the first step is to explore the recent literature to confirm their possible role in the disease mechanism. We developed a tool able to extract the concepts of interest (genes and medical terms, such as pathologies) using a structured knowledge base, the Unified Medical Language System (UMLS) [6], from which we can derive gene/disease annotations. Moreover, we implemented similarity metrics, based on a relevance measure of the terms for each gene, to identify which terms genes share with each other. In this way we can build a graph in which the connections between nodes reflect how tightly related those terms are according to the available literature. The analysis method we propose aims to derive a literature-based gene annotation by extracting UMLS terms related to diseases from the abstracts of the publications referencing each gene. The overall analysis consists of three main steps (a minimal sketch of the first step follows this list):
• querying PubMed via web services to retrieve the most recent literature about specific genes/diseases;
• automatically extracting concepts (genes/diseases) from PubMed abstracts based on NLP techniques;
• constructing annotation/co-citation networks to interpret the available knowledge and suggest new hypotheses that can be tested.
The details of the literature analysis system and of the medical concept extraction are described in [7].
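The retrieval step referenced above can be illustrated with NCBI's public E-utilities endpoints. The sketch below is not the project's actual pipeline; the query term is an invented example, and batching, error handling and NCBI usage policies (API keys, rate limits) are omitted.

```python
# Illustrative sketch of the PubMed retrieval step via NCBI E-utilities.
# Not the INHERITANCE pipeline; batching, error handling and API keys omitted.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_abstracts(term, retmax=20):
    # 1) esearch: get PubMed IDs matching the query term
    ids = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": term, "retmax": retmax, "retmode": "json",
    }).json()["esearchresult"]["idlist"]
    if not ids:
        return ""
    # 2) efetch: download the corresponding records, including abstracts, as XML
    return requests.get(f"{EUTILS}/efetch.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "rettype": "abstract", "retmode": "xml",
    }).text

xml = fetch_abstracts('"dilated cardiomyopathy" AND LMNA')
print(xml[:200])
```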
5. The Reasoning and Decision Support Tool

A crucial aspect of the knowledge management system of INHERITANCE is the definition of a tool that can guide clinicians in properly ranking the DCM causative genes, so that their screening can be performed effectively in the clinic. The goal is thus to prioritize around 30 genes for screening on the basis of the patients' symptoms. The large variability of the patients' data and the limited amount of formalized knowledge available require the design of a decision support tool able to provide clinicians with instruments for analogical reasoning, including case similarity, information retrieval and text mining. Each clinical case is usually described by hundreds of features, including anamnesis and family information, lifestyle, lab tests and exams, ECGs, and echocardiography data. Among the collected data, some items are considered "red flags", i.e. biomarkers that may be related to some gene mutation, although their cause–effect relationships have not yet been fully established. To cope with this problem, we have implemented the following strategy:
• all PubMed abstracts of interest are retrieved and included in an abstracts database;
• all retrieved abstracts are analysed in order to extract the concepts of interest (genes and medical terms); every concept is searched together with all its synonyms from the UMLS Metathesaurus and the Gene database. The results of this analysis (genes and medical terms cited in the PubMed abstracts) are stored in the abstracts database, so that the association between each article and the extracted concepts is available for the next step;
• for each candidate gene, PubMed is queried to obtain references to the directly related articles. From these articles, exploiting the results of the previous step, the system generates a list of the associated UMLS concepts in order to find new gene–red flag links. A further step is to find indirect relationships, i.e. two concepts that are only loosely associated directly, but strongly associated with a common concept [8].
While the first two steps are done only once, to create the corpus and calculate the gene/medical concept occurrences of interest, the third one depends on the specific analysis and can be repeated for every further investigation. Note that, since UMLS concepts are hierarchically interrelated, the relationships between a gene and a red flag may also occur at different levels of the hierarchy, including their descendants (more specific concepts) or their ancestors (more general ones). The final matching process gives rise to an augmented weighted list of red flags related to a gene, where the weight can be calculated on the basis of the frequency of the gene/red flag relationships. Once a single patient case is available, it is possible to compute a matching function between the current patient's data and the weighted list of red flags to derive a prioritized list of genes. Moreover, it is also possible to retrieve similar cases with known mutations and therapy and to highlight right or wrong previous diagnostic decisions. Rather interestingly, the process evolves over time, and the list of red flags may vary according to changes in the available knowledge reported in PubMed and in knowledge repositories. We have currently retrieved a corpus of more than 7000 documents by querying PubMed for DCM over the period 2005-2010. From this corpus we extracted 455 genes and 867 UMLS concepts.
As a preliminary task we verified that these lists contain the 27 genes and 20 red flags indicated by the physicians as related to DCM. In this way it was possible to confirm the available background knowledge. To test the potential of the proposed approach, we extracted a list of possible new genes or red flags that could
be related to DCM. Finally, for each red flag, we produced a list of associated genes ordered by their relevance. The relevance depends on the number of articles that support the red flag-gene association. This information provides a gene prioritization list that can be useful in clinical routine for diagnostic purposes [9].
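A minimal sketch of this weighting and prioritization idea is given below. The gene and red-flag names and the co-citation counts are invented examples, and the scoring (summing article counts) is a deliberately simplified stand-in for the project's actual relevance measure.

```python
# Illustrative sketch of red-flag weighting and gene prioritization.
# Co-citation counts and gene/red-flag names are invented examples.
from collections import defaultdict

# (gene, red flag) -> number of abstracts mentioning both
cocitations = {
    ("LMNA", "atrioventricular block"): 42,
    ("LMNA", "elevated creatine kinase"): 17,
    ("DES",  "elevated creatine kinase"): 23,
    ("TTN",  "atrioventricular block"): 3,
}

def prioritize(patient_red_flags):
    """Rank genes by the summed weights of the red flags observed in a patient."""
    scores = defaultdict(int)
    for (gene, flag), weight in cocitations.items():
        if flag in patient_red_flags:
            scores[gene] += weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(prioritize({"atrioventricular block", "elevated creatine kinase"}))
# [('LMNA', 59), ('DES', 23), ('TTN', 3)]
```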
6. Conclusions

The main task of the INHERITANCE project is to investigate the molecular basis of inherited DCM. To this end we have developed and implemented a set of software tools to support data management and decision support, including:
1. a data warehouse for fast phenotype data exploration based on the i2b2 system;
2. a tool for automatic literature analysis and literature-based discovery;
3. a system for supporting reasoning and decisions on a single case, with the aim of prioritizing gene screening and discovering new gene–concept associations.
After its first year, the project has already collected data on 168 patients from four medical centers. The data warehouse and the text mining tools have been implemented and tested, while the decision support tool is still under development. Acknowledgments. This work was supported by the INHERITANCE project, funded by the European Commission. We thank Lorenzo Monserrat, HealthEncode and the Cardioregister team for their effective collaboration.
References

[1] Ahamad F, Seidman JG, Seidman CE. The genetic basis for cardiac remodeling. Annu Rev Genomics Hum Genet 6 (2005) 185-216.
[2] Murphy SN, Weber G, Mendis M, et al. Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2). J Am Med Inform Assoc 17(2) (2010) 124-30.
[3] Kimball R, Ross M. The data warehouse toolkit. Second edition, Wiley and Sons, 2002.
[4] Nuzzo A, Segagni D, Milani G, Rognoni C, Bellazzi R. A dynamic query system for supporting phenotype mining in genetic studies. Stud Health Technol Inform 129(Pt 2) (2007) 1275-1279.
[5] Roos M, Marshall MS, Gibson AP, et al. Structuring and extracting knowledge for the support of hypothesis generation in molecular biology. BMC Bioinformatics Suppl 10 (2009) S9.
[6] Lindberg DA, Humphreys BL, McCray AT. The Unified Medical Language System. Methods of Information in Medicine 32 (1993) 281-291.
[7] Nuzzo A, Mulas F, Gabetta M, et al. Text Mining approaches for automated literature knowledge extraction and representation. Stud Health Technol Inform 160(Pt 2) (2010) 954-8.
[8] Ganiz M, Pottenger WM, Janneck CD. Recent advances in literature based discovery. Lehigh University, CSE Department, Technical Report LU-CSE-05-027 (2005).
[9] Bellazzi R, Larizza C, Gabetta M, et al. Translational Bioinformatics: Challenges and Opportunities for Case-Based Reasoning and Decision Support. Case-Based Reasoning: 18th International Conference, ICCBR 2010, Proceedings (Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence), (2010), 1-11.
Usability, HCI, Cognitive Issues
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-915
Emerging Approaches to Usability Evaluation of Health Information Systems: Towards In-Situ Analysis of Complex Healthcare Systems and Environments Andre W. KUSHNIRUKa1, Elizabeth M. BORYCKIa, Shigeki KUWATAb, Joseph KANNRY c a School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada b Tottori University Hospital, Tottori, Japan c Mount Sinai Medical Center, New York, New York
Abstract. The effective evaluation of health information technology (HIT) is currently a major challenge. It is essential that the applications we develop are usable, meet user information needs and are shown to be safe. Furthermore, to provide appropriate feedback to system designers, new methods for both formative and summative evaluation are needed as applications become more complex and distributed. To ensure system usability, a variety of methods have emerged from the area of usability engineering and have been adapted to healthcare. The authors have applied usability engineering methods while working with hospitals and other healthcare organizations to design and evaluate a range of HIT applications. We describe how our approach to portable low-cost usability testing has evolved towards the use of clinical simulations conducted in-situ, within real hospital and clinical units, to rapidly evaluate the usability and safety of healthcare information systems both before and after system release. We discuss how this approach was extended to the development of methods for conducting in-situ clinical simulations in a range of clinical settings. Keywords: human computer interaction, usability, usability testing, in-situ
1. Introduction

A wide variety of health information technology (HIT) has appeared, ranging from wireless hand-held applications to Web-based patient record systems. Although innovations in HIT have the potential to dramatically improve and streamline health care, there are a number of critical problems and issues related to their successful implementation and acceptance by end users and consumers. One of the main areas of concern revolves around the following question: how can we ensure that the applications we develop are usable, meet user information and workflow needs and are safe? The design of HIT applications that are intuitive to use and that support human information processing is essential. This has become increasingly recognized as being
Corresponding author: Andre W. Kushniruk, E-mail: [email protected]
critical as more and more complex software and hardware applications appear in healthcare. Usability is a measure of how effective, efficient and enjoyable a system is to use [1]. Closely related to issues of usability are issues of software safety and workflow, with the need to ensure that new devices and software increase patient safety and that workflow can be carried out in an effective and efficient manner. Methods from usability engineering have been applied to improve the usability of systems. This includes usability inspection methods, involving analysis of a user interface by an expert to identify usability problems, and usability testing, which involves observing representative users of a system carrying out representative tasks. The importance of usability testing in healthcare has been increasingly recognized. However, the issue of how to best test and evaluate systems so that the results are both ecologically valid and generalizable to real complex clinical settings has remained to be resolved. This paper describes our work in the evolution of approaches to the evaluation of the use and usability of HIT applications, given the widespread increase in both usage and complexity of environments in which they are deployed. This paper begins with a discussion of the development of a low-cost portable usability approach that has been taken into the field to conduct studies of end users of applications in real naturalistic settings. The approach has been used to evaluate a variety of applications and devices ranging from electronic medical records (EMRs) to Web-based information resources designed for both health care professionals and lay persons [2]. We then follow this with a discussion of our most recent work in extending the concept of usability testing to conducting more realistic and ecologically valid studies involving clinical simulations conducted “in-situ” - i.e. in real clinical settings where information technology is or will be deployed. In the early stages of our work and early experimentation with usability engineering in healthcare, we employed a number of different approaches to conducting usability testing including setting up a “fixed” usability laboratory setting. However, our experience has indicated that since this approach did not allow for collection of data at the site where the software under study is actually installed, conclusions made about a system’s usability and the generalizability of findings and predictions varied in their accuracy. In addition, for many of our studies it is essential that we conduct them in the actual environment in which they are being used, in order to determine how aspects of a particular environment may be affected by interacting technologies (e.g. imaging or bar-coding technologies) and how users interact with a system in a real setting, which is not realistically possible without employing a portable in-situ approach. With the advent of inexpensive screen recording software and high quality portable digital video cameras, the costs have decreased for conducting such studies along with an increase in the portability of the equipment such that it can be taken into any hospital or clinical environment, thereby simplifying the process. Figure 1 illustrates a continuum of approaches we have developed to guide design of usability studies. Our initial projects were mainly located on the far left side of the continuum in that they involved laboratory usability testing of systems taken out of their “natural” environment. 
This progressed to the development of more elaborate and realistic usability testing environments and study designs, which have previously been termed “clinical simulations” [3]; however, they were typically still conducted within a laboratory environment. In recent years we have moved many of our studies out of the laboratory and located both simulation studies and naturalistic studies within real-world environments (e.g. clinical settings). As indicated in Figure 1, in-situ studies may
consist of simulations taking place in a real setting (e.g. a hospital room or operating room off hours) or they may involve naturalistic recording of real healthcare activities.
Figure 1. A continuum of usability/simulation studies and settings.
2. An In-Situ Approach for Evaluating HIT Applications

In this section of the paper we describe a set-up for in-situ usability testing that can be taken into any type of setting, ranging from the clinical (e.g. hospital rooms) to the home setting (e.g. to study the use of e-health applications by patients and providers). This set-up has so far been used for a number of projects, ranging from the study of nurses' information needs to the evaluation of a new medication order entry system (using bar-coding technology) prior to its deployment in a hospital in Japan [3], as well as the study of the introduction of an EMR at a major American medical center, involving in-situ testing both before and after system go-live. Our typical studies carried out in naturalistic clinical settings involve asking subjects (e.g. nurses or physicians) to interact with systems to carry out real tasks (in some studies subjects may also be asked to “think aloud” while carrying out the task, which is audio recorded). The subjects' overt physical activities are recorded using one or more low-cost digital cameras (and ceiling-mounted cameras where required). In addition to recording physical activities and the think-aloud audio, the actual computer screens are also recorded as a digital movie file, with the audio portion of the movie corresponding to the subject's verbalizations. To do this we are currently using a freely available software product called Hypercam©. This type of inexpensive (or free) screen recording software allows one to record all the computer screens as a user interacts with the system under study, and stores the resultant digital movie for later playback and in-depth analysis of the interaction. The equipment we have used for many of our usability studies of HIT applications is both low-cost and portable. It typically includes: (1) one or more computers to run the software under study; (2) screen recording software which allows the computer screens to be recorded as movie files (with the audio of the subject's “thinking aloud” captured using a standard microphone plugged into the computer); (3) one or more external digital cameras to video record users' physical interactions. In studies conducted remotely, the equipment may also include a webcam attached to the computer that the user is interacting with. The studies we have conducted using this equipment have been carried out in a range of settings. The total cost of the equipment is minimal (i.e. under $1,500 US). It should be noted that data collected using this combination of recording methods (i.e. screen plus
video recordings of users' physical interactions) can provide very high fidelity recordings of user interactions, both in terms of the realism of the setting (as studies can be conducted in actual clinical settings where the application is used in real life, leading to higher fidelity testing than is possible in a laboratory study) and in terms of recording quality (with advances in low-cost digital recording).
3. Analysis of Data Collected

The analysis of the collected data (e.g. screens of user interactions, video recordings of users' problems) ranges from informal analysis, which consists of simply playing back the movies of user interactions to identify particular usability problems (e.g. where a user is unable to carry out a requested task) in the presence of designers, hospital staff, managers etc., to more detailed analysis. The latter can involve video annotation of the movie file using software such as Transana© (a freeware video annotation program that allows analysts to “mark up” and time-stamp movies of user interactions with a system), as described in Kushniruk and Patel [2]. The typical result of carrying out a usability test is the identification of specific usability problems (often in a meeting with system developers, customers, and hospital or management staff present). The intent of our work is typically to provide rapid feedback about system usability, yielding useful information to improve system design, deployment, or customization in an efficient manner. Our most recent projects have involved applying usability engineering methods (including our low-cost portable approach) to identify potential errors that may be caused by a system (e.g. inappropriate medication defaults in an order entry system) or “induced” by poor user interface design [4].
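As an illustration of how time-stamped annotations from such an analysis might be summarised into problem counts and time lost per problem category, the sketch below uses a hypothetical CSV layout; it is not Transana's actual export format nor the authors' analysis procedure.

```python
# Illustrative aggregation of time-stamped usability annotations.
# The CSV layout is hypothetical and does not reflect Transana's export format.
import csv, io
from collections import Counter

export = io.StringIO("""subject,task,problem_category,start_sec,end_sec
P01,order entry,navigation,12,31
P01,order entry,terminology mismatch,95,120
P02,order entry,navigation,8,22
""")

problems = Counter()
time_lost = Counter()
for row in csv.DictReader(export):
    problems[row["problem_category"]] += 1
    time_lost[row["problem_category"]] += int(row["end_sec"]) - int(row["start_sec"])

for category, count in problems.most_common():
    print(f"{category}: {count} occurrence(s), {time_lost[category]} s of task time")
```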
4. Experiences to Date

We have carried out a number of studies at varied locations (e.g. Mt. Sinai Medical Center, New York, and Tottori University Hospital, Japan). Some of our earliest work involved usability testing of a patient record system at a major US medical center, where the methods described in this paper resulted in a ten-fold decrease in the number of problems encountered by users of the electronic patient record system. The data analysis was conducted in a cost-effective (under $3,000 US) and efficient manner, with specific recommendations for system improvement being incorporated in an improved system within several hours to weeks from the time of data collection [5]. Usability problems related to issues such as lack of interface consistency, problems in representing time sequences and issues in matching user-specified terms to computer terms were identified. We have also employed a similar approach to detect and correct potential user problems and prevent medical error in a range of systems [4]. More recently, we have employed the method to determine how medical workflow may be inadvertently affected by the introduction of a medication order entry system [3]. In one study, which was conducted in the actual clinical setting where a new medication order entry system was deployed, subjects (nurses and doctors) were video-recorded while they interacted with both the computer system under study and patients in order to administer and record medications given to the patient. This study was conducted as a clinical simulation in-situ (i.e. in a real hospital room) just prior to system deployment. The results of this study have been used to identify not only
problems with user interfaces but also to assess how the new electronic application affected workflow and patient care. In this study, for example, analysis of the video recordings showed that the introduction of the computer system would negatively affect the workflow by making it rigid and sequential (through the prescribed order of steps imposed by the medication order entry system) as compared to the typical workflow in place prior to the introduction of the system. In some of the simulation cases this very prescriptive workflow posed a safety challenge (particularly when users had to deal with patient emergencies), and hence recommendations were made to provide an override capability under such conditions prior to widespread system rollout. In a current extension of this approach we are applying the method to examine the impact of clinical best practice guidelines on physician workflow using an electronic medical record system at a major American hospital center. This involves both in-situ testing of users interacting with the guidelines (prior to widespread release) and naturalistic testing of the system after deployment for use with real patients (using the same unobtrusive recording technology and set-up in both cases).
5. Discussion

In-situ approaches can be used not only to conduct simulations pre-implementation but also to record real naturalistic interactions with systems in “live” use after release. Hence predictions made from in-situ studies can be tested as the system goes live (by keeping the recording equipment, which is already in place, running). Other advantages include the low cost of the equipment. Furthermore, by locating the studies within the actual organization where a system is going to be used, we are able to obtain direct access to a range of representative subjects and gain an improved understanding of the impact of local organizational issues and factors upon usability and safety. The impact of interfacing technologies in the real setting can also be identified. Challenges include obtaining permission to conduct studies in a real environment and obtaining rooms and locations after hours for simulation testing. However, we argue that if we are to ensure that the results of usability testing apply to real-world settings, these types of studies are necessary.
References

[1] Preece J, Rogers Y, Sharp H. Interaction design: Beyond human-computer interaction. New York: John Wiley & Sons, 2002.
[2] Kushniruk AW, Patel VL. Cognitive and usability engineering methods for the evaluation of clinical information systems. J Biomed Inform, 37, 2004, 56-76.
[3] Borycki E, Kushniruk A, Kuwata S, Kannry J. Use of simulation in the study of clinician workflow. AMIA Annual Symposium Proceedings, 2006, 61-65.
[4] Kushniruk AW, Triola M, Borycki E, Stein B, Kannry J. Technology induced error and usability. Int J Med Inform, 2005, 74, 519-526.
[5] Kushniruk AW, Patel VL, Cimino JJ, Barrows R. Cognitive evaluation of the user interface and vocabulary of an outpatient information system. Proceedings of the 1996 Annual AMIA Conference, 1996, 22-26.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-920
Contextualization of Automatic Alerts During Electronic Prescription: Researchers’ and Users’ Opinions on Useful Context Factors Elske AMMENWERTHa1, Werner O HACKLa, Daniel RIEDMANNa, Martin JUNGa a Institute for Health Information Systems, UMIT – University for Health Sciences, Medical Informatics and Technology, Hall in Tyrol, Austria
Abstract. Computerized Physician Order Entry (CPOE) systems can reduce the number of medication errors and Adverse Drug Events (ADEs). However, studies have shown that users often override alerts, as they feel these are too unspecific for the given patient context. It is unclear, however, how alerts could be contextualized, that is, adapted to the clinical context. Based on a literature search, we developed a list of 20 possible context factors. We asked 69 international CPOE researchers and 120 physicians from four hospitals in two countries to judge the usefulness of each factor. Researchers judged the following factors as most important: 1) severity of the effect, 2) clinical status of the patient, 3) probability of occurrence, 4) risk factors of the patient, 5) strength of evidence. Physicians judged the following factors as most important: severity of the effect, clinical status of the patient, complexity of the case, and class of drug. These top-ranked context factors could be used to re-design the way alerts are presented in CPOE systems, to increase the sensitivity of alerts, to reduce overriding rates, and to improve medication safety. Keywords. CPOE, electronic prescribing, e-medication, Delphi, user survey, context factor, evaluation
1. Introduction

Medication errors and the resulting preventable Adverse Drug Events (ADEs) are an important issue in global healthcare [1]. It is estimated that in the U.S. over 770,000 people are injured or die in hospitals annually due to ADEs [2]. The use of computerized physician order entry (CPOE) systems can reduce both medication errors and ADEs [3-4]. Depending on the level of decision support provided, CPOE systems may provide alerts on drug-drug interactions or other drug-related problems, or provide drug-related guidance. However, recent research showed that users often override drug safety alerts in CPOE systems [5]. Some proposals have been made to reduce alert overriding, such as tailoring (filtering, prioritizing) alerts depending on the age or allergies of the patient [5-6], or on the experience of the user [5, 7]. At the moment,
Corresponding Author: Elske Ammenwerth, Institute for Health Information Systems, Eduard Wallnöfer Zentrum 1, 6060 Hall in Tyrol, Austria, E-Mail: [email protected]
however, systematic investigations of possible context factors that can be used to tailor alerts, and of the usefulness of each factor, seem to be missing. The objective of this work is to present a list of possible context factors that can be used to tailor alerts in CPOE systems, and to assess the usefulness of each context factor from the point of view of international researchers and clinical users.
2. Methods

To establish a list of possible context factors, we conducted a literature search comprising a hand search of major health informatics journals and a PubMed search. We searched for papers on electronic prescribing and CPOE systems and analyzed which possible context factors were mentioned in these papers. Overall, 67 papers were analysed in detail, and the identified context factors were then summarized and organized into distinct categories. We stopped the search when we felt that saturation was reached and no new factors could be detected.

2.1. International Delphi survey

Through a search of recent CPOE-related publications in PubMed, we identified 214 international researchers with broad experience in electronic medication and invited them to participate in a Delphi survey. In this web-based survey (based on LimeSurvey), the researchers were presented with the list of the 20 identified context factors, together with a short explanation of each. They were then asked to mark those factors they found most useful for prioritizing and filtering alerts, to add factors they found missing, and then to identify the five most important context factors. The survey was conducted in two rounds. In the second round, the results of the first round were fed back to the researchers, who were then able to modify their judgment.

2.2. User survey of physicians

Besides the researchers' point of view, we were also interested in the point of view of clinical users. We therefore invited 60 physicians from a community hospital in Denain (France) and 207 physicians from three hospitals in Copenhagen (Denmark) to answer the same questions as in the Delphi survey. The survey was translated into the local languages and organized as a paper-based survey. Both sites had already implemented electronic prescribing with a basic level of decision support such as drug-drug interaction checking, with automatic alerts in Region H and optional alerts in Denain.
3. Results

Overall, we identified 20 context factors that were discussed in the literature as possible ways to contextualize alerts and to reduce alert overriding rates and alert fatigue (Table 1). A more detailed description is available in a separate publication [8].
Table 1. Context factors that have been discussed as a way to contextualize (prioritize, filter) CPOE alerts according to the clinical situation.

Factors related to the organizational unit (department, hospital) or the user:
• Characteristics of the patient population of the unit
• ADE rate of the unit
• Specialty of the unit
• Workload within the unit
• Professional experience of the user
• Current task of the user
• Personal preferences of the user
• Repetition of alerts to a user
• Override-rate of alerts within the unit

Factors related to the patient or drug:
• Demographic data of the patient
• Risk factors of the patient
• Tolerance with regard to the drug
• Complexity of the patient case
• Clinical status of the patient

Factors related to the alert:
• Class of drug the alert refers to
• Severity of the effect
• Probability of occurrence
• Strength of evidence
• Topicality of the alert
• Type of alert
3.1. International Delphi survey

Of the 214 invited international researchers, 69 (32.2%) completed both rounds. Of these 69 researchers, 45 (65.2%) self-assessed their CPOE expertise as “advanced”. Over half of the participants held a university perspective and approximately one third a health care provider perspective. The top five context factors chosen as useful for prioritizing alerts, in descending order of usefulness, are: 1. severity of the potential effect of an ADE; 2. clinical status of the patient; 3. probability of occurrence of the ADE; 4. risk factors of the patient; and 5. strength of evidence. In the free-text comments, the researchers named the following factors as potentially missing: drug history (including stopping of a drug); the application form of a drug (route of administration); whether a prescription is based on a clinical protocol; whether there is advice that may reduce the ADE risk; and already planned clinical actions (such as lab monitoring).

3.2. User survey of physicians

Overall, we received 26 responses from the Hospital of Denain (return rate: 43.3%) and 94 responses from the three hospitals in Copenhagen (return rate: 45.4%). The context factors chosen as most useful for prioritizing alerts are: severity of effect, clinical status of the patient, complexity of the case, and class of drug. The free-text comments did not bring new ideas for missing context factors.
4. Discussion

We identified 20 possible context factors from the literature and asked researchers and clinical users to judge their usefulness. Both groups emphasized the usefulness of the severity of the effect and the clinical status of the patient (see Figure 1). The researchers additionally favored more evidence- and research-based factors, while the users additionally favored more clinically oriented factors.
Figure 1. Top-ranked context factors by CPOE researchers and clinical users.
We included researchers identified by their number of publications on CPOE systems, with a large majority coming from universities and health care providers. Their self-assessment showed that we were in fact able to gather experienced CPOE experts. The point of view of industrial experts is mostly not covered. In the user survey, we included physicians from four hospitals in two countries. All of them had experience with using CPOE systems and with alerting. The return rate was sufficiently high in all cases. The results can, however, not easily be generalized to other hospitals with other CPOE systems. The factor “severity of the potential effect of an ADE” has been discussed controversially in the literature, with some authors supporting it [9, p. 37] and others being more critical [10, p. 446]. The factor “clinical status of the patient” has also been supported by others [5, p. 144]. It seems quite clear that the inclusion of more clinical parameters such as lab values can help to better tailor alerts. Another high-ranked factor, strength of evidence, is mentioned by others [9, p. 37]. Not surprisingly, only the participating researchers, having a strong research background, rate this as an important factor. “Probability of occurrence” and “risk factors of the patient” are rated highly by the researchers, but are only seldom mentioned in the literature. An interesting result was the poor ranking achieved by the context factor “personal preferences of the user”. This factor is mentioned quite frequently in the literature [5] and is ranked quite highly by the users (although not among the overall top five factors), but is ranked worst by the experts. To our knowledge, this study is the first attempt to systematize the notion of “contextualization of alerts” and to ask experts and users about the most useful factors. The top-ranked factors in both groups could now be exploited in CPOE systems. For example, CPOE vendors could try to systematically integrate information on the clinical status of the patient, on the probability of the effect indicated in the alert, or on the strength of evidence for the alert, in order to prioritize and/or filter alerts. Based on human-computer interaction paradigms and usability research, CPOE user interfaces could then be further optimized, with alerts of different priority shown in different ways (interruptive or not, different size and color, different location). This
all could help to improve the sensitivity of alerts and reduce alert overriding rates and alert fatigue. Whether this is possible, however, has to be shown in further quantitative or qualitative trials. In our study, we looked at the clinical setting. It would be interesting to investigate whether the context factors are different for patient-oriented systems used at home to document drug intake or self-prescriptions.
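To make the idea of exploiting such context factors concrete, a deliberately simplified scoring sketch follows. The weights, factor encodings and thresholds are invented for illustration only and are not derived from the Delphi or user survey results, nor from any existing CPOE product.

```python
# Deliberately simplified sketch of context-aware alert prioritization.
# Weights, encodings and thresholds are invented for illustration only.
def alert_priority(severity, probability, evidence, patient_risk):
    """Each argument is a score in [0, 1]; returns 'high', 'medium' or 'low'."""
    score = (0.4 * severity        # severity of the potential effect
             + 0.25 * probability  # probability of occurrence
             + 0.2 * patient_risk  # clinical status / risk factors of the patient
             + 0.15 * evidence)    # strength of evidence behind the alert
    if score >= 0.7:
        return "high"    # e.g. interruptive, must be acknowledged
    if score >= 0.4:
        return "medium"  # e.g. non-interruptive but highlighted
    return "low"         # e.g. logged only

print(alert_priority(severity=0.9, probability=0.6, evidence=0.8, patient_risk=0.7))  # high
print(alert_priority(severity=0.3, probability=0.2, evidence=0.5, patient_risk=0.1))  # low
```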
5. Conclusion

Our results show various context factors that could be used to better tailor alerts to the clinical situation. The top-ranked context factors could be used to re-design the way alerts are presented in CPOE systems, to increase the sensitivity of alerts, to reduce overriding rates, and to improve medication safety. Acknowledgments. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n°216130.
References

[1] Schnurrer J, Frölich J. Zur Häufigkeit und Vermeidbarkeit von tödlichen unerwünschten Arzneimittelwirkungen. Internist. 2003;44:889-95.
[2] Shojania KG, Duncan BW, McDonald KM, Wachter RM, editors. Making Health Care Safer: A Critical Analysis of Patient Safety Practices, Evidence Report/Technology Assessment No. 43, AHRQ Publication No. 01-E058. Rockville, MD: Agency for Healthcare Research and Quality; 2001.
[3] Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U. The Effect of Electronic Prescribing on Medication Errors and Adverse Drug Events: A Systematic Review. J Am Med Inform Assoc. 2008;15(5):585-600.
[4] Hug BL, Witkowski DJ, Sox CM, Keohane CA, Seger DL, Yoon C, et al. Adverse drug event rates in six community hospitals and the potential impact of computerized physician order entry for prevention. J Gen Intern Med. 2010 Jan;25(1):31-8.
[5] van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006 Mar-Apr;13(2):138-47.
[6] Khajouei R, Jaspers MW. The impact of CPOE medication systems' design aspects on usability, workflow and medication orders: a systematic review. Methods Inf Med. 2010;49(1):3-19.
[7] Grizzle AJ, Mahmood HM, Ko Y, Murphy JE, Armstrong EP, Skrepnek GH, et al. Reasons provided by prescribers when overriding drug-drug interaction alerts. Am J Manag Care. 2007 Oct;13(10):573-8.
[8] Riedmann D, Jung M, Hackl W, Ammenwerth E. Reducing alert overload by contextualization of CPOE alerts: Development and validation of a context factor model. BMC Med Inform Decis Mak, submitted, 2011.
[9] Kuperman G, Bobb A, Payne T, Avery A, Gandhi T, Burns G, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007;14(1):29-40.
[10] van der Sijs H, Aarts J, van Gelder T, Berg M, Vulto A. Turning off frequently overridden drug alerts: limited opportunities for doing it safely. J Am Med Inform Assoc. 2008 Jul-Aug;15(4):439-48.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-925
Reducing Clinicians' Cognitive Workload by System Redesign: a Pre-Post Think Aloud Usability Study

L.W.P. PEUTE, N.F. DE KEIZER, E.P.A. VAN DER ZWAN, M.W.M. JASPERS
Department of Medical Informatics, Academic Medical Center – University of Amsterdam, The Netherlands
Abstract: Interactive Health Information systems are often considered cognitively complex by their users, leading to high cognitive burden and increased workload. This paper explores whether Think Aloud usability testing provides valuable input to effectively redesign a web-based Data Query Tool in Intensive Care and to reduce physicians' cognitive workload during system interaction. Pre and post redesign usability testing demonstrated a major reduction in cognitive task workload after redesign of the tool. Classification of the revealed usability problems by means of the User Action Framework pointed out that usability problems related to the cognitive planning of actions by system users foremost affected cognitive task workload. This result may support Health Information system (re)design efforts on how to tackle a system's cognitive complexity and in so doing improve its usability.

Keywords: Assessment-Evaluation, User Interfaces, Health Professional Workstation, Design aspects, Usability
1. Introduction

Usability evaluation is an essential but complex part of Health Information (HI) system development. Its purpose is to identify usability problems and improve the system interface design so that it can be used efficiently, effectively, satisfactorily and, foremost, safely by clinicians [1]. In contrast to conventional usability evaluation methods, usability methods that emerged from the field of cognitive psychology are increasingly viewed as essential in HI system (re)design. They are considered to transcend the level of revealing interface design flaws and advance to the stage of providing insight into clinicians' cognitive processing in achieving system tasks [2]. In doing so, these methods eventually aim to contribute to designing intuitive HI systems that support and facilitate clinical care by keeping the cognitive task workload of their users to a minimum [3]. The classic Thinking Aloud (TA) method is one of the usability evaluation methods that stem from cognitive science and is generally considered the "gold standard" in usability testing [4]. Scientific research on the validity of the method has shown that subjects' verbalized information in TA usability testing accurately reflects users' thought processes when interacting with a system [5]. Translating these insights into successful system (re)designs that minimize the cognitive task workload of system usage is, however, still challenging.
This paper investigates the effect of input from TA usability testing in a system redesign project by comparing users' cognitive task workload in terms of improved efficacy (correctly performed tasks) and efficiency (task completion time) pre and post system redesign. Usability problems revealed in the pre and post TA tests were classified and compared by use of the User Action Framework (UAF) [6]. UAF classification is based on Norman's theory of action and categorizes usability problems in the sequence of a user's cognitive and physical actions in performing a task in a system. We hypothesize that the earlier a user is obstructed in this sequence, due to usability problems in a system, the higher the user's cognitive task workload is. The potential of TA usability testing to support HI system (re)design with the aim of reducing users' cognitive task workload, and the beneficial effect of applying the UAF classification in this perspective, are furthermore discussed in this paper.
2. Methods

2.1. System Background: NICE Online

The evaluated and redesigned system in this study is a web-based Data Query Tool of the Dutch National Intensive Care Evaluation (NICE). The NICE registry collects demographic, physiological and clinical data on patients admitted to Dutch ICUs to detect differences and trends in the quality of ICU-delivered care. To provide participating ICUs with the possibility to query their own data and compare their performance with their peers or with national averages, a web-based Data Query Tool was developed in a standard software development cycle. In the NICE Query Tool users define a 'query' themselves to compose a graph or a table depicting the selected information. An example of such a query in NICE Online is: 'compare an ICU's standardized mortality ratio (SMR) of medical patients to the national mean SMR of medical patients in the year 2009'. Figure 1 provides screenshots of both the first and the redesigned NICE Query Tool. Additional information on the development of the Tool and its functionality is published in [7].

2.2. Pre-Post Study Design

In October 2008, a pre-redesign TA usability evaluation was performed (pre-test) to assess the overall usability and cognitive complexity of developing queries in the Query Tool available at that moment. Eight end-users were contacted to participate in a Think Aloud (TA) study, with an equal representation of new and more skilled users. A portable usability laptop with Morae software was used to document subjects' TA verbalizations and to video-record their (mouse) actions in the system. Sessions took place in the clinical workspace of the subjects, and six predefined germane tasks, consisting of several subtasks and varying in complexity, were given to subjects during the TA test in random order. Revealed usability problems were input to redesign the Query Tool interface. At the beginning of 2010, post-redesign TA testing was performed on a beta-test version of the redesigned tool to measure the effectiveness of the redesign efforts. Again, eight end-users were contacted, of which four were new test users, comparable to the pre-TA user test group in terms of computer skills and previous experience with the Query Tool, and four were users who had also participated in the pre-TA study. Bias of pre-defined task learnability for these four users was
negligible, since the time between the pre and post study was around one year. In the post TA testing, similar circumstances were upheld as in the pre TA test, including the tasks to be performed in the system. To compare overall task efficacy between the pre and post TA sessions, the shortest routes to correctly perform the tasks and the corresponding end-results in both the old and the redesigned system were determined by highly experienced data managers of the NICE registry. The correct task end-results were then applied as the 'gold standard' to measure the percentage of tasks correctly completed by subjects in the TA sessions. To compare overall task efficiency, time-on-task measurements had to be adjusted for the optimization of system response times for the display of query results (e.g. users had to wait over one minute for display of the query result in the old system, in contrast to 3 seconds in the new system). Pre-post overall task efficiency measurements were therefore compared in terms of the additional time it took users to complete the tasks, both pre and post, as opposed to the time it takes to complete the tasks by the shortest route in each system design.
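A minimal sketch of these two outcome measures, using illustrative per-subject numbers rather than the study's raw data: adjusted overall task efficiency is the time a subject needed beyond the shortest route, and overall task efficacy is the share of tasks whose end-result matches the gold standard.

```python
# Hypothetical per-subject task times (minutes) and correctness counts.
optimal_route_min = 20.05                 # shortest-route time in the old design
subject_times_min = [48.2, 55.0, 47.3]    # illustrative observed times
tasks_correct = [3, 4, 2]                 # correctly completed tasks per subject
tasks_per_subject = 6

# Adjusted efficiency: time spent beyond the optimal route, averaged over subjects.
adjusted_efficiency = [t - optimal_route_min for t in subject_times_min]
mean_adjusted = sum(adjusted_efficiency) / len(adjusted_efficiency)

# Efficacy: fraction of tasks whose end-result matched the gold standard.
efficacy = sum(tasks_correct) / (tasks_per_subject * len(tasks_correct))

print(f"mean adjusted task efficiency: {mean_adjusted:.1f} min beyond optimal route")
print(f"overall task efficacy: {efficacy:.0%} of tasks correct")
```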
Figure 1. NICE Online; Screenshots of the Physician Data Query Tool pre (left) and post (right) redesign.
2.3. Usability Problem Classification: The User Action Framework

UAF classification places detected system usability problems in the context of four subsequent phases of the user interaction cycle: Planning (high level), Planning (translation), Physical actions and Assessment [6]. Usability problems relating to the planning phase concern users' cognitive actions for planning how to perform a task, e.g. the inability to track where you are in a system. The translation phase is about cognitive actions to determine how to carry out the intentions; related usability problems are incorrect button labeling or vague symbols. Physical action pertains to executing the actions by manipulating user interface objects; usability problems here concern e.g. button proximity or small button size. The assessment phase is about perceiving, interpreting and evaluating the resulting system state to decide whether the action was indeed accurately performed; related usability problems concern users' misunderstandings of system feedback. UAF classification starts with four user interaction phases on the first level; accurate classification can go up to six levels. This paper limits its results to the first level to provide a general insight into the relation between performance measures of cognitive task workload and the UAF-classified usability problems found.
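To make the first-level classification concrete, the sketch below tallies classified usability problems by UAF phase, as is done for Table 2. The example problem descriptions and their assigned phases are hypothetical, not items from the study's data.

```python
from collections import Counter
from enum import Enum

class UAFPhase(Enum):
    PLANNING = "Planning"
    TRANSLATION = "Planning (translation)"
    PHYSICAL_ACTION = "Physical action"
    ASSESSMENT = "Assessment"

# Illustrative classified problems: (description, first-level UAF phase).
problems = [
    ("cannot tell which query step is active", UAFPhase.PLANNING),
    ("button label does not convey its function", UAFPhase.TRANSLATION),
    ("graph colours hard to distinguish in result view", UAFPhase.ASSESSMENT),
    ("confirmation message misread as an error", UAFPhase.ASSESSMENT),
]

counts = Counter(phase for _, phase in problems)
total = sum(counts.values())
for phase in UAFPhase:
    n = counts.get(phase, 0)
    print(f"{phase.value:<24} {n:2d} ({n / total:4.0%})")
```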
3. Results

Overall, 12 subjects were included in the pre and post TA study (4 subjects were included in both pre and post TA). In total, 36 usability problems were revealed by two usability analysts in the pre TA test and 35 usability problems were revealed in the post TA test. UAF categorization of usability problems was performed by both usability analysts separately (κ=0.91). Of the usability problems detected in the pre-test 34 (94%) were resolved by redesigning the Query Tool, 5 (14%) were considered overlapping with usability problems found in the post-test and thirty new usability problems were revealed in the post TA test.

Table 1. Pre and post redesign measurements of task efficiency and efficacy.

                                             Pre TA Test        Post TA Test
Subjects                                     8                  8
Overall Task Efficiency (min)                50.16 (sd 7.62)    19.40 (sd 6.01)
 - Optimal route (min)                       20.05              12.51
 - Adjusted Overall Task Efficiency (min)*   +30.11             +6.49
Overall Task Efficacy                        24 (50%)           46 (96%)

Efficacy: total number of tasks completed (8x6=48); * deviation in min from optimal route
Overall task efficiency in the pre TA test was extremely low; users took on average 30 minutes longer to perform the tasks in the system compared to the time it would have taken them had they known what to do and how to act in the system (shortest route) (Table 1). This extra time was reduced to less than 7 minutes in the post test. The fact that subjects were able to complete only 50% of the tasks during the pre TA test confirmed that usage of the Tool for developing queries was cognitively complex. This percentage increased to 96% after redesign. Table 2 shows the usability problems categorized by 'first level' UAF classification in both the pre and post TA tests. The majority of usability problems in the pre-test concerned the 'planning' phase (64%). It appears that these problems were accountable for the high cognitive task workload associated with the low values measured for task efficacy and efficiency in the pre-test. After redesign the majority of the 30 new usability problems detected in the post TA test concerned the 'assessment' phase (63%), showing an evident shift in the phase of interaction in which the newly revealed usability problems occurred compared to the pre-test. However, these post TA usability problems did not, or only minimally, seem to affect users' cognitive task workload. Apparently, usability problems related to the assessment phase did not have a great impact on task efficacy or efficiency. Analysis of the verbal protocols and video recordings of users' actions showed that usability problems in the Assessment phase were mostly related to users' preferences in interface layout, such as graph colour and the display of system feedback related to information on the screen.

Table 2. Usability problems in the Pre and Post TA test classified by first level UAF

UAF Phase: 1. Planning, 2. Planning (translation), 3. Physical Action, 4. Assessment
Pre TA Test (36 problems): 8 (22%), 15 (42%), 13 (36%)
Post TA Test (35 problems): 10 (29%), 3 (8%), 22 (63%)
4. Discussion and Conclusion

In this study the input of TA usability testing in redesigning a web-based Data Query Tool of a National ICU Quality Registry led to a clear reduction in its complexity and hence in the cognitive task workload of its users. Task accuracy improved from 50% of tasks completed in the pre-test to 96% of similar tasks completed in the post-test. However, redesign of the Tool also caused thirty new usability problems to occur. This is not surprising, as it is well known that usability evaluation is an iterative process; subsequent changes to a user interface design might reveal other problems that again need user testing [8]. The fact that users' efficiency was highly improved after redesign of the Tool indicates that the new usability problems detected in the post TA test minimally affected their cognitive task workload in use of the redesigned system. Applying UAF classification in our study was particularly useful to compare the nature of the pre and post detected usability problems and their effect on users' cognitive task workload. UAF classification revealed a potential cause-effect relation between the occurrence of usability problems in the planning phase of the user-system interaction and their apparent negative effect on users' cognitive task workload in terms of task efficiency and efficacy. Indeed, input from the pre TA test to the Query Tool redesign efforts offered insight on how to tackle the usability problems in the planning phase and in so doing furthered the development of the Query Tool to better support users' cognitive processes in data querying. The new usability problems detected in the post-test were mostly related to the Assessment phase, indicating that these problems were more or less of a cosmetic nature. As such, they did not provoke additional cognitive burden. Those usability problems that placed a high cognitive burden on system use were thus successfully reduced in just one redesign iteration of the Query Tool. Future studies that apply TA testing in a redesign cycle should focus redesign efforts on those aspects of the system that affect the planning of tasks by end-users, especially when the high cognitive task workload of complex HI system tasks is seen as a major barrier to system use.
References

[1] International Organization for Standardization. ISO 9241-11 Ergonomic requirements for office work with visual display terminals (VDTs) – part 11: guidance on usability. 1998.
[2] Kushniruk, A.W., Patel, V.L. Cognitive and usability engineering methods for the evaluation of clinical information systems, J Biomed Inform 37 (1) (2004), 56-76.
[3] Horsky, J., Zhang, J., Patel, V.L. To err is not entirely human: complex technology and user cognition, J Biomed Inform 38 (4) (2005), 264-266.
[4] Jaspers, M.W. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence, Int J Med Inform 78 (5) (2009), 340-53.
[5] Hertzum, M., Hansen, K.D., Andersen, H.H.K. Scrutinizing usability evaluation: Does thinking aloud affect behaviour and mental workload?, Behaviour & Information Technology 28 (2) (2009), 165-181.
[6] Andre, T.S., Hartson, H.R., Belz, M.S., McCreary, A.F. The user action framework: A reliable foundation for usability engineering support tools, Int J Human-Computer Studies 54 (2001), 107-36.
[7] Peute, L.W., de Keizer, N.F., Jaspers, M.W. Cognitive evaluation of a physician data query tool for a national ICU registry: results of two think aloud variants and their application in redesign, Stud Health Technol Inform 1 (2010), 309-13.
[8] Kaplan, B., Harris-Salamone, K.D. White paper: Health IT project success and failure: recommendations from literature and an AMIA workshop, J Am Med Inform Assoc 16 (2009), 291-9.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-930
Impact of Alert Specifications on Clinicians' Adherence

M.M. LANGEMEIJER, L.W. PEUTE, M.W.M. JASPERS
Department of Medical Informatics, Academic Medical Center – University of Amsterdam, The Netherlands
Abstract. Computerized alerts provided by health care information systems have been shown to enhance clinical practice. However, clinicians still override more than half of the alerts. This indicates that certain aspects of alerts need improvement to fulfill their purpose of supporting clinicians in decision making. This paper reports on a systematic review of studies evaluating alert specifications and their impact on clinicians' alert adherence. The review revealed that the use of colors and icons to distinguish different alert severity levels and presenting high-severity alerts in an interruptive fashion increase clinicians' adherence to alert recommendations. Alert message contents that lack clinical importance or provide incorrect texts increase alert non-adherence. Few studies have yet focused on the impact of alert specifications on clinicians' adherence. A research agenda is needed on alert specifications and their impact on clinicians' adherence in order to develop alerts that truly support clinician decision making.

Keywords. Hospital Information Systems, Alert, Reminder Systems, Clinicians Adherence, Design aspects, Clinical Decision Support
1. Introduction

Clinical decision support systems (CDSS) can have beneficial effects on clinicians' performance in daily practice (1). Certain types of CDSS provide decision support through computerized alerting of clinicians on (critical) situations that require their attention or special action. Alerts provided by Computerized Physician Order Entry (CPOE) systems have been proven to reduce duplicate orders, overdoses, allergic reactions, and drug interactions (2). Also, higher clinician compliance with clinical guidelines has been reported as a beneficial effect of alert implementation (3). However, one of the barriers to attaining these beneficial effects is that 49% to 96% of the alerts are still overridden (4), undermining their purpose. An often heard reason for overriding an alert is "alert fatigue" as a result of low specificity (4, 5). Alerts of low specificity are often 'clicked away' without being read, even when overriding them could cause adverse events. Next to alert specificity, the graphical alert design influences alert overriding; a minor change in the design of an alert shown on a computer screen may have a major impact on a clinician's action (6). However, in what way alert specifications of different severity and specificity may affect clinician adherence is still unclear. In this paper we present the findings of a systematic review of studies that evaluated the effects of interventions concerning different alert specifications on clinicians' adherence.
2. Methods

In this systematic review we define an 'alert' as 'a message that becomes visible to inform the user of a certain situation that requires attention'. An alert is generated by a rule base that is incorporated in a health care information system. In this review we refer to a health care information system as defined by (7): "all computer-based components which are used to enter, store, process, communicate, and present health related or patient related information and which are used by health care professionals or the patient themselves in the context of inpatient or outpatient patient care". The alert characteristics defined in this review are 'type', 'design' and 'message content'. Type is defined by two characteristics: intrusive/non-intrusive and interruptive/non-interruptive. Alert messaging is considered intrusive if it overlays the computer ordering screen. Alert messaging is defined as interruptive if it requires a user action before a clinician can proceed with the next step of ordering (e.g. providing a reason for alert overriding). Design of an alert is defined by two elements: graphical (e.g. the use of colors) and screen (e.g. the size of an alert or its components, the alignment of alert components, and the use of icons). Message content of an alert is defined as the informative content of the alert that is shown to the user (e.g. alert severity, options for alternative treatments etc.). Clinicians' alert adherence is considered in terms of a clinician following the recommendation of the alert's message. MEDLINE and EMBASE were systematically searched from January 1, 1990 until January 1, 2009 using a combination of Medical Subject Headings (MeSH terms) and keywords. These terms were grouped as (A) interactive computer systems, (B) alert, warning, reminder, or feedback, (C) alert specifications (e.g. design). Within each group, the terms were combined by the operator "OR". The three groups were combined by the operator "AND". The search was narrowed down to articles written in English. All titles and abstracts of these articles were reviewed by the first author. The two other authors each reviewed half of the total set. Studies were rated as relevant if the following items were mentioned in the abstract: 1) the system under study is an interactive health information system, 2) the study is about clinician alert adherence, and 3) the study objective is the evaluation of at least one of the following alert specifications (type, design, or message content). Selected articles were discussed in a meeting and, if all three reviewers agreed upon inclusion, full texts were reviewed. A standard data collection form was applied to review the included articles.
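The grouping logic of such a search strategy can be expressed as a small boolean query builder. The sketch below uses illustrative stand-in terms and PubMed-style field tags, not the exact MeSH terms, keywords or database syntax of the review.

```python
# Illustrative term groups; the actual review combined MeSH terms and keywords.
group_a = ["computerized physician order entry", "clinical decision support",
           "hospital information systems"]
group_b = ["alert", "warning", "reminder", "feedback"]
group_c = ["design", "interface", "specification"]

def or_block(terms):
    """Combine terms within a group with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the three groups with AND, then restrict language and date range.
query = " AND ".join(or_block(g) for g in (group_a, group_b, group_c))
query += ' AND english[Language] AND ("1990/01/01"[PDAT] : "2009/01/01"[PDAT])'

print(query)
```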
3. Results

The literature search generated a total of 1711 articles (MEDLINE 1055, EMBASE 656) of which 386 were duplicates. From the remaining 1325 articles, 16 were selected for full text review based on their titles and abstracts. After full text review, only seven articles were found eligible for inclusion. One was excluded because it was about a system that had no interactive user interface, four were excluded because the full text did not provide detailed information on the alert specifications, and four were excluded because they did not accurately describe the study designs. Table 1 gives an overview of the included articles with the year of publication, study design, setting, system type, the results in terms of alert specifications, and the described effect on clinicians' adherence. Full references of the included studies are provided in a technical report, which can be found at (8).
Table 1. Overview of studies evaluating impact of alert specifications on clinicians' adherence
(Columns: Investigator, Year of Pub. | Study design | Setting | System | Alert specification | Effect)

Shah NR et al., 2006 | Descriptive | Outpatient | CPOE | Type: Tiered based on severity level; 1) interruptive requiring elimination of interaction, 2) interruptive requiring reason, 3) not interruptive | Positive

Paterno MD et al., 2009 | Cohort study | Inpatient | CPOE | Type: Tiered based on severity level; 1) interruptive requiring discontinuing one of the orders, 2) interruptive requiring discontinuing one of the orders or providing a reason, 3) not interruptive | Positive

van Wyk JT et al., 2008 | RCT | Outpatient | EHR | Type: Automated alerting vs. on-demand alerting | Positive

Alexander GL, 2007 | Descriptive | Inpatient | EHR | Type: Automated alerting vs. on-demand alerting | No effect

Eliasson M et al., 2006 | Cross-sectional | Outpatient | CPOE | Design: Colors to indicate severity (red = high, yellow = medium, white = low); different icons for domain of notification (Pregnancy, Breast-feeding, Medication) | Unclear

Taylor L et al., 2004 | Descriptive | Outpatient | CPOE | Content: clinical importance of alert, and correctness of drug/disease information | Negative

Tamblyn R et al., 2008 | RCT | Outpatient | CPOE | Type: Automated alerting vs. on-demand alerting; Content: clinical importance of alert, and correctness of drug/disease information | No effect on type, negative effect on content
Five of the studies provided specific information about the different types of alerts. Shah et al. tiered the presentation of a selective set of alerts into 3 categories based on their severity levels. Categories one and two were considered severe and were designed to interrupt the clinician, requiring a direct action: either eliminating the contraindication for level 1 or providing an override reason for level 2, while the less severe ones, level 3, were presented in a non-interruptive fashion, requiring no action by clinicians. This study reported an adherence rate of 67% for interruptive alerts requiring action. Paterno et al. studied whether the rate of clinician compliance with drug-drug interaction alerts improved when a tiered presentation of alerts was implemented. Alert log data were analyzed at two academic medical centers using the same alerts, but one displayed alerts by severity level (tiered presentation) while the other did not. This study showed that the overall compliance rate for tiered alerts was almost three times higher than for non-tiered alerts (29% vs. 10%). A randomized controlled trial (RCT) by Van Wyk et al. studied automated alerts (the recommendation is automatically shown to the user) and on-demand alerts (a user has to actively initiate the overview screen to access the recommendation) versus no
intervention. The RCT showed that the alerting version significantly improved the performance of clinicians in screening for and treatment of dyslipidemia as compared to the on-demand version. Another RCT, by Tamblyn et al., compared the effect of customizable automated alerts and customizable on-demand alerts on drug prescribing problems and alert overrides. A greater absolute number of automated alerts were seen and revised by clinicians, but both groups underused the alerts. As a result, there was no significant difference in the overall prevalence of prescribing problems by the end of the follow-up period; clinician adherence was therefore not affected. Likewise, the study by Alexander investigated the impact of automated alerts compared to on-demand alerts on the clinical responses of health care providers and reported no significant difference in clinicians' adherence. Only one of the studies, Eliasson et al., provided specific information about the visual design aspects of alerts. This study investigated a system where icons (differing in type for pregnancy, breast-feeding, and medication) appeared in patient situations that required attention. The background color of the alert changed with the severity level: red for high, yellow for medium, white for low. This study showed that these types of alerts were quickly adopted in daily clinical routine. The adoption may be due to adherence to the alerts, though the study did not directly report the actual effect of the alert design specifications on clinicians' adherence. Two of the studies, Taylor et al. and Tamblyn et al., reported on content specificities of alerts. Taylor et al. assessed the feasibility and performance of automated alerts within an electronic decision support tool of a prescribing system. Among other reasons, lack of clinical importance of alerts and incorrectness of drug/disease information accounted for 34% and 4%, respectively, of clinicians' non-adherence to automated alerts. Tamblyn et al. likewise showed that of the total number of alerts seen by clinicians, 16% were ignored because of incorrectness of drug/disease information and 29% because of lack of clinical importance.
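A tiered presentation policy of the kind evaluated by Shah et al. and Paterno et al. could be sketched roughly as below. The function name and the exact required actions are invented for illustration and paraphrase the descriptions above; they are not taken from either system.

```python
from enum import IntEnum

class Severity(IntEnum):
    LEVEL_1 = 1   # most severe
    LEVEL_2 = 2
    LEVEL_3 = 3   # least severe

def present_alert(severity: Severity) -> dict:
    """Return a hypothetical presentation policy for a drug-drug interaction alert."""
    if severity == Severity.LEVEL_1:
        # Hard stop: the interacting order must be removed before proceeding.
        return {"interruptive": True,
                "required_action": "discontinue one of the orders"}
    if severity == Severity.LEVEL_2:
        # Soft stop: proceed only after discontinuing an order or giving a reason.
        return {"interruptive": True,
                "required_action": "discontinue an order or give an override reason"}
    # Informational: shown passively, no action required.
    return {"interruptive": False, "required_action": None}

for level in Severity:
    print(level.name, present_alert(level))
```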
4. Discussion

The findings of this systematic review suggest that specific types of alert presentation can influence clinicians' adherence to the recommendations provided. First, clinicians' acceptance of alerts and likelihood of compliance with the alert recommendations could increase if they were only interrupted by alerts of the highest severity, that is, those with the highest clinical importance. A reduction in the number of interrupting alerts, particularly those with low severity, could prevent alert fatigue and alert overriding by clinicians. Automated alerts do not consistently seem to be associated with better clinician performance than on-demand alerts, although in the RCT by Van Wyk et al. automated alerts did improve adherence in comparison to on-demand alerts. The results of this RCT are consistent with the findings of a major review (8). That review showed that clinicians' performance improves in conditions in which they are automatically prompted by clinical decision support systems, compared to situations which require them to activate the system themselves. These conflicting results may be explained by the fact that other factors besides alert specifications, such as alert specificity and severity, which likewise influence clinicians' adherence, were neglected. Certain alert design specifications have a positive influence on clinicians' adoption of alerts. One of the studies in this review showed that the use of different colors for differentiating alert severity levels and the use of icons for indicating the domain of
notification may enhance clinicians' awareness of situations requiring their attention and improve quick adoption of alerts in clinical practice. The effect of these alert designs on clinicians' alert adherence nevertheless remained unclear. The message content specification of an alert might also impact clinician adherence. Two studies showed that alerts with incorrect information and unclear clinical consequences were among the contributing factors to clinician non-adherence, findings that were acknowledged by Van der Sijs (4). This systematic review has several limitations. Because the term "alert" is not a MeSH term, it was combined with other, similar terms like "warning" and with MeSH terms like "feedback" and "reminder" to find relevant articles. Further work should broaden the search strategy to find more studies that might shed light on other alert specifications and their impact on clinicians' adherence to the alerts. Furthermore, only two of the seven studies concerned RCTs, and these produced conflicting results, so the evidence is limited and inconclusive. Besides the limited number of studies and RCTs found by this review, most of the included publications focused on the effect of one single alert specification on clinicians' adherence. Therefore, the reported adherence might be influenced by other alert specification aspects not in focus in the study, biasing the study results. Most importantly, adherence is influenced by alert specificity and severity as well. A research agenda is needed to investigate the impact of variations in alert specifications in relation to alert specificity and sensitivity on clinicians' adherence. The ultimate aim is to develop alert designs that truly support clinician decision making and improve clinical outcomes. We will start this research with experiments evaluating the effect of different types, designs and message contents of alerts, in relation to alert specificity and sensitivity level, on clinicians' adherence in two Dutch academic hospital settings.
References

[1] Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 293-10 (2005), 1223-38.
[2] Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 23-163/12 (2003), 1409-16.
[3] Rosenberg SN, Shnaiden TL, Wegh AA, Juster IA. Supporting the patient's role in guideline compliance: a controlled study. Am J Manag Care 14-11 (2008), 737-44.
[4] van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 13-2 (2006), 138-47.
[5] Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, et al. Improving acceptance of computerized prescribing alerts in ambulatory care. J Am Med Inform Assoc 13-1 (2006), 5-11.
[6] Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc 10-6 (2003), 523-30.
[7] Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: trends in evaluation research 1982-2002. Methods Inf Med 44-1 (2005), 44-56.
[8] Langemeijer MM, Peute LW, Jaspers MWM. Impact of Alert Specifications on Clinicians' Adherence, Technical Report 2011-01, Department of Medical Informatics, University of Amsterdam. Available at http://kik.amc.uva.nl/KIK/reports/TR2011-01.pdf
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-935
Medication Decision-Making on Hospital Ward-Rounds

Melissa BAYSARI a,1, Johanna WESTBROOK b, Richard DAY c,d
a Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia
b Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine, University of New South Wales, Sydney, Australia
c Department of Clinical Pharmacology and Toxicology, St Vincent's Hospital, Sydney, Australia
d Faculty of Medicine, University of New South Wales, Sydney, Australia
Abstract. This research explored the decision-making process of selecting medicines for prescription on hospital ward-rounds. We aimed to determine when and with whom medications were discussed, and in particular, whether shared decision making (SDM) occurred on ward-rounds. As a low level of computerized decision support was in place in the hospital at the time, we also examined whether the decision support aided in any medication discussions. Fourteen specialty teams (46 doctors) were shadowed by the investigator while on ward-rounds and all verbal communication about medications was noted. Most medication discussions took place away from the patient bedside and the majority took place between two or more doctors. While a great deal of doctor-patient communication regarding medications took place on ward-rounds, very little of this comprised SDM. More frequently, doctors informed patients of the medications they would be or were currently taking. The computerized decision support had little impact on treatment decision-making. While the value of SDM is often acknowledged in the literature, it appears to be rarely practiced on hospital ward-rounds. Keywords. Shared decision making, prescribing, ward-rounds
1. Introduction

It has been suggested that the greatest challenge to information technology development in healthcare is expanding our understanding of decision-making in the complex healthcare environment [1]. Research has shown that doctors appear to select medications based primarily on the probability that a drug will be effective in controlling the disease and on the potential side effects which may result from using a drug [2], but it has also been proposed that patient demands (i.e. patient expectations and preferences) influence clinical decision-making [3]. Shared decision-making (SDM) is the process whereby a doctor and patient exchange information and treatment preferences and reach an agreement about an appropriate treatment [4]. That SDM is the ideal model for treatment decision-making follows from the recognition that uncertainty surrounds treatment decisions
1 Corresponding author: Melissa Baysari
for many conditions and that patients vary in their preferences for health states, tolerances for pain and long-term outlooks [5,6]. Active participation in treatment decision-making by patients has been associated with greater patient satisfaction and better health outcomes, possibly via increased adherence to treatment recommendations and increased perceived control over one’s illness [7,8]. Despite these potential benefits, SDM is not often practiced [9,10], although the bulk of research in this area has been done in primary care. We set out to investigate SDM in a hospital setting. Hospital ward-rounds have been identified as one of the most valuable times for sharing information, problem solving and planning a patient’s treatment [11]. We aimed to determine when and with whom medications were discussed on ward-rounds, and to determine whether patients played an active role in medication decision-making. As a low level of computerized decision support was in place in the hospital at the time, we also examined whether the decision support aided in any medication discussions.
2. Method

2.1. Details of the Computerized Provider Order Entry (CPOE) system

This study was conducted at a 320-bed teaching hospital in Sydney, Australia. At the time of the study, June-November 2010, all wards were using the CPOE system MedChart (www.isofthealth.com) except for the emergency department and the intensive care unit. MedChart is an electronic medication management system that links prescribing, pharmacy review, and drug administration. The CPOE included some basic decision support comprising pre-written orders and order sets, computerised alerts (allergy, pregnancy, therapeutic duplication, and over 100 locally developed rule-based messages, e.g. drug therapeutics committee decisions, administration instructions) and a Reference Viewer look-up tool that allowed prescribers to access reference information (e.g. Therapeutic Guidelines) by clicking on a tab at the top of the prescribing screen.

2.2. Participants

Fourteen medical teams were recruited to participate in the study via direct approach, phone or email. The teams included cardiology, clinical pharmacology, lung transplantation, colorectal surgery, two gastroenterology teams, two gerontology teams, haematology, infectious diseases, nephrology, neurology, and two palliative care teams. Medical teams typically included one senior doctor (consultant), one (or more) registrar, one (or more) resident and occasionally interns (first year post graduation) and medical students. Some ward rounds (5/37) were observed to take place without a senior doctor present. In total, 46 doctors were observed.

2.3. Procedure

Medical teams were shadowed by one of the investigators (MB) while on their ward-rounds. The investigator followed each team as they discussed patient cases and interacted with patients. On occasions where the computer (fixed to a lightweight trolley) was not taken to the patient's bedside, the investigator remained in the hallway
with the computer and only accompanied the team to the bedside if invited to do so by a participating doctor. All verbal communication about medications was noted and information was classified into the following categories: Where medication discussions took place (at the patient’s bed, in the hallway), whether the discussion took place among team members or between a team member and a nurse or pharmacist (doctor and doctor, doctor and nurse, or doctor and pharmacist) or between a team member and patient, the nature of the conversation between a team member and patient (see Table 1), whether the content of an alert was discussed, and whether the Reference Viewer was used during a medication discussion. Each medical team was observed on two or three ward rounds (except for one team that was observed only once because they reported never using a computer on ward rounds), resulting in 58.5 hours of observation in total. Ethics approval was obtained from the human research ethics committee of the hospital and the University of NSW.
3. Results

One hundred and seventy-six verbal behaviours about medications were exchanged between two or more healthcare providers. Most of these conversations took place away from the patient, with only 41 (23%) verbal behaviours taking place at a patient's bedside. The majority of medication discussions among providers were between two or more doctors (91%), with only a small number taking place between a doctor and nurse (7%) or doctor and pharmacist (2%). One hundred and twenty six verbal behaviours took place between a team member (i.e. junior or senior doctor) and a patient. The nature of these behaviours and some examples are presented in Table 1. Doctors frequently told patients what medications they should be taking but rarely involved patients in the decision to order medication.

Table 1. Nature of discussions about medications between doctors and patients
(Columns: Type of verbal behaviour | Example | Number observed (%))

Doctor told patient what medication they are currently taking or will take (Paternalistic decision making) | "You have a nasty infection so I've put you on antibiotics" | 65 (51.5)
Doctor asked patient if/what medication they would like to take (SDM) | "Would you like some medication for your constipation?" | 2 (1.5)
Doctor asked patient about medications they are currently taking | "Do you take this medication everyday?" | 32 (25.5)
Patient asked doctor about medications | "All this talk about Calcium and heart attacks, what does it all mean?" | 17 (13.5)
Doctor answered patient's question about medications | "One study is not gospel. We don't want to take you off the Calcium tablets" | 10 (8)
No doctor was seen discussing the content of a computerized alert with another team member or patient, but the Reference Viewer tool was used on five occasions during discussions about medications. On one occasion, a doctor was observed using the tool to look up the trade names of a number of medications and then relayed these names to a patient. On the other occasions, doctors used the tool to review medication information during a discussion about medications with other team members.
4. Discussion

In this setting, ward-round treatment decision-making typically consisted of discussions between two or more doctors away from the patient bedside. Doctors rarely involved nurses or pharmacists in the decision-making process, a finding consistent with previous research [12]. While a great deal of doctor-patient communication regarding medications did take place, very little of this comprised SDM. Several factors may have contributed to this failure to engage in SDM in this setting. Some medical problems (e.g. preventative screening) have clear decision points and so may be more suited to SDM than many hospital medical problems (e.g. acute situations). It has been suggested that SDM requires a longstanding relationship between doctor and patient so that each party is able to understand the values and biases of the other [13]. A relationship of this kind is not always possible in the hospital setting, where hospital stays are relatively short and interactions between patient and doctor often brief. Time is viewed as a limited resource on ward-rounds and time pressure has been identified as the most common barrier to SDM adoption [14]. Studies have also shown that a patient's desire to participate in treatment decision-making is dependent on a range of factors, including patient age, sex, and the severity of their disease [5,15]. One might expect hospital patients (many of whom are elderly and experiencing serious illnesses) to be unwilling to participate. Regardless, it is still recommended that doctors offer all patients the opportunity to actively engage in the process of making treatment decisions [13,16]. In this setting, doctors employed a paternalistic approach whereby they informed patients of the treatment that was or would be initiated. It is now widely recognized that this treatment approach is only appropriate during emergency situations [16]. Little research has examined the impact of computerized decision support on medication decision-making. As the content of computerized alerts was never featured in medication discussions, it can be deduced that the alerts played a very minor role, if any, in drug choices made on ward-rounds. The Reference Viewer, on the other hand, was utilized on several occasions to obtain medication information. Decision support of this kind allows prescribers to access relevant information only when they believe it is needed and so provides a non-interruptive alternative to computerized alerts. This study was limited by the fact that observations were conducted at only one hospital so findings may not be generalizable to other settings. Patient-doctor relationships were not observed over long periods of time (only 1-3 times) so some medication conversations may have been incomplete.
5. Conclusion

While a great deal of doctor-patient communication regarding medications took place on ward-rounds, very little of this comprised SDM. More frequently, doctors informed patients of the medications they would be, or were currently taking. Medication discussions typically took place between doctors on a team, not nurses or pharmacists, and usually occurred away from the patient's bedside. While the value of SDM is often acknowledged in the literature, it appears to be rarely practiced on hospital ward rounds. A potential therefore exists for interventions, such as decision aids, to facilitate SDM in the hospital setting.
Acknowledgements: This research is supported by NH&MRC Program Grant 568612.
References

[1] Kushniruk AW. Evaluation in the design of health information systems: Application of approaches emerging from usability engineering, Computers in Biology and Medicine 32 (2002), 141-149.
[2] Bradley CP. Decision making and prescribing patterns: A literature review, Family Practice 8 (1991), 276-287.
[3] Geneau R, Lehoux P, Pineault R, Lamarche P. Understanding the work of general practitioners: A social science perspective on the context of medical decision making in primary care, BMC Family Practice 9 (2008), 12.
[4] Charles C, Gafni A, Whelan T. Shared decision-making in the medical encounter: What does it mean? (Or it takes at least two to tango), Social Science & Medicine 5 (1997), 681-692.
[5] Kaplan RM, Frosch DL. Decision making in medicine and health care, Annual Review of Clinical Psychology 1 (2005), 525-556.
[6] Frosch DL, Kaplan RM. Shared decision making in clinical medicine: Past research and future directions, American Journal of Preventive Medicine 17 (1999), 285-294.
[7] Brody DS, Miller SM, Lerman CE, Smith DG, Caputo GC. Patient perception of involvement in medical care: Relationship to illness attitudes and outcomes, Journal of General Internal Medicine 4 (1989), 506-511.
[8] Kaplan SH, Sheldon G, Ware JE. Assessing the effects of physician-patient interactions on the outcomes of chronic disease, Medical Care 27 (1989), S110-S127.
[9] Braddock CH, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: Time to get back to basics, Journal of the American Medical Association 282 (1999), 2313-2320.
[10] Makoul G, Arntson P, Schofield T. Health promotion in primary care: Physician-patient communication and decision making about prescription medications, Social Science and Medicine 41 (1995), 1241-1254.
[11] Busby A, Gilchrist B. The role of the nurse in the medical ward round, Journal of Advanced Nursing 17 (1992), 339-692.
[12] Manias E, Street A. Nurse-doctor interactions during critical care ward rounds, Journal of Clinical Nursing 10 (2001), 442-450.
[13] Kon AA. The shared decision-making continuum, Journal of the American Medical Association 304 (2010), 903-904.
[14] Legare F, Ratte S, Gravel K, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: Update of a systematic review of health professionals' perceptions, Patient Education and Counseling 73 (2008), 526-535.
[15] Levinson W, Kao A, Kuby A, Thisted RA. Not all patients want to participate in decision making: A national study of public preferences, Journal of General Internal Medicine 20 (2005), 531-535.
[16] Emanuel EJ, Emanuel LL. Four models of the physician-patient relationship, Journal of the American Medical Association 267 (1992), 2221-2226.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-940
A Qualitative Analysis of Prescription Activity and Alert Usage in a Computerized Physician Order Entry System Rolf WIPFLIa,1, Mireille BETRANCOURTb, Alberto GUARDIAa, Christian LOVIS a a Division of Medical Information Sciences, University of Geneva and University Hospitals of Geneva b TECFA – University of Geneva Geneva, Switzerland
Abstract. Medical alerts in CPOE are overridden in most cases. The need for alerting systems that are better adapted to physicians’ needs and work processes is recognized. Our study aims to shed some light on how medical alerts are used and how they are integrated in the work process. Work analysis and interviews resulted in a hierarchical task analysis of prescription during ward rounds at the University Hospitals of Geneva. The results indicate that non-modal medical alerts are appreciated as an “insurance” for drugs that are out of the routine set. In the case of drugs that are often prescribed, alerts are ignored as physicians feel comfortable prescribing them. Non-interrupting alerts do not cognitively overcharge physicians, but the question is how to display the numerous alerts so that they are easily accessible when needed. Further, inexperienced physicians lack a mental representation of what evaluations the system is doing with the prescriptions and when alerts are triggered. This may lead to lack of trust or overconfidence, both of them potentially harmful. Keywords. CPOE, medical alert, task analysis, usability
1. Introduction The aim of the present paper is to analyze the prescription behavior of physicians and their use of medical alerts with a homegrown computer physician order entry (CPOE) system with an integrated decision support system (DSS) at the University Hospitals of Geneva, a teaching hospital with 2000 beds and 15.000 electronic prescriptions a day. The scope of the study is limited to the use during ward rounds. Research in other hospitals has shown that medical alerts have a low compliance rate [1] but nevertheless improve prescription behavior and patient safety [2]. It is generally agreed that alert systems have to be better adapted to the needs and work processes of prescribing physicians. If alerts would be better timed, more specific and
1 Corresponding Author: Rolf Wipfli, University Hospitals of Geneva, Division of Medical Information Sciences, Rue Gabrielle-Perret-Gentil 4, 1211 Geneva 14, Switzerland.
displayed in a user-friendly way, they would act as an even more powerful decision support system than today. The prescription activity with CPOE can be described in a top-down manner, drawing on job descriptions, hospital guidelines, medical guidelines and their implementation in the resulting CPOE. Conversely, in a human-centered approach, the activity can be constructed from physicians' representations of the information in the CPOE and from how they handle the medical information in a real work context. As for medical alerts, there seems to be a discrepancy between the two perspectives, as the low compliance rate shows. In order to study prescription activity, ethnographic work observations and interviews [3], work simulations [4] and focus groups [5] have been applied. A method to model the prescription process is cognitive task analysis. The result is a hierarchical representation of main tasks and their dependent subtasks. Researchers have used this technique to represent the drug administration process [6]. A similar method is MAD (Method of analytic task description) [7], which is used in the present study. The goal is to represent the physician's activity in order to make alerts better adapted to it.
2. Method

In a first step, 5 deputy heads of different divisions at the University Hospitals of Geneva were questioned in semi-directive interviews. The aim was to get a wide range of requirements and a broad perspective on the alerting systems in their divisions. The scope was not limited to CPOE, but aimed to cover the general use of alerts in the medical field. Two divisions were selected for further analysis: the division of cardiology in the department of internal medicine and the division of pediatric surgery in the department of adolescents and children. In each division, a morning ward round was accompanied to see how medical personnel act and communicate during prescription activity. Work procedures were observed and notes taken. The work itself was not interrupted as far as possible. When the moment seemed right, emerging questions were asked according to the methodology of contextual inquiry. Each deputy head of division selected a physician for further semi-directive interviews. In the case of cardiology it was an attending physician with 10 years of experience with the CPOE, and in the case of pediatric surgery an advanced resident with 2 years of experience with the CPOE. The interviews were always opened with the request "to recount a recent clinical case where an alert has been displayed". When narrations stopped or when something was unclear, further questions were asked to complete the view of the prescription process. In each of the services we interviewed 2 more residents, each with 8-14 months of experience with the CPOE. The interviews took 20-40 minutes, were audio recorded and transcribed. The transcriptions were analyzed in order to identify the different activities in the prescription process and their temporal and causal relations. These data complemented the findings provided by the work analysis.
3. Results

3.1. Interviews with deputy heads of division

Alerts in CPOE are in general regarded as a good means to provide decision support, as the deputy heads of division support projects which go further in this direction.
However, some issues that were brought up make the CPOE less useful. First, alerts once entered in the system can become outdated. Processes for keeping them up to date are not yet implemented (e.g., for patients who were carriers of methicillin-resistant Staphylococcus aureus (MRSA) and who are now readmitted to the hospital). Another example is reminder alerts that should be given on the last day of hospitalization (e.g., bacteriological tests), a day the system cannot forecast. This leads to an alert every day and therefore to a low compliance rate and alert fatigue. Some physicians criticize the authentication warnings shown when accessing patient records outside their responsibility. These are regarded as interruptive, intimidating and as expressing a lack of trust in them. None of the deputy heads complained about the amount of alerts, and they agree that it is usually difficult to make alerts more specific given that the user range is very broad (medical specialties, experience and expertise). Concerning usability issues, some deputy heads of division are concerned with the quality of medical work by inexperienced physicians. They fear that novice physicians might use electronic prescribing as a poor substitute for thorough clinical analysis. According to them, residents depend too much on decision support systems. Another usability issue was identified in the display of information. Some alerts are out of the visual focus region when using the system, leading to low response levels to the alerts. No one had the impression that there are superfluous alerts. However, some concerns were expressed that the number of alerts will soon overload the screen. Form usability was also mentioned, as some interaction elements like pull-down menus can easily lead to errors, such as choosing a wrong unit in drug prescription.

3.2. Work analysis

There are two situations where drugs are prescribed. In the first case, a physician is on a night or weekend shift and does the prescription alone. In most cases, however, the physician is on a ward round together with other residents, nurses and in some cases a deputy head of division and/or attending physicians who lead and supervise the prescription process. The decision-making process in these cases is collaborative. Prescriptions and medical forms are entered by one designated resident after the visit of a patient, or even at the end of the ward round. The question arises what impact alerts have on the prescription process when they appear some time after the decision has been made.

3.3. Interviews with attending physicians and residents

Only one of the interviewed physicians could recall a recent medical case where he was alerted during prescription. Apparently, alerts like drug interaction alerts and dosage alerts hardly ever lead to critical incidents that would be remembered. The alerts are rather seen as contextual information (coming from the drug compendium) for a drug or drug combination, which may also be ignored in favor of the division's own rules. The alerts were considered by nobody to be interruptive. This may be due to the non-modal alerts (not interrupting the work process) and to the fact that drug prescription is never inhibited. While the two more experienced physicians had a more detailed mental representation of what tests are conducted by the system and what alerts are triggered by these tests, the less experienced residents had only a fuzzy representation of what the system is testing.
Indeed, when asked if they would expect an alert for a given use case, a typical answer was: “I don’t know. You have to ask the
programmers of the system.” This issue was never stated as a problem in the interviews. Still, if this is the case, physicians will find it difficult to trust the system completely; if they do, they risk missing potentially dangerous situations for which there is no alert. Statements by residents and the deputy heads of division, as well as research [8], indicate that physicians will not look for dangers themselves if the system does not warn them. Also, some express doubts about whether the system has up-to-date information (for instance for weight-based drug dosage alerts in pediatrics or drug interactions in cardiology, where new drugs are often introduced). Physicians were aware that they no longer pay attention to alerts. Both visited divisions had a specialized drug set they prescribed very often. Drug alerts for their most common prescriptions were routine to them and the respective alerts were ignored. When asked whether they find them useful, they responded that they were confident that they know the risks of the drugs in their medical domain, but that they appreciate such an alert system for drugs they do not prescribe often, for instance psychiatric or neurological drugs. None of them could report such a situation, but it comforts them that the system would intervene. Both divisions used a limited set of about five drugs per patient, but they already find it difficult to understand the visualization of drug-drug interaction alerts where one drug has interactions with several others. An important alerting mechanism remains the feedback of the nurses, who are used to a set of common dosages, routes, and frequencies of prescription. In contrast to the CPOE, they are also aware of the diagnosis for which the drug is prescribed. 3.4. Task analysis The method of analytic task description (MAD) resulted in the hierarchical tree shown in Figure 1. The stick-man symbol represents a physician-initiated task and a computer symbol represents a computer-initiated task; the label “opt” marks an optional task. The relations between a task and its subtasks are “alternative”, “parallel”, “sequential”, or “no order”. This task representation may be used to create use cases and scenarios for prototype development and usability testing.
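To make the MAD notation described above concrete, the following is a minimal Python sketch of how such a hierarchical task tree could be encoded so that elementary actions can be derived for use cases and test scenarios. The class and task names are invented for illustration; this is not the authors' tooling, only an example of the representation the paper describes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    initiator: str = "physician"   # "physician" or "computer" (the two symbols in Figure 1)
    optional: bool = False         # the "opt" label
    relation: str = "sequential"   # how subtasks relate: alternative, parallel, sequential, no order
    subtasks: List["Task"] = field(default_factory=list)

    def add(self, *children: "Task") -> "Task":
        self.subtasks.extend(children)
        return self

def leaves(task: Task) -> List[str]:
    """Flatten a branch into elementary actions, e.g. to derive a test scenario."""
    if not task.subtasks:
        return [task.name]
    return [name for sub in task.subtasks for name in leaves(sub)]

# Hypothetical example: a ward-round prescription decomposed sequentially.
ward_round = Task("Prescribe during ward round").add(
    Task("Examine patient"),
    Task("Decide on medication"),
    Task("Enter order in CPOE"),
    Task("Display alert", initiator="computer", optional=True),
)
print(leaves(ward_round))
```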
4. Discussion The present study gives some insights into the prescription process with a CPOE and into how alerts are handled. The main finding is that physicians appreciate alerts as insurance for situations they are not familiar with. Also, the non-modal alerts do not overload the physicians; however, attention should be paid to how best to visualize the ever-growing number of alerts. Finally, because the checking is not visible to the physicians, they have in general no mental representation of which prescriptions the decision support system is checking. These issues have to be addressed in future research.
5. Conclusion The present qualitative study offers a means to understand the causes that lie beneath the low compliance with alert systems and how to improve them. We will use the findings to develop a prototype for alert systems which will be further studied in
usability tests. The presented method may be easily adapted to other work contexts and research questions in the medical field.
Figure 1. Analytic Method of Task description for prescription process during a ward round
References
[1] Van der Sijs H, Aarts J, Vulto A, Berg A. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138-47.
[2] Schedlbauer A, Prasad V, Mulvaney C, et al. What evidence supports the use of computerized alerts and prompts to improve clinicians’ prescribing behavior? J Am Med Inform Assoc. 2009;16(4):531-8.
[3] Beuscart-Zéphir M-C, Pelayo S, Bernonville S. Example of a Human Factors Engineering approach to a medication administration work system: potential impact on patient safety. Int J Med Inform. 2010;79(4):43-57.
[4] Van der Sijs H, Van Gelder T, Vulto A, Berg M, Aarts J. Understanding handling of drug safety alerts: a simulation study. Int J Med Inform. 2010;79(5):361-9.
[5] Weingart SN, Massagli M, Cyrulik A, Isaac T, Morway L, Sands DZ, Weissman JS. Assessing the value of electronic prescribing in ambulatory care: a focus group study. Int J Med Inform. 2009;78(9):571-8.
[6] Lane R, Stanton NA, Harrison D. Applying hierarchical task analysis to medication administration errors. Appl Ergon. 2006;37(5):669-79.
[7] Scapin DL, Pierret-Golbreich C. Towards a method for task description: MAD. Work with display units. 1990;89:371-9.
[8] Campbell EM, Sittig DF, Guappone KP, Dykstra RH, Ash JS. Overdependence on technology: an unintended adverse consequence of computerized provider order entry. AMIA Annu Symp Proc. 2007 Nov 10-14; Chicago, IL. p. 94-8.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-945
Combining Usability Testing with Eye-Tracking Technology: Evaluation of a Visualization Support for Antibiotic Use in Intensive Care Aboozar EGHDAMa,1, Johanna FORSMANa, Magnus FALKENHAVb,c, Mats LINDd, Sabine KOCHa a Health Informatics Centre, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Sweden b Department of Anesthesiology and Intensive Care, Karolinska University Hospital Solna, Stockholm, Sweden c Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden d Department of Informatics and Media, Uppsala University, Sweden
Abstract. This research work is an explorative study to measure the efficiency, effectiveness and user satisfaction of a prototype called Infobiotika, which aims to support antibiotic use in intensive care. The evaluation was performed by combining traditional usability testing with eye-tracking technology. The test was conducted with eight intensive care physicians, of whom four were specialists and four residents. During three test phases participants were asked to perform three types of tasks, namely navigational tasks, clinical tasks and tasks to measure the learning effect after 3-5 minutes of free exploration time. A post-test questionnaire was used to explore user satisfaction. Based on the results and overall observations, Infobiotika seems to be effective and efficient in terms of supporting navigation and also a learnable product for intensive care physicians, fulfilling their need to get an accurate overview of a patient's status quickly. Applying eye-tracking technology during usability testing proved to be a valuable complement to traditional methods that revealed many unexpected issues in terms of navigation and contributed additional understanding of design problems and user performance. Keywords. Usability evaluation, eye-tracking, information visualization, decision support, intensive care
1. Introduction Intensive care is a complex and time-critical work environment. Patients in intensive care units receive a large amount of medication, their condition can change rapidly and intensive care physicians are forced to make fast decisions. One area of decision making of great importance is antibiotic use. It is known that antibiotics are over-used in
Corresponding author: Aboozar Eghdam, Health Informatics Centre, LIME, SE 17177 Stockholm, Sweden; E-mail: [email protected]
intensive care units [1] and patients are often unnecessarily treated with broad-spectrum antibiotics [2]. Antibiotic use in intensive care requires time-critical decision-making based on complex information that is usually spread amongst different information systems with different logins, functionalities and user interfaces [3]. Health information systems (HIS) therefore need to be adapted to the context of use and to support, not hamper, clinical work processes as well as clinicians’ cognitive processes. This means that both the graphical user interface (GUI) and the interaction with a specific health information system or e-service should be designed according to clinicians’ work practice [4]. A number of analytical and empirical methods from the areas of human-computer interaction (HCI), human factors and usability engineering have been applied to evaluate health information systems in order to verify and optimize HIS usability [5] [6]. Usability testing usually combines quantitative measures, such as time measurements, and qualitative measures such as user perception. Subjective measures to gather cognitive data, derived for example by “thinking aloud”, are often applied. Eye-tracking technology is an objective way to measure and analyze eye movement, point of gaze, patterns of visual attention and eye fixation. The analysis of the data provided by the eye-tracking equipment is based on an assumption about the relationship between eye fixation and people’s thoughts [7]. The eye-tracking technique increases the understanding of what users are looking at, for how long, and of their visual navigation path [8]. Eye tracking supplements traditional usability testing approaches by providing information which researchers cannot observe directly, and it gives distinctive insight into search, attention and reading patterns which the test participant cannot report during the evaluation. Exploring eye movements also contributes to HCI by revealing users’ needs at the interface so that it can be adjusted accordingly, even in real time [7]. We think that a combination of usability testing, as a practical assessment of system effectiveness, efficiency and user satisfaction, and eye-tracking technology can provide additional understanding of design problems and user performance. Improving the human-computer interface and summarizing patient-level information are seen as some of the core challenges for the design of clinical decision support systems (CDS) [9]. To enable targeted, patient-specific antibiotic use, we have therefore developed a visualization support for decision making during antibiotic use, called Infobiotika, and we present its evaluation in this paper. We did an exploratory usability investigation and an initial performance testing of Infobiotika. The purpose of this research was to investigate whether Infobiotika supports efficient and effective navigation and to observe the users’ navigation paths, visual scan patterns and distribution of visual attention. Furthermore, the purpose was to explore whether users find the information needed to support antibiotic treatment in intensive care, and the learnability of Infobiotika. In addition to quantitative results, qualitative comments were captured during the test to obtain the participants’ thoughts and feelings about Infobiotika and its functionality.
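As a concrete illustration of the fixation-based measures mentioned above (fixation counts and dwell time on areas of interest of a screen), here is a minimal Python sketch. The record format, coordinates and area names are invented for illustration and do not correspond to any particular eye tracker's output or to the analysis pipeline used in this study.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Fixation = Tuple[float, float, float]    # (x, y, duration in ms)
AOI = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def dwell_per_aoi(fixations: List[Fixation], aois: Dict[str, AOI]) -> Dict[str, dict]:
    """Count fixations and accumulate dwell time inside each area of interest."""
    stats = defaultdict(lambda: {"fixations": 0, "dwell_ms": 0.0})
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name]["fixations"] += 1
                stats[name]["dwell_ms"] += dur
    return dict(stats)

# Illustrative screen regions and fixation samples (not study data).
aois = {"graph panel": (0, 0, 600, 400), "table panel": (600, 0, 1200, 400)}
fixations = [(120, 200, 310.0), (640, 150, 220.0), (130, 220, 180.0)]
print(dwell_per_aoi(fixations, aois))
```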
2. System Description Infobiotika is a visualization support to provide a patient overview during antibiotic treatment in intensive care. It provides an overview of a patient’s clinical condition by gathering clinical data from different systems, that is, an electronic patient record
A. Eghdam et al. / Combining Usability Testing with Eye-Tracking Technology
947
system, a patient data management system (PDMS), a bacteriological laboratory system and a radiology information system. Infobiotika is expected to complement the current system environment during rounds, in resting-rooms, at surgical wards, and in time-critical situations when the intensive care physician is without the support of an infectious disease consultant. The context of use is therefore treatment with antibiotics in intensive care. To give this overview, Infobiotika presents the data as tables, trees and graphs.
3. Method We applied usability testing, measuring system effectiveness, efficiency and user satisfaction according to ISO 9241-11 [10], combined with eye-tracking technology. 3.1. Participant Characteristics Two main user groups were identified as test subjects, specialists and residents, depending on their full-time experience at the intensive care unit. Both groups were intensive care physicians with medical responsibility for patients requiring intensive care and with anesthesiology as a medical specialty covering both anesthesia and intensive care. None of the participants had worked with Infobiotika before, and none had participated in a former research project on visualization or in a usability test. All worked at hospitals different from the one where the prototype was developed. To be able to validate the outcome of the tasks in the study, four so-called super-users were asked to participate in a pilot test and in the preparation of the tasks (Table 1). Table 1. Number and type of participants in the Infobiotika usability evaluation study
                                                Specialists   Residents   Usability experts
Study participants (Danderyd Hospital)               3            2              0
Study participants (S:t Görans Hospital)              1            2              0
Pilot-test participants (Karolinska Hospital)         2            0              2
3.2. Study Design and Procedure A mixed-method design was used for this test, with a between-subjects design for type of user with two levels of experience (specialists and residents) and a within-subjects design for tasks. The test consisted of pre-test arrangements, an introduction to the study and the prototype, performance of the tasks and a debriefing session. One at a time, participants were greeted by the test moderator and guided to the test room, which was a non-clinical area at the intensive care unit. The moderator started with an introduction and provided guidance. A short video was shown in order to give exactly the same introduction about the prototype’s functionality and features to each participant. Further instructions during the test were provided by slides shown on the screen during the test. All participants used the same computer, prepared with installed eye-tracking equipment, and performed pre-designed tasks. Participants were to perform three types of pre-defined tasks in three phases of the test, i.e. 15 navigational tasks, 8 clinical tasks and 6 tasks to measure the learnability effect after 3-5 minutes of free exploration time. Finally, a post-test debriefing session was arranged with
the moderator, and a post-test questionnaire (SUS, the System Usability Scale) with 10 questions was answered by the test participants. The test materials such as consent forms, a background questionnaire, the introduction to the study, the pre-recorded video, interviews, the SUS questionnaire and the prototype were in Swedish. The regional ethical review board in Stockholm approved the study (no. 2010/1202-31/1).
4. Results The participants started the evaluation by completing 15 navigational tasks which were designed to give information on performance time, navigation paths and the accuracy of responses. The results of the navigational tasks for both target groups showed that the physicians most often succeeded in solving the tasks. On average, they finished 79.4% of the tasks. The participants’ time spent on performing tasks was productive and the paths taken were close to the expected paths, which had been prepared by a senior ICU physician prior to the test. A number of participants started to use tables but changed to the use of graphs further along the test session. In general, residents were more interested in graphs and specialists in tables. In addition, the results showed that specialists performed the tasks slightly faster than residents. Specialists were faster in 7 and equal to residents in 2 out of 15 tasks. In the second phase of the test, participants were asked to complete 8 clinical tasks, structured on the basis of an example dialogue with an infectious disease consultant. These results showed that 91% of specialists and 100% of residents completed all tasks. According to the participants’ comments during the test and in the post-test interview, Infobiotika fulfilled most of their expectations to support antibiotic use in intensive care. Although they had some suggestions for improvements, their overall impression of Infobiotika was positive. After 3-5 minutes of free exploration time in the third phase, the participants performed 6 tasks selected out of the 15 tasks from the first phase but slightly modified to avoid the possibility of memorizing previous answers. The performance in this phase of the test measured the effect of learning. The results showed that participants who used the same path to solve comparable navigation tasks were faster in 5 out of the 6 tasks, indicating a positive learning effect. Based on the recorded eye-tracking data of all participants, specialists stayed more focused on specific screen elements while residents explored the user interface more in its entirety. The eye-tracking data further showed an increasing use of charts and graphs during the test session. This could indicate that graphs will give a patient overview more efficiently and effectively with more practice. The results of the SUS questionnaire showed that the average overall satisfaction rate of the 8 participants, based on the 10-item SUS assessment scale, was 79.5%, and the participating physicians perceived Infobiotika to be a quick and acceptable way to provide an overview of a patient's status.
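For orientation, the sketch below shows the standard scoring of the 10-item SUS (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5 to give a value on a 0-100 scale), which is how a figure such as the 79.5 reported above is typically obtained. The example responses are invented and are not the study data.

```python
def sus_score(responses):
    """responses: ten Likert answers (1-5), item 1 first; returns a 0-100 SUS score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positively worded, even items negatively worded
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))     # -> 82.5 for this invented respondent
```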
5. Discussion The goal of this usability test was to observe end users interacting and performing tasks with Infobiotika. Participants proposed a few improvements to the current design but considered Infobiotika to be a potentially valuable aid in supporting faster decision-
making concerning antibiotic treatment options. The eye-tracking equipment applied in this test was extremely useful for analyzing and understanding users’ actions and helped the analyst discover additional issues concerning the differences between specialists’ and residents’ performance. However, there were some limitations in conducting the test and analyzing the data obtained from the usability testing. The test environment was not the real clinical setting and the number of participants was limited to 8, although they came from different hospitals and had not been involved in developing Infobiotika. Because of participants’ time constraints, we also limited the test sessions to 30 minutes each and could therefore not explore all features.
6. Conclusion Applying eye-tracking in the mobile laboratory setting proved to be a valuable complement to traditional usability methods and revealed many additional issues in terms of navigation and user behavior, especially when comparing specialists and residents. However, the results would need to be confirmed through evaluation with a larger number of participants in real clinical settings. Acknowledgements: This study was financed by the Health Informatics Centre, Karolinska Institutet. We also would like to thank the participating physicians for performing the test.
References
[1] Cars, O., Högberg, L. D., Murray, M., Nordberg, O., Lundborg, C. S., So, A. D., et al. (2008). Meeting the challenge of antibiotic resistance. BMJ, 337(3), 726-728.
[2] Harbarth, S., & Samore, M. H. (2005). Antimicrobial Resistance Determinants and Future Control. Emerg Infect Dis., 11(6), 794-801.
[3] Sintchenko, V., Coiera, E., & Gilbert, G. L. (2008). Decision support systems for antibiotic prescribing. Current Opinion in Infectious Diseases, 21(6), 573-579.
[4] Ash, J. S., Berg, M., & Coiera, E. (2004). Some Unintended Consequences of Information Technology in Health Care: The Nature of Patient Care Information System-related Errors. Journal of the American Medical Informatics Association, 11(2), 104-112.
[5] Tang, P., & Patel, V. (1994). Major issues in user interface design for health professional workstations: summary and recommendations. International Journal of Bio-Medical Computing, 34(1-4), 139-148.
[6] Kushniruk, A., Triola, M., Borycki, E., Stein, B., & Kannry, J. (2005). Technology Induced Error and Usability: The Relationship between Usability Problems and Prescription Errors When Using a Handheld Application. International Journal of Medical Informatics, 74(7-8), 519-526.
[7] Tobii Technology. (2010, November 15). What is eye-tracking? Retrieved November 15, 2010, from http://www.tobii.com/corporate/eye_tracking/what_is_eye_tracking.aspx
[8] Pool, A., & Ball, L. J. (2006). Eye tracking in HCI and usability research. In C. Ghaoui, Encyclopedia of Human Computer Interaction (pp. 211-219). Hershey PA: Idea Group Reference.
[9] Sittig, D. F., Wright, A., Osheroff, J., Middleton, B., Teich, J. M., Ash, J. S., et al. (n.d.). Grand challenges in clinical decision support. Journal of Biomedical Informatics, 41(2), 387-392.
[10] ISO 9241-11 (1998). Ergonomic requirements for office work with visual display terminals, Part 11: Guidance on usability. Geneva: International Organisation for Standardization.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-950
Design of a Mobile, Safety-Critical In-Patient Glucose Management System Bernhard HÖLLa,1, Stephan SPATa, Johannes PLANKb, Lukas SCHAUPPb, Katharina NEUBAUERb, Peter BECKa, Franco CHIARUGIc, Vasilis KONTOGIANNISc, Thomas R. PIEBERb, Andreas HOLZINGERd a JOANNEUM RESEARCH Forschungsges.m.b.H., Institute for Biomedicine and Health Sciences, Graz, Austria b Medical University of Graz, Department of Internal Medicine, Division of Endocrinology and Nuclear Medicine, Graz, Austria c Foundation for Research and Technology - Hellas, Institute of Computer Science, Computational Medicine Laboratory, Heraklion, Crete, Greece d Medical University of Graz, Institute of Medical Informatics, Research Unit Human-Computer Interaction, Graz, Austria
Abstract. Diabetes mellitus is one of the most widespread diseases in the world. People with diabetes usually have long stays in hospitals and need specific treatment. In order to support in-patient care, we designed a prototypical mobile in-patient glucose management system with decision support for insulin dosing. In this paper we discuss the engineering process and the lessons learned from the iterative design and development phases of the prototype. We followed a user-centered development process, including real-life usability testing from the outset. Paper mock-ups in particular proved to be very valuable in gaining insight into the workflows and processes, with the result that user interfaces could be designed exactly to the specific needs of the hospital personnel in their daily routine. Keywords. Diabetes Mellitus, User-Computer Interface, Mobile Computing, Computer-Assisted Drug Therapy, Workflow
1. Introduction Diabetes mellitus is one of the most widespread diseases in the world. People with diabetes are more likely to be hospitalized and to have longer durations of hospital stay than those without diabetes. It is estimated that 22% of all in-patient days are accounted for by people with diabetes and that in-patient care accounts for half of the total US medical expenditures associated with this disease [1]. These findings are due, in part, to the continued worldwide expansion of type 2 diabetes. The in-patient glycemic control of acutely ill patients with diabetes is often considered secondary in importance. However, studies demonstrate that in-patient hyperglycaemia is an important marker of poor clinical outcome and mortality among diabetic patients and that aggressive treatment of diabetes and
Corresponding author: Bernhard Höll, JOANNEUM RESEARCH Forschungsges.m.b.H., Institute for Biomedicine and Health Sciences, Elisabethstraße 11a, 8010 Graz, Austria, E-mail: [email protected]
hyperglycaemia results in reduced mortality and morbidity [2]. Therefore, patients suffering from diabetes require continuous glycemic control during in-patient stays including close monitoring of blood glucose and determination of suitable treatment strategies. In this paper we discuss the requirement engineering process and the lessons learned from the iterative design and development phases of a mobile in-patient glucose management system with decision support for insulin dosing.
2. Methods The development of mobile applications in a medical context presents engineers with a complex task. In addition to the aim of supporting workflow requirements, usability and clinical safety are important issues to consider when designing the user interfaces and system functionalities. Therefore, the consistent pursuit of a user-centered design is a crucial condition and must include an understanding of the users, their environment and the context in which the application is used [3,4,5,6,7]. A team consisting of physicians and nurses of the Division of Endocrinology and Metabolism at the Medical University of Graz, as well as engineers from JOANNEUM RESEARCH and the Medical University of Graz, was established to develop the user interface design and the functionalities of the in-patient glucose management system, tailored to the needs of the end-users. Project partners from the Foundation for Research and Technology - Hellas performed a first design of the conceptual data model starting from the first user interface mock-up and organised external reviews of the obtained results. We discussed each design decision relating to the user interface, system functionality and the underlying protocol for decision support for insulin dosage within this team. We integrated the results into an intuitive software system based on essential, but user-tailored functionalities. Figure 1 shows the iterative development process of the in-patient glucose management system. In the first step we interviewed physicians and nurses about current treatment workflows for type 2 diabetic patients at the Division of Endocrinology, in order to understand and determine workflow patterns for medical decision-making and the problems and risks associated with glucose management. We generated a status report describing current workflows, based on various patient scenarios, as a starting point for the target analysis. We then identified and discussed relevant publications related to the ideal in-patient management of hyperglycaemia, including validated glucose control protocols, with diabetes specialists [8,9,10,11]. The protocol based on a basal/bolus regimen as provided by the RABBIT 2 trial proved to be the most promising for the clinical diabetes experts due to its straightforward advice for insulin dosing, which was shown to be associated with improved outcomes. In the final step, we extracted the most important user requirements from the status report and the findings of the protocol reviews. These were then implemented in a software prototype.
RABBIT 2: Randomized Study of Basal-Bolus Insulin Therapy in the In-patient Management of Patients With Type 2 Diabetes
Figure 1. Process chart of prototype design
The last step of the first iteration of the development process involved performing real-life usability trials with three diabetes specialists as participants. We used the thinking-aloud testing method [12] followed by a semi-structured interview. We documented all tests on video, interpreted the results and integrated suggested improvements into the revised requirement specification document. The second design iteration consisted of integrating the test results into the requirement set and developing a detailed user interface for the application using paper mock-ups. The development process is accompanied by continuous interdisciplinary meetings regarding risk identification and evaluation and the setting of appropriate measures to avoid these risks. Emphasis is placed on both technical and medical risks.
3. Results This section reports the results of the prototype development and the usability tests. In the first development iteration, we demonstrated the identified basic functionality using Microsoft Excel with VBA scripts. Microsoft Excel was chosen due to its extensive options for displaying charts, the quick and easy visualization of glucose and insulin profiles, as well as visual alarm limits. The user evaluation of the first glucose management prototype resulted in an extensive requirement specification document with the following main conclusions, which formed the starting point for the second design iteration:
• Execution of the application via a mobile device to allow activities to be performed directly at the patient’s bed.
• No data storage on the mobile device; wireless communication via web services to an external server, on which the data should be placed.
• Documentation and visualization of the most important parameters relating to diabetes care on the mobile device.
• Automated decision support for insulin dosage.
• Reminders for open tasks through active task management.
• Avoidance of manual (and multiple) inputs; a connection to the hospital and laboratory information systems is necessary in order to transfer administrative data automatically.
We designed the revised user requirements using Visio stencils for Android in a paper mock-up storyboard covering all functionalities of the glucose management system and again discussed the results in the team. Based on the design and functionality identified through the mock-ups, we are currently implementing an Android-based mobile client application, which communicates via web services (Apache CXF) with a Java-based web server running on Apache Tomcat.
The server application has been implemented using Hibernate and the Spring Framework, based on a model-driven design and development approach, and transfers data securely from and to the HIS of the hospital via an HL7 v2.4 interface. Figure 2 shows the already implemented main screen of the mobile in-patient glucose management application with the visualization of the most important measurement and insulin administration parameters. In addition, the figure shows the main functionalities of the application. ‘Patient List’ presents all patients admitted to the ward, including a filter function to show only patients enrolled for glucose management. ‘Open Tasks’ reminds users of the system to perform all recommended tasks, such as ‘Blood Glucose Measurement’, ‘Insulin Administration’ or ‘Therapy Adjustment’, in time. ‘Blood Glucose Measurement’ enables users to retrieve the blood glucose value directly from the laboratory information system and documents the measured values in the glucose management system. Physicians approve the current therapy for the patient (e.g. insulin medication, current insulin dosage, hypoglycemia limits) using the function ‘Therapy Adjustment’. Finally, the decision support protocol for insulin dosing suggests the insulin dosage needed by the patient, based on the measured blood glucose values and administered food, using the function ‘Insulin Administration’.
Figure 2. Screenshot of the Android-based Prototype
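To illustrate the kind of entities and 'Open Tasks' reminder logic described above, here is a simplified, hypothetical Python sketch. The class names, fields, measurement interval and glucose threshold are assumptions made for illustration only; they do not reflect the actual prototype's data model or its insulin-dosing protocol.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class GlucoseMeasurement:
    taken_at: datetime
    value_mg_dl: float                      # value retrieved from the laboratory system

@dataclass
class Patient:
    patient_id: str
    enrolled: bool                          # enrolled for glucose management?
    measurements: List[GlucoseMeasurement] = field(default_factory=list)

def open_tasks(p: Patient, now: datetime,
               interval: timedelta = timedelta(hours=6)) -> List[str]:
    """Return 'Open Tasks'-style reminders for one patient (illustrative rules only)."""
    if not p.enrolled:
        return []
    tasks = []
    last: Optional[datetime] = max((m.taken_at for m in p.measurements), default=None)
    if last is None or now - last >= interval:
        tasks.append("Blood Glucose Measurement")
    if p.measurements and p.measurements[-1].value_mg_dl > 180:   # assumed threshold
        tasks.append("Therapy Adjustment")
    return tasks

now = datetime(2011, 4, 30, 12, 0)
ward = [Patient("A", True, [GlucoseMeasurement(now - timedelta(hours=7), 210.0)]),
        Patient("B", False)]
for p in ward:                              # 'Patient List' with a filter on enrolment
    print(p.patient_id, open_tasks(p, now))
```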
4. Conclusions and Future Research In this paper, we presented the user-centered design process of a safety-critical in-patient glucose management system. Medical end-users have been involved in every step of the design phase. In other words, clinicians have conceptualized the design of the system. Engineers now have to implement the design in an optimal software solution. Our experiences through the first and second iteration steps show that clinicians and engineers have very different points of view concerning software. While engineers often focus on gathering as much functionality as possible, clinicians prefer software which offers only the required base functionality but a well-designed user interface, tailored to current workflow patterns. A problem we encountered during the requirement analysis is that end-users, without a trigger, often do not know what
specific functions should be provided by a software solution. Therefore, as a result of the first iteration step, a Microsoft Excel prototype was used as a trigger to give clinicians a preliminary idea as to how an in-patient glucose management system, including a computerized decision support, could look. After the presentation of the prototype, the participants were able to give a clearer idea of their requirements for a glucose management system, which were then used as inputs for the second iteration step. We used paper mockups of the second iteration step, which simulate the full system functionality on a mobile device, as the next trigger. At the moment we are implementing the server application and an Android-based mobile prototype, which already contains full functionality. We will test the resulting prototype of the second iteration in a clinical study. Acknowledgements. This work was partly funded by the E. C. under the 7th Framework Program in the area of Personal Health Systems under Grant Agreement no. 248590. [13]
References
[1] Moghissi, E.S., Korytkowski, M.T., Dinardo, M., Einhorn, D., Hellman, R., Hirsch, I.B., et al., American Association of Clinical Endocrinologists and American Diabetes Association Consensus Statement on Inpatient Glycemic Control, Endocrine Practice 15 (2009), 1-17.
[2] Clement, S., Braithwaite, S.S., Magee, M.F., Ahmann, A., Smith, E.P., Schafer, R.G. and Hirsch, I.B., Management of Diabetes and Hyperglycemia in Hospitals, Diabetes Care 27 (2004), 553-591.
[3] Hameed, K., The application of mobile computing and technology to health care services, Telematics and Informatics 76 (2007), 66-77.
[4] Wu, J., Wang, S. and Lin, L., Mobile computing acceptance factors in the healthcare industry: A structural equation model, International Journal of Medical Informatics 76 (2007), 66-77.
[5] Holzinger, A. and Errath, M., Mobile computer Web-application design in medicine: some research based guidelines, Universal Access in the Information Society 6 (2007), 31-41.
[6] Holzinger, A., Hoeller, M., Bloice, M. and Urlesberger, B., Typical Problems with developing mobile applications for health care: Some lessons learned from developing user-centered mobile applications in a hospital environment, International Conference on E-Business (ICE-B 2008), Porto (PT), IEEE (2008), 235-240.
[7] Svanaes, D., Alsos, O.A. and Dahl, Y., Usability testing of mobile ICT for clinical settings: Methodological and practical challenges, International Journal of Medical Informatics 79 (2010), 24-34.
[8] Inzucchi, S.E., Management of Hyperglycemia in the Hospital Setting, New England Journal of Medicine 355 (2006), 1903-1911.
[9] Umpierrez, G.E., Smiley, D., Zismann, A., Prieto, L.M., Palacio, A., Ceron, M., Puig, A. and Mejia, R., Randomized Study of Basal-Bolus Insulin Therapy in the Inpatient Management of Patients with Type 2 Diabetes (RABBIT 2 Trial), Diabetes Care 30 (2007), 2181-2186.
[10] Umpierrez, G.E., Hor, T., Smiley, D., Temponi, A., Umpierrez, D., Ceron, M., et al., Comparison of Inpatient Insulin Regimens with Detemir plus Aspart Versus Neutral Protamine Hagedorn plus Regular in Medical Patients with Type 2 Diabetes, Journal of Clinical Endocrinology Metabolism 94 (2009), 564-569.
[11] Korytkowski, M.T., Salata, R.J., Koerbel, G.L., Selzer, F., Karslioglu, E., Idriss, A.M., Lee, K., Moser, A.J. and Toledo, F.G.S., Insulin therapy and glycemic control in hospitalized patients with diabetes during enteral nutrition therapy: a randomized controlled clinical trial, Diabetes Care 32 (2009), 594-596.
[12] Holzinger, A. and Leitner, H., Lessons from Real-Life Usability Engineering in Hospital: From Software Usability to Total Workplace Usability, in Holzinger, A. & Weidmann, K.-H. (Eds.), Empowering Software Quality: How can Usability Engineering reach these goals?, Vienna, Austrian Computer Society (2005), 153-160.
[13] http://www.reactionproject.eu/news.php, last visit: 2011-04-30.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-955
Facilitating the Iterative Design of Informatics Tools to Advance the Science of Autism David R. KAUFMANa , Patrick CRONIN a , Leon ROZENBLIT b , David VOCCOLA b , Amanda HORTON b , Alisabeth SHINEa , and Stephen B. JOHNSON c a Department of Biomedical Informatics, Columbia University, New York, NY, USA b Prometheus Research, LLC, New Haven, CT, USA c Simons Foundation, New York, NY, USA
Abstract. This paper describes a usability evaluation study of an innovative first generation system (Data Dig) designed to retrieve phenotypic data from the large SFARI data set of 2700 families each of which has one child affected with autism spectrum disorder. The usability methods included a cognitive walkthrough and usability testing. Although the subjects were able to learn to use the system, more than 50 usability problems of varying severity were noted. The problems with the greatest frequency resulted from users being unable to understand meanings of variables, filter categories correctly, use the Boolean filter, and correctly interpret the feedback provided by the system. Subjects had difficulty forming a mental model of the organizational system underlying the database. This precluded them from making informed navigation choices while formulating queries. Clinical research informatics is a new and immensely promising discipline. However in its nascent stage, it lacks a stable interaction paradigm to support a range of users on pertinent tasks. This presents great opportunity for researchers to further this science by harnessing the powers of user-centered iterative design. Keywords. Usability evaluation, clinical research informatics, iterative design.
1. Introduction Recent advances in basic and applied clinical science are increasingly being translated into clinical practice and are affording greater opportunities for improved patient care across a broad spectrum of medical conditions. The rapid pace and scope of this research have necessitated the development of new information technologies to support data integration, management and workflow. Clinical research informatics (CRI) is a burgeoning discipline whose efforts are focused at the intersection of clinical research and biomedical informatics [1]. CRI affords new opportunities to make tangible progress on longstanding, seemingly intractable clinical problems by leveraging new technologies for exploring very large data sets for prediction, visualization, and hypothesis generation [2]. Autism spectrum disorder (ASD) is a heterogeneous syndrome characterized by a multitude of behavioral, social and communication problems. The scope and complexity of ASD require the development of large and comprehensive collections of individuals and their families to facilitate genotype-phenotype studies [3]. The Simons Foundation Autism Research Initiative (SFARI) has
established a permanent repository of phenotypic and genetic data set from 2,700 families, each of which has exactly one child affected with ASD. SFARI Base, a web-based platform developed in collaboration with Prometheus Research LLC, provides access to scientific data and associated information management and analytic tools to advance the science of autism[4]. The primary function of SFARI Base is to gather scientific data and biospecimens from studies conducted at clinical sites, and to pool the results of analyses carried out on these materials. It not only affords researchers the possibility for accessing data to test hypotheses or explore relationships in a data set, but also may offer new methods for discovery and hypothesis generation. However, even the best designed systems present difficulties for users. It is increasingly recognized that the usability and learnability of a system are critical determinants of both the acceptance of a technology and its efficacy as a productive tool [5]. The objective of the work reported in this paper is to extend a usability and iterative design framework to clinical research informatics tools. CRI is a new area of research and presently lacks an established paradigm for supporting user interaction. At present, there is a paucity of usability research in this area.
2. Evaluation of SFARI Base 2.1. Usability Framework The research is grounded in a cognitive engineering framework, which is an interdisciplinary approach to the development of principles, methods and tools to assess and guide the design of computerized systems to support human performance. The approach is centrally concerned with the analysis of cognitive tasks. The objective is not only to characterize deficiencies, but to identify the ways in which resources (e.g., through redesign or training) can help structure task performance and guide accelerated learning (e.g., via the use of better cues that signal the next step). The framework focuses on the sorts of competencies and knowledge required by users to accomplish tasks in knowledge-rich domains. The approach incorporates both usability inspection methods and usability testing [6-8]. Usability inspection methods are performed by trained analysts and usability testing involves the use of representative subjects. Through a process of triangulation, these methods are likely to reveal a wider range of problems that impede productive use of a system than any single method alone[5]. 2.2. Evaluation and Iterative Design of SFARI Base SFARI Base provides a suite of database tools that serves a broad spectrum of users including: scientific investigators conducting autism research, research coordinators, data managers, and autism data curation experts. Each of these users plays a different role in the research enterprise, has different needs and possesses different skills. The long term objective is to discover and characterize the ways in which the suite of SFARI tools can be used productively to advance the science of autism. The methods employed in our research program include: a) cognitive walkthrough, b) heuristic evaluation, c) usability testing, d) web-based survey of scientists, and e) participant design study. In this paper, we present data from the first in a series of usability evaluations of SFARI Base tools.
2.3. Usability Evaluation of Data Dig Data Dig is a database query tool designed to retrieve phenotypic data from the large SFARI data set of 2700 families across more than 6000 variables. Investigators can search for a particular subset of the data by selecting variables of interest. For example, a researcher may only be interested in male probands (subjects with autism) under the age of 10 with a particular range of scores on one of the autism diagnostic tests or behavior checklists. Variables can be discovered by browsing through expert-created variable groups (instantiated via a tagging model) or by searching for substrings within variable names, titles, and descriptions. A cognitive walkthrough (CW) was performed on the Data Dig tool to identify issues with the interface. A CW is a task-analytic method that represents the goals and subgoals for each task, each step or action to be taken, the necessary knowledge, and the feedback presented to the user (i.e., what is visible on the display) after an action has been completed. At each step, we can identify potential problems in the interface or in the cognitive demands of the task. Task complexity can be revealed in variables such as: 1) the number of actions needed, 2) the number of screen transitions, 3) the time needed to complete a task and 4) the required chunks of knowledge. Two experienced investigators conducted the CW, and 30 unique usability issues and problems were revealed. They were subsequently coded according to a modified version of Nielsen’s usability heuristics. In addition, a panel of three researchers independently ranked the issues according to the severity of the problem on a 5-point scale. Usability testing was performed with three individuals, including a psychiatrist who studies autism and two informaticists who had extensive experience working with clinical and scientific databases. Each user had different levels of domain and system knowledge, which became apparent during the testing. Recent autism-related literature was surveyed to create sample questions that could be answered by querying the SFARI database using Data Dig. The questions were divided into three levels of difficulty based on the query complexity. Each subject was given a ten-minute instructional period by the experimenter. Then the users formulated queries to answer the questions using Data Dig and performed a think-aloud protocol while the interaction was recorded using Morae video-analytic usability software. The subjects were instructed that their goal was to identify the data in the SFARI database that would allow them to answer specific questions. The following are samples of these questions, reflecting different levels of complexity: 1. How many probands (autistic children) are in the database? 2. Can you get the Autism Diagnostic Interview – Revised (ADI-R) total score? 3. Is there data on the proband’s birth? Specifically, I want to know the proband’s head circumference and weight at birth. Also, can you tell if the proband was born vaginally or by C-section? The users were able to answer most of the queries with some help from the moderator. However, the users experienced numerous difficulties learning how to master the different elements of the system. We documented more than 50 usability problems ranging from relatively minor to more serious ones that impeded effective and efficient use of the tool to answer queries.
The problems with the greatest frequency resulted from users being unable to understand meanings of variables, filter categories correctly, use the Boolean filter, and correctly interpret the feedback provided by the system. Subjects had difficulty forming a mental model of the
organizational system underlying the database. This precluded them from making informed navigation choices while formulating queries. The usability issues identified through the cognitive walkthrough were matched with instances recorded through the usability testing. The categories were discussed by the team, and issues relevant to improving the functionality of Data Dig were identified. The modifications identified were designed to improve user’s mental model of the system, query construction proficiency, ability to correctly interpret system feedback, and limit user frustration. Our objective was to identify tractable changes that could be implemented for the next iteration. We proposed recommendations for application improvements from four major categories: navigation, feedback, enhancing functionality, and consistency. One of the problems is illustrated in Figure 1.
Figure 1: Lack of visual cues to mark selected variables.
The first step in using the tool is to select a set of variables. The problem, as illustrated in Figure 1, is that it is difficult for an individual to determine which variables they had selected because there is no visual feedback. In the picture above, the user had selected 4 of the 5 available variables that are displayed in Step 2 (specifying filter criteria on scores of a measure). We observed three instances of users failing to select variables because they had thought that it had been previously selected. We proposed 3 possible solutions: 1. a checkbox could be added next to the variable in step 1 to indicate if it had been selected; 2. Once a variable had been selected it could disappear in step 1 after it was displayed in step 2, 3. The link with the variable in step 1 could change to a different shade once it has been selected. Any of these options could reduce the barrier to completing the variable selection process. The Boolean filters also posed significant challenges for the users. The problem is that there were 10 different methods of Boolean filtering, and the methods were always available even when they did not apply. For example, if a user set the filter to less than male, then all of the females will be displayed. There was no method
of setting multiple Boolean filters on a single variable. For example, a user cannot identify all individuals between the ages of 4 and 12. There was also no ability to have a disjunctive (OR) criterion between different variables; all filters added automatically assume conjunctive (AND) conditions on the data. Thus an individual could not filter on all individuals with an ADI-R diagnosis of autism OR an ADOS diagnosis of autism (the two most commonly used diagnostic measures).
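To make the missing filter semantics concrete, here is a small hypothetical Python sketch of range filters and AND/OR combinators of the kind the users were asking for. The variable names follow the examples in the text; this is not Data Dig code, only an illustration of the desired query behavior.

```python
from typing import Callable, Dict, List

Record = Dict[str, object]

def between(var: str, lo: float, hi: float) -> Callable[[Record], bool]:
    return lambda r: var in r and lo <= r[var] <= hi      # range criterion on one variable

def equals(var: str, value: object) -> Callable[[Record], bool]:
    return lambda r: r.get(var) == value

def any_of(*criteria):   # disjunctive (OR) combination across variables
    return lambda r: any(c(r) for c in criteria)

def all_of(*criteria):   # conjunctive (AND) combination
    return lambda r: all(c(r) for c in criteria)

probands: List[Record] = [
    {"age": 9,  "adi_r_diagnosis": "autism",   "ados_diagnosis": "spectrum"},
    {"age": 14, "adi_r_diagnosis": "spectrum", "ados_diagnosis": "autism"},
]

query = all_of(
    between("age", 4, 12),
    any_of(equals("adi_r_diagnosis", "autism"), equals("ados_diagnosis", "autism")),
)
print([p for p in probands if query(p)])   # only the 9-year-old matches
```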
3. Discussion Clinical research informatics is a discipline concerned with providing new tools to advance clinical science and practice. Although this work is immensely promising, we presently lack a stable interaction paradigm for enabling scientific researchers and other users to access and analyze large data sets. This presents significant challenges as well as opportunities for human-computer interaction researchers to contribute to the advancement of effective and enabling tools. The study presented in this paper was a usability evaluation of an innovative CRI database tool. The study documented a range of significant usability problems and presented potential solutions. Researchers at Columbia University and the Simons Foundation are collaborating closely with developers at Prometheus Research in the iterative design process. Data Dig represented a first generation application and subsequent applications proved to be more robust and easier to use as determined by usability evaluations. Future work includes participatory design studies involving scientists in the process of fashioning prototypes. The findings from this work could result in more effective tools and also contribute to the development of a stable interaction platform and thus serve to advance this important new discipline.
References
[1] Embi PJ, Payne PR. Clinical research informatics: challenges, opportunities and definition for an emerging domain. J Am Med Inform Assoc. 2009;16:316-27.
[2] Lehmann CU, Kim GR, Johnson KB, Lehmann HP, Law PA, Tien AY. Pediatric Research and Informatics. In: Pediatric Informatics. Springer New York; 2009, p. 439-454.
[3] Johnson SB, Whitney G, McAuliffe M, et al. Using global unique identifiers to link autism collections. J Am Med Inform Assoc. 2010;17:689-95.
[4] Simons Foundation Autism Research Initiative (SFARI). Available at http://sfari.org
[5] Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform. 2009;78:340-53.
[6] Kaufman DR, Mehryar M, Chase H, et al. Modeling knowledge resource selection in expert librarian search. Stud Health Technol Inform. 2009;143:36-41.
[7] Kaufman DR, Patel VL, Hilliman C, et al. Usability in the real world: assessing medical information technologies in patients' homes. J Biomed Inform. 2003;36:45-60.
[8] Yu H, Lee M, Kaufman D, et al. Development, implementation, and a cognitive evaluation of a definitional question answering system for physicians. J Biomed Inform. 2007;40:236-51.
[9] Nielsen J. Usability Engineering. Boston: Academic Press; 1993.
[10] Polson PG, Lewis C, Rieman J, Wharton C. Cognitive Walkthroughs - a Method for Theory-Based Evaluation of User Interfaces. International Journal of Man-Machine Studies. 1992;36:741-773.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-960
Evaluation of Computer Usage in Healthcare Among Private Practitioners of NCT Delhi
GANESHKUMAR Pa,1, ARUN KUMAR SHARMAb and RAJOURA OPb a Department of Community Medicine, SRM Medical College Hospital & Research Centre, Kattankulathur, Tamilnadu, India b Department of Community Medicine, UCMS & GTB Hospital, Delhi
Abstract. Objectives: 1. To evaluate the usage and knowledge of computers and Information and Communication Technology in health care delivery by private practitioners. 2. To understand the determinants of computer usage by them. Methods: A cross-sectional study was conducted among private practitioners practising in three districts of the NCT of Delhi between November 2007 and December 2008, selected by a stratified random sampling method; their knowledge and usage of computers in health care and the determinants of computer usage were evaluated with a pre-coded, semi-open-ended questionnaire. Results: About 77% of the practitioners reported having a computer and access to the internet. Computer availability and internet access were highest among super-speciality practitioners. Practitioners who had attended a computer course were 13.8 times [OR: 13.8 (7.3 - 25.8)] more likely to have installed an EHR in the clinic. Technical issues were the major perceived barrier to installing a computer in the clinic. Conclusion: Practice speciality, previous attendance of a computer course and the age at which practitioners started using a computer influenced their knowledge about computers. Speciality of the practice, presence of a computer professional in the family and gender were the determinants of computer usage. Keywords: Medical informatics applications, Attitude to computers, Computer utilization, Health Personnel, Cross-sectional studies, India
1. Introduction The Indian health system is straining to deal with increasing cost and demand pressures and a shortage of skilled health care workers down to the community level. Given this reality, maximum impact on health outcomes can be achieved only where scarce financial and human resources are deployed as effectively as possible. One strategy by which this can be achieved is the implementation of world-class e-health capability. Further, information and communication technology (ICT) has been proposed as an important strategy to combat medical errors and quality-of-care deficits [1]. In India, 70% of health care services are provided by the private sector [2], which is not integrated with the government system. Hence the application of
Corresponding Author: Dr.P.Ganeshkumar, Assistant professor, Department of Community Medicine, SRM Medical College Hospital & Research Centre , SRM University, Kattankulathur - 603203, Tamilnadu, India. Telephone number: +91-44-9840640483, +91-44-45030120 Fax number : +91-44-2745 5106. E-mail: [email protected]
ICT in this sector remains inadequate, and the large amount of information about its health services is not shared with or reported to any government body, which is a major gap in the regulation of healthcare in our country. We did not find an Indian study pertaining to the computerization of health services in the private sector. Hence it became an important question to find out the status of computerization and the use of e-health among private practitioners. The study was therefore designed to evaluate the usage and knowledge of computers and Information & Communication Technology (ICT) among private practitioners and to evaluate the determinants of computer usage.
2. Materials and Methods This cross-sectional study was conducted in three randomly selected districts out of the 10 administrative districts of New Delhi, the capital city of India, among clinic-based private medical practitioners from November 2007 to December 2008. Private medical practitioners are those who are self-employed and not attached to any hospital. Only modern medicine practitioners and doctors who had practiced for at least one year in the same location were included in the study. The Ministry of Health & Family Welfare, Govt. of India, defines modern medical practitioners as those who practice allopathic medicine, in contrast to traditional Indian medicine. Due to the lack of previous studies in India, we decided on a convenient sample size; hence data were collected from 600 practitioners. In order to make the sample representative of the private practitioners registered with the Indian Medical Association (IMA), a stratified random sampling method was used: as a first step, we randomly selected 3 out of the 10 administrative districts. Each eligible practitioner was assigned a digital code, and from each district 200 practitioners were randomly selected using a random number table. After prior permission had been sought and informed written consent obtained, the participants were interviewed with a structured, pre-tested, pre-coded, investigator-administered interview schedule which collected information about their usage and knowledge of computers, potential barriers to using a computer and determinants of owning a computer in the clinic. The information thus collected was entered into an MS Excel spreadsheet and analysed using SPSS software; descriptive tables were generated and logistic regression analysis was performed to demonstrate the findings.
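A minimal Python sketch of the two-stage stratified random selection described above (random selection of districts followed by random selection of 200 registered practitioners per district) is given below. The district labels and registry sizes are placeholders, not the study's actual sampling frame.

```python
import random

random.seed(1)
# Placeholder registry: 10 administrative districts, each with a list of coded practitioners.
districts = {f"district_{i}": [f"D{i}_doc_{j}" for j in range(1, 901)] for i in range(1, 11)}

selected_districts = random.sample(sorted(districts), 3)                     # stage 1: 3 of 10 districts
sample = {d: random.sample(districts[d], 200) for d in selected_districts}   # stage 2: 200 per district

print(selected_districts, sum(len(v) for v in sample.values()))              # total n = 600
```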
3. Results
3.1. Demographic and Professional Details
In the study population, 85.5% were male; ages ranged from 29 to 62 years, with a mean of 45.46±5.52 years. Only 1.8% were super-speciality practitioners, whereas MBBS graduates formed the largest group (58%). Nearly one tenth of the practitioners had a computer professional in the family, and nearly half had been practicing for more than 10 years and were consulting for more than 4 hours per day.
3.2. Usage of Computers in the Clinic
About 77% of the practitioners reported having a computer, but only 63 (10.5%) had installed it in their clinic; about three-fourths had access to the internet, but only 10 (1.5%) had it in their clinic. Although 22% of all respondents knew about EHRs, only 8.8% were using one in their clinic, and almost all of them had appointed separate staff for data entry into the EHR. Reported usage of EHRs was very limited, mostly for registration and for maintaining a list of the patients consulted, with limited information about them. General surgeons and general practitioners were the least common users of EHRs, while super-speciality practitioners were the most common users (see Table 1).

Table 1. Distribution of practice speciality by presence of EHR and knowledge about computers.

Practice speciality  | Presence of EHR in the clinic (n=53), number (%) | Computer knowledge mean score (mean±SD)
General practice     | 20 (5.7)                                         | 2.26±1.05
General surgery      | 1 (3.6)                                          | 2.48±1.04
Internal medicine    | 11 (17.2)                                        | 2.42±1.07
Super speciality     | 16 (24.6)                                        | 3.1±0.98
Others (Paeds, O&G)  | 5 (5.3)                                          | 2.43±1.03
Statistical test     | X²: 32.22, df: 4, p value: 0.000                 | ANOVA SSB: 40.02, df: 3, p value: 0.000
Factors such as gender (p=0.056), number of years of practice (p=0.211), age at first use of computers (p=0.834) and income (p=0.233) did not influence EHR usage in the clinic, whereas practitioners who had attended a computer course were 13.8 times [OR: 13.8 (7.3-25.8)] more likely to have installed an EHR in the clinic.
3.3. Knowledge of Computers
A composite score of knowledge about computers was calculated by giving weightages of 60%, 10% and 30% to software, hardware and internet respectively; the final score was obtained by summing the weighted scores of the knowledge questions. The mean scores of knowledge about hardware, software and internet were 2.19±1.32, 2.22±1.46 and 2.95±1.35 respectively, showing that knowledge of the internet was higher among practitioners than knowledge of hardware and software. The mean score of male respondents was significantly higher than that of female respondents, and one-fourth of the practitioners who scored more than 3.3 were less than 42 years old. Super-speciality practitioners were also significantly more knowledgeable than others (see Table 1). Availability of a computer (p=0.000) and previous attendance of a computer course (p=0.000) positively influenced knowledge of computers.
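As a minimal illustration of the weighting described above (the raw section scores used here are simply the reported group means, not any individual respondent's answers):

    def composite_knowledge_score(software, hardware, internet):
        # Weights from the text: 60% software, 10% hardware, 30% internet
        return 0.6 * software + 0.1 * hardware + 0.3 * internet

    # Example using the reported mean section scores as hypothetical inputs
    print(round(composite_knowledge_score(software=2.22, hardware=2.19, internet=2.95), 2))  # 2.44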
3.4. Potential Barriers to Using a Computer
Fifteen questions with a Likert scale were used to assess perceived barriers to using a computer. Technical issues were considered the major perceived barrier and logistics-related issues the least. Most practitioners (86.3%) thought that lack of time was the major barrier to installing a computer in their clinic, and nearly half of the practitioners disagreed that the high maintenance cost of a computer or data entry being a cumbersome process could be reasons for not installing a computer in their clinic.
3.5. Predictors of Owning a Computer
Logistic regression analysis was used to find the predictors of owning a computer among the private practitioners. Super-speciality practitioners were 8 times [OR: 8.18 (2.57-9.99)] more likely to own a computer than general practitioners, the presence of a computer professional in the family increased the likelihood by 4 times, and females were 50% less likely to own a computer than males (see Table 2).

Table 2. Predictors of owning a computer.

Indicator                                        | Adjusted odds ratio | P value
Speciality practice                              | 1.9 (1.15-3.12)     | 0.011
Super speciality practice                        | 8.18 (2.57-5.99)    | 0.000
Presence of computer professional in the family  | 3.93 (1.67-9.26)    | 0.002
Female practitioners                             | 0.493 (0.27-0.87)   | 0.016
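For readers who want to reproduce this type of analysis, a hedged sketch of fitting such a logistic regression and reporting adjusted odds ratios with 95% confidence intervals is given below (Python with statsmodels); the data are purely synthetic stand-ins, so the resulting estimates will not match Table 2.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 600  # study sample size

    # Synthetic 1/0 indicators standing in for the study variables
    df = pd.DataFrame({
        "super_speciality": rng.integers(0, 2, n),
        "it_professional_in_family": rng.integers(0, 2, n),
        "female": rng.integers(0, 2, n),
    })
    logit_p = -1.2 + 2.0 * df["super_speciality"] + 1.4 * df["it_professional_in_family"] - 0.7 * df["female"]
    df["owns_computer"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(df[["super_speciality", "it_professional_in_family", "female"]])
    fit = sm.Logit(df["owns_computer"], X).fit(disp=0)

    print(np.exp(fit.params))      # adjusted odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals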
4. Discussion
In our study of 600 practitioners there was an over-representation of male practitioners, which may reflect the larger number of male practitioners in the study area. The workload among practitioners in the study area was considerably high, and only 10.8% of the doctors had a computer professional in their family, which suggests that the practitioners had little chance of being influenced by professionals outside health care. We observed that 77% of practitioners reported having a computer, but only 63 (10.5%) had installed it in their clinic and only 10 (1.5%) had an internet connection in their clinic. Computer usage in private practice was very low, which may be due to reasons such as these technologies being less
understood and not given due priority. The absence of even a single published research paper on this issue in India itself illustrates the poor adoption and limited understanding of these technologies in health care. Of the 600 practitioners, only 53 (8.8%) had an EHR in the clinic. Although gender, number of years of practice, age at first use of a computer and income did not influence EHR usage, practitioners who had attended a computer course were 13.8 times more likely to have an EHR in their clinic. Awareness of maintaining patient records in electronic form (EHR) existed among less than 10% of the respondents, which shows a poor understanding of the use of computers in the clinic. This indicates that existing knowledge gained through training strongly and positively influences the adoption of a new technology in the clinic. Nearly half of the practitioners had satisfactory knowledge of computers, and overall knowledge of the internet was high. Super-speciality practitioners scored higher in knowledge about computers than other categories of practitioners. Not only did the availability of a computer and the internet favour an increase in knowledge of computers; the age at first use of a computer and previous attendance of a computer course also positively influenced knowledge about computers. Being a cross-sectional study, it was difficult to ascertain whether having knowledge about computers increased its usage or vice versa. Most of the practitioners (83.3%) thought that lack of time was the major barrier to implementing a computer in their clinic. When the barriers were scored and categorized, technical issues emerged as the major perceived barrier among the practitioners. Logistic regression analysis was carried out to identify predictors of owning a computer among private practitioners in our study. Practice speciality, income, presence of a computer professional in the family and gender were significant determinants of owning and using a computer. Super-speciality practitioners were 8 times, and practitioners with a computer professional in the family 4 times, more likely to own a computer. At present, computer usage in health care among private practitioners is extremely limited, and the only purpose for which they use an EHR is to maintain a list of the patients they consult.
5. References
[1] Institute of Medicine. Crossing the Quality Chasm. Washington DC: National Academy Press, 2001.
[2] Ministry of Statistics and Programme Implementation, Government of India. Morbidity, Health Care and the Condition of the Aged: Jan-June 2004. NSS 60th round. Report No. 507. New Delhi, March 2006.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-965
Contextual Inquiry Method for User-Centred Clinical IT System Design
Johanna VIITANEN
Strategic Usability Research Group, Aalto University, Finland
Abstract. Little can be found in the literature about the applicability of field study methods, particularly contextual inquiry, in the health informatics field. This paper aims to inform and promote the use of contextual inquiry for user-oriented design of clinical information technology (IT) systems. The paper describes how the method was applied in two empirical studies to gather data about end-users' needs, as well as the use and usability of dictation solutions and electronic nursing documentation systems, from the viewpoint of their end-users in real working surroundings. Experience indicates that, compared to typical usability evaluation methods, contextual inquiry may provide valuable support for user-centred design activities: the method is suitable for increasing researchers' understanding of clinical practices, contexts of work, and end-users' interaction with numerous IT systems. However, in clinical settings there are special challenges related to recording and privacy issues, a wide variety of clinical practices and contexts of technology usage, as well as the hectic nature of clinical work. Keywords. Contextual inquiry, user-centred design, clinical IT system
1. Introduction
Field study methods have not been widely adopted in the health informatics field, although the need for a participatory and user-centred design approach in technology development has been strongly acknowledged. Research literature on user involvement in healthcare technology development typically deals with a usability evaluation approach and studies that are conducted in the later phases of system development. Recently, researchers have suggested that, compared to the evaluation approach, field studies of clinical work are better suited to addressing conceptual problems and developing an understanding of the wider context in which clinical information and communication media are used [1,2]. Experiences from field studies have indicated that ethnographic methods (such as interviews, observations, and artefact analysis) have helped to efficiently explain relevant work practices (e.g. [3,4]). Furthermore, methods used to derive the requirements for healthcare systems are criticised as being inadequate (e.g. [5,6]). Among others, Malhotra et al. [5] and Croll and Croll [6] have stated that the biggest risk faced in developing IT systems for a healthcare setting lies in understanding the complex environments in which these systems are used. Little can be found in the literature about the applicability of field study methods, particularly contextual inquiry, in healthcare technology development. A few researchers have reported contextual inquiry studies. Gennari and Reddy [7] applied the participatory design approach and used contextual inquiry to design and build a protocol screening tool for clinical trial protocol management. Gil-Rodríguez et al. [8]
applied the method to collect information about cognitive, symbolic, and practical characteristics of information technology (IT) system use in daily tasks in clinical settings, with the aim of supporting the design of graphical user interfaces for telecardiology applications. Furthermore, some researchers have aimed at encouraging user-oriented methods for assessing clinicians' needs, and user requirements, for system design purposes. Already in 1995, Colbe et al. [9] argued that the contextual inquiry method has several advantages in obtaining a more comprehensive analysis of the true needs of users. In their review-based articles, Chan [10] and Martin et al. [11] introduced the contextual inquiry method with reference to its developers Holtzblatt and Beyer [12] and explained the principles of the method. This paper aims to promote the adoption of the contextual inquiry method among practitioners and researchers in the health informatics field and to provide information about the specific characteristics of healthcare contexts that are essential to consider when applying the method. The described experiences and lessons learned are based on two empirical studies: a dictation study and an evaluation of nursing documentation systems.
2. What is the Contextual Inquiry Method?
Contextual inquiry is a field data gathering technique that forms the core of contextual design. The method enables researchers to create an understanding of who the users really are and how they work on a day-to-day basis. This understanding becomes the basis for developing a system model that will support users' work. From the user's viewpoint, the method helps people crystallise and articulate their work experience. Throughout the design process, contextual inquiry can be used to challenge the developers' current understanding and system design for users [13]. Contextual inquiry does not provide a set of steps to follow for collecting and interpreting user information; rather it describes concepts that guide the design and implementation of information collection and analysis sessions [12]. Inquiry studies typically involve four to eight users. In practice, the procedure of the inquiry is simple: while observing the user at work, the researcher asks about the user's actions in order to understand their motivation and strategy. The four principles of the method are [12]:
− Context: Inquiry takes place in the actual work environment, with emphasis on gathering concrete data and ongoing experience.
− Partnership: The overall aim is to create a partnership which fosters the creation of a shared understanding and discovery of work and practices. In the inquiry, the user is the expert on the work, whereas the researcher is an apprentice who is willing to learn about and understand the user's work.
− Interpretation: Interpretation means determining what the user's words and actions mean together. It is a chain of reasoning that turns a fact into an action relevant to the designer's intent. Design is built upon interpretation of facts. Researchers share these interpretations during inquiries with users.
− Focus: Focus defines the point of view a researcher takes while studying work. The focus steers the conversation and gives the interviewer a way to keep the discussion on topics that are useful without taking control back from the user.
3. Overview of the Dictation and Nursing Documentation System Studies
The contextual inquiry method was applied in two empirical studies to gather data about end-user needs as well as the use and usability of a range of technology applications in clinical settings. In both studies, the overall aim was to create a comprehensive understanding of the use situations and thereby gather data to support the further development and redesign of the clinical IT systems currently in use.
The first study, Dictation Study with Physicians, focused on investigating the procedures of dictation utilising a variety of techniques. The study was carried out in spring 2008 in a large hospital in Finland and involved seven physicians from three hospital units. Of these physicians, two used cassette dictation as their primary method, three used digital dictation, and two used voice-recognition techniques [14]. The second study, Evaluation of Nursing Documentation Systems, focused on documentation tasks in nursing work and incorporated four system implementations of nursing documentation in electronic health record (EHR) systems [15], all based on the Finnish national nursing model [16]. The study was conducted in spring 2010 with 18 Finnish nurses representing seven healthcare organisations.
All of the contextual inquiries with physicians and nurses were conducted in real working environments and followed the principles of the contextual inquiry method [12]. In general, the inquiries followed the same structure; the structure and themes are presented in Table 1. Each inquiry lasted about one hour and was guided by an experienced usability practitioner. A recorder and a digital camera were used to record the interviews for later analysis.

Table 1. The predetermined structure and themes for inquiries in the two empirical studies.

Phase 1: Background - discussion about users' backgrounds and their previous experiences with the clinical IT systems and tools.
  Dictation study: education and current job description; information technology skills and enthusiasm; dictation methods and experiences.
  Nursing documentation system evaluation: education and current job description; information technology skills and enthusiasm; working history and experiences with nursing documentation techniques.
Phase 2: Practical exercise - the user is asked to conduct a dictation or documentation entry as they would normally do and, while working, explain and give reasoning for their actions.
  Dictation study: description of the situation and surroundings in which dictation is typically conducted; a dictation walkthrough in practice from beginning to end using a real patient case (the beginning of the dictation; dictating, and the use of the dictation solution and related IT systems and applications; the end of the dictation; approval of the transcribed dictation (cassette and digital dictation); discussion of the performed activities).
  Nursing documentation system evaluation: descriptions of daily work and of the situations in which documentation is conducted, as well as the patient information retrieved; a documentation entry exercise using pre-written patient case scenarios covering the background of the patient (e.g., age, the reason for coming to the hospital), what the patient has told the nurse about her condition, a description of the nursing activities conducted, including medication given and interaction with related parties, and how the situation evolved during the shift or the outcome of the appointment; after the exercise, discussion of the performed activities.
Phase 3: Summary and futuristic views.
  Dictation study: evaluating and discussing mobile phone dictation concepts (prepared concepts illustrated using storyboards).
  Nursing documentation system evaluation: discussions on the collaborative use of documented data, the fluency of documentation, the availability and accessibility of information, and ideas for improvements.
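Teams adapting this protocol could keep the predetermined structure of Table 1 as a small session template for note-taking and later analysis. The sketch below (Python) is only illustrative; the class and field names are hypothetical and not part of the studies.

    from dataclasses import dataclass, field

    @dataclass
    class InquiryPhase:
        name: str                 # "Background", "Practical exercise", "Summary and futuristic views"
        themes: list[str]         # predetermined themes to cover in this phase
        notes: list[str] = field(default_factory=list)  # observations recorded during the inquiry

    dictation_inquiry = [
        InquiryPhase("Background", ["education and current job description",
                                    "IT skills and enthusiasm",
                                    "dictation methods and experiences"]),
        InquiryPhase("Practical exercise", ["dictation walkthrough with a real patient case",
                                            "discussion of performed activities"]),
        InquiryPhase("Summary and futuristic views", ["mobile phone dictation concepts (storyboards)"]),
    ]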
4. Experiences with the Contextual Inquiry Method: Advantages and Challenges
Experiences from the described empirical studies showed that the contextual inquiry method has several advantages and challenges when employed in clinical contexts. The experiences and lessons learned are summarised in Table 2.

Table 2. Summary of methodology findings: advantages and challenges of applying contextual inquiry.

Advantages:
- Enables the researchers to make insightful observations, enquire about the clinicians' actions, and identify general and context-specific needs when the studies include numerous healthcare units and organisations.
- Addresses the issues of clinical IT system usage from task- and end-user-oriented perspectives (versus system-centred evaluation of the usability characteristics of a single clinical IT system). Thereby, the study can include a number of techniques (e.g., for dictation) and diverse systems (e.g., different implementations of nursing documentation systems) that are used to perform similar tasks in various clinical environments.
- Makes it possible to analyse clinicians' actions with interactive systems in environments in which numerous systems are used simultaneously and some of them are integrated together.
- Provides the researchers with an opportunity to increase their understanding of healthcare technology, as well as medical terminology and working practices.
- Can reveal needs and problems in system usage that the clinicians cannot articulate.
- Enables the gathering of a large amount of qualitative data. The gathered data can be used for several purposes, e.g., to analyse the success of interaction and user interface design; to describe the contextual issues around healthcare ICT use; to address issues of usability from a wide perspective; to determine users' needs and wishes concerning improvements; and to support the design of new applications.
- Provides concrete data about IT system usage in clinical settings: interaction between the user and the systems, effectiveness of use, and communication and information sharing aspects.

Challenges:
- Requires access to real healthcare settings and permission to record audio or video data.
- Might be time-consuming to conduct due to its highly qualitative nature.
- Requires clinicians' participation. While working in hectic and critical environments, clinicians tend to be busy with customary clinical tasks and unexpected emergencies.
- Issues concerning the recording of medical and patient data, as well as patient privacy and health data security aspects, are essential to consider. All of the pictures and other recorded data need to be carefully anonymised, at the latest in the analysis phase.
- It is easy to question the representativeness of the data, since the interview studies typically involve a rather small number of users per user group. When the total number of involved users is rather small, how are we to take into account the wide variety of clinical practices and contexts of technology usage?
5. Conclusion
The relatively small number of usability studies conducted in the health informatics domain may derive from the identified challenges in applying user-oriented methods in this domain. In contrast to typically applied evaluation methods, contextual inquiry approaches the study issues from the perspective of performing clinical tasks in real environments. Thereby, contextual inquiry may provide valuable support for user-centred design activities. The method enables researchers to approach usability from a broader perspective and reveals results that go beyond what can be
found by a traditional stationary user-interface evaluation. Contextual inquiry is suitable for increasing researchers' understanding of clinical practices, the characteristics and various contexts of clinical work, as well as end-users' interaction with numerous IT systems. Additionally, inquiries conducted in real clinical contexts provide rich qualitative data for the purposes of developing new concepts and visions of future ICT systems. What is more, findings from inquiries provide direct feedback from clinical settings and have high descriptive value. Nevertheless, special challenges in clinical settings relate to recording and privacy issues, the wide variety of clinical practices and contexts of technology usage, the diversity of clinical applications, the heterogeneity of studied user groups, as well as the hectic nature of clinical work.
References
[1] Alsos OA, Dahl Y. Towards a Best Practice for Laboratory-Based Usability Evaluations of Mobile ICT for Hospitals, Proc. NordiCHI 2008, ACM Press, Lund, Sweden, 3-12, 2008.
[2] Horsky J, McColgan K, Pang JE, et al. Complementary Methods of System Usability Evaluation: Surveys and Observations During Software Design and Development Cycles, Journal of Biomedical Informatics 43 (2010), 782-790.
[3] Weng C, McDonald DW, Sparks D, McCoy J, Gennari JH. Participatory Design of a Collaborative Clinical Trial Protocol Writing System, International Journal of Medical Informatics 76S (2007), 245-251.
[4] Reuss E, Naef P, Keller R, Norrie M. Physicians' and Nurses' Documenting Practices and Implications for Electronic Patient Record Design, Proc. USAB 2007, Springer-Verlag, Berlin, Heidelberg, 113-118, 2007.
[5] Malhotra S, Laxmisan A, Keselman A, Zhang J, Patel VL. Designing the Design Phase of Critical Care Devices: A Cognitive Approach, Journal of Biomedical Informatics 38 (2005), 56-76.
[6] Croll PR, Croll J. Investigating Risk Exposure in e-Health Systems, International Journal of Medical Informatics 76 (2005), 460-465.
[7] Gennari JH, Reddy M. Participatory Design and an Eligibility Screening Tool, Proc. AMIA 2000, Philadelphia, Hanley & Belfus, 290-294, 2000.
[8] Gil-Rodríguez EP, Ruiz IM, Iglesias AA, Moros JG, Rubiò FS. Organizational, Contextual and User-Centered Design in e-Health: Application in the Area of Telecardiology, Proc. USAB 2007, Springer-Verlag, Berlin, Heidelberg, 68-82, 2007.
[9] Colbe JM, Maffitt JS, Orland MJ, Kahn MG. Contextual Inquiry: Discovering Physicians' True Needs. In Gardner RM, ed.: Proc. AMIA Fall Symposium, Philadelphia, Hanley & Belfus, 469-473, 1995.
[10] Chan W. Increasing the Success of Physician Order Entry Through Human Factors Engineering, Journal of Healthcare Information Management 16 (2002), 71-79.
[11] Martin JL, Murphy E, Crowe JA, Norris BJ. Capturing User Requirements in Medical Device Development: The Role of Ergonomics, Physiological Measurement 27 (2006), R49-R62.
[12] Beyer H, Holtzblatt K. Contextual Design: Defining Customer-Centered Systems, Academic Press, San Diego, USA, 1998.
[13] Holtzblatt K, Jones S. Contextual Inquiry: A Participatory Technique for System Design. In Schuler D, Namioka A, eds.: Participatory Design: Principles and Practices, Lawrence Erlbaum Associates, Inc., New Jersey, USA, 1993.
[14] Viitanen J. Redesigning Digital Dictation for Physicians: A User-Centred Approach, Health Informatics Journal 15 (2009), 179-190.
[15] Viitanen J, Kuusisto A, Nykänen P. Usability of Electronic Nursing Record Systems: Definition and Results from an Evaluation Study in Finland. In Borycki EM, Bartle-Clar JA, Househ MS, Kuziemsky CE, Schraa EG, eds.: International Perspectives in Health Informatics, Studies in Health Technology and Informatics 164 (2011), IOS Press, Amsterdam, 333-338, 2011.
[16] Nykänen P, Viitanen J, Kuusisto A. Hoitotyön kansallisen kirjaamismallin ja hoitokertomusten käytettävyys (project report in Finnish), University of Tampere, Report D-2010-7. [Internet] 2010. [cited 2011 April 20] Available from: http://www.cs.uta.fi/reports/dsarja/D-2010-7.pdf.
[17] Tanttu K. National Nursing Documentation Project in Finland 5/2005-5/2008: Nationally Standardized Electronic Nursing Documentation, presentation. [Internet] 2008. [cited 2010 June 10] Available from: http://www.vsshp.fi/fi/dokumentit/15158/National-Nursing-Project-2005-2007.pdf.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-970
A Method to Measure the Reduction of CO2 Emissions in E-Health Applications
Paola DI GIACOMO a,1 and Peter HÅKANSSON b
a University of Udine, Faculty of Medicine – Ericsson Telecommunications Italy
b Ericsson Telecommunications LTD, Sweden
Abstract. Climate change is perhaps the topmost challenge of our time. To prevent climate change from severely impacting almost every facet of life on the planet, scientific consensus points to a need to reduce the emissions of greenhouse gases (GHG), measured in terms of CO2 equivalents (CO2e), by as much as 80 percent by 2050. So far the focus has centered on incremental reductions of CO2e emissions in areas in which they are highest, without negatively impacting the economy. But there is also a large untapped opportunity to drive economic growth by applying transformative solutions. In this paper, a method to evaluate CO2e reduction in e-health applications is presented. Keywords. CO2 reduction, e-health, sustainable broadband-enabled services.
1. Introduction
The Information and Communication Technology (ICT) industry sector is today responsible for about 2% of global CO2 emissions [1]. ICT services and applications like virtual meetings, flexi-work, e-commerce and e-health have the potential to significantly reduce CO2 emissions. Life cycle assessments (LCA) [2] constitute a well-established methodology and tool for measuring CO2e emissions, and are used for comparing the emissions of different systems. This paper takes the traditional LCA approach one step further by presenting:
• A method for assessing the potential reduction of future CO2e emissions, together with the results of a case study in the e-health domain.
• Indeed, this method is especially useful for evaluating the potential of ICT-based solutions to reduce CO2e emissions in other sectors not traditionally associated with ICT.
2. Current Measurement Methods At present, several methods are used to analyze the effect of introducing ICT-based solutions to replace traditional solutions and thereby reduce CO2e emissions [3]. These methods are seldom based on life cycle assessment and typically only include end-user 1
Corresponding author: Paola Di Giacomo. University of Udine at Ericsson Telecommunications Italy, Via Anagnina 203, 00118 Roma (Italy), E-mail: [email protected].
equipment. Most standards and assessment methods tend to focus on particular aspects of the life cycle – there are, for example, a number of "energy labeling" standards for products that use electricity, such as the EU energy label and Energy Star. Unfortunately, none of these methods includes "infrastructure" as part of its assessment of direct and indirect impacts. Consequently, these methods have only limited value. Recent scenario-building studies have demonstrated that comprehensive LCAs are, in fact, necessary to provide a holistic image of environmental impacts.
2.1. LCA Methodologies
LCA methodologies, such as "Process-Sum" and "Economic Input-Output", take two different approaches to evaluating environmental impact. There are also hybrid models that use adaptations of these methods in an effort to take advantage of key benefits while overcoming certain inadequacies. Several standardization organizations (the European Telecommunications Standards Institute (ETSI) and the International Telecommunication Union (ITU) [4]) are currently developing standards that should provide guidelines for performing an LCA relating to ICT-based products, services and solutions. In summary, there is currently no agreed methodology for measuring the potential reductions in CO2e emissions that ICT-based solutions can provide. Nevertheless, the industry should adopt the use of comprehensive LCAs to calculate the potential of ICT-based solutions to reduce CO2e. The adoption of a holistic LCA methodology would:
• Enable companies and policy makers to prioritize and support solutions and make balanced decisions regarding sustainability.
• Help put focus on the total level of energy usage and highlight the potential for CO2e reductions in business cases, thereby motivating investments in ICT.
Figure 1. Overview of a holistic LCA method for comparing ICT-based systems with conventional systems that deliver equivalent services (C-LCA = CO2e-based LCA).
3. Methodologies for Assessing the Use of ICT to Reduce CO2e Emissions Figure 1 presents a schematic illustration of a holistic LCA method for comparing a new ICT-based service with a conventional service. The method builds on results from LCA studies of ICT-based systems, for example, PCs and network access, and LCA
studies of conventional systems from traditional sectors, for example buildings and transport [5]. The conventional system and the ICT-based system are each assessed in the same way and compared.
3.1. System Definition
The first step – system definition – entails defining the processes and boundaries of the system. In some cases, depending on the service, it might be necessary to consider both fixed and mobile broadband in order to analyze the environmental impacts of the introduction of ICT [6].
3.2. Data Collection
During the second step – data collection – data is collected from a variety of sources, such as LCA databases, field studies, and statistics. This baseline data enables the comparison against which reductions or increases can be measured or estimated. The availability of published LCA data is limited. Statistics about travel (distance, in kilometers), transportation (weight, in 1000 kg·km), building area (in square meters), and so forth, must be collected during the life cycle inventory (LCI) phase. Some LCA data about mobile and fixed broadband networks (for instance, data about PCs) has already been published.
3.3. CO2e Assessment Based on the LCA Method
In the third step – assessment of CO2e impacts – the CO2e emissions of the defined ICT-based system and of the conventional system, including the infrastructure, are estimated. This assessment is based on the LCA methodology (see Table 1). For the ICT-based system, the use of mobile broadband services must, based on data traffic, also take into account user profiles and behavior, including the type of mobile device, the characteristics of mobile network access, the core or transmission network, and the specified data centers. Fixed broadband has many different user profiles with individual types of PCs, modems, or home network setups. Access sites and data traffic may be aggregated to form a total or average ICT system user profile for all users or for an entire company, hospital, organization, etc. Finally, it is necessary to determine the total use of the specific service and all related services [7].

Table 1. Mandatory elements to consider when assessing the CO2e emissions of an IT service.

Mandatory element to consider | Comments
Type of end-user equipment (PC, mobile phone) | This information provides manufacturing impact and operation characteristics.
Use time and baseline operation (standby) | This is dependent on electrical power consumption and the type of end-user equipment. The specification needs to be based on user behavior.
Energy consumption for network access | The type of access and the use time are used to quantify network access. Data traffic cannot be used because most energy consumption related to network access is standby.
Average data traffic of the service | This is used to quantify data transport, cable infrastructure and data centers, in order to calculate the service's share of the total network infrastructure.
Electricity mix in the "organization" studied | All electrical power consumption in the operation phase can be adjusted to the specific organization.
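To make the role of the elements in Table 1 concrete, the following sketch (Python) combines them into a rough operation-phase CO2e estimate for an ICT service. All quantities and the grid emission factor below are illustrative placeholders, not figures from the study, and manufacturing impacts are deliberately left out.

    def annual_operation_co2e_kg(n_users, device_power_kw, use_hours_per_year,
                                 network_kwh_per_user_year, datacenter_kwh_per_year,
                                 grid_kg_co2e_per_kwh):
        # Operation-phase electricity for end-user equipment, network access and data centres
        device_kwh = n_users * device_power_kw * use_hours_per_year
        network_kwh = n_users * network_kwh_per_user_year
        total_kwh = device_kwh + network_kwh + datacenter_kwh_per_year
        return total_kwh * grid_kg_co2e_per_kwh

    # Illustrative numbers only
    tons = annual_operation_co2e_kg(n_users=10_000, device_power_kw=0.1, use_hours_per_year=1_000,
                                    network_kwh_per_user_year=30, datacenter_kwh_per_year=40_000,
                                    grid_kg_co2e_per_kwh=0.4) / 1_000
    print(round(tons, 1), "t CO2e per year")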
4. Comparison of Systems
After the CO2e assessment is complete, it is possible to compare the two systems and evaluate the potential of the ICT-based service to reduce CO2e emissions. The results of the analysis are twofold: a potential reduction factor and a relative reduction factor. The potential reduction factor is the total reduction in CO2e divided by the total CO2e of the new ICT-based system.
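Written out (the notation is chosen here for illustration, since the paper defines the factor only in words):

    potential reduction factor = (total CO2e avoided by the ICT-based service) / (total CO2e of the new ICT-based system)

As a consistency check against the paper's own figures, the roughly 15,000 metric tons of CO2e avoided per year reported in Section 5.3, divided by the approximately 330 metric tons added per year by the two services, gives about 45, of the same order as the potential reduction factor of "up to 50" stated in the conclusions.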
5. Introduction of E-Health in Croatia
5.1. Background
The term e-health refers to the application and use of ICT in all aspects of healthcare to provide better and more efficient services and to facilitate access to healthcare [8-9]. The Healthcare Networking Information System, developed by Ericsson in Croatia, is a comprehensive ICT solution for integrating healthcare processes, information management and business workflows. In this study, the system was used in particular to transfer prescriptions and referrals electronically, reducing the need for printing prescriptions and for patients to travel.
5.2. Data and Assumptions
Croatia has about 4.5 million inhabitants, 55 percent of whom live in urban areas. There are about 260 cars per 1000 people. In addition, there are 6600 primary healthcare teams/units in Croatia and approximately one doctor for every 450 people. The main assumptions, reflecting a typical GP-centered organization of health care, were as follows:
• The e-referral service can reduce patient visits to hospitals or specialists (approximately 12 million per year) on average by 50 percent, taking into account that referrals do not require additional visits if most of them are based on an examination.
• On average, patients travel 10 km + 10 km per visit; twenty-five percent of patients travel by car and the other 75 percent by public transport, and the e-prescription service can reduce paper consumption by 50 percent.
5.3. The New E-Health System
The actual data center consumes 400 MWh of electrical power a year, and there are approximately 10,000 PCs in the network. In all likelihood, the allocation of the total system to the two services studied will decline over time, especially as new services are introduced. The main assumptions were as follows:
• About 10 percent of the system's PCs were installed in parallel with the system; therefore, 10 percent of the total system is allocated to the e-referral service.
• One percent of the total system is allocated to the e-prescription service.
In total, the two services in the e-health system account for about 330 metric tons of CO2e emissions per year. Of this amount, PCs and networks account for over 90 percent and the data center accounts for the rest. Given that patients reduce their travel on average by almost three visits per year, the potential reduction from travel is about 7 kg
CO2e per patient per year. This results in a reduction of up to 15,000 metric tons of CO2e, provided 50 percent of all travel can be avoided (see Figure 2).
Figure 2. Graphical presentation of e-health case study results.
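A back-of-the-envelope reconstruction of the travel-related saving (Python) is given below; the per-kilometre emission factors are illustrative assumptions and are not stated in the paper, so the result only shows that the reported order of magnitude is plausible.

    visits_per_year = 12_000_000            # hospital/specialist visits per year (from the text)
    avoidable_share = 0.5                   # study assumption: 50 percent of visits avoidable
    km_per_visit = 20                       # 10 km + 10 km per visit (from the text)
    car_share, transit_share = 0.25, 0.75   # modal split reported in the text

    # Illustrative emission factors in kg CO2e per passenger-km (assumed, not from the paper)
    car_factor, transit_factor = 0.18, 0.07

    avoided_km = visits_per_year * avoidable_share * km_per_visit
    avoided_tons = avoided_km * (car_share * car_factor + transit_share * transit_factor) / 1_000
    print(round(avoided_tons))  # about 11,700 t CO2e per year, the same order as the ~15,000 t reported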
6. Conclusions
The e-health system installed to support primary healthcare in Croatia can significantly reduce CO2e emissions, thanks to the reduction in patient travel as well as the reduction in paper consumption. Taken together, the e-referral and e-prescription services have the potential to reduce CO2e emissions by up to 15,000 metric tons per year, while the two services themselves add only about 330 metric tons of CO2e per year from operation and manufacturing activities. The potential reduction factor over a 20-year period is up to 50, depending on whether infrastructure is included and, if so, to what extent.
References
[1] Boutin JP, Villeneuve C, Wells JP. Greenhouse Gas Emissions Offsets through Videoconferences and Teleconferences, Literature review, Global e-Sustainability Initiative (GeSI) (Phase 1), 2006.
[2] Miyamoto S, Irie Y, Harada H. Factor Analysis of Environmental Load Reduction Induced for Various Information Technology Systems, Proceedings of 11th LCA Case Studies Symposium, Joint SETAC Europe, ISIE meeting and LCA Forum, Lausanne, Switzerland, 3-4 December 2004.
[3] Östermark U, Eriksson E. LCA of a Videoconference: A Comparative Study of Different Ways of Communication, Proceedings of 7th LCA Case Studies Symposium, SETAC-Europe, 1999.
[4] Berkhout F, Hertin J. Impacts of Information and Communication Technologies on Environmental Sustainability: Speculations and Evidence, Report to the OECD, 2001.
[5] Fuchs C. The Implications of New Information and Communication Technologies for Sustainability, Center for Information and Communication Technologies & Society, University of Salzburg, Austria, 2006.
[6] NTT Service Integration Laboratories in Japan, The Green Vision 2020, NTT Group CSR Report 2010, 2010.
[7] Pamlin D, Szomolányi K. Saving the Climate @ the Speed of Light, Proceedings of European Telecommunications Network Operators' Association (ETNO) and World Wildlife Fund (WWF), 2006.
[8] Burton D, Cavanagh J, Johnston G, Mallon K. Towards a High-Bandwidth, Low-Carbon Future: Telecommunications-based Opportunities to Reduce Greenhouse Gas Emissions, Climate Risk Pty Limited Telstra Report, Australia, 2007.
[9] Nakamura J, Nishi S, Kato K, Takahashi KI. Environmental Assessment of e-Learning Based on a Customer Survey, Proceedings of Fourth International Symposium on Environmentally Conscious Design and Inverse Manufacturing, EcoDesign 2005, Tokyo, Japan, 12-14 December 2005.
EFMI Invited Session: Health Informatics Research Management
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-977
Medical Informatics Research Management in Academia - the Danish Setting
Stig KJÆR ANDERSEN
Department of Health Science and Technology, Aalborg University
Abstract. The severe changes that the Danish universities have undergone over the last decade have had huge consequences for the management of research at the level of a discipline such as Medical Informatics. The presentation pinpoints some of the instruments that are at the top of the management agenda in the new academic reality in Denmark. Performance contracts, organizational structure, general management, research constraints, ranking and performance issues, economy linked to production, ownership, and incentives are issues affecting the way research is done. The task of effective research management is to navigate in this reality, ensure inspiration and influx from other environments dealing with medical informatics problems, in theory as well as in practice, and shield the individual researcher from emerging bureaucracy, leaving room for creativity. Keywords. Management, research management, research agenda, creativity
1. Introduction
The conditions for research management at all levels of the Danish universities have been subject to radical changes since 2002, when a new university law introducing a new governance structure was passed in the Danish parliament. Furthermore, in 2009 a major restructuring of the Danish universities took place: the 12 universities were merged into 7 and, as a consequence, large organizational reallocations occurred and new, revised management structures were imposed on the Danish universities. The political mantra behind the changes has been to manage universities in the same way as industry, implemented as a top-down management structure with a powerful board and a director (rector) reporting to a board with a majority of external, non-university members. The agenda for government research funding has shifted the ratio between basic research programs and politically defined strategic research programs at the expense of basic research, and the spin-off of new companies based on research results has been highly prioritized. "From idea to invoice" was one of the announcements from the Ministry of Science, Technology and Innovation indicating the new trend. The classical Humboldt university virtues have slowly but surely been deprioritized in the Danish academic environment as a consequence of the new university management and its later revisions [1]. These significant changes in running universities naturally have a significant impact on research management in the field of medical informatics.
Corresponding author
2. The national level
Medical informatics academic research has also been influenced by a long, gradual change of the national healthcare setting, from one in which all have equal rights to treatment to a system in which private insurance schemes and private hospitals cause unequal opportunities for treatment. In 2007 another shift took place that had consequences for the Danish health care setting: the Danish regional structure was changed from 13 counties to 5 regions, and the financing model for the delivery of health care was changed to a centralistic model, justified by a possible increase in efficiency and slower growth in resource consumption. The continuous hunt for ways of keeping resource consumption from growing too fast in relation to the GNP is seriously affecting the conditions for implementing e-health solutions and hence the context for medical informatics research in academia. Managing medical informatics research in a continuously changing world is a challenge in the cross-field between basic research, applied technology and a demanding clinical situation. The politically imposed changes have made it an even bigger challenge. These are the conditions for research management in medical informatics in an academic environment where the focus on education and dissemination is equally important.
3. The governmental instruments
The following key initiatives are the general prerequisites for research management in academia in general and in medical informatics in particular, and they in some ways mark the step away from the classical university. Some of the initiatives are recognized internationally; others are tailored to the reality of Danish governmental research policy. The list below describes what the management of medical informatics research has to navigate through.
Performance contracts: Performance contracts have been signed between the research ministry and the universities. The key research-related performance requirements are: completed PhDs, number of publications, international recognition, and number of start-up companies. This forces the management of medical informatics research to focus on measurable, recognizable quantity at the expense of quality.
Organizational structure: A clear top-down, single-strand level structure has been introduced: the university board, the university management, the faculties, the departments, and the individual researchers organized in research groups. The influence of traditional university collegial organs and councils has been severely reduced. With such strong governance, the research agenda and priorities depend heavily on the scientific capacity of the leaders in office.
General management: Managerialism has been introduced with the purpose of having more professional management by generalists at all levels. The consequences are increased control and a more homogeneously acting organization. The price is more resources for administration instead of for research.
Research constraints: Strategic programs, national as well as international, have considerable influence on the research agenda of research groups, partly due to encouragement from university management as an aid to resources and partly due to the
research subjects being defined by experts. This gives the individual researcher and research group less power and freedom to choose research subjects. The concern here is for academia to fulfil its obligation of focusing on the long-term agenda for the next 5-10 years.
Ranking and performance measures: At the macro level, it has become an integrated part of branding universities to be placed as high as possible on some of the "top 100" lists of universities. This is one of the ways in which the competition between universities in Denmark has materialized. At the micro level, the number of publications in international journals, the impact factor, the h-factor, and other bibliometric measures are the parameters. Basically this is a way of quantifying the requirement of recognition in the scientific community. As such it is efficient, but other obligations, such as broader dissemination, may be deprioritized.
Economy linked to production: The public part of university funding has become increasingly coupled to Key Performance Indicators (KPIs), meaning that the focus is on optimizing these KPIs, which could lead to unintended publication strategies and less optimal dissemination of results.
Ownership: A shift in the ownership of research inventions from the individual (the researcher) towards the employer (the university), together with the call from management to focus on patents and commercialization, adds a dilemma between closedness and openness, which is especially delicate at a basically publicly funded institution.
4. The research group level
Research in Medical Informatics has a long tradition: the first international conferences and the foundation of international associations and scientific societies emerged in the 1960s and 1970s, and over the years the field has established its own research agenda. The emergence of new technologies and the global move to the internet have heavily influenced this agenda, which is well displayed in the program of the present MIE 2011. The task for research management is to carry this tradition forward in the continuous development of the Medical Informatics research agenda. The task for research management of Medical Informatics in an academic setting is to navigate this reality, using the available instruments in a positive manner. Medical Informatics is cross-disciplinary by nature, and hence inspiration and influx from other environments dealing with medical informatics problems, in theory as well as in practice, is very important. Handling these interfaces between environments, which often involve a cultural gap, is another important task for research management. At the bottom line, Medical Informatics research is powered by the creativity and knowledge of individual researchers who share a common research toolbox of experience, methods, theories and practice. To shield the individual researcher from the emerging bureaucracy and its unforeseen consequences is another important issue.
References
[1] Madsen OM. Universitetets død. Kritik af den nyliberale tendens (in Danish), Bogforlaget Frydenlund A/S, 2009, ISBN 9788778878205.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-980
Research Management in Healthcare Informatics — Experiences from Norway
Arild FAXVAAG a,1, Pieter TOUSSAINT a,b and Trond S. JOHANSEN a
a Norwegian EHR Research Centre, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
b Department of Computer and Information Science, Faculty of Information Technology, Mathematics and Electrical Engineering, NTNU, Trondheim, Norway
Abstract. This paper reports on the experiences with establishing a multidisciplinary healthcare informatics research community at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. A multidisciplinary research group in healthcare informatics must maintain strong connections to computer science, social science, biomedicine and healthcare researchers. Those organizing the research must create a milieu that fosters true collaboration across disciplines. The researchers must have good access to healthcare institutions, to healthcare professionals as well as to patients. A healthcare informatics laboratory creates an arena for experiments as well as for validation of health-it technologies. Keywords. Healthcare informatics research, Research management
1. Background
Although healthcare informatics is now recognized as a research field in its own right, healthcare informatics research intersects with, and is shaped by, research in computer science, social science, biomedicine and healthcare. The field is also influenced by a trend towards making health-IT development and implementation programs important components of healthcare modernization efforts. The vision of health-IT systems as catalysts of change and means of empowering the patient has also improved the opportunities for funding of research within healthcare informatics. During the last 10 years, the Norwegian University of Science and Technology (NTNU) has built up a multidisciplinary healthcare informatics research community. NTNU is the only Norwegian technical university that also has a medical faculty. The technical faculties are located within walking distance of the medical faculty and the university hospital, creating fertile ground for building research groups in biomedical science and engineering. In the 1990s the Research Council of Norway established a program for healthcare informatics research. In 2002, NTNU established a research program for health informatics with participation from the Faculty of Medicine, the Faculty of Information Technology, Mathematics and Electrical Engineering, and the Faculty of Social Science and Technology Management. The next year NTNU was
Corresponding Author: Arild Faxvaag, The Norwegian EHR Research Centre, Medical-Technical Research Centre, N-7489 Trondheim, Norway; E-mail: [email protected]
awarded a grant to establish a national centre for research on electronic health records. In this short paper, we report our experiences with establishing this research centre and what we believe contributes to a multidisciplinary research community.
2. The research domain
Healthcare informatics research lies at the intersection between healthcare research, computer science and social science. Health-IT systems can inform hospital leaders, healthcare professionals and patients. Health-IT systems can provide data for healthcare services research and health technology assessments. Healthcare informatics has intersections with bioinformatics as well as with cognitive science, workflow management, knowledge representation and guideline systems. Healthcare informatics research can inform the design of commercial health-IT systems and provide methods for validation and assessment of implemented systems. The breadth of the research domain and the large number of stakeholders make it hard to decide which particular sub-field to engage in. A multidisciplinary healthcare informatics research community must lay the ground for researchers with a broad spectrum of interests. Further, the research should be organized so that the groups learn from each other.
3. Creating an environment for multidisciplinary research
In the 1990s the Research Council of Norway established a program for healthcare informatics research. Towards the end of the decade, before the centre was established, some PhD students working with health IT complained of not having a milieu in which to discuss and share their experiences. They typically worked in a small group, unaware of other university groups that could be doing almost the same type of research. At the same time, the health authorities were planning the building of a new university hospital. Those who managed the hospital planning process envisioned modern health-IT systems in the new hospital. In response to these challenges NTNU established a research program for healthcare informatics [1]. It began as a series of open meetings that created an arena for fostering discussions about healthcare informatics. The meetings were announced via e-mail lists and the web. Typically researchers, representatives from industry, hospital managers, healthcare professionals and students attended. In hindsight, this created an arena for networking and for developing a language for sharing and discussing problems related to healthcare and information technology. A meeting at which a healthcare professional presented a problem could result in the design of a student project by a researcher from the social sciences. The healthcare professional would secure access to the domain and act as a co-supervisor to the student. If the student project became successful, it could be developed into a PhD project. Our experiences with establishing this networking arena have led us to conclude that such an activity is necessary for the success of a healthcare informatics research community. When NTNU was awarded the grant for establishing a national EHR research centre, the university could offer PhD students and their supervisors a shared office environment at the university hospital campus. Since its establishment, the centre has become a place where PhD students and faculty can choose to do their work. However, all researchers at the centre are also affiliated with, and have office space at, another
department at the university. By organizing the research centre this way, we provide a second office to researchers, while the researchers at the same time keep their connection to their "mother department" and research domain. This "second office policy" has been crucial for the success of the research centre. Keeping the connection to the different university departments makes it easy to recruit PhD and Master students to the healthcare informatics field. At the same time, unnecessary tensions between the "mother department" and the research centre are avoided, since the researchers always primarily belong to the latter.
4. Securing access to healthcare institutions and personnel

A healthcare informatics research community must have good access to the domain. This means having the opportunity to do ethnographic studies, observing healthcare personnel in their interaction with information systems as well as with patients. The list of actors also includes hospital managers and people employed in the IT department of the institution. Access should be secured through bilateral agreements between the university and the healthcare institution. It is our experience that good access to the domain also requires the engagement of the leader of the department that is to participate. Based on our experiences, we believe that researchers should interact with these leaders, and invite and encourage them to present health-IT-related problems from their own perspective. Researchers should also be able to recruit healthcare personnel for participation in usability experiments, design workshops and other activities at the healthcare informatics laboratory at the centre. We consider it an advantage that hospital employees can take part in laboratory activities while dressed in their white coats.
5. Create fruitful interactions with the healthcare informatics industry and consulting companies

A healthcare informatics research community must foster interaction between researchers and the healthcare informatics industry. The industry can participate by providing the researchers with working versions of their systems for use and testing in the laboratory. Further, they can co-sponsor research projects to benefit from the theory, models and prototypes that come out of the projects. It is our experience that representatives from the industry should be encouraged to participate in networking events. The industry should also recruit from among our students; as alumni, these could strengthen the network between the healthcare informatics research environment and the industry.
6. Establishing laboratory facilities

Having a healthcare informatics laboratory creates novel opportunities for healthcare professionals and patients to participate in experiments where new health-IT prototypes and concepts are tested. The test results can inform the further design of the prototype. A laboratory also creates an arena where the prototype, test object or situation can provide
stimuli for the test persons to reflect on how they work with information and how information systems can support different tasks. Our laboratory has also been used during workshops and in focus group interviews. Finally, our laboratory has been used to assess health-IT technologies that are already in use.
7. Secure funding of healthcare informatics research

One of the challenges faced by health informatics researchers is finding appropriate grant programs to apply to for research funding. There are three types of programs to choose from: informatics oriented, medical science oriented and social science oriented. All three pose very different requirements and expectations on project proposals, which must be taken into account when developing the proposals. As a result, even though projects are multi-disciplinary, they will be formulated in a biased way in order to meet the requirements of the specific funding program they are submitted to. In principle this jeopardizes the ideal balance between the different research communities that are represented in the project. It would be preferable to have funding programs that are dedicated to medical informatics research and accommodate truly multi-disciplinary proposals. Such programs exist, or have existed, but very often they have a short life span and a tendency to favor either the medical or the IT community.
8. What might go wrong

Below we list a number of threats to multi-disciplinary research within healthcare informatics:
• A project fails to balance research and development tasks evenly, so that either the project becomes mere development or consultancy, focused on solving local problems, or it becomes a fundamental research project without sufficient relevance for the domain.
• Researchers coming from different research traditions fail to understand each other and/or do not respect each other's research approach. As a result, the collaboration is poor or even absent.
• Tension and conflict arise between different stakeholders due to poor coordination and an ambiguous vision. There are many stakeholders in the multidisciplinary healthcare informatics environment, both internal (at the university) and external (i.e. health institutions and industry). A minimum level of staffing is necessary to coordinate, and mediate between, the different stakeholders. Further, a clear vision and strategy may function as a powerful tool to create a transparent and eclectic culture where people pull in the same direction.
• Industrial partners have their own agenda, geared less towards knowledge development and more towards product development. This can make them disinterested in the research part of a project.
• Health care partners, mindful of their day-to-day clinical work responsibilities, may limit the opportunity for experimentation and innovation in a project. So, although the intention is to involve these partners in a project as providers of a so-called 'work place' practice, this work place role may be very limited.
• Prototypes that are developed are never transformed into proper, well-evaluated products.
References
[1] NTNU's program for healthcare informatics: http://hi.ntnu.no
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-985
Research Management: the case of RN4CAST
Dimitrios ZIKOS, John MANTAS
Laboratory of Health Informatics, National and Kapodistrian University of Athens
Abstract. Successful research management requires multifunctional, equal teamwork and efficient coordination, aiming to increase the impact of the research outcomes. The aim of this paper is to present the strategies that have been followed to successfully manage the RN4CAST study, one of the largest multi-country research projects ever conducted. The paper focuses on the core research strategies rather than on the administrative management activities that are also required for the success of such a project. Management of a multi-country nursing survey requires the use of common data collection tools applicable to every context, research protocols supporting the scope of the research, data models for multi-country analyses, and global dissemination strategies.
Keywords. Research Management, Nursing, RN4CAST
1. Introduction

The methodological approach to the efficient management of research has been discussed in research papers for many decades [1], [2]. Recently, many authors have defined research management in contrast to "research administration", which is a centralized approach to conducting medical research [3]. This new approach requires the active participation not only of all partners, but also of communities, potential interest groups, policymakers and other stakeholders [4]. The link between research strategies and successful management is very important, as the achievements of the research can prove to be the key to scientific research management [5]. General management practices applicable to research management include empowering partners and working together as equals beyond institutional boundaries, and communicating effectively with stakeholders to create new knowledge and utilize it through unique practices. Successful research management does not only imply project management in financial and administrative terms but also involves the research itself. Nowadays research involves international collaboration; therefore resource mobilization and the use of proper methods of dissemination to different stakeholders are key success factors. Success is also based on the ability to mobilize multi-country and multi-disciplinary teams, while knowledge management and the use of essential informatics tools for health research are also important. Finally, the role of coordination is equally important for the efficient management of large-scale research [6].
1 Corresponding author
2. Scope

The aim of this paper is to present the strategies followed in order to successfully manage the RN4CAST study, one of the largest multi-country nursing workforce research projects ever conducted. This case study focuses on the RN4CAST practices that have been agreed through common consensus and collaborative work to tackle linguistic, conceptual and organizational variations between the participating countries, thus developing an effective and at the same time democratic multi-country research environment.
3. Research Management in the case of RN4CAST

RN4CAST, the largest nurse workforce study in Europe, will add to the accuracy of forecasting models and generate new approaches to more effective management of nursing resources in Europe. RN4CAST is a consortium of 15 partners in 11 European countries. Across the participating countries, survey data were collected from over 50,000 nurses, together with outcome data on tens of thousands of patients [7].

3.1. Common Study Protocols

The nursing job varies across the European countries participating in the RN4CAST study. Despite common characteristics, there are differences in the organization of the healthcare systems [8]. In order to agree on common principles regarding the research methodology in all countries, an international protocol was prepared to standardize the data collection process and instruments for the cross-country analyses. Differences between the national study protocols were reported by each team, discussed by the consortium and approved by the coordinator.

3.2. Data Sources and definitions

An opening discussion regarding data sources identified a limitation in some countries regarding the availability and/or quality of routinely collected data. This limitation was tackled using an additional instrument to collect patient data not readily available in routinely collected databases, and this strategy allowed their timely inclusion in the analysis. Participating hospitals were selected through a common strategy, explicitly describing the type and size of eligible hospitals, nursing units and the type of eligible nurses. 'Nurses' were clearly defined in all countries based on the European Union definition (directive 2005/36/EC); therefore, variations in the local interpretation of what constitutes a nurse were overcome. The survey instruments were based on a common template that all partners agreed to use. The instruments were translated into all primary languages using the backward-forward translation method and evaluated with the CVI instrument [9] by experts in every country, while no changes to the core template were allowed. Standard definitions of all variables were agreed, based on (i) previous knowledge, (ii) well-known validated instruments and (iii) research team expertise [10]. Finally, identifiers indicating survey variables (i.e. the International Classification of Diseases (ICD) and Diagnosis Related Groups (DRGs)) were decided upon and commonly used by most national studies.
3.3. Data Collection, Analysis and results exploitation

The strategy followed to facilitate data collection was based on the enrollment of a field manager in each hospital as the key contact with the national research teams. Once the data had been collected by all countries, they were gathered centrally by the research coordinator, who performed preliminary analyses of the raw datasets to identify out-of-range values, missing values and data entry errors, producing a cleaned version. A statistical analysis model was selected to explore specific research questions within each country but also through cross-country analyses. The strategy for the dissemination of the results comprises (i) yearly stakeholder meetings during the project life cycle, (ii) agreement upon a common strategy for publications and authorship, (iii) a special issue of a scientific journal dedicated to RN4CAST, (iv) drafting and co-authoring a synthesis document presenting and comparing the conclusions of the data analyses across countries, with possible Europe-wide conclusions, and (v) an observatory book bringing together a sample of country case studies and the contextual contribution of nursing to the quality of care.
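The paper does not describe how this central screening step was implemented. Purely as an illustration, the sketch below (with hypothetical variable names and plausibility ranges) shows the kind of out-of-range and missing-value checks that such a preliminary analysis of national raw datasets might perform.

```python
# Illustrative sketch only: hypothetical column names and plausibility ranges,
# not the actual RN4CAST cleaning pipeline.
import pandas as pd

# Plausibility ranges for a few (assumed) survey variables.
RANGES = {
    "age": (18, 75),             # nurse age in years
    "years_in_unit": (0, 60),    # professional experience
    "patients_last_shift": (0, 40),
}

def screen_country_file(path: str, country: str) -> pd.DataFrame:
    """Flag missing and out-of-range values in one national dataset."""
    df = pd.read_csv(path)
    issues = []
    for column, (low, high) in RANGES.items():
        if column not in df.columns:
            issues.append({"country": country, "column": column,
                           "problem": "column missing", "rows": len(df)})
            continue
        missing = int(df[column].isna().sum())
        out_of_range = int(df[(df[column] < low) | (df[column] > high)].shape[0])
        issues.append({"country": country, "column": column,
                       "problem": "missing", "rows": missing})
        issues.append({"country": country, "column": column,
                       "problem": "out of range", "rows": out_of_range})
    return pd.DataFrame(issues)

# Example: combine screening reports from several national files.
# reports = pd.concat([screen_country_file(f"{c}.csv", c) for c in ["BE", "DE", "GR"]])
```

Any real implementation would of course follow the standard variable definitions agreed in the common protocol (Section 3.2).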
4. Discussion

The case of RN4CAST indicates that the road to the successful management of multi-country, large-scale research crosses two different levels of challenges. Other than successful financing, mobilization, reporting, etc., which mainly refer to project management and administration, there are challenges directly addressing the content and methodology of the research itself. These challenges concern the methods of the survey, data harmonization issues, data collection, multi-level data analysis strategies and, finally, the dissemination of the results, providing added value over the national surveys at the EU level. The above-mentioned challenges are key factors for the validity of the survey results and the scientific quality of large-scale surveys.

Acknowledgements: RN4CAST is coordinated by the Centre for Health Services & Nursing Research at the Catholic University Leuven. The University of Pennsylvania, USA, contributes with its specialized research expertise derived from previous international research. Many thanks to the principal investigators of the RN4CAST consortium: Tomasz Brzostek, Reinhard Busse, Maria Teresa Casbas, Sabina De Geest, Peter Griffiths, Juha Kinnunen, Anne Matthews, Anne Marie Rafferty, Carol Tishelman, and Theo Van Achterberg, and to all RN4CAST partners.
References
[1] Russel R. Management of Research. Nature. 1947; 160(4068):547.
[2] Smith W. Research management. Science. 1970; 167(3920):957-9.
[3] Peiró S, Artells Herrero JJ. Management of research in healthcare centers. An exploration through nominal group dynamics. Gac Sanit. 2001; 15(3):245-50.
[4] De Rosa C, Rosemond ZA, Cibulas W, Gilman AP. Research management in the Great Lakes and St. Lawrence River basins: challenges and opportunities. Environ Res. 1999; 80(3):274-9.
[5] Zhang WY, Zheng J, Li YC. Practice and experience of the scientific research management. Zhonghua Nan Ke Xue. 2003; 9(8):634-8.
[6] Merry L, Gagnon AJ, Thomas J. The research program coordinator: an example of effective management. J Prof Nurs. 2010; 26(4):223-31.
[7] The RN4CAST project official website. Online. Available from: www.rn4cast.eu
[8] Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med 2002; 346:1715-1722.
[9] Polit D, Beck C, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health 2007; 30:459-467.
[10] Sermeus W, Aiken L, Van den Heede K, Rafferty AM, Griffiths P, Moreno-Casbas M, Busse R, Tishelman C, Scott A, Bruyneel L, Brzostek T, Kinnunen J, Schubert M, Schoonhoven L, Zikos D, RN4CAST Consortium. Nurse forecasting in Europe (RN4CAST): Rationale, design and methodology. BMC Nursing 2011; 10:6.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-989
eMeasures: A standard format for Health Quality Measures
Catherine CHRONAKI, Charles JAFFE, Bob DOLIN
On behalf of HL7 International
Abstract. Health quality measures can be used to improve the effective use of Electronic Health Record systems (EHRs) in health care delivery. The Health Quality Measures Format (HQMF) is a standard for representing a health quality measure as an electronic document. This presentation introduces the standard, reviews the development process of quality measures for EHR systems using HL7 CDA R2, and reflects on the outlook for eMeasures implementation and adoption. Keywords. health information technology standards, quality measures
1. Introduction

Health quality measures can be used to improve the effective use of Electronic Health Record systems in health care delivery. The National Quality Forum aims to significantly improve the quality and efficiency of patient care by making possible the capture and reporting of quality measure information for physicians and other health care providers [1]. The Collaborative for Performance Measure Integration with EHR systems has the following objectives [2]: (a) to create a standardized way of communicating Performance Measures; (b) to establish standards that permit structured, encoded Performance Measure information to be incorporated into EHR applications while preserving the clinical intent of the Performance Measure; and (c) to improve the process of Performance Measure update and maintenance for EHR vendors. The Health Quality Measures Format (HQMF) is a standard for representing a health quality measure as an electronic document. Quality measures or indicators provide indications of the performance of an individual or an organization in relation to specific actions, processes or outcomes, measured on the basis of a set of clinical criteria and an evidence base [3]. The next section (Methods) describes the HQMF standard, which as of March 2010 is an HL7 Draft Standard for Trial Use (HL7 DSTU). Then, the Results and Outlook section cites areas where the HQMF applies, reflecting on opportunities for global adoption.
1 Corresponding author
2. Methods

Through standardization of a measure's structure, metadata, definitions, and logic, the HQMF provides for quality measure consistency and unambiguous interpretation. A health quality measure encoded in the HQMF format is referred to as an "eMeasure". Standardization of document structure (e.g. sections), metadata (e.g. author, verifier), and definitions (e.g. "numerator", "initial patient population") enables a wide range of measures, currently existing in a variety of formats, to achieve at least a minimal level of consistency and readability, even if they are not fully machine processable. An HQMF document is a defined and complete information object that can exist outside of a messaging context and/or can be a payload within an HL7 Version 2 or Version 3 message. Thus, the HQMF complements HL7 messaging specifications. The exact method by which an eMeasure is exchanged is outside the scope of the standard.
Figure 1: Structure of an HQMF document.
HQMF requires that a receiver of an eMeasure be able to algorithmically display the document on a standard Web browser, such that a human reader would extract the same quality data as would a computer basing the extraction on the formally encoded eMeasure entries. Material within a section that is to be rendered is placed in the section.text field. The content model of this field is the same as that used for other Structured Document specifications (see Figure 1). The HQMF model is derived from the HL7 Reference Information Model (RIM) through the use of the HL7 XML Implementation Technology Specification (ITS). It is a "Constrained Information Model" (CIM), derived from a broader "Domain Information Model" (DIM). The QualityMeasureDocument class is the entry point into the HQMF model and corresponds to the XML element that is the root element of an eMeasure document. An eMeasure document is logically broken up into a header and a body. The QualityMeasureDocument class inherits various attributes from the InfrastructureRoot class of the RIM, including templateId and typeId. Setting the value of templateId in an instance signifies the application of a set of templates, which may be applicable at the level of the QualityMeasureDocument or at a finer granularity, i.e. section or entry.
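As a rough illustration of this document layout, the following sketch walks an eMeasure-like XML file and collects the narrative text of each section for human-readable display. The element names used here are simplified placeholders for illustration only; they are not the normative HQMF/HL7 V3 schema, which uses RIM-derived structures and namespaces.

```python
# Illustration only: simplified, placeholder element names rather than the
# normative HQMF schema; real eMeasures use HL7 V3 namespaces and structures.
import xml.etree.ElementTree as ET

def render_sections(path: str) -> str:
    """Return a plain-text rendering of the narrative blocks of an eMeasure-like document."""
    tree = ET.parse(path)
    root = tree.getroot()          # would correspond to the QualityMeasureDocument root
    lines = []
    title = root.findtext("title", default="(untitled measure)")
    lines.append(f"Measure: {title}")
    for section in root.iter("section"):
        heading = section.findtext("title", default="(unnamed section)")
        narrative = section.findtext("text", default="")
        lines.append(f"\n== {heading} ==\n{narrative.strip()}")
    return "\n".join(lines)

# Example use:
# print(render_sections("emeasure_example.xml"))
```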
The key notions of HQMF are Data Criteria, Population Criteria, and Measure Observations. Data Criteria are assertions that can be true or false, frequently looking at raw EHR data, and they are used primarily to define whether a patient is included in the numerator, denominator, etc. In HL7 terms, Data Criteria are formalized as RIM patterns coupled with vocabulary. Population Criteria, just like Data Criteria, are assertions that can be found to be true or false, thereby providing a means for HQMF to formalize a measure's population parameters based on combinations of Data Criteria. Measure Observations are not criteria but rather definitions of observations used to score a measure; they are tied to a specific population, e.g. average systolic blood pressure.
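To make these notions concrete, the following purely conceptual sketch (not HQMF syntax) treats data criteria as true/false predicates over a patient record, combines them into denominator and numerator population criteria, and scores the measure as a simple proportion. The field names and the HbA1c threshold are invented for illustration.

```python
# Conceptual illustration only: invented field names and thresholds,
# not HQMF syntax or an official quality measure definition.
from typing import Callable, Dict, List

Patient = Dict[str, object]
Criterion = Callable[[Patient], bool]

# Data criteria: true/false assertions over raw record data.
has_diabetes: Criterion = lambda p: "diabetes" in p.get("diagnoses", [])
is_adult: Criterion = lambda p: isinstance(p.get("age"), int) and p["age"] >= 18
hba1c_controlled: Criterion = lambda p: p.get("last_hba1c", 99.0) < 8.0

# Population criteria: combinations of data criteria.
def in_denominator(p: Patient) -> bool:
    return is_adult(p) and has_diabetes(p)

def in_numerator(p: Patient) -> bool:
    return in_denominator(p) and hba1c_controlled(p)

def proportion_score(patients: List[Patient]) -> float:
    """Score the (toy) measure as numerator / denominator."""
    denom = [p for p in patients if in_denominator(p)]
    num = [p for p in denom if in_numerator(p)]
    return len(num) / len(denom) if denom else 0.0

# Example:
# patients = [{"age": 54, "diagnoses": ["diabetes"], "last_hba1c": 7.1},
#             {"age": 61, "diagnoses": ["diabetes"], "last_hba1c": 9.4}]
# print(proportion_score(patients))   # -> 0.5
```

A Measure Observation would, in the same spirit, be an aggregate (e.g. an average blood pressure) computed over one of these populations rather than a true/false criterion.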
Figure 2: eMeasure Development Process
3. Results and Outlook

The Health Quality Measures Format (HQMF) supports the development process of eMeasures for quality reporting (see Figure 2). It is an HL7 standard developed to streamline the process of developing interoperable quality measures for EHR systems using HL7 CDA R2 [4]. Looking into the future of health IT, it is important that eMeasures are taken into account in the HL7 EHR-S Functional Model and its emerging profiles. Furthermore, education on eMeasures and wide, world-wide awareness and adoption, fostering a shared understanding of concepts and interoperable implementations, will help develop consistent tools for measuring health care quality. As Lord Kelvin (1824-1907) put it: "If you cannot measure it, you cannot improve it."
References
[1] National Quality Forum. http://www.qualityforum.org
[2] Collaborative for Performance Measure Integration with EHR systems. http://www.ama-assn.org/ama1/pub/upload/mm/472/wkgrparecommendation.pdf
[3] Health Quality Measures Format: eMeasures. http://www.hl7.org/v3ballot/html/domains/uvqm/uvqm.html
[4] HL7 CDA R2 Quality Reporting Document Architecture (QRDA). http://www.hl7.org/documentcenter/Ballots/2008sep/downloads/CDAR2_QRDA_R1_DSTU_2009APR.zip
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-992
Clinical information systems: cornerstone for an efficient hospital management
Christian LOVIS
Division of Medical Information Sciences, University Hospitals of Geneva, Geneva, Switzerland
Abstract. The University Hospitals of Geneva are the largest consortium of public hospitals in Switzerland. The organization was created in 1995, after a political decision to merge the seven public and teaching hospitals of the Canton of Geneva. From an information technology perspective, it took several years to reach a truly unified vision of the complete organization. The clinical information system is deployed in all sites, covering in- and outpatient care. It is seen as the cornerstone of information management and flow in the organization, for direct patient care and decision support, but also for the management to drive, improve and leverage activities and processes, for better efficiency, quality and safety of care. As the system has become more important for the organization, it has required progressive changes in its governance. The high importance of interoperability and the use of formal representations has become a major challenge in order to be able to reuse clinical information for real-time care and management activities, and for secondary usage such as billing, resource management, strategic planning and clinical research. This paper gives a short overview of the tools that leverage this information for the management of physicians, nurses, human resources and hospital governance. Keywords. Hospital management; clinical information system
1 Corresponding author: Christian Lovis, University Hospitals of Geneva, Division of Medical Information Sciences, 1211 Geneva 4, e-mail: [email protected]

1. Introduction

Implementing and deploying a clinical information system in a care organization should be one of the most disruptive changes ever [1, 2], provided the organization understands the need for improving processes and culture, and the need for improved efficiency and quality of care. Thus, implementing such a system is only possible with deep changes in care and management processes. The high human and economic cost of inefficient care and errors has been well documented and has received a lot of attention [3]. One of the most striking facts is that, while care providers use 21st-century technologies daily, the healthcare system is often working and managed with the paperwork and processes of the 19th century [4]. Unfortunately, deploying information technologies in a healthcare organization is an important challenge, and the road is a large cemetery of failures and painful experiences [5]. Success factors are, however, well described: clinical leadership; strong involvement at the highest decision levels of the
organization; long and sustained financial and human investments; a reliable infrastructure; added value for all actors; and updated decision support are key factors. This work aims at presenting some aspects of what has been developed at the University Hospitals of Geneva to leverage the return on investment of the clinical information system in the domain of decision support for management.
2. Background

The University Hospitals of Geneva (HUG) constitute the major public care-providing consortium and teaching hospitals in Switzerland. They cover primary, secondary, tertiary and ambulatory care. HUG use an in-house developed clinical information system (CIS) that integrates commercial systems and covers all clinics and care. The system is written in Java, is service-oriented (SOA) and has a component-based architecture with message-oriented middleware. It has full paperless computerized provider order entry (CPOE) coverage, and it supports workflows, clinical pathways and complex decision support. The system builds complete transversal support for physician and nursing orders, and for the planning and execution of all care activities.
3. Decision support for management

Only some examples are shown to illustrate the various types of secondary usage of clinical information to support and leverage management in a hospital. These examples are grouped according to the various professions in the hospital.

3.1 Medical management

Several supports for medical management have been developed. Some of them concern standards and quality of care, such as whiteboards showing how clinicians use clinical pathways or how fast discharge letters and reports are signed. Others are more devoted to patient flows, such as synoptic views of the activity of the emergency department (Fig. 1).
Figure 1: Realtime emergency department activity
Figure 1 illustrates one of these tools. The dashboard can be seen on all terminals and is shown on large screens. It is automatically updated in real time with
activities in the ER, including the display of admission-discharge-transfers, diagnoses, infectious status and numerous other clinical information items. It helps the management of the ER and the proactive preparation of wards that will have to admit patients later.

3.2 Nursing management

One of the challenges in our hospitals is to manage our nursing staff in a clever and proactive manner. This means achieving a good match between staffing and needs in each ward. Figure 2 illustrates two reports used daily to organize human resources in wards. The left image shows the consolidated load per patient in a ward, and the right image displays the daily detailed load for one patient, for each type of care.
Figure 2: Predictive nursing load in a ward
Because the complete nursing activity is planned and computerized, it is possible to know in advance the exact care planned for each patient individually, and to compute the global care requested for each ward, by type of care and by group of patients, and thus to allocate resources accordingly. Because all care is validated after execution, the nursing management can then measure the adequacy between what was requested and what has really been delivered.

3.3 Hospital management

There are numerous whiteboards and indicators used by various people in the administration of the hospital, from logistics units such as the pharmacy to the billing center and top management. Figure 3 illustrates one of the consolidated views, which displays a "radar" view of a department, with each branch being one of the institution-wide indicators. These indicators include high-level information such as satisfaction; absenteeism; bed occupancy; patient cost weight; outpatient clinic revenue; the percentage of discharge letters signed within 7 days after discharge; the evolution of the number of FTEs; the number of inpatients; length of stay; etc. That is, they are indicators of satisfaction, revenues, costs, means and resources, and efficiency. These indicators are computed using the information existing in the hospital information system and have been built to bring real added value for the management of the departments.
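The paper does not detail how such indicators are derived; the following sketch is an illustration only, using invented stay-record fields, of how a few of the department-level indicators named above (number of inpatients, length of stay, bed occupancy, timely discharge letters) could be consolidated from routine hospital information system data.

```python
# Rough illustration only: invented field names, not the HUG indicator definitions.
from statistics import mean
from typing import Dict, List

def department_indicators(stays: List[dict], beds_available: int, period_days: int) -> Dict[str, float]:
    """Consolidate a few routine indicators for one department from HIS stay records."""
    if not stays:
        return {}
    letters_on_time = [s for s in stays
                       if s.get("letter_signed_days") is not None
                       and s["letter_signed_days"] <= 7]
    occupied_bed_days = sum(s["length_of_stay"] for s in stays)
    return {
        "inpatients": float(len(stays)),
        "mean_length_of_stay": mean(s["length_of_stay"] for s in stays),
        "bed_occupancy": occupied_bed_days / (beds_available * period_days),
        "letters_signed_within_7_days": len(letters_on_time) / len(stays),
    }

# Example:
# stays = [{"length_of_stay": 4, "letter_signed_days": 3},
#          {"length_of_stay": 10, "letter_signed_days": 12}]
# print(department_indicators(stays, beds_available=20, period_days=30))
```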
Figure 3: Global view of institutional indicators for the management of departments
4. Conclusion

Hospital and clinical information are cornerstones on which to build management decision support. Daily routine information, from demographics to direct patient care, can be reused to provide decision support at all management levels of hospitals. The real challenge is to have a tightly interoperable system, with common and shared semantics and definitions. Without these, large data warehouses will not be able to provide high added-value knowledge from consolidated sources such as logistics, human resources and care. Providing this kind of decision support is a very strong incentive for sustained investment in this field and brings healthcare management into the 21st century.
References
[1] Thouin MF, Hoffman JJ, Ford EW. The effect of information technology investment on firm-level performance in the health care industry. Health Care Manage Rev. 2008 Jan-Mar;33(1):60-8.
[2] Lorenzi NM, Ash J, Einbinger J, McPhee W, Einbinger L, editors. Transforming Health Care through Information. New York: Springer-Verlag; 2004.
[3] Kohn LT, Corrigan J, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
[4] ITAC. Report to the President, Revolutionizing Health Care through Information Technology. President's Information Technology Advisory Committee. June 2004.
[5] Aarts J, Berg M. Same systems, different outcomes--comparing the implementation of computerized physician order entry in two Dutch hospitals. Methods Inf Med. 2006;45(1):53-61.
User Centred Networked Health Care A. Moen et al. (Eds.) IOS Press, 2011 © 2011 European Federation for Medical Informatics. All rights reserved. doi:10.3233/978-1-60750-806-9-996
Patient Centered Integrated Clinical Resource Management
Jacob HOFDIJK
Partner in Casemix; Coordinator of the implementation of integrated care funding at the Ministry of Health, the Netherlands
Abstract: The impact of funding systems on the IT systems of providers has been enormous and has prevented the implementation of designs focused on the health issue of the patient. The paradigm shift the Dutch Ministry of Health has taken in funding health care has had a remarkable impact on the orientation of IT systems design. Since 2007 the next step has been taken: the application of the funding concept to chronic diseases, using clinical standards as the norm. The focus on prevention involves the patient as an active partner in the care plan. The impact of the new dimension in funding has initiated a process directed at the development of systems to support collaborative working and the active involvement of the patient and their informal carers. This national approach will be presented to assess its international potential, as all countries face a long-term care crisis, lacking the resources to meet the health needs of the population. Keywords: Problem Oriented Medical Record, Chronic Disease Management, Integrated care, Casemix, Individual Careplan, Personal Health Record.
1 Corresponding author: [email protected]

1. Introduction

The health care reforms of the last thirty years have been driven by the inability of stakeholders to manage the health care system and, most of all, its outcomes. The first steps were attempts to identify the health care process within hospitals. After the initial work of Ernest A. Codman in 1914 [1] on the definition of the product of the hospital, it took until the end of the seventies before Robert Fetter and John Thompson introduced the concept of Diagnosis Related Groups (DRGs) [2]. The development of the system was focused on managing the quality of health care delivery in US hospitals by identifying outliers within a group of similar patients. The DRG or CaseMix approach has travelled across the globe in the last three decades and has been tested, adapted and applied in many countries, but mainly for funding purposes. The original purpose, to measure the outcome of the health delivery system, has still to be achieved. One of the main reasons for not achieving this has been the focus on the inpatient episode, which limits the view to one admission of a patient. As the admission is not seen from the perspective of the health issue of the patient, it is only a fragment of the patient's journey through the health care system. The main reason for this limitation is the lack of data about the journey of the patient. The health care system is traditionally constructed in silos, following the way the funding systems are organised. The traditional funding of health care systems has its main focus on the
provision of care by primary, secondary and tertiary care. Porter and Teisberg argued in "Redefining competition in health care" for the need to focus on the full treatment cycle as the way to really add value for the patient [3]. The application of this approach requires a paradigm shift from a supply to a demand orientation. A demand orientation is in line with the broadly adopted patient-centred approach, which however still lacks international implementation. The best guidance for this approach is still the problem-oriented methodology introduced by Lawrence Weed as early as 1969 [4]. Although widely acknowledged as the core concept for medical records and medical treatment, it has been neither adopted nor implemented within IT systems on a scale that would match the support needed for the patient-centred approach. A missing link with the funding of health care delivery seems to be key.
2. The Dutch breakthrough

The Dutch healthcare system has introduced the paradigm shift in its reform by changing the relationship between the main stakeholders: the patients, the providers and the insurance companies. One of the important actions was the introduction of a national insurance scheme, which is mandatory for all citizens and provides access to the base set of services. The new law also requires the insurance companies to contract health care services on price and quality. The new infrastructure offers the opportunity to really focus on the patient and to change the traditional relations between payers and providers. With the introduction of care products as the basis for contracts, resource management in hospitals has changed fundamentally. Instead of a focus on dumb parameters, like admissions, bed days and first visits, information is required at the level of patients treated for a specific disease.

2.1. Hospital funding based on contracts

The Dutch decided not to implement the DRG system, as it only focuses on inpatients and the objective of the Dutch approach was to take the health issue of the patient as the focal point. That required a new approach to the registration of clinical data in hospitals, and a step towards the problem-oriented medical record, as data had to be recorded at the level of the episode of care, as defined by Hornbrook [5]. Since 2000, Dutch hospitals have registered data by episode, starting with the referral of the patient to a medical specialist in the hospital. At that moment a care trajectory record is created within the IT systems for that specific health issue. It is used as a reference for both clinical information, like the referral information, the care request, the diagnosis and the treatment, and process information about encounters, examinations, tests, diagnostics and surgical procedures. The information at the level of the episode is used to support the physician during the care process, but it is also gathered in management databases at the institutional level. With this information, profiles can be created at different levels of aggregation: for individual patients, at the level of providers, at the level of diseases treated, and in many other ad hoc views. Through the structural link of the data to the health issue of the patient, described by both the care request and the diagnosis, a new dimension has been created in the resource management of hospitals. As the shift in the funding of hospitals from budgeting to contracting will be completed in 2012, the hospitals need to change their information
management strategies. In 2011 hospitals need to contract over 75% of their products with insurance companies, so they need information about the profiles of their care products and the associated costs. The sense of urgency to have actual cost price information about procedures and ancillary services is therefore growing by the day. Examples of the newly developed management information will be presented.

2.2. Information on quality

Since the introduction of the new funding scheme, attention has been given to quality both by the Health Care Inspectorate (IGZ) and by the providers, coordinated by the Dutch Medical Association ("Quality of care as front"). The focus of these projects was the development of indicators to be used in the contracting process between hospitals and insurers. The ambition was high, but it turned out to be quite difficult to achieve consensus about the indicators. The final report describes the indicators, which have been defined for 10 diseases. It was a tedious process to define the indicators and the required parameters. A very positive development was that the insurers confirmed that they would use the indicators as important parameters in the contracting process. The insurer CZ has actively committed to excluding from contracting hospitals that provide colon cancer treatment below the national quality thresholds. The announcement of this policy created a lot of discussion, which only underlines the breakthrough of the quality dimension in the hospital contracting equation.

2.3. Chronic care funding

The next step in the process of the health reform dealt with chronic diseases, partially driven by the spectacular growth expected for the coming decades. To prevent a long-term care crisis in 2025, action was needed. An important development was the introduction of the concept of the care standard, which describes good care for chronic care patients based on guidelines and protocols. The Dutch Diabetes Federation developed the first care standard in 2003. The standard was the result of a close collaboration of over twenty different provider associations and the patient association. The care standard describes three main aspects of the prevention of and care for chronic diseases: the care, the organization, and the indicators of quality. Another principle of the care standard is the individual care plan, which is coordinated for and with the patient and a multidisciplinary team of care providers. In 2007 a pilot project was run by ZonMw [6] with 10 different so-called care groups to organise and contract the delivery of a disease management program for diabetes. It was a kind of extension of the health-issue approach in hospitals, but now for chronic diseases. The care group was introduced as a new entity that contracts, in one market, the different care providers involved in chronic disease management and, in a second market, the insurance companies. After the pilot, the contracting of disease management programs for diabetes achieved national coverage. One important element of the program is the development of software not only to exchange information between providers, but also to manage the treatment plan. Currently a number of regional data centers are in place, collecting information and sharing performance indicators among the care providers [7].
These data centers will combine their information into a national registry, which will both provide benchmark information to providers and publish information that helps patients to better choose providers and supports them in managing their disease.
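No data model is given for the episode-based registration described in Section 2.1. Purely as an illustration, the sketch below uses hypothetical field names to show how a care trajectory record could link clinical information (care request, diagnosis) and process information (encounters, tests, procedures) to one health issue, and how such records could be aggregated into disease-level profiles.

```python
# Illustration only: hypothetical field names, not the Dutch DBC data model.
from dataclasses import dataclass, field
from typing import List
from collections import Counter

@dataclass
class ProcessEvent:
    kind: str          # e.g. "encounter", "lab test", "imaging", "surgery"
    date: str          # ISO date, kept as text for simplicity

@dataclass
class CareTrajectory:
    patient_id: str
    care_request: str  # health issue as stated at referral
    diagnosis: str     # diagnosis established during the episode
    provider: str
    events: List[ProcessEvent] = field(default_factory=list)

def disease_profile(trajectories: List[CareTrajectory], diagnosis: str) -> Counter:
    """Profile of process events for all episodes with a given diagnosis."""
    profile: Counter = Counter()
    for t in trajectories:
        if t.diagnosis == diagnosis:
            profile.update(e.kind for e in t.events)
    return profile

# Example:
# t = CareTrajectory("p1", "knee pain", "gonarthrosis", "orthopaedics",
#                    [ProcessEvent("encounter", "2011-02-01"),
#                     ProcessEvent("imaging", "2011-02-08")])
# print(disease_profile([t], "gonarthrosis"))   # Counter({'encounter': 1, 'imaging': 1})
```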
3. Conclusion

The Dutch shift to patient-centred care has resulted in real changes in the delivery system. It has changed the relations between the stakeholders so structurally that there is no way back. The traditional silos are cracking, while care providers and patients are looking for state-of-the-art 2.0 solutions to develop supporting information systems that link to the personal health record of the patient, to further improve patients' quality of life. The impact on information systems is enormous, and the use of information for clinical resource management will gradually shift from a price orientation to finding the best quality for the patient in a joint process of providers, patients and insurers. So, in the end, the dreams of Codman and Weed will come true and will provide the next generation with a sustainable health care system, using the problem-oriented record even across institutions and involving the patient.
References
[1] Codman EA. Arch Pathol Lab Med. 1990 Nov;114(11):1106-11.
[2] Fetter RB, Shin Y, Freeman JL, Averill RF, Thompson JD. Case mix definition by diagnosis-related groups. Med Care. 1980 Feb;18(2 Suppl):iii, 1-53.
[3] Porter ME, Teisberg EO. Redefining competition in health care. Harv Bus Rev. 2004 Jun;82(6):64-76, 136.
[4] Weed LL. Medical records, medical education and patient care: the problem-oriented record as a basic tool. ISBN 0-8511-9188-X.
[5] Hornbrook MC, Hurtado AV, Johnson RE. Health care episodes: definition, measurement and use. Med Care Rev. 1985;42(2):163-218, p. 171.
[6] Struijs JN, Baan CA. Integrating Care through Bundled Payments - Lessons from the Netherlands. N Engl J Med 2011; 364:990-991, March 17, 2011.
[7] Samenwerking en Samenhang in de keten. Evaluatie en resultaten Project DiabeteszorgBeter. Zwolle: H. Bilo; 2009 [in Dutch].
Subject Index 3LGM2 537 abstracting and indexing 492 accelerometer 897 access by mobile phone 417 access control 601 accreditation 218 Actigraph GT3X 445 activity analysis 295, 422 activity 18 AdaBoost 574 adolescent 8 adoption 335 adverse drug event 412, 569 adverse drug events (ADE) detection 699 affordance 63 Africa 666 agents 300 alert 930 ambient assisted living 460 ambulance 349 analysis methods 460 Android 83 annotation 559 antibiotic cost 477 anticoagulant therapy 43 aortic aneurysm 359 application development framework 724 application 213 archetype 255, 774, 789, 799 archetype-conform EHR extract 799 architecture 739 architecture of participation 280 Arden syntax 165 assessment-evaluation 925 AT 275 ATC drug classification 512 attitude to computers 960 augmented medical intervention 175 autism 270 automatic IE from patient records 527
awareness 364 baseline survey 387 benchmarking 542 binary classifiers 579 biobank 644, 887 biocomputational modelling 432 biomedical ontologies 714, 739 biomedical relations 739 biomedical research 867 biomedical terminology 844 bio-ontologies 145 blended e-learning 213 booklets 63 BPMN 2.0 482 business layer 537 camera 455 cancer 8 cancer documentation 892 capacity building 666 cardiac rehabilitation 88 case-based learning 203 Casemix 996 cataloguing 492 categorial structure 844 CDA 305 CDISC ODM 857 CDSS 135 CEN EHR 13606 689 center 892 certification 654 change management 829 chemotherapy 392 children 18 chronic disease management 996 chronic disease 33 citizen empowerment 98 ClaML 744 classification 465, 749, 754 classifier performance 532 clinical application 724 clinical coding 594 clinical core processes 482
clinical data management system 902 clinical decision support (CDS) 130, 140, 930 clinical decision support system (CDSS) 103, 120, 150, 195, 412 clinical finding 809 clinical guideline 130, 477 clinical information provision 155 clinical information system 335, 902, 965, 992 clinical investigation 834 clinical knowledge resource 839 clinical pathways 482 clinical process monitoring 507 clinical reasoning 666 clinical research informatics 955 clinical rules 115 clinical terminology 809 clinical text 559 clinical times 507 clinical trial 170, 325, 734 clinical work practice 374 clinical workflow 432 clinicians adherence 930 cloud computing 93, 115, 379 CME 238 CO2 reduction 970 co-construction 68 cognitive rehabilitation 779 collaboration 280, 364 collaborative health care delivery 417 communication server 170 communication standard 704 communication support 359 comprehensive cancer 892 computer assisted medical intervention 175 computer communication networks 522 computer simulation 180 computer utilization 960 computer-assisted drug therapy 950 computer-assisted image analysis 465 computerized patient simulator 666 concept representation 774 conceptual model 427 confidentiality 862 consent 621 consumer health information 38
consumer participation 13 context factor 920 contextual inquiry 965 continuous sensor data 460 controlled vocabulary 492, 502 coordination 606 COPD 28, 455 CPOE 135, 290, 320, 392, 920, 940 creativity 977 critical care 402 cross-sectional studies 960 data collection 13 data binding 724 data completeness 872 data driven 140 data integration 185, 857, 867 data privacy 661 data transfer 58 data warehouse 170 database 862 date elements 517, 714 decision making 397 decision support 125, 165, 839, 945 de-identification 606, 862 Delphi 920 dementia 120 Dengue fever 629 design 392, 925, 930 detailed clinical models 774 diabetes 23, 48, 103, 369, 594, 950 diagnosis 120 diagnosis reasoning 559 dictionary 38 diet 23 differential adhesion 882 digital pen 325 dilated cardiomyopathy 907 discrete wavelet transform 470 disease surveillance 160, 639 diversity 330 drug prescription 125, 512 drug safety 325, 794 drug toxicity 794 durability 68 dynamic cell seeding 882 dynamic Web server 270 dyslipaemia 125 economic evaluation 407 economics 208
eConsent 344 edge detection 470 edge strength 470 education 213, 218, 422 e-health 68, 155, 265, 407, 970 e-health service delivery 537 EHR re-use 872, 902 elderly people 681 e-learning 238, 248 electronic collaboration 354 electronic data capture 325 electronic health record (EHR) 38, 58 243, 255, 285, 295, 305, 310, 344, 349, 354, 369, 374, 379, 437, 559, 589, 601, 694, 774, 799, 809, 849, 862 electronic medical record 502, 824 electronic patient record 83, 94, 260, 285, 339, 374, 584 electronic prescribing 320, 920 electronic symptom reporting 13 e-medication 920 emergency 349 emotions 63 empirical study 243 encounters 295 enhancing biomedical research 907 entity-relationship graph 3 epidemic intelligence 160, 639 epidemiologic surveillance 629 e-prescription 374 EPS 374 evaluation 18, 78, 120, 125, 208, 238, 339, 402, 432, 920 event-driven architecture 160 evidence 208 evidence-based guidelines 125 evidence-based medicine 769 exercise adaptation 779 expected medical benefit 175 expert system 195, 714 eye-tracking 945 Facebook 616 factuality levels 559 federation 644 flexibility 68 follow-up 872 form generation 799 formative evaluation 417 French Guiana 629
fuzzy logic 170 gait parameters 445 GAITRite 445 game based training 228 GCM 537 GELLO 130 genomic medicine 165 goal directed design 228 GP (General Practitioner) 344, 354 GPS 349 grid networks 450 GUI 849 guideline compliance computation 512 guidelines 487, 877 H1N1 564 Health 2.0 649 health care 88, 260 health informatics 208, 218, 223, 877 health information 73, 616 health information systems 295, 335, 422 health information technology (HIT) 387, 877 health information technology standards 989 health insurance 649 health interventions 754 health personnel 960 health professional workstation 925 health services research 407 health technology assessment 407 health Web 53 healthcare 379 healthcare informatics research 980 healthcare interoperability 729 healthcare modernisation 374 healthcare policy issues 285 healthcare practices 63 healthcare processes 93 healthcare professionals 28 healthcare standards 804 healthcare teams 719 healthcare terminology 759 healthcare Web 2.0 280 health-enabling technologies 18, 460 heath care quality assurance 180 heterogeneous data integration 502 HIS management 522
HL7 170, 694, 709, 774, 834 HL7 Version 2.x 704 HL7/ISO CDA R2 689 HL7/ISO Clinical Genomics 689 HL7/ISO RIM 689 home monitoring 671 HONcode 654 HONcode certification 53 Hooke and Jeeves Pattern Search 554 hospital 335, 507 hospital acquired infection 145 hospital information system (HIS) 402, 542, 849, 872, 930 hospital information system integration 887 hospital information system success 427 hospital management 992 hospital network 155 human-computer interaction 280, 915 human factors 412 hypertension 634 i2b2 502, 887 IBM Medics 3 ICNP 759, 764 IHE 265 image segmentation 470 impact assessment 432 implementation 330, 392, 809 indexing 584 India 960 individual careplan 996 infectious disease 629 information and communication technology 78, 719 information modeling 774 information overload 369 information provision 8 information retrieval 477, 549, 584 information security 601 information sharing 764 information storage and retrieval 492 information system success 427 information system (IS) 270, 392, 427, 517, 634 information technology 33 information translation 155 information visualization 945 infrastructure 537, 644
innovation 78 INR 43 in-situ 915 in-situ interviews 63 insulin 103 integrated care 996 integrated medical-dental electronic health record 387 integrating the healthcare enterprise 482 intensive care 397, 402, 945 interaction design 228 inter-device-variability 897 Internet 53, 73, 270, 315, 492 Internet usage 73 interoperability 98, 165, 185, 295, 305, 694, 704, 709, 849 intervention 749 intractable disease 255 ISO 13119 839 ISO/CEN 13606 255, 799 ISO/IEC 11179 744 isolated healthcare professionals 666 IT governance 275 IT infrastructure framework 892 iterative design 955 k-nearest neighbour method 579 knowledge discovery on databases 734 knowledge management 145, 699 knowledge representation 689 knowledge-sharing 190 knowledge-utilisation 190 laboratory medicine 487 laboratory results interpretation 195 language 769 law and security 417 liver diseases 195 liver function tests abnormalities 195 logical information model 804 logistics 315 machine learning 140, 554 magnetic resonance imaging 465 management 977 mapping 709, 764 MDR 644 meaning 829 medical alert 940 medical consultation 190 medical device 834
medical education 233, 248 medical imaging 611 medical informatics 223, 549, 794 medical informatics applications 960 medical intelligence 671 medical providers’ dental data need 387 medical terminologies 739 medical text archiving 190 medical text retrieval 190 medical ward – technical service communication 135 medical-dental holistic care 387 medication automation 374 MedWISE 280 messaging 804 metadata 203, 644, 839, 857 metadata registry 175, 744 methodology 719 m-health 33, 48, 83 mobile computing 950 mobile phone 23 mobile Web services 349 modeling 487, 497, 729 monitoring and clinical context 412 MRI 784 multi-class classification 579 multidimensional data 115 multi-method approach 427 multimodal mining 477 multi-modal information search 450 narrative medical records 764 national deployment 354 natural language processing (NLP) 527, 549, 589, 594, 769, 794, 887 NCI Thesaurus 714 neonatal intensive care 115 network analysis 564 network 644 networked clinical research 857 neurological diseases 671 nomenclature 460 nosocomial infection 554 nurse 320, 402 nursing 985 nursing information system 339 nutrition 23 observable entity 809
obstructive lung disease 594 occupational medicine 238 occupational therapy 676 oncology research 887 ontological reasoning (OWL2) 512 ontology 165, 185, 584, 661, 694, 699, 719, 734, 749, 754, 779, 784, 789, 844 ontology modularization 517, 714 open source 265, 445 openEHR 255, 724, 789, 849 opereffa 724 optimization 554 organisational change 260 organization 135 organs transplantation 300 OSCE 233 osteoporosis 432 otoneurology 579 OWL 714, 729, 784 P2P environment 661 PACS 397 palliative care 437 Parkinsonian syndromes 465 Parkinson’s disease 594 partial least squares 243 patient consent 58, 203 patient empowerment 681 patient management 364 patient non-adherence 634 patient records 275 patient recruitment 170 patient register 857 patient safety 601 PCT 437 PEHR 344 personal health information 606 personal health record (PHR) 63, 98, 108, 344, 996 personalization 48 pervasive developmental disorder 270 pervasive health 497 pharmacogenetics 569 pharmacovigilance 794 physician acceptance 150 physician-patient relations 13 physicians’ information needs 369 podcast 248 point of care decision making 190
policy 208 politics 330 practice consultants 354 practitioner liabilities 611 prediction 574 prescribing 935 prescription appropriateness 487 preventive integrated care 28 privacy 285, 497, 606, 616, 621 problem lists 819 problem oriented medical record 996 process analysis 507 process assessment 542 professionalism 218 prognostic 140 provider and organization registry 265 public health surveillance 629 pulmonary rehabilitation 455 qualitative research 290, 392, 877 quality assessment 814 quality criteria 654 quality indicators 88, 634 quality measures 989 quality of care 374 quality of information systems 542 question answering 549 radiology 359 radiology information systems 402 real-time analysis 115 reasoning 789 regional health information networks 310 regional health networks 265 relevance 339 reminder system 872, 930 repositories 203 requirements 354, 392 research agenda 977 research management 977, 980, 985 resistance profile 477 REST architecture 108 review 13, 223, 769 RF2 829 risk adjustment 180 RN4CAST 985 robot 897 ROC curve 532 safety 374
scaffold 882 scales of infrastructure 68 schizophrenia 574 scientific medical corpora 814 screening for abdominal aortic aneurysm 228 SDLC 392 secondary use 502 security 285, 450, 621 self care 103 self-help 23 self-management 23, 33, 43 self-monitoring 43 semantic integration 185 semantic interoperability 517, 804, 824 semantic mediation 734 semantic model 754 semantic reasoning 699 semantic web tools 512 Semantic Web 729 semantic wiki 93 semantics 502, 794 sensitivity 574 serious adverse event 834 service events 295 service-oriented architecture (SOA) 98, 295, 310, 349, 867 shared decision making 935 shared record 359 signal detection 794 signal generation 639 single source 892, 902 single source information system 872 site visit 422 skill training application 228 smart objects 315 SNOMED CT 764, 809, 814, 819, 824, 829 social constructivism 374 social media 48 social networking 616 social-medical discovery 3 socio-technical approach 339, 422 software design 412 specificity 574 spontaneous reporting system 564 standard 98, 344, 709, 749, 839, 844 statistical modeling 532
stroke 676 structuring and contextualization of medication events 527 support vector machines 579 surgery 359 surgical site infections 145 survey 73 sustainable broadband-enabled services 970 Swedish 559 SWOT 379 SWRL 714 system architecture 305, 522, 902 system implementation and management 285 system theory 739 systematic review 407 systems integration 310 task analysis 940 technical infrastructure 325 tele-assistance 681 telehealth 621 telemedicine 103, 611, 661, 666 tele-rehabilitation 28, 676 teletriage 407 term mapping 814 term validation 814 term variation 814 terminology 759, 769, 794 terminology life cycle model 759 terminology system 764, 824 ternary logic 170 test ordering 487 text accessibility 681 text mining 160 theoretical models 223 time consumption 320 tissue construct 882 traceability 275 transinstitutional collaboration 359 translational research 887, 892, 907 translations 819 transparency 53, 654
triangulation study 369
trust 497
trustworthiness 53, 654
ubiquitous computing 497
UML 729
UMLS 819
UML class diagram 704
usability 208, 260, 915, 925, 940
usability evaluation 228, 945, 955
usability testing 915
usefulness 339
user configurability 280
user interface 487, 925
user involvement 392
user survey 920
user training 93
user-centred design 965
user-computer interface 950
VAERS 564
validation 897
value-set 517, 714
vector-borne disease 629
virtual medical record (vMR) 130
virtual patient 203, 233
virtual reality 676
virtual university 248
vocabularies 744
VPH 432
ward round 213, 397, 935
Warfarin 569
watermarking 611
wavelet domain 470
Web 2.0 649
Web based ulcer record 417
Web services 661
wellbeing 78
Wiimote 455
wireless technology 28
workarounds 290
workflow 482, 734, 950
WWW 589
XML 709
Author Index
Aarts, J. v, 290, 392, 877
Abdoune, H. 819
Acharya, A. 387
Adams, S. 877
Adlassnig, K.-P. 165
Allaert, F.-A. 611
Altmann, J. 482
Ammenwerth, E. 208, 369, 522, 799, 920
Andersen, S.K. v, 28, 977
Andresen, H. 606
Angelova, G. 527
Anguita, A. 734
Arbustini, E. 907
Ardillon, V. 629
Asim, M. 621
Atalag, K. 849
Auverlot, B. 611
Avery, A. 374
Backfried, G. 160
Bagayoko, C.O. 666
Baker, C.J.O. 145
Bakken, S. 280
Bal, R. 392, 877
Balka, E. 285
Ball, R. 564
Bánhalmi, A. 671
Barber, N. 374
Barch, A. 233
Bärthlein, B. 892
Bartholomäus, S. 644
Barthuet, E. 155
Bartz, C.C. 759
Baujard, V. 53, 654
Baysari, M. 935
Beck, P. 950
Beck, T. 265
Beckmann, M.W. 892
Bediang, G. 666
Bellazzi, R. 887, 907
Bellika, J.G. 455
Ben Said, M. 270
Bergh, B. 265, 344
Bernicot, T. 584
Bernonville, S. 412
Berntsen, G. 13
Bertaud Gounot, V. 714
Bertaud, V. 517
Betrancourt, M. 940
Beuscart-Zephir, M.-C. 208, 412
Beyer, A. 892
Bianchi, S. 689
Bird, L. 804
Birkle, M. 265
Blinn, N. 649
Blobel, B. 305, 497, 694, 704, 739
Blom, S.R. 78
Boelmans, K. 465
Boere-Boonekamp, M.M. 78
Bohec, C. 517
Bonderup, M.A. 43
Borycki, E.M. 379, 915
Botsis, T. 564
Bouaud, J. 125, 512
Bourdé, A. 517, 714
Bousquet, C. 749, 754, 844
Boyer, C. 53, 73, 654
Boytcheva, S. 527
Brattheim, B. 359
Breil, B. 502, 902
Brender, J. 208
Briggs, J. 223
Bringay, S. 629
Brochhausen, M. 734, 739
Broeren, J. 676
Brooks, C. 804
Brunet, P. 248
Brüntrup, R. 437
Bucalo, M. 907
Buckeridge, D.L. 145
Bucur, A. 734
Buffa, F. 734
Burgun, A. 784
Bürkle, T. 325, 502, 892
Cameron-Tucker, H. 33
Campillo-Gimenez, B. 584
Cao, F. 699
Carmeli, B. 140
Carrasqueiro, S. 407
Carvalho, L. 629
Casey, A. 844
Catley, C. 115
Ceusters, W. 829
Cheong, Y.C. 804
Chevrier, R. 195
Chiarugi, F. 950
Chiba, T. 255
Chomutare, T. 48
Choquet, R. 185
Christiansen, E.K. 417
Chronaki, C. 989
Chyou, P.-H. 387
Cinquin, P. 175
Coatrieux, G. 611
Cohen, G. 554
Colombet, I. 135, 769
Comac, P. 218
Conti, C. 689
Cornet, R. 824
Creswick, N. 397, 402
Croner, R. 892
Cronin, P. 955
Cruchet, S. 73
Cruz-Correia, R. 275, 300
Cserti, P. 671
Cuggia, M. 248, 517, 584
Cummings, E. 33
Cunha, J.P.S. 310
Cusi, D. 689
Dagliati, A. 887
Dahamna, B. 492
Dalianis, H. 559
Darmoni, S.J. 492, 819
Daskalakis, S. 243
Daumke, P. 594
Davies, D. 203
Day, R. 935
de Bruijn, B. 532
de Clercq, P.A. 103
de Keizer, N. 88, 180, 208, 824, 925
de la Cruz, E. 305
Defude, B. 661
Denecke, K. 160, 639
Detschew, V. 507
Deuster, T. 265
Di Giacomo, P. 970
Dias, A. 445, 897
Diederichs, S. 867
Dinesen, B. 28
Dolin, B. 989
Dolog, P. 160
Donfack, V. 714
Dormann, H. 325
Döring, A. 445
Dreesman, J. 160, 639
Duclos, C. 487
Duftschmid, G. 369, 799
Dugas, M. 502, 872, 902
Dulai, T. 671
Dumontier, M. 165
Dupuch, M. 794
Durand, T. 155
Durieux, P. 135
Duvauferrier, R. 517, 584, 714, 784
Dziuballe, P. 902
Ebrahiminia, V. 125
Eccher, C. 108
Egbert, N. 335
Eghdam, A. 945
Ehrler, F. 83
Ekeland, A.G. 417
Ekinci, O. 507
Eklund, A.-M. 549
El Ghazali, A. 270
El-Masri, S. 349
Encarnação, P. 407
Eyraud, E. 155
Falcoff, H. 125
Falkenhav, M. 945
Faraggi, M. 135
Farkash, A. 689, 729
Favalli, V. 907
Favre, M. 125
Faxvaag, A. 359, 364, 601, 980
Fayn, J. 661
Fernandez Luque, L. 455
Fernández-Breis, J.T. 789
Ferraz, V. 300
Fescharek, R. 794
Finlay, D. 218
Finozzi, E. 238
Fischer, A.S. 857
Fischer, M. 265
Fitzpatrick, P. 33
Flamand, C. 629
Flatow, F. 265
Forkert, N.D. 465
Forsman, J. 945
Forster, A.J. 145
Forster, C. 902
Fosse, E. 38
Frey, A. 335
Fritz, F. 902
Gabetta, M. 907
Gallos, P. 243
Ganeshkumar, P. 960
Ganslandt, T. 502, 892
Ganzinger, M. 867
Garcelon, N. 584
Garin, E. 584
Garin-Michaud, A. 155
Gattnar, E. 507
Gaudinat, A. 477, 654
Geissbuhler, A. 53, 666
Georg, G. 135
Georgiou, A. 223
Ghedira, C. 661
Gietzelt, M. 460
Gilad, D. 233
Giorgi, I. 238
Gjære, E.A. 606
Gobeill, J. 477
Goldschmidt, Y. 689
Golse, B. 270
González, C. 305, 694
Goossen, W. 774
Göransson, B. 260
Gorzelniak, L. 445, 897
Grabar, N. 769, 794
Graf, N. 734
Griffon, N. 492
Grimsmo, A. 601
Grisot, M. 68
Guardia, A. 940
Guirao Aguilar, J. 455
Hackl, W.O. 920
Hains, I.M. 397, 402
Håkansson, P. 970
Handels, H. 465
Hangaard, S.V. 43
Hanmer, L.A. 427
Hanna, P. 218
Hanser, S. 594
Happe, A. 584
Harrison, J. 634
Hartvigsen, G. 23, 48, 445
Hartz, T. 437
Harvey, J. 374
Hauser, J. 33
Haux, R. 18, 460
Hege, I. 203
Heid, J. 203
Heimly, V. 354, 601
Heinrich, R. 265
Heinze, O. 344
Hejlesen, O.K. 28, 43
Helm, E. 482
Henriksen, E. 13
Hermanides, J. 103
Herzberg, S. 872
Hibberd, R. 374
Hoekstra, J.B. 103
Hofdijk, J. 996
Höll, B. 950
Holleman, F. 103
Holst, B. 465
Holzinger, A. 950
Horsch, A. 13, 445, 897
Horton, A. 955
Househ, M. 616
Hoy, D. 759
Hrdlicka, J. 574
Hurlen, P. v
Hübner, U. 335
Hübner-Bloder, G. 369, 799
Hyppönen, H. 208
Iltanen, K. 579
Imbriani, M. 238
Isaacs, S. 427
Ishihara, K. 255
Issom, D. 83
Itälä, T. 295
Jaffe, C. 989
Jahn, F. 542
Jais, J.P. 270
James, A. 115
Jamet, A. 794
Janols, R. 260
Jaques, D. 195
Jaspers, M.W.M. 150, 925, 930
Jaulent, M.-C. 794
Jessup, M. 33
Johansen, M.A. 13
Johansen, M.D. 43
Johansen, T.S. 601, 980
Johansson, B. 676
Johnson, S.B. 955
Joubert, M. 819
Joutsijoki, H. 579
Juhola, M. 579
Jung, B. 754
Jung, C. 754
Jung, M. 920
Kajiwara, M. 73
Kanatani, Y. 255
Kannry, J. 915
Kapp, C. 265
Kashfi, H. 724
Katharaki, M. 243
Kaufman, D.R. 955
Kemps, H. 88
Kenealy, T. 634
Kennelly, J. 634
Kent, C. 140
Kilsdonk, E. 150
Kim, S. 754
Kimura, E. 255
Kirchner, G. 160
Kirchner, M. 325
Klein, G.O. 839
Klema, J. 574
Knaup, P. 867
Knijnenburg, S.L. 150
Knoll, A. 897
Kobayashi, S. 255
Koch, S. 945
Koetsier, A. 180
Kohler, M. 369, 799
Kokkinakis, D. 814
Kommeri, J. 450
Kononowicz, A.A. 203
Kontogiannis, V. 950
Korpela, M. 422
Kortekangas, P. 295
Koster, P. 621
Kozmann, G. 671
Köpcke, F. 502, 892
Kósa, I. 671
Kraaijenhagen, R. 88
Kreuzthaler, M. 589
Kuehne, M. 649
Kumar, A. 749, 754, 844
Kuo, M.-H. 379
Kushniruk, A.W. 379, 915
Kuwata, S. 915
Kuziemsky, C.E. 719
Kvist, M. 559
Lablans, M. 644
Laforest, F. 155
Lamy, J.-B. 125, 487
Landais, P. 270
Landau, D. 140
Langemeijer, M.M. 930
Lapão, L. 275
Larizza, C. 907
Lasbleiz, J. 714, 784
Lauesen, S. 862
Laurent, J.-F. 584
Laversin, S. 654
Le Beux, P. 248
Lechtenbörger, J. 902
Lee, B.C. 569
Lee, E. 23
Lemkes, B.A. 103
Leonardi, G. 779
Lerch, M. 794
Leroy, N. 412
Lewalle, P. 749, 754, 844
Li, B. 699
Li, J. 699
Liaskos, J. 243
Liebe, J.-D. 335
Lilholt, P.H. 43
Lillebo, B. 364
Lind, M. 945
Lindgren, H. 120
Lindsköld, L. 228
Line, M.B. 606
Linge, J. 160
Lippert, S. 862
Liu, H. 130
Liu, S. 130
Longerich, T. 867
López, D.M. 305, 694
Loškovska, S. 190
Lovis, C. 83, 185, 195, 320, 477, 940, 992
Luciano, J. 165
Ludwig, W. 18
Luukkonen, I. 422
Luzi, D. 834
Maas, R. 325
Mabotuwana, T. 634
Madden, R. 749
Mahnke, A. 387
Majeed, R.W. 170
Maknickas, R. 470
Malamateniou, F. 93
Maman, Y. 140
Mansmann, U. 857
Mantas, J. 243, 985
Marcilly, R. 412
Marschollek, M. 18
Marshall, M.S. 165
Martin, L. 734
Martin, M. 892
Martinovic, D. 58
Massari, P. 492
Mate, S. 502, 892
Mathews, A. 325
Mazzoleni, M.C. 238
McAllister, G. 218
McCullagh, P. 218
McGregor, C. 115
Mei, J. 130
Melby, L. 601
Mels, G. 185
Menárguez-Tortosa, M. 789
Merabti, T. 819
Mesika, Y. 3, 569
Meyer, R. 320, 554
Michel-Verkerke, M.B. 339
Milani, G. 907
Mimori, T. 255
Mn Ngouongo, S. 744
Moen, A. v
Moreau-Gaudry, A. 175
Morvan, F. 661
Moskal, L. 749
Mulas, F. 907
Müller, F. 325
Müller, H. 450
Münch, U. 315
Münchau, A. 465
Mykkänen, J. 98, 295
Nageba, E. 661
Nakić, D. 190
Nave, R. 233
Neagu, A. 882
Neubauer, K. 950
Neuvirth, H. 689
Névéol, A. 492
Niazkhani, Z. 392, 877
Nies, J. 135
Niinimäki, M. 450
Noack, T. 867
Noussa Yao, J. 512
Nuettgens, M. 649
Nuzzo, A. 907
Nyheim, B. 417
Nykänen, P. 208, 497
Oemig, F. 704
Okhmatovskaia, A. 145
Oliveira, G. 300
Oliveira, I.C. 310
Oliveira, M. 407
Oliven, A. 233
Padbury, J. 115
Pagani, M. 238
Pan, Y. 699
Pantazos, K. 862
Panzarasa, S. 779
Papakonstantinou, D. 93
Pareto, L. 676
Park, H.-A. 764
Park, H.K. 569
Pasche, E. 185, 477
Patapovas, A. 325
Pecoraro, F. 834
Peek, N. 88, 103, 180
Pein, W. 18
Pelayo, S. 412
Perinati, L. 887
Petkovic, M. 58, 621
Peute, L.W.P. 150, 925, 930
Pfeifer, F. 482
Pieber, T.R. 950
Pintér, B. 671
Piras, E.M. 63, 108
Pirnejad, H. 392, 877
Plank, J. 950
Pletneva, N. 73
Poulymenopoulou, M. 93
Priori, S. 887
Prokosch, H.-U. 315, 325, 502, 892
Punys, V. 470
Quaglini, S. 779
Quantin, C. 611
Quenel, P. 629
Raetzo, M.-A. 666
Rajoura, O.P. 960
Ralevich, V. 58
Rasmussen, A.R. 809
Reid, D. 33
Renly, S. 729
Riazanov, A. 145
Riedmann, D. 920
Riemer, J. 265
Ries, M. 892
Rigby, M. 208
Rinner, C. 369, 799
Rinott, R. 140
Rizzi, F. 689
Robel, L. 270
Robu, A. 882
Rodrigues, J.M. 749, 754, 844
Rodrigues, P.P. 275
Rognoni, C. 238
Roitman, H. 3, 569
Roode, J.D. 427
Rootjes, I. 290
Rose, G.W. 145
Rosenbeck, K. 809
Rosenkranz, C. 649
Rottscheit, C. 387
Roux, C. 611
Rozenblit, L. 955
Röhrig, R. 170
Rubin, Y. 140
Ruch, P. 185, 477
Ruotsalainen, P. 497
Rüping, S. 734
Rydmark, M. 676
Saboor, S. 369, 522, 799
Saddik, B. 349
Saint-Jalmes, H. 784
Salvi, E. 689
Samwald, M. 165
Sandblad, B. 260
Saranto, K. 422
Savolainen, S. 295
Schack, P. 18
Scharnweber, C. 18
Schaupp, L. 950
Schmidt-Richberg, A. 465
Schmuhl, H. 344
Schneider, B. 265
Schober, D. 185
Schubert, R. 18
Schuler, A. 482
Schulz, S. 589, 594
Schwenk, M. 892
Scott, P. 223, 709
Seddig, T. 594
Sedlmayr, M. 315
Segagni, D. 887
Seggewies, C. 892
Seim, A. 364
Senathirajah, Y. 280
Sengstag, T. 734
Seppälä, A. 497
Séroussi, B. 125, 512
Sfakianakis, S. 734
Shaban-Nejad, A. 145
Shabo, A. 689
Sharma, A.K. 960
Shillabeer, A. 8
Shine, A. 955
Silvent, A.-S. 175
Simon, A.C.R. 103
Simon, C. 125
Simon, M. 265
Simonet, M.-A. 73, 654
Skipenes, E. 417
Skorve, E. 330
Slaughter, L. 38
Slonim, N. 140
Smrz, P. 160
So, E.-Y. 764
Sojer, R. 325
Sorvari, H. 497
Soualmia, L.F. 492
Soyer, H. 897
Spat, S. 950
Staemmler, M. 537
Starren, J.B. 387
Stausberg, J. 744
Stegwee, R.A. 78
Stenico, M. 108
Stenzhorn, H. 165, 734
Stoicu-Tivadar, L. 882
Stoicu-Tivadar, V. 681
Storck, M. 213
Strasser, M. 482
Stroetmann, K. 432
Stürzle, M. 892
Sun, X. 699
Sunnerhagen, K.S. 676
Takahashi, R. 255
Talmon, J. 208
Tamblyn, R. 145
Tancredi, W. 228
Tarjányi, Z. 671
Tatara, N. 23
Tcharaktchiev, D. 527
Teisseire, M. 629
Tempero, E. 849
Teodoro, D. 185, 477
Thiel, R. 432
Thiemann, V. 902
Thirion, B. 492
Tibollo, V. 887
Timm, J. 729
Toft, E. 28
Tolar, M. 285
Topac, V. 681
Torgersson, O. 724
Toussaint, P. 359, 606, 980
Traver Salcedo, V. 455
Trinquart, L. 769
Trombert Paviot, B. 749, 754, 844
Tsiknakis, M. 734
Tsimerman, Y. 3, 569
Tuboly, G. 671
Tun, N.N. 804
Tuomainen, M. 98
Turlin, B. 517
Turner, P. 33
Tøndel, I. 606
Ückert, F. 213, 437, 644
Uribe, G. 305
van der Sijs, H. 290
van der Velden, M. 68
van der Zwan, E.P.A. 925
van Engen-Verheul, M. 88
Varpa, K. 579
Vassányi, I. 671
Vassilacopoulos, G. 93
Vassilakopoulou, P. 68
Végső, B. 671
Velupillai, S. 559
Venot, A. 125, 487
Viceconti, M. 432
Vieira-Marques, P. 300
Viitanen, J. 965
Vincendeau, S. 517
Vion, E. 270
Virkanen, H. 295
Vishnyakova, D. 477
Voccola, D. 955
Vossen, G. 902
Walters, E.H. 33
Wang, X. 699
Waring, J. 374
Warren, D. 634
Warren, J. 634, 849
Westbrook, J.I. 397, 402, 935
Wintell, M. 228
Winter, A. 542
Wipfli, R. 940
Wolf, K.-H. 460
Woodham, L. 203
Worden, R. 709
Wullich, B. 502, 892
Wyatt, J. 223
Xie, G. 130
Yang, H. 754
Yang, H.Y. 634, 849
Yasini, M. 487
Yazdi, S. 719
Yogev, S. 3
Yoshihara, H. 255
Zaiß, A. 594, 749
Zambelli, A. 887
Zanutto, A. 63
Zary, N. 203
Zeller, S. 676
Zhou, B. 130
Zikos, D. 985
Øyri, K. 38
Årsand, E. 23, 48