Designing Inclusive Interactions
P. Langdon • P. John Clarkson • P. Robinson
Editors

Designing Inclusive Interactions: Inclusive Interactions Between People and Products in Their Contexts of Use
Dr. Patrick Martin Langdon Department of Engineering University of Cambridge Trumpington Street Cambridge CB2 1PZ, UK
[email protected]
Prof. Peter John Clarkson Department of Engineering University of Cambridge Trumpington Street Cambridge CB2 1PZ, UK
[email protected]
Dr. Peter Robinson Computer Laboratory University of Cambridge 15 JJ Thomson Avenue Cambridge CB3 0FD, UK
[email protected]
ISBN 978-1-84996-165-3
e-ISBN 978-1-84996-166-0
DOI 10.1007/978-1-84996-166-0
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library
Library of Congress Control Number: 2010921207

© Springer-Verlag London Limited 2010

Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
This book contains the foremost papers from the Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT) held at Fitzwilliam College, Cambridge, in March 2010. This is the fifth of a series of workshops stemming from Inclusive Design that are held every two years in alternation with the Royal College of Art’s INCLUDE conference. The workshop theme, Designing Inclusive Interactions, responds to the recent changes in the research landscapes in the fields of Human Computer Interaction, Computer Science, and Healthcare as a result of new technology and innovation. As is evidenced by the themes of previous CWUAAT conferences, such as “Designing Inclusive Futures” (2008), Inclusive Design already brings together many of these disciplines within a context of ageing and disability. This has led us directly to the focus of this workshop on “Inclusive interactions between people and products in their contexts of use”. In the context of demographic changes leading to a greater number of older people, the general field of inclusive design research strives to relate the capabilities of the population to the design of products by better characterising the user-product relationship. By 2020, almost half the adult population in the UK will be over 50, with the over 80s being the most rapidly growing sector. Around 22% of the UK population were estimated to be disabled in 2003. Inclusive populations are known to contain a greater variation in sensory, cognitive and physical user capabilities. As a result, interaction design for future generations of products will need to be inclusive. Research into accessibility for interface design has always represented an unconventional, multi-disciplinary arena, indicating the necessity to bring together a number of pragmatic disciplines, such as assistive technology, mechanical and electrical systems design, computer interface design, and medical and rehabilitation practice. It has moved from isolated activities in disparate fields, such as engineering, occupational therapy and computer science, to the more interdisciplinary perspective evident today in areas such as healthcare and inclusive design. As a result of this, there is now a need for the transfer of knowledge and techniques from inclusive design research into the HCI community; and secondly,
there is also a requirement for research that can relate complex interactions with a product to inclusion. It is our view that combining the study of interaction with an inclusive approach to user-centred design will form a novel and useful interdisciplinary framework for investigating and improving today's product designs.
The papers that have been included were selected by reference to the peer assessments of an international panel of reviewers from many key disciplines such as computer science, assistive technology, engineering and product design. This panel and the chapters from the final contributors represent a sample of leading national and international research in the fields of inclusive design, universal access, and assistive and rehabilitative technology. As in 2006 and 2008, there have also been significant trans-disciplinary contributions from researchers in architecture and healthcare reflecting the need to understand the new developments in the wider social and economic context of inclusive and assistive technology design.
This book is divided into five areas:

I. Understanding Users for Inclusive Design concerns research that addresses the nature of inclusive performance, such as the effect of environmental context on capability in interaction;
II. Measuring Inclusion focuses on the quantification of impaired capability and tools and methods to measure inclusion;
III. Inclusive Interaction looks at research that brings together interface design and theory with inclusive capability requirements;
IV. Assistive Technology is about the relationship of inclusive design to special purpose design and adaptations for specific impairments;
V. Inclusion and Healthcare looks at healthcare research in areas that encroach into design for greater inclusion.
In the tradition of CWUAAT, we have solicited and accepted contributions over a wide range of topics, both within individual themes and also across the workshop's scope. We anticipate that this will encourage inter-disciplinary research leading to better designs. It is expected that this will benefit industry, government and end-users, thereby effectively reducing exclusion and difficulty in the workplace, at home and at leisure.
As in previous years but with additional emphasis, we would like to thank all those authors, reviewers and administrators who have contributed to the CWUAAT 2010 International Workshop and to the preparation of this book. Many thanks are due also to the non-contributing members of the Programme Committee. Finally, thanks are particularly due to Mari Huhtala and Suzanne Williams, who both play a key role in bringing the resulting publication to fruition between final submission and the Workshop itself. We would also like to thank the staff at Fitzwilliam College.

Pat Langdon, John Clarkson and Peter Robinson
The CWUAAT Editorial Committee
University of Cambridge
March 2010
Contents
List of Contributors …………………………………………………………....xi
Part I Understanding Users for Inclusive Design

1. The Effects of Hand Strength on Pointing Performance
   P. Biswas and P. Robinson ............................................. 3
2. Harnessing Different Dimensions of Space: The Built Environment in Auti-biographies
   S. Baumers and A. Heylighen ........................................... 13
3. Have I Just Pressed Something? The Effects of Everyday Cold Temperatures on Dexterity
   E. Elton, D. Dumolo and C. Nicolle .................................... 25
4. Understanding the Co-occurrence of Ability Loss
   S.D. Waller, E.Y. Williams, P.M. Langdon and P.J. Clarkson ............ 35
5. Accessibility is in the Palm of Your Hand
   E.M. Rodriguez-Falcon and A. Yoxall ................................... 45
Part II Measuring Inclusion

6. Quantifying Exclusion for Tasks Related to Product Interaction
   S.D. Waller, E.Y. Williams, P.M. Langdon and P.J. Clarkson ............ 57
7. Investigating the Accessibility of State Government Web Sites in Maryland
   J. Lazar, P. Beavan, J. Brown, D. Coffey, B. Nolf, R. Poole, R. Turk, V. Waith, T. Wall, K. Weber and B. Wenger ... 69
8. Developing User Data Tools: Challenges and Opportunities
   F. Nickpour and H. Dong ............................................... 79
9. User-pack Interaction: Insights for Designing Inclusive Child-resistant Packaging
   J. de la Fuente and L. Bix ............................................ 89
10. A Colour Contrast Assessment System: Design for People with Visual Impairment
   H. Dalke, G.J. Conduit, B.D. Conduit, R.M. Cooper, A. Corso and D.F. Wyatt ... 101
Part III Inclusive Interaction

11. Evaluating the Cluster Scanning System
   P. Biswas and P. Robinson ............................................. 113
12. Facets of Prior Experience and Their Impact on Product Usability for Older Users
   J. Hurtienne, A-M. Horn and P.M. Langdon .............................. 123
13. Investigating Designers' Cognitive Representations for Inclusive Interaction Between Products and Users
   A. Mieczakowski, P.M. Langdon and P.J. Clarkson ....................... 133
14. Prior Experience and Learning: Generational Effects upon Interaction
   C. Wilkinson, P.M. Langdon and P.J. Clarkson .......................... 145
Part IV Assistive Technology

15. Expressing Through Digital Photographs: An Assistive Tool for Persons with Aphasia
   A. Al Mahmud, Y. Limpens and J.B. Martens ............................. 157
16. An Investigation into Stroke Patients' Utilisation of Feedback from Computer-based Technology
   J. Parker, G.A. Mountain and J. Hammerton ............................. 167
17. How to Make a Telephone Call When You Cannot Operate a Telephone
   T. Felzer, P. Beckerle and S. Rinderknecht ............................ 177
18. Husband, Daughter, Son and Postman, Hot-water, Knife and Towel: Assistive Strategies for Jar Opening
   A. Yoxall, J. Langley, C. Musselwhite, E.M. Rodriguez-Falcon and J. Rowson ... 187
19. Email Usability for Blind Users
   B. Wentz, H. Hochheiser and J. Lazar .................................. 197
Part V Inclusion and Healthcare

20. The Involvement of Primary Schools in the Design of Healthcare Technology for Children
   M. Allsop, R. Holt, J. Gallagher, M. Levesley and B. Bhakta ........... 209
21. Gaming and Social Interactions in the Rehabilitation of Brain Injuries: A Pilot Study with the Nintendo Wii Console
   R.C.V. Loureiro, D. Valentine, B. Lamperd, C. Collin and W.S. Harwin .. 219
22. Promoting Behaviour Change in Long Term Conditions Using a Self-management Platform
   P.J. McCullagh, C.D. Nugent, H. Zheng, W.P. Burns, R.J. Davies, N.D. Black, P. Wright, M.S. Hawley, C. Eccleston, S.J. Mawson and G.A. Mountain ... 229

Index of Contributors ..................................................... 239
List of Contributors
Allsop M.J., Institute of Engineering Systems and Design, School of Mechanical Engineering, University of Leeds, Leeds, UK Al Mahmud A., Department of Industrial Design, Eindhoven University of Technology (TU/e), Netherlands Baumers S., Department of Architecture, Urbanism and Planning, Katholieke Universiteit Leuven, Leuven, Belgium Beavan P., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Beckerle P., Institute for Mechatronic Systems in Mechanical Engineering, TU Darmstadt, Darmstadt, Germany Biswas P., Computer Laboratory, University of Cambridge, Cambridge, UK Bix L., School of Packaging, Michigan State University, MI, US Bhakta B., Academic Department of Rehabilitation Medicine, Faculty of Medicine and Health, University of Leeds, Leeds, UK Black N.D., Pro-vice Chancellor for Research and Innovation, University of Ulster, UK Brown J., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Burns W.P., School of Computing and Mathematics, University of Ulster, UK Clarkson P.J., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Coffey D., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Collin C., Royal Berkshire NHS Foundation Trust, Reading, UK Conduit B.D., Department of Materials Science and Metallurgy, University of Cambridge, Cambridge, UK
Conduit G.J., Department of Condensed Matter Physics, Weizmann Institute of Science, Ben Gurion University, Beer Sheva, Israel Cooper R.M., Selwyn College, Cambridge, UK Corso A., Design Research Centre, Faculty of Art, Design and Architecture, Kingston University, UK Dalke H., Design Research Centre, Faculty of Art, Design and Architecture, Kingston University, UK Davies R.J., School of Computing and Mathematics, University of Ulster, UK De la Fuente J., School of Packaging, Michigan State University, MI, US Dong H., Inclusive Design Research Group, School of Engineering and Design, Brunel University, West London, UK Dumolo D., Department of Human Sciences, Loughborough University, Loughborough, UK Eccleston C., Bath Pain Management Unit, University of Bath, UK Elton E., Ergonomics and Safety Research Institute, Loughborough University, Loughborough, UK Felzer T., Institute for Mechatronic Systems in Mechanical Engineering, TU Darmstadt, Darmstadt, Germany Gallagher J.F., Institute of Engineering Systems and Design, School of Mechanical Engineering, University of Leeds, Leeds, UK Hammerton J., Faculty of Health and Wellbeing, Sheffield Hallam University, Sheffield, UK Harwin W.S., School of Systems Engineering, University of Reading, Reading, UK Hawley M.S., Health Services Research, University of Sheffield, UK Heylighen A., Department of Architecture, Urbanism and Planning, Katholieke Universiteit Leuven, Leuven, Belgium Hochheiser, J., Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, US Holt R.J., Institute of Engineering Systems and Design, School of Mechanical Engineering, University of Leeds, Leeds, UK Horn A-M., Department of Business and Economics, Freie University Berlin, Berlin, Germany Hurtienne J., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Lamperd B., Royal Berkshire NHS Foundation Trust, Reading, UK Langdon P.M., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Langley J., Art and Design Research Centre, Sheffield Hallam University, Sheffield, UK Lazar J., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Levesley M.C., Institute of Engineering Systems and Design, School of Mechanical Engineering, University of Leeds, Leeds, UK Limpens Y., Department of Industrial Design, Eindhoven University of Technology (TU/e), Netherlands
Loureiro R.C.V., School of Systems Engineering, University of Reading, Reading, UK Martens J.B., Department of Industrial Design, Eindhoven University of Technology (TU/e), Netherlands Mawson S.J., Centre for Health and Social Care Research, Sheffield Hallam University, UK McCullagh P.J., School of Computing and Mathematics, University of Ulster, UK Mieczakowski A., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Mountain G.A., School of Health and Related Research, University of Sheffield, Sheffield, UK Musslewhite C., Centre for Transport and Society, University of the West of England, Bristol, UK Nickpour F., Inclusive Design Research Group, School of Engineering and Design, Brunel University, West London, UK Nicolle C., Ergonomics and Safety Research Institute, Loughborough University, Loughborough, UK Nolf B., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Nugent C.D., School of Computing and Mathematics, University of Ulster, UK Parker J., Centre for Health and Social Care Research, Faculty of Health and Wellbeing, Sheffield Hallam University, Sheffield, UK Poole R., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Rinderknecht S., Institute for Mechatronic Systems in Mechanical Engineering, TU Darmstadt, Darmstadt, Germany Robinson P., Computer Laboratory, University of Cambridge, Cambridge, UK Rodriguez-Falcon E.M., Department of Mechanical Engineering, The University of Sheffield, Sheffield, UK Rowson J., Department of Mechanical Engineering, The University of Sheffield, Sheffield, UK Turk R., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Valentine D., Royal Berkshire NHS Foundation Trust, Reading, UK Waith V., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Wall T., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Waller S.D., Engineering Design Centre, Department of Engineering, Cambridge University, Cambridge, UK Weber K., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Wenger B., Department of Computer and Information Sciences, Universal Usability Laboratory, Towson University, US Wentz B., Department of Computer and Information Sciences, Center for Applied Information Technology and Universal Usability Laboratory, Towson University, Towson, MD, US
Wright P., Cultural, Communications and Computing Research Institute, Sheffield Hallam University, UK Wilkinson C., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Williams E.Y., Engineering Design Centre, Department of Engineering, Cambridge University, Cambridge, UK Wyatt D.F., Engineering Design Centre, Department of Engineering, University of Cambridge, Cambridge, UK Yoxall A., Art and Design Research Centre, Sheffield Hallam University, Sheffield, UK Zheng H., School of Computing and Mathematics, University of Ulster, UK
Part I
Understanding Users for Inclusive Design
Chapter 1
The Effects of Hand Strength on Pointing Performance

P. Biswas and P. Robinson
1.1 Introduction

Pointing tasks form a significant part of human-computer interaction in graphical user interfaces. Fitts' Law (Fitts, 1954) and its variations (Mackenzie, 2003) are widely used to model pointing as a sequence of rapid aiming movements, especially for able-bodied users. Fitts' Law predicts the movement time as a function of the width and distance to the target. The law has been found to be very robust and works in many different situations (even in space and under water). However, the application of Fitts' Law for people with motor impairment is less clear. We have also investigated how the pointing performance of people with motor impairment differs from that of their able-bodied counterparts. In particular, we have studied how physical strength affects the pointing performance of people with and without motor impairment for different input devices.
We have used this study to develop a simulator to help with the design and evaluation of assistive interfaces (Biswas and Robinson, 2008b). The simulator embodies both the internal state of a computer application and also the perceptual, cognitive and motor processes of its user. It takes a task definition and the locations of different objects in an interface as input. It then predicts possible eye movements and cursor paths on the screen and uses these to predict task completion times. We hope this study will be helpful in understanding and analysing the interaction patterns of people with motor impairment and in designing better assistive interfaces for them. It will also help in explaining motor action and developing better motor-behaviour models for motor impaired users.
In this study, we have measured the physical strength of users by evaluating their hand strength in terms of flexibility and maximum exerted force. It has already been found that the active range of motion (ROM) of the wrist is significantly correlated with movement time in a Fitts' Law task for children with spasticity (Smits-Engelsman et al., 2007). Hand evaluation devices are cheap, easy to operate and have good test-retest reliability (Mathiowetz et al., 1984), so they are reliable and useful tools for measuring physical strength, making these results useful in practice.
Our study consisted of the following three experiments:
1. the first experiment involved pointing tasks using a mouse and was undertaken by both motor impaired and able-bodied participants;
2. the second experiment involved pointing tasks using single switch scanning techniques and was undertaken by both motor impaired and able-bodied participants;
3. the third experiment involved two dimensional Fitts' Law pointing tasks using a mouse, and was undertaken only by able-bodied participants.

The remainder of this paper presents the experiments in more detail.
1.2 Experiment One: Pointing Tasks

1.2.1 Procedure, Material and Participants

Our study consisted of pointing tasks. A sample screenshot of the task is shown in Figure 1.1. We followed the description of the multiple tapping tasks in ISO 9241 part 9 (ISO, 2000). In this task the pointer was initially located at the middle of the screen. The participants had to move it towards a target (one of the red dots, which appear light grey in monochrome), and click on it. This process was repeated for all the targets. There were eight targets on the screen and each participant performed the test twice (except participant P2, who withdrew after completing the first test). The distances to the targets ranged from 200 to 600 pixels while target widths were randomly selected as an integer between 16 and 48 pixels.
Figure 1.1. Screenshot of the experiment for mouse interface
We used a standard optical mouse and an Acer Aspire 1640 Laptop with a 15.5" monitor having 1280×800 pixel resolution. We also used the same seating arrangement (same table height and distance from table) for all participants.
We measured the following six variables for hand strength evaluation. Each was measured three times and we took the average. We evaluated only the dominant hand (the hand participants used to operate the mouse). Photographs of the measurement technique can be found in Kaplan (2006). Grip strength measures how much force a person can generate by gripping with the hand. We
measured it using a mechanical dynamometer. Tip pinch strength measures the maximum force generated by a person squeezing something between the tips of his thumb and index finger. We measured it using a mechanical dynamometer. The following ranges of motion are defined with respect to the standard anatomical position (Kaplan, 2006). Radial deviation is the motion that rotates the wrist away from the midline of the body when the person is standing in the standard anatomical position (Kaplan, 2006). When the hand is placed over a table with palm facing down, this motion rotates the hand about the wrist towards the thumb. We measured the maximum radial deviation using a goniometer. Ulnar deviation is the motion that rotates the wrist towards the midline of the body when the person is standing in the standard anatomical position. When the hand is placed over a table with palm facing down, this motion rotates the hand about the wrist towards the little finger. We measured it with the goniometer. Pronation is the rotation of the forearm so that the palm moves from a facing up position to a facing down position. We measured it using a wrist-inclinometer. Supination is the opposite of pronation, the rotation of the forearm so that the palm moves from a facing down position to a facing up position. We measured it with the wrist-inclinometer. We collected data from 10 motor impaired and six able-bodied participants (Table 1.1, next page). The motor impaired participants were recruited from a local centre, which works on treatment and rehabilitation of disabled people and they volunteered for the study. To generalise the study, we selected participants with both hypokinetic (e.g. restricted movement, participants P1, P3, P4 etc.) and hyperkinetic (e.g. uncontrolled movement/tremor, participants P5, P6 etc.) movement disorders (Flowers, 1976). All motor impaired participants used a computer at least once each week. Able-bodied participants were students of our university and expert computer users.
1.2.2 Results

We found that the movement time significantly correlates (ρ = 0.57, p < 0.001) with the number of pauses. We defined a pause as an instance where the pointer does not move for more than 100 msec. We correlated the average number of pauses per pointing task with the hand strength metrics. Figure 1.2 shows the graph of the average number of pauses per pointing task with respect to grip strength. We found that some users did not have any range of motion in their wrist, though they managed to move the mouse to perform the pointing tasks correctly. We also found that the natural logarithm of grip strength significantly correlates with the mean (ρ = -0.72, p < 0.001) and standard deviation (ρ = -0.53, p < 0.05) of the number of pauses per pointing task. We did not find any correlation between the movement time and the distance, width or Fitts' Law index of difficulty (ID) (Fitts, 1954) of the targets for motor impaired users. This may be due to the presence of physical impairment and the small number of pointing tasks (only 16) performed by the participants. We also did not find any significant correlations involving ranges of motion. More details about these results can be found in a separate paper (Biswas and Robinson, 2009).
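As an illustration of how such a pause count might be derived from logged cursor data, the following Python sketch applies the 100 msec criterion to a list of (time, x, y) samples. The data format and function name are our own assumptions for illustration, not the instrumentation actually used in the study.

    # Illustrative sketch (not the study's analysis code): count pauses in a logged
    # cursor trajectory, where a pause is an interval of more than 100 msec in which
    # the pointer does not move.
    def count_pauses(trajectory, threshold_ms=100):
        """trajectory: ordered list of (time_ms, x, y) cursor samples."""
        pauses = 0
        still_since = None          # start time of the current stationary run
        counted = False             # has the current run already been counted?
        prev = trajectory[0]
        for t, x, y in trajectory[1:]:
            if (x, y) == (prev[1], prev[2]):        # no movement since last sample
                if still_since is None:
                    still_since, counted = prev[0], False
                if not counted and t - still_since > threshold_ms:
                    pauses += 1
                    counted = True
            else:
                still_since = None                  # movement resumed
            prev = (t, x, y)
        return pauses

    # Example: one stationary run of 200 msec and one of 80 msec gives one pause.
    samples = [(0, 10, 10), (50, 10, 10), (200, 10, 10), (250, 12, 11), (330, 12, 11)]
    print(count_pauses(samples))    # prints 1

Averaging this count over the 16 pointing tasks of a participant gives the per-task measure correlated with grip strength above.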
Table 1.1. List of participants

Participant  Age    Gender  Impairment
C1           30     M       Able-bodied
C2           29     M       Able-bodied
C3           28     M       Able-bodied
C4           25     M       Able-bodied
C5           29     M       Able-bodied
C6           27     F       Able-bodied
P1           30     M       Did not mention disease; restricted hand movement; no tremor
P2           43     M       Cerebral Palsy from birth; restricted hand movement; no tremor
P3           25-45  F       Cerebral Palsy; reduced manual dexterity; wheelchair user
P4           30     M       Cerebral Palsy; reduced manual dexterity; also some tremor in hand; wheelchair user
P5           62     M       One handed (dominant hand); the other hand is paralysed
P6           44     M       Dystonia; cannot speak; cannot move fingers; wheelchair user
P7           46     F       Left side (non-dominant) paralysed after a stroke in 1973; also has tremor
P8           >45    F       Spina Bifida/Hydrocephalus; wheelchair user
P9           43     F       Cerebral attack; significant tremor in whole upper body; fingers always remain folded
P10          >45    M       Did not mention disease; difficulty in gripping things; no tremor
Figure 1.2. Average number of pauses per pointing task vs. grip strength
We divided the whole movement path into three phases (Biswas and Robinson, 2008b, 2009) and observed how hand strength affects performance in the initial, main movement and homing phases. We found that grip strength significantly correlates with the average number of pauses near the source (ρ = -0.61, p < 0.01) and near the
target (ρ = -0.78, p < 0.001). We also found that the mean and standard deviation of the velocity of movement were significantly correlated with grip strength (Figure 1.3, ρ = 0.82, p < 0.001 for mean and ρ = 0.81, p < 0.001 for standard deviation).

Figure 1.3. Velocity of movement vs. grip strength
1.3 Experiment Two: Scanning Study

Many physically challenged users interact with a computer through one or two switches with the help of a scanning mechanism. Scanning is the technique of successively highlighting items on a computer screen and pressing a switch when the desired item is highlighted. In this study we used two scanning systems. A block scanning system iteratively segments the screen into equally sized sub-areas. The user has to select a sub-area that contains the intended target. The segmentation process iterates until the sub-area contains a single target. A cluster scanning system iteratively divides the screen into several clusters of targets based on their locations. The user has to select the appropriate cluster that contains the intended target. The clustering process iterates until the cluster only contains a single target. Details of these scanning systems can be found in our previous paper (Biswas and Robinson, 2008a).
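To make the narrowing-down process concrete, the short Python sketch below implements a simplified scan: the remaining candidate targets are repeatedly split into two groups, the user's switch press keeps the group containing the intended target, and scanning stops when a single target remains. The splitting rule (sorting by position and halving) and the function names are illustrative assumptions on our part; they are not the block or cluster scanning implementations described in Biswas and Robinson (2008a).

    # Simplified single-switch scanning sketch (illustrative only).
    # Targets are (x, y) screen positions; each round the remaining candidates are
    # split into two groups and the group containing the intended target is kept.
    def scan_for_target(targets, intended, presses=0):
        if len(targets) == 1:
            return targets[0], presses              # target isolated: select it
        targets = sorted(targets)                   # order by x, then y
        mid = len(targets) // 2
        left, right = targets[:mid], targets[mid:]  # two sub-areas to highlight
        chosen = left if intended in left else right
        return scan_for_target(chosen, intended, presses + 1)

    buttons = [(100, 80), (300, 80), (500, 80), (100, 240), (300, 240), (500, 240)]
    target, presses = scan_for_target(buttons, (300, 240))
    print(target, presses)      # (300, 240) isolated after 2 switch presses

The minimum number of scan steps implied by such a scheme, multiplied by the scan delay, is one way to think about the optimal time that the efficiency measure in Section 1.3.2 compares against the time participants actually took.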
1.3.1 Procedure, Material and Participants

In this experiment, the participants were instructed to press a set of buttons arranged on a screen (Figure 1.4) in a particular sequence. All of the buttons were coloured grey except the next target, which was red. After selecting the target, its colour changed to grey and another target became red. The same task was repeated for both the scanning systems. We recorded the cursor trajectories, target height, width, and task completion time. For internal validity of the experiment, the scan delay was kept constant at two seconds for all motor impaired participants and at one second for the control group since the reaction times of motor impaired users
were longer. These values were selected to exceed their maximum measured reaction time. All participants were trained adequately with the scanning systems before undertaking the experiment.
Figure 1.4. Screenshot of the experiment
We used a push button switch (The Super-Switch, 2007) and an Acer Aspire 1640 Laptop with a 15.5” monitor having 1280×800 pixel resolution. We used the same seating arrangement for all participants. We measured the same six variables for hand strength evaluation as in Experiment One. We collected data from eight motor impaired (all participants except P3 and P9 in Table 1.1) and eight able-bodied participants (five female, three male, average age 28.75). The motor impaired participants were recruited from a local centre and they volunteered for the study. All motor impaired participants used a computer at least once each week. Able-bodied participants were students of our university and expert computer users. None of the participants had used the scanning systems before.
1.3.2 Results

We measured the following three variables to investigate the scanning systems:

• Number of missed clicks: We counted the number of times the participants wrongly pressed the switch.
• Idle count: The scanning systems periodically highlight the buttons. This variable measures the number of cycles when the participants did not provide any input, though they were expected to do so.
• Efficiency: The scanning systems require a minimum time to complete any task, which depends on the particular scanning system and not on the performance of the user. We calculated the efficiency as the ratio OptimalTime/ActualTime. An efficiency of 100% indicates optimal performance, 50% indicates taking twice the minimal time and 0% indicates failure to complete the task.

Table 1.2 shows the correlation coefficients of these variables with the hand evaluation metrics. The only significant effect is a correlation between the number
of missed clicks in the cluster scanning system and grip strength; there was a similar, but weaker, effect in the block scanning system. It seems that hand strength does not affect performance of users with the scanning systems. An equal variance t-test did not find any significant difference between the performance of motor impaired and able-bodied users at the p < 0.05 level. We also failed to find any effect of target size (height and width) on the task completion time, which is not surprising as the scanning systems did not depend on target size.

Table 1.2. Correlation coefficients for the scanning systems (* significant at p < 0.05)

                           Cluster Scanning System                Block Scanning System
                           Missed Click  Idle Count  Efficiency   Missed Click  Idle Count  Efficiency
Correlations  GS            0.580*       -0.191       0.168       -0.429        -0.331       0.283
              TPS          -0.374        -0.105       0.110       -0.271        -0.153       0.093
              ROM Wrist    -0.414        -0.154       0.189       -0.127        -0.120       0.068
              ROM Forearm   0.000         0.106      -0.079       -0.268        -0.225       0.076
Significance  GS            0.018         0.478       0.534        0.097         0.210       0.289
              TPS           0.153         0.699       0.686        0.310         0.572       0.731
              ROM Wrist     0.111         0.569       0.484        0.639         0.659       0.803
              ROM Forearm   1.000         0.695       0.770        0.315         0.401       0.778
1.4 Experiment Three: Fitts' Law Study

After analysing the effect of hand strength for motor impaired users, we also investigated how hand strength affects performance of able-bodied users. This helps to compare and contrast the pointing patterns of motor impaired users with those of their able-bodied counterparts.
1.4.1 Procedure, Material and Participants

Fitts' Law provides a robust and accurate model for rapid aiming movements of able-bodied users. So we conducted a 2-dimensional Fitts' Law task. We used 26 different combinations of target amplitude (A, ranging from 30 to 700 pixels) and
target width (W, ranging from 16 to 48 pixels). The resulting index of difficulty (ID) ranged from 2 to 5. Each participant performed 450 pointing tasks. We used a standard optical mouse and an Acer Aspire 1640 Laptop with a 15.5" monitor having 1280×800 pixel resolution. We also used the same seating arrangement for all participants. We measured the same six variables for hand strength evaluation as in Experiment One. We collected data from 14 able-bodied users (nine male, five female, and age range 22 to 50 with average age of 29.3). All participants were expert computer users.
1.4.2 Results

The correlation coefficients between index of difficulty (ID) and movement time range from 0.73 to 0.95 with an average value of 0.85, which conforms to Fitts' Law. We compared the hand evaluation metrics with the Fitts' Law coefficients a and b, where MT = a + b log2(A/W + 1), and with the Index of Performance (IP = average ID / average MT). We found that IP is significantly correlated with the grip strength and tip pinch strength (ρ = 0.57, p < 0.05 for grip strength, ρ = 0.72, p < 0.005 for tip pinch strength, Figures 1.5 and 1.6 respectively). The parameter b significantly correlates with tip pinch strength (ρ = 0.65, p < 0.01, Figure 1.7). We did not find any other significant correlation between IP, a, b and any other hand evaluation metrics.
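As a purely illustrative sketch of this kind of analysis, the Python fragment below fits the coefficients a and b by least squares on (ID, MT) pairs and computes the index of performance as the average ID divided by the average MT. The example amplitude, width and movement-time values are invented for demonstration; they are not data from the experiment.

    import numpy as np

    # Illustrative Fitts' Law fit: MT = a + b * ID, with ID = log2(A/W + 1).
    # Amplitudes and widths in pixels, movement times in msec (invented values).
    A = np.array([200.0, 400.0, 600.0, 700.0])
    W = np.array([48.0, 32.0, 24.0, 16.0])
    MT = np.array([700.0, 950.0, 1200.0, 1400.0])

    ID = np.log2(A / W + 1.0)
    b, a = np.polyfit(ID, MT, 1)        # slope b (msec/bit), intercept a (msec)
    IP = ID.mean() / MT.mean()          # index of performance, bits per msec

    print(f"a = {a:.0f} msec, b = {b:.0f} msec/bit, IP = {IP * 1000:.2f} bits/sec")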
Figure 1.5. Index of performance vs. grip strength

Figure 1.6. Index of performance vs. tip pinch strength

Figure 1.7. Parameter b vs. tip pinch strength
1.5 Discussion

For able-bodied users, pointing performance is generally analysed in terms of Fitts' Law. Fitts' Law can be applied to rapid aiming movements in many different contexts, but a proper explanation of this law is still unclear. Crossman and Goodeve pioneered an early but limited mathematical explanation (Rosenbaum, 1991). Meyer and colleagues gave a generalised model of rapid aiming movements in which Fitts' Law comes as a special case; however, alternative explanations are also available (e.g. the Mass Spring model) (Rosenbaum, 1991). However, Fitts' Law does not account for the users' physical abilities in predicting movement time. This seems reasonable for able-bodied users.
Our analysis indicates that people having higher hand strength also have greater control in hand movement and can perform pointing faster. The positive correlation between the velocity of movement and grip strength also supports this claim. As motor impairment reduces the strength of a hand, motor impaired people lose control of hand movement, so the number of pauses near the source and target is significantly affected by grip strength. The logarithmic relation between grip strength and number of pauses indicates that there is a minimum amount of grip strength (about 20 kg) required to move the mouse without pausing more than twice. This threshold of 20 kg can be used to determine the type of input device suitable for a user, along with other factors like preference, expertise etc. Our analysis also showed that flexibility of motion (as measured by ROM of wrist or forearm) is not as important as strength of hand (as measured by grip strength).
We found that hand strength affects pointing performance of able-bodied users, too. The positive correlation between index of performance and hand strength shows that people with greater hand strength perform pointing faster. The correlation between the constant term b and tip pinch strength indicates a difference in movement patterns among people with different hand strengths. As the constant b indicates the effect of index of difficulty (ID) on the movement time, perhaps the movement pattern of people with higher hand strength mainly consists of an initial ballistic phase and does not have a long homing phase, since time to complete the homing phase should depend more on the target characteristics. The opposite holds true for people with less hand strength.
As the homing phase requires more control in hand movement, the negative correlation between b and hand strength also indicates that people with higher hand strength have greater control in hand movement.
We also failed to find any effect of hand strength on pointing performance while participants used the scanning systems. There are two possible explanations:

• the switch used in scanning only requires a gentle push to operate and the hand strength of motor impaired users is sufficient to operate the switch;
• the scanning software does the navigation itself and the users need not move their hand to move the pointer.
This result with the scanning system also shows that an appropriate choice of assistive technology can make interaction independent of the physical strength of users.
1.6 References

Biswas P, Robinson P (2008a) A new screen scanning system based on clustering screen objects. Journal of Assistive Technologies, 2(3): 24–31
Biswas P, Robinson P (2008b) Automatic evaluation of assistive interfaces. In: Proceedings of International Conference on Intelligent User Interfaces (IUI'08), Canary Islands, Spain
Fitts PM (1954) The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47: 381–391
Flowers KA (1976) Visual 'closed-loop' and 'open-loop' characteristics of voluntary movement in patients with parkinsonism and intention tremor. Brain, 99: 269–310
ISO (2000) Ergonomic requirements for office work with visual display terminals (VDTs): requirements for non-keyboard input devices. ISO 9241-9:2000. International Organization for Standardization, Geneva, Switzerland
Kaplan RJ (2006) Physical medicine and rehabilitation review, 2nd edn. McGraw-Hill
Mackenzie IS (2003) Motor behaviour models for human-computer interaction. In: Carroll JM (ed.) HCI models, theories, and frameworks: toward a multidisciplinary science. Morgan Kaufmann, San Francisco, CA, US
Mathiowetz V, Weber K, Volland G, Kashman N (1984) Reliability and validity of hand strength evaluation. Journal of Hand Surgery, 9: 18–26
Rosenbaum DA (1991) Human motor control. Academic Press Inc., US
Smits-Engelsman BCM, Rameckers EAA, Duysens J (2007) Children with congenital spastic hemiplegia obey Fitts' Law in a visually guided tapping task. Journal of Experimental Brain Research, 177(4): 431–439
The Super-Switch (2007) Available at: http://rjcooper.com/super-switch/index.html (Accessed on 19 November 2009)
Chapter 2
Harnessing Different Dimensions of Space: The Built Environment in Auti-biographies

S. Baumers and A. Heylighen
2.1 Introduction

An understanding of diversity is a key principle in the development of theories, tools and techniques of design for inclusion. In assembling new perspectives for inclusive design, we want to gain a more accurate insight into the diversity of people's interaction with the designed environment. People with autism spectrum disorders, for example, due to their particular way of thinking, make sense of their surrounding world in a unique way. Starting from this notion, our research questions the relevance to them of the meaning attributed to the built environment in our society, by studying the interaction between the world of experience of people with autism and the design of the built environment. In this paper, we investigate the way people with autism talk about space and the importance they attach to their physical environment, as reflected in stories and autobiographies of people with autism themselves—in short, auti-biographies. By analysing their own descriptions, we try to gain more insight into an autistic way of thinking and acting in relation to the built environment.
2.2 Architectural Design and Autism

In our society, designers have a significant impact on daily life. They commit themselves to certain ideas, which take shape in the artefacts they conceive. In architecture also, a design embodies the designers' line of thought. The tangible space, which is the materialisation of an architectural design, thus carries a whole ideological background. However, most people are only exposed to the concrete realisation of the architectural design, to the physical space surrounding them. Yet the way a person deals with this surrounding space and brings it into use does not only depend on the design of the physical space, infused with the ideas of the architect; it is to a large extent based on the personal interpretation this person
attaches to the physical environment. Jakob von Uexküll (1934) elaborates how the same physical entity can play different roles in the worlds—die Umwelten—of different people. Following this line of thought, Marta Dischinger (2006) points out that in wayfinding different features may become landmarks, depending on the person's interests, attention and perceptual possibilities. The specific disposition of each person can lead to a different interpretation of the environment, which can develop into another use of space. Whatever the disposition from which an architect designs a certain environment, eventually people interpret the built environment from a personal disposition and they use their surrounding space according to this particular interpretation.
Starting from a wide diversity of people, this paper focuses on the interpretation of the environment and the use of space by people with Autism Spectrum Disorders (ASD), namely autism. Up till now, ASD is still diagnosed on the basis of a characteristic behaviour, known as restricted and repetitive actions (Wing, 1997; Noens and van IJzendoorn, 2007). Nevertheless, the true essence of autism, underlying this distinctive behaviour, is thought to be situated in a characteristic difference in cognitive functioning (Noens and van Berckelaer-Onnes, 2004). Their specific 'distinct' way of perceiving and information processing causes people with ASD to make sense of their surrounding world in a unique way (de Roeck, 1997). Both the characteristic autistic behaviour and the particularly different way of sense-making of people with autism influence their spatial experience and their interaction with space. This is a central argument in exploring the confrontation between people with autism spectrum disorders and their built environment.
2.3 Through the Words of Themselves

In trying to gain insight into the way people with ASD interpret their built environment and how they deal with space, we do not intend to analyse various scientific points of view or general considerations concerning the varied disorders of the autism spectrum; nor do we intend to investigate the behavioural 'anomalies', interpreted according to our own standards. Instead, our starting point is the way people with autism—themselves—think about their interaction with space, getting to the bottom of their own reflections on it. Since Susan Reynolds Whyte established that people with disabilities experience a general tension between their personal values—shaped by their personal experiences with disability—and the general values of larger society (Albrecht, 2003), we want to emphasise these personal values, drawing attention to their personal verbal accounts of the physical environment, and to question what they can impart to the general values.
However, the way to gain access to the range of thought of people with autism is not obvious. Even though ASD includes a wide spectrum of disorders, showing a large range of capabilities, impairments in social interaction and communication are common characteristics of the autistic spectrum (Wing, 1997; Noens and van Berckelaer-Onnes, 2004). Nevertheless, a few people with autism find a way to pour out their thoughts and feelings by writing them down (Klonovsky, 1993). Some of them write in detail characteristic memories of their life (e.g. Gunilla
Gerland, Daniel Tammet, Liane Holliday Willey), while others write their experiences and feelings in letters or diaries (e.g. Landschip (Landschip is a Dutch pseudonym, which can be translated as ‘Landship’), Birger Sellin, Dietmar Zöller) or they make their thoughts known in a leaflet (e.g. Lourens Bijlsma, Brad Rand, Jim Sinclair). The written reflections on their own life allow us to catch a glimpse of an autistic way of thinking. In this paper, those published autobiographies of people with autism—so-called auti-biographies—are used as a particular source to analyse the importance of the physical environment, the interpretation of this environment and the corresponding way of dealing with the built environment by people with autism. To start with, we will discuss the role of physical space in a number of auti-biographies, selected on the dual conditions of having been written (1) by people with autism and (2) about experiences of their own lives (Zöller, 1989; Sellin, 1993; Sinclair, 1993; van Dalen, 1994; Gerland, 1996; Rand, 1997; Willey, 1999; Bijlsma, 2000; Dumortier, 2002; Landschip and Modderman, 2004; Tammet, 2006). Analysing which meaning the authors ascribe to the built environment and how they behave in dealing with the surrounding space, we want to identify some characteristic elements in their reflections on the built environment.
2.4 Divergent Dimensions of Space

2.4.1 The Confidence in Physical Space

'My consolation, my safe retreat in the world, was a brown armchair in one corner. I could just fit in behind it. With my face close to the back of it, I would stare into the upholstery so that I could see every tiny little bit of it. I became absorbed in the brown material, in its threads, in the minute holes between the threads. […] There was no energy to be found there, but there was rest, a way of keeping my mouth shut and holding on to a little of the energy that had otherwise been spent in trying to understand what was incomprehensible, how everything hung together.' (Gerland, 1996)
The incomprehensibility of society is a frequently recurring theme in the studied autobiographies of people with autism. Contemporary society does not give the authors something to hold on to in understanding what this world is all about; their life stories reveal a continuous struggle for some grip on the world around them. In pursuit of a sense of certainty in this world, physical space is presented as a fixed and self-evident feature of the environment, as a physical entity that gives the sense of grip the authors are looking for. The physical space offers some grip, not only metaphorically, but also in the literal sense of the word: it is visible and tangible, one can perceive it. Physical space simply is, and gives a sense of certainty unlike the transient information and the concepts of life that are not directly perceptible, such as the inner self of human beings. In this respect, physical entities, either single objects or whole spaces, seem to inspire more confidence than people. “People were never safety points to me,” Gunilla Gerland (1996) writes, whereas objects could mean a reassurance, even in new and incalculable environments. Physical entities were the anchorages to the world, not
other people. “I didn’t want to move house, most certainly not. Our house and garden were my security. The house was closer to me than people were.” However, Gerland mentions that, after all, from a physical point of view there is no essential difference between people and objects. Even human bodies, or empty faces, can be seen as physical entities in space. “Those [empty] faces were as lacking in content as furniture, and I thought that, just like furniture, they belonged in the rooms I saw them in.” Consequently, “sitting on the lap of a stranger, on the lap of an empty face, hadn’t been any more difficult than sitting on an armchair.” (Gerland, 1996) Other auti-biographers note the same essential similarity between people and objects. According to Brad Rand (1997): “When I’m not concentrating on people, they just look like shapes, like furniture and trees are shapes.” When it turns out that people are not only stage-property, that there is some inner self behind these other entities labelled as human beings, and that they too have a personal state of mind, it is no surprise that—being not on the same wavelength to understand these underlying characteristics—human beings cause an unsafe feeling. The meaningful world behind human beings generates some unpredictability and uncertainty. “Because of important reasons,” Birger Sellin (1993) writes, “I can find safety only in things. People are incalculable and distinct monsters.” Nonhuman predictable behaviour and iron regularity are thing-like traits that people can only try hard to simulate, and in this way, “people with autism are offered more grip, sometimes even literally so, by nonhumans than by humans” (Hendriks, 1998). The predictability and—perhaps indisputable—perceptibility of physical space inspire confidence in the authors of the selected auti-biographies. In the descriptions of Gunilla Gerland (1996), this confidence is even enhanced by the sense of confirmation of her inner feelings, evoked by the physical environment. She likes to wander round the residential area at night, lonely, and writes: “The world was quieter then, and it looked just as deserted as it felt. That tallied. [...] I liked it when things tallied, when there was both an internal and an external emptiness.” Gerland longs to relate the environment to her own senses. Furthermore, she explains her bent towards curved objects, her desire to touch them, triggered by the feeling of being so ‘straight’ inside. “It’s because my nervous system is rectilinear that I need to acquire a curve from outside. As if, when I really need an inner curve so as not to be so rigid, I have to find it somewhere outside myself.” In both cases, the surrounding space, either as a mirror or as a complement of her own inner sense, ensures her own feelings. The sense of certainty and confidence experienced in relation to the physical environment makes the auti-biographers seek comfort in tangible space rather than with other human beings. Gunilla Gerland (1996), faced with the threat of losing track of herself when the certainties in the social world were lost, slipped in behind the brown armchair, to find a safe retreat to be left in peace. And Liane H. Willey (1999), whenever she began to feel as though she would come unravelled, crawled into the symmetrical alcove under her bed, until she felt “as square and symmetrical as the alcove itself.” There she could always find herself. 
The tangible space is able to offer a place to come to one’s senses again, as a safety point amidst an incomprehensible society.
2.4.2 The Hidden Logic of the Built Environment

Even though in most stories of people with autism the physical space features as a source of certainty, there are more sides to the physical environment as utilised by society. Each of the considered auti-biographies reveals an unpredictable feeling with regard to the built environment, resulting from problems or 'maladjusted' behaviour experienced in dealing with it.
A recurring problem concerns orientation and wayfinding. Several auti-biographers describe situations in which they lose their spatial orientation, in outside environments as well as inside a building. J.G.T. van Dalen (1994), diagnosed to be slightly autistic, writes how easily he loses his way. Even in familiar environments, he has to think a lot to find the right way. Also Landschip describes how, although he practised the route between two points, he was still not able to find this way again under various circumstances, snow, a closed road, … (Landschip and Modderman, 2004). Dominique Dumortier (2002) mentions similar problems of orientation, even inside a building: after visiting a new apartment, she does not succeed in pointing out the right door to the exit. And Liane H. Willey (1999) and Daniel Tammet (2006) describe how they had to rely hopelessly on trial and error to find their way inside the school building.
Gunilla Gerland (1996) nuances her wayfinding through the school. She is not only aware of her problems in finding the right way, but also of the differences between her and her classmates in using this building. This realisation causes some doubt about the certainty of the perceived environment: "There must be a sign of some sort on the doors," she writes, "because the others didn't hesitate over where they should go." Her problems in dealing with the environment, but especially the realisation of a different use or interpretation of the environment compared to others, make Gerland question which innate abilities enable the others to retrieve this logic. Despite the certainty offered by the fixed physical space, the imagined organisation and assumed logic behind the tangible space causes confusion.
In the same way, van Dalen (1994), impervious to a functional meaning of the built environment, is aware of his 'dysfunctional' reactions: "For example, I often experienced that I pushed the open-button to close the doors of the elevator and the other way around." The button to close an elevator is usually indicated by two thick vertical lines, widely separated, which—from a structural point of view—represent an opened door. However, the meaning of this button is grasped from the two functional arrows between those lines, pointing to each other to represent the act of doors closing. The connotations and meanings attributed to the built environment in our society can lead to situations in which people with autism do not behave themselves according to the rules people without ASD inherently connect with it.
Aware of their problems or—compared to others—'maladjusted' behaviour in dealing with the built environment, most authors describe how they consciously try to develop strategies to get round these difficulties, and how they try to blur some differences towards outsiders. It is a search for tricks that allow them to function in an apparently spontaneous way (Dumortier, 2002), a search for skills as a compensation for the missing automatic pilot (Landschip and Modderman, 2004).
In her attempt to harmonise her behaviour with the way of dealing with the environment that she noticed in others, Gunilla Gerland (1996) developed theories that, in her opinion, were a useful aid in making sense of what was happening. Her theories became a truth that had an impact on how she tried to understand the world. "I desperately wanted to understand, and this led to theories: if everything looked in a certain way in the living-room—the sun shining in through the curtains, the ashtray on the table with a newspaper beside it—and if Kerstin then came back from school ... I thought that everything had to look exactly the same the next day, for her to come back from school. And in fact, it often did." Strikingly, in developing strategies to compensate for their ignorance of the hidden logic behind the built environment, the auti-biographers unconditionally anchor their theories to the immediately perceptible space. Even the unpredictable side of the built environment is compensated for by reverting to the physical certainty of space.
2.5 Interpreting Their Words

2.5.1 A Conscious Perception of Space

A rough overview of the role of the built environment in the auti-biographies under consideration reveals that the authors' perception and interpretation of the built environment, and its divergent dimensions, is shaped by a recurring way of experiencing space: the continuous consciousness of physical space as a tangible entity. The physical environment is presented as an anchorage of confidence in the world, a safety point in contrast with human beings, and this reliability seems to have its roots in the tangibility and perceptibility of space. The idea of perceiving humans as physical entities, just as objects and spaces are physical entities, illustrates the essential logic behind this conscious way of perceiving the environment. The unsafe feeling caused by human beings is due to the unpredictable aspects of an 'inner self' hidden behind the physical, which cannot be perceived just like that. In the same way, the experienced unpredictable side of the built environment stems from what exceeds the directly perceptible. The way the physical environment is utilised in our society often requires a sense of more than what is really tangible. To succeed in orienting oneself and finding the right way, for example, it is necessary, even in familiar environments, to be able to imagine what is not immediately present, placing the actual perception within a general conception of the environment. When viewing a certain door, Dominique Dumortier (2002) could not, just like that, imagine the space to which that door gave entrance. And walking through the city, she was not able to imagine where a certain street would end, even if she had walked it a few times before. According to van Dalen (1994), even the smallest change in his viewpoint makes him perceive an almost totally new environment, and failing to relate one image to the other explains his problems in wayfinding. Merely building on directly
perceptible aspects to interpret the built environment can apparently lead to problems in dealing with space, or to behaviour that is 'maladjusted' compared to others. Moreover, in attributing meaning to the built environment in our society, we start from a range of non-perceivable, non-concrete information, which gives the tangible space an extra dimension. The built environment surpasses the physical space, and it is in this area that the authors mentioned above experience their problems, which strongly suggests that their interpretation of the world is mainly based on immediate perceptions of the physical space. The auti-biographers' consciousness of the tangible space is also present in their reaction to both the predictable and the unpredictable characteristics of the built environment. In developing strategies to compensate for the problems they experience in interaction with the built environment, the authors describe how they fall back on their experience of an inherent certainty of the existing tangible space. Landschip, for example, to control his movements and actions, explicitly refers to his position in relation to the surrounding space. He explains that the quality of coordination between his eyes and hands, a skill in which he excels at certain moments whereas at other moments it leaves much to be desired, does not only depend on which activity he is doing, but often "it goes well as soon as I have 'positioned' myself in space." It is the tangible environment that gives him the sense of certainty to position himself in space, and for this reason, he writes, he feels safer when he travels by bike than when he goes on foot: "That bicycle is literally and metaphorically something to hold on to, an anchor, a point of departure that makes me know, all the time, what is upside and what is downside of reality" (Landschip and Modderman, 2004). It is the act of touching the fixed, tangible environment that vouches for his sense of safety. Just like Landschip, Gunilla Gerland (1996), confused by an unexpected and piercing noise, looks for something to hold on to in the physical environment, so as not to lose her bearings. "The din made the ground under my feet disappear and I could neither see nor feel the world around me. Up and down were suddenly in the same place and I had no sense of where my feet were. [...] I had to feel something that stood still, something anchored, in a world that had suddenly become totally unpredictable." The concrete perception of physical space once more offers her an alternative way to make sense of what is happening.
2.5.2 A World of Experience of Their Own

In different spheres, the conscious experience of the physical environment described in the auti-biographies reveals a unique mode of perception and consciousness. The authors are aware of what is, of what is directly perceptible. They substantiate the experiences described in relation to the built environment by chronicling an 'other' way of thinking, which makes them perceive, interpret and experience the world around them in an 'other' way. To explain his problems in dealing with the built environment, van Dalen (1994) mentions that, in his view, the real causes are perceptual in nature. He discovered that he, unlike most other people, perceives the world in a structural rather than a functional way, which involves another way of making sense of the environment.
"Anyway, my perception functions in an unambiguously different way," Birger Sellin (1993) remarks. Landschip specifies his 'other' way of perceiving: "I do not think my senses are developed in a better or sharper way, I rather think that it is caused by what I'm doing with my sensory information" (Landschip and Modderman, 2004). Attempting to have complete cognitive control over the way he perceives the world around him and the way these perceptions are organised inside his head, he explicitly describes how he perceives the surrounding space by means of his body, and even how he consciously experiences his body. "Probably," Landschip writes, "except for your senses, you have other means at your disposal to know who you are, and to define the boundaries between yourself and what is around you." Nevertheless, he himself uses the perception of the boundaries of his body as a reference. In this context, he describes his fear of remaining seated on a chair for too long: it ends up blurring the difference between the chair and his body. "At a certain moment, the interface of the chair is as warm as my body temperature, and at that moment I have lost the boundary between me and that chair" (Landschip and Modderman, 2004). To support his attempts to harness the environment with his mind, Landschip is aware of the importance of consciously perceiving the world through his body. Gunilla Gerland (1996) extends this importance of bodily experience with her attempts to mentally follow up everything she is doing. When she plans to visit new places, this always requires precise preparation. But however much she tries to harness the unknown environment, preparing everything with her mind, there is always an unpredictable aspect: "All that remains, what I can't prepare for, is the town's tempo, what the air feels like, what the town sounds like—everything that gives it a colour inside me." Through her continuous focus on cognitively grasping everything, she is aware that it is only possible to experience that colour of a city by being physically present. The way the authors discussed here consciously perceive the world and process information from the environment makes their view of the world unique. It does not only characterise their perception and interpretation of the built environment, but also leaves its mark on their everyday experience. "Autism is a way of being," Jim Sinclair (1993) clarified. "It is pervasive; it colours every experience, every sensation, perception, thought, emotion, and encounter, every aspect of existence." Autism, as a pervasive disorder, forcibly shapes the world of experience.
2.6 In a Wider Perspective

A first analysis of the considered descriptions of the built environment by people with autism reveals a characteristic way of perceiving. Most authors of these auti-biographies express how they experience their own perception of the world as different from that of other people. "I was only beginning to see how peculiar my world was," relates Liane H. Willey (1999), "not wrong or embarrassing or unessential, just peculiar and different." Because of their different way of perceiving, the authors look at the world in a different way: they interpret it from this particular perspective and experience it in a distinct way.
Birger Sellin (1993) testifies: "In my world, I know my way well, but the reality is macabrely different." To elaborate on this 'difference' of worlds, Lourens Bijlsma (2000), convincingly describing his lack of overview of the environment, adds that an autistic person is not aware of it by himself: "it is the world as he got to know it." The non-autistic environment clarifies for him that 'reality' contains more, that there are connections, assumed to be 'real' ones, that he is not able to perceive. It illustrates the power of society to disable: "Assuming that there is one way to be in a culture encourages the misunderstanding that those who are different from perceived norms are missing something, that it is their doing, that they are locked out for a reason, that they are in fact, in reality, disabled" (McDermott and Varenne, 1995). Because of the way certain skills are made to count in various social settings, the different world of experience of people with autism could be considered a deprivation. However, the critique from a non-autistic world of experience could also be turned the other way around, turning the characteristics of autism into "something that everyone in the community could easily work with, and turn it into a strength" (McDermott and Varenne, 1995). "The AS community gives us much cause to celebrate," states Liane H. Willey (1999). Jim Sinclair (1993) also explicitly transcends a sense of inferiority. Describing a child with autism as stranded in an alien world, he argues that this other world of people with autism invites our society, perhaps challenges our society, to look into their reality: "You're going to have to give up the certainty that comes of being on your own familiar territory, of knowing you're in charge, and let them teach you a little of [their] language, guide you a little way into [their] world." It is an invitation that can be framed within the cultural disability model of Devlieger et al. (2003), which suggests that taking up a lens of disability, or an autistic point of view, enables us to ask questions about the dominant way of thinking in our society. Applying this stance to the analysis above, interpreting the built environment through the view of people with autism, an understanding of their descriptions can challenge us to open our eyes to a certain critique of the way we, so-called neurotypicals, think about the built environment. What is more, this unique point of view questions the way we assume our standards to be normative in organising space in this society. Describing their world of experience, the authors uncover a direct and conscious way of experiencing the world, a plain view of the world that most of us are not aware of. In this way, the perspective of autism could prompt us to reflect on our own experience of space by throwing light on their particular experience, and it challenges our way of thinking about the built environment. But the cultural model of disability goes further, taking advantage of the difference between worlds of experience by allowing an exchange between those different worlds. In Dubbelklik (literally: 'Double-click') (Landschip and Modderman, 2004), a correspondence between a person with autism and a person without, this exchange between the different worlds of the two authors is clearly represented. In the end, both testify that, thanks to this dialogue, they not only got to know each other, but also got to know themselves.
Loes Modderman's descriptions of the non-autistic world taught Landschip a great deal about his autism. Conversely, Landschip's stories of his autism acquainted Loes Modderman with her own non-autism. In this way, Modderman writes that, thanks
to her contacts with Landschip, she herself became more consciously aware of the way she perceives and feels. Fascinated by the fact that 'realities' can be that different, she posits that this book is promising proof that "a bridge can be built between one world and the other, just like that, offering a valuable gift." Looking for a bridge between two worlds of experience, this paper used the reading of auti-biographies as an approach to discovering the world of experience of people with autism. "Writing is my first step out of the other world," Birger Sellin (1993) explains. By studying their written stories with a focus on their interpretation of the built environment, this research took a specific approach to challenging the usual conception and design of space. However, starting from only a selection of auti-biographies, the scope of this research is limited. Insights can still be extended by analysing the work of other major authors with autism (e.g. Temple Grandin). Moreover, by considering only written stories of people with autism themselves, who consciously put their own actions and feelings into words, the understanding of their perspective is strictly limited to their own interpretation of experiences in interaction with the built environment. In further research, the analysis of these well-considered written reflections will be supplemented with an investigation of the real interaction of people with autism with the built environment, focusing on their performance of actions in space and their spoken reflections on it. Despite the limitations of this research, the analysis of the built environment through the reflections of people with autism in their auti-biographies has allowed us to raise a corner of the veil covering the autistic perspective on the built environment. Using this particular approach, this study unveiled how an 'other' mental disposition can broaden designers' outlook on the built environment by accepting an exchange between both worlds of experience.
2.7 Acknowledgements

This research is supported by the Research Foundation – Flanders (FWO), of which Stijn Baumers is a Ph.D. fellow. The research of Ann Heylighen is funded by the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement n° 201673. Special thanks should also go to Eva Boodman, for fine-tuning the English spelling and grammar in this paper.
2.8 References

Albrecht G (2003) Disability values, representations and realities. In: Devlieger P, Rusch F, Pfeiffer D (eds.) Rethinking disability. The emergence of new definitions, concepts and communities. Garant, Antwerpen-Apeldoorn, The Netherlands
Bijlsma L (2000) Wat autisme eigenlijk is, gezien door een autist. Engagement, 2000(1)
De Roeck A (1997) Over autisme en cognitie: de andere informatieverwerking van mensen met autisme. Van Horen Zeggen, 37(4): 4–11
Devlieger P, Rusch F, Pfeiffer D (2003) Rethinking disability as same and different! Towards a cultural model of disability. In: Devlieger P, Rusch F, Pfeiffer D (eds.) Rethinking disability. The emergence of new definitions, concepts and communities. Garant, Antwerpen-Apeldoorn, The Netherlands
Dischinger M (2006) The non-careful sight. In: Devlieger P, Renders F, Froyen H, Wildiers K (eds.) Blindness and the multi-sensorial city. Garant, Antwerpen-Apeldoorn, The Netherlands
Dumortier D (2002) Van een andere planeet. Autisme van binnenuit. Houtekiet, Antwerpen/Amsterdam, The Netherlands
Gerland G (1996) A real person. Life on the outside. Souvenir Press, London, UK
Hendriks R (1998) Egg timers, human values, and the care of autistic youths. Science, Technology & Human Values, 23(4): 399–424
Klonovsky M (1993) Vorwort. In: Sellin B (ed.) Ich will kein Inmich mehr sein: Botschaften aus einem autistischen Kerker. Kiepenheuer & Witsch, Cologne, Germany
Landschip, Modderman L (2004) Dubbelklik: autisme bevraagd en beschreven. EPO & VDA
McDermott R, Varenne H (1995) Culture as disability. Anthropology & Education Quarterly, 26(3): 324–348
Noens I, van Berckelaer-Onnes I (2004) Making sense in a fragmentary world. Autism, 8(2): 197–218
Noens I, van IJzendoorn R (eds.) (2007) Autisme in orthopedagogisch perspectief. Boom Academic, Amsterdam, The Netherlands
Rand B (1997) How to understand people who are different. Available at: www.autism-pdd.net/brad.htm (Accessed on 19 November 2009)
Sinclair J (1993) Don't mourn for us. Our Voice, 1(3)
Sellin B (1993) Ik wil geen inmij meer zijn: Berichten uit een autistische kerker. Thoth, Rotterdam, The Netherlands. Translation of: 'Ich will kein Inmich mehr sein: Botschaften aus einem autistischen Kerker.' Kiepenheuer & Witsch, Cologne, Germany
Tammet D (2006) Born on a blue day. Hodder & Stoughton, London, UK
van Dalen JGT (1994) Autisme van binnenuit bekeken. Kijken door licht-autistische ogen. Engagement, 3: 3–8
von Uexküll J (1934) Streifzüge durch die Umwelten von Tieren und Menschen. Julius Springer, Berlin, Germany
Willey LH (1999) Pretending to be normal: living with Asperger's Syndrome. Jessica Kingsley Publishers, London, UK
Wing L (1997) The autistic spectrum. The Lancet, 350: 1761–1766
Zöller D (1989) Als ik met je praten kon…: brieven en dagboeken van een autistisch kind. Uitgeverij Kosmos, Utrecht, The Netherlands. Translation of: 'Wenn ich mit euch reden könnte...' Scherz, Bern, Switzerland
Chapter 3 Have I Just Pressed Something? The Effects of Everyday Cold Temperatures on Dexterity
E. Elton, D. Dumolo and C. Nicolle
3.1 Introduction

This paper details work on the effect of the physical context of use on inclusive product interaction. Context of use refers to the set of circumstances relating to the users, tasks, equipment/tools and environment (both physical and social) (ISO, 1998). In particular, the physical context of use refers to factors such as lighting levels, temperature, weather conditions, vibration, noise, the built environment, etc. Consideration of the context of use is an integral, although sometimes implicit, part of any product design process. When a mismatch between context and a product occurs, it is unlikely that the benefits of the product will be realised (Maguire, 2001). Recent evidence suggests that context of use can have a multi-faceted impact on product use (e.g. increasing or decreasing user capability and/or increasing product demand), particularly for older adults who have significantly reduced capability due to their age (Elton et al., 2008). Specifically, it is the physical environment that significantly affects capability. The vast majority of product interactions make demands on the visual and dexterous (arm, hand and finger) capabilities of the user. Whilst other capabilities are also used, it is these that are most common. Several studies (Riley and Cochran, 1984; Havenith et al., 1995; Boyce, 2003) have reported the effect of the physical environment on vision and dexterity. However, such studies focus on the body's physiological response to such conditions and generally investigate extremes, e.g. freezing temperatures. Whilst these studies indicate the extent to which the physical environment can affect capability, they have little relevance to everyday scenarios in which products are used. Previous research investigated the effect of everyday lighting levels on visual capabilities (Elton and Nicolle, 2009). This paper reports the findings from a pilot study that investigated the effect of an everyday winter temperature on dexterity and how this can affect product interaction.
3.2 Dexterity

Dexterity refers to a motor skill that is determined by a range of arm, hand and finger movements and the ability to manipulate with the hand and fingers (Heus et al., 1995). Dexterity comprises both manual dexterity and fine finger dexterity. Fine finger dexterity refers to the ability to manipulate objects with the distal (fingertip) part of the hand. This involves precise movement of the fingers, e.g. writing, dialling a number, picking up a coin, fastening a button, etc. Manual dexterity involves less refined and less precise movements of the hand and fingers (Desrosiers et al., 1995). The object is usually larger and manipulation requires more gross movements, e.g. digging, opening a door, placing a saucepan on the hob, etc. Dexterity is extremely important in carrying out everyday product interactions and nearly all products in today's marketplace require dexterity in one form or another. Functioning of the hands is determined by several physiological parameters, which are described in Table 3.1.

Table 3.1. Factors that influence dexterity (Heus et al., 1995)
Component of dexterity | Description
Reaction time | The time between a stimulus being presented and the start of the motor response
Sensibility (sensitivity) | The response of receptors in the skin to tactile, pressure, thermal and pain stimuli
Nerve conduction | The speed at which nerves conduct signals
Grip strength | The force that can be developed by the muscles of the upper and lower arm
Time to exhaustion | The time until a decrease in the force exerted by the muscles occurs
Mobility | The range of motion of the hands and fingers
3.3 Effects of the Cold on Dexterity

When people are in cold environments, the temperature of the body's extremities drops first, as cold air comes into contact with the skin. When the skin cools, the blood flow to that area decreases, which results in less heat being dispersed to that part of the body (Edwards and Burton, 1959). This then lowers the temperature of the skin further. Cold also decreases the nerve conduction velocity (i.e. the speed at which nerves send messages from the brain to the muscles that control the hand). Furthermore, it causes the synovial fluid that lubricates the joints to become more viscous, so that movements are slower and require greater muscle power. In summary, dexterity (both manual and fine finger) is significantly
reduced due to the physiological effects of the cold on the human body (Heus et al., 1995). However, little is known about the extent to which typical everyday cold temperatures affect the functioning of the human hand and what effect this can have on a user's capability to interact with a product. Is it just extreme temperatures that cause these physiological changes to occur, thus reducing dexterity, or could being outside for 20 minutes on a winter's day have a significant effect?
3.4 Aims and Objectives

The overall aim of this research is to produce a capability dataset that can be used by designers to produce products that are inclusive in the contexts in which they will be used. The specific objectives of this pilot study are to:
• obtain an indication of which forms of dexterity are affected by the cold (approximately 5°C) and to what extent;
• determine the likely effect on product interaction;
• identify which tests are good predictors of product interaction capability;
• identify appropriate dexterity tests for a larger scale study.
3.5 Methods

3.5.1 Dexterity Tests

Objective measures were used to assess dexterity as they have the advantage of providing direct measures of human response (Parsons, 2005). Manual and fine finger dexterity were measured using a combination of empirical tests and representative real world tasks. The aim of the pilot study was to identify from these tests which form(s) of dexterity are affected by the cold. The empirical tests chosen are detailed in Table 3.2 and the representative real world tasks chosen are detailed in Table 3.3.
Table 3.2. Empirical dexterity tests used in the experiment
Test | Description
Purdue pegboard | A test of fine finger dexterity. The assessment involves a series of four subtests which involve placing as many pins as possible into a pegboard with the right hand, then the left hand and then both hands, each in a 30 second period. The fourth subtest is an assembly task using pins, collars and washers; this was not used in this experiment.
Power grip strength | The maximal grip strength (kg) a person can exert with their hand (measured by squeezing together the middle joints of all four fingers and the palm). Only the dominant hand was measured, following the standard protocol provided with the dynamometer (Takei Scientific Instruments T.K.K.5401 Grip D digital grip dynamometer). The test was repeated three times and the mean taken.
Pinch grip strength | The maximal force that can be exerted between the index finger and thumb pulps. Only the dominant hand was measured, in a standardised posture. The maximum force was measured in kg; the test was repeated three times and the mean taken. The equipment used was the Baseline Hydraulic Pinch Gauge.

Table 3.3. Representative real world tasks used in the experiment
Real world task | Description
The Moberg pick-up test | A real world timed test that uses a combination of pinch grip and fine finger dexterity. The test requires participants to pick up a selection of 12 real world objects from a table and place them in a container as quickly as possible. The test was modified to use a selection of representative everyday products, including a mobile phone SIM card, paperclip, safety pin, AA battery, PDA stylus, match, UK 1p, UK 2p, credit card, key, bolt and wing nut. The test was repeated a second time and the mean taken.
Using a mobile phone | A task requiring fine finger dexterity. The time taken to enter an eleven digit number, in the style of a UK landline telephone number, into a mobile phone (Nokia 3210e) was recorded.
Using gardening secateurs | A task requiring the exertion of a power grip. Participants were asked to cut through increasing thicknesses of wooden dowel (3, 5, 6, 9, 10 and 12 mm diameters) using a pair of garden secateurs (B&Q Deluxe Branch and Thicker Stem Secateurs). The maximum thickness of dowel that they could cut through was recorded.
The rationale for selecting these particular dexterity measures will be detailed in another paper that is currently in preparation.
3.5.2 Cold Temperatures

The coldest outdoor temperatures in the UK are experienced through the winter months (December, January and February). Mean temperature across the country usually varies between -4°C and +8°C; however, on average, mean temperatures lie around the 4–5°C mark (Met Office, 2009). Also, 5°C is the temperature threshold used by the Met Office to issue a cold weather warning (Goodwin, personal communication, 2009). Based on these national statistics and temperature thresholds, 5°C was chosen to represent cold environmental conditions.
3.5.3 Procedure

A climatic chamber was used to regulate the desired temperature of 5°C. This had the advantage of ensuring consistency in testing conditions and eliminating experimental noise. Thermo-neutral testing (in an environment that keeps the body at an optimum point) was conducted in a room adjacent to the climatic chamber, regulated between 19°C and 24°C. In order to replicate real world scenarios as closely as possible, each participant was asked to bring their own winter clothes (suitable for temperatures of 5°C) to wear in the climatic chamber. The only item of winter clothing they did not wear was gloves, as the experiment was concerned with the effect of the cold on the hand and dexterity. Gloves are another variable known to influence dexterity: in a study conducted by Havenith and Vrijkotte (1993), wearing gloves decreased fine finger dexterity by up to 70% and hand dexterity by up to 40% in comparison to ungloved hands. Currently, there is no data that simultaneously details the effects of the cold and gloves on dexterity. However, in relation to this study, measuring the effects of the cold and gloves in one experiment is not practical, i.e. participants would have to spend prolonged time in the cold and would have to conduct double the number of tests, which could easily result in fatigue, discomfort and significantly increased blood pressure. When in the climatic chamber, participants were asked to sit for 20 minutes prior to undertaking the battery of dexterity tests, in order to let their hands cool. In the thermo-neutral environment participants dressed in their 'normal' clothing for the time of year (summer 2009). A repeated measures design was chosen to provide the best comparison between the two types of environment. The order in which the two environments were experienced and the order of the dexterity tests were varied systematically using a balanced Latin square (see the sketch below). This counterbalancing of conditions and tests mitigated any order or carry-over effects.
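As an illustration of the counterbalancing described above, the sketch below generates a standard balanced Latin square for an even number of conditions and cycles through its rows to assign a presentation order to each participant. This is only a minimal example of the general technique, under the assumption of six test conditions; the test labels are taken from this study, but the actual orderings used in the experiment are not reported here.

def balanced_latin_square(conditions):
    """Return one presentation order per row of a balanced Latin square.

    Uses the standard construction for an even number of conditions, in which
    every condition appears once in every position across the set of rows.
    """
    n = len(conditions)
    if n % 2:
        raise ValueError("this construction assumes an even number of conditions")
    first, k = [0], 1
    while len(first) < n:          # first row: 0, 1, n-1, 2, n-2, ...
        first.append(k)
        if len(first) < n:
            first.append(n - k)
        k += 1
    rows = [[(x + r) % n for x in first] for r in range(n)]
    return [[conditions[i] for i in row] for row in rows]


tests = ["Purdue pegboard", "Power grip", "Pinch grip",
         "Moberg pick-up", "Mobile phone", "Secateurs"]
square = balanced_latin_square(tests)
for participant in range(14):      # 14 participants; the 6 orders are reused in turn
    print(participant + 1, square[participant % len(square)])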
3.5.4 Sample

Since there is a lack of specific information on the prevalence of disorders affecting dexterity in the UK, it was not possible to recruit a random proportionate sample. An initial purposive sampling strategy to recruit a highly variant sample of users with mixed dexterity abilities was therefore adopted. A total of 14 participants (six male and eight female), aged between 65 and 75 years (mean age = 69.57, SD = 3.756), completed the pilot study. A minimum age criterion of 65 years was set for the sample, as significant reductions in hand function are seen after this age (Shiffman, 1992). It is these users who are already working at the limits of their ability; therefore any reduction in capability due to context would result in their being excluded. A dataset that details this reduction and variation in capability will allow for the design of mainstream products that are accessible to, and usable by, as many people as reasonably possible, without the need for special adaptation or specialised design (Clarkson et al., 2007).
3.5.5 Ethical Considerations

Ethical clearance for the study was obtained from Loughborough University's Ethical Advisory Committee. All participants answered a health screening questionnaire to ensure they had no conditions that could be adversely affected by the cold. They received a participant information pack containing full details of the study prior to their arrival. During the study, blood pressure and finger skin temperature were monitored to ensure they remained within safe limits based on expert and medical advice.
3.6 Results

The results in this section detail the findings from the pilot study. All participants completed the battery of tests in both the thermo-neutral (mean temperature = 21.5°C, SD = 0.75) and cold (mean temperature = 5.7°C, SD = 1.25) environments. Mean finger skin temperature was 30°C in the warm environment and fell to 19°C in the cold. Outliers were removed and the data were tested for normality. Data for nearly all tests were normally distributed (parametric), apart from the secateurs test in both warm and cold conditions. Thus, median values and non-parametric tests have been used to analyse the secateurs data sets. The average performance for all dexterity tests in both the thermo-neutral and cold environments is detailed in Table 3.4.
Table 3.4. Average dexterous performance in thermo-neutral and cold environments
Dexterity test | Thermo-neutral average (SD) | Cold average (SD) | Difference in performance (%)
Purdue pegboard (R + L + both = no. of pins) | Mean = 35.50 (SD = 1.46) | Mean = 33.14 (SD = 1.02) | -7%
Power grip strength (kg) | Mean = 29.7 (SD = 11.29) | Mean = 28.87 (SD = 11.14) | -3%
Pinch grip strength (kg) | Mean = 5.75 (SD = 1.64) | Mean = 5.52 (SD = 1.53) | -4%
Moberg pick-up test (sec) | Mean = 13.79 (SD = 2.21) | Mean = 15.74 (SD = 4.84) | 14%
Mobile phone (sec) | Mean = 13.35 (SD = 4.28) | Mean = 14.20 (SD = 3.60) | 6%
Secateurs (max. diameter of dowel cut, mm) | Median = 5 (IQR = 5) | Median = 5 (IQR = 5) | 0%
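The final column of Table 3.4 is simply the relative change from the thermo-neutral to the cold condition. A minimal sketch of that calculation, using the mean values from the table, is given below; note that for the pegboard a negative change means fewer pins placed (worse), whereas for the timed tasks a positive change means a longer completion time (also worse).

# Relative change in performance between conditions, as in Table 3.4:
# (cold mean - thermo-neutral mean) / thermo-neutral mean * 100
means = {                       # (thermo-neutral mean, cold mean)
    "Purdue pegboard (pins)": (35.50, 33.14),
    "Power grip (kg)": (29.7, 28.87),
    "Pinch grip (kg)": (5.75, 5.52),
    "Moberg pick-up (s)": (13.79, 15.74),
    "Mobile phone (s)": (13.35, 14.20),
}
for test, (neutral, cold) in means.items():
    change = (cold - neutral) / neutral * 100
    print(f"{test}: {change:+.0f}%")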
A reduction in mean dexterity was observed for the Purdue pegboard (7% reduction), the Moberg pick-up test (14% reduction) and the mobile phone task (6% reduction) in the cold environment. A slight reduction was observed in grip strength performance (power 3% and pinch grip 4%) in the cold environment. However, mean performance on the real world grip strength test using the secateurs did not appear to be affected by the cold. Paired t-tests were used on the normally distributed data to determine whether there was a significant difference in performance between the two environments. The results from this analysis are detailed in Table 3.5.

Table 3.5. Paired t-test results between the thermo-neutral and cold environments
Dexterity test | Mean difference (SD) | Sig. (2-tailed), p < 0.05
Purdue pegboard | -2.36 pins (3.5) | 0.026
Power grip strength | -0.83 kg (2.5) | 0.227
Pinch grip strength | -0.23 kg (0.6) | 0.189
Moberg pick-up test | 1.95 s (1.0) | 0.024
Mobile phone | 0.85 s (2.3) | 0.188
Results from the paired t-test analyses revealed that the cold environment had a significant (p<0.05) effect on performance on the Purdue pegboard (p=0.026) and the Moberg pick-up test (p=0.024). However, the cold environment did not significantly affect dexterous performance on the grip strength tests (power and pinch).
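To make the analysis pipeline concrete, the sketch below shows how such a comparison could be run in Python with SciPy, assuming each participant's paired scores are available as arrays. The numbers are placeholders rather than the study data, and the Shapiro-Wilk check on the paired differences stands in for the normality testing described above; this is an illustration of the general approach, not the authors' actual analysis script.

import numpy as np
from scipy import stats

def compare_conditions(neutral, cold, alpha=0.05):
    # Check normality of the paired differences, then apply either a
    # paired t-test or a Wilcoxon signed-rank test accordingly.
    neutral, cold = np.asarray(neutral, float), np.asarray(cold, float)
    diffs = cold - neutral
    if stats.shapiro(diffs).pvalue > alpha:
        name, res = "paired t-test", stats.ttest_rel(neutral, cold)
    else:
        name, res = "Wilcoxon signed-rank", stats.wilcoxon(neutral, cold)
    return name, res.statistic, res.pvalue

# Placeholder pegboard scores (number of pins) for 14 participants
neutral_scores = [36, 34, 35, 37, 33, 36, 35, 38, 34, 36, 37, 35, 34, 36]
cold_scores = [33, 32, 34, 35, 31, 34, 33, 35, 32, 33, 35, 33, 31, 34]
print(compare_conditions(neutral_scores, cold_scores))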
A Wilcoxon signed-rank test was used to compare performance on the non-parametric data (secateurs data). Results from the analysis revealed no significant difference on the secateurs task (p=0.102) when comparing performance between the thermo-neutral and cold environments. Pearson correlation coefficients (r) were calculated to determine whether the chosen dexterity tests (with parametric data) were good predictors of real world product capability in the cold. Spearman's rho was used to correlate the non-parametric data (secateurs). Results from these analyses are detailed in Table 3.6.

Table 3.6. Correlation coefficients for dexterity tests and real world tasks
Pearson's correlation: Purdue pegboard vs. mobile phone, r = -0.665 (p=0.013); Purdue pegboard vs. Moberg pick-up, r = -0.269; pinch grip vs. Moberg pick-up, r = -0.199
Spearman's rho: pinch grip vs. secateurs, rs = 0.749 (p=0.002); power grip vs. secateurs, rs = 0.771 (p=0.001)
Results from the Pearson's correlations suggest that a strong negative relationship exists between the Purdue pegboard and the mobile phone task (r=-0.665), which was significant (p<0.05). The relationship between the Purdue pegboard and the Moberg pick-up test was weak (r=-0.269), as was the relationship between pinch grip and the Moberg pick-up test (r=-0.199). Results from the Spearman's rho analysis suggest that a strong positive relationship exists between pinch grip and the secateurs task (rs=0.749), and a strong, approaching very strong, relationship exists between power grip and the secateurs task (rs=0.771). Both Spearman's rho correlations were significant (p<0.01).
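For completeness, the correlation step can be sketched in the same way. The arrays below are illustrative placeholders, not the study data; SciPy's pearsonr is used for the normally distributed measures and spearmanr for the secateurs data.

import numpy as np
from scipy import stats

# Placeholder per-participant scores in the cold condition (illustrative only)
purdue_cold = np.array([33, 31, 34, 35, 30, 33, 32, 36, 31, 33, 35, 32, 30, 34])
phone_cold = np.array([15, 17, 14, 13, 18, 15, 16, 12, 17, 15, 13, 16, 18, 14])
pinch_cold = np.array([5.2, 4.8, 6.1, 6.4, 4.5, 5.6, 5.0, 6.8, 4.7, 5.3, 6.2, 5.1, 4.4, 5.9])
secateurs_cold = np.array([5, 5, 6, 9, 3, 5, 5, 10, 5, 5, 9, 5, 3, 6])

r, p = stats.pearsonr(purdue_cold, phone_cold)        # parametric data
print(f"Purdue pegboard vs mobile phone: r = {r:.3f}, p = {p:.3f}")

rho, p = stats.spearmanr(pinch_cold, secateurs_cold)  # non-parametric data
print(f"Pinch grip vs secateurs: rho = {rho:.3f}, p = {p:.3f}")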
3.7 Discussion

The pilot study has provided an indication of the types of dexterity that are affected by everyday cold temperatures. Fine finger dexterity, as measured by the Purdue pegboard, was found to be significantly affected (p=0.026). On average, performance on the Purdue pegboard decreased by 7%. A similar study by Riley and Cochran (1984) found that performance on fine manipulative tasks, such as the Purdue pegboard, can decrease by up to 15% on average when the ambient temperature is reduced from 23.9°C to 1.7°C. For the grip strength tests only minor differences were observed, and these were not significant (power p=0.227, pinch p=0.189).
Dexterous performance was also measured on a selection of real world products in both environments. The tasks/products selected were: (1) a modified Moberg pick-up test, (2) entering an 11 digit number into a mobile phone, and (3) cutting through different thicknesses of dowel with a set of garden secateurs. Fine finger dexterity is required to complete tasks one and two, and a power grip is required in task three. The greatest decrease in performance across all tests was observed with the Moberg pick-up test: a 14% decrease in performance in the cold, which was significant (p=0.024). For the mobile phone task, performance decreased on average by 6% in the cold; however, this was not found to be significant (p=0.188). For the secateurs task, no difference in performance was observed. Results from the correlation analysis suggested that a person's capability on the Purdue pegboard is a good predictor of their ability to use a mobile phone in the cold. The same was found for both the power and pinch grip measures in relation to the secateurs task in the cold. However, due to the limited sample size, a greater number of observations is needed to ensure these relationships are not simply due to random noise or error. The results from the pilot study suggest that fine finger dexterity is affected by everyday cold temperatures. In practice this means such tasks either take substantially longer (up to 14%) or the same work rate is not possible in the cold. This reduction in capability is particularly pertinent for users who may already be working at the limits of their capability in the warm; a significant reduction in capability in the cold would thus result in their being excluded. The performance decrements may be due to the 11°C mean reduction in skin temperature, which may have caused the synovial fluid in the joints to thicken and a loss of sensibility in the fingertip receptors (Mackworth, 1953; Heus et al., 1995). Results from the pilot study suggest that such physiological changes to the hand can occur at 5°C. No significant differences in performance were observed with the gripping tests. A possible explanation is that participants were dressed warmly in their winter clothes, leaving only their hands exposed to the cold. Grip strength, both power and pinch, is controlled by the extrinsic hand muscles in the forearm, which are kept warm by the clothing insulation and thus not exposed to the cold temperature and its physiological effects.
3.8 Conclusion and Future Work

Results from the pilot study indicate that grip strength is not significantly affected by everyday cold temperatures. Therefore, obtaining an accurate measure of this capability in the cold is not necessary for the purpose of ensuring products are inclusively designed. The results suggest a standard measure of this form of dexterity could be used, unless the intended product is likely to be used after prolonged periods in the cold. Fine finger dexterity has been shown to be affected by average winter temperatures and not just extreme conditions. Results from both the empirical tests and the real world tasks are significant. In relation to product interaction, it is likely that
tasks such as using a mobile phone, pressing a sequence of buttons on a screen or keypad, using a stylus to interact with a touch screen, and picking up and placing objects such as keys, nuts, coins and bank cards will be affected. The pilot study has established which forms of dexterity are affected by cold temperatures and which tests are good predictors of real world product interaction capability. From this study it was possible to identify which tests (the Purdue pegboard and the Moberg pick-up test) provide a relevant and accurate measure of dexterity in relation to product interaction in the cold. The larger scale study will use these tests to gather further capability data to inform and guide the design process. Once this data has been gathered, it will be translated into a tool that can be used to inform and guide designers in the development of inclusive outdoor products.
3.9 References

Boyce P (2003) Lighting for the elderly. Technology and Disability, 15(3): 165–180
Clarkson PJ, Coleman R, Hosking I, Waller S (eds.) (2007) Inclusive design toolkit. Engineering Design Centre, University of Cambridge, UK
Desrosiers J, Hébert R, Bravo G, Dutil E (1995) The Purdue pegboard test: normative data for people aged 60 and over. Disability and Rehabilitation, 17(5): 217–224
Edwards M, Burton A (1959) Correlation of heat output and blood flow in the finger, especially in cold-induced vasodilation. Journal of Applied Physiology, 15(2): 201–208
Elton E, Nicolle C (2009) Now you see it, now you don't. In: Proceedings of International Conference on Inclusive Design (INCLUDE 2009), Helen Hamlyn Centre, London, UK
Elton E, Nicolle C, Mitchell V (2008) Identifying contextual factors in inclusive design. In: Proceedings of the 4th Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT'08), Cambridge, UK
Havenith G, Heus R, Daanen HAM (1995) The hand in the cold, performance and risk. Arctic Medical Research, 54(Supplement 2): 1–11
Havenith G, Vrijkotte TGM (1993) Effectiveness of personal protective equipment for skin protection while working with pesticides in greenhouses. Part III, comfort and ergonomics. Report TNO Human Factors Research Institute, Soesterberg, The Netherlands
Heus R, Daanen HAM, Havenith G (1995) Physiological criteria for functioning of hands in the cold: a review. Applied Ergonomics, 26(1): 5–13
ISO (1998) ISO 9241-11: ergonomic requirements for office work with visual display terminals (VDTs) – part 11: guidance on usability. International Organization for Standardization, Geneva, Switzerland
Mackworth NH (1953) Finger numbness in very cold winds. Journal of Applied Physiology, 5: 533–543
Maguire M (2001) Context of use within usability activities. International Journal of Human Computer Studies, 55(4): 453–484
Met Office (2009) Coldest winter for a decade. Available at: http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20090225.html (Accessed on 13 August 2009)
Parsons K (2005) The environmental ergonomics survey. In: Wilson JR, Corlett R (eds.) Evaluation of human work, 3rd edn. Taylor and Francis Group, Boca Raton, FL, US
Riley M, Cochran D (1984) Dexterity performance and reduced ambient temperature. Human Factors, 26(2): 207–214
Shiffman L (1992) Effects of aging on adult hand function. The American Journal of Occupational Therapy, 46(9): 785–792
Chapter 4 Understanding the Co-occurrence of Ability Loss
S.D. Waller, E.Y. Williams, P.M. Langdon and P.J. Clarkson
4.1 Introduction

Many solutions for one kind of ability loss rely on another kind of ability to compensate, so understanding the co-occurrence of ability loss is critical when designing or specifying products, services or environments that should be accessible to, and usable by, the widest possible range of people. For example, a common solution for vision ability loss is to provide supplementary audio information, yet the success of this strategy will depend on the extent to which people who have vision ability loss do not also have hearing ability loss. Investigating the co-occurrence of ability loss requires a single data source that covers all types of ability loss that may be relevant for performing everyday tasks, yet the 1996/97 Disability Follow-up Survey (DFS) (Grundy et al., 1999) remains the most recent UK dataset to do this (Johnson et al., 2010). The DFS was commissioned to measure the prevalence of disability in UK adults (16+), according to the severity of the corresponding quality of life impairment. Within the DFS, approximately 7,200 participants were asked up to 300 questions regarding their ability to perform different everyday tasks. Not all participants were asked all the questions; instead, the questions asked depended on the participant's answers to previous questions, so that fewer questions were required to determine the participant's overall severity of quality of life impairment. The complete set of questions, the criteria that determined which questions were asked, and the participants' answers to the questions are publicly available from the Department of Social Security Social Research Branch (UK Data Archive).
4.2 Re-analysis of Disability Follow-up Survey Participants' Answers

In order to use the DFS to examine the co-occurrence of ability loss, it was first necessary to reconstruct the answers for all instances where participants were not asked a particular question. This reconstruction process can conveniently be expressed as a series of assumptions, where an answer to a particular question provided an assumed answer for another question that the participant was not asked. For example, if a participant answered a question to indicate that they "cannot hear sounds at all", then they were not asked the question "can you follow a TV at a normal volume?", and it is assumed that this participant would "be unable to follow a TV at a normal volume". The complete set of these assumptions appears online (Waller, 2009). Previous publications by the same authors (e.g. Clarkson et al., 2007) had to present the DFS results using the seven categories from Grundy et al. (1999), which were named: vision, hearing, thinking, communication, dexterity, reach and stretch, and locomotion. However, the reconstruction process enabled the results to be presented in this paper in categories and subcategories that were selected and named to better suit the intended purpose of examining the co-occurrence of ability loss. As an example, the questions "can you pick up a bag of potatoes in each hand?", "do you have difficulty tying a bow in laces?" and "do you have difficulty reaching both arms above your head?" are considered in this paper as being part of the same category, even though these questions originally came from different categories (Grundy et al., 1999). Three overall categories are used, named sensory, cognitive and motor. The sensory category includes subcategories named vision and hearing; the cognitive category includes subcategories named executive function, memory, communication part one and communication part two; and the motor category includes subcategories named upperbody and locomotion. The categories, subcategories and their names are intended to provide the best way of depicting the DFS results for the purpose of examining the co-occurrence of ability loss. However, this is not intended to imply that the tasks within any particular subcategory are comprehensive or best practice for measuring ability loss in that subcategory. Neither is there any implication that the subcategories within any category are comprehensive or best practice for measuring ability loss within that category. Also, note that the accompanying paper (Waller et al., 2010) presents the DFS results for the purpose of examining different types and levels of ability loss, and therefore uses alternative groupings with different names. The publicly available original survey results contain each participant's answer to each question, together with the number of UK adults that each participant represents, based on the original sampling strategy of the survey. These multiplier values were used to convert the number of survey participants who were unable to perform each task into the proportion of the UK adult population unable to perform that task. All population numbers presented here reflect the UK adult (16+) population in 1997, which was 45.6 million adults (these figures have not been adjusted to the present day). For brevity, the phrase "proportion of adults" will be used throughout the rest of the paper to mean "proportion of the [UK] adult population in 1997".
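A minimal sketch of the reconstruction and weighting steps described above is given below. The question codes, the single example rule and all numbers are hypothetical; the real survey involves several hundred questions and a much larger set of assumptions, so the sketch only shows the general shape of the computation rather than the authors' actual processing.

# Sketch of the answer reconstruction and population weighting described above.
# Question codes, the example rule and all figures are hypothetical.

participants = [
    # answered questions, plus the number of UK adults the participant represents
    {"answers": {"hear_sounds_at_all": "no"}, "multiplier": 6500},
    {"answers": {"hear_sounds_at_all": "yes", "follow_tv_normal_volume": "no"}, "multiplier": 5200},
    {"answers": {"hear_sounds_at_all": "yes", "follow_tv_normal_volume": "yes"}, "multiplier": 7100},
]

def reconstruct(answers):
    # Example assumption: someone who cannot hear sounds at all is assumed
    # to be unable to follow a TV at a normal volume.
    filled = dict(answers)
    if filled.get("hear_sounds_at_all") == "no":
        filled.setdefault("follow_tv_normal_volume", "no")
    return filled

def proportion_unable(question):
    # Weighted share of adults unable to perform the task, using the multipliers.
    unable = sum(p["multiplier"] for p in participants
                 if reconstruct(p["answers"]).get(question) == "no")
    total = sum(p["multiplier"] for p in participants)
    return unable / total

print(f"Unable to follow a TV at normal volume: {proportion_unable('follow_tv_normal_volume'):.1%}")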
No judgements were made as to whether the questions chosen represent an equivalent amount of ability loss, or an equivalent severity of quality of life impairment. Nevertheless, the number of people who were unable to perform each of the tasks, and the extent to which they were able to perform some of the tasks but not others, provide useful insights. While most of the assumptions used to reconstruct the data reflect common sense, the authors have not performed a detailed consideration of the frequency with which the assumptions were made, or of the validity of these assumptions. The results presented here represent a transitional phase of research, based on the best currently available UK dataset, but the authors are aware that a new dataset is needed, because these data are 13 years old and were collected for a different purpose. The i~design project website (EDC, 2009) describes the programme of research that intends to design and pilot a national survey to measure the proportion of the UK population who are unable to perform real-world tasks. In the meantime, it is not considered prudent to perform a detailed examination of the accuracy of the results presented in this paper, or of the validity of the assumptions used to create them. The results are only intended to indicate in which instances the relative co-occurrence of ability loss appears to be significant. For clarity, all population numbers have been rounded to thousands of UK adults (also in Figures 4.1 to 4.13); note that this is not intended to indicate the accuracy of these numbers. Also, as a result of this rounding, the numbers presented within this paper may not sum exactly.
4.3 Insights from the Results

For brevity, the phrase "the X-related tasks" will be used throughout the rest of this paper to mean "the tasks that were chosen to best represent the types of X-ability loss covered within the DFS". The co-occurrence of ability losses is initially investigated by looking at the extent to which people were unable to achieve each of the tasks within each subcategory, depicted in Figures 4.1 to 4.8. Some insights from this investigation are now discussed. Figure 4.1 shows that of the adults who "have difficulty reading ordinary newsprint", 60% also "have difficulty recognising a friend across the road", roughly indicating that using facial expressions to present information would not be likely to provide a useful supplement to text.
Figure 4.1. Breakdown of the adults who were unable to perform one or more vision related tasks. Tasks were performed while using any desired vision aids.
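Percentages such as the 60% figure above are weighted co-occurrence shares: of the adults unable to perform one task, the proportion also unable to perform a second. A small sketch of this calculation appears below; the flags and multipliers are made up for illustration and are not taken from the survey.

# Weighted co-occurrence share behind figures such as Figure 4.1.
# Each tuple: (has newsprint difficulty, has friend-recognition difficulty, multiplier)
adults = [
    (True, True, 4000),
    (True, False, 2500),
    (False, True, 1500),
    (False, False, 90000),
]

newsprint_total = sum(m for newsprint, friend, m in adults if newsprint)
both = sum(m for newsprint, friend, m in adults if newsprint and friend)
print(f"Of adults with newsprint difficulty, {both / newsprint_total:.0%} also "
      f"have difficulty recognising a friend across the road")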
Examining Figure 4.2, nearly twice as many adults “have great difficulty following a conversation against background noise”, as compared to those who “cannot follow a TV at a normal volume”, indicating that design solutions that can eliminate background noise will be likely to make a significant difference to the number of people who can understand speech.
Figure 4.2. Breakdown of the adults who were unable to perform one or more hearing related tasks. Tasks were performed while using any desired hearing aids.
Considering the adults who could not perform one or more of the executive function related tasks, Figure 4.3 shows that the majority (79%) reported that they “often forget what the task was whilst in the middle of it”. A slight minority (44%) reported that they were unable to perform just one of the executive function related tasks, with a slight majority reporting being unable to perform two or more of these tasks. Design solutions that help to guide the user through the task will therefore be likely to offer a significant benefit, but all of the executive function related aspects of a task need to be considered to reap the greatest rewards.
Figure 4.3. Breakdown of the adults who were unable to perform one or more executive function related tasks

Of the memory related tasks shown in Figure 4.4, remembering to turn things off involves prospective memory, whereas remembering the names of people involves long-term memory. Of all the adults who were unable to perform one or both of these tasks, only 23% of them were unable to perform both tasks,
indicating that these two types of memory loss do not tend to occur together, and assistive designs may be able to use one type of memory to compensate for losses in the other.
Figure 4.4. Breakdown of the adults who were unable to perform one or more memory related tasks
Figures 4.5 and 4.6 show the number of adults who were unable to perform the communication related tasks. Considering all the adults in Figure 4.6 who cannot perform one or both of the reading and writing related tasks, 37% of them were unable to perform both tasks, which provides a rough indication of the extent to which reading and writing ability losses tend to co-occur.
Figure 4.5. Breakdown of the adults who were unable to perform one or more communication related tasks (part 1)
Figure 4.6. Breakdown of the adults who were unable to perform one or more communication related tasks (part 2)

Figure 4.7 shows some interesting couplings between the different types of upperbody ability loss. Considering all the adults who were unable to achieve one or more of the three upperbody related tasks, a significant proportion (40%) did not have difficulty reaching both arms above the head, but were unable to pick up a bag of potatoes in each hand and had difficulty tying a bow in laces. Also, a significant proportion (33%) were unable to perform all three tasks.
Figure 4.7. Breakdown of the adults who were unable to perform one or more upperbody related tasks

Even more significant couplings for the different types of locomotion ability loss are evident from Figure 4.8. Of all the adults who occasionally need to hold on to keep balance, 93% of them also have another kind of locomotion ability loss. Of all the adults who cannot perform one or both of the walking and steps related tasks, 65% of them cannot perform both of these tasks. Devices that are designed to assist walking should therefore also assume that the user will need help climbing steps.
Figure 4.8. Breakdown of the adults who were unable to perform one or more locomotion related tasks. The walking and steps related tasks were performed while using any desired aids, but without any assistance from others. The bending related task was performed with something available to hold on to.

A higher level investigation of the co-occurrence of ability loss now examines the extent to which adults were unable to achieve tasks within different subcategories, depicted in Figures 4.9 to 4.11. Some insights from this investigation are now discussed. Considering Figure 4.9 in further detail, of all the adults who are unable to perform one or more of the sensory related tasks, only 21% were unable to perform both a vision related task and a hearing related task, so designs that allow users to interpret information with either their vision or their hearing will significantly reduce the corresponding exclusion.
Figure 4.9. Breakdown of the adults who were unable to perform one or more sensory related tasks

Figure 4.10 shows the breakdown of the adults who were unable to perform the cognitive related tasks, which include the executive function, memory and communication related tasks. Of the adults who were unable to achieve one or more of these different types of tasks, a slight minority (39%) were unable to achieve just one task, while a slight majority (61%) were unable to achieve two or more tasks. Assistive technologies designed for one type of cognitive loss should clearly assume that some other type of cognitive loss is also present.
Figure 4.10. Breakdown of the adults who were unable to perform one or more cognitive related tasks
The breakdown of the adults who were unable to perform the motor related tasks is shown in Figure 4.11. Of all the adults who were unable to achieve one of the upperbody related tasks, 84% of them were also unable to achieve one of the locomotion related tasks. Devices that are designed to assist one particular type of body motion should therefore assume that other types of body motion may also be impaired, and specifically consider which hands are available for a given task, given that one or both hands may be needed to assist balance or interact with some kind of walking aid.
Figure 4.11. Breakdown of the adults who were unable to perform one or more of the motor related tasks
Finally, the overall extent of co-occurrence of ability loss is examined in Figures 4.12 and 4.13. Figure 4.12 shows the breakdown of the adults who were unable to perform the sensory, cognitive and motor related tasks. Of all the adults who were unable to perform one or more of the tasks described in this paper, 15% of them were unable to perform a sensory related task, and also unable to perform a cognitive related
task, as well as being unable to perform a motor related task. Further considering the adults who were unable to achieve one or more of the motor related tasks, 56% of them were also unable to achieve a sensory or a cognitive related task. In order to further investigate the overall levels of co-occurrence, Figure 4.13 specifically examines the breakdown of all the adults who were unable to perform one or more of the tasks described in this paper, according to whether the tasks they were unable to perform were vision related only (2.2%), hearing related only (5.2%), cognitive related only (5.2%), upper-body related only (3.8%), or locomotion related only (20.9%). The remaining proportion of these adults (62.6%) were unable to perform tasks within more than one of these categories. Although locomotion related losses are more prevalent than the others, co-occurrence of ability losses is more typical than any single type of ability loss. However, the design of assistive products and legislation for accessibility typically only consider one particular type of ability loss.
Figure 4.12. Breakdown of the adults who were unable to perform one or more of all of the tasks described in this paper
Figure 4.13. Breakdown of the 8,122,000 adults who were unable to perform one or more of the tasks described in this paper, according to the proportion of these adults who were only unable to perform vision, hearing, cognitive, upperbody, or locomotion related tasks.
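Purely as an illustration of how a breakdown such as that in Figure 4.13 can be derived, the following Python sketch classifies respondents according to whether the task categories they fail fall into exactly one category or into more than one. The respondent records and category labels here are invented for the example and are not drawn from the DFS data.

```python
from collections import Counter

# Hypothetical records: for each respondent, the set of task categories
# containing at least one task they were unable to perform.
respondents = [
    {"locomotion"},
    {"vision", "locomotion"},
    {"hearing"},
    {"cognitive", "upperbody", "locomotion"},
    {"locomotion"},
]

# A respondent counts towards a single-category slice only if every task
# they failed sits in one category; otherwise they count as "multiple".
breakdown = Counter(
    next(iter(categories)) if len(categories) == 1 else "multiple"
    for categories in respondents
)

total = len(respondents)
for label, count in sorted(breakdown.items()):
    print(f"{label}: {100 * count / total:.1f}%")
```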
4.4 Conclusion
A re-analysis of participants’ answers to the DFS has enabled new insights into the prevalence of co-occurrence of different types of ability loss. While a new survey is needed to test and confirm these insights, preliminary indications are that the majority of people who are unable to perform tasks associated with one type of ability loss are also unable to perform tasks associated with another type, indicating a critical need for the treatment of co-occurrence of ability loss within legislation and within inclusive and accessible design.
4.7 References
Clarkson PJ, Coleman R, Hosking I, Waller S (2007) Inclusive design toolkit. Engineering Design Centre, University of Cambridge, UK. Available at: www.inclusivedesigntoolkit.com (Accessed on 19 November 2009)
EDC (2009) i~design project website. Engineering Design Centre, University of Cambridge, UK. Available at: www-edc.eng.cam.ac.uk/idesign3/ (Accessed on 19 November 2009)
Grundy E, Ahlburg D, Ali M, Breeze E, Sloggett A (1999) Disability in Great Britain: results from the 1996/97 disability follow-up to the family resources survey. Research Report 94. Corporate Document Services, Department of Social Security, Leeds, UK
Johnson D, Clarkson PJ, Huppert F (2010) Capability measurement for inclusive design. Journal of Engineering Design (in press)
Waller SD (2009) Assumptions required to reconstruct the answers from the 1996/97 DFS. Available at: www.inclusivedesigntoolkit.com/betterdesign/assumptions.pdf (Accessed on 19 November 2009)
Waller SD, Williams EY, Langdon PM, Clarkson PJ (2010) Quantifying exclusion for tasks related to product interaction. In: Langdon PM, Clarkson PJ, Robinson P (eds.) Designing inclusive interactions. Springer, London, UK
UK Data Archive. Study number 4090: disability follow-up to the 1996/97 family resources survey. Available at: www.data-archive.ac.uk/doc/4090\mrdoc\pdf\4090userguide.pdf (Accessed on 19 November 2009)
Chapter 5
Accessibility is in the Palm of Your Hand
E.M. Rodriguez-Falcon and A. Yoxall
5.1 Introduction
Child Resistant Closures (CRCs) are applied to packaging in order to prevent young children from gaining access to harmful contents. This is most commonly seen in the storage of medicines, which can be extremely dangerous and cause poisoning when ingested by a child. There are two main types of CRC: reclosable and non-reclosable. In the UK, they are regulated by separate British Standards. This article focuses on the design and use of reclosable packaging, which relates to ISO 8317:2003 (ISO, 2003). Regulations on the supply of highly toxic medicines such as aspirin and paracetamol were first produced in the early 1970s (US Government, 1970). These regulations led to the production of child resistant packaging, firstly reclosable bottles and more recently blister packs. Child resistant packaging for products containing aspirin, paracetamol and iron has been mandatory in the UK since 1 October 2003 by means of a Statutory Instrument (2003), the Medicines (Child Safety) Regulations. The main reclosable CRCs in use today are ‘push and turn’, ‘align and push up’ and ‘squeeze and turn’, and of course, all meet current standards. A typical example of this kind of closure is shown in Figure 5.1.
Figure 5.1. Typical ‘push and turn’ closure for medicine bottles
The current test for reclosable CRC packaging comprises an adult test on 50 to 70 year olds and a test on children aged from 42 months to five years old. A reclosable design is deemed to fail if a child is able to gain access to sufficient doses to cause severe injury or damage, or to remove more than eight units. Several studies have been undertaken to find out how children access packaging in order to design more foolproof products. An article titled ‘Childhood poisoning: access and prevention’ gathered evidence on how children gained access to six poisoning agents. It was found that the children generally gained access whilst the agent was in use. The article concluded that there was little scope for improved supervision, but that child resistant packaging should be improved (Ozanne-Smith et al., 2001). For most, if not all, current reclosable designs, the CRC is removed while the product is in use, leaving access to all of the package contents. Therefore, if most poisoning happens whilst a product is in use, this highlights a major flaw in current CRC design. A study in the US into the effectiveness of child resistant packaging for aspirin found a 34% reduction in the aspirin-related mortality rate, which equated to the prevention of about 90 child deaths during the 1973 to 1990 post-regulatory period. The study suggested that child resistant packaging has been only partially effective and that further poison prevention strategies should be developed (Lembersky et al., 1996). This suggests that CRCs are effective but there is still room for improvement. It can be seen from the literature review that the unintentional ingestion of medications with significant toxicity, particularly by children, could be attributed to the lack of a CRC, inadequate design of CRCs, attitudes concerning the toxicity of medications, a lack of vigilance by parents and carers in storage and administration, or a combination of all these problems. There is, therefore, a significant tension between the demand to keep the contents of the bottle away from children and yet allow access to those who need it, particularly older people. This tension leads to misuse of the CRC and a loss of the perceived benefits that the CRC is designed to provide. New measures for child resistance look into the reasons children try to open the bottles and focus more on a complete rethink of child safety than on merely improving the closure. Investigation has shown that children are often intrigued by the noise of the tablets inside the bottles, which can sound like a rattle. Research carried out in the US by Bix et al. (2004) has highlighted the playful, inquisitive nature of children as traits which could be exploited by medication container designers. Bix has looked into placing the tablets in a gel that would stop them from rattling; the obvious problems are possible contamination of the medication and how the tablets would be removed from the gel. Another fairly recent design exploits an adult’s greater ability to read by turning a cap through a sequence of letters before the cap will open. Although a large amount of research is currently being put into the design of new CRCs, there have been few changes or developments over the last twenty years to the CRCs we use today. The same designs are being manufactured. The ‘push and turn’ closure is the most widely used design of CRC: it requires the lid to be pushed down onto the body of the bottle before the
threads will align and the top can be unscrewed. The instructions for these actions are usually indicated on the lid: an example can be seen in Figure 5.1. This design uses the concept of ‘false affordancing’. For example, if there were a flat plate at chest height on a door, one would assume that in order to open the door it needed to be pushed; this would be a true affordance. A false affordance uses sights, sounds and textures to complicate a task. A child might think to turn a bottle top, but they are unlikely to be able to read the instructions which inform the user to depress the top first. This solution requires a cognitive approach; the mental process of knowing, including aspects such as awareness, perception, reasoning and judgment. This CRC is effective in preventing children from accessing a bottle’s contents, but it has also been found that elderly people struggle to use it due to the force and dexterity required. Consequently, another possible major problem contributing to child poisoning is a lack of openability of CRCs, since users often decant their medication into non-CRC bottles for ease of use (Wilkins, 2001). The ‘squeeze and turn’ CRC tries to exploit the higher grip strength of an adult in comparison to a child, in that the sides of the bottle cap need to be pushed inwards before the cap can be unscrewed. The lid has ridged grooves on its edge to provide grip and then flat or smooth patches to indicate where to apply pressure. Lids like these are often found on non-medical applications of CRCs, such as bleach bottles, and on cough syrups. The last main type is the ‘align and push up’ CRC. An arrow or mark on the lid has to be lined up with an arrow or mark on the body of the bottle. Once aligned, the cap can be flipped off. A child is unlikely to carry out both actions in the correct order, whereas an adult can easily follow the logical steps after reading the brief instructions on the cap or being shown. These caps need to be sensitive to alignment and require a certain amount of force, so that the chance of a child opening one when playing with it is reduced, i.e. the probability of a child lining up the arrows needs to be as small as possible. This can cause problems for elderly people who have poor sight and/or reduced dexterity for fine movements. It might be thought that this CRC would require much lower strength than the previous CRCs mentioned, making it more useable by the elderly. However, a lab-based experiment carried out in conjunction with adult surveying and data tables showed that the force of 32.5 N required to open the lid was not achievable by all adults over 65 (Thorpe, 2005). Certainly in the UK, however, these types of closure systems have fallen out of favour, largely due to legislative pressures that have seen them replaced with blister packs.
5.2 Designing for the Majority
The main aim of a CRC, put simply, is to prevent a child from gaining access to the contents of a container. Current designs block children, achieving the primary goal, but who else do they exclude? At present the population has an ageing demographic, meaning that there will be a greater number of elderly people; as a
whole we are living longer. This was recognised by the regulatory body when, in 2005, the previous 1993 British Standard was superseded (Wilkins, 2005). People over the age of 65 consume more prescription and over-the-counter medication than any other age group. With an awareness of age-related changes, and knowing the main user group of the product, it should then be possible to work towards designing a more inclusive CRC. The British Standards Institution defines inclusive design as ‘The design of mainstream products and/or services that are accessible to, and usable by, as many people as reasonably possible... without the need for special adaptation.’ (BSi, 2005)
Obviously, in terms of medical packaging, the design should ‘include’ as many adults as possible but ‘exclude’ children. That packaging companies are starting to develop designs in an attempt to address the ‘inclusive’ design agenda, and see added value in this approach, can be seen in the packaging shown in Figure 5.2.
Figure 5.2. Ergonomic ‘easy-open’ closure
It can be seen that the closure has been shaped to improve access, and has the words ‘EASY OPEN’ embossed on the closure along with ‘our premium quality’. It should be noted that the designs shown here are not child resistant, and the authors make no statements as to the authenticity of the claims printed on the closure.
5.3 Methodology
Previous work by the authors has shown that strength deteriorates with age (Yoxall et al., 2006); combined with a loss of dexterity, this impedes the ease with which people over the age of 65 can access their medicines. In order to overcome this problem, the authors have, for the past few years, explored the use of muscle groups other than the fingers, in particular the use of the palm, in the opening of CRCs. A palm grip is formed by resting the object against the palm, as can be seen in Figure 5.3. The ‘palm grip’ mimics the motion of clapping hands, however, with
the object placed between the palms of both hands. The fingers may or may not be wrapped around the object.
Figure 5.3. Schematic diagram of palmar grip
When the fingers are wrapped around the object but the palm takes the majority of the weight and applied force, the grip is defined as the ‘palmar grip’ (Schlesinger, 1919). Such a grip is especially useful to elderly people as it allows them to exert more force than conventional grips without the need for dexterity. The palm and palmar grips are mostly seen when people hold round objects such as baseballs or oranges. Therefore, a round bottle creates a positive affordance whereby people are encouraged to hold the bottle with the palm. To test this theory, modified bottles were produced from either ABS plastic (via rapid prototyping) or modelling clay, as shown in Figure 5.4.
Figure 5.4. A photograph of modified bottles
In the initial study the bottles produced (shown on the left and in the middle of Figure 5.4) were purely shaped to test the idea of positive affordance and could not be opened. Hence, in the second study modelling clay was applied to a standard ‘push and turn’ bottle (the picture on the right of Figure 5.4). Whilst the prototypes here were ‘rough and ready’, these studies by the authors have shown that a round bottle does indeed encourage the palm/palmar grip. The next step taken for this project was therefore to study how people would grasp round objects of various sizes, and whether the size of the hand would affect the choice of grip. This next study was carried out by first measuring the hand size of the participants; they were then asked to pick up and hold round objects of various sizes.
This study revealed that the choice of grip for most participants when gripping round objects is quite uniform, in the sense that most of them would pick up objects with a palmar grip and then hold them in a palm grip. The palm grip was shown through this study to be the natural choice for holding round objects. Thus, it was concluded that the CRC shape should be round to promote the use of the palm/palmar grip to grasp it. Based on the success of these initial trials, the prototypes were refined and a more ‘functional’ prototype was created in 2009, shown in Figure 5.5. This more ‘functional’ prototype is made up of two parts. The outer part is a rapid-prototyped shaped outer shell, designed to test the theory of encouraging the use of the palmar grip for ease of access. Inside the shell is a standard ‘push and turn’ bottle and CRC to which the shell is attached. This method of construction was chosen for several reasons: firstly, it was a cheap and reliable way of manufacturing the CRC and the intricate threaded part; secondly, it allowed for direct comparison with the standard ‘push and turn’ closure, since in theory the closure mechanisms were identical. The tests on grip choice were undertaken on a sample of 60 adults whose ages ranged from 19 to 71. All those tested were asked to observe, open and reflect on the experience of opening: a) a normal ‘push and turn’ pill bottle, b) a new bottle without ridges, and c) a new bottle with ridges. All participants were filmed whilst the study took place (with their permission). A group of ten children were also tested with both the current and new designs in accordance with ISO 8317 (ISO, 2003). During the first part of the test, children were not given any instructions. During the second part of the test, children were told how to open the bottles. The test lasted from five to ten minutes and children’s actions were recorded on paper and/or film. The main objectives of this test were: to test the ease of access or ‘openability’ of a prototyped pill bottle; to observe how people approach opening it, especially people over 55 years of age; to test the bottle with children to ensure the prototype meets ISO 8317 (ISO, 2003); and to analyse the grips observed during testing.
Figure 5.5. Functional prototype
As mentioned earlier, all interviews and opening techniques were filmed, and this (basic) ethnographic record was examined at a later date. In this study, the first two tests conducted followed this approach, whereby the researchers interviewed the participants and observed their interactions with the initial prototypes. However, video ethnography was used to observe people interacting with the third prototype. This is generally a more reliable method of
gathering information than interviewing subjects about their habits; it is possible to observe what they do rather than relying on what they say they do. The two can often be very different.
5.4 Results and Discussion
5.4.1 ‘Push and Turn’ Bottle
The standard ‘push and turn’ CRC (as shown in Figure 5.1) was evaluated first in order to act as a ‘control’. The CRC chosen for this test was noted to be particularly difficult to open, requiring a relatively large amount of force. The reason behind this choice was to study the grip that subjects would choose in order to overcome such a problem. Table 5.1 shows the choice of grip used by participants to open a bottle with a ‘push and turn’ CRC.

Table 5.1. Grip choice totals used by participants to pick up and hold the standard ‘push and turn’ test bottle

Grip choice    Males: cap   Males: bottle   Females: cap   Females: bottle
Palmar         0            0               1              0
Lateral        31           0               26             1
Cylindrical    2            33              1              26
The test showed that most subjects were aware of the method of opening the CRC, as it was the most widely available CRC on the market. During the test a significant number of subjects commented on the difficulties that opening this type of closure presents. Participants over the age of 55 were seen to struggle more than younger ones. Whilst women of all ages were seen to exhibit problems accessing the bottle, young male participants were seen to have few difficulties. The majority (approximately 93%) of the participants used the cylindrical grip when holding the bottle and the lateral grip on the closure (shown schematically in Figure 5.6), with only a few users using their palms (shown schematically in Figure 5.7).
Figure 5.6. Schematic of typical grip on bottle and closure
Figure 5.7. Palmar grip used by minority of respondents on standard closure
Of those that did use their palms, the following comment from a 37-year-old woman who suffers from psoriatic arthritis was typical: “before I went to rehabilitation these bottles were impossible to open. Occupational Therapists have now showed me the best way to open this bottles and that is by using the palm”. This further supports the idea of using the palm grip technique to overcome such a problem.
5.4.2 ‘Push and Turn’ Shaped Prototypes (with and without Ridges)
Two ‘push and turn’ shaped prototypes were produced, as shown in Figure 5.5. One of the prototypes had ridges on the lid and the other was produced with a smooth lid. Both designs were tested on the panel of 60 participants. Table 5.2 summarises the grip type used by participants for both closure and bottle for the bottle without ridges, since there was no significant difference in the way in which the two prototypes were used. However, both prototypes were used markedly differently from the standard ‘push and turn’ closure. In the latter, significant use was made of the lateral grip on the closure and the cylindrical grip on the bottle. In the modified design, the majority of the participants (approximately 75%) were seen to use the palmar grip on the closure, with a similar proportion using it on the bottle body, as shown in Figure 5.8.
Figure 5.8. Grip used by participants using prototype three
Further, participants mentioned that they found the shape aesthetically pleasing and comfortable to hold. The only negative responses were issues relating to size and cost: many respondents worried that such a large bottle would mean increased costs when purchasing medicines.

Table 5.2. Choice of grip used by participants to pick up and hold the ‘push and turn’ prototype one test bottle (without ridges)

Grip choice    Males: cap   Males: bottle   Females: cap   Females: bottle
Palmar         23           27              16             20
Lateral        10           0               9              0
Cylindrical    0            6               0              7
Closure systems such as those being proposed in this paper have to meet BS EN ISO 8317 (ISO, 2003) and, to that end, the system was tested on six children under four and on a further four children over the age of four. Of those tested, no participants were able to gain access to the bottle, either with or without instruction, within the time limit outlined in the standard. With the larger bottle size, children were seen struggling to manipulate the bottle to gain easy access.
5.5 Conclusion
The authors wanted to test the hypothesis that changes to geometry and positive affordances could lead to changes in the way in which bottles such as those used for medicines are gripped, primarily to encourage the use of larger muscle groups and to reduce dependency on the dexterous functions and micro-manipulations used in current packaging types. To that end, ergonomically shaped prototypes were produced and tested in accordance with British Standards. These prototypes performed well: they were seen to improve access over current designs whilst retaining their child resistant properties, and to encourage the use of palmar grips, as hypothesised.
Previous studies have shown that many older people struggle to access packaging due to a range of factors such as lack of cognition and/or reduced strength and dexterity. This paper is part of a range of work by the authors (and others, such as Bix at the Michigan School of Packaging and Jenson at the Aarhus School of Architecture, Denmark) that seeks to influence the next generation of packaging, looking to create easily accessible packaging by using cognition and by encouraging the use of larger muscle groups than current designs.
5.6 Acknowledgements
The authors would like to thank Amy Penington, Nicholas Wong and Amitesh Dubey for their practical assistance. We would also like to give thanks for the inspiration, support and advice from Dr Laura Bix at Michigan State University, US, and Professor Birgitte Geert-Jenson at the Aarhus School of Architecture, Denmark.
5.7 References
Bix L et al. (2004) The Universal Pack Conference. Michigan State University, US
BSi (2005) Design management systems. Managing inclusive design. BS 7000-6:2005. The British Standards Institution, UK
ISO (2003) Child-resistant packaging – requirements and testing procedures for reclosable packages. ISO 8317:2003. International Organization for Standardization, Geneva, Switzerland
Lembersky RB, Nichols MH, King WD (1996) Effectiveness of child-resistant packaging on toxin procurement in young poisoning victims. Veterinary and Human Toxicology, 38: 380–383
Ozanne-Smith J, Day L, Parsons B, Tibballs J, Dobbin M (2001) Childhood poisoning: access and prevention. Journal of Paediatrics and Child Health, 37: 262–265
Schlesinger G (1919) Der mechanische Aufbau der künstlichen Glieder. In: Borchardt M et al. (eds.) Ersatzglieder und Arbeitshilfen für Kriegsbeschädigte und Unfallverletzte. Springer, Berlin
Statutory Instrument (2003) The medicines (child safety) regulations, No 2317. The Stationery Office Ltd., London, UK
Thorpe A (2005) Advances in child resistant closures. Final year project thesis, University of Sheffield, Sheffield, UK
US Government (1970) Poison prevention packaging act of 1970. US Government Publication, Washington, DC, US
Wilkins S (2001) Child resistant flexible packs in the European Union. In: Proceedings of the 11th International Conference on Pharmaceutical and Medical Packaging, Copenhagen, Denmark
Wilkins S (2005) Comparison between BS EN 28317 1993 and BS EN ISO 8317 2004 together with comments concerning the regulatory impact of the new standard
Yoxall A, Janson R, Bradbury SR, Langley J, Wearn J (2006) Openability: producing design limits for consumer packaging. Packaging Technology and Science, 19: 219–225
Part II
Measuring Inclusion
Chapter 6
Quantifying Exclusion for Tasks Related to Product Interaction
S.D. Waller, E.Y. Williams, P.M. Langdon and P.J. Clarkson
6.1 Introduction
Inclusive design aims to enable more people to use mainstream products, services and environments, especially those with minor ability loss. In this context, a mainstream product refers to one that is readily available “off-the-shelf” in competitive markets, produced according to economies of scale. However, there will often be an inevitable limit to the level of ability loss that can be accommodated by such designs, whilst keeping the production volume and styling suitable for mass-market sales at an appropriate price point. Setting appropriate targets for an inclusive design therefore requires understanding the trade-offs between the number of people who are unable to perform tasks that relate to different levels of ability loss, within all the types of ability that are required to interact with the product. Understanding these trade-offs requires a single data source that covers all of these types of ability loss, yet the 1996/97 Disability follow-up survey (DFS) (Grundy et al., 1999) remains the most recent UK dataset that covers all such aspects of ability loss (Johnson et al., 2010). The DFS was commissioned to measure the prevalence of disability among adults in the UK, according to the severity of the corresponding quality of life impairment. Approximately 7 200 participants were asked up to 300 questions regarding their ability to perform different everyday tasks. Not all participants were asked all the possible questions; instead, the questions asked depended on the participant’s answers to previous questions, so that fewer questions were required to determine the participant’s overall severity of quality of life impairment. The complete set of questions, the criteria that determined which questions were asked, and the participants’ answers to the questions are publicly available from the Department of Social Security Social Research Branch (UK Data Archive).
6.2 Re-analysis of Participants’ Answers to the Disability Follow-up Survey
In order to use the DFS to examine the co-occurrence of ability loss, it was first necessary to reconstruct the answers for all instances where participants were not asked a particular question. This reconstruction process can conveniently be expressed as a series of assumptions, where an answer to a particular question that a participant was asked provided an assumed answer for another question that he or she was not asked. For example, if the participant answered a question to indicate that they “cannot hear sounds at all”, then they were not asked the question “can you follow a TV at a normal volume?”, and it is assumed that this participant would “be unable to follow a TV at a normal volume”. The complete set of these assumptions is published online (Waller, 2009).
After this reconstruction process, expert judgement was used to examine the tasks involved in all of the questions that were asked. The objective was to identify sets of three tasks with increasing levels of difficulty, where the vast majority of people who were unable to perform the easier tasks were also unable to perform the harder tasks within the same set. Grouping the questions in this manner allows for a significant reduction in complexity, because it allows the number of people who were unable to perform each of the tasks to be plotted on the same axis.
Considering an example for dexterity, the tasks involved in the questions that were asked included picking up and carrying a mug, a pint of milk, a paperback book, a safety pin, a bag of potatoes, and a full kettle; turning a tap; turning the pages of a book; squeezing water from a sponge; unscrewing the lid of a coffee jar; using a pen or pencil; wringing out light washing; using scissors; and tying a bow in laces. For these questions, it was considered that the tasks of “picking up a mug in each hand”, “picking up a pint of milk in each hand” and “picking up a bag of potatoes in each hand” were likely to form a set of three tasks with increasing levels of difficulty, where people who were unable to perform the easier tasks were also unable to perform the harder tasks. To check this, all eight possible combinations of answers to these three questions were examined. Considering each question separately, 767 participants reported that they “cannot pick up a mug in each hand”, 1 002 reported that they “cannot pick up a pint of milk in each hand”, and 1 684 reported that they “cannot pick up a bag of potatoes in each hand”. Of the 767 participants who reported that they “cannot pick up a mug of coffee in each hand”, 641 (83.5%) of them also reported that they “cannot pick up a pint of milk in each hand”, and 734 (95.7%) of them also reported that they “cannot pick up a bag of potatoes in each hand”. Of the 1 002 participants who reported that they “cannot pick up a pint of milk in each hand”, 978 (97.6%) of them also reported that they “cannot pick up a bag of potatoes in each hand”.
In general terms, the acceptability of plotting a set of three tasks on the same scale is now defined by considering all the participants who reported that they could not perform the easier tasks in the set, and checking what proportion of
them also reported that they could not perform the harder tasks in the same set. Given that there are three tasks within each set, there are three potential proportions to calculate (as above), and the scale acceptability is taken as the minimum of these three values; so the scale acceptability for the previous example is 83.5%.
Plotting the number of people who were unable to perform a set of related tasks on the same axis significantly reduces the complexity of data presentation, given that the alternative choice is to present all the numbers completely separately. However, it is rare to find a set of three tasks that form a scale with an acceptability of 100%, because ability can decline in many different ways. A threshold of 75% was therefore judged to be an appropriate balance between the accuracy of the data presented and the ease with which it can be understood. Similarly, presenting three tasks on each axis was judged to be the best trade-off between the simplicity of the data presented and its granularity.
The publicly available original survey results contain each participant’s answer to each question, together with the number of UK adults that each participant represents, based on the original sampling strategy of the survey. These multiplier values were therefore used to convert the number of survey participants who were unable to perform each task into the proportion of the UK adult population that was unable to perform that task. All figures presented in this paper therefore reflect the UK adult (16+) population in 1997, which was 45.6 million adults (these figures have not been adjusted to the present day). For brevity, the phrase “proportion of adults” will be used throughout the rest of the paper to mean “proportion of the UK adult population in 1997”.
Sets of tasks that met the criteria for plotting on the same axis were readily identified within the DFS questions that related to vision, hearing, dexterity, reach and stretch, and locomotion. However, none of the DFS questions that related to cognition were suitable for plotting on the same axis, so the numbers of people unable to perform these 13 tasks were plotted on 13 separate axes. Note that these 13 tasks were originally contained within categories named “thinking” and “communication” in previous publications by the same authors (e.g. Clarkson et al., 2007). This distinction between thinking and communication was based on the categorisation from Grundy et al. (1999), yet the distinction was not considered beneficial, and the reconstruction process enabled all 13 questions to be combined into one category. Note that the accompanying paper (Waller et al., 2010) presents the DFS results for the purpose of examining the co-occurrence of ability loss, and therefore presents the data using alternative groupings with different names.
The axes used to plot the proportion of adults who could not perform each of the tasks are presented as thermometers, which is intended to help convey the increasing difficulty of the tasks within each set, and that people who were unable to perform the easier tasks were also unable to perform the harder tasks. However, the sets of tasks are not necessarily comprehensive in depicting the ways in which ability may decline. No judgement was made as to whether the tasks that were selected were in any way comparable across the different sets, and within each set, no judgement was made as to whether the differences in ability required to perform the three tasks were in any way comparable. It is therefore not
appropriate to imply that there is an equal spacing between the data points, or to label them as low, medium or high ability loss. Some of the calculated scale acceptabilities were artificially high, because each participant was not necessarily asked all of the questions: some scales even show an acceptability of 100% due to this. Nevertheless, the proportion of the adults that were unable to perform the tasks still provides useful and meaningful data.
The results presented here represent a transitional phase of research, based on the best currently available UK dataset. However, the authors are aware that a new dataset is needed, because this data is 13 years old, and was collected for a different purpose. The i~design project website (EDC, 2009) describes the programme of research that intends to design and pilot a national survey that will measure the proportion of the UK population who are unable to perform real-world tasks. In the meantime, it is not considered prudent to perform a detailed examination of the accuracy of the results presented here, or to further investigate each scale’s acceptability, or to consider the validity of the assumptions which were used to reconstruct the answers to all of the questions. The results are intended to indicate the relative extent to which people were unable to perform particular tasks, rather than the actual number of people who were unable to complete any one particular task.
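As a concrete illustration of the scale acceptability calculation described above, the short Python sketch below recomputes the dexterity example using the participant counts quoted earlier. The function and variable names are ours, and the sketch is a simplified stand-in for the actual survey processing rather than the code used for the analysis.

```python
def scale_acceptability(overlaps):
    """Minimum, over the task pairs in a set, of the proportion of people
    unable to do the easier task who were also unable to do the harder one."""
    return min(also_unable / unable for unable, also_unable in overlaps)

# Counts quoted above for the dexterity set
# (mug -> pint of milk, mug -> bag of potatoes, pint of milk -> bag of potatoes).
dexterity_overlaps = [
    (767, 641),
    (767, 734),
    (1002, 978),
]

# Prints roughly 83.6%, i.e. the ~83.5% acceptability quoted in the text;
# a set of tasks is accepted for plotting on one axis if this exceeds 75%.
print(f"{scale_acceptability(dexterity_overlaps):.1%}")
```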
6.3 Insights from the Results
Figure 6.1 shows the proportion of adults that were unable to perform the sets of tasks that relate to vision and hearing ability losses. Figures 6.2 and 6.3 show the corresponding data for cognitive ability loss, while Figures 6.4 to 6.6 show the corresponding data for dexterity, reach and stretch, and locomotion ability losses respectively. The calculated acceptability for each scale is printed next to it. Considering the tasks related to vision ability loss, Figure 6.1 shows that the proportion of adults who “had difficulty reading ordinary newsprint” is approximately three times larger than the proportion who were “unable to read large print text”, yet the proportion who were “unable to read large print text” is roughly comparable with the proportion who were “unable to read a newspaper headline”. This indicates considerable advantage in setting a minimum text size that is comparable to large print, yet there are diminishing returns for making it larger still. Considering hearing, the proportion of adults who “had great difficulty understanding speech against background noise” is much greater than the proportion who were unable to perform any of the other tasks related to hearing, and also much greater than the proportion who were unable to perform any of the tasks related to vision.
Figure 6.1. The proportion of adults who were unable to perform sets of vision and hearing related tasks. Vision and hearing tasks were performed while using any desired corresponding aids.
Considering the tasks related to cognitive ability loss, Figure 6.2 shows that a relatively small proportion of adults were “unable to count well enough to handle money”, whereas a much greater proportion were “unable to do something without forgetting what the task was whilst in the middle of it”.
Figure 6.2. The proportion of adults who were unable to perform cognitive related tasks (part 1)
Within Figure 6.3, the top four thermometers relate to single aspects of communication, while the bottom three relate to compound aspects: the number of people unable to perform the tasks was much greater for the three tasks that involved compound aspects of communication.
Figure 6.3. The proportion of adults who were unable to perform cognitive related tasks (part 2)
Considering the sets of tasks related to dexterity ability loss, Figure 6.4 shows that a much greater proportion of adults were unable to perform the two-handed tasks, when compared with the equivalent one-handed tasks. The proportion of adults who were unable to perform the picking up and carrying related tasks was roughly equivalent to the proportion who were unable to perform the fine-finger manipulation related tasks.
Figure 6.4. The proportion of adults who were unable to perform sets of dexterity related tasks
Considering the sets of reach and stretch related tasks, Figure 6.5 shows that the proportion of adults who reported having difficulty with these tasks is approximately twice the proportion who reported being completely unable to perform them. Also, the proportion of adults who reported being unable to achieve the tasks involving both arms was approximately twice the proportion who reported being unable to achieve the tasks that only involved one arm. Considering the sets of tasks that related to locomotion ability loss, Figure 6.6 shows that the proportion of adults who were unable to achieve these tasks was much greater than the proportions presented in Figures 6.1 to 6.5. Within the locomotion related tasks, the greatest proportion of adults were unable to perform the walking related tasks, followed by the steps related tasks, then the bending related tasks, and then the balancing related tasks. The proportion of adults who were unable to manage 12 steps decreased dramatically if a handrail was available. Surprisingly, the proportion of adults who were unable to do the easiest bending related task of “bend down to touch the knees” was greater than the proportion who were unable to do the hardest tasks within the vision, cognition, and reach and stretch categories.
Figure 6.5. The proportion of adults who were unable to perform sets of reach and stretch related tasks
Figure 6.6. The proportion of adults who were unable to perform sets of locomotion related tasks. Walking and steps tasks were performed while using any desired aids, but without any assistance from others. Bending tasks were performed with something available to hold on to.
6.4 Conclusion
A re-analysis of participants’ answers to the Disability Follow-up Survey has enabled new insights into the proportion of the 1997 GB adult population who were unable to perform sets of tasks, which is especially useful for considering the number of people who might benefit from a particular design change. However, a new nationally representative survey is still needed to examine this issue specifically.
6.5 References
Clarkson PJ, Coleman R, Hosking I, Waller S (2007) Inclusive design toolkit. Engineering Design Centre, University of Cambridge, UK. Available at: www.inclusivedesigntoolkit.com (Accessed on 19 November 2009)
EDC (2009) i~design project website. Engineering Design Centre, University of Cambridge, UK. Available at: www-edc.eng.cam.ac.uk/idesign3/ (Accessed on 19 November 2009)
Grundy E, Ahlburg D, Ali M, Breeze E, Sloggett A (1999) Disability in Great Britain: results from the 1996/97 disability follow-up to the family resources survey. Research Report 94. Corporate Document Services, Department of Social Security, Leeds, UK
Johnson D, Clarkson PJ, Huppert F (2010) Capability measurement for inclusive design. Journal of Engineering Design (in press)
Waller SD (2009) Assumptions required to reconstruct the answers from the 1996/97 DFS. Available at: www.inclusivedesigntoolkit.com/betterdesign/assumptions.pdf (Accessed on 19 November 2009)
Waller SD, Williams EY, Langdon PM, Clarkson PJ (2010) Understanding the co-occurrence of ability loss. In: Langdon PM, Clarkson PJ, Robinson P (eds.) Designing inclusive interactions. Springer, London, UK
UK Data Archive. Study number 4090: disability follow-up to the 1996/97 family resources survey. Available at: www.data-archive.ac.uk/doc/4090\mrdoc\pdf\4090userguide.pdf (Accessed on 19 November 2009)
Chapter 7
Investigating the Accessibility of State Government Web Sites in Maryland
J. Lazar, P. Beavan, J. Brown, D. Coffey, B. Nolf, R. Poole, R. Turk, V. Waith, T. Wall, K. Weber and B. Wenger
7.1 Introduction
All citizens have the right to access information from their government. At both the federal and state levels in the United States, citizens with impairments expect the same access to government information as anyone else. When government information is available on a web site, that information must be accessible for people with impairments. People with perceptual, motor, and/or cognitive impairments may access web sites using different assistive technologies, such as screen readers or alternative keyboards, or may need content in alternative formats (e.g. a transcript needs to exist for any audio). In cooperation with the Maryland Technology Assistance Program, a research group from Towson University evaluated Maryland state government web sites for accessibility. The goal of this paper is to report on how well Maryland state government web sites are currently meeting the needs of these diverse user populations. This is a major issue: recent population estimates indicate that approximately 112 000 individuals in Maryland between the ages of 16 and 74 have a sensory impairment, and 281 000 individuals in Maryland between the ages of 16 and 74 have a physical impairment (see www.ilr.cornell.edu/edi/DisabilityStatistics/acs.cfm for more population statistics). Universal Usability is a goal of most web sites: to be easy to use for a diverse range of users, utilising various technologies. It is a long-term goal, and a more realistic short-term goal is web accessibility, which means designing a web site that can technically be accessed by users with impairments. While ease of use for all is an ideal goal, the specific goal of web accessibility is one that is covered by US state and federal laws. There is currently a large gap between the technical accessibility of government web sites and the true ease of use of those sites (Bertot and Jaeger, 2006). Making all government web sites technically accessible should
be the current goal. The software tools, laws, policies, guidelines, and expertise currently exist to make web sites technically accessible. However, the reality is that many web sites, both public and private, remain inaccessible.
7.2 Design Guidelines
An accessible web site means that any user, using any type of assistive technology (such as screen readers, alternative pointing devices or alternative keyboards), can successfully access the content on a web site. From a practical point of view, it is not sufficient to just say that a site needs to be made accessible for people with impairments. Web designers, programmers, software engineers, and project managers have many different demands on their time, and many stakeholders to satisfy (Lazar et al., 2004). Guidance needs to be given on HOW to make web sites accessible for people with impairments. The most common approach for making a web site accessible is to follow a series of web site design guidelines. The first major guidelines to address web content accessibility were the Web Content Accessibility Guidelines (WCAG), which were originally introduced in 1999 by the Web Accessibility Initiative, a non-governmental organisation. Because drafts of the original WCAG guidelines were in circulation, they greatly influenced laws being developed at the time. Amendments to the original Rehabilitation Act of 1973 in the United States, which were approved in 1998 and are known as Section 508, required that all US federal web sites (and other federally funded technology) be accessible as of mid-2001. Slowly, individual states have adopted their own laws relating to web accessibility, which typically are very similar to or exactly the same as the guidelines from Section 508. There is a cascading effect when it comes to the implementation of accessibility guidelines. The original WCAG 1.0 was approved in 1999. US Section 508 guidelines were approved in December 2000 for implementation in mid-2001, and were greatly influenced by the WCAG 1.0 guidelines. States then typically take time to implement their own guidelines. So a major change in the design guidelines from a non-governmental organisation will result in eventual changes in design guidelines specified by federal and state governments. A new version of WCAG, version 2.0, was approved in 2008, and Section 508 guidelines are now being re-written, but these events have yet to impact design guidelines at the state level. Maryland first passed a law related to non-visual access to information technology in 2000, and the guidelines were approved and implemented in 2005. The current legal standard for Maryland state government web sites, identical to the Section 508 laws, is available at http://doit.maryland.gov/policies/Pages/NVAReg05.aspx. The following list shows the guidelines, along with the paragraph labels for each guideline. Please note the descriptions in parentheses are those of the authors, and are NOT a part of the law.
(a) text equivalent (have a text equivalent for any graphical elements);
(b) synchronised equivalent alternatives (have captioned video, transcripts of any audio, or other alternatives for multimedia);
(c) use of color (color should not be used as the only method for identifying elements of the web page or any data);
(d) organisation (style sheets are encouraged, but users should still be able to utilise a web page when style sheets are turned off);
(e) redundant text links on server-side image map (have redundant clickable links for server-side image maps, and accessible client-side image maps are preferred);
(f) client-side image maps (as above in point e);
(g) row header (use appropriate headers and markup to allow easy navigation of a table);
(h) column headers (as above in point g);
(i) frames (title all frames and label all frames for easy identification and navigation, e.g. use “navigation”, “main content” and “search” rather than “top” or “bottom”);
(j) screen flicker frequency (limit or eliminate the use of flickering, which can provoke seizures);
(k) text-only page default (if a web page cannot be made accessible, provide an equivalent text-only page, and make sure it is kept up to date);
(l) scripting languages (make sure that equivalents for any non-accessible scripting are included, e.g. for those who are not using pointing devices);
(m) linked plug-in or applet (if any plug-ins are required, make sure to provide a link to an accessible version of the plug-in);
(n) online electronic forms (all forms must be properly labeled and accessible).
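The study itself relied on human evaluators rather than automated tools (see Section 7.3), but as a rough illustration of the machine-checkable subset of guidelines such as (a) and (n), the Python sketch below scans an HTML fragment for images without alt text and for form inputs without an associated label. The class, the sample markup and the reported messages are our own illustrative assumptions, not part of the Maryland law or of the study's method.

```python
from html.parser import HTMLParser

class GuidelineChecker(HTMLParser):
    """Flags two simple, machine-checkable issues: images with no alt
    attribute at all (guideline a) and text inputs with no matching
    <label for=...> (guideline n)."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.labelled_ids = set()
        self.inputs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append(f"img with no alt attribute: {attrs.get('src', '?')}")
        elif tag == "label" and "for" in attrs:
            self.labelled_ids.add(attrs["for"])
        elif tag == "input" and attrs.get("type", "text") not in ("hidden", "submit"):
            self.inputs.append(attrs.get("id"))

    def report(self):
        # Inputs are checked after the whole fragment has been parsed,
        # so labels may appear before or after the inputs they describe.
        for input_id in self.inputs:
            if input_id is None or input_id not in self.labelled_ids:
                self.issues.append(f"input with no matching label: id={input_id}")
        return self.issues

# Hypothetical page fragment, for illustration only.
fragment = '<img src="seal.gif"><form><input type="text" id="query"></form>'
checker = GuidelineChecker()
checker.feed(fragment)
print(checker.report())
```

A check like this mirrors what the automated tools described in Section 7.3 do, and it shares their limitation: it can see that alt text is absent, but not whether the alt text that is present is actually meaningful.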
7.3 Research Methodology
Many previous research studies have examined the accessibility levels of various categories of web sites. The following list, which is not exhaustive, displays some of the recent research publications examining accessibility. The general finding is that, not surprisingly, many sites on the web are not fully accessible. Even government web sites, which are required to be accessible, are often not accessible.
• government, companies, and universities in the United Kingdom (Bailey and Burd, 2005);
• government web sites in Taiwan (Chen et al., 2005);
• government web sites in Brazil (Freire et al., 2008);
• government web sites in Nepal (Shah and Shakya, 2007);
• government web sites in Northern Ireland (Paris, 2006);
• non-governmental web sites in the Mid-Atlantic United States (Lazar et al., 2003);
• fifty of the web’s most popular sites (Sullivan and Matson, 2000);
• universities around the world (Kane et al., 2007);
• large companies in the United States (Loiacono and McCoy, 2004).
A literature review turned up two research studies focusing on web accessibility in state-level government in the USA. Goette et al. (2006) found that, in 2004 (when the data was collected), approximately a third of the state government home
pages had at least one WCAG priority level one accessibility violation, and therefore were not accessible. This study only looked at the home page of each state (and the District of Columbia), not at any other web pages of the state government. A different study, from Fagen and Fagen (2004), found that in 2002 (when data was collected), only three out of fifty state legislature web sites were fully accessible.
All of the previously published research studies share a similar research methodology. These studies tend to use automated software tools to examine web sites looking for problems related to accessibility. Software tools, both free (e.g. A-prompt or functional accessibility evaluator, or the old Bobby) and proprietary (such as RAMP or InFocus), can examine web sites using the accessibility guidelines, and find where there are accessibility flaws in the web pages. While these tools are preferred by developers because of the ease and speed of examining and improving web sites, these tools often miss the common sense mistakes in accessibility. For instance, these tools can determine that a web page includes alternative text for assistive technology or doesn’t include such “alt text”, but generally cannot determine if the alt text is useful (such as alt=”picture here”, which wouldn’t be useful) or if alt text isn’t needed (since often images of blank space are used for layout purposes, and don’t require alt text to be accessible). Furthermore, many automated tools require the use of manual checks, which is when the tool indicates that due to the presence of certain design features (such as cascading style sheets), a human should manually check the web page to determine if there are any accessibility problems. These evaluation tools are known to have flaws, and as web pages become more complex, the tools are less capable of assessing the accessibility level. Research by Mankoff et al. (2005) has found that the most effective way to determine what actual accessibility flaws exist is to have multiple interface experts evaluate a web site using screen readers and then examine the code to determine where accessibility flaws exist. Jaeger (2006) had similar findings: that the automated accessibility evaluation tools give misleading and inaccurate results. Therefore, we decided to use the most effective evaluation method possible: human evaluators utilising screen readers.
There are at least 81 agencies listed on the agency index at www.maryland.gov, the portal web site to all state-level government services. In addition, there are many other government organisations and web pages in Maryland. For simplicity’s sake, we utilised the list that is most obvious and available to the general public: the one on the Maryland.gov portal. Since it would not be possible in the time given to evaluate all of the web sites, we then decided to use a stratified sample to evaluate the web sites. There are a number of Maryland agencies, departments, or services that have a primary mission to work with people with impairments. It was important to include these in our evaluation, yet in a true random sample, these agencies might not have been selected. After a short survey of individuals who work in rehabilitation services in Maryland, five web sites were chosen to be evaluated, based on their specific focus on people with impairments.
There was a sixth web site that would be appropriate under that category (the MD TAP), but since MD TAP was cooperating with us in this evaluation, we decided that to remain unbiased we would remove MD TAP from
the evaluation. So those five agencies were considered to be one stratum in our stratified sample. The other stratum would be a random selection of Maryland state agencies. A list of agencies was made, based on the agency links on the homepage of www.maryland.gov. From that list of 81 state agencies, 10 agencies were selected using the random number generator function in Excel. A total of 15 Maryland state government web sites were then evaluated: 10 agency web sites that were randomly selected, and five state impairment-related web sites that were selected by experts in the field (and it is important to note that in most cases, those five web sites did not represent agency-level web sites). Table 7.1 lists the web sites that were selected for evaluation. Table 7.1. Maryland government web sites chosen for evaluation
Maryland State Agencies: Office on Aging; Assessments and Taxation; Children, Youth and Families; Lieutenant Governor; Minority Affairs; People's Counsel; Rural Maryland Council; Service and Volunteerism; Supplemental Retirement Plan; Treasurer
Impairment-related service web sites: Department of Rehabilitation Services; Maryland Library for the Blind and Physically Handicapped; Motor Vehicle Administration (MVA); Maryland Transportation Authority Mobility Services; Department of Disabilities
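The random stratum described above can be drawn with any random number generator (the study used Excel's). A minimal sketch in Python, with placeholder agency names standing in for the actual Maryland list:

    import random

    # Hypothetical identifiers standing in for the 81 agencies on the Maryland.gov portal
    all_agencies = [f"Agency {i:02d}" for i in range(1, 82)]

    # Stratum 1: the five impairment-related sites chosen by rehabilitation experts
    impairment_related = [
        "Department of Rehabilitation Services",
        "Maryland Library for the Blind and Physically Handicapped",
        "Motor Vehicle Administration",
        "Maryland Transportation Authority Mobility Services",
        "Department of Disabilities",
    ]

    # Stratum 2: ten agencies drawn at random from the portal list
    random.seed(2009)                      # fixed seed only so the sketch is repeatable
    random_stratum = random.sample(all_agencies, 10)

    sample = impairment_related + random_stratum
    print(len(sample), "web sites to evaluate")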
Since automated tools were not being used, it was important to ensure standardisation of evaluation across multiple reviewers. The reviewers were primarily undergraduate juniors and seniors majoring in Information Systems at Towson University, under the guidance of one of their professors. In addition, the director of MD TAP came in to train all reviewers on the laws and on advanced functionality of JAWS. Prior to the training, all reviewers were already familiar with Section 508 (of which the Maryland web site guidelines are an exact replica), had used JAWS before, and had at least a basic familiarity with HTML and JavaScript. Following the training, the reviewers evaluated the www.maryland.gov portal as a group, with the director of MD TAP advising on how to evaluate each guideline from the Maryland law. Only one accessibility flaw was found on the Maryland portal: the large rotating graphic in the middle of the page, which cycles through multiple items, was inaccessible; it had no alt text describing the content or links to the items presented, and offered users no alternative, non-timed method for listening to those choices (see Figure 7.1 for a screen shot of Maryland.gov).
Figure 7.1. www.maryland.gov portal
With the Maryland.gov portal as a pilot exercise, each of the reviewers was assigned to review three different web sites individually and to fill out a spreadsheet template with the data collected, using the state accessibility guidelines as a guide. Each reviewer examined the homepage of the site (and not lower-level pages) using JAWS version 10, and also examined the actual web page code for accessibility problems that might not be obvious using JAWS (such as captioning or use of colour). In stage one of the research, with 15 web sites each evaluated by three reviewers, a total of 45 individual reviews were performed. In stage two of the research, each group of three reviewers that evaluated a specific site met, combined their reviews, and found where there were discrepancies. They then interpreted, discussed, re-evaluated, and agreed upon one common report for that web site. This is a common method used for expert usability reviews, where multiple expert reviewers share and discuss their results, noting where some reviewers found flaws that others missed, and come up with a common, agreed-upon review that has a higher level of validity than a single individual review. Both stages of the research took place in April and May 2009.
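A minimal sketch of how stage two might be represented, assuming each individual review is recorded simply as the set of state-law paragraphs, (a) to (p), that the reviewer judged to be violated (the actual spreadsheet template used in the study is not reproduced here):

    # Merge three reviewers' findings for one site and flag discrepancies for discussion
    GUIDELINES = "abcdefghijklmnop"

    reviews = [
        {"a", "o"},        # reviewer 1 (hypothetical flags)
        {"a"},             # reviewer 2
        {"a", "o", "c"},   # reviewer 3
    ]

    for g in GUIDELINES:
        flags = sum(g in r for r in reviews)
        if flags == len(reviews):
            status = "agreed violation"
        elif flags == 0:
            status = "agreed pass"
        else:
            status = "discrepancy - discuss and re-evaluate"
        print(f"paragraph ({g}): {status}")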
7.4 Results Out of 15 web sites, 14 violated at least one of the Maryland state guidelines related to web accessibility. The number of guidelines violated is generally considered to be a more accurate reflection of accessibility than the number of actual violations, because multiple violations of the same guideline are much easier to fix than multiple guidelines being violated (Lazar et al., 2003). While it is true that some violations might be considered more important than others, there is no objective way to determine which violations are more important. Furthermore, in
some cases, to judge that one violation is more important than another is to judge that one impairment is more important than another, and we are not willing to make that type of decision. Therefore, we present the data with all paragraph violations considered equal. Table 7.2 displays data from the five web sites that deal specifically with people with impairments. Table 7.3 displays data from the 10 agency web sites that were randomly selected.
Table 7.2. Data on accessibility violations from Maryland state web sites that deal specifically with people with impairments
[Table 7.2 marks, for each of the five impairment-related web sites — Department of Rehabilitation Services (no violations), Maryland Department of Disabilities, MTA-Mobility, Library for the Blind and Physically Handicapped, and Motor Vehicle Administration — which of the state-law paragraphs were violated: (a) text equivalent, (b) synchronised equivalent alternatives, (c) use of colour, (d) organisation, (e) server-side image maps, (f) client-side image maps, (g) row and column headers, (h) table markup, (i) frames, (j) screen flicker frequency, (k) text-only page default, (l) scripting languages, (m) linked plug-in or applet, (n) online electronic forms, (o) skip navigation, (p) alerts on timed responses.]
In the stratum of agencies that deal specifically with people with impairments, only one agency had no guidelines violated. Those agencies had, on average, 1.4 guidelines violated, with a range of one to three violations. In the stratum of agencies that were randomly selected, no agency was free of violations, and the agencies had, on average, 3.1 guidelines violated, with a range of two to six violations. One agency in particular had six guidelines violated. The two guidelines that were violated most often were guideline A (dealing with alt text) and guideline O (skip navigation).
Table 7.3. Accessibility data from a random sample of 10 Maryland agency web sites
[Table 7.3 marks, for each of the ten randomly selected agency web sites — Minority Affairs; Supplemental Retirement Program; Lieutenant Governor; Children, Youth and Families; Office on Aging; Treasurer; Service and Volunteerism; People's Counsel; Rural Maryland Council; and Assessment and Taxation — which of the state-law paragraphs (a) to (p) listed in Table 7.2 were violated.]
7.5 Discussion Maryland state law clearly requires all government agencies to have fully accessible web sites. When a law of this nature is implemented, it usually takes a few years for all agencies to fall into line. The good news in this evaluation is that most of the web sites have only minor accessibility violations that could easily be fixed in a short amount of time. Furthermore, agencies that work specifically with individuals with impairments did tend to have more accessible web sites than other state agencies. Accessibility is not binary: a site is not simply either fully accessible or inaccessible. There are degrees of accessibility, e.g. a site that has violated nine different guidelines has far more accessibility challenges than a site that has violated only two. Because accessibility problems can be caused both by the number of guidelines violated and by the number of times a specific guideline is violated, there is no clear metric (e.g. the site is 90% accessible) to
determine the severity. It is important to note that a majority of the web sites evaluated had relatively few accessibility violations, meaning that a large amount of time would not be required to bring them into full compliance with the Maryland state accessibility guidelines. It should also be noted that these evaluations represent a "snapshot in time": web sites evolve and change on a daily basis.
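As an illustration of why no single percentage falls out of such an evaluation, the two candidate summaries can be computed side by side; the violation counts below are hypothetical, not taken from the study data:

    # Hypothetical counts of individual violations per state-law paragraph for one site
    violations = {"a": 12, "o": 1, "c": 3}   # paragraph -> number of occurrences

    guidelines_violated = len(violations)          # the summary used in this chapter
    total_violations = sum(violations.values())    # an alternative severity count

    # Neither number alone converts into a meaningful "percent accessible" figure
    print(f"{guidelines_violated} guidelines violated, {total_violations} individual violations")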
7.6 Conclusion Ensuring complete accessibility on a permanent basis is not an easy thing to do. It involves training all who are involved with web development and management, including webmasters, content providers, software engineers, web programmers, managers and policymakers. Often, even minor changes to a web site can introduce accessibility problems, and such problems tend to get introduced as sites evolve over time (Hackett et al., 2004; Lazar and Greenidge, 2006). Frequent re-evaluations are therefore needed to ensure that a web site remains accessible, and minor improvements can usually bring a site into full accessibility compliance. We believe that the methodology used in this research study for web site accessibility evaluation is more robust than using an automated software evaluation tool. In the future, we hope to evaluate different methods for accessibility evaluation in more detail, to determine how to continuously improve the quality of the evaluation.
7.7 References Bailey J, Burd (2005) Web accessibility evolution in the United Kingdom. In: Proceedings of the 7th IEEE International Symposium on Web Site Evolution (WSE 2005), Budapest, Hungary Bertot J, Jaeger P (2006) User-centered e-government: challenges and benefits for government websites. Government Information Quarterly, 23(2): 163–168 Chen Y, Chen Y, Shao M (2005) Accessibility diagnosis on the government web sites in Taiwan, R.O.C. In: Proceedings of the International Cross-disciplinary Workshop on Web Accessibility (W4A 2006), Edinburgh, Scotland, UK Fagen J, Fagen B (2004) An accessibility study of state legislative web sites. Government Information Quarterly, 21(1): 65–85 Freire A, Bittar T, Fortes R (2008) An approach based on metrics for monitoring web accessibility in Brazilian municipalities web sites. In: Proceedings of the 23rd Annual ACM Symposium on Applied Computing, Ceará, Brazil Goette T, Collier C, White J (2006) An exploratory study of the accessibility of state government web sites. Universal Access in the Information Society, 5(1): 41–50 Hackett S, Parmanto B, Zeng X (2004) Accessibility of internet websites through time. In: Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2004), Atlanta, GA, US Jaeger P (2006) Assessing section 508 compliance on federal e-government websites: a multi-method, user-centered evaluation of accessibility for persons with disabilities. Government Information Quarterly, 23(2): 169–190
Kane S, Shulman J, Shockley T, Ladner R (2007) A web accessibility report card for top international university web sites. In: Proceedings of the International Cross-disciplinary Conference on Web Accessibility (W4A 2007), Banff, Alberta, Canada Lazar J, Beere P, Greenidge K, Nagappa Y (2003) Web accessibility in the Mid-Atlantic United States: a study of 50 web sites. Universal Access in the Information Society, 2(4): 331–341 Lazar J, Dudley-Sponaugle A, Greenidge K (2004) Improving web accessibility: a study of webmaster perceptions. Computers in Human Behavior, 20(2): 269–288 Lazar J, Greenidge K (2006) One year older, but not necessarily wiser: an evaluation of homepage accessibility problems over time. Universal Access in the Information Society, 4(4): 285–291 Loiacono E, McCoy S (2004) Web site accessibility: an online sector analysis. Information Technology and People, 17(1): 87–101 Mankoff J, Fait H, Tran T (2005) Is your web page accessible? A comparative study of methods for assessing web page accessibility for the blind. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2005), Portland, OR, US Paris M (2006) Website accessibility: a survey of local e-government web sites and legislation in Northern Ireland. Universal Access in the Information Society, 4(4): 292–299 Shah B, Shakya S (2007) Evaluating the web accessibility of websites of the central government of Nepal. In: Proceedings of the 1st International Conference on Theory and Practice of Electronic Governance (ICEGOV 2007), Macao, China Sullivan T, Matson R (2000) Barriers to use: usability and content accessibility on the web's most popular sites. In: Proceedings of the ACM Conference on Universal Usability, Arlington, VA, US
Chapter 8 Developing User Data Tools: Challenges and Opportunities F. Nickpour and H. Dong
8.1 Introduction One established way to facilitate the uptake of a new approach or area of practice by an intended target group is to provide tools that can support them throughout the implementation process. These tools should be both informative and inspiring, as there is not yet enough relevant awareness, expertise and experience in the target group. Inclusive design, as a new area of design practice, is an excellent example of how such support tools become both essential and significant. In the past twenty years a number of resources and tools have been developed to support inclusive design (Dong and Clarkson, 2004); these serve different purposes and come in a variety of formats. In terms of purpose, inclusive design tools fall into two major categories. The first group is the 'comprehensive tools' that provide multi-faceted support on different aspects of inclusive design practice. One example is the Inclusive design toolkit developed at the Cambridge Engineering Design Centre (Clarkson et al., 2007). The second group is the 'specific tools' that provide support and information on a particular aspect of inclusive design. This category includes user data tools, simulation tools, CAD modelling tools etc. Such tools may be developed originally for the purpose of inclusive design, or as general design tools that are later adopted in inclusive design practice. HADRIAN (Gyi et al., 2004), a CAD modelling tool with a database of one hundred individuals, is one example of a specific tool developed for the purpose of inclusive design. Inclusive design tools also come in a wide range of formats. Goodman et al. (2007) categorise them into two main groups: 'paper' and 'mixed-media'. The paper format includes books, booklets, cards, leaflets etc., and the mixed-media format includes software, websites, on-line resources and physical kits. Certain challenges, limitations and opportunities will arise in the process of design, development and delivery of both comprehensive and specific inclusive
design tools. Some of these issues are tool specific, based on the intended function and format, and some are common to all tools. It is very important to identify, clarify and subsequently classify these challenges, limitations and opportunities for future practice. Identifying and classifying challenges and limitations should not be considered a negative approach; it is extremely useful for the field, as it provides researchers and tool developers with a more comprehensive understanding of the nature of existing issues and hence better prepares them to address these problems explicitly and rigorously. This paper focuses on 'user data' tools for inclusive design and aims to identify the challenges and limitations, as well as the opportunities, in the course of defining and developing such support tools for designers. One approach to identifying challenges and opportunities is to closely observe all the stages in the design and development process of one design tool and to study both general and specific issues arising at each stage of such a process. In parallel, the findings from previously reported research and practice in this area can be reviewed to provide a firm ground for further clarification and classification of the issues. This paper adopts such a methodology: the analysis is mainly based on the findings and observations made in the continuing process of a tool design project, in addition to referring to existing literature on similar tool design activities where relevant. The tool design project studied here set out to devise a user data tool for inclusive design, aiming to communicate user data effectively to experienced designers.
8.2 The Project Aiming to facilitate wider uptake of inclusive design, a research project was initiated. The intention was to focus on user data, mainly the physical ergonomics data on body size, measurements, strength and flexibility (Pheasant and Haslegrave, 2006) and embody it in a support tool in an engaging way that would serve the purpose of both informing and inspiring designers. The data tool was mainly aimed at experienced designers in order to directly influence the industry uptake (Nickpour and Dong, 2009). A co-design methodology was adopted in all stages of the project.
8.2.1 Initial Exploration: Understanding Designers’ Needs At the first stage, a preliminary study was undertaken to understand how experienced designers used the existing user data and what their expectations, preferences and suggestions were. The researchers mainly focused on anthropometrics. This type of data is often regarded as the basis of designers’ knowledge, especially when they design physical objects (Moggridge, 2007). Designers’ general behaviour in relation to anthropometric data was investigated under three categories: data use, data preferences and data suggestions. Structured,
open-ended interviews were conducted with experienced designers from ten UK-based design consultancies. Also, five anthropometric data tools, selected from a wide range, were graded by designers in a ranking questionnaire. Figure 8.1 shows the questionnaire used in the preliminary study.
Figure 8.1. Questionnaire for ranking five existing anthropometric data tools
8.2.2 Conceptualisation: Devising Tool Concepts Eight concept tools were mocked-up based on the comments and preferences received from the experienced designers during the first stage of the project. The mock-up tools were deliberately left under-defined, suggesting the overall principle, number of possible features and content, but not detailing the data, means of manipulation, or means of presentation (McGinley and Dong, 2009). Figure 8.2 illustrates one of the eight tool concepts presented at the workshop.
Figure 8.2. One of the eight tool concepts presented at the workshop
8.2.3 Prototyping: Concept Evaluation and Co-design The concept tools developed in phase two were then presented to two audiences in two co-design workshops. As Spinuzzi (2005) suggests in his model of participatory design research stages, this corresponded to the discovery stage of the process, aiming to clarify the users' goals and values and to agree on the desired outcome through discussion, assessment and co-design. The two workshops had a total of 36 participants with slightly different audiences; the first workshop included 22 design students and academics and the second workshop invited 14 design professionals. The workshops consisted of three main stages: (1) Introduction to tools: the eight tool concepts were presented, followed by a brief question session for any required clarification. (2) Individual comments: participants were then asked to give individual feedback through first-impression ratings and comments. Figure 8.3 presents an example of designers' colour-coded individual comments on one of the concept tools. (3) Group discussion, rating and co-design: participants were divided into groups for discussion. They then presented their ranking and the pros and cons of each concept. The final task was the co-design: the participants were asked to create their own ideal tool concept, employing any existing or extra features if desired, and then present their concept to the other groups. Figure 8.4 shows an example of a tool concept developed by one team.
8.3 Findings This paper has an analytical angle and aims to identify the existing issues around a tool development process rather than report on the specific outputs of the tool development project itself. Hence, the findings are presented under the three
categories of the challenges, limitations and opportunities observed in each phase of the project.
Figure 8.3. Designers’ colour-coded comments on a tool concept presented to them
Figure 8.4. A co-designed tool concept developed by one team of designers
8.3.1 Challenges 8.3.1.1 Phase I: Exploration The preliminary exploration of the existing situation and designers' needs (as users) clearly pointed out one critical issue: the extent to which researchers and tool designers can sometimes develop an unrealistic image of the users' (designers') needs and wants in regard to a data tool. As Dong et al. (2006) state, one main problem is that tool developers tend to prescribe what they 'perceive' to be useful to the users rather than present what they 'have verified' to be useful for the users. In this project, the researchers' whole understanding of, and assumptions about, the existing use of user data tools were challenged; the results showed that experienced designers' use of existing anthropometric data tools is very limited. The results also identified that the problems with the existing tools included not only a lack of 'usability' and 'desirability' - as already expected by the researchers - but also, and most importantly, a lack of 'usefulness'. This brought a totally new dimension into the project and had immense impacts on its future strategy, direction and research process. Designers also challenged the idea of a 'data' tool by demonstrating that in their daily practice they are more interested in data that is put in context rather than unexplained data. In other words, designers found 'information' on users more useful than the raw 'data' made available to them. The research also highlighted the dominant role of experimental methods (versus referring to conventional user data tools) in providing experienced designers with user information. This shifts the focus from developing 'user data' tools to developing 'user information' tools that can better motivate and support experienced designers and can resonate with their inherent experimental approach to the information. 8.3.1.2 Phase II: Conceptualisation Designers' opinions on user data tools varied significantly and it was difficult to achieve consensus in terms of their preferences for such tools. This made the concept generation stage a challenging task. However, most desired and preferred tools had a number of specifications in common, such as accommodating experiential data, high visual and graphic qualities, and intuitive and simple presentation of data. These specifications, however, were quite general themselves and could be applied to almost any tool development project. How to turn these general implications and guidelines into design specifications was yet another challenge in the tool development process. It was impossible to combine some tool specifications as suggested by most designers. Specifications such as 'simplicity', 'comprehensiveness', 'balanced level of information' and 'ease of use', as expected by designers, could hardly come together in one tool in reality, as some of these specifications are contrary to each other. In devising the tool concepts, the developers had to be careful to keep a balance between the desired tools and features explicitly recommended by designers and the tools and features they themselves considered to be useful, based either on their understanding of what designers implicitly wanted or on their observations of the current situation. This caused some hesitation over the level of involvement the
researchers and tool developers should have in the concept generation stage. The subtle question here was to what extent the concept developers were allowed to rely on their own evaluation of 'what seemed to be needed' rather than on 'what was stated as needed' directly by designers. 8.3.1.3 Phase III: Evaluation and Co-design It is clear that a common preference is for experimental and face-to-face interaction. For professional designers especially, this is a challenge when developing user data tools. As McGinley and Dong (2009) argue, there are two issues to be considered here:
• Should a tool change the way designers work, or should it supplement their current methods?
• Is it even possible to influence a designer once they have reached the professional level, or is this something that would be more suited to the educational sector, where the concepts and considerations might become part of the instinctive thinking that designers typically use in their day-to-day thinking process?
In developing user data tools for designers, the above questions become extremely critical, as the answers to them would directly influence the strategic direction and the design of the tools. It is therefore imperative to carefully consider the target users' different levels of practice – educational versus professional, experienced versus novice, etc. The eight concept tools were designed to cover various aspects, in the hope that this would polarise opinion and generate definite answers. However, the responses demonstrated that there was perceived value in every tool, often relating to specific features, again suggesting that user data needs are wide and varied. This wide range and variety presents another challenge in the later, convergent stages of the tool development process, when tool developers want to put together the design specification. The designers' needs and wants also varied according to whether they came from an art-based or engineering-based design background. However, the more 'human' representations of users in the tools received highly positive feedback from all the designers irrespective of their backgrounds. As the target users of the support tool – the experienced designers in the design industry – come from a mixture of backgrounds, both engineering and artistic approaches to data should be considered in the final design of the support tool.
8.3.2 Limitations 8.3.2.1 Phase I: Exploration There are some limitations in the initial exploration phase of the project that could challenge the validity and generalisability of the findings. One main issue is the type of methods used. In this instance, in-depth interviews accompanied by simple questionnaires were used for understanding the situation in terms of data use and for identifying the existing barriers, motivations and real needs of designers (as users). It
can be argued that interviews (and also questionnaires) are limited methods for understanding users and their situation, as they rely only on what people 'say' rather than what they 'do' in reality; users cannot necessarily express and reflect on exactly what they need, prefer or wish. In other words, users do not always say what they want and do not always want what they say they want. Another issue is the bias in the selected sample of users. This could be in terms of the size or the specifications of the sample:
• Sample size – a limited number of participants in the initial investigation stage could deliver a limited and exclusive perspective of the issues to be addressed. In this project, the interview sample was relatively small (approximately ten designers) and therefore limited.
• Sample specifications – designers' perspectives on and expectations of a user data tool could vary hugely based on their design background, level of expertise, organisational role and the type and size of design organisation (consultancy, corporate or freelance) they work in. The existing project only reflected the viewpoints of small to medium sized design consultancies.
8.3.2.2 Phase II: Conceptualisation There were limitations in terms of the number of tool concepts generated for demonstration. A balance had to be struck so that, on the one hand, the reviewers would not feel overwhelmed and confused by the volume of tools presented to them and, on the other hand, a good range and a rich diversity of concepts covering various possibilities could be presented. This was necessary in order to obtain the most comprehensive, valid and efficient feedback from the reviewers. The same balance was necessary when deciding on the number of features and the level of detail in each tool concept. As mock-ups providing no interaction for the reviewers, each tool concept could only accommodate a limited number of features and a very basic level of detail. This was to some extent intentional, as the concept developers wanted to present only initial concepts. However, it eventually jeopardised the reliability of the evaluations made by reviewers. 8.3.2.3 Phase III: Evaluation and Co-design Certain limitations in methods were observed in the evaluation and co-design stage: the concept tools were all demonstrated to the designers as static mock-ups, making it impossible to fully interact with the tools. Therefore designers were at best providing their perceptions of the tools, or 'what they saw in the tools'. This increased the risk of the concepts being evaluated more on the basis of their semantics and graphic qualities than on their usefulness and usability. Designers, as a group of users with professional visual awareness and strong graphic preferences, accentuated this bias even more, and unsurprisingly the tools with more professional graphic representation and more engaging visuals stimulated more positive feedback. This highlights a significant barrier in the evaluation of tool concepts and early prototypes: the 'perceived value' versus the 'practical value' of a tool as indicated by the potential users.
8.3.3 Opportunities 8.3.3.1 Phase I: Exploration Integrating ethnographic methods such as observation, user-centred methods such as cultural probes, and more participatory ways of understanding designers' needs can be immensely helpful in the initial exploration phase. However, due to time, availability and financial limitations, the use of such methods can be difficult. 8.3.3.2 Phase II: Conceptualisation Embodying users' comments and suggestions in the format of early concepts and subsequently presenting them back to the users provides a great opportunity for reflection, analysis and validation. In terms of user data tools, developing early concepts provides an opportunity for researchers and tool developers to understand the scope and complexity of the tasks, and motivates them to get involved in prototyping as early as possible. 8.3.3.3 Phase III: Evaluation and Co-design Adopting a co-design methodology in undertaking such research and design projects proves helpful in that it avoids the common bias of tool developers regarding their users' needs, values and demands. Use of new media and social networks such as YouTube, Facebook or Twitter could provide an opportunity for reaching larger numbers of target groups and facilitate more interaction and more dynamic participation by users. This may help overcome the bias in sample size and specification by enlarging and diversifying user participation. Such social networks could also provide a better platform for in-depth evaluation and analysis of concepts by users. Participatory design methodology could benefit from employing such wide-reaching new media.
8.4 Conclusion This paper has provided a brief yet holistic reflective evaluation of the process of designing support tools for designers. In doing so, it specifically focused on user data tools developed for experienced designers. For this purpose, the real-world design process of a user data support tool was reported stage by stage and issues regarding challenges, opportunities and limitations were discussed. One main conclusion is that the exploration and evaluation methods are prone to limitations and risks in terms of the type of methods applied and the samples selected. More time should be spent on the exploration stage in order to better understand the users and what they 'really' need. A better understanding of the designers at whom the tool is aimed is strongly recommended. Certain differences were identified in designers' approaches to a data tool resulting from their level of expertise and design background. Researchers' and tool developers' biases and assumptions about the value and usefulness of a potential tool should be carefully considered, specifically in the initial exploration stage of a tool design project.
It is suggested that new media and social networks be considered for future use in the exploration and co-design stages, as they provide an opportunity for reaching larger numbers and hence help overcome the bias in sample size and specification. They also facilitate more dynamic participation and interaction by users. Integrating user-centred ethnographic methods such as cultural probes and more participatory ways of understanding designers' needs can be helpful in the initial exploration phase. A co-design methodology, as an inclusive approach to understanding, exploring and solving the problem, is also highly recommended. Adopting such a methodology, implemented through a selection of appropriate participatory methods, proves helpful in that it avoids the common bias of tool developers regarding their users' needs, values and demands. This would help define and deliver better tools for designers.
8.5 Acknowledgements This project is supported by the Engineering and Physical Sciences Research Council's grant, EP/F032145/1.
8.6 References Clarkson PJ, Coleman R, Hosking I, Waller S (2007) Inclusive design toolkit. Engineering Design Centre, University of Cambridge, UK Dong H, Cassim J, Coleman R (2006) Addressing the challenge of inclusive design: a case study approach. In: Stephanidis C, Pieper M (eds.) Universal access in ambient intelligence environments. Springer, Berlin Heidelberg, Germany Dong H, Clarkson PJ (2004) Requirements capture for inclusive design resources and tools. In: Proceedings of the 7th Biennial ASME Conference Engineering Systems Design and Analysis (ESDA'04), Manchester, UK Goodman J, Langdon PM, Clarkson PJ (2007) Formats for user data in inclusive design. In: Proceedings of the 12th International Conference on Human-Computer Interaction (HCI'07), Beijing, China Gyi DE, Sims RE, Porter JM, Marshall R, Case K (2004) Representing older and disabled people in virtual user trials: data collection methods. Applied Ergonomics, 35(5): 443–451 McGinley C, Dong H (2009) The collection, communication and inclusion of user data in design. In: Proceedings of the 2nd SED Research Student Conference (ReSCon'09), Brunel University, London, UK Moggridge B (2007) Designing interactions. MIT Press, Cambridge, MA, US Nickpour F, Dong H (2009) Anthropometrics without numbers! An investigation of designers' use and preference of people data. In: Proceedings of the International Conference on Inclusive Design (INCLUDE 2009), Helen Hamlyn Centre, London, UK Pheasant S, Haslegrave C (2006) Bodyspace: anthropometry, ergonomics and the design of work. Taylor and Francis, London, UK Spinuzzi C (2005) The methodology of participatory design. Technical Communication, 52(2): 163–174
Chapter 9 User-pack Interaction: Insights for Designing Inclusive Child-resistant Packaging J. de la Fuente and L. Bix
9.1 Introduction Drug packages protect and deliver prescription and over-the-counter (OTC) drugs, as well as communicating necessary warnings and directions to people, enabling the therapeutic benefit of their contents. Despite the fact that people with disabilities and older adults represent a significant portion of the pharmaceutical market, protocols across the world for testing child-resistant (CR) packaging (ISO, 2003) exclude people with any obvious or overt disabilities from the senior-related portion of the test (Bix et al., 2009) (Figure 9.1). These regulations have a critical impact on the level of inclusivity of commercially available CR packaging currently on the market. CR package design presents a very difficult challenge: young children must be excluded while access for adults, many of whom are impaired by sickness or disability, must be facilitated. As a general rule, commercially available CR packaging designs rely extensively upon working principles that demand a combination of hand-finger strength, hand-finger dexterity, and specific cognitive abilities. Given the extensive literature indicating that older consumers have difficulty opening CR packages (Donaghy et al., 2003; Kou, 2006), we asked the question, "What are the common characteristics older adults and people with disabilities share that are not present in young children?". The objectives of this research were to:
• benchmark commercially available CR packages in the hands of people with disabilities, older adults, and children;
• characterise physical attributes of these three groups;
• develop guidelines to design more inclusive CR packaging.
9.2 Materials and Methods Researchers collected both qualitative and quantitative data from three working groups. Data included timed openings, task analysis, size, dexterity, and strength measures, and insights gathered from focus groups, interviews, and observation of use. Data were used to develop design criteria with the eventual goal of creating more inclusive CR designs. Due to restrictions in space, only package testing and physical characterisations are presented in this chapter. For a more complete report see de la Fuente (2006).
9.2.1 Subjects Three working groups were at the core of the study:
• Working group A (people with disabilities): subjects in this group were 18 years old or older and had a variety of types of disabilities but at least partial use of one hand. The average age of this group (n=10) was 52 years (sd=14, min=31, max=73) and it consisted of one male and nine females. People in this group reported taking, on average, nine medications daily (sd=8, min=1, max=24).
• Working group B (older adults): people 65 years old and older that had at least partial use of one hand. The average age of this group (n=10) was 84 years (sd=6, min=71, max=90) and it consisted of two males and eight females. People in this group reported taking, on average, seven medications daily (sd=4, min=1, max=12).
• Working group C (children): children between 42 and 54 months old. The maximum age limit for the US Consumer Product Safety Commission (CPSC) child test protocol is 51 months, so the child panel for this study likely represents a more severe test of the package (i.e. older children are more likely to be able to open it than younger). For this same reason, the working group children also had no physical or mental handicaps, injuries, or illness that would interfere with testing. The average age of this group (n=8) was 47 months (sd=4.7, min=41, max=54) and it consisted of three females and five males.
9.2.2 Package Testing 9.2.2.1 Adult Test Procedure The openability and re-closability of four CR packages (Figure 9.2a-d) were tested during two separate meetings with working groups A and B. Participants tested the packages individually in a well-lighted and distraction-free room. All testing was conducted in accordance with instructions of the US Consumer Product Safety Commission (CPSC) adult test protocol for CR packaging (US CPSC, 1995) (Figure 9.1) with minor modifications. For example, when participants were unable to read, a researcher read the consent form aloud. The standard protocol indicates
that participants that cannot read (forgot glasses, illiterate, visually impaired, etc.) are disallowed from the test. Additionally, the majority of the people in groups A and B would not be eligible for testing under the protocol, as it disallows those with overt or obvious disabilities, and specifies that adults aged 50 to 70 serve as test subjects. Sessions were videotaped to obtain records which were used to confirm test times and inform design recommendations. Prior to the beginning of the test, information about the participants was recorded. The experiment, like the protocol, was divided into three parts: a five minute period, a one minute period, and a screening test.
Figure 9.1. Diagram of the US Consumer Product Safety Commission (CPSC) senior-adult test (US CPSC, 1995) that has been taken as a model for international regulations across the world (ISO, 2003)
Five minute period: During the initial phase of testing, the participants attempted to open and properly close a CR package for a maximum period of five minutes. Once the package had been opened and closed, opening time and time to close (if appropriate) were recorded. If the package was not opened, the researchers recorded either the time at which the participant gave up the test or, if the participant continued trying, the point at which the permitted time elapsed.
One minute period: Subjects that successfully opened the package during the five minute period were tested a second time. During the one minute portion of the test, a CR package identical to the one tested during the five minute period was handed to the subject. The times required to open and close were recorded. Screening test: As with the protocol, participants that were not successful in opening the package during the five minute test period participated in a screening test. The screening consisted of two one minute periods in which the participant tried to open and close two non-CR packages (Figure 9.2e-f). Subjects that opened and closed both non-CR screening packages again attempted the CR package for a one minute period. If both screening packages were not opened, the opening portion of testing ceased with all results recorded. In the traditional protocol, test participants who do not pass the screening test are excluded from the panel test; their results are not recorded and they are replaced with another test subject. After testing each package, participants were asked if they agreed with the statement "This package is easy to use". A five-point Likert scale from zero ("strongly disagree") to four ("strongly agree") was then employed. Results were used to calculate an ease-of-use rating. 9.2.2.2 Child Test Procedure The four CR packages were also tested during two separate meetings of working group C. Children tested the packages in pairs in a well-lighted room that was familiar to them. All testing was conducted in accordance with the instructions of the child test protocol for CR packaging (US CPSC, 1995). The experiment was divided into two five minute periods. Each child was given a package for a five minute period and asked to try to open it. After the five minute trial, if the child had not successfully opened the package, a tester demonstrated opening, and asked the child to try again for another five minutes. Children were also told that they could use their teeth if they wished. All sessions were videotaped from behind a one-way mirror. 9.2.2.3 Packages Two CR vials and two unit dose CR packages were tested:
• a 10-dram 1-Clic® (Owens-Illinois Inc., Toledo, OH) vial and closure, ASTM type IIB "Hold fitment down while turning closure" (Figure 9.2a);
• a 13-dram Screw-Loc® (Owens-Illinois Inc., Toledo, OH) vial and closure, ASTM type IIA "Random push down while turning" (Figure 9.2b);
• a Shellpak® (MeadWestvaco Corp., Stamford, CT), ASTM type XIIIA "Press hold, pull out (parts remain together), push out" (Figure 9.2c);
• a CR blister package (Perrigo Co., Allegan, MI), ASTM type VIIIB "Remove tab, peel back, and push out" (Figure 9.2d).
Data collected through interviews with pharmacists and information provided by packaging manufacturers confirmed these packages to be among the most prevalent in the American prescription drug market (de la Fuente, 2006). Two non-CR packages were used in the screening test:
• a 2-ounce square plastic bottle that had a plastic continuous thread (CT) closure with a bottle finish diameter of 28 mm (Figure 9.2e), closed 72 hours before testing with a torque of 10 inch-pounds using a Sure Torque automatic torque tester;
• a plastic 8-dram round vial with a plastic snap-type closure and a finish diameter of 28 mm (Figure 9.2f).
Figure 9.2. CR packages: a) 1-Clic®; b) Screw-Loc®; c) Shellpak®; d) CR blister. Non-CR packages: e) continuous thread closure; f) snap closure.
9.2.3 Characterisation of Adults and Children Participants of the three working groups were characterised with regard to hand-finger strength, hand-finger dexterity, and anthropometrics of their hands. 9.2.3.1 Equipment and Procedures The evaluation of hand strength included an assessment of grip strength, pinch strength (tip pinch, key pinch, and palmar pinch), wrist strength, and bilateral palm-to-palm squeeze strength. Grip strength was evaluated with a Jamar® hydraulic dynamometer (0-200 lb, Sammons Preston Inc., Chicago, IL, US) set at the second position. Pinch strength was measured using a B&L® pinch gauge (0-60 lb, B&L Engineering, Tustin, CA), wrist strength with a Baseline® wrist dynamometer, and bilateral palm-to-palm squeeze strength evaluations were conducted with a Baseline® pneumatic squeeze dynamometer. Three trials were recorded for each hand on each device. Testing was done in accordance with the American Society of Hand Therapists (ASHT) standards (Mathiowetz et al., 1985a). The nine-hole peg test (9-HPT) was used to measure hand-finger dexterity. It is a standard test commonly used by occupational therapists that measures fine motor dexterity in terms of the number of seconds (i.e. completion time) a subject takes to place nine pegs in a pegboard and then place them back in a container (Mathiowetz et al., 1985b). Standard instructions were provided prior to testing each subject. A stopwatch was started by the examiner as soon as the subject touched the first peg and stopped when the last peg hit the container. Each hand was tested twice, with the first trial being used as an acclimation period. Hand dominance was recorded as the hand used for writing: a self-report for adult participants and the hand children used to draw a circle. A photographic method was used to collect anthropometric data (DTI, 2002). Every participant was asked to place their hands on a grid of known proportions, spreading their fingers (if possible). Three different digital pictures were taken:
palm down, palm up, and hand closed with the thumb parallel to the lens. Using CorelDRAW® software, these pictures were scaled and measured. Functional grip diameter, defined as the maximum diameter that can be grasped between the thumb and middle finger (Figure 9.3a), was measured using a wooden grasping cone (DTI, 2002). Two hand grip spans were measured using a flat wooden triangular plate (DTI, 2002). Researchers recorded the hand grip span one (between thumb and index finger's first phalanx) (Figure 9.3b) and the hand grip span two (between thumb and index finger's second phalanx) (Figure 9.3c). Three measurements were recorded for each dimension.
Figure 9.3. a) Functional grip diameter, b) hand grip span one, c) hand grip span two
9.2.3.2 Data Analyses All data were analysed using SPSS® version 14.0. One-way analyses of variance (ANOVAs) and an independent-samples t test were computed to examine potential differences in the dependent variables related to strength, dexterity, and size.
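The analyses were run in SPSS; an equivalent sketch of the same tests using Python's scipy.stats, with hypothetical measurement values rather than the study data, would be:

    from scipy import stats

    # Hypothetical strength measurements (lbs) for the three working groups
    group_a = [28.0, 35.5, 41.0, 30.2, 22.5]   # people with disabilities
    group_b = [44.0, 39.5, 47.2, 42.8, 45.1]   # older adults
    group_c = [12.5, 14.0, 15.2, 13.8, 13.1]   # children

    # One-way ANOVA across the three groups
    f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

    # Independent-samples t test between the two adult groups
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t test: t = {t_stat:.2f}, p = {p_value:.3f}")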
9.3 Results 9.3.1 Package Testing Average time to open, number of successful participants, and ease-of-use ratings by group are summarised in Table 9.1. For the purpose of this study, it was decided that a package should receive more than two points on the five-point ease-of-use scale described above to be considered easy to use. For both working groups, the 1-Clic® package received the best ratings (2.7 for people with disabilities and 3.6 for older adults). However, the lower rating for people with disabilities points to some usability problems. The Screw-Loc® (Figure 9.2b) was opened faster than the other three packages, in eight seconds (people with disabilities) and nine seconds (older adults) on average. However, in working group A (people with disabilities), three subjects (30%) were unable to open it and the overall Likert rating for this package was 1.2 out of four. All participants in group A disagreed with the statement that this package was easy to use. The 1-Clic® (Figure 9.2a) ranked second in opening speed and first in the subjects' ease-of-use rating for both adult working groups, 2.7 (working group A) and 3.6 (working group B). For the group of people with disabilities, the Shellpak® (Figure 9.2c) ranked second in rating (1.6) and second in the percentage of people able to open it (80%). The CR Blister (Figure 9.2d) showed the worst performance overall: the lowest subject ratings (0.3 and 0.2), the longest average opening times (98 and 153 seconds) and the fewest participants
successfully opening it (40% and 30%). Ninety percent (18 subjects) of the total number of adult participants (n=20) opened both types of screening packages; all participants were able to open the non-CR snap-type closure. During the one minute period, times to open were reduced drastically, suggesting that familiarisation with a package during the five minute period played an important role in opening speed.
Table 9.1. Average opening times, number and percentage of subjects able to open the four CR packages in both periods of time, and ease-of-use rating

Working group A (n=10):
  1-Clic®:     five minute period 26 ± 23 s, 9 (90%) opened; one minute period* 12 ± 14 s, 9 (100%); ease-of-use rating 2.7 ± 1.3
  Shellpak®:   five minute period 57 ± 28 s, 8 (80%); one minute period* 18 ± 12 s, 7 (87%)‡; ease-of-use rating 1.6 ± 1.6
  Screw-Loc®:  five minute period 8 ± 10 s, 7 (70%); one minute period* 5 ± 4 s, 7 (78%)†; ease-of-use rating 1.2 ± 1.5
  CR Blister:  five minute period 153 ± 52 s, 4 (40%); one minute period* 50 ± 0 s, 1 (13%)‡; ease-of-use rating 0.3 ± 0.9

Working group B (n=10):
  1-Clic®:     five minute period 10 ± 8 s, 9 (90%); one minute period* 5 ± 3 s, 10 (100%); ease-of-use rating 3.6 ± 0.5
  Shellpak®:   five minute period 64 ± 38 s, 7 (70%); one minute period* 26 ± 16 s, 9 (90%); ease-of-use rating 1.3 ± 1.3
  Screw-Loc®:  five minute period 9 ± 6 s, 9 (90%); one minute period* 12 ± 14 s, 10 (100%); ease-of-use rating 2.1 ± 1.3
  CR Blister:  five minute period 98 ± 41 s, 3 (30%); one minute period* 54 ± 0 s, 1 (10%); ease-of-use rating 0.2 ± 0.4

* Only participants that passed the screening test; † n=9 because one participant did not pass the screening test; ‡ n=8 because two participants did not pass the screening test
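The ease-of-use ratings in Table 9.1 are reported as the mean and standard deviation of the 0–4 Likert responses for each package and group; a minimal sketch with illustrative (not actual) responses:

    from statistics import mean, stdev

    # Illustrative 0-4 Likert responses to "This package is easy to use"
    responses = [4, 3, 2, 4, 3, 4, 1, 3, 4, 4]

    # Mean +/- sample standard deviation, as summarised in Table 9.1
    print(f"ease of use: {mean(responses):.1f} +/- {stdev(responses):.1f}")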
In the child test, during the two 5-minute test periods, 50% of children tested (four children) opened the Screw-Loc®, 25% (two children) removed a pill from the Shellpak®, 12.5% (one child) opened the 1-Clic® and 12.5% (one child) got a pill from the CR blister. Of the total openings, seven were during the second 5-minute period and one was in the first 5-minute period. This suggests that the visual demonstration given by the tester has an important impact on the results.
9.3.2 Adults and Children Characterisation Hand-finger strength measurements, hand-finger dexterity scores, and hand dimensions of the three working groups were compared against normative data. Hand-finger strength and dexterity measures for group A were significantly variable. This is because the group was composed of people with varied disabilities and included people with perceptual, cognitive, and physical difficulties.
9.3.2.1 Hand-finger Strength Average values for hand strengths are shown in Table 9.2. A one-way ANOVA comparing the average strengths of the three groups was computed. Significant differences were found between groups in grip strength (F(2,25)=8.63, p<.01), key pinch strength (F(2,25)=3.58, p<.05), tip pinch strength (F(2,25)=5.48, p<.05), palmar pinch strength (F(2,25)=4.48, p<.05), and bilateral palm-to-palm squeeze (F(2,24)=4.8, p<.05). A Tukey's HSD post-hoc comparison was used to determine the nature of the differences between groups. This analysis revealed that older adults and people with disabilities had higher grip strength than children. It also showed that older adults had higher pinch strengths (tip, key, and palmar) and bilateral palm-to-palm squeeze strength than people with disabilities and children and that these latter two groups did not differ significantly from each other.
Table 9.2. Average strength for dominant hand (lbs)

  Grip:              people with disabilities (A) 33.00 a ± 22.03; older adults (B) 43.50 a ± 11.23; children (C) 13.75 b ± 3.38
  Tip pinch:         A 5.07 a ± 4.34; B 8.20 b ± 2.99; C 3.08 a ± 1.18
  Key pinch:         A 8.03 a ± 5.71; B 11.27 b ± 5.34; C 4.92 a ± 1.44
  Palmar pinch:      A 7.03 a ± 5.99; B 11.33 b ± 4.31; C 4.67 a ± 1.27
  Wrist:             A 62.67 a ± 62.25; B 43.67 a ± 29.65; C ND*
  Bilateral squeeze: A 2.56 a ± 2.64; B 5.73 b ± 3.72; C 1.83 a ± 1.94

a,b Identical letters on each row represent groups with no indication of significant difference at the .05 level. *ND: not determined.
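The post-hoc comparisons reported above could be reproduced with, for example, statsmodels' implementation of Tukey's HSD; the grip-strength values below are placeholders rather than the measured data:

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical grip-strength values (lbs) for groups A, B and C
    grip = np.array([33, 28, 41, 30, 25,      # group A: people with disabilities
                     44, 40, 47, 43, 45,      # group B: older adults
                     13, 14, 15, 14, 13])     # group C: children
    group = (["A"] * 5) + (["B"] * 5) + (["C"] * 5)

    # Pairwise Tukey HSD comparisons following a significant one-way ANOVA
    print(pairwise_tukeyhsd(grip, group, alpha=0.05))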
Wrist strength of working group C could not be measured because it was below the detection limit of the dynamometer utilised. An independent-samples t test was calculated comparing the mean wrist strength of participants in groups A and B. No significant difference was found (t(18)=.846, p>.05). 9.3.2.2 Hand-finger Dexterity A summary of average completion times on the 9-HPT is presented in Table 9.3. A one-way ANOVA comparing the dexterity scores of the three groups was computed. No significant differences were found for dominant hand (F(2,24)=2.618, p=.094) and for non-dominant hand (F(2,24)=2.766, p=.083).
Table 9.3. Average completion time on the nine-hole peg test (seconds)

  Dominant hand:     people with disabilities (A) 52.56 a ± 42.13; older adults (B) 28.90 a ± 10.62; children (C) 52.75 a ± 12.28
  Non-dominant hand: A 69.78 a ± 62.19; B 29.40 a ± 6.62; C 52.75 a ± 19.29

a Identical letters on each row represent groups with no indication of significant difference at the .05 level
9.3.2.3 Hand Anthropometrics Table 9.4 shows a summary of all dominant hand anthropometrics measured for the three working groups.
Table 9.4. Average dimensions for dominant hand (mm)

  Functional grip:        people with disabilities (A) 37.03 a ± 8.55; older adults (B) 43.67 b ± 3.72; children (C) 25.54 c ± 1.38
  Hand grip span 1:       A 82.83 a ± 27.03; B 104.17 a ± 13.90; C 58.75 c ± 7.84
  Hand grip span 2:       A 73.33 a ± 14.41; B 78.00 a ± 15.57; C 33.54 b ± 11.56
  Index finger's length:  A 65.60 a ± 7.76; B 71.06 a ± 6.94; C 42.76 b ± 4.46
  Hand width:             A 86.74 a ± 6.73; B 91.73 a ± 8.82; C 61.70 b ± 3.11

a,b,c Identical letters on each row represent groups with no indication of significant difference at the .01 level
A one-way ANOVA comparing the average dimensions of the three groups was computed. Significant differences were found among groups in functional grip diameter (F(2,25)=23.19, p<.01), hand grip span one (F(2,25)=12.73, p<.01), hand grip span two (F(2,24)=25.5, p<.01), index finger length (F(2,25)=43.59, p<.01), and hand width (F(2,25)=47.42, p<.01). Tukey's HSD post-hoc comparison was used to determine the nature of the differences between groups. This analysis revealed that the three groups had significantly different functional grip diameters. Older adults and people with disabilities had longer index fingers, wider hands, and larger hand grip spans one and two than children. In summary, the two adult groups had consistently bigger hand dimensions than the working group of children.
9.4 Discussion The information summarised in this chapter has the potential to guide designers not only in design choices, but also in dimension- and force-related decisions regarding CR package design. Hand-finger strength: Of all strength measurements, only grip strength was significantly different between the two adult groups (A and B) and the group of children (C). Statistical analysis provided no evidence that the pinch strengths (tip, key, and palmar) of people with disabilities differed from those of the children's group. In order to include people with disabilities, a CR package design should therefore not rely on pinch strength as its operational principle. Of additional interest in this regard is that the results of our study suggest that when finger strength and dexterity are required, user ratings tend to be low (i.e. CR blister, Screw-Loc®, and Shellpak®). Hand-finger dexterity: Children between the ages of three and five make rapid gains in manipulative skills, finger dexterity, and tool use (Pehoski, 2005). The younger the child, the less dexterous he/she is. By the age of four, children's dexterity scores are very close to the scores of people aged 80 and older. In addition, older adults and people with disabilities represent a population with a broad range of hand-finger dexterity. Statistical analysis provided no evidence of differences in hand-finger dexterity when comparing the adult groups with the children's group. Hand-finger dexterity therefore seems inappropriate as the main factor to include adults and exclude children. Cognitive abilities: Researchers observed during testing that basic affordances used in packaging are recognised by both young children and adults. For instance, when presented with a vial (cap plus container) system, all children in working group C immediately recognised the cap (as the interface for opening) and the container (as the interface for gripping). However, children did not seem to have a clear understanding of "mechanical causality", for example, pushing a tab down to unlock a mechanism while turning a cap. In the same situation, adults tended not to read instructions. To understand the working principle of the CR package, they seemed to rely on previous experiences with a similar package and on what the package communicates through shape, colour and configuration (for instance: "If it is a round vial with a cap, then I have to push and turn"). When this strategy did not work, adults focused on reading the instructions more carefully. Hand anthropometrics: The difference in size between adults and young children provides a unique opportunity to include one and exclude the other that does not appear to be widely leveraged by the industry, for instance by using parts of the hand for functional operation (e.g. index finger, thumb-index finger grip span, hand width, etc.). This finding is somewhat related to conclusions from previous studies that have suggested that large CR containers have better child-resistant properties than small ones (Kresel et al., 1982), and are also generally easier to open than small containers (Lisberg et al., 1983; Keram et al., 1988). Our findings suggest that in order to include people with disabilities and older adults, and exclude children, the possible pathways are reduced to anthropometrics and cognitive abilities. If the level of strength and dexterity demanded by a package design increases, people with disabilities might be excluded as users.
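As a rough illustration of how the anthropometric data in Table 9.4 might inform a dimension-based working principle, the sketch below assumes normally distributed functional grip diameters (an assumption the chapter itself does not make) and estimates the share of each group able to grasp an opening of a given diameter; the chosen diameter is hypothetical:

    from scipy.stats import norm

    # Group means and standard deviations (mm) for functional grip diameter, from Table 9.4
    groups = {
        "people with disabilities": (37.03, 8.55),
        "older adults": (43.67, 3.72),
        "children": (25.54, 1.38),
    }

    opening_diameter = 32.0   # hypothetical package dimension (mm)

    for name, (mu, sd) in groups.items():
        # Share of the group whose maximum graspable diameter exceeds the opening
        able = 1 - norm.cdf(opening_diameter, loc=mu, scale=sd)
        print(f"estimated share of {name} able to grasp {opening_diameter} mm: {able:.0%}")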
Another important finding of this research is that the vast majority of the participants in working groups A and B, all of whom would have been excluded from testing according to international protocols for senior testing, could successfully open the screening packages required by the protocol. All adult participants, including people with muscular weakness and very limited hand dexterity, could easily open a non-CR snap-type closure.
9.5 Acknowledgments

The authors wish to acknowledge the Center for Food and Pharmaceutical Packaging Research, which provided partial funding for travel and subject incentives. Special thanks to all the people who participated in this study and to Dr. Debra Lively, Anita Collins, Ellen Weaver, Liz O’Brien, Audrey Whaling, Josh Vincent, and Erik Kou.
9.6 References

Bix L, de la Fuente J, Pimple KD, Kou E (2009) Is the test of senior friendly/child resistant packaging ethical? Health Expectations, 12(4): 430–437
de la Fuente J (2006) The use of a universal design methodology for developing child resistant drug packaging. Master’s Thesis: 198, School of Packaging, Michigan State University, East Lansing, MI, US
Donaghy A, Wright D (2003) Standardising risk assessment to reduce unintentional noncompliance in aged patients pre-discharge. International Journal of Pharmacy Practice, 11(3)
DTI (2002) Specific anthropometric and strength data for people with dexterity disability. Department of Trade and Industry, London, UK
ISO (2003) Child-resistant packaging – requirements and testing procedures for reclosable packages. ISO 8317:2003. International Organization for Standardization, Geneva, Switzerland
Keram S, Williams ME (1988) Quantifying the ease or difficulty older persons experience in opening medication containers. The Journal of the American Geriatrics Society, 36(3): 198–201
Kou EY (2006) Child resistant drug packaging and arthritis: can older consumers access their medications? Master’s Thesis: 164, School of Packaging, Michigan State University, East Lansing, MI, US
Kresel JJ, Lovejoy FH, Boyle WE, Easom JM (1982) Comparison of large and small child-resistant containers. Journal of Toxicology – Clinical Toxicology, 19(4): 377–384
Lisberg R, Higham C, Jayson M (1983) Problems for rheumatic patients in opening dispensed drug containers. Rheumatology, 22(3): 188–189
Mathiowetz V, Kashman N, Volland G, Weber K, Dowe M, Rogers S (1985a) Grip and pinch strength: normative data for adults. Archives of Physical Medicine and Rehabilitation, 66: 69–75
Mathiowetz V, Weber K, Kashman N, Volland G (1985b) Adult norms for nine hole peg test of finger dexterity. Occupational Therapy Journal of Research, 5(1): 25–38
Pehoski C (2005) Object manipulation in infants and children. In: Henderson A, Pehoski C (eds.) Hand function in the child: foundations for remediation. Mosby Inc., St. Louis, MO, US
US CPSC (1995) 16 CFR Part 1700 – requirements for the special packaging of household substances (final rule). Title 16, Part 1700 to 1750 sub chapter. US Consumer Products Safety Commission, Washington, DC, US
Chapter 10
A Colour Contrast Assessment System: Design for People with Visual Impairment H. Dalke, G.J. Conduit, B.D. Conduit, R.M. Cooper, A. Corso and D.F. Wyatt
10.1 Colour Contrast Assessment Model and System Development

Visually Impaired People (VIP) encounter difficulties with the perception of products and environments in their everyday life, such as a door in a wall or a column on a station concourse. Contrast can be an essential aid for negotiating the world for people with low vision (Bright et al., 1997; Dalke et al., 2004b). The development of a colour contrast assessment system would enable the construction and design sectors to create more accessible spaces and objects. A requirement of perception by the human eye is the ability to assess visual contrast between adjacent surfaces or edges of material objects and to judge distances. This function belongs to one of two distinct systems in human vision, a fast, contour-extracting system (Ramachandran and Rogers-Ramachandran, 1998). Contrast is now included in guidelines for accessibility in the design of environments, products and services for VIPs, such as Building Regulations Part M under the Disability Discrimination Act (DDA, 2004) and the Light Reflectance Value (LRV) of a surface (BSi, 2008). However, despite standards and regulations, there are no ‘tools’ to help professionals establish ‘good colour contrast’ for their projects. Mechanisms for providing interventions that achieve good contrast had not yet been fully mapped out, so no definitive advice existed on how effective colour contrast could be achieved easily and inexpensively. Colour contrast assessment can be a confusing or complex process: access personnel, for example, may not be able to devote much time or resources to it. Also, existing colour measurement technology (spectrophotometry) is too expensive (approximately £4,000 – £8,000) and overspecified (multiple colour spaces) for simple and easy contrast evaluation.
A software prototype for the automated prediction of visibility, for the detection of contrast thresholds, was established using data derived from testing with two sets of visually impaired volunteers: twenty-two VIPs with a control group of six in a ‘real world’ viewing of 380 objects, then a stratified group of ten VIPs across the visual ability range V1 to V10 who took part in validating the software in the laboratory. The system model that has been developed is the basis of on-line software, a prototype measurement tool and a colour contrast guide for the design, manufacture and planning of products and buildings. The work has led to the creation of a first generation algorithm for firmware to be used in a tool microchip. The growing importance of accessibility to an expanding ageing population motivates this study of vision and visual impairment in a ‘real-world’ context. A major advance in this work was the establishment of five key factors, including contrast, which affect visual acuity and are used for assessing the visibility of designs in a real world context. The colour contrast assessment model, developed by Dalke, Conduit and Conduit (Dalke and Conduit, 2010), enables architects, designers, access consultants, and developers of the built environment to predict, before and after manufacture or installation, whether a person with low vision is able to see an object, text or element of a product or building component.

Nearly 3 million people in the UK have some form of low vision. Exact figures on the size of the VIP categories of visual ability are unclear. Around 2% of the registered VIP population (classified as severely impaired or blind) have no ability ‘to see any light at all that may be coming through a window’ (V1), and 1% may just be able to perceive light (V2). A useful scale of visual ability maps the population from V1 to V10 (Table 10.1). Experts in the field recognise that the number of registered VIPs does not reflect the actual scale of low vision in the UK; people with poor visual acuity may not present themselves to either GPs or opticians for early diagnosis, so the VIP population is thought to be much larger than the recorded figures suggest, making research in this area even more pressing. People who experience some form of ocular disease may also have colour vision impairment (Adams, 1990; Marshall, 1991). Although colour vision may be impaired, VIPs can usually discriminate between two adjacent surfaces in terms of the difference between their Light Reflectance Values (LRV), known as contrast (Bright et al., 1997). The LRV of a surface is defined using the Commission Internationale de l’Eclairage (CIE) 1931 colour space; it is the Y value of the light reflected by the surface when illuminated with the CIE D65 standard illuminant.

The model for perception explored here was investigated after observations during an EPSRC/LINK research project (Dalke et al., 2004a). We saw that a coherent use of contrast would improve the visibility of environments and be efficacious for the community of VIPs. Colour contrast and lighting were identified as two of five key factors making the environment accessible, the three others being the visual ability of the target group, the dimensions of the object, text or element, and its distance from the observer. A strategy to develop a model took into account these five key factors to predict which VIP groups could distinguish the object, text or element.
We first describe the collection of the data, before outlining the development of the vision model; finally, we describe the website and manual that have made the model readily available to access personnel, architects, the construction industry and designers.
Table 10.1. Visual ability categories and percentage gross figures for visual ability levels V1-V9, age 16+. Ability level is measured with any desired vision aids (Grundy, 1999; Douglas et al., 2006)

V1   Cannot tell by the light where the windows are           2%
V2   Cannot see the shapes of furniture in a room             1%
V3   Cannot recognise a friend if close to his/her face       4%
V4   Cannot recognise a friend who is at arm’s length away    3%
V5   Cannot read a newspaper headline                         6%
V6   Cannot read a large print book                           4%
V7   Cannot recognise a friend across a room                  3%
V8   Has difficulty recognising a friend across the road      45%
V9   Has difficulty reading ordinary newspaper print          32%
V10  Full vision ability                                      -
10.2 Vision Research

In earlier work (Dalke, 2002), data was gathered while testing VIP participants at three transport sites. For these ‘real world’ site tests, vision testing required establishing the vision capability of all the participants. They were a mix of gender and age, and included both visually impaired and fully sighted control group volunteers. The range of visual acuity results for the participants ran from 20/21 to 20/380, with a mean of 119. Some results of these observations showed clear design directions, for example for the high visibility of signage in public spaces (Figure 10.1).
Figure 10.1. Signage with success rate of 86-100% of being seen by all VIPs at a station site
Visual acuity and function improved dramatically with an increasing light level, which is a significant design intervention for VIPs in the man-made world. Although five factors determine the ability to perceive objects, text and surfaces,
lux levels provide the critical factor to perception depending on the user’s abilities. Standards and recommendations for lighting specific environments do exist and vary considerably according to the variables of user and task (CIBSE, 1994, 2008; CIE, 1997).

Table 10.2. Examples of measurements of objects, signs and elements examined at a bus station. The categories of data were: eye levels of participants; distance of object from the observer; vertical and horizontal lux levels in the vicinity of the object; dimensions of the item; visual angle; the LRV or luminance of the object; and whether it was seen or not.

                      Object 1   Object 2   Object 3    Object 4
Seen                  Seen/Yes   Seen/Yes   Seen/Yes    Seen/Yes
Av. eye level (m)     1.544      1.544      1.544       1.544
Max eye level (m)     1.75       1.75       1.75        1.75
Min eye level (m)     1.42       1.42       1.42        1.42
Distance              6.45m      6.45m      5.5m        6.5m
Lux vertical          8350       8350       8350        4060
Lux horizontal        13190      13190      13390       6260
Height off ground     1.98       0.91       N/A         N/A
Width of object       0.61       2.59       N/A         N/A
Height of object      0.61       0.88       N/A         0.43
Visual A V            5.3        7.8        0.0         3.6
Visual A V (max)      5.4        7.8        0.0         3.6
Visual A V (min)      5.3        7.8        0.0         3.7
Colour 1 LRV          1          78         N/A         N/A
Colour 1 Descript     Black      White      Off-white   Grey
These detailed measurements of 380 objects (seen by VIPs) in their environs – namely the surfaces’ LRV, their size, distance from an observer and lux levels on ‘real world’ sites – were collated in further studies (Table 10.2). Analysis of the data revealed a strategy for defining critical points for perception of the environment by VIPs. In these new studies, key factors were established for defining perception of objects and environment elements.
10.3 Model Development

The model depended on the five key factors: the visual ability of the observer, f; the tonal contrast difference of the object to background, t; the lighting intensity, l;
the projected width w and height h of the object, and the distance d from the object to the viewer. These parameters are summarised in Figure 10.2. The model aimed to link these concepts to predict the fraction of ‘Viewers’ able to see the sign, f. In order to simplify the model we assumed that vision depends just on the visual angle of the object, which can be calculated from the distance, width and height with

\theta = 2 \arctan\left( \frac{\min(w, h)}{2d} \right)    (1.1)
Formally, one should factor in the effect of the different heights that objects were off the ground, compensated for by the average eye level of the ‘Viewers’. This tended to have little effect on the results when assessed, since critical objects are typically at ground to eye level, and much further away than their elevation.

Figure 10.2. The object and ‘Viewer’ in the model (showing the tonal contrast between object and background t, the distance from ‘Viewer’ to object d, the viewable area of the object A, the visual angle θ, and the height of the object relative to the ‘Viewer’ eye level)
10.3.1 Choice of Model

To use the key factors of tonal contrast, lighting, and visual angle to predict the fraction of ‘Viewers’ able to distinguish a sign, we developed a numerical algorithm that we calibrated using the ‘real world’ data. In designing a suitable algorithm we assumed that each of the key factors affects the critical fraction independently. Therefore the model for the critical fraction is written in the form

f(l, \theta, t) = L(l) \times \Theta(\theta) \times T(t)    (1.2)
Each of the functions L(l), Θ(θ), and T(t) represents a separate model for light intensity, visual angle, and tonal contrast respectively. This factorisation simplifies the model by allowing us to consider each of the parameters separately. It has the additional benefit that it can be readily calculated numerically and is also easily invertible, allowing, for example, the model to make a prediction for the critical tonal contrast. We assume each of these functions can be modelled by the physically motivated cumulative normal (Gaussian) distribution. This tells us what fraction of the ‘Viewers’ are able to interpret the object as its parameter is varied. For example, Figure 10.3 shows the variation in the fraction of ‘Viewers’, L, able to interpret the object with lux level l. In strong light nearly everyone can interpret it, so L is approximately one. In dim light very few can interpret it, so L is almost zero. In intermediate light, where about half the ‘Viewers’ can interpret the object, the distribution decreases rapidly with the light level: here the number of ‘Viewers’ that can view the object falls rapidly. The median light level l̄ of the distribution (denoted by a bar) is the level at which half the ‘Viewers’ can interpret the object. The range of lux values over which the majority of the population (68%) can interpret the object is l̄ ± σ_l, where σ_l is the standard deviation.

The parameters used to develop the model are derived from observations on previous data gathered from VIP volunteers (the ‘Viewers’) at each of three ‘real world’ sites. Objects, texts or elements were tested, creating data sets which were later reduced to 144 robust sets of data (Dalke et al., 2004a). Our model gives the criterion for a required fraction of ‘Viewers’ to be able to interpret an object, text or element. Since data was collected from various different ‘Viewers’ at different sites, we had to assume that the ‘Viewers’ on each site presented the same gamut of impairment types and levels of residual vision. This reflects the breadth, types and variations of vision loss, which introduces random error into the result.

Table 10.3. Example input data and ranges

Symbol   Example data   Range of possible data   Definition and description
f        V4             V1–V10                   Fraction of ‘Viewers’ able to view the object – Visual Ability Groups
t        55             5–85                     Tonal contrast – the difference between the Light Reflectance Values (LRV) of two visually adjacent surfaces
l        800 lux        80–28000 lux             Intensity of the lighting on and around the object, text, element
w, h     2m             0.1m–6.7m                Width and height of object, text, element
d        5m             2.5m–80m                 Distance from object to viewer
Figure 10.3. The cumulative normal distribution: l̄ is the median ‘Viewer’ light level and σ_l is the standard deviation of ‘Viewer’ critical distances
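To make the factorised model concrete, the following minimal sketch evaluates Equations (1.1) and (1.2) with each factor modelled as a cumulative normal distribution; the median and standard deviation values are illustrative placeholders, not the calibrated parameters derived from the ‘real world’ data.

```python
# Minimal sketch of the visibility model f(l, theta, t) = L(l) * Theta(theta) * T(t).
# The medians and sigmas below are illustrative assumptions, not calibrated values.
import math

def cumulative_normal(x, median, sigma):
    # Fraction of 'Viewers' able to interpret the object as the parameter x varies
    return 0.5 * (1.0 + math.erf((x - median) / (sigma * math.sqrt(2.0))))

def visual_angle(w, h, d):
    # Equation (1.1): theta = 2 * arctan(min(w, h) / (2 d)), in radians
    return 2.0 * math.atan(min(w, h) / (2.0 * d))

def fraction_visible(lux, theta, contrast,
                     lux_median=800.0, lux_sigma=400.0,
                     theta_median=0.02, theta_sigma=0.01,
                     contrast_median=30.0, contrast_sigma=15.0):
    # Equation (1.2): the product of the three independent factor models
    return (cumulative_normal(lux, lux_median, lux_sigma)
            * cumulative_normal(theta, theta_median, theta_sigma)
            * cumulative_normal(contrast, contrast_median, contrast_sigma))

# Example: a 2 m x 0.5 m sign viewed from 5 m under 800 lux with a tonal contrast of 55
theta = visual_angle(2.0, 0.5, 5.0)
print(fraction_visible(800.0, theta, 55.0))
```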
10.3.2 Validating the Model

The first generation model required validation in the lab, but with observations conducted in a slightly more complex, near ‘real world’ setting (Figure 10.4). Ten participants from Visual Acuity (VA) levels V1 to V10 tested the software predictions of the visibility of 2 187 grey scale patches against the 10 grey scale backgrounds. Over 1 000 separate measurements were taken as part of the validation and testing.
Figure 10.4. Testing: (a) setting up the test showing the 10 metre test distance and (b) a participant observing the gray scale shapes on a dark background
The participants were invited to observe the grey scale boards of 5, 10, 21, 27, 40, 53, 62, 71, 82 and 93% LRV, on which were placed patches of the same grey scales from 5–93% LRV in different sequences (Figure 10.4). All grey scales were measured with a spectrophotometer (xyY) and tested with a range of different sized square patches (0.15, 0.30 and 0.75 m), and each test was conducted at a controlled lux level. A floor grid of 0.5 metre spacing provided the participants with a central line (Figure 10.4a) along which they walked until they were able to see a patch on the board. The distance from the participant to the observed patch on the board was recorded. The colour contrast model predictions were checked against the data generated in these validation lab tests and shown to be robust; predictions made using the system should therefore be safe and reliable.
10.4 Real-world Deployment and Concluding Remarks

We have developed a new model of object, text or element visibility and rigorously tested the accuracy of the prediction software against the extensive validation test results. An interactive website has been created to enable automated use of the model by end users. Any combination of the key factors for visibility by VA groups may be entered into the website as parameters to estimate the threshold of viewers able to distinguish the object. If all five parameters of the key factors are entered, a result of either VISIBLE or NOT VISIBLE by the VA group/s will be produced. If between two and five parameters are entered, the website suggests the values that are missing to achieve a final result of VISIBLE. These suggestions are obtained by manipulating Equation (1.2) to find the product of the functions of the missing parameters (e.g. L(l) for lux level), assuming the values of these functions are equal, and applying the inverse of each function to this value. The website was programmed with PHP and integrates with a database.

In addition, a prototype tool/device was developed to measure one of the key inputs to the model, tonal contrast (t), defined as the difference between the LRV of the two surfaces. The LRV of a surface is defined using the CIE 1931 colour space; it is the Y value of the light reflected by the surface when illuminated with the CIE D65 standard illuminant (scaled such that a perfect reflector has an LRV of 100 and a perfect absorber has an LRV of 0). The tool is designed for rapid on-site measurement and calculation of tonal contrast between surfaces, which (when compared with the required tonal contrast value obtained from the model) would allow designers to evaluate proposed materials for the design of products or buildings for optimum visibility. The prototype tool uses a MAZeT MTCS-TIAMI colour sensor head, which contains an array of photodiodes with colour filters producing spectral sensitivities close to the CIE 1931 2 degree observer. The light sources are white surface-mount LEDs with a 45 degree incident/0 degree reflected optical path. In the prototype, tonal contrast is calculated by an on-board microcontroller and displayed to the user; future developments will incorporate a visibility prediction model to give a direct readout of visibility. Testing the prototype on a Gretag-Macbeth Colour-Checker showed that, when compared with readings from an X-Rite Spectrophotometer 962 (D65, 10 degrees, specular excluded, 8MM 450), there was 96% agreement with an average error of 2.49 on the Y value. However, a skew on the blue and green hue angles (CIELch) was exposed, due to the difference between the LED spectrum and the D65 illuminant. Further work is planned to develop the prototype tool.

Finally, the system has a colour contrast guide for quick reference in on-site decision-making; it provides data on the required visual contrast of products or materials. This document was created using the software model and consistently used two fixed key factors, visual ability and lux level. The guide is the final component of an intended low-cost entry system for the specification of contrast in product and environmental design.

Several objectives of the integrated design research studies were achieved. Firstly, the utility and development of the prediction software for contrast was rigorously tested and its accuracy established against the extensive validation test results. The software has been used to create the guide. A prototype tool was developed that can achieve accurate LRV measurements and calculate the contrast difference between any two solid opaque surfaces in products or environments. Finally, this integrated system of colour contrast assessment is available for all professionals who need information about contrast specification that is easy and accessible.
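Returning to the parameter-suggestion step described above, the sketch below shows one possible way the inversion of Equation (1.2) could be implemented: the product of the unknown factor functions is inferred from the required fraction and the known factors, split equally between the missing factors, and each factor function is then inverted. The distribution parameters are illustrative assumptions rather than the calibrated values.

```python
# Hedged sketch of suggesting missing parameter values by inverting Equation (1.2).
# The medians/sigmas are illustrative placeholders, not the calibrated model.
from statistics import NormalDist

FACTORS = {
    "lux":      NormalDist(mu=800.0, sigma=400.0),
    "angle":    NormalDist(mu=0.02, sigma=0.01),
    "contrast": NormalDist(mu=30.0, sigma=15.0),
}

def suggest_missing(required_fraction, known):
    """known maps factor name -> value; returns suggested values for the missing factors."""
    product_known = 1.0
    for name, value in known.items():
        product_known *= FACTORS[name].cdf(value)
    missing = [n for n in FACTORS if n not in known]
    # Assume each missing factor contributes equally to the remaining product
    target = (required_fraction / product_known) ** (1.0 / len(missing))
    target = min(max(target, 1e-6), 1.0 - 1e-6)   # keep within the invertible range
    return {n: FACTORS[n].inv_cdf(target) for n in missing}

# Example: what lux level and tonal contrast would make an object with a visual
# angle of 0.03 rad visible to 80% of 'Viewers'?
print(suggest_missing(0.80, {"angle": 0.03}))
```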
10.5 Acknowledgements

For their valuable contributions of assistance, guidance and advice the authors would like to thank the following people: Brenda Billington, Keith Bright, Geoff Cook, Nilgün Camgöz, Yohannes Iassu, Peter Barker, Laura Stott, Rebecca Hunt, Guillaume Steadman, Elga Niemann, and Anne Conduit. This work would not have been possible without the dedication and commitment of all the anonymous participants, and the support of X-Rite, the RNIB and the Macular Disease Society.
10.6 References

Adams AJ (1990) Normal and abnormal mechanisms of vision. In: Spillman L, Werner J (eds.) Visual perception: the neurophysiological foundation. Academic Press, San Diego, CA, US
Bright K, Cook G, Harris J (1997) Colour, contrast and perception: design guidance for internal built environments. Brooker Publications, London, UK
BSi (2008) Light reflectance value (LRV) of a surface: method of test. BS8493:2008. The British Standards Institution, UK
CIBSE (1994) Code for interior lighting. Chartered Institution of Building Service Engineers, London, UK
CIBSE (2008) Hospitals and healthcare buildings, lighting guide 2. Chartered Institution of Building Service Engineers, London, UK
CIE (1997) Lighting needs for the partially sighted. Commission Internationale de l’Eclairage, Vienna, Austria
Dalke H (2002) EPSRC/DETR LINK (2001-2003) Inclusive transport environments: colour design, lighting and visual impairment. Internal report
Dalke H (2004) Inclusive transport environments: colour design, lighting and visual impairment. In: Future integrated transport programme – progress and results. Department for Transport, London, UK (Product code 45TSRLM02251)
Dalke H, Conduit G (2010) The contrast guide. Cromocon, London, UK (in press)
Dalke H, Cook G, Bright K, Camgöz N, Yohannes I, Niemann E (2004a) Future integrated transport environments: colour design, lighting and visual impairment. Department for Transport, London, UK (Internal report)
Dalke H, Littlefair P, Loe D (2004b) Lighting and colour for hospital design. The Stationery Office, London, UK
DDA (2004) Building regulations Part M. Disability Discrimination Act
Douglas G, Corcoran C, Pavey S (2006) Network 1000: opinions and circumstances of VIP people in Great Britain: report based on over 1000 interviews. Visual Impairment Centre for Teaching and Research, University of Birmingham, UK
Grundy E, Ahlburg D, Ali M, Breeze E, Sloggett A (1999) Disability in Great Britain: results from the 1996/97 disability follow-up to the Family Resources Survey. Technical Report 94. Department of Social Security, Leeds, UK
Marshall J (1991) The macular, ageing and age related macular degeneration. In: Marshall J (ed.) The susceptible visual apparatus. Macmillan Press, London, UK
Ramachandran VS, Rogers-Ramachandran DC (1998) Psychophysical evidence for boundary and surface systems in human vision. Vision Research, 38: 71–77
Part III
Inclusive Interaction
Chapter 11 Evaluating the Cluster Scanning System P. Biswas and P. Robinson
11.1 Introduction

Many physically challenged users cannot interact with a computer through a conventional keyboard and mouse. For example, spasticity, amyotrophic lateral sclerosis and cerebral palsy can confine movement to a very small part of the body. People with these disorders may interact with a computer through one or two switches with the help of a scanning mechanism. Scanning is the technique of successively highlighting items on a computer screen and pressing a switch when the desired item is highlighted. We have developed a new scanning system that works by clustering screen objects in a graphical user interface (GUI). Currently we have implemented it for the Microsoft Windows operating system; however, it can be extended to other GUI-based operating systems. In a previous paper we compared the cluster scanning system with other scanning systems using simulation (Biswas and Robinson, 2008a). In this paper we validate the results obtained in simulation by evaluating the cluster scanning system through a controlled experiment with motor impaired users. We describe the scanning systems and the results obtained using simulation in the following two sections. In Section 11.4, we present our study, followed by concluding remarks.
11.2 The Scanning Systems

In this study we compared two technologies: a conventional block scanning system and our new cluster scanning system. A block scanning system iteratively segments the screen into equally sized sub-areas. The user has to select the sub-area that contains the intended target. The segmentation process iterates until the sub-area contains a single target (Figure 11.1).
Figure 11.1. The block scanning system
The cluster scanning system initially collects all possible targets on a screen. Then it iteratively divides the screen into several clusters of targets based on their locations (Figure 11.2). The user has to select the cluster that contains the intended target. The clustering process iterates until the cluster contains a single target.
Figure 11.2. The cluster scanning system
The cluster scanning system works by enumerating the objects shown on the screen and storing the positions of windows, buttons, icons and other possible targets. The algorithm starts by considering all the processes running on the computer. If a process is controlling a window, then the algorithm also considers all child and thread processes owned by it. During the enumeration process, the algorithm identifies the foreground window and stores the positions of the foreground window and the targets within it separately from the background windows. The algorithm also calculates the area occupied by the foreground window. Then it separately clusters the targets in the foreground window and background windows. The ratio of the number of clusters in foreground and background windows is proportional to the ratio of the area occupied by the foreground window in the whole screen. We used the Fuzzy c-means algorithm (Ross, 1997) to cluster the targets into similarly sized groups. The algorithm is similar to the k-means clustering algorithm. The k-means algorithm partitions points into k clusters where each point belongs to the cluster with the nearest mean. This algorithm aims at minimising the following objective function

J = \sum_{j=1}^{k} \sum_{i=1}^{n} \left\| x_i^{(j)} - c_j \right\|^2

where \left\| x_i^{(j)} - c_j \right\|^2 is a distance measure between a data point x_i^{(j)} and the cluster centre c_j. The Fuzzy c-means algorithm returns the membership values of data
points into different clusters instead of putting the data points into separate clusters. As a result, when the data points are not naturally separated, it returns overlapping clusters. The c-means algorithm takes the number of clusters (c) as input. It aims at minimising the following objective function

J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \left\| x_i - c_j \right\|^2, \quad 1 \leq m < \infty

where m is any real number greater than 1, u_{ij} is the degree of membership of x_i in cluster j, x_i is the ith d-dimensional measured data point, c_j is the d-dimensional centre of the cluster, and ||·|| is any norm expressing the similarity between any measured data and the centre. We found, by differentiating task completion time with respect to the number of clusters, that five is the optimum number of clusters. So we clustered the targets into five regions.
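A minimal sketch of this clustering step is given below; it is an illustrative re-implementation of fuzzy c-means on made-up target coordinates, not the authors’ code.

```python
# Illustrative fuzzy c-means clustering of screen target positions into five groups;
# the target coordinates are randomly generated for demonstration only.
import random
import math

def fuzzy_c_means(points, c=5, m=2.0, iterations=100):
    # Random initial membership degrees u_ij, normalised so each row sums to 1
    u = [[random.random() for _ in range(c)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]
    centres = [(0.0, 0.0)] * c
    for _ in range(iterations):
        # Update each cluster centre as the membership-weighted mean of the points
        for j in range(c):
            weights = [u[i][j] ** m for i in range(len(points))]
            total = sum(weights)
            centres[j] = (sum(w * p[0] for w, p in zip(weights, points)) / total,
                          sum(w * p[1] for w, p in zip(weights, points)) / total)
        # Update memberships from the distances to the centres
        for i, p in enumerate(points):
            dists = [math.dist(p, centres[j]) or 1e-9 for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((dists[j] / dists[k]) ** (2.0 / (m - 1.0)) for k in range(c))
    return centres, u

# Example: cluster 40 made-up button positions (x, y in pixels) into five groups
targets = [(random.uniform(0, 1280), random.uniform(0, 800)) for _ in range(40)]
centres, memberships = fuzzy_c_means(targets, c=5)
print(centres)
```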
11.3 Evaluation Through Simulation

We recorded sample interactions by two able-bodied users to generate a list of tasks, which were fed to the simulator (Biswas and Robinson, 2008b). The model for the cluster scanning system takes the scan delay, the number of clusters, the intended target and the total number and positions of targets in a screen as input, and gives the target acquisition time as output by running the cluster scanning algorithm on the input. The model for the block scanning system takes the scan delay, the number of blocks and the total number of targets in a screen as input and gives the target acquisition time as output, which is equal to s \times \log_k n, where

s is the scan delay
k is the number of blocks
n is the number of targets

It should be noted that for the block scanning system, the target acquisition time is constant for any target in the screen. We investigated the block scanning system for different numbers of blocks and different numbers of iterations, and the cluster scanning system for different numbers of clusters. The cluster scanning system performed best when the number of clusters was five. However, among the different versions of the cluster and block scanning processes, we found that a type of block scanning that divided the screen into four equally sized partitions for four iterations performed best. We had expected that the cluster scanning process would perform better since it uses information about target types (e.g. labels are not considered as possible targets) and locations in the clustering process. So we extended the analysis to consider the actual tasks undertaken by our participants. Most of the time our participants used instant messenger software and browsed the World Wide Web.
The present version of the clustering process does not consider locations of hyperlinks in the target acquisition process and so it might miss possible targets during Web surfing. The further study revealed that participants should take less time to complete a task using the cluster scanning system than other scanning systems if the clustering process could include all targets in a screen. Details of this study can be found in our previous paper (Biswas and Robinson, 2008a).
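As a small illustration of the block scanning model used in the simulation, the function below computes the constant target acquisition time s × log_k n; the example values are arbitrary.

```python
import math

def block_scanning_time(scan_delay, num_blocks, num_targets):
    # Target acquisition time for block scanning: s * log_k(n)
    return scan_delay * math.log(num_targets, num_blocks)

# e.g. a 2 s scan delay, 4 blocks per iteration and 64 on-screen targets -> 6 s
print(block_scanning_time(2.0, 4, 64))
```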
11.4 The Experiment

We validated the results obtained in simulation by a controlled experiment with motor impaired users. We discuss the details of the experiment in the following sections.
11.4.1 Procedure, Material and Participants

In this experiment, the participants were instructed to press a set of buttons placed on a screen (Figure 11.3) in a particular sequence. All the buttons were coloured grey except the next target, which was red. The same task was repeated for all the scanning systems. In particular, we evaluated the cluster and block scanning systems. We recorded the cursor traces, target height, width, and task completion time. For internal validity of the experiment, we did not use any scan delay adaptation algorithm. The scan delay was kept constant at two seconds for motor impaired participants and at one second for the control group. These values were selected to exceed their observed reaction times. All participants were trained adequately with the scanning systems before undertaking the experiment.
Figure 11.3. Screenshot of the experiment
We used a push button switch (Super-Switch, 2007) and an Acer Aspire 1640 Laptop with 1280×800 pixel screen resolution. We also used the same seating arrangement (same table height and distance from table) for all participants.
We collected data from eight motor impaired and eight able-bodied participants (Table 11.1). The motor impaired participants were recruited from a local centre, which works on the treatment and rehabilitation of disabled people, and they volunteered for the study. All motor impaired participants used a computer at least once each week. Able-bodied participants were students of our university and expert computer users. None of them had used the scanning systems before.

Table 11.1. List of participants

Participant   Age   Gender   Impairment
C1            27    F        Able-bodied
C2            28    F        Able-bodied
C3            30    M        Able-bodied
C4            30    M        Able-bodied
C5            31    M        Able-bodied
C6            28    F        Able-bodied
C7            30    F        Able-bodied
C8            26    F        Able-bodied
P1            30    M        Cerebral palsy, reduced manual dexterity, wheelchair user
P2            43    M        Cerebral palsy, reduced manual dexterity, also has tremor in hand, wheelchair user
P3            30    M        Dystonia, cannot speak, cannot move fingers, wheelchair user
P4            62    M        Left side (non-dominant) paralysed after a stroke in 1973, also has tremor
P5            44    M        Cerebral attack, significant tremor in whole upper body, fingers always remain folded
P6            46    F        Did not mention disease, hard to grip things, no tremor
P7            >45   F        Spina bifida/hydrocephalus, wheelchair user
P8            >45   M        Cerebral palsy from birth, restricted hand movement, no tremor
11.4.2 Results and Discussion

Initially we measured the total task completion time for the scanning systems (Table 11.2 and Figure 11.4). It can be seen that participants took less time to complete the task using the cluster scanning system. The dotted bars in Figure 11.4 indicate that two participants could not complete the task using the block scanning system. To further investigate the scanning systems, we measured the following three dependent variables:

• Number of missed clicks: we counted the number of times the participants wrongly pressed the switch.
• Idle count: the scanning systems periodically highlight the buttons. This variable measures the number of cycles in which the participants did not provide any input, though they were expected to do so.
• Efficiency: the scanning systems require a minimum time to complete any task, which depends on the particular scanning system and not on the performance of the user. We calculated the efficiency as the ratio OptimalTime / ActualTime.
An efficiency of 100% indicates optimal performance, 50% indicates taking twice the minimal time, and 0% indicates failure to complete the task. Table 11.2 presents the efficiency of each participant.

Figure 11.4. Task completion times (in msec) for the cluster and block scanning systems, for participants P1–P8 and C1–C8
We did not find any significant difference between the performance of motor impaired and able-bodied users using an equal variance t-test at the p < 0.05 level. However, the average number of missed clicks and the idle count were significantly lower for the cluster scanning system than for the block scanning system in an equal variance paired t-test (p < 0.05) (Figure 11.5). Additionally, two participants (P3 and P7) could not complete the task using the block scanning system, while all participants could complete the task using the cluster scanning system.
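For readers who wish to rerun this kind of comparison, the sketch below applies a paired t-test to per-participant missed-click counts; the numbers are invented for illustration, not the experimental data.

```python
# Illustrative paired t-test comparing missed clicks under the two scanning systems;
# the per-participant counts below are made up, not the study's measurements.
from scipy import stats

missed_clicks_cluster = [0, 1, 0, 1, 0, 0, 1, 0]
missed_clicks_block   = [3, 5, 2, 6, 4, 3, 5, 2]

t_stat, p_value = stats.ttest_rel(missed_clicks_cluster, missed_clicks_block)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```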
Figure 11.5. Comparing the scanning systems: average efficiency (0.87 vs. 0.79), average idle count (6.96 vs. 18.59) and average missed clicks (0.42 vs. 3.75) for the cluster and block scanning systems respectively

Table 11.2. Comparing scanning systems

              Cluster scanning system             Block scanning system
Participants  Efficiency  Task completion         Efficiency  Task completion
                          time (in sec.)                      time (in sec.)
P1            0.80        166                     0.71        618
P2            0.92        86                      0.87        125
P3            0.96        73                      0.92        104
P4            0.86        113                     0.15        171
P5            0.85        120                     0.74        107
P6            0.87        107                     0.78        236
P7            0.89        97                      0.86        137
P8            0.83        131                     0.82        165
C1            0.85        58                      0.84        77
C2            0.95        39                      0.84        73
C3            0.83        67                      0.79        105
C4            0.92        43                      0.88        62
C5            0.90        46                      0.89        58
C6            0.85        59                      0.89        68
C7            0.87        53                      0.89        59
C8            0.81        77                      0.82        84
Average       0.87        83                      0.79        141
The simulator predicted that the task completion time would be less with the cluster scanning system than with the block scanning system when the cluster scanning system can consider all possible targets in its clustering process. The experiment shows similar results. The total task completion time, sub-optimal task completion time, idle time and number of missed clicks are less in the cluster scanning system than in the block scanning system. The efficiency of the cluster scanning system can be attributed to the following factors.

• The cluster scanning system does not introduce any new interface element, such as a frame or form, in the screen as the Autonomia (Steriadis and Constantinou, 2002) or FastScanner (Ntoa et al., 2004) systems do.
• The cluster scanning system does not blindly divide the screen into a predefined number of segments as the ScanBuddy (2007) or block scanning systems do. It clusters the targets so that the targets are evenly divided into blocks, and a block is not drawn in a region that does not contain any target.
However, as the optimal task completion time was higher in the block scanning system than in the cluster scanning system, we did not find the difference in efficiency significant in an equal variance t-test (p = 0.07). Our study also confirms the value of automatically evaluating assistive interfaces using a simulator (Biswas and Robinson, 2008a-b). Before running a formal user trial, a system designer may tune interface parameters or select the best design alternative using our simulator. As each alternative design does not need to be evaluated by a user trial, the simulator will reduce the development time significantly.
11.5 Conclusion

We have developed a new scanning system that works by clustering screen objects. In a previous paper we evaluated the cluster scanning system in simulation and found it superior to the block scanning system. In this paper we have presented a study with motor impaired users to evaluate the cluster scanning system in practice and to validate the results of the simulation. We found that the total task completion time, idle time and number of missed clicks are less in the cluster scanning system than in the block scanning system. The results also in turn validate the simulator. So we can infer that motor impaired users found the cluster scanning system faster, easier and more accurate than the conventional block scanning system.
11.6 References

Biswas P, Robinson P (2008a) A new screen scanning system based on clustering screen objects. Journal of Assistive Technologies, 2(3): 24–31
Biswas P, Robinson P (2008b) Automatic evaluation of assistive interfaces. In: Proceedings of the International Conference on Intelligent User Interfaces (IUI’08), Canary Islands, Spain
Ntoa S, Savidis A, Stephanidis C (2004) FastScanner: an accessibility tool for motor impaired users. In: Proceedings of the 9th International Conference on Computers for Handicapped Persons (ICCHP 2004), Paris, France
Ross TJ (1997) Fuzzy logic with engineering applications, international edition. McGraw-Hill, NY, US
ScanBuddy (2007) The ScanBuddy System. Available at: www.ahf-net.com/Scanbuddy.htm (Accessed on 21 May 2007)
Steriadis CE, Constantinou P (2002) Using the scanning technique to make an ordinary operating system accessible to motor impaired users: the Autonomia system. In: Proceedings of the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2002), Paphos, Cyprus
Super-Switch (2007) Available at: http://rjcooper.com/super-switch/index.html (Accessed on 19 November 2009)
Chapter 12 Facets of Prior Experience and Their Impact on Product Usability for Older Users J. Hurtienne, A.-M. Horn and P.M. Langdon
Prior experience is one of the main factors influencing the performance of older adults with technology. Products that better match the prior experience of their users increase the speed and effectiveness of interaction (Czaja and Sharit, 1993; Langdon et al., 2007; Lewis et al., 2008; Blackler et al., 2009; Fisk et al., 2009). These findings suggest that in order to design successful and usable products, designers need to establish the level of prior experience in their respective target user groups. Unfortunately, the concept of technology experience is ill-defined and used inconsistently across studies (Smith et al., 1999; Garland and Noyes, 2004). A number of different definitions and operationalisations of experience exist (e.g. Thompson et al., 1994; Potosky and Bobko, 1998; Smith et al., 1999; Garland and Noyes, 2004; Smith et al., 2007), but the differing power of these operationalisations to predict the usability of products for older users has rarely been investigated systematically. This study seeks to fill that gap by exploring the impact of different such operationalisations of experience on the usability of an interactive system. It is argued that technology experience as it relates to inclusive design has at least three components. It is proposed that two of these components, exposure and competence, are directly relevant for the current discussion about prior experience in inclusive design and that they can predict to different degrees the usability of a product for older users. In an empirical study these facets of expertise are each operationalised on three levels of specificity, their impact on usability is assessed, and implications for future research are drawn.
12.1 Facets and Measures of Experience

12.1.1 Experience as Exposure and Competence

Experience with technology assumes different operationalisations in the literature (Potosky and Bobko, 1998; Smith et al., 1999; Garland and Noyes,
2004). From the multitude of operationalisations three different components emerge: exposure, competence, and subjective feeling. Exposure to technology is split into at least three subcomponents: duration of use, intensity of use, and diversity of use. Duration of use describes the length of time a product has been used and can be measured as the number of months or years the product has been used. Intensity of use describes the frequency with which a product is used and can be measured in hours per week. Diversity of use describes the number of different functions used or tasks solved with the product. These three measures are not necessarily correlated. A person that has used a product for a long time (high duration) may have done so only sporadically (low intensity) while using only one specific function of the product (low diversity). Using only one measure of exposure could therefore considerably distort the outcome of a study on technology experience. Therefore, these measures of exposure are often combined and sometimes accompanied by other measures of exposure, including the opportunity to use the product or an indirect measure of exposure to information about the product (Smith et al., 1999). The second component of experience, competence with technology, describes the level of skills and knowledge required for interacting with a product. Competence can be measured via self-assessment or objective tests. Examples of self-assessment include one-item statements (e.g. “How well do you think you can handle the product?”) or standardised multi-item questionnaires asking about different areas of user competences (e.g. self-efficacy with technology, Beier, 2004). Examples of objective tests include a simple test of typing skills (cf. Czaja and Sharit, 1993) or a knowledge test about terms and symbols commonly used in computer interfaces (Sengpiel and Dittberner, 2008). The third component of experience, subjective feeling, is about the actual user experience when using the product, i.e. the users’ private feelings and thoughts when interacting with technology (Smith et al., 1999, 2007). While all three components, exposure, competence, and subjective feeling can be looked at as preconditions or outcomes of interacting with technology (e.g. Thompson et al., 1994; Smith et al., 1999, 2007), exposure and competence are more often treated as preconditions and subjective feeling is more often treated as a result of interacting with a product (cf. Schifferstein and Hekkert, 2007). The focus here is on exposure and competence as preconditions for inclusive interaction. How do exposure and competence relate to usability, the effectiveness, efficiency and satisfaction in use? Often, expertise is only operationalised in terms of exposure to technology and a subsequent influence on usability is assumed. More likely, however, is that exposure influences usability via the build up of skills and knowledge forming the competence of a user to interact successfully. Hence, it is expected that the influence of competence on usability should be direct and greater than the influence of exposure on usability (cf. Thompson et al., 1994).
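To make the exposure subcomponents concrete, the sketch below shows one possible way of combining duration, intensity and diversity into a single exposure score; the scales, caps and equal weighting are illustrative assumptions, not the operationalisation used in this study.

```python
# Illustrative combination of the three exposure subcomponents (duration, intensity,
# diversity); the normalisation caps and equal weighting are assumptions for
# demonstration, not the study's measures.
def exposure_score(duration_months, hours_per_week, functions_used, max_functions):
    duration = min(duration_months / 120.0, 1.0)   # cap at 10 years of use
    intensity = min(hours_per_week / 20.0, 1.0)    # cap at 20 hours per week
    diversity = functions_used / max_functions
    return (duration + intensity + diversity) / 3.0

# A long-time but sporadic, single-function user scores lower than duration alone suggests
print(exposure_score(duration_months=96, hours_per_week=1, functions_used=1, max_functions=10))
```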
12.1.2 Levels of Specificity

When considering prior experience, whether exposure or competence, it is important to determine the level of specificity on which prior experience with
technology has the greatest impact on usability. At least three levels can be differentiated: the prior exposure and competence (1) with the product in focus, (2) with other products of the same type, and (3) with a broad range of products of different types. Exposure to the same product should impact usability the most. Users have gained skills and knowledge in operating the product and they bring these to future uses of the product. On the next level of specificity, the experience with other products of the same type will also contribute to usability as certain operations (e.g. cut and paste in different software user interfaces) are ubiquitous across devices and the skills and knowledge gained can be transferred across products. Finally, exposure to and competence in using technical devices on a more general level may impact usability to a lesser extent because the amount of transferable interaction knowledge may be lower.
12.2 Investigating the Impact of Exposure, Competence and Specificity on Usability

With a sample of older adults, the effects of the different facets of technology experience on the usability of ticket vending machines (TVM) were investigated. The study had the following objectives. First, to investigate separately the effects exposure and competence have on product usability. As discussed above, it is expected that the more direct effects of competence on usability are higher than the more indirect effects of exposure on usability. Second, to disentangle the effects of different levels of specificity of technology experience. At the highest level of specificity, the prior experience with the ticket vending machine is taken into account. As the TVM was operated via direct manipulation on a touch screen display, and interaction was similar to interacting with a computer, computer experience was measured at a level of medium specificity. Finally, at the lowest level of specificity, experience with vending machines and electronic devices in general was measured. It is expected that the more specific the operationalisation of experience, the higher its impact on measures of usability. Third, to investigate how the impact of prior experience on usability changes with a redesign of the original ticket vending machine following inclusive design principles. It is expected that a redesigned ticket vending machine will reduce the impact of relevant experience on usability.
12.2.1 Method

12.2.1.1 Product Simulations

Two versions of a ticket vending machine were used. The first version was a simulation of the original TVM of the Berlin public transport company BVG. The second was a redesign of the original TVM applying inclusive design principles. The redesigned version required less working memory capacity of the
user and it reduced the need for familiarity with interaction principles and symbols frequently found in computer user interfaces. The need for visual search was reduced and the task of purchasing a ticket was broken down into a number of manageable steps named Who? Where to? How long? How many? and Pay (for more details see Horn, 2009; Sengpiel and Wandke, submitted). The two versions of the TVM differed in appearance and menu structure but not in functionality. They were built in Squeak/Smalltalk and were used with a 20 inch touch screen monitor.

12.2.1.2 Participants and Experience Levels

Fifty-eight older adults took part in the study (34 female), of which 28 interacted with the original version (19 female) and 30 interacted with the redesigned version of the ticket machine (15 female). Descriptive statistics of age and facets of experience are shown in Table 12.1.

Table 12.1. Participants’ age and prior experience with technology

Variable         Mean    SD     Median   Min   Max   Original ≠ Redesign?
Age              61.72   7.54   63.00    50    76    O=R
Expo-TVM         4.91    3.50   5.00     0     15    O=R
Expo-Computer    12.60   6.39   13.00    0     24    O=R
Expo-Devices     22.98   4.57   22.50    14    34    O=R
Comp-TVM         3.73    1.61   4.00     1     7     O=R
Comp-Computer    18.94   7.02   20.00    1     30    O=R
Comp-Devices     34.40   8.61   36.00    13    51    O=R
Exposure (Expo) was measured at all three levels of specificity. At the most specific level, participants indicated which of a list of 18 different tickets they had bought at BVG ticket machines and which of five different functions of the ticket machine they had already used. This resulted in a diversity measure ranging from 0 to 23. At the level of medium specificity, participants indicated how often (from ‘never’ to ‘frequently’) they are engaged in eleven computer-related activities ranging from word processing to online-banking. This resulted in a combined diversity x intensity measure ranging from 0 to 33. At a low specificity level, participants indicated how often (from ‘never’ to ‘frequently’) they use each of thirteen devices from ATMs and vending machines to information systems and gaming machines. This resulted in a combined diversity x intensity measure ranging from 0 to 52. Competence (Comp) was also measured at all three levels of specificity. At the most specific level participants indicated on a 7-point Likert item how well they thought they could handle the BVG ticket machine. At the level of medium specificity participants completed a standardised test of computer literacy for older
adults (Sengpiel and Dittberner, 2008) with a score ranging from 0 to 30. At a low specificity level, participants filled in a standardised questionnaire about their self-efficacy with interactive technology (Beier, 2004) with a possible score ranging from 12 to 60. Participants using the original version of the TVM did not differ significantly in their age and experience from participants using the redesigned version of the TVM (Table 12.1, last column).

12.2.1.3 Procedure and Measurement of Dependent Variables

After giving their informed consent, participants filled in a set of questionnaires asking them about their familiarity with the tariff system of the BVG and their experience with the BVG TVM. Participants were then assigned randomly to the original or the redesigned version of the ticket machine. In 12 realistic tasks participants purchased pre-selected tickets from the machine. After this they filled in another set of questionnaires, asking for their evaluation of the system as well as their experience with computers and a wider range of devices. The whole session lasted about 90 minutes on average, of which 25 minutes were spent interacting with the TVM. For a more detailed description of the study and the range of other data collected see Horn (2009).

Usability was measured according to ISO 9241-11 (ISO, 1998) as the effectiveness, efficiency, and satisfaction of use. Effectiveness was measured as the percentage of correctly solved tasks. Efficiency was measured as the time and steps taken in solving the tasks. To enable comparisons between the two TVM versions, both efficiency measures were transformed to percentages. An efficiency score of 50% means that a participant took twice the time of the best participant or twice the steps that were minimally necessary for the specific TVM version. Satisfaction was measured in terms of the subjective consequences of intuitive use (questionnaire QUESI, Hurtienne et al., 2009) and of the seven dialogue principles specified in ISO 9241-110 (questionnaire ISO 9241/110-S, Prümper and Regelmann, submitted).
12.2.2 Results

As the measures of experience were significantly intercorrelated, multiple regression was used to assess the unique contributions of the different facets of technology experience to predicting levels of usability. As the small sample size only allowed a small number of predictors to be entered into each regression model to yield meaningful results (cf. Field, 2009), the analyses proceeded stepwise. First, the effects of exposure on usability and of competence on usability are investigated separately. Then the strongest predictors from these analyses are entered into a third analysis determining the combined effect of exposure and competence on usability measures. Preliminary analyses were conducted to ensure no violations of the assumptions of normality, linearity, and multicollinearity. As a consequence, efficiency/time measures were logarithmically transformed and analyses were computed on these transformed measures.
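As an illustration of this analysis strategy, the sketch below fits one regression model on z-standardised variables so that the coefficients can be read as β weights; the simulated data frame stands in for the study’s measurements.

```python
# Hedged sketch of a standardised multiple regression (beta weights) in the spirit
# of the analyses reported below; the data are simulated, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 28
data = pd.DataFrame({
    "expo_tvm":      rng.normal(5.0, 3.5, n),
    "expo_computer": rng.normal(12.6, 6.4, n),
    "expo_devices":  rng.normal(23.0, 4.6, n),
})
# Simulated usability outcome (e.g. log-transformed efficiency/time)
data["log_time_efficiency"] = 0.5 * data["expo_computer"] + rng.normal(0.0, 5.0, n)

# z-standardise everything so the OLS coefficients are standardised beta weights
z = (data - data.mean()) / data.std()
X = sm.add_constant(z[["expo_tvm", "expo_computer", "expo_devices"]])
model = sm.OLS(z["log_time_efficiency"], X).fit()
print(model.rsquared)   # R-squared
print(model.params)     # standardised beta coefficients
```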
12.2.2.1 Effects of Exposure on Usability

Exposure to technology, on different levels of specificity, explained up to 46% of the variance in usability measures (R², Table 12.2). Overall, prior exposure to technology had the highest impact on efficiency/time, some impact on efficiency/steps and effectiveness, but less on satisfaction measures. Significant effects (β weights) were only found at the medium level of technology specificity: exposure to computers. The redesigned version of the TVM was different from the original version in that prior exposure to technology played almost no role, except for a weak effect of exposure to computers on efficiency/time.

Table 12.2. Effects of exposure on usability. Multiple regression results show standardised β coefficients. Significant β values: *** p < .001, ** p < .01, * p < .05, m p < .10

                            Effectiveness   Efficiency          Satisfaction
                            % solved        Time      Steps     QUESI    ISO
Original   R²               .24m            .46**     .43**     .22      .32*
           βExpo-TVM        -.07            .05       .07       .23      -.10
           βExpo-Computer   .52*            .61***    .49**     .25      .53**
           βExpo-Devices    -.10            .12       .27       .13      .18
Redesign   R²               .08             .30*      .08       .04      .07
           βExpo-TVM        -.08            .25       .24       .19      .22
           βExpo-Computer   .33             .40m      .06       -.14     -.29
           βExpo-Devices    -.06            -.09      -.16      .13      .03
Combined   R²               .16*            .38***    .13       .09      .05
           βExpo-TVM        -.10            .15       .11       .16      .09
           βExpo-Computer   .45**           .54***    .30m      .11      .20
           βExpo-Devices    -.08            .01       .01       .13      .03
12.2.2.2 Effects of Competence on Usability

Competence with technology, on different levels of specificity, explained up to 68% of the variance in usability measures (Table 12.3). Competence with technology had the highest impact on efficiency/time, and a medium impact on effectiveness and satisfaction measures. Significant effects (β weights) were found at the highest and medium levels of technology specificity: prior competence with the ticket machine and with computers. The redesigned version of the TVM was different from the original version in that competence with technology again played a less prominent role. In general, the effects of competence were numerically higher than the effects of exposure on usability measures (Table 12.2).
Table 12.3. Effects of competence on usability. Multiple regression results show standardised β coefficients. Cells with significant β values are shown in shades of grey, *** p < .001, ** p < .01, * p < .05, m p < .10.

                   Effectiveness   Efficiency        Satisfaction
                   % solved        Time      Steps   QUESI   ISO
Original
  R²               .33             .57**     .41*    .49**   .68***
  βComp-TVM        -.02            .34m      .22     .39*    .45**
  βComp-Computer   .42             .60**     .43     .51*    .67**
  βComp-Devices    .25             .16       .27     .15     .07
Redesign
  R²               .20             .27m      .02     .22     .14
  βComp-TVM        -.14            .16       .04     .42*    .24
  βComp-Computer   .42             .34       -.01    -.25    -.38
  βComp-Devices    .11             .19       -.13    .29     .24
Combined
  R²               .27**           .40***    .11     .25**   .15m
  βComp-TVM        -.09            .21m      .05     .33*    .22
  βComp-Computer   .41**           .45**     .29m    .17     .20
  βComp-Devices    .19             .22       .07     .23     .16
12.2.2.3 Joint Effects on Usability To assess the combined effect of exposure and competence on usability, the strongest predictors from the previous two analyses were combined in the regression models (Table 12.4). The percentage of explained variance did not increase compared to the analysis including competence measures only (Table 12.3). Also, the most influential predictors are measures of competence, with exposure contributing much less to usability (except for the effect on efficiency/time in the combined sample). Again, the redesigned version of the TVM was different from the original version in that prior experience with technology played a less prominent role.
12.2.3 Discussion Referring back to the research questions: what answers do the empirical data give? First, it was expected that the effects of competence on usability would be higher than the effects of exposure on usability because of the more direct links between competence and interaction behaviour. This hypothesis was supported by the data. Competence measures were related more strongly to usability than were exposure measures. But there is a differential effect too: competence measures better predict
satisfaction and exposure measures better predict effectiveness. However, in a combined analysis this difference vanishes, as competence measures were overall stronger (Table 12.4). Table 12.4. Effects of the strongest exposure and competence measures on usability. Multiple regression results show standardised β coefficients. Cells with significant β values are shown in shades of grey, *** p < .001, ** p < .01, * p < .05, m p < .10.
                   Effectiveness   Efficiency        Satisfaction
                   % solved        Time      Steps   QUESI   ISO
Original
  R²               .30             .60**     .42*    .49**   .67***
  βComp-TVM        -.02            .34*      .23     .38*    .45**
  βComp-Computer   .47             .37       .20     .78*    .72**
  βExpo-Computer   .08             .38       .45     -.25    -.01
Redesign
  R²               .19             .29       .04     .15     .09
  βComp-TVM        -.15            .15       .05     .41m    .23
  βComp-Computer   .50             .20       -.26    -.14    -.24
  βExpo-Computer   -.05            .31       .28     .01     -.06
Combined
  R²               .24**           .44***    .13     .21*    .13
  βComp-TVM        -.10            .21       .05     .31*    .21
  βComp-Computer   .48*            .24       .15     .31     .27
  βExpo-Computer   .03             .40*      .23     -.05    -.01
Second, it was expected that the more specific the operationalisation of experience, the higher its impact on measures of usability. This hypothesis is partly supported by the data. While the lowest level of specificity (experience with a broad range of devices) has no effect on usability measures, it seems that experience with a similar, more general type of technology has more influence than experience with the device itself. This points to the importance of also looking at the next more general level of users' experience (here, computer experience). Third, it was expected that the redesigned ticket vending machine would reduce the impact of the relevant experience measures on usability. This hypothesis was clearly confirmed by the data. The influence of prior experience, be it exposure or competence, on usability was greatly reduced in the redesigned version of the TVM, showing the power of applying well-chosen principles for inclusive interaction to counter the need for prior experience. Encouraging as these results seem, some possible limitations need to be considered. First, the analysis may not be complete because some of the experience variables are (negatively) correlated with age and age should therefore be included in the regression models. However, including age in the above regression analyses
does not change the results, i.e. age has no additional value in predicting usability. The only exception is the contribution of age to efficiency/time, which amounts to a 6 to 8% change in the explained variance compared to the same regression models without age as a predictor. This finding can be seen as a replication of earlier studies of the effect of age on interaction speed (Lewis et al., 2008; Blackler et al., 2009; Fisk et al., 2009). Second, as the results were obtained from using two versions of a ticket vending machine, it is not clear how far they can be generalised to a larger range of interactive products. Previous research has shown that the effects of age and prior experience on interaction performance can vary according to the type of product (Langdon et al., 2007; Lewis et al., 2008; Mieczakowski et al., 2009). Clearly, replications of these findings with products from other domains are needed.
12.3 Conclusion This study presents a step towards acknowledging the different facets of prior experience of older adults in using technology and the effects of these facets on product usability. It is proposed that prior experience has at least three components that require separate consideration: exposure to technology, competence with technology, and subjective feeling. Exposure and competence were measured at different levels of specificity and their combined effect on usability was investigated. The results suggest that in predicting the usability of a product for older users the frequently applied measures of exposure may not be as predictive as measures of competence. Designers and researchers can gain from viewing prior experience with technology in terms of user competence, especially with the same type of product or similar technology.
12.4 References Beier G (2004) Kontrollüberzeugungen im Umgang mit Technik: Ein Persönlichkeitsmerkmal mit Relevanz für die Gestaltung technischer Systeme. Humboldt University, Berlin, Germany Blackler A, Mahar D, Popovic V (2009) Intuitive interaction, prior experience and ageing: an empirical study. In: Proceedings of the 23rd BCS Conference on Human Computer Interaction (HCI 2009), Cambridge, UK Czaja S, Sharit J (1993) Age differences in the performance of computer-based work. Psychology and Aging, 8: 59–67 Field A (2009) Discovering statistics using SPSS. Sage, Los Angeles, CA, US Fisk A, Rogers W, Charness N, Czaja S, Sharit J (2009) Designing for older adults: principles and creative human factors approaches. CRC Press, Boca Raton, FL, US Garland K, Noyes J (2004) Computer experience: a poor predictor of computer attitudes. Computers in Human Behavior, 20: 823–840 Horn A-M (2009) Validierung des Questionnaire for Intuitive Use am Beispiel der Benutzung zweier Versionen des BVG-Fahrkartenautomaten [Unpublished Master’s Thesis]. Humboldt University, Berlin, Germany
Hurtienne J, Dinsel C, Sturm C (2009) QUESI – design and evaluation of a questionnaire for the subjective consequences of intuitive use [Unpublished manuscript]. Fachgebiet Mensch-Maschine Systeme, Technical University, Berlin, Germany ISO (1998) ISO 9241-11: ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: guidance on usability. International Organization for Standardization, Geneva, Switzerland Langdon PM, Lewis T, Clarkson PJ (2007) The effects of prior experience on the use of consumer products. Universal Access in the Information Society, 6: 179–191 Lewis T, Langdon PM, Clarkson PJ (2008) Prior experience of domestic microwave cooker interfaces: a user study. In: Langdon PM, Clarkson PJ, Robinson P (eds.) Designing inclusive futures. Springer, London, UK Mieczakowski A, Langdon P, Clarkson PJ (2009) Modelling product-user interaction for inclusive design. In: Stephanidis C (ed.) Universal access in human-computer interaction: addressing diversity. Springer, Berlin, Germany Potosky D, Bobko P (1998) The computer understanding and experience scale: a self-report measure of computer experience. Computers in Human Behavior, 14: 337–348 Prümper J, Regelmann N (submitted) Der ISONORM 9241/110-S: Kurzfragebogen zur Software-Ergonomie. Methodische Reflektionen zum Design der Itemanalysen Schifferstein H, Hekkert P (2007) Product experience. Elsevier Science, San Diego, CA, US Sengpiel M, Dittberner D (2008) The computer literacy scale (CLS) for older adults – development and validation. In: Herczeg M, Kindsmüller MC (eds.) Mensch & Computer 2008: Viel Mehr Interaktion. Oldenbourg, Munich, Germany Sengpiel M, Wandke H (submitted) Age differences in computer interaction knowledge – effects and compensation exemplified by the use of a ticket vending machine Smith B, Caputi P, Crittenden N, Jayasuriya R, Rawstorne P (1999) A review of the construct of computer experience. Computers in Human Behavior, 15: 227–242 Smith B, Caputi P, Rawstorne P (2007) The development of a measure of subjective computer experience. Computers in Human Behavior, 23: 127–145 Thompson R, Higgins C, Howell J (1994) Influence of experience on personal computer utilization: testing a conceptual model. Journal of Management Information Systems, 11: 167–187
Chapter 13 Investigating Designers’ Cognitive Representations for Inclusive Interaction Between Products and Users A. Mieczakowski, P.M. Langdon and P.J. Clarkson
13.1 Introduction Widespread population ageing (Keates and Clarkson, 2004) has increased the importance of considering older and disabled people in the design process and reducing their exclusion from product usage. Accordingly, approaches such as ‘inclusive design’, ‘universal design’ and ‘design for all’ have been developed to help designers to include different user groups in design. Inclusive design research seeks to understand users’ abilities, needs and expectations, strives to relate the whole range of users’ capabilities to the properties of product interfaces, and aims to optimise design for maximum accessibility and minimise the effort required in using products. Companies are now starting to adopt a more inclusive approach to design, led by a twofold moral and financial incentive. Previous research has shown that designers can provide good interaction in products by paying more attention to mental models of users, prior tacit knowledge, interface metaphors, affordances and mappings (Krippendorff, 1989; Norman, 2002). It is believed that inclusive interaction in products can be achieved by means of minimising the sensory, cognitive and motor effort required for product usage, facilitating simplicity and intuitiveness, providing perceptible information and enhancing user satisfaction (Langdon et al., 2010). To support designers in designing more inclusively, tools that foster understanding of sensory, motor and cognitive difficulty with product interactions are required. While the most valid technique will always be to have users interact with the product in question, a variety of constraints mean that this will not always be possible. The use of simulation kits in conveying the effects of sensory and motor impairments to designers was found to be very helpful by researchers in inclusive design (Cardoso et al., 2004). However, interaction with products also involves certain mental demands, and the cognitive representations of products that users form in their heads are more difficult to understand and accurately simulate than their sensory or physical capabilities (Mieczakowski et al., 2009).
In order to investigate how designers currently cope with understanding product representations that users form in their heads and employ knowledge about users to facilitate inclusive product-user interaction, we conducted twenty interviews with product designers from a large UK company, which champions the ethos of inclusive design. The study found that designers know about different inclusive design tools but do not use any of them systematically, because they do not have the capacity to represent and match how different users interpret and use a given product with how designers intend that product to be understood and used. Among other major reasons for not designing inclusively are time and cost constraints, which lead to restricted access to real users, reuse of previous designs and prioritisation of functional integration before inclusivity. However, the findings also indicate that designers need a support tool that would: (1) capture information from a number of different users on how they interpret and use a given product; (2) amalgamate that information, compare any areas of commonality and diversity and create a common model of users’ understanding and behaviour; and (3) compare the users’ common model of a given product with the designers’ functional model and make appropriate design decisions based on similarities and differences between these two models. Before the results of twenty interviews are discussed in more detail, it is first necessary to investigate the nature and structure of human cognitive processing and people’s experience with previously encountered products to better understand what happens in users’ heads when they interact with products. Also, it is essential to consider the different approaches to design that many companies currently adopt.
13.1.1 Cognitive Processes Substantial research effort has been expended in the fields of cognitive and engineering psychology that helps us to understand what types of cognitive processes occur while people interact with everyday products (Baddeley, 2000; Wickens and Hollands, 2000). The results of those studies showed that to better understand the cognitive activity of users and ultimately to design more accessible and usable products, designers should be better informed about the cyclic nature of interaction. In essence, when a user interacts with a product, there is a cyclic process of perceiving, thinking, recognising, acting and evaluating actions (Monk, 1998). A simplified model of cognition in Figure 13.1 shows how the main elements of cognition interact both with one another and in the wider context of cognitive processing (Baddeley, 2000; Persad et al., 2007). Initially, the sensory organs perceive an object from the real world. Perception is then responsible for analysing and processing the incoming sensory information and modifying knowledge on the basis of that information. Working memory consists of a number of modal stores and an executive function. The executive function manages working memory in retrieving long-term memories and deciding to act on the selected stimuli, directing attentional resources to focus on the most informative parts of the stimuli, and initiating actions and reasoning. Working memory has a limited capacity for holding and manipulating temporary items of information at one time
and it often has to refer to long-term memory in order to match the selected stimuli (Miller, 1956; Baddeley, 2000). Once the scene has been matched, it can be grouped with objects of similar physical properties and functional attributes into categories in memory and named. If the stimuli have been experienced before, then the information about them is likely to affect the speed and efficiency of acting upon given product features. It has been shown that ageing and certain impairments can have profound effects on working memory, long-term memory, perception, attentional resources and reasoning (Rabbitt, 1993; Freudenthal, 1999).
Figure 13.1. A simplified model of cognition incorporating perception, working memory, executive function, attentional resources, long-term memory and similarity matching
13.1.2 Prior Experience When people use new products their cognitive activity is greatly influenced by their prior experiences with other products. Therefore, to design more inclusively designers should exploit what people already know and subsequently base the appearance and behaviour of new interface features on previously well-learnt and transferable features, as well as clearly identify key visual features associated with function (Blackler, 2006; Langdon et al., 2007). Also, designers should refrain from changing the function of a familiar feature from one model to another as this may cause a certain amount of confusion and the inability of experienced users to learn the ‘new’ function of that feature. Krippendorff (1989) believes that everyday products should “make sense” to users and self-contain instructions as to their use. The researchers of the continuum of knowledge sources, Hurtienne and Blessing (2007), argue that user interfaces that tap into subconscious use of primitive linguistic schema in prior knowledge are significantly more intuitive to use. However, Docampo-Rama (2001) stresses that different generations of users
have varied frequency and level of exposure to technology and the range of skills that they have available to deploy. In particular, the results of Docampo-Rama’s (2001) study on users’ technological familiarity show that modern symbols and layered (multi-window and menu) computer interfaces are more familiar and suited to the interactional processes of people under the age of 25. In comparison, older users interact more slowly with unfamiliar products, make more errors, and a higher cognitive burden is reported to slow down their reaction time (Freudenthal, 1999). Furthermore, a recent study showed that the absence of prior experience with a new product interface leads users of all age groups to adopt the ‘trial and error’ method when interacting with that product (Langdon et al., 2007, 2009). It is believed that people use this method as a coping strategy when they have not yet developed stored long-term automatic processes and chunked procedures in working memory. However, the ‘trial and error’ method is perceived as slow, repetitive and error-prone. Overall, prior experience positively affects users’ interaction with products because they can carry out tasks on a product interface in a more intuitive, faster and less error-prone manner.
13.2 Design Practice A great emphasis is given to understanding how people think of and use everyday products and subsequently devising ways in which that information can be used to inform the decisions of designers. However, to create more usable support tools, it is necessary to have some idea of how design decisions are currently made. Previous research shows that designers often design products for people like themselves by relying on their own intuition, experience and self observation. Many designers also rely heavily on user information supplied by the client even if that information is often of limited and dubious quality. Consequently, the target cognitive capabilities anticipated by designers and their clients are largely unaffected by age-related cognitive impairment, which in turn results in high demand products (Blackler, 2006; Langdon et al., 2007). Furthermore, poor consideration of users’ capabilities is often a result of designers’ fear that some inclusive design methods may constrain creativity and of a variety of organisational, technical, financial and legislative constraints (Dong, 2005). A wide range of methods have already been developed by researchers in inclusive design to inform the decisions of designers (Stanton et al., 2005). However, a recent survey (Goodman-Deane et al., 2010) found that there is limited uptake of inclusive design materials in design practice mainly due to a poor fit between the structure of many of those materials and the ways in which designers think and work. Also, there is a significant lack of quick-to-use and understandable tools that would raise designers’ awareness of how people interpret and use different product features. The aforementioned survey focused primarily on companies which do not promote and implement inclusivity in their products and services. To complement this previous study, we carried out a study with designers from a large telecommunications company, which champions the ethos
of inclusive design, in order to investigate whether this company really puts the principles and materials of inclusive design into practice. The company studied is located in the UK and it has a lot of experience in product design and consistently produces high quality, usable and competitive products for the UK and global markets.
13.3 Methodology and Analysis The semi-structured style of interviewing was chosen for the collection of data as it was assumed that the interview should be a two-way process taking account of both the areas that are of particular interest to the interviewer and the interests and responses of the interviewee. Accordingly, twenty semi-structured interviews were conducted with product designers in order to gather their views on the following three research questions:
1. When and how is data about user capabilities collected?
2. Do you transfer any collected user information into the design of your products and services?
3. Do you currently use any inclusive design methods and/or tools?
All but three of the interviewees were male. They had different levels of education and over five years of experience in product design. The key stakeholders, recruited from across the whole organisational structure, included: (1) three innovation designers, who deliver concept and ideas for a new product; (2) five interaction designers, who are responsible for designing that product; (3) eight software and hardware designers, who build the product; and (4) four technical managers, who test the product’s functional integrity, accessibility and ease of use. The interviews were approximately one hour long and were conducted in different parts of the UK. Each interview was recorded and subsequently transcribed.
13.3.1 General Inductive Analysis General inductive analysis stipulates that the collected interview data be analysed through coding. This approach contains a straightforward set of procedures and, although it is similar to grounded theory, it is less time consuming because it does not separate codes into open and axial coding and thus it limits its theory building to the most dominant and repetitive codes inherent in raw data (Thomas, 2006). Transcripts from the interviews with designers were closely read several times to identify segments with the most meaningful units. Subsequently, a label for the general category of each segment was assigned. After further analysis, the existing segments were subdivided into more detailed codes. For example, a segment initially assigned with the ‘cost-related’ label was later subdivided into ‘low cost’, ‘design reuse’ and ‘feature and cost comparison’ codes. All data was also annotated with researchers’ own interpretations and comments. An important consideration in developing a coding framework is to use more than one
independent researcher to identify and examine the emerging code structures. Unfortunately, it was not possible to include another code reviewer and so the identification of the code structures may suffer from this bias. Overall, thirty-seven overarching codes were identified and organised in a tabular form in a spreadsheet. Examples of codes include ‘time restrictions’, ‘low cost’, ‘lack of awareness’, etc.
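As a rough illustration of how such a coding frame can be organised, the following hypothetical sketch shows the subdivision of an initial category into more detailed codes and a simple per-code tally of the kind that might be kept in a spreadsheet; only the ‘cost-related’ sub-codes are taken from the chapter, while the other grouping and the segment data are invented.

```python
# Hypothetical sketch of a general inductive coding frame and tally.
# Only 'cost-related' and its sub-codes come from the chapter; the
# other grouping and the coded segments are invented for illustration.
from collections import Counter

coding_frame = {
    "cost-related": ["low cost", "design reuse", "feature and cost comparison"],
    "process-related": ["time restrictions", "lack of awareness"],  # assumed grouping
}

# Hypothetical coded segments: (transcript, code) pairs
segments = [
    ("designer_01", "time restrictions"),
    ("designer_01", "low cost"),
    ("designer_02", "design reuse"),
    ("designer_03", "low cost"),
]

tally = Counter(code for _, code in segments)
for code, n in tally.most_common():
    print(f"{code}: {n} segment(s)")
```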
13.3.2 Factor Analysis The purpose in using factor analysis is “to summarise the interrelationships among the variables in a concise but accurate manner as an aid in conceptualisation” (Gorsuch, 1999). Therefore, factor analysis was used to reduce the observed thirty-seven variables by correlation, to detect structure in the relationships between different variables and to classify them into a few underlying factors. 13.3.2.1 Total Variance Total variance shows all the factors extracted from the analysis along with their eigenvalues, the percentage of variance attributable to each factor, and the cumulative variance of the factor and the previous factors. As shown in Table 13.1, the first factor accounts for 20.394% of the total variance; together with the second, third, fourth and fifth factors the cumulative variance rises to 33.610%, 44.550%, 53.436% and 61.179% respectively. These values indicate that the extracted factors account for approximately 61% of the variance in the data, while the remaining 39% of the variance remains unanalysed.

Table 13.1. Total variance explained

                 Initial eigenvalues                     Extraction sums of squared loadings
Factor   Total    % of Variance   Cumulative %    Total    % of Variance   Cumulative %
1        7.546    20.394          20.394          7.546    20.394          20.394
2        4.890    13.216          33.610          4.890    13.216          33.610
3        4.048    10.941          44.550          4.048    10.941          44.550
4        3.288     8.886          53.436          3.288     8.886          53.436
5        2.865     7.743          61.179          2.865     7.743          61.179
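For readers unfamiliar with this output, the short sketch below (not the study’s SPSS run, and using invented data) shows how eigenvalues and the percentages of variance explained, of the kind reported in Table 13.1, can be derived from a correlation matrix of the coded interview variables.

```python
# Illustrative sketch (hypothetical data, not the study's analysis):
# eigenvalues of a correlation matrix and the % of variance explained.
import numpy as np

def variance_explained(data):
    """data: observations x variables array."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]     # descending order
    pct = 100.0 * eigvals / eigvals.sum()                 # eigenvalues sum to the number of variables
    return eigvals, pct, np.cumsum(pct)

# Hypothetical variable matrix: 20 interviews x 37 coded variables
rng = np.random.default_rng(1)
data = rng.random((20, 37))
eigvals, pct, cum = variance_explained(data)
print(eigvals[:5], cum[:5])  # top five factors, cf. Table 13.1
```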
13.3.2.2 Scree Plot The scree plot has been used to look at the factors that can clearly account for the factor loadings. Beyond five factors, numerous factor loadings are less than 0.5 and are distributed across multiple factor solutions making them difficult to interpret. Consequently, this paper describes only the top five factors retained as they have the most significant values and account for 61% of the variance in the data.
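A scree plot of this kind can be sketched directly from the eigenvalues in Table 13.1; the snippet below assumes matplotlib is available and is only an illustration of the plotting step, not the study’s original figure.

```python
# Illustrative sketch: scree plot of the first five eigenvalues (Table 13.1).
import matplotlib.pyplot as plt

eigenvalues = [7.546, 4.890, 4.048, 3.288, 2.865]  # from Table 13.1
plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot (first five factors)")
plt.show()
```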
13.3.2.3 Component Matrix The component matrix contains the unrotated loadings of thirty-seven variables for the top five extracted factors. Factor loadings are the weights and correlations between each variable and its corresponding factor. Variables with the highest loadings are the most relevant in defining the dimensionality of each factor, while variables with negative values indicate an inverse impact on the factor that they are assigned to. Table 13.2 shows the component matrix with the variables that have the highest absolute value of the loading and mostly contributed to the five factors.

Table 13.2. Component matrix
Variable                                       Factor 1   Factor 2   Factor 3   Factor 4   Factor 5
Time restrictions                                          .555
Low cost
Lack of awareness                              -.589
Gut feeling                                                           .514
Prioritisation of functional integrity                                           .712                 -.548
Legal requirements                                                    .674                  .519
Design reuse                                                          .686
Usability experts                               .500
Design conventions                                                                           .516
Company guidelines                                         .748
Review of requirements                                                -.561
Lack of user custodian                         -.722
Poor documentation                             -.603
Professional trialists                          .667
Use of personas and scenarios                              .636
Cost as main compromise                        -.649
Personas based on designers' family members                .663
Inclusion of user feedback                                                       -.516
Storyboards                                                .636
Improved user experience                        .778      -.547
Early tests                                     .582
The meanings of the variables with the highest values loaded on one of the five factors were grouped together for each factor and labelled appropriately. The new meanings assigned to the five underlying factors are explained below.
• Factor one is mostly defined by ‘usability experts’, ‘professional trialists’, ‘improved user experience’ and ‘early tests’ variables. Variables which have an inverse impact on this factor include ‘lack of awareness’, ‘lack of user custodian’, ‘poor documentation’ and ‘cost as main compromise’. Accordingly, Factor one was labelled as “improved customer experience facilitated by user trials and use of usability experts (subject to time and cost constraints)”;
• Factor two is mostly defined by ‘time restrictions’, ‘company guidelines’, ‘use of personas and scenarios’, ‘personas based on designers’ family members’ and ‘storyboards’ variables. The only variable which has an inverse impact on this factor is ‘improved user experience’. Consequently, Factor two was labelled as “use of imaginary generic user (due to lack of time to gather data from real users)”;
• Factor three is mostly defined by ‘gut feeling’, ‘legal requirements’ and ‘design reuse’ variables. The only variable which has an inverse impact on this factor is ‘review of requirements’. As a result, Factor three is labelled as “cost-effective reuse of previous designs”;
• Factor four is defined by only one variable with positive value labelled ‘prioritisation of functional integrity’. The only variable which has an inverse impact on this factor is ‘inclusion of user feedback’. Therefore, Factor four is labelled as “prioritisation of functional integrity before accessibility and usability”;
• Factor five is defined by two variables – ‘legal requirements’ and ‘design conventions’. The only variable which has an inverse impact on this factor is ‘prioritisation of functional integrity’. Thus, Factor five is labelled as “adherence to legal guidelines and design conventions”.
13.3.3 Results This study found that data about users is collected during concept and test phases through focus groups and usability testing (Question one). Not surprisingly, the strongest result from the interviews is the relationship of time and cost operating as a governing condition on the five factors extracted during factor analysis. In particular, the results from factor analysis show that the company studied promotes improved customer experience during the design of products and services through user trials and the occasional use of usability experts (Factor one). However, depending on the project, the trials and expert advice are subjected to cost and time restrictions. The study also found that designers create an ‘imaginary generic user’ (mainly inspired by designers’ family members) and develop sequences of events describing that user and user situations in order to communicate ideas and themes among all stakeholders participating in the design process (Factor two). Furthermore, there was a strong and consistent opinion among the interviewed
designers that a very limited amount of user capability data is fed into the design of many products (Question two) because approximately 70% of product features are cost-effectively reused for new products, while the remaining 30% are application specific (Factor three). Designers also said that they have to prioritise user feedback on many projects as, again due to time and cost limitations, they work on the premise that functional integration of products comes before accessibility and usability (Factor four). In addition, designers need to adhere to legal requirements, design conventions and company guidelines, which means that the consistent use of any support materials has to be approved across the whole company structure (Factor five). Some of the interviewed designers were more aware of the existence of inclusive design tools (simulation kits, inclusive design toolkit, etc.) than others, but most admitted that they are not using any tools systematically because they do not have the capability to represent and match how different users interpret and use a given product with how designers intend that product to be understood and used (Question three). However, the majority acknowledged that their current design practice does not sufficiently consider users’ cognitive representations of products and said that they are interested in a support tool which would do that. When looking at the company’s current design practice summarised by the five factors, it can be said that to fit designers’ ways of working the new tool should be able to: (1) capture information from a number of different users on how they interpret and use a given product (Factor one); (2) amalgamate that information, compare any areas of commonality and diversity and create a common model of users’ understanding and behaviour (Factors two and three); and (3) compare the users’ common model of a given product with the designers’ functional model (which must adhere to legal requirements and design conventions) and make appropriate design decisions based on similarities and differences between these two models (Factors four and five).
13.4 Discussion and Conclusion This study enhances our understanding of how product designers from a UK company currently go about creating inclusive interaction between products and users. To start with, this paper discussed the cognitive processes that take place when users interact with products and the role of prior experience in the intuitive use of products. It also reviewed literature on current design practice and found that designers often rely on the cognitive capabilities of their own demographic and so many everyday products have high cognitive demand for less capable users. Other reasons for not designing inclusively include time and cost constraints, lack of awareness and the fear that some inclusive methods may constrain creativity. The results of twenty interviews with designers from a UK company, analysed in conjunction with factor analysis, indicate that designers know about different inclusive design tools but do not use any of them systematically because they do not have the capacity to represent and match how different users interpret and use a given product with how designers intend that product to be understood and used. Among other major reasons for not designing inclusively are time and cost
constraints, which lead to restricted access to real users, reuse of previous designs and prioritisation of functional integration before inclusivity. However, the findings also suggest that designers need a tool that would: (1) capture information from a number of different users on how they interpret and use a given product; (2) amalgamate that information, compare any areas of commonality and diversity and create a common model of users’ understanding and behaviour; and (3) compare the users’ common model of a given product with the designers’ functional model and make appropriate design decisions based on similarities and differences between these two models. Previous research found that such a tool should capture human goals and actions and their impact on the functional parts of a given product, and proposed the Conceptual Graph Analysis (CGA) as a possible candidate (Mieczakowski, et al., 2009). Future research will continue to develop a tool that enables designers to create inclusive product-user interaction.
13.5 References Baddeley AD (2000) The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4: 417–423 Blackler A (2006) Intuitive interaction with complex artefacts. PhD Thesis, School of Design, Queensland University of Technology, Australia Cardoso C, Keates S, Clarkson PJ (2004) Comparing product assessment methods for inclusive design. In: Keates S, Clarkson PJ, Langdon P, Robinson P (eds.) Designing a more inclusive world. Springer, London, UK Docampo-Rama M (2001) Technology generations handling complex user interfaces. PhD Thesis, Eindhoven University of Technology, The Netherlands Dong H (2005) Barriers to inclusive design in the UK. PhD Thesis, University of Cambridge, Cambridge, UK Freudenthal A (1999) The design of home appliances for young and old consumers. PhD Thesis, Delft University Press, The Netherlands Goodman-Deane J, Langdon P, Clarkson PJ (2010) Key influences on the user-centred design process. Journal of Engineering Design (in press) Gorsuch RL (1999) Factor analysis, 2nd edn. Erlbaum, Hillsdale, NJ, US Hurtienne J, Blessing L (2007) Metaphors as tools for intuitive interaction with technology. metaphorik.de, 12: 21–52 Keates S, Clarkson PJ (2004) Countering design exclusion: an introduction to inclusive design. Springer, London, UK Krippendorff K (1989) On the essential contexts of artifacts or on the proposition that design is making sense (of things). Design Issues, 5: 9–39 Langdon PM, Lewis T, Clarkson PJ (2007) The effects of prior experience on the use of consumer products. Universal Access in the Information Society, Special Issue on Designing Accessible Technology, 6: 179–191 Langdon PM, Lewis T, Clarkson PJ (2010) Prior experience in the use of domestic product interfaces. Universal Access in the Information Society (in press) Mieczakowski A, Langdon P, Clarkson PJ (2009) Specifying an inclusive model of productuser interaction. In: Proceedings of the 17th International Conference on Engineering Design (Iced’09), Stanford, CA, US Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. The Psychological Review, 63: 81–97
Monk A (1998) Cyclic interaction: a unitary approach to intention, action and the environment. Cognition, 68: 95–110 Norman DA (2002) The design of everyday things. Basic Books, London, UK Persad U, Langdon P, Clarkson PJ (2007) Characterising user capabilities to support inclusive design evaluation. Universal Access in the Information Society, Special Issue on Designing Accessible Technology, 6: 119–135 Rabbitt P (1993) Does it all go together when it goes? The nineteenth Bartlett memorial lecture. The Quarterly Journal of Experimental Psychology, 46A: 385–434 Stanton N, Hedge A, Brookhuis K, Salas E, Hendrick H (2005) Handbook of human factors and ergonomics methods. CRC Press, Boca Raton, FL, US Thomas DR (2006) A general inductive approach for analysing qualitative evaluation data. American Journal of Evaluation, 27: 237–246 Wickens CD, Hollands JG (2000) Engineering psychology and human performance, 3rd edn. Prentice Hall, Upper Saddle River, NJ, US
Chapter 14 Prior Experience and Learning: Generational Effects upon Interaction C. Wilkinson, P.M. Langdon and P.J. Clarkson
14.1 Introduction In our previous paper (Wilkinson et al., 2009) a methodology was outlined that was designed to capture how individuals perceive, process and respond to stimuli during interaction with products, how product design can affect this, and to reveal what occurs when individuals have little or no previous knowledge of a specific product. Areas of interest involved the generational effect and the effects of ageing upon interaction. In this larger study, a novel product is presented to a sample of participants who are recorded interacting with it whilst providing concurrent protocol. The expectation was that prior experience with similar products would affect users’ ability to interact with the product, and that age-related effects might be observed. Inclusive design has been defined as the design of products and/or services, accessible to and useable by people with the widest range of abilities within the widest range of situations without the need for special adaptation or design (BSi, 2005). Thus, inclusive design has the commercial potential to increase long-term profits and enhance manufacturers’ competitive edge, and can assist in the production of better products for all end users (Dong et al., 2006). Designs catering for older and younger individuals will also satisfy an ever-increasing commercial demand. Generational differences with regard to products and technology have been reported in literature by Langdon et al. (2007) and Docampo-Rama (2001) who proposed them to be a symptom of exposure to technology at a particular stage in life. This may explain the difficulty older generations experience learning and interacting with various modern products and designs, regardless of the effects of natural atrophy upon the older population.
14.2 Background 14.2.1 Pilot Study Conceptual and Practical Approach Research considers that all interactions individuals have with their environment are learning processes; perceived information being compared with that held in memory to aid understanding and facilitate the execution of appropriate responses (Edge et al., 2006). Rasmussen (1993) proposed a model that accounted for fluctuations in the level of consciousness required during interaction based on the assumption that individuals operated at a level appropriate to the familiarity of the situation, and an individual’s previous experience of it or something similar. Wickens et al. (1998) expanded this model to account for the type of processing that occurs (Figure 14.1).
Figure 14.1. Wickens’ expanded version of Rasmussen’s skill, rule and knowledge-based processing model
14.2.2 Practical Approach and Rationale The intention of the pilot study was to propose and verify the effectiveness of a methodology designed to capture information regarding what occurs during interaction with novel products, about which users may possess limited prior experience or mental models, and to see how these may develop over time. This was achieved with the utilisation of a new-to-market, everyday high street product. Other areas of interest to this research included the effects of natural atrophy and ageing, and generational effects upon interaction. Prior experience of products is significant in their usability, and the transfer of previous experience depends on the nature of prior and subsequent experience of similar tasks (Thomas and van Leeuwen, 1999). Familiarity of features within the product design and the interactional style or its conforming metaphor appear to be key features for successful and intuitive interaction (Okeye, 1998). Well rehearsed or well learnt interactional styles can be more effectively transferred across products and designs, thus aiding successful initial and subsequent interaction by all user groups.
14.2.3 Pilot Study Results Table 14.1 shows participants’ performance interacting and completing assigned tasks with the novel product. The 16 to 25 age group had a lower average number of button responses than either of the other age groups, and a lower rate of error than either the 26 to 59 or 60 to 80 age groups. In both instances the 26 to 59 and 60 to 80 age groups were similarly matched for number of average button presses and rates of error. Task completion times were more variable, with the older age group completing tasks quicker than both the 16 to 25 and 26 to 59 age groups. Table 14.2 indicates level of familiarity with various forms of technology according to age, the older group recording the highest level of familiarity, and allows analysis of how frequently individuals interacted with technology on a regular basis, as well as the different products and interfaces they were familiar with.

Table 14.1. Interaction results overview

Performance                              16-25   26-59   60-80
Mean number of button presses            24.6    44.5    40.5
Mean rates of error                      19.6    39.5    35.5
Mean task completion times (seconds)     121.6   203.3   103.1
Mean times per button press              4.1     2.9     2.3

Table 14.2. Technological familiarity questionnaire (TFQ) results

Forms of technology    16-25   26-59   60-80
Question 1             28      20      28
Question 2             13      16      18
Overall TF Score       41      36      46
14.3 Full Scale Study The rationale for the full scale study was identical, but utilised a larger sample. Participants were presented with the product and asked about their understanding of it to identify pre-conceptions held or initially developed. This questioning was repeated mid-way through and at the conclusion of the experiment, making it possible to ascertain whether product conceptualisations had been developed or modified through interaction. Further discussion in this latter phase centred upon participants’ recognition of design metaphors present in the product, any perceived affordances within its design, any familiar features, and at what stage (if at all) the participants felt they understood the product and its interaction. Further
assessment was afforded with the administration of a technological familiarity questionnaire (Blackler, 2006), to verify participants’ level of prior experience with various forms of technology products, and how frequently they interact with them. The CantabeclipseTM cognitive assessment tool was also used to afford further post experimentation analysis of differences between age groups.
14.3.1 Experimental Design, Procedure and Data Analysis A between-subjects design was used, assigning a total of 16 participants to one of three groups according to age: 16 to 25 (four), 26 to 59 (nine) and 60 to 80 (three), recruited from outside academia to minimise any educational biases. Independent variables were the three age groups: 16 to 25, 26 to 59 and 60 to 80. Dependent variables used were interaction performance, technological familiarity questionnaire performance and Cantabeclipse™ cognitive assessment performance. The experimental procedure included the following steps:
• administer pre-test assessment using Cantabeclipse™;
• record initial exposure to the product and participant understanding;
• continue to record participants performing three randomised tasks with the product, whilst delivering concurrent protocol:
  – find the lowest wattage reading for the device attached to the product;
  – find the current reading for the device attached to the product;
  – set unit cost price to £99.50/kWh;
• record participant understanding of product and interaction at mid-way stage;
• record performance of final three randomised tasks:
  – find the frequency reading for the device attached to the product;
  – find out how much the device attached to this product has consumed;
  – find the highest wattage reading for the device attached to the product;
• record participant understanding of product and interaction at task completion stage, and commence brief interview stage regarding their experience;
• administer technology familiarity questionnaire.
The recorded video-data verified how the concurrent protocol corresponded to the users’ actions, and allowed assessment of task completion times and rates of error. Analysis also indicated whether participants took longer to achieve task completion according to age group. Interview material provided qualitative data on user perception of the interaction, confirming the overall level of product understanding and how this influenced interaction. The technological familiarity questionnaire posed two questions: “How often do you use the following products?” and “When using the products, how many features of the product are you familiar with and do you use?”. Rated answers provided overall TFQ scores.
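As a simple illustration of this scoring, the sketch below assumes that the rated answers to the two TFQ questions are summed into per-question scores and an overall technological familiarity score (cf. Table 14.2); this is an assumed scoring scheme, not Blackler’s exact procedure, and the ratings are invented.

```python
# Illustrative sketch (assumed scoring scheme, hypothetical ratings):
# summing rated TFQ answers into per-question and overall TF scores.
def tfq_score(q1_ratings, q2_ratings):
    """q1_ratings/q2_ratings: one rating per listed product."""
    q1, q2 = sum(q1_ratings), sum(q2_ratings)
    return q1, q2, q1 + q2  # per-question scores and overall TF score

print(tfq_score([3, 4, 2, 5], [2, 3, 1, 4]))  # hypothetical ratings
```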
14.4 Results 14.4.1 Performance Data Figure 14.2 indicates that the younger generation completed tasks in quicker overall times than both the 26 to 59 and 60 to 80 age groups.
Figure 14.2. Mean task completion times (seconds)
Figure 14.3 indicates that the younger generation made a greater number of button presses during task completion, and that they also had the greatest rate of error within their product interaction. The 26 to 59 age group possessed the next highest number of button presses and rates of error, the 60 to 80 age group making fewest errors and fewer attempts toward task completion.

Figure 14.3. Button press data (mean number of button presses and mean rate of error by age group)
Figure 14.4 portrays the older generation making lengthier responses and taking more time to make button presses than the 26 to 59 or the 16 to 25 age groups respectively (on the left). It also shows that with regard to technological familiarity, the younger generation were most familiar, closely followed by the 26 to 59 age group, the older generation possessing the lowest TFQ scores (on the right).
Figure 14.4. Mean time per button press (sec/BP) (left) and mean TFQ scores (right)
Figure 14.5 (part of the Cantab test results) indicates that the younger generation possessed a smaller memory span than either of the other age groups, the 26 to 59 age group possessing the greatest memory span.

Figure 14.5. SSP memory span length (number of items by age group)
14.4.2 Concurrent Protocol Summary All participants initially recognised the product was electrical in nature, and the majority surmised that it was used as a measuring tool. Participants referenced
plug-in devices in the initial phase of questioning and, if anything, it was evident that the 26 to 59 age group provided the most accurate and elaborate descriptions at this stage. By the mid-way stage participants had confirmed their original ideas and nearly all confirmed that it was designed to measure the flow of electricity, and that it could be set to indicate how much usage cost. The 60 to 80 age group were the vaguest, having not (unlike other age groups) solidified their understanding of the product or its interaction at this stage. All age groups voiced disquiet at the complexity of setting the electrical cost function (task three). In the latter stage, the older generation provided the vaguest descriptions of the device, the 16 to 25 and 26 to 59 age groups providing more concrete, thorough, and accurate descriptions of the purpose, function and interaction of the product. The scrolling menu feature was learned and understood rapidly by all age groups, and was cited as being a design feature with which many were familiar. The most frequently cited product resemblance was to Digital Watches and Alarm Clocks, both featuring multibutton press requirements, scrolling menus and up and down adjustment controls. The 60 to 80 age group provided the fewest number of familiar devices, followed by the 16 to 25 age group, the 26 to 59 age group citing the highest.
14.5 Discussion 14.5.1 Full Scale Study Results The interaction data indicates that with regard to task completion times, the younger generations exhibited faster responses and overall task completion times than the older generation (Figure 14.2). Indeed, the older generation took considerably longer to complete tasks in comparison with the other age groups. It may be that the older generation took more time to consider each move, for a variety of reasons, and that the younger generations are keener to learn by direct manipulation experience. This is supported by both the mean number of button press results and the rate of error data (Figure 14.3), as well as anecdotal evidence obtained during the post-test interviews. It would appear participants of the older age group were reticent and reluctant to try new things with the device: ‘I would have thought you should only have to press any of them once (the buttons) not multiple times. You're afraid and think pressing the buttons quickly will break it.’ Participant 15 (60 to 80 age group)
The younger age groups made more attempts but in so doing made greater numbers of errors. Accordingly, the average time per button press data revealed that those in the 60 to 80 age group took longer to make individual or combinations of moves, as opposed to the younger age groups who were quicker in their average times per button press. Observed in conjunction with each groups’ level of technological familiarity it is evident that the younger generation possess the greatest awareness and level of interaction with contemporary technology. It is, however, arguable that in this instance this greater level of familiarity corresponds little to a meaningful increase in overall task performance, and the more worthy
observation is that younger generations adopted a more care-free approach to learning and interaction. This is upheld in the concurrent protocols, where the younger generation were convinced given time they would obtain the solution, whereas the older generation became quickly frustrated when the product would not respond intuitively: ‘Young people would know about multi-button pressing and holding buttons, and have the patience to try different combinations, until they get the response they want. I just don't have the patience. I would try what I know, and if it didn't do what I wanted it to, I'd just go mad and give up with it.’ Participant 16 (60 to 80 age group)
The Cantab results suggested that no memory impairment was present in the older generation due to natural atrophy or cognitive impairment, and although marginal between age groups, those in the 16 to 25 age group had the shortest short-term memory span, and those in the 26 to 59 age group, the longest. As in the pilot study, all participants quickly consolidated their understanding of the function button, rapidly learning the scrolling functionality it represented. Again task three presented some difficulty to all participants. If anything, it would appear that those in the 26 to 59 age group were most used to the multi-button press approach and multi-button functionality model required, but all groups indicated that it was at this stage their understanding, or the adequacy of the design, was lacking. This feature of the product was most likened to alarm clocks or digital watches by the majority of participants and these examples occurred most frequently. Although individuals were familiar with the model or mode of interaction required, there were evidently issues with its implementation.
14.5.2 Full Scale Study in Relation to Pilot Study Results The suitability of the pilot study methodology was upheld in this larger, full-scale study. Unlike the pilot study, these findings are more in line with expectations from the literature, although error rates of the older age group were lower than any other group. The task completion times also conformed to expectation with the younger age groups completing tasks quicker than the older age group. The TFQ results appeared distinctly different from the pilot study results, with the older age group possessing a much lower technological familiarity score, although it is admitted the majority of items identified in the questionnaire are contemporary and recent. Lack of familiarity of the older generation with these items may provide further support for a generational effect in that as we age our inclination to ‘keep up to date’ with the latest developments may wane, which is supported by interview material.
14.6 Conclusion Results suggest distinct differences in approaches adopted toward problem solving and task completion in this instance, with the younger generation making more attempts toward task completion than the older generation, recording a higher rate of error in the process, but achieving quicker overall task completion. The generational effects described may be attributable to younger generations’ greater familiarity with modern technology. It is clear that the development of understanding or ‘mental model’ of the product occurs over the time-frame of exposure. Participants’ rudimentary conception of the product and how it might function clearly evolved over time, including initial recognition of the socket orifice, the plug artefact, and electrical relevance, in conjunction with the perception of the function, power and cost buttons. Understanding the scrolling menu occurred by the mid-way stage with overall understanding latterly complete in accordance with Norman’s (1988) view that models are developed through experience, and are based on the perception of function and behaviour through design. As for the design of the product itself, there was some expectation voiced that with so few buttons, the interaction of the device must be specific and, as it appeared to offer considerable functionality, complicated. The utilisation of up and down arrows was recognised, almost universally, although accessing their function was not deemed intuitive. From a generational perspective, a number of observations were made. The display digits themselves were considered large and assistive toward older individuals’ perception. However, the units were deemed too small and so although the reading could be perceived, the corresponding unit of that reading was often indeterminable. Colour could significantly improve the product’s intuitive interaction, as having the up and down arrows and square icon a different colour to the device’s background would assist their observation and, likewise, the labels above buttons would be better distinguished if they were coloured. Further age-related grievances included lack of screen illumination: in dark environments, particularly in low lighting conditions, the display itself is difficult to read and is highly probable in the home, with plug sockets at floor level. Furthermore, one older participant explained that with increasing arthritis, they could rarely feel the end of their fingertips, and thus successfully manipulating the device was made increasingly awkward, given the size of the buttons. The evidence presented would suggest that simple alterations to the design and the method of interaction would significantly enhance individuals’ ability to learn and use this product. Difficulty in interaction was particularly manifest in attempts to complete task three – setting the unit cost, regardless of age. Simplification of this procedure and maintaining similar levels of complexity to the interaction of the other available functions would reduce the level and extent of learning required when initially exposed to the product.
14.7 Future Work

Future work in this area would aim to relate these findings on interaction and learning to Wickens' (1998) expanded model of skill-, rule- and knowledge-based processing. Later research will then validate such proposals by applying the knowledge gained to other household products and interfaces, and will attempt to predict user behaviour with them. By identifying interface design features that cause unnecessary or excessive problems, particularly for the older population, designers can avoid such features wherever possible to maximise their products' adoption and ease of use by a larger proportion of the population.
14.8 References

Blackler A (2006) Intuitive interaction with complex artefacts. PhD Thesis, Queensland University of Technology, Australia
BSi (2005) Design management systems. Managing inclusive design. BS 7000-6:2005. The British Standards Institution, UK
Docampo-Rama M (2001) Technology generations handling complex user interfaces. PhD Thesis, TU Eindhoven, The Netherlands
Dong H, Bobjer O, McBride P, Clarkson PJ (2006) Inclusive product design: industrial case studies from the UK and Sweden. In: Bust P (ed.) Contemporary ergonomics. Taylor and Francis, UK
Edge D, Blackwell A, Dubuc L (2006) The physical world as an abstract interface. In: Bust P (ed.) Contemporary ergonomics. Taylor and Francis, UK
Langdon P, Lewis T, Clarkson PJ (2007) The effects of prior experience on the use of consumer products. Universal Access in the Information Society, 6(2): 179–191
Norman D (1988) The design of everyday things. Currency Doubleday, New York, NY, US
Okeye H (1998) Metaphor mental model approach to intuitive graphical user interface design. PhD Thesis, Cleveland State University, OH, US
Rasmussen J (1993) Deciding and doing: decision making in natural contexts. Ablex, Norwood, NJ, US
Thomas B, van Leeuwen M (1999) The user interface design of the fizz and spark GSM telephones. Taylor and Francis, London, UK
Wickens C, Gordon S, Liu Y (1998) An introduction to human factors engineering. Addison-Wesley Educational Publishers Inc., New York, NY, US
Wilkinson C, Langdon P, Clarkson PJ (2009) Investigating prior experience and product learning through novel interface interaction. In: Proceedings of the 5th International Conference on Universal Access in Human-Computer Interaction (UAHCI 2009), San Diego, CA, US
Part IV
Assistive Technology
Chapter 15
Expressing Through Digital Photographs: An Assistive Tool for Persons with Aphasia

A. Al Mahmud, Y. Limpens and J.B. Martens
15.1 Introduction

Much of our social life consists of sharing daily stories with other people. However, sharing personal stories can be extremely difficult for people with limited verbal ability, such as those suffering from expressive aphasia. Aphasia is an acquired communication disorder caused by brain injury or trauma. Aphasia affects language comprehension and generation (Hillis, 2007), so that people's ability to express themselves verbally suffers. As a result, aphasia often leads to increased social isolation and possibly to depression. Solutions that can help sufferers to share experiences effectively will not only empower them, but should also help to reduce the burden on partners and other caregivers.

The inability to tell stories fully has a large impact on aphasics, and too often leads to the interpretation by those unfamiliar with them that they must be either mad or stupid. Storytelling is an example of evaluative language (Armstrong and Ulatowska, 2007), i.e., it helps to express feelings and opinions and is an important means of demonstrating to other people one's ability to act as a dedicated discussion partner. Enabling aphasics to better share their daily experiences is likely to help them to become more socially active and to re-engage in their preferred lifestyle. This in turn is expected to promote greater self-esteem, more confidence, less social isolation and less depression among aphasics, and more understanding and awareness among non-aphasics (Lasker and Beukelman, 1999; Daemen et al., 2007).

Augmentative and alternative communication (AAC) devices such as TouchSpeak (www.touchspeak.co.uk) are widely used in aphasia therapy as well as during the post-therapy period, despite the fact that these devices have some obvious limitations. First, they need to be operated by means of buttons with symbols whose meanings need to be learned. Second, AAC devices mostly support need-based interaction for functional communication. Need-based interaction is very important but obviously only a first step. After the initial period in which people learn how to cope with their disability, other kinds of communication become increasingly
important for them. Such kinds of communication, as mentioned in Light (1988), are information sharing, social closeness and social etiquette. It is apparent that AACs are not intended for such communication needs. There is, for instance, hardly any functionality for aphasics in current AACs to capture and share day-to-day experiences. It might therefore not come as a big surprise that many aphasics refuse to depend on such an AAC for basic communication needs, which explains why their adoption has been slow. Therefore, designing new technology to support such rich forms of communication is an urgent need for the target user group. This agrees with a more general trend towards inclusive design (Newell and Gregor, 2000), which promotes defining and understanding your target users and making sure that all of them are catered for and not excluded because of their particular motor or sensory capabilities (Reed and Monk, 2006).

Photography has been proposed within 'aphasia talks' (Levin et al., 2007) as a way of facilitating self-expression in aphasics, and as having an influence on reintegration, improving socialisation and allowing recreation. User studies confirmed that photos could be effective in supporting communication. However, since many aphasics have difficulty in understanding abstract representations, contextually rich photographs are necessary to share experiences. One study has reported on which factors make a photograph likely to be understandable by aphasics (Mckelvey et al., 2007). However, that study does not reflect on how aphasics can capture and share digital photographs. The usefulness of digital photographs within storytelling has also been studied (see, e.g., iTell in Landry and Guzdial, 2006). However, these latter studies were not tailored to people with limited verbal ability, including aphasics. The challenge of how people with special needs can be empowered to use photographs to share experiences does not yet seem to have been explored in depth.

Our objective is to design a system that can assist aphasics in sharing stories with the help of digital photos. The first version of such a system was designed iteratively by incorporating feedback from an experienced speech therapist. In order to further validate the design decisions, some experiments were conducted with able-bodied subjects who received special instructions. The details of this approach will be explained later on in the paper, together with the major results. In the next section, we start by explaining the design process and the resulting prototype.
15.2 Design Process

15.2.1 Understanding the User Group

Several methods have been used before to design assistive technology for people with cognitive disabilities, including participatory design (Boyd-Graber et al., 2006) and design by proxies (Allen et al., 2008). Although it is very difficult to explain a new concept to aphasics, it is possible to let them test working prototypes. As a first step we visited a local rehabilitation centre to understand the user group and their abilities. We interviewed one speech therapist and one social worker. Later on we
observed two groups of people with aphasia (one communication group with five male aphasics, and one writing group with three male and two female aphasics). The fact that aphasia varies significantly from person to person makes it difficult to design for all. Therefore, the intended user group needs to be specified. We decided to focus on people with expressive aphasia, i.e., people who are able to understand language but have difficulty with verbal and written communication. This means that they often use gestures, eye contact, emotions and facial mime to support or replace their communication. A complicating factor is that eighty percent of expressive aphasics also have right-sided paresis.
Figure 15.1. Editing panel
15.2.2 Concept Design

Concept design started with brainstorming. Several concepts were designed and representative designs were selected for feedback from a therapist. Finally, one concept called Co-creation was chosen for further development into a medium-fidelity prototype. Co-creation is a software program that supports aphasics in sharing their experiences by answering the five W's (who, where, when, what and why). The five W's is a method to get the full story on something and plays an important role in sharing experiences. Answering the five W's is pursued by combining pictures that are taken by the user throughout the day with additional means of communication that help to clarify the information that is depicted.
Co-creation can automatically upload the user's pictures and can cluster them as a function of the time (in future, possibly also the location) at which the pictures were taken. Pictures that are taken within a short period of time are assumed to highlight a single activity. The user is able to select a cluster and can edit the pictures by dragging icons onto them, by drawing on top of them, or by typing words to create a caption. The pictures set the context for the story, while the additional tools allow the user to add information that is considered useful for sharing the experience.

Co-creation consists of four screens: a start-up screen, a set-up screen, a cluster screen and a specific picture screen. The start-up screen informs the user which functions are activated. The set-up screen is where additional functions, such as typing and drawing, can be activated. The cluster screen allows the user to browse through the clusters of pictures that have been uploaded by a separate camera tool in a predefined folder. The interface is capable of simultaneously showing the contents of four different clusters as coloured groups. The remaining clusters can be accessed through the timeline, where each cluster is represented by the first picture in that cluster. The specific picture screen helps to add information to a selected picture.
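To make the time-based clustering concrete, the sketch below shows one plausible way to group photographs into activity clusters by capture time. The Photo structure, the 30-minute gap threshold and the function name are illustrative assumptions, not details taken from the actual Co-creation implementation.

```cpp
#include <algorithm>
#include <ctime>
#include <string>
#include <vector>

// Hypothetical sketch: photos whose capture times lie close together are
// grouped into one "activity" cluster; a gap larger than the threshold
// starts a new cluster. Threshold and data layout are illustrative only.
struct Photo {
    std::time_t takenAt;   // capture time, e.g. read from EXIF data
    std::string path;      // location of the image file
};

std::vector<std::vector<Photo>> clusterByTime(std::vector<Photo> photos,
                                              double maxGapSeconds = 30 * 60) {
    std::sort(photos.begin(), photos.end(),
              [](const Photo& a, const Photo& b) { return a.takenAt < b.takenAt; });

    std::vector<std::vector<Photo>> clusters;
    for (const Photo& p : photos) {
        if (clusters.empty() ||
            std::difftime(p.takenAt, clusters.back().back().takenAt) > maxGapSeconds) {
            clusters.push_back({});   // gap too large: start a new activity cluster
        }
        clusters.back().push_back(p);
    }
    return clusters;
}
```

Each resulting cluster could then be shown as one coloured group on the cluster screen, with its first photo representing the cluster on the timeline.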
15.3 Understanding Capturing and Sharing Behaviour

A longitudinal experiment was set up in order to better understand the process of taking pictures and sharing experiences. The objective was to obtain feedback on whether or not the Co-creation application could be of assistance in the sharing process. The experiment was also set up to obtain information on key aspects of the design, such as how many pictures are taken per day and per activity and the average time between pictures.
15.3.1 Participants and Procedure

Six non-aphasic participants (three males and three females, age range 21 to 22), including their partners, volunteered for the experiment. Participants were asked to capture their daily experiences. On the next day, participants were asked to share the stories using the photos with us and with their partners in a face-to-face setting. In order to imitate to some extent the situation of an aphasic person, the participants were only allowed to talk in French (or German, if they were less fluent in German). Later, participants expressed the same story in their native language (e.g. Dutch). The experiment was conducted over four days. The storytelling sessions were video recorded for further analysis.
15.3.2 Results

The experience of the first day taught the participants that it was more difficult to communicate their stories with the help of pictures than they had thought in advance. Participants also discovered that their pictures didn't always cover what they wanted to share. Because many pictures were actually inadequate, and most participants lacked sufficient knowledge of French to compensate for this, it took quite some time and effort to understand what the participants were trying to convey. In general, participants made much use of gestures and non-verbal sounds.

In response to their first experience, most participants took significantly more pictures on the second day, and many of these pictures formed clusters as a function of time. They were much more aware of what they wanted to share and took pictures accordingly. As a result the pictures covered the global storyline, indicating what activities had been done. As the pictures revealed mostly activities and not experiences, the conversations also tended to focus on activities. The inadequate French of the subjects simply did not allow them to address aspects that were not apparent from the pictures. This indicates that aphasics do need other tools to share experiences. The partner of each participant even started asking questions to which the partner did not know the answer. Although the questions were of the closed type, this did enhance the quality of the communication. Because the global storyline was well depicted by the pictures, the story was well thought out beforehand, leaving more time, focus and confidence to talk about details. The partner adapted his or her questioning (rhetorical or closed questions) on noticing that the participant was struggling.

On average, approximately 15 to 20 pictures per day, divided into approximately six to seven clusters, was the maximum needed to share experiences, although there were some differences between participants. Furthermore, participants realised that they had simply been taking pictures of their activities, some of which were not interesting to talk about, while they did not take pictures of things they really wanted to share.

Participants achieved expression because icons, a drawing pad and a keyboard could be used. This indicates that such additional tools are certainly useful for answering the 'why' question when the ability to communicate verbally is lacking. During the sharing of experiences, the icons were often used (a total of 34 times), the drawing pad four times (once by each participant) and the keyboard was not used at all. This seems logical, because it is easy to use an icon that immediately expresses an emotion or an event, without making an effort to say the same in French (or German in one case).

15.3.2.1 Digital Pictures and Icons

Although the investigation showed that the icons were especially helpful for sharing experiences, it was worth investigating which aspects cannot be conveyed with pictures but can be conveyed with icons. Therefore, to define the parameters of the icons, the video-recorded conversations of the experiments were reviewed to see what aspects could not be told solely with pictures.
Often conversations start with an indication of time and location. Many pictures do give an idea of where they were taken; however, exactly where this is may remain unclear. For example, a person can photograph a bedroom, but the discussion partner might not know whose bedroom it is. Therefore, time and location should be available next to the picture, and an icon that indicates who is connected with the location, object or acquaintance might be useful.

Part of sharing experiences consists of expressing opinions. The words used are often judgmental; however, pictures are not able to convey this. Since using judgmental words is common, and it might not always be possible for a user to convey these words verbally, icons depicting them are desired.

It happened more than once that the user was waiting for or meeting a person. However, since this person was not there yet, the picture did not show that, and at the same time it was not clear to the discussion partner why the picture was taken. Icons that depict human interaction, such as meeting someone, are needed. Human icons, such as an icon representing the user, could make it even clearer who has been saying or doing what.

Verbs are important to indicate what exact action has been performed. Sometimes this is obvious (such as doing the dishes); in other cases it is not, because only the end result is shown. Icons that depict common verbs such as helping, making, waiting and travelling could help to specify the information. Sometimes it is not clear what kind of activity is depicted in the picture, given the location and the previous cluster. It might be helpful to have icons that can indicate whether this was a leisure activity, a study activity or a sport activity.

Sometimes the connection between pictures is difficult to understand, which may lead to miscommunication. For instance, when the user wants to point out that the action in the second picture happened during the action in the first picture, it might be an option to show these two pictures more closely together.
15.3.3 Discussion

We have the impression that the discussion partners want to know a certain level of information. If this information is available from the pictures, then the partners just ask rhetorical questions to confirm their assumptions and occasionally ask closed questions. However, these are often related to what is seen in the picture and are therefore easy to understand and answer. If few pictures have been taken, less information will be available automatically. As a result the partner starts asking questions, but since less is depicted in the pictures and therefore available to refer to, open questions are more likely to be used. For aphasics such questions would be difficult to answer. When icons are available they are often used to answer questions. In addition, the success of the sharing stage depends quite heavily on the discussion partner. If the partner listens carefully and supports the user, the communication will be more successful because the user is more at ease and not frustrated. Furthermore, sometimes the partner is so focused on the pictures that he or she does not see the facial expressions or gestures that the user makes. Such non-verbal information is very important for the overall understanding of the story.
None of the participants involved in the experiments had aphasia, and they were therefore asked to share their experiences in French. This was a good imitation in terms of the ability to express oneself, because the participants knew what they wanted to say but did not know how to say it. Nevertheless, aphasics quite often have additional problems, such as cognitive disabilities or right-sided paralysis. These additional challenges were more difficult to imitate. Although the participants were not allowed to use the right side of their body, they easily forgot this. In addition, concentration and information-processing problems cannot be imitated by users who do not have them. Not all aphasics have these additional problems, but they certainly can occur and should therefore be taken into account. Another limitation of the experiments is the number of participants. The conclusions are currently based on six participants; for more general conclusions further investigation is needed.
15.4 Redesigning the Interface

The experiment led us to rethink the design of Co-creation. The suggestions given by the speech therapist have also been analysed. Therefore, the following redesign requirements have been set up:

• An overview page (e.g. a calendar) should be implemented, so that not all the pictures have to be loaded or browsed through before the specific cluster has been found.
• While designing, the data and its consequences should be taken into account. Approximately 15 to 20 pictures per day, divided into six to seven clusters, are common. The clusters have an average of three pictures, though clusters that contain five or six pictures may exist.
• The final experiment showed us that for users who take only a few pictures during a day it is more beneficial to show them all at once and leave out the clustering. There is no point in clustering four pictures that are taken throughout the day. An option that makes this possible should be implemented.
• It should become possible to delete pictures within a cluster, so that non-valuable pictures will not distract either the user or the discussion partner.
• The final icon set should be implemented. Subsequently it will be important that custom icons can be uploaded.
• The new design should make use of "smart" colours: a combination of colours that highlights the different functionalities but still unites them into one interface.
After the redesign of the application (Figure 15.2), an evaluation of this second version of Co-creation was performed together with a speech therapist in order to gain insight into which parts of the interface were likely to be most problematic for aphasics. Some evaluation outcomes are discussed below:
• Cluster screen. It was made possible to delete pictures by using a red-cross button in the upper right corner of the interface. However, it is not possible to add pictures to a cluster. According to the speech therapist: "They (aphasics) are capable of deleting non-valuable pictures; however, they are not capable of uploading a picture (from a database)."
• Icons. We chose to show only the most frequently used icons; other icons could be made visible by clicking on "more". The corresponding speech therapist comment was: "Although the icons that are always shown are enough, you should take into account that aphasics will forget the other icons (of other categories) if only the most frequently used ones are shown. To help them remember that there are additional icons, you might consider showing the most used icon per category and distinguishing categories by background colour."
The speech therapist also provided feedback on some parts of the recorded conversations, which led to some general remarks. These remarks were not taken into account when redesigning Co-creation, but were suggestions for testing with aphasics.

• Gestures. The participants in our experiments gestured a lot, and the observed variation in gestures is unlikely for people with aphasia, as their gesturing is often also affected by their broken language system. Gesturing is also not prominent in Dutch culture, as excessive gesturing makes you stand out or look weird. Furthermore, aphasics' gestures are no longer that fluid due to their right-sided paralysis.
• Sharing/editing with icons. Although it might help aphasics to review their pictures before sharing them with their partner, the speech therapist does not expect them to edit the pictures with the icons beforehand. She thinks that aphasics cannot use the icons on their own, so the icons will mostly be used to respond to questions from their partner. However, this should be empirically confirmed.
Figure 15.2. The redesigned interface of Co-creation
15.5 Conclusion

Co-creation is valuable for aphasics wishing to share their experiences for the following reasons. Firstly, unlike existing communication aids, such as the communication book or TouchSpeak, Co-creation makes use of several means of communication, and the content is determined and delivered by the user. This creates a shared context between the user, the discussion partner and the application, which supports the ability to share experiences. Although there are several applications that support clustering and editing personal pictures, these have drawbacks for aphasics; in particular, the amount of information that has to be processed in order to interact with them is too great for people with aphasia, given their possible cognitive disabilities.

Secondly, by clustering the pictures per activity, the amount of information shown at once is reduced, which helps aphasics to concentrate on one activity at a time. Furthermore, the interface of Co-creation spreads out the information available by minimising components that are not in use. Limiting the amount of information by clustering and spreading will be beneficial for sharing experiences, given aphasics' possible cognitive problems.

Thirdly, Co-creation uses the pictures as a starting point for the conversation. The pictures are the most important means of communication and tell the global storyline. Nevertheless, pictures cannot tell every detail, and in that situation the icon, typing and drawing functions are means of specifying additional information. The discussion partner can heavily affect the success of sharing. If he or she listens carefully and supports the user by asking rhetorical questions, the communication will be more successful, because the user is more at ease and does not get frustrated.

The experiments that were conducted during this project shed light on the collecting, editing and sharing process of people who had difficulty with verbal and written communication. However, more of the same experiments should be conducted with people who have aphasia, since they often have cognitive disabilities that could lead to different actions and results during the collecting, editing and sharing processes. For example, it could be the case that aphasics do not use the icons to edit the pictures before sharing the experience, but use them as their partners ask questions during the sharing stage. Nevertheless, the experiments conducted can be used as valuable input for further research. Additionally, it could be checked whether Co-creation can be used as an additional application on AAC devices such as TouchSpeak or MindExpress (www.jabbla.com). Using digital pictures as a means of communication through the Co-creation application would create more opportunities for aphasics to remove the communication barrier.
15.6 Acknowledgments

We thank the speech therapist for her support. We are grateful to the research participants for their time and cooperation.
15.7 References

Allen M, McGrenere J, Purves B (2008) The field evaluation of a mobile digital image communication application designed for people with aphasia. ACM Transactions on Accessible Computing, 1(1): 1–26
Armstrong E, Ulatowska H (2007) Making stories: evaluative language and the aphasia experience. Aphasiology, 21(6): 763–774
Boyd-Graber JL, Nikolova SS, Moffatt KA, Kin KC, Lee JY, Mackey LW et al. (2006) Participatory design with proxies: developing a desktop-PDA system to support people with aphasia. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'06), New York, NY, US
Daemen E, Dadlani P, Du J, Li Y, Erik-Paker P, Martens J et al. (2007) Designing a free style, indirect, and interactive storytelling application for people with aphasia. Lecture Notes in Computer Science, 4662(2007): 221
Hillis AE (2007) Aphasia: progress in the last quarter of a century. Neurology, 69: 200–213
Landry BM, Guzdial M (2006) iTell: supporting retrospective storytelling with digital photos. In: Proceedings of the 6th Conference on Designing Interactive Systems (DIS'06), University Park, PA, US
Lasker J, Beukelman D (1999) Peers' perceptions of storytelling by an adult with aphasia. Aphasiology, 13(9–11): 857–869
Levin T, Scott BM, Borders B, Hart K, Lee J, Decanini A (2007) Aphasia talks: photography as a means of communication, self-expression, and empowerment in persons with aphasia. Topics in Stroke Rehabilitation, 14(1): 72–84
Light J (1988) Interaction involving individuals using augmentative and alternative communication systems: state of the art and future directions. Augmentative and Alternative Communication, 4(2): 66–82
Mckelvey M, Dietz A, Hux K, Weissling K, Beukelman D (2007) Performance of a person with chronic aphasia using personal and contextual pictures in a visual scene display prototype. Journal of Medical Speech Language Pathology, 15: 305–317
MindExpress. Available at: www.jabbla.com/software/indexlang.asp (Accessed on 17 November 2009)
Newell AF, Gregor P (2000) User sensitive inclusive design – in search of a new paradigm. In: Proceedings of the Conference on Universal Usability (CUU'00), Arlington, VA, US
Reed D, Monk A (2006) Design for inclusion. In: Clarkson J, Langdon P, Robinson P (eds.) Designing accessible technology. Springer, London, UK
TouchSpeak. Available at: www.touchspeak.co.uk/ (Accessed on 17 November 2009)
Chapter 16
An Investigation into Stroke Patients' Utilisation of Feedback from Computer-based Technology

J. Parker, G.A. Mountain and J. Hammerton
16.1 Introduction

Strokes are the largest single cause of disability in the UK (DH, 2007). It is estimated that the incidence of first strokes will increase by 30% between 2000 and 2025 (Truelsen et al., 2006). Evidence indicates that intensive post-stroke rehabilitation improves function, independence and quality of life (Kwakkel, 2004; Pollock et al., 2007), but according to the Chartered Society of Physiotherapy, the demand for rehabilitation outweighs supply (CSP, 2007).

Nevertheless, recent technological advances have promoted the development of tools that may potentially complement the direct efforts of therapists and could in the future even act as surrogates (Liebermann et al., 2006). They include robot-assisted movement therapy (Kwakkel et al., 2008), virtual reality technology (Henderson et al., 2007) and inertial tracking devices (Mountain et al., 2010). These systems have the potential to provide consistent, detailed, individually adapted feedback to the user (Intercollegiate Stroke Working Party, 2008) in the absence of the therapist. However, much of the evidence supporting conventional post-stroke rehabilitation suggests that feedback is provided verbally, face to face, by a therapist and typically involves hands-on therapy (Hartvelt and Hegarty, 1996; Ballinger et al., 1999; DeJong et al., 2004; Wohlin-Wottrich et al., 2004). The demand for post-stroke rehabilitation cannot currently be met, and other solutions are necessary. Additionally, there are unanswered questions regarding the reliance that a stroke survivor can have upon a therapist for both motor learning skills (Magill, 2007) and the self-management of the resultant long-term disability (Jones, 2006).
16.1.1 The SMART Rehabilitation Technology System

The SMART rehabilitation technology system includes a wireless sensor system that allows for 3D real-time computer feedback (www.thesmartconsortium.org, The SMART Consortium, 2008). The system uses three matchbox-sized inertial tracking devices (comprising accelerometer, magnetometer and gyroscope technology) worn on the upper arm, wrist and chest to record kinematic data from users undertaking a functional activity (Willmann et al., 2007). Following an iterative design process (Mountain et al., 2006), garments were created to hold the sensors: these are held in place using an arm sleeve, a single-strap wrist band and a Velcro-fastening vest (Figure 16.1). The SMART rehabilitation technology system allows motion patterns to be identified, recorded, analysed and corrected by both the therapist and the patient (Mountain et al., 2010). The program is designed to enable recording and playback using a manikin presentation (Figure 16.2a-b). Advantageously, this feedback allows movements performed in the absence of a therapist to be recorded and analysed by both the patient and the therapist (Zhou et al., 2006). Figure 16.1 illustrates a participant using the equipment.
Figure 16.1. User participation with The SMART rehabilitation technology system in the home-setting
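As a rough illustration of the kind of kinematic quantity such sensors make available, the sketch below estimates upper-arm elevation from a single quasi-static accelerometer sample. This is a deliberate simplification and not the SMART system's actual processing, which fuses accelerometer, gyroscope and magnetometer data; the mounting axis and sign conventions assumed here are illustrative only.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Illustrative sketch only: derive an arm elevation angle from the gravity
// direction reported by a (quasi-static) accelerometer. Dynamic acceleration,
// drift correction and full sensor fusion are ignored here.
using Vec3 = std::array<double, 3>;

double norm(const Vec3& v) {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

// Angle in degrees between the assumed humerus axis (sensor +x, taken to point
// from shoulder towards elbow) and the measured gravity direction:
// roughly 0 deg with the arm hanging down, 90 deg with the arm horizontal.
double armElevationDegrees(const Vec3& gravityInSensorFrame) {
    const Vec3 humerusAxis = {1.0, 0.0, 0.0};   // assumption about sensor mounting
    double dot = gravityInSensorFrame[0] * humerusAxis[0]
               + gravityInSensorFrame[1] * humerusAxis[1]
               + gravityInSensorFrame[2] * humerusAxis[2];
    double cosAngle = std::max(-1.0, std::min(1.0, dot / norm(gravityInSensorFrame)));
    return std::acos(cosAngle) * 180.0 / 3.14159265358979323846;
}
```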
16.1.2 Previous Research Using the SMART Rehabilitation Technology System

Since 2003 the SMART Consortium (2008) has carried out ongoing research to develop and test a prototype telerehabilitation device (the SMART rehabilitation technology system) for therapeutically prescribed rehabilitation of the upper limb. The initial iteration involved a collaboration of professionals and end users to establish a 'proof of concept' for the development of a prototype and computer interface and to improve the usability of the equipment (Mountain et al., 2006). Further research, involving a series of focus groups (seven), usability tests (21) and home visits (four per subject) with stroke survivors (n = 4, 2 male and 2 female) and their carers, was performed to explore the usability of the SMART system in the home. Following further development, the four original subjects who had carried out
home testing performed usability tests of a wireless prototype. The final stage involved semi-structured interviews with a further four stroke survivors in their homes (n = 4, all male). These interviews provided insightful accounts with regard to: accommodating the equipment in the home; attaching the sensors; using the computer; interpreting the screen presentation; and the overall use of the equipment (Mountain et al., 2010). These preliminary findings highlighted the requirement for further work to explore how well stroke patients can interpret and utilise the information presented to them to facilitate their rehabilitation. Given the major developments in gaming devices, and in particular the Nintendo® Wii™, since the commencement of work on the SMART system in 2003, it was also important to take into account changing user expectations of the device. The two case studies presented in this paper describe two of a series of further user tests which focused upon use of the screen-based decision support interface during the upper limb rehabilitative process with people recently discharged from community stroke rehabilitation services.
16.2 Methodology

The two case studies involved the collection, analysis and triangulation of data from semi-structured interviews, observations and field notes (Yin, 2003). Using a constructivist paradigm (Guba and Lincoln, 2005) for data analysis, this study has been conducted to explore the multifaceted question of how extrinsic feedback from computer technology can be utilised for home-based stroke rehabilitation.
16.2.1 Participants

Both of the volunteer participants were recruited through a private physiotherapist and had experience of using the SMART equipment from their involvement in earlier research (Mountain et al., 2010). Neither of the participants had any apparent cognitive or communication impairment(s). An informed consent form was signed by each participant and their carer separately.

Participant One: Mr. Black was a 70-year-old male stroke survivor (24 months post-stroke). He was a retired insurance broker who was familiar with using computers. Mr. Black had suffered a right cerebrovascular accident (CVA) resulting in left hemiparesis. However, at the time of the study he had recovered sufficiently to be independently mobile with the use of a walking stick. His left upper limb had approximately 80° of shoulder flexion, 45° of abduction and full elbow flexion. His carer and wife (Mrs. Black) was a 69-year-old company director.

Participant Two: Mr. White was a 75-year-old male stroke survivor (60 months post-stroke). He was a retired economist who was also familiar with using computers. Mr. White had suffered a right CVA resulting in left hemiparesis. He had also recovered to be independently mobile with the use of a walking stick and had approximately 120° of shoulder flexion, 90° of abduction and full elbow flexion of the left upper limb. His
carer and wife (Mrs. White) was a 75-year-old retired teacher. NB: the names of the participants have been changed for anonymity.
16.2.2 Research Setting

Following successful acquisition of all necessary university ethical and governance approvals, all contact with participants and their carers took place in their homes. This allowed the participants to be interviewed and to use the computer system in a natural environment, as well as meeting the aim of the study: home-based rehabilitation using technology.
16.2.3 Procedures

In order to elicit individual opinions, the researcher conducted semi-structured interviews, in separate rooms, with the participants and their carers in their homes, to establish what forms of information they had previously been given during community-based rehabilitation, and especially what information they had been left with to facilitate rehabilitation in their home outside scheduled therapy sessions. The researcher then provided a SMART system for each participant to use for up to one week in his own home. Each was given two prescribed physiotherapy exercises to perform: reaching forward and reaching sideways. During and following the prescribed activity, a qualitative chart and a graph are displayed on a laptop computer screen (Figure 16.2).
Figure 16.2. Feedback is provided via real-time 3-D images. A qualitative chart and a graph are displayed on a lap-top computer screen.
At the end of the testing period (three days for Mr. Black and five days for Mr. White) the researcher conducted further semi-structured interviews with the participants and carers to allow them to express their views of the system and specifically of the decision support interface and the feedback it provided.
16.2.4 Recording

Semi-structured interviews conducted before and after user testing were audiotaped to ensure the transcriptions were presented verbatim. In addition, observations and field notes were taken of the person's physical abilities as they moved around their home and of other factors such as their functional ability and activities of daily living, to account for informal discussion and their physical behaviour.
16.2.5 Data Analysis

The interview data was analysed using thematic analysis (Pope and Mays, 2006). Thematic analysis allowed themes to emerge from the interviews to describe the data, as well as allowing examination of interconnections and relationships (Pope et al., 2006).
16.2.6 Findings

16.2.6.1 Themes

The aim of the first interview (before equipment use) was to explore user experiences of the feedback they had received during stroke rehabilitation and how they had been able to use this. The aim of the second interview (after equipment use) was to gather their opinions of the experience of using the prototype technology. The following themes emerged during the pre-testing (before technology use) interviews with all four respondents (two users and two carers).

16.2.6.2 Reliance on External Help

Both participants and their carers described the need to rely on external sources of information and feedback; the stroke survivors sought confirmation of progress, and the carers sought sources of help and advice. All four described how they found it difficult to notice recovery as progress was slow. Carers expressed the need for community-based rehabilitation to continue for longer:

'...the team that came around were very effective and if that could have gone on longer it would have been good. They seemed to just stop suddenly from virtually 5 days a week to 1 day a week.' (Mrs. White)
16.2.6.3 Recovery

Recovery and indications of recovery were described by the people who had experienced stroke and their carers in two distinct ways. Mr. Black described how his need to get better was affected by his living circumstances. He suggested that because he has a carer to help with everyday tasks such as making meals and hot drinks, his desire to carry out these tasks independently was unnecessary. Mr. White, however, described how his need for further recovery had diminished over the years and that his aim was not to deteriorate any further. The stroke survivors both described how they measured further recovery through the functional and social activities they could carry out. Carers, however, measured recovery by how much activity the person they cared for could achieve over a day/week.

16.2.6.4 Current Provision

All the participants and carers described how they found it difficult to remember what exercises to do and how to do them in the absence of the therapist. To help with this, they relied on the observation of therapists to understand what exercises to do, how to do them and whether they were doing them correctly.

16.2.6.5 Conflict of Roles

Throughout the interviews both carers described the conflict between being a wife to their husband and being a carer for a stroke survivor. One carer suggested that using the computer might help to motivate her husband and could even empower her to provide feedback on his performance:

'…that machine would probably make him go and do it and if it was recorded you'd be able to say well you've been useless that week.' (Mrs. Black)
Unfortunately, due to a family bereavement the researcher was unable to interview Mr. White and his carer following equipment use. However, the multiple methods of data collection employed for both case studies (including researcher observation) confirmed that he was able to carry out an exercise session in the presence of his carer.

16.2.6.6 Preferred Feedback

The preferred presentation of visual feedback differed between the two stroke survivors. Mr. Black explained how he preferred the chart display to the replay of the manikin. His reason for this appeared to be two-fold: firstly, by seeing the graphs he could see whether he had achieved the goal the therapist had set him and he could track his changes over a number of attempts; secondly, he said that he did not like to be reminded of his body image, "Oh yes! [laughs loudly] A picture of a shabby figure... I knew which was the bad one". He also suggested that because the graphs illustrated when he had done well, he enjoyed the reward of immediate feedback on this, "...when you get a green it gives you an uplift doesn't it [laughs]."
However, Mr. White explained that he did not like the graphs as they did not mean anything to him. This was mainly because they did not have any values on the x and y axes, which made it hard for him to quantify his changes.

16.2.6.7 Motor Learning

Both participants were able to detect their own movements and change their movements as a result of the feedback they received. After watching the replay of the avatar, Mr. Black was able to analyse his posture and arm movement, "Well it's me posture isn't it a lot of it wasn't it... It showed that I wasn't getting the proper stretch that I ought to be getting." Mr. White also commented on his movement, "I appear to be leaning that way and my arm isn't as high as that one." Mr. White was able to refer to the avatar to increase his shoulder flexion and keep his trunk stable, which he then confirmed using the results graphs, whereas Mr. Black only used the graphical feedback to analyse his movements and then made the movement changes on his second attempt. Both participants were keen to have a second attempt to improve on their first attempt. Both carers also expressed an interest in the feedback. Speaking to her husband, Mrs. Black commented, "Oh you can see the difference can't you... that's better you're getting higher now".
16.3 Discussion

There are a number of technological systems that are undergoing development and testing for stroke rehabilitation, and there has been some consideration of how and what these systems should deliver (Johnson et al., 2007; Kemna et al., 2009; Timmermans et al., 2009). Existing literature suggests that extrinsic feedback, such as visual and auditory feedback, knowledge of results and knowledge of performance, is an important component of stroke rehabilitation (Van Vliet and Wulf, 2006). However, there is no evidence exploring the extent to which stroke survivors can utilise extrinsic feedback delivered using technology.

The results of this study have provided an initial insight into some of the factors that may influence the utilisation of feedback provided through computer technology in the home: in particular, how the utilisation of feedback may be affected by internal influences relating to the users – their personal goals, their actual and perceived progress of recovery, the type of feedback they prefer and the time since their stroke – as well as by external factors such as the practical issues inherent in using technological equipment in the home, the usability of the equipment and their social circumstances. For example, Mr. Black explained how his need to carry out functional activities is reduced by the social support he receives. Mr. White suggested that the length of time since his stroke may affect his improvement, as he only expects to be able to maintain his current performance.

This study also highlighted the need to explore how the interaction between others involved in the rehabilitation process, namely the carer/close family member and the therapist, may also influence the utilisation of feedback. For
example, Mrs. Black suggested that using technology may empower her to have a more direct influence during the process, whereas the therapist may not be receptive to the concept of using technology in what has traditionally been a therapist-led 'hands-on' experience.
Figure 16.3. Postulated influences on the utilisation of computer feedback
Nonetheless, initial findings have confirmed the value of individually adapted feedback. For example, both of the participants in this study were similar in that they: both suffered a right-sided CVA; were of a similar age (70/75); had a moderate range of movement in the left arm; were independently mobile with the use of a walking stick; had a professional background; and had previous experience of using a computer. However, despite these similarities, the participants expressed differing preferences for the type of on-screen feedback. Nevertheless, both of them were able to relate to the avatar and/or feedback charts and alter their physical performance. Significantly, these two case studies have revealed how both participants demonstrated the potential for physical behaviour change even after minimal use and a significant time post-stroke. Further exploration may elicit how these preferences and motor learning opportunities are utilised by the stroke survivor to promote recovery and potential behaviour change, both physically and socially. With the growing interest in gaming technology such as the Nintendo® Wii™, future users may be more receptive to computer interaction; however, they may also have higher expectations in terms of the interface and motivational components of rehabilitative devices.
16.4 References

Ballinger C, Ashburn A, Low J, Roderick P (1999) Unpacking the black box of therapy – a pilot study to describe occupational therapy and physiotherapy interventions for people with stroke. Clinical Rehabilitation, 13: 301–309
CSP (2007) A new ambition for stroke. A consultation on national strategy: response from the Chartered Society of Physiotherapy. London, UK
DeJong G, Horn SD, Gassaway JA, Slavin MD, Dijkers MP (2004) Toward a taxonomy of rehabilitation interventions: using an inductive approach to examine the "black box" of rehabilitation. Archives of Physical Medicine and Rehabilitation, 85: 678–686
DH (2007) National stroke strategy. Department of Health, Crown Copyright, London, UK
Guba EG, Lincoln YS (2005) Paradigmatic controversies, contradictions, and emerging influences. In: Denzin NK, Lincoln YS (eds.) The Sage handbook of qualitative research, 3rd edn. Sage, Thousand Oaks, CA, US
Hartvelt A, Hegarty JR (1996) Augmented feedback and physiotherapy practice. Physiotherapy, 82(8): 480–490
Henderson A, Korner-Bitensky N, Levin M (2007) Virtual reality in stroke rehabilitation: a systematic review of its effectiveness for upper limb motor recovery. Topics in Stroke Rehabilitation, 14: 52–61
Intercollegiate Stroke Working Party (2008) National clinical guidelines for stroke, 3rd edn. Royal College of Physicians, London, UK
Johnson MJ, Feng X, Johnson LM, Winters JM (2007) Potential of a suite of robot/computer-assisted motivating systems for personalized, home-based, stroke rehabilitation. Journal of NeuroEngineering and Rehabilitation, 4(6)
Jones F (2006) Strategies to enhance chronic disease self-management: how can we apply this to stroke? Disability and Rehabilitation, 28(13): 841–847
Kemna S, Culmer PR, Jackson AE, Makower S, Gallagher JF, Holt R et al. (2009) Developing a user interface for the iPAM stroke rehabilitation system. In: Proceedings of the 11th International Conference on Rehabilitation Robotics (ICORR 2009), Kyoto, Japan
Kwakkel G, Kollen BJ, Krebs HI (2008) Effects of robot-assisted therapy on upper limb recovery after stroke: a systematic review. Neurorehabilitation and Neural Repair, 22: 111–121
Kwakkel G, van Peppen R, Wagenaar RC, Wood Dauphinee S, Richards C et al. (2004) Effects of augmented exercise therapy time after stroke: a meta-analysis. Stroke, 35: 2529–2539
Liebermann DG, Buchman AS, Franks IM (2006) Enhancement of motor rehabilitation through the use of information technologies. Clinical Biomechanics, 21: 8–20
Magill RA (2007) Augmented feedback. In: Magill RA (ed.) Motor learning and control: concepts and applications, 8th edn. McGraw Hill, London, UK
Mountain GA, Mawson SJ, Hammerton J, Ware PM, Zheng J, Davies R et al. (2010) Exploring the usability of a prototype technology for upper limb rehabilitation following stroke. Journal of Engineering Design (in press)
Mountain GA, Ware PM, Hammerton J, Mawson SJ, Zheng J, Davies R et al. (2006) The SMART project: a user led approach to developing applications for domiciliary stroke rehabilitation. In: Clarkson PJ, Langdon P, Robinson P (eds.) Designing accessible technology. Springer, London, UK
Pollock A, Baer G, Pomeroy V, Langhorne P (2007) Physiotherapy treatment approaches for the recovery of postural control and lower limb function following stroke. Cochrane Database of Systematic Reviews, Issue 1
Pope C, Mays N (eds.) (2006) Qualitative methods in health research. In: Pope C, Mays N (eds.) Qualitative research in health care, 3rd edn. BMJ Books, Blackwell Publishing Ltd, Oxford, UK
Pope C, Ziebland S, Mays N (2006) Analysing qualitative data. In: Pope C, Mays N (eds.) Qualitative research in health care, 3rd edn. BMJ Books, Blackwell Publishing Ltd, Oxford, UK
The SMART Consortium (2008) Available at: www.thesmartconsortium.org (Accessed on 10 January 2009)
Timmermans AAA, Seelen HAM, Willmann, Kingma H (2009) Technology-assisted training of arm-hand skills in stroke: concepts on reacquisition of motor control and therapist guidelines for rehabilitation technology design. Journal of NeuroEngineering and Rehabilitation, 6(1)
Truelsen T, Piechowski-Jozwiak B, Bonita R, Mathers C, Bogousslavsky J et al. (2006) Stroke incidence and prevalence in Europe: a review of available data. European Journal of Neurology, 13(6): 581–598
Van Vliet P, Wulf G (2006) Extrinsic feedback for motor learning after stroke: what is the evidence? Disability and Rehabilitation, 28(13–14): 831–840
Willmann RD, Lanfermann G, Saini P, Timmermans A, te Vrugt J, Winter S (2007) Home stroke rehabilitation for the upper limbs. IEEE Engineering in Medicine and Biology Society, 4015–4017
Wohlin-Wottrich AW, Stenstrom CH, Engardt M, Tham K, Von Koch L (2004) Characteristics of physiotherapy sessions from the patient's and therapist's perspective. Disability and Rehabilitation, 26(20): 1198–1205
Yin RK (2003) Case study research: design and methods, 3rd edn. SAGE Publications, London, UK
Zhou H, Hu H, Tao Y (2006) Inertial measurements of upper limb motion. Journal of Medicine, Biological Engineering and Computing, 44: 479–487
Chapter 17
How to Make a Telephone Call When You Cannot Operate a Telephone

T. Felzer, P. Beckerle and S. Rinderknecht
17.1 Using a Scanning-based Environment Control System to Interact with the Immediate Surroundings

17.1.1 Who Needs this Kind of Alternative Input Device and Why

Modern technology offers a lot of opportunities to initiate and maintain relationships. Internet chat and email are text-based services utilised to "live a social life". In combination with voice-over-IP technology, the Internet also supports oral conversations – i.e., telephone calls – in an easy and convenient manner. The potential ease of use, of course, applies to anyone, but, in particular, it has arguably never been easier for someone with a disability to fully participate in society than today.

However, making this technology accessible to users with different impairments requires the design of appropriate alternative interface devices for operating a computer and also the development of suitable software. For example, a person with severe physical impairments needs something that demands as little physical effort as possible, while speech generation helps people who cannot speak intelligibly – and, by the way, also blind people (in a different context) – and speech recognition opens the door for deaf and hard-of-hearing persons.

This chapter deals with a scanning-based environment control system – called 3dScan – which has been around for almost two years. The software was originally designed to provide an effortless way for physically disabled persons to interact with their immediate environment – e.g., to switch arbitrary electrical appliances on or off, to use the computer as a universal remote control, or to have a synthesised voice "speak for them". Thanks to a recent change in its data acquisition component, the tool is now ready to be extended and converted into a
universal assistant allowing persons with different disabilities to make telephone calls independently. In addition, the software can also be employed by able-bodied individuals who are temporarily unable or unwilling to utilise a standard phone (or, e.g., to manually control the TV remote). The chapter presents the current progress of the authors’ work in this respect. After a discussion of related work on environment control in the next subsection, 3dScan is described as a whole, followed by a coarse explanation of the program organisation – which is based on row-column scanning. Next, the interaction technique is presented with a special focus on the mentioned acquisition change. This is followed by a closer look at the system’s “Telephone” module, comprising proposed as well as already implemented functionality, before the chapter is concluded with a brief summary, areas of future work, and the referenced literature.
17.1.2 Prior Work of Others on Environment Control and Telephony in the Context of Assistive Technology

The majority of the related work on environment control systems dates back more than ten years (e.g., Bresler, 1992; Lee and Keating, 1994; von Maltzahn et al., 1995; Schraft et al., 1998). Most of the results found then of course still apply today. In particular, the importance of accessible communication and telephony devices and methods for persons with disabilities – to stay in contact with friends and family, to communicate with caregivers, or simply to be able to call for help in case of an emergency – was already emphasised in this early work. However, technological advances make it necessary to constantly question older approaches and to check whether there could be a better solution. And indeed, because of the rapid changes in computer technology and telecommunications, new improvement opportunities do exist, especially concerning telephony applications.

Prior approaches in this area of assistive technology (also Woodburn et al., 1991; Sanford, 1992; Alm et al., 1995) typically interface with a standard ("landline") telephone. This rather contradicts modern habits, as mobile phones are in widespread use today. Almost anyone (at least in Western Europe) owns (or has a close relative who owns) a cell phone nowadays, whereas ownership was considered a "luxury" 10 to 15 years ago. Therefore, extending existing approaches to accommodate cell phones (as, e.g., in a system offered by ETO Engineering®) – if at all possible also making alternative text entry available for message composition – is definitely a necessary step.

On the other hand, Internet telephony is increasingly gaining importance, and position papers like Maguire (2001) or Fernandes (2001) make it clear that this development is accompanied by increased distribution and acceptance within the "assistive technology community". 3dScan is designed as an answer to this trend – the only hardware requirements for having telephone conversations or sending text messages shall be a standard computer (including appropriate accessories, of course) and Internet access.
17.1.3 What 3dScan Looks Like and What it Does

As noted above, this chapter deals with 3dScan, a computer application (developed in C++ under Windows® XP) designed to empower severely disabled persons to control their environment without being continually dependent on the help of others. Its opening screen – depicted in Figure 17.1 – presents four module displays that cover a great deal of the daily activities which users can then perform without the assistance of another person.
Figure 17.1. Snapshot of 3dScan’s opening screen
Each module display contains up to 34 buttons: one "Meta" button and one "Exit" button, to the left and to the right of the module label (i.e., at its very top), plus a grid with four rows and up to eight columns. Only one module is active at any time – after launching the program, the user can decide which module to work with; that module stays activated until its "Exit" button is invoked. The top left "Telephone" module is for making telephone calls – it is reviewed in detail further below.

The "Switch-Board" module interfaces with the X10 technology supplied by Marmitek™. X10 (www.marmitek.com/nl/software/ahsdk_install.zip) uses the power line to send commands to a network of attached hardware components. Each component (e.g., a switchable power outlet) is identifiable by one of 256 "addresses" (consisting of a "home code" between "A" and "P" and a "device code" between "1" and "16"), and it may be associated with an electrical appliance (e.g., an electric door opener) by simply connecting the appliance's plug to the switchable outlet (in this example). As a consequence, the appliance can be switched on and off, and the user of the module can additionally choose the "home code" and label the 16 corresponding "device codes". In the current version, the program only supports the "toggle functionality" of X10 – exploring more sophisticated functions (such as dimming lights) is left for the future.
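To make the addressing scheme concrete, the following minimal C++ sketch shows how a home code and device code map onto one of the 256 X10 addresses. The `sendX10Toggle` function is a hypothetical stub for illustration only; it does not reflect the actual interface of the Marmitek SDK used by 3dScan.

```cpp
#include <cassert>
#include <iostream>

// An X10 address combines a "home code" 'A'..'P' with a "device code" 1..16,
// giving the 16 x 16 = 256 distinct addresses mentioned above.
struct X10Address {
    char homeCode;    // 'A'..'P'
    int  deviceCode;  // 1..16

    int index() const {               // map onto a single index 0..255
        assert(homeCode >= 'A' && homeCode <= 'P');
        assert(deviceCode >= 1 && deviceCode <= 16);
        return (homeCode - 'A') * 16 + (deviceCode - 1);
    }
};

// Hypothetical stub: a real implementation would pass the command to the
// power-line interface via the vendor's SDK.
void sendX10Toggle(const X10Address& addr) {
    std::cout << "toggle appliance at " << addr.homeCode << addr.deviceCode
              << " (index " << addr.index() << ")\n";
}

int main() {
    X10Address doorOpener{'A', 3};    // e.g. the outlet feeding an electric door opener
    sendX10Toggle(doorOpener);
    return 0;
}
```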
The "IR-Remote" module turns the computer into a comfortable universal remote control with the help of HOMElectronics'® TIRA adapter (www.homeelectro.com/tira2.php). The user can create an unlimited number of "layouts" with 32 "IR code" buttons each, label the corresponding buttons arbitrarily, and assign IR signals to the buttons by capturing the codes sent out by an original remote. The idea is that a severely disabled user can apply the pre-trained layouts to, e.g., independently change channels on the TV, while the training itself might be performed by an able-bodied caregiver. The training process is fully implemented – a user study focusing on this module can be found in Felzer et al. (2009). However, the construction of a library of existing layouts remains an open task.

The fourth module allows users who are unable to speak to have oral conversations by having the computer replace their voice. The Speech API of Microsoft® Windows® (www.microsoft.com/speech/speech2007/default.mspx) is used to produce text-to-speech output, so the user can "speak" with any voice installed in Windows® – the approach described in Jreige et al. (2009) might be used to create a personalised voice. In addition to 16 basic pre-defined utterances, such as "Yes" or "Hello", the computer can be made to pronounce arbitrary user-entered phrases – even "speaking while typing" (making real conversations imaginable) is supported. Consolidating the "Text2Speech" module and the "Telephone" module – enabling users who are unable to speak to engage in telephone calls – is an important idea for the future.

Each of those four modules requires certain text entry capabilities at some point: the "Telephone" module for composing text messages (see below), the "Switch-Board" and "IR-Remote" modules for button labelling, and the "Text2Speech" module for customising the spoken text. They all use the same auxiliary editor module – every time the program needs to prompt the user for text input, the display of the currently activated module is replaced with the editor display shown in Figure 17.2.
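Returning briefly to the "Text2Speech" module: as a concrete illustration of the text-to-speech output it relies on, the fragment below uses the Microsoft Speech API (SAPI 5) to speak one phrase with the default installed voice. It is a generic SAPI sketch, not the module's actual code.

```cpp
#include <windows.h>
#include <sapi.h>     // Microsoft Speech API (link with sapi.lib and ole32.lib)

int main() {
    if (FAILED(::CoInitialize(NULL)))
        return 1;

    ISpVoice* pVoice = NULL;
    HRESULT hr = ::CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                    IID_ISpVoice, (void**)&pVoice);
    if (SUCCEEDED(hr)) {
        // Pronounce one of the pre-defined utterances with the default voice.
        pVoice->Speak(L"Hello", SPF_DEFAULT, NULL);
        pVoice->Release();
    }

    ::CoUninitialize();
    return 0;
}
```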
Figure 17.2. The auxiliary text entry (or editor) module
The editor display presents a "Done" and a "Cancel" button at the top, an 8 × 8 button on-screen keyboard on the left, and a panel on the right showing the edited text as well as a "Completion" button (leading to another scanning grid if there is more than one completion candidate). After "Done" or "Cancel", the activated module's display is restored, and the entered text is either confirmed or discarded, respectively. The word completion feature is analogous to the one introduced previously in the context of a different computer application (Felzer and Nordmann, 2006).
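The word completion idea can be sketched generically as a prefix search over a sorted word list, as in the following C++ fragment. This is only an illustration of the general technique, not the implementation of Felzer and Nordmann (2006), and the small dictionary is made up.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Return all dictionary words starting with the given prefix.
// The dictionary must be sorted so the first candidate can be found
// with a binary search; the scan then stops at the first mismatch.
std::vector<std::string> completions(const std::vector<std::string>& dict,
                                     const std::string& prefix) {
    std::vector<std::string> result;
    auto it = std::lower_bound(dict.begin(), dict.end(), prefix);
    for (; it != dict.end() && it->compare(0, prefix.size(), prefix) == 0; ++it)
        result.push_back(*it);
    return result;
}

int main() {
    std::vector<std::string> dict = {"telegram", "telephone", "telephony", "television"};
    for (const auto& word : completions(dict, "telep"))
        std::cout << word << '\n';    // prints "telephone" and "telephony"
    return 0;
}
```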
17.1.4 And Here is How it Works

The invocation of any particular action in 3dScan relies on the selection of the button belonging to that action. On the one hand, this selection can be done by simply clicking on the corresponding button with an appropriate pointing device (for text entry, using the standard keyboard – instead of clicking the buttons of the on-screen keyboard – is also possible, if so desired). On the other hand, to accommodate users with severe physical disabilities, the program supports a variant of row-column scanning (Simpson and Koester, 1999).

Scanning allows users to indirectly select items in the following way: the computer "suggests" the available options (one by one) by cyclically highlighting each of them for a given period of time (the scan-delay, typically between 0.5 and 2.0 seconds). The user can select the highlighted option by issuing an input signal – e.g., pressing a single key or actuating a switch (Baljko and Tam, 2006). Row-column scanning refers to the situation where the selectable items are arranged in a rectangular grid. First, the highlight linearly advances through the rows until the user selects one of them. After that, the columns (actually the items within the selected row) are highlighted in rotation. Another selection signal chooses the resulting item.

The disadvantage of conventional row-column scanning is the large amount of time required to linearly advance through long rows or columns, which can be very tiring and unnecessarily increases the overall selection time. To limit the length of the rows and columns to be scanned, 3dScan first divides the two-dimensional grid into smaller (sub-)groups (e.g., the four quadrants of the editor's on-screen keyboard) and starts every scan cycle by cyclically highlighting the groups. The resulting three-step scanning method (as illustrated in Figure 17.3) shall be called three-dimensional scanning.
Figure 17.3. Conversion of two-dimensional into three-dimensional coordinates
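A minimal sketch of this three-step scan cycle (group, then row, then item) is given below. The `highlight` and `waitForSignal` functions are hypothetical stand-ins for the real user interface – here they simply print the highlighted option and read a line from standard input – so the fragment illustrates only the control flow, not 3dScan's actual code.

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-ins for the real user interface: "highlighting" is just
// printed, and an "input signal" is simulated by entering a non-empty line
// within the scan-delay (an empty line means no signal).
void highlight(const std::string& what, int index) {
    std::cout << "highlighting " << what << " " << index << std::endl;
}
bool waitForSignal(double /*scanDelaySeconds*/) {
    std::string line;
    std::getline(std::cin, line);
    return !line.empty();
}

struct Selection { int group, row, item; };

// One three-dimensional scan cycle: the grid is divided into groups, each
// group into rows, each row into items; the user selects one of each in turn.
Selection scanCycle(int numGroups, int rowsPerGroup, int itemsPerRow,
                    double scanDelay) {
    Selection sel{-1, -1, -1};
    while (sel.group < 0)                                   // step 1: select a group
        for (int g = 0; g < numGroups && sel.group < 0; ++g) {
            highlight("group", g);
            if (waitForSignal(scanDelay)) sel.group = g;
        }
    while (sel.row < 0)                                     // step 2: select a row within the group
        for (int r = 0; r < rowsPerGroup && sel.row < 0; ++r) {
            highlight("row", r);
            if (waitForSignal(scanDelay)) sel.row = r;
        }
    while (sel.item < 0)                                    // step 3: select an item within the row
        for (int i = 0; i < itemsPerRow && sel.item < 0; ++i) {
            highlight("item", i);
            if (waitForSignal(scanDelay)) sel.item = i;
        }
    return sel;   // a confirmation step would follow before the action is executed
}

int main() {
    Selection s = scanCycle(4, 4, 4, 1.0);   // e.g. the editor's 8 x 8 keyboard as four quadrants
    std::cout << "selected group " << s.group << ", row " << s.row
              << ", item " << s.item << std::endl;
    return 0;
}
```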
The hypothesis that the overall selection time can be reduced by introducing an additional step for selecting a (smaller) subgroup first is justified by the following analytical considerations – again using the example of the 8 × 8 button on-screen keyboard of the editor display. With conventional row-column scanning, the selection time for any item in row x and column y is proportional to the sum x + y. Summing over a square grid with eight rows and eight columns, row-column scanning therefore requires a time span proportional to the total coordinate sum S2d of all 64 items (or buttons):
S2d = (1 + 1) + (1 + 2) + … + (1 + 8)
    + (2 + 1) + (2 + 2) + … + (2 + 8)
    + …
    + (8 + 1) + (8 + 2) + … + (8 + 8)
    = 44 + 52 + 60 + … + 100 = 576

An analogous measure for three-dimensional scanning comprises four times the coordinate sum of a 4 × 4 quadrant plus the quadrant index for the 16 buttons in each quadrant:

S3d = ((1 + 1) + (1 + 2) + (1 + 3) + (1 + 4)
    + (2 + 1) + (2 + 2) + (2 + 3) + (2 + 4)
    + (3 + 1) + (3 + 2) + (3 + 3) + (3 + 4)
    + (4 + 1) + (4 + 2) + (4 + 3) + (4 + 4)) × 4
    + 1 × 16 + 2 × 16 + 3 × 16 + 4 × 16
    = 80 × 4 + 10 × 16 = 480

This means that – if all 64 buttons were equally probable – three-dimensional scanning introduces a (theoretical) saving of 96/576 = 1/6 = 16.67% versus conventional row-column scanning. Of course, the advantages of three-dimensional scanning become more pronounced the longer the original rows and columns are. In particular, the "savings" in the "Telephone" display – which basically consists of two 3 × 4 groups – are levelled out by the "losses" associated with every button in the first 3 × 4 group. However, for consistency reasons, three-dimensional scanning is applied to all modules.

Following group selection and ordinary row-column scanning within that group, the computer could go ahead and execute the action associated with the chosen button (e.g., dial a pre-entered phone number), but to be a bit more fault-tolerant, it waits for the user to confirm that the selected button really is the desired one by issuing an additional input signal during the following second. After a successful confirmation, the corresponding action is performed, and the user is notified visually. This "selection cycle" is illustrated in Figure 17.4.
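The two sums above are easy to verify numerically; the short fragment below recomputes S2d and S3d for the 8 × 8 on-screen keyboard and prints 576 and 480.

```cpp
#include <iostream>

int main() {
    // Conventional row-column scanning on an 8 x 8 grid: the cost of the item
    // in row x and column y is taken as x + y.
    int s2d = 0;
    for (int x = 1; x <= 8; ++x)
        for (int y = 1; y <= 8; ++y)
            s2d += x + y;

    // Three-dimensional scanning: the grid is split into four 4 x 4 quadrants,
    // and every item additionally pays the index q (1..4) of its quadrant.
    int s3d = 0;
    for (int q = 1; q <= 4; ++q)
        for (int x = 1; x <= 4; ++x)
            for (int y = 1; y <= 4; ++y)
                s3d += q + x + y;

    std::cout << "S2d = " << s2d << ", S3d = " << s3d << std::endl;  // S2d = 576, S3d = 480
    return 0;
}
```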
Figure 17.4. The invocation of an action belonging to any button in 3dScan coincides with a scan cycle consisting of five steps (illustrated for the "MENU" button of the "Telephone" module – row two, column three). After three input signals (for group selection and row-column scanning within the selected group), the action associated with the chosen button (graphically highlighted by all other buttons being drawn in the background colour) is only executed if confirmed – which is acknowledged by a change in the background colour.
The next section deals with several possible ways the user of the system can issue input signals to select groups, rows or buttons. In addition to several hypothetical approaches, a special focus will be on the solution actually implemented in the current version (based on intentional muscle contractions) and the accompanying data acquisition.
17.1.5 The Underlying Interaction Technique Requires Extremely Little Physical Effort

In addition to clicking on the buttons with the help of a pointing device, 3dScan basically offers two techniques for initiating a scan selection. First, less severely disabled or non-disabled users can trigger selections by pressing the space bar on the standard keyboard. Individuals with very severe physical disabilities are often unable to employ their hands or arms reliably – they can instead utilise the second interaction technique provided by 3dScan: intentional muscle contractions (already introduced in Felzer et al., 2005). To detect these input events, the activity of a single muscle of choice is monitored and constantly compared to a threshold. Whenever the threshold is exceeded, the computer assumes that the muscle has been contracted intentionally and acts accordingly. Figure 17.5 depicts a fragment of about 1.7 s of the monitored muscle-related signal (here: corresponding to the brow muscle), sampled at about 75 Hz.
Figure 17.5. Exceeding the threshold is interpreted as an intentional contraction
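The threshold comparison can be sketched as a simple rising-edge detector over the sampled signal, as below. The sample values and the threshold are made up for illustration, and the refractory period (so that one long contraction is not counted twice) is an assumption of this sketch rather than a documented detail of the authors' implementation.

```cpp
#include <iostream>
#include <vector>

// Count intentional contractions in a sampled muscle-activity signal: an event
// is registered on each rising edge above the threshold, and further samples
// are then ignored for a short refractory period.
int countContractions(const std::vector<double>& samples, double threshold,
                      int refractorySamples) {
    int events = 0, cooldown = 0;
    bool above = false;
    for (double v : samples) {
        if (cooldown > 0) { --cooldown; continue; }
        if (!above && v > threshold) {        // rising edge: intentional contraction
            ++events;
            above = true;
            cooldown = refractorySamples;
        } else if (v <= threshold) {
            above = false;
        }
    }
    return events;
}

int main() {
    // Made-up signal fragment (arbitrary units); at roughly 75 Hz a 0.2 s
    // refractory period corresponds to about 15 samples (3 is used here only
    // to keep the example short).
    std::vector<double> signal = {0.1, 0.2, 0.9, 1.4, 1.2, 0.3, 0.1, 0.2, 1.1, 0.8, 0.2};
    std::cout << countContractions(signal, 0.7, 3) << " contractions detected\n";
    return 0;
}
```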
The novel aspect of the data acquisition is how the sensor is hooked up to the computer. In Felzer's prior work, the sensor was always connected to the microphone input of the sound card, making use of the built-in analog-to-digital (AD) converter. However, that design is not viable in a telephone application, where a "real" microphone is needed. The new design uses an Atmel® (www.atmel.com) microcontroller (among other things) to interface with the USB port. As it analyses the time series of one of the user's physiological functions, 3dScan may be seen as a bio-signal interface (Felzer, 2002). Moreover, due to the low noise-sensitivity of the sensor (which is based on a piezo element), the
activity signal can be amplified by a large factor, so that selection events can be produced with a minimum of physical effort – for instance, in the “brow muscle example”, merely frowning is all it takes to issue noticeable contraction signals. However, the program does require muscle movement and is thus not suitable for everybody. An ambitious longer-term goal is to turn a 3dScan variant into a brain-computer interface (Allison et al., 2007) by analysing the user’s EEG signal and thus allowing brain-based selection (see also Felzer and Freisleben, 2002).
17.1.6 The "Telephone" Module Examined in More Detail

Since the "Telephone" module was not feasible until recently (due to the old acquisition method), much of its functionality only exists in theory and is not fully implemented yet. In this sense, the assistant allowing everyone to make phone calls is merely a "proposed" application (although its realisation is only a matter of time).
The graphical user interface, however, is almost completely finished. Figure 17.6 illustrates the module’s display with a total of 26 buttons (selectable as laid out above), arranged in three subgroups: one group only containing the module’s “Meta” and “Exit” buttons (this group is highlighted in Figure 17.6), one group with miscellaneous buttons, and a numerical button group.
Figure 17.6. The "Telephone" module comprises one 2-button subgroup and two 3 × 4 subgroups
The telephony basis of this module is intended to be Voice-over-IP (VoIP) technology, although a first prototype will support only a single service provider: Skype™. The implementation – using the corresponding API (described in Campbell, 2005) – will be fairly straightforward. The philosophy behind the "Telephone" module is to provide functionality comparable to that of a common mobile phone: the miscellaneous buttons are used to answer or initiate calls, to hang up, to manage past incoming or outgoing calls, or to configure the module with the help of a suitably deep menu tree; the numerical buttons are used either to dial phone numbers, to compose text
messages (as an alternative to the default editor module), or as shortcuts in the menu structure; the panel labelled "Display" works just like a cell phone display (e.g., showing the dialled number). In all four of 3dScan's main application modules, the "Meta" button allows the user to access a supplemental menu supporting higher-level functions – in the "Telephone" module, this refers to managing the account with the service provider. Finally, the "Exit" button deactivates the module (and the user can then select another one).
17.2 Conclusion

This chapter discussed the authors' current work on a scanning-based environment control system allowing persons with severe physical disabilities to perform certain daily activities alone (without being dependent on others), which greatly adds to their perceived quality of life. In particular, the focus has been on the realisation of a telephone assistant for people with various impairments. Future work includes the realisation of the "Telephone" module, enhancements of the acquisition hardware (probably also introducing Bluetooth®) as well as the software improvements mentioned throughout the text, and a user study evaluating the final result. Furthermore, longer-term objectives concerning extended text-to-speech generation and voice recognition (thus accommodating visually and hearing impaired people) and the addition of brain-based input reveal the system's true potential as a tool making telephone calls possible for virtually anyone who cannot use a standard telephone.
17.3 Acknowledgments

This work is supported by DFG grant FE 936/3-2 "The AID package – an alternative input device based on intentional muscle contractions".
17.4 References

Allison BZ, Winter Wolpaw E, Wolpaw JR (2007) Brain-computer interface systems: progress and prospects. Expert Review of Medical Devices, 4(4): 463–474
Alm N, Morrison A, Arnott JL (1995) A communication system based on scripts, plans, and goals for enabling non-speaking people to conduct telephone conversations. In: Proceedings of the 1995 IEEE International Conference on Systems, Man, and Cybernetics (ICSMC'95), Vancouver, British Columbia, Canada
Baljko M, Tam A (2006) Motor input assistance: indirect text entry using one or two keys. In: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2006), Portland, OR, US
Bresler MI (1992) The Tamara system for control of mobility, communication, and environment. In: Proceedings of the Computing Applications to Assist Persons with Disabilities (CAAPWD 1992)
Campbell WJ (2005) SKYPE API GUIDE: learning Skype's plug-in architecture. The Skype Journal, first edition, April 2005
ETO Engineering®: ZOOMMATE switch adapted bluetooth speakerphone accessory for quadriplegics and physically disabled cell phone users. Available at: www.etoengineering.com/quadriplegic_btspeakerphone.htm (Accessed on 19 November 2009)
Felzer T (2002) Verwendung verschiedener Biosignale zur Bedienung computergesteuerter Systeme (Using various kinds of bio-signals for controlling computer-mediated systems). PhD Thesis, Wissenschaftlicher Verlag Berlin, Germany
Felzer T, Fischer R, Grönsfelder T, Nordmann R (2005) Alternative control system for operating a PC using intentional muscle contractions only. In: Proceedings of the 20th Annual International Conference "Technology and Persons with Disabilities" (CSUN 2005), Los Angeles, CA, US
Felzer T, Freisleben B (2002) BRAINLINK: a software tool supporting the development of an EEG-based brain-computer interface. In: Proceedings of the 2002 International Conference on Mathematics and Engineering Techniques in Medicine and Biological Sciences (METMBS'02), Las Vegas, NV, US
Felzer T, Nordmann R (2006) Speeding up hands-free text entry. In: Proceedings of the 3rd Cambridge Workshop on Universal Access and Assistive Technology (CWUAAT'06), Cambridge, UK
Felzer T, Nordmann R, Rinderknecht S (2009) Scanning-based human-computer interaction using intentional muscle contractions. In: Proceedings of the 13th International Conference on Human-Computer Interaction (HCI International 2009), San Diego, CA, US
Fernandes J (2001) One content, three devices, the same need: access to information by people with special needs. In: Proceedings of the 2001 EC/NSF Workshop on Universal Accessibility of Ubiquitous Computing (WUAUC'01), Alcácer do Sal, Portugal
Jreige C, Patel R, Bunnell HT (2009) VocaliD: personalizing text-to-speech synthesis for individuals with severe speech impairment. In: Proceedings of the 11th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2009), Pittsburgh, PA, US
Lee NC, Keating D (1994) Controllers for use by disabled people. Computing and Control Engineering, 5(3): 121–124
Maguire M (2001) Problems in making telecommunications services accessible. ACM SIGCAPH Computers and the Physically Handicapped, 69: 4–7
Sanford CJ (1992) TONETALKER [telephone display]. In: Proceedings of the Computing Applications to Assist Persons with Disabilities (CAAPWD 1992)
Schraft RD, Schaeffer C, May T (1998) Care-O-bot™: the concept of a system for assisting elderly or disabled persons in home environments. In: Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON'98), Aachen, Germany
Simpson RC, Koester HH (1999) Adaptive one-switch row-column scanning. IEEE Transactions on Rehabilitation Engineering, 7(4): 464–473
von Maltzahn WW, Daphtary M, Roa RL (1995) Usage patterns of environmental control units by severely disabled individuals in their homes. IEEE Transactions on Rehabilitation Engineering, 3(2): 222–227
Woodburn R, Arnott JL, Newell AF (1991) Computer-mediated communications for the disabled. IEE Colloquium on CSCW: Some Fundamental Issues: 5/1–5/4
Chapter 18
Husband, Daughter, Son and Postman, Hotwater, Knife and Towel: Assistive Strategies for Jar Opening

A. Yoxall, J. Langley, C. Musselwhite, E.M. Rodriguez-Falcon and J. Rowson
18.1 Introduction

Society is ageing, and this demographic shift creates significant hurdles for designers, engineers, manufacturers and health practitioners. Not least is the development of a society in which the majority of people will have some issues related to a loss of strength, dexterity and, possibly, locomotion, sight and cognition. Public acknowledgement of people with disabilities has changed significantly over recent years, with three parallel drivers: legislation such as the Disability Discrimination Act (1995), advances in assistive technology and rehabilitation, and lastly the understanding in the design community of the need for a change in the way products are designed. With this change in demographics it is assumed that the elderly will become drivers for change, demanding changes in infrastructure, products and services.

Of particular interest in the design community has been the development of a concept called 'Inclusive' or 'Universal' design, promoted by various organisations, notably the Royal College of Art in the UK. The British Standards Institution (BSi, 2005) defines inclusive design as "The design of mainstream products and/or services that are accessible to, and usable by, as many people as reasonably possible… without the need for special adaptation." Underlying this is the principle of independence: inclusive design should allow individuals to utilise and operationalise goods, services and technology themselves, without intervention from others.

One area that receives much media attention with regard to the aged is the accessibility and usability of packaging. The perceived difficulty of opening packaging is acknowledged in the coining of the term 'wrap rage' to describe it (BBC News, 2004).
In a survey by McConnell (2004) for the magazine Yours, bleach bottles and jars were ranked first and second in their perceived difficulty by aged consumers. Work by Yoxall et al. (2006) and Yoxall and Janson (2008), and others such as Rohles et al. (1983) and Voorbij and Steenbekkers (2002), attempted to understand the forces needed by elderly consumers to open packaging. Focusing on the type of lid termed a vacuum lug closure (VLC), Yoxall developed an instrumented jar that could measure torque and demonstrated that the torque produced by a consumer was dependent on factors such as their age, gender and grip choice (Rowson et al., submitted). A typical VLC is shown in Figure 18.1, whilst the equipment used to undertake the test is shown in Figure 18.2. Other researchers have since built on this work, using motion capture to understand in detail what the hand is doing (Fair et al., 2008), using more complex instrumented jars (Kuo et al., 2010; Su et al., 2009), or a mix of both (Tompson and Carse, 2009). Work by Han et al. (2008) has looked at the forces on the finger when using ring-pull cans, and more recent work by Yoxall et al. (submitted) has looked at child-resistant closures (CRCs) and squeezable bottles (Blakey et al., 2009).
Figure 18.1. Typical vacuum lug closure and jar
However, previous research in this field has largely ignored the social context within which people interact with packaging. Hence, the sociological and psychological factors associated with the act of opening jars and cans have been given scant attention. This paper aims to address this gap by reporting the
findings from both quantitative and qualitative research on pack accessibility, addressing user strength as well as describing the experiences of users.
18.2 Methodology

The authors' studies on the accessibility of jars with VLCs were undertaken in two parts: a quantitative experimental study to determine user strength, using the device shown in Figure 18.2, and a more qualitative study assessing consumers' experiences of pack accessibility.
Figure 18.2. Data gathering
The results of the quantitative study have been presented in various papers by the authors (referenced earlier). They showed that for both males and females there was a significant drop in strength with age, and that by 70 years of age 50% of women would not be able to access 50% of the jars they buy. Similarly, the results showed that 15% of women of any age would struggle with 50% of the jars they bought, indicating that openability of jars of this type is a significant problem. In the second survey, 202 people were tested opening a standard VLC, with 122 of those being women, of whom 60 were between 50 and 90 years old. Consumers were asked to complete a short questionnaire and were also videoed during the test to capture any verbal cues to behaviour missed by the questionnaire.
18.3 Findings

When asked whether they avoided products in jars for any reason, only 12 respondents (6%) stated that they did, and this was due to the contents and spillage issues (a dislike of getting vinegar on the hand when opening pickles or beetroot, for example) rather than because of the package format itself. Of
interest to the authors was that the general perception of the VLC and jar was one of high quality compared with, say, a metal food can or plastic pouch, and that it was considered 'high quality', 'safe', 'reliable', 'recyclable' and 're-usable' by 40% of the respondents. However, 61% either perceived this form of packaging as difficult to open or felt that it was 'high quality but a bit awkward'. A small number of respondents also thought that glass jars were heavy and breakable. When asked whether they attempted to open the jars themselves, 5% of those surveyed stated that they did not even consider trying to open the jars.

From the consumer responses it became apparent that when consumers (of all ages) experienced difficulties in accessing jars they resorted to what the authors have termed 'coping strategies', i.e. alternative methods for accessing the contents of the jar. These strategies can broadly be categorised as physical strategies, i.e. the use of a towel or a knife, say, or social strategies, i.e. the use of a relative, partner or a neighbour.
18.3.1 Physical Strategies

When asked what they did if they found a jar difficult to open, 63% of the total test population (13% male and 50% female) resorted to some form of aid, whether from other people or in the form of a tool or object. Within the gender-specific populations, only 40% of the male population used some form of aid for opening jars they found difficult, whilst for the female population this rose to 74% (see Figure 18.3).
Figure 18.3. Percentage of each gender specific population that used various coping strategies. Hence, 60% of the male population did not use any opening tool or aid whilst only 26% of the female population did not use an opening tool or aid.
The most common form of opening aid was a knife, with 13% of respondents suggesting that this was the way they would open jars that proved too difficult to open by hand. An example of one of the physical jar opening strategies is shown in Figure 18.4, where the user is seen opening the jar by hitting the rim of the closure. The next most common physical opening strategies were the use of unspecified tools (9% of the total population) and a cloth of some kind (8% of the total population). The unspecified tools were favoured by women (7% of the total population being women who used these tools), whilst the cloth was an opening aid or coping mechanism employed equally by men and women. Rubber gloves and hot water were also employed by consumers. At the bottom of the popularity list were a range of individual techniques including the use of tin openers, screwdrivers, rubber cones, rubber bands and 'giving it a whack'. These techniques were each used by one or two per cent of the total test population, and all were employed by women, not men.
Figure 18.4. Typical physical opening strategy
In the youngest age range (20 to 29), 12% of the total population professed never to have used any aid for opening jars. For the male population alone this figure is 17%, and for the female population in isolation 9%. In the age range 70 to 79, the proportion claiming never to use opening aids drops to two per cent of the total population – six per cent of the male population alone and one per cent of the female population alone. From the age bracket 80 to 89 upwards, all respondents were seen to use some form of opening aid.
18.3.2 Social Strategies

The second most common answer when looking at the accessibility of jars of this type was the use of a relative (husband, partner, boyfriend, son etc.), with 12% of the total test population (all female) giving this response (see Figure 18.5). A further two per cent of the total test population suggested using other, non-related males, such as utility men or neighbours, to help with opening jars. Again, these were all female respondents. This gives a total of 14% of the test population (all female) seeking help from other people (all male) to aid in the opening of jars.
Figure 18.5. Graph of total test population broken down by age group and coping strategy employed
The most significant difference between the men and women across the age ranges is that women employ a far greater diversity of opening mechanisms to aid with jar opening than men do, and from an earlier age. Table 18.1 shows the number of different coping mechanisms employed by men and women across the various age brackets.
Also of interest is the choice of coping mechanism for men. Male participants aged 40 to 69 tended to go for the knife, screwdriver or door as their principal choice of opening aid, whilst the cloth was the only choice for the 20 to 29 year olds. None of the male participants asked for help from relatives or partners.

Table 18.1. Number of different coping mechanisms used by specific age and gender groups

Age bracket   20-29   30-39   40-49   50-59   60-69   70-79   80-89   90+
Male            1       1       1       3       2       2       3      1
Female          7       4       8       8       7       6       5      1
18.4 Discussion

From this work, it is apparent to the authors that a large proportion of the test population struggled to open these VLC jars, with the majority of these being female. All the people tested who struggled to open the packaging developed some form of coping strategy or mechanism to alleviate or eliminate the struggle. Women develop twice as many coping strategies as men and, because they struggle from an earlier age, have to develop the strategy much younger.

The coping strategies can broadly be split into two groups. On the one hand are physical techniques; users might, for example, use a tool specifically designed for the purpose or some improvised tool such as a knife or cloth. This category also includes techniques that make use of physical or mechanical properties of the closure, such as hot water or tapping with a knife. On the other hand are social coping strategies, such as asking a relative, partner, husband etc., or even asking a utility man such as the postman.

From this work it can be seen that consumers have developed sophisticated coping strategies for the accessibility of packaging. The term 'coping strategy' has been defined as "the specific efforts, both behavioural and psychological, that people employ to master, tolerate, reduce, or minimise stressful events" (Taylor, 1998). Research by Folkman and Lazarus (1980) has shown that people use both physical coping strategies and emotional strategies to combat most stressful events. The predominance of one type of strategy over another is determined, in part, by personal style (for example, some people cope more actively than others) and also by the type of stressful event; for example, people typically employ problem-focused coping to deal with potentially controllable problems, as in the case of opening packaging. Physical assistive strategies, i.e. the use of knives etc., dominate for men; however, women were seen to use more complex ones, including social assistive strategies. An extreme but not uncommon example of a purely social strategy is shown in the following interviews (the participant's voice is prefixed with a P and that of the interviewer with an I):
P: "I can't open those. They're too hard. I give them to Ken to open."
I: "What if Ken's not around?"
P: "I go next door and ask a neighbour."

In addition, some people's social situation requires more mobility:

P: "When I can't open stuff I phone me daughter. She comes round and opens it for me."
I: "Would you buy jars that were easier to open?"
P: "To be honest no. I'd be frightened that my daughter wouldn't come around anymore."
18.5 Conclusion

The research shows the importance of considering the social context in which packaging is dealt with by individuals when designing products. The authors found that elderly people developed coping strategies to access the contents of jars, and that these strategies could be split into two groups: physical and social. For women, difficulty in undertaking the task (in this case opening the pack) meant that coping strategies developed earlier than in men, and by the time women reached 40 to 59 years old a greater range of coping mechanisms was used to combat the task. In general it was found that women used a much greater variety of coping strategies than men, with twice as many different coping mechanisms being described by women as by men (this may be because the women questioned regarded as different two coping mechanisms that men might have classified as one technique).

For those over 70 years of age, there is a greater attachment to the coping mechanism that has been developed. In the case of social aids this is better described as fondness and eagerness for the social interaction that the coping mechanism brings. For tool-based coping mechanisms this is better described as pride and achievement in the ability to innovate solutions and overcome problems. Amongst some of the older test population there was even noted a form of competitiveness in the coping technique used, with some subjects attempting to convince others that their technique was better. Again, amongst the over-70 demographic there is a greater philosophical acceptance of the fact that 'things' are generally more difficult to do and 'this is a fact of life'. In the age range 40 to 59 years, there is greater frustration over jars that are difficult to open; this is the group of participants that was most insistent that the current situation is unacceptable.

Hence some interesting findings can be drawn. Firstly, the seemingly simple issue of designing easy-to-open packaging is in effect more complex. The accessibility and use of packaging, and access to food, is not only a physical issue but was seen by the researchers to involve what may be termed social factors. The implicit finding from this study is that solutions to even simple problems are intrinsically linked to broader issues. Some consumers in this study used packaging as a means to interact with their partner, as a means to be helpful, to
meet a neighbour, or to see their family. This has significant implications for researchers wanting to develop design-based solutions for the aged. It is well understood that the physical needs of the aged consumer become more diverse with age; the authors would argue that so too do the emotional needs, and that we need to understand this context in more detail. The authors believe that if our assistive technological solutions do not provide the same level of satisfaction – both physical and emotional – as those currently employed, then the effectiveness of our interventions will be compromised.
18.6 References

BBC News (2004) 'Wrap rage' hitting the over-50s, 4 February 2004. Available at: http://news.bbc.co.uk/1/hi/business/3456645.stm (Accessed on 25 November 2009)
Blakey S, Rowson J, Tomlinson RA, Sandham A, Yoxall A (2009) Squeezability. Part 1: a pressing issue. Journal of Mechanical Engineering Science, 223(C11): 2615–2625
BSi (2005) Design management systems. Managing inclusive design. BS 7000-6:2005. The British Standards Institution, UK
Fair JR, Bix L, Bush TR (2008) Biomechanical analysis of opening glass jars using kinematics. In: Langdon P, Clarkson P, Robinson P (eds.) Designing inclusive futures. Springer, London, UK
Folkman S, Lazarus RS (1980) An analysis of coping in a middle-aged community sample. Journal of Health and Social Behavior, 21: 219–239
Han J, Nishiyama S, Yamazaki K, Itoh R (2008) Ergonomic design of beverage can lift tabs based on numerical evaluations of fingertip discomfort. Applied Ergonomics, 39: 150–157
Kuo L-C, Chang J-H, Lin C-F, Hsu H-Y, Ho K-Y, Su F-C (2010) Jar-opening challenges, part 2: estimating the force-generating capacity of thumb muscles in healthy young adults during jar-opening tasks. Journal of Mechanical Engineering in Medicine (in press)
McConnell V (ed.) (2004) Pack it in! – Just say no to impossible packaging. Yours Magazine, Emap Esprit, 30 January – 27 February
Rohles FH, Moldrup KL, Laviana JE (1983) Opening jars: an anthropometric study of the wrist twisting strength in elderly. In: Proceedings of the 27th Annual Meeting of the Human Factors Society, Norfolk, VA, US
Rowson JL, Yoxall A (submitted) Hold, clutch or grasp. Applied Ergonomics
Su F-C, Chiu H-Y, Chang J-H, Lin C-F, Hong R-F, Kuo L-C (2009) Jar-opening challenges, part 1: an apparatus for assessing hand and finger torques and forces in a jar-opening activity. Journal of Mechanical Engineering in Medicine, 223: 1–131
Taylor S (1998) Coping strategies. Summary prepared by Shelley Taylor in collaboration with the Psychosocial Working Group. Last revised July 1998. Available at: http://www.macses.ucsf.edu/Research/Psychosocial/notebook/coping.html (Accessed on 30 November 2009)
Tompson A, Carse B (2009) Older adult requirement data – what designers want! In: Proceedings of the International Conference on Inclusive Design (INCLUDE 2009), Helen Hamlyn Centre, London, UK
Voorbij AIM, Steenbekkers LPA (2002) The twisting force of aged consumers when opening a jar. Applied Ergonomics, 33(1): 105–109
Yoxall A, Janson R, Bradbury SR, Langley J, Wearn J (2006) Openability: producing design limits for consumer packaging. Packaging Technology and Science, 19: 219–225
Yoxall A, Janson R (2008) Fact or friction: a model for understanding the openability of wide-mouth closures. Packaging Technology and Science, 21: 137–147
Yoxall A, Rodriguez-Falcon EM, Luxmoore J (submitted) Carpe diem, carpe ampulla: a numerical model as an aid to the design of child resistant closures. Applied Ergonomics
Chapter 19
Email Usability for Blind Users

B. Wentz, H. Hochheiser and J. Lazar
19.1 Introduction

It is estimated that there are nearly 45 million people worldwide who are blind with no residual vision (WHO, 2009). When one considers the unemployment statistics of 70 to 75% for working-age blind individuals in the United States (NFB, 2007a) and 75% for blind and visually impaired individuals in the United Kingdom (RNIB, 2008), the usability of email becomes a major concern due to its intersection with many vocational responsibilities. Studies have shown that email frustrations waste the time of all users (Williams and Williams, 2006), so usability difficulties combined with the required use of email in the workplace may be creating a workplace barrier for blind users.

Blind users face many challenges and obstacles when using computers at home and in the workplace, including difficulties in accessing websites and using corporate software. Email usability challenges for blind users have not been studied in detail, so gaining a better understanding of any problems that exist can lead to improvements in email software interfaces. To help understand the challenges faced, a web-based survey on email usage by blind users was developed in late 2008 and administered by the researchers in early 2009.
19.2 Related Work

19.2.1 Usability Issues for Blind Users

Accessibility refers to users with impairments being able to technically access technology, while usability is a broader topic, relating to true ease of use. This research project is focused on usability. Blind users experience many usability challenges when using technology and software. Assistive technology tools such as screen readers are necessary for them to use most software. A screen reader (such as JAWS or Window-Eyes) is software that will audibly read the visual content on
a computer screen to a blind user. Another method by which blind users access software is through the use of Braille and Braille-supported devices. The challenge with Braille devices is that they are often cost-prohibitive, and the rate of Braille literacy among blind users is very low (an estimated 10 to 20% in the United States) (NFB, 2007b). Computer frustrations that impact the ability to complete a work task can affect the mood of blind users (Lazar et al., 2006). It is also known that blind users are more likely to avoid content when they are aware, in advance, that it will cause them accessibility problems, such as those often presented by dynamic web content (Bigham et al., 2007). Blind users are also often forced to discover some sort of workaround to complete a particular task (Shinohara and Tenenberg, 2007). The usability challenges they face are well illustrated by the Lazar et al. (2007) study on the frustrations that screen reader users experience on the web, which identified poorly labeled links and forms, missing or confusing alternate text for graphics, and problems with PDF files as some of the challenges commonly faced by blind users.
19.2.2 Potential Email Concerns for Blind Users

While sighted users can visually scan and skip over offensive or non-relevant emails in their inbox, blind users must listen to their inbox one email at a time. Spam can also present a security threat, since it is one of the most common carriers of electronic viruses and worms (Stolfo et al., 2006). The obvious primary solution to managing spam is aggressive spam filtering software. The major trade-off with a spam filter is that, by its very nature (filtering email), false positive and false negative identification of spam is always possible (Cormack and Lynam, 2007). It is perceived that blind users tend to use high levels of spam filtering, which may filter out legitimate incoming emails that are sent using a BCC (blind carbon copy) (Lazar et al., 2005). Studies over a period of 10 years have shown that an email inbox full of messages is something that most users struggle with (Fisher et al., 2006). Some of the most common methods for managing email revolve around archiving and storing messages in folders, as well as the common practice of "inbox message visibility", which involves visually scanning the inbox for messages. It is important to determine how blind users handle email organisation, as well as other extended features such as calendars and contacts, in order to develop suggestions for improvements in design.
19.3 Research Methodology

A focus group held in May 2008 at the National Federation of the Blind (NFB, 2007b) in Baltimore, Maryland, identified some possible barriers to email usage (Wentz and Lazar, 2009). Spam was noted to be frustrating and embarrassing at times. Methods of searching for and organising email were discussed as an area
that needed further exploration. Web-based email was noted to be cluttered and often difficult to navigate, and the focus group participants further indicated that there were possible problems with the usage of extended features of many email applications (such as the address book/contacts and calendar). Visual CAPTCHAs (distorted letters used to verify that a user is human and not a security threat) were noted to be very problematic for blind users. CAPTCHAs are often required when registering for web-based email accounts and when sending messages through some providers. The results of this focus group prompted the creation of a web-based survey to further explore email usability for blind users.

The content of the web-based survey included four questions about demographic information, three questions about work and educational information, five questions about general email usage history and habits, 13 questions about extended email features and organisation, seven questions about phishing and spam management, and specific adaptive sections relating to both stand-alone software and web-based email. In addition, there were two questions about using BrailleNote for email access, two questions about social networking websites, and one question asking the respondents what could be changed to make email more usable for them.

Initially, a web-based survey tool called SurveyMonkey (2008) was used to develop the survey, because it is advertised as a Section 508 compliant survey tool. After testing the web-based survey with the JAWS screen reader software, it was determined that SurveyMonkey was not, in fact, entirely accessible. A different tool, SurveyGizmo, was then used to develop the web-based survey, and it was tested successfully. The survey was created using skip logic so that questions that were not relevant to a particular respondent would not be asked; for example, if a respondent indicated that he or she does not use web-based email, the questions relating to web-based email would not be asked. Also, users were permitted to leave questions blank.

The web-based survey was advertised through emails to the state chapters of the National Federation of the Blind. For difficult-to-reach user groups such as those with disabilities, self-selected sampling methods are considered to be valid (Lazar et al., 2010). In addition, since the population of interest is blind users, and there is no central directory of all blind individuals, a true random sampling would be technically impossible. The goal of this survey was to identify problems and concerns, rather than to rank or prioritise them in a statistically robust manner.
19.4 Results

19.4.1 Demographics

Data was gathered from 21 January 2009 through 30 April 2009, and 129 valid responses were received from the survey. The survey respondents were required to be at least 18 years of age, self-identified as blind, and screen reader users unable to use screen magnification. Not everyone answered every question, since they were permitted to skip questions. Each statistic discussed below therefore includes the
number of respondents for the particular survey question. The overall employment rate of 49% of respondents was significantly higher than the national employment average of blind individuals in the United States (NFB, 2007a) and that of blind and partially-sighted individuals in the United Kingdom (RNIB, 2008). Twenty-one out of 128 (16%) reported being enrolled in academic classes at a college or university. Out of 123 respondents who reported gender, 64 (52%) were female, indicating an almost balanced response from both genders. Out of 126 respondents who answered the question on approximate age, most were between 22 and 64 years of age. For data on participant age, consult Table 19.1.

Table 19.1. Approximate age of respondents

Age range       Number of responses   Percentage
65 and over              7                 6
55 to 64                41                33
45 to 54                25                20
35 to 44                20                16
22 to 34                28                22
18 to 21                 5                 4
One hundred and twenty-five respondents reported on the number of years that they had been using email. One hundred and thirteen (90%) of the respondents had been using email for more than five years, so the sample can be considered to consist primarily of experienced email users. Since the survey was of a self-selected group, the respondents may have been more likely to be employed and experienced. Eighty-four (68%) of 124 respondents reported that they checked their email more than three times per day, and 76 (61%) noted that they primarily used email at home. The amount of time spent using email per day was reported by 124 participants. For data on the number of times per day respondents reported checking email, consult Table 19.2.

Table 19.2. Times per day checking email

Times per day      Number of responses   Percentage
Once                        5                 4
Twice                      15                12
Three                      20                16
More than three            84                68
19.4.2 Stand-alone Email Software Usage

A hundred of the survey's 129 respondents (78%) reported using stand-alone email software to some extent (some of these respondents also reported using web-based email). Based on their responses, 42 out of 129 respondents (33%) indicated that they use stand-alone email software exclusively (i.e. no web-based email used). The most popular email software was from Microsoft. Respondents were restricted to selecting only one type (their primary choice). Some examples of the email software in the "other" category included Mozilla Thunderbird, Eudora, and Windows Mail. For data on the type of stand-alone email software used, consult Table 19.3.

Table 19.3. Type of stand-alone email software used

Email software               Number of responses   Percentage
Lotus Notes 8                          1                1
Microsoft Outlook Express             43               43
Microsoft Outlook 2002                 4                4
Microsoft Outlook 2003                28               28
Microsoft Outlook 2007                 7                7
Novell GroupWise 6                     1                1
Novell GroupWise 7                     1                1
Other (not listed)                    15               15
19.4.3 Web-based Email Usage

Seventy-six out of 129 (59%) of the survey respondents reported using web-based email. Seventeen out of 129 respondents (13%) indicated that they use web-based email exclusively. On a scale of one to five (with five being the most important), 43 out of 75 (57%) reported that web-based email was moderately to highly important (selecting three to five on the scale). The most popular specified type of web-based email was Gmail, which was used by 22 out of 74 respondents (30%). Respondents were restricted to selecting only one type (their primary choice). For data on the type of web-based email used, consult Table 19.4.

Table 19.4. Type of web-based email used

Web-based email used         Number of responses   Percentage
AOL                                    2                3
Gmail                                 22               30
GroupWise Webmail                      1                1
Hotmail                               11               15
Outlook Web Access 2003                2                3
SquirrelMail                           2                3
Yahoo Mail                             9               12
Other (not listed)                    25               34
19.4.4 Handling Spam

Survey respondents were asked to select the statement that best described their experience with spam emails. A majority (66 out of 119; 55%) of respondents noted that spam is somewhat of an annoyance to them; however, only 12% indicated that spam is very frustrating and embarrassing. Two percent of those respondents did note that spam is so frustrating that it almost prevents them from using email at all. Respondents were next asked whether they were using a spam filter. Eighty-two out of 127 (65%) users reported using a spam filter. Three percent of the users were not certain whether or not they were using a spam filter. Eighty-two respondents answered the next question, which asked how often their spam filter mistakenly filters out legitimate email; 48% of those responding rarely experienced this problem. For data on how respondents reported spam filtering blocking legitimate email, consult Table 19.5.

Table 19.5. Spam filter blocking legitimate email

Occurrence             Number of responses   Percentage
Never                          12               15
Very rarely                    39               48
Once or twice a week           17               21
Several times a week            5                6
Almost every day                9               11
19.4.5 Extended Email Features

19.4.5.1 Address Book or Contacts

The first question relating to extended features of email usage was whether an email address book was used to manage contacts; out of 127 respondents who answered the question, 110 (87%) reported using an email address book. When asked about the level of difficulty experienced when using the email address book, 58 (53%) out of 109 respondents rated their email address book as a one (on a scale of one to five, with five being the most difficult). The next question was whether there would be a benefit from an email function that would allow a user to automatically add a contact to their address book by checking a box when replying to an email. Out of 118 respondents, 85 (72%) indicated that this would indeed be a useful feature. While some email programs do provide this feature, many do not. Respondents using Microsoft stand-alone email software were asked whether they used the auto-complete feature (automatically remembering and suggesting an email recipient when composing an email), and 46 (56%) reported that they do use that feature. On a scale of one to five (with five being the most satisfied), 37 out of 46 (80%) reported being moderately to greatly satisfied with the feature (selecting three to five on the scale). It should be noted, however (refer to the automatic contact function discussed above), that many blind users would prefer more than simply the auto-complete functionality.

19.4.5.2 Calendar

The next question relating to extended email features was whether the email calendar was used. Out of 128 respondents to this question, only 25 (20%) reported using an email calendar. When asked about the level of difficulty experienced when using the email calendar, only 23 respondents answered the question. The largest group of those (nine; 39%) selected a three on a scale of one to five (with five being the most difficult). Eighteen respondents answered the follow-up question, which asked them to describe the difficulties that they experience with their email calendar. Navigation and labeling were among the problems that users noted. While not every email client supports calendar integration, stand-alone email software such as Microsoft Outlook and web-based email such as Gmail and Yahoo are examples of common products that do support email and calendar integration.

19.4.5.3 Reminders

One hundred and twenty-eight respondents answered a question about the use of email reminders. An email reminder is a method of flagging an email for follow-up at a later time, when the user selects a format and timeframe to be reminded. The reminder itself is typically a pop-up at the predetermined time, and there is often also an audio cue. Ninety-three respondents (73%) indicated that they did not use email reminders. When asked about the ease of use of email reminders, 18 out of 34 respondents (53%) selected a one on a scale of one to five (with five being the most difficult). One of the few problems with email reminders that was described was that the reminder causes the screen reader software to lose the focus (position) on the screen.
19.4.5.4 Storage and Organisation

Sorting and searching for email was noted to be problematic for some users, with 40 out of 123 respondents (32%) indicating a moderate to difficult time sorting email and 49 out of 127 respondents (39%) indicating a moderate to difficult time searching for email (selecting three to five on the scale). One hundred and twenty-seven respondents reported on the amount of email stored in their inbox on a regular basis. Fifty-five (43%) reported storing 50 or more messages in their email inbox; only 20% noted that they kept only a few messages in their inbox. The most common method of organising email selected was responding to or deleting a message immediately, with 101 out of 127 respondents (80%) using this as one of their methods of email organisation. Storing a message in a folder was almost as common, with 92 respondents (72%) using this method. A majority of respondents (67) also noted that they often wait until an email is no longer needed and then delete it. Respondents were permitted to select more than one method of organising email.

Sixty-one percent (50 out of 83 responses) of those using Microsoft stand-alone email software responded that they do not use Microsoft "rules" to organise email messages, and out of the 39% who reported that they do, 54% reported a moderate to high level of difficulty when using this feature (three to five on a scale of one to five, with five being the most difficult). It is possible that sighted users may also experience difficulties with this feature. Respondents using stand-alone email software were asked whether they change email folder settings to make them easier to use with their screen reader software. Out of 84 responses to that question, 48 respondents (57%) indicated that they did not. Additional details concerning which settings users changed were included in the survey; the two most reported changes were changing the sorting order of messages and disabling the preview panel.
19.4.6 Important Improvements Survey respondents were asked an open-ended question regarding what they felt would be the most important changes for blind users that could be made to email software. One hundred of the survey’s respondents offered a variety of suggestions. The most common responses (ordered by the number of times an improvement was suggested) were:
• improved search usability within email applications;
• easier to use contacts/address books;
• easier method of reading and recognising attachments;
• more accessible and usable web-based email;
• overall simplification of the use and navigation of email applications;
• easier to use email calendars;
• a concrete solution to spam email;
• easier to use message “rules”;
• a unique sound to indicate a message of high importance;
• better alternatives to visual CAPTCHAs.
19.5 Discussion The data from this survey revealed several important facets of email that could be improved for blind users. The email calendar functionality is a vital part of enabling an individual to fully collaborate in the workplace, as exemplified by the use of the calendar to schedule meetings and manage one’s schedule, yet a large percentage of blind users are not using an email calendar. Contacts could also become more usable, with 85 out of 118 respondents (72%) reporting that a feature to more easily add contacts automatically to the address book would be a benefit to them (not simply an auto-complete cache of email addresses). It was also clear from the comments of the 76 out of 129 respondents (59%) who reported using web-based email that a focus on creating more usable web-based email would be important to blind users. There were many positive comments about the usability of many of the current web-based email products, but reducing cluttered interfaces, making attachments easier to recognise and simplifying navigation were repeatedly noted as needed improvements. The next step in this research is to obtain more in-depth information about the highlighted problems through formal usability testing of individual stand-alone and web-based interfaces with blind users. The proposed research agenda with desired outcomes for more in-depth usability testing is prioritised in Table 19.6.
Table 19.6. Prioritised agenda and outcomes for usability testing
Priority 1: Obtaining suggestions for simplification of email interfaces
Priority 2: Identifying the navigation problems of web-based email
Priority 3: Examining issues relating to email calendaring
Priority 4: Articulating a solution to easily adding contacts to the address book
19.6 References Bigham J, Cavender A, Brudvik J, Wobbrock J, Ladner R (2007) WebinSitu: a comparative analysis of blind and sighted browsing behavior. In: Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2007), Tempe, FL, US Cormack G, Lynam T (2007) Online supervised spam filter evaluation. ACM Transactions on Information Systems, 25(3): 1–31 Fisher D, Brush A, Gleave E, Smith M (2006) Revisiting Whittaker and Sidner’s “email overload” ten years later. Paper presented at the 20th Anniversary Conference on Computer supported Cooperative Work (CSCW 2006), Banff, Alberta, Canada
Lazar J, Allen A, Kleinman J, Lawrence J (2005) Methodological issues in using time diaries to collect frustration data from blind computer users. Paper presented at the 11th International Conference on Human-Computer Interaction (HCII’05), Las Vegas, NV, US Lazar J, Allen A, Kleinman J, Malarkey C (2007) What frustrates screen reader users on the web: a study of 100 blind users. International Journal of Human-Computer Interaction, 22(3): 247–269 Lazar J, Feng J, Allen A (2006) Determining the impact of computer frustration on the mood of blind users browsing the web. In: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2006), Portland, OR, US Lazar J, Feng J, Hochheiser H (2010) Research methods in human-computer interaction. John Wiley and Sons, Chichester, UK NFB (2007a) Assuring opportunities: a 21st Century strategy to increase employment of blind Americans. National Federation of the Blind. Available at: www.nfb.org/nfb/RandolphSheppard_facts.asp?SnID=2 (Accessed on 20 November 2009) NFB (2007b) Promoting Braille: a campaign to increase literacy for blind youth. National Federation of the Blind. Available at: www.nfb.org/nfb/Louis_Braille_coin_facts.asp?SnID=1758554996 (Accessed on 26 September 2007) RNIB (2008) Increasing employment amongst blind and partially sighted people. Royal National Institute of Blind People. Available at: www.rnib.org.uk/xpedio/groups/public/documents/PublicWebsite/public_campemploy.hcsp (Accessed on 30 June 2009) Shinohara K, Tenenberg J (2007) Observing Sara: a case study of a blind person’s interactions with technology. In: Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2007), Tempe, FL, US Stolfo S, Hershkop S, Hu C-W, Li W-J, Nimeskern O, Wang K (2006) Behavior-based modeling and its application to email analysis. ACM Transactions on Internet Technology, 6(2): 187–221 SurveyMonkey (2008) Are your surveys 508 compliant and accessible? Available at: www.surveymonkey.com/HelpCenter/Answer.aspx?HelpID=247 (Accessed on 9 June 2009) Wentz B, Lazar J (2009) Email accessibility and social networking. In: Proceedings of the 13th International Conference on Human-Computer Interaction (HCI International 2009), San Diego, CA, US WHO (2009) Visual impairment and blindness. World Health Organization. Available at: www.who.int/mediacentre/factsheets/fs282/en/ (Accessed on 30 June 2009) Williams T, Williams R (2006) Too much e-mail! Communication World, 23(6): 38–41
Part V
Inclusion and Healthcare
Chapter 20 The Involvement of Primary Schools in the Design of Healthcare Technology for Children M. Allsop, R. Holt, J. Gallagher, M. Levesley and B. Bhakta
20.1 Introduction There has been an increased emphasis on user involvement within healthcare technology development (Ram et al., 2007). Within the healthcare domain the term “user” can describe a range of different people across all ages, from patients and their families to healthcare professionals, NHS providers and commissioners of services. Within rehabilitation engineering, a “user” of technology is often characterised by the presence of a physical, cognitive, sensory or communication impairment. Although research has focused on the considerations of involving disabled adults in healthcare technology design (e.g. Orpwood, 1990), there has been little research directed towards the design and development of rehabilitation technologies with disabled children. Engaging children in the development of rehabilitation technologies has been reported (Hwang et al., 2004; Weightman et al., 2010), but there are still considerable gaps in our knowledge of the most efficient approaches to engage children in the design process of such technology. The primary school setting is an environment providing the ideal opportunity to investigate how best to involve children with disabilities in healthcare technology research. The presence of children with disabilities in primary schools has increased since the introduction of inclusive education practices, and the benefits of these settings have been described (Lindsay, 2003). Inclusive education leads to the increased use of healthcare technology within the school environment; therefore, the opinions of children without disabilities are particularly valuable to designers to ensure social acceptability of healthcare technology that may be deployed within a school setting. The design of healthcare technologies directed at improving childhood participation in education and play requires involvement of stakeholders at the prioritisation stage of identifying the technology “gap” and at the inception of the design process (Light et al., 2007). Stakeholders include disabled children,
peers without disabilities, and service providers such as teachers (Waller et al., 2005). Teachers can provide invaluable information and insight into the daily factors associated with disability within the classroom and school environment, and they are therefore a critical research resource. Parents can also be involved, but they typically play a smaller role in technology development within educational settings than in development for the home. Smith et al. (2009) highlight that although teachers have been involved in collaborative research projects, their engagement in research can vary considerably depending on competing pressures on their time. General guidelines can be found for planning and conducting social and educational research (Cohen et al., 2007). However, there is a paucity of published literature that provides practical guidance on involving children and teachers in technology design and development research in educational settings. Approaches to usability testing of computer products with children (in that instance at Microsoft) have been reported (Hanna et al., 1997). Procedures and practices that were developed as part of the process of developing technologies to enhance arm function in children with cerebral palsy (Weightman et al., 2010) will be described. Solutions to the barriers identified in a previous study (Allsop et al., 2009) will also be incorporated.
20.2 Background The Charterhouse Rehabilitation Technologies Laboratory was formed as part of the Academic Department of Rehabilitation Medicine at the University of Leeds. The multi-disciplinary team comprises medical staff, allied health professionals, engineers and other non-clinical research staff. The research activity of the laboratory has a focus on development and evaluation of rehabilitation interventions. The team has developed a range of pioneering rehabilitation devices for children with cerebral palsy (CP) and adults with stroke. Recent research has focused on the development of a force feedback “joystick” that guides upper limb exercises in the context of an engaging workspace (Holt et al., 2007). The system is intended to assist children with cerebral palsy who have difficulty with voluntary movement of the upper limb in the completion of reach/retrieve exercises. Working with children and their families was a crucial part of this development and provided the backdrop to the investigation of user centred design in the context of the school environment. As part of the user centred design process we involved children at local primary schools where children with CP and children without disabilities participated in the design of the technology. This paper outlines the approaches used to involve schools in formal processes required during device development without being obtrusive in the education of participating children. The presence of disabled children within the school context permitted open discussions between all children about the design of healthcare technologies, not just technology that has been designed for use with a specific disability. With appropriate planning our research team were afforded the opportunity to canvass opinion from disabled as well as able-bodied children, thus avoiding the cost and
time associated with dedicated user group meetings or workshops (that take place outside of school settings) to gather data from people with a specific disability. Previous research alerts us to the danger of people with disabilities feeling that they are “overresearched” (Mitchell, 2003). Nevertheless deploying a formal user centred design process within school settings has certain advantages. Primary schools within the United Kingdom (UK) are obliged to provide the accessibility hardware and assistive technology that is outlined on any statement of special educational needs for children with disabilities. This policy ensures that the elicitation of ideas and preferences from disabled children, particularly those presenting physical and communication impairments, is supported by appropriate accessibility or assistive equipment. The requirement for such equipment enables researchers to plan what should be fully inclusive and accessible research (National Disability Association, 2003). A key aspect of working in the school environment is contact with teachers. Clearly they are knowledgeable about their students, class dynamics and the accompanying environment and are therefore crucial to the success of any user centred design method used in the school setting. Teachers can also assist in the research process and guide and control a class full of young children in the context of the research project. In addition to this, design research uses methods on the basis of research interest, the size of the population, and the experience of those performing the methods (Druin, 2002). Therefore, provided that researchers involve teachers in trial planning and apply initiative and willingness to adapt methodology within this environment, bespoke research designs can be implemented to ensure that useful information is obtained to inform the design process. Although theorists have questioned the likelihood of success in collaborations between schools and universities (Carlone and Webb, 2005), it is the responsibility of the research team to ensure that the demand placed on any school is not too high. Within our own links with schools, the aim has been to establish a “collaborative labor” (Zigo, 2001). This approach involved the research team engaging with students in the classroom setting to acquire knowledge that can be used to address problems on a larger societal level. The process contained an element of exchange, where ideas were shared between the researchers and the participants. Within this research we initiated discussions surrounding disability and what it can mean for children, alongside introducing concepts of rehabilitation. In exchange, we witnessed the generation of novel ideas and perspectives by the children regarding rehabilitation and associated technology. The rest of the document describes how we involved children and teachers in the design process across six primary schools.
20.3 Preparing Trials with Primary Schools When contacting primary schools there was noticeable variety in how schools handled correspondence from our research team. To arrange research trials, the most useful method of communication was often via telephone conversations with the head
teacher at a school, as this allowed for detailed discussion of the research project. However, many schools also had special needs coordinators whom it was equally important to involve in discussions about the practicalities of our research trials. Establishing such initial links with schools was often hampered by the limited availability of staff in both of these roles. However, once contact was made, most schools expressed strong support and approval for the value and application of our research. To develop this support, the direct benefits and potential outcomes of the project were highlighted, and any meetings to discuss the project in more detail were arranged around the existing schedules of a school. These early discussions provided an opportunity to discuss the time commitments that would be expected of a school. For this project, some schools were asked to be involved in one trial that lasted half a day. However, many schools were approached with the intention of developing more permanent links with the research team. In these instances it was proposed that visits to the school could take place once or twice a year; multiple trials could be used for follow-up research or for involving new cohorts of students. Before our team finalised any research trials in the primary schools, members of the research team attended meetings with head teachers from participating schools. These were used to establish the most effective strategies for integrating research practice into the existing procedures at a school. The current National Curriculum (NC) was suggested as a foundation on which to design research trials. Information outlining the programmes of study and the learning objectives of children within this framework can be found online (www.nc.uk.net). Our own research fitted into the Design and Technology curriculum, where learning objectives included developing, planning and communicating ideas, alongside evaluating processes and products. Identifying such objectives allowed our team to arrange research activities that addressed the aims of a project whilst simultaneously contributing to the current practice and teaching within the primary schools. Lesson plans are typically used by teachers to outline the intended learning objectives of a lesson. During meetings with head teachers, the development of a lesson plan to accompany our trials in schools was encouraged. Doing so clearly communicated to teachers how our research fitted into the NC and provided them with a succinct outline of the trial in a familiar format. Our lesson plans were simple and not over-planned, fitting onto one sheet of A4 paper for ease of reference during a lesson. They contained key points that a teacher or researcher would deliver within the first ten or fifteen minutes of a lesson and detailed an inclusive core activity, i.e. the task that the majority of children should complete. Consideration of the dynamic capabilities of a class was important, and extension tasks were developed. Extension tasks accounted for the spectrum of learning capabilities within a class by ensuring that there were additional tasks for children who were less able to complete the inclusive core activity, or for those who required further tasks. Head teachers estimated three or four children in each of these categories for most classes.
By providing teachers with a draft lesson plan the team had an opportunity to discuss the availability of resources for any proposed activities and review the suitability of any extension tasks within a specific school before the trials took place.
20.4 Planning Trials with Primary Schools When discussions with primary schools had been completed and trial dates set, our team focused on issues regarding consent forms and information sheets for parents /guardians. Although it is advised to provide assent forms to children immediately before their participation on the day of a trial (Ungar et al., 2006), schools often required consent to be gathered from parents for additional tasks involved in research activities. Due to delays that can occur with the return of such forms our team ensured that letters were sent out at least two weeks in advance of the trial day. The parental information sheets were short and succinct and consisted of one to two short paragraphs detailing the general purpose and activities involved in the trial. Although children’s knowledge of disability has received attention in research (e.g. Magiati et al., 2002), concepts such as rehabilitation have yet to be investigated. Given that the research trials relied heavily on discussions and activities relating to complex topics our team ensured that the class teacher discussed these with a class before the day of a research trial. Although research has shown young children to have an awareness of physical and sensory disabilities, it is not until later in development that an awareness of developmental difficulties such as speech and language disorders can be identified (Diamond and Hestenes, 1996). Teachers were asked to hold discussions surrounding a range of disabilities with a class before the research trials, with this information being consolidated in presentations that were provided on the day of the trials.
20.5 Running Trials in Primary Schools In the experience of our team, research in schools should always be approached with flexibility. On occasion the research team had to perform interviews and focus groups with children in cloakrooms or teaching kitchens. Such setups have been known to evoke feelings of uncertainty in the researchers. However, children are often familiar with most areas of their school and have always appeared comfortable when participating in interviews, independent of the environment. The ability to change in accordance with the needs of a school was a necessary requirement to perform research within the primary school setting. Controlling for extraneous variables was often a challenge, and most schools were not naturally suited to research requiring a controlled environment. When delivering information and instructions to children, our team found it most simple to present year one groups (aged 5 to 6 years old) with phonically-based instructions, but applied a similar approach to older groups such as year six (aged 10 to 11 years old). This made explanations more straightforward for children to understand and it matched the methods most frequently used by the teacher to deliver instructions. Research trials led by our team followed a structure (see Figure 20.1), beginning with a presentation to the whole class of children regarding the purpose of our visit. The opening presentation often lasted between 15 and 20 minutes, beginning with an overview of disability and an outline of the trial activities. This was
followed by a general overview of engineering and how it can be applied to healthcare, particularly in terms of making healthcare equipment. The inclusion of the topic of engineering has been welcomed by schools and was often integrated into teaching or themed weeks. The presentation discussed a range of topics within the field of engineering and how these can be applied to the real world (e.g., automotive, structural and robotics engineering). Props included a Sony AIBO robotic dog, and supporting audio and video materials that demonstrated the content of the presentation. Although teachers had already discussed the concept of disability with the children, the researchers took the opportunity to draw specific attention to it within the context of rehabilitation. Role playing was used within the presentation: for example, children were asked to try to take a jumper off with one hand. This activity was used to provide insight into the effects of a physical impairment of the arm. Subsequent question and answer sessions with the children allowed for any queries, although questions were encouraged throughout all discussions. Importantly, when providing explanations of neurological impairments such as CP it was ensured that the existence of other learning-related impairments was not implied.
Figure 20.1. Summary of research activities completed on a trial day: group presentation and discussion, followed by the group activity/inclusive core activity, followed by interviews
The group activity, or inclusive core activity, that followed had clear and simple objectives, beginning with the initiation of discussions with children about disability and rehabilitation. Our key points were to highlight the correct use of rehabilitation equipment and discuss how engineering and technology design could assist in the production of equipment that can help people with disabilities. The inclusive core activity involved children working alone or in groups to design their own version of a rehabilitation joystick and often lasted between 60 and 90 minutes. The complete rehabilitation joystick device under development was presented to the children and they were encouraged to operate the device with the accompanying software. In addition to this, material props (e.g., sponge, leather and cotton) and colour charts were given to the children to assist them with the generation of ideas. For the children who completed the task ahead of time, a selection of other case studies and mini design projects were available. When children struggled with the task, further assistance was available from the researchers, and if necessary the children could simply replicate the original design and focus on changing only one aspect of it such as the colour. This design task fulfilled the “developing and generating” aspects of the NC. The children generated ideas on the basis of other people’s experiences, talked about their ideas and communicated their ideas using a variety of methods, including drawing and making models. Another aim of the group task was to encourage children to begin thinking about the idea of rehabilitation and related devices alongside focusing their thoughts on the aesthetic of the joystick. To assist in maintaining children’s
engagement with the task, researchers moved around the class and further discussed topics and emerging designs with the children. It is worth noting that for such an activity, at least two additional research assistants were required for each class of children. This was the optimum number to ensure adequate support was in place to run the trials smoothly and to assist the teacher where necessary. The researchers who attended the trials often worked within the research team on similar projects and had previous experience working with children, whether in an educational and / or research setting. The final stage of the research trial expanded upon the ideas generated in the design tasks within a range of focus groups, interviews, one-to-one design sessions and board games that took place either during or shortly after the group task activity. The interviews lasted 20 minutes and involved either an individual or a group of children. Children with disabilities often did not require any additional support from the research team in these activities as support workers were already in place for those who required mobility and speech support. Children with disabilities were happy to be involved in group interviews, although those with communication impairments took longer to respond to questions. Further information about the barriers to involving children with disabilities in such design research can be found in Allsop et al. (2009).
20.6 Post-trial Information Before trials took place children were always given explanations of the nature of the research and how their information was going to be used. However, children were also asked to complete post-test questionnaires following participation in the design task and an interview. These questions were designed to gather their opinions regarding their enjoyment of trial activities in addition to questions that tried to gauge children’s understanding of the material covered. The security of data was of extreme importance in the research. Any electronic data was modified to incorporate unique individual identifiers, and information that could lead to identification of participants was immediately removed or transformed. Given recent issues surrounding storage of data, guidance on its management is emerging (e.g. McGilchrist and Sullivan, 2007). Maintaining links with a school is crucial, and all schools that have been involved in our initial research trials have indicated willingness to participate in future studies. The levels of involvement that are possible in a school vary, but every school has been visited at least once throughout the year for ongoing projects. To sustain these links, research updates were sent to the schools for dissemination to students and parents / guardians. These included explanations of how the obtained information was being used by the research team, e.g. discussions of developments that had occurred with the joystick device. The researchers also hosted mini-competitions on the basis of suggestions by teachers that involved choosing one or two children who had completed the design tasks in greatest accordance with the instructions that they were given.
Table 20.1. Summary of process for involving primary schools
Preparing trials:
• When contacting primary schools, aim to speak to the head teacher or special needs coordinator
• Highlight the benefits of any outcomes of research on initial contact
• Ensure the research team is highly flexible with scheduling and trial planning
• Incorporate research into the National Curriculum where possible
• Create a clear and simple lesson plan for the teacher outlining key points, core activity and extension tasks
Planning trials with primary schools:
• Provide schools with clear and succinct outlines of the research project along with consent forms for parents / guardians
• Ensure that complex concepts such as disability and rehabilitation are discussed by the teachers and children before the research trial day
Running trials in schools:
• Provide children with assent forms before participation in trials
• Be prepared to change the trial setup
• Delivery of instructions should be phonically-based
• Ensure that presentations to children are engaging and interactive: this can include audio, video and interactive products
• Involve an adequate number of research assistants to account for demands of time within trials
Post-trial information:
• De-brief children, and if possible use a post-test review (e.g. questionnaire) to gauge children’s understanding
• Ensure that security measures are implemented with any data
• Maintain future links with schools through the use of newsletters for parents / guardians and other additional activities
• De-brief staff and gather insight for improvement during future trials
De-briefing teaching staff was always very insightful, particularly for identifying and gauging the extent to which proposed activities worked in the classroom, and for noting any difficulties that arose. Table 20.1 combines such teacher observations with insight gathered from the research team to summarise the main points of consideration when involving primary schools in research.
20.7 Application of Guidelines and Future Research Children with and without disabilities have important roles in guiding the design and development of healthcare technology. The primary school context is an ideal setting to gather information from all children to create technology that fits into the
social context in which it is to be used. This case study has outlined the process adopted by our team to gather data from children within the primary school. Our recent history of accessing and performing research trials within this setting has been described and problems that were encountered have been highlighted. The experience of our research team has been outlined to help address the current lack of literature covering the involvement of children in the design and development of healthcare technology, and in particular rehabilitation equipment. Research in primary schools can be very insightful, but running clear and trouble-free trials can be difficult, particularly when the content contains relatively complex concepts such as engineering and rehabilitation. There is a need to reflect on methods used to deliver such information when presenting to children and it is hoped that explicating our methods will assist others in performing similar research. Although this paper provides preliminary guidelines regarding access to primary schools, there is still a need for research into the methodology used to involve children in healthcare technology development. Although research has started to consider the availability of methods for involving children with and without disabilities in research (e.g. Guha et al., 2008), further attention is needed when applying them within a range of contexts such as the primary school. Research is also necessary to establish methods for measuring the amount of information that can be acquired from design research in primary schools, alongside the typical outcomes that can be expected. Our own research team acquired a range of design images and a large amount of information about children’s preferences for rehabilitation equipment, but how useful this data is when it is fed into ongoing technology development has yet to be fully established. Although gaining access to, and running trials in, schools can be mapped to an extent, detailing the trade-offs between the availability and resources of schools and the data required by a research team is more challenging. Involving adult users has led to the development of usable and clinically effective devices for use by adults (Ram et al., 2007). Through the involvement of primary schools in research it is possible to begin the development of devices of equal worth for children, with an expanding capacity to account for the context of their use and the opinions of those surrounding the user.
20.8 References Allsop M, Holt R, Levesley M, Bhakta B (2009) Involving children in the design of healthcare equipment: an investigation into methodology. In: Proceedings of International Conference on Inclusive Design (INCLUDE 2009), Helen Hamlyn Centre, London, UK Carlone H, Webb S (2005) On (not) overcoming our history of hierarchy: complexities of university/school collaboration. Science Education, 90: 544–568 Cohen L, Manion L, Morrison K (2007) Research methods in education. Routledge, Abingdon, UK Diamond K, Hestenes L (1996) Preschool children’s conceptions of disabilities: the salience of disability in children’s ideas about others. Topics in Early Childhood Special Education, 16: 458–475
Druin A (2002) The role of children in the design of new technology. Behaviour and Information Technology, 21: 1–25 Guha M, Druin A, Fails J (2008) Designing with and for children with special needs: an inclusionary model. In: Proceedings of IDC - Workshop on Special Needs, Chicago, IL, US Hanna L, Risden K, Alexander K (1997) Guidelines for usability testing with children. Interactions, 4(5): 9–14 Holt R, Weightman A, Allsop M, Levesley M, Preston N, Bhakta B (2007) Engaging children in the design of a rehabilitative game interface. In: Proceedings of International Conference on Inclusive Design (INCLUDE 2007), Helen Hamlyn Centre, London, UK Hwang F, Keates S, Langdon P, Clarkson P (2004) Mouse movements of motion-impaired users: a submovement analysis. In: Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2004), Atlanta, GA, US Light L, Page R, Curran J, Pitkin L (2007) Children’s ideas for the design of AAC assistive technologies for young children with complex communication needs. Augmentative and Alternative Communication, 23: 274–287 Lindsay G (2003) Inclusive education: a critical perspective. British Journal of Special Education, 30: 3–12 Magiati I, Dockrell J, Logotheti A (2002) Young children’s understanding of disabilities: the influence of development, context, and cognition. Journal of Applied Developmental Psychology, 23: 409–430 McGilchrist M, Sullivan F (2007) Assuring the confidentiality of shared electronic health records. British Medical Journal, 335: 1223–1224 Mitchell D (2003) Research methodologies and the quandary of over-analysed populations, in using emancipatory methodologies in disability research. In: Proceedings of the 1st NDA Disability Research Conference, Dublin, Ireland Orpwood R (1990) Design methodology for aids for the disabled. Journal of Medical Engineering & Technology, 14(1): 2–10 Ram M, Grocott P, Weir H (2007) Issues and challenges of involving users in medical device development. Health Expectations, 11: 63–71 Smith C, Blake A, Curwen K, Dodds D, Easton L, McNally J et al. (2009) Teachers as researchers in a major research project: experience of input and output. Teaching and Teacher Education, 25: 959–965 Ungar D, Joffe S, Kodish E (2006) Children are not small adults: documentation of assent for research involving children. The Journal of Pediatrics, 149(1)Supplement 1: 31–33 Waller A, Balandin S, O’Mara D, Judson A (2005) Training AAC users in user-centred design. In: Proceesings of Accessible Design in the Digital World Conference, Dundee, Scotland, UK Weightman A, Preston N, Holt R, Allsop M, Levesley M, Bhakta B (2010) Engaging children in healthcare technology design: developing rehabilitation technology for children with cerebral palsy. Journal of Engineering Design (in press) Zigo D (2001) Rethinking reciprocity: collaboration in labor as a path toward equalizing power in classroom research. International Journal of Qualitative Studies in Education, 14: 351–365
Chapter 21 Gaming and Social Interactions in the Rehabilitation of Brain Injuries: A Pilot Study with the Nintendo Wii Console R.C.V. Loureiro, D. Valentine, B. Lamperd, C. Collin and W.S. Harwin
21.1 Introduction Physical rehabilitation of brain injuries and strokes is a time consuming and costly process. Over the past decade several studies have emerged looking at the use of highly sophisticated technologies, such as robotics and virtual reality, to tap into the needs of clinicians and patients. While such technologies can be a valuable tool to facilitate intensive movement practice in a motivating and engaging environment, the success of therapy also depends on self-administered therapy beyond hospital stay. With the emergence of low cost gaming consoles such as the Nintendo Wii, new opportunities arise for home-therapy paradigms centred on social interactions and values, which could reduce the sense of isolation and other depression related complications. In this paper we examine the potential, user acceptance and usability of an unmodified Nintendo Wii gaming console as a low cost treatment alternative to complement current rehabilitation programmes. Although increased effort is focussed on the recovery process of patients following a brain injury such as a stroke, economic pressures and lack of available human resources mean that patients generally do not reach their full recovery potential when discharged from hospital following initial rehabilitation (Broeks et al., 1999). Although there is already evidence suggesting that the damaged motor system is able to reorganise in the presence of motor practice, optimal training methodologies promoting such reorganisation remain unclear due to discrepancies in current rehabilitation therapy, quantification of dosage and types of rehabilitation (Shadmehr and Mussa-Ivaldi, 1994). The recovery of upper limb function is particularly affected as the initial challenge is to stabilise the trunk and relearn minimum independence levels through gait re-learning. Robotic machines have been identified as a possible way to automate labour-intensive training
paradigms, to improve patient access to therapy and to provide new tools for therapists. Several authors have already proposed the use of robots for the delivery of this type of physiotherapy (Krebs et al., 1999; Johnson et al., 1999; Lum et al., 1999; Reinkensmeyer et al., 2000; Hesse et al., 2003; Loureiro et al., 2003; Nef and Riener, 2005). One of the challenges still present today is how best to use robotic technology to augment the physiotherapist’s skills (Harwin et al., 2006). Contrary to public perception, robotic technology aims to be an advanced tool available to the physiotherapist and not a replacement. A robotic system is very unlikely to be able to amass all the skills of a physiotherapist, but it will be very good at conducting comparatively simple repetitive and manually intensive therapies. In this context, the physiotherapist would be making all the clinical decisions, which would be considered and executed on the robot if suitable. Robotic therapy is appealing because it can deliver complex therapies that would be too difficult for therapists to do, for instance provision of precise repeatable force and haptic feedback coupled with interesting and motivating visual feedback and/or the ability to augment movement errors to help correct a movement pattern (Patton et al., 2006). According to Hesse et al. (2006) rehabilitation robots providing repetitive induced strategies are better positioned for therapy delivery to severely impaired patients in need of external movement assistance and support to overcome muscle weakness problems. Conversely, task-oriented therapy approaches are more suitable for mildly affected patients. New strategies are being examined based on error augmentation (Patton et al., 2006), gravity assistance (Sukal et al., 2005), bilateral training and cueing (Johnson et al., 2006), exoskeletal training options (Nef and Riener, 2005), cooperative home-rehabilitation paradigms (Loureiro et al., 2006) and functional whole-arm (arm and hand) rehabilitation strategies (Loureiro et al., 2007, 2009), but there is still a need to provide better training strategies anchored in motor learning and neuroscience theories and affordable opportunities for intensive therapy. Recent advances in computer power and graphics have enabled a variety of innovative highly-engaging gaming technologies coupled with realistic virtual environments to reach the consumer market. Such technologies have the potential to provide an interesting and effective way of delivering rehabilitation exercises to people recovering from a brain injury or related cognitive deficit. Virtual environments have the potential to deliver safe and customisable training tailored to the patient’s disabilities with performance being monitored and positivity encouraged (Rizzo and Kim, 2005). The relatively low cost of such gaming technologies opens doors not only to their introduction in hospital acute rehabilitation programmes but also to home rehabilitation paradigms. In this context, videogames could in the future be used to engage groups of people with similar disabilities in group therapy. The games could be customised to allow the patient to succeed, thus boosting morale and increasing participation. Clinicians could log on remotely and adjust the therapy regimes, check progress and monitor safety. The next sections of this paper report on a pilot study evaluating the potential and acceptance of the Nintendo Wii console as an additional tool to current rehabilitation practice.
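To make the contrast between assistive and error-augmenting control more concrete, the sketch below shows, in one dimension, how a haptic trainer might compute its force from the patient’s deviation from a desired trajectory. It is a deliberately simplified illustration of the error-augmentation idea cited above (Patton et al., 2006), not the control law of any of the systems referenced; the gain value and function name are hypothetical.

```python
def controller_force(position, target, mode, gain=20.0):
    """Illustrative 1-D force law for a haptic arm trainer.

    'assist' pushes the hand back towards the desired trajectory, as in
    conventional assistive robotic therapy; 'augment' amplifies the
    deviation, as in error-augmentation training.
    """
    error = position - target  # kinematic error in metres
    if mode == "assist":
        return -gain * error   # restoring force towards the target (N)
    if mode == "augment":
        return gain * error    # force that exaggerates the error (N)
    return 0.0                 # unassisted free movement


# Example: the hand is 5 cm to the right of the desired path.
print(controller_force(0.05, 0.0, "assist"))   # -1.0 N, pulls the hand back
print(controller_force(0.05, 0.0, "augment"))  #  1.0 N, pushes it further away
```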
21.2 Rehabilitation with the Nintendo Wii Console At the end of 2006 Nintendo launched their home videogame console named Nintendo Wii and from the outset it has proved so popular that it has outsold the market competitors Microsoft and Sony (Lee, 2008). Its low cost and revolutionary interactive game-play interface, the Wiimote (Figure 21.1), has attracted attention from traditional hardcore gamers to older people, encouraging group and family play. The Nintendo Wii has been so successful in attracting new classes of gamers that soon nursing homes in the USA started using the console to keep elderly residents active with group activities (BBC News, 2008), followed by its use as part of treatment of burns victims in the south-East of England (BBC News Health, 2008).
Figure 21.1. Nintendo Wiimote interface. (a) Wiimote controller details, showing the A button, B button (trigger) and D-Pad; (b) user holding the Wiimote. The IR camera is located on the darker area at the tip of the device.
The innovative Nintendo Wiimote interface (Figure 21.1) has been reverse engineered by computer enthusiasts, and used to illustrate the remote interaction capabilities of the Wiimote without the console in a variety of different applications (Lee, 2008). The Nintendo Wiimote (Figure 21.1a) is a handheld device packed with sensors such as buttons, a cursor, a 3-axis accelerometer and an IR camera. Two actuators – a speaker and a vibration motor – provide audio and tactile feedback. Data communication is made wirelessly via Bluetooth connectivity and users interact with the game by holding the Wiimote controller in their hand (Figure 21.1b) and pointing to the television set (Figure 21.2) where often a sensor bar – containing infrared LEDs – is placed. The IR camera incorporated inside the Wiimote is used to transform the x-y coordinates from the dot pairs produced by the infrared LEDs. The console software then uses this information together with the accelerometer data to determine the x-y coordinates and the rotation (yaw, pitch and roll) of the device. Distance to the television set is estimated by calculating the separation of two opposite infrared LEDs on the sensor bar relative to the camera view field (Lee, 2008).
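As a rough illustration of the pointing geometry just described, the sketch below estimates a normalised pointer position and the range to the sensor bar from the two IR dots seen by the Wiimote camera. The constants are assumptions for the purpose of the example (the camera is commonly reported to track dots in a 1024 × 768 coordinate space; the field of view and LED spacing used here are only nominal values), and the calculation ignores the accelerometer-based rotation compensation performed by the console.

```python
import math

# Assumed constants for illustration only (not taken from the chapter).
CAM_WIDTH_PX = 1024                  # IR camera tracking resolution (width)
CAM_HEIGHT_PX = 768                  # IR camera tracking resolution (height)
CAM_FOV_H_RAD = math.radians(45.0)   # nominal horizontal field of view
SENSOR_BAR_WIDTH_M = 0.20            # nominal spacing of the sensor-bar LED clusters


def pointer_and_distance(dot1, dot2):
    """Estimate (pointer x-y in 0..1, distance in metres) from two IR dots."""
    # Pointer position: midpoint of the two dots, normalised to the image size.
    mid_x = (dot1[0] + dot2[0]) / 2.0
    mid_y = (dot1[1] + dot2[1]) / 2.0
    pointer = (mid_x / CAM_WIDTH_PX, mid_y / CAM_HEIGHT_PX)

    # Distance: the dots' apparent separation subtends an angle within the
    # field of view; simple pinhole geometry then gives the range.
    sep_px = math.hypot(dot1[0] - dot2[0], dot1[1] - dot2[1])
    angle = (sep_px / CAM_WIDTH_PX) * CAM_FOV_H_RAD
    distance = (SENSOR_BAR_WIDTH_M / 2.0) / math.tan(angle / 2.0)
    return pointer, distance


# Two hypothetical dot positions near the centre of the camera image.
print(pointer_and_distance((480, 390), (560, 386)))
```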
21.3 Wii Therapy Usability Study A small usability study was conducted at the Royal Berkshire NHS Foundation Trust Hospital in Reading with nine acute subjects undergoing rehabilitation. Subjects played a variety of games for the Nintendo Wii console and interacted with the games using the Nintendo Wiimote. The Wii Sports and Wii Play game packs were selected for this study as they contain a variety of fun games involving varying degrees of physical ability/mobility and cognitive function (Figure 21.2) and are suitable for subjects with different interests and ability. Games such as bowling, tennis, shooting and air hockey promote unilateral play, whereas games such as golf, boxing and baseball are intrinsically more suitable for bilateral movements.
Figure 21.2. Subject playing bowling using the Nintendo Wii console
Games were first demonstrated by one of the occupational therapists (OT) delivering the rehabilitation intervention and played by the subjects either with support from the OT or unaided. Each of the involved OTs completed a simple questionnaire assessing the subjects’ clinical status prior to starting the trial and on trial completion reported gaming limitations observed with each subject during the Wii sessions. At the end of the trial each subject was presented with a simple usability questionnaire rating their opinion of the Wii rehabilitation sessions. Table 21.1 shows a summary of the subjects who have participated in the study. Subjects’ ages ranged from 18 to 66 years (mean ± SD: 43.8 ± 17.1) and they completed between three and 12 sessions (mean ± SD: 8.4 ± 3.2) of Wii therapy. Each session consisted of up to one hour of combined preparation/demonstration and game play. The recruited subjects all had a neurological diagnosis which ranged from cerebral vascular accidents and traumatic brain injuries to spinal cord disease and to other more rare conditions such as Beri Beri with Korsakoff’s syndrome (Table 21.1). Impairments included weakness, reduced range of motion, reduced fine motor skills, pain, oedema, sensory loss, and cognitive deficits including reduced attention and concentration, slowed information processing and reduced memory.
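The descriptive statistics quoted above can be reproduced directly from the ages and session counts listed in Table 21.1; the short check below (using sample standard deviations) returns the same rounded values.

```python
import statistics

# Ages and completed Wii therapy sessions for the nine subjects (Table 21.1).
ages = [66, 55, 30, 31, 49, 18, 57, 28, 60]
sessions = [10, 12, 3, 10, 6, 10, 4, 10, 11]

for label, values in (("age", ages), ("sessions", sessions)):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    print(f"{label}: mean = {mean:.1f}, SD = {sd:.1f}")
# Prints: age: mean = 43.8, SD = 17.1 and sessions: mean = 8.4, SD = 3.2
```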
Table 21.1. Subjects’ summary (subject no. | age | sessions completed | diagnosis | impairment affecting subject participation in the Wii sessions)
1 | 66 | 10 | Spine injury + critical illness neuropathy | Unable to weight bear through right lower limb
2 | 55 | 12 | Cerebral vascular accident | Reduced active ROM and balance
3 | 30 | 3 | Cerebral vascular accident | Ataxia and reduced dexterity
4 | 31 | 10 | Beri Beri + Korsakoff’s syndrome | Reduced power, sensation and dexterity in hands; severe pain; difficulties with memory, attention and cognition; reduced capacity to follow instructions
5 | 49 | 6 | Chronic inflammatory demyelinating polyneuropathy | Reduced balance, sensation, coordination and fine motor skills
6 | 18 | 10 | Traumatic brain injury | Reduced coordination, balance and power in both upper limbs (L>R)
7 | 57 | 4 | Cerebral vascular accident | Reduced balance and coordination
8 | 28 | 10 | Spinal tumour | Reduced balance and left shoulder pain
9 | 60 | 11 | Cervical radiculopathy | Reduced active ROM, oedema in both hands; hand pain and reduced sensation
21.4 Results and Discussion The results obtained from the user opinion questionnaire show that 44% of the subjects responded that they had already played with the Nintendo Wii before starting the study (Figure 21.3). All of the subjects reported that they enjoyed playing on the Nintendo Wii, and 89% of the subjects thought that it should be a regular part of their treatment sessions whilst in hospital, and would like one at home to carry on with treatment after discharge (Figure 21.3). With regards to their ability to interact and hold the Wiimote while playing the games, 33% of the subjects (subject three, four and nine) required help either from the OT when movement was too jerky to stabilise the arm or used both hands to perform the movement. Interestingly, subject four used the left hand to operate the D-Pad (in the bowling game) and the right hand to perform the swinging movement. Subject nine, on the other hand, required occasional help to press A+B buttons together. Although all games were initially demonstrated by the OT, subjects chose the game they wanted to play. Figure 21.4 shows that 78% of the participants enjoyed playing bowling with only 11% playing air hockey. One of the reasons for the trend observed in Figure 21.4 could be related to the fast pace required to play the game (e.g. air hockey) and to the ability to play a game in a relaxed way with more than one opponent (e.g. bowling).
Figure 21.3. Participants’ responses to the user opinion questionnaire (% of respondents): enjoyed playing, 100%; would like the Wii to be part of treatment, 89%; would want a Wii at home, 89%; had played before, 44%; needed help to hold the Wiimote, 33%
Figure 21.4. Percentage of games played by participants: bowling, 78%; tennis, 67%; golf, 44%; boxing, 44%; shooting, 44%; baseball, 22%; air hockey, 11%
The self-evaluation responses (Table 21.2) obtained with each subject after completing the Wii therapy sessions are in line with the rehabilitation goals the OTs hoped to achieve during the Wii therapy sessions (Table 21.4, second column). The results show a positive impact on improving subjects’ trunk and upper limb strength, balance, coordination, and on increased session participation, attention and concentration.
It is clear from this study that while the main responses are positive, certain aspects of the Wii therapy require attention and improvement, such as the hardware interface. The subjects’ gaming limitations observed by the OTs (Table 21.4, final column) are in line with the participants’ comments and suggestions on how to improve the Wii interface for easy use (Table 21.3). The typical response was that some of the games were too fast or the Wiimote too sensitive to movement change. Likewise, interaction with the Wiimote buttons caused difficulties for some participants (subjects four, five, six and nine) who could not press the buttons due to their physical size or because of reduced sensation in the participants’ fingers. While OTs were positive about group activities – with participants engaging in competitions while playing bowling and boxing – helping to improve function and address cognitive deficits, games were at times too fast and high-tone patients could not use the Wiimote. Although the gaming scores are useful for patient feedback, objective data on posture, range of motion, grip and dexterity would be useful to aid therapists’ patient assessment.
Table 21.2. Summary of participants’ responses to the question “How do you think using the Nintendo Wii has helped you?”
Subject 1: Strengthened my trunk; involvement in game tasks takes mind off pain
Subject 2: Improved my concentration and upper limbs
Subject 3: No response
Subject 4: Helped getting me moving around and using my hands
Subject 5: It has greatly improved my upper arm strength and fine motor skills. It has also helped me improve my sitting balance and is now being used to work my standing balance
Subject 6: It helps me channel my anger, improve my balance and my hand control
Subject 7: Helped me with balance and concentration. Improved my dexterity, ability to get involved in exercise, and helped to get fit
Subject 8: Great cardiovascular exercise (boxing) and enjoyable leisure activity
Subject 9: Helped me improve my grip and coordination
Table 21.3. Summary of participants’ comments and suggestions to the question “How can we improve the Wii interface to make it easier to use?”
1. Reduce the sensitivity of cursor movements
2. Ability to restrict the arm to concentrate on specific movements such as flexion/extension only
3. Have a Wiimote with larger buttons
4. Easier game interaction: A+B buttons difficult to press together with a weak hand and reduced coordination
* Several comments were repeated by different participants; to avoid repetition, each main point is stated once
Table 21.4. Summary of occupational therapists’ assessment of participants’ involvement in the study (subject | rehabilitation goals of Wii sessions | upper limb AROM level, left/right | gaming limitations observed)
1 | Increase UL strength, trunk control, standing balance and exercise tolerance | Full but weak / full but weak | None; sat for majority of sessions
2 | Improve balance, progress to playing standing, increase attention up to RW, improve muscle strength and hand-eye coordination | Full but weak / full but weaker than the left | Confidence and tolerance
3 | Increase participation in session; use of both UL, improve coordination of movement patterns, sequencing and timing | Full ataxic movement / full ataxic movement | Sensitivity of cursor and infrared bar; control of direction when moving the controller
4 | Increase participation in session; improve coordination of limbs | Full but bilateral weakness / full but bilateral weakness | Complexity of controls; sensitivity of controls to rotation
5 | Improve upper limb ROM and strength, improve sitting balance and core stability | Reduced shoulder flexion / reduced shoulder flexion | Sensitivity of cursor on screen; issue with which buttons on the Wiimote could be pressed unintentionally
6 | Bilateral hand movements and use of left UL, increase balance and coordination, anger management | Full but reduced coordination and power / full but reduced coordination and power | Reaching speed too quick, did not allow subject time to connect with ball; coordination of swinging and releasing button
7 | Ability to stand during therapy session and completing game | Full on one side, no functional movement on the other | No response
8 | Working on increasing dynamic sitting balance | Reduced due to left shoulder pain / full | None
9 | Improved coordination and grip of upper limbs | Limited due to oedema / limited due to oedema | Gripping hand control due to oedema; pressing A+B together; feeling where button B was due to decreased sensation
21.5 Conclusion From our experience in conducting this pilot study, we have found that the participants’ comments provide an essential insight into usability issues of the Nintendo Wii console and the Wiimote as a low cost alternative to physical and cognitive exercise in the hospital and potentially at home. We believe that the Nintendo Wii and gaming in general have much to offer to brain injury and stroke rehabilitation. In particular, well-designed games can be highly motivating and engaging – if not addictive – while promoting limb movement and social interactions with friends, family and other people recovering from similar impairments. It is evident from the results reported in this paper that this type of technology should be considered as an integral part of the rehabilitation process. However, to realise the potential and promise of such highly engaging gaming technologies one would need to have the interface controls and game interaction customised to map the patients’ physical limitations to game play.
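One way to picture the customisation called for above is a per-patient interaction profile that a therapist could set before a session. The sketch below is purely illustrative and is grounded only in the participants’ own suggestions (Table 21.3: cursor sensitivity, game speed, the A+B chord); the class, field and function names are hypothetical and do not correspond to any Wii or Wiimote API.

```python
from dataclasses import dataclass


@dataclass
class InteractionProfile:
    """Hypothetical per-patient game-interaction settings."""
    cursor_sensitivity: float = 1.0  # < 1.0 damps jerky pointer movement
    game_speed: float = 1.0          # < 1.0 slows fast-paced games
    remap_a_plus_b: bool = False     # replace the A+B chord with a single press


def profile_for(impairments):
    """Derive a conservative profile from a list of reported impairments."""
    profile = InteractionProfile()
    if {"reduced coordination", "ataxia"} & set(impairments):
        profile.cursor_sensitivity = 0.5
        profile.game_speed = 0.75
    if {"reduced sensation", "oedema"} & set(impairments):
        profile.remap_a_plus_b = True
    return profile


# Example: settings for a subject with oedema and reduced sensation.
print(profile_for(["oedema", "reduced sensation"]))
```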
21.6 Acknowledgements The authors would like to thank all the subjects who participated in the study and the occupational therapists at the Neurorehabilitation Unit of the Royal Berkshire NHS Hospital in Reading who conducted the Wii therapy sessions.
Chapter 22 Promoting Behaviour Change in Long Term Conditions Using a Self-management Platform P.J. McCullagh, C.D. Nugent, H. Zheng, W.P. Burns, R.J. Davies, N.D. Black, P. Wright, M.S. Hawley, C. Eccleston, S.J. Mawson and G.A. Mountain
22.1 Introduction

By 2050, it is estimated that the number of people aged 60 and over will more than double, from 650 million to 2 billion, representing around 22% of the global population. In addition, the number of persons aged 80 and older is increasing rapidly: by 2050, they will constitute approximately 20% of the older population (UN, 2006). In the United States, 80% of older adults have at least one chronic condition, and 50% have at least two (CDC, 2009). In the UK, it is estimated that 17.5 million adults are living with a long term condition. The management and treatment of chronic conditions demands a major proportion of health and social care resources (DH, 2004). These long term conditions are recognised to have a huge impact on the physical, emotional and mental well-being of individuals, often making it difficult for people to perform daily routines and to engage in an active social life. Chronic disease management provides a systematic approach to improving health care. Benefits of good management include a significant reduction in hospital admissions and bed occupancy, along with a reduction in the use of medication (DH, 2004). The potential of technologies to encourage the involvement of people in their own care and decision making is now beginning to be recognised by health care professionals and policy makers. The 'Self Management supported by Assistive, Rehabilitation and Telecare Technologies' project (SMART Consortium, 2007) is developing a personalised self management system (PSMS) for use in the home and in the immediate community for people living with the long term conditions of stroke, chronic pain
and congestive heart failure (CHF). Self management encourages the person to solve problems, take decisions, locate and use resources, and take actions to manage their own condition. It is perceived as a significant way of reducing health care costs and promoting quality of life for people living with a long term condition (Battersby et al., 2009; MIT Media Lab, 2009; Taylor et al., 2009). Over 400 studies worldwide report that self management can lead to improved outcomes for patients. In the UK, the Expert Patient Programme showed that 4 to 6 months after the course (a) General Practitioner consultations decreased by 7%, (b) out-patient visits decreased by 10%, (c) accident and emergency attendances decreased by 16%, and (d) pharmacy visits increased by 18%. Nevertheless, these positive results mask a mixed picture. Even though self management education programmes may lead to small, short-term improvements in participants' self-efficacy, self-rated health, cognitive symptom management, and frequency of aerobic exercise, a Cochrane review (Effing et al., 2007) indicated that there is currently no evidence to suggest that such programmes improve longer term psychological health, symptoms or health-related quality of life, or that they significantly alter healthcare use. Changing established behaviour, which has developed in response to the long term health condition and its related problems, is a challenging task. This is evidenced by the difficulty in tackling long term societal issues such as smoking, obesity, and alcohol misuse even when the health benefits of a 'healthy' lifestyle are compelling. Self management is a health care delivery model based on preventative and person-centred health systems. This new model can only be achieved through the proper use of ICT, in combination with appropriate organisational changes and skills (EC, 2006). The research detailed here describes the technology which is being developed to assist with tailored self management programmes and the complexity of the issues which need to be taken into account. The research questions are as follows:

• Can technological solutions be identified to deliver self management interventions to people with long term conditions?
• Can technology, remote from a health care professional, promote health behaviour change?
• Can technology which situates behaviour change in everyday life improve traditional self-management strategies?
This paper provides details of an initial prototype of the technological and interface choices for introducing technology-based solutions to support people with long term conditions, developed through the SMART2 project (EPSRC, under the EQUAL5 initiative, 2008–2011). The research team combines expertise in computer science, psychology, cognitive science, human computer interaction and clinical rehabilitation. The interaction required between the user and the technology has been identified as being of prime importance, and therefore interfaces that support the user and provide appropriately presented information are being designed. This paper describes the process of identifying relevant interface designs with potential users.
22.2 Background to the Research

Stroke is the single biggest cause of severe disability and the third-most common cause of death in developed countries (CDC, 2009). Balance disorders after stroke are a major problem (Yelnik et al., 2008) and are the strongest risk factor for falls among older women living at home one year after a stroke (Lamb et al., 2003). Previous research carried out by the SMART consortium demonstrated that information and communication technology (ICT), in conjunction with sensing technology, can be successfully applied to provide rehabilitation at home for post-stroke patients (Zheng et al., 2005, 2006). Pain is a symptom associated with many types of long term illness and leads to a very poor quality of life for those with the condition. For chronic pain, cognitive behavioural therapy (CBT) is the psychological approach of choice for treatment. One of the most widely used approaches addressing acceptance is acceptance and commitment therapy (ACT) (Vowles and McCracken, 2008). Cognitive behavioural therapy can help people to self manage the negative consequences of pain. Patient self reporting (by questionnaire) is currently used as part of CBT delivery to measure the effects of the treatment (Vowles and McCracken, 2008). Congestive heart failure (CHF) is a condition in which the heart's function as a pump to deliver oxygen-rich blood to the body is inadequate to meet the body's needs. CHF may be caused by diseases that weaken the heart muscle, diseases that cause stiffening of the heart muscles, or diseases that increase oxygen demand by the body tissue beyond the capability of the heart to deliver. The symptoms of CHF vary, but can include fatigue, diminished exercise capacity, shortness of breath, and limb swelling. CHF is one of the main challenges for physicians, as evidenced by high readmission rates, which have ranged from 29% to 47% within three to six months of the initial discharge. To prevent deterioration, physiological trends (such as changes in blood pressure and weight) can be monitored, symptoms must be carefully scrutinised, and therapy adjusted accordingly (Bodenheimer et al., 2002a-b). Self management can be tuned to the condition. In CHF the aim is preventative, whereas with stroke the focus is on restoring function as far as possible. With chronic pain the accommodative nature of the condition is important for self-management and therapy. Self-management skills can be taught to most patients; ICT and other forms of technology can be used to assist in the collection, monitoring, management, interpretation and reporting of the information required. SMART2 explores the use of mobile and sensor technologies to monitor people's activities and lifestyles in domestic environments, including daily activities, vital signs and self reporting. The project is developing a personalised self management system (PSMS) by integrating mobile devices, wrist sensors, pedometry and activity monitoring, so that people with long term conditions are enabled to self manage their condition at home and receive automatic feedback from the system as to how they should make adjustments to minimise the impact of their long term condition.
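To make the kind of physiological trend monitoring mentioned above concrete, the following sketch flags a sustained weight gain or an elevated systolic reading from a short series of home measurements. The function, thresholds and data values are illustrative assumptions only; they are not part of the SMART2 system and are not clinical guidance.

from datetime import date

# Illustrative home readings: (date, weight in kg, systolic blood pressure in mmHg).
# All values and thresholds are invented for this sketch and are not clinical advice.
readings = [
    (date(2009, 11, 1), 82.0, 128),
    (date(2009, 11, 2), 82.4, 131),
    (date(2009, 11, 3), 83.1, 135),
    (date(2009, 11, 4), 84.6, 141),
]

WEIGHT_GAIN_LIMIT_KG = 2.0   # assumed alert threshold over the window
SYSTOLIC_LIMIT_MMHG = 140    # assumed upper limit for systolic pressure

def check_trends(readings, window=3):
    """Return alert messages based on the most recent readings."""
    alerts = []
    recent = readings[-window:]
    weight_change = recent[-1][1] - recent[0][1]
    if weight_change >= WEIGHT_GAIN_LIMIT_KG:
        alerts.append("Weight up %.1f kg over %d days" % (weight_change, len(recent)))
    if recent[-1][2] >= SYSTOLIC_LIMIT_MMHG:
        alerts.append("Systolic pressure %d mmHg at or above limit" % recent[-1][2])
    return alerts

print(check_trends(readings))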
22.2.1 Case Study for Self-management Using Stroke

Assessment of the self-management needs of a person with stroke, and how they might be met through technology, is illustrated by the following scenario, which involves the person and the health professional working in partnership:

1. assess the need for rehabilitation (e.g. proprioceptor neglect, muscle weakness, shortening of soft tissue, poor balance or gait);
2. determine what the person aims to be able to achieve again in their day-to-day life (e.g. gardening, dressing, games, meeting friends);
3. propose specific rehabilitative interventions (from restorative to adaptive) that the person can practise remote from a health care professional;
4. identify therapy from a toolkit contained within the PSMS (e.g. motor learning, exercise, education);
5. select an appropriate feedback mechanism (e.g. wrist sensor, intelligent shoe, smart home sensor, mobile device);
6. ensure adequate decision support and feedback mechanisms for the user.

The use of ICT as an assistive technology is important at each step, but interaction with the therapist and capturing the expertise of the therapist are key components of a successful self management system. Step one assesses individual need. Step two introduces the concept of the therapist and end user setting life goals or 'end goals', which the PSMS can then assess and encourage (step three). Therapist expertise can be encapsulated into a computerised toolkit (step four). Feedback (step five) is most important in helping the user achieve these goals. Of course, the feedback provided relies on the user-centred design process. Initial results suggest that machine learning can provide a method to analyse the data acquired from clients, and it is therefore feasible to incorporate client self reports into the decision support system (step six).
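Read together, the six steps amount to a per-person plan that the PSMS can act on. The sketch below is a minimal illustration of such a plan as a single record; the class and field names are hypothetical and are not the data model used in SMART2.

from dataclasses import dataclass
from typing import List

@dataclass
class SelfManagementPlan:
    """Hypothetical record linking the six steps of the stroke scenario."""
    assessed_needs: List[str]       # step 1: rehabilitation needs
    life_goals: List[str]           # step 2: what the person wants to achieve again
    interventions: List[str]        # step 3: restorative to adaptive interventions
    therapy_toolkit: List[str]      # step 4: therapies selected from the PSMS toolkit
    feedback_devices: List[str]     # step 5: sensors and devices providing feedback
    decision_support: bool = True   # step 6: automatic feedback and decision support enabled

plan = SelfManagementPlan(
    assessed_needs=["muscle weakness", "poor balance"],
    life_goals=["gardening", "meeting friends"],
    interventions=["graded reaching practice"],
    therapy_toolkit=["motor learning", "exercise", "education"],
    feedback_devices=["wrist sensor", "mobile device"],
)
print(plan.life_goals)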
22.3 Methodology

The PSMS comprises a client monitoring system, a decision support system, a database and the necessary ICT infrastructure (Zheng et al., 2008), as illustrated in Figure 22.1. The development process comprises three prototype design stages. Focus groups were conducted with patients living with each of the long term conditions, together with expert health care professionals; their output informed the development of the initial prototype described here. Prototypes two and three will involve elicitation of user needs through home visits and demonstrations of technology within normal living environments. For the monitoring component, three types of data have been identified as necessary: data on general activities, such as sitting, walking, stepping and position, captured through a sensorised environment; vital sign data, such as blood pressure, heart rate and weight; and data from self
reporting questionnaires about the user's current health and wellbeing; e.g. for people with pain, questions will solicit the intensity of the pain, sleep and rest, body care and movement, mobility and emotional behaviour. A user-friendly interface deployed on a touch screen device is being devised to gather self reporting data on a daily basis.
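As an illustration of how the daily self-reporting part of the monitoring component might work, the sketch below collects one rating per question. The question wording, the 0–10 scale and the function names are assumptions for this example, not the SMART2 questionnaire or interface.

# Illustrative daily self-report questions for a person with chronic pain;
# the wording and the 0-10 scale are assumptions, not the SMART2 questionnaire.
QUESTIONS = [
    "pain intensity",
    "sleep and rest",
    "body care and movement",
    "mobility",
    "emotional behaviour",
]

def collect_self_report(answer_fn):
    """Gather one 0-10 rating per question; answer_fn stands in for the touch screen."""
    report = {}
    for question in QUESTIONS:
        value = int(answer_fn(question))
        report[question] = max(0, min(10, value))  # clamp to the assumed scale
    return report

# In place of a touch-screen widget, answer every question with a fixed value.
print(collect_self_report(lambda question: 5))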
Figure 22.1. Illustration of personal self management system (PSMS) structure
The components of the PSMS (prototype one) include a home hub (a touchscreen device with an integrated PC, EeeTop), a smart phone (HTC Touch) and a sensorised environment (contact door switch, bed switch, passive infrared sensor), along with a web server. Specialised gait-based sensors and upper arm accelerometers will be incorporated into later prototypes. Selection of the system interface design was guided by a user-centred design approach, as adopted in Zheng et al. (2007), together with systematic literature reviews. Throughout the construction of the PSMS a mixed methodology has been adopted. This has been based on a user-centred approach which uses qualitative methods comprising focus groups, one-to-one in-depth interviews, non-participant observation and application of cultural probes (Gaver et al., 1999). The overall aim is to understand the needs and preferences of the users, both people with long term conditions and health professionals. The user-centred design approach is goal-directed, not task-oriented, which allows the researchers to understand the users' motivations in performing a task (Cooper et al., 2007). Figure 22.2 illustrates inputs to the PSMS: self reports, gross motor activity (measured by accelerometers), extent and range of indoor activity (measured by the sensorised environment) and health measures such as weight. Feedback in the form of context-based education, data/information, and motivation can be
provided on the home hub and the mobile device and is viewed as output from the PSMS. A portal for the health care professionals will facilitate remote interpretation and the updating of life goals.
Figure 22.2. Inputs and outputs to the PSMS
22.3.1 User Interface

An effective computer interface requires an understanding of cognition and emotion. An interface which is tailored to a healthcare professional may not be appropriate for the end user of a system. Healthcare professionals require factual information in an easy-to-digest format: see Figure 22.3, which illustrates steps walked, distance covered and weight variation over time (artificial data for illustration purposes only). For CHF, Figure 22.4 illustrates systolic and diastolic blood pressure recorded over a week. End users of the PSMS require appropriately summarised information in a supportive format. The first PSMS prototype adopts the HomePUI approach (Burns et al., 2008), which permits personalisation of layout, interface colour and available user services by utilising an XML-based interface specification. Workshops with users have enabled demonstration of the services, which can then be selected by the users for deployment on the devices which will be placed in their homes.
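To illustrate how an XML-based interface specification can drive this kind of personalisation, the sketch below reads a layout preference and a list of enabled services from a small XML document. The element and attribute names are invented for this example; they are not taken from the HomePUI specification.

import xml.etree.ElementTree as ET

# Hypothetical interface specification; element names are assumptions, not HomePUI's schema.
SPEC = """
<interface user="anne">
  <layout colour="high-contrast" font-size="large"/>
  <services>
    <service name="activity-feedback" enabled="true"/>
    <service name="self-report" enabled="true"/>
    <service name="video-messages" enabled="false"/>
  </services>
</interface>
"""

def load_profile(xml_text):
    """Parse the specification into the settings a device would apply."""
    root = ET.fromstring(xml_text)
    layout = dict(root.find("layout").attrib)
    services = [s.get("name") for s in root.iter("service") if s.get("enabled") == "true"]
    return {"user": root.get("user"), "layout": layout, "services": services}

print(load_profile(SPEC))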
Figure 22.3. Activity monitoring, providing feedback over time on steps walked, distance covered and weight
Figure 22.4. Graphical feedback, illustrating systolic and diastolic blood pressure (artificial data for illustration purposes only)
Figure 22.5 provides an indication of 'motivational feedback' in the manner proposed by Consolvo et al. (2008), who successfully used a 'flower metaphor' in the UbiFit system, demonstrating its effectiveness via a mobile device in a trial with 28 participants. In SMART2, as a therapist-suggested intervention (e.g. the number of prescribed steps in a day) is progressively achieved, the flower grows more petals until the goal is attained. In subsequent days, additional flowers can
indicate further progress towards a longer term end goal. It is also possible, of course, for the participant to take too many steps, which could be indicated by an appropriate instructional message ('end goal achieved') and recorded, e.g. by the flower wilting.
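A minimal sketch of this flower metaphor is given below: petals accumulate as the prescribed daily step count is approached, and the flower 'wilts' once the end goal has been exceeded. The petal count, function name and messages are illustrative choices, not the SMART2 or UbiFit design.

def flower_feedback(steps_taken, prescribed_steps, max_petals=8):
    """Map progress towards a daily step goal onto a simple flower state."""
    fraction = steps_taken / prescribed_steps
    if fraction > 1.0:
        # Overshoot: the goal has already been achieved, so signal it and let the flower wilt.
        return {"petals": max_petals, "state": "wilting", "message": "end goal achieved"}
    petals = int(round(fraction * max_petals))
    state = "in bloom" if petals == max_petals else "growing"
    return {"petals": petals, "state": state,
            "message": "%d/%d steps" % (steps_taken, prescribed_steps)}

print(flower_feedback(2500, 5000))   # part-achieved: flower still growing
print(flower_feedback(5000, 5000))   # goal attained: full bloom
print(flower_feedback(7000, 5000))   # too many steps: message and wilting flower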
Figure 22.5. Feedback to the user, illustrating a positive message and motivational interface: (left) feedback after partial completion of a goal; (right) feedback upon goal completion
22.4 Discussion and Future Work

Initially, work has focused on the collection of user views (end users and health care professionals) to enable identification of the specific elements which might be included within the PSMS and the nature and presentation of user information and feedback. When sufficient data have been collected (see Figure 22.4), decision support algorithms (to be realised within the third prototype version of the PSMS) will utilise 'real time' and longer term data to further personalise feedback and improve its relevance. The real time data will comprise information about posture, activity levels, location within the home environment, and activities of daily living. This can provide relevant feedback, e.g. to correct posture (appropriate to the management of stroke), or to undertake some activity (general long term condition management). Longer term data will include trends, which can be used to indicate whether the individual is on the correct track to achieving their end goals. The decision support interface for chronic pain provides an illustration of the factors (clinical, technological and individual) that are being taken into account. To investigate the relationships between self-reporting and treatment stages, machine learning techniques have been applied to analyse self-reporting data (Vowles and McCracken, 2008). Four supervised machine learning algorithms were applied: the C4.5 decision tree, naïve Bayes, a support vector machine and a multi-layer perceptron (MLP). These were used to classify three stages (pre-treatment, treatment, follow-up) based on patient answers to questionnaires. Though self reporting has been viewed as subjective, classification accuracy ranged from 84.1% to 94.7% for
classification of pre-treatment vs treatment, pre-treatment vs follow-up, and treatment vs follow-up. For example, the frequency of pain can be self reported (see Figure 22.6).
Figure 22.6. Self reporting data are collected and used for decision support system
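The classification exercise described above can be reproduced in outline with standard libraries. The sketch below assumes a feature matrix of questionnaire scores and treatment-stage labels, and uses scikit-learn's CART decision tree as a stand-in for C4.5 alongside naïve Bayes, a support vector machine and an MLP; a three-class setup is used for brevity, whereas the study reports pairwise comparisons. It is an illustration of the approach, not the analysis pipeline used in the project, and the data here are synthetic.

import numpy as np
from sklearn.tree import DecisionTreeClassifier        # CART, standing in for C4.5
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are patients' questionnaire scores, labels are
# treatment stages (0 = pre-treatment, 1 = treatment, 2 = follow-up). Accuracy on
# random data will be near chance; the study reports 84.1-94.7% on real responses.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 6))
y = np.repeat([0, 1, 2], 30)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print("%s: mean accuracy %.2f" % (name, scores.mean()))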
The final decision support system will incorporate the data collected by the monitoring component together with clinical profiles established through traditional face-to-face consultation and expert knowledge from healthcare professionals. These profiles include assessment details; the goals that users identify as desirable and achievable (for example, being able to go to church, go shopping or walk in the garden); and care plans (for example, walking for 10 minutes). The decision support interface needs to be capable of retrieving users' behaviour patterns and providing reminders, suggestions and/or advice on behaviour changes and on the life goals/care plan. It also needs to be capable of modification as user needs change over time. The next stage in the project involves mapping the findings from the elicitation of user needs onto the functionality of the PSMS technical infrastructure. This will produce a PSMS which can demonstrate self management concepts to users, and feedback from this will support further technical development. This development will incorporate user needs to ensure that the final PSMS offers the desired levels of support.
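As a simple illustration of how such a decision support component might turn monitored behaviour and a care plan into reminders and suggestions, the sketch below compares a day's recorded activity against care-plan targets. The rules, activity names and thresholds are invented for this example and do not describe the SMART2 decision support system.

def daily_feedback(care_plan, observed):
    """Compare observed behaviour with care-plan targets and return feedback messages.

    Both arguments map an activity name to minutes; the values here are hypothetical.
    """
    messages = []
    for activity, target_minutes in care_plan.items():
        done = observed.get(activity, 0)
        if done >= target_minutes:
            messages.append("Well done: %s target met (%d min)." % (activity, done))
        elif done == 0:
            messages.append("Reminder: no %s recorded yet today." % activity)
        else:
            messages.append("Suggestion: %d more minutes of %s." % (target_minutes - done, activity))
    return messages

care_plan = {"walking": 10, "balance exercises": 15}
observed_today = {"walking": 4}
for message in daily_feedback(care_plan, observed_today):
    print(message)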
22.5 References

Battersby M, Hoffmann S, Cadilhac D, Osborne R, Lalor E, Lindley R (2009) Getting your life back on track after stroke: a phase II multi-centered, single-blind, randomized, controlled trial of the stroke self-management program vs. the Stanford chronic condition self-management program or standard care in stroke survivors. International Journal of Stroke, 4(2): 67–145
Bodenheimer T, Wagner EH, Grumbach K (2002a) Improving primary care for patients with chronic illness. The chronic care model, part 2. Journal of the American Medical Association, 288(15): 1909–1914
Bodenheimer T, Wagner EH, Grumbach K (2002b) Improving primary care for patients with chronic illness. Journal of the American Medical Association, 288(14): 1775–1779
Burns WP, Nugent CD, McCullagh PJ, Zheng H, Finlay DD, Davies RJ et al. (2008) Personalisation and configuration of assistive technologies. In: Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2008), Vancouver, British Columbia, Canada
CDC (2009) Healthy aging: improving and extending quality of life among older Americans – at a glance 2009. Centers for Disease Control and Prevention. Available at: www.cdc.gov/NCCdphp/publications/aag/aging.htm (Accessed on 19 November 2009)
Consolvo S, Klasnja P, Avrahami D, Legrand L, Libby R, Mosher K et al. (2008) Flowers or a robot army? Encouraging awareness and activity with personal, mobile displays. In:
Proceedings of the 10th International Conference on Ubiquitous Computing (UbiComp 2008), Seoul, South Korea
Cooper A, Reimann R, Cronin D (2007) About face 3: the essentials of interaction design, 3rd edn. Wiley, Indianapolis, IN, US
DH (2004) Improving chronic disease management. Department of Health, UK. Available at: www.dh.gov.uk/assetRoot/04/07/52/13/04075213.pdf (Accessed on 19 November 2009)
EC (2006) ICT for Health i2010. Transferring the European healthcare landscape. European Commission. Available at: http://ec.europa.eu/information_society/activities/health/docs/publications/ictforhealthand-i2010-final.pdf (Accessed on 19 November 2009)
Effing TW, Monninkhof EM, van der Valk PD, van der Palen J, van Herwaarden CL, Partridge MR et al. (2007) Self-management education for patients with chronic obstructive pulmonary disease (Cochrane Review). The Cochrane Library. Issue 4. John Wiley & Sons
Gaver W, Dunne T, Pacenti E (1999) Cultural probes. Interactions, 6(1): 21–29
Lamb SE, Ferrucci L, Volapto S, Fried LP, Guralnik JM (2003) Risk factors for falling in home-dwelling older women with stroke: the women's health and aging study. Stroke, 34: 494–501
MIT Media Lab (2009) Available at: http://affect.media.mit.edu/ (Accessed on 19 November 2009)
SMART Consortium (2007) Available at: www.thesmartconsortium.org (Accessed on 19 November 2009)
Taylor DM, Cameron JI, Walsh L, McEwen S, Kagan A, Streiner DL et al. (2009) Exploring the feasibility of videoconference delivery of a self-management program to rural participants with stroke. Telemedicine Journal and E-health, 15(7): 646–654
UN (2006) Population aging 2006. Department of Economic and Social Affairs, United Nations. Available at: www.un.org/esa/population/publications/ageing/ageing2006.htm (Accessed on 19 November 2009)
Vowles KE, McCracken LM (2008) Acceptance and values-based action in chronic pain: a study of treatment effectiveness and process. Journal of Consulting and Clinical Psychology, 76(3): 397–407
Yelnik AP, Le Breton F, Colle FM, Bonan IV, Hugeron C, Egal V et al. (2008) Rehabilitation of balance after stroke with multisensorial training: a single-blind randomized controlled study. Neurorehabilitation and Neural Repair, 22: 468–476
Zheng H, Black ND, Harris HD (2005) Position-sensing technologies for movement analysis in stroke rehabilitation. Medical and Biological Engineering and Computing, 43(4): 413–420
Zheng H, Davies RJ, Hammerton J, Mawson SJ, Ware PM, Black ND et al. (2006) SMART project: application of emerging information and communication technology to home-based rehabilitation for stroke patients. International Journal on Disability and Human Development, 5(3): 271–276
Zheng H, Davies R, Stone T, Wilson S, Hammerton J, Mawson SJ et al. (2007) SMART rehabilitation: implementation of ICT platform to support home-based stroke rehabilitation. In: Proceedings of the 2nd International Conference on Usability and Internationalization, Part I (HCII 2007), Beijing, China
Zheng H, Nugent CD, McCullagh PJ, Black ND, Eccleston C, Bradley D et al. (2008) Towards a decision support personalised self management system for chronic conditions. In: Proceedings of 2008 IEEE International Conference on Networking, Sensing and Control (ICNSC 2008), Hainan, China
Index of Contributors

Allsop M.J. 209
Al Mahmud A. 157
Baumers S. 13
Beavan P. 69
Beckerle P. 177
Bhakta B. 209
Biswas P. 3, 113
Bix L. 89
Black N.D. 229
Brown J. 69
Burns W.P. 219
Clarkson P.J. 35, 57, 133, 145
Coffey D. 69
Collin C. 219
Conduit B.D. 99
Conduit G.J. 101
Cooper R.M. 101
Corso A. 101
Dalke H. 101
Davies R.J. 229
De la Fuente J. 89
Dong H. 79
Dumolo D. 25
Eccleston C. 229
Elton E. 25
Felzer T. 177
Gallagher J. 209
Hammerton J. 167
Harwin W.S. 219
Hawley M.S. 229
Heylighen A. 13
Hochheiser H. 197
Holt R. 209
Horn A-M. 123
Hurtienne J. 123
Lamperd B. 219
Langdon P.M. 35, 57, 123, 133, 145
Langley J. 189
Lazar J. 69, 197
Levesley M. 209
Limpens Y. 157
Loureiro R.C.V. 219
Martens J.B. 157
Mawson S.J. 229
McCullagh P.J. 229
Mieczakowski A. 133
Mountain G.A. 167, 229
Musselwhite C. 187
Nickpour F. 79
Nicolle C. 25
Nolf B. 69
Nugent C.D. 229
Parker J. 167
Poole R. 69
Rinderknecht S. 177
Robinson P. 3, 113
Rodriguez-Falcon E.M. 45, 187
Rowson J. 187
Turk R. 69
Valentine D. 219
Waith V. 69
Wall T. 69
Waller S. 35, 57
Weber K. 69
Wenger B. 69
Wenz B. 197
Williams E.Y. 35, 57
Wright P. 229
Wyatt D.F. 101
Yoxall A. 45, 187
Zheng H. 229