Candidacy for Implantable Hearing Devices
Guest Editor
Guido F. Smoorenburg, Utrecht
32 figures and 10 tables, 2004
S. Karger, Medical and Scientific Publishers
Basel • Freiburg • Paris • London • New York • Bangalore • Bangkok • Singapore • Tokyo • Sydney
Fax +41 61 306 12 34 E-Mail
[email protected] www.karger.com
Drug Dosage The authors and the publisher have exerted every effort to ensure that drug selection and dosage set forth in this text are in accord with current recommendations and practice at the time of publication. However, in view of ongoing research, changes in government regulations, and the constant flow of information relating to drug therapy and drug reactions, the reader is urged to check the package insert for each drug for any change in indications and dosage and for added warnings and precautions. This is particularly important when the recommended agent is a new and/or infrequently employed drug.
All rights reserved. No part of this publication may be translated into other languages, reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, microcopying, or by any information storage and retrieval system, without permission in writing from the publisher or, in the case of photocopying, direct payment of a specified fee to the Copyright Clearance Center (see ‘General Information’). © Copyright 2004 by S. Karger AG, P.O. Box, CH–4009 Basel (Switzerland) Printed in Switzerland on acid-free paper by Reinhardt Druck, Basel ISBN 3–8055–7794–X
Vol. 9, No. 4, 2004
Contents
189 Editorial
Smoorenburg, G.F. (Utrecht)

Reviews
190 Candidacy for the Bone-Anchored Hearing Aid
Snik, A.F.M.; Bosman, A.J.; Mylanus, E.A.M.; Cremers, C.W.R.J. (Nijmegen)
197 Cochlear Implant Candidacy and Surgical Considerations
Cohen, N.L. (New York, N.Y.)

Original Papers
203 Channel Interaction in Cochlear Implant Users Evaluated Using the Electrically Evoked Compound Action Potential
Abbas, P.J.; Hughes, M.L.; Brown, C.J.; Miller, C.A.; South, H. (Iowa City, Iowa)
214 HiResolution™ and Conventional Sound Processing in the HiResolution™ Bionic Ear: Using Appropriate Outcome Measures to Assess Speech Recognition Ability
Koch, D.B.; Osberger, M.J.; Segel, P.; Kessler, D. (Sylmar, Calif.)
224 Development of Language and Speech Perception in Congenitally, Profoundly Deaf Children as a Function of Age at Cochlear Implantation
Svirsky, M.A.; Teoh, S.-W.; Neuburger, H. (Indianapolis, Ind.)
234 Exploring the Benefits of Bilateral Cochlear Implants
van Hoesel, R.J.M. (Melbourne)
247 Auditory Brainstem Implant in Posttraumatic Cochlear Nerve Avulsion
Colletti, V.; Carner, M.; Miorelli, V.; Colletti, L.; Guida, M.; Fiorino, F. (Verona)

256 Author Index
256 Subject Index
257 Conference Calendar
Access to full text and tables of contents, including tentative ones for forthcoming issues: www.karger.com/aud_issues
Audiol Neurootol 2004;9:189 DOI: 10.1159/000078387
Editorial
When we started our work on cochlear implants some 20 years ago, I did not at all anticipate that cochlear implants would become such an important issue in audiology and neuro-otology. At that time it was still an open question whether it would be possible to evoke meaningful sensations by stimulating the auditory nerve electrically. Applying single-electrode systems, we were glad when the implant provided auditory support to speech-reading. Nowadays, it seems normal when we achieve speech perception without visual support. We should bear in mind, however, that some recipients do not, and that these implant recipients deserve special attention.

In addition to these clinical achievements, cochlear implants also provide a wonderful opportunity for basic scientific research in hearing. The implant enables us to exert control over the spatial and temporal distribution of activity in the auditory nerve, it enables us to record the response of the nerve to the electrical stimulus, and it provides us with a behavioral response – a combination of possibilities undreamt of only a decade ago.

In 2002 we organized a meeting in Utrecht, The Netherlands, focused on thorough discussions of the clinical and scientific aspects of implantable hearing aids. Although there were already many meetings in the field, we felt that there was a need for this focused meeting, in which all papers were presented upon invitation. Participation was limited to 100 persons, and half the time was allotted to discussions. Much of the work presented was in progress and, at the time, unsuited for publication. However, a number of papers reported research that neared completion, and these papers have been included in this special issue. Two of the papers have been published before in Audiology and Neuro-Otology: one that had already been submitted [Smoorenburg et al.: 2002;7:335–347] and one review paper on deafness genes that did not quite fit the present issue [Cryns and Van Camp: 2004;9:2–22]. The present issue spans the whole range from bone-anchored hearing aids to brainstem implants. We hope that it may serve the reader.

I wish to acknowledge the principal sponsor of this meeting, the Amplifon Centre of Research and Studies, together with the affiliated Dutch company Beter Horen, and the co-sponsors Advanced Bionics Europe, Cochlear Europe in cooperation with Newmedic Belgium, and MedEl, Innsbruck. Without their support it would have been impossible to organize the meeting, and we would not have had the reflection of this meeting in the present special issue.

Guido Smoorenburg, Utrecht
Review Audiol Neurootol 2004;9:190–196 DOI: 10.1159/000078388
Received: February 6, 2003 Accepted after revision: December 8, 2003
Candidacy for the Bone-Anchored Hearing Aid
Ad F.M. Snik, Arjan J. Bosman, Emmanuel A.M. Mylanus, Cor W.R.J. Cremers
Department of Otorhinolaryngology, University Hospital Nijmegen, Nijmegen, The Netherlands
Key Words
Bone-anchored hearing aid · Binaural advantage · Bilateral application · Speech-in-noise test · Chronic draining ears · Aural atresia · Contralateral routing of signal · Unilateral deafness
Abstract The BAHA (bone-anchored hearing aid) is a bone conduction hearing aid with percutaneous transmission of sound vibrations to the skull. The device has been thoroughly evaluated by various implant groups. These studies showed that, in audiological terms, the BAHA is superior to conventional bone conduction devices. In comparison with air conduction devices, the results are ambiguous. However, a positive effect is found with respect to aural discharge. The most powerful BAHA can be applied to patients with a sensorineural hearing loss component of up to 60 dB HL. It was shown that bilateral BAHA application leads to binaural sound processing. Preliminary results on the application of the BAHA in patients with unilateral conductive hearing loss suggest that stereophonic hearing can be re-established. The application of the BAHA as a transcranial CROS (contralateral routing of signal) device in unilateral deafness minimizes head shadow effects. Copyright © 2004 S. Karger AG, Basel
Paper presented at the conference ‘Candidacy for implantable hearing devices’, June 27–29, 2002, Utrecht, The Netherlands.
© 2004 S. Karger AG, Basel 1420–3030/04/0094–0190$21.00/0
Introduction
In patients with inoperable aural atresia, bone conduction hearing aids are the only option to improve hearing. Further, in hearing-impaired patients with chronic draining ears, bone conduction hearing aids are the safest option. As bone conduction devices are not very popular, air conduction hearing aids are still quite often fitted instead. However, the ear mould that occludes the ear canal usually has a negative effect on otorrhea. Continuous otitis media may even cause cochlear damage [Paparella et al., 1984]. Consequently, there has been a search for more acceptable bone conduction devices.

The main reason why conventional bone conduction devices are unpopular is that the transducer has to be pressed firmly against the skull to achieve sufficient sound transmission. This is accomplished by wearing a headband or special spectacles with a rigid frame. The pressure needed to apply the device effectively often causes skin irritation, itching and sometimes headaches. In addition, the headband or spectacles are often clumsy and thus unattractive. A further problem is a technical shortcoming: transcutaneous bone conduction works, but it is not very efficient, so relatively powerful amplifiers are needed. Therefore, limitations in gain and in the maximum output of the device may limit the patient's performance.

These problems have been overcome by the introduction of a new type of bone conduction hearing aid, the BAHA. The BAHA was developed by Håkansson et al. [1984, 1990a] in Gothenburg, Sweden, in the 1980s. It comprises a specially developed bone conduction transducer with supplementary electronics, coupled percutaneously to the skull by a surgically placed skin-penetrating titanium implant. One advantage of the BAHA is that no pressure is needed to apply the transducer; thus no headband or spectacles are required. Further, the transmission of sound vibrations to the skull is much more efficient than conventional transcutaneous transmission: experiments showed that percutaneous transmission is 10–15 dB more efficient [Håkansson et al., 1984], because the attenuating skin and subcutaneous layers are bypassed. The BAHA also has some disadvantages: it is more costly than a conventional device and surgery is needed. Nowadays, the surgical procedure takes 15–30 min and is performed under local anesthesia. The higher cost might be outweighed by fewer outpatient visits (in patients who were previously using air conduction devices) and by the potentially better gain (and thus better speech recognition), which might lead to better communication and improved quality of life.

Below, an overview of audiological data is given to show whether or not the BAHA is better than conventional devices. The questions addressed are: How much gain can theoretically be achieved with the BAHA? Does the BAHA perform better than conventional bone conduction devices? And does the BAHA perform better than air conduction hearing aids? Further, the results of bilateral application are presented, as well as preliminary results on the application of the BAHA as a transcranial CROS device and on the BAHA applied in patients with a unilateral air-bone gap and a second, normal hearing ear. Detailed data on the stability and safety of the percutaneous implant were published by Tjellström and Håkansson [1995] and Van der Pouw et al. [1999b].

Correspondence: A. Snik, Department of Otorhinolaryngology, University Hospital Nijmegen, PO Box 9101, NL–6500 HB Nijmegen (The Netherlands), Tel. +31 24 3614927, Fax +31 24 3540251, E-Mail [email protected]
The ‘Functional’ Gain of the BAHA
By definition, for linear devices, functional gain is the difference between aided and unaided sound field thresholds. However, with bone conduction devices, this measure is not informative, as the unaided sounds are perceived via air conduction, whereas the sounds in the aided condition are perceived via bone conduction. Thus functional gain measured in the standard way is primarily an estimate of the air-bone gap.
As suggested by Carlsson and Håkansson [1997], the functional gain of bone conduction devices can be defined as the difference between aided sound field thresholds (expressed in dB HL) and bone conduction thresholds measured with a standard audiometer. Using technical data obtained with the specially developed skull simulator, they showed that the maximum functional gain of the standard BAHA (BAHA Classic or the previous BAHA HC200) was 5–10 dB in the mid frequencies, but less, or even negative, at the other frequencies (fig. 1). Thus with the BAHA Classic, an air-bone gap can be ‘closed’, but any possible sensorineural hearing loss can be only marginally compensated for. The most powerful BAHA to date, the BAHA Cordelle, was introduced in 1999 [Van der Pouw et al., 1998]. It comprises a body-worn amplifier powering a transducer that is coupled to the percutaneous abutment. The functional gain was calculated with the technical data of the BAHA Cordelle [Van der Pouw et al., 1998] (fig. 1) and found to be about 15 dB higher than that of the BAHA Classic. The figure also shows the maximum functional gain of another type of implantable bone conduction device, the Audiant bone conduction implant, which was developed by Hough et al. [1986]. This device was also evaluated on the skull simulator [Håkansson et al., 1990b]. Figure 1 clearly shows that the BAHA is more powerful than the Audiant device. This conclusion is in agreement with conclusions drawn in several field studies [Browning and Gatehouse, 1994; Negri et al., 1997; Snik et al., 1998b]. The Audiant device is no longer available on the market.
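The definition above reduces to simple per-frequency arithmetic, which the following sketch illustrates (all threshold values are invented for illustration, not measured data from the studies cited here):

```python
# Functional gain of a bone conduction device, following the definition
# of Carlsson and Hakansson [1997]: the difference between the bone
# conduction threshold and the aided sound field threshold (both dB HL),
# computed per audiometric frequency. Thresholds below are illustrative.

FREQS_HZ = [250, 500, 1000, 2000, 4000]

def functional_gain(aided_dB_HL, bc_dB_HL):
    # Positive gain: the aided threshold is better than the bone
    # conduction threshold, i.e. part of a sensorineural component is
    # compensated. Negative gain: the air-bone gap is not fully closed
    # at that frequency.
    return [bc - aided for aided, bc in zip(aided_dB_HL, bc_dB_HL)]

aided = [40, 30, 25, 25, 35]   # aided sound field thresholds, dB HL
bc    = [30, 35, 35, 30, 30]   # bone conduction thresholds, dB HL
print(dict(zip(FREQS_HZ, functional_gain(aided, bc))))
```

With these invented numbers the gain peaks in the mid frequencies and turns negative at 250 and 4000 Hz, mirroring the shape reported for the standard BAHA in figure 1.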
Comparison between the BAHA and Conventional Bone Conduction Devices
To compare patient performance with conventional bone conduction hearing aids to performance with the BAHA, we studied aided sound field thresholds in 89 consecutive patients. As the patients controlled the gain with the volume wheel, it was expected that, on average, the aided thresholds with the two devices would be the same. However, the aided thresholds were better with the BAHA (5–7 dB), except at 250 Hz (–2 dB). This indicated that the BAHA was used at a volume setting that resulted in better sound field thresholds than the previous bone conduction hearing aids. The shift in aided thresholds, averaged at 0.5, 1 and 2 kHz, was 5.4 dB (SD 8.4 dB), a significant shift (t test, p < 0.01). Further, a significant shift in the speech reception threshold was found (4.6 ± 7.6 dB, p < 0.01). This speech measure depends directly on the aided tone thresholds.

Fig. 1. Maximum functional gain (aided thresholds minus bone conduction thresholds) as a function of frequency for the standard BAHA Classic, the more powerful BAHA Cordelle and the former Audiant bone conductor. A negative functional gain implies that the air-bone gap cannot be fully compensated for.

Most likely, the aided thresholds were better with the BAHA than with the conventional aids because the sound quality remained acceptable even at higher volume settings. Owing to the more efficient percutaneous transmission of sound vibrations to the skull, saturation of the amplifier by loud sounds occurs less readily, which enables a higher volume setting. Significant improvements in speech-in-noise test scores have been reported with the BAHA compared to conventional bone conduction hearing aids [Tjellström and Håkansson, 1995; Van der Pouw et al., 1999a]. In contrast with speech-in-quiet scores, the speech-in-noise scores are in principle independent of the volume setting, because a higher gain means both louder speech and louder noise. Nevertheless, better results were found with the BAHA, which was ascribed to better performance in the frequency range above 0.5 kHz, the most important frequency range for speech perception. Snik et al. [1995] reported that 30 out of 48 patients (64%) had a statistically significantly better speech-in-noise test score with the BAHA; the remaining patients had comparable scores with the two devices. None of the patients showed a significant deterioration.

Figure 2 shows individual mean aided threshold values (averaged at 0.5, 1 and 2 kHz) versus the mean sensorineural component of the hearing loss (averaged at the same frequencies) in 89 BAHA users. When a data point falls on the 45° line, the air-bone gap is compensated for. When a point falls below this line, part of the sensorineural hearing loss has also been compensated for. The figure shows that in the majority of subjects using the BAHA Classic (or its predecessor, the BAHA HC200), the air-bone gap was closed to within 10 dB. There was no or very little compensation (in decibels) for the sensorineural hearing loss component. In most patients with a more severe sensorineural hearing loss component fitted with the BAHA Cordelle (or its predecessor, the BAHA 220), part of the sensorineural component was also compensated for. Concerning the application range of the BAHA Cordelle, Van der Pouw et al. [1998] showed that this device can be used successfully in patients with a sensorineural hearing loss component of at least up to 60 dB HL.

Fig. 2. Individual mean aided threshold (at 0.5, 1 and 2 kHz) as a function of the mean sensorineural hearing loss component (at the same frequencies) in 89 patients using either the standard BAHA (Classic or HC200) or the more powerful version (Cordelle or HC220).

Questionnaires were used to obtain data on the patients' subjective overall preference and on items such as speech recognition, sound quality and comfort. Tjellström and Håkansson [1995], who reported on 127 BAHA users, found an overall satisfaction score of 8.7 on a scale from 1 (very poor) to 10 (excellent). Snik et al. [1995] reported that 75% of their patients preferred the BAHA, while 10% preferred the previous bone conduction hearing aid with regard to sound perception in quiet and in noise. New data from the Birmingham group have recently been published [Dutt, 2002]. Dutt used several disease-specific questionnaires and more general instruments to assess disability, handicap and quality of life. In accordance with other studies, he found significantly improved scores with the BAHA with regard to hearing disability and handicap. Long-term results have also been studied with questionnaires: 5–10 years after BAHA fitting, almost all patients were still using their BAHA on a daily basis and were satisfied with the result [Van der Pouw et al., 1999a]. It can be concluded that, on average, the vast majority of patients benefited significantly from the change to the BAHA. Other studies on smaller groups of patients reported similar results [Hamann et al., 1991; Browning and Gatehouse, 1994; Negri et al., 1997; Lustig et al., 2001].
Comparison between the BAHA and Conventional Air Conduction Hearing Aids
In contrast with patients who had been using a conventional bone conduction hearing aid, those switching from an air conduction hearing aid to the BAHA showed ambiguous results. Some patients performed better with the BAHA whereas others performed better with the air conduction hearing aid [Browning and Gatehouse, 1994; Håkansson et al., 1990a; Mylanus et al., 1998].
Mylanus et al. [1998] found this ambiguity in 34 patients. Using individual speech-in-noise scores, they found that in 15 patients (44%) the score improved statistically significantly, whereas in 5 patients (15%) performance was significantly poorer with the BAHA. With the BAHA, speech recognition increased with the size of the air-bone gap. They argued as follows. First, consider a patient with pure sensorineural hearing loss. As hearing by bone conduction is far less efficient than by air conduction, results with even a powerful bone conduction device will be poorer than those obtained with an air conduction hearing aid. However, if an air-bone gap is present, the amplification of the air conduction hearing aid needs to be increased substantially because, in contrast to the bone conduction device, the air conduction hearing aid has to compensate for the air-bone gap. This might lead to problems such as feedback and saturation, which means that with increasing air-bone gap the results with the air conduction hearing aid may become progressively poorer. A 'break-even point' was found at an air-bone gap of about 25 dB [Mylanus et al., 1998].

Questionnaires showed that about 50% of the patients preferred the BAHA, while 25% preferred the air conduction hearing aid with regard to sound quality and speech recognition [Mylanus et al., 1998]. A reduced incidence of ear infections was noted by 97% of the subjects. The majority of patients stated that having less trouble with ear infections was the 'most important advantage' of the BAHA compared to the air conduction hearing aid. This is in agreement with the results of Macnamara et al. [1996], who found a significant reduction in discharge in 84% of their 69 patients with chronic draining ears. This implies that these patients will require outpatient treatment less often and that the number of ENT visits will decrease, an important effect in terms of cost-benefit.
Bilateral Application
Bilateral application of the BAHA seems open to debate, because the attenuation of sound vibrations in the skull is limited. Therefore, one BAHA can be expected to stimulate both cochleae to approximately the same extent; the attenuation of the vibrations between the two cochleae is on the order of only 10–15 dB. Studies on the bilateral application of the BAHA are still scarce [Hamann et al., 1991; Snik et al., 1998a], but the results are promising. Recently, we assessed the binaural advantage by studying directional hearing, diotic summation and speech perception in noisy conditions [Bosman et al., 2001].

Directional Hearing
There are two important cues for directional hearing in the horizontal plane: interaural time differences (ITDs) and interaural level differences (ILDs). ILDs arise from the head shadow effect that occurs with high-frequency sounds (above 1 kHz). With low-frequency sounds, directional hearing is based on detecting ITDs between the sound waves reaching the two ears. Mostly, directional hearing is tested using an arc of loudspeakers. ITDs were also studied using the binaural masking level difference (BMLD) test, which measures a patient's ability to detect a low-frequency tone in noise (narrow-band noise centered at the frequency of the test tone). The procedure is as follows. First, the test tone and the noise have the same ITD; under this condition, both are perceived in the middle of the head, and the (masked) threshold for the test tone is measured. Next, the test tone is shifted 180° in phase at one ear. The noise is then still perceived in the middle of the head, but the perception of the test tone shifts away from the midline. Again, the threshold of the test tone in noise is determined. The difference between the two threshold values is called the 'release from masking' or the BMLD value.
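The BMLD procedure just described reduces to a difference of two masked thresholds, as this sketch shows (the threshold values are invented for illustration, not measured data):

```python
# Binaural masking level difference (BMLD), the 'release from masking':
# the masked threshold of the test tone with tone and noise at the same
# ITD (both perceived in the middle of the head) minus the threshold
# after the tone is phase-inverted by 180 degrees at one ear.
# Threshold values below are invented for illustration.

def bmld(threshold_same_phase_dB, threshold_inverted_dB):
    # A positive BMLD means the phase inversion made the tone easier to
    # detect, i.e. the two cochleae receive distinguishable inputs.
    return threshold_same_phase_dB - threshold_inverted_dB

print(bmld(62, 57))  # a release from masking of 5 dB
```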
Diotic Summation
If the two ears cooperate correctly, binaural stimulation will result in a 3–6 dB improvement in loudness, owing to the central summation of the two inputs. Speech reception thresholds can be used to see whether the patients showed a summation effect after changing from one to two BAHAs.

Speech Perception in Noise with Spatially Separated Speech and Noise Sources
In these experiments, speech is mostly presented from a loudspeaker placed in front of the patient, which is considered the most natural listening situation. The noise is presented either to the left or to the right side of the patient. In both situations, the speech-in-noise threshold can be determined with the patient wearing one versus two BAHAs. In the unilaterally aided case, if the noise is presented on the aided side (called the baffle side), the application of a second BAHA on the shadow side of the head might result in an increase in speech recognition. If noise is presented on the initially unaided side, the application of a second BAHA might have a negative, disturbing effect, unless that input can be ignored effectively.

Outcomes
Twenty-five patients with bilateral BAHAs and symmetrical conductive or mixed hearing loss participated in the study [Bosman et al., 2001]. With one BAHA, the mean scores of the whole group on the directional hearing test at 0.5 and 2 kHz were close to chance level. With bilateral BAHAs there was significant improvement in directional hearing, although it was not perfect. When the criterion for correct identification was broadened, i.e. identification of the stimulating loudspeaker plus or minus one loudspeaker (plus or minus 30°), a mean percentage correct score of around 90% was found at the two frequencies [Bosman et al., 2001]. BMLDs determined for test tone frequencies of 0.25, 0.5 and 1 kHz amounted to 5.4, 4.9 and 6.1 dB, respectively (SD between 2.9 and 5.0 dB).
Thus, release from masking was indeed found, which suggests that the two cochleae perceive sound differently. However, the BMLD values were 5–8 dB lower than those found in subjects with normal hearing. It should be noted that sensorineural hearing loss leads to poorer BMLD values [Jerger et al., 1984]. The BAHA users had mild sensorineural hearing loss components. Based on the data of Jerger et al., it can be concluded that about 3 dB of the 5- to 8-dB discrepancy can be attributed to this effect. The remaining part might be due to cross-stimulation.
Table 1 shows the mean results on the diotic summation test and the speech-in-noise test. As speech material, short conversational sentences were used [Bosman et al., 2001]. For comparison purposes, we have added data from 10 subjects with normal hearing who were listening either binaurally or monaurally. In the latter case, one ear was blocked with an earplug and ear muff [Snik et al., 1998a]. With bilateral BAHAs, a 4-dB improvement in the SRT in quiet was found, owing to diotic summation (table 1). In the speech-in-noise tests, when the noise was presented on the side without BAHA (ear 2, table 1) no statistically significant change occurred when the second BAHA was applied. This means that the patients could ignore the second input effectively. When the noise was presented on the BAHA side (ear 1), there was significant improvement in the SRT when the second BAHA was applied. However, the improvement was smaller than in the control group (table 1). Hamann et al. [1991] were the first to publish data on the bilateral application of the BAHA. They studied diotic summation alone and reported a 4-dB improvement in the SRT value, i.e. the same value as reported above.
The BAHA in Patients with Unilateral Air-Bone Gap
The bilateral BAHA studies suggest that transcranial attenuation of vibrations is sufficient to enable different inputs to the two cochleae. The next question is whether or not in patients with unilateral conductive hearing loss, the impaired ear fitted with a BAHA can cooperate with the normal hearing ear in such a way that binaural hearing is achieved. We fitted 6 patients with acquired unilateral air-bone gap with a BAHA and tested them with the protocol used in the bilateral BAHA study mentioned above. Results showed, firstly, that the aided thresholds of the impaired ear were close to the thresholds of the ear with normal hearing (within 15 dB), and, secondly, that significant improvements were present in directional hearing, diotic summation and the speech-in-noise tests. Table 1 shows that the improvement in the speech-in-noise tests is larger than that found in the patients with bilateral BAHA; the improvement is even comparable to that of the controls. Results in 2 patients with longstanding congenital unilateral conductive hearing loss were less convincing [Snik et al., 2002]. Chassin [1998] was the first to publish data on the BAHA application in unilateral conductive hearing loss. He studied diotic summation in 5 patients and found a
mean improvement of 2.2 dB, comparable to the data reported in table 1.

Table 1. Mean improvement (unaided minus aided score, with SD) in speech reception thresholds for patients with bilateral hearing loss after changing from one (at ear 1) to two BAHAs (group BL-BB) and for patients with unilateral conductive loss after adding a BAHA to the poorer ear (ear 2; group UL-UB)

Group      n    Change in speech reception threshold, dB
                in quiet     in noise at ear 1   in noise at ear 2
BL-BB      25   4.0 ± 1.9¹   2.1 ± 1.8¹          0.6 ± 1.4
UL-UB       6   1.7 ± 2.4    4.1 ± 1.8¹          0.3 ± 1.0
Controls   10   n.a.         4.6 ± 1.6¹          0.2 ± 1.2

n.a. = Not available. For comparison purposes, data on adults with normal hearing are added, after changing from monaural to binaural listening using a combination of an earplug and an ear muff. ¹ Significant improvement at the 1% level.
The BAHA as a Transcranial CROS Device
In the past few decades, several attempts have been made to help patients with unilateral total deafness [for a review, see Valente et al., 1994]. Apart from conventional CROS hearing aids, so-called transcranial CROS devices have been tried, such as a bone conduction device applied near the deaf ear, or a powerful air conduction hearing aid in the deaf ear, producing vibrations that might be picked up by the skull and transmitted to the normal cochlea via bone conduction. The degree of success varied [Valente et al., 1994]. Recently, Vaneecloo et al. [2001] published data on the application of the BAHA as a transcranial CROS device in patients with one deaf ear or, at least, a significant asymmetry in hearing thresholds between the two ears. They reported that, on average, their patients were satisfied and used the BAHA on a daily basis. In such patients, the BAHA is not expected to have any significant effect on directional hearing or binaural summation, but in certain listening situations it might compensate for head shadow effects. Results in 9 Nijmegen patients with one normal ear and one deaf ear, fitted with a transcranial BAHA CROS, support this assumption [Bosman et al., 2003]. More data on this appealing application will follow soon.
Conclusions
The majority of patients who used to wear a conventional bone conduction hearing aid prefer the BAHA. This can be explained by the audiometric results with the BAHA, which, on average, are superior to those obtained with conventional bone conductors. This is a consistent finding and indicates that all patients who need a bone conduction device should be considered candidates for the BAHA.

In patients who used to wear air conduction hearing aids, the results are ambiguous; the size of the air-bone gap is important. If it exceeds about 25 dB, better audiometric results with the BAHA can be expected. Moreover, the BAHA is an alternative treatment with highly positive effects on aural discharge. Whether air conduction hearing aids should be withheld completely from such cases is still under debate, in view of the potential cochlear damage in patients with chronic middle ear inflammation.

The BAHA Classic was developed for patients with normal cochlear function or mild sensorineural hearing loss. The more powerful BAHA Cordelle is of special importance for subjects with profound mixed hearing loss with a sensorineural component of up to 60 dB HL. For such patients, there is hardly any alternative. Finally, results indicate that bilateral application is beneficial to patients with symmetric bone conduction thresholds. Preliminary results in patients with unilateral air-bone gap suggest that fitting a BAHA restores stereophonic hearing at least in part. In our view, the BAHA is a unique and indispensable tool in modern hearing rehabilitation.
References

Bosman AJ, Snik AFM, van der Pouw CTM, Mylanus EAM, Cremers CWRJ: Audiometric evaluation of bilaterally fitted bone-anchored hearing aids. Audiology 2001;40:158–167.
Bosman AJ, Snik AFM, Hol M, Mylanus EAM, Cremers CWRJ: Bone-anchored hearing aids in unilateral inner ear deafness. Acta Otolaryngol 2003;123:258–260.
Browning GG, Gatehouse S: Estimation of the benefit of bone-anchored hearing aids. Ann Otol Rhinol Laryngol 1994;103:872–878.
Carlsson PU, Håkansson BEV: The bone-anchored hearing aid: Reference quantities and functional gain. Ear Hear 1997;18:34–41.
Chassin M: Bone anchored hearing aids (BAHA) and unilateral conductive losses. Hear Rev 1998;5:34–43.
Dutt S: The Birmingham bone-anchored hearing aid programme. Some audiological and quality of life outcomes; thesis, Nijmegen, 2002.
Håkansson B, Tjellström A, Rosenhall U: Hearing thresholds with direct bone conduction versus conventional bone conduction. Scand Audiol 1984;13:3–13.
Håkansson B, Liden B, Tjellström A, Ringdahl A, Jacobsson M, Carlsson P, Erlandsson BE: Ten years of experience with the Swedish bone-anchored hearing system. Ann Otol Rhinol Laryngol Suppl 1990a;151:1–16.
Håkansson B, Tjellström A, Carlsson P: Percutaneous vs. transcutaneous transducers for hearing by direct bone conduction. Otolaryngol Head Neck Surg 1990b;102:339–344.
Hamann C, Manach Y, Roulleau P: La prothèse auditive à ancrage osseux BAHA. Résultats applications bilatérales. Rev Laryngol Otol Rhinol (Bord) 1991;112:297–300.
Hough J, Himelick T, Johnson B: Implantable bone conduction hearing device: Audiant bone conductor. Update on our experiences. Ann Otol Rhinol Laryngol 1986;95:498–504.
Jerger J, Brown D, Smith S: Effect of peripheral hearing loss on the masking level difference. Arch Otolaryngol 1984;110:290–296.
Lustig LR, Arts HA, Brackmann DE, Francis HF, Molony T, Megerian CA, Moore GF, Moore KM, Morrow T, Potsic W, Rubenstein JT, Srireddy S, Syms CA, Takahashi G, Vernick D, Wackym PA, Niparko JK: Hearing rehabilitation using the BAHA, bone-anchored hearing aid; results in 40 patients. Otol Neurotol 2001;22:328–334.
Macnamara M, Phillips D, Proops DW: The bone-anchored hearing aid (BAHA) in chronic suppurative otitis media (CSOM). J Laryngol Otol 1996;21(suppl):39–40.
Mylanus EAM, van der Pouw CTM, Snik AFM, Cremers CWRJ: An intra-individual comparison of the BAHA and air-conduction hearing aids. Arch Otolaryngol Head Neck Surg 1998;124:271–276.
Negri S, Bernath O, Häusler R: Bone conduction implants: Xomed Audiant Bone Conductor vs. BAHA. Ear Nose Throat J 1997;76:394–396.
Snik AFM, Mylanus EAM, Cremers CWRJ: The bone-anchored hearing aid compared with conventional hearing aids. Audiologic results and the patients’ opinions. Otolaryngol Clin North Am 1995;28:73–83.
Snik AFM, van der Pouw CTM, Beynon AJ, Cremers CWRJ: Binaural application of the bone-anchored hearing aid. Ann Otol Rhinol Laryngol 1998a;107:187–193.
Snik AFM, Dreschler WA, Tange RA, Cremers CWRJ: Short- and long-term results with implantable transcutaneous and percutaneous bone-conduction devices. Arch Otolaryngol Head Neck Surg 1998b;124:265–268.
Snik AFM, Mylanus EAM, Cremers CWRJ: The bone-anchored hearing aid in patients with a unilateral air-bone gap. Otol Neurotol 2002;23:61–66.
Tjellström A, Håkansson B: The bone-anchored hearing aid: Design principles, indications and long-term clinical results. Otolaryngol Clin North Am 1995;28:53–72.
Valente M, Valente M, Meister M, Macauley K, Vass W: Selecting and verifying hearing aid fittings for unilateral hearing loss; in Valente M (ed): Strategies for Selecting and Verifying Hearing Aid Fittings. Stuttgart, Thieme, 1994, pp 228–248.
Van der Pouw CTM, Carlsson P, Cremers CWRJ, Snik AFM: A new more powerful bone-anchored hearing aid: First results. Scand Audiol 1998;27:179–182.
Van der Pouw CTM, Snik AFM, Cremers CWRJ: The BAHA HC200/300 in comparison with conventional bone conduction hearing aids. Clin Otolaryngol 1999a;24:171–176.
Van der Pouw CTM, Mylanus EAM, Cremers CWRJ: Percutaneous implants in the temporal bone for securing a bone conductor: Surgical methods and results. Ann Otol Rhinol Laryngol 1999b;108:532–537.
Vaneecloo FM, Ruzza I, Hanson JN, Gerard T, Dehaussy J, Cory M, Arrouet C, Vincent C: Appareillage mono pseudo stéréophonique par BAHA dans les cophoses unilatérales: A propos de 29 patients. Rev Laryngol Otol Rhinol (Bord) 2001;122:343–350.
Review

Audiol Neurootol 2004;9:197–202
DOI: 10.1159/000078389
Received: April 10, 2003
Accepted after revision: December 12, 2003
Cochlear Implant Candidacy and Surgical Considerations

Noel L. Cohen
Department of Otolaryngology, NYU School of Medicine, New York, N.Y., USA
Key Words
Cochlear implants · Candidacy · Surgery
Abstract

Numerous changes continue to occur in regard to cochlear implant candidacy. In general, these have been accompanied by concomitant and satisfactory changes in surgical techniques. Together, these advances have improved the utility and safety of cochlear implantation. Most devices are now approved for use in patients with severe to profound hearing loss rather than the prior requirement of a bilateral profound loss. In addition, studies have begun utilizing short electrode arrays for shallow insertion in patients with considerable low-frequency residual hearing. This technique will allow the recipient to continue to use acoustically amplified hearing for the low frequencies simultaneously with a cochlear implant for the high frequencies. New hardware, such as the behind-the-ear speech processors, requires modification of existing implant surgery. Similarly, the new perimodiolar electrodes require special insertion techniques. Bilateral implantation clearly requires modification of the surgical techniques used for unilateral implantation. The surgery remains mostly the same, but takes almost twice as long and requires some modification, since at a certain point, when the first device is in contact with the body, the monopolar cautery may no longer be used. Research has already begun on the development of the totally implantable cochlear implant (TICI). This will clearly require a modification of the surgical technique currently used for the present semi-implantable devices. In addition to surgically burying the
components of the present cochlear implant, we will also have to develop techniques for implanting a rechargeable power supply and a microphone for the TICI. The latter will be a challenge, since it must be placed where it is capable of great sensitivity, yet not exposed to interference or the risk of extrusion. The advances in the design of, and indications for, cochlear implants have been matched by improvements in surgical techniques and a decrease in complications. The resulting improvements in safety and efficacy have further encouraged the use of these devices. We anticipate further changes in the foreseeable future, for which there will likely be surgical problems to solve. Copyright © 2004 S. Karger AG, Basel
Introduction
Candidacy for cochlear implantation has changed gradually but significantly since the first multichannel devices were implanted in the late 1970s. The lowering of the permissible hearing loss from profound to severe to profound, and the concomitant ability to implant those with some degree of benefit from amplification, have been very significant changes [Moog, 2002; Novak et al., 2000; Osberger et al., 2002]. This has been largely due to the demonstrable benefits obtained by cochlear implant recipients who suffered from profound hearing loss and had no gain from amplification [Boothroyd and Boothroyd-Turner, 2002; Dowell et al., 2002; Funasaka, 2000; Gantz et al., 2000; Garnham et al., 2002; Gibson et al., 2000; Kim et al., 2000; Kishon-Rabin et al., 2002; Oh et al.,
2003; O’Neill et al., 2002; Osberger et al., 2000; Pulsifer et al., 2003; Richter et al., 2002; Rubinstein, 2002; Sainz et al., 2003; Schon et al., 2002; Smoorenburg et al., 2002; Staller et al., 2002; Tomblin et al., 2000; Tyler et al., 2000; Waltzman et al., 1993, 1997, 2002; Waltzman and Cohen, 1998, 1999; Wright et al., 2002; Young and Killen, 2002]. Among the other changes, many of which have a direct impact on the techniques and safety of surgery, are the decrease in the age of recipients and our willingness to implant patients with abnormal cochleae, additional handicaps, or some residual hearing. New devices have contributed to this expansion, as have excellent results in adults and children. Consequently, surgical techniques have been modified and surgery has become safer. Complications have diminished still further from their previously low and acceptable level. We will review the changes in candidacy and surgery, and look into the near future at further changes which can now be anticipated.
Candidacy
Candidacy for cochlear implantation has changed in many ways in the past 20 years, many of which have surgical implications. Among these changes in candidacy are: the age of the candidate [Balkany et al., 2002; van den Broek et al., 1995; Hehar et al., 2002; Uziel et al., 1993; Waltzman and Cohen, 1998], presence of residual hearing [Barbara et al., 2000; von Ilberg et al., 1999], our willingness to implant patients with major cochlear malformations [Au and Gibson, 1999; Fishman and Holliday, 2000; Fishman et al., 2003; Ito et al., 1999; Marangos and Aschendorff, 1997; Weber et al., 1995, 1998] and other abnormalities [Balkany et al., 1991; Camilleri et al., 1999; Formanek et al., 1998; Temple et al., 1999], other handicaps concurrent with hearing loss [Ramsden et al., 1993; Lesinski et al., 1995; Saeed et al., 1998; Waltzman et al., 2000], new cochlear implant hardware and improved software [Balkany et al., 1999; Lenarz et al., 2001; Cohen et al., 2002] and the interest in bilateral implantation [Gantz et al., 2000, 2002; Van Hoesel and Clark, 1995].
Age

We are currently implanting children at the age of 12 months and, on special indications such as labyrinthitis ossificans due to meningitis, even younger [van den Broek et al., 1995; Cohen, 1997; Cohen et al., 1997, 2002; Helms et al., 2000; Laszig, 2000; Lehnhardt, 1993; Lenarz et al., 1999; Novak et al., 2000; Rizer and Burkey, 1999; Waltzman and Cohen, 1998]. The youngest child implanted at NYU was 6 months of age. Children below the age of 12 months usually have very poorly pneumatized mastoid bones, leading to potentially greater intraoperative blood loss and an increased risk of facial nerve injury. In addition, these children are at somewhat greater anesthesia risk due to the size of the airway and the increased difficulty in maintaining cardiovascular, fluid and temperature homeostasis [Young et al., 1995; Young, 2002; Young and Killen, 2002]. All of these require greater vigilance and, in most cases, the presence of a pediatric anesthesiologist. The thin scalp dictates the care that is needed to avoid perforation or late breakdown. Care should be taken to avoid drying of the flap. Since the skull is so thin, extreme care must be taken in drilling the well for the body of the device: it must be as deep as possible to lower the profile of the implant, yet the dura must not be violated. The tie-down holes are also somewhat more difficult to fashion. The relative paucity of pneumatization often means that there will be oozing from the bone marrow occupying the mastoid tip. This may be troublesome, requiring hemostasis with cautery, bone wax, oxidized cotton or fibrin glue, and resulting in undesirable blood loss. Younger children also have an increased incidence of otitis media, both purulent and serous. These may cause the surgery to be canceled (for unsuspected purulent otitis media) or lead to mucosal edema and/or hyperemia (for otitis media with effusion), with a resulting increase in operating time and potential for infection. Fortunately, the cochlea is adult size at birth and the facial recess nearly so, resulting in no increased morbidity for this part of the approach or for the cochleostomy and electrode insertion.

Given the thin scalp and relatively thick devices (there is no pediatric size, but ‘one size fits all’), it will not be surprising if eventually there turns out to be a greater incidence of implant extrusion in small children than in adults. On the other hand, implantation of the elderly has proven to be safe and generally highly successful [Buchman et al., 1999; Kelsall et al., 1995; Thomas, 1995].

Residual Hearing
The presence of residual hearing in the implanted ear should dictate the use of ‘soft’ surgery in an attempt at preserving that hearing [Mondain et al., 2002; Skarzinski et al., 2002]. This requires a conscious change to a less traumatic surgical technique, especially in performing the cochleostomy, avoiding endocochlear trauma and suctioning, as well as in inserting the electrode. A special category of implantation in a patient with residual hearing is the use of the so-called Electro-Acoustic or Hybrid Device, a special cochlear implant with a short (6–10 mm) [Gantz et al., 2000] or long [von Ilberg et al., 1999] electrode, designed to be atraumatically inserted into the cochlea of a patient with significant low-frequency hearing and a sharply sloping sensorineural hearing loss. The implant is designed to be used in conjunction with a special hearing aid. Clearly, the surgical technique used for this device must be as atraumatic as possible.
The Dysplastic Cochlea
With increased knowledge of temporal bone anatomy and improved imaging techniques, we are implanting more children not only with Mondini dysplasia, but also with major cochlear malformations, such as common cavities, hypoplastic cochleae, narrow internal auditory canals and large vestibular aqueducts [Au et al., 1999; Dahm et al., 1995; Firszt et al., 1995; Fishman and Holliday, 2000; Fishman et al., 2003; Ito et al., 1999; Marangos and Aschendorff, 1997; Temple et al., 1999; Tucci et al., 1995; Weber et al., 1995, 1998]. These cases require additional attention to preoperative CT and MRI imaging [Casselman, 2002; Fishman and Holliday, 2000; Fishman et al., 2003; Lo, 1998], as well as modifications in surgical technique in order to allow safe and successful implantation, while avoiding implanting those unlikely to benefit from surgery. Parents must be counseled about the likelihood of CSF ‘gushers’ upon opening a cochlea with a major dysplasia, and the surgeon must be prepared to deal with the problem in an orderly and minimally traumatic fashion [Dahm et al., 1995]. The fluid must be patiently suctioned from the middle ear, the head elevated, and strips of pericranium packed around the electrode in the cochleostomy until fluid control is absolute. Spinal drains are very rarely needed. In the case of the common cavity, care must be taken to avoid passing the electrode into the internal auditory canal, since the lateral end of the internal auditory canal is often dehiscent. We have found intraoperative fluoroscopy very helpful in this situation. The narrow internal auditory canal poses a particular problem, in that it is crucial to attempt to determine whether or not a cochlear nerve is present. Special MRI studies [Casselman, 2002; Fishman and Holliday, 2000; Fishman et al., 2003] are invaluable in this regard. Surgery in these cases requires extensive counseling, since it is possible that the hearing results may not be optimal.
Labyrinthitis Ossificans
As a consequence of meningitis, new bone may form within the cochlea, causing labyrinthitis ossificans. This may partially or completely obstruct the lumen of the scala tympani and/or the scala vestibuli. While proximal obstruction can often be drilled through and an open lumen located within a few millimeters of the round window, thereby affording full insertion of the electrode array, more severe obstruction may require one of several techniques [Balkany et al., 1991; Bredberg and Lindstrom, 1997; Gibson, 1995; Gibson et al., 1995; Hartrampf et al., 1995; Laszig and Marangos, 1995; Lenarz et al., 2001] which have been designed to afford optimal insertion of a multichannel electrode. These include drilling of a basal tunnel [Cohen, 1997; Cohen et al., 1997, 2002], circummodiolar drill-out [Gantz et al., 2000, 2002; Balkany et al., 1991, 1999, 2002], and the use of a double or split electrode array [Bredberg et al., 1997; Lenarz et al., 1999, 2001].
Additional Handicaps
Current cochlear implant candidates include those with blindness [Ramsden et al., 1993; Young et al., 1995], motor disturbances, some degree of retardation, auditory neuropathy or dyssynchrony [Berlin et al., 2003; Miyamoto et al., 1999; Peterson et al., 2003; Shallop et al., 2001] and a variety of psychiatric disturbances [McCracken and Bamford, 1995; Roberts and Hindley, 1999; Waltzman et al., 2000]. These candidates require additional evaluation and counseling, although the surgical techniques tend not to be much affected. These candidates have often been avoided in the past, but given their additional handicaps, they often receive even more benefit from a cochlear implant than those who have the hearing disorder as their sole problem [Lesinski et al., 1995].
New Devices
New cochlear implant hardware and software have increased candidacy by improving the results of implantation and engendering greater confidence in and acceptance of the procedure. To some degree, this has also ameliorated some of the objections to cochlear implants by members of the Deaf Community. As implanted children continue to demonstrate greater speech understanding, more fluent speech, a greater ability to read, and continue to progress in school at a rate comparable to their hearing peers, there is inevitably a softening of the philosophical objections against implantation. New devices may also lead to further modification of surgical technique [Cohen, 1997; Cohen et al., 1997, 2002; Balkany et al., 1999; Gantz et al., 2000; Roland et al., 2000a, b], as may a desire to improve the cosmetic aspects of implant surgery [O’Donoghue and Nikolopoulos, 2002]. If and when the totally implantable cochlear implant (TICI) becomes a reality, it will require a modification of current surgical techniques in dealing with additional hardware, implanting a microphone, and possibly adding hardware to the ossicles.
Bilateral Implantation
Finally, there is growing interest in bilateral implantation [Gantz et al., 2002; Lawson et al., 1998; Truy et al., 2002; Tyler et al., 2000a, b; Van Hoesel and Clark, 1995; Vermeire et al., 2003]. The surgery takes almost twice as long as a single implant, monopolar cautery cannot be used for the second side, and contamination of the field is a risk. Postoperative bilateral loss of vestibular function with resulting ataxia and oscillopsia is a potential consequence. Although the surgery has been free of the feared complications, and the results have been generally encouraging in terms of directionality of hearing and speech understanding in noise, there are significant economic problems inherent in bilateral implantation. Many insurers refuse to reimburse for a second implant; and in areas in which cochlear implants are not freely available, it seems more reasonable to help the greater number of patients by performing only one-sided surgery.
Revision and Reimplantation
Revision surgery, in which the device is not changed, and reimplantation surgery, in which another device (either of the same type or a dissimilar implant) is placed, pose many potential challenges. The most common event leading to reimplantation is device failure, while revision is usually necessitated by a medical or surgical
complication. Typically, reimplantation can be readily accomplished without serious surgical difficulties, and the results are at least as good with the new device as with the original one [Alexiades et al., 2001; Roland et al., 2000]. Revision surgery, on the other hand, may offer many more serious challenges for the surgeon. If the problem is caused by major infection or a significant loss of tissue overlying or adjacent to the devices, intravenous antibiotics, multiple operations and skin grafts may be needed.
Summary

In summary, numerous changes have occurred in candidacy and, in general, these have been accompanied by concomitant and satisfactory changes in surgical techniques. Together, these have advanced the utility and safety of cochlear implantation. Most devices are now approved for use in patients with severe to profound rather than profound loss. In addition, studies have begun utilizing short electrode arrays for shallow insertion in patients with considerable low-frequency residual hearing. This technique will allow the recipient to continue to use acoustically amplified hearing for the low frequencies simultaneously with a cochlear implant for the high frequencies. New hardware requires modification of existing implant surgery. An example is the behind-the-ear speech processor, which requires a modification of the incision site to prevent contact between the internal and external hardware. Similarly, the new perimodiolar electrodes require special insertion techniques and/or insertion tools. Bilateral implantation clearly requires modification of the surgical techniques used for unilateral implantation. The surgery remains the same, but takes almost twice as long and requires some modification, since at a certain point, when the first device is in contact with the body, the monopolar cautery can no longer be used. Within the next few years, work will begin on the totally implantable cochlear implant (TICI). This will clearly require a modification of the surgical technique currently used for the present semi-implantable devices. In addition to surgically burying the components of the present cochlear implant, we will also have to develop techniques for implanting a rechargeable power supply and a microphone for the TICI. The latter will be a challenge, since it must be placed where it is capable of great sensitivity, yet not exposed to interference or the risk of extrusion.
The advances in the design of, and indications for, cochlear implants have been matched by improvements in surgical techniques and a decrease in complications. The resulting improvements in safety and efficacy have further encouraged the use of these devices. We anticipate further changes in the foreseeable future, for which there will likely be surgical problems to solve.
References

Alexiades G, Roland JT Jr, Fishman AJ, Shapiro W, Waltzman SB, Cohen NL: Cochlear reimplantation: Surgical techniques and functional results. Laryngoscope 2001;111:1608–1613.
Au G, Gibson W: Cochlear implantation in children with large vestibular aqueduct syndrome. Am J Otol 1999;20:183–186. Babighian G: Problems in cochlear implant surgery. Adv Otorhinolaryngol 1993;48:65–69.
Balkany T, Gantz BJ, Steenerson RL, Cohen NL: Systematic approach to electrode insertion in the ossified cochlea. Otolaryngol Head Neck Surg 1991;114:4–11.
Balkany TJ, Cohen NL, Gantz BJ: Surgical technique for the Clarion® cochlear implant. Ann Otol Rhinol Laryngol Suppl 1999;188(pt 2):27–30. Balkany TJ, Hodges AV, Eshraghi AA, Butts S, Bricker K, Lingvai J, Polak M, King J: Cochlear implants in children – A review. Acta Otolaryngol 2002;122:356–362. Barbara M, Mancini P, Mattioni A, Monini S, Ballantyne D, Filipo R: Residual hearing after cochlear implantation. Adv Otorhinolaryngol 2000;57:385–388. Berlin CI, Morlet T, Hood LJ: Auditory neuropathy/dyssynchrony: Its diagnosis and management. Pediatr Clin North Am 2003;50:331–340. Boothroyd A, Boothroyd-Turner D: Postimplantation audition and educational attainment in children with prelingually acquired profound deafness. Ann Otol Rhinol Laryngol Suppl 2002;189:79–84. Bredberg G, Lindstrom B, et al: Electrodes for ossified cochleas. Am J Otol 1997;18(suppl):42–43. van den Broek P, Cohen N, O’Donoghue G, Fraysse B, Laszig R, Offeciers E: Cochlear implantation in children. Int J Pediatr Otorhinolaryngol 1995;32(suppl):S217–S223. Buchman CA, Fucci MJ, Luxford WM: Cochlear implants in the geriatric population: Benefits outweigh risks. Ear Nose Throat J 1999;78:489–494. Camilleri AE, Toner JG, Howarth KL, Hampton S, Ramsden RT: Cochlear implantation following temporal bone fracture. J Laryngol Otol 1999;113:454–457. Casselman JW: Diagnostic imaging in clinical neuro-otology. Curr Opin Neurol 2002;15:23–30. Cohen NL: Surgical techniques to avoid complications of cochlear implants in children. Adv Otorhinolaryngol. Basel, Karger, 1997, vol 52, pp 161–163. Cohen NL, Waltzman SB, Roland JT, Bromberg B, Cambron N, Gibbs L, Parkinson W, Snead C: The results of speech processor upgrade in a population of VA cochlear implant recipients. Am J Otol 1997;18:462–465. Cohen NL, Roland JT Jr, Fishman A: Surgical technique for the Nucleus Contour cochlear implant. Ear Hear 2002;23(suppl):59S–66S.
Dahm MC, Weber BP, Lenarz T: Cochlear implantation in a Mondini malformation of the inner ear and the management of perilymphatic gusher. Adv Otorhinolaryngol 1995;50:66–71. Dowell RC, Dettman SJ, Hill K, Winton E, Barker EJ, Clark GM: Speech perception outcomes in older children who use multichannel cochlear implants: Older is not always poorer. Ann Otol Rhinol Laryngol Suppl 2002;189:97–101. Firszt JB, Reeder RM, Novak MA: Multichannel cochlear implantation with inner ear malformation: Case report of performance and management. J Am Acad Audiol 1995;6:235–242. Fishman AJ, Holliday RA: Principles of cochlear implant imaging; in Waltzman SB, Cohen NL (eds): Cochlear Implants. New York, Thieme, 2000, pp 79–107. Fishman AJ, Roland JT, Alexiades G, Mierzwinski J, Cohen NL: Fluoroscopically assisted cochlear implantation. Otol Neurotol 2003;24:882–886. Formanek M, Czerny C, Gstoettner W, Kornfehl J: Cochlear implantation as a successful rehabilitation for radiation-induced deafness. Eur Arch Otorhinolaryngol 1998;255:175–178. Francis HW, Niparko JK: Cochlear implantation update. Pediatr Clin North Am 2003;50:341–361. Funasaka S: Factors influencing speech development in infants with cochlear implants. Adv Otorhinolaryngol 2000;57:192–198. Gantz BJ, Rubinstein JT, Tyler RS, Teagle HFB, Cohen NL, Waltzman SB, Miyamoto RT: Long-term results of cochlear implants in children with residual hearing. Ann Otol Rhinol Laryngol Suppl 2000;185:33–36. Gantz BJ, Tyler RS, Rubinstein JT, Wolaver A, Lowder M, Abbas P, Brown C, Hughes M, Preece JP: Binaural cochlear implants placed during the same operation. Otol Neurotol 2002;23:169–180. Garnham C, O’Driscoll M, Ramsden R, Saeed S: Speech understanding in noise with a MedEl COMBI 40+ cochlear implant using reduced channel sets. Ear Hear 2002;23:540–552. Gates GA, Miyamoto RT: Cochlear implants. N Engl J Med 2003;349:421–423. Gibson WP, Harrison HC, Prowse C: A new incision for placement of cochlear implants. J Laryngol Otol 1995;109:821–825. Gibson WP: Surgical technique for inserting the cochlear multielectrode array into ears with total neo-ossification. Ann Otol Rhinol Laryngol Suppl 1995;166:414–416. Gibson WP, Harrison HC: Further experience with a straight, vertical incision for placement of cochlear implants. J Laryngol Otol 1997;111:924–927. Gibson WP, Rennie M, Psarros C: Outcome after cochlear implantation and auditory verbal training in terms of speech perception, speech production and language. Adv Otorhinolaryngol 2000;57:250–253. Gibson WP: A surgical technique for cochlear implantation in very young children. Adv Otorhinolaryngol 2000;57:78–81.
Gysin C, Papsin BC, Daya H, Nedzelski J: Surgical outcome after paediatric cochlear implantation: Diminution of complications with the evolution of new surgical techniques. J Otolaryngol 2000;29:285–289. Hartrampf R, Weber B, Dahm MC, Lenarz T: Management of obliteration of the cochlea in cochlear implantation. Ann Otol Rhinol Laryngol Suppl 1995;166:416–418. Hehar SS, Nikolopoulos TP, Gibbin KP, O’Donoghue GM: Surgery and functional outcomes in deaf children receiving cochlear implants before age 2 years. Arch Otolaryngol Head Neck Surg 2002;128:11–14. Helms J, Mueller J, Schoen F, Shehata-Dieler WA: Surgical concepts for cochlea implantation in young and very young children. Adv Otorhinolaryngol 2000;57:199–201. Hoffman RA, Cohen NL: Complications of cochlear implant surgery. Ann Otol Rhinol Laryngol Suppl 1995;166:420–422.
Huttenbrink KB, Zahnert T, Jolly C, Hofmann G: Movements of cochlear implant electrodes inside the cochlea during insertion: An X-ray microscopy study. Otol Neurotol 2002;23:187– 191. von Ilberg C, Kiefer J, Tillein J, Pfenningdorff T, Hartmann R, Sturzebecher E, Klinke R: Electric-acoustic stimulation of the auditory system. New technology for severe hearing loss. ORL J Otorhinolaryngol Relat Spec 1999;61:334. Ito J, Sakota T, Kato H, Hazama M, Enomoto M: Surgical considerations regarding cochlear implantation in the congenitally malformed cochlea. Otolaryngol Head Neck Surg 1999;121: 495–498. Kelsall DC, Shallop JK, Burnelli T: Cochlear implantation in the elderly. Am J Otol 1995;16: 609–615. Kiefer J, von Ilberg C: Special surgical problems in cochlear implant patients. Adv Otorhinolaryngol 1997;52:135–139. Kim HN, Shim YJ, Chung MH, Lee YH: Benefit of ACE compared to CIS and SPEAK coding strategies. Adv Otorhinolaryngol 2000;57:408– 411. Kishon-Rabin L, Taitelbaum R, Muchnik C, Gehtler I, Kronenberg J, Hildesheimer M: Development of speech perception and production in children with cochlear implants. Ann Otol Rhinol Laryngol Suppl 2002;189:85–90. Laszig R, Marangos N: Management in bilaterally obliterated cochleae. Adv Otorhinolaryngol 1995;50:54–58. Laszig R: Cochlear implants in children (soft surgery). Adv Otorhinolaryngol 2000;57:87–89. Lawson DT, Wilson BS, Zerbi M, van den Honert C, Finley CC, Farmer JC Jr, McElveen JT Jr, Roush PA: Bilateral cochlear implants controlled by a single speech processor. Am J Otol 1998;19:758–761. Lehnhardt E: Intracochlear placement of a cochlear implant electrode in soft surgery technique. HNO 1993;41:356–359. Lenarz T, Lesinski-Schiedat A, et al: The nucleus double array cochlear implant: A new concept for the obliterated cochlea. Otol Neurotol 2001; 22:24–32. 
Lenarz T, Lesinski-Schiedat A, von der Haar-Heise S, Illg A, Bertram B, Battmer RD: Cochlear implantation in children under the age of two: The MHH experience with the Clarion cochlear implant. Medizinische Hochschule Hannover. Ann Otol Rhinol Laryngol Suppl 1999; 177:44–49. Lesinski A, Hartrampf R, Dahm MC, Bertram B, Lenarz T: Cochlear implantation in a population of multihandicapped children. Ann Otol Rhinol Laryngol Suppl 1995;166:332–334. Lo WW: Imaging of cochlear and auditory brain stem implantation. AJNR Am J Neuroradiol 1998;19:1147–1154. Manrique M, Paloma V, Cervera-Paz FJ, Ruiz de Erenchun I, Garcia-Tapia R: Pitfalls in cochlear implant surgery in children. Adv Otorhinolaryngol 1995;50:45–50. Marangos N, Aschendorff A: Congenital deformities of the inner ear: Classification and aspects regarding cochlear implant surgery. Adv Otorhinolaryngol 1997;52:52–56.
McCracken WM, Bamford JM: Auditory prostheses for children with multiple handicaps. Scand Audiol Suppl 1995;41:51–60. Miyamoto RT, Kirk KI, Renshaw J, Hussain D: Cochlear implantation in auditory neuropathy. Laryngoscope 1999;109(2 pt 1):181–185. Mondain M, Sillon M, Vieu A, Levi A, Reuillard-Artieres F, Deguine O, Fraysse B, Cochard N, Truy E, Uziel A: Cochlear implantation in prelingually deafened children with residual hearing. Int J Pediatr Otorhinolaryngol 2002;63:91–97. Moog JS: Changing expectations for children with cochlear implants. Ann Otol Rhinol Laryngol Suppl 2002;189:138–142. Novak MA, Firszt JB, Rotz LA, Hammes D, Reeder R, Willis M: Cochlear implants in infants and toddlers. Ann Otol Rhinol Laryngol Suppl 2000;185:46–49. O’Donoghue GM, Nikolopoulos TP: Minimal access surgery for pediatric cochlear implantation. Otol Neurotol 2002;23:891–894. Oh SH, Kim CS, Kang EJ, Lee DS, Lee HJ, Chang SO, Ahn SH, Hwang CH, Park HJ, Koo JW: Speech perception after cochlear implantation over a 4-year time period. Acta Otolaryngol 2003;123:148–153. O’Neill C, O’Donoghue GM, Archbold SM, Nikolopoulos TP, Sach T: Variations in gains in auditory performance from pediatric cochlear implantation. Otol Neurotol 2002;23:44–48. Osberger MJ, Fisher L, Kalberer A: Speech perception results in children implanted with the Clarion Multi-Strategy cochlear implant. Adv Otorhinolaryngol 2000;57:417–420. Osberger MJ, Zimmerman-Phillips S, Koch DB: Cochlear implant candidacy and performance trends in children. Ann Otol Rhinol Laryngol Suppl 2002;189:62–65. Peterson A, Shallop J, Driscoll C, Breneman A, Babb J, Stoeckel R: Outcomes of cochlear implantation in children with auditory neuropathy. J Am Acad Audiol 2003;14:188–201. Portmann M, Portmann D, Negrevergne M: Surgical difficulties with cochlear implants. Adv Otorhinolaryngol 1993;48:59–61.
Pulsifer MB, Salorio CF, Niparko JK: Developmental, audiological, and speech perception functioning in children after cochlear implant surgery. Arch Pediatr Adolesc Med 2003;157: 552–558. Ramsden RT, Boyd P, Giles E, Aplin Y, Das V: Cochlear implantation in the deaf blind. Adv Otorhinolaryngol 1993;48:177–181. Richter B, Eissele S, Laszig R, Lohle E: Receptive and expressive language skills of 106 children with a minimum of 2 years’ experience in hearing with a cochlear implant. Int J Pediatr Otorhinolaryngol 2002;64:111–125. Rizer FM, Burkey JM: Cochlear implantation in the very young child. Otolaryngol Clin North Am 1999;32:1117–1125. Roberts C, Hindley P: The assessment and treatment of deaf children with psychiatric disorders. J Child Psychol Psychiatry 1999;40:151–167. Roland JT, Fishman AJ, Alexiades G, Cohen NL: Electrode to modiolus proximity: A fluoroscopic and histologic analysis. Am J Otol 2000a;21: 218–225.
202
Roland JT, Fishman AJ, Waltzman SB, Cohen NL: The Shaw scalpel in revision cochlear implant surgery. Ann Otol Rhinol Laryngol Suppl 2000b;185:23–25. Rubinstein JT: Paediatric cochlear implantation: Prosthetic hearing and language development. Lancet 2002;360:483–485. Saeed SR, Ramsden RT, Axon PR: Cochlear implantation in the deaf-blind. Am J Otol 1998; 19:774–777. Sainz M, Skarzynski H, Allum JH, Helms J, Rivas A, Martin J, Zorowka PG, Phillips L, Delauney J, Brockmeyer SJ, Kompis M, Korolewa I, Albegger K, Zwirner P, Van De Heyning P, D’Haese P: MED-EL. Assessment of auditory skills in 140 cochlear implant children using the EARS protocol. ORL J Otorhinolaryngol Relat Spec 2003;65:91–96. Schon F, Muller J, Helms J: Speech reception thresholds obtained in a symmetrical fourloudspeaker arrangement from bilateral users of MED-EL cochlear implants. Otol Neurotol 2002;23:710–714. Shallop JK, Peterson A, Facer GW, Fabry LB, Driscoll CL: Cochlear implants in five cases of auditory neuropathy: Postoperative findings and progress. Laryngoscope 2001;111:555–562. Skarzynski H, Lorens A, D’Haese P, Walkowiak A, Piotrowska A, Sliwa L, Anderson I: Preservation of residual hearing in children and postlingually deafened adults after cochlear implantation: An initial study. ORL J Otorhinolaryngol Relat Spec 2002;64:247–253. Smoorenburg GF, Willeboer C, van Dijk JE: Speech perception in nucleus CI24M cochlear implant users with processor settings based on electrically evoked compound action potential thresholds. Audiol Neurootol 2002;7:335–347. Staller S, Parkinson A, Arcaroli J, Arndt P: Pediatric outcomes with the nucleus 24 contour: North American clinical trial. Ann Otol Rhinol Laryngol Suppl 2002;189:56–61. Temple RH, Ramsden RT, Axon PR, Saeed SR: The large vestibular aqueduct syndrome: The role of cochlear implantation in its management. Clin Otolaryngol 1999;24:301–306. Thomas JS: Cochlear implantation of the elderly. Ann Otol Rhinol Laryngol Suppl 1995;166:91– 93. 
Tomblin JB, Spencer LJ, Gantz BJ: Language and reading acquisition in children with and without cochlear implants. Adv Otorhinolaryngol 2000;57:300–304. Tucci DL, Telian SA, Zimmerman-Phillips S, Zwolan TA, Kileny PR: Cochlear implantation in patients with cochlear malformations. Arch Otolaryngol Head Neck Surg 1995;121:833– 838. Truy E, Ionescu E, Ceruse P, Gallego S: The binaural digisonic cochlear implant: Surgical technique. Otol Neurotol 2002;23:704–709. Tyler RS, Gantz BJ, et al: Three-month results with bilateral cochlear implants. Ear Hearing 2000a;23:80–89. Tyler RS, Kelsay DM, Teagle HF, Rubinstein JT, Gantz BJ, Christ AM: 7-year speech perception results and the effects of age, residual hearing and preimplant speech perception in prelingually deaf children using the Nucleus and
Audiol Neurootol 2004;9:197–202
Clarion cochlear implants. Adv Otorhinolaryngol 2000b;57:305–310. Uziel AS, Reuillard-Artieres F, Mondain M, Piron JP, Sillon M, Vieu A: Multichannel cochlear implantation in prelingually and postlingually deaf children. Adv Otorhinolaryngol 1993;48: 187–190. Van Hoesel RJ, Clark GM: Fusion and lateralization study with two binaural cochlear implant patients. Ann Otol Rhinol Laryngol Suppl 1995;166:233–235. Vermeire K, Brokx JP, Van de Heyning PH, Cochet E, Carpentier H: Bilateral cochlear implantation in children. Int J Pediatr Otorhinolaryngol 2003;67:67–70. Waltzman SB, Cohen NL, Gomolin RH, Green JE, Shapiro WH, Hoffman RA, Roland JT: Open set speech perception in congenitally deaf children using cochlear implants. Am J Otol 1997; 18:342–349. Waltzman SB, Cohen NL, Shapiro WH: The benefits of cochlear implantation in the geriatric population. Otolaryngology Head Neck Surg 1993;108:329–333. Waltzman SB, Cohen NL: Cochlear implantation in children under two years of age. Am J Otol 1998;19:158–162. Waltzman SB, Cohen NL: Implantation of patients with prelingual long-term deafness. Ann Otol Rhinol Laryngol Suppl 1999;188(pt 2):84–87. Waltzman SB, Cohen NL: Speech perception in congenitally hearing impaired children using cochlear implants. Curr Opin Otolaryngol. Head Neck Surg 1999;7:248–250. Waltzman SB, Scalchunes V, Cohen NL: Performance of multiply handicapped children using cochlear implants. Am J Otol 2000;21:329– 335. Waltzman SB, Cohen NL, Green J, Roland JT Jr: Long-term effects of cochlear implants in children. Otolaryngol Head Neck Surg 2002a;126: 505–511. Waltzman SB, Roland JT Jr, Cohen NL: Delayed Implantation in congenitally deaf children and adults. Otology Neurotol 2002b;23:333–340. Weber BP, Lenarz T, Hartrampf R, Dietrich B, Bertram B, Dahm MC: Cochlear implantation in children with malformation of the cochlea. Adv Otorhinolaryngol 1995;50:59–65. 
Weber BP, Dillo W, Dietrich B, Maneke I, Bertram B, Lenarz T: Pediatric cochlear implantation in cochlear malformations. Am J Otol 1998;19: 747–753. Wright M, Purcell A, Reed VA: Cochlear implants and infants: Expectations and outcomes. Ann Otol Rhinol Laryngol Suppl 2002;189:131– 137. Young NM, Johnson JC, Mets MB, Hain TC: Cochlear implants in young children with Usher’s syndrome. Ann Otol Rhinol Laryngol Suppl 1995;166:342–345. Young NM: Infant cochlear implantation and anesthetic risk. Ann Otol Rhinol Laryngol Suppl 2002;189:49–51. Young GA, Killen DH: Receptive and expressive language skills of children with five years of experience using a cochlear implant. Ann Otol Rhinol Laryngol 2002;111:802–910.
Cohen
Original Paper Audiol Neurootol 2004;9:203–213 DOI: 10.1159/000078390
Received: February 6, 2003 Accepted after revision: February 10, 2004
Channel Interaction in Cochlear Implant Users Evaluated Using the Electrically Evoked Compound Action Potential

Paul J. Abbas, Michelle L. Hughes, Carolyn J. Brown, Charles A. Miller, Heather South

Department of Speech Pathology and Audiology, Department of Otolaryngology – HNS, University of Iowa, Iowa City, Iowa, USA
Key Words
Cochlear implants · Compound action potential · Electrically evoked compound action potential · Neural response telemetry · Channel interaction
Abstract
One likely determinant of performance with a cochlear implant is the degree of interaction that occurs when overlapping subsets of nerve fibers are stimulated by various electrodes of a multielectrode array. The electrically evoked compound action potential (ECAP) can be used to assess physiological channel interaction. This paper describes results from two different methods of analysis of ECAP channel interaction measures made by the Nucleus neural response telemetry system. Using a forward-masking stimulus paradigm, masker and probe pulses are delivered through different electrodes. The response to the probe is then dependent on the extent of overlap in the stimulated neural populations. The amplitude of response to the probe as a function of masker electrode position then reflects the degree of overlap between the population of neurons responding to the masker and those stimulated by the probe. Results demonstrate large variations across individual implant users as well as across electrodes within an individual. In general, the degree of interaction is shown to be dependent on stimulus level.
Copyright © 2004 S. Karger AG, Basel
Introduction
The lack of across-fiber independence in excitation is referred to as channel or spatial interaction and may impose significant limitations on performance in present cochlear implant designs [White et al., 1984; Fu et al., 1998]. If each electrode stimulated the same neurons, then there would be no inherent benefit to multiple versus single-electrode implants. At the other extreme, if there were no overlap among stimulated neural channels, then one may assume that information provided by each stimulus channel is effectively transmitted to the central nervous system. The degree of spread of neural excitation and, consequently, the overlap of stimulated neurons from different electrodes may result in interactions that may compromise the responses in a way that diminishes the information provided on the individual channels of
stimulation. As a result, measures of the degree of overlapping response areas may be important in determining the performance of individual implant users and devices. The degree of spread of neural excitation in response to stimulation of a specific electrode or combination of electrodes may depend on electrode position relative to the stimulable neurons, the orientation of the electrodes and the resulting electric field, as well as the degree and pattern of neural survival. For instance, if all surviving neurons were remote to 2 monopolar stimulating electrodes, then significant overlap in the stimulated populations would be expected. Alternatively, if the mode of stimulation provides narrow current spread and stimulable neurons are close to the stimulating electrodes, one may expect relatively little overlap in the stimulated neural population, especially at low stimulus levels.

The degree of spatial or channel interaction has been assessed in the past using both psychophysical measures and physiological measures of the auditory brainstem response. Interaction can be measured using simultaneously excited electrodes [Shannon, 1983; White et al., 1984; Abbas and Brown, 1988] as well as with a forward-masking procedure [Shannon, 1983; White et al., 1984; Abbas and Purdy, 1990; Pelizone et al., 2001].

The primary goal of this study was to use the electrically evoked compound action potential (ECAP) to assess channel interaction in a group of multichannel cochlear implant users. Such measures can be complementary to psychophysical assessments in that they provide a peripheral assessment of the neural interaction. In addition, with the advent of telemetry systems to assess neural response in cochlear implant users [Abbas et al., 1999], the ability to make such measures in a time-efficient manner is now available in a large number of cochlear implant users. All of the measures described in this paper were obtained from adult cochlear implant users.

Correspondence: Paul J. Abbas, 127B SHC, University of Iowa, Iowa City, IA 52242 (USA); Tel. +1 319 335 8733, Fax +1 319 335 8851; E-Mail [email protected]
The ability to quickly assess channel interaction could be useful in that population. For instance, adjustments in stimulation parameters, such as electrode configuration, pulse duration or level could be used to improve channel independence. The relatively quick assessment procedure provided by the ECAP could make such measures more practical than psychophysical assessments. In addition, these measures may be particularly applicable to young children or difficult-to-test individuals. Measures of the compound action potential have been shown to be useful in the fitting process, i.e. choosing threshold and uncomfortable current levels in young children [Brown et al., 2000; Franck and Norton, 2001; Hughes et al., 2000]. Objective measures of channel interaction could also be made in young children,
who may be unable to report qualitative changes with different speech processor parameters. ECAP measures could then be used, in part, as a basis for changes in stimulation parameters to improve channel independence. However, this peripheral measure of channel interaction will not reflect central interactions that may be important for perception.

There are several different ways in which intracochlear evoked potentials can be used to assess the spatial spread of neural excitation and channel interaction. One method is to measure the amplitude of response as a function of recording electrode position [Finley et al., 1997a; Cohen et al., 2001; Abbas and Brown, 2000; Frijns and Briaire, 2001]. This method assesses the degree to which neural excitation spreads along the length of the cochlea. A basic assumption underlying this method is that the recording electrode primarily measures the response from neurons near that electrode. If one measures the amplitude of response as a function of recording electrode position for a fixed stimulating electrode, then the pattern of response across electrodes reflects the spread of excitation across the population. However, the assumptions necessary to interpret these functions are fairly complex. First, there can be significant effects of stimulus level on the pattern of neural excitation; specific comparisons among subjects and/or stimulating electrodes need to be made at equivalent excitation and/or loudness levels. Second, the amplitude of the ECAP is not a direct measure of the excitation at that place in the cochlea, i.e. an electrode at a certain position in the cochlea does not simply record from adjacent spiral ganglion cells. Due to volume conduction, one would expect significant spread of the voltage generated by the action potentials of each excited fiber. The degree to which this occurs may depend, to a great extent, on the radial position of each recording electrode relative to the stimulated neurons.
Several research groups have recently proposed using an alternative method of evaluating channel interaction using a two-pulse forward-masking paradigm [Miller et al., 2001; Cohen et al., 2001]. With this method, a masker and probe pulse are presented on different electrodes and the two pulses are separated by 0.5 ms in order to take advantage of the refractory properties of neurons to evaluate the interaction between channels. Thus, changes in the ECAP amplitude as a function of the masker location provide an indication of the degree of overlap in the stimulated neural populations. A primary goal of this study was to collect data from a number of subjects using this method in order to evaluate the range of interaction functions among subjects and among electrodes within individual subjects.
A secondary goal of this study was to evaluate two possible methods for analyzing such ECAP channel interaction data. The first is the ‘traditional’ subtraction method [Brown et al., 1990], which is the common method used with present versions of neural response telemetry (NRT) software. We have also analyzed the masked responses off-line using the template subtraction technique proposed by Miller et al. [2000]. That technique uses a high-level masker and a probe pulse presented on the same electrode to calculate a template of the probe stimulus artifact. That template can be used to subtract out the stimulus artifact in subsequent recorded conditions with the same probe stimulus. Results from this study are characterized for different probe electrodes as well as for different stimulus levels. We hypothesized that both parameters would affect the degree of channel interaction that is measured.
Methods

All of the individuals who participated in this study had received the Nucleus 24M or 24R cochlear implant within the past 5 years. The NRT v3.0 software was used to control stimulation and record the responses. Current levels for stimuli were specified through the software and are reported here in ‘programming units’. These units range from 1 to 255, corresponding to a range of approximately 0.01–1.75 mA in approximately logarithmic steps. Standard parameters used in these experiments included 25 µs/phase biphasic pulses, 0.5-ms masker-probe interval, monopolar stimulation (re: MP1, an electrode placed in the temporalis muscle) and monopolar recording (re: MP2, an electrode on the case of the implant). All measurements were made with 60 dB of gain on the internal recording amplifier as set through the NRT software. The intracochlear electrodes in the Nucleus device are numbered from 1 to 22, with 1 being the most basal electrode.

To assess channel interaction, we used a forward-masking paradigm that takes advantage of the refractory properties of the auditory nerve. The basic stimulus paradigm and the standard subtraction method of extracting the response are illustrated in figure 1. The channel interaction paradigm differs from the normal (i.e. single-channel) subtraction paradigm [Brown et al., 1990] in that the probe pulse is presented on one electrode and the masker is presented on a different electrode. If there is significant overlap between the neurons stimulated by the masker pulse and those stimulated by the probe, then some neurons that would normally respond to the probe will be in the refractory state; therefore, the response in the ‘masker + probe’ condition will be smaller than that to the probe alone. The degree of channel interaction is determined by subtracting the response to the ‘masker + probe’ condition from the ‘probe’ condition. Two additional subtractions are performed by this paradigm (not shown in fig. 1).
To remove any response evoked by the masker, the response to the ‘masker alone’ condition is subtracted. Furthermore, to remove an amplifier-switching transient, a no-stimulus condition is subtracted as described in Abbas et al. [1999]. As the masker electrode position is changed and the overlap between the fibers activated by the masker and probe is decreased, the subtracted response will decrease. Thus, greater neuronal overlap is indicated by a larger response using the subtraction method. We note that this method can be utilized directly using NRT v3.0 software by simply choosing the masker electrode to be different from the probe electrode and recording in the normal subtraction mode.

Fig. 1. Schematic diagram outlining the subtraction method used to reduce stimulus artifact and to evaluate the interaction between masker and probe electrode stimulation (see text).

One difficulty with the subtraction method as illustrated in figure 1 is that under refractory conditions, the response to the forward-masked probe may not have the same latency and morphology as the unmasked probe response [Finley et al., 1997b; Miller et al., 2000]. Miller et al. [2000] proposed an alternative method for assessing ECAP recovery functions. We used a variation of this method, illustrated in figure 2, in order to assess channel interaction. For this method, we initially measure a template of the stimulus artifact to the probe stimulus by using the forward-masking paradigm (fig. 2a). By choosing the masker to be on the same electrode as the probe and setting the level of the masker to be significantly higher than the probe, the response to the probe should contain only stimulus artifact and no neural response [Brown and Abbas, 1990]. By subtracting the masker-alone response from the masker + probe response, we can derive a waveform that represents the probe artifact with no significant neural response. This ‘artifact template’ can then be used to eliminate probe artifact in cases where the masker is presented on different electrodes, as illustrated in figure 2b. Using this method, we subtract the template from the masker + probe response in order to isolate the neural response to the masked probe. As in the first method, we also subtract out the residual masker and no-stimulus condition.
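One consistent reading of the subtractions described above can be sketched in a few lines of numpy. This is an illustrative derivation only, not the NRT software's implementation; the function and variable names are ours, and each recorded average is modeled as neural response plus stimulus artifact plus a common amplifier-switching transient.

```python
import numpy as np

def subtraction_method(probe, masker_probe, masker, null):
    """Standard subtraction method (fig. 1): the masked-probe recording is
    subtracted from the probe-alone recording, with the masker-alone and
    no-stimulus recordings removing the masker response and the
    amplifier-switching transient.  Greater neuronal overlap between the
    masker and probe channels yields a LARGER result."""
    return probe - masker_probe + masker - null

def probe_artifact_template(masker_probe_same, masker_same):
    """Template derivation (fig. 2a): with a high-level masker and the probe
    on the SAME electrode, the probe falls in the refractory period, so this
    difference is approximately pure probe stimulus artifact."""
    return masker_probe_same - masker_same

def template_method(masker_probe, masker, template, null):
    """Template method (fig. 2b): subtract the probe-artifact template and
    the (transient-corrected) masker-alone response to isolate the neural
    response to the masked probe.  Greater overlap yields a SMALLER result."""
    return masker_probe - (masker - null) - template - null
```

With synthetic traces built from these assumptions, the first function recovers the difference between the unmasked and masked probe responses, and the second recovers the masked probe response itself.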
Using this second template method, the cases where neuronal overlap is greatest will result in the smallest response, while those cases where there is minimal overlap result in greater response amplitude. Note that this trend is opposite that seen with the first channel interaction subtraction method described above.
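The inverted shapes of the two measures can be illustrated with a toy spatial model. The Gaussian excitation profile per electrode is our assumption for illustration, not the paper's model of current spread:

```python
import numpy as np

electrodes = np.arange(1, 23)  # Nucleus array: 22 electrodes, 1 = most basal

def excitation(center, spread=2.0):
    """Hypothetical Gaussian excitation profile along the electrode array."""
    return np.exp(-0.5 * ((electrodes - center) / spread) ** 2)

probe_e = 10
probe_resp = excitation(probe_e)

subtraction_curve, template_curve = [], []
for masker_e in electrodes:
    # fibers driven by the masker are refractory when the probe arrives
    overlap = np.minimum(probe_resp, excitation(masker_e))
    subtraction_curve.append(overlap.sum())              # probe minus masked probe
    template_curve.append((probe_resp - overlap).sum())  # masked probe itself

# the subtraction-method function peaks at the probe electrode,
# while the template-method function dips there (inverted shapes)
assert int(np.argmax(subtraction_curve)) + 1 == probe_e
assert int(np.argmin(template_curve)) + 1 == probe_e
```

The same overlap term drives both curves, which is why the two analyses carry broadly the same information with opposite polarity.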
Table 1. List of subject identification numbers, etiology and years of profound hearing loss as reported by the subject, and time after implant hookup when ECAP measures were made

Subject  Etiology           Years deaf  Months after hookup
M1       progressive        20          60
M4       noise-induced      12          60
M13      unknown            27          24
M14      unknown            15          54
M15      infection          10          60
M25      unknown             5          48
M32      ototoxic            2          36
M35b-L   noise-induced       1          45
M40      congenital          3.5        36
M54      unknown             7          36
M58b-L   Ménière’s disease  14          31
M58b-R   Ménière’s disease   1          31
M61b-L   unknown            16          32
R3       unknown             0.3        24
R9       unknown             0          24
R13      hereditary          5          24
R17      hereditary          0          24
R28b-L   unknown            15           3
R28b-R   unknown            15           3

M indicates straight array; R indicates contour; b indicates bilaterally implanted subjects; L or R indicates the ear stimulated.

Fig. 2. Schematic diagram outlining the template method used to reduce stimulus artifact (a) and to evaluate the interaction between masker and probe electrode stimulation (b) (see text).
The choice of masker and probe levels can affect these interaction measures. Previous animal studies using a similar forward-masking technique in our laboratory have demonstrated significant effects of stimulus level [Abbas et al., 1999; Miller et al., 2001]. Those studies demonstrated that, as sensitivity to electrical stimulation may vary considerably across electrodes, use of a constant masker current level might not be appropriate for comparing responses across electrodes in the array. Thus, we have adopted a method in which the masker levels are set relative to the maximum comfort level (C level) used in programming the speech processor for each individual (250 Hz stimulation rate). In the data we present here, we have chosen the probe level to be in the upper part of the subject's dynamic range, typically 10 programming units below the C level. The level of the masker was then varied across stimulating electrodes to conform with the C level contour, i.e. C level, C level minus 10 or C level minus 20. This manipulation was an attempt to take into account differences in sensitivity across electrodes and compare interactions for stimuli with similar loudness percepts. We note, however, that the actual loudness of the signal used to evoke the electrophysiological response was likely less than the loudness of the stimulus used to measure the C level due to the difference in presentation rate.

In this study, data were collected from a total of 19 ears from 17 subjects. Subject characteristics are reported in table 1. Twelve adults were implanted with the CI24M straight electrode array with 22 banded electrodes, and 5 adults were implanted with the CI24R Contour array with 22 half-band electrodes and removable stylet. Two subjects were implanted bilaterally, and data are reported from both ears. All subjects had complete electrode insertions as assessed by the surgeon at the time of implantation.

All of the ECAP data were analyzed off-line using custom-designed programs written in MATLAB, which assisted in picking peak amplitudes and plotting data. Amplitudes were measured as the difference between the initial negative peak relative to the following positive peak or plateau.
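Two small helpers make the quantities used in the Methods concrete. The current mapping is a logarithmic interpolation between the endpoints stated earlier (approximately 0.01 mA at unit 1 and 1.75 mA at unit 255) and may differ from the manufacturer's exact table; the amplitude picker implements the N1-to-P2 peak-to-peak measure described above. Both are illustrative sketches, not the study's analysis code.

```python
import numpy as np

def pu_to_ma(pu):
    """Approximate current in mA for a Nucleus 'programming unit' (1-255),
    assuming logarithmic steps from ~0.01 mA (pu = 1) to ~1.75 mA (pu = 255).
    Illustrative interpolation only, not the device's exact mapping."""
    if not 1 <= pu <= 255:
        raise ValueError("programming units range from 1 to 255")
    return 0.01 * (1.75 / 0.01) ** ((pu - 1) / 254.0)

def n1p2_amplitude(trace):
    """ECAP amplitude: initial negative peak (N1) to the following positive
    peak or plateau (P2)."""
    trace = np.asarray(trace, dtype=float)
    n1 = int(np.argmin(trace))
    return float(trace[n1:].max() - trace[n1])
```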
Results
Fig. 3. Amplitude of the response to a probe plotted as a function of the masker electrode. Each panel represents data from 1 subject as indicated on the graph. In each case, the probe is presented on electrode 10 and the level is 10 programming units less than the measured C level (maximum comfort level). In each panel, the parameter is level of the masker expressed in programming units relative to the C level. Consequently, each plot represents responses where the masker level is held at a fixed level relative to the C level. In this and subsequent figures, subject numbers are designated as either M- or R- denoting straight electrode array (CI24M) or contour array (CI24R), respectively. Also, in this and subsequent figures, the probe electrode is indicated by a vertical arrow on the abscissa.

Figure 3 illustrates responses from 3 subjects. Each panel represents data from a different subject. In each case, the probe was presented on electrode 10 at a level of 10 programming units below C level. The masker electrode was varied across the array from electrode 1 to 22, and for each electrode the masker level was chosen at a fixed level relative to the C level. For each subject, the amplitude of response to the masked probe is plotted as a function of masker electrode. The response amplitude was calculated using the standard subtraction technique illustrated in figure 1. The parameter in each plot is the masker level, expressed in programming units relative to the C level obtained using default SPEAK processing strategy parameters (250 Hz stimulation rate).

The results for subject R3 in the left graph demonstrate relatively narrow and symmetric masking functions. The peak of the response is at or near the probe electrode, indicating that masker electrodes nearest the probe electrode exhibited the greatest channel interaction. Response amplitude decreased as the masker electrode was moved in either a more basal or more apical direction. These plots also illustrate the typical pattern of change with increasing masker level: the more intense the masker, the wider the interaction.

Similar plots are shown for subjects M14 and M25 in figure 3. In those cases, the response patterns are relatively asymmetric, with a wider region of interaction resulting from more apical masker electrodes. For subject M14, the response shows a peak near the probe electrode as expected, but there is relatively little change in masking as the masker electrode is moved more apically. The pattern is similar for the 3 masker levels used. Subject M25 shows a peak in the response at a masker electrode remote from the probe electrode. This pattern is consistent with a highly asymmetric excitation pattern. This pattern changes considerably at higher masker levels, suggesting larger
spread of excitation in a basal direction or possibly an indication of cross-turn stimulation.

As noted earlier, we used two different methods to analyze the data and measure interaction functions. The results using the normal subtraction technique as well as the template method are illustrated in figure 4. The plots for the subtraction method (fig. 4a) show results similar to those illustrated in figure 3. The same data analyzed using the template method (fig. 4b) show an approximately inverted function, where masker electrodes near the probe show a relatively small response amplitude and the responses for masker electrodes farther from the probe are relatively large. The two methodologies used in these experiments result in response patterns that are similar in shape, but inverted. We note, however, that there are differences between the results from the two methods. For masker electrodes near the probe electrode, the template method results in small amplitude responses that are not distinguishable from the noise floor and therefore result in zero amplitude responses across a range of electrodes and masker levels. For the normal subtraction method, the responses for masker electrodes relatively remote from the probe are small and result in zero amplitude responses in the same electrode range where responses are distinguishable using the template method. Thus, even though the two methods broadly show similar results, there are differences between the measured functions that may be useful in providing details of the interaction between areas of stimulation.

Fig. 4. Amplitude of response to the probe plotted as a function of masker electrode for subject M4. a Results analyzed using the normal subtraction method (fig. 1). b The same data analyzed using the template method (fig. 2). Levels of the masker and probe are set as described in the legend of figure 3.

Fig. 5. Amplitude of response to the probe plotted as a function of masker electrode for subject M32. In this case, the probe level is set at 5 programming units less than the C level. The masker level is then adjusted to the indicated levels relative to the C level at each masker electrode. a Data analyzed using the subtraction method. b The same data analyzed using the template method. c and d utilize the normalization technique described in the text to re-plot the data in a and b. The horizontal dashed lines in c and d show the criterion level used to determine the interaction width.

A second example in figure 5a and b shows similar results from a different subject. The dependency on level is similar in the two methods for the masker electrode near the probe electrode. Nevertheless, the differences across level observed with the template method (fig. 5b, masker electrodes 15–20) at apical masker electrodes are not evident in the data plotted using the normal subtraction method (fig. 5a).

We have noted that when using either analysis method, there are changes in the amplitude of the masking function with level. In addition, there is also a trend toward increasing width of the plotted function with increasing stimulus level. In order to more quantitatively compare the shape of the masking functions at different stimulus
levels, we used a normalization method to account for the overall increase or decrease in response amplitude. For the subtraction method, the ECAP amplitudes were normalized to the maximum response amplitude for each curve. For the template method, the maximum amplitude for each curve was set to zero, and the minimum response amplitude was normalized to 1, thus inverting the function. Resulting normalized functions for the data in figure 5a and b are illustrated in panels c and d, respectively. The level effects in the normalized functions are quite similar for the subtraction and template methods. While there are details, particularly at remote masker electrodes, that are lost in the normalization process, there is a clear tendency toward increased width of the function with increasing masker level, indicating greater channel interaction at higher stimulus levels.

The trends with increased masker level that are exhibited in figure 5 are also evident in data from other subjects. Figure 6 plots normalized amplitude data analyzed using the traditional subtraction method from 6 additional subjects, demonstrating level-dependent changes in the spread of interaction that are also dependent on subject. In some subjects, such as R3 and M1, the effect is relatively small. In other cases, such as M25, the effect is substantial.

Fig. 6. Normalized amplitude of response using the subtraction technique plotted as a function of masker electrode for 6 different subjects as indicated in each panel. In each panel, the probe level was set to 10 programming units below the C level and the parameter is masker level (in programming units) relative to the C level.

In order to summarize data from all subjects, we defined an interaction width for each masking function,
such as those in figure 5c and d. We chose a normalized value of 0.75 as a criterion decrease, as illustrated in panels c and d. We then measured the width (in number of electrodes) between the two intercepts of the masking function with that 25% decrease in response amplitude. A piece-wise linear approximation of the data was used to determine the intercepts. Across subjects, we observed a wide range of interaction widths, from less than 3 to greater than 15 electrodes. For probe electrodes near the basal or apical end, we sometimes did not observe a decrease below 0.75. In those cases, we simply defined one side as the limit of the electrode array, i.e. either 1 or 22. Results from the analysis of interaction width are shown in figure 7 as a scatter plot showing the relationship between the interaction width measured for the subtraction method relative to the interaction width calculated for the template method. Data are plotted from all 19 ears; data from at least 5 (up to 13) probe electrodes are plotted for each subject, each at a fixed probe level. While there is a great deal of scatter, the results show a clear correlation (R² = 0.63) and no clear bias toward greater width with one method versus the other.

Fig. 7. The interaction width (range of masker electrodes that spans a 25% decrease in response amplitude) is plotted for the template method relative to the same data analyzed using the subtraction method. Each symbol represents data from a different subject; for each subject, data are shown for several probe electrodes. In all cases, the masker and probe levels are 5 or 10 programming units below the C level.

Effects of masker level for both the subtraction and template methods are shown in figure 8. In each graph, interaction width is plotted as a function of masker level with subject and probe level as a parameter. While in some subjects there is relatively little change with masker level, the general trend is clearly that interaction width tends to increase with masker level. A one-way repeated measures ANOVA demonstrated significant differences among all pairs (p < 0.05) except for the –10 and 0 comparison for the subtraction method. We did not collect extensive data varying the probe level, but there was a tendency toward increased width of the masking function with increasing probe level (see for example fig. 5 and 6). Finally, as noted in the methods section, we chose the masker levels relative to the C level on each electrode in order to compensate for differences in sensitivity across
Fig. 8. The interaction width is plotted as a function of masker level relative to probe level (in programming units) for a fixed probe level. a Data analyzed using the subtraction method. b Data analyzed using the template method. Each symbol represents data from a different subject or probe level.
the electrode array. In one subject, we compared a masking function using relative levels to responses measured using a constant current level across masker electrodes. Results from those measures are shown in figure 9. Panel a shows the C levels (filled circles), the masker levels chosen at C level minus 20 programming units (open squares) and the alternate set of masker levels chosen at a fixed level of 225 programming units (open triangles) across masker electrodes. Panel b shows the resulting masking functions for these two masking conditions. In this case, the constant masker condition resulted in a slightly broader interaction function than that produced by adjusting the masker level for each electrode.
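The width-estimation procedure described above (normalize each masking function to its maximum, then locate the two crossings of a 0.75 criterion by piece-wise linear interpolation, falling back to the array limits when no crossing occurs) can be sketched as follows. This is an illustrative reimplementation, not the authors' analysis code; the function and parameter names are our own.

```python
import numpy as np

def normalize_subtraction(amps):
    """Normalize ECAP amplitudes to the maximum of the curve
    (the subtraction-method normalization described in the text)."""
    amps = np.asarray(amps, dtype=float)
    return amps / amps.max()

def interaction_width(masker_electrodes, normalized_amps, criterion=0.75,
                      array_limits=(1, 22)):
    """Width (in electrodes) between the two crossings of the masking
    function with the criterion (a 25% decrease from the peak), found by
    piece-wise linear interpolation.  If no crossing is found on one side,
    that side defaults to the limit of the electrode array (1 or 22 for
    the Nucleus CI24)."""
    x = np.asarray(masker_electrodes, dtype=float)
    y = np.asarray(normalized_amps, dtype=float)
    peak = int(np.argmax(y))

    def crossing(indices):
        # Walk away from the peak; interpolate where y drops below criterion.
        for a, b in zip(indices[:-1], indices[1:]):
            if y[a] >= criterion > y[b]:
                frac = (y[a] - criterion) / (y[a] - y[b])
                return x[a] + frac * (x[b] - x[a])
        return None

    left = crossing(list(range(peak, -1, -1)))
    right = crossing(list(range(peak, len(x))))
    lo = left if left is not None else array_limits[0]
    hi = right if right is not None else array_limits[1]
    return abs(hi - lo)
```

For a symmetric triangular masking function peaking at electrode 10 and reaching zero 5 electrodes away, the 0.75 criterion is crossed at electrodes 8.75 and 11.25, giving a width of 2.5 electrodes.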
Fig. 9. a The levels used for the masker in the channel interaction data shown in b. Levels of masker were chosen at a fixed level relative to the C level (open squares) or at a fixed current level of 225 programming units (open triangles). The resulting response amplitude versus masker electrode functions are plotted with corresponding symbols in b.
Discussion
These data demonstrated the feasibility of a method to assess channel interaction among electrodes in the Nucleus implant using NRT. The method is relatively quick, at least compared to typical psychophysical masking methods, i.e. it will typically take less than 10–15 min to
collect data for a masking function. The responses showed a wide range of characteristics both across subjects and across electrodes within a subject. The extent to which those variations relate to performance with the implant has not yet been determined. The two methods of data analysis used in this paper, the subtraction and the template methods, showed some differences in the details of the channel interaction functions. These differences were particularly evident in the effects of level, both for masker electrodes near the probe as well as for remote masker electrodes (fig. 4 and 5). The subtraction technique tended to provide more detail for masker electrodes near the probe where the amplitude was largest and variations in probe amplitude were evident. For masker electrodes near the probe, the template method resulted in amplitudes that were, in many cases, below the noise floor of the recording system, and therefore no responses were evident. The opposite was true for masker electrodes more remote from the probe. Based on those observations, one might conclude that a combination of the two methods would provide the most detailed assessment of the masking function. Nevertheless, when a normalization procedure was used, the effective interaction width, determined by a 25% decrease criterion, was similar for the two analysis methods (fig. 7). While we observed significant differences among individuals, the data demonstrated a clear effect of masker stimulus level. In most subjects, there was a clear growth in response amplitude with increasing masker level, suggesting a greater overlap in the neural populations responding to the masker and probe. In addition to a change in amplitude, there was a tendency toward increased breadth of stimulation, as shown by the normalized functions in figure 6. That trend suggested that at least part of the increased overlap was due to an increased spread of neural excitation.
Finally, we observed changes in shape of the masking function with level that could be interpreted as changes in the pattern of spread of excitation with level. An example was shown in figure 3 with subject M25. We hypothesize that the masking functions for that subject may be the result of a region in the cochlea near electrode 10 where there are relatively few stimulable nerve fibers. The large response amplitude for maskers at more apical electrodes suggests that at lower stimulus levels the response to the probe on electrode 10 may be dominated by more apical fibers. This pattern could result from either longitudinal spread of current or possibly across-turn stimulation. Increases in masker level resulted in an interaction function that was relatively broad and more symmetric, suggesting that the spread of neural excitation may span the gap in innervation, possibly stimulating across the modiolus. Based on such observations, the use of a range of stimulus levels may be most appropriate to evaluate channel interaction. If there is indeed a gap in functional neurons along the length of the cochlea, one might predict differences in the threshold and comfortable loudness levels for individual subjects in light of these masking patterns. Figure 10 shows MAP levels for 2 individuals that showed different masking patterns in figure 6 (subjects M1 and M25). The subject that showed the sharpest masking pattern, M1, had the lowest thresholds. The subject that demonstrated evidence of a gap in stimulable fibers, M25, had a slight increase in threshold near electrode 10, but the differences in threshold pattern between these subjects were unremarkable.

Fig. 10. The threshold level and comfortable loudness level (C level) for the 2 subjects M1 and M25 are plotted as a function of electrode. These 2 subjects demonstrated very different interaction functions as shown in figure 6.

We note that Moore's recent psychophysical studies [Moore and Alcantara, 2001; Moore et al., 2000] in hearing-impaired subjects have demonstrated ‘dead zones’, i.e. regions without viable hair cells. The adaptation of similar psychophysical techniques to implant users to assess possible correspondence with these ECAP techniques would be of interest. Other investigators have used similar methods both for psychophysical and physiological assessments. Cohen et al. [2001, 2002] have presented ECAP functions using a forward-masking paradigm. Their methodology was quite similar to that reported here except that the masker was presented at a fixed current level across different electrodes, in contrast to the variable level used in this paper. If there are variations in threshold and maximum comfort level across stimulating electrodes, use of a fixed current level for the masker can affect the channel interaction
functions (fig. 9). If those variations in threshold and C level are small, the resulting changes in channel interaction are likely to be minimal. Liang et al. [1991] used a masking paradigm in experimental animals but measured the level of masker necessary to reach a criterion response rather than the response amplitude for a particular masker level. That paradigm has the advantage that results are more directly comparable to psychophysical forward masking in that psychophysically one also measures a level necessary to reach a threshold criterion. However, that method also tends to be more time-consuming in that the level of the masker needs to be manipulated. Also, as noted relative to the data presented here, the effectiveness of a masker can vary considerably with level as well as electrode position, so that there may be differences in results from the two methods. Several psychophysical studies have used stimulus paradigms similar to those used in this study [Pelizzone et al., 2001; Eddington et al., 2002]. The temporal relationship of the masker and probe in those studies is somewhat different from that used for the ECAP work presented here. In those studies, the masker pulses tended to be near threshold or at subthreshold levels. The masker and probe stimuli were either simultaneously presented or more closely spaced in time compared to the paradigm used in this work. The resulting interactions could thus be the result of integration of potentials on the neural membrane rather than the refractory effects that we expect underlie the interactions observed in this work. Those studies have the ability to evaluate interaction effects at relatively
low stimulus levels compared to those necessary to evoke a clear ECAP. Nevertheless, the psychophysical assessment could take considerably longer than the 10–15 min that are necessary to collect data for a masking function such as those shown in figures 3–6.
Conclusions
The methods described in this paper provide a tool to assess channel interaction at the level of the auditory nerve in subjects with the Nucleus CI24 cochlear implant
using NRT. Results have demonstrated variations in the degree of interaction among subjects and across electrodes within individual subjects. The results also suggest that this method could be useful in identifying regions of the cochlea that have few or no stimulable neurons. Such information could be important in appropriately programming the speech processor for new users, especially young children. As this is an objective measure that does not require active participation or behavioral responses from the implant user, it is clearly applicable to that population.
References

Abbas PJ, Brown CJ: Electrically evoked brainstem potentials in cochlear implant patients with multi-electrode stimulation. Hear Res 1988;36:153–162.
Abbas PJ, Brown CJ: Electrophysiology and device telemetry; in Waltzman SB, Cohen NL (eds): Cochlear Implants. New York, Thieme, 2000, pp 117–133.
Abbas PJ, Brown CJ, Shallop JK, Firszt JB, Hughes ML, Hong SH, Staller SJ: Summary of results using the Nucleus CI24M implant to record the electrically evoked compound action potential. Ear Hear 1999;20:45–59.
Abbas PJ, Purdy SJ: Use of forward masking of the EABR to evaluate channel interaction in cochlear implant users. Abstracts of the Association for Research in Otolaryngology Midwinter Meeting, St. Petersburg Beach, FL, 1990.
Brown CJ, Abbas PJ: Electrically evoked whole-nerve action potentials. II. Parametric data from the cat. J Acoust Soc Am 1990;88:2205–2210.
Brown CJ, Abbas PJ, Gantz B: Electrically evoked whole-nerve action potentials. I. Data from Symbion cochlear implant users. J Acoust Soc Am 1990;88:1385–1391.
Brown CJ, Hughes ML, Luk B, Abbas PJ, Wolaver A, Gervais J: The relationship between EAP and EABR thresholds and levels used to program the Nucleus 24 speech processor: Data from adults. Ear Hear 2000;21:151–163.
Cohen LT, Busby PA, Cowan RSC: Measurement of spatial spread of neural excitation, using NRT (version 3) in the Nucleus Cochlear Implant System. 7th International Cochlear Implant Conference, Manchester, UK, 2002, p 53.
Cohen LT, O’Leary SJ, Saunders E, Knight MR, Cowan RSC: Modelling methods tailored to human psychophysical and ECAP data: Practical applications to sound processing. Abstracts of Conference on Implantable Auditory Prostheses, Pacific Grove, 2001.
Eddington DK, Tierny J, Noel V: Triphasic stimulus waveforms reduce suprathreshold electrode interaction. 7th International Cochlear Implant Conference, Manchester, UK, 2002, p 52.
Finley CC, Wilson B, van den Honert C, Lawson D: Speech Processors for Auditory Prostheses. Sixth Quarterly Progress Report, NIH Contract NO1-DC-5-2103, 1997a.
Finley CC, Wilson BS, van den Honert C: Fields and EP responses for electrical stimuli: Spatial distributors, channel interactions and regional differences along the tonotopic axis. Abstracts of 1997 Conference on Implantable Auditory Prostheses, Pacific Grove, 1997b.
Franck K, Norton S: The electrically evoked whole-nerve action potential: Fitting application for cochlear implants. Ear Hear 2001;22:289–299.
Frijns JHM, Briaire JJ: NRI measurement of spatial selectivity using the forward masking paradigm: Clinical results seen from a theoretical perspective. 7th International Cochlear Implant Conference, Manchester, UK, 2002, p 43.
Fu Q, Shannon RV, Wang X: Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing. J Acoust Soc Am 1998;104:1–11.
Hughes ML, Brown CJ, Abbas PJ, Wolaver A, Gervais J: Comparison of EAP thresholds with MAP levels in the Nucleus 24 cochlear implant: Data from children. Ear Hear 2000;21:164–174.
Liang DH, Kovacs GT, Storment CW, White RL: A method for evaluating the selectivity of electrodes implanted for nerve stimulation. IEEE Trans Biomed Eng 1991;38:443–449.
Miller CA, Abbas PJ, Brown CJ: A new method of reducing stimulus artifact in the electrically evoked whole nerve potential. Ear Hear 2000;21:265–274.
Miller CA, Abbas PJ, Brown CJ: Physiological measurements of spatial excitation patterns produced by electrical stimulation. Conference on Implantable Auditory Prostheses, Pacific Grove, 2001.
Moore BCJ, Alcantara JI: The use of psychophysical tuning curves to explore dead regions in the cochlea. Ear Hear 2001;22:268–278.
Moore BCJ, Huss M, Vickers DA, Glasberg BR, Alcantara JI: A test for the diagnosis of dead regions in the cochlea. Br J Audiol 2000;34:205–224.
Pelizzone M, Boex C, de Balthasar C, Kos M-I: Electrode interactions in Ineraid and Clarion subjects. Conference on Implantable Auditory Prostheses, Pacific Grove, 2001.
Shannon RV: Multichannel electrical stimulation of the auditory nerve in man. II. Channel interaction. Hear Res 1983;12:1–16.
White MW, Merzenich MM, Gardi JN: Multichannel cochlear implants. Arch Otolaryngol 1984;110:493–501.
Zwolan TA, Collins LM, Wakefield GH: Electrode discrimination and speech recognition in postlingually deafened adult cochlear implant subjects. J Acoust Soc Am 1997;102:3673–3685.
Original Paper Audiol Neurootol 2004;9:214–223 DOI: 10.1159/000078391
Received: February 6, 2003 Accepted after revision: January 15, 2004
HiResolutionTM and Conventional Sound Processing in the HiResolutionTM Bionic Ear: Using Appropriate Outcome Measures to Assess Speech Recognition Ability

Dawn Burton Koch, Mary Joe Osberger, Phil Segel, Dorcas Kessler
Advanced Bionics Corporation, Sylmar, Calif., USA
Key Words
Cochlear implant · Sound processing · Hearing loss
Abstract
Objective: This study compared speech perception benefits in adults implanted with the HiResolutionTM (HiRes) Bionic Ear who used both conventional and HiRes sound processing. A battery of speech tests was used to determine which formats were most appropriate for documenting the wide range of benefit experienced by cochlear-implant users. Study Design: A repeated-measures design was used to assess postimplantation speech perception in adults who received the HiResolution Bionic Ear in a recent clinical trial. Patients were fit first with conventional strategies and assessed after 3 months of use. Patients were then switched to HiRes sound processing and assessed again after 3 months of use. To assess the immediate effect of HiRes sound processing on speech perception performance, consonant recognition testing was performed in a subset of patients after 3 days of HiRes use and compared with their 3-month performance with conventional processing. Setting: Subjects were implanted and evaluated at 19 cochlear implant programs in the USA and Canada affiliated primarily with tertiary medical centers. Patients: Patients were
51 postlinguistically deafened adults. Main Outcome Measures: Speech perception was assessed using CNC monosyllabic words, CID sentences and HINT sentences in quiet and noise. Consonant recognition testing was also administered to a subset of patients (n = 30) using the Iowa Consonant Test presented in quiet and noise. All patients completed a strategy preference questionnaire after 6 months of device use. Results: Consonant identification in quiet and noise improved significantly after only 3 days of HiRes use. The mean improvement from conventional to HiRes processing was significant on all speech perception tests. The largest differences occurred for the HINT sentences in noise. Ninety-six percent of the patients preferred HiRes to conventional sound processing. Ceiling effects occurred for both sentence tests in quiet. Conclusions: Although most patients improved after switching to HiRes sound processing, the greatest differences were seen in the ‘poor’ performers because ‘good’ performers often reached ceiling performance, especially on tests in quiet. Future evaluations of cochlear-implant benefit should make use of more difficult measures, especially for ‘good’ users. Nonetheless, a range of difficulty must remain in test materials to document benefit in the entire population of implant recipients. Copyright © 2004 S. Karger AG, Basel
Dawn Burton Koch, PhD Advanced Bionics Corporation 25129 Rye Canyon Loop Valencia, CA 91355 (USA) Tel. +1 847 869 9473, Fax +1 928 244 4436, E-Mail
[email protected]
Introduction
A cochlear implant is a surgically implantable device that provides hearing sensation to individuals with severe-to-profound hearing loss who cannot benefit from hearing aids. By electrically stimulating the auditory nerve directly, a cochlear implant bypasses damaged or undeveloped sensory structures in the cochlea, thereby providing usable information about sound to the central auditory nervous system. Since 1984, when the first single-channel devices were approved by the US Food and Drug Administration for implantation in adults in the USA, the technology has advanced, speech perception benefit has improved and the population that can benefit from implants has expanded to include adults with residual hearing and children as young as 12 months of age. As speech perception benefit has increased in adults, it has become a challenge to employ outcome tools that can document the wide range of benefit demonstrated by adult implant recipients [1]. In many cases, implant users have reached ceiling performance on traditional audiological tests of speech understanding. Ideally, speech test materials must be easy enough to avoid floor effects in ‘poor’ users and difficult enough to avoid ceiling effects in ‘good’ users. In addition, the materials must be sensitive enough to detect incremental differences in speech perception ability, especially if comparisons between implant systems or sound-processing strategies are to be meaningful. This report details the outcome of using speech materials of varying difficulty in a clinical trial of the HiResolutionTM (HiRes) Bionic Ear and HiRes sound processing (Advanced Bionics Corporation, Sylmar, Calif., USA) that concluded in March 2002. In that clinical study, adult patients were fit with conventional sound-processing strategies for 3 months, then with HiRes sound processing for 3 months. Speech perception ability was assessed and compared using a battery of materials including monosyllabic word recognition, easy and hard
sentence recognition in quiet and difficult sentence recognition in noise. The hypothesis was that sentence-in-noise ability would most closely approximate ‘real-world’ listening and would be most representative of the efficacy of each sound-processing algorithm.
Materials and Methods

Patients
Patients in the clinical trial were 18 years of age or older and had postlinguistic onset of severe-to-profound hearing loss (≥6 years of age). Their hearing losses were equal to or exceeded 70 dB HL (average pure-tone thresholds for 500, 1000 and 2000 Hz). In addition, the patients received marginal benefit from appropriately fitted hearing aids, defined by a Hearing in Noise Test (HINT) [2] sentence score ≤50% (two-list average) in each person’s best-aided condition. All patients were proficient in English. A total of 51 patients from 19 implant centers in the USA and Canada completed the study. All procedures in the study were overseen by the Institutional Review Board or Ethics Committee of each participating institution. Preoperative characteristics for the 51 adults are summarized in table 1.

Study Protocol
In the clinical trial, patients were evaluated preoperatively using a standard battery of medical and audiological tests to determine candidacy and to establish baseline performance. All patients were implanted surgically with the HiRes Bionic Ear and underwent a 4- to 6-week recovery period before the external components were fit and the implant was programmed. In phase I of the study, patients were fit with conventional sound-processing strategies – simultaneous analog stimulation (SAS), continuous interleaved sampling (CIS) and multiple pulsatile stimulation (MPS) [3]. Phase I was considered a baseline condition with electrical stimulation to allow patients to acclimate to hearing with a cochlear implant. The programs were fit using the body-worn Platinum Sound Processor, which all patients used throughout the course of the study. Patients were evaluated with their preferred control strategy (CIS, SAS or MPS) after 1 and 3 months of device use using the speech tests described below. In phase II of the study, patients were fit with HiRes sound processing.
Patients were then evaluated with HiRes after 1 and 3 months of use, using the same speech materials. At the end of phase II, patients completed a questionnaire indicating their sound-processing preference (conventional or HiRes) and the reasons for their preference. Patients were then allowed to use the strategy of their choice (conventional or HiRes) in the sound processor of choice (body-worn or behind-the-ear). A subset of 30 of the 51 patients was crossed over to phase II at Advanced Bionics facilities in Sylmar, Calif., and underwent additional testing. Because the research version of HiRes software allowed manipulation of multiple stimulus parameters, these 30 patients were evaluated with various combinations of fitting parameters (e.g. stimulation rates, pulse widths) to determine the settings that were appropriate for the majority of patients. (This parameter set was then used to fit the other 21 patients.) Over a period of 3 days, the Iowa Consonant Test [4] presented in quiet and in noise (+10 dB signal-to-noise ratio) was administered to those patients as HiRes fitting parameters were changed. During those parameter evaluations, no feedback was provided. In addition, patients were assessed with the Iowa Consonant Test (in quiet and noise) on each of the 3 days using their preferred conventional strategy (after a daily acclimation period). Before each consonant test run, patients were familiarized with the procedure using a subset of the test items with feedback. HiRes results from the final day of testing were compared with each person’s best conventional-strategy score from all 3 days to determine the short-term effect of HiRes sound processing on performance.

Sound-Processing Strategies
During phase I of the study, patients were fit with one or more of the conventional CIS, SAS and MPS strategies that have been implemented in previous Clarion systems. Those were emulated in the HiRes device using 8 analysis channels, 8 current sources and 8 stimulus contacts, albeit with superior preprocessing and sampling of the input sound signal compared to earlier Clarion devices (table 2). Of the 51 patients, 16 preferred CIS, 21 preferred SAS and 14 preferred MPS at the 3-month phase I interval.

Table 1. Demographic and preimplantation speech perception scores for 51 adults completing phase II of the HiRes clinical trial

                                                       Mean    Range
Age at implantation, years                               55    20–82
Duration of severe-to-profound hearing loss, years       12     0–44
Pure-tone-average thresholds in implanted ear, dB HL    104   80–120
CNC word score, %                                         3     0–36
CID sentence score, %                                    21     0–79
HINT-in-quiet sentence score, %                          12     0–46
HINT-in-noise (+10 dB SNR) sentence score, %              3     0–37

Table 2. Parameter comparison for previous Clarion cochlear implants, the HiRes Bionic Ear implementing conventional strategies and the HiRes Bionic Ear implementing HiRes sound processing

Parameter                               Previous Clarion systems      Bionic Ear conventional        HiRes (clinical trial)
                                                                      strategies (clinical trial)
Input dynamic range, dB                 60                            80                             80
Audiosignal sampling rate, samples/s    <20000                        70000                          70000
Processing speed, instructions/s        <1 million                    100 million                    100 million
RF data transmission                    shared by acoustic signal     shared by acoustic signal      dedicated to acoustic
                                        and patient data              and patient data               signal
Number of spectral analysis channels    8                             8                              16
Temporal waveform representation        up to 400 Hz                  up to 2800 Hz                  up to 2800 Hz
Number of sound delivery circuits       8                             8                              16
Stimulation waveforms                   analog and pulsatile          analog and pulsatile           pulsatile
Stimulation rate, pulses/s              <20000                        <20000                         up to 90000

RF = Radio frequency.
During phase II of the study, patients were fit with a first implementation of HiRes sound processing, which enables many of the advanced features of the HiRes Bionic Ear hardware. HiRes is designed to deliver a high-fidelity electrical representation of the
incoming sound signal to the auditory nerve (table 2). At the sound analysis front end, HiRes encodes the full spectrum of the incoming signal with an input dynamic range of up to 80 dB. The input signal is sampled at a very high rate (70000 samples/s overall) and sent through a bank of 16 logarithmically spaced bandpass filters. At the output of each filter, HiRes uses an adaptive integration algorithm to represent temporal waveform fluctuations up to 2800 Hz. The resulting waveforms then modulate pulse trains that are sent to 16 electrode contacts along the cochlea using an overall stimulation rate of up to 90000 pulses/s. The combination of fine temporal waveform analysis and a fast stimulation rate ensures that a high-resolution representation of the incoming sound is retained in the electrical stimulation pattern. HiRes programs were customized for each patient in the study. Most patients used 13 or more channels/electrodes with stimulation rates that varied from approximately 3000 to 5600 pulses/s/channel.

Speech Perception Tests
For all 51 patients, speech perception ability was assessed using consonant-nucleus-consonant (CNC) monosyllabic words [5], the Central Institute for the Deaf (CID) Everyday Sentence test [6], the HINT sentences delivered in quiet and the HINT sentences presented in spectrally matched noise using a 10-dB signal-to-noise ratio. All tests used recorded materials presented at 70 dB SPL in sound field. For the subset of 30 patients, the Iowa Consonant Test (recorded male talker, /aCa/ context) was administered and scored for phonemes, total information transfer [7] and speech feature transmission.

Questionnaire
At the end of phase II, a questionnaire assessed each person’s preference for either their conventional strategy or HiRes sound processing. Patients then rated the strength of preference on a scale from 1 (weak preference) to 10 (strong preference).
Patients were also asked to indicate whether their preferred strategy helped them in a variety of listening conditions (e.g. speech is clearer in a small group of people, music sounds better).
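The sound-analysis chain described under Sound-Processing Strategies (16 logarithmically spaced bands, envelope representation up to 2800 Hz, envelopes modulating per-electrode pulse trains) can be illustrated with a toy sketch. This is only a conceptual model under our own simplifying assumptions (brick-wall FFT bands, half-wave rectification, a one-pole envelope detector), not the Advanced Bionics algorithm, and all names here are our own.

```python
import numpy as np

def hires_like_channels(signal, fs, n_channels=16, f_lo=250.0, f_hi=8000.0,
                        env_cutoff=2800.0):
    """Toy sketch of a HiRes-style front end: split `signal` into
    n_channels log-spaced bands and return one smoothed envelope per band
    (the envelopes that would modulate per-electrode pulse trains)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    # One-pole smoothing coefficient approximating an env_cutoff-Hz low-pass
    alpha = np.exp(-2.0 * np.pi * env_cutoff / fs)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Brick-wall band-pass in the frequency domain
        band_spec = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band = np.fft.irfft(band_spec, n=len(signal))
        # Half-wave rectify, then one-pole low-pass to form the envelope
        rect = np.maximum(band, 0.0)
        env = np.empty_like(rect)
        acc = 0.0
        for i, x in enumerate(rect):
            acc = alpha * acc + (1 - alpha) * x
            env[i] = acc
        envelopes.append(env)
    return np.array(envelopes)   # shape: (n_channels, n_samples)
```

Feeding a pure tone through this sketch concentrates envelope energy in the single band containing the tone frequency, mimicking how a channel's envelope tracks the spectral content routed to its electrode.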
Koch/Osberger/Segel/Kessler
Fig. 1. Mean Iowa consonant recognition and overall information transfer results in quiet and noise (+ 10 dB SNR) after 3 months using conventional strategies and 1– 3 days using HiRes sound processing (n = 30). ID = Identification.
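The overall information transfer reported in figure 1 is conventionally computed from the consonant confusion matrix as relative transmitted information (after Miller and Nicely, the basis of ref. [7]). The sketch below is our own minimal reimplementation for illustration, not the study's scoring software.

```python
import math

def relative_info_transfer(confusions):
    """Relative transmitted information from a square confusion matrix
    (rows = stimuli, columns = responses): mutual information between
    stimulus and response, normalized by stimulus entropy."""
    n = sum(sum(row) for row in confusions)
    p_row = [sum(row) / n for row in confusions]              # stimulus marginals
    p_col = [sum(col) / n for col in zip(*confusions)]        # response marginals
    t = 0.0
    for i, row in enumerate(confusions):
        for j, count in enumerate(row):
            if count:
                p_ij = count / n
                t += p_ij * math.log2(p_ij / (p_row[i] * p_col[j]))
    h_stim = -sum(p * math.log2(p) for p in p_row if p)       # stimulus entropy
    return t / h_stim if h_stim else 0.0
```

A perfectly diagonal confusion matrix yields a relative transfer of 1.0; a matrix in which responses are independent of the stimulus yields 0.0.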
Results
Cochlear-Implant Benefit Using Conventional Strategies
The study group showed significant improvement in speech perception with their cochlear implants compared to conventional hearing aids. For all speech perception tests, there were significant differences in scores obtained at the phase I one- and three-month test intervals compared to the best-aided scores preoperatively (p < 0.0001).

One- to Three-Day HiRes versus Three-Month Conventional Strategy Results
Figure 1 shows the Iowa consonant recognition and overall information transfer results in quiet and noise for the subset of 30 patients who were programmed at Advanced Bionics. After only 1–3 days of experience with HiRes sound processing, scores were significantly better than scores obtained after 3 months of conventional strategy use (p < 0.001). Speech feature analyses (fig. 2) showed that listeners could distinguish place, duration, affrication, nasality and voicing significantly better in quiet with HiRes than with conventional strategies (p < 0.05). The manner feature was not significantly different. For consonants presented in noise, place, duration and voicing were discriminated significantly better with HiRes than with conventional sound processing (p < 0.05). To illustrate individual differences for the more challenging condition, figure 3 shows the consonant identification scores in noise for the 30 patients before and immediately after switching to HiRes processing (rank ordered by 3-month conventional strategy results). A binomial-model analysis [8] indicated that 10 of the patients (30%)
showed a significant improvement between the two test conditions (p < 0.05). Six patients showed decreases in scores after crossing to HiRes, but only 1 of those decreases was significant. A similar analysis of the consonant identification scores in quiet indicated that 12 patients (40%) showed significant improvement (p < 0.05). Three patients showed decreases in scores after crossing to HiRes, but none of those changes was significant. Note that patients with higher scores on the Iowa consonant test did not demonstrate much difference between the two sound-processing algorithms. Nonetheless, the test was sufficiently sensitive to distinguish between conventional and HiRes performance for the low-to-moderate performers.

Three-Month HiRes versus Three-Month Conventional Strategy Results
Figure 4 shows the average scores on the 4 speech tests preoperatively, after 3 months of conventional strategy use and after 3 months of HiRes use. These average scores represent the results from all 51 patients, including the 30 patients who had Iowa consonant data. The mean differences between the two intervals were statistically significant on all measures (table 3). The HINT-in-noise test, the most difficult of the measures, showed the largest change in mean performance. An analysis of variance was conducted for each speech test to determine whether there were significant differences in the mean improvements from conventional to HiRes sound processing for patients fitted at Advanced Bionics versus patients fitted by audiologists at the clinical trial sites. The results revealed no significant differences by fitting site (table 4). Mean speech test differences between conventional and HiRes performance were also examined
Audiol Neurootol 2004;9:214–223
Fig. 2. Mean Iowa consonant speech feature scores in quiet (a) and noise (+10 dB SNR; b) after 3 months using conventional strategies and 1–3 days using HiRes sound processing (n = 30).
Fig. 3. Individual Iowa consonant identification scores in noise (+10 dB SNR) after 3 months using conventional strategies and 1–3 days using HiRes sound processing (rank ordered by 3-month conventional-strategy results). Using a binomial-model analysis, patients 1, 2, 3, 6, 8, 9, 10, 11, 15 and 24 showed significant improvements after crossing to HiRes. Patient 28 showed a significant decrease in scores after crossing to HiRes.
Table 3. Summary of statistical analyses of mean improvement between 3 months of conventional-processing use and 3 months of HiRes use

Test            Mean difference   SD     t      p         95% CI
CID sentences   0.08              0.19   2.92   0.01      0.02–0.13
CNC words       0.08              0.15   3.88   0.001     0.04–0.12
HINT in quiet   0.11              0.16   4.65   <0.0001   0.06–0.15
HINT in noise   0.14              0.21   4.62   <0.0001   0.08–0.20
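The tabled statistics are internally consistent with a paired t-test: t is the mean difference divided by its standard error, and the 95% CI is the mean difference ± the critical t value times the standard error. A quick check for the CID sentences row (sketch only; n = 51 and the critical value for 50 degrees of freedom are assumptions, since the table does not state the per-test n):

```python
import math

def paired_t(mean_diff, sd, n):
    """t statistic and approximate 95% CI for a paired t-test."""
    se = sd / math.sqrt(n)
    t_stat = mean_diff / se
    t_crit = 2.009  # two-tailed 5% critical value for df = 50 (assumed n = 51)
    return t_stat, (mean_diff - t_crit * se, mean_diff + t_crit * se)

# CID sentences row: mean difference 0.08, SD 0.19
t_stat, (lo, hi) = paired_t(0.08, 0.19, 51)
print(round(t_stat, 2), round(lo, 2), round(hi, 2))
# close to the tabled t = 2.92 and CI 0.02-0.13 (the tabled inputs are rounded)
```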
Fig. 4. Mean scores for 4 speech tests preoperatively, after 3 months of conventional strategy use and after 3 months of HiRes use (n = 51). The 30 patients who underwent consonant testing are part of this larger group.
Table 4. Summary of statistical analyses of the effect of fitting site on the mean difference between 3 months of conventional-processing use and 3 months of HiRes use

Test            Site               Mean difference   SD     F      p
CID sentences   Advanced Bionics   0.09              0.18   0.84   0.363
                Study clinic       0.03              0.22
CNC words       Advanced Bionics   0.08              0.15   0.03   0.867
                Study clinic       0.09              0.13
HINT in quiet   Advanced Bionics   0.11              0.16   0.00   0.995
                Study clinic       0.11              0.19
HINT in noise   Advanced Bionics   0.15              0.22   0.20   0.660
                Study clinic       0.12              0.20
Table 5. Summary of statistical analyses of the effect of conventional strategy on the mean difference between 3 months of conventional-processing use and 3 months of HiRes use

Test            Conventional strategy   Mean difference   SD     F      p
CID sentences   CIS                     0.00              0.09   2.05   0.140
                MPS                     0.10              0.16
                SAS                     0.12              0.24
CNC words       CIS                     0.05              0.11   0.73   0.488
                MPS                     0.07              0.17
                SAS                     0.11              0.16
HINT in quiet   CIS                     0.04              0.12   2.19   0.123
                MPS                     0.14              0.13
                SAS                     0.14              0.20
HINT in noise   CIS                     0.07              0.21   1.29   0.286
                MPS                     0.18              0.14
                SAS                     0.17              0.24
as a function of conventional strategy (i.e. SAS, CIS or MPS). The results revealed no significant effect of conventional strategy on the difference scores on any speech perception test (table 5). In other words, any additional benefit provided by HiRes sound processing after the crossover was independent of whether the patient was using an analog or pulsatile strategy prior to phase II. Figure 5 shows the individual 3-month scores for conventional and HiRes strategies (rank ordered by 3-month conventional-strategy results) for the 4 speech tests. These plots indicate that the sensitivity of each test to detect differences between conventional-strategy and HiRes benefit is dependent upon the difficulty of the test. CID Sentences. The CID sentence results indicate that this test has a ceiling effect and is therefore too easy for all but the ‘poorest’ implant users. Specifically, over 60% of patients scored greater than 80% before crossing to HiRes, and 75% of patients scored greater than 80% after 3 months of HiRes use. Because most of the patients
Fig. 5. Individual scores for 4 speech tests after 3 months of conventional strategy use and 3 months of HiRes use. In each panel, scores are rank ordered by the 3-month conventional-strategy results.
reached the testing ceiling with conventional processing, individual improvements were analyzed descriptively for the poorest performers. For the 40% of patients scoring less than 80% before crossing to HiRes (n = 20), improvements ranged from 3 to 75% (mean = 27%, median = 17%). However, 2 patients showed no change in scores, and 3 patients showed a decrease after crossing to HiRes (range = 8–14%). HINT in Quiet. The HINT-in-quiet results show similar ceiling effects. For this more difficult sentence test, 45% of patients scored greater than 80% before crossing to HiRes, and 63% of patients scored greater than 80% after 3 months of HiRes use. Like the CID sentences, the HINT in quiet was useful for demonstrating benefit in the poorer performers using a descriptive analysis. For the 55% of patients scoring less than 80% before crossing to HiRes (n = 28), improvements ranged from 2 to 60% (mean = 23%, median = 23%). One patient showed no change in score, and 3 patients showed decreases of less than 1% after crossing to HiRes.
CNC Words. The CNC word test showed a greater range of scores than the easier CID and HINT-in-quiet tests. Of the 51 patients, 69% (n = 35) showed an improvement in scores after crossing to HiRes. Using the binomial model, 24% of those improvements were significant. Nine patients showed no change, and 6 individuals showed a decrease in scores (range = 12–24%). (One patient was not tested at 3 months with the control strategy.) HINT in Noise. The HINT-in-noise results also showed a greater range of scores than the CID and HINT-in-quiet tests. Of the 51 patients, 69% (n = 35) showed an improvement in scores after crossing to HiRes. Using the binomial model again, 39% of those improvements were significant. Five patients showed no change, and 9 individuals showed a decrease in scores (range = 2–27%). (Two patients were not tested at 3 months with the control strategy.) Notably, 18% of users scored greater than 80% with the conventional strategies. After crossing to HiRes, 31% scored greater than 80%.
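The binomial-model analyses above [8] treat a percent-correct score as a binomial variable, so two scores on the same n-item test differ significantly when the second falls outside the critical range implied by the first. A minimal sketch of that logic (the item count and alpha below are illustrative; ref. 8 provides exact critical-difference tables):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def scores_differ(correct1, correct2, n_items, alpha=0.05):
    """Is score 2 outside the central (1 - alpha) binomial range implied
    by score 1's proportion correct? (Sketch of the binomial-model idea;
    the published method also accounts for the second score's variability.)"""
    p1 = correct1 / n_items
    lower = upper = None
    for k in range(n_items + 1):
        c = binom_cdf(k, n_items, p1)
        if lower is None and c >= alpha / 2:
            lower = k
        if upper is None and c >= 1 - alpha / 2:
            upper = k
    return correct2 < lower or correct2 > upper

# Hypothetical 100-item test: 50 vs 75 correct differs; 50 vs 52 does not.
print(scores_differ(50, 75, 100))  # True
print(scores_differ(50, 52, 100))  # False
```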
Fig. 6. Comparison of benefit in a variety of listening situations for the 45 patients who preferred HiRes at the end of phase II (6 months of implant use).
Fig. 7. Comparison of benefit in a variety of listening situations for the 12 patients with the highest CNC scores (>70%) at the end of phase II (6 months of implant use). These patients preferred HiRes sound processing.
Questionnaire Results
Fifty of 51 patients completed the preference questionnaire at the end of phase II. Ninety percent (45 of 50) preferred HiRes sound processing to the conventional strategies at the end of phase II. The average strength of preference for HiRes was 8.5 (range: 4–10). Ten percent (5 of 50) preferred their conventional strategies at the end of phase II. The average strength of preference for conventional strategies was 5.3 (range: 1–8). Notably, at 12 months after initial stimulation (6 months after the end of phase II), 48 of 50 patients (96%) preferred HiRes. Figure 6 shows the comparison of benefit in a variety of listening situations for the 45 patients who preferred HiRes. Figure 7 shows the same breakdown of benefit, but for the 12 patients with the highest CNC scores
(>70%) at the end of phase II. The mean strength of preference for those 12 patients was 9.3 (range: 8–10). Interestingly, although significant speech test differences could not be documented in these ‘good’ listeners because of ceiling effects, a clear listening distinction could be made by these high performers between conventional and HiRes sound processing in everyday listening situations.
Discussion
Implant Benefit and Sound-Processing Comparisons This study indicates that, overall, patients derive significant benefit from their cochlear implants compared to conventional hearing aids. This clear benefit is consistent
with other studies of contemporary cochlear implants in postlinguistically deafened adults [9, 10]. The majority of patients attained higher speech perception scores with HiRes sound processing than with conventional sound processing. However, the study design does not allow determination of whether HiRes sound processing is solely responsible for the significant improvement seen in phase II, because a control for long-term learning effects was not incorporated. Results from previous clinical trials indicate that the largest learning effects typically occur during the first month of implant use, with limited improvements beyond 3 months of device experience [11]. In contrast, the improvements seen after switching to HiRes are not consistent with the asymptotic scores typically seen after 3 months of implant use. Moreover, the size of the improvement demonstrated by some of the low-to-moderate performers after using HiRes for only 3 months is not typical of simple learning effects. The questionnaire results also suggest that HiRes sound processing is qualitatively different from, and preferable to, conventional sound processing. Thus, taken together, these observations suggest that at least some of the improved benefit seen in phase II can be attributed to HiRes sound processing and is not simply the result of learning effects. The study also does not systematically discriminate the influence of the various HiRes sound-processing parameters on the increased speech perception and preferred sound quality seen in phase II. Specifically, the relative contributions of an increased number of analysis channels and stimulation sites (from 8 to 16) and a higher stimulation pulse rate (e.g. from 818 pulses/s/contact in CIS to up to 5600 pulses/s/channel in HiRes) cannot be determined.
Other studies have suggested, for example, that an increase in stimulation rate alone can enhance performance [12] and that simply increasing the number of stimulation channels contributes to better speech perception, especially in noise [13, 14].

Limitations of Speech Tests
Although the benefit realized by these cochlear-implant recipients is remarkable, concern exists about how to verify the benefit obtained by adult implant recipients who either (1) have little measurable speech perception skill or (2) demonstrate high scores on traditional measures of speech understanding. The results from this study indicate that sentence recognition in quiet may still be helpful for poor-to-moderate performers. However, those tests are too easy for many implant recipients and do not adequately represent ‘real-world’ listening conditions. These results are consistent with those of Dorman and
Spahr [15], who have shown that HINT and CUNY sentences in quiet exhibit a ceiling effect for implant users who score at least 45% on the more difficult CNC word test. The most difficult tests, especially word and consonant identification, were better able to detect differences in the ‘good’ users. Moreover, the questionnaire was valuable in providing additional data, particularly for the ‘good’ users. For patients who approached or had reached ceiling scores on the speech tests, the questionnaire provided additional qualitative information about benefit that was not discernible from the speech perception results. Consequently, as technology advances and performance continues to improve, it will be necessary to use more sensitive tests to delineate perceptual benefits, especially in ‘good’ users. For example, speech tests may necessitate lower presentation levels [15–17], or assessment in noise might require less favorable signal-to-noise ratios, different types of noise or use of adaptive procedures. Nonetheless, some easier tests should remain part of the test battery to document benefit in ‘poorer’ implant users. In addition, evaluation of music perception might be helpful, as might subjective assessments of hearing benefit (patient or clinician questionnaires).
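As an illustration of the adaptive procedures mentioned above, a 1-down/1-up staircase converges on the SNR that yields 50% correct. The sketch below uses a hypothetical deterministic listener and step size, not parameters from this study:

```python
def adaptive_snr_track(respond, start_snr=10.0, step=2.0, n_trials=20):
    """Simple 1-down/1-up staircase: lower the SNR after a correct
    response, raise it after an error; the track converges near the
    SNR giving 50% correct."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step
    return track

# Toy deterministic listener with a 0 dB "threshold" (assumption for illustration)
listener = lambda snr: snr >= 0.0
track = adaptive_snr_track(listener)
print(track[:6], track[-1])  # descends from 10 dB, then hovers near 0 dB
```

In practice the tracked quantity would be scored keyword recognition per sentence, and the threshold estimate would average the last several reversals.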
Conclusions
As a group, adult patients participating in the clinical trial of HiRes sound processing showed significant improvement in speech perception compared to conventional sound processing. Although most patients improved after switching to HiRes sound processing, the greatest differences were seen in the ‘poor’ performers because ‘good’ performers often reached ceiling performance, especially on tests in quiet. Future evaluations of cochlear-implant benefit should make use of more difficult measures, especially for ‘good’ users. Tests should reflect ‘real-world’ listening by making use of more difficult speech stimuli presented in various noise conditions. Nonetheless, a range of difficulty must remain in the selection of test materials in order to document benefit in the entire population of implant recipients.
Acknowledgements
The authors are grateful to all of the surgeons, audiologists and patients who participated in the HiRes clinical trial, and to Mario Svirsky for assistance with the binomial-model analyses. These results were presented in part at the Candidacy for Implantable Hearing Devices conference, Utrecht, the Netherlands, June 2002.
References
1 Dorman M: Speech perception by adults; in Waltzman SB, Cohen NL (eds): Cochlear Implants. New York, Thieme, 2000, pp 317–329.
2 Nilsson M, McCaw V, Soli S: Minimum Speech Test Battery for Adult Cochlear Implant Patients: User Manual. Los Angeles, House Ear Institute, 1996.
3 Kessler DK: The Clarion Multi-Strategy cochlear implant. Ann Otol Rhinol Laryngol 1999;108(suppl 177):8–16.
4 Tyler R, Preece J, Lowder M: Iowa Cochlear Implant Tests. Iowa City, Department of Otolaryngology, Head and Neck Surgery, 1983.
5 Peterson GE, Lehiste I: Revised CNC lists for auditory tests. J Speech Hear Dis 1962;27:62–70.
6 Davis H, Silverman SR: Hearing and Deafness, ed 4. New York, Holt, Rinehart & Winston, 1978.
7 Miller GA, Nicely PE: An analysis of perceptual confusions among some English consonants. J Acoust Soc Am 1955;27:338–352.
8 Thornton A, Raffin M: Speech-discrimination scores modeled as a binomial variable. J Speech Hear Res 1978;21:507–518.
9 Skinner MW, Holden LK, Whitford LA, et al: Speech recognition with the Nucleus 24 SPEAK, ACE and CIS speech coding strategies in newly implanted adults. Ear Hear 2002;23:207–223.
10 Gstoettner WK, Hamzavi J, Baumgartner WD: Speech discrimination scores of postlingually deaf adults implanted with the Combi 40 cochlear implant. Adv Otorhinolaryngol 2000;57:323–326.
11 Zwolan T, Kileny P, Smith S, Mills D, Koch D, Osberger MJ: Adult cochlear implant patient performance with evolving electrode technology. Otol Neurotol 2001;22:844–849.
12 Loizou PC, Poroy O, Dorman M: The effect of parametric variations of cochlear implant processors on speech understanding. J Acoust Soc Am 2000;108:790–802.
13 Dorman MF, Loizou PC, Fitzke J, Tu Z: The recognition of sentences in noise by normal-hearing listeners using simulation of cochlear-implant signal processors with 6–20 channels. J Acoust Soc Am 1998;104:3583–3585.
14 Friesen LM, Shannon RV, Baskent D, Wang X: Speech recognition in noise as a function of number of spectral channels: Comparison of acoustic hearing and cochlear implants. J Acoust Soc Am 2001;110:1150–1163.
15 Dorman MF, Spahr TC: New tests of adult patient performance and results for patients fit with different implants. 9th Symposium on Cochlear Implants in Children, Washington, DC, April 2003.
16 Skinner MW, Holden LK, Demorest ME, Fourakis MS: Speech recognition at simulated soft, conversational, and raised-to-loud vocal efforts by adults with cochlear implants. J Acoust Soc Am 1997;101:3766–3782.
17 Donaldson GS, Allen S: Effects of presentation level on phoneme and sentence recognition in quiet by cochlear implant listeners. Ear Hear 2003;24:392–405.
Original Paper

Audiol Neurootol 2004;9:224–233
DOI: 10.1159/000078392
Received: February 6, 2003 Accepted after revision: April 4, 2004
Development of Language and Speech Perception in Congenitally, Profoundly Deaf Children as a Function of Age at Cochlear Implantation

Mario A. Svirsky a, b, Su-Wooi Teoh a, Heidi Neuburger a

a Department of Otolaryngology – HNS, Indiana University School of Medicine, and b Departments of Biomedical Engineering and Electrical Engineering, Purdue University, Indianapolis, Ind., USA
Key Words: Cochlear implants · Congenitally deaf children · Language development, sensitive periods · Speech perception
Abstract
Like any other surgery requiring anesthesia, cochlear implantation in the first few years of life carries potential risks, which makes it important to assess the potential benefits. This study introduces a new method to assess the effect of age at implantation on cochlear implant outcomes: developmental trajectory analysis (DTA). DTA compares curves representing change in an outcome measure over time (i.e. developmental trajectories) for two groups of children that differ along a potentially important independent variable (e.g. age at intervention). This method was used to compare language development and speech perception outcomes in children who received cochlear implants in the second, third or fourth year of life. Within this range of age at implantation, it was found that implantation before the age of 2 resulted in speech perception and language advantages that were significant both from a statistical and a practical point of view. Additionally, the present results are consistent with the existence of a ‘sensitive period’ for language development, i.e. a gradual decline in language acquisition skills as a function of age.
Introduction
Profound deafness represents a major hindrance to speech communication, not only because it is very difficult for a person with profound deafness to understand speech without visual cues, but also because congenital profound deafness has a devastating effect on the development of spoken language and speech in children. Until the advent of cochlear implants (CIs), however, there existed no satisfactory treatment for profound deafness. A CI is an electronic device, part of which is surgically implanted into the cochlea and the remaining part worn externally. The CI functions as a sensory aid, converting mechanical sound energy into a coded electric stimulus that bypasses damaged or missing hair cells of the cochlea and directly stimulates remaining auditory neural elements. Most research on cochlear implantation in children has focused on the perception of speech [Osberger et al., 1991; Staller et al., 1991; Fryauf-Bertschy et al., 1992; Geers and Brenner, 1994; Miyamoto et al., 1995]. However, CIs also provide children with critical auditory sensory input necessary for the development of speech and language production. Hearing loss at an early age, particularly before the onset of language, can have a deleterious effect on the development of speech and language in children. Research on various measures of articulation, speech intelligibility and expressive language has shown that these abilities improve after deaf children have
received CIs and continue to improve with increasing experience with the device [Dawson et al., 1995; El-Hakim et al., 2001a, b; Spencer et al., 1998, 2003; Svirsky et al., 2000a, b, c; Svirsky, 2000; Szagun, 2000, 2001; Tomblin et al., 2000]. In particular, our studies have shown that profoundly deaf children display a gap in their language development, but once they receive CIs they start developing language at a near-normal rate, and the developmental gap remains about the same size (measured in units of language age). These findings suggest that congenitally deaf children may be able to develop expressive and receptive language skills at a normal pace and with only a negligible delay, provided they receive CIs early enough in life. This speculation is consistent with research showing that children who receive CIs between 2 and 5 years of age tend to perceive speech much better than those who receive CIs later [Fryauf-Bertschy et al., 1992]. A prerequisite for receiving a CI early in life is the early identification of hearing loss, which has become possible on a large scale only recently in the USA, with the introduction of mandatory newborn hearing screening. Yoshinaga-Itano and her group [Yoshinaga-Itano, 1999; Yoshinaga-Itano et al., 2000] report that the average age of identification of congenital hearing loss has been reduced from 24–30 months to an average of 2 months. This allows relatively early cochlear implantation, which is expected to be beneficial to language development for several reasons. First, earlier implanted children will have shorter periods of sound deprivation and, conversely, longer auditory experience with a CI than their later implanted peers. Also, these children may benefit from early exposure to sound before the end of critical/sensitive periods for the development of speech and language [Hurford, 1991; Pickett and Stark, 1987; Ruben, 1986].
Indeed, studies have shown that exposure to a specific language in the first 6 months of life alters infants’ phonetic perception [Kuhl et al., 1992]. Previous work by Jusczyk et al. [Jusczyk and Houston, 1998; Jusczyk and Luce, 2002] has shown that by 9 months of age children can recognize their own names, respond appropriately to ‘mommy’ and ‘daddy’, begin segmenting words, retain information about frequently occurring words and show language-specific preferences for prosodic cues. The latter skill is particularly important because prosody may be necessary for segmenting the acoustic stream into perceptual units. By the age of 10–12 months, sensitivity to nonnative contrasts begins to decline, and infants also appear to integrate different types of word segmentation cues. By 16 months of age, infants show an ability to segment vowel-
initial words, and by 17 months they already show lexical competition effects which affect word learning. However, there is some evidence that normal language learning and development occur only with early exposure to language. Conversely, when language exposure begins later in life, asymptotic performance in the language declines [Newport, 1990]. This phenomenon has been named ‘sensitive period’ for language development. Bortfeld and Whitehurst [2001] provide a careful review listing 4 types of evidence that support the concept of biologically determined sensitive periods: ‘wild’ children who have been deprived of normal linguistic interaction during their first years of life; natural variation in the timing of exposure of deaf children to sign language; loss of perceptual or language learning capacity with age, and differences in cerebral localization of language processing for individuals exposed to languages at different ages. Although there is converging evidence coming from all these sources, Bortfeld and Whitehurst [2001] call this evidence ‘less than definitive’. Difficulties encountered by ‘wild’ children in learning language late in life may be due to deprivation of experiences that are unrelated to language, rather than to a sensitive period for language learning. The sign language data are compelling, but it is possible that sign languages are acquired in different ways than oral languages. Additionally, deaf children who were exposed to sign language late in life may have suffered social or cognitive consequences due to the lack of early input. Changes in perceptual or learning capacities may be due to changes in motivation or opportunity, rather than to a biological window that closes with age. Finally, differences in cerebral localization only demonstrate a neural rather than a behavioral sensitive period, unless they are accompanied by related differences on language tasks. 
Within this context, the examination of speech and language skills of congenitally deaf children who receive CIs at different ages provides an additional and independent type of evidence that, although imperfect, may be relevant for the investigation of sensitive periods in language development. Thus, study of the early implanted population is important clinically because these children may show substantial benefit in speech and spoken language outcomes, and it is important scientifically because it may provide information about sensitive periods in language development. An additional reason to study the benefit of cochlear implantation in the first few years of life is that such early implantation may carry significant additional risks related to anesthetic complications [Young, 2002]. For example, a study found a higher incidence of bradycardia in
infants younger than 1 year who underwent noncardiac surgery (1.3%) than in children in the second, third or fourth years of life (0.98, 0.65 and 0.16%, respectively) [Keenan et al., 1994]. Bradycardia was associated with significant morbidity, including hypotension in 30%, asystole or ventricular fibrillation in 10% and death in 8% of the cases. If these numbers applied to cochlear implantation, there would be approximately 4 additional deaths for every 10000 children who are implanted at the age of 3 instead of at the age of 4, and about 2.6 additional deaths for every 10000 children who are implanted in the first year of life instead of the second, or in their second year instead of the third. On the other hand, numbers from the study cited above may overestimate the actual anesthesia risk for children undergoing cochlear implantation, because the numbers represent an average obtained from very diverse cases. Most surgeries in the study of Keenan et al. [1994] were elective, but some were emergency surgeries; most operations took less than 4 h, but some took more; a pediatric anesthesiologist was in charge only in about two thirds of the cases, and about 44% of the children in the study were in a high-risk class according to the American Society of Anesthesiology classification (ASA class 3, 4 or 5). In contrast, CI surgeries are elective, usually last less than 4 h and are almost always performed on healthy patients (ASA class 1). If a pediatric anesthesiologist is in charge, the risks may be further reduced for all age groups. A document from the American Academy of Pediatrics [Kass et al., 1996], after reviewing the literature up to 1990, suggests that ‘after the first 4–5 months of life, age alone is not the major risk factor’, but acknowledges that ‘most studies of anesthetic risk are not stratified by age and ASA class, and therefore it is difficult to determine the precise anesthetic mortality rate for ASA class patients between 6 and 12 months of age’. 
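The excess-mortality estimates follow directly from the quoted figures: the difference in bradycardia incidence between two years of life, multiplied by the 8% death rate among bradycardia cases:

```python
# Reconstruction of the excess-mortality arithmetic; all rates are those of
# Keenan et al. [1994] as quoted in the text.
brady_rate = {1: 0.013, 2: 0.0098, 3: 0.0065, 4: 0.0016}  # bradycardia by year of life
death_given_brady = 0.08  # death in 8% of bradycardia cases

def extra_deaths_per_10000(earlier_year, later_year):
    """Additional expected deaths per 10,000 children when implantation
    occurs in the earlier rather than the later year of life."""
    return (brady_rate[earlier_year] - brady_rate[later_year]) * death_given_brady * 10000

print(round(extra_deaths_per_10000(3, 4), 1))  # 3.9, quoted as "approximately 4"
print(round(extra_deaths_per_10000(1, 2), 1))  # 2.6
print(round(extra_deaths_per_10000(2, 3), 1))  # 2.6
```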
In summary, even though anesthesia risks may be low, there is at least a possibility that early pediatric implantation may carry additional potential risks. Therefore, it becomes even more important to assess the potential benefits of early implantation. The main goal of the present study was to compare the speech perception and language skills of congenitally deaf children who received CIs in the second, third or fourth year of life. This comparison was performed using a new method named developmental trajectory analysis (DTA). DTA examines the curves representing change in an outcome measure over time (i.e. developmental trajectories) for groups of children that differ along a potentially important independent variable such as age at intervention. Rather than comparing outcomes at a single point in
time, or comparing only the slopes of developmental trajectories, DTA assesses the area under each developmental trajectory and provides an estimate of the average difference in outcome throughout the comparison period. A secondary goal of the study was to compare the language skills shown by profoundly deaf children with CIs to those of age-matched children. In addition to the clinical interest of these comparisons, the study of children implanted at different ages may provide important information about the existence of ‘sensitive’ periods for language development.
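The core DTA computation can be sketched as the trapezoidal area under each group's trajectory, with the difference in areas divided by the length of the comparison window to give the average outcome difference. The trajectories below are hypothetical toy data, not results from this study:

```python
def trapezoid_area(ages, scores):
    """Area under a piecewise-linear developmental trajectory."""
    return sum((scores[i] + scores[i + 1]) / 2 * (ages[i + 1] - ages[i])
               for i in range(len(ages) - 1))

def dta_mean_difference(ages, group_a, group_b):
    """Average outcome difference between two groups over the shared
    comparison window (a sketch of the DTA idea, not the published algorithm)."""
    window = ages[-1] - ages[0]
    return (trapezoid_area(ages, group_a) - trapezoid_area(ages, group_b)) / window

# Toy trajectories: language age (months) vs chronological age (months)
ages = [24, 36, 48, 60]
early = [10, 22, 34, 46]  # hypothetical earlier-implanted group
late = [6, 16, 28, 40]    # hypothetical later-implanted group
print(dta_mean_difference(ages, early, late))  # average advantage in "language months"
```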
Methods Subjects The CI subjects were recruited from the clinical population at the Indiana University Medical Center and from the St. Joseph Institute for the Deaf in St. Louis, Mo., USA. Subjects were tested between 2 and 8 times using the tests listed below (see Outcome Measures). The first session always took place just prior to initial activation of the CI (between 1 and 3 months). When a subject was tested more than once, the testing sessions were at least 6 months apart. Participation in the study was offered to all monolingual English-speaking children implanted before the age of 5 years who used the SPEAK/ACE or CIS strategies since initial device fitting, and who had no other handicapping conditions such as mental retardation or speech motor problems. More than 90% of the children who qualified for the research protocol actually participated in the study. All subjects were congenitally, profoundly deaf. They were divided into three groups according to age at implantation as shown in table 1, which also includes information concerning the amount of residual hearing and communication mode used by each group. Twelve children were implanted in the second year of life, 34 in the third year and 29 in the fourth year. Residual hearing was calculated by averaging hearing thresholds at 500, 1000 and 2000 Hz, measured in decibels. All children were in school settings that promoted the development of auditory/oral skills, with or without the use of signs. American sign language was not the primary mode of communication for any of the children. Some used oral communication (OC), while others used total communication (TC), which is the simultaneous use of oral English and signs. Note that TC involves the use of signs that reproduce English grammar and therefore all responses were monolingual, regardless of whether they were expressed manually or orally. A few children changed their communication mode in the course of the study. 
In those cases, the percentage of sessions that took place while the child used each method of communication was calculated. For example, a child who used TC in the first 2 testing sessions, was switched to OC and then tested 3 more times, would be considered a ‘60% OC user’ for the purpose of calculating the proportion of OC and TC users within each age-at-implantation group. Although members of our research team do not provide aural rehabilitation and speech therapy, they do make recommendations for each child and remain in contact with schools to make sure that all children receive appropriate rehabilitation. Children in our study typically see a speech-language pathologist 2–3 times per week.
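The two subject-level summaries described above (the three-frequency pure-tone average and the percentage of OC sessions) reduce to simple arithmetic. A minimal sketch, not the authors' code; the function names are hypothetical and the first example's thresholds are invented, while the second reproduces the worked example from the text:

```python
# Illustrative sketch of the two demographic summaries described above.
# Function names are hypothetical, not from the original study.

def unaided_pta(thresholds_db):
    """Pure-tone average: mean of the thresholds at 500, 1000 and 2000 Hz, in dB."""
    return sum(thresholds_db) / len(thresholds_db)

def percent_oc(session_modes):
    """Percentage of testing sessions in which the child used oral
    communication ('OC') rather than total communication ('TC')."""
    oc_sessions = sum(1 for mode in session_modes if mode == "OC")
    return 100.0 * oc_sessions / len(session_modes)

# Hypothetical child with thresholds of 105, 110 and 115 dB:
print(unaided_pta([105, 110, 115]))  # 110.0

# The example from the text: TC in the first 2 sessions, then OC in 3 more.
print(percent_oc(["TC", "TC", "OC", "OC", "OC"]))  # 60.0
```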
Svirsky/Teoh/Neuburger
Table 1. Demographic information of participants in the study

Range of age at implantation, months       16–24        25–36        37–48
Number of subjects                         12           34           29
Mean age at implantation (SD), months      19.7 (1.9)   29.8 (3.4)   40.6 (2.5)
Unaided PTA (best ear mean and SD), dB     112 (5)      110 (9)      108 (7)
Range of unaided PTA (best ear), dB        105–118      90–120       97–120
Use of oral communication, %               54           54           58

PTA = Pure-tone average. All were congenitally deaf, monolingual English-speaking children implanted before the age of 5 years, who had used the SPEAK/ACE or CIS strategies since initial device fitting and who had no other handicapping conditions.
Outcome Measures

The outcome measure used to assess language development was based on the Reynell Developmental Language Scales (RDLS-III) [Edwards et al., 1997] and the MacArthur Communicative Development Inventories (MCDI) [Fenson et al., 1993]. The RDLS assess expressive and receptive language separately. Expressive language scores were used in this study, in part because these scores are less likely than receptive language scores to be inflated by the use of iconic information when the test is administered using TC, and also because the overall results were quite similar for both types of scores. The RDLS and MCDI were chosen to assess language development because they have been extensively normed on children with normal hearing and can be applied to users of either OC or TC. The option of conducting tests in these two modalities is important for measuring the children’s underlying language abilities, as far as possible independently of their ability to understand spoken language or to produce intelligible speech. The MCDI offer a valid and efficient means of assessing early language development, using a parent report format. Two levels of complexity are available for the MCDI and are administered according to the age of the child. The MCDI/Words and Gestures is designed for 8- to 16-month-olds, and the MCDI/Words and Sentences is designed for 16- to 30-month-olds. The RDLS have been used extensively with deaf children (including CI users; see for example Bollard et al. [1999], Richter et al. [2002], Svirsky et al. [2000b], Vermeulen et al. [1999]) and are appropriate for a broad age range (from 1 to 8 years). Normative data are also available for more than 1000 hearing children [Edwards et al., 1997]. The Kuder-Richardson reliability coefficients are 0.97 for the receptive language test and 0.96 for the expressive language test.
Finally, the test format involves object manipulation and description based on questions varying in length and grammatical complexity, reflecting real-world communication and assessing linguistic competence more accurately than single-word vocabulary tests. Scores observed using the RDLS and expressed as age-equivalent scores were used whenever a child performed above the test’s floor. When the child’s skills were more rudimentary, predicted RDLS scores were obtained based on MCDI data. The predictive functions were developed in an earlier study [Stallings et al., 2000] of 91 pediatric CI users who were administered both the RDLS and one of the MCDI forms within the same testing session. The function that predicts RDLS expressive language scores as a function of subscores in the MCDI/Words and Gestures is the following:

REXP age equivalent = 1.3386 + 2.0917 × WGlabel + 1.9359 × WGname + 0.0276 × WGwp_rs + 0.2793 × ca_mos,

where WGlabel indicates a child’s ability to label items, WGname a child’s ability to respond to his name, WGwp_rs the number of words produced, and ca_mos the child’s chronological age in months. The predictive function using subscores from the MCDI/Words and Sentences is:

REXP age equivalent = 13.7111 + 0.2942 × WSirwd_rs + 0.0254 × WSwp_rs + 0.6518 × WScplx_rs,

where WSirwd_rs is the number of irregular words produced, WSwp_rs is the total number of words produced and WScplx_rs is the MCDI’s measure of syntactic complexity. Adjusted R-squared values obtained in that study indicated that the predictive functions explained between 71 and 81% of the variance in RDLS scores.

Children were also assessed with the Mr. Potato Head task [Robbins, 1994], a modified open-set test of spoken word recognition. Mr. Potato Head is a toy with a plastic body and approximately 20 body parts (such as ears or a nose) and accessories (such as hats or glasses). Children were asked to manipulate the toys in response to commands given in the auditory-only mode, e.g. ‘Put a hat on Mr. Potato Head’. The children’s responses were scored as the percentage of key words correctly identified. This test assesses the recognition (perception and understanding) of words denoting body parts, accessories and actions. Because the required response is an action instead of the spoken repetition of a sentence, results are not confounded by a subject’s ability to speak intelligibly.

Pediatric CI Outcomes as a Function of Age at Implantation

Data Analysis

Many methods are available to assess the effects of age at implantation. One possibility is to wait a number of years until all children reach a predetermined age and then assess differences. In the case of pediatric CI users, the most extensive and well-controlled study of this type has been carried out by Geers and her colleagues at CID, who studied a large group of deaf children with CIs aged 8–9 years [Geers and Brenner, 2003; Strube, 2003].
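The two MCDI-to-RDLS predictive functions quoted in the Outcome Measures section are ordinary linear regressions and can be transcribed directly. This is an illustrative sketch (coefficients as reported above from Stallings et al. [2000]); the function and argument names are our own, and the subscore values in the example are invented:

```python
# Predicted RDLS expressive age equivalents (in months) from MCDI
# subscores, transcribing the two regression equations quoted in the text.
# Function and argument names are illustrative, not from the original study.

def rexp_from_words_and_gestures(wg_label, wg_name, wg_wp_rs, ca_mos):
    """MCDI/Words and Gestures: labeling ability, responding to name,
    number of words produced, and chronological age in months."""
    return (1.3386 + 2.0917 * wg_label + 1.9359 * wg_name
            + 0.0276 * wg_wp_rs + 0.2793 * ca_mos)

def rexp_from_words_and_sentences(ws_irwd_rs, ws_wp_rs, ws_cplx_rs):
    """MCDI/Words and Sentences: irregular words produced, total words
    produced, and syntactic complexity."""
    return (13.7111 + 0.2942 * ws_irwd_rs + 0.0254 * ws_wp_rs
            + 0.6518 * ws_cplx_rs)

# Invented subscores for a hypothetical 24-month-old:
print(round(rexp_from_words_and_gestures(1, 1, 10, 24), 2))  # 12.35
```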
One problem with this approach (assessing differences only once all children reach a predetermined age) is that it does not evaluate the impact of early implantation throughout the child’s developmental trajectory (i.e. change over time). It is important to investigate whether late implanted children catch up with early implanted children at some point, but it is also important to evaluate whether one group had an advantage over the other during the first years of life. All other things being equal, if a certain age at implantation results in improved speech intelligibility, speech perception skills or language development, this age at implantation should be preferred, even if the later implanted group ends up catching up with the earlier implanted group eventually. To use language age as an example, figure 1 shows two hypothetical developmental trajectories, one for a child implanted shortly after 15 months and another implanted at 50 months. The regression lines for each curve are shown with thick dashed lines. Although the later implanted child almost achieves parity with the earlier implanted child around the age of 84 months, the earlier implanted child has had a functional communicative advantage for several years. Thus, all other things being equal, earlier implantation is better in this example.

[Figure 1 plots language age against chronological age, both in months.]
Fig. 1. DTA compares developmental trajectories by calculating the average size of the difference between two curves (vertical arrow D). This is calculated as the integral of the difference between the two curves (the shaded area) divided by the comparison interval T.

Audiol Neurootol 2004;9:224–233

Speech perception can also be used as an example. Imagine two groups of children, one implanted at the age of 2 and the other implanted at the age of 4, both scoring 0% correct in tests of word recognition prior to implantation. Let us assume that the group of children implanted at 2 reach average word identification scores of 80% correct by the age of 3 and stay at about that level until the age of 8. Let us further assume that children implanted at 4 reach scores of 80% at the age of 5 and also stay at that level until the age of 8. Again, all other things being equal, implantation at 2 years of age is superior to implantation at 4 years in this hypothetical example, because the group of children implanted at 2 are able to understand 80% of the words they hear at the ages of 3 and 4, while the other group of children cannot understand any words at those ages. Even if the later implanted children do catch up later, the quality of life of the earlier implanted children is superior during an important part of their preschool years (ages 3 and 4). This reasoning is based on the premise that, all other things being equal, it is better to understand speech than to be unable to understand it. Other methods examine whether the rate of change in an outcome measure is affected by age at implantation. This type of analysis has been used by Kirk et al. [2002] and other authors [Osberger et al., 2002]. An interesting alternative method to analyze rates of
growth in outcome measures after an intervention is binary partitioning analysis [El-Hakim et al., 2002]. However, it is possible for a late implanted child (or group) to have a higher rate of change after implantation than an earlier implanted child (or group) and yet be clearly worse off than that child, as in the example of figure 1. The gray dashed regression line corresponds to postimplantation data for the child implanted at 50 months and is steeper (has a greater rate of change) than the black dashed regression line corresponding to the child implanted at 15 months. Nevertheless, the earlier implanted child shows higher language scores at any given age up to 84 months than the late implanted child. Thus, although analyzing the rate of change after implantation can reveal important scientific and clinical information, it may not be the best way to determine the effect of age at implantation on the child’s developmental trajectory. In addition, the analyses described above are not optimal to determine the preferred age at implantation for the congenitally deaf population. Instead, a new methodology termed DTA is proposed. Let X and Y be two groups of children implanted at different age ranges (with n and m members, respectively), and x_j(t) and y_i(t) the developmental trajectories of the individual children in each group, for the outcome measure of interest. For example, x_j(t) (j = 1, …, n) may represent all the curves of children implanted between 12 and 24 months of age, and y_i(t) (i = 1, …, m) may represent all the curves of children implanted between 25 and 36 months. Let Ȳ(t) be the average curve in group Y. In mathematical terms:

Ȳ(t) = (1/m) Σ_{k=1}^{m} y_k(t).

Then, the ‘mean developmental difference’ D_{j,Y} between each member of group X (for j = 1, …, n) and the average of group Y is calculated as follows:

D_{j,Y} = ∫_{t=0}^{T_{j,Y}} [x_j(t) – Ȳ(t)] dt / T_{j,Y},
where the upper integration limit T_{j,Y} is the maximum value for which both x_j(t) and Ȳ(t) are defined. Thus, the developmental difference D_{j,Y} is the area between the developmental trajectory corresponding to member j of group X and the average curve of group Y, divided by the length of the integration domain. One intuitive interpretation of D is that it represents the size of the difference between the two curves, averaged over the whole analysis period. The developmental difference at a particular age is depicted with a vertical arrow in the example shown in figure 1. Averaged over the whole analysis period, D becomes about 18 months of language age, although the difference between the two curves varies as a function of chronological age: it is as small as 0 (at ages of 0–12 months) or as large as 50 months of language age (at the chronological age of 54 months). To test whether the developmental trajectories from X, as a group, are significantly different from the average curve of group Y, the following null hypothesis can be used: H0: the set of developmental differences D_{j,Y} (j = 1, …, n) is a sample taken from a normal distribution with a mean of zero. This hypothesis can be easily tested using Student’s t distribution or (an alternative preferred by many statisticians, particularly for small samples) the Wilcoxon or exact permutation tests. DTA can also be used when the two groups to be compared, X and Y, differ in terms of confounding variables which are presumed to have
an effect on the outcome measure (such as residual hearing or communication mode). One way to address this problem is to calculate the ‘confounding variable differences’ C_{j,Y} (j = 1, …, n), which are equal to c_j – C̄_Y, where c_j is the value of the confounding variable for subject j, and C̄_Y is the average value of the confounding variable for group Y. Then, instead of testing the null hypothesis listed above, a regression is performed (or a multiple regression, if there is more than one confounding variable) of D_{j,Y} as a function of C_{j,Y}. In the linear case, the regression equation would be D_{j,Y} = a × C_{j,Y} + b, and the null hypothesis would be H′0: the intercept of the regression function is zero (or, in other words, the developmental differences are due exclusively to the effect of the confounding variables). In the present case, table 1 shows that the three age-at-implantation groups did not differ significantly along the two potential confounds thought to be most influential in speech and language outcomes for pediatric CI users: residual hearing and communication mode. In fact, there was a very slight advantage for the two groups of children who were implanted later. Thus, if an advantage in language development and speech perception outcomes was found for the earlier implanted group in this study, the advantage could not be attributed to the effect of residual hearing or communication mode. Given the small differences in the two confounding variables, differences among groups were assessed with the t test approach rather than the multiple regression approach. All pairwise comparisons among the three age-at-implantation groups were carried out. Each comparison was conducted in both directions.
For example, the developmental differences resulting from comparing each child implanted at 12–24 months to the average curve for children implanted at 25–36 months were calculated, and a test was applied to determine whether this set of differences was significantly different from zero. Then the process was repeated for the set of developmental differences resulting from comparing each child implanted at 25–36 months to the average curve for children implanted at 12–24 months. The more conservative of the two comparisons was selected to express the significance of the difference between the groups. These comparisons were carried out for both outcome measures, language age and speech perception scores. Potential advantages of the proposed DTA analysis method include the following: no assumptions are made concerning the shape of developmental trajectories, all available data points can be used, missing data are handled easily, the method assesses the whole developmental trajectory rather than individual points and, finally, it has high face validity for the purpose of assessing the effect of age at implantation on outcome measures.
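Computationally, the DTA procedure described above amounts to averaging the group-Y curves, integrating the gap between one group-X trajectory and that average, and dividing by the length of the common age range. The following is a minimal sketch under stated assumptions: trajectories are stored as (age, score) pairs, and linear interpolation with trapezoidal integration stand in for whatever the authors actually used; the toy data are invented:

```python
# Minimal sketch of DTA. Each trajectory is a list of (age, score) pairs
# with distinct, increasing ages. Linear interpolation and trapezoidal
# integration are assumptions for illustration.

def interp(traj, t):
    """Piecewise-linear interpolation of a trajectory at age t."""
    pts = sorted(traj)
    if t <= pts[0][0]:
        return pts[0][1]
    for (a0, s0), (a1, s1) in zip(pts, pts[1:]):
        if t <= a1:
            return s0 + (s1 - s0) * (t - a0) / (a1 - a0)
    return pts[-1][1]

def mean_developmental_difference(x_traj, y_trajs, step=1.0):
    """D_{j,Y}: the average gap between one group-X trajectory and the
    average curve of group Y, over the range where both are defined."""
    # Upper limit T_{j,Y}: the largest age at which both are defined.
    t_max = min(max(a for a, _ in x_traj),
                min(max(a for a, _ in tr) for tr in y_trajs))
    n = int(t_max / step)

    def gap(t):
        y_bar = sum(interp(tr, t) for tr in y_trajs) / len(y_trajs)
        return interp(x_traj, t) - y_bar

    # Trapezoidal rule for the integral of the gap from 0 to T_{j,Y}.
    area = sum((gap(i * step) + gap((i + 1) * step)) / 2.0 * step
               for i in range(n))
    return area / t_max

# Toy data: an early-implanted child gaining one month of language age per
# month from age 12, vs. two late-implanted children gaining from age 36.
early = [(0, 0), (12, 0), (84, 72)]
late_group = [[(0, 0), (36, 0), (84, 40)], [(0, 0), (36, 0), (84, 56)]]
print(round(mean_developmental_difference(early, late_group), 1))  # 17.1
```

The resulting set of differences D_{j,Y} would then be submitted to the t, Wilcoxon or exact permutation test described in the text.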
Results
The thin dark lines in the top panel of figure 2 show individual language data for children implanted between 12 and 24 months of age. Three normal-hearing reference curves are provided for comparison: the black diagonal line indicates the progress that would be expected of an average child with normal hearing, and the two gray curves represent 1 and 2 standard deviations below the mean for the normal-hearing population. A skill level that is 1 standard deviation below the mean is equivalent to
being on the 16th percentile of the normal-hearing population. In other words, it represents language skills that are better than those of one sixth of the normal-hearing population of the same age, which is well within normal limits (although quite rare for children with congenital profound hearing impairment). In contrast, 2 standard deviations below the mean is a level that puts an individual in the second percentile of a normal distribution. Finally, the thick gray curve represents an average of all the individual curves (each individual curve is interpolated from the origin to the first recorded data point, just prior to implantation). Many children in this group had scores that were very close to the average values for children with normal hearing. One way to quantify this observation is to examine how many children implanted before the age of 2 showed near-normal expressive language skills, defining ‘near normal’ as obtaining scores within 1 standard deviation of the normal hearing norms, for at least 2 out of 3 consecutive testing intervals. Although available data are too preliminary to answer the question with any degree of certainty, some trends are emerging. So far, only 3 of the 12 congenitally deaf children implanted before their second birthday have been followed up beyond the age of 4, but 2 of them showed near-normal scores starting at the ages of 4 and 4.5, respectively, and the third was close to obtaining near-normal scores just before his/her fifth birthday. Additionally, 2 of the remaining 9 children have obtained near-normal scores at least once, before the age of 4. These results suggest that language scores for CI users get closer to average values from normal-hearing children as a function of age and, moreover, this happens both in absolute terms (using raw scores or age-equivalent scores) and in relative terms (using Z scores that represent the CI user’s performance compared to the normal-hearing population). 
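The ‘near normal’ criterion described above (scores within 1 standard deviation of the normal-hearing norms for at least 2 out of 3 consecutive testing intervals) can be sketched as follows; the direction of the threshold (a z score of –1 or better) and the example z scores are assumptions for illustration, not the RDLS norms:

```python
# Hypothetical sketch of the 'near normal' criterion: a session counts as
# near normal when the score is no more than 1 standard deviation below
# the age norm (z >= -1), and the criterion is met when at least 2 of any
# 3 consecutive testing sessions qualify.

def z_score(score, norm_mean, norm_sd):
    """Standardized score relative to the normal-hearing norms for that age."""
    return (score - norm_mean) / norm_sd

def near_normal(z_scores):
    """At least 2 near-normal sessions within some window of 3 consecutive ones."""
    ok = [z >= -1.0 for z in z_scores]
    return any(sum(ok[i:i + 3]) >= 2 for i in range(len(ok) - 2))

# Four sessions, improving over time (invented z scores):
print(near_normal([-2.0, -1.2, -0.8, -0.9]))  # True
```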
The middle panel of figure 2 shows data from children implanted at 25–36 months. As in the group of children implanted at 12–24 months, there were some children in this group with near-normal development curves, but there were also several children who fell 2 standard deviations or more below the normal-hearing mean. Finally, children implanted between 37 and 48 months (bottom panel of fig. 2) generally stayed below the –2 standard deviation curve, even after many years of CI use. Figure 3 shows the three average curves for the data from each group of children. The arrows indicate the average age at which each group received cochlear implants. It is interesting to note that before implantation the three curves are practically superimposed. After 19–20 months, which is the average age at implantation for children who
received CIs between 12 and 24 months, the curve for that group starts to separate from the curves corresponding to the other two age-at-implantation groups. Similarly, the curves for the two later implanted groups start to separate around 29 months, which is the average age at implantation for children who received CIs between 25 and 36 months. At the latest data point, children in the three age-at-implantation groups show clearly different levels of language proficiency: those implanted at 12–24 months are, on average, close to 1 standard deviation below the normal-hearing means; children implanted at 25–36 months are, on average, close to 2 standard deviations below the normal-hearing means, and those implanted at 37–48 months are well below both benchmarks. DTA results indicated that the average advantage for children implanted between 12 and 24 months (measured in units of language age) was 5.7 months with respect to children implanted at 25–36 months (p < 0.01). In other words, the average estimated language age for children in the first group was 5.7 months higher than for children in the second group, at the same chronological age. Children implanted at 37–48 months lagged behind those implanted at 25–36 months by 5.6 months (p < 0.05), and behind those implanted at 12–24 months by 10 months (p < 0.001). Figure 4 shows average speech perception scores for the three age-at-implantation groups as well as comparison data obtained from normal-hearing children [Kirk et al., 1997; Robbins and Kirk, 1996]. The trends are the same as for the language development scores: prior to implantation, the developmental curves for the three groups are practically identical, and after implantation differences among groups start to emerge. Note that the earlier implanted group starts showing an advantage over
Fig. 2. The thin lines in each panel show individual language development curves as a function of age, and the thick gray line is the average of all individual curves. Language skills were assessed with the expressive section of the RDLS and expressed as age-equivalent scores. If the RDLS could not be administered, language age was estimated based on results from the MCDI. Each panel shows data for a different age-at-implantation group, from top to bottom: 12–24 months, 25–36 months and 37–48 months. Three lines are provided as a reference to compare data from CI users to the normal-hearing population. The thick diagonal line shows the language development that would be expected of an average child with normal hearing, whereas the two thinner gray lines under the diagonal represent performance levels that are 1 and 2 standard deviations below the normal-hearing average, respectively.
the other two groups around 24 months of age, which is a few months after their average age at implantation (19–20 months). Similarly, the group of children implanted at 25–36 months starts showing an advantage over the later implanted group very shortly after implantation. DTA revealed that the average advantage for the group implanted at 12–24 months was 12.2% over the group implanted at 25–36 months and 26.5% over the group implanted at 37–48 months. The group implanted at 25–36 months also had an advantage of 15.8% over the children implanted at 37–48 months. All these differences in speech perception scores were significant, with p values lower than 0.001. Comparing the average developmental curves to the normal-hearing data from Kirk et al. [2002], it can be observed that children implanted at 12–24 months reach the 90% correct level about 1 year later than normal-hearing controls, whereas the other two age-at-implantation groups still had not reached 90% correct at the last data collection point.
Fig. 3. The three thick curves are the average language development curves for each age-at-implantation group (which were shown separately in each panel of fig. 2). The normal-hearing comparison lines are the same as in figure 2. Note that the average curve for children implanted at 12–24 months reached the –1 standard deviation level (16th percentile for the normal-hearing population) at around 60 months of age, whereas the average curve for children implanted at 37–48 months was well below the –2 standard deviation (2nd percentile). The arrows indicate the average age at implantation for each group.
Fig. 4. Average curves for word scores in the Potato Head sentences test for each age-at-implantation group. Arrows indicate average ages at implantation. Children implanted at 12–24 months reached ceiling levels on this test about 1 year later than normal-hearing children, while children in the other age-at-implantation groups lagged behind even further, and it is still not clear whether they will ever reach ceiling scores as a group.
Discussion
These data support the hypothesis that implantation in the second year of life results in better speech perception and language development outcomes than later implantation. In this respect, results are consistent with previous studies that have found advantages in communicative outcomes for children who are implanted earlier in life [Fryauf-Bertschy et al., 1992]. The advantage for the earlier implanted children represents an effect that is both statistically significant and large in size. For example, based on the present data it may be possible to speculate that many children implanted at 12–24 months (perhaps a majority of them) will reach the age of 6 years and enter school with near-normal language skills (at least when those skills are assessed using the RDLS), whereas this is not happening for most children implanted later. Similarly, most children implanted at 12–24 months performed near the ceiling level on the Potato Head test at least a
year before the age of 6, while this is not true of children in the other groups. Instead of performing at ceiling levels at the age of 5, children implanted at 25–36 months only identify an average of 4 out of 5 key words in the relatively simple Potato Head test, and those implanted at 37–48 months only identify 3 words out of 5. These perceptual and language differences during the first few years of a child’s school experience may have a negative effect on learning, even if the perceptual and language differences tended to disappear by the age of 8 or 9, as the studies of Geers and Brenner [1994, 2003] would suggest. However, there is one important caveat that should be taken into account when interpreting the present language data or any other dataset obtained using standard norm-referenced tests: when a CI user achieves scores similar to those of normally developing children it does not mean that the CI user has ‘normal’ language, it only suggests that he or she has age-appropriate skills in the language tasks assessed by the test. Indeed, recent studies suggest that CI users may have more significant difficulties in developing certain aspects of grammar than in developing lexical skills [Svirsky et al., 2002; Szagun, 2000, 2001]. In any case, it seems clear that cochlear implantation in the second year of life rather than later has some advantages in terms of communicative skills, and these advantages probably outweigh any additional surgical risk. In the present study, DTA helped compare the speech and language outcomes over time for children implanted at different ages. The Methods section listed several potential advantages of DTA, but perhaps the most important one is its ability to provide a reasonable estimate of the average difference between two groups of developmental curves without making any assumptions about the shape of those curves. 
This method may also be useful to compare effect sizes and significance in response to any clinical intervention (including those outside the fields of speech, language and hearing) when it is important to evaluate that intervention over an extended period of time. One important aspect of the present results is the large intersubject variability. Even though the three age-at-implantation groups showed important differences, each group had at least a few outstanding performers. Thus, the results suggest that cochlear implantation before the age of 2 may be beneficial, but excellent results can be achieved at later ages as well. On the other hand, many children with CIs show language development curves that remain well below the –2 standard deviation line. This includes some children implanted in their second year of life, most of the children implanted in the third year and
the vast majority of those implanted in the fourth year. Although children who are implanted later seem to develop speech perception and language skills at a slower pace than children who are implanted earlier, there are numerous individual exceptions to this trend. In consequence, the present results are consistent with the ‘sensitive period’ view [Johnson and Newport, 1993] that postulates a gradual decline in language acquisition skills as a function of age. These results are also consistent with studies of language development in German-speaking CI users [Szagun, 2001]. However, there is an important caveat when examining CI data for their potential relevance to the existence of sensitive periods: the auditory signal provided by a CI is less than optimal, providing less information than the auditory signal received by children with normal hearing. Thus, it is at least possible that sensitive periods may exist for speech and language development when listeners are exposed to the impoverished signal provided by a CI, but not necessarily when they are exposed to a normal acoustic signal. The present study falls short of a randomized double-blind study, which would provide more definite answers concerning the effect of age at implantation on speech and language development. However, conducting such a study would be at least questionable from an ethical point of view, given the available evidence that earlier cochlear implantation is beneficial. Instead, future studies may attempt to refine the present analyses by considering other potential confounding variables and by studying outcomes in children implanted in the first year of life. Additionally, other outcome measures should be considered in future work, including measures of the child’s ability to speak intelligibly and more detailed measures of language development that examine specific skills such as the use of grammar.
Acknowledgments

This work was supported by NIH-NIDCD grants R01-DC00423, R01-DC00064 and T32-DC00012.
References

Bollard PM, Chute PM, Popp A, Parisier SC: Specific language growth in young children using the Clarion cochlear implant. Ann Otol Rhinol Laryngol Suppl 1999;177:119–123.
Bortfeld H, Whitehurst G: Sensitive periods to first language acquisition; in Bailey D, Bruer J, Lichtman J, Symons F (eds): Critical Thinking about Critical Periods: Perspectives from Biology, Psychology, and Education. Baltimore, Brookes Publishing, 2001, pp 173–192.
Dawson PW, Blamey PJ, Dettman SJ, Barker EJ, Clark GM: A clinical report on receptive vocabulary skills in cochlear implant users. Ear Hear 1995;16:287–294.
Edwards S, Fletcher P, Garman M, Hughes A, Letts C, Sinka I: The Reynell Developmental Language Scales III: The University of Reading Edition. Los Angeles, Western Psychological Services, 1997.
El-Hakim H, Abdolell M, Mount RJ, Papsin BC, Harrison RV: Influence of age at implantation and of residual hearing on speech outcome measures after cochlear implantation: Binary partitioning analysis. Ann Otol Rhinol Laryngol 2002;189(suppl):102–108.
El-Hakim H, Levasseur J, Papsin B, Panesar J, Mount RJ, Stevens D, Harrison RV: Vocabulary acquisition rate after pediatric cochlear implantation and the impact of age at implantation. Int J Pediatr Otorhinolaryngol 2001a;59:187–194.
El-Hakim H, Papsin B, Mount RJ, Levasseur J, Panesar J, Stevens D, Harrison RV: Assessment of vocabulary development in children after cochlear implantation. Arch Otolaryngol Head Neck Surg 2001b;127:1053–1059.
Fenson L, Dale PS, Reznick JS, Thal D, Bates E, Hartung JP, Pethick S, Reilly JS: MacArthur Communicative Development Inventories. San Diego, Singular Publishing Group Inc, 1993.
Fryauf-Bertschy H, Tyler RS, Kelsay DM, Gantz BJ: Performance over time of congenitally deaf and postlingually deafened children using a multichannel cochlear implant. J Speech Hear Res 1992;35:892–902.
Geers A, Brenner C: Speech perception results: Audition and lipreading enhancement. Volta Rev 1994;96:97–108.
Geers A, Brenner C: Background and educational characteristics of prelingually deaf children implanted by five years of age. Ear Hear 2003;24: 2S–14S. Hurford JR: The evolution of the critical period for language acquisition. Cognition 1991;40:159– 201. Johnson JE, Newport E: Critical period effects in second language learning: The influence of maturational state on the acquisition of English as a second language; in Johnson MJ (ed): Brain Development and Cognition. Oxford, Blackwell, 1993, pp 248–282. Jusckzyk PW, Houston D: Speech perception during the first year; in Slater A (ed): Perceptual Development: Visual, Auditory and Speech Perception in Infancy. East Sussex, UK, Psychology Press, 1998.
Pediatric CI Outcomes as a Function of Age at Implantation
Jusckzyk PW, Luce PA: Speech perception and spoken word recognition: Past and present. Ear Hear 2002;23:2–40. Kass E, Kogan SJ, Manley C, Wacksman JA, Klykylo WM, Meza A, Schultz J, Wiener E: Timing of elective surgery on the genitalia of male children with particular reference to the risks, benefits, and psychological effects of surgery and anesthesia. Pediatrics 1996;97:590–594. Keenan RL, Shapiro JH, Kane FR, Simpson PM: Bradycardia during anesthesia in infants: An epidemiologic study. Anesthesiology 1994;80: 976–982. Kirk KI, Diefendorf AO, Pisoni DB, Robbins AM: Assessing speech perception in children; in Mendel LL, Danhauer LJ (eds): Audiologic Evaluation and Management and Speech Perception Assessment. San Diego, Singular Publishing Group Inc, 1997. Kirk KI, Miyamoto RT, Lento CL, Ying E, O’Neill T, Fears B: Effects of age at implantation in young children. Ann Otol Rhinol Laryngol 2002;189(suppl):69–73. Kuhl PK, Williams KA, Lacerda F, Stevens KN, Lindblom B: Linguistic experience alters phonetic perception in infants by 6 months of age. Science 1992;255:606–608. Miyamoto RT, Robbins AM, Osberger MJ, Todd SL, Riley AI, Kirk KI: Comparison of multichannel tactile aids and multichannel cochlear implants in children with profound hearing impairment. Am J Otol 1995;16:8–13. Newport EL: Maturational constraints on language learning. Cogn Sci 1990;14:11–28. Osberger MJ, Miyamoto RT, Zimmerman-Phillips S, Kemink JL, Stroer BS, Firszt JB, Novak MA: Independent evaluation of the speech perception abilities of children with the Nucleus 22channel cochlear implant system. Ear Hear 1991;12(suppl):66S–80S. Osberger MJ, Zimmerman-Phillips S, Koch DB: Cochlear implant candidacy and performance trends in children. Ann Otol Rhinol Laryngol 2002;189(suppl):62–65. Pickett JM, Stark RE: Cochlear implants and sensory aids for deaf children. Int J Pediatr Otorhinolaryngol 1987;13:323–344. 
Richter B, Eissele S, Laszig R, Lohle E: Receptive and expressive language skills of 106 children with a minimum of 2 years’ experience in hearing with a cochlear implant. Int J Pediatr Otorhinolaryngol 2002;64:111–125. Robbins AM: The Mr. Potato Head Task. Indianapolis, Indiana University School of Medicine, 1994. Robbins AM, Kirk KI: Speech perception assessment and performance in pediatric cochlear implant users. Semin Hear 1996;17:353–369. Ruben RJ: Unsolved issues around critical periods with emphasis on clinical application. Acta Otolaryngol 1986;429(suppl):61–64. Spencer LJ, Barker BA, Tomblin JB: Exploring the language and literacy outcomes of pediatric cochlear implant users. Ear Hear 2003;24:236– 247. Spencer LJ, Tye-Murray N, Tomblin JB: The production of English inflectional morphology, speech production and listening performance
in children with cochlear implants. Ear Hear 1998;19:310–318. Staller SJ, Dowell RC, Beiter AL, Brimacombe JA: Perceptual abilities of children with the Nucleus 22-channel cochlear implant. Ear Hear 1991;12(suppl):34S–48S. Stallings LM, Svirsky M, Gao S: Assessing the language abilities of pediatric cochlear implant users across a broad range of ages and performance abilities. Volta Rev 2000;102:215–235. Strube MA: Statistical Analysis and Interpretation in a Study of Prelingually Deaf Children Implanted before Five Years of Age. Ear Hear 2003;24:15S–23S. Svirsky MA: Language development in children with profound and prelingual hearing loss, without cochlear implants. Ann Otol Rhinol Laryngol 2000;185(suppl):99–100. Svirsky MA, Chute PM, Green J, Bollard P, Miyamoto RT: Language development in prelingually deaf children who have used SPEAK or CIS stimulation strategies since initial stimulation. Volta Rev 2000a;102:199–213. Svirsky MA, Robbins AM, Kirk KI, Pisoni DB, Miyamoto RT: Language development in profoundly deaf children with cochlear implants. Psychol Sci 2000b;11:153–158. Svirsky MA, Sloan RB, Caldwell M, Miyamoto RT: Speech intelligibility of prelingually deaf children with multichannel cochlear implants. Ann Otol Rhinol Laryngol 2000c;185(suppl):123– 125. Svirsky MA, Stallings LM, Lento CL, Ying E, Leonard LB: Grammatical morphologic development in pediatric cochlear implant users may be affected by the perceptual prominence of the relevant markers. Ann Otol Rhinol Laryngol Suppl 2002;189:109–112. Szagun G: The acquisition of grammatical and lexical structures in children with cochlear implants: A developmental psycholinguistic approach. Audiol Neurootol 2000;5:39–47. Szagun G: Language acquisition in young Germanspeaking children with cochlear implants: Individual differences and implications for conceptions of a ‘sensitive phase’: Audiol Neurootol 2001;6:288–297. 
Tomblin JB, Spencer LJ, Gantz BJ: Language and reading acquisition in children with and without cochlear implants. Adv Otorhinolaryngol 2000;57:300–304. Vermeulen A, Hoekstra C, Brokx J, van den Broek P: Oral language acquisition in children assessed with the Reynell Developmental Language Scales. Int J Pediatr Otorhinolaryngol 1999;47:153–155. Yoshinaga-Itano C: Benefits of early intervention for children with hearing loss. Otolaryngol Clin North Am 1999;32:1089–1102. Yoshinaga-Itano C, Coulter D, Thomson V: The Colorado Newborn Hearing Screening Project: Effects on speech and language development for children with hearing loss. J Perinatol 2000; 20(8, pt 2):S132–S137. Young NM: Infant cochlear implantation and anesthetic risk. Ann Otol Rhinol Laryngol 2002; 111(5, pt 2):49–51.
Audiol Neurootol 2004;9:224–233
233
Original Paper
Audiol Neurootol 2004;9:234–246
DOI: 10.1159/000078393
Received: February 6, 2003 Accepted after revision: December 4, 2003
Exploring the Benefits of Bilateral Cochlear Implants
Richard J. M. van Hoesel
CRC for Cochlear Implant and Hearing Aid Innovation, Melbourne, Australia
Key Words
Bilateral cochlear implants · Speech · Psychophysics
Abstract
Several recent reports indicate that both localization and speech intelligibility in spatially separated noise are substantially improved by using cochlear implants (CIs) in both ears rather than in just one. Benefits appear to be largely derived from the effects of level variations at the two ears due to the head shadow, whereas contributions from interaural time differences (ITDs) seem smaller than in normal hearing listeners. The effect of binaural unmasking estimated from speech studies to date varies from study to study and is possibly confounded by issues such as listening experience, bias or loudness effects when comparing the performance for the better ear with that using both ears. To improve the contribution from timing information at the two ears, it may be necessary to change present clinical sound-processing schemes, which currently preserve only envelope cues, so that they also preserve fine-timing information. However, recently published data show that basic psychophysical sensitivity to fine-timing ITDs in CI patients is very poor for rates beyond a few hundred hertz, suggesting that subjects do not actually hear ITD cues at those rates anyway. Data from a number of new studies are presented to discuss these and other issues related to the potential to benefit from bilateral implantation.
Introduction
With increased availability of cochlear implants (CIs), clinical interest in bilateral implantation has escalated. There are a number of reasons why we might expect bilateral implantation to be of benefit, some of which are specific to hearing prostheses, such as ensuring that the ear with the best post-operative performance is implanted, and others that rely on the preservation of the benefits that normal hearing listeners with two ears also enjoy. This paper focuses on some of those benefits obtained by normal hearing listeners, particularly when listening to speech in noise [Licklider, 1948; Hirsch, 1950; Carhart, 1965; Dirks and Wilson, 1965; MacKeith and Coles, 1971; Bronkhorst and Plomp, 1988, 1989; Peissig and Kollmeier, 1997; Hawley et al., 1999] or localizing sound [Rayleigh, 1907; Stevens and Newman, 1936; Searle et al., 1976; Häusler et al., 1983; Wightman and Kistler, 1992]. When speech and noise are spatially separated, different signal-to-noise ratios (S/N) result at the two ears because of monaural head shadow effects and different source distances to each ear. This potentially allows the listener to attend to the ear with the better S/N. In addition, binaural unmasking benefits for speech in noise, sometimes referred to as the squelch effect, are available from the comparison of information at the two ears. Interaural time (ITD) and level (ILD) differences are also the primary cues enabling normal hearing listeners to determine source laterality (e.g. direction within the horizontal
plane). Furthermore, signals become louder with two ears as opposed to one due to binaural loudness summation, and are easier to detect in noise when interaural phase characteristics differ for signal and noise due to binaural unmasking [e.g. Durlach and Colburn, 1978]. To obtain the same advantages with bilateral CIs, it seems likely that sound-processing strategies should preserve the appropriate cues when translating acoustic to electrical signals. However, that alone does not guarantee that the benefits will be available since CI users may actually not be sensitive to the electrical cues. Several recent studies have shown that even with existing sound processors that do not preserve fine-timing cues, substantial speech-in-noise and localization benefits are available to bilateral CI users [van Hoesel and Clark, 1999; Tyler et al., 2002; van Hoesel et al., 2002; Gantz et al., 2002; Müller et al., 2002; van Hoesel and Tyler, 2003]. However, indications are that these benefits are predominantly derived from changes in the relative levels of signal and/or masker energy reaching the two ears with varying source locations; it is not yet clear whether timing cues offer benefits beyond simple energy effects. The number of publications describing psychophysical results with bilateral CI users is limited to date [van Hoesel et al., 1993, 2001; van Hoesel and Clark, 1995, 1997; Lawson et al., 1998, 2001; Long, 2000; van Hoesel and Tyler, 2003]. More data are needed to determine the extent to which interaural timing cues can contribute to binaural benefits. The purpose of this paper is to discuss some of the benefits observed in studies to date, to examine limitations with existing systems, and to present additional data from several further studies exploring some of the relevant issues.
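The physical magnitude of the ITD cue discussed here can be approximated with the textbook rigid-sphere (Woodworth) model. The sketch below is purely illustrative and not part of the study; the head radius is an assumed typical value.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
HEAD_RADIUS = 0.0875     # m; assumed typical adult value

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (s) for a rigid spherical head:
    ITD = (r / c) * (theta + sin(theta)), theta = azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees azimuth gives an ITD of roughly 0.65 ms,
# the upper end of the naturally occurring cue range.
itd_us = woodworth_itd(90.0) * 1e6
```

For comparison, the ITD JNDs reported later in this paper for CI users with low-rate stimuli (around 100–150 μs) lie well within this physically available range.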
Preserving Cues

Until fairly recently, bilateral cochlear implantation was extremely uncommon. Consequently, commercial sound processor design was not concerned with the preservation of binaural cues. In a typical commercial bilateral CI system, two separate monaural systems are worn, usually with a microphone behind each ear. Although microphone characteristics may need to be evaluated more carefully for bilateral CI systems, when placed behind each ear, the microphone signals will contain many of the useful ILD¹ and ITD cues that arise as a consequence of the intervening head [e.g. Blauert, 1997].

In present commercial CI sound-processing strategies, which have been designed to work with pulsatile, non-simultaneous multi-channel implant systems, a bank of bandpass filters is used to separate the microphone signal into different frequency bands. The output of each band is then further processed to extract the envelope of the signal in that band. The rate at which that information is sampled is sometimes referred to as the update rate. The envelope information in each band is then used to set stimulation levels on an electrode associated with that frequency. Stimulation is at a fixed rate, which may or may not be the same as the update rate. Such strategies therefore do not preserve fine-timing information in the signals from each filter band. Although envelope cues alone may provide some binaural advantages, those derived from fine-timing information in the signal are lost. In fact, by using fixed stimulation rates that are not related to the signal characteristics, present strategies may actually introduce disruptive fine-timing cues (if perceptible) between the ears.

An alternative approach has been taken in a bilateral research strategy developed at the Cooperative Research Centre (CRC) for Cochlear Implant and Hearing Aid Innovation in Melbourne, Australia. This bilateral strategy, henceforth referred to as peak-derived timing (PDT), locates positive peaks in the fine timing of signals at each filter-band output and then stimulates the associated electrode for each band at times corresponding to those peaks. In a particular implementation of the PDT strategy, used in the studies by van Hoesel and Tyler [2003], the lowest four or five out of ten filter bands spanned up to about 1500 Hz. Since this is the range normally considered to be important for fine-timing ITD information, these bands were given priority when arbitration was required to resolve monaural temporal clashes when combining information from the filters for each ear for non-simultaneous stimulation with CIs. Although higher-frequency bands therefore had lower priority, they contain more peaks in a fixed time interval, so shifting a single peak produces a smaller percentage error than in a lower-frequency band. Bench tests and simulations of that implementation using broadband signals showed that in those bands electrical pulses were shifted less than 3% of the time, and when they were, the shift was only 70 μs (a single-stimulation pulse interval). In comparison, clinical strategies that only extract envelopes, and present that information at a fixed rate at each ear, can introduce arbitrary fine-timing cues that are not related to the signal fine-timing peaks. For a typical clinical case with two independent devices, the introduced cue can be up to half the stimulation interval (e.g. 0.5 ms for a stimulation rate of 1 kHz per channel) and conveys no information about the actual fine-timing cues in the signal. Note that simultaneous stimulation of the two sides only replaces the unknown ITD cue with a 0-ms ITD cue that will still be in error for most signals.

¹ For the directional Cochlear HS-8 microphones used in many of the studies presented in this paper, the availability of ILD cues was verified using pink noise and narrow-band noise as described in van Hoesel et al. [2002].
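The processing contrast described above, envelope extraction at a fixed update rate versus stimulation timed to fine-structure peaks, can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the CRC implementation: the filter bank, loudness mapping and clash arbitration are omitted, and all names and parameter values are invented for the example.

```python
import numpy as np

FS = 16000  # Hz; assumed audio sample rate for the sketch

def band_envelope(band_signal: np.ndarray, update_rate: int = 900) -> np.ndarray:
    """Envelope-style processing for one filter-band output: half-wave
    rectify, smooth, then sample at a fixed 'update rate' (simplified).
    Fine-timing information within the band is discarded."""
    rectified = np.maximum(band_signal, 0.0)
    win = max(1, FS // update_rate)
    smoothed = np.convolve(rectified, np.ones(win) / win, mode="same")
    return smoothed[::win]  # one stimulation level per update interval

def positive_peak_times(band_signal: np.ndarray) -> np.ndarray:
    """PDT-style processing: find the positive fine-structure peaks in a
    band; the electrode would be stimulated at these times."""
    s = band_signal
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > 0)
    return (np.flatnonzero(is_peak) + 1) / FS  # peak times in seconds

# Example: a 250-Hz band signal has positive peaks every 4 ms.
t = np.arange(FS // 10) / FS            # 100 ms of signal
sine = np.sin(2 * np.pi * 250 * t)
times = positive_peak_times(sine)
```

With a PDT-style scheme, comparing left- and right-ear peak times can preserve the fine-timing ITDs that envelope-only processing discards.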
Hearing Cues – Experiment 1: Effect of Place-Matching on ITD Sensitivity
Work in the early 1990s by van Hoesel et al. [1993, 1995, 1997] with two bilateral CI users showed that these subjects could readily fuse information from two implants. Although these particular subjects showed very large ITD just-noticeable differences (JNDs; on the order of 1 ms), determined using an adaptive 3AFC task, they showed strong effects of varying electrical stimulation levels at the two ears. They also experienced binaural loudness summation when electrical pulse trains were applied to single-electrode pairs in each ear. When the loudness in each ear was similar, the binaural loudness was about twice as large [van Hoesel and Clark, 1997]. A case study presented by Lawson et al. [1998] found better ITD sensitivity with JNDs around a few hundred microseconds using a single-interval 2AFC lateralization task, and one bilateral CI user studied by Long [2000] also showed consistent and at times large lateralization shifts resulting from signals with ITDs of 300 μs. More recent data from van Hoesel and Tyler [2003] showed ITD sensitivity around 100–150 μs for 5 subjects using low-rate stimuli in a 2S2I 2AFC lateralization task, and a progress report from Lawson et al. [2001] describes lateralization shifts with cues as small as 50 μs using a single-interval 2AFC task. The van Hoesel and Tyler study also showed that as stimulation rates for unmodulated electrical pulse trains increased beyond a few hundred hertz, ITD sensitivity became very poor. However, when a deep 50-Hz modulation was applied to an 800-Hz pulse train, performance again approached that for an unmodulated 50-Hz pulse train. One factor that may affect interaural sensitivity is the degree to which the place of stimulation in the two ears is matched. Few of the speech and localization studies cited have attempted to allow for any difference in insertion depth of the electrode array in each ear and have simply used default frequency-to-electrode assignment.
The van Hoesel and Tyler [2003] study did attempt an approximate match for the PDT speech coding strategy by estimating the average insertion offset between ears from a pitch-based task and applying that average in their frequency-to-electrode maps. However, depending on the variations in curvature of the arrays in the two ears, specific place matches along the cochlea in each ear may not correspond to the average across the entire array. Although van Hoesel and Tyler [2003] and Lawson et al. [2001] both used pitch-based place-matching to select specific electrode pairs that were likely to exhibit good interaural sensitivity, a detailed comparison of sensitivity for pitch-matched and unmatched cases was not included. Long [2000] did test this for one subject. The resulting data indicated that place variations on the order of 2 mm could affect interaural sensitivity but that pitch-matching did not guarantee optimal electrode pairing for binaural sensitivity. It is worth noting that, for that subject, the electrode arrays in the two ears were of different designs from different manufacturers and used a relatively wide electrode spacing of several millimeters.

Methods

To further assess the effect of place-matching variations on ITD sensitivity, data were collected with a bilateral implant user in Melbourne who uses identical Cochlear CI-24M devices in each ear. This device uses electrode arrays with 22 bands spaced 0.75 mm apart. ITD sensitivity was measured using monopolar constant-amplitude biphasic current pulses delivered at 50 pulses per second (pps), presented at a level eliciting a loudness sensation comparable to everyday speech levels according to subjective reports. Electrode #10, which is located close to the middle of the array, was selected in the left ear and a range of electrodes was tested in the right ear. Analysis of the X-rays using a modified Stenver's view [Xu et al., 2000] for this subject showed little, if any, difference in average position of the electrode array along the cochlea in each ear.
When asked to compare pitch sensations from electrodes stimulated at 50 pps in each ear, the subject consistently indicated that the pitch of electrode L10 in the left ear fell somewhere between those for bands R9 and R10 on the right. This was determined using repeated continuous alternating stimulation of left- and right-ear electrodes, asking the subject for each pairing which ear had the higher pitch. To facilitate the comparison of pitch in the two ears, approximate loudness balance was first obtained using paired comparisons with the electrodes selected. Although pitch and loudness can be confused or interact with electrical stimulation, this subject was quite familiar with assessing loudness and reported no difficulty arising from pitch differences for the limited range of electrodes selected. To explore offsets of up to a few millimeters from the pitch-matched condition, right-ear electrodes R7–R12 were selected for pairing with L10. To explore larger offsets, bands R14, R16 and R18 were also selected. A similar procedure was repeated in another session with electrode L11 on the left side, except that, due to time constraints, only the best pitch-matched band on the right (R11) and two relatively large offsets (R6 and R16) were compared. For each pairing, the ITD JND for delays applied to the entire electrical stimulus was determined. Each JND was estimated from a 75% criterion on the psychometric function constructed from data collected in a single block. The block comprised 320 presentations of pairs of 300-ms stimulus bursts with ITDs that were equal in magnitude and of opposite sign. For each pair of intervals, separated by 300 ms of silence, there was an equal probability of the left or right side leading in the first interval. The subject was asked whether the second signal resulted in a lateral position to the left or right of the first, irrespective of the absolute position of both stimuli. This is a 2AFC task with two stimuli presented in two intervals (2S2I), and the data displayed in the results show the total cue size available at the JND value, which is twice the magnitude of the actual ITD in either interval. The 320 pairs presented in each block comprised four different ITDs, usually 50, 100, 200 and 400 μs, so that each data point used to construct the psychometric curve was derived from responses to 80 signal pairs, half with the left-ear and half with the right-ear stimulus leading in the first interval. If performance for a particular bilateral pair with 50-μs ITDs was still above the 75% threshold, the pair was also tested with a block comprising ITDs that were half as large (25, 50, 100 and 200 μs) and the JND was estimated from that block instead.
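The JND estimation just described (a 75% criterion on a psychometric function built from four ITD values) can be illustrated with a small sketch. Log-linear interpolation is used here as a simplified stand-in for fitting a full psychometric function, and the response proportions are invented for the example.

```python
import numpy as np

def jnd_75(cue_sizes_us, proportion_correct):
    """Cue size at the 75%-correct criterion, interpolated linearly on a
    log cue-size axis. Assumes proportion_correct is non-decreasing."""
    x = np.log(np.asarray(cue_sizes_us, dtype=float))
    y = np.asarray(proportion_correct, dtype=float)
    if y.max() < 0.75:
        return float("inf")  # threshold not reached at the largest cue tested
    return float(np.exp(np.interp(0.75, y, x)))

# Hypothetical block: total cue sizes (twice the per-interval ITD), 80 trials each.
cues_us = [100, 200, 400, 800]
p_correct = [0.55, 0.70, 0.85, 0.95]
jnd_us = jnd_75(cues_us, p_correct)  # falls between 200 and 400 us
```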
Results and Discussion

Figure 1 shows results when both electrodes L10 and L11 (both of which are on the left side) were paired with various electrodes on the right side. The best JNDs found for L10 were around 150 μs for pairings with R9 and R10. The same pairings of electrodes gave the best subjective pitch match at 50 pps. Pairings with electrodes that were 1 or 2 bands offset from the matched-pitch condition show only small reductions of ITD sensitivity. Place offsets of 3 bands or more were needed to degrade ITD sensitivity such that the JND was doubled. For electrode L11, the JND for the matched-pitch case, R11, was slightly under 100 μs. Even when offsets were as large as 5 bands (3.75 mm) in either direction, JNDs were only about twice as large. Although verification of these results is needed with further subjects, the implication is that small place differences on the order of 1 or 2 mm do not substantially alter ITD sensitivity. If we assume that best sensitivity to ITDs indeed results when regions that in a normal hearing ear code the same frequencies are stimulated, we might readily accept that the small effect of place variations up to 1 or 2 mm is simply due to current spread with electrical stimulation. Perhaps more surprising is that ITD sensitivity was still within a factor of 2 of approximately matched conditions for place variations around 3 or 4 mm, and within a factor of 3 for place differences as large as 7 or 8 mm (for L10).
Fig. 1. ITD JNDs (2AFC, 2S2I, 75%) for constant-amplitude pulse trains at 50 pps as a function of place of stimulation in the right ear, with place held fixed at electrode #10 or #11 in the left ear.
Evaluating Bilateral CI Benefits – Experiment 2: Binaural Loudness Summation for Broadband Stimuli
When comparing performance for unilateral and bilateral signal presentation, a number of issues may be worth keeping in mind. One of these is the effect of binaural loudness summation, which may advantage the bilateral condition because of improved audibility of softer signal components. Even for speech in noise, where speech and noise are both likely to be affected by loudness summation, a binaural benefit may exist (at least from a theoretical consideration) due to the limited dynamic range available in CI sound processors. By way of example, consider the situation for a positive S/N where the noise will fall below the processor’s input dynamic range more often than the speech. At those times, binaural loudness summation offers loudness increases for the speech but not the noise (which is below stimulation threshold) and therefore results in an improved S/N. Note however that unless binaural loudness summation varies with stimulation levels, its effect for components that do fall within the processor’s dynamic range is not a binaural advantage per se; since loudness for CI users is mapped via the sound processor, it can also be increased by using higher monaural stimulation levels. Furthermore, from a clinical perspective an implant system should be optimized for the user’s comfort, which implies that a similar loudness range
should be targeted (if possible) regardless of whether the subject is fitted unilaterally or bilaterally. Binaural loudness summation data with two subjects using single-electrode stimuli presented in van Hoesel and Clark [1997] showed that sounds approximately doubled in loudness when stimuli were presented binaurally compared with monaurally. The experiment described here examines whether the same is true for broadband stimulation.

Methods

Broadband pink-noise bursts were presented to a bilateral CI-24M user via the audio-input connector on the CRC's SPEAR research processor. The processor was programmed with a bilateral PDT strategy (see Preserving Cues) implemented as described in van Hoesel and Tyler [2003]. The loudness mapping in the processor was adjusted to balance levels across the 10 electrodes used in each ear, as well as between electrodes on each side, both at levels near threshold and at maximal stimulation levels before discomfort. Noise bursts were 740 ms in duration, including 100-ms linear rise and fall times, and were presented with random attenuation between 0 and 32 dB in 4-dB steps. Each presentation level was included 12 times for each of three conditions: left ear, right ear and bilateral (diotic) presentation, resulting in 36 presentations at each level. Left, right and bilateral presentations were in random order and after each signal the subject assigned a numerical value to the perceived loudness. Instructions to the subject emphasized maintaining ratiometrically consistent relations between the numbers used and the loudness percepts, rather than numbers simply reflecting rank relationships. Further instructions were to avoid numerical 'endpoints' such as 1 or 100, and to listen to the overall loudness percept rather than identify the contribution from either ear alone (in the binaural case).

Fig. 2. Magnitude estimation data assessing binaural loudness summation for subject ME1 with a SPEAR processor and PDT strategy (see text) and with broadband (pink noise) stimuli varied over 32 dB. Squares and triangles are for unilateral presentation of the signal and diamonds are for bilateral (diotic) presentation.

Results and Discussion

The results depicted in figure 2 show that binaural listening for subject ME1 resulted in an average loudness that was about 1.8 times greater than using either ear alone. For attenuation over the range 4–20 dB, the amount of binaural loudness summation was very consistent and the loudness increase was almost exactly a factor of 2. Although there may be reduced summation for the loudest (0-dB attenuation) and softest signals (24- to 32-dB attenuation) tested, this may also be the consequence of numerical saturation effects despite an effort made to instruct the listener to avoid limiting responses to particular values. For some of the low-level signals in particular, summation may also have been incomplete due to larger mismatches in monaural loudness in the two ears. In general, the data are compatible with the findings of van Hoesel and Clark [1997], who for single-electrode stimuli also concluded that sounds are about twice as loud when presented to both ears compared with just one.
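The summation ratio reported above can be computed from raw magnitude estimates as sketched below; geometric means are the conventional average for ratio-scale loudness judgments. The numbers are invented purely for illustration.

```python
import statistics as stats

def summation_ratio(binaural, left, right):
    """Binaural-to-monaural loudness ratio at one presentation level,
    using geometric means of the magnitude estimates."""
    monaural = stats.geometric_mean(left + right)  # pool both unilateral conditions
    return stats.geometric_mean(binaural) / monaural

# Hypothetical estimates from one attenuation step (12 per condition in the
# study; 3 shown here to keep the example short):
ratio = summation_ratio(binaural=[40, 38, 42], left=[20, 21, 19], right=[20, 22, 18])
# ratio close to 2: binaural presentation sounds about twice as loud
```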
Speech in Noise – Experiment 3: Signal Detection in Noise
Several studies have recently been published reporting on speech benefits in noise for bilateral CI users [van Hoesel and Clark, 1999; Tyler et al., 2002; van Hoesel et al., 2002; Gantz et al., 2002; Müller et al., 2002; van Hoesel and Tyler, 2003]. All of these studies report data that are consistent with the idea that bilateral CI users can take advantage of the head shadow effect and attend to the ear with the better S/N, in that bilateral performance was always roughly equal to or better than performance using only the better ear. The indications for additional binaural unmasking benefits are not so clear and vary across studies as well as across subjects within studies. The comparison across studies is complicated by the fact that different methodologies were used. In many of these studies, fixed S/N levels were used in the speech tests so that ceiling effects may have dominated some of the data and therefore underestimated the binaural unmasking advantage that may otherwise have been observed. On the other hand, issues such as binaural loudness summation, listening experience with two ears compared with one, and subjective bias, potentially favour binaural listening conditions and therefore may overestimate any advantage in binaural unmasking.
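The better-ear logic underlying the head shadow benefit amounts to a simple selection rule, stated explicitly below. The dB values are illustrative only, chosen to be in line with the roughly 5-dB head shadow effects reported by van Hoesel and Tyler [2003], not taken from any individual subject.

```python
def better_ear_snr(snr_left_db: float, snr_right_db: float) -> float:
    """A bilateral listener who can attend either ear effectively
    receives the more favourable of the two signal-to-noise ratios."""
    return max(snr_left_db, snr_right_db)

# Noise on the right: the head shadow improves the left-ear S/N.
snr_left, snr_right = 3.0, -2.0        # illustrative values only
unilateral_right = snr_right           # implant on the same side as the noise
bilateral = better_ear_snr(snr_left, snr_right)
head_shadow_benefit_db = bilateral - unilateral_right  # 5.0 dB
```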
In the study by van Hoesel and Tyler [2003] an adaptive S/N method was used to determine head shadow and binaural unmasking advantages for sentences presented from in front of the listeners (0°) with interfering spectrally matched broadband noise presented at 0°, or at 90° to the left or right. Binaural loudness summation was compensated for using a 'binaural map' in the processor, and when switching between ears, subjects were given 5-min acclimatization intervals before tests were administered. Results showed a head shadow benefit of about 4–5 dB (p < 0.001) as measured by the difference in monaural SRTs when ipsilateral and contralateral noise positions at ±90° were compared. Although the head shadow is a monaural effect, the ability to monitor signals at either ear is needed for the CI user to have access to the ear with the better S/N irrespective of whether the noise is on his or her left or right side. For a monaural CI user typically using a microphone behind the ear on the implanted side, the head shadow offers an advantage only when the noise is contralateral to the implanted ear. The head shadow slightly disadvantages the listener when the noise is ipsilateral to the implanted ear compared with when the noise is directly in front of the listener. Binaural unmasking effects, defined here as the improvement in SRT using both ears compared with the better-performing ear alone, were only 1–2 dB and considerably less robust (p = 0.04) than the head shadow effects. The authors comment that despite the inclusion of 5-min acclimatization periods, they cannot rule out the possibility that the observed unmasking effects were due to reduced listening experience in the unilateral condition rather than a true binaural unmasking effect. In the normal-hearing literature, binaural speech intelligibility gains in noise have been modeled as a frequency-weighted sum of the contributions of binaural masking level differences (BMLDs) using sinusoidal tones [e.g.
Levitt and Rabiner, 1967; Zurek, 1997]. BMLDs (here defined as improvements in threshold when listening for out-of-phase sinusoids in diotic narrow-band noise compared with listing for in-phase sinusoids) of up to 10– 15 dB are obtained with normal hearing listeners [e.g. Hirsch, 1948]. Since the BMLD comparing two different binaural conditions does not require comparison with monaural conditions, it is perhaps a better candidate than free-field speech tests for assessing binaural unmasking with bilateral CIs. If such BMLDs can be demonstrated, then binaural speech intelligibility gains may also be possible. However, if BMLDs are not found, it would add weight to the hypothesis that the benefits ascribed to binaural unmasking to date may have other causes.
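The two benefit measures used in these free-field studies can be made concrete. The sketch below uses hypothetical SRT values (not data from any of the cited studies) and approximates the head shadow by comparing the two ears for a single noise position, which by symmetry mirrors the paper's definition (one ear, two noise positions):

```python
# Hypothetical speech reception thresholds (SRTs, dB S/N; lower is better).
# Noise at +90 deg: the right ear is ipsilateral, the left ear contralateral.
srt = {"left_alone": 2.0, "right_alone": 6.5, "both": 1.0}

# Head shadow advantage: monaural SRT with ipsilateral noise minus
# monaural SRT with contralateral noise (approximated here by the two
# ears for one noise side).
head_shadow = srt["right_alone"] - srt["left_alone"]

# Binaural unmasking: both ears compared with the better ear alone.
unmasking = min(srt["left_alone"], srt["right_alone"]) - srt["both"]
```

With these illustrative numbers the head shadow is 4.5 dB and the unmasking 1.0 dB, i.e. the same order as the 4–5 dB and 1–2 dB effects reported above.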
Bilateral Cochlear Implants
Fig. 3. BMLD data indicating detection thresholds for in-phase and out-of-phase 500-Hz tones in the presence of 500-Hz narrow-band noise for subjects ME1 and ME2. All signals were connected directly into the audio-input connector on a SPEAR processor programmed with a PDT strategy (see text).
Methods
BMLDs were measured using 500-Hz sinusoids in narrow-band noise, both presented directly into the stereo audio input connector of the SPEAR (bypassing the microphones). Two bilateral CI users in Melbourne, ME1 and ME2, participated in the experiment and used the PDT strategy, which for a 500-Hz sinusoid presents electrical pulses at 500 Hz. Detection thresholds were determined for pulsed 500-Hz pure tones, presented both in and out of phase at the left and right inputs of the processor, in the presence of continuous, diotic narrow-band noise centred at 500 Hz. To comply with time restrictions, a simple method of adjustment was used, which nevertheless gave good repeatability. Sinusoidal tone bursts were 300 ms in duration, including 10-ms linear rise and fall ramps, and were separated by periods of silence randomized over the range 0.5–1.5 s. The noise was held at a fixed presentation level, subjectively comparable to the loudness of everyday face-to-face conversation. Starting levels for the sinusoids were randomized to give a starting RMS S/N of 4–10 dB, after which the level of the sinusoid was continually adjusted by the listener using an unmarked dial until the signal could just be heard gating on and off in the noise. For each interaural phase condition, the average of 12 runs (ME1) or 10 runs (ME2) was taken.
Results and Discussion
The average threshold adjustment (and standard deviation) for each signal condition with each subject is plotted in figure 3. The resulting BMLDs, measured as the difference between the averaged thresholds for in- and out-of-phase sinusoids, are 2 dB (σ = 1.5) for subject ME1 and 1.5 dB (σ = 2.2) for ME2. Although combining the data from the two subjects gives a statistically significant
Audiol Neurootol 2004;9:234–246
BMLD of about 1.8 dB (ANOVA, p = 0.005), this is much less than the 10–15 dB that would typically be experienced by normal-hearing listeners (even though the ITD for out-of-phase 500-Hz sinusoids is 1 ms and therefore exceeds the largest natural head-width delay). This finding suggests that very little binaural speech unmasking would be expected with these subjects. Given that ITD sensitivity for CI users in the study by van Hoesel and Tyler [2003] was very poor for rates of stimulation above a few hundred hertz, it is possible that larger BMLDs would arise with lower-frequency sinusoids. However, even if low-frequency sinusoids do provide substantial BMLDs to CI users, the implications for speech intelligibility may be small, since very little of the speech spectrum critical to speech understanding lies at those low frequencies. It is possible that BMLD effects in bilateral implant users could result from low-rate envelope fluctuations rather than fine-timing fluctuations, in which case speech intelligibility gains would be possible.
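The BMLD computation itself is simply a difference of mean adjustment thresholds across runs. A minimal sketch, using made-up per-run threshold settings (not the subjects' raw data):

```python
from statistics import mean, stdev

# Hypothetical method-of-adjustment threshold settings in dB, one value
# per run (10 runs per interaural phase condition; illustrative only).
in_phase  = [62.1, 63.0, 61.5, 62.4, 63.2, 61.8, 62.0, 62.9, 61.9, 62.6]
out_phase = [60.3, 61.1, 59.8, 60.7, 61.4, 60.0, 60.5, 61.0, 59.9, 60.6]

# BMLD = in-phase threshold minus out-of-phase threshold in the same
# diotic noise; a positive value indicates binaural unmasking.
bmld = mean(in_phase) - mean(out_phase)
print(f"BMLD = {bmld:.1f} dB "
      f"(sd in/out: {stdev(in_phase):.1f}/{stdev(out_phase):.1f})")
```

These illustrative runs yield a BMLD of about 1.8 dB, i.e. the same order as the combined-subject result reported above.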
Localization – Experiment 4: Effects of Array Span, Signal Characteristics, Loudspeaker Location and Sound-Processing Strategy
The first published data describing localization abilities beyond simple left/right discrimination with a bilateral CI user were those of van Hoesel et al. [2002], which showed clear benefits when using both implants compared with using either side alone. Similar benefits were shown in van Hoesel and Tyler [2003] with 5 bilaterally implanted subjects. These studies show that localization in the frontal horizontal plane is greatly facilitated by using bilateral devices. The van Hoesel and Tyler study used pink-noise bursts, presented from an array of 8 loudspeakers spanning 108° in azimuth in an anechoic room. For monaural device use, RMS-averaged results for the entire array showed variable performance, both between subjects and sometimes between ears for the same subject, with RMS errors between 20 and 60°. For bilateral device use, much more consistent and substantially improved results were observed with RMS errors of only 10°. Localization performance with the better monaural condition was about three times worse than in the binaural condition. This was true when subjects were tested with their own clinical devices, which they had been using for at least 12 months, as well as for the SPEAR research processor with the PDT strategy, to which they were exposed for just 2–3 weeks. A direct comparison between clinical strategies and PDT was not considered appropriate in
that study due to confounding factors such as hardware variation and listening experience. The data in that study also indicated that errors varied as a function of loudspeaker location, and that for loudspeakers close to the centre of the array the binaural RMS error was smallest (less than 5° for most subjects).

More detailed localization and lateralization studies have been completed with two subjects, ME1 and ME2, in Melbourne. In addition to further examining the effect of individual loudspeaker locations, the effects of total array span, signal characteristics and sound-processing strategy were explored.

Methods
Localization tests always used an 8-loudspeaker array placed in a sound-treated, low-reverberation room that was not strictly anechoic. Pink-noise bursts, as used in the van Hoesel and Tyler study, as well as 50-Hz click trains were tested. Both subjects were tested with the PDT strategy and with one comparable to the commercial CIS approach (with 10 filter bands). In addition, ME1 was tested with a strategy comparable to the commercial ACE strategy (with 10 bands selected from 20) and ME2 was tested with the commercial SPEAK strategy. For the current discussion, the most important difference between the PDT strategy and the others tested is that it attempts to extract the fine-timing information in each of the frequency bands, whereas the others preserve only the envelope of the signals in those bands (see Preserving Cues). This means that PDT also preserves ITDs in the fine timing, whereas the others preserve only ITDs in the envelope. The most significant difference between ACE and SPEAK and the other two strategies (as implemented in these studies) is that they use more frequency bands for the initial analysis and then preserve only the channels with the largest envelope amplitudes.
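The RMS error metric reported throughout these localization studies can be computed in a few lines. A minimal sketch with hypothetical (target, response) angle pairs, not the paper's raw data:

```python
from math import sqrt

def rms_error(trials):
    """Overall RMS localization error in degrees from (target, response)
    angle pairs, as used for per-loudspeaker and whole-array scores."""
    return sqrt(sum((t - r) ** 2 for t, r in trials) / len(trials))

# Hypothetical responses for 4 of the 20 presentations from a
# loudspeaker at +40 deg (illustrative values only):
print(rms_error([(40, 40), (40, 65), (40, 40), (40, 90)]))
```

For these four trials (errors of 0, 25, 0 and 50°) the RMS error is about 28°; perfect responses give 0°.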
Fig. 4. a RMS localization error as a function of angle of incidence for 8 loudspeaker positions (20 repeat presentations from each) spanning 180°, for subject ME1 with pink-noise bursts, Cochlear HS-8 directional microphones behind each ear and a SPEAR processor with a PDT strategy. b RMS ILD cues measured using HS-8 microphones mounted on a KEMAR manikin for the same pink-noise bursts presented over the range 0–90°. c Scatter-plot data for pink-noise bursts with ME1 and a span of 180°. d Scatter-plot data for 50-Hz click trains with ME1 and a span of 180°.

The most significant difference between ACE (as implemented) and SPEAK is that the rate of stimulation and sampling of the envelope information was 1200 Hz for ACE and only 250 Hz for SPEAK, which means envelope ITDs may have been better represented in ACE than in SPEAK. All the strategies were implemented on a SPEAR processor (which is capable of bilateral stimulation), except SPEAK, which was implemented on ME2’s Cochlear ESPRIT behind-the-ear processors. Subjects were allowed 4 weeks of everyday experience to acclimatize to a new strategy before evaluation. The order of testing was PDT, CIS, ACE for subject ME1 and SPEAK, PDT, CIS for subject ME2. Presentation levels were roved over the range 60–68 dB SPL, measured with a hand-held meter using A-weighting and held at the position of the listener’s head in absentia. The data were collected for 20 repeat presentations from each loudspeaker, resulting in 160 presentations for each combination of signal type, signal-processing strategy and array configuration.

The salience of ILD and ITD cues with the various strategies was also evaluated using a lateralization experiment in which pink-noise bursts and 50-Hz click trains were routed directly into the audio input connectors of the sound processors. Since the microphones were bypassed in this case, additional filtering was applied to the noise bursts and click trains to approximate the spectral weighting usually imparted by the microphones’ frequency response (non-head-mounted, 0° angle-of-incidence response). After filtering, either ILDs or ITDs, but not both, were applied. Signals were presented in a 2AFC paradigm with pairs of signals with opposing left/right cues presented in random order. This means the total cue available for the pair is twice as large as that in either signal. The subjects were required to respond whether the second sound seemed to originate to the left or the right of the first sound. Forty repeat presentations were used for each cue, and a Gaussian fit was used to estimate the psychometric function. The JND was estimated as the total cue size that led to 75% correct on that function (e.g. an ITD JND of 100 µs means signal pairs with ±50 µs could be lateralized correctly 75% of the time).
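The JND estimate from the fitted psychometric function can be sketched as follows. The data values and the grid-search fit are illustrative assumptions, not the study's actual fitting procedure; the fit constrains the cumulative Gaussian through 50% correct at zero cue, as appropriate for a symmetric left/right task:

```python
from statistics import NormalDist

# Hypothetical 2AFC lateralization data: (total ITD cue in microseconds,
# proportion correct). Illustrative values only, not the subjects' data.
data = [(50, 0.55), (100, 0.68), (200, 0.86), (400, 0.97)]

def fit_sigma(data):
    """Least-squares grid fit of P(correct) = Phi(cue / sigma), a
    cumulative Gaussian through 50% at zero cue."""
    nd = NormalDist()
    return min(range(20, 2001),
               key=lambda s: sum((p - nd.cdf(c / s)) ** 2 for c, p in data))

sigma = fit_sigma(data)
jnd = sigma * NormalDist().inv_cdf(0.75)  # total cue giving 75% correct
print(f"ITD JND ~ {jnd:.0f} us")
```

For these illustrative proportions the fitted JND comes out in the 100–200 µs range typical of the click-train results in table 1.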
Results
Figure 4a shows the RMS errors for subject ME1 as a function of loudspeaker position (in degrees of arc relative to the frontal direction) with pink-noise bursts, a total loudspeaker array span of 180°, and using the PDT strategy in a SPEAR processor. For reference, chance performance for this loudspeaker configuration is about 85°. The data showing the worst performance (at 40° and 90° angle of incidence) have been circled. For comparison, figure 4b shows the RMS ILD cues measured from the same type of directional microphones as used by the subject (Cochlear HS-8) and placed behind the ears of a KEMAR manikin in the same manner. The measurements were made using the same pink noise as used in the localization task with the loudspeakers at the same locations (±13, 40, 65 and 90°). These results demonstrate that for that span, beyond about 35° the broadband ILD cue in the pink-noise signal is ambiguous and non-monotonic. Stimuli at 90° provide a very similar, and in fact slightly smaller, broadband ILD cue to those at 40° (also circled).

It is interesting that the RMS error for the loudspeaker at 65° is smaller than for either neighbouring loudspeaker. Two possible explanations are presented, both related to the broadband ILD cue shown in figure 4b. First, the ILD cue has a maximum at 65°, which may reduce the ambiguous nature of the cue for that angle of incidence. Second, if we assume that, due to the ILD cue, the subject could tell perfectly whether a sound arrived from the region 0–35° or 35–90°, but could not distinguish between stimuli within the 35–90° range, chance performance in RMS error terms is about 20° [√((25² + 0 + 25²)/3)] at 65° compared with 32° [√((50² + 25² + 0)/3)] for the two neighbouring locations. Note that by the same argument, for the loudspeakers at ±13° chance performance would be about 18° [√((25² + 0)/2)]. Although most of the errors in the lateral regions are comparable to that chance performance, at –65° the error is only 12 or 13° rather than 20°. Note that the chance values described also assume no confusions between the two regions.

Figure 4c shows the scatter plot for the pink-noise data with ME1. Indeed only moderate confusions between the regions are evident, with substantially increased confusions within each of the two lateral regions. However, it is not clear that performance in those regions is reduced to chance, since that would result in uniform response distributions there. Note also that the two loudspeakers in the central region were never confused with each other. Further examination of the data for the more lateral regions using percent-correct rather than RMS error analysis shows scores of 45, 60 and 27% at 40°, 65° and 90°, respectively (averaged for left and right regions). This suggests that the maximum in the ILD cue at 65° did contribute to improved performance, since just guessing should result in similar percent-correct scores for all three locations. For this signal and array configuration, neither ILDs in individual frequency bands, which are smaller but monotonic all the way up to 90° for 500 Hz, for example [van Hoesel et al., 2002], nor ITDs seemed to provide cues allowing the subject to resolve the ambiguity of the broadband ILD cue. It seems therefore that the broadband ILD was the main cue used by the subject with this signal and array configuration.

Fig. 5. RMS localization errors as a function of angle of incidence for 8 loudspeakers and with different spans. a, b Subject ME1 using pink noise and 50-Hz click trains, respectively. Both graphs show results for spans of 30° and 180°. The pink-noise plot for this subject further includes data for a 90° span. c, d Results for subject ME2.

Further test data with ME1 using 50-Hz click trains, rather than pink noise, are shown in the scatter plot in figure 4d. The number of confusions between loudspeakers at 40° and 90° is clearly reduced compared with the pink-noise case. The ILD curve for the click train was also measured with the KEMAR and ear-mounted microphones and showed similar spatial ILD trends to the pink noise, including a peak cue at 65° and ambiguous cues over the range 35–90°. Furthermore, a repeated test with the noise bursts filtered to match the spectral characteristics of the click train showed that the spectral differences between pink noise and click trains were not responsible for the improved performance with the latter. Instead, it appears that the added low-rate temporal information in the 50-Hz click train allowed the subject to decrease the confusion between signals from 40° and 90°, where the ILD cue is similar.
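The chance-level RMS figures used in this analysis follow directly from the assumed within-region confusion pattern; a minimal sketch, where each error list encodes the angular error for every equally likely guess:

```python
from math import sqrt

def chance_rms(errors):
    """RMS error if the subject guesses uniformly among a region's
    loudspeakers; `errors` lists the angular error for each guess."""
    return sqrt(sum(e * e for e in errors) / len(errors))

# Region 35-90 deg with loudspeakers about 25 deg apart (40, 65, 90 deg):
print(round(chance_rms([25, 0, 25])))  # target at 65 deg -> 20
print(round(chance_rms([50, 25, 0])))  # target at 40 (or 90) deg -> 32
```

This reproduces the approximately 20° and 32° chance values for the lateral region discussed above.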
Table 1. ILD and ITD JNDs for subjects ME1 and ME2 with both pink-noise bursts and 50-Hz click trains, and for three different sound-processing strategies with each subject

Subject   Strategy   ILD pink   ILD click   ITD pink       ITD click
ME1       ACE        < 1 dB     < 1 dB      unmeasurable   180 µs
ME1       CIS        < 1 dB     < 1 dB      unmeasurable   250 µs
ME1       PDT        < 1 dB     < 1 dB      unmeasurable   120 µs
ME2       SPEAK      < 1 dB     < 1 dB      unmeasurable   1000 µs
ME2       CIS        < 1 dB     < 1 dB      unmeasurable   140 µs
ME2       PDT        < 1 dB     < 1 dB      unmeasurable   170 µs

For the directional Cochlear HS-8 microphones used in many of the studies presented in this paper, the availability of ILD cues was verified using pink noise and narrow-band noise as described in [van Hoesel et al., 2002].
Figure 5 shows RMS error plots for data from subjects ME1 and ME2 using the PDT strategy. Results are shown for pink noise and 50-Hz click trains with spans of 30° and 180°, as well as 90° for ME1. For the 180° span, ME1’s performance was better for 50-Hz click trains than for the pink noise, except at 65°. For ME2, similar improvements were evident for the two centre loudspeakers, but for the two loudspeakers at ±90°, performance was actually better with the pink noise. It is possible that this subject did not benefit from the improved ITD salience in the click train because she was accustomed to using a low-rate sound-processing strategy that probably codes even envelope ITD cues relatively poorly, but it is not clear why performance with the click trains should actually be worse for her at the extremes of the array. For both subjects, the improvements in localization for the click trains compared with the pink noise are not evident using the narrower span. This could be because for the narrower span the ILD remains monotonic over the entire range, so that the ILD provides a reliable cue available for both signal types (chance performance at this span is about 14°). Also, the emphasized low-rate ITD cue in the click trains compared with the pink noise may be too small for subjects to hear with the narrow span, where loudspeakers were spaced less than 5° apart. This is supported by the lateralization data described in table 1 (discussed below).

Additional data with a span of 90° were collected using pink noise with ME1. RMS errors have been included in figure 5a. Chance performance in this case would result in an overall RMS error of about 42°. For this span (up to 45° to the left or right), it is clear from figure 4b that ILD cues were not ambiguous. Accordingly, the individual loudspeaker RMS errors are just 5–10° over most of the range, which is similar to those reported in van Hoesel and Tyler [2003] with 5 subjects tested with a comparable array span of 105° and the same pink-noise bursts. Further tests were also conducted with a 360° span. In this case, front-back confusions were expected to contribute significantly to the overall error due to the absence of pinna effects and robust overall level cues (due to roved levels). Indeed, when front-back confusions were counted as errors, the overall RMS error was about 40°, whereas when they were not counted as errors, the RMS error was reduced to 15°.

Figure 6 shows further data for various sound-processing strategies with both ME1 and ME2, using the 180° span. Each graph again shows RMS errors for individual loudspeakers and includes results for pink noise (squares) and 50-Hz click trains (triangles). Above each graph, the two RMS-averaged errors for the entire array are shown, for pink noise (first) and 50-Hz click trains (second). The top row of graphs is for three sound-processing strategies with subject ME1 and the bottom row is for ME2. The right-most graphs, with the PDT strategy, show the pink-noise and 50-Hz click-train data already discussed. For ME1, with all three strategies, the 50-Hz click trains produced a reduction of the RMS-averaged error by about one third compared with the pink noise. The differences between strategies are minor in comparison. It seems that the added low-rate ITD information in the envelope provided an advantage with ACE and CIS also. With the pink noise, many of the temporal fluctuations in the signal are at higher rates than for the 50-Hz click train, so PDT perhaps did not offer a benefit over the other strategies with that signal because ITDs are not well perceived for rates above a few hundred hertz in bilateral CI patients [van Hoesel and Tyler, 2003]. The data for ME2 also showed no clear effect of strategy. Her data furthermore showed a reduced difference between pink noise and click trains.
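Scoring a 360° array with and without front-back confusions counted as errors amounts to optionally folding targets and responses onto the frontal hemifield before computing the RMS error. A sketch under an assumed folding convention (0° front, ±90° on the interaural axis), with made-up trials:

```python
from math import sqrt

def fold_front_back(angle):
    """Fold an azimuth in degrees onto the frontal hemifield [-90, 90],
    so a response and its front-back mirror image score identically."""
    a = angle % 360
    if a > 180:
        a -= 360          # map to (-180, 180]
    if a > 90:
        a = 180 - a       # back-right -> front-right
    elif a < -90:
        a = -180 - a      # back-left -> front-left
    return a

def rms(trials, fold=False):
    f = fold_front_back if fold else (lambda x: x)
    return sqrt(sum((f(t) - f(r)) ** 2 for t, r in trials) / len(trials))

# Hypothetical (target, response) pairs with pure front-back confusions:
trials = [(30, 150), (30, 30), (-60, -120)]
```

Here `rms(trials)` counts the front-back confusions as errors (about 77°), while `rms(trials, fold=True)` scores them as correct (0°); the angles chosen are small enough that no circular wrap-around handling is needed in the unfolded difference.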
As was discussed for figure 5, it may be the case that this subject,
Fig. 6. RMS localization errors as a function of angle of incidence for 8 loudspeakers spanning 180°. Results for three different strategies are shown in the three graphs for each subject. Within each graph, results are shown for pink-noise bursts (black squares) and 50-Hz click trains (white triangles). Above each graph, RMS-averaged errors for the entire loudspeaker array are shown for pink noise (black text) followed by 50-Hz click trains (grey text). a Subject ME1. b Subject ME2.
who had considerably more exposure to the low-rate SPEAK sound-processing strategy than to the higher-rate strategies, had grown accustomed to ignoring ITD cues altogether when localizing sounds. This seems a reasonable approach with lower-rate strategies, not only because envelope ITDs are likely to be poorly coded, but also because of the increased likelihood that misleading ITDs in the fixed stimulation rate at each ear could be misconstrued as actual signal location cues. To further assess the hypothesis that click trains provide improved localization cues compared with the pink noise, table 1 shows the ILD and ITD sensitivity for those signals. It is clear that both subjects could hear small ILD cues (<1 dB) with any of the strategies and either signal.
However, ITD sensitivity was very different for pink noise and click trains. ITD JNDs for the click trains were usually between 100 and 200 µs, but neither subject could hear ITDs in the pink-noise bursts (for which performance remained below the 75% threshold criterion even with 800-µs cues). The exception to the moderately good sensitivity with the click trains was for ME2 when using her own BTE processors with a low-rate (250 pps/electrode) SPEAK strategy. In that case, the JND was notably larger, on the order of 1 ms, compared with the higher-rate strategies. The ITD JNDs for the higher-rate strategies with the 50-Hz click trains are similar to those found with electrical 50-Hz pulse trains on single-electrode stimuli (with the implant stimulation currents under direct computer control rather than routing audio signals through sound-processing strategies) for both ME1 and ME2 (unreported data). They are also comparable to ITD JNDs with electrical stimuli on single electrodes reported in van Hoesel and Tyler [2003]. For signals near 0°, a cue between 100 and 200 µs would correspond to a change in angle of around 10 or 20°. This supports the suggestion that, for the localization tests described in figure 5, the additional low-rate ITD cues available in the click trains compared with the pink noise would be of more benefit for the wide span (with loudspeakers spaced about 25° apart) than the narrow span (where they were spaced only 4° apart). It is interesting to observe that if subject ME2 did ignore the ITD cues with the higher-rate strategies in the localization task, she did not do so in the lateralization task with the same sound-coding strategies. Perhaps in the localization task, where ILDs and ITDs were simultaneously varied with signal location, the ILD always dominated, whereas in the lateralization task, where no ILDs were present in the signals containing ITDs, attention was drawn to the ITDs. If so, it seems possible that additional training with the localization task could show a difference between the signals for that subject too.

Conclusions

The data now available from bilateral CI studies leave little doubt that recipients stand to gain substantially from using both ears rather than just one, particularly for understanding speech in the presence of noise from another direction and for localization of sounds in the horizontal plane. However, the main benefits seem to be derived from level cues at the two ears rather than comparison of interaural fine timing. Data from speech studies with signals presented from loudspeakers show stronger effects from head shadow than from binaural unmasking. Furthermore, those studies may have overestimated the amount of actual unmasking because binaural listening may have other advantages. The binaural unmasking data described in this manuscript compared two binaural listening conditions instead. Detection thresholds for in-phase and out-of-phase 500-Hz sinusoidal tones in diotic noise were measured and showed minimal evidence of binaural unmasking when compared with results for normal-hearing listeners, even when using the PDT sound-processing approach for the CI users, which preserves fine-timing ITDs rather than just envelope ITDs. If ITD discrimination using CIs is a good indication of how well electrical interaural timing information is perceived, it seems plausible that the lack of binaural unmasking with the 500-Hz tone is related to the fact that substantially reduced ITD sensitivity has been observed for rates of stimulation above a few hundred hertz [van Hoesel and Tyler, 2003]. In that case, even with sound-processing approaches such as that used by PDT, we may not see all the benefits available to normal-hearing listeners due to the present inability with electrical stimulation to provide detailed timing cues at higher rates. Although the data from experiment 1 show that approximate place-matching can improve sensitivity to interaural timing cues (for low stimulation rates), small variations on the order of one or two electrodes (0.75–1.5 mm) showed similar performance, as might be expected from current-spread considerations with electrical stimulation.

The broadband ILD cue available from the behind-the-ear microphones with a bilateral CI system seems to account for much of the performance observed in the localization studies. Depending on whether the loudspeaker array employed includes regions for which this cue is ambiguous or even non-monotonic, performance can be altered substantially. This means that performance in experiments with different loudspeaker configurations and signals of varying frequency content cannot be readily compared. For the microphones and signals used in these studies, arrays spanning up to nearly ±65° will not introduce such ambiguity, and the addition of low-rate ITDs, to which subjects show sensitivity on the order of 100 µs, may be of little benefit. However, for wider spans that do include regions where broadband ILDs are not monotonically related to source azimuth, the addition of low-rate ITD cues can improve localization. For the 180° span, subject ME1 clearly demonstrated better localization with the 50-Hz click trains than with pink noise, with both commercial high-rate envelope-extracting strategies as well as with PDT. The finding that localization improved with click trains compared with pink noise for him agrees with the psychophysical result that ITD sensitivity is reduced at higher rates [van Hoesel and Tyler, 2003]. Further evidence that ITDs can be of benefit when signals contain low-rate cues comes from the lateralization task, which showed much better performance with the click trains compared with pink noise for both subjects. It is interesting that for ME2, even with higher-rate strategies, no corresponding improvement in localization performance was seen when using click trains rather than pink noise, despite the fact that in the lateralization task ITD sensitivity was much better for the click trains. It is possible that prolonged exposure to strategies with better ITD cues may be required with some subjects, particularly if they are accustomed to low-rate strategies in which even envelope ITD cues may be absent and in which ‘introduced fine-timing ITDs’ are unrelated to signal characteristics and therefore can be misleading.

Binaural loudness summation data from subject ME1 with broadband stimulation showed loudness increased by approximately a factor of 2 over a good deal of the available dynamic range. This was also found for single-electrode stimuli with two subjects in van Hoesel and Clark [1997]. This is a useful indicator for clinical loudness management with bilateral CIs. If the clinical aim is to provide a comparable range of loudness sensations, from just audible to just below discomfort, regardless of whether CI users are fitted unilaterally or bilaterally, both threshold and maximal stimulation levels will likely need to be reduced in the bilateral fitting. However, alternative approaches may be beneficial. If, for example, loudness increases due to binaural loudness summation are better tolerated than those due to monaural level increases, dynamic range increases may be possible. Further studies exploring binaural summation at very high and very low stimulation levels, as well as an improved understanding of how loudness changes with electrical stimulation in general, are likely to offer further insight into how best to allow for electrical binaural loudness summation.
Acknowledgements
Grateful acknowledgement goes to the two bilaterally implanted research volunteers, ME1 and ME2, who participated in the studies at the CRC laboratories in Melbourne, and to Laurie Cohen for his assistance in analysing the X-ray data. The author also acknowledges the helpful comments from five anonymous reviewers on earlier versions of the manuscript. This work was funded by the Cooperative Research Centre for Cochlear Implant and Hearing Aid Innovation, Australia.
References
Blauert J: Spatial Hearing: The Psychophysics of Human Sound Localization, rev ed. Cambridge, MIT Press, 1997.
Bronkhorst AW, Plomp R: The effect of head-induced interaural time and level differences on speech intelligibility in noise. J Acoust Soc Am 1988;83:1508–1516.
Bronkhorst AW, Plomp R: Binaural speech intelligibility in noise for hearing-impaired listeners. J Acoust Soc Am 1989;86:1374–1383.
Carhart R: Monaural and binaural discrimination against competing sentences. Int Audiol 1965;4:5–10.
Dirks DD, Wilson RH: The effect of spatially separated sound sources on speech intelligibility. J Speech Hear Res 1969;12:5–38.
Durlach NI, Colburn HS: Binaural phenomena; in Carterette EC, Friedman MP (eds): The Handbook of Perception. New York, Academic Press, 1978, vol 4, chapter 10.
Gantz BJ, Tyler RS, Rubinstein JT, Wolaver A, Lowder M, Abbas P, Brown C, Hughes M, Preece JP: Binaural cochlear implants placed during the same operation. Otol Neurotol 2002;23:169–180.
Häusler R, Colburn S, Marr E: Sound localization in subjects with impaired hearing. Acta Otolaryngol Suppl 1983;400:1–62.
Hawley ML, Litovsky RY, Colburn HS: Speech intelligibility and localization in a multi-source environment. J Acoust Soc Am 1999;105:3436–3448.
Hirsh IJ: The influence of interaural phase on interaural summation and inhibition. J Acoust Soc Am 1948;20:536–544.
Hirsh IJ: The relation between localization and intelligibility. J Acoust Soc Am 1950;22:196–200.
Lawson DT, Wilson BS, Zerbi M, van den Honert C, Finley CC, Farmer JC Jr, McElveen JT, Rousch PA: Bilateral cochlear implants controlled by a single speech processor. Am J Otol 1998;19:758–761.
Lawson D, Wolford R, Brill S, Schatzer R, Wilson B: 12th Quarterly Progress Report for contract 98-01, DC8-2105, 2001; http://npp.ninds.nih.gov/ProgressReports.
Levitt H, Rabiner LR: Predicting binaural gain in intelligibility and release from masking for speech. J Acoust Soc Am 1967;42:820–829.
Licklider JCR: The influence of interaural phase upon the masking of speech by white noise. J Acoust Soc Am 1948;20:150–159.
Long C: Bilateral Cochlear Implants: Basic Psychophysics; PhD thesis, MIT, Cambridge, 2000.
MacKeith NW, Coles RR: Binaural advantages in hearing of speech. J Laryngol Otol 1971;85:213–232.
Müller J, Schön F, Helms J: Speech understanding in quiet and noise in bilateral users of the MED-EL COMBI 40/40+ cochlear implant system. Ear Hear 2002;23:198–206.
Peissig J, Kollmeier B: Directivity of binaural noise reduction in spatial multiple noise-source arrangements for normal hearing and impaired listeners. J Acoust Soc Am 1997;105:1660–1670.
Rayleigh L: On our perception of sound direction. Phil Mag 1907;13:214–232.
Searle CL, Braida LD, Davis MF, Colburn HS: Model for auditory localization. J Acoust Soc Am 1976;60:1164–1175.
Stevens SS, Newman EB: Localization of actual sources of sound. Am J Psychol 1936;48:297–306.
Tyler RS, Gantz BJ, Rubinstein JT, Wilson BS, Parkinson AJ, Wolaver A, Preece JP, Witt S, Lowder MW: Three-month results with bilateral cochlear implants. Ear Hear 2002;23(suppl):80S–89S.
van Hoesel RJM, Clark GM: Fusion and lateralization study with two binaural cochlear implant patients. Ann Otol Rhinol Laryngol Suppl 1995;166:233–235.
van Hoesel RJM, Clark GM: Psychophysical studies with two binaural cochlear implant subjects. J Acoust Soc Am 1997;102:495–507.
van Hoesel RJM, Clark GM: Speech results with a bilateral multi-channel cochlear implant for spatially separated signal and noise. Aust J Audiol 1999;21:23–28.
van Hoesel R, Ramsden R, O’Driscoll M: Sound-direction identification, interaural time delay discrimination and speech intelligibility advantages in noise for a bilateral cochlear implant user. Ear Hear 2002;23:137–149.
van Hoesel RJM, Tong YC, Hollow RD, Clark GM: Psychophysical and speech perception studies: A case report on a bilateral cochlear implant subject. J Acoust Soc Am 1993;94:3178–3189.
van Hoesel RJM, Tyler RS: Speech perception and localization with bilateral cochlear implants. J Acoust Soc Am 2003;113:1617–1630.
Wightman F, Kistler DJ: The dominant role of low-frequency interaural time differences in sound localization. J Acoust Soc Am 1992;91:1648–1661.
Xu J, Xu SA, Cohen LT, Clark GM: Cochlear view: Postoperative radiology for cochlear implantation. Am J Otol 2000;21:49–56.
Zurek PM: Binaural advantages and directional effects in speech intelligibility; in Studebaker GA, Hochberg I (eds): Acoustical Factors Affecting Hearing Aid Performance, ed 2. Boston, Allyn & Bacon, 1993.
van Hoesel
Original Paper Audiol Neurootol 2004;9:247–255 DOI: 10.1159/000078394
Received: February 6, 2003 Accepted after revision: March 4, 2004
Auditory Brainstem Implant in Posttraumatic Cochlear Nerve Avulsion

V. Colletti, M. Carner, V. Miorelli, L. Colletti, M. Guida, F. Fiorino

ENT Department, University of Verona, Verona, Italy
Key Words
Auditory brainstem implant · Profound hearing loss · Head injury
Abstract

Patients aged over 12 years with neurofibromatosis type 2 are considered candidates for an auditory brainstem implant (ABI). This study extends the indication criteria of the ABI to subjects with profound hearing loss due to damaged cochleas and/or cochlear nerves (CNs) following head injuries. In our department, over the period from April 1997 to November 2002, 32 patients, 23 adults and 9 children, were fitted with ABIs. Their ages ranged from 14 months to 70 years. These patients were suffering from a variety of tumor (13 subjects) and nontumor CN or cochlear diseases (19 subjects). Six patients, 5 adults and 1 child, had profound hearing loss following head injury. Their mean age was 25 years (range: 16–48 years). Five were male and 1 female. The retrosigmoid approach was used in all 6 patients. The electrode array was inserted into the lateral recess of the fourth ventricle and correct electrode positioning was monitored with
the aid of electrically evoked auditory brainstem responses and neural response telemetry. Correct implantation was achieved in all patients. No complications were observed due to implantation surgery or related to ABI activation and stimulation of the cochlear nuclei. At activation, an average of 9.8 electrodes (range 5–13) were switched on without side effects. One to 6 additional electrodes were activated in the following sessions after time periods ranging from 2 to 16 months. All patients achieved auditory-alone-mode closed-set word recognition scores ranging from 40 to 100%; 3 had auditory-alone-mode open-set sentence recognition scores of 60–100%; 2 of these even had speech-tracking performance scores of 38 and 43 words per minute, respectively, showing an ability to engage in normal conversation and converse over the phone. The present study demonstrates that the ABI is a useful rehabilitation instrument in subjects with damaged cochleas and/or CN avulsion following head injury who are unamenable or poorly responsive to auditory rehabilitation using cochlear implants. Copyright © 2004 S. Karger AG, Basel
This paper was presented at the International Conference on Candidacy for Implantable Hearing Devices, Utrecht, The Netherlands, June 27–29, 2002.

Introduction
Correspondence: Vittorio Colletti, ENT Department, University of Verona, P.le L.A. Scuro, 10, IT–37100 Verona (Italy); Tel. +39 045 807 4275, Fax +39 045 581 473, E-Mail [email protected]

The auditory brainstem implant (ABI) affords a valuable opportunity for auditory rehabilitation in neurofibromatosis type 2 patients older than 12 years who suffer
from deafness due to bilateral disruption of the cochlear nerve (CN). Potentially, the indications for ABI may be extended to patients suffering from similar pathophysiological conditions, i.e. a ‘disconnection’ between environmental sounds and the central auditory system. There are, in fact, special categories of patients who present severe impairment of the CN, e.g. aplasia or avulsion following head injury, or severe abnormalities of the cochlea, e.g. malformations, acquired ossification or fibrosis, as may happen in head injuries. In these cases, inserting a cochlear electrode array is either impossible or futile, and lack of intervention condemns these subjects to a profound inability to hear and communicate. Hearing loss following head injury may involve any level of the auditory system and it may be conductive in 10% of patients, mixed in 25% and sensorineural in 65% of subjects [Koefoed-Nielsen and Tos, 1982]. Following severe trauma [Griffith, 1979], 17–56% of patients present variable degrees of sensorineural hearing loss (SNHL), with 8% suffering from severe or profound bilateral impairment [Kockhar, 1990], thus becoming candidates for cochlear implantation [Camilleri et al., 1999]. Cochlear implants (CIs), however, have shown conflicting results in these patients and in some cases have proved to be of little or no use [Coligado et al., 1993; Camilleri et al., 1999; Moore and Cheshire, 1999]. Six patients, 5 adults and 1 child, with profound hearing loss following head injury have undergone ABI implantation in our department over the last 5 years. The present paper is, to the best of our knowledge, the first report in the literature on the technique and the results obtained with the application of ABIs in these patients.
Materials and Methods

In our department, over the period from April 1997 to November 2002, 32 patients, 23 adults and 9 children, ranging in age from 14 months to 70 years, were fitted with ABIs. These patients were suffering from a variety of tumor (13 subjects) and nontumor CN or cochlear diseases (19 subjects). Six patients (5 males and 1 female) had profound hearing loss following head injury. Five were adults and 1 was a child, and their ages ranged from 16 to 48 years (mean 25 years). These patients constitute part of the total population of 12 subjects referred to our department with profound hearing loss following head injuries for possible CI application. All patients had been in the intensive care unit (ICU) for coma lasting 12 days to 5 months. Two patients refused surgery and 4 subjects were excluded for severe temporocerebral lesions with neurological signs and cognitive, behavioral and communication deficits.

Preoperative CT scans showed bilateral temporal bone fractures in all 6 subjects: in 4 of them, the fractures were confined to the labyrinth and in 2 subjects they involved the periotic bone (fig. 1a, b). MRI showed frontal lobe encephalomalacia in 1 subject and a frontal porencephalia in another; in the other 4 patients, no abnormalities were detected with MRI.

Electrical stimulation, using a gold ball electrode on the round window, was performed in all patients preoperatively. Under local anesthesia, a myringotomy was performed in the posterosuperior quadrant of the tympanic membrane and a gold ball electrode was thus placed under direct view in the round window niche. Electrical stimulation was performed at various intensities and frequencies and patients were asked if they perceived any sound sensation. Thereafter, objective electrophysiological recording using the electrically evoked auditory brainstem response (EABR) was performed. Biphasic pulse stimuli were delivered by a constant current stimulator to the ball electrode. The stimulating electrode was referenced to a needle electrode placed into the ipsilateral earlobe. Pulse duration was 0.1 ms per phase and its intensity ranged from 1 to 2.5 mA. The EABRs were recorded using a contralateral electrode montage with the positive electrode inserted at the vertex, the negative electrode at the contralateral earlobe and the ground at the forehead. The EEG band-pass filter was set at 1–2500 Hz. Each waveform was composed of 800–1000 samples of EEG activity over a 10-ms time base.

On the basis of round window stimulation, which did not elicit any psychoacoustic or electrophysiologic response, and the lack of results with traditional hearing aids in any of the 6 patients, we came to the conclusion that the only possible solutions for these subjects were ABIs, which were applied using a retrosigmoid (RS) approach.
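The stimulus-locked averaging underlying EABR acquisition can be illustrated with a short sketch. This is a simplified illustration only: the evoked waveform shape, noise level and sampling rate below are assumptions for demonstration, not measured data, and "samples" is read here as stimulus-locked sweeps that are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 100_000                      # Hz, assumed sampling rate (illustrative)
t = np.arange(0, 0.010, 1 / fs)   # 10-ms time base, as in the protocol

# Hypothetical evoked response buried in ongoing EEG noise (synthetic data)
evoked = 0.4 * np.sin(2 * np.pi * 500 * t) * np.exp(-t / 3e-3)

def record_sweep():
    """One stimulus-locked sweep: evoked response plus zero-mean noise."""
    return evoked + rng.normal(0.0, 1.0, t.size)

# Average 900 sweeps (the protocol used 800-1000); the evoked response is
# phase-locked to the stimulus and survives, while the residual noise
# shrinks roughly as 1/sqrt(N).
sweeps = np.stack([record_sweep() for _ in range(900)])
eabr = sweeps.mean(axis=0)

single_err = np.abs(record_sweep() - evoked).max()
avg_err = np.abs(eabr - evoked).max()
assert avg_err < single_err / 10
```

The design point is simply that averaging cancels activity that is not time-locked to the stimulus, which is why several hundred sweeps are needed to resolve sub-microvolt evoked potentials.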
Subjects

A.S., a 35-year-old male, reported head trauma in 1983 with right profound hearing loss. In 1986, he again suffered a head injury and was in a coma for 3 months. After discharge, he complained of profound hearing loss with tinnitus and vertigo and he was assessed as suitable for CI after having tried traditional hearing aids without success. CT scan showed a bilateral labyrinthine fracture and MRI revealed frontal lobe encephalomalacia. Auditory brainstem responses (ABRs) and round window electrical stimulation elicited no responses. The application of an ABI was therefore planned. Surgery was performed on April 3, 2001, with insertion of the ABI on the right side. One month later, 5 electrodes could be activated and the patient was aware of environmental sounds. He continued to do well and at 1 year, with 8 active electrodes, he achieved 60% in open-set sentence recognition in the auditory-alone mode, with fair conversational abilities.

D.M., a 32-year-old male, sustained a head injury in 1988 and was in the ICU for a coma lasting 12 days. After discharge, he complained of profound hearing loss and bilateral tinnitus, along with vertigo. He was referred to our department for the application of a CI. CT scan revealed a bilateral labyrinthine fracture, whereas MRI showed no brain lesions and an apparently normal eighth cranial nerve. Electrical stimulation, using a gold ball stimulating electrode on the round window, elicited no EABRs. Since traditional hearing aids had been tried without success, we came to the conclusion that
the only possible solution for this patient was an ABI, which was applied on the left side on May 28, 2001. At ABI activation, the patient was capable of detecting sounds and words with 13 active electrodes. After about 14 months, with 16 active electrodes, he scored 100% in auditory-alone-mode open-set sentence recognition. His conversation is now normal and he can use the telephone without difficulty; he also reports suppression of the tinnitus on the side with the ABI.

C.L., a 24-year-old female, suffered a head injury in 1983 and was in a coma for 2 months. She complained of dizziness, profound bilateral hearing loss, tinnitus, vertigo and right seventh cranial nerve palsy and was referred to our department for the application of a CI since hearing aids had been tried without success. CT scan showed a bilateral transverse fracture of the temporal bone and MRI excluded eighth cranial nerve or brainstem abnormalities. ABRs and round window electrical stimulation elicited no responses and an ABI was applied on the right side on September 25, 2001. When the ABI was activated with 11 electrodes, the patient was able to detect sounds and words. One year later, with 13 active electrodes, she scored 45% in open-set sentence recognition in the auditory-alone mode and was able to sustain simple ordinary conversations and to conduct very simple telephone conversations with relatives.

D.G.M., a 32-year-old male, suffered a head injury in 1996 and was in a coma for 5 months. He had profound hearing loss, bilateral tinnitus, vertigo, blindness and paraplegia. CT scans showed a bilateral labyrinthine fracture and MRI revealed frontal lobe porencephalia. ABRs and round window electrical stimulation elicited no responses and traditional hearing aids had been tried without success. An ABI was applied on the right side on October 4, 2001. In November, with 6 active electrodes, he was capable of detecting sounds of different frequencies. One year later, with 10 active electrodes, he scored 42% in closed-set word recognition in the auditory-alone mode. He is acquiring perceptual skills very slowly and with great difficulty due to his blindness.

C.R., a 48-year-old male, sustained a head injury in 2000, and was in a coma for 2 months. He was left with profound hearing loss, bilateral tinnitus and vertigo. He was referred to our department for the application of a CI. CT scans revealed a bilateral labyrinthine fracture, whereas MRI revealed no brain lesions and an apparently normal eighth cranial nerve and empty cochleas. ABRs and round window stimulation elicited no responses and an ABI was inserted on the right side on June 3, 2002. At ABI activation, 12 electrodes were switched on with detection of environmental sounds. Three months later, with 13 active electrodes, he scored 75% in the auditory-alone-mode closed-set word recognition test and achieved 20% correct responses in the auditory-alone-mode open-set sentence recognition test.

P.G., a 16-year-old male, had worn bilateral hearing aids since the age of 3 years for bilateral chronic otitis media with profound hearing loss. In April 2002, following a car accident, he sustained a head injury, and was in a coma for 2 months. He was left with right anacusis and profound left hearing loss, bilateral tinnitus and vertigo. He was referred to our department for the application of a CI as he was no longer able to obtain any useful auditory results with his hearing aids. CT scan showed a bilateral transverse temporal bone fracture with hypodense tissue in the right middle ear. MRI revealed no brain lesions and an apparently normal eighth cranial nerve and empty cochleas. ABRs and round window electrical stimulation elicited no
Fig. 1. a CT scan (axial view) showing a fracture line (arrow) interrupting the basal and intermedius gyrus of the cochlea in the temporal bone of a patient (D.M.) following head injury. b CT scan (coronal view) showing a fracture line involving the cochlea, close to the internal meatus and reaching the promontory of the temporal bone of a patient (C.L.) after head injury.
responses. The ABI was inserted on the left side on September 2, 2002. At ABI activation, 12 electrodes were switched on with detection of environmental sounds, and 1 month later, with 14 electrodes activated, he scored 80% in the auditory-alone-mode closed-set word recognition test and 10% in the auditory-alone-mode open-set sentence recognition test.

Surgical Technique

RS application of the ABI has been extensively described in previous papers [Colletti et al., 2001, 2002] and will be briefly outlined here. Microsurgical dissection is necessary to reach the cochlear nucleus area. The dorsal cochlear nucleus, which is the most accessible
portion of the cochlear nuclear complex for electrical stimulation, is positioned on the surface of the lateral recess of the fourth ventricle. Various landmarks are normally used to locate the foramen of Luschka, which affords access to the fourth ventricle. Usually, the choroid plexus, which covers the foramen of Luschka, lies within a triangle formed by the eighth nerve, the ninth nerve and the lip of the foramen of Luschka. To approach the fourth ventricle, the arachnoid over the foramen is cut, and the flocculus and the choroid plexus retracted. To this end, rostromedial retraction of the cerebellum is necessary. The choroid plexus projecting from the lateral recess and overlying the cochlear nucleus complex is followed and the entrance to the lateral recess is found. Opening of the lateral recess is confirmed by the outflow of cerebrospinal fluid and the cochlear nucleus complex is identified since it bulges in the floor of the lateral recess. The electrode array is then completely inserted into the lateral recess with the aid of a small forceps (fig. 2). The correct position of the electrode is then estimated with the aid of EABRs and neural response telemetry (NRT). When reliable EABRs are obtained, the dacron mesh surrounding the electrode carrier is bent back towards the brainstem to stabilize the implant device. In addition, muscle plugs are inserted into the foramen to improve stabilization. The electrode wire is inserted in a groove drilled in the posterior wall of the petrous bone for better stabilization of the implant.

Fig. 2. The ABI device is located in front of the ninth cranial nerve and is about to be inserted in the foramen of Luschka in the floor of the lateral recess, in the area corresponding to the cochlear nuclei, via the RS approach in a patient (D.M.) with CN atrophy following head injury.

Nucleus 24 Channel ABI

The device is based on the Nucleus 24M cochlear implant system. It contains a receiver/stimulator package and a flat silicone carrier (3 × 8 mm) with 21 platinum electrodes arranged in 3 rows. The individual electrode diameter is 0.7 mm. A ‘T’-shaped dacron mesh is attached to the electrode carrier to stabilize the device intraoperatively and to permit connective tissue growth for further postoperative stabilization. The internal magnet of the receiver/stimulator is removable and can be replaced by a silicone plug under local anesthesia. This enables MRI to be performed for tumor monitoring when necessary.
EABR Recording

For EABR measurement, an MK15 system (Amplaid, Milan, Italy) is used in combination with the diagnostic programming system from the Cochlear Corporation, which functions as an electrical stimulus generator [for details, see Colletti et al., 2002]. After ABI insertion, the cochlear nuclei are stimulated at a rate of 20 Hz using bipolar modality stimuli of 150 µs per phase. The acquisition parameters are 500 sweeps averaged and filtered using a 100- to 2500-Hz bandpass. Stimulus intensity, expressed in arbitrary units (current level: CL), is progressively increased to check the integrity of the ABI and to verify the absence of interference from other monitoring systems. Bipolar stimulation of sample electrodes in the different portions of the array is performed to confirm the correct position of the ABI. Initially, electrodes placed on opposite portions of the electrode array are activated in order to map a large zone of the cochlear nucleus area. Thereafter, sample electrodes from different portions (distal, median and proximal) of the array are activated to map the area more selectively and verify contact between the electrodes and the cochlear nucleus. Neural responses are distinguished from stimulus artifacts by reversing the polarity of the stimulus current: inversion of the stimulus artifact without inversion of the neural response can thus be observed. If there is no response, the position of the electrode carrier is modified to find the correct place.

Neural Response Telemetry

Electrically evoked cochlear nucleus action potentials (ECNuPs) are recorded using the NRT software provided by the Cochlear Corporation. Radiofrequency pulses are transmitted by the speech processor interface to the implant. To measure the evoked responses, a second series of radiofrequency pulses is generated, in which information on the magnitude of the voltage recorded by each electrode is coded and transmitted back to the speech processor interface.
These voltages are averaged and analyzed and the resulting ECNuPs displayed on the computer monitor. To extract the ECNuPs, it is necessary to use a subtraction technique, which makes it possible to erase the stimulus artifact. A single biphasic pulse (25 µs/phase) is followed by a pair of biphasic pulses. The latter stimulation does not allow any neurogenic response due to the refractory period. Therefore, subtraction of the single biphasic pulse from the pair of biphasic pulses yields the neurogenic response [de Sauvage et al., 1983; Brown et al., 1998].

Processor Fitting and Programming

Activation of the ABI occurs approximately 6 weeks postoperatively. Because of the possible risks involved in stimulating brainstem structures, this is done in the ICU with electrocardiographic monitoring and the assistance of an anesthetist. The threshold level and maximum comfortable level of each electrode are first assessed to select the optimal electrode configuration. An initial investigation is carried out in the monopolar mode to identify the electrodes that elicit auditory sensations. Channels that induce unpleasant sounds or nonauditory effects are excluded. The decision whether to proceed with exploration of other electrodes depends on the level of stimulation of the electrodes capable of inducing side effects.
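The refractory-period subtraction described under Neural Response Telemetry can be illustrated with a simplified sketch. The waveforms below are synthetic and the amplitudes, time constants and sampling rate are illustrative assumptions, not the NRT software's actual processing: the probe-alone recording contains artifact plus neural response, while the probe window of the masked recording contains artifact only because the neurons are refractory, so subtracting the two isolates the neural component.

```python
import numpy as np

fs = 20_000                       # Hz, assumed sampling rate (illustrative)
t = np.arange(0, 0.002, 1 / fs)   # 2-ms analysis window (illustrative)

# Synthetic components (illustrative shapes, not measured data)
artifact = 5.0 * np.exp(-t / 1e-4)                                # fast-decaying stimulus artifact
neural = 0.5 * np.sin(2 * np.pi * 1500 * t) * np.exp(-t / 5e-4)   # ECNuP-like wave

# Recording 1: probe pulse alone -> artifact plus neural response
single = artifact + neural

# Recording 2: the probe is preceded by a masker pulse, so the probe falls
# within the neural refractory period and its window contains artifact only
pair_probe_window = artifact

# Subtraction cancels the (identical) artifact and leaves the neural response
ecnup = single - pair_probe_window
assert np.allclose(ecnup, neural)
```

The technique works only because the electrical artifact is deterministic and identical across the two conditions; any component that differs between them (the neural response) survives the subtraction.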
Unlike the situation with CIs, there is no definite tonotopic relationship between the ABI electrode array and the human cochlear nucleus. Place-pitch scaling and ranking are the two procedures we used to determine an ABI recipient’s perception of pitch and to define the appropriate tonotopic order of the electrodes [Colletti et al., 2001, 2002]. Pitch scaling is performed once the threshold level and CL have been established. During pitch scaling, electrodes are stimulated at CL, and the patient is asked to define the sound by assigning to it a sharpness rating between 1 (lowest) and 100 (highest). The software presents each channel 10 times. Once the test is completed, the average and standard deviation for each channel are displayed. A suggested tonotopic order of the channels is also provided. Once scaling has been done, pitch ranking follows. During pitch ranking, 2 electrodes are stimulated in succession and the patient is asked to indicate which sound is higher in pitch. This information enables the technician to achieve the most appropriate tonotopic arrangement for the subject. When completed, the correct tonotopic position of the ordered electrodes is checked by sweeping all the electrodes. Processor programming is then done according to the pitch scaling and ranking results. All active electrodes in the array are assigned to the filters in accordance with the pitch-ranked order, that is, the highest-pitch electrode is assigned to the highest-frequency filter and the lowest-pitch electrode is assigned to the lowest-frequency filter. The stimulus rate depends on the encoding strategy, i.e. 250 Hz for spectral peak coding and 900 Hz for the Advanced Combination Encoder.

Speech Perception Evaluation

Patients are monitored regularly for assessment of the efficacy and safety of their implants.
One month, 3 months, 6 months and 1 year after activation, and annually thereafter, patients return to our department for medical follow-up, reprogramming of their speech processors and speech perception testing. Perception tests include: (1) recognition of environmental sound or sound detection test; (2) closed-set vowel confusion test; (3) closed-set consonant confusion test; (4) closed-set word recognition in the vision-only mode (lip-reading), sound-only mode, and sound-plus-vision mode; (5) open-set sentence recognition in the vision-only mode, sound-only mode, and sound-plus-vision mode, and (6) speech-tracking test. The speech-tracking test is administered by the therapist in the auditory-alone mode using the voice at an intensity of approximately 70 dB SPL, as measured by means of a phonometer in a noninsulated environment. A story is read to the patient and he/she is invited to repeat the words correctly. The number of words correctly repeated by the patient in 1 min is then recorded. Here the results of the auditory-alone-mode closed-set word recognition, auditory-alone-mode open-set sentence recognition and the speech-tracking tests are reported. Special attention is paid to the occurrence of nonauditory effects, such as dizziness or tingling sensations, at activation. Postoperative auditory performance assessment is based upon a battery of appropriate closed- and open-set measures included in The Manual of Auditory Rehabilitation [Mecklenburg et al., 1997].
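The pitch-based electrode-to-filter assignment described under Processor Fitting and Programming can be sketched as follows. The electrode labels, scaling values and filter bands are hypothetical illustrations, not clinical data or the fitting software's actual interface.

```python
# Hypothetical pitch-scaling means per electrode (1 = lowest, 100 = highest
# sharpness), averaged over the 10 presentations per channel described above.
pitch_scaling = {"E3": 18.2, "E8": 55.0, "E12": 34.7, "E17": 81.4}

# Hypothetical analysis filter bands, ordered low to high frequency (Hz).
filter_bands = [(200, 500), (500, 1000), (1000, 2000), (2000, 4000)]

# Order electrodes from lowest to highest perceived pitch, then map the
# lowest-pitch electrode to the lowest-frequency filter, and so on upward.
ordered = sorted(pitch_scaling, key=pitch_scaling.get)
assignment = dict(zip(ordered, filter_bands))

assert ordered == ["E3", "E12", "E8", "E17"]
assert assignment["E17"] == (2000, 4000)  # highest pitch -> highest filter
```

In practice the suggested order from pitch scaling would be refined by the pairwise pitch-ranking judgments before the final mapping is programmed; the sketch shows only the final sort-and-assign step.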
Results
After ABI insertion, intraoperative EABR and NRT investigations were performed in all patients to determine correct placement of the implant. Intraoperative EABR recordings were elicited by monopolar stimulation in all patients for the different electrodes tested (at least 15) at stimulation intensities ranging from 110 to 200 CL. Latencies ranged from 0.8 to 1.6 ms for the first positive wave, from 1.4 to 3 ms for the second positive wave and from 2.4 to 3.3 ms for the third positive wave. The latencies were compatible with activation of the auditory pathways yielding the conventional acoustically evoked ABR waves III, IV and V, i.e. cochlear nucleus, superior olivary complex and lateral lemniscus. The amplitude of the waves ranged from 150 to 620 nV. ECNuPs, by means of NRT, were recorded at different stimulation intensities (140–200 CL) using bipolar stimulation for the different electrodes activated (at least 8 pairs of electrodes). The latencies of the negative peak ranged from 0.3 to 0.4 ms. A positive potential was evident in all patients at 0.5–0.7 ms.

Patients were kept under observation in the ICU and returned to the ENT Department the day after the operation. CT scans with a bone algorithm reconstruction technique (1-mm slice thickness) with 3-dimensional reconstruction, and Stenver’s projection X-rays were performed to evaluate electrode placement before discharge. Imaging showed the ABI in the proper position with no displacement in any of the patients. On average, patients were hospitalized for 12 days after implantation. No major postoperative complications were observed. In particular, no cerebrospinal fluid leak occurred. One patient developed mild cerebellar edema, as evidenced by CT scan, which had subsided by the time of discharge. The ABIs were activated approximately 6 weeks after surgery.
Because of the possible risks involved in stimulating brainstem structures, activation was done in the ICU with electrocardiographic monitoring and the assistance of an anesthetist. During the first tune-up session, all subjects described their auditory perception after stimulation of several electrodes. The number of electrodes activated without side effects varied from 5 to 13 (average: 9.8). Side effects were induced in all 6 subjects. In particular, all 6 patients complained of dizziness; tingling sensations in the throat and arm were reported by 3 and 2 subjects, respectively. No contralateral side effects were observed. Nonauditory sensations decreased in magnitude over time.
Fig. 3. Individual auditory performance in patients implanted with an ABI. Results are reported as percentage scores. Speech tracking is reported as arbitrary scores (70 words/minute corresponding to 100%). WR = Closed-set word recognition; SR = open-set sentence recognition; ST = speech tracking.
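The speech-tracking normalization used in the figure (70 words/min corresponding to 100%) amounts to a simple linear rescaling; a minimal sketch, with the function name and default ceiling chosen here for illustration:

```python
def tracking_percent(words_per_min: float, ceiling: float = 70.0) -> float:
    """Convert a speech-tracking rate (words/min) to the percentage scale
    of figure 3, where 70 words/min corresponds to 100%."""
    return 100.0 * words_per_min / ceiling

assert tracking_percent(70) == 100.0
assert round(tracking_percent(43), 1) == 61.4  # e.g. the best tracking score
```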
Pitch sensation was projected on the array so that the most medial and caudal electrodes elicited higher and the most lateral and cranial electrodes lower pitch perception. At the latest follow-up, there has so far been no evidence of electrode migration or technical failure, and stimulation parameters have shown no changes in pitch perception, with only slight (±5%) changes in CL observed between 1 and 5 months after initial activation. All subjects were capable of identifying environmental sounds and distinguishing them from speech signals in the first session. In the following sessions, performed over a period of time ranging from 2 to 16 months, an additional 1–6 electrodes were activated in the 6 patients, making a total of 8–16 electrodes (average: 12.6). All patients but 1 (D.G.M.) presented progressive lip-reading enhancement, scoring 100% in the audiovisual mode in all speech recognition tests. In the auditory-alone mode, the patients have also been improving their skills continuously during the follow-up. One month after activation, in the auditory-alone-mode closed-set word recognition test, 5 subjects scored between 22 and 45% and 1 patient achieved 80% correct responses. At the latest test performed, 1 patient scored 42%, 1 reached 75%, another subject achieved 80%, 1 scored 90%, 1 subject 92% and 1 patient 100%.
In auditory-alone-mode open-set sentence recognition, only 2 patients were able to achieve a score within 6 months of activation, registering 10 and 20% correct responses, respectively. In the session performed at 6 months, another 3 patients achieved scores of 10, 35 and 80% correct responses, respectively. At 1 year, these 3 patients scored between 45 and 100%. Another subject failed to achieve a score at these observation times; in the other 2 patients, the follow-up was less than 6 months and therefore no comparable data could be obtained. In the speech-tracking test at 1 year, only 3 patients managed to achieve scores, ranging from 12 to 43 words per minute. They are able to converse normally and 2 of them use the telephone. Figure 3 and table 1 give details of the auditory results obtained with the ABIs by the various subjects during the latest rehabilitation session and the main data relating to the electrodes activated, the nonauditory effects observed and the communication skills achieved.
Discussion
Road accident injuries are steadily increasing in number. An ISTAT [2000] survey estimates that in Italy 305,000 persons annually suffer skull fractures or intracranial injuries. Temporal bone injuries have been reported to occur in about 18–22% of all skull fractures, i.e. about 60,000 cases/year, and 8% of these have profound SNHL. The sensorineural auditory damage is attributable above all to 4 main mechanisms [Lyos et al., 1995]: (1) disruption of the bony and membranous labyrinth as a result of labyrinthine fracture; (2) concussion injury to the inner ear without labyrinthine fractures; (3) perilymphatic fistula, and (4) injury to the CN and central auditory pathways. This latter mechanism involving the CN is the least known and may be postulated considering that all the cranial nerves departing from the brain are more or less tightly attached to the foramina of the skull and therefore are extremely vulnerable to acceleration and deceleration of the brain relative to the skull, as happens during head injury. In addition, it must be recalled that the CN is particularly sensitive to injury because of the length of its central portion (8–10 mm), which presents a myelin sheath that is less compact than the peripheral myelin [Sekiya and Moller, 1988]. Makashima et al. [1976] furnished histological evidence of CN involvement following head injuries in the form of documented hemorrhage and laceration of the CN fibers.
Table 1. Patients with hearing loss following head injury and implanted with ABI

| Patient | Age | Sex | Head injury | Surgery | Follow-up, months | Total active electrodes | Stimulation | Closed-set word recognition, % | Open-set sentence recognition, % | Speech tracking, words/min | Nonauditory side effects | Hearing performances |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A.S. | 35 | m | 1983, 1986 | 04/03/2001 | 16 | 8 | mon. ACE | 92 | 60 | 19 | dizziness [14, 16–22]; tingling sensation: throat [2], arm [11–13, 15] | fair conversational ability |
| D.M. | 32 | m | 1988 | 05/28/2001 | 14 | 16 | mon. ACE | 100 | 100 | 43 | dizziness [18–22] | normal conversation, telephone use without difficulty, tinnitus suppression |
| C.L. | 24 | f | 1983 | 09/25/2001 | 14 | 13 | mon. ACE | 90 | 45 | 12 | dizziness [14, 16–21]; tingling sensation in the arm [20–22] | simple ordinary conversation, telephone use for simple conversation |
| D.G.M. | 32 | m | 1996 | 10/04/2001 | 12 | 10 | mon. SPEAK | 42 | 0 | 0 | dizziness [11, 13–16]; tingling sensation in the arm [17–22] | difficulty in rehabilitation due to blindness |
| C.R. | 48 | m | 2000 | 06/03/2002 | 4 | 13 | mon. ACE | 75 | 20 | 0 | dizziness [14, 16–22] | |
| P.G. | 16 | m | 2002 | 09/02/2002 | 2 | 14 | mon. SPEAK | 80 | 10 | 0 | dizziness [2, 5, 8, 11, 14, 17, 20] | |

m = Male; f = female; mon. = monopolar; ACE = Advanced Combination Encoder; SPEAK = spectral peak coding. Test scores are the latest results achieved in the auditory-alone mode.
Different treatments have been proposed for traumatic hearing loss. In conductive or mixed hearing loss, surgery is advisable, e.g. myringoplasty or ossiculoplasty; in mild to severe SNHL, hearing aids are employed, while in profound SNHL, CIs are utilized. The results with CIs may, however, be unsatisfactory in CN injuries: a CI cannot adequately rehabilitate hearing loss if, in addition to the cochlea, the CN is also involved in the injury. Coligado et al. [1993] reported that 1 patient fitted with a CI for SNHL after head injury performed well below the average user. Moore and Cheshire [1999] reported that 1 out of 3 patients was unable to use the CI adequately. Camilleri et al. [1999] studied 7 patients fitted with a CI after head injury and reported that 2 subjects had to be explanted because of seventh cranial nerve stimulation, 2 had poor results and 1 did not use the device. In particular, attention should be directed to osteoneogenesis, which may follow hemorrhage into the cochlear lumen or may result from a reaction at the point where the fracture line involves the cochlea. In 3 out of 7 patients, Camilleri et al. [1999] observed partial or total obliteration of the cochlea with failure of the CI. Another problem that must be borne in mind is the rate of facial nerve stimulation in patients with a CI: in a series of 459 Nucleus device operations in the general population [Cohen et al., 1988], only 4 patients experienced facial nerve stimulation. By contrast, Camilleri et al. [1999] reported facial nerve stimulation in 2 out of 7 patients with a CI following temporal bone fracture. This higher rate of facial nerve stimulation is assumed to be due to current leaking from the electrode through the low resistance of the fracture line and stimulating the facial nerve in the region of the geniculate ganglion or in its horizontal portion. When this occurs even at very low stimulation thresholds, the CI must be explanted, as reported by Camilleri et al. [1999], leaving the patient without hearing. When poor results are obtained with a CI, the presence of a retrocochlear lesion that prevents activation of the central auditory system should be suspected. We believe that when a retrocochlear injury is suspected, an ABI should be indicated for auditory rehabilitation. In this pathological condition, the following selection criteria for the use of an ABI may be defined: (1) evidence on CT scan and MRI of distortion of the cochlear anatomy due to fracture, ossification or fibrosis of the cochlea; (2) no response to round window stimulation with suspected CN avulsion; (3) no severe residual neurological disorders, i.e. cerebral lesions with cognitive, behavioral and communication deficits, and, last but not least, (4) patient motivation.
Auditory Brainstem Implant in Posttraumatic Cochlear Nerve Avulsion
Audiol Neurootol 2004;9:247–255
The degree of spiral ganglion survival may be investigated preoperatively using promontory electrical stimulation, with evaluation of subjective acoustic responses or with objective electrophysiological recordings (EABR or electrical middle latency responses). Behavioral testing has been widely employed in the past [Gray and Baguley, 1990], and the absence of any acoustic sensation has been considered a negative predictor for cochlear implantation [Kileny et al., 1992]. The value of this test has been questioned by some authors, and many centers do not make routine use of promontory stimulation, reserving it for selected complex cases [Gantz et al., 1993]. Other authors [Smith and Simmons, 1983; Kileny and Kemink, 1987; Fifer and Novak, 1991; Mason et al., 1997; Nikolopoulos et al., 2000] prefer electrophysiological recording with study of input/output functions as a predictor of CN survival. Electrophysiological promontory testing has also been criticized for a possible lack of sensitivity [Nikolopoulos et al., 2000], and the absence of a response has not been considered a contraindication for implantation. Possible causes of variability in promontory stimulation are the presence of stimulus artifact, the discomfort caused by the stimulating current and the use of a needle electrode. We have attempted to reduce these sources of variability by stimulating the spiral ganglion cells more selectively with a ball electrode placed on the round window and recording the EABR. This electrode montage is certainly closer to the stimulation target than the needle electrode and has the advantage of reducing stimulation of the sensitive nerve fibers of the promontory. Kileny et al. [1992] demonstrated a lower threshold and a larger dynamic range of responses with round window stimulation than with promontory stimulation.
A preliminary investigation in subjects with normal CNs and in patients with a severed nerve (previous removal of an acoustic neuroma) [Colletti et al., unpubl. obs.] allowed us to validate our method of electrical stimulation at the round window. We now use this test routinely in all candidates for CI and ABI, and we feel that it is particularly useful in selected complex cases, such as severe trauma with temporal bone fractures. In such patients, the absence of both subjective and electrophysiological responses to electrical stimulation is highly suspicious for disruption of the CN, or for the impossibility of stimulating it to provide auditory rehabilitation.
We therefore decided to implant an ABI in the patients who participated in the present investigation. An additional very important reason to use this device was the presence of fractures crossing the labyrinth, since insertion of a cochlear electrode in these subjects is not completely free from possibly serious complications, such as meningitis. In the present patient population, auditory performance with the ABI is very good and better than the auditory outcomes observed in patients with CN tumors and nontumor diseases, i.e. CN aplasia, complete cochlear ossification and auditory neuropathy, treated in our department with an ABI [Colletti et al., in press]. No early or late complications related to the ABI were observed, and we therefore believe that the indications for an ABI should be extended to temporal bone fractures with damage to the auditory nerve. All patients achieved scores in the auditory-alone-mode closed-set word recognition test ranging from 42 to 100%, and 4 out of 6 had auditory-alone-mode open-set sentence recognition scores ranging from 10 to 100%. Two out of 6 patients were able to engage in normal conversation, achieved speech-tracking performance levels within a few months of ABI activation and were even capable of conversing over the telephone. These results, achieved in what is admittedly a very small patient sample, but which are so much better than anything we could reasonably have expected, confirm that the ABI is a distinct breakthrough in the auditory rehabilitation of patients with damaged CNs and fractured or fibrous cochleas following head injuries.
References

Brown CJ, Abbas PJ, Gantz BJ: Preliminary experience with neural response telemetry in the nucleus CI24M cochlear implant. Am J Otol 1998;19:320–327.
Camilleri AE, Toner JG, Howarth KL, Hampton S, Ramsden RT: Cochlear implantation following temporal bone fracture. J Laryngol Otol 1999;113:454–457.
Cohen NL, Hoffman RA, Stroschein M: Medical or surgical complications related to the Nucleus multichannel cochlear implant. Ann Otol Rhinol Laryngol 1988;S135:8–13.
Coligado EJ, Wiet RT, O’Connor CA, et al: Multichannel cochlear implantation in the rehabilitation of post-traumatic sensorineural hearing loss. Arch Phys Med Rehabil 1993;74:653–657.
Colletti V, Fiorino F, Carner M, et al: Hearing habilitation with auditory brainstem implantation in two children with cochlear nerve aplasia. Int J Ped Otorhinol 2001;60:99–111.
Colletti V, Fiorino F, Carner M, et al: Auditory brainstem implantation: The University of Verona experience. Otolaryngol Head Neck Surg 2002;127:84–96.
Colletti V, Fiorino F, Carner M, et al: Auditory brainstem implant (ABI): New frontiers in adults and children. Otol Neurotol, in press.
de Sauvage RC, Cazals Y, Erre Y, Aran JM: Acoustically derived auditory nerve action potential evoked by electrical stimulation: An estimation of the waveform of single unit contribution. J Acoust Soc Am 1983;73:616–627.
Fifer RC, Novak MA: Prediction of auditory nerve survival in humans using the electrical auditory brainstem response. Am J Otol 1991;12:350–356.
Gantz BJ, Woodworth GG, Knutson JF, et al: Multivariate predictors of success with cochlear implant. Adv Otorhinolaryngol 1993;48:153–167.
Gray RF, Baguley DM: Electrical stimulation of the round window: A selection procedure for single-channel cochlear implantation. Clin Otolaryngol 1990;15:29–34.
Griffith MV: The incidence of auditory and vestibular concussion following minor head injury. J Laryngol Otol 1979;93:253–265.
ISTAT: Statistiche degli incidenti stradali – Anno 2000. ISTAT (Istituto di Statistica Italiano), 2000.
Kileny PR, Kemink JL: Electrically evoked middle latency auditory evoked potentials in cochlear implant candidates. Arch Otolaryngol Head Neck Surg 1987;113:1072–1077.
Kileny PR, Zwolan T, Zimmerman-Phillips S, et al: A comparison of round window and transtympanic promontory electrical stimulation in cochlear implant candidates. Ear Hear 1992;13:294–299.
Kockhar MS: Hearing loss after head injury. Ear Nose Throat J 1990;69:537–542.
Koefoed-Nielsen B, Tos M: Post-traumatic sensorineural hearing loss: A prospective long-term study. ORL 1982;44:206–215.
Lyos AT, Marsh MA, Jenkins HA: Progressive hearing loss after transverse temporal bone fracture. Arch Otolaryngol Head Neck Surg 1995;121:795–799.
Makishima K, Sobel S, Snow JB: Histopathological correlates of otoneurologic manifestations following head trauma. Laryngoscope 1976;86:1303–1313.
Mason SM, O’Donoghue GM, Gibbin KP, et al: Perioperative electrical auditory brainstem response in candidates for pediatric cochlear implantation. Am J Otol 1997;18:466–471.
Mecklenburg DJ, Dowell R, Jenison W: The Manual of Auditory Rehabilitation. Basel, Cochlear AG, 1997.
Moore A, Cheshire JM: Multi-channel cochlear implantation in patients with a post-traumatic sensorineural hearing loss. J Laryngol Otol 1999;113:34–38.
Morgan WE, Cocker NJ, Jenkins HA: Histopathology of temporal bone fractures: Implications for cochlear implantation. Laryngoscope 1994;104:426–432.
Nikolopoulos TP, Mason SM, Gibbin KP, et al: The prognostic value of promontory electrical auditory brainstem response in pediatric cochlear implantation. Ear Hear 2000;21:236–241.
Sekiya T, Moller AR: Effects of cerebellar retractions on the cochlear nerve: An experimental study on rhesus monkeys. Acta Neurochir 1988;90:45–52.
Smith L, Simmons FB: Estimating eighth nerve survival by electrical stimulation. Ann Otol Rhinol Laryngol 1983;92:19–23.
Author Index Vol. 9, No. 4, 2004
Abbas, P.J. 203 Bosman, A.J. 190 Brown, C.J. 203 Carner, M. 247 Cohen, N.L. 197 Colletti, L. 247 Colletti, V. 247 Cremers, C.W.R.J. 190 Fiorino, F. 247 Guida, M. 247 Hoesel, R.J.M. van. 234 Hughes, M.L. 203 Kessler, D. 214
Koch, D.B. 214 Miller, C.A. 203 Miorelli, V. 247 Mylanus, E.A.M. 190 Neuburger, H. 224 Osberger, M.J. 214 Segel, P. 214 Smoorenburg, G.F. 189 Snik, A.F.M. 190 South, H. 203 Svirsky, M.A. 224 Teoh, S.-W. 224
Subject Index Vol. 9, No. 4, 2004
Auditory brainstem implant 247 Aural atresia 190 Bilateral application 190 – cochlear implants 234 Binaural advantage 190 Bone-anchored hearing aid 190 Candidacy 197 Channel interaction 203 Chronic draining ears 190 Cochlear implant 214 – implants 197, 203, 224 Compound action potential 203 Congenitally deaf children 224 Contralateral routing of signal 190 Electrically evoked compound action potential 203
Head injury 247 Hearing loss 214 Language development, sensitive periods 224 Neural response telemetry 203 Profound hearing loss 247 Psychophysics 234 Sound processing 214 Speech 234 – perception 224 Speech-in-noise test 190 Surgery 197 Unilateral deafness 190
Conference Calendar
7.7.–8.7.2004 Paris France
23rd International Congress of the Bárány Society
Information: BARANY 2004, c/o MCI France 11, rue de Solférino, FR–75007 Paris (France) Tel. (+33-1) 5385-8256, Fax (+33-1) 5385-8283 E-Mail:
[email protected] Website: www.baranyparis2004.com
3.8.–7.8.2004 Evanston, Ill. USA
ICMPC8 – 8th International Conference on Music Perception & Cognition
Information: Scott Lipscomb ICMPC8 Conference Organizer Northwestern University School of Music 711 Elgin Rd., Evanston, IL 60201 (USA) Tel. (+1-847) 467-1682 E-Mail:
[email protected] Website: www.northwestern.edu/icmpc/
7.8.–12.8.2004 Boston, Mass. USA
BIOMAG 2004 – 14th International Conference on Biomagnetism
Information: E-Mail:
[email protected] Website: www.BIOMAG2004.org
25.8.–29.8.2004 Lake Tahoe, Calif. USA
IHCON 2004 – International Hearing Aid Research Conference
Information: IHCON, House Ear Institute 2100 W. 3rd St., Los Angeles, CA 90057 (USA) Tel. (+1-213) 353-7047, Fax (+1-213) 413-0950 E-Mail:
[email protected] Website: www.hei.org/ihcon/ihcon.htm
7.9.–10.9.2004 Vienna Austria
EUSIPCO 2004 – 12th European Signal Processing Conference
Information: Websites: www.nt.tuwien.ac.at/eusipco2004/ and www.eurasip.org
4.10.–8.10.2004 Jeju Island South Korea
ICSLP 2004 – 8th International Conference on Spoken Language Processing
Information: Website: www.icslp2004.org
5.10.–8.10.2004 Fukuoka Japan
8th International Evoked Potentials Symposium
Information: Naoki Akamatsu, MD University of Occupational and Environmental Health Dept. of Neurology, 1-1 Iseigaoka, Yahatanishi-ku Kitakyushu 807-8555 (Japan) Tel. (+81-93) 691-7438, Fax (+81-93) 693-9842 E-Mail:
[email protected] Website: http://www.ieps8.com
21.10.–23.10.2004 Frankfurt/Main Germany
49th International Congress of Hearing Aids Acousticians
Information: Nelli Darscht Union der Hörgeräte-Akustiker e.V., Postfach 4006 DE–55030 Mainz (Germany) Fax (+49-61) 3128-3030 E-Mail:
[email protected] Website: www.uha.de
21.10.–24.10.2004 Nice France
European Academy of Otology & Neuro-Otology
Information: Organizing Secretariat Nice Acropolis Congress Centre, 1, Esplanade Kennedy FR–06302 Nice Cedex 4 (France) Tel. (+33-4) 9392 8159/61, Fax (+33-4) 9392 8338 E-Mail:
[email protected] Website: www.nice-acropolis.com/eaonoworkshop/
15.11.–19.11.2004 San Diego, Calif. USA
148th Meeting of the Acoustical Society of America
Information: Website: http://asa.aip.org/meetings.html
18.11.–20.11.2004 Hannover Germany
9th European Workshop on Cochlear Implants, Auditory Brainstem Implants and Implantable Hearing Aids
Information: Gabi Richardson Hals-Nasen-Ohrenklinik der Medizinischen Hochschule Hannover Carl-Neuberg-Strasse 1 DE–30625 Hannover (Germany) Tel. (+49-511) 532 9161, Fax (+49-511) 532 3293 E-Mail:
[email protected]
2.4.–6.4.2005 Universal City (LA), Calif. USA
5th International Symposium on Ménière’s Disease and Inner Ear Fluid Regulation
Information: Liz Hansen Tel. (+1-213) 989-6741, Fax (+1-213) 483-5675 E-Mail:
[email protected] Website: www.hei.org/confer/md2005.htm
1.6.–4.6.2005 Hannover Germany
4th International Symposium and Workshops ‘Objective Measures in Cochlear Implants’
Information: Gabi Richardson Dept. of Otolaryngology Medical University Hannover Carl-Neuberg-Strasse 1 DE–30625 Hannover (Germany) Tel. (+49-511) 532 9161, Fax (+49-511) 532 3293 E-Mail:
[email protected]
17.7.–20.7.2005 Maastricht The Netherlands
20th International Congress on the Education of the Deaf
Information: Conference Agency Limburg P.O. Box 1402 NL–6201 BK Maastricht (The Netherlands) Tel. (+31-43) 361 9192 Fax (+31-43) 361 9020 or 356 0152 E-Mail:
[email protected] Website: www.iced2005.org